
Data Protection

The Duty of Care Gap: Why Today's Breach Litigation Standard Was Built for Yesterday's Attack

Current litigation focuses on access controls. But most breaches bypass them entirely. The standard of care is measuring the wrong layer.

Written by

Chris Dailey (CRO) & Hari Indukuri (CTO)

Published On

Apr 17, 2026

In the week of April 1 through April 7, 2026, five class action lawsuits were filed against Mercor, a $10 billion AI training startup serving OpenAI, Anthropic, and Meta. Five lawsuits in seven days. Each one built around the same fundamental argument - that Mercor failed to implement adequate security measures to protect the sensitive data of more than 40,000 contractors whose personal information, professional work product, and identifying documents were stolen in one of the most consequential data breaches of 2026.

The plaintiffs are not wrong that a failure occurred. The breach was real. The harm is real. The stolen data - 939 gigabytes of proprietary source code, 3 terabytes of video interview recordings and identity verification documents, a 211 gigabyte user database, internal communications, and AI training methodologies that Y Combinator CEO Garry Tan described as representing billions in value and a major national security issue - is now in the hands of attackers who obtained it through a cascading supply chain attack that harvested legitimate credentials from a compromised open source dependency.

The lawsuits are right that Mercor failed. They are wrong about what that failure actually was. And in being wrong about that, they are asking for a legal remedy built on a standard of care argument that - even if fully satisfied - would not have protected a single file when the credentials were compromised.

That is not a minor procedural deficiency. It is a fundamental misidentification of the duty that was breached. And it matters enormously - not just for the 40,000 contractors who deserve meaningful remedy, but for every organization that will read the Mercor settlement, implement its required controls, and believe they have met their obligation to protect the people whose data they hold.

They will not have. And the next breach will prove it.

The Standard of Care Argument the Lawsuits Are Building

To understand why the lawsuits are asking for the wrong fix, it is necessary to understand precisely what legal standard they are invoking and where that standard falls short.

Data breach class actions in the United States are predominantly built on negligence theory. To succeed on a negligence claim, a plaintiff must establish that the defendant owed a duty of care, that the defendant breached that duty, that the breach caused the plaintiff's harm, and that the plaintiff suffered cognizable damages.

The duty of care in data breach cases has been progressively defined by courts, regulators, and compliance frameworks over the past two decades. The FTC has enforcement authority over unfair or deceptive data security practices. The SEC has specific guidance for registered investment advisers and technology companies on data protection obligations. State attorneys general have brought actions under consumer protection statutes. Courts have increasingly recognized an implicit duty to protect sensitive personal data commensurate with the nature of the data held and the reasonable expectations of the people who provided it.

What has emerged from this body of law, regulation, and enforcement is a standard of care built almost entirely around access layer controls. The duty as courts and regulators currently understand it is a duty to prevent unauthorized access. Implement MFA. Segment networks. Monitor for anomalous activity. Rotate credentials. Conduct regular security audits. Encrypt data at rest and in transit.

The Mercor lawsuits invoke exactly this standard. The Gill complaint alleges failure to implement MFA, failure to limit access to PII, failure to monitor systems, failure to rotate passwords, and failure to encrypt sensitive data during storage and transmission. It is a textbook recitation of the access layer standard of care as it currently exists in data breach litigation doctrine.

And here is the legal problem that nobody in any of the five courtrooms is currently confronting:

That standard of care - even fully satisfied - would not have prevented the harm the plaintiffs suffered. Because the harm did not originate from a failure of access layer controls. It originated from a failure at the data layer. And the legal doctrine has not yet caught up to that distinction.

The Encryption Allegation Points at the Right Problem and Then Misses It

Among all the allegations in the Mercor complaints, the failure to encrypt sensitive data during storage and transmission is the one that comes closest to identifying the actual duty that was breached. It points toward the right problem. But the way it is framed - listed alongside MFA and password rotation as one item among several access layer improvements - reveals that the plaintiffs' attorneys understand encryption as a storage security measure rather than as a fundamentally different category of data protection obligation.

That distinction is not semantic. It is the difference between a remedy that changes the outcome for 40,000 contractors and a remedy that produces a more expensive breach with identical consequences.

Encryption at rest means data sitting in a database or storage system is encrypted when it is not being accessed. Encryption in transit means data moving between systems is encrypted as it travels. Both are legitimate and important security controls, and both are widely recognized components of the current standard of care. But both are rendered completely ineffective the moment an attacker obtains valid credentials. When a user authenticates through the normal access pathway, the system decrypts the data for them. It cannot distinguish between a legitimate user and an attacker holding stolen credentials, so the encryption that was supposed to protect the data dissolves on contact with a valid authenticated session.

This means that in the exact breach scenario the Mercor lawsuits describe - an attacker authenticating successfully with stolen credentials and accessing files through the authorized decryption pathway - both forms of encryption the complaint demands would have been fully satisfied and would have protected nothing. The files would still have been usable. The exfiltration would still have proceeded. The harm would still have flowed to 40,000 contractors.
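The limitation can be made concrete with a short sketch. The toy service below (the class, names, passwords, and XOR construction are all invented for illustration - this is not real cryptography or any vendor's design) encrypts everything at rest, yet decrypts for any session that authenticates, including one built on stolen credentials:

```python
import os
import hashlib

# Toy model of a storage service with encryption at rest. Illustrative only:
# the XOR keystream cipher stands in for real encryption.

def _keystream(key: bytes, n: int) -> bytes:
    # Deterministic keystream derived from the key (toy construction).
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor_crypt(key: bytes, data: bytes) -> bytes:
    # Symmetric: applying it twice with the same key round-trips the data.
    return bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))

class StorageService:
    def __init__(self):
        self._key = os.urandom(32)   # server-side key; never leaves the service
        self._blobs = {}             # name -> ciphertext at rest
        self._passwords = {"alice": "hunter2"}

    def put(self, user, password, name, data):
        assert self._passwords.get(user) == password
        self._blobs[name] = xor_crypt(self._key, data)

    def get(self, user, password, name):
        # Authentication is the only gate. Once the session is valid, the
        # service decrypts on the caller's behalf -- it has no way to tell
        # a legitimate user from an attacker holding stolen credentials.
        assert self._passwords.get(user) == password
        return xor_crypt(self._key, self._blobs[name])

svc = StorageService()
svc.put("alice", "hunter2", "ssn.txt", b"123-45-6789")

# At rest the file is ciphertext -- encryption at rest works exactly as designed:
assert svc._blobs["ssn.txt"] != b"123-45-6789"

# An attacker who stole alice's credentials receives plaintext anyway:
stolen = svc.get("alice", "hunter2", "ssn.txt")
assert stolen == b"123-45-6789"
```

The final assertion is the entire problem: the control performed as designed, and the attacker got plaintext.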

The lawsuits are demanding a standard of care that has already been implicitly satisfied by the mechanism of the attack itself. And demanding it more rigorously produces no meaningful benefit to the people the litigation is supposed to protect.

The Duty That Was Actually Breached

If the current standard of care - even fully implemented - would not have changed the outcome, the legal question becomes what duty would have. What obligation, if discharged, would have rendered the breach consequence-free for the 40,000 contractors who are now plaintiffs?

The answer is precise and it points to a duty that existing doctrine has not yet adequately articulated: the duty to protect data at the file layer after authentication succeeds.

This is the Post Authentication Data Security duty. It is distinct from and more demanding than the access layer duty that current doctrine recognizes. It is not a duty to prevent unauthorized access - though that duty exists and matters. It is a duty to ensure that data remains protected even when access succeeds, whether that access was legitimately obtained or achieved through credential theft, supply chain compromise, insider misuse, or any other vector that produces a valid authenticated session.

The distinction maps directly onto the facts of the Mercor breach. The attackers authenticated successfully. Every access control performed exactly as designed. The breach did not occur at the access layer - it occurred at the data layer, where no protection existed to govern what happened to files after authentication succeeded.

Under the current standard of care doctrine, Mercor's failure is characterized as an access layer failure - insufficient MFA, inadequate monitoring, poor credential hygiene. Those characterizations may be legally valid but they are factually incomplete. The more precise and more legally significant failure was the absence of file layer protection that would have rendered the authenticated access consequence-free regardless of who held the credentials.

The duty to protect data at the file layer after authentication succeeds is the duty the Mercor lawsuits are gesturing toward but failing to name. And naming it precisely is the most important legal contribution the Mercor litigation could make to the evolution of data breach doctrine.

Why the Current Standard of Care Is Structurally Insufficient

The cybersecurity industry has known for years that stolen credentials are the single biggest vulnerability in the modern security stack. This is not a controversial position. Verizon's Data Breach Investigations Report has identified compromised credentials as the leading cause of breaches for nearly a decade running. IBM's Cost of a Data Breach Report consistently ranks stolen credentials as both the most common and most expensive attack vector. Every major security framework - NIST, ISO 27001, HITRUST - includes extensive controls around identity and access precisely because the industry understands that when credentials are compromised, everything built around them collapses.

The cybersecurity industry has known this. It has known it for a long time. And it has continued to build and sell architectures that are fundamentally dependent on the integrity of those same credentials - producing a decade of breach reports confirming the problem while simultaneously recommending the same access layer controls that the breach reports prove are insufficient.

That failure has a direct legal consequence. Courts and regulators developing the standard of care in data breach cases have done what courts and regulators reasonably do - they have looked to the security industry for guidance on what constitutes reasonable practice. The standard of care that has emerged reflects the industry consensus those courts and regulators found when they looked: a perimeter-centric, access-focused framework that treats credential integrity as the primary and, in many cases, sufficient protection for sensitive data.

The doctrine is not wrong on its own terms. It accurately reflects what the industry told courts and regulators was adequate. The problem is that the industry's own data has been contradicting that consensus for years - and the legal standard has had no mechanism to update itself in response. The result is a standard of care that courts apply in good faith, that organizations implement in good faith, and that leaves sensitive unstructured files fully exposed to the primary attack vector the industry itself has identified as the leading cause of breaches for nearly a decade.

That is not a gap in legal reasoning. It is a gap between legal doctrine and technical reality - and it is a gap that the Mercor breach has rendered impossible to ignore.

The Mercor breach is the most precise possible illustration of that gap. The attack chain began with a compromised GitHub Actions workflow in an open source vulnerability scanner. It harvested credentials through a malicious dependency executing in a CI/CD pipeline. It used those credentials to authenticate as legitimate users. It accessed and exfiltrated files that the authenticated session was authorized to access. Every step of that chain operated entirely within the parameters of a security architecture that meets the current standard of care.

The standard of care that the Mercor lawsuits are invoking - the standard that Mercor allegedly failed to meet - would not have detected or prevented any step of that chain after the initial credential harvest. Because the standard is designed around preventing unauthorized access and the attack succeeded by achieving authorized access with stolen credentials.

A standard of care that cannot address the primary attack vector in the industry's own breach data is not a standard that adequately defines the duty organizations owe to the people whose data they hold.

What the Evolved Standard of Care Looks Like

The legal evolution that the Mercor lawsuits should be driving - but are not yet articulating - is a standard of care that extends the duty of protection beyond the access layer to the data layer itself.

Under an evolved standard the duty is not satisfied by encrypting data at rest and in transit. Those controls protect data from passive interception and storage compromise. They do not protect data from authenticated access using stolen credentials. They do not protect files from exfiltration by a session that the system has recognized and authorized. They are necessary components of a complete security posture but they are not sufficient to discharge the duty of care owed to people whose most sensitive personal and professional information is held in unstructured files.

The evolved standard requires file layer protection - encryption that travels with the file itself, that governs usability independent of the access layer, that remains in force regardless of what credentials were used to obtain access, and that renders the file unusable to any recipient who cannot demonstrate, at the moment of access, that they are the authorized user in the authorized context for which access was intended.
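Mechanically, that description can be sketched as follows - under assumed names and a toy keystream cipher, not any vendor's actual implementation: each file is encrypted under its own key, and the key is released only after a policy check performed at the moment of access, after authentication has already succeeded.

```python
import os
import hashlib

# Toy sketch of file layer protection. Illustrative only: the cipher, the
# FileKeyService class, and the (user, device) policy shape are assumptions
# invented for this example.

def _keystream(key: bytes, n: int) -> bytes:
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor_crypt(key: bytes, data: bytes) -> bytes:
    # Symmetric: applying it twice with the same key round-trips the data.
    return bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))

class FileKeyService:
    def __init__(self):
        self._keys = {}      # file_id -> per-file key (held server-side)
        self._policies = {}  # file_id -> allowed (user, device) contexts

    def protect(self, file_id, data, allowed_contexts):
        key = os.urandom(32)
        self._keys[file_id] = key
        self._policies[file_id] = set(allowed_contexts)
        return xor_crypt(key, data)  # the ciphertext is what travels

    def unwrap_key(self, file_id, context):
        # Access-time check: a valid login is not enough. The requesting
        # context must match the policy bound to this specific file.
        if context not in self._policies[file_id]:
            raise PermissionError("context not authorized for this file")
        return self._keys[file_id]

svc = FileKeyService()
blob = svc.protect("contract.pdf", b"SSN 123-45-6789",
                   allowed_contexts={("alice", "managed-laptop")})

# An attacker with alice's stolen credentials exfiltrates the blob from an
# unrecognized machine. What they hold is ciphertext, and the key service
# refuses to release the key for that context:
assert blob != b"SSN 123-45-6789"
try:
    svc.unwrap_key("contract.pdf", ("alice", "attacker-vm"))
    raise AssertionError("key should not have been released")
except PermissionError:
    pass

# The authorized user in the authorized context still reads the file:
key = svc.unwrap_key("contract.pdf", ("alice", "managed-laptop"))
assert xor_crypt(key, blob) == b"SSN 123-45-6789"
```

The design choice that matters is the separation: the file carries ciphertext, the key lives elsewhere, and possession of valid credentials alone never reunites the two.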

This is Post Authentication Data Security applied as a legal duty rather than a security recommendation. It is the control that, had it been in place at Mercor, would have changed the outcome completely.

Picture the same attack with that protection in place. The attackers authenticate successfully. They access the files. They exfiltrate the files. And what they carry away is ciphertext. Not because the authentication failed. Not because the access was detected and blocked. But because the files themselves were protected in a way that made the authenticated access consequence-free for every contractor whose data was taken.

Under an evolved standard of care that recognized this duty, Mercor's failure would not be inadequate MFA or insufficient password rotation. It would be that it held 40,000 people's most sensitive data in unprotected files that were fully usable to anyone who obtained valid credentials - and in a world where credential theft through supply chain compromise is the industry's leading breach vector, holding sensitive data in unprotected files is itself the breach of duty.

The Delve Scandal Proves the Point

The Mercor breach did not happen in isolation. It happened simultaneously with the exposure of Delve Technologies - the GRC automation startup that had issued compliance certifications for LiteLLM, the open source AI proxy whose compromise enabled the credential harvest that reached Mercor. Those certifications were, according to the whistleblower who exposed the company, industrialized fiction. Pre-populated attestations. Certifications issued without independent verification of the controls they purported to certify.

The convergence of these two stories is not incidental. It is the most powerful possible illustration of the gap between certified compliance and actual data protection that sits at the heart of the standard of care problem.

Mercor had compliance certifications. LiteLLM had compliance certifications. Those certifications validated access controls, security processes, and organizational security practices against the current standard of care. And none of it protected a single file when the credentials were compromised.

This is the standard of care problem rendered in its starkest form. The compliance framework the lawsuits are demanding Mercor should have met is a framework designed to certify access controls. It has no mechanism for certifying what happens to files after access succeeds. It validates the door. It has nothing to say about the files behind the door when someone walks through with a stolen key.

The Delve scandal did not create this problem. It exposed it. The problem existed in every legitimately certified organization whose sensitive files are protected only by the access controls that a valid authenticated session bypasses by definition. The certification confirms the lock works. It says nothing about the readability of what is inside when the lock is opened with a stolen key.

Post Authentication Data Security provides the protection that certification cannot - because it is not a process control that can be attested to. It is a technical control that either renders files unusable or does not. There is no compliance theater version of file layer encryption. The files are either protected or they are not. And that binary self-executing reality is precisely what the evolved standard of care should require.

The Regulatory Safe Harbor Argument

The legal implications of file layer protection extend beyond negligence theory into the regulatory framework that governs breach notification and penalty - and here the argument for an evolved standard of care becomes most immediately actionable for organizations deciding right now how to protect the files they hold.

Most data breach notification laws are triggered by the exposure of usable, readable personal data. GDPR Article 34 explicitly states that notification to affected individuals is not required when data was encrypted and rendered unintelligible to unauthorized parties. HIPAA's safe harbor provision categorizes encrypted breached data as a non-reportable event. California's CCPA, New York's SHIELD Act, and most equivalent state frameworks include explicit encryption safe harbors that reduce or eliminate notification obligations when the stolen data was encrypted and remained ciphertext in the attacker's hands.
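The shared shape of these safe harbors can be sketched as a simple decision rule. This is a deliberate simplification - the actual statutes carry additional conditions and definitions - modeling only the encryption prong common to them:

```python
# Simplified sketch of the encryption safe harbor logic. The two boolean
# inputs compress what the statutes express in far more detail.

def notification_required(data_was_encrypted: bool,
                          keys_also_compromised: bool) -> bool:
    # Safe harbors generally apply only when the stolen data was encrypted
    # AND the keys needed to decrypt it were not taken along with it.
    if data_was_encrypted and not keys_also_compromised:
        return False  # data is unintelligible to the attacker; harbor applies
    return True       # readable data exposed; notify affected individuals

assert notification_required(False, False) is True  # plaintext stolen
assert notification_required(True, True) is True    # keys stolen with the data
assert notification_required(True, False) is False  # ciphertext only
```

The third case is the one file layer protection is designed to guarantee: encryption that does not dissolve when credentials do.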

These safe harbors already exist in the regulatory framework. They already recognize that encrypted data that cannot be read does not produce the harm that breach notification laws are designed to address. They are the regulatory system's implicit acknowledgment of the principle that Post Authentication Data Security makes explicit - that what matters for data protection purposes is not whether the data was accessed but whether it was usable when it was taken.

The Mercor lawsuits are built on the premise that contractor data was compromised in a readable form. Under the regulatory safe harbor framework that already exists, file layer encrypted data that is exfiltrated but unusable does not meet the threshold for mandatory notification. Had that protection been in place, the breach event that generates the legal obligation would never have occurred - and the five lawsuits would have no viable claim, because the harm the plaintiffs allege, the exposure of readable personal data to criminal actors who can exploit it, would not have happened.

The safe harbor framework is the regulatory system pointing toward the evolved standard of care that litigation doctrine has not yet fully articulated. It already recognizes that encryption at the data layer changes the legal character of a breach. The doctrinal evolution required is to extend that recognition from a regulatory safe harbor into an affirmative duty - a standard of care that requires file layer protection not merely as a mitigating factor but as a component of the baseline obligation owed to people whose sensitive data is held in unstructured files.

What the Mercor Lawsuits Should Be Arguing

The most important legal contribution the Mercor litigation could make is to reframe the standard of care claim around the duty that was actually breached rather than the duty that existing doctrine recognizes.

The complaint should not lead with failure to implement MFA or failure to rotate passwords. Those are real failures and they belong in the complaint. But they are not the failure that made 40,000 contractors vulnerable to years of identity theft risk. The failure that did that was holding sensitive unstructured files - files containing Social Security numbers, identity documents, video recordings, and proprietary work product - without file layer protection that would have rendered those files unreadable to anyone who took them regardless of what credentials they used.

The encryption allegation in the current complaint points toward this duty but frames it as a storage security failure. The stronger and more legally significant framing is a failure of Post Authentication Data Security - a failure to protect files at the data layer in a way that maintains protection after authentication succeeds, independent of credential integrity, independent of access layer controls, independent of whether the session that accessed the files was legitimate or the product of supply chain credential theft.

That framing advances data breach doctrine in a meaningful direction. It creates a legal framework that actually maps onto the threat environment the industry's own data describes - a world in which credential compromise is the leading attack vector and access layer controls are necessary but insufficient to discharge the duty of care owed to the people whose data is at risk.

It also creates a remedy that would actually change the outcome. Not a settlement requiring better MFA and more rigorous password rotation that leaves 40,000 people's files just as usable the next time valid credentials are stolen. A standard that requires file layer protection - protection that holds when everything else fails, protection that renders credential theft consequence-free for the people whose data was taken.

The Conversation the Industry and the Legal Community Must Have Together

The Mercor lawsuits will settle. The settlement will specify controls. The controls will reflect the current standard of care. And the current standard of care will remain a decade behind the threat environment it is supposed to address.

Unless the legal community starts asking the question that the complaints are currently missing.

Not whether Mercor had adequate access controls. Whether Mercor discharged its duty to protect the files its contractors trusted it to hold - protect them in a way that maintains that protection after authentication succeeds, that holds when credentials are stolen, that renders the breach consequence-free for the people whose data is taken regardless of how the attacker obtained access.

That is the standard the threat environment demands. That is the standard the regulatory safe harbor framework is already gesturing toward. That is the standard the evolved duty of care in data breach litigation needs to articulate.

Post Authentication Data Security is not the standard of care today. It is the standard of care the Mercor breach demonstrates is necessary - and the standard that the legal community, the security industry, and the organizations that hold sensitive unstructured files have a shared obligation to establish before the next breach proves the same point at the same cost to the same people who had no choice but to trust that the files they handed over would be protected when it mattered most.

The five lawsuits filed in seven days are the most powerful available argument for why that conversation cannot wait.

FenixPyre is purpose-built to close the Post Authentication Data Security gap for unstructured data - ensuring that files remain protected at the data layer regardless of how access was obtained. In a world where supply chain attacks make credential theft an inevitability, file layer protection is not a security enhancement. It is the evolved standard of care the modern threat environment demands.


Data Protection

Apr 17, 2026

The Duty of Care Gap: Why Today's Breach Litigation Standard Was Built for Yesterday's Attack

In the week of April 1 through April 7, 2026, five class action lawsuits were filed against Mercor, a $10 billion AI training startup serving OpenAI, Anthropic, and Meta. Five lawsuits in seven days. Each one built around the same fundamental argument - that Mercor failed to implement adequate security measures to protect the sensitive data of more than 40,000 contractors whose personal information, professional work product, and identifying documents were stolen in one of the most consequential data breaches of 2026.

The plaintiffs are not wrong that a failure occurred. The breach was real. The harm is real. The stolen data - 939 gigabytes of proprietary source code, 3 terabytes of video interview recordings and identity verification documents, a 211 gigabyte user database, internal communications, and AI training methodologies that Y Combinator CEO Garry Tan described as representing billions in value and a major national security issue - is now in the hands of attackers who obtained it through a cascading supply chain attack that harvested legitimate credentials from a compromised open source dependency.

The lawsuits are right that Mercor failed. They are wrong about what that failure actually was. And in being wrong about that, they are asking for a legal remedy built on a standard of care argument that - even if fully satisfied - would not have protected a single file when the credentials were compromised.

That is not a minor procedural deficiency. It is a fundamental misidentification of the duty that was breached. And it matters enormously - not just for the 40,000 contractors who deserve meaningful remedy, but for every organization that will read the Mercor settlement, implement its required controls, and believe they have met their obligation to protect the people whose data they hold.

They will not have. And the next breach will prove it.

The Standard of Care Argument the Lawsuits Are Building

To understand why the lawsuits are asking for the wrong fix it is necessary to understand precisely what legal standard they are invoking and where that standard falls short.

Data breach class actions in the United States are predominantly built on negligence theory. To succeed on a negligence claim a plaintiff must establish that the defendant owed a duty of care, that the defendant breached that duty, that the breach caused the plaintiff's harm, and that the plaintiff suffered cognizable damages.

The duty of care in data breach cases has been progressively defined by courts, regulators, and compliance frameworks over the past two decades. The FTC has enforcement authority over unfair or deceptive data security practices. The SEC has specific guidance for registered investment advisers and technology companies on data protection obligations. State attorneys general have brought actions under consumer protection statutes. Courts have increasingly recognized an implicit duty to protect sensitive personal data commensurate with the nature of the data held and the reasonable expectations of the people who provided it.

What has emerged from this body of law, regulation, and enforcement is a standard of care built almost entirely around access layer controls. The duty as courts and regulators currently understand it is a duty to prevent unauthorized access. Implement MFA. Segment networks. Monitor for anomalous activity. Rotate credentials. Conduct regular security audits. Encrypt data at rest and in transit.

The Mercor lawsuits invoke exactly this standard. The Gill complaint alleges failure to implement MFA, failure to limit access to PII, failure to monitor systems, failure to rotate passwords, and failure to encrypt sensitive data during storage and transmission. It is a textbook recitation of the access layer standard of care as it currently exists in data breach litigation doctrine.

And here is the legal problem that nobody in any of the five courtrooms is currently confronting:

That standard of care - even fully satisfied - would not have prevented the harm the plaintiffs suffered. Because the harm did not originate from a failure of access layer controls. It originated from a failure at the data layer. And the legal doctrine has not yet caught up to that distinction.

The Encryption Allegation Points at the Right Problem and Then Misses It

Among all the allegations in the Mercor complaints, the failure to encrypt sensitive data during storage and transmission is the one that comes closest to identifying the actual duty that was breached. It points toward the right problem. But the way it is framed - listed alongside MFA and password rotation as one item among several access layer improvements - reveals that the plaintiff's attorneys understand encryption as a storage security measure rather than as a fundamentally different category of data protection obligation.

That distinction is not semantic. It is the difference between a remedy that changes the outcome for 40,000 contractors and a remedy that produces a more expensive breach with identical consequences.

Encryption at rest means data sitting in a database or storage system is encrypted when it is not being accessed. Encryption in transit means data moving between systems is encrypted as it travels. Both are legitimate and important security controls. Both are widely recognized components of the current standard of care. And both are rendered completely ineffective the moment an attacker obtains valid credentials - because when a user authenticates through the normal access pathway the system decrypts the data for them, it cannot distinguish between a legitimate user and an attacker holding stolen credentials, and the encryption that was supposed to protect the data dissolves on contact with a valid authenticated session. In the exact breach scenario the Mercor lawsuits describe, both controls perform exactly as designed and protect nothing.

This means that in the exact breach scenario the Mercor lawsuits describe - an attacker authenticating successfully with stolen credentials and accessing files through the authorized decryption pathway - both forms of encryption the complaint demands would have been fully satisfied and would have protected nothing. The files would still have been usable. The exfiltration would still have proceeded. The harm would still have flowed to 40,000 contractors.

The lawsuits are demanding a standard of care that has already been implicitly satisfied by the mechanism of the attack itself. And demanding it more rigorously produces no meaningful benefit to the people the litigation is supposed to protect.

The Duty That Was Actually Breached

If the current standard of care - even fully implemented - would not have changed the outcome, the legal question becomes what duty would have. What obligation, if discharged, would have rendered the breach consequence-free for the 40,000 contractors who are now plaintiffs?

The answer is precise and it points to a duty that existing doctrine has not yet adequately articulated: the duty to protect data at the file layer after authentication succeeds.

This is the Post Authentication Data Security duty. It is distinct from and more demanding than the access layer duty that current doctrine recognizes. It is not a duty to prevent unauthorized access - though that duty exists and matters. It is a duty to ensure that data remains protected even when access succeeds, whether that access was legitimately obtained or achieved through credential theft, supply chain compromise, insider misuse, or any other vector that produces a valid authenticated session.

The distinction maps directly onto the facts of the Mercor breach. The attackers authenticated successfully. Every access control performed exactly as designed. The breach did not occur at the access layer - it occurred at the data layer, where no protection existed to govern what happened to files after authentication succeeded.

Under the current standard of care doctrine, Mercor's failure is characterized as an access layer failure - insufficient MFA, inadequate monitoring, poor credential hygiene. Those characterizations may be legally valid but they are factually incomplete. The more precise and more legally significant failure was the absence of file layer protection that would have rendered the authenticated access consequence-free regardless of who held the credentials.

The duty to protect data at the file layer after authentication succeeds is the duty the Mercor lawsuits are gesturing toward but failing to name. And naming it precisely is the most important legal contribution the Mercor litigation could make to the evolution of data breach doctrine.

Why the Current Standard of Care Is Structurally Insufficient

The cybersecurity industry has known for years that stolen credentials are the single biggest vulnerability in the modern security stack. This is not a controversial position. Verizon's Data Breach Investigations Report has identified compromised credentials as the leading cause of breaches for nearly a decade running. IBM's Cost of a Data Breach Report consistently ranks stolen credentials as both the most common and most expensive attack vector. Every major security framework - NIST, ISO 27001, HITRUST - includes extensive controls around identity and access precisely because the industry understands that when credentials are compromised, everything built around them collapses.

The cybersecurity industry has known this. It has known it for a long time. And it has continued to build and sell architectures that are fundamentally dependent on the integrity of those same credentials - producing a decade of breach reports confirming the problem while simultaneously recommending the same access layer controls that the breach reports prove are insufficient.

That failure has a direct legal consequence. Courts and regulators developing the standard of care in data breach cases have done what courts and regulators reasonably do - they have looked to the security industry for guidance on what constitutes reasonable practice. The standard of care that has emerged reflects the industry consensus those courts and regulators found when they looked: a perimeter-centric, access-focused framework that treats credential integrity as the primary and in many cases sufficient protection for sensitive data.

The doctrine is not wrong on its own terms. It accurately reflects what the industry told courts and regulators was adequate. The problem is that the industry's own data has been contradicting that consensus for years - and the legal standard has had no mechanism to update itself in response. The result is a standard of care that courts apply in good faith, that organizations implement in good faith, and that leaves sensitive unstructured files fully exposed to the primary attack vector the industry itself has identified as the leading cause of breaches for nearly a decade.

That is not a gap in legal reasoning. It is a gap between legal doctrine and technical reality - and it is a gap that the Mercor breach has rendered impossible to ignore.

The Mercor breach is the most precise possible illustration of that gap. The attack chain began with a compromised GitHub Actions workflow in an open source vulnerability scanner. It harvested credentials through a malicious dependency executing in a CI/CD pipeline. It used those credentials to authenticate as legitimate users. It accessed and exfiltrated files that the authenticated session was authorized to access. Every step of that chain operated entirely within the parameters of a security architecture that meets the current standard of care.
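The first link in that chain is worth making concrete. The sketch below is a hypothetical illustration - invented variable names, fake values, no real workflow - of why a malicious dependency executing inside a CI job needs no exploit at all: every secret exposed to the job's environment is readable by any code the job runs.

```python
def harvest_env_secrets(env):
    """Hypothetical illustration: any code a CI job executes can read
    every secret exposed through that job's environment variables."""
    markers = ("TOKEN", "KEY", "SECRET", "PASSWORD")
    return {k: v for k, v in env.items()
            if any(m in k.upper() for m in markers)}

# A fake CI environment (all names and values invented for illustration).
ci_env = {
    "PATH": "/usr/bin",
    "GITHUB_TOKEN": "ghs_fake_example",
    "AWS_SECRET_ACCESS_KEY": "fake_example",
}
stolen = harvest_env_secrets(ci_env)  # the two credential entries
```

Nothing here is malware in any interesting sense - it is ordinary dictionary filtering. That is the point: once the dependency runs inside the trust boundary, harvesting is trivial, and everything that follows looks like legitimate authenticated use.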

The standard of care that the Mercor lawsuits are invoking - the standard that Mercor allegedly failed to meet - would not have detected or prevented any step of that chain after the initial credential harvest. Because the standard is designed around preventing unauthorized access and the attack succeeded by achieving authorized access with stolen credentials.

A standard of care that cannot address the primary attack vector in the industry's own breach data is not a standard that adequately defines the duty organizations owe to the people whose data they hold.

What the Evolved Standard of Care Looks Like

The legal evolution that the Mercor lawsuits should be driving - but are not yet articulating - is a standard of care that extends the duty of protection beyond the access layer to the data layer itself.

Under an evolved standard the duty is not satisfied by encrypting data at rest and in transit. Those controls protect data from passive interception and storage compromise. They do not protect data from authenticated access using stolen credentials. They do not protect files from exfiltration by a session that the system has recognized and authorized. They are necessary components of a complete security posture but they are not sufficient to discharge the duty of care owed to people whose most sensitive personal and professional information is held in unstructured files.

The evolved standard requires file layer protection - encryption that travels with the file itself, that governs usability independent of the access layer, that remains in force regardless of what credentials were used to obtain access, and that renders the file unusable to any recipient who cannot demonstrate, at the moment of access, that they are the authorized user in the authorized context for which access was intended.

This is Post Authentication Data Security applied as a legal duty rather than a security recommendation. It is the control that, had it been in place at Mercor, would have changed the outcome completely.

In that world, the attackers still authenticate successfully. They still access the files. They still exfiltrate the files. And the files are ciphertext. Not because the authentication failed. Not because the access was detected and blocked. But because the files themselves were protected in a way that made the authenticated access consequence-free for every contractor whose data was taken.
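What "the files are ciphertext" means mechanically can be sketched in a few lines. The following is a deliberately simplified toy - a SHA-256 counter-mode keystream, NOT production cryptography; a real file layer control would use authenticated encryption and a managed key service - but it shows the property that matters: the stored bytes are useless without a key the storage system never holds.

```python
import hashlib

def _keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    # Toy keystream: SHA-256 in counter mode. NOT production crypto.
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def seal(key: bytes, nonce: bytes, data: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, _keystream(key, nonce, len(data))))

unseal = seal  # XOR stream ciphers encrypt and decrypt identically

file_key = b"released-only-by-a-policy-service"  # never stored with the file
stored = seal(file_key, b"file-001", b"SSN: 123-45-6789")

# A valid session that exfiltrates `stored` gets unusable bytes:
assert stored != b"SSN: 123-45-6789"
# Only a key release at the moment of authorized use recovers them:
assert unseal(file_key, b"file-001", stored) == b"SSN: 123-45-6789"
```

The design choice the sketch illustrates is separation: authentication to the storage system grants the bytes, but usability depends on a key release that can be governed independently, per file and per context.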

Under an evolved standard of care that recognized this duty, Mercor's failure was not inadequate MFA or lax password rotation. It was that it held 40,000 people's most sensitive data in unprotected files that were fully usable to anyone who obtained valid credentials - and in a world where credential theft through supply chain compromise is the industry's leading breach vector, holding sensitive data in unprotected files is itself the breach of duty.

The Delve Scandal Proves the Point

The Mercor breach did not happen in isolation. It happened simultaneously with the exposure of Delve Technologies - the GRC automation startup that had issued compliance certifications for LiteLLM, the open source AI proxy whose compromise enabled the credential harvest that reached Mercor. Those certifications were, according to the whistleblower who exposed the company, industrialized fiction. Pre-populated attestations. Certifications issued without independent verification of the controls they purported to certify.

The convergence of these two stories is not incidental. It is the most powerful possible illustration of the gap between certified compliance and actual data protection that sits at the heart of the standard of care problem.

Mercor had compliance certifications. LiteLLM had compliance certifications. Those certifications validated access controls, security processes, and organizational security practices against the current standard of care. And none of it protected a single file when the credentials were compromised.

This is the standard of care problem rendered in its starkest form. The compliance framework the lawsuits are demanding Mercor should have met is a framework designed to certify access controls. It has no mechanism for certifying what happens to files after access succeeds. It validates the door. It has nothing to say about the files behind the door when someone walks through with a stolen key.

The Delve scandal did not create this problem. It exposed it. The problem existed in every legitimately certified organization whose sensitive files are protected only by the access controls that a valid authenticated session bypasses by definition. The certification confirms the lock works. It says nothing about the readability of what is inside when the lock is opened with a stolen key.

Post Authentication Data Security provides the protection that certification cannot - because it is not a process control that can be attested to. It is a technical control that either renders files unusable or does not. There is no compliance theater version of file layer encryption. The files are either protected or they are not. And that binary self-executing reality is precisely what the evolved standard of care should require.

The Regulatory Safe Harbor Argument

The legal implications of file layer protection extend beyond negligence theory into the regulatory framework that governs breach notification and penalty - and here the argument for an evolved standard of care becomes most immediately actionable for organizations deciding right now how to protect the files they hold.

Most data breach notification laws are triggered by the exposure of usable, readable personal data. GDPR Article 34 explicitly states that notification to affected individuals is not required when data was encrypted and rendered unintelligible to unauthorized parties. HIPAA's Safe Harbor provision categorizes encrypted breached data as a non-reportable event. California's CCPA, New York's SHIELD Act, and most equivalent state frameworks include explicit encryption safe harbors that reduce or eliminate notification obligations when the stolen data was encrypted and remained unreadable ciphertext.

These safe harbors already exist in the regulatory framework. They already recognize that encrypted data that cannot be read does not produce the harm that breach notification laws are designed to address. They are the regulatory system's implicit acknowledgment of the principle that Post Authentication Data Security makes explicit - that what matters for data protection purposes is not whether the data was accessed but whether it was usable when it was taken.
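The shared core of those safe harbors can be written down almost mechanically. The sketch below is a simplification for illustration - not the full test under any individual statute - but it captures the principle the paragraph above describes: notification turns on whether the stolen data was usable when it was taken.

```python
def notification_required(data_encrypted: bool, key_compromised: bool) -> bool:
    # Simplified common core of the encryption safe harbors: the breach
    # is notifiable only if the stolen data was usable when taken.
    data_usable = (not data_encrypted) or key_compromised
    return data_usable

assert notification_required(False, False)       # plaintext stolen: notify
assert not notification_required(True, False)    # ciphertext, key safe: safe harbor
assert notification_required(True, True)         # key taken as well: notify
```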

The Mercor lawsuits are built on the premise that contractor data was compromised in a readable form. Under the regulatory safe harbor framework that already exists, file layer encrypted data that is exfiltrated but unusable does not meet the threshold for mandatory notification. Had that protection been in place, the breach event that generates the legal obligation would not have occurred - and the five lawsuits would have no viable claim, because the harm the plaintiffs allege, exposure of readable personal data to criminal actors who can exploit it, would not have happened.

The safe harbor framework is the regulatory system pointing toward the evolved standard of care that litigation doctrine has not yet fully articulated. It already recognizes that encryption at the data layer changes the legal character of a breach. The doctrinal evolution required is to extend that recognition from a regulatory safe harbor into an affirmative duty - a standard of care that requires file layer protection not merely as a mitigating factor but as a component of the baseline obligation owed to people whose sensitive data is held in unstructured files.

What the Mercor Lawsuits Should Be Arguing

The most important legal contribution the Mercor litigation could make is to reframe the standard of care claim around the duty that was actually breached rather than the duty that existing doctrine recognizes.

The complaint should not lead with failure to implement MFA or failure to rotate passwords. Those are real failures and they belong in the complaint. But they are not the failure that made 40,000 contractors vulnerable to years of identity theft risk. The failure that did that was holding sensitive unstructured files - files containing Social Security numbers, identity documents, video recordings, and proprietary work product - without file layer protection that would have rendered those files unreadable to anyone who took them regardless of what credentials they used.

The encryption allegation in the current complaint points toward this duty but frames it as a storage security failure. The stronger and more legally significant framing is a failure of Post Authentication Data Security - a failure to protect files at the data layer in a way that maintains protection after authentication succeeds, independent of credential integrity, independent of access layer controls, independent of whether the session that accessed the files was legitimate or the product of supply chain credential theft.

That framing advances data breach doctrine in a meaningful direction. It creates a legal framework that actually maps onto the threat environment the industry's own data describes - a world in which credential compromise is the leading attack vector and access layer controls are necessary but insufficient to discharge the duty of care owed to the people whose data is at risk.

It also creates a remedy that would actually change the outcome. Not a settlement requiring better MFA and more rigorous password rotation that leaves 40,000 people's files just as usable the next time valid credentials are stolen. A standard that requires file layer protection - protection that holds when everything else fails, protection that renders credential theft consequence-free for the people whose data was taken.

The Conversation the Industry and the Legal Community Must Have Together

The Mercor lawsuits will settle. The settlement will specify controls. The controls will reflect the current standard of care. And the current standard of care will remain a decade behind the threat environment it is supposed to address.

Unless the legal community starts asking the question that the complaints are currently missing.

Not whether Mercor had adequate access controls. Whether Mercor discharged its duty to protect the files its contractors trusted it to hold - protect them in a way that maintains that protection after authentication succeeds, that holds when credentials are stolen, that renders the breach consequence-free for the people whose data is taken regardless of how the attacker obtained access.

That is the standard the threat environment demands. That is the standard the regulatory safe harbor framework is already gesturing toward. That is the standard the evolved duty of care in data breach litigation needs to articulate.

Post Authentication Data Security is not the standard of care today. It is the standard of care the Mercor breach demonstrates is necessary - and the standard that the legal community, the security industry, and the organizations that hold sensitive unstructured files have a shared obligation to establish before the next breach proves the same point at the same cost to the same people who had no choice but to trust that the files they handed over would be protected when it mattered most.

The five lawsuits filed in seven days are the most powerful available argument for why that conversation cannot wait.

FenixPyre is purpose-built to close the Post Authentication Data Security gap for unstructured data - ensuring that files remain protected at the data layer regardless of how access was obtained. In a world where supply chain attacks make credential theft an inevitability, file layer protection is not a security enhancement. It is the evolved standard of care the modern threat environment demands.


Data Protection

Mar 23, 2026

When Accenture Reports a 127% Surge in Dark Web Insider Recruitment, It’s Time to Rethink Data Security

Accenture’s Cyber Intelligence team recently published research that should alarm every CISO and board member: insider threats facilitated through dark web ecosystems are escalating at an unprecedented rate.

The numbers are stark:

  • 69% increase in insiders offering access (2025 vs. 2024)

  • 127% surge in hackers actively recruiting insiders (vs. 2022)

As Ryan Whelan, Accenture’s Global Head of Cyber Intelligence, explains:

“The insider economy is now principally designed to support early-stage intrusions, with criminal gangs increasingly relying on insiders to bypass cyber defenses.”

This is not theoretical.

Dark web posts explicitly name targets:

  • Coinbase

  • Binance

  • Kraken

  • Gemini

  • Accenture

  • Genpact

  • Spotify

  • Netflix

…and dozens more across financial services, consulting, and technology.

The going rate?

  • $3,000–$15,000 for initial access

  • $25,000 for 37 million cryptocurrency exchange records

The Real Implication of Accenture’s Findings

What this research makes clear - when taken to its logical conclusion - is this:

Managing insider risk requires more than governing access. It requires governing how data is used after access is granted.

This is the role of Post-Authentication Data Security (PADS).

PADS is a security layer that governs how data can be used after access is granted - enforcing policy at the moment of data interaction, not just at authentication.
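As a sketch of what "enforcing policy at the moment of data interaction" could look like, here is a hypothetical decision function. The field names, actions, and the 1,000-record threshold are illustrative assumptions, not a product API - the point is that the decision runs per interaction, after authentication has already succeeded.

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    user: str
    action: str          # e.g. "view", "export"
    records: int
    device_managed: bool

def decide(req: Interaction) -> str:
    # Hypothetical PADS-style policy, evaluated at the moment of data
    # interaction rather than at login.
    if not req.device_managed:
        return "deny"
    if req.action == "export" and req.records > 1_000:
        return "step_up_auth"  # bulk use demands fresh verification
    return "allow"

decide(Interaction("analyst", "view", 1, True))         # routine work: allow
decide(Interaction("analyst", "export", 50_000, True))  # bulk pull: challenge
```

Notice that the user's identity never changes between the two calls - only the use of the data does. That is the distinction authentication-centric controls cannot express.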

What Accenture’s Research Makes Clear

Accenture’s findings highlight a structural shift in threat dynamics:

  • Insiders provide initial access and credentials (30% of cases)

  • Perimeter defenses are bypassed entirely

  • Activity appears legitimate - because it is legitimate

  • Security controls defer by design once authentication succeeds

Whelan emphasizes lifecycle controls:

  • Stronger hiring and identity verification

  • Role separation and least privilege

  • Immediate access revocation during offboarding

  • Monitoring for pre-departure activity

  • Behavioral analytics and insider threat programs

These are essential.

They reduce the likelihood that insider threats emerge - or go undetected.

But they also reveal something deeper:

Even with these controls, an authenticated user can still use data in ways that are indistinguishable from legitimate activity.

Where Existing Controls End - and Why the Gap Exists

When a recruited insider acts, the cybersecurity stack behaves exactly as designed:

  • Identity is verified

  • Access is authorized

  • Permissions are correctly applied

  • Activity aligns with role expectations

  • Monitoring systems observe “normal” behavior

From the system’s perspective:

Everything is working correctly.

And that is precisely the problem.

Because “working correctly” still allows data to be:

  • Queried

  • Downloaded

  • Copied

  • Transferred

  • Sold

Nothing is bypassed.
Nothing is broken.
No control is technically evaded.

The attack succeeds because:

The security stack is architected to stop at authentication.

Whelan’s findings reinforce this reality:

Attackers are not defeating controls - they are operating within the boundary those controls were designed to trust.

The Architectural Limitation

Modern security is built to answer one question:

Who should have access?

It is not built to answer:

What should an authenticated user be allowed to do with data - right now, in this context?

This is why insider recruitment is so effective.

Existing controls - IAM, Zero Trust, SIEM, DLP, UEBA - are optimized for:

  • Preventing unauthorized access

  • Detecting abnormal behavior

They are not designed to stop:

Authorized, normal-looking misuse of data

This is not a failure of execution.

It is a limitation of architecture.

The Missing Layer: Post-Authentication Data Security (PADS)

Accenture’s framework focuses on managing insider risk across the employee lifecycle.

PADS extends that framework into the data interaction lifecycle.

If traditional controls answer:

  • Who should have access?

  • When should access be granted or revoked?

  • Is behavior anomalous?

PADS answers:

  • What should this user be able to do with the data they can access?

  • Is this specific use of data appropriate in this context?

This is not a replacement for insider threat programs.

It is the layer that ensures their effectiveness - even when insiders act within expected patterns.

Why This Matters in the Insider Economy

The insider recruitment model works because it exploits a core assumption:

Authenticated access implies legitimate use.

Accenture’s research shows attackers are deliberately targeting that assumption.

They recruit insiders because:

  • Access is already granted

  • Activity blends into normal workflows

  • Detection becomes significantly harder

PADS shifts control from access to data usage.

What Changes When Data Is Governed After Access

In a PADS-enabled environment:

  • Access still functions as designed

  • Authorized users still perform legitimate work

But:

  • Bulk extraction can be restricted or challenged

  • Sensitive data use can trigger contextual controls

  • Data remains protected - even outside the system

  • Actions - not just identities - are evaluated in real time

This means even if:

  • An insider is recruited

  • Credentials are valid

  • Behavior appears normal

The outcome changes.

Data is no longer freely extractable and usable simply because access was granted.

Aligning With Accenture’s Recommendations - And Extending Them

Whelan’s recommendations create a strong foundation:

  • Strengthen hiring and identity verification

  • Enforce role separation and least privilege

  • Revoke access immediately during offboarding

  • Monitor for behavioral anomalies

  • Expand insider threat intelligence

All of these aim to:

Prevent trusted individuals from using legitimate access to cause harm

But traditional implementations approach this indirectly.

They:

  • Limit access scope

  • Attempt to detect misuse

  • Reduce opportunity over time

They do not directly control:

What happens to data at the moment it is used

Where Traditional Controls Fall Short

Objective | Traditional Approach | Limitation
Prevent malicious insiders | Pre-employment screening | Cannot prevent post-hire recruitment
Limit exposure | RBAC / PoLP | Broad access still exists within roles
Stop access at risk | Offboarding | Reactive - after the decision point
Detect misuse | UEBA / monitoring | Requires deviation from "normal"
Identify targeting | Threat intelligence | Does not stop insider action

These controls rely on:

  • Predicting intent

  • Detecting anomalies

  • Acting after signals appear

In insider recruitment scenarios:

Those signals may never appear in time.

How PADS Delivers the Outcome Directly

Objective | PADS Capability | Outcome
Limit insider impact | Data usability governance | Controls actions within valid access
Prevent extraction | Contextual policy enforcement | Evaluates intent at time of use
Reduce detection reliance | Real-time controls | No need for "abnormal" behavior
Mitigate insider risk | Persistent data protection | Exfiltrated data is unusable
Contain breaches | Outcome-based enforcement | Prevents usable data loss

PADS operates where risk actually materializes:

The moment data is accessed and used

The Strategic Implication: An Architectural Fault Line

Accenture classifies insider threats as a medium-frequency, high-impact strategic risk.

But the deeper implication is this:

Insider risk is not an edge case - it is a consequence of how cybersecurity is designed.

Whelan’s findings expose a critical assumption:

Once a user is authenticated, risk is sufficiently managed.

That assumption no longer holds.

Modern architecture treats:

  • Authentication as the boundary of trust

Everything beyond that boundary is governed by:

  • Permissions

  • Expected behavior

  • Post-event detection

Not by real-time control of data itself.

This is the fault line.

The Bottom Line

Accenture’s findings don’t just highlight the rise of insider threats - they expose a fundamental flaw in modern cybersecurity:

The assumption that risk ends when access is granted.

In reality:

That is where risk begins.

The Verizon DBIR reinforces this:

  • 74% of breaches involve the human element

  • Much of that activity occurs within legitimate, authenticated sessions

No controls are bypassed.
No systems are broken.

Attackers simply operate inside the boundary the stack was designed to trust.

Whelan’s recommendations strengthen identity and access.

But they also point to a deeper truth:

Without governing how data is used after access is granted, the problem remains unsolved.

That is what Post-Authentication Data Security (PADS) delivers.

It shifts security from:

  • Controlling entry

To:

  • Controlling outcome

Because in today’s threat landscape:

Access is no longer the boundary of risk. Data usage is.

Resources

  • Accenture Cyber Intelligence Report: Insider Threat Escalation (2025)

  • What is PADS - The definition, category map, and how PADS completes the security model

  • Why PADS now - The forces driving post-authentication data theft

Final Thought

Every employee with access to sensitive data is a recruitment target.

Traditional security stops at authentication.

That’s exactly where the insider economy starts.

Data Protection

Mar 23, 2026

When IBM X-Force Says "Post-Auth is the New Perimeter," People Should Take Note

Ryan Anschutz, North America Leader for IBM X-Force Incident Response, recently published an article that deserves more attention than a typical LinkedIn post receives.

It started, as the best security lessons often do, with something completely mundane.

Ryan needed to export a list of event attendees. The UI had no export button. So, he opened browser developer tools, looked at what the application was doing behind the scenes, and scripted the authenticated API calls to extract everything he needed.
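The pattern he describes is ordinary pagination code. The sketch below substitutes a stubbed `fetch_page` with canned data for the authenticated endpoint his browser was already calling - the names and cursor values are invented for illustration. The point is that nothing in the loop requires anything beyond a valid session.

```python
def fetch_page(cursor):
    # Hypothetical stand-in for the authenticated API call the app's own
    # UI makes; dev tools expose the endpoint and the session token.
    pages = {
        None: (["alice", "bob"], "cursor2"),
        "cursor2": (["carol"], None),
    }
    return pages[cursor]

def export_all():
    rows, cursor = [], None
    while True:
        page, cursor = fetch_page(cursor)
        rows.extend(page)
        if cursor is None:
            return rows  # complete dataset, no exploit required

export_all()  # walks every page the session is authorized to see
```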

No exploits. No bypasses. No stolen credentials.

His conclusion: "The application worked exactly as designed. That's the part worth sitting with."

That sentence is the entire post-authentication data security (PADS) problem stated as plainly as it can be stated.

WHAT RYAN'S EXPORT TASK ACTUALLY DEMONSTRATES 

What Ryan described is not a vulnerability. It is not a misconfiguration. It is not a failure of any control. 

It is what happens when an authenticated session is trusted completely. When the backend extends full data usability to anyone holding a valid credential, with no evaluation of whether that trust should extend to bulk extraction, rapid pagination, or automated API calls at a scale no human would produce manually.

The application's authentication worked. Its authorization worked. Its session management worked. Every control functioned exactly as designed.

And a complete dataset was extracted in minutes.

This is what the Verizon Data Breach Investigations Report is describing when it notes that 74% of breaches involve the human element - overwhelmingly through valid credentials rather than broken authentication. It is not that attackers are bypassing authentication. It is that they have learned to operate inside the trust that authentication grants, and once inside that trust, there is almost nothing designed to evaluate whether specific data should be usable at a specific moment, under specific conditions, at a specific volume.

As Ryan puts it: "Attackers don't care about your UI. They care about what the backend will trust."  

RYAN'S QUESTION IS THE RIGHT QUESTION 

Ryan's bottom line for IR teams is worth quoting directly: 

"The question is not, 'Did MFA work?' The real question is, 'What did the backend trust after MFA succeeded?' That is the perimeter now."

This reframe - from "did authentication succeed" to "what did the system trust after authentication succeeded" - is precisely the shift that Post-Authentication Data Security (PADS) represents as a security category.

Traditional security architecture is built to answer the first question. The foundational layers - firewalls, IAM, MFA, Zero Trust - are designed to evaluate whether a given identity or session should be granted access in the first place. They operate on the principle that authentication and authorization are the primary security boundaries.

DLP represents the industry's first major attempt to address what happens after authentication. It monitors data movement and attempts to prevent sensitive information from leaving the organization through unauthorized channels. This is critical and valuable.

But Ryan's GraphQL example exposes the limitation: DLP is designed to detect abnormal data movement, not to govern normal data use.

The session was appropriately granted. The API calls were legitimate. The data access was authorized. The pagination pattern, if throttled to human speed, would appear normal. No unauthorized egress channel was used, just standard API responses over HTTPS.

DLP's fundamental assumption is that if data access appears normal, it probably is normal.

This is exactly the assumption that Ryan's example breaks. An attacker who understands how the backend evaluates "normal" can operate entirely within those parameters while extracting complete datasets.

The actions that followed authentication were indistinguishable from legitimate use. And no control in the stack, including DLP, was designed to ask whether bulk data extraction should be permitted even when the session was valid and the behavior appeared normal.

His observation cuts to the core of the problem: "After authentication, everything becomes the real perimeter, and most defenses still aren't built around that truth."

DLP monitors the perimeter. But when the attacker operates inside what the system considers normal authenticated behavior, there is no perimeter event to detect. 

WHAT COULD HAVE CHANGED THE OUTCOME

Ryan identifies several controls that could have interrupted the extraction: 

• Session tokens bound to device or browser context

• Behavioral rate limiting that notices no human paginates this fast

• Authorization enforced at the API layer, not assumed via the UI

• Step-up authentication for bulk or sensitive data access

• Short session lifetimes with frequent token rotation

• API-level telemetry that shows actual query behavior, not just page views
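The second item on that list - behavioral rate limiting - can be sketched in a few lines. This is an illustrative toy, not IBM's implementation; the 0.5-second threshold is an assumed value, and a real control would combine timing with volume and context.

```python
def human_plausible(timestamps, min_gap=0.5):
    # Flag request streams faster than a human could click through.
    # The 0.5 s minimum inter-request gap is an illustrative assumption.
    return all(b - a >= min_gap
               for a, b in zip(timestamps, timestamps[1:]))

human_plausible([0.0, 1.2, 2.5])      # human-speed browsing passes
human_plausible([0.0, 0.05, 0.1])     # scripted pagination is flagged
```

Note the limitation discussed later in this piece: a patient attacker who paginates at human speed never trips this check, which is why detection-based controls alone cannot close the gap.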

These recommendations map directly to what PADS delivers as a category:



IBM X-Force Recommendation | PADS Capability | How It Changes the Outcome
Session tokens bound to device/browser context | Contextual session management | Sessions can't be replayed from different devices or environments - even with valid credentials
Behavioral rate limiting | Anomaly detection & policy enforcement | Automated extraction at scale triggers real-time intervention before data leaves
Authorization enforced at API layer, not assumed via UI | Data-layer access controls | Backend enforces what data can be accessed regardless of how the request arrives
Step-up authentication for bulk access | Dynamic risk-based authentication | High-volume data access requires additional verification even for authenticated users
Short session lifetimes with frequent token rotation | Session governance | Limits window of opportunity for credential replay or session hijacking
API-level telemetry showing actual query behavior | Data interaction visibility | Surfaces what's actually happening at the data layer, not just what the UI suggests

WHERE DETECTION ALONE FALLS SHORT

Ryan's recommendations represent the access-control and behavioral-detection responses to the post-authentication problem. They are valuable and necessary.

But his list implicitly identifies their shared limitation: they all depend on detecting that something unusual is happening. Rate limiting notices unusual pagination speed. Behavioral monitoring notices unusual query patterns. Step-up authentication notices unusual data volume.

What happens when the extraction isn't unusual? When an attacker paginates at human speed, extracts data gradually over days, and operates within the behavioral thresholds that monitoring tools consider normal.

This is the scenario that Post-Authentication Data Security addresses at a more fundamental level. Rather than detecting unusual behavior and interrupting it, PADS governs data usability at the data layer itself. The question is not "does this behavior look suspicious?" It is "should this data be usable, under these conditions, for this action, to this destination?"

In a PADS model, data remains cryptographically protected and is only made usable at the moment of legitimate use - meaning extraction alone no longer equals compromise.

When data is protected at the layer Ryan is describing, the layer where the backend decides what an authenticated session can actually do with the data it accesses, the extraction scenario changes fundamentally.

The attacker can script the API calls. They can walk the pagination. They can extract every file in the repository.

They just can't read any of it.

THE BOTTOM LINE

Ryan's conclusion deserves to be repeated:

"The question is not, 'Did MFA work?' The real question is, 'What did the backend trust after MFA succeeded?' That is the perimeter now."

Every control you currently own is designed to answer the first question.

Almost none are designed to answer the second.

That gap between authentication and data protection is where 74% of breaches now operate.

Post-auth is the new perimeter. And as Ryan's article demonstrates, most defenses still aren't built around that truth.

Post-Authentication Data Security is the category that changes that.

RESOURCES

Ryan Anschutz's original article: https://www.ibm.com/think/x-force/post-auth-new-perimeter

What is PADS: The definition, the category map, and how PADS completes the security model existing tools leave unfinished.

Why PADS Now: The three forces that made post-authentication data theft the dominant threat.

Every tool you own stops at login. That's exactly where attackers start. 

Data Protection

Feb 17, 2026

Why Traditional DLP Cannot Stop Post-Authentication Data Theft

There is a dangerous oversimplification circulating in cybersecurity conversations: that Data Loss Prevention “doesn’t work.”

That claim is wrong.

Traditional DLP is not broken. It is not obsolete. And it is not the product of immature teams or poor deployment discipline. It was engineered for a different threat model, at a different control layer, under a different set of assumptions about how data is misused.

For more than a decade, DLP has played a meaningful role in enterprise security. It helped organizations locate sensitive data, apply classification-based policy, monitor how information moves through email, endpoints, and cloud services, and satisfy governance and compliance obligations. In many environments, it still provides operational and regulatory value.

And yet, despite mature DLP deployments, layered with IAM, Zero Trust, CASB, and cloud monitoring tools, organizations continue to suffer catastrophic data theft. In most of those incidents, the theft begins after the attacker authenticates successfully.

That is not a contradiction. It is an architectural boundary.

Post-Authentication Data Security exists because material risk now begins at a point where DLP, by design, cannot reliably prevent loss.

The Real Distinction Is Control Plane, Not Feature Depth

The difference between DLP and Post-Authentication Data Security is structural.

DLP observes and governs data movement. PADS governs data usability.

DLP is built to answer: Did sensitive information move somewhere it should not have?

PADS answers a more uncomfortable question: Given that access exists, should this data be usable or extractable right now?

That distinction matters because DLP must inspect data in order to govern it. Inspection requires decryption. By the time DLP evaluates content, the data is already usable inside the session.
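The inspection dependency can be made concrete with a schematic example: a pattern-based DLP check classifies plaintext easily, but has nothing to match once the same bytes are encrypted. The regex and the toy cipher below are purely illustrative, not any product's detection logic.

```python
import base64
import hashlib
import os
import re

SSN_PATTERN = re.compile(rb"\b\d{3}-\d{2}-\d{4}\b")

def dlp_inspect(payload: bytes) -> bool:
    """Schematic content inspection: flag payloads with an SSN-like pattern."""
    return bool(SSN_PATTERN.search(payload))

record = b"Name: J. Doe, SSN: 123-45-6789"
print(dlp_inspect(record))  # True: decrypted data in motion can be classified

# Toy encryption (XOR keystream), then the base64 form in which ciphertext
# typically travels. Real systems would use authenticated encryption.
key = os.urandom(32)
stream = (hashlib.sha256(key).digest() * 2)[: len(record)]
ciphertext = base64.b64encode(bytes(a ^ b for a, b in zip(record, stream)))
print(dlp_inspect(ciphertext))  # False: the pattern no longer exists to find
```

The same inspection engine that governs plaintext is structurally blind to protected data, which is why DLP must operate after decryption, inside the window where the data is already usable.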

PADS asserts control earlier. It enforces cryptographic protection at the data layer, even after authentication succeeds. Access does not automatically grant readability. Usability is conditional.

This is not a tuning difference. It is a control-plane difference.

DLP’s Design Assumptions Made Sense at the Time

DLP was built around a rational premise: if we understand who the user is, what data they are interacting with, and where that data is going, we can stop misuse even after login.

That premise held when misuse looked abnormal. When exfiltration required obvious bulk transfer. When users were the primary actors and backend automation was limited. When sensitive data moved in discrete, observable ways.

Modern attack patterns quietly dismantled those assumptions.

Today, attackers operate inside legitimate workflows. They use valid credentials, including service accounts. They rely on native export features and SaaS APIs. They extract data gradually to avoid triggering thresholds. Their behavior mirrors routine business operations.

Under those conditions, DLP does not “miss” the attack. It simply operates where it was designed to operate: after data is decrypted and in motion.

The architecture did not anticipate a world where authentication itself became the dominant breach vector.

The Backend Is Where the Limits Become Clear

The boundary is most visible at the server and backend layer, where the most valuable data actually resides: file servers, databases, SaaS backends, object storage, APIs, and integration engines.

Even when deployed on servers, DLP still inspects content after it has been decrypted for an authenticated process. Applications receive plaintext. Queries return structured results. APIs deliver usable data.

At that layer, there may be no discrete “user action” to intercept. Extraction occurs through queries and automated processes. Activity appears operational, not interactive.

DLP becomes dependent on logs, heuristics, thresholds, and classification accuracy. It becomes reactive by necessity.

This is why even mature DLP programs tend to be weakest precisely where the organization’s crown jewels live.

Classification Is Both DLP’s Strength and Its Constraint

DLP depends on classification. Before it can enforce policy, it must know whether data is sensitive and how it is labeled.

That dependency introduces fragility in modern environments where data is created continuously, classified by insiders who may themselves be the perpetrators, recombined dynamically, generated by third parties, and returned through APIs without consistent labeling. Sensitive content may be embedded inside larger files. Labels may lag reality. Derived data may inherit no protection at all.

DLP cannot protect what it cannot reliably identify. That is not a tooling flaw. It is a structural dependency.

In a post-authentication attack, the adversary does not defeat classification. They exploit its gaps.

Post-Authentication Data Security removes classification as the gating dependency for protection. It does not eliminate classification. It removes it as a single point of failure. Protection attaches to the data cryptographically. Usability is evaluated at the moment of access, not assumed because a label was correct.

That shift closes a category of silent exposure that DLP cannot.

The Trust Assumption That Now Carries Material Risk

DLP, like IAM and Zero Trust, inherits a necessary operational assumption: if a user or service is authenticated and authorized, their actions are legitimate until proven otherwise.

That assumption allows systems to function. But in a threat landscape where credential compromise is routine, that assumption becomes the attacker’s leverage.

When credentials are stolen, identity is valid. Sessions are approved. Permissions are correct. Backend systems return plaintext. Encryption disengages because authentication succeeded.

DLP sees normal activity.

PADS does not eliminate trust. It decouples trust from data usability. Even when access exists, data remains encrypted unless policy explicitly authorizes its use under the current conditions.

That is a fundamentally different stance toward risk.

The Boundary Has Moved. Architecture Must Follow.

Traditional DLP did not fail. It reached the boundary it was designed to manage.

Security architectures long assumed that controlling access and observing movement after access was sufficient. That model held when misuse was rare and when exfiltration required obvious deviation from normal operations.

Today, attackers authenticate. They operate inside approved workflows. They extract data in ways that appear legitimate. In that environment, observing misuse after data is readable is not prevention. It is documentation.

Post-Authentication Data Security exists because material risk now begins precisely where traditional controls defer by design: after access is granted.

It does not replace DLP, IAM, or Zero Trust. It completes the model they leave unfinished.

The defining question is no longer whether you stopped the attacker from getting in.

It is whether, when access was misused, your data remained protected.

DLP can tell you what happened.

PADS determines whether it mattered.


Data Protection

Feb 10, 2026

Access Control ≠ Data Protection: The Zero Trust Gap

Zero Trust, as a security framework, has been remarkably successful. From the early 2010s onward, it corrected a broken perimeter model, reshaped identity and access control, and forced organizations to abandon implicit trust. Properly implemented, it dramatically reduces unauthorized access and limits blast radius.

And yet, organizations that proudly describe their environments as “Zero Trust mature” continue to lose data at scale.

This is not because Zero Trust failed.

It is because Zero Trust did exactly what it was built to do, and then stopped. Zero Trust is not a data-protection strategy; rather, it is a session-admission strategy. It is a gatekeeper, no more and no less. It decides who gets in, under what conditions, and for how long. Once that decision is made, Zero Trust steps aside and assumes trust.

The mistake organizations made was assuming it would carry responsibility beyond that point. 

The gap we will now explore persists because leadership accepted that handoff without questioning it. This was not vendor deception. This was not an implementation miss. This was not an IT oversight. It was a leadership assumption that went unchallenged.

Let’s get into it.

Zero Trust Solves Access. Data Theft Happens After Access.

Zero Trust answers a very specific question, extremely well: Should this session exist?

To answer it, Zero Trust evaluates identity, device posture, network context, application risk, and behavioral signals. If those conditions are satisfied, the session is approved. If they are not, access is denied.

That decision is binary. And once it is made, Zero Trust’s job is complete.
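That binary admission decision can be pictured as a single gate function; the signal names are illustrative, not any product's API:

```python
def admit_session(identity_ok: bool, device_posture_ok: bool, network_ok: bool,
                  app_risk_ok: bool, behavior_ok: bool) -> bool:
    """Session admission: weigh the signals, return one binary verdict."""
    return all([identity_ok, device_posture_ok, network_ok, app_risk_ok, behavior_ok])

# Stolen-but-valid credentials presented from a trusted device satisfy
# every signal, so the session is admitted, and from this point on the
# framework has no further say over what the session does with data.
print(admit_session(True, True, True, True, True))   # True
print(admit_session(True, False, True, True, True))  # False: posture check denies
```

Everything downstream of that `True` is trust by assumption, which is exactly the boundary this article is describing.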

What Zero Trust does not do, and was never intended to do, is govern what happens to data once access is granted. It does not decide whether a file should decrypt. It does not determine whether information should remain usable in a given context. It does not follow data after it leaves the session boundary.

At the moment access is approved, Zero Trust steps aside. Trust is assumed. Data becomes usable.

That assumption is where modern breaches begin.

How Zero Trust Became a Catch-All for a Problem It Cannot Solve

As breaches continued inside Zero Trust environments, security teams faced a difficult truth. Attackers were not bypassing controls. They were using them.

The response was predictable and understandable. If data was being stolen after login, then access controls must not be strict enough. Organizations responded by tightening policies, adding conditional rules, increasing segmentation, layering identity checks, and monitoring sessions more aggressively.

Zero Trust was asked to compensate for a failure that did not belong to it.

The result was escalating complexity, longer deployments, higher cost, growing friction for users, and still, unacceptable data loss.

This was not a failure of execution. It was a failure of assignment.

Access control was being asked to do the job of data protection.

Identity Cannot Carry the Weight We Put on It

Modern security architecture treats identity as truth. If the user authenticated, the assumption is that their intent is acceptable. This appears reasonable, but it no longer holds.

Attackers build their entire operating model around this assumption.

They exploit phishing, MFA fatigue, token replay, OAuth abuse, insider misuse, vendor access, and shared credentials. Once identity is compromised or misused, Zero Trust has no additional authority. It has already done its job.

From the system’s perspective, nothing is wrong. The user is valid. The device is trusted. The activity looks normal. Files decrypt automatically. Data is readable, copyable, and transferable.

Security did not fail. It cooperated.

The Architectural Boundary Everyone Avoided Naming

Zero Trust governs sessions. Data protection must govern data.

Those are different planes of control.

Zero Trust can evaluate who you are, where you are, and whether you should be connected. It cannot determine whether a specific piece of data should be usable at a specific moment, under specific conditions, after access has already been granted.

Once data is downloaded, shared, copied, exported, or moved into another system, Zero Trust’s authority ends. It does not follow the file. It does not revoke usability. It does not enforce policy beyond the session.

This is not a gap you can configure away. It is a boundary built into the architecture.

Post-Authentication Data Security Exists Because Zero Trust Stops Too Early

Post-Authentication Data Security (PADS) exists to answer the question Zero Trust never asked: Even if access is valid, should this data be readable right now?

PADS operates where Zero Trust does not. It enforces protection at the data layer itself, using persistent encryption and continuous policy evaluation.

With PADS in place, authentication does not automatically grant decryption. Files remain encrypted unless conditions are met. Policies travel with the data across systems, platforms, and external sharing. Exfiltrated files remain unreadable. Credential compromise no longer guarantees data loss.

This is not stronger access control. It is control applied at the correct layer.

Why Data-Centric Businesses Must Reorder Their Priorities

For many organizations, Zero Trust was the right first move. But for businesses whose value is embodied in data, access control alone is insufficient.

Law firms, consulting firms, healthcare providers, financial institutions, and IP-driven enterprises do not fail because attackers get in. They fail because data becomes usable after attackers do.

For these organizations, Post-Authentication Data Security is non-negotiable. It directly protects confidential data and intellectual property. It prevents loss even when access fails. It preserves trust, contracts, and business viability. It contains breaches at the only layer that matters.

Zero Trust remains important. But it is secondary to loss prevention. This is a long-overdue correction of Zero Trust’s role.

Why Stretching Zero Trust Made Security Worse, Not Better

Forcing Zero Trust to carry responsibility for data protection explains why so many programs become slow, expensive, brittle, and frustrating. Identity systems are pushed beyond their limits. Users absorb friction. Security teams chase exceptions. And attackers continue to succeed.

PADS removes that burden.

With PADS in place, Zero Trust can focus on access. Identity can do what it does best. Data protection no longer depends on perfect enforcement upstream.

Breaches stop being existential events. They become containable incidents.

That is the difference between architecture and hope.

The Question Leadership Keeps Avoiding

Every organization should be forced to answer a single question: If someone logs in with valid credentials, what actually protects our data?

If the answer is more access rules, more Zero Trust, or more monitoring, then responsibility has been misplaced.

The correct answer is Post-Authentication Data Security.

Stop Treating Access Control as Data Protection

Zero Trust remains essential for controlling access in a perimeterless world. But data theft no longer happens before access. It happens after.

PADS exists because Zero Trust succeeded and stopped one step too early.

Organizations that take data protection seriously will not abandon Zero Trust. They will stop pretending it solves a problem it was never designed to address.

They will protect the data itself.


Data Protection

Feb 9, 2026

Phishing Keeps Working Because We’re Solving the Wrong Problem

For more than two decades, organizations have treated phishing as a messaging problem.

They have invested in increasingly sophisticated email filters, AI-powered detection engines, phishing simulations, security awareness training, MFA, browser isolation, DMARC, and Zero Trust architectures. Entire product categories and security budgets exist to stop users from clicking the wrong thing.

And yet phishing remains the single most successful attack vector in cybersecurity.

Not vulnerabilities. Not malware. Not zero-days.

More money is spent fighting phishing than any other type of attack. More breaches still result from it than from anything else. This is not because defenders are incompetent or underfunded. It is because the industry has spent years trying to prevent the wrong outcome.

Phishing does not succeed because an email is delivered. It succeeds because identity is compromised. And once identity is compromised, modern security architectures collapse by design.

Phishing Does Not Target Email. It Targets Identity.

Executives often picture phishing as a malicious link, a fake login page, or a suspicious attachment sent to an employee. That mental model is dangerously outdated.

Modern phishing attacks rarely stop at email. They exploit every place identity can be abused: stolen SSO sessions, MFA approval fatigue, OAuth token grants, help desk resets, browser cookie theft, SaaS integrations, social engineering, and supply-chain impersonation.

The goal is not to deliver malware. The goal is to become a trusted user.

Once an attacker achieves that, they stop caring about your anti-phishing tools entirely. Because at the moment they authenticate successfully, every major control organizations rely on steps aside.

Email security is no longer relevant.

Think about it:

  • Zero Trust validates the session.

  • MFA has already been satisfied.

  • IAM treats the attacker as legitimate.

  • EDR sees normal behavior.

  • Cloud applications grant full access.

  • DLP observes expected file usage.

From the system’s perspective, nothing is wrong. The attacker is now inside, operating exactly like an employee.

Phishing works because it does not need to bypass security. It only needs security to believe the wrong person.

The Terminal Weakness Every Anti-Phishing Tool Shares

Every anti-phishing control is built around a single assumption: if we can stop the attacker from logging in, the data will be safe.

That assumption no longer holds.

Email filters can block malicious messages until attackers pivot to SMS phishing, phone calls, QR codes, LinkedIn messages, MFA fatigue, or fake help desk interactions. Training can reduce mistakes, but even the most disciplined users fail occasionally, and attackers only need one success.

MFA improves security, but it is routinely bypassed through push fatigue, SIM swapping, token theft, evil proxy servers, session replay, and OAuth consent abuse. Zero Trust evaluates identity, device, and context, but once those conditions are met, it does exactly what it is designed to do: trust.

DLP can detect exfiltration after the fact, but it cannot stop an authenticated user from opening, reading, or copying data.

The industry keeps refining controls designed to prevent login, while attackers focus on what happens after login. That is the asymmetry driving today’s breach epidemic.

Authentication Is the Breaking Point

Read any major breach report from the last five years and the pattern is unmistakable.

The attacker authenticated with valid credentials. Systems functioned as designed. Data was stolen.

Authentication is the choke point in modern security. Once it fails, everything downstream cooperates. Files decrypt automatically. Access controls defer. Data becomes readable, transferable, and monetizable.

This is not a tooling failure. It is an architectural one.

Security stops at authentication. Data theft begins there.

Why Post-Authentication Data Security Changes the Outcome

Post-Authentication Data Security, or PADS, exists because the industry refused to confront this reality.

PADS is not another anti-phishing tool. It does not attempt to stop phishing emails, prevent credential theft, or predict human behavior. It assumes those failures will happen.

Instead, it addresses the only question that actually matters once identity is compromised: can the attacker read the data?

With PADS, authentication does not automatically grant decryption. Files remain encrypted even after login. Access is continuously evaluated at the data level, not just the session level. Policies travel with the data across cloud platforms, devices, and external sharing.

If data is copied or exfiltrated, it remains unreadable. If access occurs outside approved conditions, it silently fails. The attacker can log in and still walk away empty-handed.

This breaks the phishing kill chain at the only point that matters: data access, not login.

Why PADS Is the Only Effective Anti-Phishing Defense

Every existing anti-phishing approach focuses on prevention. PADS focuses on survivability.

Email security tries to block messages. Training tries to change behavior. MFA tries to harden authentication. Zero Trust tries to validate context. All of them fail once credentials are abused.

PADS does not need to stop phishing to be effective. It renders phishing economically useless.

When stolen credentials no longer unlock readable data, phishing loses its payoff. Breaches turn into contained incidents. Security teams respond without panic. Executives stop explaining why “controls worked but the data was taken.”

This is the difference between a breach report and a footnote.

The Shift Leaders Must Make

Phishing prevention is no longer sufficient. Phishing resilience is now the mandate.

Executives must stop asking how to eliminate phishing and start asking how to ensure phishing cannot steal data when it succeeds. No vendor can stop every attack. No training program can eliminate human error. No identity system is immune to abuse.

Attackers have already adapted to that reality. Defenders must do the same.

That adaptation requires abandoning the assumption that authentication equals trust.

Phishing Is Not a Cyber Problem. It Is a Data Protection Problem.

Phishing succeeds because modern security architectures grant full data access to anyone who authenticates successfully. Attackers have built entire business models around exploiting that assumption.

Post-Authentication Data Security eliminates it.

By keeping files encrypted after authentication, PADS removes the attacker’s single greatest advantage: the ability to turn stolen identity into readable data.

PADS by FenixPyre does not stop phishing.

It makes phishing irrelevant.

And in the threat landscape we actually live in, that is the only way organizations truly win.


Data Protection

Feb 6, 2026

Insider Misuse Isn’t a Security Failure. It’s a Design Failure.

Most organizations believe insider misuse is a human problem. A bad employee. A careless contractor. A disgruntled administrator. A developer who took data they should not have.

That framing is wrong.

Insider misuse persists not because people are unpredictable, but because modern security architectures are built on a fragile assumption: once trust is granted, data is safe. That assumption collapses in every real enterprise.

Organizations have built sophisticated, layered defenses to keep threats out. Identity systems authenticate users. Access controls assign permissions. Devices are monitored. Networks are segmented. From the outside, these environments appear mature and well governed.

What remains largely unaddressed is what happens after trust is granted.

That is where insider misuse operates. And that is why it continues to be one of the most common, costly, and underreported drivers of data loss.

Insider Misuse Doesn’t Bypass Security. It Operates Inside It.

Insider misuse does not require malware, exploits, or credential theft. It does not trip alarms. It does not look like an attack.

It uses legitimate access that the organization intentionally granted to people it trusts: employees, contractors, administrators, developers, partners, and vendors. Sometimes it is malicious. Often it is negligent. Frequently it is situational, driven by convenience, pressure, or misunderstanding.

From the system’s point of view, nothing is wrong.

The user is authenticated. The device is trusted. Permissions are valid. MFA has already been satisfied. Zero Trust has validated the session. Endpoint tools see no malicious behavior. DLP observes normal file access. Audit logs record legitimate actions.

The insider does not defeat security. The insider is security.

This is the uncomfortable truth most organizations avoid. Insider misuse succeeds precisely because the environment behaves exactly as designed.

Why Insider Misuse Causes Outsized Damage

Insider misuse is so damaging because it exploits the point where security stops.

Once access is granted, modern systems assume good intent. Files decrypt automatically. Sensitive data becomes readable. Bulk access appears normal. Copying files is permitted. Sharing data externally looks like business as usual.

Detection, if it occurs at all, is slow and reactive.

By the time an organization realizes something went wrong, the data has already been read, copied, or moved. At that point, the loss is irreversible.

This is why insider incidents routinely result in large-scale data exposure, intellectual property theft, regulatory violations, lawsuits, and permanent erosion of customer trust. And it is why some of the most damaging breaches never involve external attackers at all.

The Fatal Flaw: Trust Equals Unlimited Data Access

Every traditional security control answers the same foundational question: is this user authorized?

Insider misuse answers yes.

Identity and access management verifies who someone is, not what they intend to do. Multi-factor authentication validates login, not ongoing behavior. Zero Trust continuously evaluates sessions, but only at the identity and device level. It does not govern the data itself.

Data loss prevention tools look for suspicious movement, not inappropriate reading. Endpoint tools protect operating systems, not business logic. Compliance frameworks assume authorized access is safe access.

SOC 2, ISO 27001, NIST, HIPAA, CMMC and their peers were never designed to prevent trusted users from accessing data they are allowed to see.

Insider misuse is not a failure of tools. It is a failure of architecture.

Where Security Actually Breaks: After Authentication

Every insider incident follows the same pattern.

A trusted user accesses sensitive data. Files decrypt normally. Data is copied, shared, or downloaded. Detection occurs late, if at all. The organization remains compliant on paper. The data is exposed.

Once data is read in cleartext, the incident has already succeeded.

This is the moment modern security stacks do not control and do not defend.

Post-Authentication Data Security Changes the Equation

Post-Authentication Data Security, or PADS, was built to address the exact moment traditional security abandons control.

PADS does not attempt to predict intent. It does not rely on early detection. It does not block users from doing their jobs. Instead, it removes blind trust from the data layer.

With PADS, authentication does not automatically grant decryption. Files remain encrypted even for authorized users. Every attempt to access data is continuously evaluated against policy. Protection travels with the data across devices, cloud platforms, and external sharing.

If an insider copies files outside approved conditions, the data remains unreadable. If behavior violates policy, access silently fails. The user can still log in. The data simply does not cooperate.

This is the critical distinction. PADS does not stop insiders from existing. It stops insider misuse from becoming data theft.

Why This Works When Everything Else Fails

Traditional controls try to decide who to trust. PADS assumes trust will be misplaced.

IAM, MFA, Zero Trust, EDR, and DLP all play important roles, but none protect data after access is granted. PADS does. It shifts the unit of protection from users and systems to the data itself.

Insider misuse becomes self-limiting. Possession no longer equals usability. Access no longer guarantees exposure.

This is not a behavioral fix. It is a structural one.

The Question Leaders Must Finally Ask

Organizations must stop asking how to trust users better and start asking what protects data when trust is wrong.

Insiders will always exist. Mistakes will always happen. Privileges will always be misused. You cannot train intent. You cannot audit trust. You cannot detect misuse early enough to matter.

But you can protect data after access is granted.

Insider misuse is not a personnel problem. It is a data protection problem.

Post-Authentication Data Security by FenixPyre does not eliminate trust. It restores control. And in a world where most data loss happens after login, that is the only standard that actually matters.


Data Protection

Feb 4, 2026

Why Healthcare Organizations Are Still Losing Patient Data Even When Fully Compliant

Healthcare has spent years doing what it was told. 

Comply with HIPAA. Document safeguards. Harden EHR access. Pass audits. Train staff. Prepare incident response plans.

And still, patient data keeps leaking.

This is not because healthcare organizations ignored regulation. It is because regulation never addressed how modern breaches actually unfold.

Recent incidents across hospitals, insurers, and healthcare service providers exposed millions of patient records despite full compliance with HIPAA, HITECH, and industry security frameworks. These were not fringe operators cutting corners. They were sophisticated organizations with mature cybersecurity programs.

Healthcare regulation has grown more demanding. OCR enforcement now expects demonstrable safeguards for protected health information, clear detection and containment of unauthorized access, and rapid notification when exposure occurs. The emphasis has shifted from policy existence to control effectiveness.

Yet breaches continue because attackers are exploiting a failure mode that compliance does not test and audits do not surface. Once a user logs in with valid credentials, patient data is routinely exposed by design.

This is not a failure of effort or intent. It is a structural blind spot in how healthcare security has been defined. And until it is addressed, compliance will continue to coexist with patient data loss.

The Failure Mode Healthcare Security Misses

Executives need to understand a critical distinction: HIPAA compliance measures the environment. Attackers target the data.

Every major healthcare breach shares the same uncomfortable truth. Controls worked as designed, yet PHI was stolen.

Modern attacks follow a simple and repeatable pattern. Attackers obtain valid credentials. They authenticate successfully. EHR and PHI files decrypt automatically. Data is accessed in cleartext and exfiltrated. The organization remains compliant while patients are exposed.

Even the most mature healthcare cybersecurity stacks contain a critical architectural gap. The moment a valid username and password are used, meaningful data protection collapses.

Encryption disengages. Access controls trust the session. Monitoring becomes reactive rather than preventive.

This is the post-authentication data security gap. And attackers understand it far better than defenders.

They do not need to compromise Epic, Cerner, or Meditech. They do not need to exploit imaging systems or cloud patient portals. They only need to authenticate.
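The pattern above can be reduced to a toy sketch. Everything here is hypothetical - the names, the credential store, the fake "ciphertext" marker - and no real cryptography is shown; the point is where the decrypt decision is made, not how.

```python
# Toy model of the post-authentication gap. All names and values are
# illustrative placeholders, not any real system's design.

RECORDS = {"patient_042.pdf": "ciphertext<PHI>"}   # "encrypted at rest"
USERS = {"nurse_a": "correct-horse"}               # hypothetical credential store

def authenticate(user: str, password: str) -> bool:
    return USERS.get(user) == password

def read_file(session_user: str, name: str) -> str:
    # Classic design: any authenticated session gets plaintext. The system
    # cannot tell a phished credential from its rightful owner.
    return RECORDS[name].removeprefix("ciphertext<").removesuffix(">")

# An attacker holding stolen (but valid) credentials:
if authenticate("nurse_a", "correct-horse"):
    print(read_file("nurse_a", "patient_042.pdf"))  # -> PHI, in cleartext
```

Note that nothing in this sketch is misconfigured. Authentication works, storage encryption "works", and the PHI still comes out readable - which is exactly the failure mode compliance audits do not test.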

Why Healthcare Compliance Frameworks Do Not Close the Gap

Every major healthcare security framework focuses on protecting systems, networks, identities, and sessions. HIPAA and HITECH mandate safeguards and access controls. NIST CSF and 800-53 emphasize governance and risk management. HITRUST aggregates best practices into certifiable controls.

What none of these frameworks require is persistent protection of PHI after login.

Encryption at rest protects stolen laptops. Encryption in transit protects data moving across networks. Neither protects PHI once a user authenticates legitimately.

As a result, over 80 percent of healthcare data theft now occurs after successful authentication. Compliance verifies that systems are configured correctly. Attackers verify whether PHI decrypts when they log in.

One protects against yesterday’s threats. The other defines today’s reality.

Why Healthcare Organizations Must Go Beyond Compliance

Compliance is necessary. It is no longer sufficient.

Healthcare breaches are the most expensive of any industry, year after year. The cost of PHI exposure extends far beyond regulatory penalties. OCR investigations, class action lawsuits, identity theft protection for millions of patients, ransomware negotiations, operational shutdowns, and long-term reputational damage routinely dwarf the cost of prevention.

Third-party risk compounds the problem. Healthcare ecosystems now span EHR vendors, telehealth platforms, imaging systems, claims processors, labs, SaaS tools, and business associates. Data moves constantly across organizational boundaries, while trust is assumed after authentication.

At the same time, identity-based attacks dominate healthcare breaches. Phished MFA approvals, password reuse, compromised SSO sessions, vendor credential leakage, and insider misuse are now the primary threat vectors. Perimeter defenses are no longer the battleground.

Compliance has not kept pace with this shift.

Why Post Authentication Data Security (PADS) Is Essential for Protecting PHI

PADS addresses the exact failure mode healthcare attackers exploit. It starts with a different question. What happens after an attacker logs in?

In a Post Authentication Data Security model, PHI remains encrypted even after authentication. Access to sensitive files is continuously evaluated based on identity, device, and context. Policies travel with the data across EHR systems, cloud platforms, imaging tools, SaaS applications, and endpoints.

If PHI is exfiltrated, it remains unreadable and unusable. Credential compromise no longer guarantees patient data exposure. Insider misuse becomes containable rather than catastrophic.

This approach delivers what healthcare regulators increasingly demand. Defensible proof that patient data is protected, even when systems are accessed legitimately.

Conclusion

Healthcare organizations can be fully compliant and still catastrophically exposed. HIPAA sets the floor. Attackers set the bar.

To protect patient data rather than just systems, healthcare organizations must close the post-authentication gap that regulations do not address, audits do not evaluate, and pentests do not simulate.

PADS provides that missing layer. It transforms healthcare cybersecurity from policy adherence into patient data protection.

Compliance prevents penalties. PADS by FenixPyre prevents breaches. In healthcare, the difference is measured in patient trust.


Data Protection

Jan 30, 2026

Why Compliance Still Isn’t Protecting Financial Data

Every major financial institution with a headline-grabbing breach on the books was fully compliant at the time of compromise. Capital One. Morgan Stanley. JPMorgan. Equifax. Robinhood. First American Financial. The pattern is consistent and deeply uncomfortable.

Financial services firms operate under some of the most demanding cybersecurity regulations in the world. Think SEC disclosure rules, NIST frameworks, FFIEC examinations, PCI requirements. These standards form the backbone of modern financial cybersecurity programs and require extensive governance, documentation, and technical controls.

And yet data theft continues.

This reality has become harder to ignore following recent amendments to SEC Regulation S-P, which significantly expand expectations around safeguarding customer information. The amendments require comprehensive written incident response procedures, clear plans for detecting and containing unauthorized access, and mandatory notification when sensitive customer data is exposed.

These updates reflect an important shift. Regulators are no longer satisfied with policy documentation alone. They expect institutions to demonstrate that controls actually protect customer data.

But even with these stronger requirements, compliance still does not prevent modern data theft. That gap exists because today’s attacks exploit a failure mode that regulations were never designed to address.

The Failure Mode Regulators Do Not Measure

Executives need to understand a critical distinction: Compliance frameworks measure the environment. Attackers target the data. See the gap?

Every major financial breach followed the same sequence. Controls worked as designed. Audits were passed. Systems were hardened. And the data was still taken.

Modern attacks do not bypass controls. They turn them against you.

The pattern is simple and repeatable. Attackers obtain valid credentials. They authenticate successfully. Files decrypt automatically. Data is accessed in cleartext and exfiltrated. The organization remains compliant and devastated at the same time.

In most financial cybersecurity stacks, even the most mature ones, there is a fundamental architectural failure. The moment a valid username and password are used, meaningful data protection ends.

Encryption disengages. Access controls trust the session. Monitoring becomes reactive rather than preventive.

This is the post-authentication data security gap. And it is a gap attackers understand far better than defenders.

Why Compliance Frameworks Miss This Gap

Every major regulatory and standards body focuses on protecting systems, identities, and sessions. SEC rules emphasize governance and disclosure. NIST frameworks catalog technical and administrative controls. FFIEC guidance addresses risk management and oversight. PCI enforces strict encryption requirements for cardholder data.

What none of these frameworks require is persistent, file-level protection once a user authenticates.

Encryption at rest protects data if a physical device is stolen. Encryption in transit protects data moving across networks. Neither protects files once a valid login occurs.

As a result, over 80 percent of modern data theft now occurs after successful authentication. Regulations measure whether systems are configured correctly. Attackers measure whether data decrypts when they log in. One addresses yesterday’s threats. The other defines today’s reality.

Why Compliance Alone Is No Longer Defensible

Financial institutions must now confront a difficult truth. Compliance sets the floor for acceptable behavior. It does not define effective data protection.

The financial impact of data theft far exceeds regulatory penalties. Customer churn, class action litigation, incident response costs, recovery operations, insurance premium increases, and reputational damage routinely dwarf the cost of compliance.

At the same time, customer and counterparty expectations are rising faster than regulations. Financial services contracts increasingly require proof of secure data handling, modern identity architectures, and demonstrable controls over sensitive files. Compliance alone is no longer sufficient to win business.

Recent SEC disclosure requirements further raise the stakes. Boards and executives must now publicly describe cybersecurity risk management effectiveness and material impacts. A breach where controls worked but data was taken is becoming indefensible to investors.

Why Post Authentication Data Security (PADS) Changes the Equation

PADS addresses the exact failure mode that compliance frameworks and audits overlook.

It starts by asking a different question. What happens when an attacker logs in successfully?

In a Post Authentication Data Security model, data remains encrypted even after authentication. Access to sensitive files is continuously evaluated based on identity, device, and context. Policies travel with the data wherever it goes. If files are exfiltrated, they remain unreadable and unusable.

This architectural shift changes the outcome of breaches. Credential compromise no longer guarantees data loss. Insider misuse becomes containable. SaaS and cloud data remains protected outside the perimeter.

Most importantly, PADS delivers something compliance never has. Provable data protection outcomes.
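The continuous-evaluation model described above can be sketched in a few lines. This is a simplified illustration, not a real product API: the file structure, policy fields, and context attributes are all hypothetical, and the actual decryption is omitted.

```python
from dataclasses import dataclass

@dataclass
class AccessContext:
    user: str
    managed_device: bool
    network: str

# In this sketch, the policy travels with the file itself
# (the structure is illustrative, not a real file format).
PROTECTED_FILE = {
    "name": "client_positions.xlsx",
    "ciphertext": b"\x93\x1f...",
    "policy": {
        "allowed_users": {"analyst_b"},
        "require_managed_device": True,
        "allowed_networks": {"corp-net"},
    },
}

def may_decrypt(file: dict, ctx: AccessContext) -> bool:
    # Evaluated at every open, not once at login.
    p = file["policy"]
    return (ctx.user in p["allowed_users"]
            and (ctx.managed_device or not p["require_managed_device"])
            and ctx.network in p["allowed_networks"])

employee = AccessContext("analyst_b", managed_device=True, network="corp-net")
attacker = AccessContext("analyst_b", managed_device=False, network="tor-exit")

print(may_decrypt(PROTECTED_FILE, employee))  # True  -> key released, file opens
print(may_decrypt(PROTECTED_FILE, attacker))  # False -> valid login, ciphertext anyway
```

The design point is that both sessions present the same valid credentials; only the per-open policy evaluation, bound to the file rather than to the session, distinguishes them.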

The Standard Financial Leaders Must Exceed

Compliance will always matter. It prevents penalties and establishes baseline hygiene. But it cannot be the end goal.

Financial institutions must exceed regulatory requirements because attackers already have. They operate after authentication, inside trusted sessions, against data that decrypts automatically.

PADS closes the post-authentication gap that regulations do not cover, audits do not test, and attackers consistently exploit.

Conclusion

Financial firms can be fully compliant and still catastrophically exposed. The regulations set the floor. Attackers set the bar.

To protect data rather than just systems, financial institutions must adopt Post Authentication Data Security. It is the only approach that survives credential compromise, neutralizes insider threats, and turns breaches into contained events instead of existential failures.

Compliance prevents penalties. PADS by FenixPyre prevents data loss. And in today’s financial threat landscape, the difference matters.


Data Protection

Jan 27, 2026

Why Pentesting Doesn’t Answer the Question: 'Is Our Data Secure?'

Penetration testing (“pentesting”) has become a staple of modern cybersecurity programs. Organizations invest heavily in annual or quarterly tests, receive detailed reports, and walk away reassured by familiar conclusions. Controls are working as designed. MFA is in place. No critical vulnerabilities were identified. The perimeter is hardened.

For many executives, those findings translate into a simple assumption. Our data is secure.

That assumption is understandable. It is also wrong.

Penetration testing was never designed to validate whether sensitive data can be stolen. It validates whether systems can be compromised. Modern breaches increasingly bypass that distinction, which is why organizations that passed their pentests still suffered catastrophic data loss. Nike, Snowflake, Uber, Waymo, MOVEit, and Conduent all had functioning controls and still lost data at scale.

The gap is architectural, not procedural. 

Closing the gap requires more than another tool layered onto the perimeter.

What Pentesting Actually Measures

At its core, penetration testing answers a narrow and important question: Can an attacker break into our environment?

That question mattered most when breaches were primarily driven by malware, exploits, and perimeter bypasses. Today’s threat landscape looks very different. Most attackers do not break in. They log in.

They do so using phished MFA prompts, reused credentials, help desk resets, leaked API keys, compromised SaaS sessions, or insider access. (Regular, unannounced phishing tests for employees are a good idea for exactly this reason.) Once authenticated, attackers inherit trust across the environment. Files decrypt automatically. Access controls relax. Data becomes readable and exportable.

Pentesting does not meaningfully simulate this moment. In most testing methodologies, once valid credentials are obtained and sensitive data is reachable, the test effectively ends. Opening files is considered expected behavior. Exfiltration of readable data is assumed. That is precisely where real-world attacks begin.

Why Passing Pentests Still Leads to Breaches

Pentesting frameworks referenced in NIST, SOC 2, PCI-DSS, ISO 27001, and similar standards focus on essential hygiene. They assess vulnerability management, patching discipline, network segmentation, authentication configuration, and detection capabilities. These controls are necessary. They are also insufficient for protecting data once access is granted.

This mismatch explains why breach postmortems often sound identical. Controls worked as designed. Detection systems functioned. Identity tools authenticated users correctly. And attackers still walked away with the data.

The misconception is subtle but costly. 

Executives believe pentests validate data security, when in reality they validate infrastructure resilience. Data protection after authentication is rarely tested, measured, or discussed in executive forums.

Security Stops at Login. Data Theft Starts There

Read that again. 

Security stops at login. Data theft starts there.

Modern security architectures are environment-centric. They focus on protecting networks, endpoints, identities, and sessions. They assume that once a user is authenticated, access equals trust.

That assumption no longer holds.

Every major breach of the past decade demonstrates the same pattern. Attackers authenticate legitimately. Systems respond normally. Files decrypt. Data is taken. Pentesting validates the world before authentication. Breaches exploit the world after authentication.

This is the blind spot that keeps repeating itself.

So, what can you do about it? How do you build stronger defenses when security stops at login and data theft starts there?

Let’s get into it. 

Why Post Authentication Data Security (PADS) Changes the Outcome

PADS addresses the precise gap pentesting exposes but cannot close. Instead of protecting systems around the data, it protects the data itself.

In a PADS model, files remain encrypted even after login. Access is continuously evaluated based on identity, device, and context. Policies travel with the file wherever it goes. If data is exfiltrated, it remains unreadable and unusable outside approved conditions.

This approach does not replace existing controls. It complements them by making credential compromise survivable. Attackers may gain access to systems, but they are denied the one thing they are after. Usable data.
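The scope difference at the heart of this section can be sketched as two questions asked of the same environment. The checks and field names below are hypothetical stand-ins, not a real testing framework.

```python
# Hedged sketch: the question a typical pentest answers versus the
# question that decides breach impact. All fields are illustrative.

def pentest_verdict(env: dict) -> bool:
    # Scope of a conventional test: can an outsider get in?
    return env["mfa_enforced"] and env["patched"] and not env["exposed_services"]

def post_auth_verdict(env: dict) -> bool:
    # The post-authentication question: does a valid session yield
    # readable files, or ciphertext gated by file-level policy?
    return env["file_layer_encryption"]

env = {"mfa_enforced": True, "patched": True,
       "exposed_services": False, "file_layer_encryption": False}

print(pentest_verdict(env))    # True  -> "controls working as designed"
print(post_auth_verdict(env))  # False -> data still readable after login
```

The same environment passes one test and fails the other, which is precisely why a clean pentest report does not answer "is our data secure?"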

Why This Shift Is Now Unavoidable

Several forces are converging to make Post Authentication Data Security essential rather than optional. Credential-based attacks dominate breach statistics. Cloud and SaaS platforms have dissolved traditional perimeters. Insider risk continues to grow as access expands across employees, contractors, and partners. Regulators increasingly care about outcomes rather than controls, specifically whether stolen data was readable.

Detection tools will always lag exfiltration. By the time alerts fire, the damage is already done. PADS reduces breach impact by removing the attacker’s incentive.

The Executive Question That Finally Matters

There is one question leadership must ask to cut through pentest results, certifications, and dashboards.

If an attacker logged in with valid credentials, could they read our files?

If the answer is yes, the data is not secure, regardless of how strong the perimeter appears. If the answer is no, the organization has achieved a level of resilience traditional security cannot provide.

Conclusion

Penetration testing remains critical. It ensures baseline security hygiene and exposes technical weaknesses. But it does not answer the question executives care about most. Is our data secure?

Only Post Authentication Data Security closes the post-authentication gap that modern attackers exploit and pentests ignore. In a world where attackers log in instead of breaking in, protecting data at the file level is no longer optional.

PADS by FenixPyre is the missing layer that turns cybersecurity from breach prevention optimism into breach survivability reality.

Data Protection

Apr 17, 2026

The Duty of Care Gap: Why Today's Breach Litigation Standard Was Built for Yesterday's Attack

In the week of April 1 through April 7, 2026, five class action lawsuits were filed against Mercor, a $10 billion AI training startup serving OpenAI, Anthropic, and Meta. Five lawsuits in seven days. Each one built around the same fundamental argument - that Mercor failed to implement adequate security measures to protect the sensitive data of more than 40,000 contractors whose personal information, professional work product, and identifying documents were stolen in one of the most consequential data breaches of 2026.

The plaintiffs are not wrong that a failure occurred. The breach was real. The harm is real. The stolen data - 939 gigabytes of proprietary source code, 3 terabytes of video interview recordings and identity verification documents, a 211 gigabyte user database, internal communications, and AI training methodologies that Y Combinator CEO Garry Tan described as representing billions in value and a major national security issue - is now in the hands of attackers who obtained it through a cascading supply chain attack that harvested legitimate credentials from a compromised open source dependency.

The lawsuits are right that Mercor failed. They are wrong about what that failure actually was. And in being wrong about that, they are asking for a legal remedy built on a standard of care argument that - even if fully satisfied - would not have protected a single file when the credentials were compromised.

That is not a minor procedural deficiency. It is a fundamental misidentification of the duty that was breached. And it matters enormously - not just for the 40,000 contractors who deserve meaningful remedy, but for every organization that will read the Mercor settlement, implement its required controls, and believe they have met their obligation to protect the people whose data they hold.

They will not have. And the next breach will prove it.

The Standard of Care Argument the Lawsuits Are Building

To understand why the lawsuits are asking for the wrong fix, it is necessary to understand precisely what legal standard they are invoking and where that standard falls short.

Data breach class actions in the United States are predominantly built on negligence theory. To succeed on a negligence claim, a plaintiff must establish that the defendant owed a duty of care, that the defendant breached that duty, that the breach caused the plaintiff's harm, and that the plaintiff suffered cognizable damages.

The duty of care in data breach cases has been progressively defined by courts, regulators, and compliance frameworks over the past two decades. The FTC has enforcement authority over unfair or deceptive data security practices. The SEC has specific guidance for registered investment advisers and technology companies on data protection obligations. State attorneys general have brought actions under consumer protection statutes. Courts have increasingly recognized an implicit duty to protect sensitive personal data commensurate with the nature of the data held and the reasonable expectations of the people who provided it.

What has emerged from this body of law, regulation, and enforcement is a standard of care built almost entirely around access layer controls. The duty as courts and regulators currently understand it is a duty to prevent unauthorized access. Implement MFA. Segment networks. Monitor for anomalous activity. Rotate credentials. Conduct regular security audits. Encrypt data at rest and in transit.

The Mercor lawsuits invoke exactly this standard. The Gill complaint alleges failure to implement MFA, failure to limit access to PII, failure to monitor systems, failure to rotate passwords, and failure to encrypt sensitive data during storage and transmission. It is a textbook recitation of the access layer standard of care as it currently exists in data breach litigation doctrine.

And here is the legal problem that nobody in any of the five courtrooms is currently confronting:

That standard of care - even fully satisfied - would not have prevented the harm the plaintiffs suffered. Because the harm did not originate from a failure of access layer controls. It originated from a failure at the data layer. And the legal doctrine has not yet caught up to that distinction.

The Encryption Allegation Points at the Right Problem and Then Misses It

Among all the allegations in the Mercor complaints, the failure to encrypt sensitive data during storage and transmission is the one that comes closest to identifying the actual duty that was breached. It points toward the right problem. But the way it is framed - listed alongside MFA and password rotation as one item among several access layer improvements - reveals that the plaintiffs' attorneys understand encryption as a storage security measure rather than as a fundamentally different category of data protection obligation.

That distinction is not semantic. It is the difference between a remedy that changes the outcome for 40,000 contractors and a remedy that produces a more expensive breach with identical consequences.

Encryption at rest means data sitting in a database or storage system is encrypted when it is not being accessed. Encryption in transit means data moving between systems is encrypted as it travels. Both are legitimate and important security controls, and both are widely recognized components of the current standard of care. But both are rendered completely ineffective the moment an attacker obtains valid credentials. When a user authenticates through the normal access pathway, the system decrypts the data for them; it cannot distinguish a legitimate user from an attacker holding stolen credentials, and the encryption that was supposed to protect the data dissolves on contact with a valid authenticated session.

This means that in the exact breach scenario the Mercor lawsuits describe - an attacker authenticating successfully with stolen credentials and accessing files through the authorized decryption pathway - both forms of encryption the complaint demands would have been fully satisfied and would have protected nothing. The files would still have been usable. The exfiltration would still have proceeded. The harm would still have flowed to 40,000 contractors.

The lawsuits are demanding a standard of care that has already been implicitly satisfied by the mechanism of the attack itself. And demanding it more rigorously produces no meaningful benefit to the people the litigation is supposed to protect.

The Duty That Was Actually Breached

If the current standard of care - even fully implemented - would not have changed the outcome, the legal question becomes what duty would have. What obligation, if discharged, would have rendered the breach consequence-free for the 40,000 contractors who are now plaintiffs?

The answer is precise and it points to a duty that existing doctrine has not yet adequately articulated: the duty to protect data at the file layer after authentication succeeds.

This is the Post Authentication Data Security duty. It is distinct from and more demanding than the access layer duty that current doctrine recognizes. It is not a duty to prevent unauthorized access - though that duty exists and matters. It is a duty to ensure that data remains protected even when access succeeds, whether that access was legitimately obtained or achieved through credential theft, supply chain compromise, insider misuse, or any other vector that produces a valid authenticated session.

The distinction maps directly onto the facts of the Mercor breach. The attackers authenticated successfully. Every access control performed exactly as designed. The breach did not occur at the access layer - it occurred at the data layer, where no protection existed to govern what happened to files after authentication succeeded.

Under the current standard of care doctrine, Mercor's failure is characterized as an access layer failure - insufficient MFA, inadequate monitoring, poor credential hygiene. Those characterizations may be legally valid but they are factually incomplete. The more precise and more legally significant failure was the absence of file layer protection that would have rendered the authenticated access consequence-free regardless of who held the credentials.

The duty to protect data at the file layer after authentication succeeds is the duty the Mercor lawsuits are gesturing toward but failing to name. And naming it precisely is the most important legal contribution the Mercor litigation could make to the evolution of data breach doctrine.

Why the Current Standard of Care Is Structurally Insufficient

The cybersecurity industry has known for years that stolen credentials are the single biggest vulnerability in the modern security stack. This is not a controversial position. Verizon's Data Breach Investigations Report has identified compromised credentials as the leading cause of breaches for nearly a decade running. IBM's Cost of a Data Breach Report consistently ranks stolen credentials as both the most common and most expensive attack vector. Every major security framework - NIST, ISO 27001, HITRUST - includes extensive controls around identity and access precisely because the industry understands that when credentials are compromised, everything built around them collapses.

The industry has known this for a long time. And it has continued to build and sell architectures that are fundamentally dependent on the integrity of those same credentials - producing a decade of breach reports confirming the problem while simultaneously recommending the same access layer controls those reports prove are insufficient.

That failure has a direct legal consequence. Courts and regulators developing the standard of care in data breach cases have done what courts and regulators reasonably do - they have looked to the security industry for guidance on what constitutes reasonable practice. The standard of care that has emerged reflects the industry consensus those courts and regulators found when they looked: a perimeter-centric, access-focused framework that treats credential integrity as the primary and, in many cases, sufficient protection for sensitive data.

The doctrine is not wrong on its own terms. It accurately reflects what the industry told courts and regulators was adequate. The problem is that the industry's own data has been contradicting that consensus for years - and the legal standard has had no mechanism to update itself in response. The result is a standard of care that courts apply in good faith, that organizations implement in good faith, and that leaves sensitive unstructured files fully exposed to the primary attack vector the industry itself has identified as the leading cause of breaches for nearly a decade.

That is not a gap in legal reasoning. It is a gap between legal doctrine and technical reality - and it is a gap that the Mercor breach has rendered impossible to ignore.

The Mercor breach is the most precise possible illustration of that gap. The attack chain began with a compromised GitHub Actions workflow in an open source vulnerability scanner. It harvested credentials through a malicious dependency executing in a CI/CD pipeline. It used those credentials to authenticate as legitimate users. It accessed and exfiltrated files that the authenticated session was authorized to access. Every step of that chain operated entirely within the parameters of a security architecture that meets the current standard of care.

The standard of care that the Mercor lawsuits are invoking - the standard that Mercor allegedly failed to meet - would not have detected or prevented any step of that chain after the initial credential harvest. Because the standard is designed around preventing unauthorized access and the attack succeeded by achieving authorized access with stolen credentials.

A standard of care that cannot address the primary attack vector in the industry's own breach data is not a standard that adequately defines the duty organizations owe to the people whose data they hold.

What the Evolved Standard of Care Looks Like

The legal evolution that the Mercor lawsuits should be driving - but are not yet articulating - is a standard of care that extends the duty of protection beyond the access layer to the data layer itself.

Under an evolved standard the duty is not satisfied by encrypting data at rest and in transit. Those controls protect data from passive interception and storage compromise. They do not protect data from authenticated access using stolen credentials. They do not protect files from exfiltration by a session that the system has recognized and authorized. They are necessary components of a complete security posture but they are not sufficient to discharge the duty of care owed to people whose most sensitive personal and professional information is held in unstructured files.

The evolved standard requires file layer protection - encryption that travels with the file itself, that governs usability independent of the access layer, that remains in force regardless of what credentials were used to obtain access, and that renders the file unusable to any recipient who cannot demonstrate, at the moment of access, that they are the authorized user in the authorized context for which access was intended.

This is Post Authentication Data Security applied as a legal duty rather than a security recommendation. It is the control that, had it been in place at Mercor, would have changed the outcome completely.

The attackers would still have authenticated successfully. They would still have accessed the files. They would still have exfiltrated the files. And the files would have been ciphertext. Not because the authentication failed. Not because the access was detected and blocked. But because the files themselves were protected in a way that made the authenticated access consequence-free for every contractor whose data was taken.
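That outcome can be made concrete with a minimal sketch. Everything here is hypothetical and deliberately simplified: the toy XOR cipher stands in for real authenticated encryption (a production system would use something like AES-GCM), and `release_key` stands in for a policy-enforcing key service separate from the data itself.

```python
import hashlib

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher for illustration only; applying it twice
    # with the same key round-trips the data.
    stream = hashlib.sha256(key).digest()
    return bytes(b ^ stream[i % len(stream)] for i, b in enumerate(data))

# In this sketch, keys live in a separate control plane,
# never inside the file that might be exfiltrated.
KEY_SERVER = {"doc-17": b"hypothetical-key"}

def release_key(doc_id: str, requester: str):
    # Key release is the policy decision point, independent of login state.
    return KEY_SERVER[doc_id] if requester == "authorized-reviewer" else None

blob = xor_cipher(b"training methodology", KEY_SERVER["doc-17"])
stolen_copy = {"id": "doc-17", "blob": blob}

# The exfiltrated copy, opened outside the approved context:
key = release_key(stolen_copy["id"], requester="attacker")
print(key)  # None -> the stolen bytes remain ciphertext
```

Because usability depends on a key the attacker never obtains, exfiltrating the blob transfers bytes but not data - which is the legal distinction this section argues the evolved standard of care should turn on.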

Under an evolved standard of care that recognized this duty, Mercor's failure was not inadequate MFA or lax password rotation. It was that it held 40,000 people's most sensitive data in unprotected files that were fully usable to anyone who obtained valid credentials - and in a world where credential theft through supply chain compromise is the industry's leading breach vector, holding sensitive data in unprotected files is itself the breach of duty.

The Delve Scandal Proves the Point

The Mercor breach did not happen in isolation. It happened simultaneously with the exposure of Delve Technologies - the GRC automation startup that had issued compliance certifications for LiteLLM, the open source AI proxy whose compromise enabled the credential harvest that reached Mercor. Those certifications were, according to the whistleblower who exposed the company, industrialized fiction. Pre-populated attestations. Certifications issued without independent verification of the controls they purported to certify.

The convergence of these two stories is not incidental. It is the most powerful possible illustration of the gap between certified compliance and actual data protection that sits at the heart of the standard of care problem.

Mercor had compliance certifications. LiteLLM had compliance certifications. Those certifications validated access controls, security processes, and organizational security practices against the current standard of care. And none of it protected a single file when the credentials were compromised.

This is the standard of care problem rendered in its starkest form. The compliance framework the lawsuits are demanding Mercor should have met is a framework designed to certify access controls. It has no mechanism for certifying what happens to files after access succeeds. It validates the door. It has nothing to say about the files behind the door when someone walks through with a stolen key.

The Delve scandal did not create this problem. It exposed it. The problem existed in every legitimately certified organization whose sensitive files are protected only by the access controls that a valid authenticated session bypasses by definition. The certification confirms the lock works. It says nothing about the readability of what is inside when the lock is opened with a stolen key.

Post Authentication Data Security provides the protection that certification cannot - because it is not a process control that can be attested to. It is a technical control that either renders files unusable or does not. There is no compliance theater version of file layer encryption. The files are either protected or they are not. And that binary self-executing reality is precisely what the evolved standard of care should require.

The Regulatory Safe Harbor Argument

The legal implications of file layer protection extend beyond negligence theory into the regulatory framework that governs breach notification and penalty - and here the argument for an evolved standard of care becomes most immediately actionable for organizations deciding right now how to protect the files they hold.

Most data breach notification laws are triggered by the exposure of usable, readable personal data. GDPR Article 34 explicitly states that notification to affected individuals is not required when the data was encrypted and rendered unintelligible to unauthorized parties. HIPAA's Safe Harbor provision categorizes breached encrypted data as a non-reportable event. California's CCPA, New York's SHIELD Act, and most equivalent state frameworks include explicit encryption safe harbors that reduce or eliminate notification obligations when the stolen data was encrypted and remained ciphertext.

These safe harbors already exist in the regulatory framework. They already recognize that encrypted data that cannot be read does not produce the harm that breach notification laws are designed to address. They are the regulatory system's implicit acknowledgment of the principle that Post Authentication Data Security makes explicit - that what matters for data protection purposes is not whether the data was accessed but whether it was usable when it was taken.

The Mercor lawsuits are built on the premise that contractor data was compromised in a readable form. Under the regulatory safe harbor framework that already exists, file layer encrypted data that is exfiltrated but unusable does not meet the threshold for mandatory notification. Had Mercor's files carried that protection, the breach event that generates the legal obligation would not have occurred - and the five lawsuits would have no viable plaintiffs, because the harm the plaintiffs allege, exposure of readable personal data to criminal actors who can exploit it, would not have happened.

The safe harbor framework is the regulatory system pointing toward the evolved standard of care that litigation doctrine has not yet fully articulated. It already recognizes that encryption at the data layer changes the legal character of a breach. The doctrinal evolution required is to extend that recognition from a regulatory safe harbor into an affirmative duty - a standard of care that requires file layer protection not merely as a mitigating factor but as a component of the baseline obligation owed to people whose sensitive data is held in unstructured files.

What the Mercor Lawsuits Should Be Arguing

The most important legal contribution the Mercor litigation could make is to reframe the standard of care claim around the duty that was actually breached rather than the duty that existing doctrine recognizes.

The complaint should not lead with failure to implement MFA or failure to rotate passwords. Those are real failures and they belong in the complaint. But they are not the failure that made 40,000 contractors vulnerable to years of identity theft risk. The failure that did that was holding sensitive unstructured files - files containing Social Security numbers, identity documents, video recordings, and proprietary work product - without file layer protection that would have rendered those files unreadable to anyone who took them regardless of what credentials they used.

The encryption allegation in the current complaint points toward this duty but frames it as a storage security failure. The stronger and more legally significant framing is a failure of Post Authentication Data Security - a failure to protect files at the data layer in a way that maintains protection after authentication succeeds, independent of credential integrity, independent of access layer controls, independent of whether the session that accessed the files was legitimate or the product of supply chain credential theft.

That framing advances data breach doctrine in a meaningful direction. It creates a legal framework that actually maps onto the threat environment the industry's own data describes - a world in which credential compromise is the leading attack vector and access layer controls are necessary but insufficient to discharge the duty of care owed to the people whose data is at risk.

It also creates a remedy that would actually change the outcome. Not a settlement requiring better MFA and more rigorous password rotation that leaves 40,000 people's files just as usable the next time valid credentials are stolen. A standard that requires file layer protection - protection that holds when everything else fails, protection that renders credential theft consequence-free for the people whose data was taken.

The Conversation the Industry and the Legal Community Must Have Together

The Mercor lawsuits will settle. The settlement will specify controls. The controls will reflect the current standard of care. And the current standard of care will remain a decade behind the threat environment it is supposed to address.

Unless the legal community starts asking the question that the complaints are currently missing.

Not whether Mercor had adequate access controls. Whether Mercor discharged its duty to protect the files its contractors trusted it to hold - protect them in a way that maintains that protection after authentication succeeds, that holds when credentials are stolen, that renders the breach consequence-free for the people whose data is taken regardless of how the attacker obtained access.

That is the standard the threat environment demands. That is the standard the regulatory safe harbor framework is already gesturing toward. That is the standard the evolved duty of care in data breach litigation needs to articulate.

Post Authentication Data Security is not the standard of care today. It is the standard of care the Mercor breach demonstrates is necessary - and the standard that the legal community, the security industry, and the organizations that hold sensitive unstructured files have a shared obligation to establish before the next breach proves the same point at the same cost to the same people who had no choice but to trust that the files they handed over would be protected when it mattered most.

The five lawsuits filed in seven days are the most powerful available argument for why that conversation cannot wait.

FenixPyre is purpose-built to close the Post Authentication Data Security gap for unstructured data - ensuring that files remain protected at the data layer regardless of how access was obtained. In a world where supply chain attacks make credential theft an inevitability, file layer protection is not a security enhancement. It is the evolved standard of care the modern threat environment demands.


Data Protection

Mar 23, 2026

When Accenture Reports a 127% Surge in Dark Web Insider Recruitment, It’s Time to Rethink Data Security

Accenture’s Cyber Intelligence team recently published research that should alarm every CISO and board member: insider threats facilitated through dark web ecosystems are escalating at an unprecedented rate.

The numbers are stark:

  • 69% increase in insiders offering access (2025 vs. 2024)

  • 127% surge in hackers actively recruiting insiders (vs. 2022)

As Ryan Whelan, Accenture’s Global Head of Cyber Intelligence, explains:

“The insider economy is now principally designed to support early-stage intrusions, with criminal gangs increasingly relying on insiders to bypass cyber defenses.”

This is not theoretical.

Dark web posts explicitly name targets:

  • Coinbase

  • Binance

  • Kraken

  • Gemini

  • Accenture

  • Genpact

  • Spotify

  • Netflix

…and dozens more across financial services, consulting, and technology.

The going rate?

  • $3,000–$15,000 for initial access

  • $25,000 for 37 million cryptocurrency exchange records

The Real Implication of Accenture’s Findings

What this research makes clear - when taken to its logical conclusion - is this:

Managing insider risk requires more than governing access. It requires governing how data is used after access is granted.

This is the role of Post-Authentication Data Security (PADS).

PADS is a security layer that governs how data can be used after access is granted - enforcing policy at the moment of data interaction, not just at authentication.

What Accenture’s Research Makes Clear

Accenture’s findings highlight a structural shift in threat dynamics:

  • Insiders provide initial access and credentials (30% of cases)

  • Perimeter defenses are bypassed entirely

  • Activity appears legitimate - because it is legitimate

  • Security controls, by design, defer to the authenticated session

Whelan emphasizes lifecycle controls:

  • Stronger hiring and identity verification

  • Role separation and least privilege

  • Immediate access revocation during offboarding

  • Monitoring for pre-departure activity

  • Behavioral analytics and insider threat programs

These are essential.

They reduce the likelihood that insider threats emerge - or go undetected.

But they also reveal something deeper:

Even with these controls, an authenticated user can still use data in ways that are indistinguishable from legitimate activity.

Where Existing Controls End - and Why the Gap Exists

When a recruited insider acts, the cybersecurity stack behaves exactly as designed:

  • Identity is verified

  • Access is authorized

  • Permissions are correctly applied

  • Activity aligns with role expectations

  • Monitoring systems observe “normal” behavior

From the system’s perspective:

Everything is working correctly.

And that is precisely the problem.

Because “working correctly” still allows data to be:

  • Queried

  • Downloaded

  • Copied

  • Transferred

  • Sold

Nothing is bypassed.
Nothing is broken.
No control is technically evaded.

The attack succeeds because:

The security stack is architected to stop at authentication.

Whelan’s findings reinforce this reality:

Attackers are not defeating controls - they are operating within the boundary those controls were designed to trust.

The Architectural Limitation

Modern security is built to answer one question:

Who should have access?

It is not built to answer:

What should an authenticated user be allowed to do with data - right now, in this context?

This is why insider recruitment is so effective.

Existing controls - IAM, Zero Trust, SIEM, DLP, UEBA - are optimized for:

  • Preventing unauthorized access

  • Detecting abnormal behavior

They are not designed to stop:

Authorized, normal-looking misuse of data

This is not a failure of execution.

It is a limitation of architecture.

The Missing Layer: Post-Authentication Data Security (PADS)

Accenture’s framework focuses on managing insider risk across the employee lifecycle.

PADS extends that framework into the data interaction lifecycle.

If traditional controls answer:

  • Who should have access?

  • When should access be granted or revoked?

  • Is behavior anomalous?

PADS answers:

  • What should this user be able to do with the data they can access?

  • Is this specific use of data appropriate in this context?

This is not a replacement for insider threat programs.

It is the layer that ensures their effectiveness - even when insiders act within expected patterns.

Why This Matters in the Insider Economy

The insider recruitment model works because it exploits a core assumption:

Authenticated access implies legitimate use.

Accenture’s research shows attackers are deliberately targeting that assumption.

They recruit insiders because:

  • Access is already granted

  • Activity blends into normal workflows

  • Detection becomes significantly harder

PADS shifts control from access to data usage.

What Changes When Data Is Governed After Access

In a PADS-enabled environment:

  • Access still functions as designed

  • Authorized users still perform legitimate work

But:

  • Bulk extraction can be restricted or challenged

  • Sensitive data use can trigger contextual controls

  • Data remains protected - even outside the system

  • Actions - not just identities - are evaluated in real time

This means even if:

  • An insider is recruited

  • Credentials are valid

  • Behavior appears normal

The outcome changes.

Data is no longer freely extractable and usable simply because access was granted.
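As a sketch of what evaluating actions in real time might look like, the toy policy below scores each data interaction rather than the session that preceded it. The rule names, thresholds, and destinations are illustrative assumptions, not a vendor API.

```python
from dataclasses import dataclass

@dataclass
class Action:
    user: str
    operation: str    # e.g. "read", "export"
    record_count: int
    destination: str  # e.g. "corporate-app", "personal-usb"

# Hypothetical PADS-style rules: every interaction is evaluated, even
# inside a fully valid authenticated session.
GOVERNED_DESTINATIONS = {"corporate-app", "managed-share"}
BULK_THRESHOLD = 1_000

def evaluate(action: Action) -> str:
    """Return 'allow', 'challenge', or 'deny' for a single interaction."""
    if action.destination not in GOVERNED_DESTINATIONS:
        return "deny"       # data leaving the governed context
    if action.operation == "export" and action.record_count > BULK_THRESHOLD:
        return "challenge"  # bulk extraction triggers a contextual step-up
    return "allow"

# An authenticated insider doing normal work:
assert evaluate(Action("bob", "read", 12, "corporate-app")) == "allow"
# The same valid session attempting a bulk pull of exchange records:
assert evaluate(Action("bob", "export", 37_000_000, "corporate-app")) == "challenge"
# Copying even a few records toward an ungoverned destination:
assert evaluate(Action("bob", "export", 50, "personal-usb")) == "deny"
```

Note that nothing in the decision depends on whether the user "looks anomalous" - the control keys off the action itself, which is why it still fires when a recruited insider behaves exactly like a normal employee.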

Aligning With Accenture’s Recommendations - And Extending Them

Whelan’s recommendations create a strong foundation:

  • Strengthen hiring and identity verification

  • Enforce role separation and least privilege

  • Revoke access immediately during offboarding

  • Monitor for behavioral anomalies

  • Expand insider threat intelligence

All of these aim to:

Prevent trusted individuals from using legitimate access to cause harm

But traditional implementations approach this indirectly.

They:

  • Limit access scope

  • Attempt to detect misuse

  • Reduce opportunity over time

They do not directly control:

What happens to data at the moment it is used

Where Traditional Controls Fall Short

Objective | Traditional Approach | Limitation
Prevent malicious insiders | Pre-employment screening | Cannot prevent post-hire recruitment
Limit exposure | RBAC / PoLP | Broad access still exists within roles
Stop access at risk | Offboarding | Reactive; acts only after the decision point
Detect misuse | UEBA / monitoring | Requires deviation from "normal"
Identify targeting | Threat intelligence | Does not stop insider action

These controls rely on:

  • Predicting intent

  • Detecting anomalies

  • Acting after signals appear

In insider recruitment scenarios:

Those signals may never appear in time.

How PADS Delivers the Outcome Directly

Objective | PADS Capability | Outcome
Limit insider impact | Data usability governance | Controls actions within valid access
Prevent extraction | Contextual policy enforcement | Evaluates intent at time of use
Reduce detection reliance | Real-time controls | No need for "abnormal" behavior
Mitigate insider risk | Persistent data protection | Exfiltrated data is unusable
Contain breaches | Outcome-based enforcement | Prevents usable data loss

PADS operates where risk actually materializes:

The moment data is accessed and used

The Strategic Implication: An Architectural Fault Line

Accenture classifies insider threats as a medium-frequency, high-impact strategic risk.

But the deeper implication is this:

Insider risk is not an edge case - it is a consequence of how cybersecurity is designed.

Whelan’s findings expose a critical assumption:

Once a user is authenticated, risk is sufficiently managed.

That assumption no longer holds.

Modern architecture treats authentication as the boundary of trust.

Everything beyond that boundary is governed by:

  • Permissions

  • Expected behavior

  • Post-event detection

Not by real-time control of data itself.

This is the fault line.

The Bottom Line

Accenture’s findings don’t just highlight the rise of insider threats - they expose a fundamental flaw in modern cybersecurity:

The assumption that risk ends when access is granted.

In reality:

That is where risk begins.

The Verizon DBIR reinforces this:

  • 74% of breaches involve the human element

  • Occurring within legitimate, authenticated sessions

No controls are bypassed.
No systems are broken.

Attackers simply operate inside the boundary the stack was designed to trust.

Whelan’s recommendations strengthen identity and access.

But they also point to a deeper truth:

Without governing how data is used after access is granted, the problem remains unsolved.

That is what Post-Authentication Data Security (PADS) delivers.

It shifts security from controlling entry to controlling outcome.

Because in today’s threat landscape:

Access is no longer the boundary of risk. Data usage is.

Resources

  • Accenture Cyber Intelligence Report: Insider Threat Escalation (2025)

  • What is PADS - The definition, category map, and how PADS completes the security model

  • Why PADS now - The forces driving post-authentication data theft

Final Thought

Every employee with access to sensitive data is a recruitment target.

Traditional security stops at authentication.

That’s exactly where the insider economy starts.

Data Protection

Apr 17, 2026

The Duty of Care Gap: Why Today's Breach Litigation Standard Was Built for Yesterday's Attack

Five class action lawsuits in seven days, each built on the same argument: that Mercor failed to implement adequate security measures to protect the more than 40,000 contractors whose data was stolen through a cascading supply chain attack that harvested legitimate credentials. The lawsuits are right that Mercor failed, and wrong about what that failure was. That misidentification matters for every organization that will read the Mercor settlement, implement its required controls, and believe they have met their obligation to protect the people whose data they hold.

They will not have. And the next breach will prove it.

The Standard of Care Argument the Lawsuits Are Building

To understand why the lawsuits are asking for the wrong fix, it is necessary to understand precisely what legal standard they are invoking and where that standard falls short.

Data breach class actions in the United States are predominantly built on negligence theory. To succeed on a negligence claim, a plaintiff must establish that the defendant owed a duty of care, that the defendant breached that duty, that the breach caused the plaintiff's harm, and that the plaintiff suffered cognizable damages.

The duty of care in data breach cases has been progressively defined by courts, regulators, and compliance frameworks over the past two decades. The FTC has enforcement authority over unfair or deceptive data security practices. The SEC has specific guidance for registered investment advisers and technology companies on data protection obligations. State attorneys general have brought actions under consumer protection statutes. Courts have increasingly recognized an implicit duty to protect sensitive personal data commensurate with the nature of the data held and the reasonable expectations of the people who provided it.

What has emerged from this body of law, regulation, and enforcement is a standard of care built almost entirely around access layer controls. The duty as courts and regulators currently understand it is a duty to prevent unauthorized access. Implement MFA. Segment networks. Monitor for anomalous activity. Rotate credentials. Conduct regular security audits. Encrypt data at rest and in transit.

The Mercor lawsuits invoke exactly this standard. The Gill complaint alleges failure to implement MFA, failure to limit access to PII, failure to monitor systems, failure to rotate passwords, and failure to encrypt sensitive data during storage and transmission. It is a textbook recitation of the access layer standard of care as it currently exists in data breach litigation doctrine.

And here is the legal problem that nobody in any of the five courtrooms is currently confronting:

That standard of care - even fully satisfied - would not have prevented the harm the plaintiffs suffered. Because the harm did not originate from a failure of access layer controls. It originated from a failure at the data layer. And the legal doctrine has not yet caught up to that distinction.

The Encryption Allegation Points at the Right Problem and Then Misses It

Among all the allegations in the Mercor complaints, the failure to encrypt sensitive data during storage and transmission is the one that comes closest to identifying the actual duty that was breached. It points toward the right problem. But the way it is framed - listed alongside MFA and password rotation as one item among several access layer improvements - reveals that the plaintiff's attorneys understand encryption as a storage security measure rather than as a fundamentally different category of data protection obligation.

That distinction is not semantic. It is the difference between a remedy that changes the outcome for 40,000 contractors and a remedy that produces a more expensive breach with identical consequences.

Encryption at rest means data sitting in a database or storage system is encrypted when it is not being accessed. Encryption in transit means data moving between systems is encrypted as it travels. Both are legitimate and important security controls, and both are widely recognized components of the current standard of care. But both are rendered ineffective the moment an attacker obtains valid credentials. When a user authenticates through the normal access pathway, the system decrypts the data for them; it cannot distinguish a legitimate user from an attacker holding stolen credentials, and the encryption that was supposed to protect the data dissolves on contact with a valid authenticated session.
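That dissolve-on-contact behavior can be shown in a few lines. The sketch below is a hypothetical model (the `AtRestStore` name and toy XOR transform are stand-ins for a real encrypted datastore): the storage layer holds the key and transparently decrypts for any session presenting a valid token - which is exactly what a stolen token is.

```python
import os

class AtRestStore:
    """Models encryption at rest: the storage layer holds the key and
    transparently decrypts for ANY session it has authenticated. It
    cannot tell a legitimate user from an attacker holding stolen
    credentials."""
    def __init__(self):
        self._key = os.urandom(32)
        self._rows = {}
        self._valid_tokens = set()

    def issue_token(self, user: str) -> str:
        token = os.urandom(16).hex()
        self._valid_tokens.add(token)
        return token

    def _xor(self, data: bytes) -> bytes:
        # Stand-in for real at-rest encryption (symmetric toy transform).
        return bytes(b ^ k for b, k in zip(data, (self._key * 64)[: len(data)]))

    def put(self, row_id: str, plaintext: bytes) -> None:
        self._rows[row_id] = self._xor(plaintext)  # encrypted on disk

    def get(self, row_id: str, token: str) -> bytes:
        if token not in self._valid_tokens:
            raise PermissionError("unauthenticated")
        # Valid token => transparent decryption. This is the dissolve point:
        # the control has no notion of WHO holds the token.
        return self._xor(self._rows[row_id])

store = AtRestStore()
store.put("user-1", b"SSN 000-00-0000")

alice_token = store.issue_token("alice")
# The attacker cracked nothing; they simply hold Alice's valid token.
stolen_token = alice_token

assert store.get("user-1", stolen_token) == b"SSN 000-00-0000"  # plaintext out
```

The data really was encrypted on disk the entire time; the control simply has no role to play once the access pathway is satisfied.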

This means that in the exact breach scenario the Mercor lawsuits describe - an attacker authenticating successfully with stolen credentials and accessing files through the authorized decryption pathway - both forms of encryption the complaint demands would have been fully satisfied and would have protected nothing. The files would still have been usable. The exfiltration would still have proceeded. The harm would still have flowed to 40,000 contractors.

The lawsuits are demanding a standard of care that has already been implicitly satisfied by the mechanism of the attack itself. And demanding it more rigorously produces no meaningful benefit to the people the litigation is supposed to protect.

The Duty That Was Actually Breached

If the current standard of care - even fully implemented - would not have changed the outcome, the legal question becomes what duty would have. What obligation, if discharged, would have rendered the breach consequence-free for the 40,000 contractors who are now plaintiffs?

The answer is precise and it points to a duty that existing doctrine has not yet adequately articulated: the duty to protect data at the file layer after authentication succeeds.

This is the Post Authentication Data Security duty. It is distinct from and more demanding than the access layer duty that current doctrine recognizes. It is not a duty to prevent unauthorized access - though that duty exists and matters. It is a duty to ensure that data remains protected even when access succeeds, whether that access was legitimately obtained or achieved through credential theft, supply chain compromise, insider misuse, or any other vector that produces a valid authenticated session.

The distinction maps directly onto the facts of the Mercor breach. The attackers authenticated successfully. Every access control performed exactly as designed. The breach did not occur at the access layer - it occurred at the data layer, where no protection existed to govern what happened to files after authentication succeeded.

Under the current standard of care doctrine, Mercor's failure is characterized as an access layer failure - insufficient MFA, inadequate monitoring, poor credential hygiene. Those characterizations may be legally valid but they are factually incomplete. The more precise and more legally significant failure was the absence of file layer protection that would have rendered the authenticated access consequence-free regardless of who held the credentials.

The duty to protect data at the file layer after authentication succeeds is the duty the Mercor lawsuits are gesturing toward but failing to name. And naming it precisely is the most important legal contribution the Mercor litigation could make to the evolution of data breach doctrine.

Why the Current Standard of Care Is Structurally Insufficient

The cybersecurity industry has known for years that stolen credentials are the single biggest vulnerability in the modern security stack. This is not a controversial position. Verizon's Data Breach Investigations Report has identified compromised credentials as the leading cause of breaches for nearly a decade running. IBM's Cost of a Data Breach Report consistently ranks stolen credentials as both the most common and most expensive attack vector. Every major security framework - NIST, ISO 27001, HITRUST - includes extensive controls around identity and access precisely because the industry understands that when credentials are compromised, everything built around them collapses.

The industry has known this for a long time - and it has continued to build and sell architectures fundamentally dependent on the integrity of those same credentials, producing a decade of breach reports confirming the problem while simultaneously recommending the same access layer controls those reports prove are insufficient.

That failure has a direct legal consequence. Courts and regulators developing the standard of care in data breach cases have done what courts and regulators reasonably do - they have looked to the security industry for guidance on what constitutes reasonable practice. The standard of care that has emerged reflects the industry consensus those courts and regulators found when they looked. A perimeter-centric, access-focused framework that treats credential integrity as the primary and in many cases sufficient protection for sensitive data.

The doctrine is not wrong on its own terms. It accurately reflects what the industry told courts and regulators was adequate. The problem is that the industry's own data has been contradicting that consensus for years - and the legal standard has had no mechanism to update itself in response. The result is a standard of care that courts apply in good faith, that organizations implement in good faith, and that leaves sensitive unstructured files fully exposed to the primary attack vector the industry itself has identified as the leading cause of breaches for nearly a decade.

That is not a gap in legal reasoning. It is a gap between legal doctrine and technical reality - and it is a gap that the Mercor breach has rendered impossible to ignore.

The Mercor breach is the most precise possible illustration of that gap. The attack chain began with a compromised GitHub Actions workflow in an open source vulnerability scanner. It harvested credentials through a malicious dependency executing in a CI/CD pipeline. It used those credentials to authenticate as legitimate users. It accessed and exfiltrated files that the authenticated session was authorized to access. Every step of that chain operated entirely within the parameters of a security architecture that meets the current standard of care.
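One small, concrete defense against the first link in that chain is refusing mutable references in CI workflows, so a retagged or hijacked action cannot silently replace the code a pipeline runs. The check below is an illustrative sketch of that policy, not an official GitHub tool; the function name and the workflow snippet are assumptions for the example.

```python
import re

# Flag workflow steps that reference a GitHub Action by a mutable tag
# (e.g. "@v4") instead of a pinned full-length commit SHA. Pinning
# narrows the supply chain window exploited by compromised actions.
USES_RE = re.compile(r"uses:\s*([\w./-]+)@([\w.-]+)")
SHA_RE = re.compile(r"^[0-9a-f]{40}$")

def unpinned_actions(workflow_yaml: str):
    """Return (action, ref) pairs not pinned to a full commit SHA."""
    return [(action, ref)
            for action, ref in USES_RE.findall(workflow_yaml)
            if not SHA_RE.match(ref)]

workflow = """
jobs:
  scan:
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@0b93645e9fea7318ecaed2b359559ac225c90a2b
"""
# Only the tag-pinned step is flagged; the SHA-pinned step passes.
assert unpinned_actions(workflow) == [("actions/checkout", "v4")]
```

A check like this belongs in the same conversation as file layer protection: it shrinks the credential-harvest surface upstream, while data layer controls limit the blast radius when the harvest succeeds anyway.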

The standard of care that the Mercor lawsuits are invoking - the standard that Mercor allegedly failed to meet - would not have detected or prevented any step of that chain after the initial credential harvest. Because the standard is designed around preventing unauthorized access and the attack succeeded by achieving authorized access with stolen credentials.

A standard of care that cannot address the primary attack vector in the industry's own breach data is not a standard that adequately defines the duty organizations owe to the people whose data they hold.

What the Evolved Standard of Care Looks Like

The legal evolution that the Mercor lawsuits should be driving - but are not yet articulating - is a standard of care that extends the duty of protection beyond the access layer to the data layer itself.

Under an evolved standard the duty is not satisfied by encrypting data at rest and in transit. Those controls protect data from passive interception and storage compromise. They do not protect data from authenticated access using stolen credentials. They do not protect files from exfiltration by a session that the system has recognized and authorized. They are necessary components of a complete security posture but they are not sufficient to discharge the duty of care owed to people whose most sensitive personal and professional information is held in unstructured files.

The evolved standard requires file layer protection - encryption that travels with the file itself, that governs usability independent of the access layer, that remains in force regardless of what credentials were used to obtain access, and that renders the file unusable to any recipient who cannot demonstrate, at the moment of access, that they are the authorized user in the authorized context for which access was intended.

This is Post Authentication Data Security applied as a legal duty rather than a security recommendation. It is the control that, had it been in place at Mercor, would have changed the outcome completely.

The attackers authenticated successfully. They accessed the files. They exfiltrated the files. And the files were ciphertext. Not because the authentication failed. Not because the access was detected and blocked. But because the files themselves were protected in a way that made the authenticated access consequence-free for every contractor whose data was taken.

Under an evolved standard of care that recognized this duty, Mercor's failure was not that it lacked adequate MFA or rotated passwords too infrequently. It was that it held 40,000 people's most sensitive data in unprotected files that were fully usable to anyone who obtained valid credentials - and in a world where credential theft through supply chain compromise is the industry's leading breach vector, holding sensitive data in unprotected files is itself the breach of duty.

The Delve Scandal Proves the Point

The Mercor breach did not happen in isolation. It happened simultaneously with the exposure of Delve Technologies - the GRC automation startup that had issued compliance certifications for LiteLLM, the open source AI proxy whose compromise enabled the credential harvest that reached Mercor. Those certifications were, according to the whistleblower who exposed the company, industrialized fiction. Pre-populated attestations. Certifications issued without independent verification of the controls they purported to certify.

The convergence of these two stories is not incidental. It is the most powerful possible illustration of the gap between certified compliance and actual data protection that sits at the heart of the standard of care problem.

Mercor had compliance certifications. LiteLLM had compliance certifications. Those certifications validated access controls, security processes, and organizational security practices against the current standard of care. And none of it protected a single file when the credentials were compromised.

This is the standard of care problem rendered in its starkest form. The compliance framework the lawsuits are demanding Mercor should have met is a framework designed to certify access controls. It has no mechanism for certifying what happens to files after access succeeds. It validates the door. It has nothing to say about the files behind the door when someone walks through with a stolen key.

The Delve scandal did not create this problem. It exposed it. The problem existed in every legitimately certified organization whose sensitive files are protected only by the access controls that a valid authenticated session bypasses by definition. The certification confirms the lock works. It says nothing about the readability of what is inside when the lock is opened with a stolen key.

Post Authentication Data Security provides the protection that certification cannot - because it is not a process control that can be attested to. It is a technical control that either renders files unusable or does not. There is no compliance theater version of file layer encryption. The files are either protected or they are not. And that binary self-executing reality is precisely what the evolved standard of care should require.

The Regulatory Safe Harbor Argument

The legal implications of file layer protection extend beyond negligence theory into the regulatory framework that governs breach notification and penalty - and here the argument for an evolved standard of care becomes most immediately actionable for organizations deciding right now how to protect the files they hold.

Most data breach notification laws are triggered by the exposure of usable, readable personal data. GDPR Article 34 explicitly states that notification to affected individuals is not required when data was encrypted and rendered unintelligible to unauthorized parties. HIPAA's Safe Harbor provision treats the breach of encrypted data as a non-reportable event. California's CCPA, New York's SHIELD Act, and most equivalent state frameworks include explicit encryption safe harbors that reduce or eliminate notification obligations when the stolen data was encrypted and remained ciphertext in the attacker's hands.
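A simplified sketch of how that safe-harbor logic branches is below. The field names are illustrative and the decision is deliberately reduced to two questions; real notification analysis is jurisdiction-specific and requires counsel, not a function.

```python
# Illustrative only: collapses GDPR Art. 34, the HIPAA Safe Harbor, and the
# CCPA/SHIELD encryption carve-outs into one simplified decision.

def notification_required(incident: dict) -> bool:
    """Notification duties generally trigger only on usable, readable data."""
    if not incident["personal_data_exposed"]:
        return False
    # Encryption safe harbor: data that was encrypted at the file layer, with
    # keys that were not also compromised, is treated as unintelligible.
    if incident["encrypted_at_file_layer"] and not incident["keys_compromised"]:
        return False
    return True

# The same exfiltration event under two protection postures:
plaintext_breach = {"personal_data_exposed": True,
                    "encrypted_at_file_layer": False,
                    "keys_compromised": False}
ciphertext_breach = {"personal_data_exposed": True,
                     "encrypted_at_file_layer": True,
                     "keys_compromised": False}

print(notification_required(plaintext_breach))   # readable data: duty triggers
print(notification_required(ciphertext_breach))  # safe harbor: no duty
```

The asymmetry is the whole argument: identical attack, identical exfiltration, but only one of the two incidents is a reportable breach.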

These safe harbors already exist in the regulatory framework. They already recognize that encrypted data that cannot be read does not produce the harm that breach notification laws are designed to address. They are the regulatory system's implicit acknowledgment of the principle that Post Authentication Data Security makes explicit - that what matters for data protection purposes is not whether the data was accessed but whether it was usable when it was taken.

The Mercor lawsuits are built on the premise that contractor data was compromised in readable form. Under the regulatory safe harbor framework that already exists, file layer encrypted data that is exfiltrated but unusable does not meet the threshold for mandatory notification. Had Mercor's files been protected this way, the breach event that generates the legal obligation would never have occurred. The five lawsuits would have no viable plaintiff, because the harm the plaintiffs allege - exposure of readable personal data to criminal actors who can exploit it - would not have materialized.

The safe harbor framework is the regulatory system pointing toward the evolved standard of care that litigation doctrine has not yet fully articulated. It already recognizes that encryption at the data layer changes the legal character of a breach. The doctrinal evolution required is to extend that recognition from a regulatory safe harbor into an affirmative duty - a standard of care that requires file layer protection not merely as a mitigating factor but as a component of the baseline obligation owed to people whose sensitive data is held in unstructured files.

What the Mercor Lawsuits Should Be Arguing

The most important legal contribution the Mercor litigation could make is to reframe the standard of care claim around the duty that was actually breached rather than the duty that existing doctrine recognizes.

The complaint should not lead with failure to implement MFA or failure to rotate passwords. Those are real failures and they belong in the complaint. But they are not the failure that made 40,000 contractors vulnerable to years of identity theft risk. The failure that did that was holding sensitive unstructured files - files containing Social Security numbers, identity documents, video recordings, and proprietary work product - without file layer protection that would have rendered those files unreadable to anyone who took them regardless of what credentials they used.

The encryption allegation in the current complaint points toward this duty but frames it as a storage security failure. The stronger and more legally significant framing is a failure of Post Authentication Data Security - a failure to protect files at the data layer in a way that maintains protection after authentication succeeds, independent of credential integrity, independent of access layer controls, independent of whether the session that accessed the files was legitimate or the product of supply chain credential theft.

That framing advances data breach doctrine in a meaningful direction. It creates a legal framework that actually maps onto the threat environment the industry's own data describes - a world in which credential compromise is the leading attack vector and access layer controls are necessary but insufficient to discharge the duty of care owed to the people whose data is at risk.

It also creates a remedy that would actually change the outcome. Not a settlement requiring better MFA and more rigorous password rotation that leaves 40,000 people's files just as usable the next time valid credentials are stolen. A standard that requires file layer protection - protection that holds when everything else fails, protection that renders credential theft consequence-free for the people whose data was taken.

The Conversation the Industry and the Legal Community Must Have Together

The Mercor lawsuits will settle. The settlement will specify controls. The controls will reflect the current standard of care. And the current standard of care will remain a decade behind the threat environment it is supposed to address.

Unless the legal community starts asking the question that the complaints are currently missing.

Not whether Mercor had adequate access controls. Whether Mercor discharged its duty to protect the files its contractors trusted it to hold - protect them in a way that maintains that protection after authentication succeeds, that holds when credentials are stolen, that renders the breach consequence-free for the people whose data is taken regardless of how the attacker obtained access.

That is the standard the threat environment demands. That is the standard the regulatory safe harbor framework is already gesturing toward. That is the standard the evolved duty of care in data breach litigation needs to articulate.

Post Authentication Data Security is not the standard of care today. It is the standard of care the Mercor breach demonstrates is necessary - and the standard that the legal community, the security industry, and the organizations that hold sensitive unstructured files have a shared obligation to establish before the next breach proves the same point at the same cost to the same people who had no choice but to trust that the files they handed over would be protected when it mattered most.

The five lawsuits filed in seven days are the most powerful available argument for why that conversation cannot wait.

FenixPyre is purpose-built to close the Post Authentication Data Security gap for unstructured data - ensuring that files remain protected at the data layer regardless of how access was obtained. In a world where supply chain attacks make credential theft an inevitability, file layer protection is not a security enhancement. It is the evolved standard of care the modern threat environment demands.


Data Protection

Mar 23, 2026

When Accenture Reports a 127% Surge in Dark Web Insider Recruitment, It’s Time to Rethink Data Security

Accenture’s Cyber Intelligence team recently published research that should alarm every CISO and board member: insider threats facilitated through dark web ecosystems are escalating at an unprecedented rate.

The numbers are stark:

  • 69% increase in insiders offering access (2025 vs. 2024)

  • 127% surge in hackers actively recruiting insiders (vs. 2022)

As Ryan Whelan, Accenture’s Global Head of Cyber Intelligence, explains:

“The insider economy is now principally designed to support early-stage intrusions, with criminal gangs increasingly relying on insiders to bypass cyber defenses.”

This is not theoretical.

Dark web posts explicitly name targets:

  • Coinbase

  • Binance

  • Kraken

  • Gemini

  • Accenture

  • Genpact

  • Spotify

  • Netflix

…and dozens more across financial services, consulting, and technology.

The going rate?

  • $3,000–$15,000 for initial access

  • $25,000 for 37 million cryptocurrency exchange records

The Real Implication of Accenture’s Findings

What this research makes clear - when taken to its logical conclusion - is this:

Managing insider risk requires more than governing access. It requires governing how data is used after access is granted.

This is the role of Post-Authentication Data Security (PADS).

PADS is a security layer that governs how data can be used after access is granted - enforcing policy at the moment of data interaction, not just at authentication.
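One way to picture that definition is a policy check whose decision point is each data interaction rather than the login. The sketch below is a hedged illustration, not a product API: the `Interaction` shape, the thresholds, and the rules are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    user: str
    action: str          # e.g. "view", "download", "bulk_export"
    classification: str  # e.g. "public", "internal", "sensitive"
    records: int

def pads_allow(ix: Interaction) -> bool:
    """Evaluated per interaction, even inside a fully authenticated session."""
    if ix.classification == "sensitive" and ix.action == "bulk_export":
        return False  # sensitive data is never bulk-exportable
    if ix.action == "download" and ix.records > 1000:
        return False  # high-volume pulls require step-up review
    return True

# The same authenticated user is allowed routine work...
assert pads_allow(Interaction("analyst", "view", "sensitive", 1))
# ...but blocked from the interaction an insider would monetize:
assert not pads_allow(Interaction("analyst", "bulk_export", "sensitive", 37_000_000))
```

Note what the gate does not consult: whether the session authenticated correctly. That is assumed; the question is only whether this use of this data is appropriate now.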

What Accenture’s Research Makes Clear

Accenture’s findings highlight a structural shift in threat dynamics:

  • Insiders provide initial access and credentials (30% of cases)

  • Perimeter defenses are bypassed entirely

  • Activity appears legitimate - because it is legitimate

  • Security controls defer by design once authentication succeeds

Whelan emphasizes lifecycle controls:

  • Stronger hiring and identity verification

  • Role separation and least privilege

  • Immediate access revocation during offboarding

  • Monitoring for pre-departure activity

  • Behavioral analytics and insider threat programs

These are essential.

They reduce the likelihood that insider threats emerge - or go undetected.

But they also reveal something deeper:

Even with these controls, an authenticated user can still use data in ways that are indistinguishable from legitimate activity.

Where Existing Controls End - and Why the Gap Exists

When a recruited insider acts, the cybersecurity stack behaves exactly as designed:

  • Identity is verified

  • Access is authorized

  • Permissions are correctly applied

  • Activity aligns with role expectations

  • Monitoring systems observe “normal” behavior

From the system’s perspective:

Everything is working correctly.

And that is precisely the problem.

Because “working correctly” still allows data to be:

  • Queried

  • Downloaded

  • Copied

  • Transferred

  • Sold

Nothing is bypassed.
Nothing is broken.
No control is technically evaded.

The attack succeeds because:

The security stack is architected to stop at authentication.

Whelan’s findings reinforce this reality:

Attackers are not defeating controls - they are operating within the boundary those controls were designed to trust.

The Architectural Limitation

Modern security is built to answer one question:

Who should have access?

It is not built to answer:

What should an authenticated user be allowed to do with data - right now, in this context?

This is why insider recruitment is so effective.

Existing controls - IAM, Zero Trust, SIEM, DLP, UEBA - are optimized for:

  • Preventing unauthorized access

  • Detecting abnormal behavior

They are not designed to stop:

Authorized, normal-looking misuse of data

This is not a failure of execution.

It is a limitation of architecture.

The Missing Layer: Post-Authentication Data Security (PADS)

Accenture’s framework focuses on managing insider risk across the employee lifecycle.

PADS extends that framework into the data interaction lifecycle.

If traditional controls answer:

  • Who should have access?

  • When should access be granted or revoked?

  • Is behavior anomalous?

PADS answers:

  • What should this user be able to do with the data they can access?

  • Is this specific use of data appropriate in this context?

This is not a replacement for insider threat programs.

It is the layer that ensures their effectiveness - even when insiders act within expected patterns.

Why This Matters in the Insider Economy

The insider recruitment model works because it exploits a core assumption:

Authenticated access implies legitimate use.

Accenture’s research shows attackers are deliberately targeting that assumption.

They recruit insiders because:

  • Access is already granted

  • Activity blends into normal workflows

  • Detection becomes significantly harder

PADS shifts control from access to data usage.

What Changes When Data Is Governed After Access

In a PADS-enabled environment:

  • Access still functions as designed

  • Authorized users still perform legitimate work

But:

  • Bulk extraction can be restricted or challenged

  • Sensitive data use can trigger contextual controls

  • Data remains protected - even outside the system

  • Actions - not just identities - are evaluated in real time

This means even if:

  • An insider is recruited

  • Credentials are valid

  • Behavior appears normal

The outcome changes.

Data is no longer freely extractable and usable simply because access was granted.

Aligning With Accenture’s Recommendations - And Extending Them

Whelan’s recommendations create a strong foundation:

  • Strengthen hiring and identity verification

  • Enforce role separation and least privilege

  • Revoke access immediately during offboarding

  • Monitor for behavioral anomalies

  • Expand insider threat intelligence

All of these aim to:

Prevent trusted individuals from using legitimate access to cause harm

But traditional implementations approach this indirectly.

They:

  • Limit access scope

  • Attempt to detect misuse

  • Reduce opportunity over time

They do not directly control:

What happens to data at the moment it is used

Where Traditional Controls Fall Short

| Objective | Traditional Approach | Limitation |
| --- | --- | --- |
| Prevent malicious insiders | Pre-employment screening | Cannot prevent post-hire recruitment |
| Limit exposure | RBAC / PoLP | Broad access still exists within roles |
| Stop access at risk | Offboarding | Reactive - after the decision point |
| Detect misuse | UEBA / monitoring | Requires deviation from “normal” |
| Identify targeting | Threat intelligence | Does not stop insider action |

These controls rely on:

  • Predicting intent

  • Detecting anomalies

  • Acting after signals appear

In insider recruitment scenarios:

Those signals may never appear in time.

How PADS Delivers the Outcome Directly

| Objective | PADS Capability | Outcome |
| --- | --- | --- |
| Limit insider impact | Data usability governance | Controls actions within valid access |
| Prevent extraction | Contextual policy enforcement | Evaluates intent at time of use |
| Reduce detection reliance | Real-time controls | No need for “abnormal” behavior |
| Mitigate insider risk | Persistent data protection | Exfiltrated data is unusable |
| Contain breaches | Outcome-based enforcement | Prevents usable data loss |

PADS operates where risk actually materializes:

The moment data is accessed and used

The Strategic Implication: An Architectural Fault Line

Accenture classifies insider threats as a medium-frequency, high-impact strategic risk.

But the deeper implication is this:

Insider risk is not an edge case - it is a consequence of how cybersecurity is designed.

Whelan’s findings expose a critical assumption:

Once a user is authenticated, risk is sufficiently managed.

That assumption no longer holds.

Modern architecture treats:

  • Authentication as the boundary of trust

Everything beyond that boundary is governed by:

  • Permissions

  • Expected behavior

  • Post-event detection

Not by real-time control of data itself.

This is the fault line.

The Bottom Line

Accenture’s findings don’t just highlight the rise of insider threats - they expose a fundamental flaw in modern cybersecurity:

The assumption that risk ends when access is granted.

In reality:

That is where risk begins.

The Verizon DBIR reinforces this:

  • 74% of breaches involve the human element - much of it occurring within legitimate, authenticated sessions

No controls are bypassed.
No systems are broken.

Attackers simply operate inside the boundary the stack was designed to trust.

Whelan’s recommendations strengthen identity and access.

But they also point to a deeper truth:

Without governing how data is used after access is granted, the problem remains unsolved.

That is what Post-Authentication Data Security (PADS) delivers.

It shifts security from:

  • Controlling entry

To:

  • Controlling outcome

Because in today’s threat landscape:

Access is no longer the boundary of risk. Data usage is.

Resources

  • Accenture Cyber Intelligence Report: Insider Threat Escalation (2025)

  • What is PADS - The definition, category map, and how PADS completes the security model

  • Why PADS now - The forces driving post-authentication data theft

Final Thought

Every employee with access to sensitive data is a recruitment target.

Traditional security stops at authentication.

That’s exactly where the insider economy starts.

© 2018-2026 FenixPyre Inc, All rights reserved