About FenixPyre

Our Mission Behind Post-Authentication Data Security

For decades, cybersecurity has focused on keeping attackers out.

We built walls. We hardened networks. We strengthened identities. We fortified endpoints. We added more tools, more layers, more alerts. And yet, attackers continue to walk away with the data. Not because our systems failed, but because they worked exactly as designed.

Attackers don’t break in anymore. They log in.

They phish credentials. They hijack sessions. They bypass MFA. They impersonate employees.

They inherit the trust our systems blindly grant. And once inside, every control we’ve built steps aside and waves them through.

Encryption dissolves. Files open. Data flows. DLP stays silent. IAM nods in approval. Zero Trust trusts the wrong session. Monitoring arrives too late.

This is the Post-Authentication Gap. The Verizon DBIR finds that 74% of breaches involve the human element, occurring within legitimate, authenticated sessions.

The blind spot that fuels catastrophic breaches.
The quiet flaw at the center of modern security architecture.
The assumption attackers depend on: “If a user is authenticated, they are trustworthy.”

This assumption is wrong. It always was. And it is costing organizations billions.

Our Principles

We refuse to accept a world where a stolen password is a complete, systemic failure.

We refuse to build another layer of tools that collapse the moment identity is compromised. We refuse to accept that the only time data is unprotected is the exact moment attackers go after it.

We believe in a new model of security, one that protects what matters most: the data itself. Because data does not need to trust a network. Data does not need to trust a device. Data does not need to trust identity. Data does NOT need to decrypt simply because someone typed the right password.

And so we declare a new era of cybersecurity, one that begins after authentication.

The Principles of Post-Authentication Data Security

1. Authentication is not authorization to decrypt data.

2. Data must remain protected even after access is granted.

3. Encryption must persist from creation to destruction.

4. Data must carry its own policies everywhere it travels.

5. Stolen data must be worthless.

6. Credential compromise must NOT equal data compromise.

7. Every access attempt must be continuously verified at the file layer.

Our Mission

To eliminate the idea that data is “safe” simply because a user authenticated.

To ensure stolen files are useless.

To end the era of identity-based data breaches.

To protect data at the layer where attacks actually succeed.

To redefine cybersecurity from just preventing intrusions to preventing data loss.

Our Team

A world-class team pioneering Post-Authentication Data Security to protect sensitive enterprise data wherever it moves.

50+ Patent Claims

Millions of files protected

Award Winning Platform

Cybersecurity Excellence Award

CRN Tech Innovator Finalist, Zero Trust Security

Finalist MIT Sloan CIO Summit

USAF ABMS

Latest Articles

Data Protection

Mar 23, 2026

When Accenture Reports a 127% Surge in Dark Web Insider Recruitment, It’s Time to Rethink Data Security

Accenture’s Cyber Intelligence team recently published research that should alarm every CISO and board member: insider threats facilitated through dark web ecosystems are escalating at an unprecedented rate.

The numbers are stark:

  • 69% increase in insiders offering access (2025 vs. 2024)

  • 127% surge in hackers actively recruiting insiders (vs. 2022)

As Ryan Whelan, Accenture’s Global Head of Cyber Intelligence, explains:

“The insider economy is now principally designed to support early-stage intrusions, with criminal gangs increasingly relying on insiders to bypass cyber defenses.”

This is not theoretical.

Dark web posts explicitly name targets:

  • Coinbase

  • Binance

  • Kraken

  • Gemini

  • Accenture

  • Genpact

  • Spotify

  • Netflix

…and dozens more across financial services, consulting, and technology.

The going rate?

  • $3,000–$15,000 for initial access

  • $25,000 for 37 million cryptocurrency exchange records

The Real Implication of Accenture’s Findings

What this research makes clear - when taken to its logical conclusion - is this:

Managing insider risk requires more than governing access. It requires governing how data is used after access is granted.

This is the role of Post-Authentication Data Security (PADS).

PADS is a security layer that governs how data can be used after access is granted - enforcing policy at the moment of data interaction, not just at authentication.

What Accenture’s Research Makes Clear

Accenture’s findings highlight a structural shift in threat dynamics:

  • Insiders provide initial access and credentials (30% of cases)

  • Perimeter defenses are bypassed entirely

  • Activity appears legitimate - because it is legitimate

  • Security controls defer by design once authentication succeeds

Whelan emphasizes lifecycle controls:

  • Stronger hiring and identity verification

  • Role separation and least privilege

  • Immediate access revocation during offboarding

  • Monitoring for pre-departure activity

  • Behavioral analytics and insider threat programs

These are essential.

They reduce the likelihood that insider threats emerge - or go undetected.

But they also reveal something deeper:

Even with these controls, an authenticated user can still use data in ways that are indistinguishable from legitimate activity.

Where Existing Controls End - and Why the Gap Exists

When a recruited insider acts, the cybersecurity stack behaves exactly as designed:

  • Identity is verified

  • Access is authorized

  • Permissions are correctly applied

  • Activity aligns with role expectations

  • Monitoring systems observe “normal” behavior

From the system’s perspective:

Everything is working correctly.

And that is precisely the problem.

Because “working correctly” still allows data to be:

  • Queried

  • Downloaded

  • Copied

  • Transferred

  • Sold

Nothing is bypassed.
Nothing is broken.
No control is technically evaded.

The attack succeeds because:

The security stack is architected to stop at authentication.

Whelan’s findings reinforce this reality:

Attackers are not defeating controls - they are operating within the boundary those controls were designed to trust.

The Architectural Limitation

Modern security is built to answer one question:

Who should have access?

It is not built to answer:

What should an authenticated user be allowed to do with data - right now, in this context?

This is why insider recruitment is so effective.

Existing controls - IAM, Zero Trust, SIEM, DLP, UEBA - are optimized for:

  • Preventing unauthorized access

  • Detecting abnormal behavior

They are not designed to stop:

Authorized, normal-looking misuse of data

This is not a failure of execution.

It is a limitation of architecture.

The Missing Layer: Post-Authentication Data Security (PADS)

Accenture’s framework focuses on managing insider risk across the employee lifecycle.

PADS extends that framework into the data interaction lifecycle.

If traditional controls answer:

  • Who should have access?

  • When should access be granted or revoked?

  • Is behavior anomalous?

PADS answers:

  • What should this user be able to do with the data they can access?

  • Is this specific use of data appropriate in this context?

This is not a replacement for insider threat programs.

It is the layer that ensures their effectiveness - even when insiders act within expected patterns.

Why This Matters in the Insider Economy

The insider recruitment model works because it exploits a core assumption:

Authenticated access implies legitimate use.

Accenture’s research shows attackers are deliberately targeting that assumption.

They recruit insiders because:

  • Access is already granted

  • Activity blends into normal workflows

  • Detection becomes significantly harder

PADS shifts control from access to data usage.

What Changes When Data Is Governed After Access

In a PADS-enabled environment:

  • Access still functions as designed

  • Authorized users still perform legitimate work

But:

  • Bulk extraction can be restricted or challenged

  • Sensitive data use can trigger contextual controls

  • Data remains protected - even outside the system

  • Actions - not just identities - are evaluated in real time

This means even if:

  • An insider is recruited

  • Credentials are valid

  • Behavior appears normal

The outcome changes.

Data is no longer freely extractable and usable simply because access was granted.
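
The contextual controls described above can be sketched in a few lines. This is an illustrative sketch only, under assumed names and thresholds — the class, fields, and limits are hypothetical for demonstration, not FenixPyre's actual policy engine:

```python
from dataclasses import dataclass

@dataclass
class AccessContext:
    """Context captured at the moment of a data interaction (post-authentication)."""
    user_role: str
    device_managed: bool
    files_requested: int
    destination: str  # "internal" or "external" -- illustrative categories

# Hypothetical threshold: any request above it counts as bulk extraction.
BULK_THRESHOLD = 50

def evaluate_usage(ctx: AccessContext) -> str:
    """Decide whether this specific use of data is appropriate in this context.

    Runs per interaction, after authentication has already succeeded.
    Returns "allow", "step_up" (require extra verification), or "deny".
    """
    if not ctx.device_managed:
        return "deny"      # valid credentials, but an ungoverned device
    if ctx.destination == "external":
        return "deny"      # data may not leave governed boundaries
    if ctx.files_requested > BULK_THRESHOLD:
        return "step_up"   # bulk extraction triggers a challenge, not silent approval
    return "allow"

# A recruited insider with valid credentials attempting a bulk export:
print(evaluate_usage(AccessContext("analyst", True, 500, "internal")))  # -> step_up
# The same user doing routine work is untouched:
print(evaluate_usage(AccessContext("analyst", True, 3, "internal")))    # -> allow
```

The point of the sketch is the placement of the check: it evaluates the action, not the login, so a valid session with normal-looking behavior still cannot bulk-extract silently.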

Aligning With Accenture’s Recommendations - And Extending Them

Whelan’s recommendations create a strong foundation:

  • Strengthen hiring and identity verification

  • Enforce role separation and least privilege

  • Revoke access immediately during offboarding

  • Monitor for behavioral anomalies

  • Expand insider threat intelligence

All of these aim to:

Prevent trusted individuals from using legitimate access to cause harm

But traditional implementations approach this indirectly.

They:

  • Limit access scope

  • Attempt to detect misuse

  • Reduce opportunity over time

They do not directly control:

What happens to data at the moment it is used

Where Traditional Controls Fall Short

Objective | Traditional Approach | Limitation
Prevent malicious insiders | Pre-employment screening | Cannot prevent post-hire recruitment
Limit exposure | RBAC / PoLP | Broad access still exists within roles
Stop access at risk | Offboarding | Reactive - after decision point
Detect misuse | UEBA / monitoring | Requires deviation from “normal”
Identify targeting | Threat intelligence | Does not stop insider action

These controls rely on:

  • Predicting intent

  • Detecting anomalies

  • Acting after signals appear

In insider recruitment scenarios:

Those signals may never appear in time.

How PADS Delivers the Outcome Directly

Objective | PADS Capability | Outcome
Limit insider impact | Data usability governance | Controls actions within valid access
Prevent extraction | Contextual policy enforcement | Evaluates intent at time of use
Reduce detection reliance | Real-time controls | No need for “abnormal” behavior
Mitigate insider risk | Persistent data protection | Exfiltrated data is unusable
Contain breaches | Outcome-based enforcement | Prevents usable data loss

PADS operates where risk actually materializes:

The moment data is accessed and used

The Strategic Implication: An Architectural Fault Line

Accenture classifies insider threats as a medium-frequency, high-impact strategic risk.

But the deeper implication is this:

Insider risk is not an edge case - it is a consequence of how cybersecurity is designed.

Whelan’s findings expose a critical assumption:

Once a user is authenticated, risk is sufficiently managed.

That assumption no longer holds.

Modern architecture treats:

  • Authentication as the boundary of trust

Everything beyond that boundary is governed by:

  • Permissions

  • Expected behavior

  • Post-event detection

Not by real-time control of data itself.

This is the fault line.

The Bottom Line

Accenture’s findings don’t just highlight the rise of insider threats - they expose a fundamental flaw in modern cybersecurity:

The assumption that risk ends when access is granted.

In reality:

That is where risk begins.

The Verizon DBIR reinforces this:

  • 74% of breaches involve the human element

  • Occurring within legitimate, authenticated sessions

No controls are bypassed.
No systems are broken.

Attackers simply operate inside the boundary the stack was designed to trust.

Whelan’s recommendations strengthen identity and access.

But they also point to a deeper truth:

Without governing how data is used after access is granted, the problem remains unsolved.

That is what Post-Authentication Data Security (PADS) delivers.

It shifts security from:

  • Controlling entry

To:

  • Controlling outcome

Because in today’s threat landscape:

Access is no longer the boundary of risk. Data usage is.

Resources

  • Accenture Cyber Intelligence Report: Insider Threat Escalation (2025)

  • What is PADS - The definition, category map, and how PADS completes the security model

  • Why PADS now - The forces driving post-authentication data theft

Final Thought

Every employee with access to sensitive data is a recruitment target.

Traditional security stops at authentication.

That’s exactly where the insider economy starts.

Data Protection

Mar 23, 2026

When IBM X-Force Says "Post-Auth is the New Perimeter," People Should Take Note

Ryan Anschutz, North America Leader for IBM X-Force Incident Response, recently published an article that deserves more attention than a typical LinkedIn post receives.

It started, as the best security lessons often do, with something completely mundane.

Ryan needed to export a list of event attendees. The UI had no export button. So, he opened browser developer tools, looked at what the application was doing behind the scenes, and scripted the authenticated API calls to extract everything he needed.

No exploits. No bypasses. No stolen credentials.

His conclusion: "The application worked exactly as designed. That's the part worth sitting with."

That sentence is the entire post-authentication data security (PADS) problem stated as plainly as it can be stated.

WHAT RYAN'S EXPORT TASK ACTUALLY DEMONSTRATES 

What Ryan described is not a vulnerability. It is not a misconfiguration. It is not a failure of any control. 

It is what happens when an authenticated session is trusted completely - when the backend extends full data usability to anyone holding a valid credential, with no evaluation of whether that trust should cover bulk extraction, rapid pagination, or automated API calls at a scale no human would produce manually.

The application's authentication worked. Its authorization worked. Its session management worked. Every control functioned exactly as designed.

And a complete dataset was extracted in minutes.

This is what the Verizon Data Breach Investigations Report is describing when it notes that 74% of breaches involve the human element. It is not that attackers are bypassing authentication. It is that they have learned to operate inside the trust that authentication grants, and once inside that trust, there is almost nothing designed to evaluate whether specific data should be usable at a specific moment, under specific conditions, at a specific volume.

As Ryan puts it: "Attackers don't care about your UI. They care about what the backend will trust."  

RYAN'S QUESTION IS THE RIGHT QUESTION 

Ryan's bottom line for IR teams is worth quoting directly: 

"The question is not, 'Did MFA work?' The real question is, 'What did the backend trust after MFA succeeded?' That is the perimeter now."

This reframe - from "did authentication succeed" to "what did the system trust after authentication succeeded" - is precisely the shift that Post-Authentication Data Security (PADS) represents as a security category.

Traditional security architecture is built to answer the first question. The foundational layers - firewalls, IAM, MFA, Zero Trust - are designed to evaluate whether a given identity or session should be granted access in the first place. They operate on the principle that authentication and authorization are the primary security boundaries.

DLP represents the industry's first major attempt to address what happens after authentication. It monitors data movement and attempts to prevent sensitive information from leaving the organization through unauthorized channels. This is critical and valuable.

But Ryan's GraphQL example exposes the limitation: DLP is designed to detect abnormal data movement, not to govern normal data use.

The session was appropriately granted. The API calls were legitimate. The data access was authorized. The pagination pattern, if throttled to human speed, would appear normal. No unauthorized egress channel was used, just standard API responses over HTTPS.

DLP's fundamental assumption is that if data access appears normal, it probably is normal.

This is exactly the assumption that Ryan's example breaks. An attacker who understands how the backend evaluates "normal" can operate entirely within those parameters while extracting complete datasets.

The actions that followed authentication were indistinguishable from legitimate use. And no control in the stack, including DLP, was designed to ask whether bulk data extraction should be permitted even when the session was valid and the behavior appeared normal.

His observation cuts to the core of the problem: "After authentication, everything becomes the real perimeter, and most defenses still aren't built around that truth."

DLP monitors the perimeter. But when the attacker operates inside what the system considers normal authenticated behavior, there is no perimeter event to detect. 

WHAT COULD HAVE CHANGED THE OUTCOME

Ryan identifies several controls that could have interrupted the extraction: 

• Session tokens bound to device or browser context

• Behavioral rate limiting that notices no human paginates this fast

• Authorization enforced at the API layer, not assumed via the UI

• Step-up authentication for bulk or sensitive data access

• Short session lifetimes with frequent token rotation

• API-level telemetry that shows actual query behavior, not just page views

These recommendations map directly to what PADS delivers as a category:

IBM X-Force Recommendation | PADS Capability | How It Changes the Outcome
Session tokens bound to device/browser context | Contextual session management | Sessions can't be replayed from different devices or environments - even with valid credentials
Behavioral rate limiting | Anomaly detection & policy enforcement | Automated extraction at scale triggers real-time intervention before data leaves
Authorization enforced at API layer, not assumed via UI | Data-layer access controls | Backend enforces what data can be accessed regardless of how the request arrives
Step-up authentication for bulk access | Dynamic risk-based authentication | High-volume data access requires additional verification even for authenticated users
Short session lifetimes with frequent token rotation | Session governance | Limits window of opportunity for credential replay or session hijacking
API-level telemetry showing actual query behavior | Data interaction visibility | Surfaces what's actually happening at the data layer, not just what the UI suggests

WHERE DETECTION ALONE FALLS SHORT

Ryan's recommendations represent the access-control and behavioral-detection responses to the post-authentication problem. They are valuable and necessary.

But his list implicitly identifies their shared limitation: they all depend on detecting that something unusual is happening. Rate limiting notices unusual pagination speed. Behavioral monitoring notices unusual query patterns. Step-up authentication notices unusual data volume.

What happens when the extraction isn't unusual - when an attacker paginates at human speed, extracts data gradually over days, and operates within the behavioral thresholds that monitoring tools consider normal?

This is the scenario that Post-Authentication Data Security addresses at a more fundamental level. Rather than detecting unusual behavior and interrupting it, PADS governs data usability at the data layer itself. The question is not "does this behavior look suspicious?" It is "should this data be usable, under these conditions, for this action, to this destination?"

In a PADS model, data remains cryptographically protected and is only made usable at the moment of legitimate use - meaning extraction alone no longer equals compromise.

When data is protected at the layer Ryan is describing - the layer where the backend decides what an authenticated session can actually do with the data it accesses - the extraction scenario changes fundamentally.

The attacker can script the API calls. They can walk the pagination. They can extract every file in the repository.

They just can't read any of it.
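
The idea that extraction alone no longer equals compromise can be shown with a toy sketch: what sits in the repository is ciphertext, and the key is released only when a policy check on the specific interaction passes. The keystream construction and function names below are for illustration only - not production cryptography, and not FenixPyre's actual design:

```python
import hashlib
import secrets

def keystream_xor(key, data):
    """Toy SHA-256 counter-mode keystream cipher (symmetric: the same call
    encrypts and decrypts). Illustration only -- not production crypto."""
    out = bytearray()
    for block in range((len(data) + 31) // 32):
        pad = hashlib.sha256(key + block.to_bytes(8, "big")).digest()
        chunk = data[block * 32:(block + 1) * 32]
        out.extend(b ^ p for b, p in zip(chunk, pad))
    return bytes(out)

def release_key(key, context_ok):
    """Hypothetical PADS-style gate: the key exists server-side, but is only
    released when the policy check on this specific interaction passes."""
    return key if context_ok else None

key = secrets.token_bytes(32)
document = b"Q3 customer list: 37M records"
ciphertext = keystream_xor(key, document)  # this is what sits in the repository

# The attacker scripts the API and extracts every byte -- but the policy check
# fails, so no key is released and the haul is unreadable ciphertext.
stolen_key = release_key(key, context_ok=False)
print(stolen_key is None)  # -> True

# A legitimate, in-context request gets the key and the file decrypts.
granted = release_key(key, context_ok=True)
print(keystream_xor(granted, ciphertext) == document)  # -> True
```

The design choice the sketch captures: the control point is key release at the moment of use, not detection of abnormal movement, so a complete copy of the repository is worthless without a passing policy check.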

THE BOTTOM LINE

Ryan's conclusion deserves to be repeated:

"The question is not, 'Did MFA work?' The real question is, 'What did the backend trust after MFA succeeded?' That is the perimeter now."

Every control you currently own is designed to answer the first question.

Almost none are designed to answer the second.

That gap between authentication and data protection is where 74% of breaches now operate.

Post-auth is the new perimeter. And as Ryan's article demonstrates, most defenses still aren't built around that truth.

Post-Authentication Data Security is the category that changes that.

RESOURCES

Ryan Anschutz's original article: https://www.ibm.com/think/x-force/post-auth-new-perimeter

What is PADS: The definition, the category map, and how PADS completes the security model existing tools leave unfinished.

Why PADS Now: The three forces that made post-authentication data theft the dominant threat.

Every tool you own stops at login. That's exactly where attackers start. 


Data Protection

Feb 17, 2026

Why Traditional DLP Cannot Stop Post-Authentication Data Theft

There is a dangerous oversimplification circulating in cybersecurity conversations: that Data Loss Prevention “doesn’t work.”

That claim is wrong.

Traditional DLP is not broken. It is not obsolete. And it is not the product of immature teams or poor deployment discipline. It was engineered for a different threat model, at a different control layer, under a different set of assumptions about how data is misused.

For more than a decade, DLP has played a meaningful role in enterprise security. It helped organizations locate sensitive data, apply classification-based policy, monitor how information moves through email, endpoints, and cloud services, and satisfy governance and compliance obligations. In many environments, it still provides operational and regulatory value.

And yet, despite mature DLP deployments, layered with IAM, Zero Trust, CASB, and cloud monitoring tools, organizations continue to suffer catastrophic data theft. In most of those incidents, the theft begins after the attacker authenticates successfully.

That is not a contradiction. It is an architectural boundary.

Post-Authentication Data Security exists because material risk now begins at a point where DLP, by design, cannot reliably prevent loss.

The Real Distinction Is Control Plane, Not Feature Depth

The difference between DLP and Post-Authentication Data Security is structural.

DLP observes and governs data movement. PADS governs data usability.

DLP is built to answer: Did sensitive information move somewhere it should not have?

PADS answers a more uncomfortable question: Given that access exists, should this data be usable or extractable right now?

That distinction matters because DLP must inspect data in order to govern it. Inspection requires decryption. By the time DLP evaluates content, the data is already usable inside the session.

PADS asserts control earlier. It enforces cryptographic protection at the data layer, even after authentication succeeds. Access does not automatically grant readability. Usability is conditional.

This is not a tuning difference. It is a control-plane difference.
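The ordering difference can be sketched with a deliberately toy cipher (an XOR keystream used for illustration only, not real cryptography, and the function names are hypothetical): a DLP-style control can only inspect content that is already plaintext, while a PADS-style control decides whether plaintext ever exists.

```python
import hashlib
from itertools import count

def keystream(key: bytes):
    # Toy keystream derived from the key; illustration only, NOT secure.
    for i in count():
        yield from hashlib.sha256(key + i.to_bytes(8, "big")).digest()

def xor(data: bytes, key: bytes) -> bytes:
    # XOR with the keystream; applying it twice round-trips the data.
    return bytes(b ^ k for b, k in zip(data, keystream(key)))

KEY = b"data-layer-key"
record = xor(b"SSN: 123-45-6789", KEY)   # what sits at rest: ciphertext

def dlp_inspect(plaintext: bytes) -> bool:
    # DLP pattern-matches content, so it needs the decrypted bytes first.
    return b"SSN" in plaintext

def pads_decrypt(ciphertext: bytes, policy_allows: bool):
    # PADS gates the decrypt itself; without policy there is no plaintext
    # for anything downstream (including DLP) to inspect or exfiltrate.
    return xor(ciphertext, KEY) if policy_allows else None

print(pads_decrypt(record, policy_allows=False))              # -> None
print(dlp_inspect(pads_decrypt(record, policy_allows=True)))  # -> True
```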

DLP’s Design Assumptions Made Sense at the Time

DLP was built around a rational premise: if we understand who the user is, what data they are interacting with, and where that data is going, we can stop misuse even after login.

That premise held when misuse looked abnormal. When exfiltration required obvious bulk transfer. When users were the primary actors and backend automation was limited. When sensitive data moved in discrete, observable ways.

Modern attack patterns quietly dismantled those assumptions.

Today, attackers operate inside legitimate workflows. They use valid credentials, including service accounts. They rely on native export features and SaaS APIs. They extract data gradually to avoid triggering thresholds. Their behavior mirrors routine business operations.

Under those conditions, DLP does not “miss” the attack. It simply operates where it was designed to operate: after data is decrypted and in motion.

The architecture did not anticipate a world where authentication itself became the dominant breach vector.

The Backend Is Where the Limits Become Clear

The boundary is most visible at the server and backend layer, where the most valuable data actually resides: file servers, databases, SaaS backends, object storage, APIs, and integration engines.

Even when deployed on servers, DLP still inspects content after it has been decrypted for an authenticated process. Applications receive plaintext. Queries return structured results. APIs deliver usable data.

At that layer, there may be no discrete “user action” to intercept. Extraction occurs through queries and automated processes. Activity appears operational, not interactive.

DLP becomes dependent on logs, heuristics, thresholds, and classification accuracy. It becomes reactive by necessity.

This is why even mature DLP programs tend to be weakest precisely where the organization’s crown jewels live.

Classification Is Both DLP’s Strength and Its Constraint

DLP depends on classification. Before it can enforce policy, it must know whether data is sensitive and how it is labeled.

That dependency introduces fragility in modern environments where data is created continuously, classified by insiders who may themselves be the perpetrators, recombined dynamically, generated by third parties, and returned through APIs without consistent labeling. Sensitive content may be embedded inside larger files. Labels may lag reality. Derived data may inherit no protection at all.

DLP cannot protect what it cannot reliably identify. That is not a tooling flaw. It is a structural dependency.

In a post-authentication attack, the adversary does not defeat classification. They exploit its gaps.

Post-Authentication Data Security removes classification as the gating dependency for protection. It does not eliminate classification. It removes it as a single point of failure. Protection attaches to the data cryptographically. Usability is evaluated at the moment of access, not assumed because a label was correct.

That shift closes a category of silent exposure that DLP cannot.
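The dependency order can be sketched in a few lines (hypothetical field and function names; a sketch of the structural point, not a product implementation): a classification-gated control protects nothing it fails to label, while a protect-by-default control uses the label only to choose how strict the read-time policy is.

```python
def dlp_protects(record: dict) -> bool:
    # Classification-gated: an unlabeled or mislabeled record gets nothing.
    return record.get("label") == "sensitive"

def pads_read_policy(record: dict) -> str:
    # Protect-by-default: every record is encrypted at rest; the label only
    # selects how demanding the decrypt-time policy is.
    return "step_up_required" if record.get("label") == "sensitive" else "policy_check"

# Derived data whose label lagged reality, e.g. a join of two exports:
derived = {"content": "joined export of two tables", "label": None}
print(dlp_protects(derived))      # -> False: silently unprotected
print(pads_read_policy(derived))  # -> policy_check: still gated at read time
```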

The Trust Assumption That Now Carries Material Risk

DLP, like IAM and Zero Trust, inherits a necessary operational assumption: if a user or service is authenticated and authorized, their actions are legitimate until proven otherwise.

That assumption allows systems to function. But in a threat landscape where credential compromise is routine, that assumption becomes the attacker’s leverage.

When credentials are stolen, identity is valid. Sessions are approved. Permissions are correct. Backend systems return plaintext. Encryption disengages because authentication succeeded.

DLP sees normal activity.

PADS does not eliminate trust. It decouples trust from data usability. Even when access exists, data remains encrypted unless policy explicitly authorizes its use under the current conditions.

That is a fundamentally different stance toward risk.

The Boundary Has Moved. Architecture Must Follow.

Traditional DLP did not fail. It reached the boundary it was designed to manage.

Security architectures long assumed that controlling access and observing movement after access was sufficient. That model held when misuse was rare and when exfiltration required obvious deviation from normal operations.

Today, attackers authenticate. They operate inside approved workflows. They extract data in ways that appear legitimate. In that environment, observing misuse after data is readable is not prevention. It is documentation.

Post-Authentication Data Security exists because material risk now begins precisely where traditional controls defer by design: after access is granted.

It does not replace DLP, IAM, or Zero Trust. It completes the model they leave unfinished.

The defining question is no longer whether you stopped the attacker from getting in.

It is whether, when access was misused, your data remained protected.

DLP can tell you what happened.

PADS determines whether it mattered.


Data Protection

Mar 23, 2026

When Accenture Reports a 127% Surge in Dark Web Insider Recruitment, It’s Time to Rethink Data Security

Accenture’s Cyber Intelligence team recently published research that should alarm every CISO and board member: insider threats facilitated through dark web ecosystems are escalating at an unprecedented rate.

The numbers are stark:

  • 69% increase in insiders offering access (2025 vs. 2024)

  • 127% surge in hackers actively recruiting insiders (vs. 2022)

As Ryan Whelan, Accenture’s Global Head of Cyber Intelligence, explains:

“The insider economy is now principally designed to support early-stage intrusions, with criminal gangs increasingly relying on insiders to bypass cyber defenses.”

This is not theoretical.

Dark web posts explicitly name targets:

  • Coinbase

  • Binance

  • Kraken

  • Gemini

  • Accenture

  • Genpact

  • Spotify

  • Netflix

…and dozens more across financial services, consulting, and technology.

The going rate?

  • $3,000–$15,000 for initial access

  • $25,000 for 37 million cryptocurrency exchange records

The Real Implication of Accenture’s Findings

What this research makes clear - when taken to its logical conclusion - is this:

Managing insider risk requires more than governing access. It requires governing how data is used after access is granted.

This is the role of Post-Authentication Data Security (PADS).

PADS is a security layer that governs how data can be used after access is granted - enforcing policy at the moment of data interaction, not just at authentication.

What Accenture’s Research Makes Clear

Accenture’s findings highlight a structural shift in threat dynamics:

  • Insiders provide initial access and credentials (30% of cases)

  • Perimeter defenses are bypassed entirely

  • Activity appears legitimate - because it is legitimate

  • Security controls defer by design once authentication succeeds

Whelan emphasizes lifecycle controls:

  • Stronger hiring and identity verification

  • Role separation and least privilege

  • Immediate access revocation during offboarding

  • Monitoring for pre-departure activity

  • Behavioral analytics and insider threat programs

These are essential.

They reduce the likelihood that insider threats emerge - or go undetected.

But they also reveal something deeper:

Even with these controls, an authenticated user can still use data in ways that are indistinguishable from legitimate activity.

Where Existing Controls End - and Why the Gap Exists

When a recruited insider acts, the cybersecurity stack behaves exactly as designed:

  • Identity is verified

  • Access is authorized

  • Permissions are correctly applied

  • Activity aligns with role expectations

  • Monitoring systems observe “normal” behavior

From the system’s perspective:

Everything is working correctly.

And that is precisely the problem.

Because “working correctly” still allows data to be:

  • Queried

  • Downloaded

  • Copied

  • Transferred

  • Sold

Nothing is bypassed.
Nothing is broken.
No control is technically evaded.

The attack succeeds because:

The security stack is architected to stop at authentication.

Whelan’s findings reinforce this reality:

Attackers are not defeating controls - they are operating within the boundary those controls were designed to trust.

The Architectural Limitation

Modern security is built to answer one question:

Who should have access?

It is not built to answer:

What should an authenticated user be allowed to do with data - right now, in this context?

This is why insider recruitment is so effective.

Existing controls - IAM, Zero Trust, SIEM, DLP, UEBA - are optimized for:

  • Preventing unauthorized access

  • Detecting abnormal behavior

They are not designed to stop:

Authorized, normal-looking misuse of data

This is not a failure of execution.

It is a limitation of architecture.

The Missing Layer: Post-Authentication Data Security (PADS)

Accenture’s framework focuses on managing insider risk across the employee lifecycle.

PADS extends that framework into the data interaction lifecycle.

If traditional controls answer:

  • Who should have access?

  • When should access be granted or revoked?

  • Is behavior anomalous?

PADS answers:

  • What should this user be able to do with the data they can access?

  • Is this specific use of data appropriate in this context?

This is not a replacement for insider threat programs.

It is the layer that ensures their effectiveness - even when insiders act within expected patterns.

Why This Matters in the Insider Economy

The insider recruitment model works because it exploits a core assumption:

Authenticated access implies legitimate use.

Accenture’s research shows attackers are deliberately targeting that assumption.

They recruit insiders because:

  • Access is already granted

  • Activity blends into normal workflows

  • Detection becomes significantly harder

PADS shifts the point of control from access to data usage.

What Changes When Data Is Governed After Access

In a PADS-enabled environment:

  • Access still functions as designed

  • Authorized users still perform legitimate work

But:

  • Bulk extraction can be restricted or challenged

  • Sensitive data use can trigger contextual controls

  • Data remains protected - even outside the system

  • Actions - not just identities - are evaluated in real time

This means even if:

  • An insider is recruited

  • Credentials are valid

  • Behavior appears normal

The outcome changes.

Data is no longer freely extractable and usable simply because access was granted.
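A minimal sketch of that shift (the threshold, states, and function names are hypothetical): bulk access within a valid session is challenged at the moment of use, rather than trusted because authentication succeeded.

```python
from dataclasses import dataclass

@dataclass
class SessionUsage:
    """Tracks per-session data interaction; thresholds are illustrative."""
    records_accessed: int = 0
    step_up_verified: bool = False

BULK_THRESHOLD = 500  # hypothetical policy: bulk use needs re-verification

def authorize_read(session: SessionUsage, n_records: int) -> str:
    session.records_accessed += n_records
    if session.records_accessed <= BULK_THRESHOLD:
        return "allow"                 # routine, role-appropriate use
    if not session.step_up_verified:
        return "challenge"             # valid credentials are not enough
    return "allow_with_audit"          # bulk use proceeds, but is recorded

insider = SessionUsage()
print(authorize_read(insider, 200))   # -> allow
print(authorize_read(insider, 400))   # -> challenge (600 total, no step-up)
insider.step_up_verified = True
print(authorize_read(insider, 400))   # -> allow_with_audit
```

The design choice is that the decision keys on the cumulative action, not on whether the identity looked anomalous.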

Aligning With Accenture’s Recommendations - And Extending Them

Whelan’s recommendations create a strong foundation:

  • Strengthen hiring and identity verification

  • Enforce role separation and least privilege

  • Revoke access immediately during offboarding

  • Monitor for behavioral anomalies

  • Expand insider threat intelligence

All of these aim to:

Prevent trusted individuals from using legitimate access to cause harm

But traditional implementations approach this indirectly.

They:

  • Limit access scope

  • Attempt to detect misuse

  • Reduce opportunity over time

They do not directly control:

What happens to data at the moment it is used

Where Traditional Controls Fall Short

Objective | Traditional Approach | Limitation
Prevent malicious insiders | Pre-employment screening | Cannot prevent post-hire recruitment
Limit exposure | RBAC / PoLP | Broad access still exists within roles
Stop access at risk | Offboarding | Reactive, after the decision point
Detect misuse | UEBA / monitoring | Requires deviation from "normal"
Identify targeting | Threat intelligence | Does not stop insider action

These controls rely on:

  • Predicting intent

  • Detecting anomalies

  • Acting after signals appear

In insider recruitment scenarios:

Those signals may never appear in time.

How PADS Delivers the Outcome Directly

Objective | PADS Capability | Outcome
Limit insider impact | Data usability governance | Controls actions within valid access
Prevent extraction | Contextual policy enforcement | Evaluates intent at time of use
Reduce detection reliance | Real-time controls | No need for "abnormal" behavior
Mitigate insider risk | Persistent data protection | Exfiltrated data is unusable
Contain breaches | Outcome-based enforcement | Prevents usable data loss

PADS operates where risk actually materializes:

The moment data is accessed and used

The Strategic Implication: An Architectural Fault Line

Accenture classifies insider threats as a medium-frequency, high-impact strategic risk.

But the deeper implication is this:

Insider risk is not an edge case - it is a consequence of how cybersecurity is designed.

Whelan’s findings expose a critical assumption:

Once a user is authenticated, risk is sufficiently managed.

That assumption no longer holds.

Modern architecture treats:

  • Authentication as the boundary of trust

Everything beyond that boundary is governed by:

  • Permissions

  • Expected behavior

  • Post-event detection

Not by real-time control of data itself.

This is the fault line.

The Bottom Line

Accenture’s findings don’t just highlight the rise of insider threats - they expose a fundamental flaw in modern cybersecurity:

The assumption that risk ends when access is granted.

In reality:

That is where risk begins.

The Verizon DBIR reinforces this:

  • 74% of breaches involve the human element

  • Occurring within legitimate, authenticated sessions

No controls are bypassed.
No systems are broken.

Attackers simply operate inside the boundary the stack was designed to trust.

Whelan’s recommendations strengthen identity and access.

But they also point to a deeper truth:

Without governing how data is used after access is granted, the problem remains unsolved.

That is what Post-Authentication Data Security (PADS) delivers.

It shifts security from:

  • Controlling entry

To:

  • Controlling outcome

Because in today’s threat landscape:

Access is no longer the boundary of risk. Data usage is.

Resources

  • Accenture Cyber Intelligence Report: Insider Threat Escalation (2025)

  • What is PADS - The definition, category map, and how PADS completes the security model

  • Why PADS now - The forces driving post-authentication data theft

Final Thought

Every employee with access to sensitive data is a recruitment target.

Traditional security stops at authentication.

That’s exactly where the insider economy starts.

Data Protection

Mar 23, 2026

When IBM X-Force Says "Post-Auth is the New Perimeter," People Should Take Note

Ryan Anschutz, North America Leader for IBM X-Force Incident Response, recently published an article that deserves more attention than a typical LinkedIn post receives.

It started, as the best security lessons often do, with something completely mundane.

Ryan needed to export a list of event attendees. The UI had no export button. So, he opened browser developer tools, looked at what the application was doing behind the scenes, and scripted the authenticated API calls to extract everything he needed.
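What Ryan did can be sketched in a few lines. This is a hypothetical illustration, not code from his article: the endpoint shape, parameter names, and record counts are invented, and the authenticated HTTP call is simulated by a plain function so the sketch is self-contained.

```python
# Hypothetical sketch of scripting an authenticated, paginated API.
# Endpoint shape, parameters, and data are illustrative assumptions,
# not details from Ryan's article.

def walk_pages(fetch, page_size=100):
    """Drain a paginated endpoint by repeatedly requesting the next page.

    `fetch(offset, limit)` stands in for an authenticated HTTP call such as
    GET /api/attendees?offset=N&limit=M with a valid session cookie attached.
    """
    offset = 0
    while True:
        batch = fetch(offset, page_size)
        if not batch:
            return
        yield from batch
        offset += len(batch)

# Simulated backend: 250 records sitting behind a valid session.
records = [{"id": i} for i in range(250)]
fake_fetch = lambda off, lim: records[off:off + lim]

extracted = list(walk_pages(fake_fetch))
print(len(extracted))  # 250 - the complete dataset, no exploit required
```

Every request in that loop is one the backend considers legitimate; the only thing distinguishing it from normal use is volume and speed.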

No exploits. No bypasses. No stolen credentials.

His conclusion: "The application worked exactly as designed. That's the part worth sitting with."

That sentence is the entire post-authentication data security (PADS) problem stated as plainly as it can be stated.

WHAT RYAN'S EXPORT TASK ACTUALLY DEMONSTRATES 

What Ryan described is not a vulnerability. It is not a misconfiguration. It is not a failure of any control. 

It is what happens when an authenticated session is trusted completely. When the backend extends full data usability to anyone holding a valid credential, with no evaluation of whether that trust should extend to bulk extraction, rapid pagination, or automated API calls at a scale no human would produce manually.

The application's authentication worked. Its authorization worked. Its session management worked. Every control functioned exactly as designed.

And a complete dataset was extracted in minutes.

This is what the Verizon Data Breach Investigations Report is describing when it notes that 74% of breaches involve the human element. It is not that attackers are bypassing authentication. It is that they have learned to operate inside the trust that authentication grants, and once inside that trust, there is almost nothing designed to evaluate whether specific data should be usable at a specific moment, under specific conditions, at a specific volume.

As Ryan puts it: "Attackers don't care about your UI. They care about what the backend will trust."  

RYAN'S QUESTION IS THE RIGHT QUESTION 

Ryan's bottom line for IR teams is worth quoting directly: 

"The question is not, 'Did MFA work?' The real question is, 'What did the backend trust after MFA succeeded?' That is the perimeter now."

This reframe, from "did authentication succeed" to "what did the system trust after authentication succeeded," is precisely the shift that Post-Authentication Data Security (PADS) represents as a security category.

Traditional security architecture is built to answer the first question. The foundational layers (firewalls, IAM, MFA, Zero Trust) are designed to evaluate whether a given identity or session should be granted access in the first place. They operate on the principle that authentication and authorization are the primary security boundaries.

DLP represents the industry's first major attempt to address what happens after authentication. It monitors data movement and attempts to prevent sensitive information from leaving the organization through unauthorized channels. This is critical and valuable.

But Ryan's GraphQL example exposes the limitation: DLP is designed to detect abnormal data movement, not to govern normal data use.

The session was appropriately granted. The API calls were legitimate. The data access was authorized. The pagination pattern, if throttled to human speed, would appear normal. No unauthorized egress channel was used, just standard API responses over HTTPS.

DLP's fundamental assumption is that if data access appears normal, it probably is normal.

This is exactly the assumption that Ryan's example breaks. An attacker who understands how the backend evaluates "normal" can operate entirely within those parameters while extracting complete datasets.

The actions that followed authentication were indistinguishable from legitimate use. And no control in the stack, including DLP, was designed to ask whether bulk data extraction should be permitted even when the session was valid and the behavior appeared normal.

His observation cuts to the core of the problem: "After authentication, everything becomes the real perimeter, and most defenses still aren't built around that truth."

DLP monitors the perimeter. But when the attacker operates inside what the system considers normal authenticated behavior, there is no perimeter event to detect. 

WHAT COULD HAVE CHANGED THE OUTCOME

Ryan identifies several controls that could have interrupted the extraction: 

• Session tokens bound to device or browser context

• Behavioral rate limiting that notices no human paginates this fast

• Authorization enforced at the API layer, not assumed via the UI

• Step-up authentication for bulk or sensitive data access

• Short session lifetimes with frequent token rotation

• API-level telemetry that shows actual query behavior, not just page views
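To make one of these concrete, here is a minimal sketch of behavioral rate limiting, the control that notices no human paginates this fast. The sliding window, thresholds, and class name are illustrative assumptions, not a specific product's implementation.

```python
# Minimal sketch of behavioral rate limiting for pagination.
# Thresholds and window size are illustrative assumptions.
from collections import deque

class PaginationRateLimiter:
    def __init__(self, max_pages=10, window_seconds=60.0):
        self.max_pages = max_pages
        self.window = window_seconds
        self.events = deque()  # timestamps of recent page requests

    def allow(self, now):
        # Drop requests that have aged out of the sliding window.
        while self.events and now - self.events[0] > self.window:
            self.events.popleft()
        if len(self.events) >= self.max_pages:
            return False  # faster than any human: block or step up auth
        self.events.append(now)
        return True

limiter = PaginationRateLimiter(max_pages=10, window_seconds=60.0)
# A script requesting 50 pages in 5 seconds trips the limit at page 11.
decisions = [limiter.allow(now=i * 0.1) for i in range(50)]
print(decisions.count(True))  # 10
print(decisions[10])          # False
```

A human clicking "next page" a few times a minute never hits the threshold; the scripted extraction from the earlier scenario is stopped within the first second.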

These recommendations map directly to what PADS delivers as a category:



IBM X-Force Recommendation | PADS Capability | How It Changes the Outcome

Session tokens bound to device/browser context | Contextual session management | Sessions can't be replayed from different devices or environments - even with valid credentials

Behavioral rate limiting | Anomaly detection & policy enforcement | Automated extraction at scale triggers real-time intervention before data leaves

Authorization enforced at API layer, not assumed via UI | Data-layer access controls | Backend enforces what data can be accessed regardless of how the request arrives

Step-up authentication for bulk access | Dynamic risk-based authentication | High-volume data access requires additional verification even for authenticated users

Short session lifetimes with frequent token rotation | Session governance | Limits window of opportunity for credential replay or session hijacking

API-level telemetry showing actual query behavior | Data interaction visibility | Surfaces what's actually happening at the data layer, not just what the UI suggests

WHERE DETECTION ALONE FALLS SHORT

Ryan's recommendations represent the access-control and behavioral-detection responses to the post-authentication problem. They are valuable and necessary.

But his list implicitly identifies their shared limitation: they all depend on detecting that something unusual is happening. Rate limiting notices unusual pagination speed. Behavioral monitoring notices unusual query patterns. Step-up authentication notices unusual data volume.

What happens when the extraction isn't unusual? When an attacker paginates at human speed, extracts data gradually over days, and operates within the behavioral thresholds that monitoring tools consider normal?

This is the scenario that Post-Authentication Data Security addresses at a more fundamental level. Rather than detecting unusual behavior and interrupting it, PADS governs data usability at the data layer itself. The question is not "does this behavior look suspicious?" It is "should this data be usable, under these conditions, for this action, to this destination?"

In a PADS model, data remains cryptographically protected and is only made usable at the moment of legitimate use - meaning extraction alone no longer equals compromise.

When data is protected at the layer Ryan is describing, the layer where the backend decides what an authenticated session can actually do with the data it accesses, the extraction scenario changes fundamentally.

The attacker can script the API calls. They can walk the pagination. They can extract every file in the repository.

They just can't read any of it.
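The idea can be sketched as policy-gated decryption: content stays encrypted, and it is only made readable when a usage-time policy check passes. Everything here is illustrative, the XOR keystream is a toy stand-in for real authenticated encryption, and the policy fields are invented assumptions, not FenixPyre's actual model.

```python
# Conceptual sketch of usage-time data governance: decryption happens only
# when a policy check on the requested action passes. The keystream cipher
# is a toy stand-in for real cryptography; policy fields are assumptions.
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """XOR data against a SHA-256 counter keystream (toy, symmetric)."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

def usable(request) -> bool:
    # Not "is the session valid?" but "should this data be usable for
    # this action, at this volume, right now?"
    return request["action"] == "view" and request["records"] <= 100

KEY = b"held-by-the-policy-service"  # never handed to the client
ciphertext = keystream_xor(KEY, b"complete attendee dataset")

bulk_export = {"action": "export", "records": 25_000}
plaintext = keystream_xor(KEY, ciphertext) if usable(bulk_export) else None
print(plaintext)  # None - the attacker holds bytes they cannot read
```

The extraction still happens, but what leaves the backend is ciphertext; without a policy decision in their favor, the attacker's complete dataset is noise.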

THE BOTTOM LINE

Ryan's conclusion deserves to be repeated:

"The question is not, 'Did MFA work?' The real question is, 'What did the backend trust after MFA succeeded?' That is the perimeter now."

Every control you currently own is designed to answer the first question.

Almost none are designed to answer the second.

That gap between authentication and data protection is where 74% of breaches now operate.

Post-auth is the new perimeter. And as Ryan's article demonstrates, most defenses still aren't built around that truth.

Post-Authentication Data Security is the category that changes that.

RESOURCES

Ryan Anschutz's original article: https://www.ibm.com/think/x-force/post-auth-new-perimeter

What is PADS - The definition, the category map, and how PADS completes the security model existing tools leave unfinished.

Why PADS Now - The three forces that made post-authentication data theft the dominant threat.

Every tool you own stops at login. That's exactly where attackers start.

See how FenixPyre supports your Data Governance program

© 2018-2026 FenixPyre Inc, All rights reserved