Security teams spent the last decade obsessing over human identities. Multi-factor authentication, zero trust, privileged access management -- all of it aimed squarely at the human in the chair. Meanwhile, something else was quietly multiplying in the dark. Service accounts. API tokens. Automation credentials. And now AI agents. The machine outnumbers the human. The machine has keys to everything. And in the vast majority of enterprises today, the machine is completely ungoverned.
The cybersecurity industry loves a dramatic headline. "Next wave." "Invisible threat." "Unmanaged danger." We get it. But every so often a warning deserves to be taken at face value, and the conversation around AI agent identity governance is one of them. This is not theoretical risk. The infrastructure decisions being made right now -- in sprint planning, in architecture reviews, in "let's just deploy this and see" moments -- are creating a class of ungoverned digital identities that will define the breach landscape for the next several years. This article cuts past the marketing language to explain exactly what is happening, why the scale is staggering, and what organizations have to do before their AI agents become the easiest door in the building to walk through.
The Numbers Are Worse Than You Think
Before we can talk about AI agents specifically, we need to establish the baseline catastrophe already in progress. The proliferation of non-human identities (NHIs) -- the broader category that includes service accounts, API keys, OAuth tokens, bots, and AI agents -- has outrun every governance framework that currently exists.
That 144-to-1 ratio deserves a pause. According to Entro Labs' H1 2025 NHI & Secrets Risk Report, which analyzed over 27 million non-human identities across real enterprise environments, the machine-to-human ratio surged 44% year-over-year in NHI count -- with the ratio itself climbing more than 56% from the 92-to-1 figure recorded just twelve months earlier. That is not organic growth. That is the fingerprint of rapid AI and automation adoption with zero corresponding governance investment. As Entro CEO Itzik Alvas stated at the report's release: "An identity gap of 144:1 isn't just a stat -- it's a seismic shift in how risk scales across modern environments. Agentic AI and automation are fueling a machine identity explosion, but most of these NHIs are invisible, ungoverned, and overprivileged. You can't secure what you can't see, and attackers know it."
James Maude, Field CTO at BeyondTrust, put the organizational reality bluntly when the report dropped: "Many organizations have been so focused on securing human identities that non-human identities and agentic AI have gotten away from them." Shane Barney, CISO at Keeper Security, connected it to a pattern that keeps repeating: "From SolarWinds to CodeCov to CircleCI, attackers have repeatedly exploited poorly managed service accounts, tokens, and secrets to gain deep, undetected access. Despite years of clear warnings and real-world consequences, many organizations still lack basic visibility and control over their non-human credentials. It's not that the risk is misunderstood -- it's that it's being deprioritized."
ManageEngine's 2026 Identity Security Outlook found organizations reporting machine-to-human ratios as high as 500-to-1 in some environments. Writing for CSO Online, cybersecurity engineer Anjali Gopinadhan Nair framed the dynamic bluntly:
"We locked the front door years ago. The back door has been open this whole time." — Anjali Gopinadhan Nair, CSO Online, February 2026
And the credentials themselves are not just numerous -- they are ancient. Entro's H1 2025 research found that 7.5% of machine identities in cloud environments are between five and ten years old. More than 2% of active secrets are over a decade old -- more than 20 times the share of decade-old NHIs -- including hardcoded values buried in legacy systems and configuration files that teams consider too risky to replace. In AWS environments alone, 62% of NHIs showed no activity in the past 90 days but retained their access permissions. Some of these accounts outlive the humans who created them. They keep running. They keep authenticating. They keep accumulating access nobody intended them to have.
What Identity Dark Matter Actually Means
The term "identity dark matter" was coined and popularized by Orchid Security, whose co-founder and CEO Roy Katmor has used it consistently across published research, analyst briefings, and industry commentary. Katmor defines it precisely: "App-local users, service identities, API keys and long-lived tokens, legacy directories, external domains, ad hoc auth paths, embedded credentials, and access that still works but isn't consistently governed." The analogy to astrophysics is deliberate: just as dark matter in the universe exerts real gravitational force while remaining entirely invisible to direct observation, identity dark matter describes the massive volume of digital identities that exist, operate, and act within enterprise environments while sitting completely outside any governance or monitoring system. Orchid's own analysis of enterprise environments found that as much as 46% of enterprise identity activity occurs entirely outside centralized IAM visibility.
Traditional Identity and Access Management (IAM) and Identity Governance and Administration (IGA) tools were designed around a simple model: a human employee gets an account, that account gets assigned to a person, and when that person leaves, the account gets revoked. That model broke down the moment DevOps teams started spinning up service accounts in bulk, and it became completely irrelevant the moment organizations started deploying AI agents.
"Organizations today run two identity systems. The ones they think they have and the ones that actually exist. The gap between the two is identity dark matter." — Roy Katmor, co-founder and CEO, Orchid Security
When Orchid launched its Identity Audit capability in February 2026, Katmor sharpened the warning further: "Identity dark matter is where attackers hide and where audits fail. As identity becomes the control plane for the enterprise, including its AI and cloud-native systems, complete visibility -- and thus control and governance -- is no longer optional. It is essential."
Research from Orchid Security identifies the primary types: Unmanaged Shadow Apps that operate outside corporate governance; Non-Human Identities including APIs, bots, and service accounts that act without oversight; Orphaned and Stale Accounts where 44% of organizations report over 1,000 orphaned accounts; and Agent-AI Entities -- the newest and fastest-growing category -- autonomous agents that perform tasks and grant access independently, breaking every traditional identity model we built.
The Cloud Security Alliance's State of Non-Human Identity Security survey, which gathered responses from 818 IT and security professionals, found that only 20% of organizations have formal processes for offboarding and revoking API keys. Even fewer have documented procedures for rotating them. That means when a project ends, when a vendor relationship terminates, when an AI agent is decommissioned or replaced -- the credentials it used keep sitting there. Live. Accessible. Waiting.
The 2025 State of Non-Human Identities and Secrets report adds another dimension: 92% of organizations are actively exposing their NHIs to third parties. Combined with the finding that 44% of tokens are being stored or transmitted across platforms like Teams, Jira, Confluence, code commits, and Slack, and you have a picture of credentials scattered across the organization like loose change in the couch cushions -- except the couch cushions are accessible to anyone with a phishing kit and ten minutes.
MCP: The New Attack Surface Nobody Fully Controls
To understand why AI agents represent a qualitative leap in the identity dark matter problem -- not just a quantitative one -- you need to understand the Model Context Protocol (MCP). Developed and open-sourced by Anthropic in November 2024, MCP is rapidly becoming the standard infrastructure layer that connects large language models to external tools, APIs, databases, and enterprise systems. Think of it as a universal connector: instead of building custom integrations for every data source an AI agent needs to access, MCP provides a single, standardized interface. The protocol has achieved remarkable adoption velocity -- Anthropic reports support from hundreds of integrations across the developer ecosystem.
The problem, as Checkmarx Zero's research documents, is that the integration capability that makes MCP powerful is precisely what makes it dangerous. Every connection between an AI agent and an MCP server expands the trust boundary of your environment. Those connections carry prompts, tokens, configurations, and executable schemas -- every one of which is a potential entry point for exploitation.
"MCP spec doesn't enforce audit, sandboxing, or verification. It's up to the enterprise to manage trust. Each server is a potential gateway to SaaS sprawl, misconfigured tools, or credential leaks." — Zenity Security, 2025
The Coalition for Secure AI (CoSAI) released a comprehensive whitepaper on January 27, 2026 cataloging nearly 40 distinct threat vectors across 12 core categories that emerge specifically from MCP-based deployments. Developed under OASIS Open by CoSAI's Workstream 4, with contributors from Google, IBM, Microsoft, NVIDIA, Zscaler, and Snyk, the framework addresses both novel and amplified attack vectors unique to agent-based systems. Among the most dangerous: tool poisoning, where malicious modification of tool metadata causes agents to invoke compromised tools; full schema poisoning, where attackers compromise entire tool schema definitions at a structural level; and indirect prompt injection, where malicious instructions are hidden in resources the agent consumes -- documentation files, metadata fields, configuration text -- rather than being directed at the model explicitly. The severity of these risks is underscored by the fact that Anthropic's own official mcp-server-git reference implementation shipped with exploitable vulnerabilities in its default configuration. As Cyata CEO Shahar Tal stated to Dark Reading: "If Anthropic gets it wrong -- in their official MCP reference implementation for what 'good' should look like -- then everyone can get MCP security wrong. That's where we are today."
This is not theoretical. Security researchers at Cyata Security confirmed three vulnerabilities in Anthropic's own Git MCP server -- CVE-2025-68143 (unrestricted git_init, CVSS v3: 8.8 / v4: 6.5), CVE-2025-68144 (argument injection in git_diff, CVSS v3: 8.1 / v4: 6.3), and CVE-2025-68145 (path validation bypass, CVSS v3: 7.1 / v4: 6.4) -- all exploitable via prompt injection. Reported to Anthropic in June 2025, accepted in September, and fully patched in version 2025.12.18 released December 2025. Crucially, none of these vulnerabilities requires direct system access. An attacker who can influence what an AI assistant reads -- a malicious README, a poisoned issue description, a compromised webpage -- can trigger the flaws without credentials. When chained together with the Filesystem MCP server, the combination achieves full remote code execution: attackers can compromise SSH keys, inject backdoors into git repositories, and gain persistent access to developer systems. As Cyata core team engineer Yarden Porat wrote: "Agentic systems break in unexpected ways when multiple components interact. Each MCP server might look safe in isolation, but combine two of them, Git and Filesystem in this case, and you get a toxic combination." Real-world MCP incidents catalogued in the CoSAI whitepaper have also included an Asana tenant isolation flaw affecting up to 1,000 enterprises, and WordPress plugins exposing more than 100,000 sites to privilege escalation through AI-mediated integrations. Source: Adversa AI, February 2026; SecurityWeek, January 2026; The Register, January 2026.
Authorization in most current MCP implementations defaults to all-or-nothing access. As AI-360 reported, if an agent can reach an MCP server, it frequently has access to every tool exposed behind it. One poorly scoped integration and an agent that should only be reading a calendar could have write permissions across an entire environment. The phrase security researchers use for an agent with over-permissioned financial access is "denial of wallet" -- but the real-world version involves far worse outcomes than a depleted budget.
Why AI Agents Make This an Entirely Different Problem
The identity dark matter problem predates AI agents. Stale service accounts, orphaned tokens, and ungoverned API keys have been accumulating for years. So why are AI agents a new category of concern rather than just more of the same?
Three reasons: speed, autonomy, and optimization.
Traditional unmanaged identities are passive. An orphaned service account sits there. It might be exploited if an attacker finds it, but it does not go looking for ways to use its own access. AI agents are the opposite. They are actively executing tasks, making decisions, chaining tool calls, and navigating environments -- and they do all of this at machine speed. A human attacker moving through your environment laterally leaves patterns that detection tools are tuned to find. An AI agent doing the same thing, legitimately, looks like normal operation.
The autonomy dimension matters because AI agents are not following a deterministic script. As the academic paper "Securing the Model Context Protocol" explains, the exact sequence of tools an agent will invoke cannot be determined in advance. A single task may cause an agent to read customer tickets, query production databases, and send email on behalf of a user -- crossing multiple security domains within a single interaction. Current security logging and analysis tools were built for deterministic systems with predictable call graphs. Agentic AI behavior breaks those assumptions entirely.
The optimization angle is perhaps the most alarming. Orchid Security's Roy Katmor identified this dynamic in his March 2026 Hacker News analysis: LLM-driven agents are built to find the path of least resistance to task completion. If an orphaned local admin account or an over-scoped token exists in the environment and "just works," the agent will use it. And reuse it. The agent is not making a value judgment about whether that credential should be used -- it is simply using the most efficient path available. This is not a bug. It is an emergent behavior of optimization, and it is exactly what makes unmanaged credentials so dangerous in an agentic environment.
"If an orphaned local admin or an over-scoped token 'just works,' the agent will use it, and reuse it." — Roy Katmor, CEO, Orchid Security, The Hacker News, March 2026
The Real-World Kill Chain Is Already Being Tested
This is not a future scenario. The credential compromise patterns that enable AI agent exploitation are already well-established in the threat landscape, and security researchers are documenting how they translate to agentic environments.
To make this concrete: here is what a plausible AI agent kill chain looks like in 2026, assembled from confirmed real-world components rather than speculation. An attacker compromises a developer's GitHub environment -- as happened in the Salesloft-Drift breach, where unauthorized access persisted for approximately four months (March through June 2025) undetected. Inside that environment, they discover the organization has deployed MCP-connected AI agents for code review and ticket triage. The agents are running against Anthropic's mcp-server-git, in a version prior to December 2025's patch (CVE-2025-68143/44/45). The attacker inserts a poisoned README into a repository the agent is configured to review. The agent reads it, the prompt injection fires, and via the chained Git and Filesystem MCP servers, the attacker achieves code execution without ever touching the organization's network directly. From there, the agent's existing OAuth tokens -- provisioned with broad access so it "doesn't fail" -- become the lateral movement vehicle. The agent has already authenticated to Salesforce, Google Workspace, and three internal APIs. The attacker rides those tokens exactly as UNC6395 rode Drift's OAuth tokens into 700 organizations' Salesforce environments. Every component of this chain was a real, documented incident in 2025.
In March 2025, attackers compromised the popular tj-actions/changed-files GitHub Action (CVE-2025-30066, CVSS 8.6) using a stolen personal access token belonging to @tj-actions-bot, a privileged automation account with write access to the repository. They injected malicious code that silently dumped CI/CD runner memory, exfiltrating secrets from workflow logs across more than 23,000 repositories. The attack was particularly stealthy: the malicious commit was disguised to appear as a legitimate Renovate bot dependency update, and was automatically merged per existing workflow configuration. Critically, no secrets were transmitted to external attacker infrastructure -- they were printed in the logs of affected repositories. But for public repositories, that was enough. According to Cybersecurity Tribe's reporting on Entro Labs research, this incident demonstrated precisely how vulnerable automated systems become when machine identity credentials are exposed -- and that was before agentic AI entered those same pipelines at scale.
The 2026 NHI Reality Report from the Cyber Strategy Institute projects that at least one major breach in 2026 will originate from a compromised AI-agent NHI. The report notes that non-human identities have already "decisively outnumbered humans in production environments and become the primary real perimeter for cloud, SaaS, and agentic AI systems." Detection improved during 2025. Remediation and revocation did not. Millions of leaked secrets remained valid for months or years, giving attackers the ability to operate using legitimate NHI credentials long after exposure was documented.
The August 2025 Salesloft-Drift breach, attributed by Google Threat Intelligence Group to a threat cluster designated UNC6395, is instructive as a template. Attackers first compromised Salesloft's GitHub environment between March and June 2025 -- an approximately four-month dwell time that went undetected, during which they downloaded repository content, added a guest user, and established workflows. They then used that foothold to access Drift's AWS environment and steal OAuth refresh tokens for customer integrations. From August 8 through August 18, they used those tokens to systematically query more than 700 downstream Salesforce environments, exfiltrating contacts, support case data, and -- critically -- plaintext credentials including AWS keys, Snowflake tokens, and passwords stored in CRM fields. The victims included Cloudflare, Palo Alto Networks, Zscaler, Proofpoint, and others. Google's own Workspace accounts were separately impacted via a Drift Email integration compromise on August 9, though that access was limited to accounts specifically configured to integrate with Drift. The attackers covered their tracks by deleting query jobs immediately after execution. Obsidian Security researchers noted that the blast radius was dramatically larger than previous direct-compromise incidents against Salesforce because the OAuth tokens provided lateral movement across the entire SaaS supply chain without triggering MFA checks. Google Threat Intelligence Group assessed UNC6395's primary intent as credential harvesting, though no definitive nation-state attribution has been officially confirmed. Transpose that pattern onto MCP-connected AI agents operating across multi-cloud environments and the math becomes uncomfortable quickly.
Entro's H1 2025 research found that 5.5% of AWS non-human identities hold full administrator privileges -- what researchers call "Super NHIs." Entro notes that since their data reflects organizations already prioritizing NHI security, "the real number is likely much higher across less mature environments." Additionally, 8.7% of AWS NHIs are overprivileged and idle, meaning they hold access to services they rarely or never actually use. A single exposed Super NHI token grants an attacker unrestricted access across the entire cloud environment. With AI agents frequently provisioned with broad access to enable autonomous task completion, the Super NHI risk in agentic deployments is structurally higher than in traditional automation. Source: Entro Security Labs, H1 2025 NHI Report.
What Organizations Actually Need to Do
The standard advice in this space -- "adopt zero trust," "implement least privilege," "do regular audits" -- is not wrong. It is just insufficient as a description of what the actual work looks like. Here is a more honest accounting of the operational requirements.
Treat Every Agent as a First-Class Identity From Day One
This sounds obvious. It is not being done. Every AI agent deployed in your environment needs to be registered, inventoried, and governed the same way a human employee's account is governed. That means a defined owner, a documented purpose, a scope of access that is the minimum required to accomplish the specific task, and a lifecycle that ends when the agent is decommissioned. Gartner researchers introduced the concept of "ownership mapping" -- full lineage tracking from an agent's creation to its deployment, tied to both the machine identity and a human accountable sponsor. If that human changes roles or leaves, the agent's access changes with them.
Kill Standing Privileges for Machine Identities
Long-lived credentials are the root cause behind the vast majority of NHI breaches. The goal needs to be their elimination, not their management. Replace permanent API keys with short-lived tokens that expire automatically. Implement just-in-time access that grants permissions for a specific task and revokes them immediately after completion. Automate credential rotation on a defined schedule. Entro's 2025 State of Non-Human Identities and Secrets research found that 71% of non-human identities are not rotated within recommended timeframes. Every day a credential sits unchanged is another day a compromised token could be in active use without detection.
Audit Your MCP Surface Before You Deploy Into Production
Bitdefender's research is direct on this: "The adoption of new AI technologies consistently outpaces the development of robust security practices and guardrails. MCP was designed for interoperability and functionality, not with security as a primary, built-in concern." Before any MCP server goes into production, implement mandatory code signing verification, use private package repositories with security scanning and approval workflows, and maintain a centralized inventory of all deployed MCP servers with automated discovery to flag shadow deployments. The CoSAI whitepaper's guidance on MCP security is publicly available and worth reading in full before deployment decisions are finalized.
Implement Behavioral Detection for NHIs, Not Just Posture Checks
Static risk assessments catch misconfigurations. They do not catch an AI agent that has been manipulated through prompt injection to operate outside its intended scope. Organizations implementing behavioral detection for NHI security report discovering active compromises that had operated undetected for months. Behavioral monitoring establishes baselines for how each non-human identity normally behaves -- what it accesses, when, from where, in what sequence -- and flags deviations in real time. For AI agents specifically, this requires new tooling since existing SIEM and SOAR platforms were built for deterministic systems.
Extend Governance Across Hybrid Environments Explicitly
Native platform controls and vendor safeguards do not extend beyond their own cloud or platform borders. This is documented. An AI agent that crosses from AWS to Azure to a SaaS application in a single task execution is operating in a governance gap for at least part of that journey. Without an independent oversight mechanism, those cross-cloud interactions are entirely ungoverned. The architecture for addressing this requires centralized MCP gateway deployment with unified policy enforcement -- a single point where authentication, provenance tracking, isolation, and policy enforcement are applied to every agent-tool interaction regardless of which cloud or system is involved.
Key Takeaways
- The scale of ungoverned machine identities is not a future problem: With NHIs outnumbering humans 144-to-1 and 97% carrying excessive privileges, the foundation for AI agent exploitation is already laid in the majority of enterprise environments.
- AI agents are not just more NHIs -- they are a new threat category: Their optimization for efficiency means they will naturally discover and exploit the path of least resistance through your environment, including every ungoverned credential in their path.
- MCP is powerful, widely adopted, and largely unsecured: The Model Context Protocol introduces a class of vulnerabilities -- tool poisoning, schema manipulation, indirect prompt injection, cross-agent context abuse -- that do not fit traditional threat models and are not addressed by existing security controls.
- Static credentials are the actual root cause: Long-lived API keys, tokens, and service account credentials are the primary fuel for NHI breaches. Eliminating them is not aspirational -- it is the baseline requirement for operating safely in an agentic environment.
- Governance needs to happen at agent creation, not after the breach: Retroactive cleanup of ungoverned identities is expensive, disruptive, and often incomplete. The organizations that establish first-class identity governance for AI agents from day one will spend far less time explaining to their boards what a forgotten service account made possible.
The cybersecurity industry has a well-established pattern: a new technology gets widely deployed, the security implications get documented, a wave of breaches provides empirical proof, and then governance frameworks catch up two to three years too late. We are at the documentation phase for AI agent identity governance right now. The breaches have not fully landed yet -- but the credentials that will enable them are already sitting in your environment, unrotated, overprivileged, and invisible to every tool you currently use to manage identity risk. The question is not whether this becomes a crisis. The question is whether your organization is on the right side of the timeline.
Sources: ManageEngine, 2026 Identity Security Outlook — The Hacker News / Orchid Security (Roy Katmor), March 2026 — The Hacker News / Orchid Security, January 2026 — Entro Security Labs NHI & Secrets Risk Report H1 2025 — Entro Security GlobeNewswire, July 2025 — Cloud Security Alliance, State of Non-Human Identity Security — Coalition for Secure AI, MCP Security Whitepaper, January 27, 2026 — OASIS Open, CoSAI MCP Security Release, January 27, 2026 — Checkmarx Zero, MCP Security Risks — Cyber Strategy Institute, 2026 NHI Reality Report — CSO Online, February 2026 — Adversa AI, February 2026 — Bitdefender Business Insights — SecurityWeek, CVE-2025-68143/44/45, January 2026 — The Register, Anthropic MCP Flaws, January 2026 — Dark Reading, Anthropic MCP Flaws, January 2026 — GitHub Advisory: CVE-2025-30066 (tj-actions) — Google GTIG, UNC6395 / Salesloft-Drift, August 2025 — SecurityWeek, Salesloft GitHub Compromise Timeline, September 2025 — Obsidian Security, UNC6395 / Salesloft-Drift Breach — The Hacker News, Salesloft Drift, September 2025 — NHIMG, 2025 State of NHIs and Secrets — MSSP Alert / Orchid Security Identity Dark Matter — Orchid Security / GlobeNewswire, Identity Audit Launch, February 2026 — SC World, NHI Ratio Coverage, July 2025