Identity Governance Blog

Non-Human Identities and AI Agents: The Governance Blind Spot

Blog Summary

Non-human identities now outnumber human users in most enterprises by more than 40:1. They hold real privilege, mostly operate without lifecycle governance, and sit almost entirely outside the identity programs organizations have spent years building. AI agents are entering production with the same governance gaps, but unlike a service account, an agent makes decisions. This is the first post in a three-part series that moves from understanding the governance gap, to closing it for non-human identities, to extending governance to AI agents.

In August 2025, attackers used compromised OAuth tokens associated with the Salesloft Drift third-party application to systematically access Salesforce environments across hundreds of customer organizations (Google Threat Intelligence Group, 2025). No passwords were stolen because none were needed. The compromised tokens already carried the authorization. The access those tokens held persisted silently, operated at scale, and sat entirely outside the review processes used to govern human users.

This was not a failure of authentication, but a failure of governance. The tokens were valid and the access was authorized, but the access path operated outside the ownership, lifecycle, and review processes organizations use to govern human users. This is the core blind spot in how most organizations govern non-human identities.

This breach will not be the last of its kind. Non-human identities (service accounts, workload credentials, API keys, OAuth tokens) outnumber human users in most enterprises by more than 40 to 1 (IBM, 2025). They hold real privilege, many with direct access to production systems and sensitive data. Yet most governance programs were never built to see them. AI agents are now entering production with the same governance blind spots. Unlike a service account, an agent makes decisions, not just transactions. It can interpret context and adapt its behavior at runtime, which makes an already serious governance problem significantly harder.

 

The Governance Model No Longer Fits

Traditional identity governance was built for people. It asks who has access to what, whether that access fits their role, and whether it can be proven. It depends on HR records, managers, and lifecycle events to answer those questions. Non-human identities and AI agents are not governed by any of those inputs. They generate no joiner event when created, no mover event when their scope changes, and no leaver event when they are no longer needed. Without those triggers, their credentials persist indefinitely and their access accumulates silently.

For non-human identities, the question is not what can they access, but what can they do. Service accounts, API keys, and workload credentials are automated but predictable. They perform the same operation, against the same system, every time. The risk is real, but it is bounded. Governance means ensuring every NHI has a declared purpose, an accountable owner, and access that matches what it actually needs to do.

For AI agents, the governance question shifts from static access to accountable action: what are they doing, what have they done, and can the organization prove it was authorized? Agents make decisions. They select tools, delegate to other agents, and use service accounts, API credentials, and OAuth tokens to execute their work. When left unchecked, they access data in ways their human sponsors never explicitly approved. They adapt, and they operate at machine speed.

None of these identity types operate in isolation. Humans rely on NHI to connect the systems they use. Agents rely on NHI to execute the tasks they are given. Agents also act on behalf of humans, inheriting their authority but not their accountability, and creating access paths no one explicitly approved and no one is watching. Governing one without the others leaves your governance program incomplete.

 

Where Governance Breaks Down

Governance controls such as ownership, certification, and least privilege still matter. What breaks down is the model used to trigger and sustain those controls when the identity is not a person. Three gaps explain why organizations remain exposed.

  1. The inventory gap: Most organizations cannot tell you how many non-human identities they have. Cloud platforms generate managed identities with every new workload. Developers create service principals outside any central registry. Vendors deploy service accounts as part of standard installations. The result is a population of privileged identities that no one has inventoried or classified. Without knowing what each identity is, what it can reach, and what risk it carries, governance has nowhere to start.
  2. The ownership gap: Even where non-human identities are known, accountability is often absent. The engineer who created a service account may have left. The application it supported may have been retired. The team that understood its purpose may have been reorganized. In human identity governance, a manager, a business role, or an HR event provides a natural point of accountability. Non-human identities rarely have that. When an auditor asks why an identity still exists, what it is for, and who approved its access, too many organizations cannot answer.
  3. The lifecycle gap: Human identity governance works because access decisions are tied to lifecycle events. That activation model does not translate to non-human identities. A service account does not get reviewed because someone changed jobs. An API credential does not get removed because a project quietly ended. Over time, privileges accumulate, credentials age, and identities remain active long after their original purpose has disappeared.
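The three gaps can be made concrete with a triage pass over discovered identities: anything absent from the registry is the inventory gap, anything without an owner is the ownership gap, and anything unused past a threshold is the lifecycle gap. A minimal sketch, with toy data; the field names and thresholds are assumptions, and real input would come from cloud provider APIs.

```python
from datetime import datetime, timedelta

# Toy inventory rows as a discovery scan might return them (fields are illustrative)
discovered = [
    {"id": "svc-billing", "owner": "platform-team", "last_used": "2026-03-30"},
    {"id": "svc-legacy-export", "owner": None, "last_used": "2024-11-02"},
    {"id": "sp-ci-deploy", "owner": "devops", "last_used": "2026-04-01"},
]
registry = {"svc-billing", "sp-ci-deploy"}  # identities with a declared purpose on file

def triage(identities, registry, now, stale_after_days=180):
    """Bucket discovered identities into the three governance gaps."""
    gaps = {"uninventoried": [], "orphaned": [], "stale": []}
    for ident in identities:
        if ident["id"] not in registry:            # inventory gap
            gaps["uninventoried"].append(ident["id"])
        if ident["owner"] is None:                 # ownership gap
            gaps["orphaned"].append(ident["id"])
        last = datetime.strptime(ident["last_used"], "%Y-%m-%d")
        if now - last > timedelta(days=stale_after_days):  # lifecycle gap
            gaps["stale"].append(ident["id"])
    return gaps
```

Here the retired export account lands in all three buckets: no one registered it, no one owns it, and nothing has used it for over a year, yet its credential is still valid.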

 

AI Changes the Equation

Non-human identities are numerous and often over-privileged, but their behavior is predictable. A compromised service account can only do what it is configured to do. The risk is real, but it is bounded.

AI agents change that calculation. Rather than simply holding access, they interpret context, select tools, chain actions together, and delegate work in pursuit of an objective. When the governance model is not built to constrain that behavior, the risk extends to what the agent does with that access at runtime, whether those actions stay within its intended purpose, and whether anyone can reconstruct them afterward.
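Constraining agent behavior at runtime and reconstructing it afterward implies two controls at the point of action: a purpose-scoped allow-list checked before each tool call, and a structured log entry written regardless of the outcome. A minimal sketch under those assumptions; all identifiers are hypothetical, and a production system would sign entries and write them to tamper-evident storage.

```python
import json
from datetime import datetime, timezone

def record_agent_action(agent_id, on_behalf_of, tool, credential_id,
                        allowed_tools, log):
    """Authorize one agent action and log it so it can be reconstructed later."""
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id,
        "on_behalf_of": on_behalf_of,          # the human authority the agent inherits
        "tool": tool,
        "credential": credential_id,           # which NHI the agent used to act
        "authorized": tool in allowed_tools,   # purpose-scoped allow-list check
    }
    log.append(json.dumps(entry))              # denied actions are logged too
    return entry["authorized"]
```

Logging denials as well as approvals is the point: when an agent drifts outside its intended purpose, the record of what it attempted, on whose behalf, and with which credential is what makes the behavior reconstructible.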

Organizations that do not close the NHI governance gap now will face it again in the context of agents, under worse conditions and with less time to act.

 

Regulation Is Catching Up

NIS2, DORA, and the EU AI Act are converging on a shared expectation: organizations must be able to demonstrate control over every identity holding privileged access to critical systems and sensitive data, not just the identities that belong to people. DORA already applies and requires financial entities to maintain formal records of their ICT third-party dependencies, including the access paths those relationships introduce. NIS2 is in force across the EU and requires entities to demonstrate cybersecurity risk management and access control across critical environments, regardless of whether the identity is human. The EU AI Act adds a further obligation as AI moves into production, requiring accountability, traceability, and evidence of control over autonomous systems.

NHI governance is appearing with increasing regularity in IGA-related RFPs and audit conversations because the regulatory question is becoming the same everywhere: can you prove who, or what, had access, why it had it, and whether that access was governed? Organizations that cannot demonstrate coverage are accumulating compliance risk before a regulator or a customer ever asks.

 

The Cost of Waiting

The cost of ungoverned non-human identities surfaces in breach forensics, failed audits, compliance findings, and regulatory enforcement actions. What those events have in common is a governance program that was built for people, applied to systems, and found wanting at the moment it mattered most. The organizations closing this gap now are building the foundation that agent governance will require. The ones that are not are compounding a problem that is already expensive and about to get harder.

The next post in this series sets out the five operational capabilities every organization needs to bring non-human identities under governance control: inventory, ownership, lifecycle governance, certification, and risk signals.

Request a briefing to learn how Omada’s unified governance model can help you govern every identity type.

Written by Robert Imeson
Last edited Apr 07, 2026

FREQUENTLY ASKED QUESTIONS

What are non-human identities, and why are they a governance blind spot?

Non-human identities include service accounts, workload credentials, API keys, and OAuth tokens that enable systems and applications to operate. The post argues they often outnumber human users by more than 40 to 1 and hold real privilege. Because they commonly sit outside human-centered identity programs, they can persist and accumulate access without routine oversight.

Why does the post frame recent token-based attacks as a governance failure rather than an authentication failure?

The example describes attackers using compromised OAuth tokens tied to a third-party application to access Salesforce environments at scale. No passwords were needed because the tokens already carried authorization and could persist silently. The issue was that the access path operated outside ownership, lifecycle, and review processes used to govern human users.

How does governance need to change for non-human identities compared with human identities?

Traditional identity governance relies on HR records, managers, and joiner, mover, and leaver events, which do not exist for non-human identities. The post says governance should focus on what each non-human identity can do by requiring a declared purpose, an accountable owner, and access that matches actual operational need. Without these triggers, credentials can remain active indefinitely.

What makes AI agent governance harder than governing service accounts and other non-human identities?

The post describes service accounts and similar identities as automated but predictable, which bounds their risk to configured behavior. AI agents interpret context, select tools, chain actions, and can delegate to other agents while using credentials such as OAuth tokens and API keys. Governance therefore shifts toward accountable action, including what agents did and whether those actions were authorized and reconstructible.

Which governance gaps does the post highlight, and what operational capabilities does it point toward next?

The post outlines three gaps: inventory, ownership, and lifecycle, which leave organizations unable to count, classify, or retire privileged non-human identities reliably. It also links these issues to growing regulatory expectations across NIS2, DORA, and the EU AI Act for control, traceability, and evidence over privileged access. The next post is said to cover five capabilities: inventory, ownership, lifecycle governance, certification, and risk signals.
