Non-human identities now outnumber human users in most enterprises by more than 40:1. AI agents are entering production with the same governance gaps, but unlike a service account, an agent makes decisions.
Enterprises struggle to govern service accounts, API keys, OAuth tokens, and AI agents because identity programs were built for people, leaving critical automation exposed through missing ownership, weak lifecycle controls, and blind spots in review. This post argues that non-human identity governance depends on five capabilities: continuous inventory, accountable ownership, lifecycle governance, certification, and risk signals. Together, these create the auditable foundation that AI agent governance must extend.
Most enterprises have a governance program built for people. Service accounts, API keys, OAuth tokens, and AI agents were never part of that design. This post sets out the five operational capabilities required to bring non-human identities under governance control: inventory, ownership, lifecycle governance, certification, and risk signals. These capabilities form the foundation that AI agent governance builds on. The additional governance dimensions that agents specifically require are addressed in the third post in this series.
Most organizations already know they have a non-human identity problem. Service accounts with no owners. API keys that outlived the projects that created them. OAuth tokens connected to integrations nobody remembers authorizing. Their governance program was built for people, and non-human identities were never part of the design.
In March 2025, a developer at xAI accidentally exposed a private API key, a credential used to authenticate to systems, in a public GitHub repository (KrebsOnSecurity, 2025). The key granted access to more than 60 private and unreleased large language models, including models fine-tuned on proprietary SpaceX and Tesla data. GitGuardian detected the exposure the same day and sent an automated alert to the developer. Despite that alert, the key remained active and publicly accessible for nearly two months. The incident exposed governance failures at five critical points, each one a capability that any effective NHI governance program must address.
Because none of these governance controls were in place, the credential remained active and accessible for nearly two months. In most enterprises, this is not unusual; it is the default state of any non-human identity that has never been brought under governance control.
The starting point is a complete, normalized inventory across every environment where automation runs, including Azure managed identities, service principals and app registrations, AWS IAM roles, Kubernetes service accounts, CI/CD pipeline credentials, API keys, RPA bot credentials, and OAuth tokens issued to third-party integrations. AI agents belong in that inventory too. They rely on these same credential types to execute their work.
An inventory is only as useful as the context attached to each identity. For each NHI, effective governance requires visibility into what it is, where it lives, what it can access, how old its credentials are, and who is accountable for it. Without that context, the inventory tells you what exists but not how to govern and manage it.
Discovery must be continuous. NHIs are created constantly and often outside any central process. A periodic snapshot captures what existed at a point in time. By the time anyone acts on it, new NHIs have already been created outside its scope. Ungoverned identities with active privileges are where attackers find their footholds.
Inventory tells an organization what exists. Ownership tells it who is responsible. Every NHI must have a named individual on record who can answer three questions: why does this identity exist, what does it do, and does it still need the access it has? That applies equally to AI agents. Without a declared owner, no one is accountable for what an agent does or accesses.
Establishing ownership across an environment that has grown organically over years is rarely straightforward. Engineers leave. Projects end. Teams reorganize. The identities they created persist. A structured process is needed to surface unclaimed identities, escalate when ownership cannot be determined, and enforce a clear policy for NHIs that remain unattributed. In most mature programs, that policy is deprovision by default. An identity no one will claim is an identity that should not exist.
Non-human identities do not generate HR-driven lifecycle events. There is no manager notification or system trigger when a service account is created, when its scope changes, or when the project it was built for ends. The triggers must be defined and enforced by the governance program itself. Every NHI provisioning request should require a documented business justification, a declared owner, and a defined access scope. Scope changes require review. When a project closes, an application is retired, or a team is reorganized, the NHIs they created must be reviewed and, where no longer needed, removed.
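Enforcing those provisioning requirements amounts to validating every request against the three mandatory fields. This is a minimal sketch; the field names (`business_justification`, `owner`, `access_scope`) are assumptions standing in for whatever schema a given request workflow uses.

```python
def validate_provisioning_request(request: dict) -> list[str]:
    """Reject NHI creation requests missing required governance fields (sketch)."""
    errors = []
    if not request.get("business_justification"):
        errors.append("missing documented business justification")
    if not request.get("owner"):
        errors.append("missing declared owner")
    if not request.get("access_scope"):
        errors.append("missing defined access scope")
    return errors  # an empty list means the request may proceed

request = {"owner": "jane.doe", "access_scope": ["billing-api:read"]}
print(validate_provisioning_request(request))
# ['missing documented business justification']
```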
Credential rotation addresses a risk unique to NHI lifecycle governance. The credential a non-human identity uses to authenticate, whether a password, token, or API key, can remain unchanged for years. A stolen credential that is never rotated remains valid indefinitely. Long-lived credentials are among the most exploited attack vectors in non-human identity breaches (OWASP, 2025).
Governance programs need visibility into credential age across the NHI estate, with defined thresholds and escalation paths for credentials that exceed them.
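A threshold-and-escalation check of the kind described can be sketched as follows. The specific day counts are illustrative assumptions; real programs set thresholds per credential type and risk tier.

```python
from datetime import date

# Illustrative thresholds; not prescriptive values.
ROTATION_WARN_DAYS = 90
ROTATION_ESCALATE_DAYS = 180

def credential_status(issued: date, today: date) -> str:
    """Classify a credential by age against rotation thresholds (sketch)."""
    age = (today - issued).days
    if age >= ROTATION_ESCALATE_DAYS:
        return "escalate"   # overdue: route to the owner's escalation path
    if age >= ROTATION_WARN_DAYS:
        return "rotate"     # approaching the limit: schedule rotation
    return "ok"

print(credential_status(date(2025, 1, 1), date(2025, 8, 1)))  # escalate (212 days old)
```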
NHIs belong in certification campaigns alongside human identities. The governance principle is the same. What differs significantly is the context reviewers need to make an informed decision. For a human identity, reviewers evaluate whether access still fits the person’s role. For an NHI, reviewers need to understand its declared purpose, whether its privilege scope matches that purpose, when it was last used, and whether anything has changed in the systems it accesses. The same applies to AI agents: reviewers need to know what the agent is authorized to do, what it has actually done, and whether that remains appropriate.
That context is also the evidence an audit requires. A certification campaign that captures purpose, usage, and ownership decisions creates a record that regulators can interrogate, not just a record that a review took place.
The risk signals that matter for NHIs are different from those that apply to people. The indicators that carry weight include credentials that have exceeded their rotation thresholds, identities that have gone dormant, privilege scope that has drifted beyond the declared purpose, missing or unresolved ownership, and evidence that a credential has been exposed.
Relevant risk signals must surface between certification cycles. Exposure begins the moment a risk indicator appears, not at the next scheduled review. Governance programs need a continuously updated view of risk across non-human identities so that issues requiring attention today do not wait until the next review cycle.
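A continuously updated risk view amounts to re-evaluating each identity against these indicators whenever its attributes change. The rules and field names below are illustrative assumptions, not a defined scoring model.

```python
def risk_signals(nhi: dict) -> list[str]:
    """Evaluate risk indicators for one NHI (illustrative rules and thresholds)."""
    signals = []
    if nhi.get("credential_age_days", 0) > 180:
        signals.append("stale credential")
    if nhi.get("days_since_last_use", 0) > 90:
        signals.append("dormant identity")
    if nhi.get("owner") is None:
        signals.append("no accountable owner")
    # Entitlements beyond the declared scope indicate privilege drift.
    if set(nhi.get("entitlements", [])) - set(nhi.get("declared_scope", [])):
        signals.append("privilege beyond declared scope")
    return signals

nhi = {"credential_age_days": 400, "owner": None,
       "entitlements": ["db:write"], "declared_scope": ["db:read"]}
print(risk_signals(nhi))
# ['stale credential', 'no accountable owner', 'privilege beyond declared scope']
```

Running this on every inventory update, rather than once per certification cycle, is what closes the gap between a risk indicator appearing and someone acting on it.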
Organizations with governance programs that address these five areas close the gaps the xAI incident exposed: every NHI is visible, has an accountable owner, is governed through its lifecycle, is included in certification, and is connected to a risk escalation path. Taken together, NIS2, DORA, and the EU AI Act raise the expectation that organizations can produce auditable evidence of governance over privileged access, regardless of identity type.
That foundation is necessary for governing AI agents, but it is not sufficient. Unlike a service account, which performs the same operation every time, an AI agent makes autonomous decisions, adapts to context, and takes actions its human sponsors never explicitly authorized. The governance question shifts from what an identity can access to what it has done, and whether the organization can demonstrate it was authorized. The third post in this series discusses how the NHI governance model should be expanded to manage AI agents, including governing delegation chains, constraining agent behavior to declared purposes, and producing the audit evidence that autonomous decision-making demands.
Request a briefing to learn how Omada’s unified governance model brings non-human identities under structured governance control.
This is the second post in a three-part series on identity governance for non-human identities and AI agents. Read Post 1: Non-Human Identities and AI Agents: The Governance Blind Spot. Post 3 addresses the governance dimensions that AI agents require beyond the NHI foundation.
FREQUENTLY ASKED QUESTIONS
What are non-human identities, and why do they need governance?

The blog describes non-human identities as service accounts, API keys, OAuth tokens, and AI agents that operate across enterprise systems. It argues they need governance because most identity programs were designed for people, which leaves these identities outside normal accountability, review, and control.
Why does the blog use the xAI API key incident as an example?

The incident is used to show how governance failures can compound when a credential is exposed and no control responds effectively. According to the blog, the exposed key had no inventory record, no accountable owner, no lifecycle controls, no certification review, and no risk escalation path, which allowed it to remain active for nearly two months.
What capabilities form the foundation of non-human identity governance?

The blog says the foundation starts with five operational capabilities: inventory, ownership, lifecycle governance, certification, and risk signals. It also explains that discovery must be continuous and that each identity needs context such as purpose, location, access, credential age, and accountable ownership so governance decisions can be made.
How should organizations govern the lifecycle of non-human identities?

The blog recommends requiring a documented business justification, a declared owner, and a defined access scope when a non-human identity is created. It also says scope changes should be reviewed, unused or unattributed identities should be removed, and credential age should be monitored with thresholds and escalation paths for overdue rotation.
How do certification and risk signals support audit and compliance?

The blog says non-human identities should be included in certification campaigns with evidence about purpose, usage, privilege scope, and ownership, which creates a record that auditors and regulators can examine. It also states that continuously updated risk signals help surface issues between review cycles and support a broader governance foundation for AI agents under regulations such as NIS2, DORA, and the EU AI Act.
FEATURED RESOURCES
As AI agents multiply, non-human identities increasingly become a new attack surface. Learn how role-based governance and automation tighten access, tame sprawl, and sustain compliance.
Enterprises face growing identity sprawl across SaaS and non-human accounts, which expands exposure and makes least-privilege hard to prove. Identity Security Posture Management (ISPM) adds continuous visibility, risk scoring, and automated remediation so access stays governed and audit-ready.