Identity Governance Blog

Non-Human Identities Don't Govern Themselves: Building the Governance Foundation for NHI and AI Agents

Blog Summary

Enterprises struggle to govern service accounts, API keys, OAuth tokens, and AI agents because identity programs were built for people, leaving critical automation exposed through missing ownership, weak lifecycle controls, and blind spots in review. The blog argues that non-human identity governance depends on five capabilities: continuous inventory, accountable ownership, lifecycle governance, certification, and risk signals, which create the auditable foundation AI agent governance must extend.

Most enterprises have a governance program built for people. Service accounts, API keys, OAuth tokens, and AI agents were never part of that design. This post sets out the five operational capabilities required to bring non-human identities under governance control: inventory, ownership, lifecycle governance, certification, and risk signals. These capabilities form the foundation that AI agent governance builds on. The additional governance dimensions that agents specifically require are addressed in the third post in this series.

Most organizations already know they have a non-human identity problem. Service accounts with no owners. API keys that outlived the projects that created them. OAuth tokens connected to integrations nobody remembers authorizing. Their governance program was built for people, and non-human identities were never part of the design.

 

Five failures in a single incident

In March 2025, a developer at xAI accidentally exposed a private API key (a credential used to access systems) in a public GitHub repository (KrebsOnSecurity, 2025). The key granted access to more than 60 private and unreleased large language models, including models fine-tuned on proprietary SpaceX and Tesla data. GitGuardian detected the exposure the same day and sent an automated alert to the developer. The key remained active and publicly accessible for nearly two months. The incident exposed governance failures at five critical points, each one a capability that any effective NHI governance program must address.

  1. No inventory: The API key had never been registered in any central system used to manage identities and access. When it landed in a public repository, anyone who found it could query proprietary AI models built on sensitive corporate data.
  2. No owner: GitGuardian’s alert reached the developer who committed the key, and for nearly two months, nothing happened. Awareness with no accountability structure behind it is not ownership.
  3. No lifecycle controls: The key had no expiration date and no rotation schedule. A credential with access to dozens of proprietary AI models was allowed to persist indefinitely.
  4. No certification: The key had never appeared in an access review. An identity that never enters a review cycle never gets questioned.
  5. No risk escalation: GitGuardian detected the exposed key within hours and the alert reached the developer. Detection is only a control when it triggers a defined response, routes to someone with the authority to act, and produces a recorded outcome.

Because none of these governance controls were in place, the credential remained active and accessible for nearly two months. In most enterprises, this is not unusual. It describes the current state of most non-human identities that have never been brought under governance control.

 

Inventory: you cannot govern what you cannot see

The starting point is a complete, normalized inventory across every environment where automation runs, including Azure managed identities, service principals and app registrations, AWS IAM roles, Kubernetes service accounts, CI/CD pipeline credentials, API keys, RPA bot credentials, and OAuth tokens issued to third-party integrations. AI agents belong in that inventory too. They rely on these same credential types to execute their work.

An inventory is only as useful as the context attached to each identity. For each NHI, effective governance requires visibility into what it is, where it lives, what it can access, how old its credentials are, and who is accountable for it. Without that context, the inventory tells you what exists but not how to govern and manage it.

Discovery must be continuous. NHIs are created constantly and often outside any central process. A periodic snapshot captures what existed at a point in time. By the time anyone acts on it, new NHIs have already been created outside its scope. Ungoverned identities with active privileges are where attackers find their footholds.
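To make the idea of a normalized, context-rich inventory concrete, here is a minimal sketch of what a single inventory record and a gap check might look like. The field names and the `governance_gaps` helper are illustrative assumptions, not the schema of any specific product:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

# Hypothetical normalized inventory record; field names are illustrative.
@dataclass
class NHIRecord:
    identity_id: str                  # stable key across discovery sources
    kind: str                         # e.g. "api_key", "service_principal", "oauth_token"
    environment: str                  # e.g. "aws", "azure", "kubernetes", "github"
    scopes: List[str] = field(default_factory=list)   # what it can access
    credential_issued: Optional[date] = None          # how old its credentials are
    owner: Optional[str] = None                       # who is accountable

def governance_gaps(record: NHIRecord) -> List[str]:
    """Return the context still missing before this identity can be governed."""
    gaps = []
    if record.owner is None:
        gaps.append("no accountable owner")
    if record.credential_issued is None:
        gaps.append("unknown credential age")
    if not record.scopes:
        gaps.append("undocumented access scope")
    return gaps

# An identity discovered with no attached context: it exists, but nothing
# about it can be governed yet.
key = NHIRecord("key-42", "api_key", "github")
print(governance_gaps(key))
# → ['no accountable owner', 'unknown credential age', 'undocumented access scope']
```

A record with all three gaps is exactly the state the xAI key was in: visible once discovered, but impossible to govern without the surrounding context.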

 

Ownership: every identity needs an accountable person

Inventory tells an organization what exists. Ownership tells it who is responsible. Every NHI must have a named individual on record who can answer three questions: why does this identity exist, what does it do, and does it still need the access it has? That applies equally to AI agents. Without a declared owner, no one is accountable for what an agent does or accesses.

Establishing ownership across an environment that has grown organically over years is rarely straightforward. Engineers leave. Projects end. Teams reorganize. The identities they created persist. A structured process is needed to surface unclaimed identities, escalate when ownership cannot be determined, and enforce a clear policy for NHIs that remain unattributed. In most mature programs, that policy is deprovision by default. An identity no one will claim is an identity that should not exist.
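The attribution workflow described above can be sketched as a simple state decision. The states and the 30-day escalation window are assumptions chosen for illustration, not a prescribed standard:

```python
from datetime import date, timedelta
from typing import Optional

# Assumed escalation window before an unclaimed identity is removed by default.
ESCALATION_WINDOW = timedelta(days=30)

def ownership_action(owner: Optional[str],
                     claim_requested_on: Optional[date],
                     today: date) -> str:
    """Decide the next step for an identity whose ownership is unconfirmed."""
    if owner is not None:
        return "governed"          # accountable person on record
    if claim_requested_on is None:
        return "request_claim"     # surface the identity to likely owners
    if today - claim_requested_on < ESCALATION_WINDOW:
        return "escalate"          # route to team or department leads
    return "deprovision"           # unclaimed past the window: remove by default

# A claim request sent in February with no response by late April:
print(ownership_action(None, date(2026, 2, 1), date(2026, 4, 20)))
# → "deprovision"
```

The deprovision-by-default outcome encodes the policy stated above: an identity no one will claim is an identity that should not exist.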

 

Lifecycle governance: triggered differently, same principles

Non-human identities do not generate HR-driven lifecycle events. There is no manager notification or system trigger when a service account is created, when its scope changes, or when the project it was built for ends. The triggers must be defined and enforced by the governance program itself. Every NHI provisioning request should require a documented business justification, a declared owner, and a defined access scope. Scope changes require review. When a project closes, an application is retired, or a team is reorganized, the NHIs they created must be reviewed and, where no longer needed, removed.

Credential rotation addresses a risk unique to NHI lifecycle governance. The credential a non-human identity uses to authenticate (a password, token, or API key) can remain unchanged for years. A stolen credential that is never rotated remains valid indefinitely. Long-lived credentials are among the most exploited attack vectors in non-human identity breaches (OWASP, 2025).

Governance programs need visibility into credential age across the NHI estate, with defined thresholds and escalation paths for credentials that exceed them.
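A credential-age check with a threshold and an escalation path can be sketched in a few lines. The 90-day threshold is a common convention used here for illustration; the right value depends on the organization's rotation policy:

```python
from datetime import date

# Assumed rotation policy: credentials older than 90 days must rotate;
# credentials more than twice the threshold escalate past the owner.
ROTATION_MAX_AGE_DAYS = 90

def rotation_status(issued: date, today: date) -> str:
    """Classify a credential against the rotation policy."""
    age = (today - issued).days
    if age <= ROTATION_MAX_AGE_DAYS:
        return "compliant"
    if age <= 2 * ROTATION_MAX_AGE_DAYS:
        return "rotate"        # overdue: schedule rotation with the owner
    return "escalate"          # far past threshold: route beyond the owner

print(rotation_status(date(2025, 3, 1), date(2025, 4, 28)))
# → "compliant" (58 days old)
```

Running this check continuously across the NHI estate is what turns credential age from an unknown into a governed signal.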

 

Certification: non-human identities belong in your review campaigns

NHIs belong in certification campaigns alongside human identities. The governance principle is the same. What differs significantly is the context reviewers need to make an informed decision. For a human identity, reviewers evaluate whether access still fits the person’s role. For an NHI, reviewers need to understand its declared purpose, whether its privilege scope matches that purpose, when it was last used, and whether anything has changed in the systems it accesses. The same applies to AI agents: reviewers need to know what the agent is authorized to do, what it has actually done, and whether that remains appropriate.

That context is also the evidence an audit requires. A certification campaign that captures purpose, usage, and ownership decisions creates a record that regulators can interrogate, not just a record that a review took place.
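The kind of record such a campaign should capture can be sketched as a small data structure. The fields mirror the context described above (purpose, scope, usage, ownership, decision), but the schema itself is an illustrative assumption:

```python
from dataclasses import dataclass
from datetime import date
from typing import List

# Hypothetical certification-evidence record: what was reviewed, by whom,
# and what was decided, so an auditor can interrogate the outcome later.
@dataclass
class CertificationDecision:
    identity_id: str
    declared_purpose: str
    privilege_scope: List[str]
    last_used: date
    owner: str
    decision: str          # "certify", "revoke", or "reduce_scope"
    reviewed_by: str
    reviewed_on: date

record = CertificationDecision(
    identity_id="svc-reporting",
    declared_purpose="nightly export to BI warehouse",
    privilege_scope=["db:read"],
    last_used=date(2026, 4, 1),
    owner="j.doe",
    decision="certify",
    reviewed_by="a.lee",
    reviewed_on=date(2026, 4, 20),
)
print(record.decision)
# → certify
```

A row like this is evidence of a decision, not just evidence that a review happened.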

 

Risk signals: different identities, different indicators

The risk signals that matter for NHIs are different from those that apply to people. The indicators that carry weight include:

  1. a missing or unconfirmed owner,
  2. access to production environments without a documented business justification,
  3. access levels that exceed observed usage,
  4. credentials that fall outside defined rotation policy thresholds,
  5. identities with no usage in the past 90 days that still retain active privileges,
  6. unusual combinations of access that suggest privileges have accumulated over time rather than being intentionally assigned.

Relevant risk signals must surface between certification cycles. Exposure begins the moment a risk indicator appears, not at the next scheduled review. Governance programs need a continuously updated view of risk across non-human identities so that issues requiring attention today do not wait until the next review cycle.
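Several of the indicators above can be evaluated continuously from inventory and usage data. The sketch below checks four of them; the field names, the 90-day dormancy window, and the input shape are illustrative assumptions:

```python
from datetime import date, timedelta
from typing import List

# Assumed dormancy window for "no usage but active privileges".
DORMANCY_WINDOW = timedelta(days=90)

def risk_signals(identity: dict, today: date) -> List[str]:
    """Evaluate a subset of the NHI risk indicators against one identity."""
    signals = []
    if not identity.get("owner"):
        signals.append("missing owner")
    if identity.get("prod_access") and not identity.get("justification"):
        signals.append("undocumented production access")
    granted = set(identity.get("granted_scopes", []))
    used = set(identity.get("used_scopes", []))
    if granted - used:
        signals.append("access exceeds observed usage")
    last_used = identity.get("last_used")
    if last_used and today - last_used > DORMANCY_WINDOW and granted:
        signals.append("dormant with active privileges")
    return signals

svc = {"owner": None, "prod_access": True, "justification": None,
       "granted_scopes": ["db:write", "s3:read"], "used_scopes": ["s3:read"],
       "last_used": date(2025, 12, 1)}
print(risk_signals(svc, date(2026, 4, 20)))
```

Because the check runs on current data rather than at a scheduled review, a signal like a newly orphaned owner surfaces the day it appears, not at the next certification cycle.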

 

The foundation AI agent governance requires

Organizations with governance programs that address these five areas close the gaps the xAI incident exposed: every NHI is visible, has an accountable owner, is governed through its lifecycle, is included in certification, and is connected to a risk escalation path. Taken together, NIS2, DORA, and the EU AI Act raise the expectation that organizations can produce auditable evidence of governance over privileged access, regardless of identity type.

That foundation is necessary for governing AI agents, but it is not sufficient. Unlike a service account, which performs the same operation every time, an AI agent makes autonomous decisions, adapts to context, and takes actions its human sponsors never explicitly authorized. The governance question shifts from what an identity can access to what it has done, and whether the organization can demonstrate it was authorized. The third post in this series discusses how the NHI governance model should be expanded to manage AI agents, including governing delegation chains, constraining agent behavior to declared purposes, and producing the audit evidence that autonomous decision-making demands.

Request a briefing to learn how Omada’s unified governance model brings non-human identities under structured governance control.

This is the second post in a three-part series on identity governance for non-human identities and AI agents. Read Post 1: Non-Human Identities and AI Agents: The Governance Blind Spot. Post 3 addresses the governance dimensions that AI agents require beyond the NHI foundation.

Written by Robert Imeson
Last edited Apr 20, 2026

FREQUENTLY ASKED QUESTIONS

What does this article mean by non-human identities, and why do they need governance?

The blog describes non-human identities as service accounts, API keys, OAuth tokens, and AI agents that operate across enterprise systems. It argues they need governance because most identity programs were designed for people, which leaves these identities outside normal accountability, review, and control.

Why does the xAI API key incident matter for non-human identity governance?

The incident is used to show how governance failures can compound when a credential is exposed and no control responds effectively. According to the blog, the exposed key had no inventory record, no accountable owner, no lifecycle controls, no certification review, and no risk escalation path, which allowed it to remain active for nearly two months.

How should organizations build a governance foundation for non-human identities?

The blog says the foundation starts with five operational capabilities: inventory, ownership, lifecycle governance, certification, and risk signals. It also explains that discovery must be continuous and that each identity needs context such as purpose, location, access, credential age, and accountable ownership so governance decisions can be made.

What governance practices does the article recommend for managing non-human identities over time?

The blog recommends requiring a documented business justification, a declared owner, and a defined access scope when a non-human identity is created. It also says scope changes should be reviewed, unused or unattributed identities should be removed, and credential age should be monitored with thresholds and escalation paths for overdue rotation.

How do certification and risk signals support compliance and AI agent governance?

The blog says non-human identities should be included in certification campaigns with evidence about purpose, usage, privilege scope, and ownership, which creates a record that auditors and regulators can examine. It also states that continuously updated risk signals help surface issues between review cycles and support a broader governance foundation for AI agents under regulations such as NIS2, DORA, and the EU AI Act.
