
Governing AI Agents: What Changes When Identities Make Decisions

This blog is part of a three-part series exploring how identity governance must evolve to manage non-human identities. From foundational governance to autonomous agent oversight, it outlines the controls needed for secure AI adoption.

Part 1: Non-Human Identities and AI Agents: The Governance Blind Spot

Part 2: Non-Human Identities Don’t Govern Themselves: Building the Governance Foundation for NHI and AI Agents

Part 3: Governing AI Agents: What Changes When Identities Make Decisions

Blog Summary

The next privileged identity your auditors ask about may not be a person. It may not be a service account. It may be an AI agent that selected its own tools, delegated part of the task, accessed data through another credential, and completed an action no human explicitly approved. Agent governance is the operating model for that reality, and identity governance is where it has to live.

In August 2025, security researchers at Brave demonstrated how an AI agent embedded in Perplexity’s Comet browser could be hijacked. A user asked the agent to summarize a webpage. Hidden in the page were instructions written for the agent itself, which it followed instead of the user’s request. The technique is known as indirect prompt injection.

Identity governance does not stop indirect prompt injection. What it has to address is what the demonstration made visible: an autonomous identity capable of executing a chain of consequential actions that no one had ever authorized it to take.

Every enterprise piloting AI agents is deploying autonomous identities that interpret context, select tools, chain actions, and delegate to other agents and non-human identities in pursuit of an objective. The risk is no longer only that an identity holds too much access. It is that the identity acts in unforeseen ways inside the access it already has. Agent governance has to answer two new questions: did the identity stay within the authority its owner defined for it, and can the organization prove the result was authorized?

Agent governance starts with the foundation already in place for non-human identities. Every service account, workload credential, and AI agent must be inventoried, owned by an accountable person, governed across its lifecycle, included in certification campaigns, and monitored for risk signals appropriate to its type. The second post in this series covered those five capabilities in detail. AI agent governance builds on that foundation by adding four further dimensions, which the rest of this post covers in turn.

Defined authority: scoping what an agent is permitted to do

Without explicitly defined authority, an AI agent operates on whatever permissions it accumulates, whether granted directly during development or acquired through delegation to other agents, with no durable record of what its sponsor intended. Agent governance requires every agent to have a documented authorization scope: its business purpose, its human sponsor and accountable owner, the tools and connectors it can invoke, the data domains it can access, the actions it is allowed and prohibited from taking, the credentials it acts through, and the cadence at which the scope is reviewed.

The authorization scope is the reference point against which delegation paths, runtime behavior, and audit evidence are evaluated. Without it, no part of agent governance can be performed reliably, because there is no declared boundary to compare against.
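
To make this concrete, the sketch below shows one way an authorization scope could be captured as a machine-readable record. The dataclass form and every field name are illustrative assumptions, not a standard schema or any specific product's API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentAuthorizationScope:
    """Illustrative record of an agent's defined authority (hypothetical schema)."""
    agent_id: str
    business_purpose: str
    human_sponsor: str              # person who requested the agent
    accountable_owner: str          # person answerable for its behavior
    allowed_tools: frozenset        # tools and connectors it may invoke
    data_domains: frozenset         # data it may access
    allowed_actions: frozenset
    prohibited_actions: frozenset
    credentials: frozenset          # identities it acts through
    review_cadence_days: int        # how often the scope is re-certified

scope = AgentAuthorizationScope(
    agent_id="agent-ap-clerk-01",
    business_purpose="Match supplier invoices to purchase orders",
    human_sponsor="j.doe@example.com",
    accountable_owner="finance-ops@example.com",
    allowed_tools=frozenset({"erp.invoice_match"}),
    data_domains=frozenset({"accounts_payable"}),
    allowed_actions=frozenset({"read_invoice", "flag_mismatch"}),
    prohibited_actions=frozenset({"initiate_payment"}),
    credentials=frozenset({"svc-ap-readonly"}),
    review_cadence_days=90,
)
```

A frozen record like this gives delegation review, drift detection, and evidence generation a stable declared boundary to compare against; any change to the scope becomes a governed event rather than a silent edit.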

Delegation chain visibility: governing the path, not just the account

Consider three agents working a procurement process. One creates new vendor records. Another retrieves supplier banking data. A third initiates payment. Each looks acceptable on its own. Chained together, without guardrails, they create an end-to-end payment path no human reviewer would have approved. Agents do not just hold access; they assemble it on the fly through delegation, and the chain crosses identities, tools, and systems that no single review covers because each system only sees the activity within its own boundary.

Under traditional governance oversight, separation of duties keeps any one person from controlling a payment end to end. The same principle has to apply when agents do the work. Agent governance requires the delegation chain to be visible end to end, with every step reviewed as part of one decision: who authorized the work, which agents and credentials carried it out, and whether the resulting access combination would have been approved if a human had requested it directly.
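
Here is a minimal sketch of what that single-decision review might check, assuming each step of the delegation chain is recorded as the acting agent plus the capability it exercised. The chain, the toxic-combination rule, and the function names are all illustrative.

```python
# Each step records the acting agent and the capability it exercised
# (hypothetical names).
chain = [
    ("agent-vendor-master", "create_vendor_record"),
    ("agent-supplier-data", "read_bank_details"),
    ("agent-payments", "initiate_payment"),
]

# Capability combinations no human requester would be granted end to end.
TOXIC_COMBINATIONS = [
    {"create_vendor_record", "initiate_payment"},
]

def violates_sod(chain):
    """Return the first prohibited combination the chain assembles, if any."""
    exercised = {capability for _, capability in chain}
    for combination in TOXIC_COMBINATIONS:
        if combination <= exercised:  # the chain covers the whole combination
            return combination
    return None

if (combination := violates_sod(chain)) is not None:
    print(f"Delegation chain assembles a prohibited combination: {combination}")
```

The point of the sketch is that the rule is evaluated over the assembled chain, not over any single agent, which is exactly the view no individual system in the chain has on its own.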

Runtime drift: continuous comparison against defined authority

Unlike service accounts, which operate against a fixed configuration, AI agents adapt at runtime. They respond to context, select tools, and chain actions in ways their sponsors did not anticipate. A finance agent given access to a new ERP module may begin acting on data its scope never covered. A customer service agent connected to a new integration may begin executing transactions that were never part of its mandate. The change is rarely the result of a defect; it is the result of agents doing what they were designed to do, in environments that change underneath them.

Agent governance requires the agent’s actual behavior to be compared against its defined authority continuously, not at quarterly intervals. Operational telemetry, including cloud platform logs, SaaS application events, IAM activity, and SIEM data, is interpreted by the agentic governance platform against the agent’s scope, owner, and approved capability set. When behavior diverges, the response is governed: the agent is constrained or suspended, the owner is notified, and the deviation is recorded in the audit trail. In the Comet incident highlighted earlier, governance of this kind would not have stopped the prompt injection, but it would have stopped the chain of unauthorized actions that followed.
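
As a sketch of what the continuous comparison involves, the fragment below checks observed telemetry events against the kind of scope fields shown earlier. The event shape, field names, and the governed responses noted in the comments are assumptions, not a specific platform's telemetry format or API.

```python
# Illustrative defined authority for the agent (hypothetical fields).
scope = {
    "agent_id": "agent-ap-clerk-01",
    "allowed_tools": {"erp.invoice_match"},
    "data_domains": {"accounts_payable"},
    "prohibited_actions": {"initiate_payment"},
}

def drift_events(scope, events):
    """Yield telemetry events that fall outside the defined authority."""
    for event in events:
        if (event["tool"] not in scope["allowed_tools"]
                or event["data_domain"] not in scope["data_domains"]
                or event["action"] in scope["prohibited_actions"]):
            yield event

# Events as they might be distilled from cloud, SaaS, IAM, and SIEM logs.
observed = [
    {"tool": "erp.invoice_match", "data_domain": "accounts_payable",
     "action": "read_invoice"},                      # within scope
    {"tool": "erp.payments", "data_domain": "treasury",
     "action": "initiate_payment"},                  # never in the mandate
]

for deviation in drift_events(scope, observed):
    # Governed response: constrain or suspend the agent, notify the
    # owner, and record the deviation in the audit trail.
    print(f"Deviation by {scope['agent_id']}: {deviation}")
```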

Decision evidence: produced as agents operate, not reconstructed after the fact

Access logs show what an agent reached. They do not show why it was permitted to reach it. Agent governance requires a continuous evidence record that captures who authorized the agent, what authority was defined, which tools and credentials were used, which other identities were involved in the chain, and how each action mapped to the agent’s approved scope.
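
A minimal sketch of what one such evidence record could contain, built at the moment the agent acts rather than reconstructed later. The fields are illustrative assumptions, not a regulatory or product schema.

```python
import json
from datetime import datetime, timezone

def evidence_record(agent_id, action, tool, credential, chain,
                    scope_entry, authorized_by):
    """Build one audit-ready record tying an action to its authorization."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "action": action,                # what the agent did
        "tool": tool,                    # connector it invoked
        "credential": credential,        # identity it acted through
        "delegation_chain": chain,       # other identities in the chain
        "scope_entry": scope_entry,      # defined-authority entry it maps to
        "authorized_by": authorized_by,  # human authorization it traces to
    }

record = evidence_record(
    agent_id="agent-ap-clerk-01",
    action="flag_mismatch",
    tool="erp.invoice_match",
    credential="svc-ap-readonly",
    chain=["agent-ap-clerk-01"],
    scope_entry="allowed_actions:flag_mismatch",
    authorized_by="finance-ops@example.com",
)
print(json.dumps(record, indent=2))
```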

Regulators and standards bodies are converging on this expectation. The EU AI Act, NIST’s AI Risk Management Framework, and the OpenID Foundation’s 2025 whitepaper on agentic AI all point in the same direction: agent governance has to produce documented, traceable evidence as agents operate, in a form regulators, auditors, and boards can act on.

Every platform is building its own agent identity model

Microsoft has introduced Entra Agent ID, a new identity object inside Entra ID Governance currently in preview. Salesforce has Agentforce. ServiceNow has AI Agent Studio. Amazon Bedrock and Google’s Gemini Enterprise Agent Platform each define their own model of agent identity, credentials, and tool authorization. Each of these is necessary for governing agents inside its own ecosystem.

None of them governs agents running on the others. An agent that initiates a task in Salesforce, calls a tool hosted in AWS, queries a record in ServiceNow, and writes a result back through Microsoft Graph leaves a chain no platform-native registry can reconstruct. Identity programs spent two decades recovering from exactly this kind of fragmentation in the human directory layer. Agent adoption now risks recreating it with autonomous identities that multiply faster than human accounts ever did. What is needed is a platform-independent control layer that brings every agent under a single governance model.

The framework for governing AI agents at scale

Agents that are properly governed share the following characteristics (a minimal completeness check is sketched after the list):

  1. Accountability: each agent has a declared business purpose, a human sponsor, and an accountable owner.
  2. Defined authority: the tools, data, and actions within the agent’s scope are specified explicitly.
  3. Identity visibility: the credentials and non-human identities the agent acts through are visible to the governance program.
  4. End-to-end reviewability: delegation paths and runtime behavior are reviewed continuously as a single chain.
  5. Traceable evidence: every action traces back to a human authorization, an entry in the agent’s defined authority, and an audit record.
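
Read as a checklist, the five characteristics lend themselves to an automated completeness check over an agent inventory. A minimal sketch, assuming each agent is held as a simple record whose field names are hypothetical:

```python
# Hypothetical inventory fields mapped to the characteristic they evidence.
REQUIRED_FIELDS = {
    "business_purpose": "accountability",
    "human_sponsor": "accountability",
    "accountable_owner": "accountability",
    "defined_authority": "defined authority",
    "credentials": "identity visibility",
    "delegation_review": "end-to-end reviewability",
    "evidence_trail": "traceable evidence",
}

def governance_gaps(agent_record):
    """Return the characteristics an agent record fails to evidence."""
    return sorted({
        characteristic
        for field, characteristic in REQUIRED_FIELDS.items()
        if not agent_record.get(field)
    })

agent = {
    "business_purpose": "Match supplier invoices to purchase orders",
    "human_sponsor": "j.doe@example.com",
}
print(governance_gaps(agent))
# ['accountability', 'defined authority', 'end-to-end reviewability',
#  'identity visibility', 'traceable evidence']
```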

Governing a single agent is one problem. Scaling it across an enterprise is another. The framework rests on three components. First, a single accountable owner for agent governance as a discipline, sitting alongside human and non-human identity governance rather than apart from it. Second, a centralized control plane that maintains every agent’s defined authority, delegation paths, and runtime evidence in one place. Third, a clear escalation path that routes deviations to the right reviewer with the full chain in view.

Regulators, auditors, and boards will not ask only whether an organization deployed AI. They will ask whether the organization can prove the AI acted within authorized boundaries, and whether it will continue to do so. The organizations that build toward that proof now will adopt agents with confidence.

Request a briefing to learn how Omada’s unified governance model extends to AI agents.

This is the third post in a three-part series on identity governance for non-human identities and AI agents. To learn more about the foundation this post builds on, see the earlier posts in this series: Non-Human Identities and AI Agents: The Governance Blind Spot, and Non-Human Identities Don’t Govern Themselves.
