AI Agent ID
We Must Solve Agent Identity. This New Industry Whitepaper is a Starting Point.
Why Identity Management for AI Agents Can’t Wait: Introducing Our New OpenID Foundation Whitepaper
If you’re investing in, building, or deploying AI agents, there’s a foundational problem you need to understand: identity, authentication, and authorization for autonomous agents is fundamentally different from traditional software, and many current implementations are getting it wrong.
Today, I’m excited to share a comprehensive whitepaper I co-authored for the OpenID Foundation: “Identity Management for Agentic AI: The New Frontier of Authorization, Authentication, and Security for an AI Agent World.”
Why This Matters Now
As AI agents rapidly move from proof-of-concept to pilot and now to production, they’re creating urgent security and accountability challenges:
User impersonation is masking accountability. Most agents today act indistinguishably from their users, creating dangerous gaps in audit trails and accountability when things go wrong.
Consent fatigue is inevitable. As agents proliferate, users will face thousands of authorization requests, leading to reflexive approval and security risks.
Recursive delegation is uncharted territory. When agents spawn sub-agents or delegate tasks across organizational boundaries, we lack clear mechanisms for scope attenuation and attributable transitive trust (a small illustrative sketch follows this list).
Cross-domain operations break current models. OAuth 2.1 works well within anchored trust domains, but agents operating more fluidly across organizational boundaries need something more robust.
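To make the scope-attenuation problem concrete, here is a minimal sketch, not taken from the whitepaper, of how a delegation chain might guarantee that each sub-agent receives only a subset of its parent's scopes while keeping the chain attributable back to the original user. The Delegation class, agent names, and scope strings are hypothetical illustrations.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Delegation:
    """One link in a delegation chain: who delegated, to whom, with which scopes."""
    delegator: str
    delegatee: str
    scopes: frozenset[str]
    parent: "Delegation | None" = None

    def attenuate(self, delegatee: str, requested: set[str]) -> "Delegation":
        """Issue a child delegation whose scopes can never exceed this link's scopes."""
        granted = frozenset(requested) & self.scopes  # attenuation: intersect, never expand
        return Delegation(self.delegatee, delegatee, granted, parent=self)

    def chain(self) -> list[str]:
        """Walk back to the root so every action stays attributable to the original user."""
        link = f"{self.delegator} -> {self.delegatee}: {sorted(self.scopes)}"
        return ([] if self.parent is None else self.parent.chain()) + [link]


# Root grant from the human user to a primary agent.
root = Delegation("alice", "travel-agent",
                  frozenset({"calendar:read", "payments:initiate"}))

# A sub-agent asks for more than the parent holds; the extra scope is dropped, not granted.
sub = root.attenuate("flight-search-agent",
                     {"calendar:read", "payments:initiate", "email:send"})

print(*sub.chain(), sep="\n")
```

In a real deployment the same intersection rule would be enforced by the authorization server when minting tokens for sub-agents, rather than trusted to agent-side code, but the invariant is the same: delegated authority can only narrow, never widen.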
What’s Already Working (and What Isn’t)
The good news: we’re not starting from scratch. Current OAuth 2.1 frameworks, when properly implemented with protocols like MCP (Model Context Protocol), provide a starting point for enterprise agents accessing internal tools within a single trust domain.
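As a deliberately simplified illustration of that single-trust-domain pattern, here is a hedged sketch of an enterprise agent obtaining a narrowly scoped access token via the OAuth 2.1 client-credentials grant before calling an internal tool. The endpoint URLs, client credentials, and scope names are assumptions for illustration only, not anything prescribed by the whitepaper or by MCP.

```python
import requests  # any HTTP client works; requests is used here for brevity

# Hypothetical in-house endpoints for a single enterprise trust domain.
TOKEN_URL = "https://auth.example.internal/oauth2/token"
TOOL_URL = "https://tools.example.internal/tickets/search"


def get_agent_token() -> str:
    """Client-credentials grant: the agent authenticates as itself, with narrow scopes."""
    resp = requests.post(
        TOKEN_URL,
        data={
            "grant_type": "client_credentials",
            "scope": "tickets:read",  # least privilege: only what this task needs
        },
        auth=("support-agent-client-id", "client-secret-from-vault"),
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]


def search_tickets(query: str) -> dict:
    """Call an internal tool with the agent's own credentials, not the user's session."""
    token = get_agent_token()
    resp = requests.get(
        TOOL_URL,
        params={"q": query},
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    print(search_tickets("password reset"))
```

The key point is that the agent has its own identity and its own audit trail within the domain; it is not impersonating the user, and the token it carries names exactly what it is allowed to do.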
The challenge: this only solves the simplest use cases. The moment agents need greater autonomy, asynchronous execution, or cross-domain delegation, existing patterns reveal significant gaps. In the whitepaper we identify the key issues, options, and future opportunities, and I hope they give everyone working to close that gap a sound starting point.
A Huge Thanks to the Team
I especially want to thank Tobin South for his incredible, energetic leadership as the primary author and editor who wrangled this entire effort together. His vision and persistence made this comprehensive work possible. I’m also thrilled that the Stanford & Consumer Reports Loyal Agents Initiative (where both Tobin and I are active) was able to collaborate on this project. This cross-institutional collaboration reflects the urgency and importance of getting agent identity right: consumers need to be able to use and rely on AI agents safely and effectively, particularly when those agents conduct e-commerce transactions and make binding commitments on their behalf.
What’s in the Paper
The whitepaper provides both immediate, practical guidance and a strategic roadmap:
Section 2 outlines current best practices using existing standards (OAuth 2.1, SCIM, SSO, CIBA) for today’s agent implementations
Section 3 tackles future challenges: delegated authority models, recursive delegation, scope attenuation, scalable consent mechanisms, and the economic layer (payments and financial transactions)
Real-world use cases demonstrating where traditional IAM fails and what’s needed for high-velocity, asynchronous, and cross-domain agent operations
What’s Coming Next
This whitepaper is just the beginning of a deeper exploration I’ll be sharing:
Agent Protocols: I started last month with a post on the Agent Payments Protocol (AP2), with more protocol deep dives to follow.
Legal Dimensions: Building on my previous work on AI agents conducting transactions, UETA and LLM agents, and recent agent legal frameworks, I’ll be diving deeper into the legal infrastructure needed for increasingly autonomous agent transactions.
Evals for AI Agents: Following up on my initial exploration beyond AI benchmarks, I’ll be sharing frameworks for properly evaluating agent capabilities, safety, and reliability.
High-Value Use Cases: Identifying and unpacking the specific scenarios where proper identity capabilities unlock significant new value and reduce risk.
Agents Accelerating Research and Science: Exploring how properly governed agents can transform scientific discovery and research methodologies to spur innovation.
Looking Forward with Clear Eyes
I’m genuinely optimistic about the transformative potential of AI agents to augment human capabilities, empower consumers, and create new forms of value. The technical foundations exist, brilliant people across industry and academia are collaborating, and momentum is building toward interoperable standards.
But let’s be clear: many hard challenges remain. We need to move from impersonation to true delegation, build scalable governance mechanisms that respect user autonomy, create robust cross-domain trust fabrics, and ensure agents serve their users’ interests loyally. The work of building safe, trustworthy, and effective agent systems is just beginning.
For those investing in AI agents: ignoring these identity and authorization challenges doesn’t make them go away; it just means you’ll hit them unexpectedly in production. This whitepaper aims to be your starting point for understanding what’s required and building responsibly from the ground up.
Read the full paper: Identity Management for Agentic AI
Let’s build the future of autonomous agents together, securely, responsibly, accountably, and successfully!