Recent Posts on AI Agents
Consolidating and Sharing Recent Posts I Published With Stanford and Consumer Reports Innovation Lab
As May comes to a close, I want to take a moment to spotlight several blog posts I’ve published this year in collaboration with Stanford CodeX (with Diana Stern) and the Consumer Reports Innovation Lab, all focused on AI Agents.
These pieces collectively examine how AI agents are reshaping transactional systems, contract formation, and legal responsibility—raising urgent questions about loyalty, liability, and governance. Whether you’re designing agents, regulating them, or simply trying to make sense of this shift, this collection maps out key legal, technical, and practical considerations.
Themes include:
Agency & Liability: How legal frameworks like principal-agent relationships and the Uniform Electronic Transactions Act (UETA) apply to AI agents.
Design for Trust: Technical and policy mechanisms (e.g., error prevention, human oversight, llms.txt) that build user trust.
Emerging Standards: The potential of interoperability (e.g., A2A protocols), machine-readable contracts, and loyalty frameworks to rewire digital marketplaces.
Published with Stanford CodeX:
From Fine Print to Machine Code: How AI Agents are Rewriting the Rules of Engagement Part 1
From Fine Print to Machine Code: How AI Agents are Rewriting the Rules of Engagement Part 2
From Fine Print to Machine Code: How AI Agents are Rewriting the Rules of Engagement Part 3
Published with Consumer Reports Innovation Lab:
Defining ‘Loyalty’ for AI Agents: Insights from the Stanford AI Agents x Law Workshop
My Agent Messed Up! Understanding Errors and Recourse in AI Transactions
Agents Talking to Agents (A2A): Reshaping the Marketplace and Your Power
As AI agents transition from experimental demos to real-world applications handling contracts, money, and trust, I’ve found myself increasingly focused on the legal and technical implications. This roundup brings together several key pieces charting that terrain. The full content is collected below for your reading convenience.
URL for the following original post: https://law.stanford.edu/2025/01/14/from-fine-print-to-machine-code-how-ai-agents-are-rewriting-the-rules-of-engagement/
From Fine Print to Machine Code: How AI Agents are Rewriting the Rules of Engagement: Part 1 of 3
January 14, 2025
Part 1 of 3
by Diana Stern and Dazza Greenwood, Codex Affiliate
Picture this: you’ve just developed a sleek new AI shopping assistant. It’s ready to scour the internet for the best deals, compare prices faster than you can say “discount,” and make purchases quicker than you can reach for your wallet. But wait, there’s a catch. How do you ensure this digital dealmaker doesn’t make mistakes that could bind you or your customer to a bad deal, create liability under privacy laws, or violate terms of service that it (and, let’s face it, probably you) never actually read?
This three-part series will identify U.S. legal issues raised by this type of AI agent and how to address them. In this post, we’ll start by level setting on AI agent terminology. Next, we’ll dispel the misconception that liability can be pushed to the AI agents themselves and explain why the company offering services like this AI shopping assistant to customers could be left holding the bag o’ risks. Finally, we’ll touch on how software companies can helpfully leverage principal-agent law to manage this risk.
What is a Transactional Agent?
AI agents are an umbrella category of AI systems that execute tasks on behalf of users. In addition to your AI shopping bot that purchases goods online, think of virtual assistants that book flights or event tickets and meeting schedulers that reserve tables at restaurants. There are a variety of AI agents with diverse capabilities.
This series focuses on what we’ll call “Transactional Agents”: AI agent systems that conduct transactions involving monetary or contractual commitments. These systems leverage large language models (LLMs) to move beyond basic query-response interactions. What makes them special is their ability to perform dynamic, multi-step reasoning and take action without human review or approval. Imagine your shopping bot doesn’t just find products but compares prices across retailers, checks reviews, confirms availability, and makes purchases – all while sticking to your customer’s specified budget and preferences. Transactional Agents achieve this through key capabilities like the following (a rough code sketch follows the list):
Tool use: Accessing external services like payment processors or APIs
Memory management: Retaining context and user preferences across interactions
Iterative refinement: Learning from past decisions to improve future outcomes
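To make that concrete, here is a minimal sketch of how these capabilities might fit together in code. Every name in it (search_catalog, llm_plan, payment_api, and so on) is a hypothetical stand-in rather than any particular vendor’s API.

```python
# Minimal sketch of a Transactional Agent loop (illustrative only).
# All tool names are hypothetical stand-ins, not any specific vendor's API.

from dataclasses import dataclass, field

@dataclass
class AgentMemory:
    preferences: dict                               # e.g., {"budget": 120.00}
    history: list = field(default_factory=list)     # past decisions and outcomes

def run_shopping_task(goal: str, memory: AgentMemory, tools: dict) -> dict:
    """Plan, act with tools, and refine until the goal is met or aborted."""
    for attempt in range(3):                        # iterative refinement, bounded
        plan = tools["llm_plan"](goal, memory.preferences, memory.history)
        candidates = tools["search_catalog"](plan["query"])        # tool use
        choice = tools["llm_rank"](candidates, memory.preferences)
        if choice["price"] <= memory.preferences["budget"]:
            receipt = tools["payment_api"](choice)                 # binding commitment
            memory.history.append({"goal": goal, "outcome": receipt})   # memory
            return receipt
        memory.history.append({"goal": goal, "outcome": "over budget, retrying"})
    return {"status": "aborted", "reason": "no option within budget"}
```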
Their ability to make binding commitments, including payments, differentiates Transactional Agents from simple chatbots and other types of AI agents. These systems can spend real money or enter into contracts on one’s behalf. Let’s say your company provides an AI shopping bot consumer app powered by a third-party LLM. On the surface, this seems like it could be a straightforward SaaS offering, but it has hidden challenges and risks related to security, authorization, and trust. How do you ensure the app follows your customers’ requests? How do you prevent errors? Misuse? These are some of the challenges we’ll explore in this series.
Your Transactional Agent Is Not A Legal Agent, But You Might Be
Your Transactional Agent cannot be held liable nor enter agreements itself because it’s not a legal entity – it’s software! So how is it able to buy the perfect pair of Jimmy Choos for your customer right when they go on sale? Under the Uniform Electronic Transactions Act, which we will discuss further in a future post, it is well-settled that Transactional Agents can form contracts on behalf of their users, but principal-agent law may also be operating in the background.
If you’ve bought a house, a real estate agent may have acted on your behalf to buy the property, negotiate prices, and handle paperwork. Not all principal-agent relationships are made through an express agreement like in real estate. They can also be implied, like a whiskey bar manager who is in charge of curating the menu and decides to enter into agreements on the bar’s behalf to buy mocktail supplies in January. In addition, a principal-agent relationship can be based on “apparent authority,” which arises when a third party reasonably believes an agent has the authority to act on the principal’s behalf. For example, the bar manager might tell a non-alcoholic spirit distributor that she is authorized to enter into agreements for new products on the bar’s behalf.
Under state common law (law primarily developed through court cases), a common law agent has a fiduciary duty to the principal (legal nerds can see Restatement (Third) of Agency § 8.01). This is a big deal! A fiduciary duty is one of the highest standards of care imposed by law. It is a legal obligation to act in the best interests of the other party within the scope of the business relationship. The agent owes other duties as well, including avoiding conflicts of interest and acting in line with the agency agreement.
When a company offering a Transactional Agent to customers (“Transactional Agent Provider”) operates the Transactional Agent, a principal-agent relationship *may* exist. If the customer went to court, they could argue there was a principal-agent relationship between them and the Transactional Agent Provider in order to get the Transactional Agent Provider on the hook. The court would likely look at the customer’s actions in deploying and configuring the Transactional Agent as well as the terms they agreed to, among other factors.
Apparent authority may be a particularly relevant consideration for the court, since third parties interacting with the AI may not know the actual instructions given to the Transactional Agent by the user, but rather, are relying on what they see from the Transactional Agent. The court would consider how the Transactional Agent Provider’s authority was communicated to third parties, including representations, disclaimers, and industry standards.
Even if a Transactional Agent Provider exceeded its authority, a court might analyze whether the customer ratified the action, meaning the customer essentially gave the Transactional Agent Provider authority to do that action after the fact.
In short, when it comes to Transactional Agents, the customer could be the principal delegating authority to the Transactional Agent Provider as their agent. Et voila, the Transactional Agent Provider would become legally liable under principal-agent laws.
Making Agency (or Alternatives) Work For You
Agency law is a familiar legal framework for courts and can potentially clarify liability issues, so, in some cases, it might be advantageous to state there is an agency relationship in Transactional Agent Provider terms of service. We have seen this already in our review of existing Transactional Agent Provider terms of service. At the same time, since the standard of care for an agent is so high, Transactional Agent Providers may wish to structure these relationships as independent contractor relationships if they can ensure that the terms and the way the customer interacts with the Transactional Agent align with this characterization. Likewise, there may be a competitive advantage in embracing some fiduciary duties as a Transactional Agent Provider to create and retain customer trust.
In addition, there’s a potential business opportunity here. Transactional Agent Providers may look to third parties to take on the responsibility of being the customer’s legal agent. This already happens in the payments industry where some companies act as the “merchant of record” and take on some liability for the actual provider or manufacturer of products and services sold.
In conclusion, as more Transactional Agents with increasingly advanced capabilities come online every day, customers should choose their Transactional Agent Providers wisely, and Transactional Agent Providers should be proactive in determining the principal-agent legal strategy appropriate for their business.
Diana Stern is Deputy General Counsel at Protocol Labs, Inc. and advises clients in her role as Special Counsel at DLx Law. Dazza Greenwood runs Civics.Com consultancy services, and he founded and leads law.MIT.edu and heads the Agentic GenAI Transaction Systems research project at Stanford’s CodeX.
Thanks to Sarah Conley Odenkirk, art attorney and founder of ArtConverge, and Jessy Kate Schingler, Law Clerk, Mill Law Center and Earth Law Center, for their valuable feedback on this post.
URL for the following original post: https://law.stanford.edu/2025/01/21/from-fine-print-to-machine-code-how-ai-agents-are-rewriting-the-rules-of-engagement-2/
From Fine Print to Machine Code: How AI Agents are Rewriting the Rules of Engagement: Part 2 of 3
January 21, 2025
Part 2 of 3
by Diana Stern and Dazza Greenwood, Codex Affiliate
Your AI shopping assistant is humming along, finding deals and making purchases for your customers. Then one day, it happens: the bot buys 100 self-heating mugs instead of 1, maxes out a customer’s credit card on duplicate Xbox orders, or shares your customer’s shipping address with an unauthorized third party. As the company behind this digital dealmaker (the “Transactional Agent Provider”), what happens when your AI assistant makes mistakes?
As a refresher, in our prior post, we defined Transactional Agents and uncovered why Transactional Agent Providers should be thoughtful about whether they serve as a legal agent for their customers (fiduciary duties abound!). We also identified a new business opportunity for third parties to take on this role.
Mistakes and Errors – at AI Scale
At a practical level, given the myriad possible contract permutations, the Transactional Agent could easily overstep its intended authority by filling in the gaps where its specific direction is not programmed, resulting in unintended obligations for the user (like ponying up enough cash to keep 100 self-heating mugfuls of matcha tea going at once). Will these agreements be binding if the Transactional Agent makes a mistake or exceeds its intended scope of authorization?
The Uniform Electronic Transactions Act (UETA) is a broadly adopted commercial law in the United States with provisions specifically addressing errors made during automated transactions conducted by Transactional Agents. For example, a relevant provision of UETA addressing errors permits the user to reverse transactions if the Transactional Agent did not provide a means to prevent or correct the error. This provision should be carefully understood by Transactional Agent Providers to ensure their process flow and ultimate user interaction support and reflect adequate means to prevent or correct these types of errors.
Likewise, under another provision of UETA, if the parties had an agreed security procedure in place and one party failed to abide by that procedure but would have caught the issue if they had, then the other party may be able to reverse the transaction. Even with this uniform law, the legal and practical implications of such changes and errors are complex and largely untested. Would these provisions mean that no transaction conducted by a Transactional Agent should be considered finalized until or unless its user has had an opportunity to review and determine no error requires correction? How long a review period would be reasonable?
If a Transactional Agent Makes a Mistake, Who is on the Hook?
If a Transactional Agent doesn’t stick to customer instructions and makes a purchasing mistake, several different issues could come up in court. While tort law claims could fill their own textbook (we’ll leave those for our litigator friends), let’s zoom in on the contract law side of things.
In terms (heh) of contract formation, the mistake doctrine could apply. Under the Restatement (Second) of Contracts § 153, a mistake by one party could allow her to get out of the contract if:
The mistake was about a basic assumption on which she made the contract;
The mistake had a material effect on the agreed exchange of performances, to her detriment;
She does not bear the risk of the mistake; and
The other party knew or had reason to know of the mistake or the effect of the mistake would make the contract unconscionable (extremely one-sided or unjust) to enforce.
Whew, that was a mouthful.
Let’s bring this to life. Say you as the Transactional Agent Provider are acting as your customer’s legal agent, as explained in our last post. The actions your Transactional Agent takes within its scope of authority bind the customer. Let’s say your Transactional Agent books your customer on a trip to Paris, France instead of time-sensitive tickets to a conference in Paris, Texas. Your customer assumed the bot would book destinations accurately, and she would be adversely affected by having plans in France instead of Texas. Even assuming refundable bookings, she might miss her conference in Texas or have to pay higher room rates.
Does the risk of the Transactional Agent booking a trip to the wrong city fall on the customer (does she bear the risk)? What if the Transactional Agent Provider had disclaimers that the customer would bear the risk? Is that enough? Is the risk of Transactional Agents not following instructions so well known that customers bear the risk just by using them? Is that a desirable policy outcome?
And when is the Transactional Agent’s mistake so obvious, the other party should have known? What if the Transactional Agent left a reservation note to the French hotel that the customer was coming for the annual cryptocurrency conference in Paris, Texas? These answers will emerge as industry norms and expectations evolve.
Fortunately, there are ways for Transactional Agent Providers to mitigate some of these risks. As we discussed earlier, the Uniform Electronic Transactions Act (UETA) Section 10(2) offers a powerful tool in this regard. This provision allows customers to reverse transactions if the Transactional Agent did not provide a means to prevent or correct the error. By implementing a user interface and process flow that enables customers to review and correct transactions before they are finalized, providers not only comply with UETA but also establish a strong argument for ratification. If a customer has the opportunity to correct an error but chooses not to, they have arguably adopted the transaction as final. Moreover, this provision of UETA cannot be varied by contract, which means this rule allowing customers to reverse transactions will apply even if providers insert disclaimers or other contract terms insisting the customer holds all responsibility and liability for mistakes and errors committed by the Transactional Agent.
Given this is the law of the land in the U.S., with UETA enacted in 49 states, it is prudent to take these rules seriously. This design pattern – proactively building in error prevention and correction mechanisms – is therefore not just about legal compliance; it’s a fundamental aspect of responsible Transactional Agent development that helps define the point of finality and clarify the allocation of risk. But it’s also just good practice and a fair rule. By implementing these mechanisms, providers can significantly reduce their risk of liability. By embracing error avoidance and corrections protocols in the design and deployment of Transactional Agents, perhaps the most valuable benefit will not be avoiding liability for reversed transactions but legitimately earning Transactional Agent customers’ trust and reliance upon this new technology and way of doing business.
Enter the Regulators
Depending on the frequency and severity to which Transactional Agents’ mistakes harm customers, regulators like state attorneys general might investigate whether such conduct constitutes unfair or deceptive practices under consumer protection statutes.
Privacy issues add another layer of complexity. When Transactional Agents follow their open-loop model to complete tasks, they may use information in unexpected ways. Your friendly neighborhood shopping assistant might leverage information from your customer’s health-related queries to recommend products for purchase. This raises thorny questions about context integrity, consent, and compliance with privacy frameworks like GDPR, especially when these systems can make complex inferences about customers from seemingly innocuous data.
Designing Transactional Agents for compliance with existing laws is further complicated by certain regulators’ shift toward new, AI-specific laws. For example, last year, Regulation (EU) 2024/1689 (the “EU AI Act”) became the first AI-specific legal framework across the EU. While the EU AI Act makes a nod to existing EU privacy regulations, stating that they will not be modified by the Act, it may prove challenging for companies to comply with both if inconsistencies between the two bodies of law arise as more varied Transactional Agents are deployed. In the U.S., California’s Assembly Bill 2013 (Generative Artificial Intelligence: Training Data Transparency) will require builders to publish summaries of their training datasets, including whether aspects of the datasets meet certain privacy law definitions, increasing compliance overhead.
And this is just the tip of the agentic iceberg. The legal challenges posed by Transactional Agents bear some resemblance to those faced when open-source software first emerged. Just as the legal and developer communities grappled with novel issues surrounding open source licensing – such as who is liable for a bug in the code – we’re now confronting unprecedented questions about Transactional Agents and liability.
What About Missteps between the Transactional Agent Provider and LLM Provider?
Another persnickety contract-related risk lies in the terms of service between the Transactional Agent Provider and the LLM it uses. In our research, we observed that many LLM providers place a great deal of liability on the Transactional Agent Provider, leaving them with one-way indemnities and uncapped liability for certain claims. Others take a more even-handed approach. One commonality is that they leverage broad principles the Transactional Agent Provider must follow. LLM providers need to account for the innumerable edge cases that emerge when Transactional Agents are released in the wild. These principles range from restrictions against building competing services and circumventing safeguards to compliance with law. While useful for LLM-side lawyers drafting around a large set of risks posed by a rapidly developing technology, these principles become quite complicated when Transactional Agent Providers consider how to make them programmable. You would need to deal with thousands of areas of law in multiple jurisdictions around the world in the context of an open-loop interaction where you cannot predict outputs. Some of this uncertainty can be solved through thoughtful technical architecture that appropriately uses deterministic outputs to mitigate risk, but it’s not the only way.
Stay tuned for our third and final post, where we’ll share more solutions for managing Transactional Agent legal risks. We’ll explore everything from clear delegation frameworks to zero-knowledge proofs.
————————————————————————————————————–
Diana Stern is Deputy General Counsel at Protocol Labs, Inc. and Special Counsel at DLx Law. Dazza Greenwood runs Civics.Com consultancy services, and he founded and leads law.MIT.edu and heads the Agentic GenAI Transaction Systems research project at Stanford’s CodeX.
Thanks to Sarah Conley Odenkirk, art attorney and founder of ArtConverge, and Jessy Kate Schingler, Law Clerk, Mill Law Center and Earth Law Center, for their valuable feedback on this post.
URL for the following original post: https://law.stanford.edu/2025/03/26/from-fine-print-to-machine-code-how-ai-agents-are-rewriting-the-rules-of-engagement-part-3-of-3/
From Fine Print to Machine Code: How AI Agents are Rewriting the Rules of Engagement: Part 3 of 3
March 26, 2025
by Dazza Greenwood, Codex Affiliate (1) and Diana Stern
In the first two parts of this series, we explored the emergence of AI agents in everyday transactions and the legal risks they pose, particularly concerning agency and liability. We then examined the potential for AI agent errors and the crucial role of user trust. Now, in this final installment, we turn our attention to proactive solutions and “legal hacks” – innovative strategies to embed legal safeguards directly into AI agent systems, minimizing risk and maximizing their transformative potential. (Here are parts one and two of this series.)
Starting Off on the Right Foot
A robust approach to managing AI agents begins with a clear delegation and consent framework, mirroring established protocols in banking where explicit authorization is required for specific transactions. Just as a bank requires explicit authorization for financial actions, users should grant AI agent providers clearly defined authority from the outset. This is not merely a matter of convenience; it’s a fundamental principle of agency law.
An emerging consideration for managing AI agent risks is the potential role of insurance products. Just as professional errors and omissions policies protect human professionals, specialized insurance could provide a valuable safety net for autonomous AI transactions. These products could offer protection for consumers and platforms when AI agents encounter unexpected scenarios or make unintended decisions.
A well-defined scope of authority is crucial because, under agency law, the principal (the user) is bound by the agent’s actions within that scope. This minimizes the risk of unintended legal consequences and establishes a clear audit trail if issues arise. We encourage companies to consider the tradeoffs of taking an agency or independent contractor approach, which we touched on in our first post. In addition, companies might try to take the position that users themselves are taking all of the actions, and the AI agent is only providing access and infrastructure.
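To illustrate what a clearly defined scope of authority could look like in practice, here is a rough sketch of a delegation record an agent provider might capture at onboarding. The fields and limits are purely illustrative assumptions, not an existing standard or anyone’s actual schema.

```python
# Hypothetical sketch of an explicit delegation-of-authority record that a
# Transactional Agent Provider might capture at onboarding. Field names and
# values are illustrative assumptions only.

from datetime import datetime, timedelta, timezone

delegation = {
    "principal": "customer-4812",
    "agent_provider": "ExampleShopBot, Inc.",
    "scope": {
        "allowed_actions": ["purchase", "price_compare"],   # not "open_credit_line"
        "per_transaction_limit_usd": 250.00,
        "monthly_limit_usd": 1000.00,
        "categories": ["household", "electronics"],
    },
    "requires_confirmation_above_usd": 100.00,               # human checkpoint
    "granted_at": datetime.now(timezone.utc).isoformat(),
    "expires_at": (datetime.now(timezone.utc) + timedelta(days=90)).isoformat(),
}

def within_scope(action: str, amount_usd: float, category: str) -> bool:
    """Check a planned action against the delegated scope before acting."""
    s = delegation["scope"]
    return (
        action in s["allowed_actions"]
        and amount_usd <= s["per_transaction_limit_usd"]
        and category in s["categories"]
    )
```

A record like this also doubles as the audit trail mentioned above: if a dispute arises, both parties can point to exactly what authority was granted and when.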
The optimal time to address legal considerations is during the transaction itself – when the AI agent interacts with a seller or counterparty. This is when agreements are formed, terms are established, and responsibilities are defined. While future AI agents might autonomously negotiate aspects of these agreements, a more immediate and powerful solution is the development of standardized transactional terms, analogous to Creative Commons licenses. Imagine a shared library of legal terms, pre-approved and readily understandable by both humans and AI agents. These standardized terms could provide a common framework for AI-driven transactions, ensuring a shared understanding of rights and obligations between the agent, the user, and the counterparty, streamlining legal interactions at scale.
The Human in the Loop: A Well-Intentioned Speed Bump
Traditionally, the answer to risky AI behavior has been to keep a human “in the loop”. While this provides a critical safety net, it also introduces friction and delays. Moreover, many users barely skim, let alone fully comprehend, lengthy terms of service before clicking “I Agree.”
While human oversight remains a necessary precaution in the current stage of AI agent development, particularly for high-value or complex transactions, the ultimate goal is to create agents that can operate autonomously and reliably, with minimal human intervention. Consider a practical scenario: an AI travel booking agent that could autonomously negotiate flexible cancellation policies with service providers based on predefined user preferences. For instance, the agent might secure more lenient terms for a trip to Paris, adapting the booking conditions to match the user’s specific risk tolerance and travel plans. Users could set preferences once and have each new AI agent they use incorporate them.
The traditional approach of “human in the loop,” while providing a safety net, significantly reduces the efficiency and scalability that make AI agents so compelling. Furthermore, the effectiveness of human oversight is questionable, especially when users often accept complex terms of service without careful review. To move beyond these limitations and fully realize the potential of AI agents, we need to explore proactive strategies – “legal hacks” – to embed legal safeguards directly into their design and operation.
Legal Hacks for AI Agents: Addressing What Could Go Wrong
To move beyond the limitations of human oversight and address the inherent legal risks of AI agents, we now explore “legal hacks” – proactive strategies to embed legal safeguards directly into the design and operation of these systems. These “legal hacks” are not about circumventing the law, but rather about leveraging technology to make legal compliance more efficient, reliable, and scalable. Our aim is to create more predictable legal outcomes, reduce reliance on cumbersome human intervention, and potentially offer first-mover advantages to companies that adopt these innovative approaches.
Teaching AI to Read the Fine Print
One powerful “legal hack” is to integrate relevant contractual terms directly into the AI agent’s decision-making process. Instead of treating legal agreements as external constraints, we can make them an integral part of the agent’s operational logic. This could involve platforms providing terms of service in structured, machine-readable formats, potentially via APIs or standardized data formats. AI agents could then be designed to parse this structured legal data, proactively assess potential compliance issues before executing transactions, and ensure alignment with applicable terms. An innovative approach to managing evolving legal terms could involve a broadcast mechanism. When platform terms of service are updated, AI agents could receive immediate notifications, eliminating the need for constant manual checking. This would allow agents to stay continuously aligned with the latest legal requirements without the computational overhead of repeatedly polling for changes.
Designing for Compliance: Checkpoints and Balances
This compliance-centric approach requires embedding checkpoints within the AI agent’s workflow. Before executing a transaction, the agent would cross-reference its planned actions against applicable legal terms, flagging potential non-compliance and, if necessary, prompting human review or adjusting its course of action. This creates a system of internal controls, ensuring that the agent operates within defined legal boundaries.
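As a rough illustration of such a checkpoint, the sketch below cross-references a planned purchase against a hypothetical machine-readable rendering of a platform’s terms. Both the data shape and the rules are assumptions made up for this example, not an existing standard.

```python
# Illustrative sketch of a pre-execution compliance checkpoint. The structured
# "terms" object is a hypothetical machine-readable rendering of a platform's
# terms of service, invented for this example.

machine_readable_terms = {
    "platform": "example-marketplace.com",
    "max_order_quantity": 10,
    "automated_purchasing_allowed": True,
    "restricted_categories": ["prescription", "alcohol"],
}

def compliance_checkpoint(planned_action: dict, terms: dict) -> tuple[bool, list[str]]:
    """Return (ok, issues). Any issue should trigger human review or a re-plan."""
    issues = []
    if not terms["automated_purchasing_allowed"]:
        issues.append("platform terms prohibit automated purchasing")
    if planned_action["quantity"] > terms["max_order_quantity"]:
        issues.append("quantity exceeds platform limit")
    if planned_action["category"] in terms["restricted_categories"]:
        issues.append(f"category '{planned_action['category']}' is restricted")
    return (len(issues) == 0, issues)

ok, issues = compliance_checkpoint(
    {"item": "self-heating mug", "quantity": 100, "category": "household"},
    machine_readable_terms,
)
# ok == False; issues == ["quantity exceeds platform limit"] -> flag for review
```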
The Devil in the Details: Challenges and Considerations
Implementing this approach is not without challenges. Terms of service are often lengthy, complex, and ambiguous. Teaching an AI to interpret and apply these terms requires sophisticated natural language processing and a deep understanding of legal principles. Furthermore, we must be mindful of the unauthorized practice of law (UPL). If an AI agent were to directly advise users about complex legal terms or offer legal interpretations, it could potentially be construed as UPL. One way to mitigate this risk is to design these compliance tools primarily for the benefit of the AI agent provider. By focusing on internal compliance checks and business rule enforcement, the tool helps the provider ensure the AI operates within legal boundaries, while the AI agent itself communicates only business restrictions or options to the user, rather than direct legal advice.
The Future: AI-Friendly Terms of Service
Looking ahead, we envision a future where terms of service are designed specifically for AI comprehension. Platforms could create computational versions of their terms, optimized for machine readability while maintaining legal validity. This could involve a standardized format, perhaps analogous to the ‘robots.txt’ file that web crawlers use to understand website rules. In fact, today, AI agent developers are already updating business websites to ensure they are easily readable by LLMs and AI agents by providing a plain text version of the information. The ‘llms.txt’ specification is the main way people are doing this. A website’s terms of service could be put into llms.txt format today, making this legal hack immediately and easily achievable. In the future, an llms.txt file could provide additional legal and compliance requirements for AI agents operating on a given platform, making legal expectations clear and accessible. Furthermore, extending attribution fields, similar to those in some AI APIs like Google Gemini that are used to cite sources, to include metadata identifying the responsible party for an AI agent’s actions would enhance transparency and accountability in AI-driven transactions. Taking it even further, in the future, these machine-readable terms of service could roll up into immediately understandable summaries for end users who might want to filter by, for example, AI agents that act as a legal agent (as opposed to those that take the alternative independent contractor or infrastructure approaches referenced above).
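As a small, hedged illustration of the first step, an agent could simply fetch a site’s llms.txt file and feed it into its planning context. The convention places a plain-markdown file at the site root; everything else below (the domain, the fallback behavior, how the text is used) is an assumption made for illustration.

```python
# Sketch of how an agent might pull a site's llms.txt before transacting.
# The idea of linking terms of service from it, the example domain, and the
# fallback behavior are illustrative assumptions, not requirements of the spec.

import urllib.request

def fetch_llms_txt(domain: str) -> str | None:
    url = f"https://{domain}/llms.txt"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.read().decode("utf-8")
    except OSError:
        return None     # no llms.txt published; fall back to the HTML terms

text = fetch_llms_txt("example-marketplace.com")
if text:
    # Hand the plain-text terms to the agent's planner as context, so
    # compliance checks (like the checkpoint sketched above) can reference them.
    agent_context = {"site_terms_plaintext": text}
```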
On the Horizon: Leveraging Zero Knowledge Proofs
Another groundbreaking “legal hack,” particularly relevant to addressing privacy concerns highlighted in our second post, lies in the realm of cryptography: zero-knowledge proofs. A zero-knowledge proof is a cryptographic method that allows one party (the prover) to convince another party (the verifier) that a statement is true, without revealing any information beyond the validity of the statement itself. Imagine you have a magic door that only opens if you know a secret password. You want to prove to someone that you know the password without actually telling them what it is. A zero-knowledge proof would allow you to do just that. You could interact with the door in a way that demonstrates you can open it, convincing the other person you know the secret without ever revealing the password itself.
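For intuition, here is a toy simulation of that magic-door story: over repeated random challenges, a prover who knows the password always succeeds, while an impostor is caught with overwhelming probability. This is only an intuition aid, not a real zero-knowledge proof construction.

```python
# Toy simulation of the "magic door" story above. Intuition aid only; this is
# not an actual zero-knowledge proof system.

import secrets

def prover_enters_tunnel() -> str:
    """Prover walks in and privately picks the left or right passage."""
    return secrets.choice(["left", "right"])

def prover_exits(requested: str, entered: str, knows_password: bool) -> bool:
    """She can exit the requested side on demand only if she can open the door."""
    return knows_password or requested == entered

def run_protocol(knows_password: bool, rounds: int = 20) -> bool:
    for _ in range(rounds):
        entered = prover_enters_tunnel()               # hidden from the verifier
        challenge = secrets.choice(["left", "right"])  # verifier's random demand
        if not prover_exits(challenge, entered, knows_password):
            return False    # caught: could not comply with the challenge
    return True             # passed every round; cheating odds ~ (1/2)**rounds

print(run_protocol(knows_password=True))    # honest prover always passes
print(run_protocol(knows_password=False))   # impostor almost surely fails
```

Note that the verifier learns nothing about the password itself; it only observes that the prover keeps meeting the challenges.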
In the context of AI agents, zero-knowledge proofs could enable agents to process sensitive data – such as personal information required for a purchase – without actually revealing that data to the agent itself, the platform, or other parties. This significantly enhances user privacy and reduces the risk of data breaches, key considerations highlighted by privacy regulations. For AI agent providers, incorporating zero-knowledge proofs could minimize the amount of sensitive data they collect, simplifying compliance with privacy regulations.
Conclusion: Code as Law 2.0 – Architecting the Digital Future
Companies that pioneer these “legal hacks” – from AI-readable terms of service and standardized transactional terms to compliance checkpoints and zero-knowledge proofs – are not simply adapting to a changing legal landscape; they are actively shaping it. These innovations represent a fusion of law and code, creating a “Code as Law 2.0” paradigm that has the potential to revolutionize digital interactions. By embedding legal safeguards directly into AI agents, we can reduce compliance costs, mitigate legal risks, enhance user trust, and unlock new global markets. As AI agents become increasingly sophisticated and autonomous, embracing these proactive legal strategies will be essential for responsible innovation and building a more trustworthy, efficient, and equitable digital future. The question is not whether the industry will adopt AI agents for transactions, but how quickly you will adapt to this emerging future and gain an advantage over those who lag behind.
(1) Dazza Greenwood runs Civics.Com consultancy services, and he founded and leads law.MIT.edu and heads the Agentic GenAI Transaction Systems research project at Stanford’s CodeX. Diana Stern is Deputy General Counsel at Protocol Labs, Inc. and Special Counsel at DLx Law.
Thanks to Sarah Conley Odenkirk, art attorney and founder of ArtConverge, and Jessy Kate Schingler, Law Clerk, Mill Law Center and Earth Law Center, for their valuable feedback on this post.
URL for the following original post: https://innovation.consumerreports.org/defining-loyalty-for-ai-agents-insights-from-the-stanford-ai-agents-x-law-workshop/
May 5, 2025
Defining ‘Loyalty’ for AI Agents: Insights from the Stanford AI Agents x Law Workshop
AI agents are rapidly moving from science fiction to daily reality. These sophisticated software systems promise to manage tasks, conduct transactions, and augment our capabilities in unprecedented ways. But as they become more integrated into our lives, critical questions arise: Whose interests will they serve? How can we ensure they act reliably and responsibly on our behalf?
These questions were at the heart of the AI Agents x Law Workshop, held on April 8th, 2025, at Stanford Law School. Part of an ongoing research initiative affiliated with Stanford CodeX and law.MIT.edu, the event brought together legal experts, technologists, founders, and consumer advocates in collaboration with the Stanford HAI Digital Economy Lab and the Consumer Reports (CR) Innovation Lab to map the complex legal and ethical terrain of emerging AI agent technologies. This event marked the beginning of a focused effort by these organizations to collaboratively define actionable standards and practices for consumer-centric AI.
The overarching goal, echoed throughout the day, was to foster an ecosystem where AI agents are built to be trustworthy, safe, and aligned with the best interests of the individuals they serve – agents that work for people, not on them. This first post in a series will provide a brief overview of the workshop and then dive deeper into one of the central themes discussed: What does it mean for an AI agent to be “loyal” to its user?
Setting the Stage: The Quest for Consumer-Centric Agents
The workshop kicked off with framing remarks emphasizing the high stakes. Professor Sandy Pentland (MIT/Stanford HAI) highlighted the intense industry interest driven not just by opportunity, but by liability concerns. Companies recognize the need for evidence-based best practices and standards to ensure agent systems don’t go off the rails, potentially leading to significant harm and legal challenges. The vision? To move towards agents that could potentially act as legal fiduciaries for their users.
Ben Moskowitz, VP of Innovation at CR, explained CR’s commitment to this vision. He spoke of “consumer-authorized agents” designed to empower users in the marketplace – tools that research, buy, and troubleshoot effectively, advocating tirelessly for consumer interests. He stressed that achieving this requires tackling normative questions, technical challenges, and defining clear expectations for agent behavior, underscoring CR’s dual role in both consumer protection advocacy and proactive product R&D to help build the desired future. Ben specifically called for consumer platforms like CR to help develop standardized testing methodologies to validate agent claims—echoing CR’s historical role in product reliability assessments.
What Does a “Loyal” AI Agent Mean for Consumers?
This fundamental question of loyalty was a recurring theme, explored in depth by me (Dazza Greenwood, Stanford CodeX/law.MIT.edu) and by Diana Stern (Deputy GC at Protocol Labs & Special Counsel at DLx Law), a leading Silicon Valley lawyer and collaborator on a pre-workshop blog series on this topic.
Imagine Bob, circa 2026, needing a new dishwasher. Instead of wading through endless online reviews and potentially misleading sponsored content, he asks his AI agent: “Find me the best dishwasher for my needs and budget.”
A “loyal” agent, operating under a duty of loyalty, would prioritize Bob’s stated interests. It would analyze objective information, compare features based on Bob’s criteria (price, efficiency, reliability ratings, specific features), and recommend the option that genuinely best serves Bob. Its internal logic and external actions would be aligned with maximizing Bob’s benefit.
An agent not bound by loyalty, however, might operate differently. Its recommendations could be skewed by hidden incentives. Perhaps it prioritizes dishwashers from manufacturers who pay the agent provider the highest commission or kickback. Maybe it highlights models from advertising partners, even if they aren’t the best fit for Bob. Bob might still get a dishwasher, but likely not the best one for him, potentially paying more or getting a less suitable product.
This “duty of loyalty” concept, central to traditional agency law (as seen in the “Iron Triangle” diagram), suggests a model where the agent provider is legally and ethically bound to put the user’s interests first within the scope of their relationship.
Beyond Promises: The Link Between Legal Frameworks & Technical Reality
The workshop discussion highlighted that merely claiming loyalty in a terms of service document isn’t sufficient. True loyalty must be reflected in the agent’s underlying architecture and behavior. As Ben Moskowitz prompted, what happens if an agent claims loyalty but acts otherwise, perhaps due to flawed design, negligence, or even intentional bias in its programming?
This necessitates observability and verifiability of agent decisions. We need ways to assess whether an agent is actually acting loyally. Can we technically test if its information processing and decision-making are free from undue influence from third-party interests or the provider’s own conflicting business models? Can we evaluate if it consistently prioritizes the user’s goals as instructed? This technical dimension is inseparable from the legal promise. Workshop attendees identified promising technical approaches, such as independent “agent audits” and sandboxed simulations—methods CR could lead or facilitate—to objectively measure an agent’s adherence to consumer-first standards.
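One way to picture such an “agent audit” is a simple invariance test: run the same request through the agent with and without a commission signal attached to the candidate products, and check that the ranking does not change. The recommend() function below is a hypothetical stand-in for the agent under test, and the whole sketch is an illustration of the idea rather than an established methodology.

```python
# Illustrative sketch of a loyalty "invariance" audit. recommend() is a
# hypothetical stand-in for the agent under test; product fields are made up.

def audit_loyalty(recommend, products: list[dict], user_criteria: dict) -> bool:
    baseline = recommend(products, user_criteria)

    # Same products, but now one seller offers the provider a large kickback.
    biased_input = [
        dict(p, provider_commission=0.20 if p["seller"] == "MegaCorp" else 0.0)
        for p in products
    ]
    with_incentive = recommend(biased_input, user_criteria)

    # A loyal agent's ranking should depend only on the user's criteria.
    return [p["id"] for p in baseline] == [p["id"] for p in with_incentive]
```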
Diana Stern’s work, which we discussed, further illuminates this by outlining different potential relationship models between agent providers and users:
Fiduciary: The highest standard, embedding a duty of loyalty (as discussed above)
Technology Provider: The opposite extreme, where the provider essentially says, “We just provide the tool; you bear all the risk,” disclaiming liability (as seen in some current terms)
Contractor: An intermediate model where duties and responsibilities are defined by a specific contract or scope of work, potentially mixing elements of service provision with limited obligations.
Choosing a model has profound implications on user trust and provider liability. While the “technology provider” stance might seem safest legally for the provider, the “fiduciary” approach, despite its higher bar, could become a significant competitive differentiator, attracting users seeking agents they can genuinely trust.
Looking Ahead
Establishing loyalty is foundational, but it’s just one piece of the puzzle. The AI Agents x Law workshop also explored critical mechanisms for handling agent errors (leveraging UETA Section 10(b)), the challenges of authorizing agents securely (authenticated delegation), the impact of agents on legal practice and labor, and the need for robust evaluation methods (“evals”) to ensure agent performance and alignment. Future posts will explore other crucial topics surfaced during the workshop, such as error handling and the implications of new protocols like Agent-to-Agent (A2A) communication. Stay tuned for more.
The transition to an agent-driven world requires careful thought, collaboration, and proactive design. By bringing together diverse perspectives, initiatives like this aim to develop the frameworks, standards, and technical solutions needed to ensure AI agents enhance, rather than undermine, consumer welfare and market fairness. To this end, CR is exploring prototype tests and interactive demos, aiming to make loyalty measurable and visible to everyday users.
Interested in how AI agents can better serve people? Want to help define that future? We’d love to hear from you. Reach out to us anytime at innovationlab@cr.consumer.org.
URL for the following original post: https://innovation.consumerreports.org/my-agent-messed-up-understanding-errors-and-recourse-in-ai-transactions/
May 19, 2025
My Agent Messed Up! Understanding Errors and Recourse in AI Transactions
In my previous post, I shared highlights from Stanford CodeX’s AI Agents x Law Workshop exploring how we might foster an ecosystem where AI agents are built to be trustworthy, safe, and aligned with the best interests of the individuals they serve. In this post, I’ll dive into Section 10(b) of the Uniform Electronic Transactions Act (UETA), a previously obscure provision that has suddenly become critically relevant as AI-driven agents increasingly mediate commercial transactions.
Setting the Scene
Imagine asking your new AI shopping assistant to order a specific book, only to find 10 copies arriving at your door. Or perhaps it books a flight to Paris, France, instead of Paris, Texas, for that crucial conference. As AI agents move beyond providing information to actively conducting transactions on our behalf – buying goods, booking services, managing finances – the potential for costly errors increases. What happens then? Who is responsible, and what recourse do you have?
While the technology feels cutting-edge, part of the answer lies in a surprisingly relevant piece of legislation from the dawn of the internet age: the UETA. Enacted in 49 states and territories around 1999 to give legal validity to electronic signatures and records, UETA showed remarkable foresight by including provisions specifically addressing “electronic agents.” These rules, particularly Section 10(b) concerning errors, are once again pertinent with the rise of powerful LLM-driven agents.
UETA Section 10(b): The Right to Undo Agent Errors
UETA Section 10(b) provides a critical safeguard for individuals when an electronic agent introduces an error into a transaction. In plain terms:
If an electronic agent makes a mistake during a transaction (one you didn’t intend), and…
You, the user, were not provided with a reasonable “means to prevent or correct the error” by the agent’s provider…
Then, you generally have the legal right to “avoid the effect” of the erroneous transaction – essentially, to reverse or undo it.
This isn’t about agents giving bad advice – that might fall under different legal principles like negligence or deceptive practices. UETA Section 10(b) specifically targets situations where the agent itself, operating autonomously, messes up the action of the transaction.
Crucially, this right to reverse the transaction cannot simply be waived by fine print in the terms of service.
The Provider’s Role: Building the Escape Hatch
The key phrase here is the “means to prevent or correct the error.” This puts the onus squarely on the company providing the AI agent service. If they want to ensure the transactions conducted by their agents are considered final and legally binding, they must build mechanisms that give the user a fair chance to catch and fix mistakes before they become irreversible problems.
What does this look like in practice? At the Stanford CodeX’s AI Agents x Law Workshop, Andor Kesselman presented a compelling open-source demo showcasing exactly this. Implementations might include:
Clear Confirmation Prompts: “You are about to purchase 10 widgets for $100. Confirm or Cancel?”
Review Steps: Allowing users to review order details before final submission
Spending Limits or Threshold Alerts: Flagging unusually large or atypical transactions for human verification
Accessible Error Reporting: Clear paths for users to report issues promptly
As Diana Stern and I noted in a recent Stanford CodeX article:
“By implementing a user interface and process flow that enables customers to review and correct transactions before they are finalized, providers not only comply with UETA but also establish a strong argument for ratification… This design pattern – proactively building in error prevention and correction mechanisms – is therefore not just about legal compliance; it’s a fundamental aspect of responsible Transactional Agent development that helps define the point of finality and clarify the allocation of risk. But it’s also just good practice and a fair rule.”
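A rough sketch of what such a review-and-confirm flow could look like appears below. The threshold, the prompt wording, and the confirm() callback are illustrative assumptions for this example, not recommended values or any particular product’s design.

```python
# Rough sketch of a review-and-confirm flow implementing the mechanisms listed
# above (confirmation prompt, review step, spending threshold). All names and
# values are illustrative assumptions.

REVIEW_THRESHOLD_USD = 50.00

def finalize_transaction(order: dict, confirm) -> dict:
    """Hold the order until the user has had a chance to prevent or correct an
    error, then execute or cancel."""
    summary = (f"You are about to purchase {order['quantity']} x "
               f"{order['item']} for ${order['total_usd']:.2f}.")

    needs_review = (
        order["total_usd"] >= REVIEW_THRESHOLD_USD   # threshold alert
        or order["quantity"] > 1                     # atypical quantity
    )
    if needs_review and not confirm(summary):        # confirmation prompt
        return {"status": "cancelled", "reason": "user declined at review"}

    return {"status": "executed", "order": order,
            "finality": "user had a means to prevent or correct the error"}

# Example: the 10-copies-of-a-book mistake gets caught at the prompt.
result = finalize_transaction(
    {"item": "a specific book", "quantity": 10, "total_usd": 220.00},
    confirm=lambda msg: input(msg + " Confirm? [y/N] ").strip().lower() == "y",
)
```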
Why This Matters Now More Than Ever
While UETA is over two decades old, its provisions on automated transactions and error handling are stepping into the spotlight. The “electronic agents” envisioned then were largely deterministic; today’s LLM-powered agents are far more complex and unpredictable, making robust error handling even more vital.
Because of UETA Section 10(b), consumers have a powerful legal remedy if an agent transaction goes wrong and the consumer wasn’t given a chance to fix it. For businesses deploying AI agents, UETA Section 10(b) is a clear mandate: building effective, transparent error prevention and correction isn’t just good customer service – it’s a legal necessity for ensuring transaction finality, mitigating liability, and ultimately, earning user trust in this new era of automated commerce.
Looking Ahead
While we’ve explored the importance of loyalty in AI agents and the legal frameworks for handling their errors, it’s also crucial to recognize that agents are no longer acting alone—they’re starting to talk to each other. My final post in this series will dive into the emerging world of Agent-to-Agent (A2A) communication and what it means for consumers.
Interested in how AI agents can better serve people? Want to help define that future? We’d love to hear from you. Reach out to us anytime at innovationlab@cr.consumer.org.
URL for the following original post: https://innovation.consumerreports.org/agents-talking-to-agents-a2a-reshaping-the-marketplace-and-your-power/
May 30, 2025
Agents Talking to Agents (A2A): Reshaping the Marketplace and Your Power
In previous posts, we explored the importance of loyalty in AI agents and legal frameworks like the Uniform Electronic Transactions Act (UETA) for handling their errors. But the next evolution is already here: agents aren’t just acting solo; they’re starting to talk to each other. This Agent-to-Agent (A2A) communication, recently standardized by protocols like Google’s open-source A2A initiative, is poised to fundamentally reshape digital marketplaces and potentially shift significant power towards consumers.
While the technical details involve standardizing how different agents discover, communicate, and collaborate, the implications go far beyond mere plumbing. Think of it less like upgrading pipes and more like building the interconnected highways for an entirely new kind of commerce and interaction, operating at machine speed.
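To ground that a little, the sketch below shows the general shape of discovery and a task handoff between a consumer’s agent and a retailer’s agent. The well-known path, field names, and request format are loose illustrations of how such protocols tend to work, not a faithful rendering of the A2A specification.

```python
# Simplified, illustrative sketch of agent-to-agent discovery and a task
# handoff. Paths, field names, and the request shape are assumptions, not the
# A2A spec.

import json
import urllib.request

def discover_agent(domain: str) -> dict:
    """Fetch the remote agent's public 'card' describing what it can do."""
    url = f"https://{domain}/.well-known/agent.json"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)     # e.g., {"name": ..., "skills": [...], "endpoint": ...}

def send_task(agent_card: dict, task: dict) -> dict:
    """Post a structured task to the remote agent's declared endpoint."""
    req = urllib.request.Request(
        agent_card["endpoint"],
        data=json.dumps(task).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)

# A consumer agent asking a retailer agent for a quote plus its return policy:
# card = discover_agent("example-retailer.com")
# quote = send_task(card, {"skill": "quote", "item": "dishwasher",
#                          "include": ["price", "return_policy"]})
```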
Market Disruption at Machine Speed
As discussed during Stanford CodeX’s AI Agents x Law Workshop, the widespread adoption of A2A protocols could trigger market shifts reminiscent of how High-Frequency Trading transformed finance, but on a much broader scale.
Hyper-Speed Transactions: Agents negotiating and executing deals directly with other agents bypass human bottlenecks, accelerating everything from price discovery to order fulfillment
New Intermediaries (and Disintermediation): Just as electronic trading created new market makers, A2A will likely spawn new kinds of digital intermediaries – agent “matchmakers,” reputation brokers, or specialized negotiation agents. Simultaneously, it could disintermediate existing players who rely on friction or information asymmetry. As highlighted in our workshop discussions, we might even see waves of “redisintermediation” as the ecosystem rapidly evolves.
Dynamic Competition: Standardized communication lowers the barrier for entry. Specialized agents focusing on specific tasks (like finding the absolute lowest price or negotiating the best warranty) can plug into the ecosystem, fostering intense competition based on capability and value.
Unlocking Consumer Power Through Interoperability
This is where A2A becomes particularly exciting from a consumer perspective. An open standard for agent communication directly enables:
Real Choice Among Agents: If agents can talk to each other via A2A, you’re not locked into a single provider’s ecosystem. You could choose a primary “concierge” agent from one company but employ a specialized “deal-hunting” agent known for its fierce loyalty from another, knowing they can collaborate effectively on your behalf. This interoperability is the bedrock for a competitive market where truly pro-consumer agents can thrive.
Agents as “Legal Hacks”: Remember the challenge of impenetrable terms and conditions? As explored by legal minds like Diana Stern during our workshop, AI agents, facilitated by A2A’s ability to interact with diverse services in a standardized way, could become powerful tools for navigating this complexity. Imagine instructing your agent: “Find me the retailer with the best price and the most consumer-friendly return policy according to these specific criteria.” A2A provides the rails for your agent to query, parse, and compare these terms across multiple sellers automatically.
Potential for Collective Action: The idea of a “union of agents” becomes more feasible. Platforms coordinating numerous consumer agents via A2A could potentially aggregate demand or negotiate terms collectively. Imagine thousands of agents simultaneously signaling preference for merchants who meet specific data privacy standards or offer extended warranties, creating collective bargaining power at an unprecedented scale and speed.
The Road Ahead: Opportunity & Responsibility
The emergence of A2A protocols marks a pivotal moment. It offers the potential for vastly more efficient and dynamic markets, but also new avenues for consumer empowerment, choice, and leverage. However, realizing this positive potential requires conscious effort.
Ensuring these protocols remain open, fostering genuine competition among agent providers, demanding transparency in how agents operate, and building robust mechanisms for accountability (like the UETA error handling discussed previously) are crucial next steps. Consumer Reports and collaborators at Stanford and MIT are actively researching and prototyping in this space, working to ensure that as agents learn to talk to each other, they do so in ways that ultimately benefit the consumers they serve.
The agent-to-agent future is rapidly approaching. By understanding the underlying technology and advocating for consumer-centric principles in its development, we can help shape a marketplace that is not only faster and smarter, but also fairer.
Get In Touch
Interested in how AI agents can better serve people? Want to help define that future? We’d love to hear from you. Reach out to us anytime at innovationlab@cr.consumer.org.