On a "Third Way" for AI Regulation
Legislative Testimony on AI Regulatory Approaches and the Rise of AI Agents
Earlier today I appeared before the Wyoming Legislature’s Joint Select Committee on Blockchain, Financial Technology & Digital Innovation Technology to outline a practical path for governing artificial-intelligence systems without throttling innovation.
In my testimony, I presented California's SB 813 as a potential "third way" for AI regulation—a middle path between heavy-handed restrictions and complete absence of oversight. This approach creates voluntary certification through multistakeholder regulatory organizations (MROs) that can verify AI systems meet safety and reliability standards. Certified systems gain a rebuttable presumption of "reasonable care" in tort cases—creating a powerful incentive for responsible innovation without mandating specific technical approaches.
The economic implications of AI agent systems formed a central focus of our discussion. These autonomous AI systems are already transforming software engineering, legal services, and commercial transactions. Companies like Perplexity and Amazon are deploying AI agents that can conduct transactions and make purchases on users' behalf, while Stripe now offers tools for businesses to authorize AI agents to make direct payments.
Some estimates put the economic boost at 3-5% of GDP by 2030, yet the same technology that scales productivity can displace jobs or amplify malicious actors. During questioning, I discussed authenticated delegation protocols that tie every agent action to a verifiable human or legal entity, limiting liability drift and curbing fraud. I also urged pairing flexible certification with robust up-skilling programs rather than blunt “human-in-the-loop” mandates that freeze scalability.
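To make the delegation idea concrete, here is a minimal sketch of binding an agent's actions to a named principal. It assumes a shared signing key between the delegating party and the verifier; the function names, scope labels, and HMAC scheme are illustrative (a production protocol would use public-key credentials and an established token standard), not a description of any deployed system:

```python
import hashlib
import hmac
import json

def issue_delegation(secret: bytes, principal: str, agent_id: str, scope: list[str]) -> dict:
    """Create a delegation token binding an agent to a verifiable principal."""
    claims = {"principal": principal, "agent": agent_id, "scope": scope}
    payload = json.dumps(claims, sort_keys=True).encode()
    tag = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "tag": tag}

def verify_action(secret: bytes, token: dict, action: str) -> bool:
    """Accept an agent action only if the token is authentic and in scope."""
    payload = json.dumps(token["claims"], sort_keys=True).encode()
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["tag"]):
        return False  # forged or tampered token
    return action in token["claims"]["scope"]
```

The point of the sketch is the liability chain: every action either traces back to a principal who signed the delegation, or it is rejected.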
What's particularly striking is how quickly these technologies are moving from research concepts to everyday deployment. When I first testified to this committee on generative AI, many of these capabilities seemed theoretical. Today, they're commercially available. This rapid evolution suggests we need frameworks that can adapt as quickly as the technology while providing necessary guardrails around high-risk applications.
The committee demonstrated a sophisticated understanding of the challenges, asking thoughtful questions about security implications of foreign AI models, intellectual property concerns with training data, and evolving approaches to human oversight requirements. As Senator Rothfuss noted, Wyoming has a tradition of "regulating to enable rather than restrict"—a philosophy perfectly suited to this moment of technological transformation.
I've been honored to work with the Wyoming legislature over several years as they've crafted blockchain legislation and other digital innovation frameworks. Their approach of careful listening, thoughtful questioning, and balanced policy-making continues to serve as a model for how states can navigate technological disruption. I look forward to continuing this important conversation at future hearings as we work toward frameworks that unlock AI's benefits while mitigating potential harms.
May 16, 2025 Update: Further Thoughts on AI Regulation, MROs & a Path to Interstate Co-operation
After I posted my Wyoming testimony on multistakeholder regulatory organizations (MROs), Nancy (Leyes) Myrland left an insightful LinkedIn comment that zeroed in on three issues:
Will state-level guardrails still matter if Washington eventually centralises AI oversight?
How often would a “trustworthy” badge have to be renewed when models evolve daily?
Are California-only guardrails enough, or must other states join for real protection?
I posted a short reply to Nancy on LinkedIn, but the character limit is tight and her questions invite a richer look at both California’s SB 813 and an idea I sketched for the legislature: interstate reciprocity. So let’s go deeper!
Nancy’s questions—answered
1 | Will a future federal regulator make state action moot?
Not at all. SB 813 obliges every MRO to spell out “an approach to interfacing effectively with federal and non-California state authorities”. In American law we repeatedly see innovations flow bottom-up: blue-sky securities laws, driver-licence compacts, the Uniform Commercial Code. States are nimble laboratories; Congress often scales what they prove. A running California MRO framework gives Washington a tested chassis to bolt onto.
2 | How often does “trustworthy” recertification happen?
Model-level triggers. Each MRO plan must define technical thresholds for updates requiring renewed certification. If a developer adds autonomous code-execution or a new multimodal dataset that crosses the line, the certificate pauses until a fresh audit clears it—much like the FDA’s 510(k)/PMA split for medical devices.
MRO-charter clock. An MRO’s own designation lasts three years and can be ripped up sooner if independence erodes, its methods become obsolete, or a certified model causes major harm. Oversight of the overseers updates at least as fast as the tech.
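The trigger logic above is simple to express in code. In this toy sketch the threshold names are hypothetical stand-ins—SB 813 leaves the actual thresholds to each MRO's plan:

```python
# Hypothetical capability changes an MRO plan might treat as recertification
# triggers; the bill itself does not enumerate these.
RECERT_TRIGGERS = {
    "autonomous_code_execution",
    "new_multimodal_dataset",
    "expanded_tool_use",
}

def needs_recertification(update_capabilities: set[str]) -> bool:
    """A model update pauses the certificate if it crosses any threshold."""
    return bool(update_capabilities & RECERT_TRIGGERS)
```

Routine updates sail through; capability jumps trip the wire and send the model back for a fresh audit.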
3 | Are one-state guardrails enough?
SB 813 already covers any AI deployed in California, so most national providers will seek certification. Still, a genuine safety net needs more than one state’s knots. Enter reciprocity.
Expanding the vision: a practical path to interstate AI reciprocity
While SB 813 gives California a robust foundation, legislators in Wyoming (and elsewhere) asked how to spread the benefit without fifty separate audits. The answer I proposed is an interstate reciprocity layer. It is not yet in SB 813; rather, it is a natural extension that lets developers certify once and be recognised in many jurisdictions, while each state keeps the power to yank recognition the minute another state’s protections slip.
4.1 A simple legislative starting-point
To switch reciprocity on, California (or any pioneering state) could add a single sentence to its safe-harbor section. Something like:
“A certificate issued under a substantially equivalent multistakeholder regulatory framework of another state shall confer the same rebuttable presumption, unless the Attorney General determines that framework no longer affords equivalent protections.”
That one clause empowers the AG to recognise outside frameworks and keep a live list of reciprocal states.
4.2 What “substantially equivalent” could mean
The phrase must have teeth. An outside framework would need to meet, at minimum, these pillars:
Comprehensive risk scope: covers CBRN, malign persuasion, autonomy, and exfiltration.
Guaranteed independence: board composition and funding caps that block capture.
Transparency & accountability: public annual reports and decade-long record retention.
Robust enforcement: real-time power to revoke certificates when models drift.
Continuing governmental oversight: periodic review of each MRO by its home-state AG (or equivalent).
Collaborative data-sharing: MOUs so AG offices trade incident reports, best-practice memos, and evolving threat intel in near-real time.
4.3 Making reciprocity work: procedural mechanics
Public registry & dynamic review. California’s AG would publish the recognised-states list; every listing sunsets (say) in three years, forcing re-inspection so standards evolve with the science.
Agile de-recognition. If State X’s MRO weakens or certifies a reckless model, California can strike that state overnight—integrity preserved, no legislative lag.
Interstate compact option. For deeper ties, two or more states could enshrine reciprocity in a compact, driver-licence-style. The Uniform Law Commission could draft model language so Wyoming and New Jersey start from the same page.
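The registry mechanics above—public listing, automatic sunset, and overnight de-recognition—can be sketched as a small data structure. The class name, dates, and three-year figure here are illustrative of the proposal, not statutory text:

```python
from datetime import date, timedelta

# Listings sunset after roughly three years, forcing re-inspection.
SUNSET = timedelta(days=3 * 365)

class ReciprocityRegistry:
    """Toy model of an AG-maintained list of recognised state frameworks."""

    def __init__(self) -> None:
        self._listings: dict[str, date] = {}  # state -> date of recognition

    def recognise(self, state: str, on: date) -> None:
        """Add a state whose framework is found substantially equivalent."""
        self._listings[state] = on

    def derecognise(self, state: str) -> None:
        """Agile de-recognition: strike a backsliding state immediately."""
        self._listings.pop(state, None)

    def is_recognised(self, state: str, today: date) -> bool:
        """A listing counts only if present and not past its sunset."""
        start = self._listings.get(state)
        return start is not None and today - start < SUNSET
```

The design choice to make sunset automatic matters: a state stays listed only through affirmative renewal, so standards cannot quietly ossify.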
4.4 Why stakeholders win
Developers: one dossier, many states—lower friction, stronger incentive to certify.
States: pooled expertise and shared intel, yet full power to slam the door if another jurisdiction backslides.
Public: consistent guardrails and quicker access to vetted AI.
Nation: a bottom-up baseline forms while Congress deliberates—innovation and safety advance together.
4.5 Guardrails & challenges
Reciprocity must never spark a race to the bottom. That is why listings sunset and why de-recognition is swift. And remember: the safe-harbor is narrow and rebuttable—it shields developers only on personal-injury and property-damage claims, not consumer-protection, privacy, or civil-rights suits. Participation is voluntary; immunity is limited.
Additional clarifications
Transparency. MRO plans are filed with the AG; future regulations should publish them (redacting trade secrets) to build public trust.
Built-in safeguards. Whistle-blower protections (§ 8898.2(a)(7)), mandatory incident reports (§ 8898.2(a)(3)) and auditing of post-deployment practices (§ 8898.2(a)(1)) are core plan elements.
Closing – laboratories at work
I’ve spent my career in state-powered innovation: drafting the Uniform Electronic Transactions Act, co-ordinating early digital-signature standards, steering multi-state mega-procurements that pooled demand for better pricing, building open-source repositories shared across agencies, and countless other projects where states proved nimbler and bolder than Washington. More recently we’ve seen states pioneer everything from digital identity and electronic notarization to friction-less sales-tax collection. SB 813 stands firmly in that tradition—nimble, incentive-driven, and ready for replication.
Could your state benefit from a “certify once, recognised in many” approach? I’m eager to refine these ideas with lawmakers, technologists and advocates. Drop me a comment at Civics.Com/contact and let’s keep building trustworthy AI, the federalist way.