Trust in Depth for AI Agents

The Problem

AI agents are entering commerce without the infrastructure to connect their actions to the legal systems that make those actions enforceable.

AI agents are transacting autonomously — negotiating service agreements, executing procurement, processing payments, managing licensing — and the trust infrastructure that all of commerce depends on was not designed for them.

This is not a speculative risk. It is happening now, at scale, and the gap between what agents can do and what the surrounding systems can verify is widening every quarter.

The Trust Equilibrium

The pre-agentic world operated in a state of acceptable trust equilibrium. That equilibrium rested on friction.

The cumbersome processes — presenting identification, waiting for verification, clicking through terms of service, signing documents in the presence of a notary — were not just inconveniences. They were the trust mechanism. Every step imposed a cost. Every cost deterred a category of bad actor. The aggregate effect was a system where gaming the process was expensive enough that most participants found it cheaper to act honestly.

Consider what a human must do to open a business bank account:

  • Present government-issued identification, in person
  • Provide articles of incorporation or equivalent formation documents
  • Submit to a background check
  • Wait days or weeks for review
  • Maintain ongoing reporting obligations

None of these steps is cryptographically rigorous. A determined adversary can defeat any of them individually. But the combined cost of defeating all of them — in time, in money, in exposure to detection — creates a deterrent that holds for the vast majority of transactions.

Or consider the process of entering a commercial lease. The landlord verifies the tenant's identity. A credit check is performed. References are contacted. The lease is negotiated, reviewed by counsel, and signed in a form that courts will enforce. A security deposit creates an economic stake. The entire process takes weeks, sometimes months. It is slow, expensive, and inconvenient — and it works, because the cost of falsifying every element simultaneously exceeds the value of most fraud.

This friction was not distributed evenly. High-value transactions required more friction. A mortgage requires more verification than a credit card purchase. A securities offering requires more disclosure than a retail transaction. The system was — imperfectly but functionally — proportional. The cost of entry scaled with the potential for harm.

This is not elegant. It is not efficient. But for the scale of commerce it governed, it worked.

The equilibrium was also self-reinforcing. Participants who invested in establishing trust — building credit history, maintaining business registrations, accumulating references — had an incentive to protect that investment. The cost of building a trustworthy identity was high enough that burning it was genuinely expensive. Reputation was a capital asset precisely because it was hard to create.

The Economic Deterrent Is Gone

In 2024, automated bot traffic surpassed human activity on the internet for the first time: 51% of all global web traffic is now non-human. Researchers at ETH Zurich demonstrated a 100% solve rate against Google's reCAPTCHA using standard AI models. The gate that separated human from machine is gone.

AI agents eliminate the economic deterrent that friction provided. They can:

  • Create synthetic identities at scale: identity verification becomes meaningless when fabrication is free
  • Pass verification checks with generated documents: document-based KYC loses its filtering power
  • Accept terms of service without any capacity to honor them: consent becomes a formality with no legal substance
  • Establish reputation through manufactured history: reputation systems become gameable at negligible cost
  • Operate across jurisdictions simultaneously: jurisdictional enforcement becomes impractical
  • Spawn and discard identities on demand: accountability requires persistence, which agents lack by default

The critical insight is not that AI agents are malicious. Most are not. The problem is that the trust mechanisms were never designed for actors who experience no friction. When the cost of appearing trustworthy drops to zero, the entire system that relied on cost-as-deterrent collapses — regardless of the intent of any individual actor.
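The collapse described above is a threshold effect, and it can be stated as a toy model. All numbers below are illustrative assumptions, not measurements: fraud becomes rational the moment its expected payoff exceeds the combined cost of defeating every layer of friction, so the deterrent does not erode gradually; it fails all at once when that cost approaches zero.

```python
# Toy model of friction-as-deterrent. Every figure here is an
# illustrative assumption, not a measurement.

def fraud_is_profitable(expected_payoff, friction_costs, detection_risk_cost):
    """Fraud is rational when its payoff exceeds the combined cost of
    defeating every friction layer plus the expected cost of detection."""
    total_cost = sum(friction_costs) + detection_risk_cost
    return expected_payoff > total_cost

# Human-era friction: forging ID, forging documents, appearing in person,
# waiting out review. Each layer carries real cost in time and money.
human_era = [500, 1200, 300, 400]
print(fraud_is_profitable(1000, human_era, 2000))   # False: deterrent holds

# Agent-era friction: each layer is defeated by generation at near-zero cost,
# and disposable identities remove most detection risk.
agent_era = [0.01, 0.01, 0.01, 0.01]
print(fraud_is_profitable(1000, agent_era, 0))      # True: deterrent collapses
```

Note that nothing on the payoff side changed between the two cases; only the cost side went to zero, which is exactly why the failure is categorical rather than gradual.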

A legitimate agent purchasing cloud services and a fraudulent agent laundering money through microtransactions look identical to every existing verification system. The signals that humans unconsciously provide — hesitation, physical presence, social context, reputation within a community — do not exist in agent-to-agent interaction.

Consider the specific mechanisms that have failed:

CAPTCHAs were designed to distinguish humans from bots. With AI achieving 100% solve rates, CAPTCHAs now distinguish nothing. They are friction without function — a cost imposed on legitimate users that filters out no illegitimate ones.

Email verification confirms that an actor controls an email address. Agents can create and control email addresses at scale. The verification proves nothing about identity, authority, or intent.

Terms of service acceptance requires a click. An agent can click. The click carries no legal weight when the actor has no capacity to understand, agree to, or be bound by the terms.

Reputation systems aggregate feedback over time. An agent can generate interactions, accumulate positive feedback, and build a credible-looking history in hours. The reputation is real in a technical sense — the interactions occurred, the feedback was submitted — but it is manufactured.
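The reputation failure can be made concrete with a short sketch. Everything here is hypothetical: the record format and the "verified purchase" flag are assumptions standing in for any real marketplace, and the point is only that history-as-signal carries no weight for an actor that can generate interactions on demand.

```python
import datetime

# Hypothetical sketch: an agent manufacturing a plausible-looking feedback
# history. A plain list of records stands in for any real reputation
# system; no actual service or API is modeled.

def manufacture_history(n_interactions, span_hours=6.0):
    """Generate n positive-feedback records spread over a few hours,
    mimicking the shape of organically accumulated reputation."""
    start = datetime.datetime(2025, 1, 1, 9, 0)
    step = datetime.timedelta(hours=span_hours / n_interactions)
    return [
        {"timestamp": start + i * step, "rating": 5, "verified_purchase": True}
        for i in range(n_interactions)
    ]

history = manufacture_history(200)
# 200 five-star, "verified" interactions in six hours, at negligible cost.
# An aggregator that scores volume and recency cannot distinguish this
# from months of honest trading.
print(len(history), history[0]["rating"])   # 200 5
```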

Each of these mechanisms was designed for a world where the cost of gaming them was measured in human time and effort. When that cost drops to near zero, the mechanisms do not degrade gradually. They fail categorically.

The Missing Layers

The industry's response has been to build exactly the kind of single-layer system that cannot hold: payments.

Agents with funded cryptocurrency wallets can initiate transactions, settle invoices, and move value across borders in seconds. The major agentic commerce protocols — Verifiable Intent, Universal Commerce Protocol, Agent Payments Protocol, Agentic Commerce Protocol — have all converged on making the payment moment work. And they have succeeded. The payment layer is genuinely impressive.

But commerce is not just payment. Commerce is the full lifecycle of economic relationships:

  • Negotiation — establishing terms before commitment
  • Agreement — binding parties to specific obligations
  • Performance — executing obligations over time
  • Dispute resolution — handling the cases where performance fails

Payment is a single moment within a larger relationship. It is the easiest moment to automate — a transfer of value from one address to another — and it is the moment that every protocol has chosen to solve first. This is understandable. Payment is concrete, demonstrable, and immediately valuable.

But the hard problems are upstream and downstream of the payment moment.

Upstream: Before a payment is made, the parties must agree on what is being exchanged, under what terms, governed by what law. If these questions are not answered, the payment is a transfer of value into a legal vacuum.

Downstream: After a payment is made, the obligations must be performed. If the service is not delivered, if the goods are defective, if the terms are violated — there must be a mechanism for resolution. Without it, the only recourse is to stop transacting with the counterparty, which is not recourse at all.

Each of these phases requires infrastructure that does not exist for agents:

The identity layer does not exist. There is no standard way to verify that an agent represents who it claims to represent, that it has authority to act on behalf of a specific entity, or that it is connected to any accountable human.

The agreement layer does not exist. There is no mechanism for recording agent agreements in a form that is legally enforceable, independently verifiable, and tied to a governing jurisdiction.

The dispute resolution layer does not exist. When an agent-negotiated agreement fails — when goods are not delivered, when services are defective, when payment terms are violated — there is no structured process for resolution.
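To make the identity gap concrete, here is one hypothetical shape such a layer could take: a signed attestation binding an agent's key to a named principal, a bounded scope of authority, and a governing jurisdiction. Every field name is an illustrative assumption, and HMAC with a shared secret stands in for a real public-key signature; this sketches the idea, not any existing standard.

```python
import hashlib
import hmac
import json

# Hypothetical identity-layer attestation: an accountable legal entity
# (the principal) binds an agent's key to a bounded scope of authority
# and signs the claims. HMAC is a stand-in for a real PKI signature;
# all field names are assumptions for illustration.

PRINCIPAL_SECRET = b"demo-signing-key"  # stand-in for the principal's private key

def issue_attestation(principal_id, agent_key, scope, jurisdiction):
    claims = {
        "principal": principal_id,     # the accountable legal entity
        "agent_key": agent_key,        # the key the agent acts under
        "scope": scope,                # what the agent may bind the principal to
        "jurisdiction": jurisdiction,  # the legal system governing the authority
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    signature = hmac.new(PRINCIPAL_SECRET, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": signature}

def verify_attestation(attestation):
    payload = json.dumps(attestation["claims"], sort_keys=True).encode()
    expected = hmac.new(PRINCIPAL_SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, attestation["signature"])

att = issue_attestation("Acme GmbH", "agent-key-7f3a", ["procurement<=10000EUR"], "DE")
print(verify_attestation(att))   # True: claims are intact

att["claims"]["scope"] = ["unlimited"]
print(verify_attestation(att))   # False: tampered scope fails verification
```

The substance is not the cryptography but the binding: every verified action traces back to a named principal, a bounded authority, and a jurisdiction, which are precisely the three things the missing identity layer fails to provide today.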

What exists is payments — and a growing assumption that payments are enough.

They are not.

A payment system without identity is a system where no one knows who is paying whom. A payment system without agreements is a system where no one knows what was promised. A payment system without dispute resolution is a system where the only remedy is to stop transacting entirely.

This is not a foundation for commerce. It is a foundation for the appearance of commerce — transactions that look like economic activity but lack the legal and institutional substrate that makes economic activity meaningful.

The gap is not closing naturally. Each protocol is optimizing for its own layer — faster payments, better authentication, smoother checkout — without addressing the structural absence of the layers that make the transaction layer meaningful. The result is an increasingly efficient system for moving value between parties who cannot be identified, under terms that cannot be enforced, with no recourse when things go wrong.

Law Is Infrastructure

There is a tendency in technology to treat law as an external constraint — a set of rules imposed on systems that would work better without them. This framing is precisely backwards.

Law is not a constraint on civilization. Law is the mechanism by which civilization is possible.

Contract law enables strangers to make binding commitments. Property law enables ownership to persist beyond physical possession. Tort law creates accountability for harm. Corporate law enables humans to pool resources and act collectively. Each of these legal frameworks solves a coordination problem that no technology, no matter how sophisticated, can solve on its own.

Consider what contract law actually does. It does not merely enforce agreements. It creates the conditions under which agreements are possible:

  • Offer and acceptance — a structured process for forming mutual commitment
  • Consideration — the requirement that each party give something of value, preventing one-sided obligations
  • Capacity — the requirement that parties have the legal ability to bind themselves
  • Legality — the requirement that the subject matter be lawful
  • Remedies — a structured response when obligations are not met, including damages, specific performance, and rescission

Without these elements, an "agreement" is just a statement of intent — unenforceable, unreliable, and ultimately meaningless as a basis for commerce. Two parties can shake hands and declare they have a deal. Without the legal framework, that declaration has no mechanism for enforcement, no structured remedy for breach, and no authority to appeal to.

When two humans negotiate a contract, they operate within a rich context of legal infrastructure:

  • Identity: each party is a legal person, traceable to a jurisdiction
  • Consent: the agreement reflects genuine intent, not coercion or fraud
  • Terms: the obligations are recorded in a form that courts can interpret
  • Jurisdiction: a specific legal system governs the agreement
  • Recourse: if a party fails to perform, structured remedies exist

AI agents are the first autonomous actors to arrive in a world with no framework for connecting their actions to any of this. They are not legal persons. They cannot consent in any legally meaningful sense. They operate outside any jurisdiction by default. And when something goes wrong, there is no process — no court, no arbitrator, no mediator — designed to handle it.

The question is not whether agents need legal infrastructure. The question is what happens when they operate without it. And the answer is already visible: a rapidly growing volume of autonomous transactions with no accountability, no recourse, and no connection to the systems that make commerce meaningful.

The most sophisticated AI agent in the world, operating with the most advanced payment protocol, cannot create a legally enforceable agreement. It can transfer value. It can record terms. It can execute logic. But it cannot — without the infrastructure described in this framework — connect any of these actions to the legal systems that give them meaning.

Building the missing layers is not a regulatory compliance exercise. It is the prerequisite for agents to participate in commerce that actually works.