Proof’s Head of Product argues that agentic commerce will break the trust model behind digital transactions unless the industry solves a question nobody has answered yet: who issues the proof that a real human authorized an AI agent to spend their money?
OpenAI has north of 800 million users. Anthropic, Google, and every other major AI platform are building the same capabilities. Soon, hundreds of millions of people will be able to hand their payment credentials to an AI agent and say “go buy this for me.” Teaching AI to navigate a website, find the “add to cart” button, and click “buy now” was solved years ago. As AI agents begin making purchases on behalf of millions, the industry’s central challenge isn’t technical execution; it’s proving that a human authorized the transaction.
The payments and technology industries know this is coming and have been taking steps to head off what could be a wave of fraud claims. Google published an open protocol called AP2, the Agent Payments Protocol, and launched the Universal Commerce Protocol with Shopify, Target, Walmart, and over 60 organizations, including Visa, Mastercard, American Express, PayPal, Coinbase, and Stripe. OpenAI, working with Stripe, is developing its own standard, ACP, the Agentic Commerce Protocol.
The premise behind these protocols is that before an AI agent can use your payment credentials, you digitally sign a data contract specifying exactly what you’re authorizing. What agent, what payment method, what products, what spending limits, which merchants. When the agent makes a purchase, it presents this signed permission slip, or “mandate,” alongside the transaction. If a merchant possesses the mandate, they have cryptographic proof that a real human authorized the purchase within specific parameters.
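In code, the mandate flow looks roughly like the sketch below. Every field name is illustrative (the real AP2 and ACP schemas define their own), and the HMAC signature is a symmetric stand-in for the asymmetric, authority-issued signature a production system would require; the point is the shape of the check, not the cryptographic details.

```python
import hashlib
import hmac
import json
import time

def sign_mandate(body: dict, signing_key: bytes) -> dict:
    """Sign the canonical JSON form of a mandate body.
    (HMAC stand-in for a real signature tied to a verified identity.)"""
    canonical = json.dumps(body, sort_keys=True, separators=(",", ":")).encode()
    sig = hmac.new(signing_key, canonical, hashlib.sha256).hexdigest()
    return {"body": body, "signature": sig}

def verify_purchase(mandate: dict, merchant: str, amount_cents: int,
                    signing_key: bytes) -> bool:
    """Merchant-side check: signature intact, merchant allowed,
    amount within limit, mandate not expired."""
    body = mandate["body"]
    canonical = json.dumps(body, sort_keys=True, separators=(",", ":")).encode()
    expected = hmac.new(signing_key, canonical, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, mandate["signature"]):
        return False  # fabricated, or tampered with after signing
    if merchant not in body["allowed_merchants"]:
        return False  # outside the authorized merchant list
    if amount_cents > body["max_amount_cents"]:
        return False  # exceeds the spending limit
    return time.time() <= body["expires_at"]  # still within its time bound

# A scoped, time-bound authorization: which agent, which payment method,
# which merchants, what limit, and when it expires.
key = b"issuer-held signing key"  # hypothetical key material
mandate = sign_mandate({
    "agent": "shopping-agent-01",
    "payment_method": "card-ending-4242",
    "allowed_merchants": ["example-shoes.com"],
    "max_amount_cents": 5000,
    "expires_at": int(time.time()) + 3600,
}, key)

assert verify_purchase(mandate, "example-shoes.com", 4999, key)
assert not verify_purchase(mandate, "other-store.com", 4999, key)   # wrong merchant
mandate["body"]["max_amount_cents"] = 999999                        # tampering...
assert not verify_purchase(mandate, "example-shoes.com", 4999, key) # ...breaks the signature
```

Note what the sketch deliberately leaves open: it verifies that *some* holder of the signing key authorized the purchase, which is exactly the question the article turns to next, namely who is trusted to hold and issue those keys.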
Think of it as a scoped, time-bound power of attorney for your payment credentials. But the mandate only works if you can answer some hard questions: How do you know this permission slip is real? What identity or keys were used to sign it? How do you know it wasn’t fabricated or generated by the AI itself to shield its own liability? Can you be sure a human was actually present at the time of signing? These are questions buried in the specification that no amount of protocol engineering can solve on its own. In the language of security infrastructure, who is the Certificate Authority, the independent party that vouches for the identity behind the signature?
The entire system depends on the mandate being trustworthy. A merchant needs to know whether a mandate was actually signed by a verified human or was fabricated, tampered with, or signed by the AI itself. Without a trusted, independent authority to issue the keys and verify the identities behind them, a mandate is just unverifiable data. And every obvious candidate for this role brings real capabilities but also structural conflicts.
Here we come to the crucial conflict of interest: AI platforms can’t be the trust anchor for a system designed to create accountability outside of AI. The mandate exists to prove that a human, not the AI, authorized the action. If the AI platform also signs the mandate, it’s vouching for itself. The entity that acts and the entity that verifies authorization must be different; that separation is the check and balance.
Merchants, meanwhile, have a direct financial stake in the outcome. The mandate is partly designed to protect these businesses from disputes, so a merchant issuing their own proof of authorization is the equivalent of writing yourself a receipt. It doesn’t matter how honest any individual merchant is: the threat model doesn’t work if the party that benefits from the proof also creates it.
Large technology platforms bring enormous scale and have driven much of the protocol innovation, but many also operate massive commercial businesses that intersect directly with how consumers discover and buy products. Google generates $175 billion annually from product listing ads. Amazon makes $50 billion. When the entity that controls the authorization layer also profits from influencing what gets purchased, that’s a conflict that’s hard to design around. There’s also a privacy dimension: when Big Tech controls the signing of your permission slip, they gain visibility into exactly what you’re buying, and that data feeds directly into the consumer targeting models their entire business depends on.
Banks and payment networks are the most natural candidates. Visa and Mastercard are already doing critical work on authentication and agentic commerce standards. But the banking ecosystem is fragmented and, candidly, many financial institutions haven’t yet fully engaged with policies and protocols for what AI-delegated transactions mean for their infrastructure and liability frameworks.
The role of the independent verifier requires structural independence from the transaction itself. The entity that vouches for human identity and intent cannot have a financial interest in the outcome of the transaction it’s authorizing. This is the same principle behind why auditors can’t also be consultants to the companies they audit.
At Proof, we have seen this firsthand: when $350 billion in real estate transactions and $150 billion in financial service outflows move through digital channels, the records proving identity and intent must be airtight. They have to be cryptographically tied to a verified human, impossible to fabricate, and verifiable by any system that receives them. We call these verifiable records: digitally signed artifacts proving three things: it was really you, you agreed to specific terms, and the record hasn’t been modified since you signed it. Any machine that receives one can verify it, and an AI cannot generate one on its own. An AI agent mandate is exactly this kind of record.
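The tamper-evidence half of that claim rests on hashing the record’s canonical form: change any byte and the digest no longer matches. A minimal sketch of that one property follows (field names are illustrative; the “it was really you” binding would come from a signature over this digest by a key tied to a verified human identity, which hashing alone does not provide):

```python
import hashlib
import json

def record_digest(record: dict) -> str:
    """Digest over the canonical JSON form of a record. Any later
    modification, however small, yields a different digest, so any
    machine holding the original digest can detect tampering."""
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":")).encode()
    return hashlib.sha256(canonical).hexdigest()

# Hypothetical record contents for illustration.
record = {"signer": "verified-human-123", "terms": "authorize agent X up to $50"}
original = record_digest(record)

tampered = dict(record, terms="authorize agent X up to $5,000")
assert record_digest(tampered) != original      # the change is detectable
assert record_digest(dict(record)) == original  # a faithful copy still verifies
```

Canonicalization (sorted keys, fixed separators) matters here: signer and verifier must hash identical bytes, or honest records would fail to verify.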
The agentic commerce problem is urgent, but it’s also a window into something much larger. If we become comfortable delegating a $50 purchase to AI, how long before we’re delegating investment portfolio rebalancing, healthcare plan selection during open enrollment, insurance claims, wire transfers, or retirement contributions? Each of these is a power of attorney. And in many of these domains, state and federal law already governs how power of attorney is granted. In the paper world, when you sign a healthcare directive, you need a witness or a notary. When you grant someone financial power of attorney, there are formal requirements around identity, intent, and documentation. There is no reason to expect less rigor when a person delegates those same decisions to AI. If anything, the stakes argue for more.
If the industry working groups design protocols only for checkout, we’ll rebuild this infrastructure for every other domain where AI acts on behalf of a human. If they design it as a general-purpose delegation framework, we build it once.
Beyond protocol, the other half of the solution is liability policy. When someone disputes an AI-initiated transaction, some party needs to determine who bears responsibility. Was it the merchant, the bank, or the AI platform? Today’s payment rules were written for scenarios where a human clicked “buy.” AI agents introduce an entirely new category of scenarios that no existing policy covers. A customer told the agent to buy shoes and it bought something else entirely. The agent was authorized for one merchant but purchased from another. The consumer doesn’t like what was delivered, and the merchant says no refund because the mandate was technically valid. There is no agreed-upon framework for resolving any of these scenarios. New liability rules need to be written, and the merchants, banks, AI platforms, and regulators all need to agree on who is accountable.
The world is about to delegate far more than its purchasing power to AI. Payments are just the beginning. The question isn’t whether delegation will happen. What matters is whether there will be a verified, tamper-proof, independently trusted record behind every delegation.
Financial institutions, payment networks, and merchants need to participate in these working groups now, even if they have nothing to contribute to the technical specifications. The protocols being built will work as technology; the harder problem is the policy around liability, governance, and what constitutes valid AI delegation. The industry has a window to lead before regulators are forced to intervene reactively, after the damage is done.
Darren is Head of Product at Proof, where he leads the development of digital identity and verifiable record technology. Proof’s platform has facilitated over $350 billion in real estate closings and $150 billion in financial service transactions.