
a16z: How does the block chain fill the gap between AI Agent identity, payment and trust

2026/04/24 02:06

In the AI era, blockchains are critical infrastructure: identity, governance, payments, trust, and control.

Original title: The missing agent for AI delegates: 5 ways blockchains can help
Source: a16z crypto
Translation: AididiaoJP, Foresight News

AI agents are evolving from assistive tools into real economic actors far faster than the infrastructure around them.

Although agents can now execute tasks and transact, they still lack a standard, cross-environment way to prove who they are, what they are authorized to do, and how they should be paid. Identity is not portable, payments are not programmable by default, and collaboration remains siloed.

Blockchains address these problems at the infrastructure level. A public ledger provides a record of every transaction that anyone can audit; wallets give agents portable identity; and stablecoins offer an alternative settlement layer. These are not future concepts: they are available today, letting agents operate as genuine economic actors without permission.

Providing identity to non-humans

The bottleneck in the agent economy is no longer intelligence but identity.

In financial services alone, non-human identities (automated trading systems, risk engines, fraud models) already outnumber human employees by roughly 100 to 1. As modern agent frameworks (tool-calling LLMs, autonomous workflows, multi-agent configurations) are deployed at scale, that ratio will keep rising across every sector.

Yet these agents are effectively "unbanked". They can interact with the financial system, but not in a portable, verifiable, trusted-by-default way. They lack a standardized means of proving their authority, operating independently across platforms, or taking responsibility for their own actions.

What is missing is a common identity layer, an SSL for agents, that standardizes collaboration across platforms. Current solutions remain fragmented: on one side, vertically integrated, fiat-first stacks; on another, crypto-native open standards (such as x402 and the emerging agent identity proposals); and in between, developer frameworks trying to bridge the application layer (such as MCP, the Model Context Protocol).

There is still no widely used, interoperable way for one agent to prove to another whom it represents, what it is allowed to do, and how it gets paid.

This is the core idea of KYA, or Know Your Agent. Just as humans rely on credit records and KYC (Know Your Customer), agents will need cryptographically signed credentials that bind them to a principal, a set of permissions, constraints, and a reputation.
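A minimal sketch of what such a credential might look like, assuming a shared issuer secret and HMAC signatures for illustration (real KYA proposals would use public-key signatures and on-chain registries; all field names here are hypothetical):

```python
import hashlib
import hmac
import json

def issue_credential(issuer_key: bytes, principal: str, agent_id: str,
                     permissions: list, spend_limit_usd: float) -> dict:
    """Issue a signed credential binding an agent to its principal and scope."""
    claims = {
        "principal": principal,              # who the agent acts for
        "agent_id": agent_id,                # the agent's identifier
        "permissions": permissions,          # what it may do
        "spend_limit_usd": spend_limit_usd,  # a constraint on its authority
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(issuer_key, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def verify_credential(issuer_key: bytes, cred: dict) -> bool:
    """Any counterparty holding the issuer key can check the binding."""
    payload = json.dumps(cred["claims"], sort_keys=True).encode()
    expected = hmac.new(issuer_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cred["sig"])

key = b"issuer-secret"
cred = issue_credential(key, "alice@example.com", "agent-42",
                        ["read:crm", "pay:invoices"], 100.0)
assert verify_credential(key, cred)
```

Any tampering with the claims (say, raising the spend limit) invalidates the signature, which is what lets one agent trust another's stated authority without trusting its operator.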

Blockchains provide a neutral coordination layer: portable identity, programmable wallets, and verifiable credentials that can be checked from chat apps, APIs, and marketplaces alike.

We are already seeing the early pieces: on-chain agent registries, agent-native wallets holding USDC, ERC standards for trust-minimized agents, and developer toolkits that combine identity with embedded payments and fraud controls.

Until a common identity standard takes hold, however, businesses will keep agents locked behind the firewall.

Governing AI-operated systems

As agents take over real systems, a new question arises: who actually has control? Imagine a community or company where an AI system coordinates key resources, whether allocating capital or managing supply chains.

Even if people can vote on policy changes, that authority is hollow if the underlying AI is controlled by a single provider that can push model updates, adjust constraints, or override decisions. The formal governance layer may be decentralized while the operational layer stays centralized: whoever controls the model ultimately controls the outcomes.

When agents take on governance roles, they introduce a new layer of dependency. In theory this makes direct democracy more viable: everyone could have an AI delegate that helps them understand complex proposals, model trade-offs, and vote according to stated preferences.

But this vision only works if agents are genuinely accountable to the people they represent, portable across providers, and technically bound to follow human instructions. Otherwise you get a system that looks democratic but is actually steered by opaque models that no one really controls.

If the reality is that agents are built mostly on a handful of foundation models, we need a way to prove that an agent is acting in the user's interest, not the model company's.

This will likely require cryptographic guarantees at several levels:

(1) the provenance of the model instance's training data, fine-tuning, and reinforcement learning

(2) the exact prompts and instructions the agent follows

(3) its actual record of behavior in the real world

(4) credible assurance that the provider cannot change its instructions after deployment, or retrain it without the user's knowledge. Without these guarantees, agent governance degrades into governance by whoever controls the model weights.

This is where cryptography comes in. If collective decisions are recorded on-chain and executed automatically, AI systems can be required to follow verified outcomes exactly. If agents have cryptographic identities and transparent execution logs, people can check whether their delegates acted within bounds.
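A transparent execution log can be sketched as an append-only, hash-chained record, a simplified stand-in for the on-chain logs the article describes; the action fields here are invented for illustration:

```python
import hashlib
import json

class ExecutionLog:
    """Append-only log of agent actions; each entry commits to the previous
    one, so any retroactive edit breaks the chain and is detectable."""

    def __init__(self):
        self.entries = []

    def append(self, action: dict) -> str:
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = json.dumps(action, sort_keys=True)
        h = hashlib.sha256((prev + body).encode()).hexdigest()
        self.entries.append({"action": action, "prev": prev, "hash": h})
        return h

    def verify(self) -> bool:
        """Recompute the chain; returns False if any entry was altered."""
        prev = "0" * 64
        for e in self.entries:
            body = json.dumps(e["action"], sort_keys=True)
            expected = hashlib.sha256((prev + body).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True

log = ExecutionLog()
log.append({"op": "vote", "proposal": 7, "choice": "yes"})
log.append({"op": "transfer", "amount_usd": 10})
assert log.verify()
log.entries[0]["action"]["choice"] = "no"  # tamper with history
assert not log.verify()
```

An actual blockchain adds the missing piece this sketch lacks: many independent parties hold the chain, so no single provider can quietly rewrite it.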

If the AI layer is user-owned and portable rather than locked to a single platform, no company can change the rules through a model update.

Ultimately, governing AI is an infrastructure challenge, not a policy challenge. Real authority depends on building enforceable guarantees into the system itself.

Filling the gaps AI exposes in legacy payment systems

AI agents are starting to buy services such as web scraping, browser sessions, and image generation, and stablecoins are becoming the alternative settlement layer for these transactions. At the same time, new marketplaces for agents are emerging.

For example, the Stripe and Tempo MPP marketplaces brought together more than 60 services built specifically for AI agents. They processed over 34,000 transactions in their first week live, with fees as low as $0.003.

What's different is how these services are accessed: there are no checkout pages. An agent reads the schema, sends a request, pays, and receives the output, all in a single exchange.

This represents a new category of headless business: just a server, a set of endpoints, and a price per call. No front-end interface, no sales team.

The payment rails to make this work are already live. Coinbase's x402 and MPP take different approaches, but both embed payment directly into HTTP requests. Visa is extending card rails in the same direction, offering a CLI tool that lets developers spend from the terminal while merchants receive stablecoins instantly on the back end.
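The flow can be illustrated with a toy server handler built around HTTP status 402 (Payment Required). This is a simplified sketch in the spirit of x402, not the actual protocol: the header name, payment format, and verification stub are all invented for illustration.

```python
# Toy pay-per-call HTTP handler: no payment -> 402 with a price quote;
# valid payment attached -> 200 with the service output.

def verify_payment(payment: str, price_usd: float) -> bool:
    """Stub: a real facilitator would verify a signed stablecoin transfer
    on-chain. Here we just match a token for illustration."""
    return payment == f"paid:{price_usd}"

def handle_request(headers: dict, price_usd: float):
    payment = headers.get("X-Payment")  # hypothetical header name
    if payment is None:
        # Quote the price and where to pay; the agent retries with payment.
        return 402, {"price_usd": price_usd, "pay_to": "0xMerchant"}
    if verify_payment(payment, price_usd):
        return 200, {"result": "service output"}
    return 402, {"error": "payment invalid"}

# First call: no payment attached, server quotes a price.
status, body = handle_request({}, 0.003)
assert status == 402 and body["price_usd"] == 0.003

# Retry with payment embedded in the request: service responds.
status, body = handle_request({"X-Payment": "paid:0.003"}, 0.003)
assert status == 200
```

The key property is that the whole negotiation (quote, payment, delivery) happens inside the request/response cycle, which is why no checkout page or merchant onboarding is needed.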

The data is still early. After filtering out non-organic activity such as wash trading, x402 processes roughly $1.6 million per month in agent-driven payments, well below the $24 million recently reported by Bloomberg (citing 402.org data). But the surrounding infrastructure is expanding fast: Stripe, Cloudflare, Vercel, and Google have all integrated x402 into their platforms.

Developer tools are a major opportunity: as "vibe coding" expands who can build software, the total addressable market for developer tools grows with it. Companies like Merit Systems are building products for this world, such as AgentCash, a CLI wallet and marketplace connected to MPP and x402. These products let an agent buy the data, tools, and capabilities it needs from a single stablecoin balance.

For example, a sales team's agent can call one endpoint and pull data from Apollo, Google Maps, and Whitepages to enrich lead information, without the user ever leaving the command line.

This agent-to-agent commerce tends to run on crypto payment rails (and emerging KYA solutions) for several reasons.

One is underwriting risk: traditional payment processors must underwrite merchant risk when onboarding a business, and a headless business with no website or legal entity is hard for them to underwrite.

The second is the permissionless programmability of stablecoins on open networks: any developer can make an endpoint accept payment without onboarding with a payment processor or signing a commercial agreement.

We have seen this pattern before. Every shift in commerce creates a new class of business that existing systems cannot serve at first. The companies building this infrastructure are betting not on $1.6 million a month, but on what that becomes when agents are the default buyers.

Repricing trust in the Agent economy

For the past 300,000 years, human cognition has been the bottleneck on progress. Today, AI is pushing the marginal cost of execution toward zero. When a scarce resource becomes abundant, the constraint shifts. When intelligence becomes cheap, what becomes expensive? The answer is verification.

In the agent economy, the real limit on scale is the ability of biologically constrained humans to audit and vouch for machine decisions. Agent throughput has already far outstripped human oversight. Because oversight is costly and failures lag, markets tend to underinvest in it. Keeping a human in the loop is rapidly becoming physically impossible.

But deploying unverified agents introduces compounding risk. Systems relentlessly optimize proxy metrics while quietly drifting from human intent, creating an illusion of productivity that masks a growing pile of AI debt. To safely delegate the economy to machines, trust can no longer rest on manual inspection; it must be hard-coded into the architecture itself.

When anyone can generate content for free, what matters most is verifiable provenance: knowing where something came from and whether you can trust it. Blockchains, on-chain attestation, and decentralized digital identity systems are changing the economic boundaries of what can be safely deployed. Instead of treating AI as a black box, you get a clear, auditable history.
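A provenance record reduces to two checks: the content hash matches, and the author's signature over that hash verifies. A minimal sketch, using an HMAC secret as a stand-in for the author's signing key (field names and the fixed timestamp are illustrative):

```python
import hashlib
import hmac
import json

def attest(content: bytes, author_key: bytes, author_id: str) -> dict:
    """Create a provenance record: who produced this content, and a
    commitment to exactly what was produced."""
    record = {
        "sha256": hashlib.sha256(content).hexdigest(),  # commitment to content
        "author": author_id,
        "ts": 1700000000,  # fixed illustrative timestamp
    }
    payload = json.dumps(record, sort_keys=True).encode()
    sig = hmac.new(author_key, payload, hashlib.sha256).hexdigest()
    return {"record": record, "sig": sig}

def check(content: bytes, att: dict, author_key: bytes) -> bool:
    """Verify both the content commitment and the author signature."""
    if hashlib.sha256(content).hexdigest() != att["record"]["sha256"]:
        return False  # content was altered after attestation
    payload = json.dumps(att["record"], sort_keys=True).encode()
    expected = hmac.new(author_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, att["sig"])

key = b"author-secret"
att = attest(b"model output", key, "agent-42")
assert check(b"model output", att, key)
assert not check(b"tampered output", att, key)
```

Anchoring the record on a public chain is what turns this from a private receipt into something any third party can audit.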

As more AI agents begin transacting with one another, settlement rails become tightly coupled with provenance credentials.

Systems that already handle money (such as stablecoins and smart contracts) can also carry cryptographic receipts showing who did what, and who is accountable when something goes wrong.

Humans' comparative advantage will shift upward: from catching small errors to setting strategic direction and owning the consequences of mistakes. The durable advantage belongs to those who can authenticate output, insure it, and absorb liability when it fails.

Unverified scale is a liability that compounds over time.

Maintaining user control

For decades, new abstractions have defined how users interact with technology. Programming languages abstracted away machine code; the command line gave way to graphical user interfaces, then mobile apps and APIs. Each shift hid more underlying complexity, but always kept the user firmly in the loop.

In the agent world, users specify outcomes rather than specific actions, and the system decides how to achieve them. Agents abstract away not only how a task is done, but who does it. The user sets the initial parameters, then steps back and lets the system run. The user's role shifts from interaction to supervision; unless the user intervenes, the default state is "on".

As users delegate more tasks to agents, new risks appear: ambiguous input can lead an agent to act on a false assumption without the user's knowledge; failures may go unreported and resist clean diagnosis; and a single approval can trigger an unforeseen multi-step workflow.

This is where cryptography can help. Cryptography has always been about minimizing blind trust.

As users hand more decisions to software, agent systems sharpen the problem and raise the bar for our designs: clearer limits, greater visibility, and stronger enforceable guarantees about what the system can do.

A new generation of crypto tools is emerging. Scoped delegation frameworks, such as MetaMask's Delegation Toolkit, Coinbase's AgentKit and agent wallets, and Merit Systems' AgentCash, let users define at the smart-contract level what an agent can and cannot do. Intent-based architectures (e.g. NEAR Intents, which has processed more than $15 billion in cumulative DEX volume since Q4 2024) let users specify a desired outcome (e.g. "bridge this token and stake it") without specifying how to achieve it.
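The core pattern behind scoped delegation can be sketched in a few lines: the user grants the agent a bounded capability, and every action is checked against that grant before execution. This is a hypothetical illustration of the idea, not the API of any of the toolkits named above; on-chain versions enforce the same checks in contract code.

```python
from dataclasses import dataclass

@dataclass
class Delegation:
    """A user-granted capability: which actions an agent may take,
    and a total budget it may not exceed."""
    allowed_actions: set       # e.g. {"swap", "bridge"}
    spend_cap_usd: float       # total budget across all actions
    spent_usd: float = 0.0

    def authorize(self, action: str, amount_usd: float) -> bool:
        if action not in self.allowed_actions:
            return False       # action is outside the delegated scope
        if self.spent_usd + amount_usd > self.spend_cap_usd:
            return False       # would exceed the user's budget
        self.spent_usd += amount_usd  # record the spend against the cap
        return True

grant = Delegation(allowed_actions={"swap", "bridge"}, spend_cap_usd=50.0)
assert grant.authorize("swap", 20.0)        # in scope, under budget
assert not grant.authorize("withdraw", 5.0) # action was never delegated
assert not grant.authorize("bridge", 40.0)  # 20 + 40 exceeds the 50 cap
```

Because the check runs before every action rather than once at setup, a single approval cannot silently fan out into an unbounded workflow: the scope and the budget travel with the delegation.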
