Is the continued decline of AI Agents caused by the recent explosion of the MCP protocol?
Written by: Haotian
A friend asked whether the continuous decline of web3 AI Agents such as #ai16z and $arc was triggered by the recent explosion of the MCP protocol. At first glance I was baffled: what could one possibly have to do with the other? But after thinking it over, I found there really is a certain logic to it: the valuation and pricing logic of existing web3 AI Agents has changed, and their narrative direction and product roadmap urgently need adjusting. Below are my personal views:
To put it simply: today's AI applications are obvious data silos. For agents and LLMs to interoperate, each pairing needs its own purpose-built call API; beyond the complexity of that process, such integrations lack two-way interaction and usually come with limited model access and strict permission restrictions.
The emergence of MCP (Model Context Protocol) provides a unified framework that frees AI applications from these old data silos and makes "dynamic" access to external data and tools possible. It can significantly reduce development complexity and improve integration efficiency, especially for automated task execution, real-time data queries, and cross-platform collaboration.
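To make this concrete, here is a minimal sketch of exposing one tool through the MCP Python SDK's FastMCP interface. The `get_token_price` function and its data source are hypothetical placeholders; the point is that any MCP-speaking client can discover and call the tool without a bespoke integration API:

```python
# Minimal MCP server sketch (assumes the official `mcp` Python SDK is installed).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-market-data")

@mcp.tool()
def get_token_price(symbol: str) -> float:
    """Return the latest price for a token symbol (stubbed placeholder data)."""
    prices = {"ETH": 2300.0, "BTC": 62000.0}  # hypothetical data source
    return prices.get(symbol.upper(), 0.0)

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```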
At this point, many people immediately thought: if Manus, with its innovations in multi-agent collaboration, were combined with this open-source MCP framework that itself promotes multi-agent collaboration, wouldn't the result be unstoppable?
Exactly. Manus + MCP is the key reason web3 AI Agents are taking this hit.
Logically, though, Manus + MCP runs completely contrary to the core ideas web3 AI Agents pursue: distributed servers, distributed collaboration, and distributed incentives. So why does it still hurt them?
The reason is that the first phase of web3 AI Agents was too "web2-ified". Partly this is because many teams come from web2 backgrounds and lack a full understanding of web3-native requirements: they wrap "API interfaces" such as DeepSeek's and package generic Memory and Character frameworks to help developers quickly ship AI Agent applications. But how does this set of service frameworks differ from web2 open-source tooling? What are the differentiators?
Is the advantage merely a layer of Tokenomics incentives? Using a framework that web2 can fully replace to incentivize a crop of AI Agents that exist mostly to issue new tokens? Dreadful. Follow this logic and you can roughly see why Manus + MCP can impact web3 AI Agents:
because most web3 AI Agent frameworks and services only address the same rapid-development needs as web2 AI Agents, yet cannot keep pace with web2's speed of innovation in technical services, standards, or differentiation, the market and capital have revalued and repriced the previous batch of web3 AI Agents.
Take distributed cloud compute, data, and algorithm service platforms as an example. On the surface, compute and data aggregated on the grounds of "idle resources" cannot meet the needs of engineering innovation in the short term; and while the big AI LLM teams are locked in an arms race for centralized compute to break through performance ceilings, a service model whose selling point is "idle resources, low cost" will naturally be dismissed by web2 developers and VCs.
However, once web2 AI Agents move past the stage of raw performance innovation, they will inevitably pursue the expansion of vertical application scenarios and the fine-tuning of specialized models; only then will the advantages of web3 AI resource services truly show.
In fact, once web2 AI has climbed to giant status through resource monopoly, it is hard for it to turn back to the "surround the cities from the countryside" strategy of attacking niche scenarios one by one. That will be the moment for surplus web2 AI developers and web3 AI resources to join forces.
Beyond web2's combination of quick deployment + multi-agent collaborative communication frameworks + a Tokenomics issuance narrative, there are in fact many web3-native directions worth exploring:
For example, a distributed consensus collaboration framework: given that LLMs compute off-chain while state is stored on-chain, a number of adapted components are required.
A decentralized DID identity system, giving each agent a verifiable on-chain identity, much like the unique address an execution virtual machine generates for a smart contract, mainly so that its subsequent state can be continuously tracked and recorded (a minimal identity sketch follows this list);
A decentralized Oracle system, mainly responsible for the trusted acquisition and verification of off-chain data. Unlike previous oracles, an oracle adapted to AI Agents will likely need a combination of multiple agents spanning a data-collection layer, a decision-consensus layer, and an execution-feedback layer, so that the on-chain data an agent needs and its off-chain computation and decisions stay synchronized in near real time (see the consensus sketch after this list);
A decentralized storage (DA) system: because the state of an agent's knowledge base at runtime is uncertain and its inference process is transient, the key state libraries and reasoning paths behind the LLM need to be recorded into a distributed storage system, with a cost-controlled data-proof mechanism that guarantees data availability for public-chain verification (a hash-chained logging sketch also follows the list);
A zero-knowledge proof (ZKP) privacy computing layer, able to link with privacy computing solutions such as TEEs and PHE, enabling real-time privacy computation plus data-proof verification. This would let agents draw on a wider range of vertical data sources (medical, financial), allowing more specialized, customized service agents to emerge on top;
A cross-chain interoperability protocol, somewhat like the framework defined by the open-source MCP protocol, except that this interoperability layer needs relay and communication-scheduling mechanisms adapted to agent operation, delivery, and verification. It should be able to complete asset transfer and state synchronization for agents across different chains, especially complex state such as agent context, prompts, knowledge bases, and memory;
……
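To make the identity idea concrete, here is a minimal Python sketch of deriving a deterministic agent identifier from a public key and appending trackable state records. The derivation scheme, ID format, and field names are my own assumptions for illustration, not any particular DID standard:

```python
import hashlib
import json
import time

def derive_agent_id(pubkey_hex: str) -> str:
    """Derive a deterministic agent ID from a public key, loosely
    analogous to how an EVM contract address is derived."""
    digest = hashlib.sha256(bytes.fromhex(pubkey_hex)).hexdigest()
    return "agent:" + digest[:40]  # hypothetical ID format

class AgentStateLog:
    """Append-only state records keyed to one verifiable identity."""
    def __init__(self, agent_id: str):
        self.agent_id = agent_id
        self.records = []

    def record(self, state: dict) -> str:
        entry = {"agent": self.agent_id, "ts": time.time(), "state": state}
        entry_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.records.append((entry_hash, entry))
        return entry_hash  # a hash like this could be anchored on-chain

aid = derive_agent_id("04a1" * 16)  # hypothetical public-key bytes
log = AgentStateLog(aid)
print(log.record({"task": "bootstrap", "status": "ok"}))
```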
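For the three-layer oracle idea, a toy sketch of the decision-consensus layer: several collector agents report an off-chain value, the consensus step takes the median and rejects outliers, and the result is handed to a feedback step. The 5% deviation bound, the 2/3 quorum, and the lambda collectors are illustrative assumptions only:

```python
import statistics

def collect(reporters):
    """Data-collection layer: gather raw readings from collector agents."""
    return [r() for r in reporters]

def consensus(readings, max_dev=0.05):
    """Decision-consensus layer: median value after discarding readings
    that deviate from the median by more than max_dev (assumed 5%)."""
    med = statistics.median(readings)
    accepted = [x for x in readings if abs(x - med) / med <= max_dev]
    if len(accepted) < (2 * len(readings)) // 3:
        raise ValueError("no 2/3 agreement among collectors")
    return statistics.median(accepted)

def feedback(value):
    """Execution-feedback layer: stubbed here; in practice this would
    post the agreed value and its proof back on-chain."""
    print(f"attested value: {value}")

# Hypothetical collector agents reporting an off-chain price;
# the fourth is a faulty outlier that consensus discards.
reporters = [lambda: 101.0, lambda: 100.5, lambda: 99.8, lambda: 250.0]
feedback(consensus(collect(reporters)))
```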
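Similarly, the DA point can be illustrated with a hash-chained inference log: each reasoning step commits to the previous one, so a single head hash anchored on-chain attests to the whole path. The record layout is a deliberately simplified assumption:

```python
import hashlib
import json

GENESIS = "0" * 64

def chain_inference_log(steps):
    """Hash-chain a sequence of inference steps so the final head hash
    commits to the entire reasoning path behind an agent's decision."""
    head, log = GENESIS, []
    for step in steps:
        record = {"prev": head, "step": step}
        head = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        log.append(record)
    return head, log  # `head` is what you would anchor on-chain

head, log = chain_inference_log([
    {"query": "price of ETH?", "tool": "oracle", "result": 2300.0},
    {"action": "rebalance", "decision": "hold"},
])
print("anchor on-chain:", head)
```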
In my view, the real focus of web3 AI Agents should be on fitting the "complex workflow" of the AI Agent to the "trust-verification flow" of the blockchain as closely as possible. These incremental solutions could come either from upgrading and iterating on existing old-narrative projects, or from fresh projects on the newly formed AI Agent narrative track.
That is the direction web3 AI Agents should be building toward, and it matches the fundamentals of the innovation ecosystem under the macro AI + Crypto narrative. Unless web3 AI establishes its own innovation and differentiated competitive moats, every shake-up of the web2 AI track will turn web3 AI upside down along with it.