Some frens have been asking: is the continued slide in web3 AI Agent tokens such as #ai16z, arc, etc. being caused by the recently red-hot MCP protocol? At first glance it's confusing, WTF do the two have to do with each other? But after thinking it through, there is a logic to it: the valuation logic of existing web3 AI Agents has changed, and their narrative direction and product roadmaps urgently need adjustment. Below are some personal opinions:
1) MCP (Model Context Protocol) is an open, standardized protocol designed to seamlessly connect various AI LLMs/Agents to various data sources and tools. It acts like a plug-and-play USB "universal" interface, replacing the previous end-to-end "bespoke" integration approach.
Simply put, obvious data silos exist between AI applications. For Agents/LLMs to interoperate with each other, each has to develop its own corresponding API integrations; the process is complex, lacks bidirectional interaction, and usually comes with fairly limited model access and permission restrictions.
The emergence of MCP provides a unified framework that lets AI applications break out of these data silos, making "dynamic" access to external data and tools possible. It significantly reduces development complexity and improves integration efficiency, especially for automated task execution, real-time data queries, and cross-platform collaboration.
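To make the "USB interface" analogy concrete, here is a minimal sketch of what exposing a tool over MCP can look like, assuming the official MCP Python SDK's FastMCP interface; the token-price tool itself is a hypothetical stand-in, not a real service.

```python
# Minimal MCP server sketch (assumes the official `mcp` Python SDK's FastMCP API).
# Any MCP-compatible LLM/Agent client can discover and call this tool without
# a bespoke, end-to-end integration.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-data-server")

@mcp.tool()
def get_token_price(symbol: str) -> dict:
    """Hypothetical tool: return the latest price for a token symbol."""
    # A real server would query an exchange or an on-chain oracle here.
    fake_prices = {"AI16Z": 0.21, "ARC": 0.05}
    return {"symbol": symbol.upper(), "price_usd": fake_prices.get(symbol.upper())}

if __name__ == "__main__":
    # Serve over stdio so a local LLM client can plug in, "USB style".
    mcp.run(transport="stdio")
```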
Reading this far, many people's first reaction is: if Manus, which made its name on multi-Agent collaboration, is combined with MCP, an open-source framework that can also drive multi-Agent cooperation, wouldn't that be unstoppable?
Exactly. Manus + MCP is the key to this round of impact on web3 AI Agents.
What seems hard to accept, however, is that both Manus and MCP are frameworks and protocol standards built for web2 LLMs/Agents. They solve the problem of data interaction and collaboration between centralized servers, and their permission and access control still depend on each server node "voluntarily" opening up. In other words, they are merely open-source tooling.
In theory, it sounds absurd: how could this centralized "Italian cannon" blast open the decentralized fortress, when it completely deviates from the core ideas web3 AI Agents pursue, such as distributed servers, distributed collaboration, and distributed incentives?
The reason is that the first phase of web3 AI Agents was too "web2-ized". On the one hand, many teams come from a web2 background and lack a full understanding of web3-native requirements. Take the ElizaOS framework as an example: it was originally an encapsulation framework to help developers deploy AI Agent applications quickly. It integrates platforms such as Twitter and Discord, wraps "API interfaces" such as OpenAI, Claude, and DeepSeek, and packages general-purpose components like Memory and Character so developers can build and ship AI Agent applications fast. But if we look closely, what is the difference between this kind of service framework and web2 open-source tools? What are the differentiated advantages?
Uh, is the advantage simply having a set of Tokenomics incentives? And then using a framework that web2 could completely replace to incentivize a group of AI Agents that exist mainly to launch new coins? Scary... Following this logic, you can roughly understand why Manus + MCP can hit web3 AI Agents where it hurts.
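To make the "what's actually different?" question concrete, here is a deliberately simplified, illustrative sketch of what such a wrapper framework boils down to: a character config, an LLM API call, and a social client. None of it touches a blockchain. (The character fields and function are made up for illustration; the OpenAI client usage follows its standard SDK.)

```python
# Deliberately simplified, hypothetical sketch of an ElizaOS-style wrapper:
# a character config + an LLM API call + a social client. Nothing here
# requires a blockchain; the only web3-specific piece is usually the token.
import os
from openai import OpenAI  # any web2 model API would do (OpenAI/Claude/DeepSeek)

character = {
    "name": "DemoAgent",
    "bio": "Posts market takes.",   # "Character" layer
    "memory": [],                   # "Memory" layer: just a message list here
}

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def reply(user_message: str) -> str:
    character["memory"].append({"role": "user", "content": user_message})
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "system", "content": character["bio"]}, *character["memory"]],
    )
    answer = resp.choices[0].message.content
    character["memory"].append({"role": "assistant", "content": answer})
    return answer  # a Twitter/Discord client would then post this
```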
Since most web3 AI Agent frameworks and services only addressed the same rapid-development and application needs as web2 AI Agents, yet failed to keep up with web2's pace of innovation in technical services, standards, and differentiated advantages, the market and capital have re-valued and re-priced the previous batch of web3 AI Agents.
At this point the crux of the problem should be clear, but how to break through? There is only one way: focus on building web3-native solutions, because the operating and incentive mechanisms of distributed systems are web3's absolute differentiated advantage.
Take platforms that provide distributed cloud compute, data, algorithms, and other services as an example. On the surface, compute and data aggregated from idle resources cannot meet short-term engineering-innovation needs, and at a time when large numbers of AI LLM teams are piling up centralized compute in an arms race for performance breakthroughs, a service model selling "idle resources, low cost" naturally earns little respect from web2 developers and VC teams.
However, once web2 AI Agents move past the stage of competing on raw performance, they will inevitably pursue expansion into vertical application scenarios and fine-grained model optimization. That is when the advantages of web3 AI resource services will truly show.
In fact, once web2 AI has climbed to the top through monopoly-style expansion, it will be hard for it to turn back and fight the "surround the cities from the countryside" battle of cracking each niche scenario one by one. By then, surplus web2 AI developers and web3 AI resources will be joining forces.
In fact, beyond the narrative of "web2-style rapid deployment + multi-agent collaborative communication framework + Tokenomics issuance", there are many web3-native directions worth exploring:
For example, an Agent stack equipped with a distributed consensus and collaboration framework, given the combination of off-chain LLM computation and on-chain state storage, needs many adapted components:
A decentralized DID identity system, giving each Agent a verifiable on-chain identity similar to the unique address a virtual machine generates for a smart contract, mainly so that the Agent's subsequent state can be continuously tracked and recorded.
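As an illustration of the "contract-address-like" identity idea, here is a hedged sketch: it derives a deterministic Agent identifier from a deployer address and a nonce, loosely analogous to how Ethereum's CREATE derives contract addresses. The real scheme uses keccak-256 over an RLP encoding; plain SHA-256 and the did:agent prefix below are assumptions for illustration only.

```python
# Hypothetical sketch: a deterministic, verifiable Agent identity derived from
# (deployer, nonce), loosely analogous to contract-address derivation.
# Real chains use keccak-256 over RLP(sender, nonce); sha256 here is illustrative only.
import hashlib

def derive_agent_did(deployer_address: str, nonce: int) -> str:
    preimage = f"{deployer_address.lower()}:{nonce}".encode()
    digest = hashlib.sha256(preimage).hexdigest()
    return f"did:agent:{digest[:40]}"  # 20-byte identifier, address-like length

agent_id = derive_agent_did("0xAbC0000000000000000000000000000000000001", nonce=7)
# Every subsequent state update the Agent records on-chain can be keyed by agent_id,
# so its history stays continuously trackable.
```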
A decentralized oracle system, mainly responsible for trusted acquisition and verification of off-chain data. Unlike traditional oracles, an oracle adapted to AI Agents may also need a composite architecture of multiple Agents, including a data collection layer, a decision consensus layer, and an execution feedback layer, so that the on-chain data an Agent needs and its off-chain computation and decisions can stay in near-real-time sync.
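Here is a purely illustrative sketch of that three-layer idea (all names hypothetical): several collector Agents fetch the same off-chain value, a consensus step aggregates them (a simple median here), and an execution-feedback step packages the agreed value plus the raw observations so they can be verified later.

```python
# Illustrative sketch of the three layers described above (all names hypothetical):
# data collection -> decision consensus -> execution feedback.
from statistics import median
from typing import Callable

def collection_layer(sources: list[Callable[[], float]]) -> list[float]:
    """Each collector Agent independently fetches the same off-chain value."""
    return [fetch() for fetch in sources]

def consensus_layer(observations: list[float]) -> float:
    """Toy consensus: take the median to tolerate a minority of bad feeds."""
    return median(observations)

def feedback_layer(value: float, observations: list[float]) -> dict:
    """Package the agreed value and raw observations for later on-chain verification."""
    return {"value": value, "observations": observations}

# Usage with fake price feeds standing in for real off-chain sources:
feeds = [lambda: 0.210, lambda: 0.212, lambda: 0.950]   # one outlier feed
obs = collection_layer(feeds)
report = feedback_layer(consensus_layer(obs), obs)       # agreed value == 0.212
```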
A decentralized storage (DA) system. Because an Agent's knowledge-base state during operation is uncertain and its reasoning process is transient, the key state libraries and reasoning paths behind the LLM need to be recorded in a distributed storage system, with a cost-controlled data-proof mechanism to guarantee data availability for public-chain verification.
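One cheap way to make a reasoning trace verifiable without putting it all on-chain is to keep the full trace in distributed storage and anchor only a Merkle root on-chain. The following is a minimal, illustrative sketch of that commitment step, not any project's actual DA design.

```python
# Illustrative sketch: commit to an Agent's reasoning path with a Merkle root.
# Full steps go to distributed storage; only the 32-byte root goes on-chain,
# which keeps the data-proof cost controlled.
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(steps: list[str]) -> str:
    nodes = [_h(s.encode()) for s in steps] or [_h(b"")]
    while len(nodes) > 1:
        if len(nodes) % 2:                      # duplicate last node if odd count
            nodes.append(nodes[-1])
        nodes = [_h(nodes[i] + nodes[i + 1]) for i in range(0, len(nodes), 2)]
    return nodes[0].hex()

reasoning_path = ["observe: price feed", "retrieve: knowledge base v3", "decide: rebalance"]
onchain_commitment = merkle_root(reasoning_path)  # steps stored off-chain, root on-chain
```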
A zero-knowledge-proof (ZKP) privacy computation layer, which can link up with privacy-computing solutions such as TEE and FHE to achieve real-time private computation plus data-proof verification. This lets Agents tap a wider range of vertical data sources (medical, financial), and on top of that, more specialized customized Agent services can emerge.
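Real ZKP/TEE/FHE integrations need dedicated proving systems, but the interface this paragraph describes can be sketched roughly: the Agent keeps sensitive inputs private and publishes only the result plus a commitment. In the sketch below a plain hash commitment stands in for an actual zero-knowledge proof, so it shows the data flow, not the cryptography.

```python
# Interface sketch only: a hash commitment stands in where a real ZK proof /
# TEE attestation would go. This is NOT zero-knowledge; it only shows the shape
# of "private inputs stay off-chain, public result + proof go on-chain".
import hashlib, json, os

def private_compute(medical_record: dict) -> dict:
    result = {"risk_score": 0.73}                       # toy model output
    salt = os.urandom(16).hex()
    commitment = hashlib.sha256(
        (json.dumps(medical_record, sort_keys=True) + salt).encode()
    ).hexdigest()
    # Only `result` and `commitment` are published; the record and salt stay with the
    # Agent (or inside a TEE). A real deployment replaces `commitment` with a ZK proof.
    return {"result": result, "commitment": commitment}
```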
A cross-chain interoperability protocol suite, somewhat similar to the framework defined by the open-source MCP protocol. The difference is that this interoperability solution needs relay and communication-scheduling mechanisms that adapted Agents can run, pass through, and verify, capable of handling asset transfer and state synchronization across different chains, especially complex state such as an Agent's context, Prompts, knowledge base, Memory, and so on.
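To illustrate what "relaying and verifying Agent state across chains" could look like at the data level, here is a hypothetical sketch of a relay message: the heavy state (context, Prompts, Memory) travels by content hash, and the destination side only accepts it if the hash of the delivered payload matches.

```python
# Hypothetical sketch of a cross-chain relay message for Agent state.
# Heavy state (context / Prompts / Memory) is referenced by content hash so the
# destination side can verify integrity before applying it.
import hashlib, json
from dataclasses import dataclass

@dataclass
class AgentStateRelay:
    source_chain: str
    dest_chain: str
    agent_id: str
    payload_hash: str        # sha256 of the serialized state stored off-chain

def pack(state: dict, src: str, dst: str, agent_id: str) -> tuple[AgentStateRelay, bytes]:
    blob = json.dumps(state, sort_keys=True).encode()
    return AgentStateRelay(src, dst, agent_id, hashlib.sha256(blob).hexdigest()), blob

def verify_on_destination(msg: AgentStateRelay, delivered_blob: bytes) -> bool:
    return hashlib.sha256(delivered_blob).hexdigest() == msg.payload_hash

msg, blob = pack({"memory": ["traded ETH"], "prompt": "..."}, "base", "solana", "did:agent:ab12")
assert verify_on_destination(msg, blob)
```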
……
In my view, the real target for web3 AI Agents should be making the AI Agent's "complex workflow" and the blockchain's "trust verification flow" fit together as tightly as possible. These incremental solutions may come from the upgrade and iteration of existing older-narrative projects, or they may be built from scratch by projects on the new AI Agent narrative track.
That is the direction web3 AI Agents should strive to Build, and it fits the fundamental innovation ecology under the AI + Crypto macro narrative. Without such innovative exploration and differentiated competitive barriers, every move on the web2 AI track could shake web3 AI to its core.