Manus brings the dawn of AGI, and AI security is also worth pondering


Manus achieved a SOTA (state-of-the-art) score on the GAIA benchmark, outperforming OpenAI's large models of the same tier. In other words, it can independently complete complex tasks such as cross-border business negotiations, which involve breaking down contract terms, predicting the counterparty's strategy, generating proposals, and even coordinating legal and finance teams. Compared with traditional systems, Manus's advantages lie in dynamic task decomposition, cross-modal reasoning, and memory-augmented learning: it can break a large task into hundreds of executable subtasks, process multiple data types at the same time, and use reinforcement learning to continuously improve its decision-making efficiency and reduce its error rate.


Beyond marveling at how fast the technology is moving, Manus has once again sparked debate in the community over the evolutionary path of AI: will AGI dominate in the future, or will multi-agent systems (MAS) prevail through collaboration?

This starts with Manus' design philosophy, which implies two possibilities:

One is the AGI path: continuously raising the intelligence of a single agent until it approaches the comprehensive decision-making ability of a human.

The other is the MAS path: acting as a super-coordinator that commands thousands of vertical-domain agents to work together.

On the surface we are debating different paths, but underneath lies the fundamental tension of AI development: how should efficiency and security be balanced? The closer a monolithic intelligence gets to AGI, the higher the risk of black-box decision-making; multi-agent collaboration spreads that risk, but communication delays can cause it to miss critical decision windows.

The evolution of Manus has quietly magnified the inherent risks of AI development. Consider the data privacy black hole: in medical scenarios, Manus needs real-time access to patient genomic data, and during financial negotiations it may touch a company's undisclosed financials. Or the algorithmic bias trap: in hiring negotiations, Manus has given below-average salary recommendations to candidates of a particular ethnicity, and when reviewing legal contracts its misjudgment rate on emerging-industry terms approaches fifty percent. Or adversarial attack vulnerabilities: hackers can implant specific voice frequencies to make Manus misjudge the counterparty's offer range during a negotiation.

We have to face an uncomfortable pain point of AI systems: the smarter the system, the wider its attack surface.

Security, however, is a word often invoked in Web3, where a variety of cryptographic approaches have grown up around Vitalik Buterin's "impossible triangle" (the blockchain trilemma: a blockchain network cannot simultaneously achieve security, decentralization, and scalability):

- **Zero Trust security model**: the core idea is "never trust, always verify". No device is trusted by default, whether or not it sits on the internal network; every access request is strictly authenticated and authorized to keep the system secure.
- **Decentralized Identity (DID)**: a set of identifier standards that let entities be identified in a verifiable, persistent way without a centralized registry. This enables a new model of decentralized digital identity, often compared to self-sovereign identity, and is an essential part of Web3.
- **Fully Homomorphic Encryption (FHE)**: an advanced encryption technique that allows arbitrary computation on encrypted data without decrypting it. A third party can operate on the ciphertext, and the decrypted result equals the result of the same operation on the plaintext. This matters wherever computation must happen without exposing raw data, such as cloud computing and data outsourcing.

Zero-trust security models and DIDs have produced a fair number of projects across multiple bull markets, some of which succeeded and some of which drowned in the crypto wave. The youngest of the three, fully homomorphic encryption (FHE), which allows computation to take place directly on encrypted data, is also a powerful weapon for solving security problems in the AI era.
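To make the "compute on ciphertext" idea concrete, here is a minimal sketch in Python of the Paillier cryptosystem. Paillier is only *additively* homomorphic, a precursor of full FHE rather than FHE itself, and the primes below are toy parameters chosen for readability, far too small to be secure. The point it illustrates is the core homomorphic property: multiplying two ciphertexts yields a ciphertext of the *sum* of the plaintexts, so a third party can add encrypted numbers without ever seeing them.

```python
import math
import random

# Toy Paillier parameters (NOT secure; real keys use ~1024-bit primes)
p, q = 293, 433
n = p * q                     # public modulus
n2 = n * n
g = n + 1                     # standard generator choice
lam = math.lcm(p - 1, q - 1)  # private key component

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # private key component

def encrypt(m):
    # c = g^m * r^n mod n^2, with random r coprime to n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Homomorphic property: ciphertext multiplication = plaintext addition
c1, c2 = encrypt(41), encrypt(17)
assert decrypt((c1 * c2) % n2) == 41 + 17
```

Full FHE schemes (as implemented by libraries such as ZAMA's) extend this idea to both addition and multiplication, which is what makes arbitrary computation on ciphertext possible.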

So how does FHE fix this?

First, at the data level: all information a user enters (including biometrics and voice tone) is processed in encrypted form, and even Manus itself cannot decrypt the raw data. In a medical diagnosis, for example, a patient's genomic data is analyzed entirely as ciphertext, so no biological information leaks.

Second, at the algorithmic level: encrypted model training via FHE means that even developers cannot peer into the AI's decision path.

Third, at the collaboration level: communication among multiple agents uses threshold encryption, so breaching a single node does not cause a global data leak. Even in supply-chain attack-and-defense drills, attackers who infiltrate several agents still cannot assemble a complete view of the business.
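The "one breached node leaks nothing" property can be illustrated with Shamir secret sharing, a standard building block of threshold cryptography. This is a minimal sketch with hypothetical parameters: a secret is split into five shares so that any three can reconstruct it, while any smaller subset reveals nothing about the secret.

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime; all arithmetic is in this field

def split(secret, k, n):
    """Split secret into n shares; any k of them reconstruct it."""
    # Random polynomial of degree k-1 whose constant term is the secret
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
    return total

shares = split(123456, k=3, n=5)
assert reconstruct(shares[:3]) == 123456   # any 3 shares suffice
assert reconstruct(shares[1:4]) == 123456  # a different 3 also work
```

In the multi-agent picture, each agent would hold one share of a communication key: an attacker who compromises one or two agents gains nothing, because below the threshold the shares are statistically independent of the secret.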

Because of the technical barrier, Web3 security may not touch most users directly, but their indirect interests are inextricably tied to it.

uPort, launched on the Ethereum mainnet in 2017, was probably the first decentralized identity (DID) project released on a mainnet. On the zero-trust side, NKN launched its mainnet in 2019. Mind Network is the first FHE project to go live on mainnet, and it has taken the lead in partnering with ZAMA, Google, DeepSeek, and others.

uPort and NKN are projects I had never even heard of; it seems security projects really do get little attention from speculators. Let's wait and see whether Mind Network can escape this curse and become a leader in the security field.

The future is here. The closer AI is to human intelligence, the more it needs non-human defenses. The value of FHE is not only to solve today's problems, but also to pave the way for the era of strong AI. On this treacherous road to AGI, FHE is not an option, but a necessity for survival.
