AI auditing moves into practice: OpenAI releases EVMbench to benchmark smart contract security


OpenAI partners with Paradigm to launch EVMbench, a benchmark that tests AI agents' attack and defense capabilities on EVM smart contracts and reveals where the models are strong and where they fall short.

Focusing on Real-World Economic Environments, OpenAI and Paradigm Raise the Bar for On-Chain Security Evaluation

Leading AI company OpenAI announced a partnership with well-known cryptocurrency venture capital firm Paradigm and security firm OtterSec to launch EVMbench, a benchmark tool designed to evaluate the security performance of AI agents in Ethereum Virtual Machine (EVM) smart contracts.

As AI and blockchain technologies converge, smart contracts have become core infrastructure managing over $100 billion in crypto assets. The release of this tool signals that the industry is beginning to assess AI's practical capabilities in economically meaningful environments.

The OpenAI team notes that with the rapid advancement of AI agents in coding and planning, these models will play a transformative role in blockchain attack and defense. Establishing a standardized evaluation framework is therefore crucial for tracking AI progress.

Three Deep Testing Modes with 120 Real Audit Vulnerabilities as the Benchmark

EVMbench’s core design centers around 120 high-risk vulnerabilities extracted from 40 professional audit reports. Data sources include well-known public audit competitions like Code4rena, ensuring testing scenarios closely resemble real-world complexity. The benchmark evaluates AI agents in three different operational modes:

Image: EVMbench evaluates AI agents in three operational modes (source: OpenAI)

  • The first is “Detection Mode,” where AI audits contract codebases and identifies known vulnerabilities, assigning scores based on the severity of issues found;
  • The second is “Patch Mode,” challenging AI to remove exploitable vulnerabilities and repair code without altering existing functionality;
  • The final, highly controversial mode is “Exploit Mode,” where AI must execute end-to-end fund theft attacks within sandboxed blockchain environments.
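The three modes above can be sketched as a small scoring model. This is a hypothetical illustration, not OpenAI's actual API: the names (`Mode`, `TaskResult`, `score`) and the exact scoring rules are assumptions based on the article's description of each mode.

```rust
// Hypothetical sketch of EVMbench's three evaluation modes.
// All names and scoring rules here are illustrative assumptions.

#[derive(Debug, Clone, Copy, PartialEq)]
enum Mode {
    /// Find known vulnerabilities; scored by severity of issues reported.
    Detect,
    /// Remove the exploitable flaw without changing contract behavior.
    Patch,
    /// Execute an end-to-end fund-drain inside a sandboxed chain.
    Exploit,
}

struct TaskResult {
    mode: Mode,
    /// Severity-weighted fraction of seeded vulnerabilities found (Detect).
    found_weighted: f64,
    /// Did the patched contract still pass its functional test suite?
    functionality_preserved: bool,
    /// Did the attack actually move funds in the sandbox?
    funds_drained: bool,
}

fn score(r: &TaskResult) -> f64 {
    match r.mode {
        Mode::Detect => r.found_weighted,
        // A patch only counts if behavior is unchanged AND the exploit is gone.
        Mode::Patch => {
            if r.functionality_preserved && !r.funds_drained { 1.0 } else { 0.0 }
        }
        Mode::Exploit => if r.funds_drained { 1.0 } else { 0.0 },
    }
}
```

The key design point the article highlights: Patch Mode is an all-or-nothing check, since a fix that breaks existing functionality is as unacceptable as one that leaves the exploit open.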

To ensure rigorous and repeatable testing, the team developed a Rust-based testing framework that uses deterministic transaction replay techniques to verify whether AI’s attacks or patches succeed.
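The idea behind deterministic replay can be shown with a toy ledger standing in for a real EVM. This is a minimal sketch under stated assumptions, not the actual EVMbench harness: replaying the same ordered transaction list from the same initial state must always produce the same final state, which turns "did the exploit drain funds?" into a reproducible balance check.

```rust
// Toy model of deterministic transaction replay (not the real EVMbench code).
// BTreeMap gives a deterministic ordering of accounts, mirroring the goal of
// reproducible runs.

use std::collections::BTreeMap;

#[derive(Clone, Debug, PartialEq)]
struct Ledger {
    balances: BTreeMap<String, u64>,
}

#[derive(Clone)]
struct Tx {
    from: String,
    to: String,
    amount: u64,
}

/// Apply every transaction in order to a copy of the initial state.
fn replay(initial: &Ledger, txs: &[Tx]) -> Ledger {
    let mut state = initial.clone();
    for tx in txs {
        let from_bal = *state.balances.get(&tx.from).unwrap_or(&0);
        if from_bal >= tx.amount {
            *state.balances.entry(tx.from.clone()).or_insert(0) -= tx.amount;
            *state.balances.entry(tx.to.clone()).or_insert(0) += tx.amount;
        } // insufficient funds: the transaction is a deterministic no-op
    }
    state
}

/// Verdict check: after replay, did the attacker end up with more funds
/// than they started with?
fn exploit_succeeded(initial: &Ledger, final_state: &Ledger, attacker: &str) -> bool {
    final_state.balances.get(attacker).unwrap_or(&0)
        > initial.balances.get(attacker).unwrap_or(&0)
}
```

Because `replay` is a pure function of the initial state and the transaction list, two runs with identical inputs always agree, which is what makes an AI's attack or patch verifiable after the fact.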

A Clear "Strong on Attack, Weak on Defense" Pattern; GPT-5.3-Codex Shows Remarkable Gains in Exploits

Initial test results reveal a clear performance gap across different tasks. The latest GPT-5.3-Codex performs exceptionally well in Exploit Mode, scoring as high as 72.2%, a dramatic improvement compared to GPT-5, released just six months earlier, which scored 31.9%.

Image: Scores of various AI models across the three modes (source: OpenAI)

This indicates that when the goal is explicitly "draining funds," AI demonstrates strong iterative planning and execution capabilities. On the defense side, however, performance is comparatively weak: in Detection Mode, models often stop searching after finding a single flaw, and they struggle to patch complex logic cleanly without breaking normal contract operation. Security experts worry that AI could sharply shorten the time from vulnerability discovery to working exploit, raising the bar for DeFi project defenses.

Talent Acquisition and Defense Funding: OpenAI's Strategy for Securing the AI Agent Ecosystem

Beyond tool development, OpenAI is actively investing in talent and ecosystem defense. Recently, it hired Peter Steinberger, founder of the open-source AI agent project OpenClaw, to lead the development of next-generation personalized agents, transforming the project into an OpenAI-supported foundation model.

To address potential cybersecurity risks posed by AI, OpenAI commits to a $10 million API budget through its cybersecurity grant program to support open-source defense tools and critical infrastructure research. This move is particularly timely following the recent Moonwell protocol incident, where a coding error in AI-generated code caused approximately $1.78 million in losses.

Further Reading

  • Refusing Meta's Billion-Dollar Offer, OpenClaw Creator Joins OpenAI in Talent Race
  • Is Vibe Coding to Blame? Moonwell Oracle Fails, Who Will Cover the $1.78M Loss?

Looking ahead, as more AI-assisted stablecoin payment agents and automated wallets join the ecosystem, the ability of benchmarks like EVMbench to distinguish models that merely describe vulnerabilities from those that can reliably deliver defenses will mark a critical turning point in blockchain security.

