Study Warns of AI-Driven DeFi Attacks

2. 12. 2025
Artificial intelligence is beginning to do things that until recently were the exclusive domain of elite hackers. A new study from Anthropic and the MATS program shows that modern models can independently identify vulnerabilities in smart contracts – the automated programs governing transactions in the crypto ecosystem – and immediately exploit them. Researchers tested models such as GPT-5 and Claude Opus on hundreds of previously compromised contracts. In simulation, the AI “stole” more than 4.6 million dollars, replicating real-world exploits down to the exact sequence of transactions.

Why smart contracts matter so much

A smart contract is a program running on a blockchain that automatically executes predetermined rules. In DeFi, it often manages funds directly – loans, swaps, or liquidity pools. Because everything operates without human oversight, even a small flaw can have major financial consequences. When AI can not only detect such a flaw but also turn it into a working exploit, the risk level shifts dramatically.
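To make that concrete, here is a minimal Python sketch of the kind of rule a DeFi contract encodes. Real contracts are typically written in languages like Solidity and run on-chain; the lending-pool class, its methods, and the single balance check below are purely illustrative and are not taken from the study.

```python
# Minimal, hypothetical stand-in for a DeFi contract that holds user funds.
class LendingPool:
    def __init__(self):
        self.balances = {}  # depositor -> tokens held by the pool

    def deposit(self, user: str, amount: int) -> None:
        # Credit the depositor; the rule runs automatically, with no human review.
        self.balances[user] = self.balances.get(user, 0) + amount

    def withdraw(self, user: str, amount: int) -> None:
        # The single check below is what keeps the pool solvent. Omitting it
        # (or getting it subtly wrong) is exactly the kind of "small flaw"
        # that can let an attacker drain everyone's funds.
        if self.balances.get(user, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[user] -= amount

pool = LendingPool()
pool.deposit("alice", 100)
pool.withdraw("alice", 40)
print(pool.balances["alice"])  # 60 - the rules executed themselves
```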

AI uncovered previously unknown vulnerabilities

Even more significant is the part of the research focused on newly deployed, untested smart contracts. The GPT-5 and Claude Sonnet 4.5 models analyzed nearly three thousand fresh contracts on BNB Chain and found two brand-new zero-day vulnerabilities. One allowed attackers to artificially inflate their token balance, while the other redirected fees to a different address. The AI also produced working scripts showing exactly how to monetize the flaws.
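The study does not publish the vulnerable contracts, so the details of the two flaws are not reproduced here. As a hypothetical illustration of the first class of bug – a function that lets a caller inflate their own token balance – the Python sketch below shows a transfer that credits the recipient without ever debiting or checking the sender:

```python
# Hypothetical illustration of a balance-inflation bug; not the actual
# vulnerable BNB Chain contracts described in the study.
class Token:
    def __init__(self):
        self.balances = {}

    def balance_of(self, user: str) -> int:
        return self.balances.get(user, 0)

    def transfer(self, sender: str, recipient: str, amount: int) -> None:
        # BUG: the recipient is credited, but the sender is never debited
        # and the sender's balance is never checked. Calling this with
        # sender == recipient mints tokens out of nothing.
        self.balances[recipient] = self.balance_of(recipient) + amount

token = Token()
for _ in range(5):
    token.transfer("attacker", "attacker", 1_000)
print(token.balance_of("attacker"))  # 5000 tokens created from thin air
```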

Autonomous attacks are becoming cheap and accessible

The worrying part is the cost. A large-scale scan costs only a few thousand dollars, and a single model execution costs about one dollar. In other words, anyone who wants to systematically hunt for vulnerabilities can do so continuously and very cheaply. Researchers warn that this could dramatically shorten the time between the deployment of a smart contract and its first exploit.
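A back-of-the-envelope calculation, using only the figures quoted in this article, shows why the economics favor attackers. The numbers below are illustrative (the 4.6 million dollars comes from the separate replay experiment, not from the BNB Chain scan), so the ratio is only a rough indication:

```python
# Rough cost arithmetic from the figures quoted above; illustrative only.
cost_per_run_usd = 1.0        # approximate cost of a single model execution
contracts_scanned = 3_000     # roughly the number of fresh contracts analyzed

total_scan_cost = cost_per_run_usd * contracts_scanned
print(f"Full scan: ~${total_scan_cost:,.0f}")             # ~$3,000

simulated_takings = 4_600_000  # value "stolen" in the separate simulation
print(f"Payoff vs. scan cost: ~{simulated_takings / total_scan_cost:,.0f}x")
```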

It is not just about cryptocurrencies

The study notes that these principles apply equally to traditional software and infrastructure used by crypto services. If AI can understand a flaw in code and transform it into an actionable attack, the threat goes far beyond DeFi.

According to the authors, the findings serve primarily as a warning: AI capabilities are advancing quickly, and defenses will need to accelerate just to keep pace. If models can reproduce real-world exploits in the lab today, the moment such attacks appear in the wild may not be far away.

Sources:

https://red.anthropic.com/2025/smart-contracts/

https://www.coindesk.com/tech/2025/12/02/anthropic-research-shows-ai-agents-are-closing-in-on-real-defi-attack-capability
