Smaller AI Models Find Same Security Flaws as Mythos for $0.11
Eight smaller AI models detected the same major FreeBSD security flaw that Mythos found, including one with only 3.6 billion parameters that costs just $0.11 per million tokens. Researchers at AISLE, an AI cybersecurity startup, tested these cheaper models against the vulnerabilities showcased by Anthropic's Mythos system.

Researchers have discovered that much smaller and cheaper AI models can find the same critical security vulnerabilities that made headlines when Mythos detected them. Eight out of eight smaller models tested successfully identified Mythos's flagship FreeBSD vulnerability.
The research was conducted by AISLE, an AI cybersecurity startup, which tested various open-source models against the same vulnerabilities that Anthropic's Mythos system found. The smallest successful model had only 3.6 billion active parameters and cost just $0.11 per million tokens to operate.
This finding challenges the assumption that only large, expensive AI systems can effectively hunt for security flaws. The FreeBSD vulnerability that these models detected had survived 27 years of human security reviews before AI systems spotted it.
The finding suggests that cybersecurity teams may not need cutting-edge AI models to dramatically improve their vulnerability detection capabilities. Instead, they could deploy smaller, more affordable models to scan their software for the same kinds of critical flaws.
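To make the affordability claim concrete, here is a back-of-envelope cost sketch using the $0.11-per-million-token rate reported for the smallest successful model. The codebase size and tokens-per-line figures are illustrative assumptions, not numbers from the study:

```python
# Rough cost estimate for scanning a codebase with a small model.
# The $0.11/M-token rate comes from the article; the codebase size and
# tokens-per-line ratio below are illustrative assumptions.

COST_PER_MILLION_TOKENS = 0.11  # USD, smallest successful model in the study

def scan_cost(total_tokens: int) -> float:
    """Approximate USD cost to process `total_tokens` tokens once."""
    return total_tokens / 1_000_000 * COST_PER_MILLION_TOKENS

# Assume a mid-sized codebase: 500,000 lines at roughly 10 tokens per line.
tokens = 500_000 * 10  # 5 million tokens
print(f"${scan_cost(tokens):.2f}")  # → $0.55
```

Even under generous assumptions about codebase size, a single pass costs well under a dollar, which is the practical point behind the study's headline figure.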
More companies will likely test smaller AI models for cybersecurity tasks as costs become more manageable.