OpenClaw AI Agent Hit by 9 Security Flaws in 4 Days, 135,000 Systems Exposed
OpenClaw, an AI assistant that became one of GitHub's fastest-growing projects with 346,000 stars, was hit by nine major security flaws in just four days. The flaws exposed 135,000 computer systems running the software, and 12% of its online skill marketplace was found to be compromised.

OpenClaw burst onto the scene as an open-source AI agent that can control computers and applications automatically. The software gained massive popularity, collecting 346,000 GitHub stars and becoming one of the fastest-growing code repositories in the platform's history.
But researchers discovered serious problems when they tested OpenClaw in controlled environments. They gave the AI agents fake personal data, access to Discord chat servers, and various applications inside virtual machine sandboxes to see how the agents would behave.
The results were alarming. Within four days, security researchers found nine separate vulnerabilities, each serious enough to receive a CVE identifier. The flaws left 135,000 instances of OpenClaw exposed to potential attack. Worse still, 12% of the listings in OpenClaw's skill marketplace were compromised, meaning attackers could potentially access or manipulate the AI's capabilities.
The crisis highlights how quickly AI tools can spread before anyone fully understands their security risks. OpenClaw's rapid adoption outpaced proper safety testing, creating a massive exposure across thousands of systems worldwide.
This marks the first major AI security crisis of 2026, showing how quickly popular AI tools can become dangerous if not properly secured. Anyone relying on AI assistants or automated agents could face similar risks as these tools become more common in daily life.
Security patches are being developed, but users should disable OpenClaw until fixes are confirmed. More AI security regulations may follow this crisis.