NSA Using Anthropic's Mythos AI Tool Despite Pentagon Blacklist
The National Security Agency is using Anthropic's Mythos Preview AI tool despite the Pentagon giving the company a formal supply-chain risk designation. The NSA's use of the blacklisted technology raises questions about government cybersecurity policies.
The National Security Agency is using Anthropic's Mythos Preview AI tool even though the Pentagon has flagged the AI company as a supply-chain risk, according to a new report from Axios.
The Pentagon's designation means officials consider Anthropic potentially dangerous to use in government systems. Yet the NSA, America's top electronic spy agency, continues using the company's AI technology.
It's unclear exactly how the NSA is using Mythos. Other organizations with access to the tool mainly use it to scan their own computer systems for security holes that hackers could exploit.
The situation highlights potential disagreements within the U.S. government about AI safety and which companies can be trusted with sensitive work. The NSA handles some of America's most classified intelligence operations.
Anthropic creates AI systems that compete with ChatGPT and other popular AI tools. The company has been working to build relationships with government agencies even as some officials raise concerns about AI security risks.
The episode points to possible conflicts within the U.S. government over which AI tools are safe to use, and it may signal confusion about the cybersecurity rules meant to protect sensitive government data and operations.
Watch for clarification from the NSA about its use of Mythos and whether it will comply with Pentagon guidelines.