Cyber stocks plunge after reportedly leaked document shows Anthropic is worried its new model will enable indefensible online attacks
Cybersecurity stocks are suffering from another case of Claude-struption:
Palo Alto Networks, CrowdStrike, Cloudflare, Fortinet, Zscaler, and Okta are all slumping in premarket trading after Fortune reported that a data leak from Anthropic revealed an updated AI model the company fears is so powerful that malicious actors could launch cyberattacks that these companies wouldn’t be able to defend against.
Per the leaked document reviewed by Fortune, the new model “presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders,” and Anthropic plans to release it early to cybersecurity companies in order to help improve their ability to withstand attacks.
According to experts cited by Fortune, the leak occurred because digital assets created in Anthropic’s content management system “are set to public by default” unless a user manually switches them to private. Anthropic chalks this up to “human error.”
But given that Claude Cowork was built with Claude Code, one presumes Anthropic makes extensive use of its own AI tools for the code and products it deploys both internally and externally.
This leaves us with a bit of a conundrum. Anthropic is simultaneously able to:
Develop an AI model so powerful that traditional cyber defenders might be bringing a paper shield to a gun fight; and
Fail to apply anything resembling appropriate safeguards to its own information and products, despite having those same powerful AI tools at its disposal.
When “hey, maybe make sure we don’t default to publishing information publicly!” counts as an improvement to a company’s cybersecurity standards, it’s a little difficult to trust that company’s assessment of future threats.
These cyber stocks had previously slumped in late February after Anthropic launched a new security feature for its AI model.