In June 2024, a cyber-attack on a pathology services company caused chaos across London’s hospitals. More than 10,000 appointments were cancelled. Blood shortages followed and delays to blood tests led to a patient’s death.
Lethal cyber-attacks like this are thankfully rare. But a new AI release could change that – plunging us into a terrifying new world of chaos and disruption to the digital systems that we rely on.
This week Anthropic, a leading AI company in San Francisco, announced “Claude Mythos Preview”, an AI model that the startup says is too dangerous to publicly release, thanks to its exceptional cybersecurity – and cyber-attacking – capabilities. Mythos, the company claims, has found vulnerabilities in every major browser and operating system. In other words, this new AI model might be able to help hackers disrupt much of the world’s most important software.
“This is Y2K-level alarming,” one security expert said. Already, Mythos has found a 27-year-old bug in a critical piece of security infrastructure and multiple vulnerabilities in the Linux kernel, essential for computer systems worldwide. These weak points could threaten almost everything on the internet from the streaming services you relax with to the banking systems you rely on.
Most of this is just marketing crap from Anthropic.
Finding vulnerabilities in code and generating complex, multistep exploits with publicly available models is possible now. This biggest hurdles now is setting correct context and actually knowing what to look for. Any “guardrails” for this behavior are easily bypassed by framing the detection and exploit generation as a valid dev style question in the most difficult of situations.
They likely just trained a model without guardrails in this case.
What they are doing here is over-hyping a problem and framing it like they are the only ones with a solution. LLM security issues are more in-focus now that companies have dumped a ton of resources into building AI systems they don’t really understand.



