A new, highly advanced AI cybersecurity model developed by Anthropic, known as Claude Mythos, is raising global concerns about the future of digital security. The model is reportedly so powerful that it has not been released publicly, owing to its ability to identify and exploit vulnerabilities across large, complex systems.
Instead, access has been limited to a small group of major US tech companies, including Microsoft, Apple, Cisco, and Amazon Web Services, allowing them to strengthen their defences. Australian banks, infrastructure providers, and other critical industries, however, do not currently have access, leaving them unable to test their systems against such advanced threats.
Experts warn that the real issue is not this single model, but the rapid acceleration of AI capabilities overall. These tools can now detect weaknesses, chain them together, and generate exploit code at unprecedented speed, potentially outpacing current cybersecurity measures. The result is a growing “arms race” between attackers and defenders, in which both sides may soon have access to similar technologies.
Australian regulators and industry leaders are monitoring the situation closely, but some experts argue that more proactive collaboration is needed among government, AI developers, and infrastructure providers to stay ahead. Despite the risks, analysts stress that this is not a moment for panic but for preparation, regulation, and strategic investment in AI-driven defence systems.
Ultimately, the rise of tools like Mythos highlights a critical turning point: cybersecurity will increasingly depend on “AI versus AI,” and nations that fail to adapt quickly may fall behind.