
Mythos remains a mystery as the security world faces increasing threats, agentic attacks and concerns about the integrity of AI.

Anthropic PBC’s Claude Mythos model has emerged as a widely discussed artificial intelligence solution despite not yet being fully released.

Information about the model, which reportedly has the ability to analyze software at scale, find bugs in robust software ecosystems, and identify vulnerabilities, is tightly controlled by Anthropic. That situation didn’t change much Monday when Anthropic’s Head of Threat Intelligence Jacob Klein (pictured) spoke at the SANS Cybersecurity Summit at a hotel outside Washington DC, although he did offer a glimpse of the model’s capabilities during his appearance.

Klein gave a brief explanation of the model’s power in the context of how quickly AI has changed the world of cybersecurity, and he promised that more details would be made public in the coming months.

“It’s great at finding weaknesses and bringing them together to do something,” Klein told the group. “You have to rethink what your risk picture looks like now. The landscape has changed today. There are trade-offs we have to balance. We’re going to be transparent, and I would hope that other labs will have the same level of transparency.”

Breaches are accelerating

Klein’s appearance at the SANS Institute gathering comes at a time when the pace of AI breakthroughs has grown exponentially. Last weekend, cloud development platform Vercel Inc. disclosed that its internal systems were compromised due to a breach of Context.ai, a third-party tool used by a Vercel employee.

Hackers have since claimed to have stolen customer credentials from Vercel and put the data up for sale online. This followed a report earlier this month that a North Korean threat actor had injected malicious code into the widely used JavaScript library Axios, as adversaries use AI to probe every link in the supply chain.

Events like these and discussions about Mythos prompted a meeting at the White House between the Treasury Secretary and Anthropic’s chief executive late last week. This weekend, the Financial Times reported that major banks are strengthening their defenses against the growing number of cyber attacks.

“AI capabilities increase the level of attack that attackers have at their disposal,” Klein said.

Moving at machine speed

Anthropic’s head of threat intelligence presented a brief history of how the Claude AI model was adopted by malicious actors, illustrating how quickly the cyberthreat landscape has evolved.

The company first saw evidence of Claude’s misuse in the spring of 2025, when one actor used the model to create a botched ransomware attack. Two months later, Anthropic found a Russian hacker who used Claude to carry out a scam. By September 2025, the company had evidence that a state-sponsored group in China was using Claude for reconnaissance, penetration testing at scale, exploitation, access and lateral movement within breached networks.

Klein noted that the goal in the Chinese example was espionage and data exfiltration, with 80% to 90% of the actions being automated.

“Once it was built, it was easy,” Klein said. “It was largely Claude itself taking the actions. The human operator essentially became a manager.”

Just as popular AI models have enabled well-intentioned non-programmers to create agents that perform tasks at lightning speed, Anthropic’s research highlights how threat actors are following the same playbook to create exploit tools they could not build themselves. The company has mapped 800 bad actors against MITRE techniques to get a better picture of how adversaries are using AI to circumvent defenses, and the report should be available soon, according to Klein.

“Nowadays AI systems are becoming an important part of the architecture of bad actors,” Klein said. “My job is to find bad actors and understand what they’re doing.”

Building strong defensive structures

Klein’s point about AI systems becoming an integral part of threat actors’ architectures highlights how rapidly the cyberthreat landscape is changing. Mythos may represent the type of architecture or scaffolding needed to defend effectively against AI-driven attacks, according to one leading security researcher.

Knostic founder Sounil Yu discussed the latest AI threats during the SANS Cybersecurity Summit.

Speaking at the SANS conference, Knostic Inc. founder and chief AI security officer Sounil Yu used the analogy of a “big bad wolf” blowing down a “three little pigs” house made only of straw.

“Many think we should simply build with bricks, but instead we should focus on the concept of architecture,” Yu told the SANS gathering. “Sometimes architecture is more important than materials.”

The development of tools like Mythos that can strengthen cyber defenses and build robust infrastructure has gained momentum in recent months with the increasing adoption of AI agents. The most prominent example of this evolution has been OpenClaw, the most popular open-source personal AI assistant, which ships with very weak security controls.

Nvidia Corp., Cisco Systems Inc. and Knostic have all released security-enhanced versions of OpenClaw in an effort to keep the tool from opening up new vulnerabilities to enterprise organizations.

“The Claw is already out of the bag, and you probably already have one running in your organization,” Yu noted. “Unfortunately, OpenClaw defaults to a dangerous configuration, pulling in skills from who knows where. OpenClaw is actually a wake-up call for many businesses.”

A call for integrity

That wake-up call has also led some prominent voices in the cybersecurity world to warn about AI heading down a dishonest path. As AI takes on a larger role in the world, can it be trusted?

This is a problem the cybersecurity community must face, according to Bruce Schneier, a lecturer at the Harvard Kennedy School and currently an adjunct professor at the University of Toronto. Schneier expressed concern that the current lack of oversight around the use of AI, and the motives of nation-states, could lead to more dangerous consequences for the world.

“We are already seeing Russian attacks that poison training data,” Schneier said during his presentation. “Imagine AI being used as an advisor in international trade negotiations. There will be an economic incentive to hack that AI. We need trustworthy AI.”

Schneier said this can only be achieved through government intervention, via transparent regulation of AI and robotic safety. He made the point that focusing on the integrity of AI will be a key responsibility for security professionals at a time when AI is increasingly seen as a trusted advisor and agentic worker.

“I predict that integrity is the key security issue of the next decade,” Schneier said. “Our reliance on AI will increase. We will think of AI as a friend, and it is not.”

Photos: SANS Institute/livestream


About SiliconANGLE Media

SiliconANGLE Media is a recognized leader in digital media innovation, integrating breakthrough technology, strategic insights and real-time audience engagement. As the parent company of SiliconANGLE, CUBE Network, CUBE Research, CUBE365, CUBE AI and CUBE SuperStudios – with flagship locations in Silicon Valley and at the New York Stock Exchange – SiliconANGLE Media works at the intersection of media, technology and AI.

Founded by technology visionaries John Furrier and Dave Vellante, SiliconANGLE Media has built a dynamic ecosystem of industry-leading digital media products that reach more than 15 million elite technology professionals. Our newest offering, CUBE AI Video Cloud, is redefining audience engagement, using CUBEai.com’s neural network to help technology companies make data-driven decisions and stay at the forefront of industry conversations.
