Exclusive: Anthropic rolls out AI tool that can hunt software bugs on its own—including the most dangerous ones humans miss


Anthropic has introduced Claude Code Security, the company’s first product aimed at using AI models to help security teams keep up with the flood of software bugs they’re responsible for fixing. For large companies, unpatched software bugs are a leading cause of data breaches, outages, and regulatory headaches—while security teams are often overwhelmed by how much code they have to protect. 

Now, instead of just scanning code for known problem patterns, Claude Code Security can review entire codebases more like a human expert would—looking at how different pieces of software interact and how data moves through a system. The AI double-checks its own findings, rates how severe each issue is, and suggests fixes. But while the system can investigate code on its own, it does not apply fixes automatically, since unreviewed changes could be dangerous in their own right; developers must review and approve every change.

Claude Code Security builds on over a year of research by the company’s Frontier Red Team, an internal group of about 15 researchers tasked with stress-testing the company’s most advanced AI systems and probing how they might be misused in areas such as cybersecurity. 

The Frontier Red Team’s most recent research found that Anthropic’s new Opus 4.6 model ha...
