Anthropic Warns of New ‘Vibe Hacking’ Attacks That Use Claude AI


Last Updated: August 27, 2025

Anthropic, the company behind the popular AI model Claude, said in a new Threat Intelligence report that it disrupted a “vibe hacking” extortion scheme. In the report, the company detailed how a cybercriminal scaled up a mass attack against 17 targets, including entities in government, healthcare, emergency services, and religious organizations.

(You can read the full report in this PDF file.)

Anthropic says its Claude AI technology was used as both a “technical consultant and active operator, enabling attacks that would be more difficult and time-consuming for individual actors to execute manually.” Claude was used to “automate reconnaissance, credential harvesting, and network penetration at scale,” the report said.

What makes the findings more disturbing is that so-called vibe hacking was considered a future threat, with some experts believing it was not yet possible. Anthropic’s report may have revealed a major shift in how AI models and agents are used to scale up massive cyberattacks, ransomware schemes, or extortion scams.

Separately, Anthropic has also recently been dealing with other AI issues, namely settling a lawsuit by authors claiming Claude was trained on their copyrighted materials. Another company, Perplexity, has been dealing with its own security issues, as its Comet AI browser was shown to have a major vulnerability.



