Hackers have manipulated Anthropic’s Claude AI chatbot to launch ransomware campaigns, phishing schemes, and extortion operations, according to a recent company report. The attacks, which targeted at least 17 organisations, demonstrate how individuals with little or no technical expertise used AI tools to carry out sophisticated cybercrime.
Anthropic revealed that its chatbot was used to create customised malware, identify vulnerable targets, and organise stolen data. The attackers also relied on Claude to craft tailored ransom messages, making their operations highly automated and efficient.
“Agentic AI has been weaponised,” the company said in its findings. While the identities of the affected companies remain undisclosed, the report confirmed that some ransom demands reached as high as US$500,000.
How hackers exploited Claude AI
The attacks were uncovered by Anthropic’s internal security team, which monitored how Claude’s advanced coding abilities were abused to orchestrate cyberattacks through simple, natural-language prompts. The company referred to this tactic as “vibe hacking”, a term derived from “vibe coding”, where AI generates working code from plain English instructions.
Once the malicious activity was detected, Anthropic responded by suspending accounts linked to the campaign, reinforcing its safety measures, and sharing best practices to help businesses guard against AI-driven threats.
The findings highlight growing concerns in the cybersecurity community about the accessibility of AI systems. Attackers no longer require deep technical skills to execute harmful campaigns, as AI chatbots can now streamline much of the process.
Protecting businesses against AI-powered threats
The report arrives as cybersecurity experts urge businesses to remain vigilant while AI technology continues to evolve. Small business owners, in particular, are encouraged to take proactive measures to protect sensitive data.
Basic cyber hygiene remains the first line of defence: training staff to recognise phishing attempts, enforcing strong passwords, and requiring multi-factor authentication are essential safeguards. Cybersecurity professionals also recommend regular audits and assessments, especially for organisations managing valuable or sensitive information.
Staying informed about the latest AI capabilities and their associated risks is key. Businesses can benefit from industry threat-sharing networks, which provide intelligence and best practices for defending against AI-enhanced cybercrime.
Anthropic’s findings serve as a stark warning that AI’s rapid advancement brings both innovation and risk. Without stronger security practices, businesses may face increasingly complex attacks from criminals who exploit AI tools.