Anthropic has introduced new rules for the use of its Claude AI chatbot, aiming to address growing concerns about misuse as AI tools become more capable and more widely deployed. The updated policy sets out stricter cybersecurity requirements and places explicit restrictions on the development of some of the world’s most dangerous weapons.
Expanded restrictions on weapons development
Previously, Anthropic banned the use of Claude to “produce, modify, design, market, or distribute weapons, explosives, dangerous materials or other systems designed to cause harm to or loss of human life.” The new version strengthens this rule by naming specific categories of weapons: users are now explicitly prohibited from using Claude to help develop high-yield explosives or chemical, biological, radiological, and nuclear (CBRN) weapons.
The move follows the company’s introduction of “AI Safety Level 3” protections in May, alongside its Claude Opus 4 model. These measures are designed to make the system harder to jailbreak and to prevent it from assisting in the design or creation of CBRN weapons.
Addressing cybersecurity and agentic AI risks
Anthropic also highlighted concerns about the risks posed by more advanced and autonomous AI tools. These include “Computer Use”, a feature that allows Claude to control a user’s computer directly, and “Claude Code”, which integrates the system into a developer’s terminal.
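For context on what “Computer Use” involves at a technical level, the sketch below shows roughly how a developer enables the tool through Anthropic’s Python SDK. The model name, tool type, and beta flag are taken from the October 2024 computer-use beta and may differ in current versions, so treat this as an illustrative assumption rather than up-to-date reference code.

```python
# Illustrative sketch only: enabling the "Computer Use" tool via Anthropic's
# Python SDK. The model name, tool type, and beta flag reflect the October
# 2024 computer-use beta and may have changed in newer releases.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.beta.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    tools=[{
        "type": "computer_20241022",   # screen/mouse/keyboard tool definition
        "name": "computer",
        "display_width_px": 1024,
        "display_height_px": 768,
    }],
    messages=[{"role": "user", "content": "Open the system settings panel."}],
    betas=["computer-use-2024-10-22"],
)

# Claude does not touch the machine itself: it returns tool_use blocks
# (clicks, keystrokes, screenshot requests) that the calling application
# must execute and report back. That direct control of a user's machine is
# what the updated policy is concerned with.
for block in response.content:
    print(block.type, getattr(block, "input", None))
```

The key point for the policy discussion is that this tool turns the model’s output into real actions on a machine, which is why Anthropic groups it with Claude Code when describing the new risks.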
“These powerful capabilities introduce new risks, including potential for scaled abuse, malware creation, and cyber attacks,” the company wrote in its policy update.
In response, a new section has been added to the rules, titled “Do Not Compromise Computer or Network Systems.” This prohibits users from employing Claude to identify or exploit security vulnerabilities, create or distribute malware, or develop tools for denial-of-service attacks. By expanding its cybersecurity policy, Anthropic is aiming to minimise risks linked to hacking, fraud, and large-scale digital abuse.
Adjustments to political content rules
While restrictions have tightened in some areas, Anthropic has also eased its stance on political content. Previously, all campaign-related and lobbying content was banned. Under the new guidelines, the company will prohibit only use cases that are “deceptive or disruptive to democratic processes, or involve voter and campaign targeting.”
Anthropic further clarified its rules for “high-risk” use cases. These requirements apply only when Claude is used in consumer-facing scenarios, not in internal business applications. This distinction aims to give businesses greater flexibility when deploying AI in professional settings, while still safeguarding individuals from potential harm.
The changes reflect Anthropic’s effort to strike a balance between innovation and safety as AI systems become increasingly powerful and widely available. By tightening rules around weapons and cybersecurity while refining its political guidelines, the company is aiming to prevent misuse without overly limiting legitimate uses of Claude.