Anthropic, the company behind the popular Claude chatbot, has announced a new policy that will see transcripts of user chats used to train its artificial intelligence models. The change has begun rolling out to users, who have until 28 September to review and accept the updated terms.
Claude, widely considered a strong contender to enhance Apple’s Siri, will now use logs of user interactions to improve performance, strengthen safety measures, and detect harmful content more accurately. The policy also applies to Claude Code, the company’s developer-focused version of the chatbot.
“By participating, you’ll help us improve model safety, making our systems for detecting harmful content more accurate and less likely to flag harmless conversations,” said Anthropic. While the change represents a shift in the company’s approach to data use, it is not mandatory. Users can opt out of model training by adjusting the platform’s settings.
Notifications about the change will continue to appear until 28 September, giving users the option to disable the “You can help improve Claude” toggle before accepting the terms. After this date, users will need to adjust their settings manually via the model training dashboard.
New chats affected, older conversations excluded
Anthropic confirmed that only new or resumed conversations will be included in AI training, with previous chats remaining excluded. This change reflects a broader trend in the AI sector, where companies are seeking more data to enhance model capabilities amid growing competition and limited access to high-quality training material.
Existing Claude users who wish to opt out of contributing data can do so via Settings > Privacy > Help Improve Claude. The updated policy applies to the Claude Free, Pro, and Max plans. However, it does not affect Claude for Work, Claude Gov, Claude for Education, API usage, or instances where the service is accessed through third-party platforms such as Google’s Vertex AI and Amazon Bedrock.
The decision underscores Anthropic’s commitment to enhancing Claude’s overall performance and ensuring it remains competitive in a rapidly evolving AI market. The company has previously positioned Claude as a safer, more reliable chatbot, prioritising trust and user control.
Data retention extended to five years
Alongside its new training policy, Anthropic is revising its data retention rules. User data may now be stored for up to five years, though conversations that are manually deleted will not be used for AI training. The company stated that this approach would help refine its models while providing users with the ability to manage their own data.
Anthropic’s decision to make data training opt-out contrasts with competitors that have adopted mandatory data usage policies. By allowing users to choose, the company aims to strike a balance between transparency, safety improvements, and respect for individual privacy preferences.
With AI technology evolving quickly, Anthropic’s approach reflects the growing tension between innovation and user control. For now, Claude users have until 28 September to decide whether they are comfortable contributing to the platform’s development.