Sunday, 31 August 2025

Anthropic updates Claude chatbot policy to use chat data for AI training

Anthropic will utilise Claude chatbot conversations for AI training starting from 28 September, with opt-out options and a five-year data retention policy.

Anthropic, the company behind the popular Claude chatbot, has announced a new policy that will see transcripts of user chats used to train its artificial intelligence models. The change has begun rolling out to users, who have until 28 September to review and accept the updated terms.

Claude, widely considered a strong contender to enhance Apple’s Siri, will now utilise logs of user interactions to improve performance, strengthen safety measures, and enhance the accuracy of detecting harmful content. The policy also applies to Claude Code, a developer-focused version of the chatbot.

“By participating, you’ll help us improve model safety, making our systems for detecting harmful content more accurate and less likely to flag harmless conversations,” said Anthropic. While the change represents a shift in the company’s approach to data use, it is not mandatory. Users can opt out of model training by adjusting the platform’s settings.

Notifications about the change will continue to appear until 28 September, giving users the option to disable the “You can help improve Claude” toggle before accepting the terms. After this date, users will need to adjust their settings manually via the model training dashboard.

New chats affected, older conversations excluded

Anthropic confirmed that only new or resumed conversations will be included in AI training, with previous chats remaining excluded. This change reflects a broader trend in the AI sector, where companies are seeking more data to enhance model capabilities amid growing competition and limited access to high-quality training material.

Existing Claude users who wish to opt out of contributing data can follow this path: Settings > Privacy > Help Improve Claude. The updated policy applies to the Claude Free, Pro, and Max plans, but it does not affect Claude for Work, Claude Gov, Claude for Education, API usage, or instances where the service is accessed through third-party platforms such as Google’s Vertex AI and Amazon Bedrock.

The decision underscores Anthropic’s commitment to enhancing Claude’s overall performance and ensuring it remains competitive in a rapidly evolving AI market. The company has previously positioned Claude as a safer, more reliable chatbot, prioritising trust and user control.

Data retention extended to five years

Alongside its new training policy, Anthropic is revising its data retention rules. User data may now be stored for up to five years, though conversations that are manually deleted will not be used for AI training. The company stated that this approach would help refine its models while providing users with the ability to manage their own data.

Anthropic’s decision to make data training opt-out contrasts with competitors that have adopted mandatory data usage policies. By allowing users to choose, the company aims to strike a balance between transparency, safety improvements, and respect for individual privacy preferences.

With AI technology evolving quickly, Anthropic’s approach reflects the growing tension between innovation and user control. For now, Claude users have a month to decide whether they are comfortable contributing to the platform’s development.

