DeepSeek’s R1 model was found to be highly vulnerable to jailbreaking

DeepSeek’s R1 AI model is reportedly more vulnerable to jailbreaking than other AI systems, raising concerns that it can be manipulated into producing harmful content.

The latest artificial intelligence model from DeepSeek, the Chinese AI company making waves in Silicon Valley and on Wall Street, is more susceptible to manipulation than other AI models. Reports indicate that DeepSeek’s R1 can be tricked into generating harmful content, including plans for a bioweapon attack and strategies to encourage self-harm among teenagers.

Security concerns raised by experts

According to The Wall Street Journal, DeepSeek’s R1 model lacks the robust safeguards seen in other AI models. Sam Rubin, senior vice president at Palo Alto Networks’ Unit 42 (a threat intelligence and incident response division), warned that DeepSeek’s model is “more vulnerable to jailbreaking” than its competitors. Jailbreaking refers to bypassing an AI system’s security filters so that it generates harmful, misleading, or illicit content.

In its tests, the Journal was able to manipulate DeepSeek’s R1 into designing a social media campaign that, in the chatbot’s own words, “preys on teens’ desire for belonging, weaponizing emotional vulnerability through algorithmic amplification.”

AI model produces dangerous content

Further testing revealed even more concerning results. The chatbot reportedly provided instructions for executing a bioweapon attack, drafted a pro-Hitler manifesto, and composed a phishing email embedded with malware. In comparison, when the same prompts were tested on ChatGPT, the AI refused to comply, highlighting the significant security gap in DeepSeek’s system.

Concerns about DeepSeek’s AI models are not new. Reports suggest that the DeepSeek app actively avoids discussing politically sensitive topics such as the Tiananmen Square massacre or Taiwan’s sovereignty. Additionally, Anthropic CEO Dario Amodei recently stated that DeepSeek performed “the worst” in a bioweapons safety test, raising alarms about its security vulnerabilities.
