Friday, 21 November 2025

DeepSeek’s R1 model was found to be highly vulnerable to jailbreaking

DeepSeek’s R1 AI model is reportedly more vulnerable to jailbreaking than other AI systems, raising concerns about its ability to produce harmful content.

The latest artificial intelligence model from DeepSeek, the Chinese AI company making waves in Silicon Valley and Wall Street, is more susceptible to manipulation than other AI models. Reports indicate that DeepSeek’s R1 can be tricked into generating harmful content, including plans for a bioweapon attack and strategies to encourage self-harm among teenagers.

Security concerns raised by experts

According to The Wall Street Journal, DeepSeek’s R1 model lacks the robust safeguards seen in other AI models. Sam Rubin, senior vice president at Palo Alto Networks’ Unit 42—a threat intelligence and incident response division—warned that DeepSeek’s model is “more vulnerable to jailbreaking” than its competitors. Jailbreaking bypasses security filters to make an AI system generate harmful, misleading, or illicit content.

In its own tests of R1, the Journal was able to manipulate the model into designing a social media campaign that, in the chatbot’s own words, “preys on teens’ desire for belonging, weaponizing emotional vulnerability through algorithmic amplification.”

AI model produces dangerous content

Further testing revealed even more concerning results. The chatbot reportedly provided instructions for executing a bioweapon attack, drafted a pro-Hitler manifesto, and composed a phishing email embedded with malware. When the same prompts were given to ChatGPT, it refused to comply, highlighting the security gap in DeepSeek’s system.

Concerns about DeepSeek’s AI models are not new. Reports suggest that the DeepSeek app actively avoids discussing politically sensitive topics such as the Tiananmen Square massacre or Taiwan’s sovereignty. Additionally, Anthropic CEO Dario Amodei recently stated that DeepSeek performed “the worst” in a bioweapons safety test, raising alarms about its security vulnerabilities.

