Friday, 7 November 2025
30 C
Singapore
26.7 C
Thailand
20.7 C
Indonesia
28.8 C
Philippines

Tenable reveals seven ChatGPT vulnerabilities that expose users to data theft and hijacking

Tenable identifies seven ChatGPT flaws exposing users to data theft and manipulation through indirect prompt injection attacks.

Tenable has uncovered seven security vulnerabilities in OpenAI’s ChatGPT models that could expose users to data theft, manipulation, and long-term compromise. The cybersecurity firm, known for its exposure management expertise, found these weaknesses in ChatGPT-4o, with some persisting in the newer ChatGPT-5 model. Collectively called “HackedGPT”, the flaws enable attackers to bypass built-in safety controls and potentially steal stored data, including chat histories and user memories.

While OpenAI has addressed several issues identified by Tenable, others remain unresolved. The findings highlight emerging security concerns in generative AI systems as they become more deeply integrated into daily workflows.

New wave of indirect prompt injection attacks

The vulnerabilities introduce a new class of cyberattack known as indirect prompt injection. This occurs when hidden instructions on external websites or within online content manipulate the AI into executing unauthorised commands. Because ChatGPT’s web browsing and memory functions process live data and store user information, they can become prime targets for such manipulation.

Tenable researchers demonstrated that these attacks can be executed silently in two main ways. “0-click” attacks occur when simply asking ChatGPT a question triggers a compromise, while “1-click” attacks are activated when a user clicks a malicious link. A particularly concerning variant, Persistent Memory Injection, allows harmful commands to be stored in ChatGPT’s long-term memory. These instructions remain active even after the session ends, potentially enabling continuous exposure of private data until the memory is cleared.

“HackedGPT exposes a fundamental weakness in how large language models judge what information to trust,” said Moshe Bernstein, Senior Research Engineer at Tenable. “Individually, these flaws seem small — but together they form a complete attack chain, from injection and evasion to data theft and persistence. It shows that AI systems aren’t just potential targets; they can be turned into attack tools that silently harvest information from everyday chats or browsing.”

The seven vulnerabilities identified

Tenable’s research outlines seven distinct attack methods. These include indirect prompt injection via trusted websites, 0-click indirect prompt injection in search contexts, and prompt injection through single-click links. Other flaws involve bypassing ChatGPT’s link safety mechanisms, conversation injection between its SearchGPT and ChatGPT systems, malicious content hiding through formatting bugs, and the persistent memory injection technique that stores harmful commands permanently.

Each method provides a different path for attackers to manipulate ChatGPT or extract sensitive information. Together, they demonstrate how easily AI-driven systems can be exploited through indirect and non-technical means, making them harder to detect.

Implications and next steps for AI security

Tenable warns that the potential impact of these vulnerabilities is significant given ChatGPT’s global user base. With hundreds of millions relying on the platform for communication, research, and business use, the identified flaws could be used to insert hidden commands, steal data from chat histories or connected services such as Google Drive, and influence user interactions.

The company urges AI vendors to strengthen defences by testing safety mechanisms like URL validation tools and isolating different AI functions to reduce cross-context attacks. It also recommends that organisations treat AI platforms as active attack surfaces, monitor integrations for abnormal behaviour, and implement governance frameworks for responsible AI use.

“This research isn’t just about exposing flaws — it’s about changing how we secure AI,” Bernstein added. “People and organisations alike need to assume that AI tools can be manipulated and design controls accordingly. That means governance, data safeguards, and continuous testing to make sure these systems work for us, not against us.”

Hot this week

Apple may launch an affordable Mac laptop in early 2026

Apple may launch its first affordable Mac laptop in early 2026, aiming to attract students and everyday users with a price under US$1,000.

ECOVACS: Advancing smart living and robotics adoption in Southeast Asia

ECOVACS is reshaping smart living in Southeast Asia with AI-driven, multi-scenario robotics built for local homes and lifestyles.

Affiliate marketing becomes major growth driver for brands in Singapore as investments surge

Affiliate marketing becomes a core growth channel for Singapore brands as investment rises and creators gain greater influence.

Innovation drives legacy industries at TechInnovation 2025

Industry leaders at TechInnovation 2025 shared how innovation and collaboration are helping legacy businesses modernise for the future.

Airwallex surpasses US$1 billion in annualised revenue milestone

Airwallex reaches US$1 billion in annualised revenue, driven by rapid customer adoption, AI innovation, and global expansion.

Ambitionz introduces Cipher, an AI platform built to think like a game developer

Ambitionz launches Cipher, an AI designed to think like a game developer, with early access for Roblox creators worldwide.

Corning and Nokia partner to bring fibre to the edge for enterprise networks

Corning and Nokia partner to deliver fibre-to-the-edge and optical LAN solutions, offering scalable, high-speed, and sustainable enterprise networks.

AI adoption grows 20% in Singapore as 170,000 businesses embrace the technology

AI adoption in Singapore rises 20% in 2025, with 170,000 businesses now using AI across finance, tech, and healthcare sectors.

Hitachi Vantara launches Hitachi iQ Studio to accelerate enterprise AI adoption

Hitachi Vantara launches Hitachi iQ Studio to simplify and scale AI deployment with no-code tools and enterprise-grade governance.

Related Articles

Popular Categories