Saturday, 13 December 2025
26 C
Singapore
22.1 C
Thailand
20.6 C
Indonesia
26.9 C
Philippines

Tenable reveals seven ChatGPT vulnerabilities that expose users to data theft and hijacking

Tenable identifies seven ChatGPT flaws exposing users to data theft and manipulation through indirect prompt injection attacks.

Tenable has uncovered seven security vulnerabilities in OpenAI’s ChatGPT models that could expose users to data theft, manipulation, and long-term compromise. The cybersecurity firm, known for its exposure management expertise, found these weaknesses in ChatGPT-4o, with some persisting in the newer ChatGPT-5 model. Collectively called “HackedGPT”, the flaws enable attackers to bypass built-in safety controls and potentially steal stored data, including chat histories and user memories.

While OpenAI has addressed several issues identified by Tenable, others remain unresolved. The findings highlight emerging security concerns in generative AI systems as they become more deeply integrated into daily workflows.

New wave of indirect prompt injection attacks

The vulnerabilities introduce a new class of cyberattack known as indirect prompt injection. This occurs when hidden instructions on external websites or within online content manipulate the AI into executing unauthorised commands. Because ChatGPT’s web browsing and memory functions process live data and store user information, they can become prime targets for such manipulation.

Tenable researchers demonstrated that these attacks can be executed silently in two main ways. “0-click” attacks occur when simply asking ChatGPT a question triggers a compromise, while “1-click” attacks are activated when a user clicks a malicious link. A particularly concerning variant, Persistent Memory Injection, allows harmful commands to be stored in ChatGPT’s long-term memory. These instructions remain active even after the session ends, potentially enabling continuous exposure of private data until the memory is cleared.

“HackedGPT exposes a fundamental weakness in how large language models judge what information to trust,” said Moshe Bernstein, Senior Research Engineer at Tenable. “Individually, these flaws seem small — but together they form a complete attack chain, from injection and evasion to data theft and persistence. It shows that AI systems aren’t just potential targets; they can be turned into attack tools that silently harvest information from everyday chats or browsing.”

The seven vulnerabilities identified

Tenable’s research outlines seven distinct attack methods. These include indirect prompt injection via trusted websites, 0-click indirect prompt injection in search contexts, and prompt injection through single-click links. Other flaws involve bypassing ChatGPT’s link safety mechanisms, conversation injection between its SearchGPT and ChatGPT systems, malicious content hiding through formatting bugs, and the persistent memory injection technique that stores harmful commands permanently.

Each method provides a different path for attackers to manipulate ChatGPT or extract sensitive information. Together, they demonstrate how easily AI-driven systems can be exploited through indirect and non-technical means, making them harder to detect.

Implications and next steps for AI security

Tenable warns that the potential impact of these vulnerabilities is significant given ChatGPT’s global user base. With hundreds of millions relying on the platform for communication, research, and business use, the identified flaws could be used to insert hidden commands, steal data from chat histories or connected services such as Google Drive, and influence user interactions.

The company urges AI vendors to strengthen defences by testing safety mechanisms like URL validation tools and isolating different AI functions to reduce cross-context attacks. It also recommends that organisations treat AI platforms as active attack surfaces, monitor integrations for abnormal behaviour, and implement governance frameworks for responsible AI use.

“This research isn’t just about exposing flaws — it’s about changing how we secure AI,” Bernstein added. “People and organisations alike need to assume that AI tools can be manipulated and design controls accordingly. That means governance, data safeguards, and continuous testing to make sure these systems work for us, not against us.”

Hot this week

Proofpoint completes acquisition of Hornetsecurity

Proofpoint completes its US$1.8 billion acquisition of Hornetsecurity, expanding its Microsoft 365 and MSP-focused security capabilities.

Tech industry overlooks Auracast as momentum quietly builds

Auracast promises major improvements in wireless audio, but limited marketing and slow adoption mean many consumers still don't know it exists.

Pudu Robotics unveils new robot dog as it expands global presence

Pudu Robotics unveils its new D5 robot dog in Tokyo as part of its global push into service and industrial robotics.

AMD introduces EPYC Embedded 2005 series for compact, power-efficient AI systems

AMD launches the EPYC Embedded 2005 Series, offering compact, power-efficient processors for constrained networking, storage and industrial systems.

Nintendo launches official eShop and Switch Online service in Singapore

Nintendo launches the Singapore eShop and Switch Online service, giving local players full access to digital games, subscriptions, and regional deals.

PlayStation introduces limited edition Genshin Impact DualSense controller

PlayStation announces a limited edition Genshin Impact DualSense controller for PS5, launching in Singapore on 21 January 2026.

PGL brings Counter-Strike 2 Major to Singapore in November 2026

PGL confirms the Counter-Strike 2 Major is coming to Singapore in November 2026, marking the first CS2 Major in Southeast Asia.

Denodo: Rethinking data architecture for AI agility and measurable ROI in Asia-Pacific

Denodo highlights how modern, composable data architectures powered by logical data management are helping Asia-Pacific enterprises accelerate AI adoption, ensure governance, and achieve measurable ROI.

Veeam completes acquisition of Securiti AI to build unified trusted data platform

Veeam completes its US$1.725 billion acquisition of Securiti AI to form a unified trusted data platform for secure and scalable AI adoption.

Related Articles

Popular Categories