Tuesday, 23 December 2025
30.5 C
Singapore
26.1 C
Thailand
29 C
Indonesia
26.3 C
Philippines

Tenable reveals seven ChatGPT vulnerabilities that expose users to data theft and hijacking

[output_post_excerpt]

Tenable has uncovered seven security vulnerabilities in OpenAI’s ChatGPT models that could expose users to data theft, manipulation, and long-term compromise. The cybersecurity firm, known for its exposure management expertise, found these weaknesses in ChatGPT-4o, with some persisting in the newer ChatGPT-5 model. Collectively called “HackedGPT”, the flaws enable attackers to bypass built-in safety controls and potentially steal stored data, including chat histories and user memories.

While OpenAI has addressed several issues identified by Tenable, others remain unresolved. The findings highlight emerging security concerns in generative AI systems as they become more deeply integrated into daily workflows.

New wave of indirect prompt injection attacks

The vulnerabilities introduce a new class of cyberattack known as indirect prompt injection. This occurs when hidden instructions on external websites or within online content manipulate the AI into executing unauthorised commands. Because ChatGPT’s web browsing and memory functions process live data and store user information, they can become prime targets for such manipulation.

Tenable researchers demonstrated that these attacks can be executed silently in two main ways. “0-click” attacks occur when simply asking ChatGPT a question triggers a compromise, while “1-click” attacks are activated when a user clicks a malicious link. A particularly concerning variant, Persistent Memory Injection, allows harmful commands to be stored in ChatGPT’s long-term memory. These instructions remain active even after the session ends, potentially enabling continuous exposure of private data until the memory is cleared.

“HackedGPT exposes a fundamental weakness in how large language models judge what information to trust,” said Moshe Bernstein, Senior Research Engineer at Tenable. “Individually, these flaws seem small — but together they form a complete attack chain, from injection and evasion to data theft and persistence. It shows that AI systems aren’t just potential targets; they can be turned into attack tools that silently harvest information from everyday chats or browsing.”

The seven vulnerabilities identified

Tenable’s research outlines seven distinct attack methods. These include indirect prompt injection via trusted websites, 0-click indirect prompt injection in search contexts, and prompt injection through single-click links. Other flaws involve bypassing ChatGPT’s link safety mechanisms, conversation injection between its SearchGPT and ChatGPT systems, malicious content hiding through formatting bugs, and the persistent memory injection technique that stores harmful commands permanently.

Each method provides a different path for attackers to manipulate ChatGPT or extract sensitive information. Together, they demonstrate how easily AI-driven systems can be exploited through indirect and non-technical means, making them harder to detect.

Implications and next steps for AI security

Tenable warns that the potential impact of these vulnerabilities is significant given ChatGPT’s global user base. With hundreds of millions relying on the platform for communication, research, and business use, the identified flaws could be used to insert hidden commands, steal data from chat histories or connected services such as Google Drive, and influence user interactions.

The company urges AI vendors to strengthen defences by testing safety mechanisms like URL validation tools and isolating different AI functions to reduce cross-context attacks. It also recommends that organisations treat AI platforms as active attack surfaces, monitor integrations for abnormal behaviour, and implement governance frameworks for responsible AI use.

“This research isn’t just about exposing flaws — it’s about changing how we secure AI,” Bernstein added. “People and organisations alike need to assume that AI tools can be manipulated and design controls accordingly. That means governance, data safeguards, and continuous testing to make sure these systems work for us, not against us.”

Hot this week

Indie Game Awards withdraws Clair Obscur honours over generative AI use

Indie Game Awards withdraws Clair Obscur’s top honours after confirming generative AI assets were used during the game’s production.

Valve ends production of its last Steam Deck LCD model

Valve ends production of its last Steam Deck LCD model, leaving OLED versions as the only option and raising the entry price for new buyers.

Square Enix releases Final Fantasy VII Remake Intergrade demo on Switch 2 and Xbox

Free demo for Final Fantasy VII Remake Intergrade launches on Switch 2 and Xbox, letting players carry progress into the full 2026 release.

Zoom introduces AI Companion 3.0 with a web-based assistant and expanded task automation

Zoom launches AI Companion 3.0, adding a web-based assistant that automates tasks, drafts emails and reshapes the platform into an AI workspace.

AI designs a Linux computer with 843 parts in a single week

Quilter reveals a Linux computer designed by AI in one week, hinting at a future where hardware development is faster and more accessible.

Square Enix releases Final Fantasy VII Remake Intergrade demo on Switch 2 and Xbox

Free demo for Final Fantasy VII Remake Intergrade launches on Switch 2 and Xbox, letting players carry progress into the full 2026 release.

AI designs a Linux computer with 843 parts in a single week

Quilter reveals a Linux computer designed by AI in one week, hinting at a future where hardware development is faster and more accessible.

Super Mario Bros inspired Hideo Kojima’s path into game development

Hideo Kojima reveals how Super Mario Bros convinced him that video games could one day surpass movies and led him into game development.

Indie Game Awards withdraws Clair Obscur honours over generative AI use

Indie Game Awards withdraws Clair Obscur’s top honours after confirming generative AI assets were used during the game’s production.

Related Articles

Popular Categories