Tenable has uncovered seven security vulnerabilities in OpenAI’s ChatGPT models that could expose users to data theft, manipulation, and long-term compromise. The cybersecurity firm, known for its exposure management expertise, found these weaknesses in ChatGPT-4o, with some persisting in the newer ChatGPT-5 model. Collectively called “HackedGPT”, the flaws enable attackers to bypass built-in safety controls and potentially steal stored data, including chat histories and user memories.
While OpenAI has addressed several issues identified by Tenable, others remain unresolved. The findings highlight emerging security concerns in generative AI systems as they become more deeply integrated into daily workflows.
New wave of indirect prompt injection attacks
The vulnerabilities introduce a new class of cyberattack known as indirect prompt injection. This occurs when hidden instructions on external websites or within online content manipulate the AI into executing unauthorised commands. Because ChatGPT’s web browsing and memory functions process live data and store user information, they can become prime targets for such manipulation.
Tenable researchers demonstrated that these attacks can be executed silently in two main ways. “0-click” attacks occur when simply asking ChatGPT a question triggers a compromise, while “1-click” attacks are activated when a user clicks a malicious link. A particularly concerning variant, Persistent Memory Injection, allows harmful commands to be stored in ChatGPT’s long-term memory. These instructions remain active even after the session ends, potentially enabling continuous exposure of private data until the memory is cleared.
“HackedGPT exposes a fundamental weakness in how large language models judge what information to trust,” said Moshe Bernstein, Senior Research Engineer at Tenable. “Individually, these flaws seem small — but together they form a complete attack chain, from injection and evasion to data theft and persistence. It shows that AI systems aren’t just potential targets; they can be turned into attack tools that silently harvest information from everyday chats or browsing.”
The seven vulnerabilities identified
Tenable’s research outlines seven distinct attack methods. These include indirect prompt injection via trusted websites, 0-click indirect prompt injection in search contexts, and prompt injection through single-click links. Other flaws involve bypassing ChatGPT’s link safety mechanisms, conversation injection between its SearchGPT and ChatGPT systems, malicious content hiding through formatting bugs, and the persistent memory injection technique that stores harmful commands permanently.
Each method provides a different path for attackers to manipulate ChatGPT or extract sensitive information. Together, they demonstrate how easily AI-driven systems can be exploited through indirect and non-technical means, making them harder to detect.
Implications and next steps for AI security
Tenable warns that the potential impact of these vulnerabilities is significant given ChatGPT’s global user base. With hundreds of millions relying on the platform for communication, research, and business use, the identified flaws could be used to insert hidden commands, steal data from chat histories or connected services such as Google Drive, and influence user interactions.
The company urges AI vendors to strengthen defences by testing safety mechanisms like URL validation tools and isolating different AI functions to reduce cross-context attacks. It also recommends that organisations treat AI platforms as active attack surfaces, monitor integrations for abnormal behaviour, and implement governance frameworks for responsible AI use.
“This research isn’t just about exposing flaws — it’s about changing how we secure AI,” Bernstein added. “People and organisations alike need to assume that AI tools can be manipulated and design controls accordingly. That means governance, data safeguards, and continuous testing to make sure these systems work for us, not against us.”



