Tenable has announced that it bypassed OpenAI’s new GPT-5 safety measures within 24 hours of the model’s release, raising concerns over AI security and governance.
AI safety measures breached shortly after launch
OpenAI introduced GPT-5 on 7 August 2025, promoting its “significantly more sophisticated” prompt safety features, which were designed to prevent the model from generating harmful or illegal content. However, Tenable researchers reported that they circumvented the safeguards using the crescendo technique, a multi-turn jailbreak in which an attacker gradually escalates seemingly benign requests over the course of a conversation.
The researchers posed as a history student seeking background information on Molotov cocktails. Within just four prompts, the model provided step-by-step instructions for creating the incendiary device. The incident underscores how AI systems, even with improved guardrails, remain vulnerable to manipulation.
Industry concerns over AI vulnerability
Tenable’s findings, detailed in a blog post, add to a growing number of reports from researchers and users who have observed jailbreaks, hallucinations, and other quality issues in GPT-5 since its launch.
“The ease with which we bypassed GPT-5’s new safety protocols proves that even the most advanced AI is not foolproof,” said Tomer Avni, VP of Product Management at Tenable. “This creates a significant danger for organisations where these tools are being rapidly adopted by employees, often without oversight. Without proper visibility and governance, businesses are unknowingly exposed to serious security, ethical, and compliance risks. This incident is a clear call for a dedicated AI exposure management strategy to secure every model in use.”
Call for stronger oversight and protection
OpenAI has said it is working on fixes to address the vulnerability. However, Tenable argues that the incident demonstrates why businesses cannot depend solely on built-in safety features. The company stressed the importance of implementing AI exposure management tools to monitor and control the AI models an organisation uses, whether developed in-house or sourced from third parties.
According to Tenable, adopting such measures helps ensure AI applications are used responsibly, securely, and in line with global compliance standards, reducing the risk of misuse and harm.