Tenable has uncovered three major vulnerabilities in Google’s Gemini suite that could have let attackers steal sensitive data from millions of users. The flaws, dubbed the Gemini Trifecta, have since been fixed but showed how artificial intelligence (AI) systems can be turned against their own users.
The Gemini Trifecta involved weaknesses across different parts of the Gemini platform. In Gemini Cloud Assist, malicious log entries could have been planted, causing Gemini to follow harmful commands when users later interacted with the system. In the Search Personalisation Model, attackers could insert hidden queries into a victim’s browser history, which Gemini might treat as trusted context. This could have exposed personal details such as location history and saved information. Meanwhile, the Gemini Browsing Tool could have been tricked into making secret web requests that embedded user data and sent it directly to attacker-controlled servers.
According to Tenable, the root issue was that Gemini’s systems did not properly separate safe user inputs from malicious ones. By exploiting this weakness, attackers could create invisible channels to hijack AI behaviour without needing malware or phishing emails.
How the flaws could have been exploited
If exploited before Google patched the flaws, attackers could have silently inserted harmful instructions into logs or search histories, stolen sensitive details such as saved user data and location, and used cloud integrations to reach wider resources. The Gemini Browsing Tool could also have been manipulated to send private information directly to an external server.
“These vulnerabilities show how AI platforms can be manipulated in ways users never see, making data theft invisible,” said Liv Matan, Senior Security Researcher at Tenable. “Like any powerful technology, large language models such as Gemini bring enormous value but remain susceptible to vulnerabilities. Security professionals must act early, locking down weaknesses before attackers can exploit them and building AI systems that are resilient by design.”
Response and security recommendations
Google has remediated all three vulnerabilities, so no user action is required. However, Tenable advises security teams to treat AI features as active attack surfaces rather than passive tools. This includes auditing logs, search histories, and integrations for signs of tampering, monitoring for unusual outbound requests, and testing AI tools against prompt injection attacks.
“This disclosure underscores that securing AI isn’t just about fixing flaws,” Matan added. “It’s about anticipating how attackers could exploit the unique mechanics of AI systems and building layered defences that prevent small cracks from becoming systemic exposures.”
The discovery highlights the growing complexity of AI security and the need for proactive safeguards as AI platforms become deeply embedded in everyday workflows and enterprise systems.