Sunday, 19 October 2025

Hackers exploit hidden malware in images processed by AI chatbots

Researchers warn that hackers can conceal malicious prompts in AI-processed images, posing a significant security risk to multimodal systems.

As artificial intelligence tools become more embedded in daily workflows, cybersecurity experts are warning that attackers are finding new ways to exploit them. Security researchers at Trail of Bits have demonstrated a novel attack technique that embeds malicious prompts within images, which are then revealed when these images are processed by large language models (LLMs).

Hidden instructions emerge through image downscaling

The method leverages the way AI platforms resize images for performance optimisation. Although the malicious prompts are invisible to the human eye in the original image, they become legible to the algorithm when the image is downscaled.

This attack builds on a 2020 study from TU Braunschweig in Germany, which highlighted image scaling as a potential vulnerability in machine learning systems. Trail of Bits has demonstrated that carefully crafted images can manipulate AI platforms, including Gemini CLI, Vertex AI Studio, Google Assistant on Android, and Gemini’s web interface.

In one test, attackers were able to extract Google Calendar data and send it to an external email address without user consent, demonstrating the potential seriousness of this vulnerability. The attack exploits common interpolation techniques such as nearest neighbour, bilinear, or bicubic resampling, where scaling can unintentionally reveal hidden instructions.

During testing, bicubic resampling caused dark image areas to shift and reveal concealed black text, which the LLM interpreted as a valid user command. From the user’s point of view, no unusual activity was visible, yet the AI model acted on these hidden instructions in the background.
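The principle can be illustrated with a toy sketch. This is not Trail of Bits' actual technique, which targets the bicubic and bilinear kernels of specific image libraries; instead, it hides a pattern at exactly the pixel positions that a naive stride-based nearest-neighbour downscale samples. The factor `K` and all names are illustrative:

```python
import numpy as np

K = 4  # hypothetical downscale factor used by the processing pipeline

# "Hidden" payload: a tiny 8x8 grayscale pattern (a stand-in for text pixels).
rng = np.random.default_rng(0)
hidden = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)

# Build a 32x32 cover image: mostly uniform light grey, with the hidden
# values placed only at the pixels a stride-K nearest-neighbour downscale
# will sample (row/column indices 0, K, 2K, ...).
cover = np.full((8 * K, 8 * K), 200, dtype=np.uint8)
cover[::K, ::K] = hidden

# To a viewer the payload is isolated specks (1 in K*K pixels), but a
# naive stride-based downscale recovers the hidden pattern exactly.
downscaled = cover[::K, ::K]
assert np.array_equal(downscaled, hidden)
```

Real attacks are harder to pull off, since production resizers average neighbouring pixels rather than sampling single ones, but the underlying idea is the same: craft the full-resolution image so that the resampled output contains content the original did not visibly show.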

Demonstration tool highlights potential threats

To showcase the risks, Trail of Bits created an open-source tool called Anamorpher, which generates images with concealed prompts for various scaling techniques. The researchers emphasised that although the method is highly specialised, it is reproducible, and systems without dedicated safeguards remain exposed.

This vulnerability raises broader concerns about multimodal AI systems, which are increasingly powering everyday tasks. An unsuspecting user could upload a seemingly harmless image that triggers unauthorised access to private information. The researchers warn that this type of attack could enable identity theft if sensitive data is exfiltrated through these hidden prompts.

As AI tools are often integrated with calendars, communication systems, and workflow platforms, the risk extends beyond individual users, potentially threatening organisations that rely heavily on these systems.

Calls for stronger security design in AI systems

The researchers recommend that developers and users take proactive steps to reduce this risk. Suggested measures include restricting input image dimensions, previewing images after scaling, and requiring explicit confirmation before executing sensitive actions.
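The first two suggestions can be sketched as a simple pre-flight check. Here `MAX_SIDE`, `validate_and_preview`, and the stride-based downscale are hypothetical stand-ins for whatever size policy and resampling step a given platform actually uses:

```python
import numpy as np

MAX_SIDE = 1024  # hypothetical policy limit on input image dimensions


def validate_and_preview(img: np.ndarray, target: int = 256) -> np.ndarray:
    """Reject oversized inputs and return the downscaled array that would
    actually reach the model, so the user can preview it before sending."""
    h, w = img.shape[:2]
    if max(h, w) > MAX_SIDE:
        raise ValueError(f"image {w}x{h} exceeds {MAX_SIDE}px policy limit")
    # Produce the post-scaling view with the same (illustrative) stride
    # sampling the pipeline uses, so the preview matches the model input.
    step = max(1, max(h, w) // target)
    return img[::step, ::step]
```

The design point is that the preview must come from the same resampling path the model sees; previewing the original image would defeat the purpose, since the hidden prompt only appears after scaling.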

Traditional security measures such as firewalls and malware scanners are not designed to detect these forms of manipulation, creating an opportunity for attackers to bypass standard defences. Trail of Bits argues that only layered security strategies and robust design principles can reliably defend against these threats.

“The strongest defence, however, is to implement secure design patterns and systematic defences that mitigate impactful prompt injection beyond multimodal prompt injection,” the researchers said.
