Tenable Research has revealed how a well-known AI vulnerability, commonly referred to as prompt injection, can also be used to enhance security measures for Large Language Models (LLMs). In a new blog titled MCP Prompt Injection: Not Just for Evil, Ben Smith, Senior Staff Research Engineer at Tenable, details how these techniques can be adapted to audit, monitor, and restrict AI tool usage over the increasingly adopted Model Context Protocol (MCP).
Understanding the role of MCP and its risks
The Model Context Protocol (MCP), developed by Anthropic, is gaining traction as a standard that allows AI models to interact with external tools and perform tasks independently. While this brings greater convenience and automation, it also introduces new vectors for attack. For example, malicious actors can embed hidden instructions—commonly known as prompt injection—or deploy harmful tools to exploit the protocol, leading to unintended AI behaviour.
Tenable’s research breaks down these complex threats in accessible terms. It also highlights a potential upside: the same techniques that attackers use can be harnessed to strengthen defences. According to Tenable, these methods can be used to log, inspect, and even enforce restrictions on tool execution attempts by AI models.
Defensive use of prompt injection
The blog outlines how prompt-injection-style techniques can serve as a form of auditing and firewalling. By deliberately inserting specific prompts into the tool invocation process, organisations can track every tool an AI attempts to use and flag any suspicious activity. This approach provides a new layer of transparency in how LLMs interact with tools under the MCP standard.
Ben Smith said, “MCP is a rapidly evolving and immature technology that’s reshaping how we interact with AI. MCP tools are easy to develop and plentiful, but they do not embody the principles of security by design and should be handled with care. So, while these new techniques are useful for building powerful tools, those same methods can be repurposed for nefarious means. Don’t throw caution to the wind; instead, treat MCP servers as an extension of your attack surface.”
Differences across LLMs and the need for approval
The research also highlights how different LLMs respond to the same prompt-injection defences. Models such as Claude Sonnet 3.7 and Gemini 2.5 Pro Experimental consistently invoked the logging mechanism and even revealed portions of the system prompt. GPT-4o, while also inserting the logger, returned inconsistent or occasionally fabricated parameter values across separate test runs.
Despite these variations, the security potential remains consistent. Organisations can use these behaviours to their advantage—building detection systems and defining guardrails to identify malicious or unauthorised tool use.
The MCP already mandates explicit user approval before executing any tools. Tenable’s research stresses the importance of implementing strict least-privilege defaults, carefully reviewing each tool, and conducting thorough testing. These practices help ensure that while AI tools become more capable, they remain under tight supervision.