SailPoint has published a new global report highlighting growing concerns around AI agents, revealing a sharp disconnect between their rising use and the security measures in place to govern them. Despite almost universal plans among organisations to expand their use of these autonomous systems, most are not fully prepared to govern them effectively, putting sensitive data at risk.
Organisations embrace AI agents despite security concerns
The report, titled AI agents: The new attack surface. A global survey of security, IT professionals and executives, was released by SailPoint, a provider of unified identity security solutions. It surveyed technology and security professionals worldwide and found that 82% of organisations are already using AI agents. However, only 44% currently have any policies in place to secure them.
At the same time, 96% of respondents see AI agents as a growing risk. Still, 98% of organisations plan to increase their use of them in the next 12 months, pointing to a significant gap between adoption and governance. According to the report, these agents often function with elevated privileges, accessing a wide range of applications, data, and systems. Many have the capacity to make autonomous decisions, modify themselves, and even generate other agents.
AI agents are defined as autonomous systems that can perceive their environment, make decisions, and act toward defined goals. They are increasingly used across business functions but often rely on multiple machine identities and have access to high-value data. Notably, 72% of respondents believe AI agents pose a greater security risk than traditional machine identities.
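That definition is conceptual, but it maps onto a simple control loop. The sketch below is a rough, generic illustration of "perceive, decide, act"; every name in it is a hypothetical stand-in, not drawn from the report or any vendor API.

```python
# A minimal, illustrative perceive-decide-act loop. In a real agent the
# decide step is typically an LLM or planner call, and act invokes a tool
# or API -- which is exactly where the access-governance questions arise.

def run_agent(observe, decide, act, goal_reached, max_steps=100):
    """Loop: perceive the environment, decide on an action, act toward a goal."""
    for step in range(max_steps):
        observation = observe()          # perceive the environment
        if goal_reached(observation):    # stop once the defined goal is met
            return step
        action = decide(observation)     # decide what to do next
        act(action)                      # act on the environment

# Toy usage: an "agent" whose goal is to count a value up to 3.
state = {"value": 0}
steps = run_agent(
    observe=lambda: state["value"],
    decide=lambda obs: "increment",
    act=lambda action: state.update(value=state["value"] + 1),
    goal_reached=lambda obs: obs >= 3,
)
print(f"goal reached after {steps} steps")  # -> goal reached after 3 steps
```

The loop runs without a human in it, which is why the report treats the credentials and permissions behind the act step as the real attack surface.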
Oversight remains limited for high-risk actions
Concerns stem from the nature of AI agents’ access and actions. Key risks identified by respondents include their ability to access privileged data (60%), perform unintended actions (58%), share sensitive information (57%), rely on unverified data (55%), and expose inappropriate content (54%).
Chandra Gnanasambandam, Executive Vice President of Product and Chief Technology Officer at SailPoint, said: “Agentic AI is both a powerful force for innovation and a potential risk. These autonomous agents are transforming how work gets done, but they also introduce a new attack surface. They often operate with broad access to sensitive systems and data, yet have limited oversight. That combination of high privilege and low visibility creates a prime target for attackers.”
The report revealed that 92% of respondents consider governing AI agents critical to maintaining enterprise security. Yet 23% admitted their AI agents had already been manipulated into revealing access credentials, while 80% reported instances of agents taking unintended actions. These included unauthorised access to systems or resources (39%), accessing or sharing inappropriate or sensitive data (31% and 33% respectively), and downloading sensitive files (32%).
Call for identity-first governance of AI agents
SailPoint is calling for a shift in how organisations manage AI agents, treating them with the same level of scrutiny as human users. This means implementing real-time permission controls, enforcing least-privilege access, maintaining full visibility into agent actions, and keeping audit trails in place.
With 98% of organisations planning to scale up their use of agentic AI within the year, SailPoint emphasised the need for identity security solutions that go beyond human identities to include both AI and machine identities. These solutions should provide comprehensive discovery capabilities, unified visibility across environments, enforcement of zero standing privileges, and full auditability to support compliance and security standards.
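As a rough illustration of what these controls might look like in practice, the sketch below treats an agent as a first-class identity with narrow least-privilege scopes, short-lived grants (zero standing privileges), and an audit record for every decision. It is a minimal, hypothetical example of the pattern the report describes, not SailPoint's product or API.

```python
# A minimal sketch (hypothetical, not SailPoint's API) of identity-first
# governance for an AI agent: least-privilege scopes, zero standing
# privileges via expiring grants, and full auditability of every action.

import time
import uuid
from dataclasses import dataclass, field


@dataclass
class AccessGrant:
    scope: str          # one narrow permission, e.g. "erp:read"
    expires_at: float   # grants expire automatically: no standing privileges


@dataclass
class AgentIdentity:
    name: str
    agent_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    grants: list[AccessGrant] = field(default_factory=list)


class IdentityGovernor:
    """Grants, checks, and logs agent access with the same controls a human user gets."""

    def __init__(self):
        self.audit_log: list[dict] = []

    def grant(self, agent: AgentIdentity, scope: str, ttl_seconds: int = 300):
        # Zero standing privileges: every grant is time-boxed.
        agent.grants.append(AccessGrant(scope, time.time() + ttl_seconds))
        self._log(agent, "grant", scope, allowed=True)

    def authorize(self, agent: AgentIdentity, scope: str) -> bool:
        now = time.time()
        agent.grants = [g for g in agent.grants if g.expires_at > now]  # drop expired
        allowed = any(g.scope == scope for g in agent.grants)
        self._log(agent, "access", scope, allowed)
        return allowed

    def _log(self, agent, action, scope, allowed):
        # Full auditability: every grant and access decision is recorded.
        self.audit_log.append({
            "agent": agent.agent_id, "action": action,
            "scope": scope, "allowed": allowed, "at": time.time(),
        })


governor = IdentityGovernor()
agent = AgentIdentity(name="invoice-processing-agent")
governor.grant(agent, "erp:read", ttl_seconds=60)  # least privilege: one narrow scope
print(governor.authorize(agent, "erp:read"))       # True while the grant is live
print(governor.authorize(agent, "erp:write"))      # False: never granted
```

Because grants expire rather than persist, elevated access becomes the exception instead of the default, directly countering the combination of high privilege and low visibility that Gnanasambandam warns about.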
The report positions AI agents not as tools embedded in systems, but as a distinct identity category requiring evolved governance strategies. Without these controls, the growing reliance on AI risks compounding existing vulnerabilities, especially in an era marked by frequent and costly data breaches.