
SailPoint report warns of rising AI agent risks amid rapid adoption

SailPoint warns that AI agents are creating new security risks, with most firms unprepared to govern them despite growing adoption plans.

SailPoint has published a new global report highlighting growing concerns around AI agents, revealing a sharp disconnect between their rising use and the lack of robust security measures in place. Despite almost universal plans among organisations to expand the use of these autonomous systems, most are not fully prepared to govern them effectively, putting sensitive data at risk.

Organisations embrace AI agents despite security concerns

The report, titled AI agents: The new attack surface. A global survey of security, IT professionals and executives, was released by SailPoint, a provider of unified identity security solutions. It surveyed technology and security professionals worldwide and found that 82% of organisations are already using AI agents. However, only 44% currently have any policies in place to secure them.

At the same time, 96% of respondents see AI agents as a growing risk. Still, 98% of organisations plan to increase their use of them in the next 12 months, pointing to a significant gap between adoption and governance. According to the report, these agents often function with elevated privileges, accessing a wide range of applications, data, and systems. Many have the capacity to make autonomous decisions, modify themselves, and even generate other agents.

AI agents are defined as autonomous systems that can perceive their environment, make decisions, and act toward defined goals. They are increasingly used across various business functions but often rely on multiple machine identities and have access to high-value data. Notably, 72% of respondents said they believe AI agents pose a greater security risk than traditional machine identities.
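
To make that definition concrete, the sketch below shows a toy perceive-decide-act loop in Python. It is purely illustrative and assumes nothing about any vendor’s implementation: the “environment” here is just a counter the agent nudges toward a goal value.

```python
# A minimal, illustrative perceive-decide-act loop for an AI agent.
# Everything here is a toy sketch, not any vendor's API.

class ToyEnvironment:
    def __init__(self, start: int = 0):
        self.state = start

    def observe(self) -> int:
        return self.state          # the agent perceives the current state

    def apply(self, delta: int) -> None:
        self.state += delta        # the agent's action changes the world


def run_agent(env: ToyEnvironment, goal: int, max_steps: int = 20) -> int:
    for step in range(max_steps):
        observation = env.observe()               # 1. perceive
        if observation == goal:                   # goal reached: stop acting
            return step
        delta = 1 if observation < goal else -1   # 2. decide (trivial policy)
        env.apply(delta)                          # 3. act
    return max_steps


if __name__ == "__main__":
    env = ToyEnvironment(start=3)
    steps = run_agent(env, goal=7)
    print(f"Reached goal in {steps} steps")       # -> Reached goal in 4 steps
```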

Oversight remains limited for high-risk actions

Concerns stem from the nature of AI agents’ access and actions. Key risks identified by respondents include their ability to access privileged data (60%), perform unintended actions (58%), share sensitive information (57%), rely on unverified data (55%), and expose inappropriate content (54%).

Chandra Gnanasambandam, Executive Vice President of Product and Chief Technology Officer at SailPoint, said: “Agentic AI is both a powerful force for innovation and a potential risk. These autonomous agents are transforming how work gets done, but they also introduce a new attack surface. They often operate with broad access to sensitive systems and data, yet have limited oversight. That combination of high privilege and low visibility creates a prime target for attackers.”

The report revealed that 92% of respondents consider governing AI agents to be critical for maintaining enterprise security. Yet 23% admitted their AI agents had already been manipulated into revealing access credentials, while 80% reported instances of agents taking unintended actions. These included unauthorised access to systems or resources (39%), accessing inappropriate or sensitive data (31%), sharing such data (33%), and downloading sensitive files (32%).

Call for identity-first governance of AI agents

SailPoint is calling for a shift in how organisations manage AI agents, treating them with the same level of scrutiny as human users. This means implementing real-time permission controls, enforcing least-privilege access, maintaining full visibility into agents’ actions, and keeping audit trails in place.
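
As a rough illustration of those controls, the following Python sketch wraps an agent’s actions in a least-privilege permission gate that writes an audit trail. The policy structure, agent identifiers, and function names are assumptions made for this example, not SailPoint’s implementation.

```python
import logging
from datetime import datetime, timezone

# Illustrative sketch only: a least-privilege gate around an agent's actions,
# with an audit trail. The policy shape and names are assumptions, not a real API.

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("agent-audit")

# Explicit allow-list per agent identity: anything not listed is denied.
PERMISSIONS = {
    "invoice-agent": {"read:invoices", "write:payments"},
}


def perform_action(agent_id: str, permission: str, action):
    """Run `action` only if the agent identity holds the exact permission."""
    allowed = permission in PERMISSIONS.get(agent_id, set())
    audit_log.info(
        "%s agent=%s permission=%s decision=%s",
        datetime.now(timezone.utc).isoformat(),
        agent_id, permission, "ALLOW" if allowed else "DENY",
    )
    if not allowed:
        raise PermissionError(f"{agent_id} lacks {permission}")
    return action()


# Usage: the allowed call runs; the commented-out call would raise and log DENY.
perform_action("invoice-agent", "read:invoices", lambda: "ok")
# perform_action("invoice-agent", "read:hr-records", lambda: "blocked")
```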

With 98% of organisations planning to scale up their use of agentic AI within the year, SailPoint emphasised the need for identity security solutions that go beyond human identities to include both AI and machine identities. These solutions should provide comprehensive discovery capabilities, unified visibility across environments, enforcement of zero standing privileges, and full auditability to support compliance and security standards.
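
The zero-standing-privileges principle mentioned here means an agent holds no access by default and receives a narrowly scoped, short-lived credential only for the duration of a task. Below is a minimal sketch under that assumption; the token broker and scope names are hypothetical.

```python
import secrets
import time
from dataclasses import dataclass

# Sketch of zero standing privileges: credentials are minted just-in-time,
# scoped to one task, and expire quickly. The broker here is hypothetical;
# a real system would route issuance through an identity provider with
# approval policies.

@dataclass
class ScopedToken:
    value: str
    scope: str
    expires_at: float

    def is_valid(self, scope: str) -> bool:
        return scope == self.scope and time.time() < self.expires_at


def issue_token(agent_id: str, scope: str, ttl_seconds: int = 300) -> ScopedToken:
    # Mint a random token bound to one scope, expiring after the TTL.
    return ScopedToken(
        value=secrets.token_urlsafe(16),
        scope=scope,
        expires_at=time.time() + ttl_seconds,
    )


# The agent gets access only for this task and this scope, then it lapses.
token = issue_token("report-agent", scope="read:sales-db", ttl_seconds=60)
assert token.is_valid("read:sales-db")        # valid within scope and TTL
assert not token.is_valid("write:sales-db")   # a different scope is denied
```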

The report positions AI agents not as tools embedded in systems, but as a distinct identity category requiring evolved governance strategies. Without these controls, the growing reliance on AI risks compounding existing vulnerabilities, especially in an era marked by frequent and costly data breaches.
