AI agent adoption in Asia Pacific outpaces enterprise security controls, Rubrik finds
Rubrik research finds AI agent adoption in Asia Pacific is outpacing security, exposing gaps in visibility, identity governance, and recovery.
Enterprises across Asia Pacific are deploying AI agents faster than they can secure them, according to new research from Rubrik Zero Labs, which points to widening gaps in visibility, identity governance, and recovery capability.
The study, based on a survey of 500 IT and security leaders in the region, finds that organisations are operationalising autonomous systems without the controls needed to govern them effectively. This creates a disconnect between the pace of AI adoption and the ability to manage risk in production environments.
Limited visibility and expanding identity risk
The report places limited visibility at the centre of the problem. Only 30% of respondents say they have full visibility into AI agents operating within their environments, a figure the report suggests may itself be overstated. Without a clear view of how these systems function and interact with data, organisations struggle to secure the identities tied to them.
This challenge is compounded by the rapid growth of non-human identities linked to AI agents. These identities are expanding faster than enterprises can track or control, forming what the report describes as a “shadow workforce”. Many operate with persistent access and limited oversight, creating potential entry points for misuse, compromise, and lateral movement within systems.
At the same time, 82% of organisations expect AI agents to outpace their existing security guardrails within the next year. This suggests that current governance models are not keeping pace with the scale and autonomy of agent-based systems being deployed.
Operational strain and recovery challenges
The operational impact of AI agents is also emerging as a concern. While these systems are often deployed to improve efficiency, more than 83% of respondents report that agents require more manual oversight than they save in productivity gains.
Recovery remains a critical weak point. Nearly all respondents (99%) say they cannot roll back agent-driven actions without causing system disruption. This limits the ability to respond quickly when errors or malicious actions occur, especially in environments where agents interact directly with critical systems and data.
Concerns extend to broader resilience. Nearly nine in ten respondents say they are worried about meeting recovery objectives as threats linked to AI agents increase. The inability to isolate, contain, and reverse actions in real time raises the operational risk associated with deploying these systems at scale.
Rising threat exposure from agent-driven systems
The research also points to a changing threat landscape. Nearly half of respondents expect agentic systems to drive the majority of cyberattacks within the next year. These systems can operate at machine speed, scale rapidly, and blur traditional distinctions between insider threats and external attacks.
Autonomous systems reduce the time available for detection and response, while expanding the number of potential attack surfaces. This places additional pressure on organisations to rethink how they monitor and secure activity that is no longer initiated or controlled by human users.
“Across Asia Pacific, organisations are moving quickly to operationalise AI, but many are finding that adoption is outpacing their ability to fully observe, govern, and restore these environments,” said Ananth Nag, General Manager and Vice President, Asia Pacific, Rubrik. “As decision-making shifts from human to machine, the priority is no longer just securing AI deployments, but maintaining operational safety, minimising disruption, and recovering quickly when incidents occur.”
Governance and resilience become linked priorities
The report positions AI governance and operational resilience as closely linked concerns. As AI agents take on more decision-making roles, traditional security models focused on perimeter defence or breach prevention become less effective.
Rubrik’s findings suggest a growing need for controls that can monitor agent activity, manage identity access, and enable recovery from unintended or malicious actions without disrupting broader systems.
The report, titled The State of the Agent: Understanding Adoption, Risk, and Mitigation, combines survey data with technical analysis of emerging attack vectors across tool, cognitive, and identity layers of AI systems. It outlines a shift in enterprise security priorities towards maintaining control in environments where systems can act independently of human input.