AI agents are becoming a new identity risk for Singapore organisations
Semperis study finds AI agents are expanding identity security risks as Singapore organisations move them into sensitive workflows.
AI adoption is expanding the number of identities that enterprise security teams need to manage, as organisations give AI agents access to sensitive systems before governance controls have fully caught up.
A new Semperis study found that 66% of Singapore respondents believe AI will increase attacks on identity infrastructure, including systems such as Active Directory, Entra ID and Okta. At the same time, as many as 93% of organisations in Singapore either already use or plan to use AI agents for sensitive security tasks such as password resets and VPN access.
The findings are based on The State of Identity Security in the AI Era, which surveyed 1,100 organisations across the US, UK, France, Germany, Italy, Spain, Australia and Singapore.
AI agents create non-human identities
The challenge for enterprises is that AI agents are becoming part of the identity environment. They may not behave like employees, contractors or administrators, but they can still hold access rights, interact with sensitive systems and operate inside security workflows.
“Singapore organisations have been quick to explore AI across business and security operations, but every AI agent introduced into the enterprise also creates a new identity that must be governed, monitored and recovered if compromised,” said Gerry Sillars, Semperis Vice President of Asia Pacific and Japan. “It’s encouraging that 90% of Singapore respondents see AI identity governance as a priority, but priority must translate into practical controls. As AI moves closer to privileged systems, identity resilience needs to be built into AI adoption from the start.”
Globally, only 65% of organisations said AI identities are fully registered, authenticated and authorised in a formal system. Another 6% said they do not track them at all. Among organisations that do track AI identities, 57% use the same system as human identities, while 43% authenticate and authorise them through a separate system.
That creates a more complex environment for security teams. AI agents can become another class of non-human identity, adding new access pathways that need to be registered, monitored, limited and recovered if compromised.
Sensitive security workflows are already involved
The study found that AI agents are already being placed close to identity infrastructure. Globally, 29% of surveyed organisations use AI agents to manage security-related help desk tickets, including password resets and VPN access. Another 65% intend to do so within the next year.
The risk is not limited to central identity systems. The study also found that 92% of respondents said some percentage of their workforce has AI installed on local machines where it can access SSH and encryption keys. This places AI-enabled tools near credentials and sensitive access mechanisms that are already attractive targets for attackers.
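To make the local-machine exposure concrete, here is a minimal, illustrative audit sketch (not from the Semperis report): it scans a user's `~/.ssh` directory for private key files and flags any with permissions looser than `0600`. Any AI tool running as the same local user could read such keys just as easily; the function name and output format are assumptions for illustration.

```python
import stat
from pathlib import Path

def audit_ssh_keys(ssh_dir: Path = Path.home() / ".ssh") -> list[tuple[str, str]]:
    """Illustrative sketch: flag local private keys an AI tool running as
    the same user could read. Returns (filename, status) pairs."""
    findings = []
    if not ssh_dir.is_dir():
        return findings
    for path in sorted(ssh_dir.iterdir()):
        if not path.is_file():
            continue
        try:
            first_line = path.read_text(errors="ignore").splitlines()[:1]
        except OSError:
            continue  # unreadable or binary artefact; skip
        if first_line and "PRIVATE KEY" in first_line[0]:
            mode = stat.S_IMODE(path.stat().st_mode)
            # 0600 (owner read/write only) is the conventional safe mode
            status = "ok" if mode == 0o600 else f"loose permissions {oct(mode)}"
            findings.append((path.name, status))
    return findings

if __name__ == "__main__":
    for name, status in audit_ssh_keys():
        print(f"{name}: {status}")
```

A real deployment would go further (encryption keys, agent config files, cloud credentials), but even this small check shows how much sensitive material sits within reach of locally installed AI tooling.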
For Singapore organisations, this makes AI identity governance an operational issue rather than a future policy concern. If AI agents are used for security tasks, they need the same level of discipline applied to other privileged identities, including clear ownership, access control, monitoring and recovery planning.
Governance priorities need to become enforceable controls
Semperis found that 90% of Singapore respondents identified AI identity governance as a priority in the coming months. The harder task is translating that priority into controls that can work across human users, AI agents and other non-human identities.
The study pointed to several areas of practice, including treating AI agents explicitly as non-human identities, enforcing least-privilege, just-enough and just-in-time access, separating agent and human trust boundaries where appropriate, using analytics to detect anomalous agent behaviour, and ensuring identity systems can be restored to a trusted state after a breach.
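The controls above can be sketched in code. The following is a hypothetical, simplified registry (not a Semperis product API; all class and method names are assumptions) that treats AI agents as registered non-human identities and issues short-lived, narrowly scoped tokens, illustrating least-privilege, just-enough and just-in-time access plus a revocation path for recovery:

```python
import secrets
import time
from dataclasses import dataclass

@dataclass
class AgentGrant:
    agent_id: str
    scopes: frozenset[str]  # just-enough: only the actions requested
    expires_at: float       # just-in-time: short TTL, then re-request

class AgentIdentityRegistry:
    """Illustrative sketch: AI agents as first-class non-human identities."""

    def __init__(self) -> None:
        self._agents: dict[str, frozenset[str]] = {}  # agent -> allowed scopes
        self._grants: dict[str, AgentGrant] = {}      # token -> active grant

    def register(self, agent_id: str, allowed_scopes: set[str]) -> None:
        # Every agent is registered before it can hold any access at all.
        self._agents[agent_id] = frozenset(allowed_scopes)

    def issue_token(self, agent_id: str, scopes: set[str], ttl_s: float = 300) -> str:
        allowed = self._agents[agent_id]  # KeyError -> unregistered agent
        if not scopes <= allowed:
            raise PermissionError(f"{agent_id} requested scopes beyond its grant")
        token = secrets.token_urlsafe(16)
        self._grants[token] = AgentGrant(agent_id, frozenset(scopes),
                                         time.time() + ttl_s)
        return token

    def authorize(self, token: str, scope: str) -> bool:
        grant = self._grants.get(token)
        if grant is None or time.time() > grant.expires_at:
            self._grants.pop(token, None)  # expired grants swept on use
            return False
        return scope in grant.scopes

    def revoke_agent(self, agent_id: str) -> None:
        # Recovery path: a compromised agent loses all active grants at once.
        self._grants = {t: g for t, g in self._grants.items()
                        if g.agent_id != agent_id}
```

For example, a help-desk agent registered only with a `password.reset` scope cannot obtain a token for VPN access, and revoking the agent invalidates every token it holds. Anomaly analytics and restore-to-trusted-state tooling would sit alongside a registry like this, not inside it.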
“The pattern of global organisations overestimating how quickly they can recover from a cyberattack is real, especially when identity is within the blast radius. On paper, organisations have plans and backups; in practice, identity failures turn technical incidents into prolonged business crises, exposing a dangerous gap between perceived resilience and reality,” said Chris Inglis, the first U.S. National Cyber Director and Semperis Strategic Advisor.