Exabeam has released new research showing that insider threats, accelerated by artificial intelligence, have become the leading security concern for organisations in Asia Pacific and Japan (APJ). The study, based on a survey of 1,010 cybersecurity professionals worldwide, found that 69% of respondents in APJ expect insider threats to rise over the next year. More than half (53%) now view insiders, whether malicious or compromised, as a greater risk than external attackers.
The report, titled From Human to Hybrid: How AI and the Analytics Gap Are Fueling Insider Risk, highlights how generative AI is reshaping the threat landscape. AI allows malicious actors to carry out faster, stealthier, and more complex attacks that are difficult to detect. In the words of Steve Wilson, Chief AI and Product Officer at Exabeam, “Insiders aren’t just people anymore. They’re AI agents logging in with valid credentials, spoofing trusted voices, and making moves at machine speed.”
Regional differences in insider threat outlook
The survey revealed that insider activity is intensifying, with three in five APJ organisations (60%) reporting a measurable increase in incidents over the past year. APJ also leads all regions in projected growth of insider threats, reflecting heightened awareness of identity-driven attacks. By contrast, nearly one-third of Middle East respondents (30%) anticipate a decrease in insider threats, suggesting either stronger confidence in existing defences or a possible underestimation of the risk.
This regional divergence underscores the complexity of insider security. Exabeam noted that organisations will need to tailor defences to address the realities of their operating environments rather than rely on a single global approach.
AI making insider threats faster and harder to detect
AI is a key driver of this shift, with 75% of APJ respondents acknowledging that it makes insider threats more effective. The most concerning threat vector cited was AI-enhanced phishing and social engineering (31%), followed by privilege misuse or unauthorised access (18%) and data exfiltration (17%).
The research also flagged risks linked to unauthorised use of generative AI tools. Around 64% of APJ organisations reported unapproved GenAI use by employees, and 12% identified it as their top insider concern. This creates a dual challenge, as tools meant to boost productivity can also be weaponised for malicious activity.
Detection gaps and lack of behavioural visibility
Despite widespread recognition of the problem, many insider threat programmes remain ineffective. While 82% of APJ organisations say they have such programmes in place, fewer than four in ten (37%) use user and entity behaviour analytics (UEBA), which is considered essential for spotting abnormal activity. Instead, many continue to rely on access management, training, data loss prevention, and endpoint detection tools, which provide visibility but lack the behavioural context needed to identify subtle risks.
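To illustrate what behavioural context adds beyond the visibility that access controls or DLP alone provide, the short sketch below shows the basic idea behind UEBA-style detection: scoring activity against a per-user baseline rather than against a static rule. It is a simplified, hypothetical example with made-up data, not Exabeam's implementation or any product's actual algorithm.

```python
# Minimal, hypothetical sketch of UEBA-style behavioural scoring:
# compare new activity with a per-user baseline instead of a static allow/deny rule.
from statistics import mean, stdev

# Assumed toy data: megabytes uploaded per day by one user over recent weeks.
baseline_uploads = [120, 95, 150, 110, 130, 100, 140, 125, 115, 135]

def anomaly_score(observed_mb: float, history: list[float]) -> float:
    """How many standard deviations the observed value sits above this user's own baseline."""
    mu = mean(history)
    sigma = stdev(history) or 1.0  # guard against a perfectly flat baseline
    return (observed_mb - mu) / sigma

# A 2 GB transfer from an account that normally moves ~120 MB/day stands out sharply,
# even though the session used valid credentials and passed every static control.
score = anomaly_score(2000, baseline_uploads)
print(f"anomaly score: {score:.1f} sigma")
if score > 3:  # assumed illustrative alerting threshold
    print("flag for review: behaviour deviates sharply from this user's baseline")
```

Real UEBA platforms model far more signals (logon times, peer-group comparisons, privilege changes, data movement patterns), but the principle is the same: the alert comes from deviation against learned behaviour, which is the context the survey found most APJ programmes still lack.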
AI adoption in security tooling is high, with 94% of APJ organisations using it in some form. However, the study found that governance and readiness are lagging. More than half of executives believe AI tools are fully deployed, but frontline managers and analysts report that many remain at the pilot or evaluation stage. Privacy concerns, fragmented systems, and difficulties in understanding user intent continue to limit progress.
Kevin Kirkwood, Chief Information Security Officer at Exabeam, noted, “AI has added a layer of speed and subtlety to insider activity that traditional defences weren’t built to detect. Security teams are deploying AI to detect these evolving threats, but without strong governance or clear oversight, it’s a race they’re struggling to win.”
Towards stronger insider threat defences
The findings suggest that success against AI-driven insider threats will require more than policy changes. Organisations will need leadership engagement, cross-functional collaboration, and governance models that can keep pace with rapid AI adoption. According to Exabeam, progress will come from focusing on behavioural context, distinguishing between human and AI-driven activity, and improving collaboration across teams.
Ultimately, the study emphasises the importance of shortening detection and response times to limit the window of opportunity for insider activity. As AI continues to amplify risks, organisations in APJ and beyond will need to evolve their strategies to stay ahead of emerging threats.