Monday, 6 October 2025
29 C
Singapore
31.5 C
Thailand
20.2 C
Indonesia
28.2 C
Philippines

When AI turns against you protecting organisations from insider risk

AI adoption is reshaping insider risks in Southeast Asia. Learn how organisations can protect themselves from data leaks, prompt attacks, and compliance gaps.

Insider risk has long been one of the most complex challenges in corporate security. Traditionally, the threat came from employees or contractors who misused their access, either through negligence or intent. Today, the definition is expanding. The rise of artificial intelligence in the workplace has created new insider pathways where machines act with permissions once reserved for humans.

Recent figures underline this trend. Verizon’s 2025 Data Breach Investigations Report found that 29% of breaches in EMEA had an insider component, while breaches involving third parties doubled year on year to 30%, according to Verizon’s press summary. This pattern highlights how access, rather than intent alone, drives risk. In Southeast Asia, where outsourcing and cross-border work are common, this reliance on extended ecosystems makes insider threats even more acute.

When AI turns against you protecting organisations from insider risk - 1

At the same time, organisations are adopting AI at a rapid pace. Tools like generative AI chatbots, autonomous agents, and embedded workplace assistants now reside within the corporate network, providing access to sensitive data. When these tools are misused, compromised, or poorly governed, they effectively become new insiders. The shift marks a fundamental change in how businesses must understand and address insider risk.

The challenge is particularly relevant for the region’s financial services, healthcare, and public sectors, where regulators are demanding stricter controls. As AI adoption accelerates, leaders must recognise that risk does not only come from employees but also from the systems they empower.

How AI changes the insider risk landscape

Artificial intelligence expands insider risk in ways that were previously unimaginable. A clear example is data leakage that occurs through the everyday use of public AI tools. In 2023, Samsung employees accidentally exposed sensitive source code and meeting notes by pasting them into ChatGPT, prompting the company to ban the tool internally. Cases like this demonstrate how easily staff can share critical data outside controlled environments.

Shadow AI use is another concern. Many employees in Southeast Asia adopt generative AI tools informally to speed up tasks, often without IT oversight. This weakens data governance and leaves organisations blind to where sensitive data flows.

More technical risks include prompt-injection attacks, where hidden instructions in a web page or email can hijack an AI assistant and make it exfiltrate data. Security researchers demonstrated this with “EchoLeak”, which manipulated an AI connected to email and documents to leak sensitive information. Supply-chain issues compound the problem, as organisations may unknowingly rely on vulnerable AI models or plugins that provide malicious actors with insider-like access.

In short, AI multiplies insider risks by introducing new, often invisible, vectors for exploitation. Unlike traditional insiders, AI systems can replicate mistakes or malicious instructions at scale, making breaches faster and harder to contain.

Regulatory pressures and regional expectations

Governments are responding quickly to the convergence of AI and insider risk. In the European Union, the AI Act begins to phase in from 2024, requiring businesses to apply risk-based controls and ensure transparency in their AI systems, according to the AI Act tracker. While Southeast Asia is not bound by EU law, its influence is already shaping practices among multinational firms in the region.

When AI turns against you protecting organisations from insider risk - 2

Singapore has issued Advisory Guidelines on the use of personal data in AI-driven systems, clarifying when organisations can use data for training and how they must provide transparency to users. This is especially relevant to financial institutions and healthcare providers, which handle sensitive data categories. In parallel, the Infocomm Media Development Authority has championed voluntary AI governance frameworks to guide the responsible adoption of AI.

Elsewhere, the UK National Cyber Security Centre has released guidelines for secure AI development. At the same time, the US National Institute of Standards and Technology has published an AI Risk Management Framework with a profile tailored to generative AI, including a specific Generative AI profile. These standards are influencing corporate policy in Asia, especially for firms with global footprints.

For Southeast Asian businesses, the regulatory environment is becoming less forgiving. Compliance now extends beyond traditional data protection to cover how AI systems are designed, trained, and monitored. Boards and leadership teams need to treat insider risk from AI not as a technical issue but as a governance priority.

Strategies to secure against AI-driven insider risks

Addressing AI-related insider risk requires a multi-layered approach that combines governance, technology, and culture. At the foundation lies data classification and access control. Businesses must define which categories of data can be processed by AI tools and enforce this through enterprise-grade safeguards such as cloud access brokers and sensitivity labels.

Identity and access management play an equally important role. Adopting zero-trust principles ensures that AI tools and agents operate only with the least privilege required. In practice, this means issuing scoped tokens, regularly rotating credentials, and requiring step-up authentication for sensitive AI actions, as outlined in NIST’s Zero Trust Architecture.

Security must also extend into the AI development lifecycle. Following frameworks like the OWASP Top 10 for Large Language Model Applications helps teams identify common vulnerabilities such as prompt injection and data poisoning. Red-teaming exercises, where internal teams attempt to manipulate or exploit AI systems, are increasingly seen as essential to reveal weaknesses before adversaries can.

Monitoring is another critical pillar. Organisations should instrument their AI systems with detailed logging to record who entered what prompts, what data was accessed, and which actions were taken. Honeytokens—decoy files designed to trigger alerts when accessed—offer an effective way to detect unauthorised exfiltration attempts. MITRE’s ATLAS knowledge base offers examples of real-world adversarial techniques that can inform detection strategies.

When AI turns against you protecting organisations from insider risk - 3

Ultimately, clear policies and effective employee training are essential. Staff require guidance on safe prompting, approved tools, and reporting procedures in the event of an incident. In Southeast Asia, where cultural diversity influences workplace behaviour, training programmes must be adapted to local contexts to be effective.

The path forward for Southeast Asian organisations

The insider risk landscape is becoming more complex, and artificial intelligence is at the centre of this shift. For businesses in Southeast Asia, the stakes are particularly high. The region’s rapid digital adoption, reliance on third-party ecosystems, and cross-border data flows mean that a single AI misstep can have far-reaching consequences.

One path forward is to treat AI adoption not only as a productivity initiative but as a security transformation. This requires aligning IT, compliance, and business units around common goals, with senior leadership taking direct responsibility for AI risk governance. Embedding AI policies into standard joiner-mover-leaver processes, procurement reviews, and vendor risk assessments ensures that safeguards are not treated as add-ons but as part of everyday operations.

Another opportunity lies in using AI defensively. Just as AI can create insider risks, it can also help mitigate them. Behavioural analytics, powered by machine learning, can flag unusual activity, such as an AI agent accessing financial records outside regular hours. These tools provide early warning signals that traditional monitoring may miss.

Industry collaboration will also be critical. Given the shared challenges across borders, Southeast Asian businesses stand to benefit from participating in regional forums, sharing threat intelligence, and aligning on best practices. Regulators are increasingly open to such collaboration, recognising that AI risks cannot be managed in isolation.

At its core, managing AI-driven insider risk is not about limiting innovation but ensuring it happens safely. By embedding robust governance, applying technical controls, and fostering a culture of accountability, Southeast Asian organisations can turn AI into an asset rather than a liability.

Hot this week

Confluent partners with Visa Cash App Racing Bulls to power real-time data in Formula 1

Confluent joins Visa Cash App Racing Bulls to deliver real-time data streaming for faster race-day decisions and smarter engineering in Formula 1.

Microsoft enhances Photos app with AI features exclusive to Copilot+ PCs

Microsoft’s new AI-powered Photos app offers smarter image sorting and enhancement, but is limited to Copilot+ PCs with built-in NPUs.

Cisco launches Splunk Observability Cloud on AWS in Singapore

Cisco launches Splunk Observability Cloud on AWS in Singapore to help businesses improve system resilience, data control, and innovation.

Tenable uncovers critical AI vulnerabilities in Google Gemini

Tenable reveals and Google fixes three major flaws in Gemini AI that could have exposed sensitive user data to cyberattacks.

Kyndryl introduces expanded agentic AI framework to drive enterprise-wide adoption

Kyndryl expands its Agentic AI Framework to help enterprises integrate AI at scale, accelerate adoption and achieve real-world business results.

Canon’s Think Big series returns to help businesses build resilience in uncertain times

Canon’s Think Big Leadership Business Series returns on 22 and 23 October in Singapore to help companies build resilience amid global uncertainty.

Payment security experts to meet in Bangkok amid rising cyber threats in Asia-Pacific

Cybersecurity attacks across Asia-Pacific continue to pose serious risks...

Meta unveils Candle subsea cable and expands Asia-Pacific connectivity projects

Meta unveils Candle, the largest subsea cable in APAC, and updates Bifrost, Echo, and Apricot to boost regional and transpacific connectivity.

C-suite’s rush to adopt AI exposes critical security blind spots

A new Tenable report warns that outdated security strategies are leaving organisations exposed as AI adoption surges.

Related Articles