More than one in three Australian professionals are exposing sensitive company information to artificial intelligence platforms without formal oversight, according to a new report from IT management firm Josys. The Shadow AI Report 2025 highlights growing risks around “shadow AI” — the use of unauthorised AI tools in the workplace — and warns of serious compliance and governance challenges for organisations nationwide.
Rising risks from shadow AI
The report, released on 4 September, found that 78% of professionals now use AI tools, but 63% admit they are not confident in doing so securely. At the same time, 70% of organisations have little to no visibility into which AI platforms employees are actually using, creating significant blind spots for data security.
The findings show that 36% of employees are uploading sensitive business information to AI tools. This includes strategic plans (44%), technical data (40%), financial records (34%), and internal communications (28%). In addition, nearly one in four users (24%) acknowledged sharing customer personally identifiable information (PII), while 18% uploaded intellectual property and legal or compliance documents.
The report found sales and marketing teams to be most at risk, with 37% of staff in these departments uploading sensitive data, followed by finance and IT and telecoms (both at 36%) and healthcare (31%).
Governance gaps and compliance challenges
The study surveyed 500 Australian technology decision-makers and revealed a sharp gap between adoption and preparedness. Only one in three organisations (33%) felt fully prepared to assess AI risks, with almost 20% admitting they were not prepared at all. Even in highly regulated industries, preparedness was patchy: only 52% of finance, 55% of IT and telecoms, and 62% of healthcare teams said they were confident in their ability to manage AI securely.
Jun Yokote, COO and President of Josys International, described the situation as a governance crisis. “Shadow AI is no longer a fringe issue. It’s a looming full-scale governance failure unfolding in real time across Australian workplaces,” he said. “While the nation is racing to harness AI for increased productivity, without governance, that momentum quickly turns into risk. Productivity gains mean nothing if they come at the cost of trust, compliance, and control.”
Compliance pressures are also mounting, with 47% of respondents citing new transparency requirements for AI models and amendments to the Privacy Act as their top concerns. Despite this, half of organisations still rely on manual policy reviews, and one-third have no formal AI governance processes in place at all. Even among organisations with some oversight, only a quarter believe their enforcement tools are highly effective.
Call for coordinated action
Josys warned that smaller businesses face the biggest challenges, with only 30% of companies with fewer than 250 employees feeling prepared to assess AI risks, compared with 42% of larger firms. As economic pressures encourage workers to embrace AI for productivity gains, the lack of formal oversight is creating opportunities for data leaks and compliance failures.
The report calls for immediate action, including auditing AI usage across all teams, automating risk assessments based on data sensitivity and function, enforcing real-time policies through role-based access, and preparing AI-specific compliance reports. According to Josys, this unified approach is essential if businesses are to balance productivity with trust and resilience.
Yokote added that governance needs to keep pace with adoption. “What’s needed is a unified approach to AI governance which combines visibility, policy enforcement, and automation in a single, scalable framework,” he said.