Tenable research finds serious security gaps in AI cloud services

Tenable finds 70% of AI cloud workloads have unpatched vulnerabilities, warning of data tampering and poor security in popular cloud services.

A new report by Tenable has revealed that artificial intelligence (AI) services used in cloud environments are highly vulnerable to cyber threats, with the majority of workloads exposed to unresolved security issues. The Cloud AI Risk Report 2025 outlines how the growing use of AI tools on platforms like Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure is leading to a sharp increase in security risks for businesses.

The analysis, released on 20 March by the exposure management firm, shows that around 70% of cloud-based AI workloads contain at least one known but unpatched vulnerability. These weaknesses could allow attackers to manipulate data, tamper with AI models, or cause data leaks. The report aims to raise awareness about the potential consequences of combining AI and cloud technologies without proper safeguards.

Unpatched vulnerabilities and misconfigurations widespread

One of the most striking findings from the report is the widespread presence of critical vulnerabilities. In particular, Tenable identified CVE-2023-38545, a severe heap-based buffer overflow in the SOCKS5 proxy handling of the popular curl data transfer tool, in 30% of the AI workloads it analysed. A vulnerability of this kind could be used by attackers to gain unauthorised access to data or systems.
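CVE-2023-38545 affects curl versions 7.69.0 through 8.3.0 and was fixed in 8.4.0. As a minimal sketch of the kind of check an exposure scan performs, the snippet below reads the locally installed curl version and flags anything in the affected range; it assumes curl is on the PATH and is no substitute for a proper vulnerability scanner, since workloads may also bundle their own copies of libcurl.

```python
# Sketch: flag a locally installed curl that falls in the CVE-2023-38545
# affected range (7.69.0 - 8.3.0, fixed in 8.4.0). Assumes `curl` is on PATH.
import re
import subprocess

AFFECTED_MIN = (7, 69, 0)
FIXED = (8, 4, 0)

def installed_curl_version() -> tuple[int, int, int] | None:
    """Parse `curl --version` output, e.g. 'curl 8.2.1 (x86_64-...) ...'."""
    try:
        out = subprocess.run(["curl", "--version"],
                             capture_output=True, text=True, check=True).stdout
    except (OSError, subprocess.CalledProcessError):
        return None
    match = re.match(r"curl (\d+)\.(\d+)\.(\d+)", out)
    return tuple(map(int, match.groups())) if match else None

version = installed_curl_version()
if version is None:
    print("curl not found or version not recognised")
elif AFFECTED_MIN <= version < FIXED:
    print(f"curl {'.'.join(map(str, version))} is in the CVE-2023-38545 range: update to 8.4.0 or later")
else:
    print(f"curl {'.'.join(map(str, version))} is outside the affected range")
```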

In addition, misconfigurations in cloud services were found to be alarmingly common. For example, 77% of organisations using Google Vertex AI Notebooks had left the overprivileged default Compute Engine service account attached. That default account typically carries broad project-level permissions, so any service built on top of it inherits far more access than it needs and becomes an easier target for exploitation.

Tenable refers to this issue as a “Jenga-style” misconfiguration—where services are built on top of others, inheriting risky default settings from the lower layers. If one component is misconfigured, it can lead to cascading vulnerabilities throughout the system.
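One practical way to spot the Vertex AI misconfiguration described above is to check which service account each notebook instance runs as: the default Compute Engine account follows the pattern PROJECT_NUMBER-compute@developer.gserviceaccount.com. The minimal sketch below works on a JSON export of notebook instance metadata (for example, output from the gcloud CLI); the serviceAccount field name and the export step are assumptions about the environment, not details taken from Tenable's report.

```python
# Sketch: flag notebook instances that run as the default Compute Engine
# service account (PROJECT_NUMBER-compute@developer.gserviceaccount.com).
# Works on an exported JSON list of instances; the `serviceAccount` field
# name is an assumption about the export format, not a documented contract.
import json
import re
import sys

DEFAULT_COMPUTE_SA = re.compile(r"^\d+-compute@developer\.gserviceaccount\.com$")

def flag_default_service_accounts(instances: list[dict]) -> list[str]:
    """Return names of instances attached to the default Compute Engine account."""
    flagged = []
    for inst in instances:
        if DEFAULT_COMPUTE_SA.match(inst.get("serviceAccount", "")):
            flagged.append(inst.get("name", "<unnamed>"))
    return flagged

if __name__ == "__main__":
    # Usage: python flag_default_sa.py instances.json
    with open(sys.argv[1]) as f:
        instances = json.load(f)
    for name in flag_default_service_accounts(instances):
        print(f"Uses default Compute Engine service account: {name}")
```

A common remediation path is to attach a dedicated, least-privilege service account to each notebook instead of relying on the project-wide default.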

AI data and models at risk of tampering

The report also found evidence of poor controls around AI training data, a key part of machine learning systems. Data poisoning, where training data is deliberately tampered with to manipulate the behaviour of AI models, remains a serious concern.

According to the report, 14% of organisations using Amazon Bedrock had not properly restricted public access to at least one AI training data storage bucket. In 5% of cases, the permissions were found to be overly broad, increasing the likelihood of unauthorised access or data tampering.
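Bedrock model customisation jobs typically read training data from Amazon S3, so the exposure described here largely comes down to bucket-level access controls. The snippet below is a minimal sketch using boto3 that checks whether the S3 Block Public Access settings are fully enabled on a set of buckets; the bucket names are placeholders, and a full audit would also review bucket policies, ACLs, and access points.

```python
# Sketch: check whether S3 buckets that hold AI training data block public
# access. Bucket names are placeholders for illustration only.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
TRAINING_BUCKETS = ["example-bedrock-training-data"]  # placeholder names

def bucket_blocks_public_access(bucket: str) -> bool:
    """True only if all four S3 Block Public Access settings are enabled."""
    try:
        cfg = s3.get_public_access_block(Bucket=bucket)["PublicAccessBlockConfiguration"]
    except ClientError:
        # No configuration at all means public access is not explicitly blocked.
        return False
    return all(cfg.get(key, False) for key in (
        "BlockPublicAcls", "IgnorePublicAcls", "BlockPublicPolicy", "RestrictPublicBuckets"))

for bucket in TRAINING_BUCKETS:
    status = "blocks" if bucket_blocks_public_access(bucket) else "does NOT block"
    print(f"{bucket}: {status} public access")
```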

Similarly, Tenable discovered that 91% of users running Amazon SageMaker notebook instances had at least one instance that granted root access by default. This creates a significant risk if any notebook is compromised, as it could allow attackers to make changes to all files on the system.
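Root access is a per-instance setting on SageMaker notebook instances and can be inspected through the SageMaker API. The sketch below uses boto3 to list notebook instances and report those with root access enabled; it is a minimal illustration with error handling kept to a minimum.

```python
# Sketch: report SageMaker notebook instances that have root access enabled.
# describe_notebook_instance returns RootAccess as 'Enabled' or 'Disabled'.
import boto3

sm = boto3.client("sagemaker")

def notebooks_with_root_access() -> list[str]:
    """Return names of notebook instances whose RootAccess setting is 'Enabled'."""
    flagged = []
    paginator = sm.get_paginator("list_notebook_instances")
    for page in paginator.paginate():
        for summary in page["NotebookInstances"]:
            name = summary["NotebookInstanceName"]
            detail = sm.describe_notebook_instance(NotebookInstanceName=name)
            if detail.get("RootAccess") == "Enabled":
                flagged.append(name)
    return flagged

for name in notebooks_with_root_access():
    print(f"Root access enabled: {name}")
```

New notebook instances can also be created with root access disabled rather than relying on the default setting.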

Call for improved AI cloud security

Liat Hayun, Vice President of Research and Product Management for Cloud Security at Tenable, stressed the importance of addressing these risks.

“When we talk about AI usage in the cloud, more than sensitive data is on the line. If a threat actor manipulates the data or AI model, there can be catastrophic long-term consequences, such as compromised data integrity, compromised security of critical systems and degradation of customer trust,” said Hayun.

She added that cloud security strategies need to evolve in line with the increasing use of AI. “Cloud security measures must evolve to meet the new challenges of AI and find the delicate balance between protecting against complex attacks on AI data and enabling organizations to achieve responsible AI innovation.”

The findings serve as a timely reminder for businesses to review their cloud AI security practices and ensure they are not leaving critical data or infrastructure exposed to potential threats.
