
Tenable research finds serious security gaps in AI cloud services

Tenable finds 70% of AI cloud workloads carry at least one unpatched vulnerability, warning of data tampering risks and weak security controls in popular cloud services.

A new report by Tenable has revealed that artificial intelligence (AI) services used in cloud environments are highly vulnerable to cyber threats, with the majority of workloads exposed to unresolved security issues. The Cloud AI Risk Report 2025 outlines how the growing use of AI tools on platforms like Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure is leading to a sharp increase in security risks for businesses.

The analysis, released on 20 March by the exposure management firm, shows that around 70% of cloud-based AI workloads contain at least one known but unpatched vulnerability. These weaknesses could allow attackers to manipulate data, tamper with AI models, or cause data leaks. The report aims to raise awareness about the potential consequences of combining AI and cloud technologies without proper safeguards.

Unpatched vulnerabilities and misconfigurations widespread

One of the most striking findings from the report is the widespread presence of critical vulnerabilities. In particular, Tenable identified CVE-2023-38545, a high-severity heap buffer overflow in the SOCKS5 proxy handling of the widely used curl data transfer tool, in 30% of the AI workloads it analysed. A flaw of this kind could be exploited by attackers to gain unauthorised access to data or systems.
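
As an illustration of how a basic exposure check might look, the sketch below shells out to `curl --version` and compares the result against the publicly documented affected range for CVE-2023-38545 (fixed in curl 8.4.0). The version bounds and the assumption that no distribution backport is in place are ours, not the report's.

```python
import re
import subprocess

FIXED = (8, 4, 0)            # first curl release with the CVE-2023-38545 patch
FIRST_AFFECTED = (7, 69, 0)  # earliest affected release per the public advisory

def curl_version() -> tuple:
    # Parse the leading "curl X.Y.Z" from `curl --version` output.
    out = subprocess.run(
        ["curl", "--version"], capture_output=True, text=True, check=True
    ).stdout
    major, minor, patch = re.match(r"curl (\d+)\.(\d+)\.(\d+)", out).groups()
    return (int(major), int(minor), int(patch))

if __name__ == "__main__":
    version = curl_version()
    vulnerable = FIRST_AFFECTED <= version < FIXED
    label = "potentially affected by CVE-2023-38545" if vulnerable else "outside the affected range"
    print(f"curl {'.'.join(map(str, version))}: {label}")
```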

In addition, misconfigurations in cloud services were found to be alarmingly common. For example, 77% of organisations using Google Vertex AI Notebooks had left the overprivileged default Compute Engine service account attached. Notebooks and the services built on top of them then inherit far more permissions than they need, so a single compromised instance gives an attacker a much wider foothold.

Tenable refers to this issue as a “Jenga-style” misconfiguration—where services are built on top of others, inheriting risky default settings from the lower layers. If one component is misconfigured, it can lead to cascading vulnerabilities throughout the system.
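
To make the service-account risk concrete, here is a rough sketch that lists project-level role bindings for the default Compute Engine service account, the identity Vertex AI notebook instances fall back to when no dedicated account is specified. It assumes application-default credentials, the `google-api-python-client` package, and placeholder project identifiers; it is an illustration, not Tenable's methodology.

```python
from googleapiclient.discovery import build

PROJECT_ID = "my-project"        # placeholder
PROJECT_NUMBER = "123456789012"  # placeholder

# The default Compute Engine service account that notebooks attach unless
# a dedicated, minimally scoped account is specified at creation time.
default_sa = f"serviceAccount:{PROJECT_NUMBER}-compute@developer.gserviceaccount.com"
BROAD_ROLES = {"roles/editor", "roles/owner"}  # the historical default grant is roles/editor

crm = build("cloudresourcemanager", "v1")
policy = crm.projects().getIamPolicy(resource=PROJECT_ID, body={}).execute()

# Print every role the default account holds on the project, flagging broad ones.
for binding in policy.get("bindings", []):
    if default_sa in binding.get("members", []):
        flag = "  <-- broader than most notebooks need" if binding["role"] in BROAD_ROLES else ""
        print(binding["role"] + flag)
```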

AI data and models at risk of tampering

The report also found evidence of poor controls around AI training data, a key part of machine learning systems. Data poisoning, where training data is deliberately tampered with to manipulate the behaviour of AI models, remains a serious concern.

According to the report, 14% of organisations using Amazon Bedrock had not properly restricted public access to at least one AI training data storage bucket. In 5% of cases, the permissions were found to be overly broad, increasing the likelihood of unauthorised access or data tampering.
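
One common mitigation, not specific to the report, is to enforce S3 Block Public Access on buckets that hold training data; Bedrock reads the data through the IAM role it is given, not through public ACLs or bucket policies. The sketch below applies the setting with boto3 against a placeholder bucket name.

```python
import boto3

BUCKET = "my-bedrock-training-data"  # placeholder bucket name

s3 = boto3.client("s3")

# Block all four categories of public access at the bucket level.
s3.put_public_access_block(
    Bucket=BUCKET,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Confirm the settings took effect.
print(s3.get_public_access_block(Bucket=BUCKET)["PublicAccessBlockConfiguration"])
```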

Similarly, Tenable discovered that 91% of users running Amazon SageMaker notebook instances had at least one instance that granted root access by default. This creates a significant risk if any notebook is compromised, as it could allow attackers to make changes to all files on the system.
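
A hedged sketch of how an organisation might audit this default with boto3 is shown below. It assumes suitable IAM permissions and only updates instances that are already stopped, since SageMaker requires a notebook instance to be stopped before its root-access setting can be changed.

```python
import boto3

sm = boto3.client("sagemaker")

# Walk every notebook instance and report those still granting root access.
for page in sm.get_paginator("list_notebook_instances").paginate():
    for summary in page["NotebookInstances"]:
        name = summary["NotebookInstanceName"]
        detail = sm.describe_notebook_instance(NotebookInstanceName=name)
        if detail.get("RootAccess") == "Enabled":
            print(f"{name}: root access enabled")
            # RootAccess can only be changed while the instance is stopped.
            if detail["NotebookInstanceStatus"] == "Stopped":
                sm.update_notebook_instance(
                    NotebookInstanceName=name, RootAccess="Disabled"
                )
                print(f"{name}: updated to RootAccess=Disabled")
```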

Call for improved AI cloud security

Liat Hayun, Vice President of Research and Product Management for Cloud Security at Tenable, stressed the importance of addressing these risks.

“When we talk about AI usage in the cloud, more than sensitive data is on the line. If a threat actor manipulates the data or AI model, there can be catastrophic long-term consequences, such as compromised data integrity, compromised security of critical systems and degradation of customer trust,” said Hayun.

She added that cloud security strategies need to evolve in line with the increasing use of AI. “Cloud security measures must evolve to meet the new challenges of AI and find the delicate balance between protecting against complex attacks on AI data and enabling organizations to achieve responsible AI innovation.”

The findings serve as a timely reminder for businesses to review their cloud AI security practices and ensure they are not leaving critical data or infrastructure exposed to potential threats.
