Tenable research finds serious security gaps in AI cloud services

A new report by Tenable has revealed that artificial intelligence (AI) services used in cloud environments are highly vulnerable to cyber threats, with the majority of workloads exposed to unresolved security issues. The Cloud AI Risk Report 2025 outlines how the growing use of AI tools on platforms like Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure is leading to a sharp increase in security risks for businesses.

The analysis, released on 20 March by the exposure management firm, shows that around 70% of cloud-based AI workloads contain at least one known but unpatched vulnerability. These weaknesses could allow attackers to manipulate data, tamper with AI models, or cause data leaks. The report aims to raise awareness about the potential consequences of combining AI and cloud technologies without proper safeguards.

Unpatched vulnerabilities and misconfigurations widespread

One of the most striking findings from the report is the widespread presence of critical vulnerabilities. In particular, Tenable identified CVE-2023-38545—a severe flaw in the popular curl data transfer tool—in 30% of the AI workloads it analysed. This type of vulnerability could be used by attackers to gain unauthorised access to data or systems.
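CVE-2023-38545 is the SOCKS5 heap buffer overflow in curl, fixed upstream in curl 8.4.0. As a rough illustration of the kind of check the finding implies (this is not Tenable's tooling), the Python sketch below asks the local curl binary for its version and compares it against the patched release. Note that some Linux distributions backport the fix without changing the version string, so a version comparison like this is only a heuristic.

```python
# Minimal sketch: flag a host or container image still running a curl release
# affected by CVE-2023-38545 (SOCKS5 heap overflow, fixed in curl 8.4.0).
# Distributions that backport the patch may report an older version string.
import re
import subprocess

FIXED_VERSION = (8, 4, 0)  # first upstream curl release containing the fix

def curl_version() -> tuple[int, ...]:
    """Return the locally installed curl version as a tuple, e.g. (8, 1, 2)."""
    out = subprocess.run(["curl", "--version"], capture_output=True, text=True, check=True)
    match = re.match(r"curl (\d+)\.(\d+)\.(\d+)", out.stdout)
    if not match:
        raise RuntimeError("could not parse `curl --version` output")
    return tuple(int(part) for part in match.groups())

if __name__ == "__main__":
    version = curl_version()
    status = "patched" if version >= FIXED_VERSION else "potentially VULNERABLE to CVE-2023-38545"
    print(f"curl {'.'.join(map(str, version))}: {status}")
```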

In addition, misconfigurations in cloud services were found to be alarmingly common. For example, 77% of organisations using Google Vertex AI Notebooks had left the default Compute Engine service account overprivileged. Services built on this account are easier to exploit, because it typically carries far more permissions than the notebooks actually need.

Tenable refers to this issue as a “Jenga-style” misconfiguration—where services are built on top of others, inheriting risky default settings from the lower layers. If one component is misconfigured, it can lead to cascading vulnerabilities throughout the system.
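One way to surface this inherited default is to inspect the project's IAM policy for broad roles granted to the default Compute Engine service account, which Vertex AI Notebooks use when no dedicated service account is attached. The Python sketch below is an illustration of that workflow, not Tenable's method; it assumes the gcloud CLI is installed and authenticated, and the project ID and number are placeholders.

```python
# Minimal sketch: report broad roles held by the project's default Compute
# Engine service account (inherited by Vertex AI Notebooks by default).
# PROJECT_ID and PROJECT_NUMBER are placeholders for illustration only.
import json
import subprocess

PROJECT_ID = "my-project"          # placeholder
PROJECT_NUMBER = "123456789012"    # placeholder
DEFAULT_SA = f"serviceAccount:{PROJECT_NUMBER}-compute@developer.gserviceaccount.com"
BROAD_ROLES = {"roles/owner", "roles/editor"}  # far more access than a notebook needs

# Fetch the project IAM policy as JSON via the gcloud CLI.
policy = json.loads(
    subprocess.run(
        ["gcloud", "projects", "get-iam-policy", PROJECT_ID, "--format=json"],
        capture_output=True, text=True, check=True,
    ).stdout
)

flagged = [
    binding["role"]
    for binding in policy.get("bindings", [])
    if binding["role"] in BROAD_ROLES and DEFAULT_SA in binding.get("members", [])
]

if flagged:
    print(f"Default compute service account holds broad roles: {flagged}")
    print("Consider attaching a dedicated, least-privilege service account to notebooks.")
else:
    print("Default compute service account holds no owner/editor bindings.")
```

Attaching a purpose-built service account with only the permissions a notebook needs removes the lowest block in the "Jenga" stack, so services layered on top no longer inherit the risky default.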

AI data and models at risk of tampering

The report also found evidence of poor controls around AI training data, a key part of machine learning systems. Data poisoning, where training data is deliberately tampered with to manipulate the behaviour of AI models, remains a serious concern.

According to the report, 14% of organisations using Amazon Bedrock had not properly restricted public access to at least one AI training data storage bucket. In 5% of cases, the permissions were found to be overly broad, increasing the likelihood of unauthorised access or data tampering.
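A minimal sketch of one way to spot this class of exposure, assuming boto3 credentials are configured and using hypothetical bucket names: query each training-data bucket's public access block settings and flag any bucket that has none configured.

```python
# Minimal sketch: check whether S3 buckets holding Bedrock fine-tuning data
# enforce "block public access". Bucket names below are hypothetical.
import boto3
from botocore.exceptions import ClientError

TRAINING_BUCKETS = ["my-bedrock-training-data"]  # hypothetical bucket names

s3 = boto3.client("s3")

for bucket in TRAINING_BUCKETS:
    try:
        config = s3.get_public_access_block(Bucket=bucket)["PublicAccessBlockConfiguration"]
        fully_blocked = all(config.values())
        print(f"{bucket}: public access fully blocked = {fully_blocked}")
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            print(f"{bucket}: no public access block set - review bucket policy and ACLs")
        else:
            raise
```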

Similarly, Tenable discovered that 91% of users running Amazon SageMaker notebook instances had at least one instance that granted root access by default. This creates a significant risk if any notebook is compromised, as it could allow attackers to make changes to all files on the system.
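The sketch below, again an illustration rather than the report's methodology, lists SageMaker notebook instances in the current region with boto3 and reports their RootAccess setting; instances created without explicitly disabling root access default to it being enabled.

```python
# Minimal sketch: enumerate SageMaker notebook instances and report which
# still grant root access (the default unless RootAccess was disabled).
import boto3

sagemaker = boto3.client("sagemaker")

paginator = sagemaker.get_paginator("list_notebook_instances")
for page in paginator.paginate():
    for instance in page["NotebookInstances"]:
        name = instance["NotebookInstanceName"]
        detail = sagemaker.describe_notebook_instance(NotebookInstanceName=name)
        print(f"{name}: RootAccess={detail.get('RootAccess', 'Enabled')}")
```

Root access can be turned off by setting RootAccess to Disabled when creating or updating a notebook instance, which limits what an attacker can change if a single notebook is compromised.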

Call for improved AI cloud security

Liat Hayun, Vice President of Research and Product Management for Cloud Security at Tenable, stressed the importance of addressing these risks.

“When we talk about AI usage in the cloud, more than sensitive data is on the line. If a threat actor manipulates the data or AI model, there can be catastrophic long-term consequences, such as compromised data integrity, compromised security of critical systems and degradation of customer trust,” said Hayun.

She added that cloud security strategies need to evolve in line with the increasing use of AI. “Cloud security measures must evolve to meet the new challenges of AI and find the delicate balance between protecting against complex attacks on AI data and enabling organizations to achieve responsible AI innovation.”

The findings serve as a timely reminder for businesses to review their cloud AI security practices and ensure they are not leaving critical data or infrastructure exposed to potential threats.
