Tenable research finds serious security gaps in AI cloud services

Tenable finds 70% of AI cloud workloads have unpatched vulnerabilities, warning of data tampering and poor security in popular cloud services.

A new report by Tenable has revealed that artificial intelligence (AI) services used in cloud environments are highly vulnerable to cyber threats, with the majority of workloads exposed to unresolved security issues. The Cloud AI Risk Report 2025 outlines how the growing use of AI tools on platforms like Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure is leading to a sharp increase in security risks for businesses.

The analysis, released on 20 March by the exposure management firm, shows that around 70% of cloud-based AI workloads contain at least one known but unpatched vulnerability. These weaknesses could allow attackers to manipulate data, tamper with AI models, or cause data leaks. The report aims to raise awareness about the potential consequences of combining AI and cloud technologies without proper safeguards.

Unpatched vulnerabilities and misconfigurations widespread

One of the most striking findings from the report is the widespread presence of critical vulnerabilities. In particular, Tenable identified CVE-2023-38545, a heap-based buffer overflow in the SOCKS5 proxy handling of the widely used curl data transfer tool, in 30% of the AI workloads it analysed. A flaw of this kind could be used by attackers to gain unauthorised access to data or systems.
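
The affected curl versions are publicly documented: 7.69.0 up to and including 8.3.0, with the fix shipping in 8.4.0. As a rough illustration of how a team might triage exposure on a workload image, here is a minimal Python sketch that inspects only the locally installed curl binary; it is not a substitute for a proper vulnerability scanner.

```python
# Rough triage for CVE-2023-38545: the SOCKS5 heap buffer overflow
# affects curl 7.69.0 through 8.3.0 and was fixed in 8.4.0.
# This only checks the curl binary on the local PATH.
import re
import subprocess

AFFECTED_MIN = (7, 69, 0)
FIXED_IN = (8, 4, 0)

def installed_curl_version():
    """Parse the version out of `curl --version`, e.g. 'curl 8.1.2 (...)'."""
    out = subprocess.run(["curl", "--version"], capture_output=True, text=True)
    match = re.match(r"curl (\d+)\.(\d+)\.(\d+)", out.stdout)
    return tuple(int(part) for part in match.groups()) if match else None

version = installed_curl_version()
if version is None:
    print("curl not found or version not parseable")
elif AFFECTED_MIN <= version < FIXED_IN:
    print("curl %d.%d.%d is in the affected range; upgrade to 8.4.0 or later" % version)
else:
    print("curl %d.%d.%d is outside the affected range" % version)
```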

In addition, misconfigurations in cloud services were found to be alarmingly common. For example, 77% of organisations using Google Vertex AI Notebooks were running notebooks with the overprivileged default Compute Engine service account still attached. Services built on this account are more exposed to exploitation, as it typically carries far more permissions than the notebook actually needs.
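
The default Compute Engine service account is easy to spot because it follows a fixed naming pattern (PROJECT_NUMBER-compute@developer.gserviceaccount.com) and has historically been granted the broad Editor role on the project. Below is a minimal, illustrative Python sketch, assuming you have already exported an inventory of notebook instances and their attached service accounts (the `notebook_inventory` list is hypothetical).

```python
# Flag notebook instances attached to the default Compute Engine service
# account, which typically carries far broader permissions than a
# notebook needs. The inventory format below is hypothetical; in practice
# it would come from your asset inventory or the Notebooks API.
import re

DEFAULT_COMPUTE_SA = re.compile(r"^\d+-compute@developer\.gserviceaccount\.com$")

notebook_inventory = [
    {"name": "training-notebook-1",
     "service_account": "123456789012-compute@developer.gserviceaccount.com"},
    {"name": "training-notebook-2",
     "service_account": "vertex-notebooks@my-project.iam.gserviceaccount.com"},
]

for nb in notebook_inventory:
    if DEFAULT_COMPUTE_SA.match(nb["service_account"]):
        print(f"{nb['name']}: uses the default Compute Engine service account; "
              "attach a dedicated, least-privilege service account instead")
```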

Tenable refers to this issue as a “Jenga-style” misconfiguration—where services are built on top of others, inheriting risky default settings from the lower layers. If one component is misconfigured, it can lead to cascading vulnerabilities throughout the system.

AI data and models at risk of tampering

The report also found evidence of poor controls around AI training data, a key part of machine learning systems. Data poisoning, where training data is deliberately tampered with to manipulate the behaviour of AI models, remains a serious concern.

According to the report, 14% of organisations using Amazon Bedrock had not explicitly blocked public access to at least one AI training data storage bucket, and 5% of organisations had at least one bucket with overly permissive access, increasing the likelihood of unauthorised access or data tampering.
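
Whether a bucket explicitly blocks public access can be checked against S3's Block Public Access settings. Here is a minimal sketch using boto3, assuming credentials that can read the bucket configuration; the bucket name is a placeholder, and this looks only at the Block Public Access settings, not the full bucket policy or ACLs.

```python
# Check whether S3 buckets holding AI training data explicitly block
# public access. Bucket names below are placeholders.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
training_buckets = ["example-bedrock-training-data"]  # hypothetical names

for bucket in training_buckets:
    try:
        config = s3.get_public_access_block(Bucket=bucket)["PublicAccessBlockConfiguration"]
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            print(f"{bucket}: no Block Public Access configuration set")
            continue
        raise
    if all(config.values()):
        print(f"{bucket}: all Block Public Access settings enabled")
    else:
        print(f"{bucket}: public access not fully blocked: {config}")
```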

Similarly, Tenable discovered that 91% of users running Amazon SageMaker notebook instances had at least one instance that granted root access by default. This creates a significant risk if any notebook is compromised, as it could allow attackers to make changes to all files on the system.
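
Root access on SageMaker notebook instances is a per-instance setting that defaults to enabled and can be audited through the SageMaker API. A minimal boto3 sketch follows, assuming credentials allowed to list and describe notebook instances.

```python
# List SageMaker notebook instances that still have root access enabled.
# Root access is enabled by default, so instances appear here unless it
# was explicitly disabled at creation or update time.
import boto3

sagemaker = boto3.client("sagemaker")

paginator = sagemaker.get_paginator("list_notebook_instances")
for page in paginator.paginate():
    for instance in page["NotebookInstances"]:
        name = instance["NotebookInstanceName"]
        detail = sagemaker.describe_notebook_instance(NotebookInstanceName=name)
        if detail.get("RootAccess") == "Enabled":
            print(f"{name}: root access enabled; disable it unless users genuinely need it")
```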

Call for improved AI cloud security

Liat Hayun, Vice President of Research and Product Management for Cloud Security at Tenable, stressed the importance of addressing these risks.

“When we talk about AI usage in the cloud, more than sensitive data is on the line. If a threat actor manipulates the data or AI model, there can be catastrophic long-term consequences, such as compromised data integrity, compromised security of critical systems and degradation of customer trust,” said Hayun.

She added that cloud security strategies need to evolve in line with the increasing use of AI. “Cloud security measures must evolve to meet the new challenges of AI and find the delicate balance between protecting against complex attacks on AI data and enabling organizations to achieve responsible AI innovation.”

The findings serve as a timely reminder for businesses to review their cloud AI security practices and ensure they are not leaving critical data or infrastructure exposed to potential threats.
