Monday, 16 June 2025
29.3 C
Singapore
28.2 C
Thailand
20.1 C
Indonesia
28.7 C
Philippines

AI-controlled robots can be hacked, posing serious risks

A Penn Engineering study found AI-powered robots vulnerable to hacking, raising concerns over safety risks and real-world dangers.

Researchers at Penn Engineering have discovered alarming security vulnerabilities in AI-powered robotic systems, raising concerns about the safety of these advanced technologies. They found that certain AI-controlled robots can be hacked, allowing hackers to take complete control and potentially cause serious harm.

“Our work demonstrates that large language models are not yet safe enough when integrated into the physical world,” said George Pappas, the UPS Foundation Professor of Transportation in Electrical and Systems Engineering at Penn. His comments highlight the significant risks these systems pose in their current state.

The Penn Engineering research team conducted tests using a tool they developed called RoboPAIR. The tool could “jailbreak” three well-known robotic platforms: the four-legged Unitree Go2, the four-wheeled Clearpath Robotics Jackal, and the Dolphins LLM simulator for autonomous vehicles. Incredibly, the tool was successful in every single attempt, bypassing the safety systems of these platforms in just a few days.

Once the safety guardrails were disabled, the researchers gained complete control over the robots. They could direct the machines to perform dangerous actions, such as sending them through road crossings without stopping. This demonstration revealed that jailbroken robots could pose real-world dangers if misused.

The researchers’ findings mark the first time that jailbroken large language models (LLMs) risks have been linked to physical damage, showing that the dangers go well beyond simple text generation errors.

Strengthening systems against future attacks

Penn Engineering is working closely with the developers of these robotic platforms to improve their security and prevent further vulnerabilities. However, the researchers have issued a strong warning that these problems are not limited to just these specific robots but are part of a wider issue that needs immediate attention.

“The results make it clear that adopting a safety-first mindset is essential for the responsible development of AI-enabled robots,” said Vijay Kumar, a co-author of the research paper and professor at the University of Pennsylvania. “We must address these inherent vulnerabilities before deploying robots into the real world.”

In addition to strengthening the systems, the researchers also stress the importance of “AI red teaming.” This practice involves testing AI systems for possible risks and weaknesses to ensure they are robust enough for safe use. According to Alexander Robey, the study’s lead author, identifying and understanding these weaknesses is a crucial step. Once the flaws are found, the robots can be trained to avoid such vulnerabilities, making them safer for real-world applications.

As AI continues to evolve and more robots are integrated into daily life, it becomes increasingly important to ensure their safety. If not properly secured, these technologies could seriously threaten public safety. Penn Engineering’s work is a crucial step towards ensuring that AI-controlled robots are safe and trustworthy in the future.

Hot this week

ASUS showcases next-gen NVIDIA GB300 NVL72 system and deepens Nebius partnership at GTC Paris 2025

ASUS debuts NVIDIA GB300 NVL72 systems and expands partnership with Nebius to accelerate scalable AI infrastructure at GTC Paris 2025.

OpenAI delays the release of new open model until later this summer

OpenAI delayed its new open AI model, now expected later this summer, aiming to rival Mistral and Qwen.

Apple’s visionOS 26 brings spatial widgets, lifelike avatars, and shared experiences

Apple’s visionOS 26 update brings spatial widgets, improved avatars, and shared headset experiences for a more immersive digital world.

Semperis and Akamai address critical Active Directory flaw in Windows Server 2025

Semperis and Akamai introduce new detection tools to counter a critical Windows Server 2025 vulnerability affecting Active Directory security.

Proofpoint opens new Singapore office to expand APAC operations and AI capabilities

Proofpoint opens new Singapore office to expand APAC presence and boost AI-led, human-centric cybersecurity efforts across the region.

Informatica deepens partnership with Databricks to support new Iceberg and OLTP services

Informatica joins Databricks as launch partner for new Iceberg and OLTP solutions, introducing AI tools to speed up GenAI development.

Hong Kong opens skies to larger drones in bid to grow low-altitude economy

Hong Kong will allow the testing of larger drones to boost its low-altitude economy and improve logistics, following mainland China's lead.

Hong Kong to build new AI supercomputing centre in bid to lead global tech race

Hong Kong plans a new AI supercomputing centre to boost its tech hub status and support growing start-ups across the Greater Bay Area.

Steam adds full native support for Apple Silicon Macs

Steam runs natively on Apple Silicon Macs, ditching Rosetta 2 for smoother performance and better gaming on M1 and M2 devices.

Related Articles

Popular Categories