Rhoda AI launches with US$450 million to push robotics beyond lab environments
Rhoda AI launches with US$450M funding and a video-driven robotics model designed to bring AI robots into real-world industrial environments.
Robotics startups have long demonstrated promising results in controlled settings, yet translating those capabilities into real industrial environments remains a persistent challenge. Rhoda AI has now emerged from stealth with a new architecture designed to address that gap, alongside US$450 million in Series A funding aimed at accelerating deployment in production settings.
The company introduced FutureVision, an intelligence layer for robotics built on a system it calls Direct Video Action. Rhoda positions the approach as a way to move robotic systems beyond rigid programming and laboratory demonstrations, enabling machines to adapt to real-world environments where layouts, materials, and workflows change constantly.
Rethinking how robots learn to act in physical environments
Traditional industrial robots excel at performing repetitive actions in predictable environments. However, these systems rely on fixed trajectories and struggle when faced with unexpected changes, new objects, or evolving workflows.
More recent AI-driven robotics research has explored vision-language-action models that allow robots to learn from data rather than explicit programming. While those systems have produced promising results in controlled experiments, real-world deployment has proved more complex, particularly when robots must operate in dynamic environments.
Rhoda’s approach focuses on how machines learn physical motion. Rather than training primarily on teleoperated robot trajectories, the company pre-trains models on hundreds of millions of internet-scale videos to learn motion patterns, physics, and physical interactions.
These models are then post-trained using smaller volumes of robot data. The goal is to teach the system how to translate video-based predictions into specific robotic actions tied to a physical machine.
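The two-stage pipeline described above can be sketched in miniature. This is a toy illustration only: `MotionModel` and its counters are hypothetical stand-ins, not Rhoda's actual architecture, and the "training" here just tallies samples to show the ordering of large-scale video pre-training followed by a much smaller robot-data post-training pass.

```python
class MotionModel:
    """Toy stand-in for a video-pretrained robotics model (hypothetical)."""

    def __init__(self):
        # Track how many samples of each kind the model has seen.
        self.samples_seen = {"video": 0, "robot": 0}

    def update(self, source, sample):
        # Placeholder for a real gradient update.
        self.samples_seen[source] += 1


def pretrain_on_video(model, video_clips):
    """Stage 1 (hypothetical): learn motion and physics priors from
    internet-scale video, with no robot in the loop."""
    for clip in video_clips:
        model.update(source="video", sample=clip)
    return model


def posttrain_on_robot_data(model, trajectories):
    """Stage 2 (hypothetical): ground video-based predictions in the
    actions of a specific physical machine, using far less data."""
    for traj in trajectories:
        model.update(source="robot", sample=traj)
    return model
```

The asymmetry in the article ("hundreds of millions" of videos versus roughly ten hours of teleoperation data) is the point of the design: most of the motion knowledge is meant to come from the cheap, abundant first stage.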
A closed-loop architecture built around predictive video
At the core of Rhoda’s system is what the company describes as a closed-loop architecture. The model observes the surrounding environment, predicts future states as video, converts those predictions into actions, and then reassesses the environment again in rapid cycles.
This process repeats every few hundred milliseconds, allowing the robot to continuously update its behaviour based on changing conditions. The Direct Video Action model bridges perception and control, enabling systems to adjust dynamically rather than following pre-generated plans.
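The observe-predict-act cycle described above can be sketched as a simple control loop. Everything here is a hypothetical stub for illustration: the function names, the returned values, and the cycle time are assumptions, not Rhoda's implementation; the only claim carried over from the article is the loop structure itself (observe, predict future states as video, convert to actions, re-observe).

```python
CYCLE_SECONDS = 0.3  # illustrative: "every few hundred milliseconds"


def observe():
    """Capture the current state of the environment (hypothetical stub)."""
    return {"frame": "camera_image"}


def predict_future_video(observation):
    """Predict likely future states as short video (hypothetical stub)."""
    return ["frame_t+1", "frame_t+2"]


def video_to_actions(predicted_video):
    """Translate predicted frames into motor commands (hypothetical stub)."""
    return ["move_arm"]


def control_loop(steps):
    """Run a few simplified closed-loop cycles: observe -> predict -> act."""
    executed = []
    for _ in range(steps):
        obs = observe()
        video = predict_future_video(obs)
        actions = video_to_actions(video)
        executed.extend(actions)
        # A real system would execute the actions on hardware here,
        # then re-observe roughly CYCLE_SECONDS later and replan.
    return executed
```

The design choice worth noting is that the plan is never trusted for long: each predicted video is discarded and regenerated a few hundred milliseconds later, which is what lets behaviour track a changing environment rather than a stale plan.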
“We believe the next era of robotics requires models that understand how the world moves — not just what it looks like or how it’s described in language,” said Jagdeep Singh, cofounder and CEO of Rhoda. “By learning from internet-scale video and operating in closed loop, our systems are designed to adapt to real-world variability in ways conventional approaches struggle to achieve. The goal is simple: robots that work in the real world, not just controlled lab settings.”
According to the company, the motion knowledge learned during video pre-training allows new robotic tasks to be learned with limited teleoperation data. In some cases, Rhoda says as little as ten hours of data is sufficient.
Early manufacturing deployments signal industrial ambitions
Rhoda says its technology has already been tested in production environments that require robots to handle continuously changing materials and layouts. These environments often represent the most difficult conditions for automation, particularly when workflows cannot be tightly controlled.
In one manufacturing evaluation, the system completed a component-processing workflow in under two minutes per cycle without human intervention. The result exceeded performance targets set by the customer involved in the test.
“In manufacturing, tasks with high variability have historically resisted automation. The real challenge isn’t solving it once, it’s delivering consistent, reliable output under real-world production conditions,” said Jens Wiese, Managing Partner at VC firm Leitmotif and former Volkswagen Group executive. “What impressed us about Rhoda’s approach is its ability to adapt to conditions that typically require human intervention. Technologies like this can dramatically expand the scope of what can be automated, playing a pivotal role in re-industrializing mature economies.”
Investors back a new wave of AI-driven robotics
The US$450 million Series A funding will support further research, engineering development, and expansion of industrial deployments and customer pilots. The company also plans to expand its team across generative AI, computer vision, and robotics disciplines.
Rhoda is backed by Capricorn Investment Group, Khosla Ventures, Leitmotif, Matter Venture Partners, Mayfield, Premji Invest, Prelude Ventures, Temasek, Xora, and Silicon Valley investor John Doerr.
“We believe the first company to deploy intelligent, manipulation-capable robots at scale in real-world environments will kick-start a powerful data flywheel, creating a compounding advantage in capturing the long tail of real-world edge cases,” said Sandesh Patnam, Managing Partner at Premji Invest. “At Premji Invest, we take a long-term view and are highly selective in where we partner. We invest only when we believe a company has the potential to build a truly large, enduring business. We believe Rhoda has assembled the technical foundation, ambition, and execution capability required to achieve that goal, and we are excited to partner with this exceptional team to help bring the next generation of intelligent robots into the world.”
The company is led by CEO and cofounder Jagdeep Singh, alongside Chief Science Officer Eric Ryan Chan and Stanford professor Gordon Wetzstein, who leads the Computational Imaging Lab.