OpenAI, the company behind ChatGPT, has begun using Google’s artificial intelligence (AI) chips to support its growing product line, according to a source familiar with the matter who spoke to Reuters on June 28. This signals a notable shift in the AI landscape, especially as OpenAI is one of the world’s biggest users of Nvidia’s graphics processing units (GPUs), which are typically used to train large AI models and power real-time decision-making.
A surprising move between AI rivals
You may find it surprising that OpenAI, often seen as a key competitor to Google in the AI space, now rents AI chips from Google Cloud. Earlier this month, Reuters reported exclusively that OpenAI planned to tap into Google Cloud’s infrastructure to meet its growing computing demands. The step marks a rare collaboration between two of the biggest players in the artificial intelligence sector.
The chips in question are Google’s custom-built tensor processing units (TPUs), which were once used only internally. By making them available to outside customers, Google has attracted major clients, including Apple and emerging AI firms like Anthropic and Safe Superintelligence, founded by former OpenAI leaders.
For OpenAI, this marks the first significant use of chips other than Nvidia’s. It also shows the company diversifying its computing resources rather than depending solely on Microsoft’s data centres, despite Microsoft being a major investor. By renting TPUs through Google Cloud, OpenAI aims to cut costs, particularly for inference, the stage at which a trained AI model applies what it has learned to answer new queries.
Not all TPUs are on offer
However, not everything is on the table. A report from The Information claims that Google is not offering its most powerful TPUs to OpenAI. A Google Cloud staff member revealed that the most advanced versions are being withheld, likely to preserve a competitive edge. Asked for comment, Google declined to respond, and OpenAI also stayed silent.
Despite this limitation, OpenAI’s decision to work with Google is a significant development. It highlights a shift in the dynamics of the AI sector, where rivals can become partners when the benefits are strong enough. For OpenAI, it’s about securing better value and capacity to support ChatGPT and other AI tools. For Google, it’s about expanding its cloud business by leveraging its advanced AI hardware.
What this means for the AI industry
If you’re following the rapid growth of AI, this partnership could have a lasting impact. Using Google’s TPUs, OpenAI may lower its running costs and reduce its dependency on Nvidia, whose GPUs have become expensive and scarce amid high demand. At the same time, Google has gained a high-profile customer and has proven that its chips are viable for large-scale external use.
This also reflects a broader trend of companies seeking alternatives to the traditional giants in AI hardware. If TPUs prove cost-effective and powerful enough, more firms may follow OpenAI’s lead, potentially giving Google a major foothold in the competitive AI chip market.