Apple confirms Google Gemini will power next-generation Siri
Apple confirms Google Gemini will power a more personalised Siri as it works to improve Apple Intelligence and address AI delays.
Apple has confirmed that it will use Google’s Gemini artificial intelligence technology to support a more personalised version of Siri, ending months of speculation about a potential partnership between the two technology giants. The company said the updated voice assistant is expected to become available later this year and will also contribute to future Apple Intelligence features.
The confirmation marks a significant shift in Apple’s approach to artificial intelligence, as the company looks beyond its in-house models to improve Siri’s performance and capabilities. While Apple and Google acknowledged the collaboration in a joint statement, neither company explained exactly how Gemini will be integrated into Apple’s software ecosystem.
Neither company provided details on which specific features Gemini will power, or whether the technology will be used to enhance existing Apple Intelligence tools, such as Writing Tools or Image Playground. Apple also did not clarify whether Gemini would operate across all supported devices or be limited to specific hardware configurations.
Apple looks to Gemini as it addresses AI challenges
Apple’s decision to work with Google comes amid well-documented struggles to deliver advanced generative AI features at the same pace as its competitors. The company first revealed plans for a more personalised version of Siri in 2024, when it introduced Apple Intelligence as part of a broader push into on-device and cloud-based AI.
However, progress has been slower than expected. Early last year, Apple confirmed that the upgraded Siri experience would be delayed, disappointing users and analysts who were hoping for rapid improvements. The delay also highlighted internal challenges within Apple’s AI leadership.
In response, Apple made a significant management change by replacing its AI chief, John Giannandrea, with Mike Rockwell. Rockwell previously led the development of the Vision Pro headset and is considered a senior executive with experience delivering complex, high-profile products. The leadership shift was widely interpreted as a signal that Apple was refocusing its AI strategy and accelerating development.
Partnering with Google’s Gemini could help Apple close the gap with rivals that have moved quickly to embed generative AI into consumer-facing products. Gemini is already used across several Google services and is positioned as a competitor to large language models from OpenAI and others. For Apple, leveraging Gemini may allow it to offer more advanced natural language understanding and contextual awareness without building every component from scratch.
Privacy remains a central concern for Apple users
As with any move involving cloud-based AI, privacy has emerged as a major concern for users. Apple and Google addressed this issue directly in their joint statement, saying that “Apple Intelligence will continue to run on Apple devices and Private Cloud Compute, while maintaining Apple’s industry-leading privacy standards.”
While the statement offers reassurance, it leaves several questions unanswered. Apple did not explain how much data Gemini would process, or under what circumstances user requests would be handled in the cloud rather than on-device. Historically, Apple has prioritised on-device processing wherever possible, limiting the amount of personal data sent to external servers.
Industry observers expect Apple to continue this approach, using local processing for routine tasks and relying on cloud-based AI only for more complex operations. Private Cloud Compute, which Apple has previously described as a secure and tightly controlled environment, is likely to play a key role in ensuring user data remains protected.
Privacy has long been a differentiator for Apple, particularly as rivals face scrutiny over how user data is collected and used to train AI models. Any perception that Gemini integration weakens Apple’s privacy stance could draw criticism, making transparency and clear communication essential as the rollout approaches.
Timing and expected capabilities of the new Siri
Apple has not confirmed an exact launch date for the more personalised version of Siri, and Google has also declined to provide a timeline. However, reports suggest that Apple may introduce the feature as part of iOS 26.4, with a release expected in March or April.
Details about what users can expect from the upgraded Siri remain limited. Based on previous announcements and industry speculation, the assistant is likely to gain improved awareness of on-screen content, allowing it to respond more intelligently to what a user is viewing at any given moment. Deeper app controls are also expected, enabling Siri to carry out more complex tasks within and across applications.
Another anticipated improvement is a better understanding of personal context. This could include recognising user preferences, habits and recent activity to deliver more relevant responses and suggestions. Such capabilities would bring Siri closer to competing with assistants that already offer advanced contextual and conversational features.
As Apple prepares to roll out the new experience, the partnership with Google’s Gemini highlights a pragmatic approach to AI development. By combining its platform strengths with external technologies, Apple appears focused on delivering tangible improvements to users while maintaining its core principles of privacy and device integration.