Google has now made the Apple-Gemini relationship harder to dismiss as rumor. During the Cloud Next 2026 keynote, Google Cloud CEO Thomas Kurian said the company is working with Apple as its “preferred cloud provider” to develop the next generation of Apple Foundation Models based on Gemini technology, and that those models will help power future Apple Intelligence features, including “a more personalized Siri” coming later this year.
That matters because, until now, most of the conversation around Apple’s delayed Siri overhaul had lived in leaks, analyst notes, and Apple’s own carefully worded promises. Apple had already told CNBC in February that the smarter version of Siri was still on track for a 2026 launch, even as reports suggested the company had run into internal setbacks, so Google’s latest remarks work less like a surprise announcement and more like a public confirmation that the broader plan is still alive.
The bigger story here is not simply that Siri is getting an upgrade, but that Apple appears willing to lean on Google’s AI stack to get there. Kurian’s wording suggests Gemini is not just being plugged in as a chatbot side feature, but is involved at the model level in helping shape the next generation of Apple Foundation Models, which in turn are expected to power new Apple Intelligence experiences.
If you haven’t been keeping tabs on Apple’s AI plans, here’s the quick version: a revamped Siri has been in the works for some time now. Back at WWDC 2024, Apple unveiled an ambitious vision for the assistant — one that would feel far more fluid and intuitive, capable of remembering what was said earlier in a conversation, accepting typed input alongside spoken commands, and developing an awareness of what’s actually on your screen so it could take meaningful action on your behalf.
Apple later added that Siri would be able to draw on a user’s personal context and take hundreds of new actions in and across Apple and third-party apps, which is a much bigger leap than the routine voice-assistant tasks people associate with Siri today. In practical terms, that is the difference between asking Siri to set a timer and asking it to understand what is on your screen, pull in relevant context from your apps, and complete a multi-step action without forcing you to tap around manually.
That is also why this rollout has been watched so closely. Apple previewed a far more capable Siri experience early, but the company has spent the past year trying to close the gap between a polished demo and a product that is reliable enough to ship at scale, and the repeated timeline adjustments have fueled doubts about whether Apple could deliver its AI ambitions on its own schedule.
Google’s confirmation changes the tone of that conversation a bit. Instead of the usual Apple line about features arriving “later,” there is now a direct statement from a major partner saying Gemini technology will sit under future Apple Intelligence features and that a more personalized Siri is expected this year.
There is still an important unanswered question, though: where exactly this AI work will run. MacRumors noted that while Google used the phrase “preferred cloud provider,” it remains unclear whether the new Siri experience and Gemini-backed Apple Intelligence features will rely on Apple’s Private Cloud Compute system, Google’s own servers, or some mix of the two.
That technical detail is not minor. Apple has spent a lot of time framing Apple Intelligence around privacy, on-device processing, and tightly controlled cloud infrastructure, so any deeper reliance on Google will inevitably raise questions about how much of Siri’s new intelligence comes from Apple’s own stack and how much comes from an outside partner.
At the same time, this partnership is a very Apple move in another sense: the company does not mind borrowing outside technology when it helps preserve the overall user experience. Users are still expected to interact with Siri as Siri, not with a separate Gemini app takeover inside the iPhone interface, which means Apple gets to keep the customer-facing layer while upgrading the engine underneath it.
There is also a business reality here that should not be missed. Apple’s installed base is enormous, and making Siri genuinely useful across iPhone, iPad, and Mac would create a huge spike in AI inference demand, which helps explain why cloud capacity and infrastructure partners suddenly matter so much at a company that traditionally prefers vertical control. MacRumors reported that Apple had asked Google to investigate setting up servers in Google data centers to run Siri because Apple expects much heavier cloud usage once the smarter assistant arrives.
If all of this sounds like Apple quietly admitting it needs help in AI, that is because it probably is. But that is not necessarily a weakness for consumers. In a market where people increasingly compare assistants by how well they understand context, handle follow-up questions, and work across apps, most users will care less about whose model sits behind the curtain and more about whether Siri finally stops feeling one step behind rivals.
The next big checkpoint is likely WWDC in June. Apple is set to introduce iOS 27 at the developer conference that begins on June 8, and that event now looks like the obvious place for the company to show what this more personalized Siri actually looks like, how deeply Gemini is involved, and whether Apple can finally turn years of Siri promises into something people will use every day instead of work around.