Apple is finally doing the thing people have been joking about for years: it’s trying to turn Siri from a slightly awkward voice assistant into a full-blown AI chatbot that behaves much more like ChatGPT or Google Gemini. And if the reports are even mostly right, this is less of a tune-up and more of a personality transplant for one of Apple’s oldest software characters.
Internally, the project has a code name that feels appropriately dramatic: Campos. Instead of being a thin voice layer that mostly hands off basic commands to apps, Campos is described as a new architecture and interface for Siri, built from the ground up for the chatbot era. The new Siri is set to ship as a core part of iOS 27, iPadOS 27, and macOS 27 later this year, replacing the interface you’ve known for more than a decade. You’ll still launch it the familiar way—saying “Siri” or holding the side button—but what appears on screen won’t be the old card-style answers; you’ll get a scrolling conversation thread that looks and feels much more like ChatGPT.
One of the more interesting details is that Apple reportedly toyed with launching this as a separate chatbot app and then backed away from that idea. Instead, Campos is being woven directly into the operating systems themselves, which means the assistant is supposed to feel like part of the fabric of your iPhone, iPad, and Mac—not something you visit like a website. That’s a classic Apple move: less “here’s a new AI toy” and more “your device just got a new brain.”
Where things get really fascinating—and a little controversial for longtime Apple watchers—is what’s under the hood. Campos won’t be powered purely by Apple’s own models; this is where Google walks in. Apple and Google have signed a multi‑year partnership that puts Google’s Gemini models at the core of the next generation of Siri and “Apple Foundation Models,” with Google’s cloud infrastructure doing at least part of the heavy lifting. After years of insisting on an almost purist on‑device, privacy‑first AI story, Apple has basically admitted that, to catch up quickly, it would rather stand on Gemini’s shoulders than wait for its own stack to mature.
Practically, that means this future Siri is expected to do the things you now instinctively open a chatbot for. You’ll be able to type or talk to it in long, natural language, ask follow‑up questions that stay in context, and hand it genuinely messy tasks instead of carefully phrased commands. Think: “Find those pictures from that beach trip with my friends where I’m wearing the blue t‑shirt, pick the best three, tweak the lighting, and send them to my mom with a nice message.” Or “Look at my email, calendar, and reminders and draft a polite reply to my boss explaining why I can’t make the Wednesday meeting, but suggest two other slots.” These are exactly the kinds of multi‑step, cross‑app workflows that current Siri trips over and modern LLMs tend to handle surprisingly well.
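None of the plumbing behind those requests has been confirmed, but the general pattern is well understood from today’s chatbots: the model plans one step at a time, the host executes each step as a “tool” call, and the result feeds back into the conversation until the task is done. Here’s a deliberately simplified Swift sketch of that loop; every type here, including the scripted planner standing in for the model, is hypothetical, not a real Apple or Google API.

```swift
import Foundation

// Hypothetical sketch only: none of these types are confirmed Apple or
// Google APIs. It illustrates the generic "LLM as orchestrator" loop that
// makes messy, multi-step requests tractable: a planner (the model) picks
// the next action, the host executes it, and the result is appended to the
// transcript so the next planning step can see it.
enum Action {
    case searchPhotos(query: String)
    case adjustLighting(photoIDs: [String])
    case send(to: String, message: String, attachments: [String])
    case finish(summary: String)
}

protocol Planner {
    // In a real system this would be a call out to a language model.
    func nextAction(transcript: [String]) -> Action
}

func handle(request: String, with planner: Planner) {
    var transcript = ["user: \(request)"]
    while true {
        switch planner.nextAction(transcript: transcript) {
        case .searchPhotos(let query):
            // A real assistant would query the Photos library here.
            transcript.append("tool: found [IMG_204, IMG_207, IMG_213] for '\(query)'")
        case .adjustLighting(let ids):
            transcript.append("tool: adjusted lighting on \(ids.count) photos")
        case .send(let to, _, let attachments):
            transcript.append("tool: sent \(attachments.count) photos to \(to)")
        case .finish(let summary):
            print(summary)
            return
        }
    }
}

// A scripted stand-in for the model, so the sketch runs end to end.
final class ScriptedPlanner: Planner {
    private var steps: [Action]
    init(_ steps: [Action]) { self.steps = steps }
    func nextAction(transcript: [String]) -> Action {
        steps.isEmpty ? .finish(summary: "Done.") : steps.removeFirst()
    }
}

handle(
    request: "Send mom the best beach photos with nice lighting",
    with: ScriptedPlanner([
        .searchPhotos(query: "beach trip, blue t-shirt"),
        .adjustLighting(photoIDs: ["IMG_204", "IMG_207", "IMG_213"]),
        .send(to: "Mom", message: "Thinking of you!",
              attachments: ["IMG_204", "IMG_207", "IMG_213"]),
        .finish(summary: "Photos edited and sent to Mom."),
    ])
)
```

The interesting engineering problem isn’t the loop itself; it’s giving the planner a rich, reliable set of tools, which is exactly where OS-level integration gives Apple an edge over a chatbot living in a browser tab.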
Apple is already rolling out a more capable, AI‑assisted Siri this year through its “Apple Intelligence” push, but Campos is positioned as the bigger swing that arrives after that. The early Apple Intelligence version of Siri leans on both Apple’s own models and an existing tie‑up with OpenAI’s ChatGPT for more complex questions, and focuses heavily on personalization—understanding your habits, your writing style, and your content. Campos, by contrast, is being framed as Siri’s real debut as a modern chatbot: a system that mixes an Apple‑designed “foundation layer” with Gemini to support things like rich web‑scale reasoning, summarization, maybe image generation, and coding help, all wrapped in an Apple‑style UI.
If that sounds like a big shift in Apple’s AI philosophy, it is. For years, Siri has been the cautionary tale of what happens when you ship a new interface ahead of the underlying tech. The assistant arrived early, got stuck with brittle, rule‑based plumbing, and then watched competitors leapfrog it with transformer models and generative AI. Campos is Apple essentially conceding that the old architecture can’t be patched forever, so it’s ripping out the foundations and rebuilding Siri to behave like the assistants people now expect to use.
From a user’s perspective, the bigger story isn’t just “Siri but smarter”; it’s “Siri as your AI front door.” Because it’s built into the OS instead of living in a browser tab or a standalone app, the new Siri is meant to sit at the intersection of your personal data, your system controls, and the wider internet. It’s expected to tie more deeply into Photos, Mail, Messages, Music, Calendar, Files, and even development tools like Xcode, which opens the door to features like natural‑language code generation, automated test creation, or on‑device debugging suggestions for developers. For everyday users, the pitch is simpler: instead of remembering where a setting lives or which app to open, you just explain what you want in normal language and let Siri orchestrate it.
All of this, of course, raises some classic Apple questions. Privacy is the big one. Apple’s original Apple Intelligence announcement leaned heavily on a concept called Private Cloud Compute, promising that when tasks are too heavy for your device, they’ll run on special Apple servers designed to minimize data access and retain as little data as possible. Bloomberg’s reporting suggests that for Campos specifically, Apple and Google are actively talking about running parts of the new Siri on Google’s cloud instead. That’s a very different story from “your data barely leaves your device,” and Apple will have to explain in excruciating detail how it’s protecting user data when it traverses a partner’s infrastructure.
Then there’s the platform power dynamic. The idea that Apple—the company that’s spent years positioning itself as the privacy‑obsessed, vertically integrated alternative to Google—is now relying on Google’s Gemini to power its flagship assistant is genuinely wild. It turns Google from a rival into, effectively, a core infrastructure provider for one of iOS’s most visible features. That has implications up and down the industry: for OpenAI, which currently enjoys a premium integration slot on Apple devices; for Google, whose Gemini models suddenly gain massive distribution; and for regulators already anxious about big‑tech tie‑ups that lock consumers deeper into a handful of ecosystems.
Timing‑wise, Apple is aiming to make Campos the star of its Worldwide Developers Conference in June, with a consumer launch penciled in for around September alongside iOS 27 and friends. Between now and then, you can expect Apple to continue rolling out smaller AI upgrades—like the more personalized Siri arriving in an iOS 26.4 update—while developers get early hooks into whatever API surface this new assistant exposes. If Apple gets that developer story right, Siri could quickly become the layer apps plug into to offer AI features instead of each app bolting its own chatbot on the side.
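Apple already ships a public mechanism that points in this direction: the App Intents framework, which lets apps describe typed actions that Siri and Shortcuts can invoke today. Whether Campos builds on it is unconfirmed, but an app plugging into a chatbot-grade Siri could look something like the hypothetical intent below; the intent’s name, parameters, and behavior are invented for illustration.

```swift
import AppIntents

// Hypothetical intent built on Apple's existing App Intents framework.
// The intent itself (name, parameters, behavior) is invented; nothing
// here is a confirmed Campos API.
struct DraftMeetingReplyIntent: AppIntent {
    static var title: LocalizedStringResource = "Draft Meeting Reply"

    @Parameter(title: "Meeting")
    var meetingName: String

    @Parameter(title: "Alternative Slots")
    var alternativeSlots: Int

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // A real implementation would read calendar availability and
        // compose a draft in Mail; this stub just reports what it would do.
        let summary = "Drafted a reply declining \(meetingName) and proposing \(alternativeSlots) alternative times."
        return .result(dialog: "\(summary)")
    }
}
```

Expose enough actions like that across an app, and the assistant gains concrete, typed operations it can chain together instead of guessing its way around an app’s UI.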
The stakes are high not just for Apple, but for how AI assistants work across the industry. If Campos delivers, the future of Siri won’t just be “set a timer” and “what’s the weather”; it will be an always‑there, context‑aware operator that understands you, your device, and the wider web well enough to actually get things done on your behalf. And if it doesn’t—if latency, privacy questions, or reliability issues get in the way—then Siri risks becoming something arguably worse than a punchline: a default app that most people quietly ignore while they go back to the AI they trust in a browser tab.
Either way, the message from Cupertino is clear: the age of the old, rules‑based Siri is over. The next version wants to talk with you, learn from you, and run your digital life with the same confidence people now expect from ChatGPT‑class bots. After years of watching the AI race from behind, Apple is finally putting its assistant on the starting line.
