When Mark Gurman says “Apple runs on Anthropic,” he is not talking about the Siri you and I use on our iPhones; he is talking about the invisible AI backbone that Apple’s own employees rely on every day to build the products that ship to hundreds of millions of people.
In a recent appearance on TBPN, Gurman described a reality inside Apple that looks very different from the company’s tightly scripted keynotes. According to him, Anthropic’s Claude models have quietly become the workhorse behind a lot of Apple’s internal product development and tooling. Apple, he says, is running custom versions of Claude on its own servers, tuned for its workflows and fenced in by its famously strict privacy rules. In other words, while the public story is “Apple Intelligence plus Google Gemini,” the story inside Apple Park is closer to “Gemini on the front, Anthropic at the back.”
This is what makes the quote “Apple runs on Anthropic at this point” so striking. For a company that has spent decades selling the idea that it controls the full stack—from chips to software to services—leaning this hard on an external AI vendor is a big philosophical shift. Gurman’s description paints Anthropic not as a side experiment, but as a deeply embedded part of Apple’s daily workflow: engineers asking Claude to refactor code, designers using it to iterate copy, security teams pushing it through internal tools to surface issues faster. These are the kinds of tasks that never show up in a keynote slide, yet they influence how fast Apple can ship features and how polished those features feel when they land on your device.
At the same time, Apple has now committed to a very public AI partnership with Google. Starting with an upcoming Siri overhaul, Apple will rely on a custom Google Gemini model—reportedly in the 1.2‑trillion‑parameter range—to handle some of the heavy lifting behind its new voice assistant and cloud‑scale AI experiences. Reports put that deal at around $1 billion per year, a huge number for most companies, but a relatively modest line item for Apple—especially compared to what Anthropic apparently asked for.
Gurman says Apple originally planned to rebuild Siri around Claude, effectively making Anthropic the primary AI brain behind the assistant. Negotiations reportedly fell apart when Anthropic pushed for “several billion dollars a year,” with pricing that would double annually over the next three years. For Apple, which already pays Google many billions just to remain the default search engine in Safari, those numbers weren’t just high—they were structurally risky. Locking Siri’s future to a partner whose fees escalate aggressively every year is the kind of dependency Apple usually designs its way out of, not into.
The result is a slightly awkward but very Apple compromise. Outwardly, the company can tell a clean story: Apple Intelligence on device for privacy, Private Cloud Compute for anything that needs the cloud, and Google Gemini as the big, powerful model that slots into this architecture under Apple’s rules. Google, according to multiple reports, agreed to run Gemini under Apple’s privacy constraints—processing requests on Apple‑controlled infrastructure so that user data doesn’t wander off into Google’s ad ecosystem. In the background, Apple keeps using Anthropic where it matters most to Apple itself: making its teams more effective and accelerating development.
Seen from the outside, that dual‑track approach looks almost contradictory. This is the same Apple that spent years insisting it didn’t need the same kind of cloud‑heavy AI stack as its rivals, because on‑device intelligence and tight integration would be enough. Yet the combination of Anthropic for internal work, Google for consumer‑facing features, and a still‑maturing in‑house AI effort suggests the company has accepted a short‑term reality: right now, no single model—and certainly not Apple’s own—ticks every box. Apple can either wait years to catch up, or it can buy time by renting the best brains on the market and hiding the seams behind its own UX.
The privacy angle is the other tension point here. Publicly, Apple has built its brand on not needing to harvest user data the way its competitors do. Privately, it now depends on large, external models that typically train on enormous data sets and run in hyperscale data centers. The way Apple tries to square that circle is by keeping the most sensitive parts of the pipeline under its control: custom Claude models running on Apple servers so proprietary code never leaves the company, Gemini wired into Apple’s Private Cloud Compute environment, and a continued emphasis on doing as much as possible on the device. It is not pure vertical integration anymore, but it is still Apple‑style risk management.
There is also a competitive subtext here that goes beyond cost. Anthropic has built its reputation on “constitutional AI” and safety‑first tuning, which plays nicely with Apple’s cautious brand. For internal tools—where hallucinations can become bugs and subtle mistakes can turn into security issues—those guardrails are a feature, not a limitation. Google, on the other hand, is offering raw scale and performance, with a massive Gemini model that Apple can drop into Siri to instantly close the gap with OpenAI‑powered rivals. Apple choosing both is essentially Apple admitting that safety and scale live in different places right now, and it is willing to juggle partners to get both.
For Anthropic, Gurman’s quote is a kind of backhanded compliment. On the one hand, they lost the marquee position of being “the AI behind Siri,” which is the kind of branding money can’t buy. On the other, one of the world’s most valuable companies has embedded customized Claude models so deeply into its internal stack that a senior Apple reporter casually says the whole company “runs” on them. That is a powerful story to tell every other enterprise customer that wants strong models but doesn’t want to be locked into the biggest ad company on Earth.
For users, the irony is that almost none of this is visible. When Apple ships its new Siri and Apple Intelligence features, the branding will be Apple‑first as always, with maybe a quiet “powered by Gemini” footnote and essentially zero public acknowledgment of Anthropic. Yet the cadence of software updates, the quality of system apps, and the speed at which bugs get fixed may be increasingly shaped by Anthropic models sitting behind Apple’s VPN. If Gurman is right, every time an Apple engineer asks an internal tool to review a pull request or generate tests, Anthropic is effectively working a quiet shift inside Apple Park.
In the end, “Apple runs on Anthropic” is less a literal statement than a neat shorthand for where Apple is in the AI race: a company with unrivaled hardware and UX, a still‑maturing homegrown AI stack, and a willingness—at least for now—to lean heavily on outsiders to fill the gap. For an industry that spent years treating Apple as the gold standard of build‑it‑yourself vertical integration, that shift might be the most interesting part of this story.