Microsoft quietly rolled out a small but important change to Visual Studio Code this month: an “auto model selection” option for GitHub Copilot that will automatically pick the AI model it thinks will do the job best — and in practice, that selector often prefers Anthropic’s Claude Sonnet 4 over OpenAI’s newest GPT-5 variants. The move is technical on its face, but strategic in consequence: it’s another sign that Microsoft is diversifying its AI bets even as it extends its own model-building ambitions.
What Microsoft changed (and what the UI actually does)
In mid-September, Microsoft added an Auto option to the model picker inside Copilot for Visual Studio Code. When you choose Auto, Copilot will pick a single model for the duration of the chat session with the aim of delivering “optimal performance” and avoiding rate limits. According to Microsoft’s documentation and blog posts, Auto can pick from Claude Sonnet 4, GPT-5, GPT-5 mini and other models — but the practical outcome for many paid Copilot users has been that Claude Sonnet 4 is the primary pick.
The change is rolling out as a public preview: Copilot now shows “Auto” as an option in the model picker, and users — free and paid alike — will see the feature, although enterprise administrators can opt models in or out for their organizations. If you prefer a particular model, you can still pick it manually; Auto simply gives Microsoft the freedom to route requests to what it judges will work best.
A tacit endorsement of Claude
What reads like product polish also reads like endorsement. According to people familiar with the company's developer guidance, Microsoft engineers were advised internally to favor Claude Sonnet 4 for Copilot-style tasks as early as June, and that guidance appears to have survived the GPT-5 rollout. An internal message from Julia Liuson, who runs Microsoft's developer division, reportedly recommended Claude Sonnet 4 based on internal benchmarks. That counsel and the Auto selector together amount to a quiet prioritization of Anthropic's model for coding workflows.
For developers, this matters because model choice isn’t merely marketing: different architectures and training emphases produce different behavior on code completion, reasoning about multi-file projects, or generating precise refactors. If Microsoft’s internal testing prefers Claude for those tasks, then routing users to Claude by default is a product decision that will shape day-to-day developer experience.
Microsoft’s multitrack strategy: buy, build, and partner
This is not a repudiation of OpenAI. Microsoft remains OpenAI’s biggest backer — it has invested billions and recently struck a new non-binding memorandum with OpenAI that keeps the door open for deeper collaboration and for OpenAI to reorganize its structure. But Microsoft’s approach looks multitrack: keep OpenAI close, buy capacity from other leading model producers (Anthropic), and invest heavily in building in-house capability.
On the “build” side, Microsoft AI chief Mustafa Suleyman has been explicit: the company is expanding its training cluster and capacity. In a recent employee town hall, he described MAI-1-preview as having been trained on roughly 15,000 NVIDIA H100 GPUs — “a tiny cluster” compared with the largest efforts — and said Microsoft plans to scale up capacity substantially so it can produce frontier models in-house. That statement frames Auto and partner-model choices as pragmatic: use what’s best today, while investing to reduce dependence tomorrow.
Anthropic in Microsoft 365 too
The shift isn’t confined to VS Code. Reporting from The Information and other outlets indicates Microsoft will blend Anthropic’s models into Microsoft 365 Copilot features in apps like Excel and PowerPoint, after internal tests suggested some Anthropic models outperformed certain OpenAI models on advanced tasks (for example, spreadsheet automation and presentation generation). That means the Anthropic signal in development tools could soon echo across Microsoft’s productivity stack.
Why Microsoft might prefer Claude for coding
A few practical reasons explain the tilt. First, model performance varies by task: models trained with different objectives can be better at programmatic reasoning, following complex instructions, or maintaining conversational safety — all useful for developer assist. Second, cost and reliability matter: routing to a model that hits the sweet spot on latency, throughput, and correctness helps products scale without blowing budgets or hitting rate limits. Finally, vendor diversification reduces strategic concentration: Microsoft is a major investor in OpenAI, but it doesn’t want a single supplier to be the bottleneck for its entire product line. (That last point is especially relevant given the global scramble for GPU capacity.)
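To make the routing idea concrete, here is a minimal sketch of the kind of scoring an "Auto" selector could apply: balance benchmark quality against latency and cost, and skip anything currently rate-limited. The model names, numbers, and weights are illustrative assumptions, not Microsoft's actual logic or data.

```python
# Hypothetical sketch of an "Auto"-style model selector.
# All profiles and weights below are invented for illustration.
from dataclasses import dataclass

@dataclass
class ModelProfile:
    name: str
    quality: float       # benchmark score on coding tasks, 0-1 (assumed)
    latency_ms: float    # typical time to first token (assumed)
    cost_per_1k: float   # dollars per 1k tokens (assumed)
    rate_limited: bool   # currently throttled?

def pick_model(models: list[ModelProfile]) -> ModelProfile:
    """Pick the best available model, trading quality against latency and cost."""
    available = [m for m in models if not m.rate_limited]
    if not available:
        raise RuntimeError("no model currently available")
    # Higher quality helps a model; latency and cost count against it.
    def score(m: ModelProfile) -> float:
        return m.quality - 0.0001 * m.latency_ms - 0.5 * m.cost_per_1k
    return max(available, key=score)

catalog = [
    ModelProfile("claude-sonnet-4", quality=0.92, latency_ms=800, cost_per_1k=0.015, rate_limited=False),
    ModelProfile("gpt-5", quality=0.90, latency_ms=1200, cost_per_1k=0.020, rate_limited=False),
    ModelProfile("gpt-5-mini", quality=0.80, latency_ms=400, cost_per_1k=0.004, rate_limited=False),
]
print(pick_model(catalog).name)
```

The useful property of a router like this is that it degrades gracefully: if the top pick is rate-limited, the next-best model is chosen automatically, which matches Microsoft's stated goal of avoiding rate limits.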
What this means for developers and teams
For individual devs: not much to panic over. You can still pick a specific model in Copilot if you want to. For teams and enterprises: this matters for consistency and reproducibility. If your CI pipelines, code-analysis tools, or internal workflows are model-sensitive, you’ll want to pin models explicitly and test outputs across the set of models Copilot might pick. Administrators can opt models in or out at the org level, and that’s likely to become a best practice for companies that need deterministic behavior from their coding assistants.
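For teams that do pin models, a lightweight way to catch drift is to snapshot each model's output for a fixed prompt and compare fingerprints in CI. The sketch below assumes you already capture model responses through your own tooling; the function names and the sample outputs are hypothetical.

```python
# Hypothetical sketch of a model-output regression check for CI.
# Assumes you capture each model's response to a fixed prompt elsewhere.
import hashlib

def fingerprint(output: str) -> str:
    """Stable fingerprint of a model response, ignoring trailing whitespace."""
    normalized = "\n".join(line.rstrip() for line in output.strip().splitlines())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()[:12]

def check_against_golden(outputs: dict[str, str], golden: dict[str, str]) -> list[str]:
    """Return the models whose output drifted from the pinned golden snapshot."""
    return [
        model for model, text in outputs.items()
        if golden.get(model) != fingerprint(text)
    ]

# Example: pin today's outputs as golden, then detect a later drift.
outputs = {
    "claude-sonnet-4": "def add(a, b):\n    return a + b\n",
    "gpt-5": "def add(a, b):\n    return a + b\n",
}
golden = {model: fingerprint(text) for model, text in outputs.items()}

outputs["gpt-5"] = "def add(x, y):\n    return x + y\n"
print(check_against_golden(outputs, golden))  # lists the drifted models
```

Running a check like this across every model Copilot might route to turns "Auto picked something different today" from a silent behavior change into a visible CI failure.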
The bigger picture: market dynamics, IPOs and infrastructure
The timing is interesting. Microsoft’s move to route Copilot toward Anthropic comes as OpenAI and Microsoft announced a new agreement that may smooth OpenAI’s path to a future IPO. Meanwhile, OpenAI is reportedly planning huge investments in backup server capacity — showing that the infrastructure race is only intensifying. Microsoft’s strategy — a mix of partnership, purchase, and internal R&D — is a hedge against that volatility: if GPU capacity or licensing shifts, Microsoft can still deliver features by leaning on multiple suppliers and its own growing cluster.
Bottom line
Auto model selection in VS Code is small in UI terms, but it signals where Microsoft's confidence lies: the company wants to use the best tool for the job, whether that tool comes from Anthropic, OpenAI, or its own labs. For developers, the practical takeaway is simple: expect Copilot to nudge you toward Claude by default for many coding tasks, but retain control by pinning models when determinism matters. For the industry, the moment underscores how pluralized AI supply chains have become, and how much the future will hinge on who can secure chips, scale clusters, and turn model results into reliable product behavior.