Siri is finally growing up. After years of feeling like the least capable voice in the room, Apple is now testing a next‑generation version of its assistant that can juggle multiple commands in a single breath—exactly the kind of thing users have been doing with modern AI chatbots for a while.
Think about how you probably talk to an assistant today. You say, “Hey Siri, what’s the weather?” Then you wait. Then: “Set a calendar event for 5 pm.” Then: “Text my friend that I’ll be late.” Three separate interactions, three separate rounds of “Hey Siri,” and usually at least one misunderstanding thrown in for good measure. Apple’s new approach aims to compress all of that into one natural request: “What’s the weather this evening, set a haircut appointment at 5 pm, and text my wife that I’ll be home late.”
According to multiple reports, this multi‑command upgrade is part of a much bigger Siri reboot coming with iOS 27, iPadOS 27, and macOS 27 later this year. Apple is testing the feature internally right now, with requests like “check the weather, create a calendar appointment, and send a message” handled in a single shot. The goal is simple: stop making users recite commands as if reading from a script, and let them speak the way they would to an actual assistant.
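To make that concrete, here is a rough Swift sketch of what decomposing one compound utterance into separate intents could look like. Nothing here is a real Apple API; the AssistantIntent type and the keyword matching are invented purely for illustration, standing in for whatever model‑driven parsing Apple actually uses internally.

```swift
import Foundation

// Hypothetical sketch: one compound utterance fans out into
// discrete intents, each dispatched to its own app.
enum AssistantIntent {
    case weather(when: String)
    case createEvent(title: String, time: String)
    case sendMessage(to: String, body: String)
}

// Toy decomposer: a production system would use a language model,
// not keyword matching, but the output shape is the point.
func decompose(_ utterance: String) -> [AssistantIntent] {
    var intents: [AssistantIntent] = []
    if utterance.contains("weather") {
        intents.append(.weather(when: "this evening"))
    }
    if utterance.contains("appointment") {
        intents.append(.createEvent(title: "Haircut", time: "5 pm"))
    }
    if utterance.contains("text") {
        intents.append(.sendMessage(to: "wife", body: "I'll be home late"))
    }
    return intents
}

let request = "What's the weather this evening, set a haircut appointment at 5 pm, and text my wife that I'll be home late"
for intent in decompose(request) {
    print(intent) // three intents from a single round of "Hey Siri"
}
```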
Under the hood, this all ties into Apple’s shift to a new AI foundation model that leans on Google’s Gemini technology, customized to run within Apple’s own ecosystem. That sounds odd on paper—Apple relying on Google for something as core as Siri—but it fits into the company’s broader “Apple Intelligence” strategy, where heavyweight models can live in Apple’s Private Cloud Compute while keeping user data locked down with its usual privacy framing. In practice, that means Siri should finally gain the kind of rich language understanding, context awareness, and multi‑step reasoning people now expect by default from AI assistants.
One of the big promises here is context. The revamped Siri is expected to better understand what you’re doing and what’s on your screen, then chain actions together around that. Imagine saying, “Take that photo I just opened, brighten it a bit, crop it for Instagram, and send it to Alex,” and having Siri handle the entire pipeline: open the right app, apply edits, and share it. This kind of “multi‑step, multi‑app” flow is exactly what Apple reportedly wants to enable, built on the same foundations as the multi‑command feature.
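To illustrate the shape of such a pipeline, here is a hypothetical Swift sketch. Photo, brighten, cropForInstagram, and share are all invented stand‑ins for the per‑app actions an assistant would actually invoke; the key idea is that each step consumes the previous step's output.

```swift
// Hypothetical multi-step, multi-app pipeline. The types and
// functions below are illustrative, not real Apple APIs.
struct Photo {
    var brightness: Double = 0.0
    var aspectRatio: (w: Int, h: Int) = (4, 3)
}

func brighten(_ photo: Photo, by amount: Double) -> Photo {
    var edited = photo
    edited.brightness += amount
    return edited
}

func cropForInstagram(_ photo: Photo) -> Photo {
    var edited = photo
    edited.aspectRatio = (1, 1) // square crop
    return edited
}

func share(_ photo: Photo, with contact: String) {
    print("Sharing \(photo) with \(contact)")
}

// One spoken request becomes an ordered chain of actions,
// each step feeding the next.
let original = Photo()
let result = cropForInstagram(brighten(original, by: 0.2))
share(result, with: "Alex")
```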
Apple is also planning to move Siri closer to a full chatbot experience rather than the current fire‑and‑forget voice prompt system. The new interface is said to let you scroll through past interactions, reference previous questions, and follow up the way you would in ChatGPT or other AI tools. That matters because multi‑command support is not just about cramming three tasks into one sentence; it’s about maintaining memory across a conversation so you can say, “Reschedule that to tomorrow and send the same message to my boss,” without re‑explaining everything.
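A toy Swift sketch of that session memory shows why history matters; the Turn and AssistantSession types are made up for the example, and the naive string matching stands in for real model‑driven reference resolution.

```swift
import Foundation

// Hypothetical sketch: follow-ups like "reschedule that" only work
// if the assistant keeps earlier turns around to resolve against.
struct Turn {
    let userSaid: String
    let resolvedAction: String
}

final class AssistantSession {
    private var history: [Turn] = []

    func handle(_ utterance: String) -> String {
        // Resolve "that" against the previous turn. A real system
        // would use a model, not a substring check.
        var resolved = utterance
        if utterance.lowercased().contains("that"), let last = history.last {
            resolved = utterance + " [referring to: \(last.resolvedAction)]"
        }
        history.append(Turn(userSaid: utterance, resolvedAction: resolved))
        return resolved
    }
}

let session = AssistantSession()
print(session.handle("Set a haircut appointment at 5 pm"))
print(session.handle("Reschedule that to tomorrow"))
// -> "Reschedule that to tomorrow [referring to: Set a haircut appointment at 5 pm]"
```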
Crucially, Apple seems to know Siri can’t do all of this alone. Part of the iOS 27 overhaul is a more open “Siri Extensions”‑style system, where the assistant can tap into third‑party AI tools installed on your device. Apple already lets Siri hand off some queries to ChatGPT, but future builds are expected to support rivals like Google’s Gemini and Anthropic’s Claude, all accessible from the same Siri entry point. You could ask a question and explicitly push it to another assistant without jumping between apps, which subtly turns Siri into an orchestrator rather than the only brain in the room.
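As a rough illustration of that orchestrator pattern, here is a hypothetical Swift sketch: one entry point, multiple pluggable backends. The AssistantBackend protocol and the backend structs are invented for the example and are not a real "Siri Extensions" API.

```swift
// Hypothetical sketch of Siri as an orchestrator rather than
// the only brain in the room.
protocol AssistantBackend {
    var name: String { get }
    func answer(_ query: String) -> String
}

struct ChatGPTBackend: AssistantBackend {
    let name = "ChatGPT"
    func answer(_ query: String) -> String { "ChatGPT's answer to: \(query)" }
}

struct GeminiBackend: AssistantBackend {
    let name = "Gemini"
    func answer(_ query: String) -> String { "Gemini's answer to: \(query)" }
}

struct Orchestrator {
    let backends: [AssistantBackend]

    // The user can explicitly push a query to a named backend;
    // otherwise the default (first) backend handles it.
    func route(_ query: String, to preferred: String? = nil) -> String {
        let backend = backends.first { $0.name == preferred } ?? backends[0]
        return backend.answer(query)
    }
}

let siri = Orchestrator(backends: [ChatGPTBackend(), GeminiBackend()])
print(siri.route("Summarize my day"))               // default backend
print(siri.route("Summarize my day", to: "Gemini")) // explicit handoff
```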
On the keyboard front, Apple is reportedly experimenting with a smarter system that goes beyond basic autocorrect. Instead of just fixing typos, the new keyboard would suggest alternative words or phrases, more like Grammarly, nudging you toward clearer or more polished writing as you type. This is still in testing and might not ship, but it hints at a future where Apple’s system‑wide AI touches everything from how you talk to your phone to how you write on it.
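If you squint, the mechanics could look something like this hypothetical Swift sketch, where a hard‑coded phrase table stands in for whatever on‑device model Apple would actually use to propose alternative phrasings rather than just fixing typos.

```swift
import Foundation

// Hypothetical sketch: suggestions beyond autocorrect. The
// rephrasing table is invented; a shipping system would rank
// candidates with a model, not a dictionary lookup.
let rephrasings: [String: [String]] = [
    "a lot of": ["many", "numerous"],
    "in order to": ["to"],
]

func suggest(for text: String) -> [String] {
    var suggestions: [String] = []
    for (phrase, alternatives) in rephrasings where text.contains(phrase) {
        for alt in alternatives {
            suggestions.append(text.replacingOccurrences(of: phrase, with: alt))
        }
    }
    return suggestions
}

print(suggest(for: "We need a lot of tests in order to ship"))
// -> polished alternatives for each matched phrase
```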
The uncomfortable reality for Apple is that all of this is playing catch‑up. Modern AI assistants and chatbots have handled compound requests for a while, often with surprisingly fluid multi‑step reasoning. Apple is only now testing a feature that lets Siri process several actions in one query—something that feels basic in 2026. But the company’s advantage is tight integration: if it gets this right, the payoff is an assistant that can actually use all the hooks iOS has into your apps, data, and on‑screen content in a way third‑party tools can’t fully match.
Of course, the big question is how much of this will be ready on day one. Some features are reportedly labeled as “Preview” internally, hinting that Apple might roll them out in an unfinished but usable state, the same way it treated early Apple Intelligence tools back in 2024. That could mean a world where Siri can technically handle multiple commands, but with quirks and edge cases that get ironed out over subsequent point releases. Still, even a rough first version would be a huge step forward from the Siri people are used to living with today.
If Apple delivers on the leaks, you’re looking at a Siri that can finally work the way people always assumed it should: understanding longer, more natural requests, chaining actions across apps, remembering context, and even handing off to other AI assistants when it makes sense. It’s late, sure—but if the new Siri can go from “set a timer” to being a genuinely competent, multi‑command AI hub for your iPhone, that might be enough to make people give it another shot.