Perplexity just made its Computer product feel a whole lot more alive. On March 4, 2026, the San Francisco-based AI company quietly but confidently dropped a new update to Perplexity Computer — one that lets users simply talk to it and get things done. The message from the company was as straightforward as the feature itself: “You can now just talk and do things.”
To really appreciate what this voice mode means, you have to understand what Perplexity Computer actually is — because it’s a lot more than just an AI chatbot with a new hat on.
Perplexity Computer, which was unveiled in late February 2026, is the company’s most ambitious product to date. It’s billed not as a chatbot, not even as a search engine, but as a “general-purpose digital worker” — a system that takes a goal you describe in plain language, breaks it into tasks and subtasks, assigns those to multiple sub-agents, and then executes the whole workflow largely without you having to babysit it. Think of it as having a remarkably capable employee who doesn’t need you to hold their hand, can run for hours or even months on a task, and has access to the internet, a real file system, a browser, and a wide range of tool integrations all at once.
What makes Computer particularly interesting — and a bit dizzying, honestly — is that it doesn’t rely on just one AI model. It orchestrates more than 19 specialized models under the hood, pulling in whichever it considers the best fit for each specific subtask. The result is a kind of multi-model assembly line that’s quietly chugging along in the cloud while you focus on something else entirely. You can even run multiple Perplexity Computers simultaneously.
Now, into all of that, voice mode has arrived. And this isn’t just a microphone button that converts your speech to text before you hit enter — it’s a proper conversational interface layered right into the Computer product. The voice mode on Perplexity Computer comes with an “Extended Speaking” option, which means the system will hold off on responding until you’re completely done talking, instead of jumping in mid-sentence. That’s a thoughtful design choice, especially for people giving longer, more detailed instructions to an agent system.
It’s also worth noting that voice mode isn’t brand new to the broader Perplexity ecosystem. Back in late February, the company had already upgraded voice capabilities in its Comet browser with a changelog entry noting that voice mode now runs on OpenAI’s GPT Realtime 1.5 model, with reliability improvements of over 25% and dramatically better voice expressiveness. Bringing that same voice-first philosophy over to Computer feels like a natural and inevitable next step — it just makes the interaction feel far more fluid when you’re delegating complex, multi-step work to an AI system.
The broader context here matters a lot. The AI industry has been racing toward what’s increasingly called “agentic AI” — systems that don’t just answer questions but actually carry out tasks autonomously. Perplexity Computer is the company’s entry into this space alongside competitors like Claude-powered agent tools from Anthropic and various offerings from OpenAI. What Perplexity is betting on is that its multi-model orchestration approach, combined with a clean and accessible interface, can appeal to a wider audience than the technically savvy crowd that was already using things like MCP (Model Context Protocol) to wire multiple AI models to their own devices.
Voice mode accelerates that accessibility story in a meaningful way. Typing out a detailed task for an AI agent requires a certain kind of deliberate effort — you have to be precise, structured, and patient. Talking, on the other hand, is how most people naturally think through problems. Describing what you want out loud, in the way you’d explain it to a colleague, and then having a capable system actually go do it — that’s an interaction model that could genuinely lower the barrier to entry for people who’ve been curious about AI agents but put off by the complexity.
Access to Perplexity Computer, including the new voice mode, currently sits behind the company’s Perplexity Max subscription tier, which is priced at $200 per month. That’s a meaningful premium — designed for professionals and power users who depend on AI as a core part of their daily work, whether that’s researchers, developers, business strategists, or content teams running high-volume workflows. The Max tier also includes everything in the $20-per-month Pro plan, plus unlimited access to Perplexity Labs, early access to new products like the Comet browser, priority support, and access to the most advanced frontier models, including OpenAI’s o3-pro and Anthropic’s Claude Opus 4.1.
The reception to the voice mode announcement has been broadly enthusiastic, though not without the usual healthy skepticism that follows any AI product update. On LinkedIn, several users praised the direction, with one noting that “voice is underrated in AI right now” and that “once assistants can reliably execute tasks from natural conversation, the interface becomes the conversation itself.” Others were more cautious, wondering about the reliability of voice recognition and whether the feature would come to the more affordable Pro tier at some point.
That question — when does this trickle down to Pro? — is one that many users will be asking. Right now, voice mode in Computer joins a growing list of features that are gated behind the premium Max tier, which, at $200 per month, is a price point that makes sense for enterprise-adjacent use but is a harder sell for individual users or smaller teams. Still, Perplexity’s product cadence has been aggressive, and if voice mode proves popular, a broader rollout seems likely.
What Perplexity is doing with Computer, and now voice mode, is pushing on a question the whole AI industry is still working through: what does the ideal human-to-AI working relationship actually look like? Typing instructions into a text box was good enough when AI was a search tool or a writing assistant. But as these systems take on genuinely complex, multi-step work — building applications, running research, processing data, managing workflows — the interface needs to evolve too. Voice is one answer to that. It’s the most natural one humans have, and Perplexity is now betting it’s the right one for its most powerful product yet.