Opera has quietly but decisively expanded its AI arsenal inside Neon, its experimental browser that often serves as a playground for new ideas. The latest update brings Meta’s Llama 4 Maverick and Alibaba’s Qwen3-Next models into the mix, alongside existing options like Google’s Gemini 3 Pro and OpenAI’s GPT-5.2. For anyone who has been following the AI race, this is a notable move: Opera is positioning Neon as a kind of “AI buffet,” where users can pick the model that best suits the task at hand.
The Llama 4 Maverick model is designed for heavier lifting. Meta built it to handle complex queries, longer conversations, and more nuanced writing or coding tasks. Opera’s blog even jokes about it being “hosted by Top Gun,” before clarifying that Google is actually providing the infrastructure. The point is clear: Maverick isn’t the lightweight sibling—it’s the one you call in when you need depth and endurance.
On the other side, Qwen3-Next arrives in two flavors. The “Thinking” variant is tuned for multi-step reasoning, planning, and problem-solving. It’s the kind of model you’d want if you’re working through a complicated project or asking layered questions that require context retention. The “Instruct” variant, meanwhile, is more straightforward: it excels at following directions, summarizing, and explaining. Opera stresses that neither is “better” than the other—they’re simply optimized for different roles.

What makes this interesting is Opera’s philosophy of multi-model browsing. Neon isn’t just about giving you one AI assistant; it’s about letting you choose. There’s a drop-down menu in Neon Chat where you can select which model you want to interact with. If you don’t want to think about it, Opera’s own AI engine will automatically decide which model to use, blending them behind the scenes. But if you’re the kind of user who likes control, you can override that and pick your favorite.
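To picture how that kind of override logic might work under the hood, here's a minimal TypeScript sketch. It's purely illustrative: Opera hasn't published how Neon's routing actually works, and the model identifiers and task labels below are hypothetical stand-ins based on the roles described above.

```ts
// Illustrative only — Opera has not documented Neon's routing internals.
// Model IDs and task labels are hypothetical placeholders.
type Model =
  | "llama-4-maverick"
  | "qwen3-next-thinking"
  | "qwen3-next-instruct"
  | "gemini-3-pro"
  | "gpt-5.2";

type Task = "coding" | "planning" | "summarizing" | "chat";

// Rough default mapping, mirroring the roles the article describes.
const defaultRouting: Record<Task, Model> = {
  coding: "llama-4-maverick",         // depth and endurance for heavier work
  planning: "qwen3-next-thinking",    // multi-step reasoning and planning
  summarizing: "qwen3-next-instruct", // direction-following and summaries
  chat: "gemini-3-pro",               // general-purpose fallback
};

// A drop-down pick (if the user made one) overrides the automatic choice.
function pickModel(task: Task, userOverride?: Model): Model {
  return userOverride ?? defaultRouting[task];
}

console.log(pickModel("planning"));            // auto: "qwen3-next-thinking"
console.log(pickModel("planning", "gpt-5.2")); // user override wins
```

The takeaway from the sketch is the same as Opera's pitch: sensible defaults when you don't care, a one-line override when you do.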
This strategy reflects a broader trend in the browser space. As AI becomes more embedded in everyday tools, companies are experimenting with how much choice to give users: Microsoft is weaving Copilot into Edge, Google is integrating Gemini into Chrome, and even smaller players like Brave are exploring their own approaches. Opera's bet is that its audience—often early adopters and power users—wants flexibility.
The timing is also telling. By adding Llama and Qwen alongside Gemini and GPT, Opera is signaling that it doesn’t want to be locked into any single ecosystem. It’s building Neon into a kind of neutral hub, where the best models from different companies coexist. That’s a subtle but important stance in a market where AI providers are increasingly competing for exclusivity.
The practical takeaway is simple: Neon now gives you more options. If you’re writing code, Maverick might be your go-to. If you’re planning a trip or solving a tricky problem, Qwen Thinking could shine. And if you just need a quick summary or explanation, Qwen Instruct is ready. Opera is betting that this kind of choice will make Neon not just a browser, but a platform for experimenting with AI in everyday life.