GadgetBond
AI | NVIDIA | Perplexity | Tech

Perplexity enters NVIDIA Nemotron Coalition as a founding partner

As a founding member of NVIDIA’s Nemotron Coalition, Perplexity is helping turn frontier‑grade base models into open infrastructure for developers and enterprises.

By Shubham Sawarkar, Editor-in-Chief
Mar 18, 2026, 10:11 AM EDT
Perplexity and NVIDIA logos over a stylized landscape of glowing data points. Image: Perplexity

Perplexity stepping in as a founding member of NVIDIA’s Nemotron Coalition is a pretty big signal about where AI is headed: open, collaborative, and deeply infrastructure‑level rather than just another shiny chatbot feature. It plugs Perplexity directly into a global effort to build frontier‑class open models that anyone can inspect, fine-tune, and deploy, instead of relying only on closed systems controlled by a handful of players.

At the heart of this move is the Nemotron Coalition itself, a new NVIDIA‑led alliance of leading AI labs created to advance “open frontier models” — think GPT-scale systems, but with open weights and transparent training practices. The coalition was announced at NVIDIA’s GTC event and aims to pool research, data, evaluation frameworks, and compute so that building these huge models becomes a shared infrastructure project instead of something every company has to reinvent alone. Members range from research‑heavy outfits like Mistral AI and Sarvam AI to applied players such as Perplexity, Cursor, and others that bring real‑world workloads and benchmarks into the loop.

The coalition’s first concrete deliverable is a new base model co-developed by Mistral AI and NVIDIA, trained on NVIDIA’s DGX Cloud and then open-sourced for the broader ecosystem to fine‑tune and adapt. NVIDIA has been explicit that this base model will underpin its upcoming Nemotron 4 family, effectively turning the coalition’s work into the foundation for future high‑end NVIDIA models as well. Coalition members contribute at different layers: some bring sovereign‑language and regional expertise, others provide evaluation datasets, and others inject specialized domain knowledge from production systems.

Perplexity’s role is very much on that “real usage” side of the spectrum: it already runs a complex retrieval‑augmented search product at scale and has a habit of stitching together different open models for each stage of answering a query. Under the hood, Perplexity post-trains different open models for query parsing, retrieval, reranking, and drafting responses, which lets it tune latency, cost, and relevance for each step instead of throwing one monolithic model at everything. That experience—knowing where models fail, how they behave under heavy search traffic, and which fine‑tuning knobs actually matter—is exactly the kind of domain expertise the coalition wants to bake into its shared base models.
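That staged setup can be sketched as a toy pipeline in which each stage is a separate, swappable component; in Perplexity's case each stage would be a different post-trained open model rather than the keyword heuristics used here. Every name and implementation below is a hypothetical illustration, not Perplexity's actual code or API.

```python
# Toy sketch of a staged answer pipeline: parse -> retrieve -> rerank -> draft.
# Each stage is independently replaceable, mirroring how a different
# post-trained open model could serve each step. All logic here is a
# keyword-matching stand-in for illustration only.

from dataclasses import dataclass


@dataclass
class Doc:
    text: str
    score: float = 0.0


def parse_query(raw: str) -> list[str]:
    """Stage 1: a query-parsing model would extract search terms."""
    return [t.lower() for t in raw.split() if len(t) > 3]


def retrieve(terms: list[str], corpus: list[str]) -> list[Doc]:
    """Stage 2: a retrieval model would fetch candidate documents."""
    return [Doc(d) for d in corpus if any(t in d.lower() for t in terms)]


def rerank(docs: list[Doc], terms: list[str]) -> list[Doc]:
    """Stage 3: a reranking model would score candidates for relevance."""
    for d in docs:
        d.score = sum(d.text.lower().count(t) for t in terms)
    return sorted(docs, key=lambda d: d.score, reverse=True)


def draft(docs: list[Doc]) -> str:
    """Stage 4: a drafting model would compose the final answer."""
    return " / ".join(d.text for d in docs[:2])


def answer(query: str, corpus: list[str]) -> str:
    terms = parse_query(query)
    return draft(rerank(retrieve(terms, corpus), terms))
```

Splitting the pipeline this way is what lets each stage be tuned separately for latency, cost, and relevance, which is the trade-off the article describes.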

Open models are the philosophical core of this whole effort, and that matters more than the branding. Pre-training a frontier-scale model is the most expensive, resource-intensive part of the pipeline; once that’s open, thousands of smaller teams can afford to specialize and fine‑tune instead of trying to raise billions to compete from scratch. NVIDIA’s own positioning here is blunt: it calls open models “the lifeblood of innovation” because they invite students, startups, and enterprises worldwide to participate in the AI stack rather than just consume it.

Nemotron 3 Super is a good example of the kind of open foundation Perplexity is leaning into. It's a 120-billion-parameter hybrid mixture-of-experts (MoE) model, but only 12 billion parameters are active at inference time thanks to its LatentMoE architecture, which makes it far cheaper to run than a dense model of the same size. Nemotron 3 Super is optimized for agentic workloads rather than simple chat: long-context reasoning, tool calling, planning, code and IT automation, and other multi-step tasks where multiple tools and data sources come into play.
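To make that efficiency claim concrete, here is the back-of-the-envelope arithmetic behind those numbers. The 120B-total and 12B-active figures come from the article; the shared-vs-expert split and top-2 routing below are invented for illustration, not NVIDIA's published configuration.

```python
# Back-of-the-envelope mixture-of-experts accounting. An MoE model stores
# every expert but routes each token through only the top-k of them, so the
# active parameter count per token is much smaller than the total.

def moe_param_counts(shared_b: float, n_experts: int,
                     expert_b: float, top_k: int) -> tuple[float, float]:
    """Return (total, active-per-token) parameter counts, in billions."""
    total = shared_b + n_experts * expert_b   # every expert is stored
    active = shared_b + top_k * expert_b      # only k experts run per token
    return total, active


# Hypothetical split: 4B shared weights, 29 experts of 4B each, route top-2.
total, active = moe_param_counts(shared_b=4, n_experts=29, expert_b=4, top_k=2)
print(total, active)  # 120 12 -> only 10% of the weights touched per token
```

Whatever the real split is, the ratio is the point: per-token compute scales with the active parameters, while quality benefits from the full parameter pool.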

Perplexity has already wired Nemotron 3 Super into its own stack in three ways: in the model selector inside the search experience, via its Agent API, and as part of Perplexity Computer, where the model is integrated directly into the search pipeline. That means a lot of real queries—research tasks, coding questions, complex multi‑step prompts—will effectively act as stress tests and real‑world feedback loops for Nemotron 3 Super and future Nemotron-line models. For the coalition, this kind of deployment is gold: it surfaces edge cases, helps refine evaluation benchmarks, and proves whether these open models can actually hold up against proprietary systems in day-to-day use.

On the NVIDIA side, Nemotron is a broader strategy to make its hardware and software stack the default home for large-scale open models. Nemotron models are designed to run especially efficiently on NVIDIA’s latest platforms—Blackwell GPUs, NVFP4 precision, and so on—which means better throughput and lower cost for anyone building on top of them. In return, NVIDIA publishes model weights, recipes, and tooling that make it easier for developers to stand up their own fine‑tuned versions, either in the cloud or on-prem.

For Mistral AI, a fellow founding member, Nemotron is a natural extension of its own open-first philosophy. Mistral is contributing cutting-edge architectures, multimodal capabilities, and large‑scale training know-how, while using NVIDIA’s compute and tooling to push these open models to frontier scale. Combined with Perplexity’s search and agent workloads, this starts to look like a full ecosystem loop: Mistral and NVIDIA push the state of the art, Perplexity and others pressure‑test it in production, and the resulting improvements flow back into the open model base for everyone.

What this all adds up to is an attempt to change the balance of power in AI infrastructure without pretending that closed models will disappear overnight. By banding together under the Nemotron Coalition, labs like Perplexity get access to serious compute and a shared base model that would be painful to build alone—while still retaining the ability to keep their own post-training magic proprietary if they want. For developers and enterprises watching from the outside, the promise is simple but ambitious: frontier‑grade models that are open enough to inspect and customize, battle‑tested on real workloads like Perplexity’s, and backed by some of the strongest infrastructure in the industry.




Disclosure: We love the products we feature and hope you’ll love them too. If you purchase through a link on our site, we may receive compensation at no additional cost to you. Read our ethics statement. Please note that pricing and availability are subject to change.

Copyright © 2026 GadgetBond. All Rights Reserved. Use of this site constitutes acceptance of our Terms of Use and Privacy Policy | Do Not Sell/Share My Personal Information.