Perplexity is stepping more deliberately into health care, and it is doing something a lot of AI products have been criticized for not doing: putting clinicians directly into the decision-making loop. With its new Health Advisory Board, the company is trying to answer the core question anyone has when they type a health query into an AI box: “Can this thing actually be trusted?”
The move comes at a moment when AI is rapidly spreading through medicine and consumer health search, but worries about accuracy, bias, and outright hallucinations remain loud and justified. Studies in recent years have shown that large language models can misinterpret clinical scenarios, confidently repeat fabricated diseases, or miss crucial nuances that a trained physician would catch. At the same time, clinicians and health researchers increasingly see AI as a powerful assistant for synthesizing evidence, triaging information, and cutting through the noise of an overwhelming medical literature. Perplexity is trying to thread that needle: lean into AI’s strengths while hard‑wiring more medical rigor and guardrails into how health information is generated and presented.
At the heart of the announcement is a small but heavyweight roster of experts—practicing doctors, researchers, and health-tech leaders—who will advise on how Perplexity’s health-related experiences are designed. The board’s mandate spans product decisions, content quality, patient safety, and clinical workflows, with an explicit emphasis on evidence‑based medicine rather than vague wellness content. Perplexity is essentially saying that decisions about how its system searches, ranks, and phrases health information should be grounded in the same standards clinicians use when they evaluate a new drug, device, or guideline.
The initial lineup sets the tone. Cardiologist and researcher Eric Topol, one of the most cited physician-scientists in the world and a prominent voice on AI in medicine, is among the first members. Topol has spent years arguing that AI, done right, could actually make medicine more human by catching diagnostic errors, reading complex scans more reliably, and freeing doctors from administrative overload so they can spend more time with patients. His involvement signals that this is not just a branding exercise but an attempt to square AI’s potential with the realities of clinical practice, where mistakes are measured not in engagement metrics but in lives.
Joining him is Dr. Devin Mann, a professor of population health and medicine at NYU Grossman School of Medicine and strategic director of digital health innovation at NYU Langone Health. Mann’s work sits right where AI is already starting to have an impact: chronic disease management, remote patient monitoring, and AI-assisted clinical workflows that help busy hospital systems keep track of complex patients. If Perplexity wants its tools to feel useful to clinicians instead of like yet another dashboard, having someone who lives inside those workflows matters.
On the pediatric and genomics front, Dr. Wendy Chung brings a different but crucial perspective. She is the Mary Ellen Avery Professor of Pediatrics at Harvard Medical School and the Chief of Pediatrics at Boston Children’s Hospital, overseeing care for some of the most complex and vulnerable patients in the system. Chung has led NIH-funded research in human genetics and rare diseases, where evidence is often fragmented and decisions depend on stitching together data from small trials, registry studies, and evolving guidelines. That is exactly the kind of landscape where an AI system that can rapidly aggregate and compare evidence could shine—if it is carefully tuned and transparently sourced.
Rounding out the first cohort is Tim Dybvig, a health‑technology founder and operator with experience building patient-facing tools and health infrastructure at scale. While the physicians set the bar for clinical rigor, someone still has to translate those principles into actual product architecture, data pipelines, and everyday user experience. Dybvig’s inclusion is a nod to the reality that “safe” in health tech is not just what the model knows, but how the whole system—from APIs to UI—handles data, edge cases, and failure modes.
Perplexity is also tying this advisory effort directly to new capabilities. Alongside the board, the company is rolling out connectors that let users bring in their own health data and build personalized dashboards or applications within Perplexity Health, running on Perplexity Computer. In practice, that could mean everything from patients exploring trends in their lab results to developers building tools that sit on top of Perplexity’s search and reasoning engine to help clinicians compare therapies or summarize complex records. The board is supposed to guide how those features evolve, so they end up augmenting clinical conversations rather than trying to replace them.
All of this sits against a larger backdrop: AI is already a go-to starting point for many people’s health research, but trust is fragile. Surveys show that a growing share of patients use AI-generated summaries to orient themselves on medical topics, yet only a minority actually trust those summaries to be accurate or consistently check the cited sources. At the same time, safety analyses by health-system researchers and watchdogs warn that AI tools can easily propagate misinformation or reflect biased training data if not carefully designed and monitored. That tension—heavy usage, limited trust—is exactly the gap Perplexity is betting an advisory board of front-line clinicians can help close.
One thing the company is careful to underline is what Perplexity Health is not. It is framed as an educational tool that helps people understand their data and prepare for better conversations with their doctors, not as a standalone diagnostic engine or treatment-planning system. The standard disclaimers are explicit: it is not intended to diagnose, treat, or prevent diseases, and it is not a substitute for professional medical advice—especially for people who are pregnant, nursing, managing an eating disorder, or living with other significant medical conditions. That positioning aligns with what many medical ethicists and digital‑health researchers have called for: AI as a first‑pass explainer and evidence organizer, with clinicians staying firmly in charge of actual care decisions.
For Perplexity, which has built its reputation around live, citation-rich answers that combine multiple AI models with web search, health care is both a natural extension and a high-stakes test. The same synthesis ability that serves a journalist or lawyer could be transformative in clinical and research settings, where it would be turned on trial data, guidelines, and real-world evidence, provided the sourcing is transparent and the limitations are clear. By bringing in high‑profile medical voices early and giving them a formal role in governance, the company appears to be signaling that it understands the difference between answering a trivia question and weighing in on someone's treatment options.
Of course, an advisory board is not a magic shield. The real test will be whether the board’s recommendations translate into measurable safeguards: how the system handles ambiguous symptoms, how it flags uncertainty, how it deals with outdated or conflicting studies, and how easy it is for both patients and clinicians to see where an answer came from. It will also hinge on how Perplexity treats privacy and data governance as it starts ingesting more sensitive health information—something patients consistently say is their top concern when they use digital tools to manage care. Those are hard, slow, unglamorous problems, but they are the ones that will determine whether AI in health ends up being a passing novelty or an infrastructure layer people actually trust.
For now, Perplexity’s Health Advisory Board looks like a statement of intent: a public commitment to let clinicians set the bar for what “responsible” health information from AI should look like. As more members are added in the coming weeks, spanning other specialties and perspectives, the experiment will be worth watching—not just as another product feature, but as a possible template for how AI companies and health systems might share responsibility for the information patients increasingly turn to first.