Perplexity is stepping up its security game in a big way, launching a new research hub called the Secure Intelligence Institute – a move that says a lot about where AI is headed and what it will take to keep it safe.
If you’ve been following Perplexity over the past couple of years, this launch doesn’t come out of nowhere. The company went from being “the AI answer engine” to building a full AI-native browser, Comet, and even a secure-server AI Computer – tools that don’t just answer questions, but act as autonomous agents on the open web. As soon as you put AI agents in a browser, you stop playing in a sandbox and start playing in traffic, where prompt injection, malicious sites, and subtle data exfiltration attempts are real threats, not hypotheticals.
The Secure Intelligence Institute (SII) is Perplexity’s attempt to turn all of that risk into a structured research agenda, rather than a never-ending game of patch-and-pray. Officially, SII is the company’s flagship research center for security, privacy, and trust in “frontier AI” – the cutting edge systems that browse, reason, and act on behalf of users. In practice, it’s the place where three things come together: foundational security research, hardening Perplexity’s own products, and publishing enough of the work that it moves the entire ecosystem forward.
Perplexity is not starting from scratch here. Before SII even had a name, the company had already been poking at the uncomfortable edges of AI agent security. In April 2025, ahead of Comet’s public launch, Perplexity brought in security firm Trail of Bits to run what it described as a first-of-its-kind security audit for an agentic browser – including threat modeling and new adversarial tests tailored for AI agents navigating the web. A few months later, in July 2025, Comet shipped with a “defense-in-depth” architecture specifically designed to protect users in open-world environments where AI is constantly reading, clicking, and executing. By the end of 2025, the company had released BrowseSafe, an open-source detection model and benchmark that tries to catch prompt injection attacks hidden inside real-world web pages, with more than 14,700 attack scenarios across 14 harm categories.
BrowseSafe is worth dwelling on for a moment, because it shows the kind of problems SII is meant to tackle. As AI agents start reading arbitrary web pages, the risk is not just “bad content” in the conventional sense but instructions embedded in HTML, comments, or product descriptions that hijack the model – telling it to ignore prior constraints, leak secrets, or perform actions the user never intended. BrowseSafe combines a detection model with a benchmark (BrowseSafe-Bench) that simulates nearly 15,000 realistic attack scenarios, mixing malicious and benign samples to avoid simple keyword-based heuristics. External write-ups note that the system targets real-time scanning of HTML and has reported detection accuracy around the 90% range, outperforming some off-the-shelf safety classifiers and LLM detectors while staying fast enough for interactive browsing.
By March 2026, Perplexity had also pivoted from “building defenses” to helping define what secure AI agents should look like on paper. Its first major security paper, “Security Considerations for Artificial Intelligence Agents,” is a lightly adapted response to a NIST/CAISI request for information on agent security. The paper lays out why existing security mechanisms – designed for traditional, mostly deterministic software – don’t map cleanly onto autonomous AI agents that operate with probabilistic models, broad tool access, and a lot of autonomy. It argues that new security abstractions are needed to capture the agent layer itself, and that classic ideas like least privilege and fine-grained access control need to be rethought for systems that learn and adapt over time. It also emphasizes layered defenses: input and model-level mitigations, sandboxed execution, deterministic policy enforcement for high-risk actions, and careful architectural choices around hosting, networking, and tool surfaces.
The Secure Intelligence Institute takes all of this – audits, architectures, benchmarks, and policy thinking – and turns it into an explicit, long-term program. Perplexity describes SII as focused on areas like authentication, usable privacy and security, robust machine learning, and the defense of agentic AI systems. That’s a deliberately broad scope, and it reflects a reality: securing an AI agent isn’t just about catching malicious web content; it’s about everything from how you authenticate tools and users to how you design interfaces so humans can actually understand and control what agents are doing on their behalf.
Leadership is a big part of how Perplexity is trying to signal that this is not just a marketing label. SII’s inaugural director is Dr. Ninghui Li, the Samuel D. Conte Professor of Computer Science at Purdue University and a well-known figure in security and privacy research. Li is a Fellow of both ACM and IEEE and has served as Chair of the Steering Committee for ACM CCS (one of the top security conferences), Chair of ACM SIGSAC, and Editor-in-Chief of ACM Transactions on Privacy and Security. External coverage points out that his appointment gives the institute academic heft and ties it directly into the existing security research community. It’s a clear signal that Perplexity wants SII’s work to stand up as serious research, not just internal engineering docs.
The collaboration story is just as important as the internal one. Perplexity has been explicit that SII will work with leading teams in cryptography, security, and machine learning across industry and academia, rather than trying to solve everything behind closed doors. A LinkedIn post from the company and other reports highlight that the institute’s first paper – the NIST response on securing autonomous agents – is framed as a contribution to emerging security standards, not just a company whitepaper. That positioning matters because NIST guidance and similar frameworks increasingly influence how regulators, enterprises, and cloud providers think about AI risk.
Industry watchers see this move as more than just a nice-to-have. One analysis notes that launching SII positions Perplexity as a serious player in AI safety and security research, especially in the niche of autonomous agents and AI-native browsing. It puts pressure on other frontier AI companies, which have tended to emphasize general capabilities and high-level “safety” messaging, to show similarly concrete work on agent security, benchmarks, and defenses. At the same time, it aligns Perplexity with broader trends in AI governance, where standards bodies and regulators are increasingly focused on supply chain security, monitoring, and risk management for complex AI systems.
Zooming back out, SII is also a hedge against the growing complexity of Perplexity’s own stack. The company now runs model-agnostic, multi-model systems that mix different LLMs, tools, and browsing capabilities, and it exposes that power to millions of users and thousands of enterprises. That creates a huge attack surface: any weakness in content detection, sandboxing, or policy enforcement could turn an innocuous question into a pathway for data leakage or account abuse. Perplexity’s security page already talks about investments in monitoring, observability, and rapid threat response across its production environments; SII is the research layer that feeds those operational systems with new ideas and defenses.
There’s also a subtle but important usability angle here. Secure systems that are impossible to understand or control tend not to be used correctly, and Perplexity explicitly includes “usable privacy and security” in SII’s mandate. That likely means research into how to surface AI agent behavior to users, how to present security decisions in ways that make sense, and how to balance automation with meaningful human oversight. In other words, not just building a safer AI browser, but building one that actually feels safe and transparent.
For developers and researchers, SII’s existence is an invitation as much as an announcement. Perplexity is already advertising roles for technical staff within SII, with responsibilities that include conducting original research on the security and privacy of frontier intelligence systems and translating that into tangible improvements in Perplexity’s products. The company is also pointing people to the SII homepage as the hub for future collaborations, papers, and possibly open-source tools and benchmarks beyond BrowseSafe.
For everyone else – the people who just want AI tools that don’t go rogue when they click a bad link – the launch of SII is a sign that security is starting to get the same kind of institutional attention that model quality and features have enjoyed for years. We’re moving into an era where AI systems don’t just answer questions, they act; putting a dedicated institute behind making those actions safer is less a nice PR line and more a requirement for any company that wants its AI to live in the real world.
Discover more from GadgetBond
Subscribe to get the latest posts sent to your email.
