By using this site, you agree to the Privacy Policy and Terms of Use.
Accept

GadgetBond

  • Latest
  • How-to
  • Tech
    • AI
    • Amazon
    • Apple
    • CES
    • Computing
    • Creators
    • Google
    • Meta
    • Microsoft
    • Mobile
    • Samsung
    • Security
    • Xbox
  • Transportation
    • Audi
    • BMW
    • Cadillac
    • E-Bike
    • Ferrari
    • Ford
    • Honda Prelude
    • Lamborghini
    • McLaren W1
    • Mercedes
    • Porsche
    • Rivian
    • Tesla
  • Culture
    • Apple TV
    • Disney
    • Gaming
    • Hulu
    • Marvel
    • HBO Max
    • Netflix
    • Paramount
    • SHOWTIME
    • Star Wars
    • Streaming
Add GadgetBond as a preferred source to see more of our stories on Google.
Font ResizerAa
GadgetBondGadgetBond
  • Latest
  • Tech
  • AI
  • Deals
  • How-to
  • Apps
  • Mobile
  • Gaming
  • Streaming
  • Transportation
Search
  • Latest
  • Deals
  • How-to
  • Tech
    • Amazon
    • Apple
    • CES
    • Computing
    • Creators
    • Google
    • Meta
    • Microsoft
    • Mobile
    • Samsung
    • Security
    • Xbox
  • AI
    • Anthropic
    • ChatGPT
    • ChatGPT Atlas
    • Gemini AI (formerly Bard)
    • Google DeepMind
    • Grok AI
    • Meta AI
    • Microsoft Copilot
    • OpenAI
    • Perplexity
    • xAI
  • Transportation
    • Audi
    • BMW
    • Cadillac
    • E-Bike
    • Ferrari
    • Ford
    • Honda Prelude
    • Lamborghini
    • McLaren W1
    • Mercedes
    • Porsche
    • Rivian
    • Tesla
  • Culture
    • Apple TV
    • Disney
    • Gaming
    • Hulu
    • Marvel
    • HBO Max
    • Netflix
    • Paramount
    • SHOWTIME
    • Star Wars
    • Streaming
Follow US
AIPerplexityTech

Perplexity unveils Secure Intelligence Institute led by Dr. Ninghui Li

Perplexity’s Secure Intelligence Institute is a new research hub dedicated to hardening frontier AI against real‑world security, privacy, and trust risks.

By
Shubham Sawarkar
Shubham Sawarkar's avatar
ByShubham Sawarkar
Editor-in-Chief
I’m a tech enthusiast who loves exploring gadgets, trends, and innovations. With certifications in CISCO Routing & Switching and Windows Server Administration, I bring a sharp...
Follow:
- Editor-in-Chief
Apr 1, 2026, 7:06 AM EDT
Share
We may get a commission from retail offers. Learn more
Artistic illustration of a glowing padlock with a keyhole centered in a surreal landscape. The padlock emits warm golden and orange light against a dramatic backdrop of blue and teal tones, with a starry night sky above and a reflective water surface below. The scene conveys themes of security, privacy, and protection through luminous, ethereal imagery.
Image: Perplexity
SHARE

Perplexity is stepping up its security game in a big way, launching a new research hub called the Secure Intelligence Institute – a move that says a lot about where AI is headed and what it will take to keep it safe.

If you’ve been following Perplexity over the past couple of years, this launch doesn’t come out of nowhere. The company went from being “the AI answer engine” to building a full AI-native browser, Comet, and even a secure-server AI Computer – tools that don’t just answer questions, but act as autonomous agents on the open web. As soon as you put AI agents in a browser, you stop playing in a sandbox and start playing in traffic, where prompt injection, malicious sites, and subtle data exfiltration attempts are real threats, not hypotheticals.

The Secure Intelligence Institute (SII) is Perplexity’s attempt to turn all of that risk into a structured research agenda, rather than a never-ending game of patch-and-pray. Officially, SII is the company’s flagship research center for security, privacy, and trust in “frontier AI” – the cutting edge systems that browse, reason, and act on behalf of users. In practice, it’s the place where three things come together: foundational security research, hardening Perplexity’s own products, and publishing enough of the work that it moves the entire ecosystem forward.

Perplexity is not starting from scratch here. Before SII even had a name, the company had already been poking at the uncomfortable edges of AI agent security. In April 2025, ahead of Comet’s public launch, Perplexity brought in security firm Trail of Bits to run what it described as a first-of-its-kind security audit for an agentic browser – including threat modeling and new adversarial tests tailored for AI agents navigating the web. A few months later, in July 2025, Comet shipped with a “defense-in-depth” architecture specifically designed to protect users in open-world environments where AI is constantly reading, clicking, and executing. By the end of 2025, the company had released BrowseSafe, an open-source detection model and benchmark that tries to catch prompt injection attacks hidden inside real-world web pages, with more than 14,700 attack scenarios across 14 harm categories.

BrowseSafe is worth dwelling on for a moment, because it shows the kind of problems SII is meant to tackle. As AI agents start reading arbitrary web pages, the risk is not just “bad content” in the conventional sense but instructions embedded in HTML, comments, or product descriptions that hijack the model – telling it to ignore prior constraints, leak secrets, or perform actions the user never intended. BrowseSafe combines a detection model with a benchmark (BrowseSafe-Bench) that simulates nearly 15,000 realistic attack scenarios, mixing malicious and benign samples to avoid simple keyword-based heuristics. External write-ups note that the system targets real-time scanning of HTML and has reported detection accuracy around the 90% range, outperforming some off-the-shelf safety classifiers and LLM detectors while staying fast enough for interactive browsing.

By March 2026, Perplexity had also pivoted from “building defenses” to helping define what secure AI agents should look like on paper. Its first major security paper, “Security Considerations for Artificial Intelligence Agents,” is a lightly adapted response to a NIST/CAISI request for information on agent security. The paper lays out why existing security mechanisms – designed for traditional, mostly deterministic software – don’t map cleanly onto autonomous AI agents that operate with probabilistic models, broad tool access, and a lot of autonomy. It argues that new security abstractions are needed to capture the agent layer itself, and that classic ideas like least privilege and fine-grained access control need to be rethought for systems that learn and adapt over time. It also emphasizes layered defenses: input and model-level mitigations, sandboxed execution, deterministic policy enforcement for high-risk actions, and careful architectural choices around hosting, networking, and tool surfaces.

The Secure Intelligence Institute takes all of this – audits, architectures, benchmarks, and policy thinking – and turns it into an explicit, long-term program. Perplexity describes SII as focused on areas like authentication, usable privacy and security, robust machine learning, and the defense of agentic AI systems. That’s a deliberately broad scope, and it reflects a reality: securing an AI agent isn’t just about catching malicious web content; it’s about everything from how you authenticate tools and users to how you design interfaces so humans can actually understand and control what agents are doing on their behalf.

Leadership is a big part of how Perplexity is trying to signal that this is not just a marketing label. SII’s inaugural director is Dr. Ninghui Li, the Samuel D. Conte Professor of Computer Science at Purdue University and a well-known figure in security and privacy research. Li is a Fellow of both ACM and IEEE and has served as Chair of the Steering Committee for ACM CCS (one of the top security conferences), Chair of ACM SIGSAC, and Editor-in-Chief of ACM Transactions on Privacy and Security. External coverage points out that his appointment gives the institute academic heft and ties it directly into the existing security research community. It’s a clear signal that Perplexity wants SII’s work to stand up as serious research, not just internal engineering docs.

The collaboration story is just as important as the internal one. Perplexity has been explicit that SII will work with leading teams in cryptography, security, and machine learning across industry and academia, rather than trying to solve everything behind closed doors. A LinkedIn post from the company and other reports highlight that the institute’s first paper – the NIST response on securing autonomous agents – is framed as a contribution to emerging security standards, not just a company whitepaper. That positioning matters because NIST guidance and similar frameworks increasingly influence how regulators, enterprises, and cloud providers think about AI risk.

Industry watchers see this move as more than just a nice-to-have. One analysis notes that launching SII positions Perplexity as a serious player in AI safety and security research, especially in the niche of autonomous agents and AI-native browsing. It puts pressure on other frontier AI companies, which have tended to emphasize general capabilities and high-level “safety” messaging, to show similarly concrete work on agent security, benchmarks, and defenses. At the same time, it aligns Perplexity with broader trends in AI governance, where standards bodies and regulators are increasingly focused on supply chain security, monitoring, and risk management for complex AI systems.

Zooming back out, SII is also a hedge against the growing complexity of Perplexity’s own stack. The company now runs model-agnostic, multi-model systems that mix different LLMs, tools, and browsing capabilities, and it exposes that power to millions of users and thousands of enterprises. That creates a huge attack surface: any weakness in content detection, sandboxing, or policy enforcement could turn an innocuous question into a pathway for data leakage or account abuse. Perplexity’s security page already talks about investments in monitoring, observability, and rapid threat response across its production environments; SII is the research layer that feeds those operational systems with new ideas and defenses.

There’s also a subtle but important usability angle here. Secure systems that are impossible to understand or control tend not to be used correctly, and Perplexity explicitly includes “usable privacy and security” in SII’s mandate. That likely means research into how to surface AI agent behavior to users, how to present security decisions in ways that make sense, and how to balance automation with meaningful human oversight. In other words, not just building a safer AI browser, but building one that actually feels safe and transparent.

For developers and researchers, SII’s existence is an invitation as much as an announcement. Perplexity is already advertising roles for technical staff within SII, with responsibilities that include conducting original research on the security and privacy of frontier intelligence systems and translating that into tangible improvements in Perplexity’s products. The company is also pointing people to the SII homepage as the hub for future collaborations, papers, and possibly open-source tools and benchmarks beyond BrowseSafe.

For everyone else – the people who just want AI tools that don’t go rogue when they click a bad link – the launch of SII is a sign that security is starting to get the same kind of institutional attention that model quality and features have enjoyed for years. We’re moving into an era where AI systems don’t just answer questions, they act; putting a dedicated institute behind making those actions safer is less a nice PR line and more a requirement for any company that wants its AI to live in the real world.


Discover more from GadgetBond

Subscribe to get the latest posts sent to your email.

Leave a Comment

Leave a ReplyCancel reply

Most Popular

Here’s how to sign up for a Amazon Prime membership

Copilot’s agentic mode auto-handles your Outlook inbox and calendar chaos

Outgoing CEO Tim Cook names Apple Maps his top leadership error

Apple shows how it made the MacBook Neo intro video

Snapchat adds Place Loyalty to Snap Map

Also Read
Apple logo

Liquid Glass iPhone: subtle curves make bezels vanish forever

The Apple logo, a white silhouette of an apple with a bite taken out of it, is displayed with a rainbow colored gradient. The stem and leaf of the apple are green. The background is black.

Apple’s Ultra products mark a new premium category above Pro

The classic Apple logo, shown in light silvery-blue, set against a black background. The logo has a clean, minimalist design featuring the iconic bitten apple silhouette with a soft, matte finish.

Apple teases MacBook Ultra supremacy with six features like M6 Max, OLED, and built-in 5G

The Apple Vision Pro computer glasses are presented to customers at the Apple Store on Kurfürstendamm.

Apple Vision Pro successfully guides the first eye surgery

Three smartphone screens show Spotify integrated with Claude Opus 4.6, displaying AI-generated podcast recommendations, a custom morning gym playlist, and focus study music mixes on a green gradient background with the Spotify logo in the corner.

Spotify now lives inside Anthropic’s Claude AI

Smartphone placed on a workout mat next to a water bottle, displaying a Peloton 20-minute HIIT cardio workout video with Spotify playback controls and exercise progress on screen.

Spotify Fitness Hub includes 1,400 Peloton classes, yoga, and strength training

“Ted Lasso” season four first-look image

Ted Lasso season 4 teaser drops now

A blush MacBook Neo displaying a macOS desktop with three overlapping windows, including a colorful chemistry project, a school worksheet, and a ChatGPT app generating an image of a grapefruit, showcasing multitasking for schoolwork and creative tasks.

MacBook Neo stock shortages plague Apple, but Amazon and Walmart deliver fast with discounts

Company Info
  • Homepage
  • Support my work
  • Latest stories
  • Company updates
  • GDB Recommends
  • Daily newsletters
  • About us
  • Contact us
  • Write for us
  • Editorial guidelines
Legal
  • Privacy Policy
  • Cookies Policy
  • Terms & Conditions
  • DMCA
  • Disclaimer
  • Accessibility Policy
  • Security Policy
  • Do Not Sell or Share My Personal Information
Socials
Follow US

Disclosure: We love the products we feature and hope you’ll love them too. If you purchase through a link on our site, we may receive compensation at no additional cost to you. Read our ethics statement. Please note that pricing and availability are subject to change.

Copyright © 2026 GadgetBond. All Rights Reserved. Use of this site constitutes acceptance of our Terms of Use and Privacy Policy | Do Not Sell/Share My Personal Information.