By using this site, you agree to the Privacy Policy and Terms of Use.
Accept

GadgetBond

  • Latest
  • How-to
  • Tech
    • AI
    • Amazon
    • Apple
    • CES
    • Computing
    • Creators
    • Google
    • Meta
    • Microsoft
    • Mobile
    • Samsung
    • Security
    • Xbox
  • Transportation
    • Audi
    • BMW
    • Cadillac
    • E-Bike
    • Ferrari
    • Ford
    • Honda Prelude
    • Lamborghini
    • McLaren W1
    • Mercedes
    • Porsche
    • Rivian
    • Tesla
  • Culture
    • Apple TV
    • Disney
    • Gaming
    • Hulu
    • Marvel
    • HBO Max
    • Netflix
    • Paramount
    • SHOWTIME
    • Star Wars
    • Streaming
Add GadgetBond as a preferred source to see more of our stories on Google.
Font ResizerAa
GadgetBondGadgetBond
  • Latest
  • Tech
  • AI
  • Deals
  • How-to
  • Apps
  • Mobile
  • Gaming
  • Streaming
  • Transportation
Search
  • Latest
  • Deals
  • How-to
  • Tech
    • Amazon
    • Apple
    • CES
    • Computing
    • Creators
    • Google
    • Meta
    • Microsoft
    • Mobile
    • Samsung
    • Security
    • Xbox
  • AI
    • Anthropic
    • ChatGPT
    • ChatGPT Atlas
    • Gemini AI (formerly Bard)
    • Google DeepMind
    • Grok AI
    • Meta AI
    • Microsoft Copilot
    • OpenAI
    • Perplexity
    • xAI
  • Transportation
    • Audi
    • BMW
    • Cadillac
    • E-Bike
    • Ferrari
    • Ford
    • Honda Prelude
    • Lamborghini
    • McLaren W1
    • Mercedes
    • Porsche
    • Rivian
    • Tesla
  • Culture
    • Apple TV
    • Disney
    • Gaming
    • Hulu
    • Marvel
    • HBO Max
    • Netflix
    • Paramount
    • SHOWTIME
    • Star Wars
    • Streaming
Follow US
AIPerplexityTech

Perplexity unveils Secure Intelligence Institute led by Dr. Ninghui Li

Perplexity’s Secure Intelligence Institute is a new research hub dedicated to hardening frontier AI against real‑world security, privacy, and trust risks.

By
Shubham Sawarkar
Shubham Sawarkar's avatar
ByShubham Sawarkar
Editor-in-Chief
I’m a tech enthusiast who loves exploring gadgets, trends, and innovations. With certifications in CISCO Routing & Switching and Windows Server Administration, I bring a sharp...
Follow:
- Editor-in-Chief
Apr 1, 2026, 7:06 AM EDT
Share
We may get a commission from retail offers. Learn more
Artistic illustration of a glowing padlock with a keyhole centered in a surreal landscape. The padlock emits warm golden and orange light against a dramatic backdrop of blue and teal tones, with a starry night sky above and a reflective water surface below. The scene conveys themes of security, privacy, and protection through luminous, ethereal imagery.
Image: Perplexity
SHARE

Perplexity is stepping up its security game in a big way, launching a new research hub called the Secure Intelligence Institute – a move that says a lot about where AI is headed and what it will take to keep it safe.

If you’ve been following Perplexity over the past couple of years, this launch doesn’t come out of nowhere. The company went from being “the AI answer engine” to building a full AI-native browser, Comet, and even a secure-server AI Computer – tools that don’t just answer questions, but act as autonomous agents on the open web. As soon as you put AI agents in a browser, you stop playing in a sandbox and start playing in traffic, where prompt injection, malicious sites, and subtle data exfiltration attempts are real threats, not hypotheticals.

The Secure Intelligence Institute (SII) is Perplexity’s attempt to turn all of that risk into a structured research agenda, rather than a never-ending game of patch-and-pray. Officially, SII is the company’s flagship research center for security, privacy, and trust in “frontier AI” – the cutting edge systems that browse, reason, and act on behalf of users. In practice, it’s the place where three things come together: foundational security research, hardening Perplexity’s own products, and publishing enough of the work that it moves the entire ecosystem forward.

Perplexity is not starting from scratch here. Before SII even had a name, the company had already been poking at the uncomfortable edges of AI agent security. In April 2025, ahead of Comet’s public launch, Perplexity brought in security firm Trail of Bits to run what it described as a first-of-its-kind security audit for an agentic browser – including threat modeling and new adversarial tests tailored for AI agents navigating the web. A few months later, in July 2025, Comet shipped with a “defense-in-depth” architecture specifically designed to protect users in open-world environments where AI is constantly reading, clicking, and executing. By the end of 2025, the company had released BrowseSafe, an open-source detection model and benchmark that tries to catch prompt injection attacks hidden inside real-world web pages, with more than 14,700 attack scenarios across 14 harm categories.

BrowseSafe is worth dwelling on for a moment, because it shows the kind of problems SII is meant to tackle. As AI agents start reading arbitrary web pages, the risk is not just “bad content” in the conventional sense but instructions embedded in HTML, comments, or product descriptions that hijack the model – telling it to ignore prior constraints, leak secrets, or perform actions the user never intended. BrowseSafe combines a detection model with a benchmark (BrowseSafe-Bench) that simulates nearly 15,000 realistic attack scenarios, mixing malicious and benign samples to avoid simple keyword-based heuristics. External write-ups note that the system targets real-time scanning of HTML and has reported detection accuracy around the 90% range, outperforming some off-the-shelf safety classifiers and LLM detectors while staying fast enough for interactive browsing.

By March 2026, Perplexity had also pivoted from “building defenses” to helping define what secure AI agents should look like on paper. Its first major security paper, “Security Considerations for Artificial Intelligence Agents,” is a lightly adapted response to a NIST/CAISI request for information on agent security. The paper lays out why existing security mechanisms – designed for traditional, mostly deterministic software – don’t map cleanly onto autonomous AI agents that operate with probabilistic models, broad tool access, and a lot of autonomy. It argues that new security abstractions are needed to capture the agent layer itself, and that classic ideas like least privilege and fine-grained access control need to be rethought for systems that learn and adapt over time. It also emphasizes layered defenses: input and model-level mitigations, sandboxed execution, deterministic policy enforcement for high-risk actions, and careful architectural choices around hosting, networking, and tool surfaces.

The Secure Intelligence Institute takes all of this – audits, architectures, benchmarks, and policy thinking – and turns it into an explicit, long-term program. Perplexity describes SII as focused on areas like authentication, usable privacy and security, robust machine learning, and the defense of agentic AI systems. That’s a deliberately broad scope, and it reflects a reality: securing an AI agent isn’t just about catching malicious web content; it’s about everything from how you authenticate tools and users to how you design interfaces so humans can actually understand and control what agents are doing on their behalf.

Leadership is a big part of how Perplexity is trying to signal that this is not just a marketing label. SII’s inaugural director is Dr. Ninghui Li, the Samuel D. Conte Professor of Computer Science at Purdue University and a well-known figure in security and privacy research. Li is a Fellow of both ACM and IEEE and has served as Chair of the Steering Committee for ACM CCS (one of the top security conferences), Chair of ACM SIGSAC, and Editor-in-Chief of ACM Transactions on Privacy and Security. External coverage points out that his appointment gives the institute academic heft and ties it directly into the existing security research community. It’s a clear signal that Perplexity wants SII’s work to stand up as serious research, not just internal engineering docs.

The collaboration story is just as important as the internal one. Perplexity has been explicit that SII will work with leading teams in cryptography, security, and machine learning across industry and academia, rather than trying to solve everything behind closed doors. A LinkedIn post from the company and other reports highlight that the institute’s first paper – the NIST response on securing autonomous agents – is framed as a contribution to emerging security standards, not just a company whitepaper. That positioning matters because NIST guidance and similar frameworks increasingly influence how regulators, enterprises, and cloud providers think about AI risk.

Industry watchers see this move as more than just a nice-to-have. One analysis notes that launching SII positions Perplexity as a serious player in AI safety and security research, especially in the niche of autonomous agents and AI-native browsing. It puts pressure on other frontier AI companies, which have tended to emphasize general capabilities and high-level “safety” messaging, to show similarly concrete work on agent security, benchmarks, and defenses. At the same time, it aligns Perplexity with broader trends in AI governance, where standards bodies and regulators are increasingly focused on supply chain security, monitoring, and risk management for complex AI systems.

Zooming back out, SII is also a hedge against the growing complexity of Perplexity’s own stack. The company now runs model-agnostic, multi-model systems that mix different LLMs, tools, and browsing capabilities, and it exposes that power to millions of users and thousands of enterprises. That creates a huge attack surface: any weakness in content detection, sandboxing, or policy enforcement could turn an innocuous question into a pathway for data leakage or account abuse. Perplexity’s security page already talks about investments in monitoring, observability, and rapid threat response across its production environments; SII is the research layer that feeds those operational systems with new ideas and defenses.

There’s also a subtle but important usability angle here. Secure systems that are impossible to understand or control tend not to be used correctly, and Perplexity explicitly includes “usable privacy and security” in SII’s mandate. That likely means research into how to surface AI agent behavior to users, how to present security decisions in ways that make sense, and how to balance automation with meaningful human oversight. In other words, not just building a safer AI browser, but building one that actually feels safe and transparent.

For developers and researchers, SII’s existence is an invitation as much as an announcement. Perplexity is already advertising roles for technical staff within SII, with responsibilities that include conducting original research on the security and privacy of frontier intelligence systems and translating that into tangible improvements in Perplexity’s products. The company is also pointing people to the SII homepage as the hub for future collaborations, papers, and possibly open-source tools and benchmarks beyond BrowseSafe.

For everyone else – the people who just want AI tools that don’t go rogue when they click a bad link – the launch of SII is a sign that security is starting to get the same kind of institutional attention that model quality and features have enjoyed for years. We’re moving into an era where AI systems don’t just answer questions, they act; putting a dedicated institute behind making those actions safer is less a nice PR line and more a requirement for any company that wants its AI to live in the real world.


Discover more from GadgetBond

Subscribe to get the latest posts sent to your email.

Leave a Comment

Leave a ReplyCancel reply

Most Popular

PayPal Business for side hustles, shops and agencies

Google Drive now uses AI to catch ransomware in real time

iPhone Lockdown Mode: Apple’s extreme security switch

JBL Xtreme 5 and Go 5 refresh iconic JBL portable speaker lineup

How the PayPal Debit Card works with your balance

Also Read
Simple illustration on a light gray background featuring a pixelated brown caterpillar or worm character with small black eyes, positioned above the text 'Auto mode' in a serif font. The character has a segmented body design in a retro video game style.

Claude Code auto mode lands for Enterprise and API users

Simple illustration of a black computer mouse cursor clicking on a stylized white network node with radiating branches, set against a soft pink background.

Anthropic brings computer use to Claude Code for hands-free dev work

Illustration on a coral background showing a code bracket symbol (curly braces) in black flanking a white rectangular window or tab. Inside the window is a globe icon with latitude and longitude grid lines, representing web or global connectivity. The design symbolizes web-based code development or programming in a browser environment.

Claude Platform’s new Compliance API answers “who did what and when”

Grid of 12 Apple App Store and Apple TV badges displaying localized text in different languages. Top row: Mac App Store in Punjabi, Download on the App Store in English, and Apple TV S6 in Telugu. Second row: App Store in Urdu, App Store in Gujarati, and App Store in Malayalam. Third row: Presite v trgovini App Store in Croatian, App Store in Tamil, and App Store in Odia. Bottom row: Apple TV in Hindi, App Store in Bengali, and App Store in Kannada. Each badge features the Apple logo and white text on a black background with rounded border frames.

App Store adds 11 new languages for localized listings

A user with Apple's AirPods 4th generation is shown.

Apple pilots automatic audio switching for third-party audio accessories in Europe

iCloud.com interface displaying the main dashboard for user Jenny Court (iCloud+ subscriber). The layout shows five service cards: a profile card with Jenny's avatar and email address; a Photos card showing a library of 14,789 photos and 1,234 videos with a grid of sample images including landscapes, portraits, and lifestyle photos; a Mail card displaying an inbox with 5 unread messages from contacts including Melody Cheung, Trev Smith, and Christine Huang; a Drive card labeled 'All Files' showing stored documents including a Ticket (JPG), Flight Confirmation (PDF), and Career folder; and a Notes card for iCloud notes. A bottom toolbar displays app shortcuts for Find My, Contacts, Photos, and Mail.

iOS 26.4 adds iCloud.com search for files and photos

Hero image for Veo 3.1 Lite featuring the text 'Build with Veo 3.1 Lite' centered on a dark background, surrounded by six sample AI-generated video frames showcasing diverse content: a mountaineer in red jacket at sunrise in a snowy alpine landscape, a white horse galloping through water, a person wearing round sunglasses and patterned jacket, a speedboat cutting through ocean waves, vibrant abstract landscape with colorful rolling hills and pink sky, and an underwater seaweed scene.

Google launches Veo 3.1 Lite for cheaper AI video in the Gemini API

Promotional graphic for Fitbit’s Personal Health Coach showing a smartphone screen with the Fitbit app dashboard, including a circular weekly cardio progress ring at 56%, tiles for steps, readiness, and sleep duration labeled ‘Good,’ and a detailed sleep summary card on a soft blue gradient background with the words ‘Personal Health Coach’ at the top.

Fitbit personal health coach adds cycle health, mental wellbeing and nutrition

Company Info
  • Homepage
  • Support my work
  • Latest stories
  • Company updates
  • GDB Recommends
  • Daily newsletters
  • About us
  • Contact us
  • Write for us
  • Editorial guidelines
Legal
  • Privacy Policy
  • Cookies Policy
  • Terms & Conditions
  • DMCA
  • Disclaimer
  • Accessibility Policy
  • Security Policy
  • Do Not Sell or Share My Personal Information
Socials
Follow US

Disclosure: We love the products we feature and hope you’ll love them too. If you purchase through a link on our site, we may receive compensation at no additional cost to you. Read our ethics statement. Please note that pricing and availability are subject to change.

Copyright © 2026 GadgetBond. All Rights Reserved. Use of this site constitutes acceptance of our Terms of Use and Privacy Policy | Do Not Sell/Share My Personal Information.