
GadgetBond


Perplexity CEO horrified after student uses his free AI browser to complete entire course in 16 seconds

Student brazenly tags Perplexity CEO while using his AI to cheat on a Coursera assignment.

By Shubham Sawarkar, Editor-in-Chief
Oct 12, 2025, 4:44 AM EDT

Aravind Srinivas, CEO of Perplexity AI, at the Bloomberg Tech conference in San Francisco, June 5, 2025. Photo by David Paul Morris for Bloomberg / Getty Images

The Coursera incident is far from an isolated case. Recent data shows that student discipline rates for AI-related plagiarism rose from 48% in 2022-23 to 64% in 2023-24, with roughly 90% of students aware of ChatGPT and 89% admitting to using it for homework. The statistics paint a picture of a rapidly evolving academic landscape where the line between legitimate study aid and outright cheating has become increasingly blurred.

According to researchers, 60 to 70 percent of students admitted to cheating even before ChatGPT's release, and that rate held steady through 2023. What's changed isn't necessarily the proportion of students willing to cut corners; it's the sophistication and ease with which they can now do so.

Surveys covering the 2023-24 school year found AI-assisted cheating at a rate of 5.1 students per 1,000, up from 1.6 per 1,000 in 2022-23, and more recent figures put the number at 7.5 per 1,000 in the current academic year. While these numbers might seem small, experts warn they are likely severe undercounts: in one University of Reading test, 94% of AI-written submissions went undetected by standard plagiarism checks.
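Taken together, the figures above suggest the reported rates are a floor rather than a ceiling. A rough back-of-envelope sketch, using only the numbers quoted in this article (the extrapolation itself is illustrative, not from any study):

```python
# Reported AI-cheating discipline rates, per 1,000 students (figures quoted above).
rate_2022_23 = 1.6
rate_2023_24 = 5.1
rate_current = 7.5

growth = rate_current / rate_2022_23  # roughly a 4.7x increase in two years
print(f"Growth since 2022-23: {growth:.1f}x")

# The University of Reading test found 94% of AI-written submissions went
# undetected. If detection catches only ~6% of cases, the caught rate would
# understate the true rate by a large factor (illustrative extrapolation only).
detection_rate = 1 - 0.94
implied_rate = rate_current / detection_rate
print(f"Implied true rate if only 6% are caught: {implied_rate:.0f} per 1,000")
```

Under that (admittedly crude) assumption, the 7.5-per-1,000 discipline rate would correspond to a true rate an order of magnitude higher.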

The disconnect between students and educators on what constitutes cheating has also widened. While 65% of students believe using AI to generate ideas or outlines is acceptable, nearly an equal percentage of educators (62%) view such practices as a form of plagiarism or academic misconduct if not properly cited.

The detection arms race that nobody’s winning

Universities have poured millions into AI detection tools, with mixed results at best. Most rely on Turnitin, Copyleaks, and GPTZero, spending anywhere from $2,768 to $110,400 per year on these tools. Yet the return on investment has been questionable.

Several top schools, including UCLA, UC San Diego, and Cal State LA, had already deactivated their AI detectors by 2024-2025, citing false positive rates of roughly 4% along with the cost. The problem isn't just accuracy; it's the fundamental impossibility of proving intent. A student using AI to brainstorm might produce work indistinguishable from one who used it to write entire essays.
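The 4% figure matters because of base rates: when most students are honest, even a small false positive rate produces a pool of wrongly flagged students comparable to the pool of real cases. A minimal sketch of that arithmetic, where only the 4% false positive rate comes from this article and the cohort split and detector sensitivity are assumed for illustration:

```python
# Base-rate arithmetic for AI detectors. Only the ~4% false positive rate is
# from the article; the honest/cheater split and sensitivity are assumptions.
students = 10_000
honest_share = 0.90          # assumption: 90% of submissions are fully human-written
false_positive_rate = 0.04   # ~4% FP rate cited for the deactivated detectors
sensitivity = 0.80           # assumption: detector catches 80% of real AI use

honest = students * honest_share
cheaters = students - honest

false_flags = honest * false_positive_rate   # innocent students flagged
true_flags = cheaters * sensitivity          # real cases flagged

share_innocent = false_flags / (false_flags + true_flags)
print(f"{false_flags:.0f} innocent vs {true_flags:.0f} real flags "
      f"-> {share_innocent:.0%} of accusations hit honest students")
```

With these assumed numbers, nearly a third of all accusations would land on students who did nothing wrong, which helps explain why schools concluded the tools could not support disciplinary action.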

Cornell University’s Center for Teaching Innovation now advises against using automatic detection algorithms for academic integrity violations, citing their unreliability and inability to provide definitive evidence of violations. Similarly, the University of Pittsburgh recommends against using AI detection tools, stating they are not accurate enough to prove students have violated academic integrity policies.

The Perplexity paradox

Srinivas’s public rebuke of the Coursera cheater reveals a deeper tension within Silicon Valley’s education push. On one hand, AI companies are racing to capture the lucrative education market, worth billions annually. On the other, they’re scrambling to prevent their tools from undermining the very institutions they’re trying to serve.

Perplexity’s Comet browser exemplifies this contradiction perfectly. The browser, which was just lowered from $200 to free for students, features “agentic” AI that can navigate the web, click through tasks—and finish homework. It’s designed to be autonomous, to take action on behalf of users. That’s precisely what makes it powerful for legitimate research—and devastating for academic integrity.

Beyond the honor code: the security nightmare

The cheating concerns are just the tip of the iceberg. Cybersecurity researchers have disclosed details of an attack called CometJacking that can embed malicious prompts within a seemingly innocuous link to siphon sensitive data from connected services like email and calendar. When a tool designed for students can be hijacked to steal personal information, the stakes extend far beyond plagiarism.

The security vulnerabilities compound the ethical challenges. Students using Comet for assignments might unknowingly expose their academic accounts, personal emails, or even financial information to bad actors. Universities, already struggling with cybersecurity, now face the prospect of AI browsers becoming new attack vectors into their systems.

The great rethinking

Research shows that 89% of students admit to using AI tools like ChatGPT for homework, a reality that's forcing educators to fundamentally reconsider assessment methods. Some institutions are abandoning traditional take-home essays altogether, reverting to in-person, handwritten exams. Others are embracing AI but requiring students to document their usage transparently.

According to Inside Higher Ed’s 2024 provosts’ survey, student use of generative AI greatly outpaced faculty use—45 percent of students used AI in their classes in the past year, while only 15 percent of instructors said the same. This gap creates an asymmetry where students often understand the technology’s capabilities better than those assessing their work.

Progressive educators argue the solution isn’t to fight AI but to fundamentally reimagine education. “We need to teach students to work with AI, not around it,” argues Dr. Sarah Chen, who heads Stanford’s AI and Education Initiative. “The skills that matter now are critical thinking, source evaluation, and understanding AI’s limitations—things a bot can’t do for you.”

The view from the C-suite

For Srinivas and other AI executives, the education market presents both an enormous opportunity and a reputational risk. Education technology is projected to reach $404 billion by 2025, with AI-powered tools capturing an increasing share. But scandals around academic cheating could poison the well, leading to blanket bans that shut out legitimate uses.

The CEO’s four-word response, “Absolutely don’t do this,” was likely calculated to distance Perplexity from the cheating narrative while maintaining the company’s education-friendly stance. But critics argue that’s not enough. “They need to build in safeguards, not just issue Twitter warnings,” says Marcus Thompson, who studies AI ethics at MIT. “If your tool can complete an entire course in seconds, maybe the problem is the tool, not the user.”

What comes next

In recent surveys, three in four education technology officers said AI has proven to be a moderate (59 percent) or significant (15 percent) risk to academic integrity. Universities are responding with a patchwork of policies—some embracing AI with guidelines, others imposing outright bans.

The debate has even reached accreditation bodies and federal regulators. The Department of Education is reportedly considering guidelines for AI use in federally funded institutions, though details remain scarce. Meanwhile, some states are crafting their own rules, creating a regulatory maze that could complicate nationwide EdTech rollouts.

For students caught in the middle, the message is increasingly muddled. Use AI, but not too much. Embrace technology, but maintain integrity. Prepare for an AI-powered future, but don’t use AI to get there. The contradictions are pushing some to question whether traditional education models can survive the AI revolution intact.

As Perplexity’s Comet browser demonstrates, we’ve entered an era where the tools meant to enhance learning can eliminate it entirely. Srinivas’s horror at seeing his product used for cheating might be genuine, but it also highlights Silicon Valley’s frequent blind spot: building powerful tools without fully considering their implications.

The student who brazenly tagged Srinivas while cheating might have done education a favor—forcing a public conversation about boundaries that the industry has been reluctant to have. Because if a CEO needs to tweet warnings about his own product, perhaps it’s time to ask whether the product should exist in its current form at all.

The next few years will likely determine whether AI becomes education’s great equalizer or its great underminer. For now, students will continue pushing boundaries, educators will continue playing catch-up, and companies like Perplexity will continue walking the tightrope between innovation and integrity.

What’s certain is that the 16-second Coursera video represents more than just one student’s shortcut. It’s a glimpse into a future where the very concept of “doing your own work” may need to be completely redefined. And if Silicon Valley’s track record is any indication, that redefinition will happen with or without educators’ input—one viral video at a time.

