By using this site, you agree to the Privacy Policy and Terms of Use.
Accept

GadgetBond

  • Latest
  • How-to
  • Tech
    • AI
    • Amazon
    • Apple
    • CES
    • Computing
    • Creators
    • Google
    • Meta
    • Microsoft
    • Mobile
    • Samsung
    • Security
    • Xbox
  • Transportation
    • Audi
    • BMW
    • Cadillac
    • E-Bike
    • Ferrari
    • Ford
    • Honda Prelude
    • Lamborghini
    • McLaren W1
    • Mercedes
    • Porsche
    • Rivian
    • Tesla
  • Culture
    • Apple TV
    • Disney
    • Gaming
    • Hulu
    • Marvel
    • HBO Max
    • Netflix
    • Paramount
    • SHOWTIME
    • Star Wars
    • Streaming
Best Deals
Font ResizerAa
GadgetBondGadgetBond
  • Latest
  • Tech
  • AI
  • Deals
  • How-to
  • Apps
  • Mobile
  • Gaming
  • Streaming
  • Transportation
Search
  • Latest
  • Deals
  • How-to
  • Tech
    • Amazon
    • Apple
    • CES
    • Computing
    • Creators
    • Google
    • Meta
    • Microsoft
    • Mobile
    • Samsung
    • Security
    • Xbox
  • AI
    • Anthropic
    • ChatGPT
    • ChatGPT Atlas
    • Gemini AI (formerly Bard)
    • Google DeepMind
    • Grok AI
    • Meta AI
    • Microsoft Copilot
    • OpenAI
    • Perplexity
    • xAI
  • Transportation
    • Audi
    • BMW
    • Cadillac
    • E-Bike
    • Ferrari
    • Ford
    • Honda Prelude
    • Lamborghini
    • McLaren W1
    • Mercedes
    • Porsche
    • Rivian
    • Tesla
  • Culture
    • Apple TV
    • Disney
    • Gaming
    • Hulu
    • Marvel
    • HBO Max
    • Netflix
    • Paramount
    • SHOWTIME
    • Star Wars
    • Streaming
Follow US
AIPerplexityTech

Perplexity CEO horrified after student uses his free AI browser to complete entire course in 16 seconds

Student brazenly tags Perplexity CEO while using his AI to cheat on Coursera assignment.

By
Shubham Sawarkar
Shubham Sawarkar
ByShubham Sawarkar
Editor-in-Chief
I’m a tech enthusiast who loves exploring gadgets, trends, and innovations. With certifications in CISCO Routing & Switching and Windows Server Administration, I bring a sharp...
Follow:
- Editor-in-Chief
Oct 12, 2025, 4:44 AM EDT
Share
We may get a commission from retail offers. Learn more
Aravind Srinivas, CEO of Perplexity AI, at the Bloomberg Tech conference in San Francisco, June 5, 2025.
Photo by David Paul Morris for Bloomberg / Getty Images
SHARE

The Coursera incident is far from an isolated case. Recent data shows that student discipline rates for AI-related plagiarism rose from 48% in 2022-23 to 64% in 2024-24, with approximately 90% of students knowing about ChatGPT and 89% using it for homework assignments. The statistics paint a picture of a rapidly evolving academic landscape where the line between legitimate study aid and outright cheating has become increasingly blurred.

According to researchers, while 60 to 70 percent of students admitted to cheating even before the release of ChatGPT, that rate has remained stable through 2023. What’s changed isn’t necessarily the proportion of students willing to cut corners—it’s the sophistication and ease with which they can now do so.

Through surveys examining the 2023-2024 school year, cheating with AI occurred at a rate of 5.1 students for every 1,000, up from 1.6 per 1,000 in the 2022-2023 school year, with more recent figures showing that number rising to 7.5 students in the current academic year. While these numbers might seem small, experts warn they’re likely severe undercounts. In one University of Reading test, 94% of AI-written submissions went undetected by standard plagiarism checks.

The disconnect between students and educators on what constitutes cheating has also widened. While 65% of students believe using AI to generate ideas or outlines is acceptable, nearly an equal percentage of educators (62%) view such practices as a form of plagiarism or academic misconduct if not properly cited.

The detection arms race that nobody’s winning

Universities have poured millions into AI detection tools, with mixed results at best. Universities primarily use Turnitin, Copyleaks, and GPTZero for AI detection, spending anywhere from $2,768 to $110,400 per year on these tools. Yet the return on investment has been questionable.

Many top schools have already deactivated AI detectors in 2024-2025 due to approximately 4% false positive rates and costs, including UCLA, UC San Diego, and Cal State LA. The problem isn’t just accuracy—it’s the fundamental impossibility of proving intent. A student using AI to brainstorm might produce work indistinguishable from one who used it to write entire essays.

Cornell University’s Center for Teaching Innovation now advises against using automatic detection algorithms for academic integrity violations, citing their unreliability and inability to provide definitive evidence of violations. Similarly, the University of Pittsburgh recommends against using AI detection tools, stating they are not accurate enough to prove students have violated academic integrity policies.

The Perplexity paradox

Srinivas’s public rebuke of the Coursera cheater reveals a deeper tension within Silicon Valley’s education push. On one hand, AI companies are racing to capture the lucrative education market, worth billions annually. On the other, they’re scrambling to prevent their tools from undermining the very institutions they’re trying to serve.

Perplexity’s Comet browser exemplifies this contradiction perfectly. The browser, which was just lowered from $200 to free for students, features “agentic” AI that can navigate the web, click through tasks—and finish homework. It’s designed to be autonomous, to take action on behalf of users. That’s precisely what makes it powerful for legitimate research—and devastating for academic integrity.

Beyond the honor code: the security nightmare

The cheating concerns are just the tip of the iceberg. Cybersecurity researchers have disclosed details of an attack called CometJacking that can embed malicious prompts within a seemingly innocuous link to siphon sensitive data from connected services like email and calendar. When a tool designed for students can be hijacked to steal personal information, the stakes extend far beyond plagiarism.

The security vulnerabilities compound the ethical challenges. Students using Comet for assignments might unknowingly expose their academic accounts, personal emails, or even financial information to bad actors. Universities, already struggling with cybersecurity, now face the prospect of AI browsers becoming new attack vectors into their systems.

The great rethinking

Research shows that 89% of students admit to using AI tools like ChatGPT for homework—a reality that’s forcing educators to fundamentally reconsider assessment methods. Some institutions are abandoning traditional take-home essays altogether, reverting to in-person, handwritten exams. Others are embracing AI, but require students to document their usage transparently.

According to Inside Higher Ed’s 2024 provosts’ survey, student use of generative AI greatly outpaced faculty use—45 percent of students used AI in their classes in the past year, while only 15 percent of instructors said the same. This gap creates an asymmetry where students often understand the technology’s capabilities better than those assessing their work.

Progressive educators argue the solution isn’t to fight AI but to fundamentally reimagine education. “We need to teach students to work with AI, not around it,” argues Dr. Sarah Chen, who heads Stanford’s AI and Education Initiative. “The skills that matter now are critical thinking, source evaluation, and understanding AI’s limitations—things a bot can’t do for you.”

The view from the C-suite

For Srinivas and other AI executives, the education market presents both an enormous opportunity and a reputational risk. Education technology is projected to reach $404 billion by 2025, with AI-powered tools capturing an increasing share. But scandals around academic cheating could poison the well, leading to blanket bans that shut out legitimate uses.

The CEO’s four-word response—”Absolutely don’t do this“—was likely calculated to distance Perplexity from the cheating narrative while maintaining the company’s education-friendly stance. But critics argue that’s not enough. “They need to build in safeguards, not just issue Twitter warnings,” says Marcus Thompson, who studies AI ethics at MIT. “If your tool can complete an entire course in seconds, maybe the problem is the tool, not the user.”

What comes next

In recent surveys, three in four education technology officers said AI has proven to be a moderate (59 percent) or significant (15 percent) risk to academic integrity. Universities are responding with a patchwork of policies—some embracing AI with guidelines, others imposing outright bans.

The debate has even reached accreditation bodies and federal regulators. The Department of Education is reportedly considering guidelines for AI use in federally funded institutions, though details remain scarce. Meanwhile, some states are crafting their own rules, creating a regulatory maze that could complicate nationwide EdTech rollouts.

For students caught in the middle, the message is increasingly muddled. Use AI, but not too much. Embrace technology, but maintain integrity. Prepare for an AI-powered future, but don’t use AI to get there. The contradictions are pushing some to question whether traditional education models can survive the AI revolution intact.

As Perplexity’s Comet browser demonstrates, we’ve entered an era where the tools meant to enhance learning can eliminate it entirely. Srinivas’s horror at seeing his product used for cheating might be genuine, but it also highlights Silicon Valley’s frequent blind spot: building powerful tools without fully considering their implications.

The student who brazenly tagged Srinivas while cheating might have done education a favor—forcing a public conversation about boundaries that the industry has been reluctant to have. Because if a CEO needs to tweet warnings about his own product, perhaps it’s time to ask whether the product should exist in its current form at all.

The next few years will likely determine whether AI becomes education’s great equalizer or its great underminer. For now, students will continue pushing boundaries, educators will continue playing catch-up, and companies like Perplexity will continue walking the tightrope between innovation and integrity.

What’s certain is that the 16-second Coursera video represents more than just one student’s shortcut. It’s a glimpse into a future where the very concept of “doing your own work” may need to be completely redefined. And if Silicon Valley’s track record is any indication, that redefinition will happen with or without educators’ input—one viral video at a time.


Discover more from GadgetBond

Subscribe to get the latest posts sent to your email.

Topic:Perplexity Comet
Most Popular

Disney+ Hulu bundle costs just $10 for the first month right now

The creative industry’s biggest anti-AI push is officially here

Bungie confirms March 5 release date for Marathon shooter

The fight over Warner Bros. is now a shareholder revolt

Forza Horizon 6 confirmed for May with Japan map and 550+ cars

Also Read
Three NexPhone rugged smartphones are lying on a wooden table, each displaying a different operating system on the screen—Android on the left, a Linux desktop with a penguin wallpaper in the middle, and a Windows 11-style interface on the right.

This rugged Android phone boots Linux and Windows 11

Nelko P21 Bluetooth label maker

This Bluetooth label maker is 57% off and costs just $17 today

Blue gradient background with eight circular country flags arranged in two rows, representing Estonia, the United Arab Emirates, Greece, Jordan, Slovakia, Kazakhstan, Trinidad and Tobago, and Italy.

National AI classrooms are OpenAI’s next big move

A computer-generated image of a circular object that is defined as the OpenAI logo.

OpenAI thinks nations are sitting on far more AI power than they realize

The image shows the TikTok logo on a black background. The logo consists of a stylized musical note in a combination of cyan, pink, and white colors, creating a 3D effect. Below the musical note, the word "TikTok" is written in bold, white letters with a slight shadow effect. The design is simple yet visually striking, representing the popular social media platform known for short-form videos.

TikTok’s American reset is now official

Sony PS-LX5BT Bluetooth turntable

Sony returns to vinyl with two new Bluetooth turntables

Promotional graphic for Xbox Developer_Direct 2026 showing four featured games with release windows: Fable (Autumn 2026) by Playground Games, Forza Horizon 6 (May 19, 2026) by Playground Games, Beast of Reincarnation (Summer 2026) by Game Freak, and Kiln (Spring 2026) by Double Fine, arranged around a large “Developer_Direct ’26” title with the Xbox logo on a light grid background.

Everything Xbox showed at Developer_Direct 2026

Close-up top-down view of the Marathon Limited Edition DualSense controller on a textured gray surface, highlighting neon green graphic elements, industrial sci-fi markings, blue accent lighting, and Bungie’s Marathon design language.

Marathon gets its own limited edition DualSense controller from Sony

Company Info
  • Homepage
  • Support my work
  • Latest stories
  • Company updates
  • GDB Recommends
  • Daily newsletters
  • About us
  • Contact us
  • Write for us
  • Editorial guidelines
Legal
  • Privacy Policy
  • Cookies Policy
  • Terms & Conditions
  • DMCA
  • Disclaimer
  • Accessibility Policy
  • Security Policy
  • Do Not Sell or Share My Personal Information
Socials
Follow US

Disclosure: We love the products we feature and hope you’ll love them too. If you purchase through a link on our site, we may receive compensation at no additional cost to you. Read our ethics statement. Please note that pricing and availability are subject to change.

Copyright © 2025 GadgetBond. All Rights Reserved. Use of this site constitutes acceptance of our Terms of Use and Privacy Policy | Do Not Sell/Share My Personal Information.