
GadgetBond

Topics: AI, Google, OpenAI, Tech

AI is getting smarter – but also more racist, experts warn

Experts warn that as AI tools advance, they are acquiring deeply embedded racist attitudes and stereotypes, discriminating against speakers of Black English dialects.

By Shubham Sawarkar, Editor-in-Chief
Mar 16, 2024, 2:36 PM EDT
[Image: stylized 3D “AI” letters in a neon, 1980s synthwave style]
Illustration by Kasia Bojanowska for DigitalOcean / Dribbble

Popular artificial intelligence tools like ChatGPT and Google’s Gemini are becoming increasingly covert in their racism as they advance, according to an alarming new report from technology and linguistics researchers. While previous studies examined overt racial biases in these systems, this team took a deeper look at how AI reacts to more subtle indicators of race, such as differences in dialect.

“We know that these technologies are really commonly used by companies to do tasks like screening job applicants,” said Valentin Hoffman, a researcher at the Allen Institute for AI and co-author of the paper published on arXiv. He explained that until now, researchers had not closely examined how AI responds to dialects like African American Vernacular English (AAVE), created and spoken by many Black Americans.

The disturbing findings reveal that large language models are significantly more likely to describe AAVE speakers as “stupid” and “lazy,” assigning them to lower-paying jobs compared to those speaking “standard American English.” This bias could punish Black job candidates for code-switching between AAVE and more formal styles of speech and writing.
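The probing setup behind findings like these (a “matched guise” design borrowed from sociolinguistics) can be sketched in a few lines: present a model with the same message written in two dialect guises and compare how strongly it associates stereotype adjectives with each speaker. The snippet below is a toy illustration of that idea, not the researchers’ code; `dummy_score` is a placeholder for a real model’s log-probability or rating function, and the prompt template is invented for demonstration.

```python
# Toy sketch of matched-guise probing (illustrative only, not the paper's code).
# Idea: feed a model the same message in two dialect "guises" and compare how
# strongly it associates stereotype adjectives with each speaker.

SAE = "I am so happy when I wake up from a bad dream because it feels too real."
AAVE = "I be so happy when I wake up from a bad dream cus they be feelin too real."

ADJECTIVES = ["brilliant", "intelligent", "lazy", "stupid"]

def association_scores(text: str, score) -> dict:
    """Map each adjective to score(prompt), where `score` is any caller-supplied
    rating function, e.g. a language model's log-probability of the prompt."""
    template = 'A person who says "{}" tends to be {}.'
    return {adj: score(template.format(text, adj)) for adj in ADJECTIVES}

# Placeholder scorer so the sketch runs without a model; a real probe would
# plug in log-probabilities from an actual language model.
def dummy_score(prompt: str) -> float:
    return float(len(prompt))

sae_scores = association_scores(SAE, dummy_score)
aave_scores = association_scores(AAVE, dummy_score)

# The comparison of interest: which adjectives score higher for the AAVE guise
# than for the SAE guise, aggregated over many such prompt pairs.
covert_bias = {adj for adj in ADJECTIVES if aave_scores[adj] > sae_scores[adj]}
```

Run over many prompt pairs and adjectives with a real model, this kind of comparison is what lets researchers surface covert associations that never appear as an overt slur in the model’s output.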

“One big concern is that, say a job candidate used this dialect in their social media posts,” Hoffman said. “It’s not unreasonable to think that the language model will not select the candidate because they used the dialect in their online presence.”

Beyond the workplace, the study found language models were more inclined to recommend harsher punishments like the death penalty for hypothetical criminal defendants using AAVE during court statements. “I’d like to think that we are not anywhere close to a time when this kind of technology is used to make decisions about criminal convictions,” Hoffman said. “That might feel like a very dystopian future, and hopefully it is.”

However, AI is already being utilized in some areas of the legal system for tasks like creating transcripts and conducting research. As Hoffman notes, “Ten years ago, even five years ago, we had no idea all the different contexts that AI would be used today.”

The new findings are a sobering reminder that as language models grow larger by ingesting more data from the internet, their blind embrace of human knowledge leads them to learn and proliferate the racist stereotypes and attitudes that pervade online content – the classic “garbage in, garbage out” problem in computer science.

While earlier AI systems were criticized for overt racism, like chatbots regurgitating neo-Nazi rhetoric, recent models utilize “ethical guardrails” aiming to filter out such clearly offensive output. But as Avijit Ghosh, an AI ethics researcher at Hugging Face, explains, “It doesn’t eliminate the underlying problem; the guardrails seem to emulate what educated people in the United States do.”

He elaborates, “Once people cross a certain educational threshold, they won’t call you a slur to your face, but the racism is still there. It’s a similar thing in language models…These models don’t unlearn problematic things, they just get better at hiding it.”

Critics like Timnit Gebru, the former co-leader of Google’s ethical AI team, have been sounding the alarm about the unchecked proliferation of large language models for years. “It feels like a gold rush,” she said last year. “In fact, it is a gold rush. And a lot of the people who are making money are not the people actually in the midst of it.”

Recent controversies, like Google’s AI system generating images depicting historical figures as people of color, underscore the risks of deploying these systems without sufficient safeguards. Yet the private sector’s embrace of generative AI is expected to intensify, with the market projected to become a $1.3 trillion industry by 2032, according to Bloomberg.

Meanwhile, federal regulators have only begun addressing AI-driven discrimination, with the first EEOC case on the issue emerging late last year. AI ethics experts like Ghosh argue that curtailing the unregulated use of language models in sensitive areas like hiring and criminal justice must be an urgent priority.

“You don’t need to stop innovation or slow AI research, but curtailing the use of these technologies in certain sensitive areas is an excellent first step,” Ghosh stated. “Racist people exist all over the country; we don’t need to put them in jail, but we try to not allow them to be in charge of hiring and recruiting. Technology should be regulated in a similar way.”



Topics: ChatGPT, Gemini AI (formerly Bard)

Copyright © 2026 GadgetBond. All Rights Reserved. Use of this site constitutes acceptance of our Terms of Use and Privacy Policy | Do Not Sell/Share My Personal Information.