By using this site, you agree to the Privacy Policy and Terms of Use.
Accept

GadgetBond

  • Latest
  • How-to
  • Tech
    • AI
    • Amazon
    • Apple
    • CES
    • Computing
    • Creators
    • Google
    • Meta
    • Microsoft
    • Mobile
    • Samsung
    • Security
    • Xbox
  • Transportation
    • Audi
    • BMW
    • Cadillac
    • E-Bike
    • Ferrari
    • Ford
    • Honda Prelude
    • Lamborghini
    • McLaren W1
    • Mercedes
    • Porsche
    • Rivian
    • Tesla
  • Culture
    • Apple TV
    • Disney
    • Gaming
    • Hulu
    • Marvel
    • HBO Max
    • Netflix
    • Paramount
    • SHOWTIME
    • Star Wars
    • Streaming
Add GadgetBond as a preferred source to see more of our stories on Google.
Font ResizerAa
GadgetBondGadgetBond
  • Latest
  • Tech
  • AI
  • Deals
  • How-to
  • Apps
  • Mobile
  • Gaming
  • Streaming
  • Transportation
Search
  • Latest
  • Deals
  • How-to
  • Tech
    • Amazon
    • Apple
    • CES
    • Computing
    • Creators
    • Google
    • Meta
    • Microsoft
    • Mobile
    • Samsung
    • Security
    • Xbox
  • AI
    • Anthropic
    • ChatGPT
    • ChatGPT Atlas
    • Gemini AI (formerly Bard)
    • Google DeepMind
    • Grok AI
    • Meta AI
    • Microsoft Copilot
    • OpenAI
    • Perplexity
    • xAI
  • Transportation
    • Audi
    • BMW
    • Cadillac
    • E-Bike
    • Ferrari
    • Ford
    • Honda Prelude
    • Lamborghini
    • McLaren W1
    • Mercedes
    • Porsche
    • Rivian
    • Tesla
  • Culture
    • Apple TV
    • Disney
    • Gaming
    • Hulu
    • Marvel
    • HBO Max
    • Netflix
    • Paramount
    • SHOWTIME
    • Star Wars
    • Streaming
Follow US
AIGoogleOpenAITech

AI is getting smarter – but also more racist, experts warn

Experts warn that as AI tools advance, they are acquiring deeply embedded racist attitudes and stereotypes, discriminating against speakers of Black English dialects.

By
Shubham Sawarkar
Shubham Sawarkar's avatar
ByShubham Sawarkar
Editor-in-Chief
I’m a tech enthusiast who loves exploring gadgets, trends, and innovations. With certifications in CISCO Routing & Switching and Windows Server Administration, I bring a sharp...
Follow:
- Editor-in-Chief
Mar 16, 2024, 2:36 PM EDT
Share
We may get a commission from retail offers. Learn more
The image depicts stylized letters spelling “AI (artificial intelligence)” against a dark blue background with a grid pattern. The 3D-effect letters appear to have depth and dimensionality, filled with a neon grid pattern ranging from purple to blue. The overall design exudes a retro-futuristic vibe, reminiscent of 1980s synthwave aesthetics.
Illustration by Kasia Bojanowska for DigitalOcean / Dribbble
SHARE

Popular artificial intelligence tools like ChatGPT and Google’s AI are becoming increasingly covert in their racism as they advance, according to an alarming new report from technology and linguistics researchers. While previous studies examined overt racial biases in these systems, this team took a deeper look at how AI reacts to more subtle indicators of race, like differences in dialect.

“We know that these technologies are really commonly used by companies to do tasks like screening job applicants,” said Valentin Hoffman, a researcher at the Allen Institute for AI and co-author of the paper published on arXiv. He explained that until now, researchers had not closely examined how AI responds to dialects like African American Vernacular English (AAVE), created and spoken by many Black Americans.

The disturbing findings reveal that large language models are significantly more likely to describe AAVE speakers as “stupid” and “lazy,” assigning them to lower-paying jobs compared to those speaking “standard American English.” This bias could punish Black job candidates for code-switching between AAVE and more formal styles of speech and writing.

“One big concern is that, say a job candidate used this dialect in their social media posts,” Hoffman said. “It’s not unreasonable to think that the language model will not select the candidate because they used the dialect in their online presence.”

Beyond the workplace, the study found language models were more inclined to recommend harsher punishments like the death penalty for hypothetical criminal defendants using AAVE during court statements. “I’d like to think that we are not anywhere close to a time when this kind of technology is used to make decisions about criminal convictions,” Hoffman said. “That might feel like a very dystopian future, and hopefully it is.”

However, AI is already being utilized in some areas of the legal system for tasks like creating transcripts and conducting research. As Hoffman notes, “Ten years ago, even five years ago, we had no idea all the different contexts that AI would be used today.”

The new findings are a sobering reminder that as language models grow larger by ingesting more data from the internet, their blind embrace of human knowledge leads them to learn and proliferate the racist stereotypes and attitudes that pervade online content – the classic “garbage in, garbage out” problem in computer science.

While earlier AI systems were criticized for overt racism, like chatbots regurgitating neo-Nazi rhetoric, recent models utilize “ethical guardrails” aiming to filter out such clearly offensive output. But as Avijit Ghosh, an AI ethics researcher at Hugging Face, explains, “It doesn’t eliminate the underlying problem; the guardrails seem to emulate what educated people in the United States do.”

He elaborates, “Once people cross a certain educational threshold, they won’t call you a slur to your face, but the racism is still there. It’s a similar thing in language models…These models don’t unlearn problematic things, they just get better at hiding it.”

Critics like Timnit Gebru, the former co-leader of Google‘s ethical AI team, have been sounding the alarm about the unchecked proliferation of large language models for years. “It feels like a gold rush,” she said last year. “In fact, it is a gold rush. And a lot of the people who are making money are not the people actually in the midst of it.”

Recent controversies, like Google’s AI system generating images depicting historical figures as people of color, underscore the risks of deploying these systems without sufficient safeguards. Yet the private sector’s embrace of generative AI is expected to intensify, with the market projected to become a $1.3 trillion industry by 2032, according to Bloomberg.

Meanwhile, federal regulators have only begun addressing AI-driven discrimination, with the first EEOC case on the issue emerging late last year. AI ethics experts like Ghosh argue that curtailing the unregulated use of language models in sensitive areas like hiring and criminal justice must be an urgent priority.

“You don’t need to stop innovation or slow AI research, but curtailing the use of these technologies in certain sensitive areas is an excellent first step,” Ghosh stated. “Racist people exist all over the country; we don’t need to put them in jail, but we try to not allow them to be in charge of hiring and recruiting. Technology should be regulated in a similar way.”


Discover more from GadgetBond

Subscribe to get the latest posts sent to your email.

Topic:ChatGPTGemini AI (formerly Bard)
Most Popular

ExpressVPN’s long‑term VPN plans get a massive 81 percent price cut

Apple’s portable iPad mini 7 falls to $399 in limited‑time sale

Valve warns Steam Deck OLED will be hard to buy in RAM crunch

Figma partners with Anthropic to bridge code and design

Claude Sonnet 4.6 levels up coding, agents, and computer use in one hit

Also Read
Wide desktop monitor showing the Windows 11 home screen with the Xbox PC app centered, displaying a Grounded 2 postgame recap card that highlights the recent gaming session, including playtime and achievements.

Xbox brings smart postgame recaps to the PC app for Insiders

Green “Lyria 3” wordmark centered on a soft gradient background that fades from light mint at the top to deeper green at the bottom, with a clean, minimalist design.

Google Gemini just learned how to make music with Lyria 3

Two blue Google Pixel 10a phones are shown in front of large repeated text reading ‘Smooth by design,’ with one phone displaying a blue gradient screen and the other showing the matte blue back with dual camera module and Google logo.

Google’s Pixel 10a keeps the price, upgrades the experience

Meta and NVIDIA logos on black background

Meta just became NVIDIA’s biggest AI chip power user

A side-by-side comparison showing a Google Pixel 10 Pro XL using Quick Share to successfully send a file to an iPhone, with the iPhone displaying the Android device inside its native AirDrop menu.

Pixel 9 users can now AirDrop files to iPhones and Macs

Screenshot of Google Search’s AI Mode on desktop showing a conversational query for “How can I get into curling,” with a long-form AI-generated answer on the left using headings and bullet points, and on the right a vertical carousel of website cards from multiple sources, plus a centered hover pop-up card stack highlighting individual source links and site logos over the carousel.

Google’s AI search is finally easier on publishers

Google I/O 2026 event graphic showing the Google I/O logo with a colorful gradient rectangle, slash, and circle on a black background, with the text ‘May 19–20, 2026’ and ‘io.google’ beneath.

Google I/O 2026 set for May 19–20 at Shoreline Amphitheatre

Dropdown model selector in Perplexity AI showing “Claude Sonnet 4.6 Thinking” highlighted under the “Best” section, with other options like Sonar, Gemini 3 Flash, Gemini 3 Pro, GPT‑5.2, Claude Opus 4.6, Grok 4.1, and Kimi K2.5 listed below on a light beige interface.

Claude Sonnet 4.6 lands for all Perplexity Pro and Max users

Company Info
  • Homepage
  • Support my work
  • Latest stories
  • Company updates
  • GDB Recommends
  • Daily newsletters
  • About us
  • Contact us
  • Write for us
  • Editorial guidelines
Legal
  • Privacy Policy
  • Cookies Policy
  • Terms & Conditions
  • DMCA
  • Disclaimer
  • Accessibility Policy
  • Security Policy
  • Do Not Sell or Share My Personal Information
Socials
Follow US

Disclosure: We love the products we feature and hope you’ll love them too. If you purchase through a link on our site, we may receive compensation at no additional cost to you. Read our ethics statement. Please note that pricing and availability are subject to change.

Copyright © 2026 GadgetBond. All Rights Reserved. Use of this site constitutes acceptance of our Terms of Use and Privacy Policy | Do Not Sell/Share My Personal Information.