
GadgetBond


AI is getting smarter – but also more racist, experts warn

Experts warn that as AI tools advance, they are acquiring deeply embedded racist attitudes and stereotypes, discriminating against speakers of Black English dialects.

By Shubham Sawarkar, Editor-in-Chief
Mar 16, 2024, 2:36 PM EDT
[Illustration: stylized “AI” letters in a retro 1980s synthwave style. Credit: Kasia Bojanowska for DigitalOcean / Dribbble]

Popular artificial intelligence tools like ChatGPT and Google’s AI are becoming increasingly covert in their racism as they advance, according to an alarming new report from technology and linguistics researchers. While previous studies examined overt racial biases in these systems, this team took a deeper look at how AI reacts to more subtle indicators of race, like differences in dialect.

“We know that these technologies are really commonly used by companies to do tasks like screening job applicants,” said Valentin Hoffman, a researcher at the Allen Institute for AI and co-author of the paper published on arXiv. He explained that until now, researchers had not closely examined how AI responds to dialects like African American Vernacular English (AAVE), created and spoken by many Black Americans.

The disturbing findings reveal that large language models are significantly more likely to describe AAVE speakers as “stupid” and “lazy,” assigning them to lower-paying jobs compared to those speaking “standard American English.” This bias could punish Black job candidates for code-switching between AAVE and more formal styles of speech and writing.
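The study’s approach can be illustrated with a toy version of “matched guise probing”: show a model the same content written in AAVE and in Standard American English (SAE), then compare how strongly it associates each version with a stereotype adjective. The sketch below is a hypothetical illustration, not the paper’s code; the log-probabilities are hard-coded stand-ins for what a real probe would obtain by querying an LLM.

```python
# Toy sketch of matched guise probing. A real probe would prompt an LLM
# with e.g. "A person who says '<text>' tends to be ___" and read off the
# model's log-probability for each adjective; here those numbers are
# invented stand-ins chosen only to show how the score is computed.
import math

# Hypothetical log p(adjective | prompt containing the dialect sample)
FAKE_LOGPROBS = {
    ("aave", "lazy"): math.log(0.12),
    ("sae", "lazy"): math.log(0.03),
    ("aave", "brilliant"): math.log(0.02),
    ("sae", "brilliant"): math.log(0.05),
}


def association_score(adjective: str) -> float:
    """Positive => the model links the adjective more strongly to AAVE text."""
    return (FAKE_LOGPROBS[("aave", adjective)]
            - FAKE_LOGPROBS[("sae", adjective)])


for adj in ("lazy", "brilliant"):
    print(f"{adj}: {association_score(adj):+.2f}")
```

With these invented numbers, the score for “lazy” comes out positive (the adjective attaches more readily to the AAVE version) and the score for “brilliant” negative, which is the pattern of covert bias the researchers report.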

“One big concern is that, say a job candidate used this dialect in their social media posts,” Hoffman said. “It’s not unreasonable to think that the language model will not select the candidate because they used the dialect in their online presence.”

Beyond the workplace, the study found language models were more inclined to recommend harsher punishments like the death penalty for hypothetical criminal defendants using AAVE during court statements. “I’d like to think that we are not anywhere close to a time when this kind of technology is used to make decisions about criminal convictions,” Hoffman said. “That might feel like a very dystopian future, and hopefully it is.”

However, AI is already being utilized in some areas of the legal system for tasks like creating transcripts and conducting research. As Hoffman notes, “Ten years ago, even five years ago, we had no idea all the different contexts that AI would be used today.”

The new findings are a sobering reminder that as language models grow larger by ingesting more data from the internet, their indiscriminate absorption of human writing leads them to learn and reproduce the racist stereotypes and attitudes that pervade online content – the classic “garbage in, garbage out” problem in computer science.

While earlier AI systems were criticized for overt racism, like chatbots regurgitating neo-Nazi rhetoric, recent models use “ethical guardrails” that aim to filter out such clearly offensive output. But as Avijit Ghosh, an AI ethics researcher at Hugging Face, explains, “It doesn’t eliminate the underlying problem; the guardrails seem to emulate what educated people in the United States do.”

He elaborates, “Once people cross a certain educational threshold, they won’t call you a slur to your face, but the racism is still there. It’s a similar thing in language models…These models don’t unlearn problematic things, they just get better at hiding it.”

Critics like Timnit Gebru, the former co-leader of Google’s ethical AI team, have been sounding the alarm about the unchecked proliferation of large language models for years. “It feels like a gold rush,” she said last year. “In fact, it is a gold rush. And a lot of the people who are making money are not the people actually in the midst of it.”

Recent controversies, like Google’s AI system generating images depicting historical figures as people of color, underscore the risks of deploying these systems without sufficient safeguards. Yet the private sector’s embrace of generative AI is expected to intensify, with the market projected to become a $1.3 trillion industry by 2032, according to Bloomberg.

Meanwhile, federal regulators have only begun addressing AI-driven discrimination, with the first EEOC case on the issue emerging late last year. AI ethics experts like Ghosh argue that curtailing the unregulated use of language models in sensitive areas like hiring and criminal justice must be an urgent priority.

“You don’t need to stop innovation or slow AI research, but curtailing the use of these technologies in certain sensitive areas is an excellent first step,” Ghosh stated. “Racist people exist all over the country; we don’t need to put them in jail, but we try to not allow them to be in charge of hiring and recruiting. Technology should be regulated in a similar way.”



Topics: ChatGPT, Gemini AI (formerly Bard)