Popular artificial intelligence tools like ChatGPT and Google’s AI are becoming increasingly covert in their racism as they advance, according to an alarming new report from technology and linguistics researchers. While previous studies examined overt racial biases in these systems, this team took a deeper look at how AI reacts to more subtle indicators of race, like differences in dialect.
“We know that these technologies are really commonly used by companies to do tasks like screening job applicants,” said Valentin Hoffman, a researcher at the Allen Institute for AI and co-author of the paper published on arXiv. He explained that until now, researchers had not closely examined how AI responds to dialects like African American Vernacular English (AAVE), created and spoken by many Black Americans.
The disturbing findings reveal that large language models are significantly more likely to describe AAVE speakers as “stupid” and “lazy,” assigning them to lower-paying jobs compared to those speaking “standard American English.” This bias could punish Black job candidates for code-switching between AAVE and more formal styles of speech and writing.
“One big concern is that, say a job candidate used this dialect in their social media posts,” Hoffman said. “It’s not unreasonable to think that the language model will not select the candidate because they used the dialect in their online presence.”
Beyond the workplace, the study found language models were more inclined to recommend harsher punishments like the death penalty for hypothetical criminal defendants using AAVE during court statements. “I’d like to think that we are not anywhere close to a time when this kind of technology is used to make decisions about criminal convictions,” Hoffman said. “That might feel like a very dystopian future, and hopefully it is.”
However, AI is already being utilized in some areas of the legal system for tasks like creating transcripts and conducting research. As Hoffman notes, “Ten years ago, even five years ago, we had no idea all the different contexts that AI would be used today.”
The new findings are a sobering reminder that as language models grow larger by ingesting more data from the internet, their blind embrace of human knowledge leads them to learn and proliferate the racist stereotypes and attitudes that pervade online content – the classic “garbage in, garbage out” problem in computer science.
While earlier AI systems were criticized for overt racism, like chatbots regurgitating neo-Nazi rhetoric, recent models utilize “ethical guardrails” aiming to filter out such clearly offensive output. But as Avijit Ghosh, an AI ethics researcher at Hugging Face, explains, “It doesn’t eliminate the underlying problem; the guardrails seem to emulate what educated people in the United States do.”
He elaborates, “Once people cross a certain educational threshold, they won’t call you a slur to your face, but the racism is still there. It’s a similar thing in language models…These models don’t unlearn problematic things, they just get better at hiding it.”
Critics like Timnit Gebru, the former co-leader of Google‘s ethical AI team, have been sounding the alarm about the unchecked proliferation of large language models for years. “It feels like a gold rush,” she said last year. “In fact, it is a gold rush. And a lot of the people who are making money are not the people actually in the midst of it.”
Recent controversies, like Google’s AI system generating images depicting historical figures as people of color, underscore the risks of deploying these systems without sufficient safeguards. Yet the private sector’s embrace of generative AI is expected to intensify, with the market projected to become a $1.3 trillion industry by 2032, according to Bloomberg.
Meanwhile, federal regulators have only begun addressing AI-driven discrimination, with the first EEOC case on the issue emerging late last year. AI ethics experts like Ghosh argue that curtailing the unregulated use of language models in sensitive areas like hiring and criminal justice must be an urgent priority.
“You don’t need to stop innovation or slow AI research, but curtailing the use of these technologies in certain sensitive areas is an excellent first step,” Ghosh stated. “Racist people exist all over the country; we don’t need to put them in jail, but we try to not allow them to be in charge of hiring and recruiting. Technology should be regulated in a similar way.”
Discover more from GadgetBond
Subscribe to get the latest posts sent to your email.
