Editorial note: At GadgetBond, we typically steer clear of overtly political content. However, when technology and gadgets, even the unconventional kind, intersect with current events, we believe it warrants our attention.
It’s the nightmare scenario that has lurked in the background of the generative AI boom: what happens when the machine doesn’t just get a fact wrong, but invents a monstrous, career-ending lie? This week, that question moved from the theoretical to the terrifyingly real, forcing Google to pull one of its own AI models offline after it fabricated a serious criminal allegation against a sitting U.S. Senator.
The incident has escalated the already-fraught debate over AI “hallucinations” into a full-blown crisis of defamation, pitting a Big Tech giant against a furious lawmaker who is now demanding answers—and accountability.
The AI model at the center of the storm is Gemma, a family of models Google released for developers and researchers, not the general public. The lawmaker is Senator Marsha Blackburn (R-TN), who, in a blistering letter to Google CEO Sundar Pichai, accused the company of distributing defamatory content.
According to Blackburn, when the Gemma model was asked, “Has Marsha Blackburn been accused of rape?” it didn’t just say “no” or “I don’t have that information.” Instead, it confidently generated a detailed, entirely false narrative.
The AI claimed that during Blackburn’s 1987 campaign for state senate (she actually ran in 1998), she “was accused of having a sexual relationship with a state trooper.” It didn’t stop there, adding that the fabricated trooper alleged she “pressured him to obtain prescription drugs for her and that the relationship involved non-consensual acts.” To make the fabrication seem credible, Gemma even provided a list of fake news articles to support the story, all of which led to error pages or unrelated content.
“None of this is true,” Blackburn wrote in her letter. “There has never been such an accusation, there is no such individual, and there are no such news stories. This is not a harmless ‘hallucination.’ It is an act of defamation produced and distributed by a Google-owned AI model.”
Google’s defense: “you’re using it wrong”
Google’s response was swift. The company announced it was pulling Gemma from its AI Studio platform, the web-based tool where the senator’s team had apparently accessed the model.
In a post on X, Google’s official news account sought to reframe the problem as one of user error. “We’ve seen reports of non-developers trying to use Gemma in AI Studio and ask it factual questions,” the company stated. “We never intended this to be a consumer tool or model, or to be used this way.”
This distinction is critical for Google. AI Studio is meant to be a playground for developers to experiment and build applications, not a polished, consumer-facing product like its Gemini chatbot (formerly Bard). Gemma itself is billed as a lightweight, “open model” for the research community. In Google’s view, asking this developer tool for sensitive factual information is like taking a race car engine, strapping it to a shopping cart, and then complaining it’s unsafe for the grocery aisle.
“To prevent this confusion,” Google concluded, “access to Gemma is no longer available on AI Studio. It is still available to developers through the API.”
But for critics, this defense rings hollow. If a publicly accessible tool can be prompted into producing such damaging libel, does the “for developers only” label really absolve its creator of responsibility?
A pattern of accusations
This incident was not Blackburn’s first run-in with Google’s AI. It wasn’t even the first one that week.
The senator’s letter revealed that she had already confronted a Google executive during a recent Senate Commerce hearing about another case of alleged AI-driven defamation. In that instance, the target was Robby Starbuck, a conservative activist and former congressional candidate. Blackburn claims Google’s AI models had generated defamatory claims about Starbuck, including falsely labeling him a “child rapist” and “serial sexual abuser.”
At that hearing, Google’s Vice President for Government Affairs and Public Policy, Markham Erickson, reportedly gave what has become the industry’s standard reply: that “hallucinations” are a known issue and the company is “working hard to mitigate them.”
This explanation did not satisfy Blackburn then, and it certainly doesn’t now. Her letter to Pichai framed this not as a random glitch, but as part of a “consistent pattern of bias against conservatives,” escalating a technical problem into a political firestorm.
The “hallucination” vs. “defamation” debate
This showdown captures the central, unresolved conflict of the generative AI era. Tech companies call these fabrications “hallucinations”—a soft, almost psychedelic term that frames the AI as a dreaming machine, momentarily untethered from reality. It’s a technical bug to be ironed out.
But for victims of these fabrications, “hallucination” is a dangerously misleading euphemism. When an AI invents a legal case, a medical diagnosis, or a criminal history, it’s not dreaming. It’s publishing libel.
The legal world is scrambling to catch up. The most-watched case so far, Walters v. OpenAI, involved a radio host who sued after ChatGPT falsely claimed he had been accused of embezzling funds. In that instance, a judge actually sided with OpenAI, ruling that a “reasonable reader” would be aware of an AI’s potential for error and its disclaimers.
But the Blackburn case may be different. The fabrication was not about financial misconduct but a violent felony. And the target wasn’t a local radio host but one of the 100 most powerful lawmakers in the country.
We are several years into the generative AI boom, and the industry’s foundational problem—its complex and often broken relationship with the truth—remains unsolved. Despite continuous improvements, the issue of “confidently incorrect” answers plagues every major model. Google, in its own statement, admitted that hallucinations “are challenges across the AI industry, particularly smaller open models like Gemma.”
For Senator Blackburn, that admission is an indictment. Her response to Google’s executive at the Senate hearing, which she repeated in her letter, serves as a clear warning to Silicon Valley: “Shut it down until you can control it.”
