Editorial note: At GadgetBond, we typically steer clear of overtly political content. However, when technology and gadgets, even the unconventional kind, intersect with current events, we believe it warrants our attention.
The Trump administration’s plan to lean on Google’s Gemini chatbot to help write federal transportation rules is one of those stories that sounds like satire until you read the fine print — and then it’s just unsettling.
Inside the Department of Transportation, officials have been told that generative AI won’t just summarize documents or help with boilerplate; it’s being positioned as the engine that cranks out the first drafts of actual safety regulations covering everything from airplanes to pipelines. At an internal demo in December, a department attorney described AI as having the “potential to revolutionize the way we draft rulemakings,” and staff watched as a presenter fed a topic into Gemini and got back something that looked like a Notice of Proposed Rulemaking in a matter of seconds.
The tone from the top has been even more revealing than the tech itself. Gregory Zerzan, DOT’s general counsel, reportedly told colleagues that President Donald Trump is “very excited about this initiative” and cast the department as the “point of the spear” for using AI to draft rules across the federal government. But what really stuck with staffers was his emphasis on volume over precision: “We don’t need the perfect rule… We don’t even need a very good rule,” he said, adding, “We want good enough” and describing the strategy as “flooding the zone.” In his vision, a proposed regulation could go from idea to draft ready for White House review in 30 days, because in theory “it shouldn’t take you more than 20 minutes to get a draft rule out of Gemini.”
If you work inside DOT and your day job is making sure airplanes don’t fall out of the sky or freight trains carrying toxic chemicals stay on the rails, that kind of rhetoric lands with a thud. The agency’s rules reach into almost every corner of transportation safety: aircraft certification, truck driver qualifications, hazardous materials, gas pipelines, transit systems, you name it. Staffers told reporters they were alarmed that drafting these high‑stakes rules could be offloaded to a system known for “hallucinations” — the polite term for confident but made‑up text that large language models like Gemini and ChatGPT are infamous for.
The pitch inside the building, though, has been that most of what goes into the preambles of regulations is “word salad” anyway, and word salad is exactly what Gemini is good at generating. At the demonstration, the presenter told employees that the tool could do 80 to 90 percent of the work of writing a rule, leaving humans to tweak the remaining 10 to 20 percent — essentially turning veteran rule writers into proofreaders for machine‑generated legal text. One longtime staffer summed up the implied future of their profession like this: “Our jobs would be to proofread this machine product,” and noted that the presenter seemed genuinely enthusiastic about that outcome.
This isn’t happening in isolation. Over the past few years, federal agencies have slowly woven AI into routine tasks: translating documents, sorting public comments, analyzing data sets, and even helping draft internal memos. The General Services Administration cleared big players like OpenAI, Google and Anthropic for federal use under pre‑vetted contract frameworks, a move that made it much easier for agencies to plug in commercial chatbots without starting procurement from scratch. The Trump White House has layered on a series of executive orders and policy memos urging agencies to “accelerate” AI adoption, culminating in an “AI Action Plan” that essentially told departments to find more ways to automate.
The Department of Transportation seems determined to prove it got the message. Officials have already used AI to help draft at least one still‑unpublished Federal Aviation Administration rule, according to people familiar with the effort. At an AI summit in Northern Virginia, a DOT cybersecurity division chief talked about building an “AI culture” in government and predicted a future where humans mostly oversee “AI‑to‑AI interactions” rather than do the primary work themselves. The underlying assumption is that human review can clean up whatever the model gets wrong — and that the gains in speed outweigh the new kinds of risk that come with outsourcing reasoning to a text generator.
Not everyone buys that tradeoff, especially given the specific domain we’re talking about. Mike Horton, who previously served as DOT’s acting chief AI officer, compared the idea of using Gemini to write regulations to “having a high school intern that’s doing your rulemaking.” His worry is blunt: in transportation, “going fast and breaking things means people are going to get hurt,” because mistakes in rule text can ripple out into real-world accidents, lawsuits and years of regulatory cleanup. Academic experts in administrative law say that if AI is treated as a glorified research assistant that summarizes evidence, helps brainstorm options and produces early drafts which humans heavily rework, it might save time. But turning models into de facto co-authors of binding rules could run straight into legal requirements that regulations be grounded in reasoned, explainable decision-making.
There’s also a talent story running underneath all of this. The Trump administration’s aggressive push to shrink the federal workforce has hit DOT too, with federal data showing the department has lost nearly 4,000 of its roughly 57,000 employees since Trump returned to office, including more than 100 attorneys. Consumer advocates argue that trying to plug those gaps with AI is exactly backwards: you lose subject‑matter experts who know the statutes and the engineering details, then ask a model trained on scraped text to imitate that expertise on demand. One watchdog called the plan “especially problematic” precisely because those human guardrails inside the agency are thinner now than they were a few years ago.
Step back, and you can see a broader pattern. Another Trump‑era initiative, the Department of Government Efficiency, or DOGE, has been experimenting with its own AI system designed to identify and help eliminate federal rules at scale. A leaked DOGE presentation, obtained by major outlets, laid out an ambition to cut roughly half of all federal regulations by using an AI tool that automatically drafts the paperwork needed to repeal or revise rules, with lawyers brought in mainly to edit and sign off. One version of that tool, nicknamed “SweetREX Deregulation AI,” has been scanning hundreds of thousands of regulations to flag candidates for removal, and documents suggest it has already reviewed more than 1,000 sections in some agencies in just a couple of weeks.
Put together, DOT’s Gemini plan and DOGE’s deregulation machine point to a vision of “government by AI” that goes well beyond chatbots answering citizen questions on a website. In this model, AI systems help decide what the rules should say, which rules should survive, and how fast changes can move from a policy memo to the Federal Register. Inside the White House, officials insist they’re focused on “trustworthy” and “American‑made” AI, and procurement frameworks now require agencies to at least talk about transparency and accountability when they buy these systems. But the emerging reality is that these tools are being deployed into environments — like aviation safety or pipeline oversight — where the cost of a bad output isn’t a broken app feature, it’s lives and critical infrastructure.
For now, DOT is selling Gemini internally as a way to crank through the laborious, text‑heavy parts of rulemaking, not as a replacement for human judgment. The worry among skeptics is that once the machinery is in place and the pressure for speed kicks in — from the White House, from industry, from political appointees who like talking about cutting red tape — the temptation will be to lean harder and harder on the model’s output, trusting that someone down the line will catch any mistakes. And if they don’t, the consequences won’t show up as a glitch in a document; they’ll show up as a safety rule that was “good enough” until it wasn’t.