GadgetBond
Google Gemini may help draft U.S. transportation safety rules

“Good enough” rules may carry real-world consequences.

By Shubham Sawarkar, Editor-in-Chief
Jan 26, 2026, 9:00 PM EST

Image: humanoid head against a futuristic background, an artificial intelligence concept. Credit: jvphoto / Alamy

Editorial note: At GadgetBond, we typically steer clear of overtly political content. However, when technology and gadgets, even the unconventional kind, intersect with current events, we believe it warrants our attention.


The Trump administration’s plan to lean on Google’s Gemini chatbot to help write federal transportation rules is one of those stories that sounds like satire until you read the fine print — and then it’s just unsettling.

Inside the Department of Transportation, officials have been told that generative AI won’t just summarize documents or help with boilerplate; it’s being positioned as the engine that cranks out the first drafts of actual safety regulations covering everything from airplanes to pipelines. At an internal demo in December, a department attorney described AI as having the “potential to revolutionize the way we draft rulemakings,” and staff watched as a presenter fed a topic into Gemini and got back something that looked like a Notice of Proposed Rulemaking in a matter of seconds.

The tone from the top has been even more revealing than the tech itself. Gregory Zerzan, DOT’s general counsel, reportedly told colleagues that President Donald Trump is “very excited about this initiative” and cast the department as the “point of the spear” for using AI to draft rules across the federal government. But what really stuck with staffers was his emphasis on volume over precision: “We don’t need the perfect rule… We don’t even need a very good rule,” he said, adding, “We want good enough” and describing the strategy as “flooding the zone.” In his vision, a proposed regulation could go from idea to draft ready for White House review in 30 days, because in theory “it shouldn’t take you more than 20 minutes to get a draft rule out of Gemini.”

If you work inside DOT and your day job is making sure airplanes don’t fall out of the sky or freight trains carrying toxic chemicals stay on the rails, that kind of rhetoric lands with a thud. The agency’s rules reach into almost every corner of transportation safety: aircraft certification, truck driver qualifications, hazardous materials, gas pipelines, transit systems, you name it. Staffers told reporters they were alarmed that drafting these high‑stakes rules could be offloaded to a system known for “hallucinations” — the polite term for confident but made‑up text that large language models like Gemini and ChatGPT are infamous for.

The pitch inside the building, though, has been that most of what goes into the preambles of regulations is “word salad” anyway, and word salad is exactly what Gemini is good at generating. At the demonstration, the presenter told employees that the tool could do 80 to 90 percent of the work of writing a rule, leaving humans to tweak the remaining 10 to 20 percent — essentially turning veteran rule writers into proofreaders for machine‑generated legal text. One longtime staffer summed up the implied future of their profession like this: “Our jobs would be to proofread this machine product,” and noted that the presenter seemed genuinely enthusiastic about that outcome.

This isn’t happening in isolation. Over the past few years, federal agencies have slowly woven AI into routine tasks: translating documents, sorting public comments, analyzing data sets, and even helping draft internal memos. The General Services Administration cleared big players like OpenAI, Google and Anthropic for federal use under pre‑vetted contract frameworks, a move that made it much easier for agencies to plug in commercial chatbots without starting procurement from scratch. The Trump White House has layered on a series of executive orders and policy memos urging agencies to “accelerate” AI adoption, culminating in an “AI Action Plan” that essentially told departments to find more ways to automate.

The Department of Transportation seems determined to prove it got the message. Officials have already used AI to help draft at least one still‑unpublished Federal Aviation Administration rule, according to people familiar with the effort. At an AI summit in Northern Virginia, a DOT cybersecurity division chief talked about building an “AI culture” in government and predicted a future where humans mostly oversee “AI‑to‑AI interactions” rather than do the primary work themselves. The underlying assumption is that human review can clean up whatever the model gets wrong — and that the gains in speed outweigh the new kinds of risk that come with outsourcing reasoning to a text generator.

Not everyone buys that tradeoff, especially given the specific domain we’re talking about. Mike Horton, who previously served as DOT’s acting chief AI officer, compared the idea of using Gemini to write regulations to “having a high school intern that’s doing your rulemaking.” His worry is blunt: in transportation, “going fast and breaking things means people are going to get hurt,” because mistakes in rule text can ripple out into real‑world accidents, lawsuits and years of regulatory cleanup. Academic experts in administrative law say that if AI is treated as a glorified research assistant — summarizing evidence, helping brainstorm options, generating early drafts that are heavily reworked — it might save time. But turning models into de facto co‑authors of binding rules could run straight into legal requirements that regulations be grounded in reasoned, explainable decision‑making.

There’s also a talent story running underneath all of this. The Trump administration’s aggressive push to shrink the federal workforce has hit DOT too, with federal data showing the department has lost nearly 4,000 of its roughly 57,000 employees since Trump returned to office, including more than 100 attorneys. Consumer advocates argue that trying to plug those gaps with AI is exactly backwards: you lose subject‑matter experts who know the statutes and the engineering details, then ask a model trained on scraped text to imitate that expertise on demand. One watchdog called the plan “especially problematic” precisely because those human guardrails inside the agency are thinner now than they were a few years ago.

Step back, and you can see a broader pattern. Another Trump‑era initiative, the Department of Government Efficiency, or DOGE, has been experimenting with its own AI system designed to identify and help eliminate federal rules at scale. A leaked DOGE presentation, obtained by major outlets, laid out an ambition to cut roughly half of all federal regulations by using an AI tool that automatically drafts the paperwork needed to repeal or revise rules, with lawyers brought in mainly to edit and sign off. One version of that tool, nicknamed “SweetREX Deregulation AI,” has been scanning hundreds of thousands of regulations to flag candidates for removal, and documents suggest it has already reviewed more than 1,000 sections in some agencies in just a couple of weeks.

Put together, DOT’s Gemini plan and DOGE’s deregulation machine point to a vision of “government by AI” that goes well beyond chatbots answering citizen questions on a website. In this model, AI systems help decide what the rules should say, which rules should survive, and how fast changes can move from a policy memo to the Federal Register. Inside the White House, officials insist they’re focused on “trustworthy” and “American‑made” AI, and procurement frameworks now require agencies to at least talk about transparency and accountability when they buy these systems. But the emerging reality is that these tools are being deployed into environments — like aviation safety or pipeline oversight — where the cost of a bad output isn’t a broken app feature, it’s lives and critical infrastructure.

For now, DOT is selling Gemini internally as a way to crank through the laborious, text‑heavy parts of rulemaking, not as a replacement for human judgment. The worry among skeptics is that once the machinery is in place and the pressure for speed kicks in — from the White House, from industry, from political appointees who like talking about cutting red tape — the temptation will be to lean harder and harder on the model’s output, trusting that someone down the line will catch any mistakes. And if they don’t, the consequences won’t show up as a glitch in a document; they’ll show up as a safety rule that was “good enough” until it wasn’t.


Copyright © 2026 GadgetBond. All Rights Reserved.