
GadgetBond

AI · Tech

Google Gemini may help draft U.S. transportation safety rules

“Good enough” rules may carry real-world consequences.

By Shubham Sawarkar, Editor-in-Chief
Jan 26, 2026, 9:00 PM EST
Image: humanoid head and futuristic background, artificial intelligence concept (jvphoto / Alamy)

Editorial note: At GadgetBond, we typically steer clear of overtly political content. However, when technology and gadgets, even the unconventional kind, intersect with current events, we believe it warrants our attention.


The Trump administration’s plan to lean on Google’s Gemini chatbot to help write federal transportation rules is one of those stories that sounds like satire until you read the fine print — and then it’s just unsettling.

Inside the Department of Transportation, officials have been told that generative AI won’t just summarize documents or help with boilerplate; it’s being positioned as the engine that cranks out the first drafts of actual safety regulations covering everything from airplanes to pipelines. At an internal demo in December, a department attorney described AI as having the “potential to revolutionize the way we draft rulemakings,” and staff watched as a presenter fed a topic into Gemini and got back something that looked like a Notice of Proposed Rulemaking in a matter of seconds.

The tone from the top has been even more revealing than the tech itself. Gregory Zerzan, DOT’s general counsel, reportedly told colleagues that President Donald Trump is “very excited about this initiative” and cast the department as the “point of the spear” for using AI to draft rules across the federal government. But what really stuck with staffers was his emphasis on volume over precision: “We don’t need the perfect rule… We don’t even need a very good rule,” he said, adding, “We want good enough” and describing the strategy as “flooding the zone.” In his vision, a proposed regulation could go from idea to draft ready for White House review in 30 days, because in theory “it shouldn’t take you more than 20 minutes to get a draft rule out of Gemini.”

If you work inside DOT and your day job is making sure airplanes don’t fall out of the sky or freight trains carrying toxic chemicals stay on the rails, that kind of rhetoric lands with a thud. The agency’s rules reach into almost every corner of transportation safety: aircraft certification, truck driver qualifications, hazardous materials, gas pipelines, transit systems, you name it. Staffers told reporters they were alarmed that drafting these high‑stakes rules could be offloaded to a system known for “hallucinations” — the polite term for confident but made‑up text that large language models like Gemini and ChatGPT are infamous for.

The pitch inside the building, though, has been that most of what goes into the preambles of regulations is “word salad” anyway, and word salad is exactly what Gemini is good at generating. At the demonstration, the presenter told employees that the tool could do 80 to 90 percent of the work of writing a rule, leaving humans to tweak the remaining 10 to 20 percent — essentially turning veteran rule writers into proofreaders for machine‑generated legal text. One longtime staffer summed up the implied future of their profession like this: “Our jobs would be to proofread this machine product,” and noted that the presenter seemed genuinely enthusiastic about that outcome.

This isn’t happening in isolation. Over the past few years, federal agencies have slowly woven AI into routine tasks: translating documents, sorting public comments, analyzing data sets, and even helping draft internal memos. The General Services Administration cleared big players like OpenAI, Google and Anthropic for federal use under pre‑vetted contract frameworks, a move that made it much easier for agencies to plug in commercial chatbots without starting procurement from scratch. The Trump White House has layered on a series of executive orders and policy memos urging agencies to “accelerate” AI adoption, culminating in an “AI Action Plan” that essentially told departments to find more ways to automate.

The Department of Transportation seems determined to prove it got the message. Officials have already used AI to help draft at least one still‑unpublished Federal Aviation Administration rule, according to people familiar with the effort. At an AI summit in Northern Virginia, a DOT cybersecurity division chief talked about building an “AI culture” in government and predicted a future where humans mostly oversee “AI‑to‑AI interactions” rather than do the primary work themselves. The underlying assumption is that human review can clean up whatever the model gets wrong — and that the gains in speed outweigh the new kinds of risk that come with outsourcing reasoning to a text generator.

Not everyone buys that tradeoff, especially given the specific domain we’re talking about. Mike Horton, who previously served as DOT’s acting chief AI officer, compared the idea of using Gemini to write regulations to “having a high school intern that’s doing your rulemaking.” His worry is blunt: in transportation, “going fast and breaking things means people are going to get hurt,” because mistakes in rule text can ripple out into real‑world accidents, lawsuits and years of regulatory cleanup. Academic experts in administrative law say that if AI is treated as a glorified research assistant — summarizing evidence, helping brainstorm options, generating early drafts that are heavily reworked — it might save time, but turning models into de facto co‑authors of binding rules could run straight into legal requirements that regulations be grounded in reasoned, explainable decision‑making.

There’s also a talent story running underneath all of this. The Trump administration’s aggressive push to shrink the federal workforce has hit DOT too, with federal data showing the department has lost nearly 4,000 of its roughly 57,000 employees since Trump returned to office, including more than 100 attorneys. Consumer advocates argue that trying to plug those gaps with AI is exactly backwards: you lose subject‑matter experts who know the statutes and the engineering details, then ask a model trained on scraped text to imitate that expertise on demand. One watchdog called the plan “especially problematic” precisely because those human guardrails inside the agency are thinner now than they were a few years ago.

Step back, and you can see a broader pattern. Another Trump‑era initiative, the Department of Government Efficiency, or DOGE, has been experimenting with its own AI system designed to identify and help eliminate federal rules at scale. A leaked DOGE presentation, obtained by major outlets, laid out an ambition to cut roughly half of all federal regulations by using an AI tool that automatically drafts the paperwork needed to repeal or revise rules, with lawyers brought in mainly to edit and sign off. One version of that tool, nicknamed “SweetREX Deregulation AI,” has been scanning hundreds of thousands of regulations to flag candidates for removal, and documents suggest it has already reviewed more than 1,000 sections in some agencies in just a couple of weeks.

Put together, DOT’s Gemini plan and DOGE’s deregulation machine point to a vision of “government by AI” that goes well beyond chatbots answering citizen questions on a website. In this model, AI systems help decide what the rules should say, which rules should survive, and how fast changes can move from a policy memo to the Federal Register. Inside the White House, officials insist they’re focused on “trustworthy” and “American‑made” AI, and procurement frameworks now require agencies to at least talk about transparency and accountability when they buy these systems. But the emerging reality is that these tools are being deployed into environments — like aviation safety or pipeline oversight — where the cost of a bad output isn’t a broken app feature, it’s lives and critical infrastructure.

For now, DOT is selling Gemini internally as a way to crank through the laborious, text‑heavy parts of rulemaking, not as a replacement for human judgment. The worry among skeptics is that once the machinery is in place and the pressure for speed kicks in — from the White House, from industry, from political appointees who like talking about cutting red tape — the temptation will be to lean harder and harder on the model’s output, trusting that someone down the line will catch any mistakes. And if they don’t, the consequences won’t show up as a glitch in a document; they’ll show up as a safety rule that was “good enough” until it wasn’t.


Disclosure: We love the products we feature and hope you’ll love them too. If you purchase through a link on our site, we may receive compensation at no additional cost to you. Read our ethics statement. Please note that pricing and availability are subject to change.

Copyright © 2025 GadgetBond. All Rights Reserved. Use of this site constitutes acceptance of our Terms of Use and Privacy Policy | Do Not Sell/Share My Personal Information.