GadgetBond

AI | Business | OpenAI | Tech

The Pentagon AI deal that OpenAI’s robotics head couldn’t accept

OpenAI’s robotics chief has walked away from the company, saying the Pentagon AI deal went ahead without the guardrails such powerful tech demands.

By Shubham Sawarkar, Editor-in-Chief
Mar 9, 2026, 11:58 AM EDT
Photo by Pau Barrena / Getty Images

Caitlin Kalinowski did not plan to become the face of an internal revolt. But when OpenAI quietly inked a high‑stakes deal to bring its AI systems into classified Pentagon networks, the veteran hardware and robotics leader decided she’d had enough.

Her resignation, announced in a short, sober post on X, landed like a small but sharp shock inside a company already under intense scrutiny over how far it’s willing to bend its own safety rules in exchange for government power and money. She said her departure was “about principle,” stressing that she cared deeply about the robotics team she’d helped build—but that certain red lines around military AI should have been debated far more seriously before OpenAI rushed ahead with the Pentagon.

The trigger was OpenAI’s new agreement with the U.S. Department of Defense to deploy its models inside secure, classified systems—a landmark move that effectively makes the company one of the Pentagon’s go‑to AI suppliers. CEO Sam Altman has framed the deal as compatible with OpenAI’s values, insisting there are clear red lines: no domestic mass surveillance and no fully autonomous weapons that can decide to kill without a human in the loop. On paper, those safeguards sound reassuring. In practice, Kalinowski argued, the process simply didn’t live up to the stakes.

“AI has an important role in national security,” she wrote. “But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got.” That sentence captures the split now running through the industry: many researchers aren’t against military work outright, but they don’t trust that “lawful use” and “good intentions” are enough to keep frontier AI out of the darkest corners of modern warfare.

OpenAI, for its part, is trying to project calm confidence. A company spokesperson said the Pentagon agreement “creates a workable path for responsible national security uses of AI while making clear our red lines: no domestic surveillance and no autonomous weapons.” The message is: trust us, we’ve built layered protections. According to reporting from Reuters and others, those protections include technical and contractual guardrails that are supposed to block certain use cases, even when models are running in classified environments.

But even Altman has acknowledged that the rollout was bumpy. In interviews and posts about the deal, he’s conceded it was “definitely rushed” and that “the optics don’t look good,” especially coming just hours after President Donald Trump publicly ordered federal agencies to stop using products from OpenAI’s rival Anthropic over a contract dispute with the Pentagon. One company was effectively punished for saying “no” to certain forms of military AI; another was rewarded for saying “yes, with conditions.” That contrast is exactly what makes Kalinowski’s exit feel bigger than one person leaving a job.

To understand why, you have to zoom out to the broader fight between Anthropic, OpenAI, and the Pentagon over who gets to draw the ethical boundaries for AI in war.

Anthropic had spent months telling defense officials it was on board with “all lawful uses” of AI for national security, with two big exceptions: no mass domestic surveillance of Americans, and no fully autonomous weapons systems that select and engage targets without human oversight. The Pentagon, facing pressure to move quickly and keep options open, pushed back. Officials argued they could not let a private contractor dictate how the U.S. military uses tools it buys, as long as those uses remain within the law.

That tug‑of‑war ended abruptly when Trump ordered the government to stop using Anthropic’s technology and the Pentagon labeled the company a “supply chain risk.” In the vacuum, OpenAI stepped forward. It agreed to terms that allow the Defense Department to use its models for any lawful purpose, but says it has embedded its own “red lines” and technical safeguards to keep the technology from being turned into a domestic dragnet or a fully autonomous weapon.

In other words, Anthropic tried to hard‑code limits directly into federal contracts; OpenAI is trying to encode limits into its products and internal policies instead. For people like Kalinowski, that shift—from hard legal commitments to softer corporate promises—feels like a risky downgrade.

The timing also matters. The Pentagon is in the middle of a full‑tilt AI build‑out. It has already rolled out Google’s Gemini for Government as the first major model on its GenAI.mil platform, an “AI‑first” environment meant to put generative AI on desktops across military bases worldwide. Officials say these tools will help with everything from summarizing intelligence to drafting documents and analyzing video, and they’re clear that this is just the start. Next up: more “frontier” models—exactly the kind of systems companies like OpenAI, Google, Anthropic, and xAI are racing to build.

Inside OpenAI, Kalinowski wasn’t a public‑facing executive but a builder of the physical side of AI—robots and hardware that bring large models into the real world. Her LinkedIn describes work on scaling up a robotics organization and supporting efforts that connect advanced AI with physical infrastructure and machinery. That’s the kind of work that sits right on the edge between “cool demo” and “potential battlefield asset,” which likely made the Pentagon deal feel very immediate to her.

Even as she left, Kalinowski went out of her way not to turn this into a personal feud. She wrote that her concerns were aimed at process and policy, not at specific leaders, and said she had “deep respect for Sam and the team” and was proud of what they’d built. She also hinted she’s not walking away from the field—just from this particular approach: “I’m taking a little time, but I remain very focused on building responsible physical AI.”

Still, a resignation like this sends a signal. For employees at other AI labs watching the Pentagon’s moves, it’s a live example of what happens when internal ethics collide with national‑security ambitions. At Google, at OpenAI, and at Anthropic, staff have already pushed leadership to draw firm lines around surveillance and weapons; some have signed letters, others have leaked concerns, and a few have quit. The message back from Washington has been equally clear: if a company won’t accept “any lawful use” as the baseline, there are competitors ready to step in.

That’s what makes this moment so tense. The U.S. government is betting hard that generative AI will be central to future conflict, and it wants maximum flexibility to deploy commercial systems across everything from logistics to intelligence to cyber operations. Meanwhile, the people actually building these models are looking at the same technology and seeing how easily “assistive” tools can slide into mass surveillance, automated targeting, or high‑speed decision chains that humans only rubber‑stamp after the fact.

And buried in all of this is a quiet legal gray zone. OpenAI can say its tools won’t be used for domestic mass surveillance or autonomous weapons, and it can build filters that try to block obvious abuse. But national‑security lawyers point out that “domestic” vs. “foreign,” “surveillance” vs. “intelligence collection,” or “lethal autonomy” vs. “automated targeting assistance” aren’t always bright, clean categories in U.S. law. A system that helps analysts sift through massive datasets on foreign targets might, with only minor tweaks, be turned inward. A tool labeled “decision support” can end up setting the options in ways humans almost never override.

That’s the gap Kalinowski is effectively pointing to: if those lines aren’t nailed down in advance—with robust guardrails, real oversight, and time for internal dissent—then the promises made in a rushed rollout don’t feel like enough. Her resignation won’t stop the Pentagon’s AI build‑out, and it won’t stop OpenAI’s models from entering classified networks. But it does put a human face on a question the industry can’t dodge much longer: who actually gets to say “no” when powerful AI meets the logic of war?

