GadgetBond
Tags: AI, Business, OpenAI, Tech

The Pentagon AI deal that OpenAI’s robotics head couldn’t accept

OpenAI’s robotics chief has walked away from the company, saying the Pentagon AI deal went ahead without the guardrails such powerful tech demands.

By Shubham Sawarkar, Editor-in-Chief
Mar 9, 2026, 11:58 AM EDT
Photo by Pau Barrena / Getty Images

Caitlin Kalinowski did not plan to become the face of an internal revolt. But when OpenAI quietly inked a high‑stakes deal to bring its AI systems into classified Pentagon networks, the veteran hardware and robotics leader decided she’d had enough.

Her resignation, announced in a short, sober post on X, landed like a small but sharp shock inside a company already under intense scrutiny over how far it’s willing to bend its own safety rules in exchange for government power and money. She said her decision was “about principle,” stressing that she cared deeply about the robotics team she’d helped build—but that certain red lines around military AI should have been debated far more seriously before OpenAI rushed ahead with the Pentagon.

The trigger was OpenAI’s new agreement with the U.S. Department of Defense to deploy its models inside secure, classified systems—a landmark move that effectively makes the company one of the Pentagon’s go‑to AI suppliers. CEO Sam Altman has framed the deal as compatible with OpenAI’s values, insisting there are clear red lines: no domestic mass surveillance and no fully autonomous weapons that can decide to kill without a human in the loop. On paper, those safeguards sound reassuring. In practice, Kalinowski argued, the process simply didn’t live up to the stakes.

“AI has an important role in national security,” she wrote. “But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got.” That sentence captures the split now running through the industry: many researchers aren’t against military work outright, but they don’t trust that “lawful use” and “good intentions” are enough to keep frontier AI out of the darkest corners of modern warfare.

OpenAI, for its part, is trying to project calm confidence. A company spokesperson said the Pentagon agreement “creates a workable path for responsible national security uses of AI while making clear our red lines: no domestic surveillance and no autonomous weapons.” The message is: trust us, we’ve built layered protections. According to reporting from Reuters and others, those protections include technical and contractual guardrails that are supposed to block certain use cases, even when models are running in classified environments.

But even Altman has acknowledged that the rollout was bumpy. In interviews and posts about the deal, he’s conceded it was “definitely rushed” and that “the optics don’t look good,” especially coming just hours after President Donald Trump publicly ordered federal agencies to stop using products from OpenAI’s rival Anthropic over a contract dispute with the Pentagon. One company was effectively punished for saying “no” to certain forms of military AI; another was rewarded for saying “yes, with conditions.” That contrast is exactly what makes Kalinowski’s exit feel bigger than one person leaving a job.

To understand why, you have to zoom out to the broader fight between Anthropic, OpenAI, and the Pentagon over who gets to draw the ethical boundaries for AI in war.

Anthropic had spent months telling defense officials it was on board with “all lawful uses” of AI for national security, with two big exceptions: no mass domestic surveillance of Americans, and no fully autonomous weapons systems that select and engage targets without human oversight. The Pentagon, facing pressure to move quickly and keep options open, pushed back. Officials argued they could not let a private contractor dictate how the U.S. military uses tools it buys, as long as those uses remain within the law.

That tug‑of‑war ended abruptly when Trump ordered the government to stop using Anthropic’s technology and the Pentagon labeled the company a “supply chain risk.” In the vacuum, OpenAI stepped forward. It agreed to terms that allow the Defense Department to use its models for any lawful purpose, but says it has embedded its own “red lines” and technical safeguards to keep the technology from being turned into a domestic dragnet or a fully autonomous weapon.

In other words, Anthropic tried to hard‑code limits directly into federal contracts; OpenAI is trying to encode limits into its products and internal policies instead. For people like Kalinowski, that shift—from hard legal commitments to softer corporate promises—feels like a risky downgrade.

The timing also matters. The Pentagon is in the middle of a full‑tilt AI build‑out. It has already rolled out Google’s Gemini for Government as the first major model on its GenAI.mil platform, an “AI‑first” environment meant to put generative AI on desktops across military bases worldwide. Officials say these tools will help with everything from summarizing intelligence to drafting documents and analyzing video, and they’re clear that this is just the start. Next up: more “frontier” models—exactly the kind of systems companies like OpenAI, Google, Anthropic, and xAI are racing to build.

Inside OpenAI, Kalinowski wasn’t a public‑facing executive but a builder of the physical side of AI—robots and hardware that bring large models into the real world. Her LinkedIn describes work on scaling up a robotics organization and supporting efforts that connect advanced AI with physical infrastructure and machinery. That’s the kind of work that sits right on the edge between “cool demo” and “potential battlefield asset,” which likely made the Pentagon deal feel very immediate to her.

Even as she left, Kalinowski went out of her way not to turn this into a personal feud. She wrote that her concerns were aimed at process and policy, not at specific leaders, and said she had “deep respect for Sam and the team” and was proud of what they’d built. She also hinted she’s not walking away from the field—just from this particular approach: “I’m taking a little time, but I remain very focused on building responsible physical AI.”

Still, a resignation like this sends a signal. For employees at other AI labs watching the Pentagon’s moves, it’s a live example of what happens when internal ethics collide with national‑security ambitions. At Google, at OpenAI, and at Anthropic, staff have already pushed leadership to draw firm lines around surveillance and weapons; some have signed letters, others have leaked concerns, and a few have quit. The message back from Washington has been equally clear: if a company won’t accept “any lawful use” as the baseline, there are competitors ready to step in.

That’s what makes this moment so tense. The U.S. government is betting hard that generative AI will be central to future conflict, and it wants maximum flexibility to deploy commercial systems across everything from logistics to intelligence to cyber operations. Meanwhile, the people actually building these models are looking at the same technology and seeing how easily “assistive” tools can slide into mass surveillance, automated targeting, or high‑speed decision chains that humans only rubber‑stamp after the fact.

And buried in all of this is a quiet legal gray zone. OpenAI can say its tools won’t be used for domestic mass surveillance or autonomous weapons, and it can build filters that try to block obvious abuse. But national‑security lawyers point out that “domestic” vs. “foreign,” “surveillance” vs. “intelligence collection,” or “lethal autonomy” vs. “automated targeting assistance” aren’t always bright, clean categories in U.S. law. A system that helps analysts sift through massive datasets on foreign targets might, with only minor tweaks, be turned inward. A tool labeled “decision support” can end up setting the options in ways humans almost never override.

That’s the gap Kalinowski is effectively pointing to: if those lines aren’t nailed down in advance—with robust guardrails, real oversight, and time for internal dissent—then the promises made in a rushed rollout don’t feel like enough. Her resignation won’t stop the Pentagon’s AI build‑out, and it won’t stop OpenAI’s models from entering classified networks. But it does put a human face on a question the industry can’t dodge much longer: who actually gets to say “no” when powerful AI meets the logic of war?


Copyright © 2026 GadgetBond. All Rights Reserved. Use of this site constitutes acceptance of our Terms of Use and Privacy Policy | Do Not Sell/Share My Personal Information.