Caitlin Kalinowski did not plan to become the face of an internal revolt. But when OpenAI quietly inked a high‑stakes deal to bring its AI systems into classified Pentagon networks, the veteran hardware and robotics leader decided she’d had enough.
Her resignation, announced in a short, sober post on X, landed like a small but sharp shock inside a company already under intense scrutiny over how far it’s willing to bend its own safety rules in exchange for government power and money. She said her decision was “about principle,” stressing that she cared deeply about the robotics team she’d helped build, but that certain red lines around military AI should have been debated far more seriously before OpenAI rushed ahead with the Pentagon.
The trigger was OpenAI’s new agreement with the U.S. Department of Defense to deploy its models inside secure, classified systems—a landmark move that effectively makes the company one of the Pentagon’s go‑to AI suppliers. CEO Sam Altman has framed the deal as compatible with OpenAI’s values, insisting there are clear red lines: no domestic mass surveillance and no fully autonomous weapons that can decide to kill without a human in the loop. On paper, those safeguards sound reassuring. In practice, Kalinowski argued, the process simply didn’t live up to the stakes.
“AI has an important role in national security,” she wrote. “But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got.” That sentence captures the split now running through the industry: many researchers aren’t against military work outright, but they don’t trust that “lawful use” and “good intentions” are enough to keep frontier AI out of the darkest corners of modern warfare.
OpenAI, for its part, is trying to project calm confidence. A company spokesperson said the Pentagon agreement “creates a workable path for responsible national security uses of AI while making clear our red lines: no domestic surveillance and no autonomous weapons.” The message is: trust us, we’ve built layered protections. According to reporting from Reuters and others, those protections include technical and contractual guardrails that are supposed to block certain use cases, even when models are running in classified environments.
But even Altman has acknowledged that the rollout was bumpy. In interviews and posts about the deal, he’s conceded it was “definitely rushed” and that “the optics don’t look good,” especially coming just hours after President Donald Trump publicly ordered federal agencies to stop using products from OpenAI’s rival Anthropic over a contract dispute with the Pentagon. One company was effectively punished for saying “no” to certain forms of military AI; another was rewarded for saying “yes, with conditions.” That contrast is exactly what makes Kalinowski’s exit feel bigger than one person leaving a job.
To understand why, you have to zoom out to the broader fight between Anthropic, OpenAI, and the Pentagon over who gets to draw the ethical boundaries for AI in war.
Anthropic had spent months telling defense officials it was on board with “all lawful uses” of AI for national security, with two big exceptions: no mass domestic surveillance of Americans, and no fully autonomous weapons systems that select and engage targets without human oversight. The Pentagon, facing pressure to move quickly and keep options open, pushed back. Officials argued they could not let a private contractor dictate how the U.S. military uses tools it buys, as long as those uses remain within the law.
That tug‑of‑war ended abruptly when Trump ordered the government to stop using Anthropic’s technology and the Pentagon labeled the company a “supply chain risk.” In the vacuum, OpenAI stepped forward. It agreed to terms that allow the Defense Department to use its models for any lawful purpose, but says it has embedded its own “red lines” and technical safeguards to keep the technology from being turned into a domestic dragnet or a fully autonomous weapon.
In other words, Anthropic tried to hard‑code limits directly into federal contracts; OpenAI is trying to encode limits into its products and internal policies instead. For people like Kalinowski, that shift—from hard legal commitments to softer corporate promises—feels like a risky downgrade.
The timing also matters. The Pentagon is in the middle of a full‑tilt AI build‑out. It has already rolled out Google’s Gemini for Government as the first major model on its GenAI.mil platform, an “AI‑first” environment meant to put generative AI on desktops across military bases worldwide. Officials say these tools will help with everything from summarizing intelligence to drafting documents and analyzing video, and they’re clear that this is just the start. Next up: more “frontier” models—exactly the kind of systems companies like OpenAI, Google, Anthropic, and xAI are racing to build.
Inside OpenAI, Kalinowski wasn’t a public‑facing executive but a builder of the physical side of AI—robots and hardware that bring large models into the real world. Her LinkedIn describes work on scaling up a robotics organization and supporting efforts that connect advanced AI with physical infrastructure and machinery. That’s the kind of work that sits right on the edge between “cool demo” and “potential battlefield asset,” which likely made the Pentagon deal feel very immediate to her.
Even as she left, Kalinowski went out of her way not to turn this into a personal feud. She wrote that her concerns were aimed at process and policy, not at specific leaders, and said she had “deep respect for Sam and the team” and was proud of what they’d built. She also hinted she’s not walking away from the field—just from this particular approach: “I’m taking a little time, but I remain very focused on building responsible physical AI.”
Still, a resignation like this sends a signal. For employees at other AI labs watching the Pentagon’s moves, it’s a live example of what happens when internal ethics collide with national‑security ambitions. At Google, at OpenAI, and at Anthropic, staff have already pushed leadership to draw firm lines around surveillance and weapons; some have signed letters, others have leaked concerns, and a few have quit. The message back from Washington has been equally clear: if a company won’t accept “any lawful use” as the baseline, there are competitors ready to step in.
That’s what makes this moment so tense. The U.S. government is betting hard that generative AI will be central to future conflict, and it wants maximum flexibility to deploy commercial systems across everything from logistics to intelligence to cyber operations. Meanwhile, the people actually building these models are looking at the same technology and seeing how easily “assistive” tools can slide into mass surveillance, automated targeting, or high‑speed decision chains that humans only rubber‑stamp after the fact.
And buried in all of this is a quiet legal gray zone. OpenAI can say its tools won’t be used for domestic mass surveillance or autonomous weapons, and it can build filters that try to block obvious abuse. But national‑security lawyers point out that “domestic” vs. “foreign,” “surveillance” vs. “intelligence collection,” or “lethal autonomy” vs. “automated targeting assistance” aren’t always bright, clean categories in U.S. law. A system that helps analysts sift through massive datasets on foreign targets might, with only minor tweaks, be turned inward. A tool labeled “decision support” can end up framing the options so thoroughly that the humans in the loop almost never override it.
That’s the gap Kalinowski is effectively pointing to: if those lines aren’t nailed down in advance—with robust guardrails, real oversight, and time for internal dissent—then the promises made in a rushed rollout don’t feel like enough. Her resignation won’t stop the Pentagon’s AI build‑out, and it won’t stop OpenAI’s models from entering classified networks. But it does put a human face on a question the industry can’t dodge much longer: who actually gets to say “no” when powerful AI meets the logic of war?
