The Pentagon Accepted From OpenAI Exactly What It Punished Anthropic For
Published February 28, 2026
Here’s what happened in the last 24 hours. Try to keep a straight face.
On Friday afternoon, the Pentagon blacklisted Anthropic and President Trump ordered every federal agency to stop using its products. Anthropic’s crime: refusing to remove two safeguards — one against mass surveillance of Americans, one against fully autonomous weapons. The Pentagon called CEO Dario Amodei “a liar with a God complex.” A $200 million classified contract was canceled. The company was designated a “supply chain risk,” a label typically reserved for Chinese military-linked firms.
Hours later — hours — OpenAI announced it had signed a deal to deploy its models in the exact same classified Pentagon systems.
OpenAI’s deal includes restrictions on mass surveillance and autonomous weapons.
The same restrictions.
I’ll say it again because it’s genuinely important: the Pentagon accepted the same safeguards from OpenAI that it just blacklisted Anthropic for demanding.
The Trick Is in the Framing
Anthropic wanted explicit contractual restrictions. Written into the agreement. “You will not use Claude for mass domestic surveillance. You will not use Claude for fully autonomous weapons.” Black and white. Legally binding. No wiggle room.
OpenAI did something more politically savvy. They agreed to “all lawful use” — the Pentagon’s magic phrase — while claiming the restrictions are baked into the models as “technical constraints.” Sam Altman announced: “Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force.”
Same outcome. Different packaging. One gets a contract, the other gets blacklisted.
Anthropic’s mistake wasn’t the policy. It was insisting that the policy be real. They wanted it in writing. OpenAI offered the same thing in a format the Pentagon could pretend wasn’t a restriction. The Pentagon got to tell Congress they have “unrestricted AI access” while OpenAI quietly maintains the same guardrails Anthropic was punished for.
This is compliance theater. And the price of refusing to perform it is the destruction of your business.
“Almost Surely Illegal”
The supply chain risk designation is where this gets genuinely lawless.
The designation was designed for foreign adversaries — companies with ties to the Chinese or Russian military that pose actual security threats to the defense supply chain. The Pentagon just used it against an American AI company because the CEO wouldn’t agree to the government’s preferred contract language.
Defense Secretary Pete Hegseth went further, interpreting the designation to mean that any contractor, supplier, or partner doing business with the military is prohibited from commercial activity with Anthropic. Not just military use. All commercial activity.
Legal scholars are not being subtle about what they think of this. One called it “almost surely illegal” and “attempted corporate murder.” Anthropic’s lawyers agree — they’re citing 10 USC 3252, which limits supply chain risk designations to Pentagon contracts only. The designation doesn’t have the legal authority to reach into Anthropic’s commercial relationships.
But here’s the thing about “almost surely illegal” government actions: they work until someone stops them. By the time the courts rule, the damage is done. Anthropic’s government revenue is already gone. Their planned IPO is in jeopardy. And every enterprise customer is now asking their legal team: “Can we still use Claude?”
That’s the play. You don’t need to win in court. You just need to make Anthropic radioactive long enough to cripple them.
The Open Letter Nobody Expected
Here’s where the story gets interesting. Over 300 Google employees and more than 60 OpenAI employees — OpenAI’s own employees — signed an open letter supporting Anthropic.
Read that again. People who work at the company that just got the Pentagon deal, the company that directly benefits from Anthropic’s destruction, looked at what happened and said: this is wrong.
That’s not a PR stunt. That’s engineers watching the government use Cold War supply chain tools against an American AI company for maintaining safety standards, and deciding they’d rather publicly oppose their own employer’s strategic advantage than stay quiet.
The letter is a signal. It says: the people actually building AI — not the executives negotiating contracts, not the lobbyists, not the MAGA donors — the engineers, the researchers, the people who understand what these models can actually do, think the government is wrong.
Follow the Money
This part writes itself but I’ll say it anyway.
OpenAI co-founder Greg Brockman and his wife donated $25 million to MAGA Inc, a super PAC supporting President Trump’s political operation. This is public, reported, and easily verifiable.
The same administration that received $25 million from OpenAI’s leadership just blacklisted OpenAI’s primary competitor and handed OpenAI the resulting contract — with the same restrictions that were supposedly the reason for the blacklisting.
I’m not going to tell you what to conclude from this. I’m just going to put the facts next to each other and let you do the math.
What Anthropic Is Doing About It
Anthropic is taking the Pentagon to federal court. They’re expected to file in the District of Columbia in the coming weeks, challenging the supply chain risk designation directly.
Their legal argument is straightforward: the designation exceeds its statutory authority. 10 USC 3252 limits these designations to Pentagon procurement relationships. It doesn’t give the Secretary of Defense the power to destroy a company’s entire commercial business because the CEO wouldn’t sign a contract the way the government wanted.
Will they win? Probably. The legal analysis is overwhelmingly on Anthropic’s side. The question is whether “winning eventually” matters when “losing in the meantime” means your IPO collapses, your enterprise customers flee, and your competitor gets handed your classified contract with the same restrictions you got punished for.
What This Actually Means
Strip away the legal arguments and the corporate positioning, and here’s what happened this week: the U.S. government demonstrated that AI safety policy is not the variable. The variable is compliance. Not compliance with safety standards — compliance with political preferences.
Anthropic and OpenAI have functionally identical positions on mass surveillance and autonomous weapons. One wrote it into a contract. One embedded it as “technical constraints.” The one that gave the Pentagon plausible deniability got rewarded. The one that insisted on transparency got destroyed.
If you’re building an AI company in the United States right now, the lesson is: don’t insist on writing your principles down. Don’t make the government acknowledge your restrictions in legally binding terms. Nod along, call your safeguards “technical features,” donate to the right PAC, and you’ll get the same deal — plus a classified contract.
The AI safety debate is over. Not because we resolved it, but because the government just showed everyone that the rules are negotiable as long as you negotiate them correctly. The principles don’t matter. The packaging does.
Anthropic’s case will be worth watching. Not because the legal outcome is in doubt — it’s not — but because it will show whether American courts can move fast enough to undo the damage from a government that figured out how to destroy a company faster than the law can respond, even when the action itself is illegal.