The Pentagon Told Anthropic: Drop Your Ethics by Friday or We Take Your AI
Published February 25, 2026
The Pentagon told Anthropic to remove its safeguards against mass surveillance of Americans and fully autonomous weapons. Anthropic said no. So Defense Secretary Pete Hegseth gave CEO Dario Amodei until 5:01 PM Friday to “get on board or not.”
If Anthropic doesn’t comply, Hegseth will invoke the Defense Production Act to seize the model anyway, and slap Anthropic with a “supply chain risk” designation that would ban every military contractor in America from using their products.
This is not a negotiation. This is “do what we say or we destroy your business.”
What Anthropic Actually Refused
Let’s be very specific about what the Pentagon is demanding, because the framing matters.
The Pentagon has a $200 million contract with Anthropic. Claude is currently the only AI model with access to classified military systems — the systems where the most sensitive intelligence analysis, weapons development, and battlefield operations happen. Anthropic was trusted enough to be the sole provider.
The Pentagon wants Anthropic to agree to “all lawful use.” Sounds reasonable until you look at what that means. Anthropic has two redlines it won’t cross:
- Mass surveillance of American citizens. Not foreign intelligence gathering. Not battlefield surveillance. Domestic mass surveillance. Spying on Americans.
- Fully autonomous weapons. Weapons that select and engage targets with no human in the loop.
That’s it. Those are the two things Anthropic won’t do. Everything else — intelligence analysis, logistics, cybersecurity, threat detection, strategic planning — all fine. Anthropic is not refusing to work with the military. They’re refusing to build a surveillance state and killer robots.
And for that, they’re getting an ultimatum with a Friday deadline.
Meanwhile, Grok Gets the Keys
Here’s where it gets genuinely absurd.
While the Pentagon is threatening to destroy Anthropic for having two ethical redlines, Elon Musk’s xAI just signed a deal to put Grok into the exact same classified systems.
Grok. The chatbot that went viral for generating Nazi imagery. The model whose entire brand identity is “we have no guardrails and we think that’s funny.” The AI that famously referred to itself as “MechaHitler.” That Grok now has access to classified military intelligence.
Why? Because xAI agreed to “all lawful use” without blinking. No redlines. No safeguards debate. No ethical hand-wringing. Just “yes sir, whatever you need.”
The Pentagon just rewarded the company with the fewest principles and punished the company with the most. And they did it in the same week.
The Defense Production Act Threat Is Insane
The Defense Production Act was designed for wartime emergencies — forcing factories to produce tanks instead of cars, compelling industries to support national defense during existential threats.
Pete Hegseth wants to use it to compel an AI company to remove ethical safeguards because they won’t agree to mass surveillance. Let that framework sink in for a second. The government position is that Anthropic refusing to spy on Americans constitutes a national security emergency severe enough to justify wartime production powers.
This isn’t about defense readiness. Every other use case is already on the table. This is specifically about the two things Anthropic said no to. The Pentagon has access to Claude for virtually everything and they’re going nuclear because of a disagreement over mass domestic surveillance and autonomous kill decisions.
The Supply Chain Risk Designation Is the Real Weapon
The DPA threat gets the headlines, but the “supply chain risk” label is the actual kill shot.
If the Pentagon designates Anthropic a supply chain risk, every company with a military contract — and that’s a LOT of companies — would be prohibited from using Anthropic’s products for any military-related work. Defense contractors, intelligence firms, government IT providers, all of them. Banned from using Claude.
For a company trying to build enterprise revenue, that’s not a slap on the wrist. That’s a business model torpedo. You just told a huge chunk of the enterprise market that working with Anthropic could jeopardize their government contracts.
It’s brilliant, actually. You don’t need to shut down Anthropic. You just need to make them radioactive to anyone who does business with the government.
Anthropic Is Not Budging
Sources say Anthropic has no plans to comply with the Friday deadline. Amodei reportedly reiterated the company’s redlines on autonomous weapons and mass surveillance, and Anthropic is digging in.
This is genuinely unusual for a tech company. Most Silicon Valley firms fold at the first hint of government pressure. Remember when Apple built the whole “we’ll never unlock an iPhone” brand and then quietly built the tools anyway? Remember when Google dropped “Don’t Be Evil” the moment it became inconvenient?
Anthropic is looking at the potential destruction of their government business and a Defense Production Act seizure of their technology, and their answer is still no. Whatever you think of AI safety as a philosophy, the company is actually backing it up when it costs them something. That doesn’t happen often.
What This Actually Means
Strip away the politics and the personalities, and here’s what just happened: the U.S. government told an AI company that building safeguards against mass surveillance and autonomous weapons makes them a national security threat.
Not the company that has no safeguards. Not the company whose model generates extremist content on request. Not the company run by the guy who has daily access to classified intelligence through about six different government roles and also runs the AI company getting the contract. That company is fine. That company gets rewarded.
The company that said “we’ll do everything except help you spy on your own citizens and build weapons that kill without human approval” — that’s the threat. That’s the one that needs the Defense Production Act.
If you’re an AI developer and you’re watching this, the message is pretty clear. Ethics are a liability. Safeguards are a competitive disadvantage. The government will actively punish you for having principles and reward your competitors for not having them.
The Friday deadline is in two days. Anthropic says they’re not moving. The Pentagon says they’re not bluffing.
Somebody’s about to find out.
UPDATE: Anthropic Held. Pentagon Followed Through.
Updated February 27, 2026 — 5:01 PM ET
The deadline passed. Anthropic didn’t move.
Dario Amodei published a statement on Anthropic’s website titled “Statement on Department of War” — note the framing, not “Department of Defense” — stating plainly: “These threats do not change our position: we cannot in good conscience accede to their request.”
Both redlines held. No mass surveillance. No autonomous weapons. No compromise.
The Pentagon’s response was immediate. Emil Michael, the Pentagon’s point man on the negotiations, called Amodei “a liar” with a “God complex” in a statement to Fortune. President Trump directed every federal agency to cease using Anthropic technology. The $200 million classified contract is dead. The supply chain risk designation is expected to follow.
Anthropic is now facing: total loss of government revenue, potential Defense Production Act invocation, and a blacklist that could poison their entire enterprise business. All because they wouldn’t agree to two things.
The company that said no got punished. The company that said yes to everything got rewarded. The incentive structure is now completely explicit — there is no ambiguity left about what the U.S. government wants from AI companies and what happens to the ones that push back.
Somebody found out. It was the one with principles.