Sam Altman's Saturday Night AMA Was Supposed to Be Damage Control. It Was a Confession.

Published March 1, 2026

Saturday night, while normal people were doing normal things, Sam Altman got on X and told the internet to ask him anything about OpenAI’s new Pentagon deal.

This is what corporate executives do when the narrative is getting away from them. You show up, you look open and transparent, you control the story. It’s a good playbook. It usually works.

It did not work.

Over the next few hours, Altman managed to confirm essentially every criticism that’s been leveled at OpenAI since Friday. Not by being evasive — by being honest in ways that were apparently supposed to be reassuring.

“The Deal Was Rushed”

Altman said the Pentagon deal came together quickly, in what he described as “an attempt to de-escalate the situation.”

Let me translate that. The U.S. government blacklisted Anthropic on Friday morning. By Friday evening, OpenAI had a signed classified contract. Altman is now telling you, on the record, that this deal was put together in hours to fill the vacuum created by the government destroying his competitor.

“De-escalate.” Interesting word choice for “our competitor got kneecapped and we sprinted to the podium.”

He also acknowledged that the “optics don’t look good.” This is the corporate equivalent of your friend saying “I know how this looks” while standing in your apartment holding your TV. He knows how it looks because it looks exactly like what it is.

“Anthropic May Have Wanted More Operational Control”

This was the tell. Buried in the AMA, Altman offered this explanation for why Anthropic’s deal collapsed and OpenAI’s succeeded: “I think Anthropic may have wanted more operational control than we did.”

Read that carefully. He’s not saying Anthropic’s safety principles were different. He’s saying Anthropic wanted those principles to be enforceable. Operational control. Binding terms. The ability to actually prevent misuse, not just express concern about it.

OpenAI’s approach — the one the Pentagon approved — is to state the same principles but without the operational teeth. “Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force,” Altman wrote. Beautiful sentence. Means nothing if the Pentagon can override it whenever they decide “technology evolves.”

Which brings us to the part that should have stayed unsaid.

“May Change as Technology Evolves”

Altman described his three safety redlines and then added that they “may change as technology evolves.”

This is the entire game in one sentence.

Anthropic demanded permanent restrictions, written into the contract and not subject to renegotiation when the political winds shift or when the models get more capable. The safeguards would hold whether or not the Pentagon liked them, whether or not the White House changed hands, whether or not the technology advanced.

OpenAI offered restrictions that explicitly have an expiration condition. They hold for now. They may not hold later. “As technology evolves” is a door left open, and everyone involved knows it.

The Pentagon blacklisted Anthropic for demanding permanent safeguards and rewarded OpenAI for offering temporary ones. Altman just told you that himself. In an AMA. On a Saturday night. While apparently thinking this would make people feel better.

Claude Hit #1 on the App Store

Here’s the part the Pentagon didn’t anticipate.

While Altman was explaining to X why everything is fine, Claude — Anthropic’s AI model, the one the government just designated a supply chain risk — climbed to the top of the iOS App Store. By Saturday, Claude, ChatGPT, and Gemini held the top three spots. The company the government tried to destroy saw a surge in consumer adoption.

The Streisand effect is a well-documented phenomenon. Telling people they can’t have something makes them want it more. Telling people an AI company is “dangerous” because it refused to build surveillance tools makes people think: maybe that’s the AI company I should trust.

Anthropic’s downloads aren’t going to save them from the supply chain risk designation. But they do suggest that the government’s attempt to make Anthropic radioactive has had the opposite effect on consumers. It turns out that “the AI company the Pentagon blacklisted for having ethics” is actually a pretty compelling brand.

The Open Letter, Revisited

Over 300 Google employees and more than 60 OpenAI employees signed an open letter supporting Anthropic. I covered this in the previous article, but it’s worth revisiting in the context of the AMA.

Sam Altman went on X to defend a deal that his own employees publicly oppose. Not privately grumbling — publicly, in writing, putting their names on a document that says the government is wrong and their competitor is right. That is an extraordinary thing to happen at any company, let alone one that just received a classified military contract.

Altman didn’t address the open letter in the AMA. Smart move. There’s no good answer to “why did 60 of your own people publicly side with Anthropic?” that doesn’t sound terrible.

What the AMA Actually Revealed

Here’s what Sam Altman told us on Saturday night, stripped of the PR framing:

  1. The deal was rushed. It came together in hours to fill the hole created by Anthropic’s blacklisting. This wasn’t careful deliberation. It was opportunism at speed.

  2. The optics are bad and he knows it. Getting the contract your competitor was destroyed for, the same day they were destroyed, with the same restrictions they were destroyed for demanding — he’s aware of how this looks.

  3. Anthropic’s sin was enforcement. “Operational control” — the ability to actually ensure the restrictions work. OpenAI offered the same restrictions without the enforcement mechanism. That’s the difference the Pentagon was willing to pay for.

  4. The safeguards are temporary. “May change as technology evolves” means these restrictions last exactly as long as OpenAI decides they should. Given that OpenAI is a company that went from “capped-profit nonprofit” to “for-profit corporation valued at $300 billion” in the span of two years, you can decide for yourself how long their voluntary safety commitments will hold.

The Trilogy Nobody Wanted

This is now the third article I’ve written about this story in four days. Not because I planned a series, but because the situation keeps getting worse in ways that demand documentation.

First: the Pentagon told Anthropic to drop their ethics by Friday or lose the contract. They did.

Then: the Pentagon accepted from OpenAI exactly what it punished Anthropic for. With a $25 million MAGA Inc donation sitting in the background like an elephant in the room.

Now: the CEO of the company that benefited from all of this went on X and accidentally confirmed the whole thing. The deal was rushed. The restrictions are temporary. The difference between getting blacklisted and getting a contract was never about safety — it was about how loudly you insist on it.

Anthropic’s lawsuit is coming. The legal analysis says they’ll probably win. But by the time the courts rule, the precedent will be set: AI companies that insist on binding, permanent safety restrictions will be punished. AI companies that offer the same restrictions in a non-binding, temporary format will be rewarded. And the CEO of the winning company will go on X to explain why this is actually fine.

He did that on Saturday night. It was not fine. But the deal is signed, the contract is classified, and the safeguards “may change as technology evolves.”

Sleep well.