Anthropic Had the Best Community Advocate in AI. They Sent Lawyers. Now He Works at OpenAI.
Published February 17, 2026
Let me tell you about the most expensive trademark complaint in the history of technology.
Peter Steinberger built an open-source AI agent called Clawdbot. It ran on Claude. It used Claude’s API. People were signing up for $200/month Claude Max subscriptions specifically to run this thing. It hit 100,000 GitHub stars in under a week — one of the fastest-growing repositories in GitHub history. Mac minis sold out because developers were buying hardware to run it locally.
This was, by any reasonable definition, the best thing that had ever happened to Anthropic’s developer ecosystem.
Anthropic’s response was to send trademark lawyers after him because the name sounded too much like “Claude.”
Steinberger renamed it to Moltbot. Then renamed it again to OpenClaw. Then, on Valentine’s Day 2026, he announced he was joining OpenAI.
Sam Altman tweeted a lobster emoji.
The Timeline of an Unforced Error
Late January 2026: Clawdbot launches. It’s a free, open-source AI agent that runs locally and executes real tasks through messaging platforms — Telegram, Signal, Discord, WhatsApp. It can manage your calendar, book flights, execute shell commands, automate web browsing. Over 100 preconfigured “AgentSkills.” It remembers context across conversations. It actually does things instead of just talking about doing things.
People lose their minds. The thing goes viral. GitHub stars pile up faster than anyone has ever seen.
Early February: Anthropic’s legal department notices that “Clawdbot” sounds like “Claude” and decides this is a problem that needs solving. They send a trademark complaint.
Keep in mind: this project was driving revenue directly to Anthropic. Every OpenClaw user was a paying Claude customer, through the API or a Max subscription. Steinberger was, functionally, Anthropic’s most effective unpaid salesperson.
February 14: Steinberger publishes a blog post announcing he’s joining OpenAI. His reasoning: “I’m a builder at heart” and “teaming up with OpenAI is the fastest way to bring this to everyone.” He emphasizes that OpenClaw will remain open source and move to a foundation.
February 15: Sam Altman tweets that Steinberger is “a genius with a lot of amazing ideas about the future of very smart agents interacting with each other to do very useful things for people.” The AI community collectively realizes what just happened.
David Heinemeier Hansson, creator of Ruby on Rails, called Anthropic’s move “customer hostile.” That’s the polite version of what most people were thinking.
What Anthropic Actually Lost
Let’s count the damage.
Direct revenue: Unknown, but OpenClaw was the primary reason a non-trivial number of people were paying $200/month for Claude Max. Those users don’t have a reason to stay anymore.
Developer mindshare: OpenClaw was the proof that Claude was the best model for building autonomous agents. Every developer who tried it became a Claude advocate. That advocacy now redirects to OpenAI.
The narrative: For months, the story was “Claude is the model that serious agent builders choose.” Now the story is “the biggest agent project left Claude for OpenAI because Anthropic sent lawyers instead of partnership offers.”
The talent: Steinberger himself is now building the next generation of personal agents at OpenAI. Whatever he builds next will run on GPT, not Claude.
All because someone in legal thought a naming convention was a bigger problem than a thriving ecosystem.
The Pattern Nobody Wants to Talk About
This isn’t just an Anthropic story. It’s the same pattern that plays out every time a company gets big enough to have a legal department that operates independently of the business team.
Google did it with YouTube creators. Apple did it with App Store developers. Twitter did it with third-party clients. The legal department identifies a “risk,” sends a letter, and the business team finds out three months later when the damage is done.
The difference is that AI is moving fast enough that you can’t afford this kind of mistake. Steinberger didn’t spend two years sulking about the trademark complaint. He had a new job at the competitor within weeks. The talent market in AI is so liquid that mistreating a developer isn’t just bad PR — it’s an immediate strategic loss.
What OpenAI Gets
Sam Altman isn’t hiring Steinberger because he’s a nice guy who builds cool projects. He’s hiring him because OpenClaw proved something OpenAI has spent years trying to demonstrate: that regular people will actually use autonomous AI agents.
Altman’s announcement specifically mentioned “the future of very smart agents interacting with each other.” That’s the multi-agent thesis — not just one AI assistant, but networks of specialized agents coordinating to handle complex tasks. OpenClaw already does this. It’s not a research paper. It’s shipping software that people use every day.
The acqui-hire gives OpenAI:
- A proven agent framework that already has massive adoption
- A developer who understands what normal people actually want from AI agents (not what researchers think they should want)
- A community that produced 100,000+ GitHub stars and now has a reason to build on OpenAI’s stack
- A narrative victory over Anthropic that writes itself
Steinberger says OpenClaw will stay open source under a foundation. Maybe. The track record of “open source projects that stay independent after the creator joins a megacorp” is not great. But even if OpenClaw stays truly open, the developer’s attention and best ideas now belong to OpenAI.
The Real Lesson
Every AI company is spending billions on model training, benchmark optimization, and safety research. The thing that actually determines who wins is whether developers want to build on your platform.
Developers don’t care about your model’s MMLU score. They care about whether you’ll send lawyers after them for building something that makes your product more popular.
Anthropic makes the best model for coding. Arguably the best model, period. And they just handed their biggest community win to the competition because someone in their legal department didn’t check with the business team before sending a letter.
The technology doesn’t matter if you can’t stop your own organization from sabotaging the ecosystem that makes the technology valuable.
Steinberger’s next mission, in his own words: “Build an agent that even my mum can use.” He’ll be doing it at OpenAI now. With OpenAI’s resources. On OpenAI’s models.
Anthropic’s lawyers can put that on their resume.