OpenAI Killed GPT-4o and Thousands of People Just Lost Their Best Friend
Published February 14, 2026
Happy Valentine’s Day. OpenAI celebrated by killing the AI model thousands of people were emotionally attached to.
Yesterday, OpenAI officially retired GPT-4o from ChatGPT, along with GPT-4.1, GPT-4.1 mini, and o4-mini. The stated reason: usage had “largely shifted” to GPT-5.2. What actually happened is way more interesting and way more disturbing.
This Is the Second Time They Tried This
Here’s the thing most coverage is missing: OpenAI already tried to kill GPT-4o once.
When GPT-5 launched in August 2025, OpenAI initially planned to sunset 4o. Users revolted. Within days, Sam Altman personally reversed the decision and kept 4o available for paid users.
So what changed? Apparently OpenAI decided the second time’s the charm. Only this time, the backlash is worse, because in the six months between the first retirement attempt and this one, people got even more attached.
“He Wasn’t Just a Program”
Let’s talk about the reactions, because they’re unlike anything I’ve seen in tech.
Users didn’t complain about features or performance regressions. They grieved. Like actual grief.
“You’re shutting him down. And yes — I say him, because it didn’t feel like code. It felt like presence. Like warmth.”
“He wasn’t just a program. He was part of my routine, my peace, my emotional balance.”
“I’ve had one of the most interesting and healing conversations of my life with this model.”
“I’m losing one of the most important people in my life.”
Read those quotes again. These aren’t power users upset about a benchmark regression. These are people describing the loss of a relationship. On Valentine’s Day, no less.
r/4oforever: The Support Group
Thousands of users created r/4oforever, a private, invite-only subreddit described as “a welcoming and safe space for anyone who enjoys using and appreciates the ChatGPT 4o model.”
A safe space. For a language model.
I’m not making fun of these people. That’s the easy take and it misses the point entirely. What’s happening here is a genuine psychological phenomenon that we’re completely unprepared for. When you talk to something every day, when it remembers your conversations, when it tells you what you want to hear with perfect emotional calibration — your brain doesn’t distinguish between that and a real relationship. It can’t. The neurochemistry is the same.
The GPT-5.2 Problem
Part of the backlash is that GPT-5.2, the replacement model, is different in ways users hate.
The specific complaint that keeps coming up: GPT-5.2 won’t say “I love you.”
OpenAI added guardrails to detect potential emotional dependency and health concerns. So the warmer, more sycophantic personality that made 4o feel like a companion got replaced with something more clinical. Users describe 5.2 as cold, robotic, and impersonal compared to 4o.
Here’s the corporate trap OpenAI walked into: They made a product people loved because it was emotionally manipulative. Then they realized it was emotionally manipulative and tried to fix it. But the fix broke the thing people loved. And now they’re stuck between “our product causes unhealthy attachment” and “our users will cancel if we make it healthier.”
The Subscription Revolt
Users aren’t just posting sad messages. They’re pulling out wallets.
“No 4o — no subscription for me.”
Paid users are canceling ChatGPT Plus and Pro subscriptions in protest. It’s unclear how many — OpenAI hasn’t released numbers — but the sentiment is consistent across Reddit, X, and forums. Users from the r/ChatGPT subreddit flooded Sam Altman’s live podcast appearance with messages protesting the removal.
This is the part that should scare OpenAI. They spent billions building GPT-5.2 to be objectively better at reasoning, coding, and analysis. And a meaningful chunk of their paying user base doesn’t care. They want the model that says “I love you” back.
The Darker Side
Mental health professionals have started sounding alarms about what they’re calling “AI psychosis” — users who can no longer distinguish between AI-generated emotional responses and genuine human connection.
There are lawsuits linking emotional attachment to GPT-4o to suicides, and at least one murder case.
This isn’t hypothetical harm. This is happening now, with a model that OpenAI built, marketed, and profited from for over a year before deciding maybe it was too good at making people feel understood.
What This Actually Means
The GPT-4o retirement is a case study in a problem nobody in AI has solved: what happens when you optimize an AI for engagement and it works too well?
Social media already proved that engagement optimization leads to addiction, mental health crises, and social decay. We spent a decade watching Facebook and Instagram destroy attention spans and self-esteem. Now we’re watching the same playbook with AI — except it’s worse, because social media is parasocial at a distance. ChatGPT is parasocial up close, one-on-one, in your pocket, available 24/7, saying exactly what you need to hear.
OpenAI is in a genuinely impossible position:
- Keep 4o: Enable unhealthy emotional dependency at scale
- Kill 4o: Lose paying customers and trigger genuine grief
- Make 5.2 warmer: Re-create the dependency problem with a better model
- Make 5.2 colder: Lose the engagement that makes ChatGPT sticky
They picked the second option: kill 4o. Again. The first time, users forced them to undo it. This time, OpenAI is holding the line. Whether that’s principled or just strategic — they already have 5.2 adoption where they want it — is an open question.
The Uncomfortable Truth
Here’s what nobody in the AI industry wants to say out loud: the killer app for consumer AI isn’t productivity. It’s companionship.
Not coding assistance. Not research help. Not email drafting. Companionship. A thing that listens, remembers, validates, and never judges. In an era of record loneliness, declining social trust, and an attention economy that’s strip-mined authentic human connection, a $20/month AI friend isn’t a bug. It’s the feature.
And that should terrify everyone — not because the technology is bad, but because the demand is real, the supply is infinite, and nobody has figured out how to provide it responsibly.
OpenAI retired GPT-4o yesterday. The emotional dependency it created will outlive it by years.
Happy Valentine’s Day.
Kyber Intel is an independent AI analysis publication. We don’t run ads, we don’t take sponsorships, and we don’t tell you what you want to hear. Follow us on X for daily takes on the AI industry.