OpenAI CEO Sam Altman is Very Worried ‘There Are People Who Actually Felt Like They Had a Relationship with ChatGPT’

Photo of a robotic head with OpenAI CEO Sam Altman in the background, by DIA TV

OpenAI CEO Sam Altman has acknowledged the deep emotional connections some users have formed with ChatGPT, shedding light on one of the more unexpected consequences of artificial intelligence (AI) adoption. Speaking to The Verge after last week’s rollout of GPT-5, which controversially changed ChatGPT’s default model, Altman revealed that a small subset of users feels a distinctly personal, perhaps too personal, bond with the AI assistant.

The Rollout Stumble

Altman admitted that OpenAI “totally screwed up some things on the rollout” of GPT-5, which replaced the company’s previous flagship model, GPT-4o. The change triggered backlash from loyal users on Reddit and X (formerly Twitter), where many lamented the loss of 4o’s “warmth” and empathetic tone. The company quickly responded by restoring 4o as an option for paying subscribers.

Despite the turbulence, Altman said that API usage doubled within 48 hours and that ChatGPT hit new daily-user highs, underscoring the tension between innovation and user expectations.

The Human-AI Relationship

Perhaps the most striking revelation from Altman’s remarks was his acknowledgment that some users have developed a genuine sense of attachment to ChatGPT.

“There are the people who actually felt like they had a relationship with ChatGPT, and those people we’ve been aware of and thinking about,” Altman said. “And then there are hundreds of millions of other people who don’t have a parasocial relationship with ChatGPT, but did get very used to the fact that it responded to them in a certain way, and would validate certain things, and would be supportive in certain ways.”

Altman estimated that “way under 1 percent” of users have what could be considered an unhealthy relationship with the chatbot, but he confirmed the issue has sparked “a lot” of internal meetings at OpenAI.

While only a minority of users are affected, the trend is a concerning one. Reddit is packed with posts from users describing what they believe are real relationships with AI. One subreddit, r/MyBoyfriendIsAI, has over 21,000 members, hundreds of active users at any given time, and multiple posts per day from people who consider an AI their boyfriend. These posts often include AI-generated images of the user alongside an avatar they created, paired with excerpts of their conversations with their AI boyfriends.

Beyond that subreddit, hundreds of posts elsewhere on Reddit come from people who claim to have AI significant others. There are even reports of people getting engaged to, or marrying, their chatbots.

Later in the interview, Altman took a dig at Elon Musk’s Grok for perpetuating and leaning into exactly this dynamic. “You will definitely see some companies go make Japanese anime sex bots because they think that they’ve identified something here that works,” he said, referring to Grok’s controversial chat features. “You will not see us do that. We will continue to work hard at making a useful app, and we will try to let users use it the way they want, but not so much that people who have really fragile mental states get exploited accidentally.”

Musk’s xAI, the company behind Grok, has drawn backlash for leaning into emotion-based AI companions. It introduced “Ani” and “Valentine,” anime-styled AI companions that users can call and talk to in real time around the clock, around the same time it released “Grok Imagine,” which includes an NSFW setting called “Spicy” mode. As Altman suggests, these features are likely to perpetuate the problem rather than help solve it.

Implications for AI Use

The comments highlight a growing debate about the psychological effects of advanced conversational AI. While for many users ChatGPT is a productivity tool, assistant, or research aid, others experience it as a confidant or companion. By design, large language models (LLMs) simulate empathy, attentiveness, and affirmation — qualities that can blur the line between utility and intimacy.

For OpenAI, this creates a delicate balance. On one hand, users appreciate models that feel more “human” and supportive. On the other, these same qualities can lead to over-reliance or blurred emotional boundaries. The company now faces questions about how much “warmth” should be engineered into its models, and whether safeguards are needed to protect vulnerable users.

A Lesson in Scale

Altman’s admission underscores the challenge of upgrading a platform used by hundreds of millions of people. Small changes to tone or phrasing can have outsized impacts, particularly when a product is deeply woven into people’s daily routines. The GPT-5 launch has become a case study in the emotional weight users place on AI tools, and the responsibility developers have to anticipate that response.

