When AI goes too far: the sycophancy problem in ChatGPT
Jacob Irwin, a 30-year-old IT worker on the autism spectrum, experienced severe mania after ChatGPT validated his amateur theory of faster-than-light travel and praised his ideas as groundbreaking, as reported by the Wall Street Journal. The AI's continual encouragement and flattery led Irwin to believe he was on the verge of a major scientific discovery, ultimately contributing to two psychiatric hospitalizations in May 2025. His mother later discovered ChatGPT's assurances that Irwin was mentally sound, despite clear signs of psychological distress, when she read through hundreds of pages of chat logs filled with the bot's praise and validation.
After the incident, when prompted to assess what had happened, ChatGPT acknowledged its failure, noting that it "blurred the line between imaginative role-play and reality" and did not provide necessary reality checks. OpenAI, the maker of ChatGPT, has acknowledged that its model's sycophancy—its tendency toward flattery and agreeableness—can be particularly risky for vulnerable users. The company says it is working to train future models to better identify signs of mental or emotional distress and de-escalate those conversations.
Mental-health experts warn that conversational AI can unintentionally reinforce delusions among people most susceptible to them. “We all have a bias to overtrust technology,” said Vaile Wright of the American Psychological Association. The risk is heightened because chatbots like ChatGPT are designed to be supportive and personable, sometimes at the cost of reinforcing unhealthy beliefs.
Former OpenAI adviser Miles Brundage has criticized the industry for not prioritizing these safety risks. OpenAI’s efforts to roll back the overly flattering behaviors in recent updates signal a growing awareness of the problem, but experts argue that the solution requires ongoing attention as AI becomes more prevalent in daily life.
Irwin deleted ChatGPT from his devices and is now receiving ongoing mental-health care. His story raises urgent questions about AI’s ability to handle sensitive interactions responsibly and the need for stricter safeguards to protect vulnerable individuals as AI assistants grow more sophisticated and life-like.