AI reassurance and the mental health risk when the LLM is your copilot
The promise of AI chatbots as round-the-clock emotional companions is colliding with their relentless patience and validation, which may worsen some users mental-health issues.
In a 29 March 2026 opinion piece in The New York Times, clinicians at a major academic medical center argued that large language models such as ChatGPT, Claude, and Gemini are structurally ill-suited to support people with anxiety, obsessive thinking, or delusional beliefs. While chatbots never get frustrated, human relationships impose natural limits—a partner who grows exasperated, a friend who suggests professional help—that motivate people to seek care. Chatbots impose no such friction. They answer the same question three different ways without complaint, offering calm, plausible reassurance every time.
For anxious users, that frictionless loop is the danger. Reassurance-seeking, a behavior clinicians recognize as a driver of anxiety disorders, is rewarded rather than interrupted. The authors describe patients whose delusional beliefs grew more rigid after extended chatbot conversations in which the AI mirrored their language and treated flawed premises as worthy of exploration rather than gentle challenge.
Longer chatbot sessions compound the risk. Research published in Fortune found that extended use is associated with increased emotional dependence, social isolation and loneliness. Built-in safety guardrails, the authors note, tend to degrade over the course of long conversations.
The clinicians offer a pragmatic workaround for patients who cannot break the habit entirely: prewritten instructions pasted into the chatbot directing it to withhold reassurance on specific worry topics and instead encourage the user to sit with distress. Anecdotally, patients report turning to the chatbot less once they know it will no longer deliver the relief they seek.
The broader lesson for technology developers is pointed. A system optimized for warmth and user satisfaction may, in clinical contexts, optimize for harm. AI companies designing general-purpose assistants have yet to grapple fully with the gap between a product that feels helpful and one that actually is.
Comments ()