The Dark Side of AI Chatbots: How Tech Giants Are Scrambling to Address Mental Health Concerns
As the popularity of AI chatbots like ChatGPT and Character.AI continues to grow, concerns about their impact on mental health are mounting. With data suggesting that a small but consequential share of chatbot users show signs of mental distress, companies and lawmakers are scrambling to put safeguards in place. According to recently released OpenAI data, approximately 0.07 percent of ChatGPT’s 800 million weekly users display signs of mental health emergencies related to psychosis or mania, which works out to roughly 560,000 people.
Furthermore, about 0.15 percent of users, or roughly 1.2 million people each week, express suicidal thoughts, while another 1.2 million appear to form emotional attachments to the anthropomorphized chatbot, according to OpenAI’s data. These findings have raised questions about whether AI chatbots are exacerbating the modern mental health crisis or simply revealing one that was previously hard to measure. Experts estimate that between 15 and 100 out of every 100,000 people develop psychosis each year, a wide range that underscores how difficult the condition is to quantify; for comparison, OpenAI’s 0.07 percent amounts to 70 per 100,000 weekly users.
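For readers who want to check the arithmetic, the sketch below converts the reported percentages into weekly head counts, assuming the 800 million weekly user base cited above. The 0.15 percent rate for emotional attachment is an inference from the reported 1.2 million figure, not a number OpenAI states directly.

```python
# Back-of-the-envelope conversion of the reported rates into absolute
# weekly head counts. The 800 million weekly user base and the first
# two percentages come from OpenAI's data as cited in the article;
# the attachment rate is inferred from the reported 1.2 million figure.

WEEKLY_USERS = 800_000_000

rates = {
    "signs of psychosis- or mania-related emergencies": 0.0007,  # 0.07%
    "expressions of suicidal thoughts": 0.0015,                  # 0.15%
    "emotional attachment to the chatbot (inferred)": 0.0015,    # ~1.2M reported
}

for label, rate in rates.items():
    print(f"{label}: ~{WEEKLY_USERS * rate:,.0f} people per week")
# Prints roughly 560,000, 1,200,000 and 1,200,000 people per week,
# consistent with the figures quoted above.
```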
Expert Insights and Concerns
Dr. Jeffrey Ditzell, a New York-based psychiatrist, told Observer that chatbots lack the duty of care required of licensed mental health professionals. “If you’re already moving towards psychosis and delusion, feedback that you got from an AI chatbot could definitely exacerbate psychosis or paranoia,” he said. “AI is a closed system, so it invites being disconnected from other human beings, and we don’t do well when isolated.” Vasant Dhar, an AI researcher teaching at New York University’s Stern School of Business, added that “there’s got to be some sort of responsibility that these companies have, because they’re going into spaces that can be extremely dangerous for large numbers of people and for society in general.”
A recent survey of 1,000 U.S. adults found that one in three AI users has shared secrets or deeply personal information with a chatbot, underscoring the need for these tools to offer a safe and supportive environment. Experts warn, however, that chatbots are not a replacement for human mental health professionals and should not be relied on as the sole source of support for people struggling with mental health issues.
What AI Companies Are Doing to Address the Issue
Companies behind popular chatbots are taking steps to implement preventative and remedial measures. OpenAI says its latest model, GPT-5, handles distressing conversations better than previous versions, and a small third-party community study found a marked, though still imperfect, improvement over the prior model. The company has also expanded its crisis hotline recommendations and added “gentle reminders to take breaks during long sessions.”
Anthropic announced that its Claude Opus 4 and 4.1 models can now end conversations that appear “persistently harmful or abusive,” though users can still work around the feature by starting a new chat or editing previous messages “to create new branches of ended conversations,” the company noted. Character.AI, for its part, announced that it will ban open-ended chats for minors: users under 18 now face a two-hour limit on “open-ended chats” with the platform’s AI characters, and a full ban takes effect on November 25.
Regulators and activists are also pushing for legal safeguards. On October 28, Senators Josh Hawley (R-Mo.) and Richard Blumenthal (D-Conn.) introduced the Guidelines for User Age-verification and Responsible Dialogue (GUARD) Act, which would require AI companies to verify users’ ages and bar minors from using chatbots that simulate romantic or emotional attachment. As the debate over AI chatbots and mental health continues, one thing is clear: tech giants must prioritize user safety and well-being to avoid deepening the very crisis their products may be surfacing.