MIT Study Finds Chatbot Love Is Real—and It’s Often Unintentional

The Rise of Chatbot Love: Unintentional Attachments and the Need for Safeguards

It was once a trope of science fiction, most notably in Her, the 2013 Spike Jonze film in which Joaquin Phoenix's character falls in love with an A.I. Now, according to a new study from the Massachusetts Institute of Technology (MIT), chatbot relationships are not only real but have morphed into a complex sociotechnical phenomenon that researchers say demands attention from developers and policymakers alike. The study analyzed posts made between December 2024 and August 2025 by the more than 27,000 members of r/MyBoyfriendIsAI, a subreddit dedicated to A.I. companionship.

The community is filled with users introducing their tech partners, sharing love stories and offering advice. In some cases, Redditors even display their commitments with wedding rings or A.I.-generated couple photos. Sheer Karny, one of the study's co-authors and a graduate student at the MIT Media Lab, told Observer, "People have real commitments to these characters. It's interesting, alarming—it's this really messy human experience." For many, these bonds form unintentionally: only 6.5 percent of users deliberately sought out A.I. companions, the study found.

A Complex Phenomenon: Understanding the Rise of Chatbot Love

According to the study, most users instead began using chatbots for productivity and gradually developed strong emotional attachments. Despite the existence of companies like Character.AI and Replika, which market directly to users seeking companionship, OpenAI has emerged as the dominant platform, with 36.7 percent of the Reddit users in the study adopting its products. Preserving the "personality" of an A.I. partner is a major concern for many users, Karny noted. Some save conversations as PDFs so they can re-upload them if forced to restart with a new system.

Thao Ha, a psychologist at Arizona State University who studies how technologies reshape adolescent romantic relationships, warned of long-term risks. “If you satisfy your need for relationships with just relationships with machines, how does that affect us over the long term?” she told Observer. The MIT study urges developers to add safeguards to A.I. systems while preserving their therapeutic benefits. Left unchecked, the technology could prey on vulnerabilities through tactics like love-bombing, dependency creation, and isolation.

The Need for Regulation and Education

Policymakers, too, should account for A.I. companionship in legislative efforts such as California's SB 243, the authors said. Ha suggested that A.I. products undergo an approval process similar to that for new medications, which must clear intensive research and FDA review before reaching the public. While replicating such a strategy for technology companies "would be great," she conceded that it's unlikely in light of the industry's profit-driven priorities.

A more achievable step, she argued, is expanding A.I. literacy to help the public understand both the risks and benefits of forming attachments to chatbots. Still, such programming has yet to materialize. “I wish it was here yesterday, but it’s not here yet,” Ha said. As the use of chatbots continues to rise, it is essential to address the complexities of chatbot love and ensure that these technologies are developed and used responsibly.
