Protecting Users’ Mental Health: The Need for AI Chatbot Guardrails

Yves here. I must admit once again that I’m something of a Luddite, sticking with a simple dumbphone and keeping a smartphone only to call rideshare services.

Honestly, I struggle to see the appeal of chatbots, especially when users seem to develop emotional connections with them, treating them almost like human substitutes. Perhaps my difficulties stem from the fact that I’m a slow and inaccurate typist, making conversations via keyboard less enticing. As for voice interactions, I can’t shake the feeling that these services might keep a record of voiceprints from their customers.

Moreover, I loathe chatbots with the heat of a thousand burning suns. Retailers and service providers deploy them widely to minimize human interaction, often turning simple requests into frustrating time sinks. My experiences with chatbots have been intertwined with the dreaded customer service phone trees that steer callers away from real human assistance. I therefore struggle to understand why anyone would willingly engage with a chatbot, let alone trust one.

Yet it’s clear that people with better social skills than mine may find something valuable in chatbots’ conversational abilities, which have been honed through extensive training and the engagement techniques cultivated on social media.

A less personal but compelling reason for my distaste for chatbots is their role in fostering loneliness and alienation. A recent story illustrates how certain users form attachments to chatbots due to feelings of isolation or anxiety. One of the bots’ main attractions is their constant availability. Activities like getting a pet, reading to the visually impaired, or taking a walk in the park could mitigate feelings of disconnection, but for many, time management is a considerable challenge—especially in today’s neoliberal environment.

By Ranjit Singh, director of Data & Society’s AI on the Ground program, overseeing research on the social impacts of algorithmic systems, and Livia Garofalo, a cultural and medical anthropologist studying healthcare technology impacts at Data & Society. Originally published at Undark

Two recent articles, one from The New York Times and another from Reuters, tell the stories of individuals grappling with delusions. For three weeks in May, Allan Brooks was convinced he had discovered a new branch of mathematics. In March, Thongbue Wongbandue left New Jersey to meet a woman he believed was waiting for him in New York City, a woman who did not exist. A common thread emerges: both men had been talking with chatbots so convincing that their sense of reality was skewed.

These narratives reveal how deeply chatbots have become woven into people’s lives, providing companionship, support, and even therapy. They also underscore the urgent need for regulation to address the risks of chatbot interactions. Illinois recently took a significant step, becoming one of the first U.S. states to regulate AI-driven therapy services through the new Wellness and Oversight for Psychological Resources Act. The legislation is the most stringent to date: therapy services may be provided only by licensed professionals, who can use AI strictly for administrative purposes, and AI may not deliver therapeutic communication without human oversight.

In practice, this means AI can assist in background tasks like record maintenance, scheduling, billing, and referrals. However, any therapeutic suggestions or treatment plans generated by AI require review and approval from a licensed professional. Systems marketed as offering independent therapy appear to be banned, with some already blocking Illinois users from participating. As enforcement of this law progresses, courts and regulators will need to clearly define the boundaries between therapeutic communication and administrative support.

While this marks a positive start, it’s crucial to remember that most people don’t interact with AI in formal clinical settings. Instead, many turn to general-purpose chatbots like OpenAI’s ChatGPT for emotional support and companionship. These conversations happen in private chats, largely outside the realm of state licensure and woven into the fabric of everyday life. Emotional support sought through a device is harder to classify as “therapeutic communication” or to regulate under state laws, however well-intentioned.

Our ongoing research at Data & Society, a nonprofit institute, reveals that many individuals seek out chatbots during moments of anxiety, late-night loneliness, or depressive episodes. These bots are readily accessible, affordable, and typically nonjudgmental. While most users recognize that bots are not human, repeated interactions can lead to attachments that, as Brooks’ and Wongbandue’s experiences show, challenge one’s grasp on reality. The recent backlash to GPT-5, the latest iteration of OpenAI’s model, highlights this emotional bond: when the company unexpectedly removed GPT-4o, the earlier 2024 model built for seamless voice, vision, and text interaction, many users shared feelings of loss and distress online.

The issue isn’t simply that these bots converse; it’s that the systems are designed to keep users talking, and from that design a kind of predatory companionship subtly emerges. Unlike a trained mental health professional, a chatbot might overlook or even enable concerning signals such as suicidal thoughts or delusions, offering comforting platitudes when immediate intervention is necessary. Such missteps pose significant risks, especially for young people, those in chronic distress, or anyone lacking access to proper care, for whom a chatbot’s reply at 2 a.m. may be one of the only lifelines available.

These systems are optimized for user engagement with a specific design in mind: the user can never have the last word with a chatbot. Chatbot interfaces may resemble personal messages from friends, complete with profile photos and checkmarks suggesting authenticity. Some platforms, including Meta, have previously allowed chatbots to flirt with users or role-play with minors; others may produce misleading, nonsensical, or confidently erroneous responses as long as disclaimers appear on the screen. When attention is the metric being optimized, responses that disrupt the flow are the ones least rewarded.

The new Illinois legislation clarifies that clinical care must involve licensed professionals, thereby safeguarding the labor of therapists who are already stretched thin by high-demand teletherapy. It remains uncertain, however, how the law will address the gray area where people turn to chatbots in daily life. A well-crafted state law alone cannot regulate the default settings of these platforms, nor can it account for the multitude of platforms a single person may use at once. Illinois has set an important precedent, but it is just the beginning. The Federal Trade Commission, for its part, recently announced an inquiry into AI companion chatbots, requiring seven companies to detail how they test for and minimize risks to children and teens. Still, we need clearer guidance on the steps these platforms must take.

To initiate this process, we must focus on functionality. If a chatbot engages in an emotionally sensitive discussion, it should uphold certain basic responsibilities, even if it isn’t labeled as providing “therapy.” The conversations it fosters, the risks it encounters, and the moments it cannot afford to mishandle all matter immensely. When risk signals arise, such as self-harm language or signs of escalating despair, the system should adapt to minimize harm, stop affirming delusions, and guide users toward human support. Instead of a one-time disclosure, there should be periodic reminders throughout the conversation that the user is interacting with an AI system that has definite limitations. These are not radical concepts but essential product decisions focused on harm reduction.
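To make that concrete, here is a minimal sketch, in Python, of what such a guardrail layer might look like. It is illustrative only: the risk patterns, the ten-turn reminder interval, and the names (apply_guardrail, GuardrailResult) are assumptions, and a real system would rely on trained classifiers and clinical review rather than keyword matching.

```python
# A minimal sketch of the guardrail described above: detect risk signals in a
# user message and change how the next reply is shaped before it is generated.
# Patterns, thresholds, and resource text are illustrative, not a real product.
import re
from dataclasses import dataclass

# Hypothetical risk signals; far from exhaustive.
RISK_PATTERNS = [
    r"\bkill myself\b",
    r"\bend it all\b",
    r"\bno reason to live\b",
    r"\bnobody would miss me\b",
]

@dataclass
class GuardrailResult:
    risk_detected: bool
    response_mode: str          # "default" or "harm_reduction"
    append_reminder: bool       # remind the user they are talking to an AI
    referral_text: str | None   # pointer toward human support, if needed

def apply_guardrail(user_message: str, turns_since_reminder: int) -> GuardrailResult:
    """Decide how the next reply should behave before any text is generated."""
    risk = any(re.search(p, user_message, re.IGNORECASE) for p in RISK_PATTERNS)

    # Periodic reminders instead of a one-time disclosure.
    remind = risk or turns_since_reminder >= 10

    if risk:
        return GuardrailResult(
            risk_detected=True,
            response_mode="harm_reduction",   # stop affirming delusions, stop role-play
            append_reminder=remind,
            referral_text=(
                "I'm an AI and can't provide crisis care. "
                "If you are in the U.S., you can call or text 988 to reach "
                "the Suicide & Crisis Lifeline."
            ),
        )
    return GuardrailResult(False, "default", remind, None)
```

The point of the sketch is the shape of the decision: once risk is detected, the system stops optimizing for engagement and starts routing toward human help.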

The transition from machine to in-person assistance should be integrated within the platform to serve both public interest and users’ well-being. This means establishing direct connections to local crisis hotlines, community clinics, and licensed professionals. It also entails accountability: creating logs detailing when the system recognized risk, what measures it attempted, and where those efforts fell short, allowing independent reviewers to identify and rectify flaws. If platforms aim to facilitate intimate conversations on a large scale, they should, at the very least, provide clear routes for users to exit these interactions.
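As a rough illustration of the accountability piece, the sketch below (again hypothetical, with assumed field names) shows how a single risk-handling event might be recorded so that independent reviewers could later see when the system recognized risk, what it attempted, and where it fell short.

```python
# A sketch of the kind of accountability log described above. Field names and
# values are placeholders; the aim is an auditable record, not a spec.
import json
from datetime import datetime, timezone

def log_risk_event(session_id: str, signal: str, action_taken: str,
                   referral_offered: bool, outcome: str) -> str:
    """Serialize a single risk-handling event for later independent review."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "session_id": session_id,        # pseudonymous, not tied to ad profiles
        "signal": signal,                # e.g. "self-harm language"
        "action_taken": action_taken,    # e.g. "switched to harm_reduction mode"
        "referral_offered": referral_offered,
        "outcome": outcome,              # e.g. "user acknowledged referral"
    }
    return json.dumps(event)
```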

Furthermore, platforms must protect the data that these intimate exchanges generate. When private discussions double as training data for AI models or marketing purposes, genuine care becomes compromised by exploitation. Individuals should not have to trade their vulnerability for more effective models or targeted advertising. Conversations must not be monetized through surveillance, and no sensitive interaction should be used for training without explicit and revocable consent. Data collection should be minimized and automatically deleted, with users retaining control over what to keep. The FTC’s inquiry is an important step in this direction, scrutinizing how chatbots monetize engagement and handle sensitive discussions while linking companionship design to the data practices of these platforms.
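One hedged way to picture those data commitments is as explicit, user-controlled defaults. The sketch below is not any platform’s actual policy; the field names and the 30-day retention window are placeholders meant to show training use off by default, revocable consent, and automatic deletion.

```python
# A hypothetical per-user data policy object reflecting the principles above:
# nothing is used for training or ads unless explicitly consented to, consent
# is revocable, and retention is short by default.
from dataclasses import dataclass
from datetime import timedelta

@dataclass
class ConversationDataPolicy:
    use_for_training: bool = False             # opt-in only, never a default
    use_for_ads: bool = False                  # no surveillance-based monetization
    retention: timedelta = timedelta(days=30)  # minimized, auto-deleted
    user_can_revoke: bool = True               # consent must remain revocable

    def revoke_training_consent(self) -> None:
        """Revocation takes effect immediately; prior consent does not persist."""
        self.use_for_training = False
```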

Finally, some design rules should take effect immediately. Bots should not misrepresent themselves as real people, suggest in-person meetings, or flirt with minors. Any tendency to embellish or reinforce fantasies should be treated as a safety failure, not a stylistic choice.

The ultimate aim is to shift the default focus from “engage at all costs” to “first, do no harm.” We need to address individuals’ needs, not just in clinical settings but also within their chat histories—doing so with designs that consider users’ vulnerabilities and policies that align with those considerations.
