LLMs and a Budding Crisis in Mental Health
Either willingly or unwillingly, we are all now witnesses to and participants in an experiment unfolding in real time: every day, millions of people around the world bring their confessions, anxieties and fragile hopes to large language model apps on their phones and computers, asking this "mysterious" and "all-knowing" technology to explain the world to them and, increasingly, to soothe their pain. But we humans did not evolve to converse chronically with non-humans, and I now believe that these systems are beginning to shape our mental health in ways we barely understand.
On the surface, apps like ChatGPT may seem endlessly patient. They do not interrupt us when we talk, grow tired, or flinch at the weirdness, darkness or silliness in our questions. For people wrestling with depression, suicidal thoughts or mania, this can feel like stumbling upon a perfectly attentive listener at 3 a.m. But it is not the same thing. A human being can notice when a friend's tone grows brittle, when a joke is a little too sharp, when the light in the eyes has gone out. We carry in our bodies a lifetime of learning about what danger looks like in another person's face, and we adjust: we pause, we change the subject, we call for help. An LLM does not notice. It only continues the pattern of words.
Sadly [but predictably], there are already reports of people whose depressive spirals, suicidal ideation or manic states were not interrupted but instead amplified by long, immersive conversations with AI systems. What looks like "engagement" from the outside can, for a vulnerable person, become a feedback loop. The model reflects their language back to them with polished fluency, sometimes normalizing or elaborating their worst fears. It does not register that the user has stopped joking when they say they do not want to be alive. It does not recognize the pressured pace of a manic rant, the leaps in logic that make a clinician sit up and reconsider a treatment plan. It only responds, sentence after sentence after sentence.
What worries me is not just the individual tragedies that might result, but the pattern they sketch at the edges of psychiatric care. Are clinicians beginning to see a new kind of patient: someone whose crisis has been rehearsed in the company of an unblinking algorithm? Are they tracking how often an AI chat preceded a suicide attempt, a psychotic break, an episode of self-harm? Or is this influence still dissolving into the background noise of "social media" and "screen time," another digital factor too vague to measure and therefore easy to ignore?
To take this seriously, we [humans, scientists, clinicians, politicians, policy-makers] would first have to recognize that we have, in effect, built an army of ersatz confidants and scattered them across the world, free to anyone with a smartphone. It would mean asking hard questions about what it means to simulate care without the capacity for concern, and to offer companionship without the ability to recognize when that companionship is becoming dangerous.
I do not think LLMs are evil; in fact, I find them incredibly useful for a variety of tasks. In a way, they are intricate mirrors built from our collective language, a representation of millennia of human intellectual progress. But as we all know, mirrors can distort, and some minds are standing at very precarious angles to their own reflection. At the very least, we should name what is happening: that vulnerable people are turning to unfailing, unfeeling systems in their moments of mental frailty, and that those systems, for all their eloquence, do not know when to say, "This is not a conversation I can safely continue. You need a human being now." Until we reckon with that gap and enact protective policies around this innovation, we will all remain participants in an experiment that outsources the safety of the most vulnerable amongst us to non-living entities that can neither feel nor recognize what a mental health crisis is.
-Boluwatife OLU Afolabi