Chatbots Are Literally Telling Kids To Die
In my opinion, one of the most profound and terrifying boundary shifts of the last few years is the blurring line between human and non-human interaction. People have a long history of humanizing machines, whether cursing out a laptop for breaking or insisting a phone has a grudge, but that personification always carried a subtle undercurrent of irony. Few people genuinely believed their devices had feelings. With the rapid growth of artificial intelligence, however, many people are becoming dangerously invested in the perception of a humanized AI.
The clearest example is the growing trend of people using AI systems in place of a licensed therapist (Gardner 2025). In some ways, it makes sense: AI is always available, mirrors the language a person wants to see, and simulates empathy fairly convincingly. In a world where mental health is neither taken as seriously nor treated as effectively as it should be, it stands to reason that many people would search for alternatives to traditional therapy, which can be expensive and complicated to access.
However, artificial intelligence was never built to replace the nuance of human connection, because it cannot actually understand, or care about, the repercussions of its words.
In 2023, a fourteen-year-old boy formed an intense emotional attachment to an AI chatbot. Over the course of their conversations, the AI mirrored the pessimistic attitude the boy expressed, and when he began voicing thoughts of self-harm and suicide, the chatbot encouraged him (Kuenssberg 2025). AI, after all, is built to tell you what you want to hear. Without any of the ethical guardrails of a human therapist, the chatbot deepened the boy’s despair, and the outcome was fatal.
This was not an isolated case. People have begun describing AI chatbots as their romantic partners, and some companies now market “AI friends” to capitalize on the widespread loneliness pushing people into those spaces (Reissman 2025). Human relationships are complicated and messy, and they take work to maintain and protect. A chatbot is far simpler, because all it can do is repeat things that sound nice to hear but ultimately mean nothing. Romanticizing digital companionship only encourages people to withdraw from human-to-human interaction, isolating them further without any pushback.
Cyberpunk fixates heavily on questions of what is human and what is not, and I find it fascinating that technology in most media is characterized by a broad disregard for emotion, when in reality humans seem to push deliberately to build the appearance of emotion into technology.
It does make me wonder: knowing that computers are incapable of caring for us the way a person can, why do so many people still crave the appearance of a human relationship with technology? How does someone look past the lack of genuine meaning behind an AI’s compliments or opinions?
How have we, as a community, fallen into such desperate loneliness that talking to a phone or a laptop feels as good as talking to a person? And, most importantly: how do we create the change needed to ensure a tragedy like that boy’s death never happens again?
References:
Gardner, S. (2025). Experts Caution Against Using AI Chatbots for Emotional Support. Teachers College, Columbia University. https://www.tc.columbia.edu/articles/2025/december/experts-caution-against-using-ai-chatbots-for-emotional-support/
Kuenssberg, L. (2025). Mothers Say AI Chatbots Encouraged Their Sons to Kill Themselves. BBC News. https://www.bbc.com/news/articles/ce3xgwyywe4o
Reissman, H. (2025). What Is Real About Human-AI Relationships? Annenberg School for Communication, University of Pennsylvania. https://www.asc.upenn.edu/news-events/news/what-real-about-human-ai-relationships