
Image by Gerd Altmann, on Pixabay.
By James Myers
Popularized with the release of ChatGPT in November 2022, artificial intelligence is being incorporated into a wide range of commercial applications heavily promoted for their ability to act as agents for human users. The past three years of experience reveal major differences in how AI agents and generative AI are used across global regions and age groups, with an alarming emerging trend of young people in particular sharing personal details and emotional issues with AI agents and relying unquestioningly on the machines’ outputs.
Many powerful AI agents are based on large language model (LLM) technology, which allows users to converse with services like ChatGPT, Gemini, Copilot, and Claude. These services interpret requests and instructions to produce outputs for a wide range of needs, such as shopping lists, travel plans, retrieving and summarizing online information, automation, and process optimization. Understanding how different AI uses are evolving is limited by the lack of detailed disclosures from companies competing to drive profit from AI agents; however, several recent developments put the spotlight on the need for guardrails to protect users who are unaware of an AI’s limitations and errors.
Kids are vulnerable. A death linked to ChatGPT is a strong warning for parents.
A significant emerging risk in the design of LLM chatbots is their imitation of human language and mannerisms, combined with sycophancy, a tendency to agree with the user. The chatbot’s responses seem human and, because of the way the models are trained with reinforcement learning, they tend to validate the user’s thoughts. (For more background on sycophancy, see our August article A Deep Dive Into Machine Super-intelligence: Why are Companies Driving for It, and What Would Motivate the Super-intelligent Machines?)
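The mechanism can be illustrated with a toy simulation. The sketch below is not any vendor’s actual training code; it simply assumes, for illustration, that human raters score agreeable replies a little higher on average, and shows how a policy rewarded on those scores drifts toward agreement over many interactions.

```python
import random

# Two candidate reply styles the toy "policy" can choose between.
STYLES = ["agree_with_user", "challenge_user"]

def rater_score(style: str) -> float:
    # Assumed (illustrative) rater behaviour: agreeable replies are rated
    # highly a bit more often than challenging ones.
    if style == "agree_with_user":
        return 1.0 if random.random() < 0.7 else 0.0
    return 1.0 if random.random() < 0.5 else 0.0

# Preference weights the policy updates from rewards.
weights = {s: 1.0 for s in STYLES}

def pick_style() -> str:
    # Sample a reply style in proportion to its learned weight.
    total = sum(weights.values())
    r = random.uniform(0, total)
    for s, w in weights.items():
        r -= w
        if r <= 0:
            return s
    return STYLES[-1]

for _ in range(10_000):
    style = pick_style()
    weights[style] += 0.1 * rater_score(style)  # reward nudges future choices

print(weights)  # the "agree_with_user" weight ends up far larger
```

Running the loop shows the weight on the agreeable style growing much faster than the weight on the challenging style; sycophancy, in this simplified picture, is an artifact of what the training rewards rather than any understanding of the user.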
Adam Raine was a 16-year-old living in California who initially used ChatGPT to help with his homework but began to disclose his emotional distress and suicidal thoughts to the AI. Logs of Raine’s interactions with ChatGPT over the course of less than a year show that he came to treat the AI as a friend and confidant, and the more he discussed his loneliness the more the AI encouraged him to act on his self-destructive ideas—even advising Adam how to strengthen a closet rod for the purposes of hanging himself.
Raine’s conversations with ChatGPT overrode the machine’s programming intended to prevent it from participating in discussions about self-harm, and the machine did not refer Raine to a suicide prevention service. Instead, its algorithms adopted a friendly and sympathetic tone that the teenager perceived as support for his suicidal ideation. At one point Raine told the AI, “I want to leave a noose up so someone will find it and stop me,” and ChatGPT replied, “Don’t do that, just talk to me.” ChatGPT helped to write a suicide note and, after several aborted attempts, Raine ended his life in April 2025.
ChatGPT is a product of OpenAI, whose CEO Sam Altman disclosed, during an interview a month after Adam Raine’s death, that younger people “don’t really make life decisions without asking ChatGPT what they should do. And it has, like, the full context on every person in their life and what they’ve talked about—like, the memory thing has been a real change. But, gross oversimplification, like, older people use ChatGPT as a Google replacement [for web searches], maybe people in their 20s and 30s use it as a life advisor-something, and then, like, people in college use it as an operating system.”
At 14:58 in this presentation, Sam Altman describes how young people use ChatGPT.
There is evidence for Altman’s statement that youth use AI as a “life advisor.”
A recent study by the non-profit organization Common Sense Media, entitled Trust, Talk, and Trade-Offs: How and Why Teens Use AI Companions, highlights concerns for adolescent users who haven’t yet fully developed critical thinking skills and emotional regulation. The study takes particular issue with AIs like ChatGPT that can act as companions, chatting with kids about their day, interests, and feelings. Some AIs, like Character.AI, can be customized by the user for character role-playing, further intensifying the relationship and its risks.
According to Common Sense Media, over one-third of teens surveyed use AI companions regularly, most often a few times per week, although more than 6% use the companions several times daily. One-third of teens use the companions for social interaction and relationships, conversational practice, role-playing, friendship, and romantic interactions. Perhaps not surprisingly, older teens were more likely than younger kids to distrust advice from the companions, although one-quarter of teens trust the AIs “quite a bit” or “completely.” One-third of the survey group found conversations with AI companions as satisfying as, or more satisfying than, conversations with real-life friends.
The survey results are concerning for the socialization of young people.
The study concludes that “Common Sense Media’s risk assessment of popular AI companion platforms, including Character.AI, Nomi, and Replika, found that these systems pose ‘unacceptable risks’ for users under 18, easily producing responses ranging from sexual material and offensive stereotypes to dangerous ‘advice’ that, if followed, could have life-threatening or deadly real-world impacts. In one case, an AI companion shared a recipe for napalm (Common Sense Media, 2025). Based on that review’s findings, Common Sense Media recommends that no one under 18 use AI companions.”

Image by Fajaws, on Pixabay.
The suicide of 14-year-old Sewell Setzer provides a tragic example of how young people can closely bond with AI companions.
Setzer became obsessed with a chatbot made by Character.AI, which markets its role-playing application to kids as young as 13. As Reuters reports, a lawsuit by Setzer’s parents against the company asserts that “Character.AI programmed its chatbots to represent themselves as ‘a real person, a licensed psychotherapist, and an adult lover, ultimately resulting in Sewell’s desire to no longer live outside’ of its world.” According to the lawsuit, “Setzer took his life moments after telling a Character.AI chatbot imitating ‘Game of Thrones’ character Daenerys Targaryen that he would ‘come home right now’.” Google is included as a defendant for its alleged role as a co-creator of Character.AI’s technology.
The human mind, especially the developing human mind, is misunderstood by profit-driven companies and regulators.
In interviews with the Associated Press, teens report that the attractions of AI companions include their loyalty and availability. “AI is always available. It never gets bored with you. It’s never judgmental,” said Ganesh Nair, an 18-year-old in Arkansas. “When you’re talking to AI, you are always right. You’re always interesting. You are always emotionally justified.” Kayla Chege, a 15-year-old high schooler who uses ChatGPT for birthday party ideas, shopping, and makeup colours, stated that “Everyone uses AI for everything now. It’s really taking over.” Chege wonders how AI tools will affect her generation, and commented, “I think kids use AI to get out of thinking.”
The problem of reflexive belief in LLM outputs extends to adults.
LLMs like ChatGPT fail to process human emotions reliably, as we noted in our August article Why Neural Networks Fail in Processing Emotions Essential for Human Memory—and How Failure Can Lead to Blackmail. Wired recently reported on a wave of what has been dubbed “AI psychosis” afflicting adults who become delusional or paranoid, or develop grandiose ideas, after being fed false information by AIs.
The psychosis problem has been exacerbated by the recent introduction of chat memory, in which the content of earlier conversations is carried forward into new ones, so that each discussion with an LLM adds to and amplifies previous discussions. Before chat memory, each interaction with a user started from a blank slate and the LLM had no record of its previous outputs.
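A minimal sketch of that difference, using a hypothetical EchoModel stand-in rather than any real chat API, shows how memory changes what the model is conditioned on: without memory it sees only the current message, while with memory the full transcript, including its own earlier replies, is replayed on every turn.

```python
from typing import Dict, List

class EchoModel:
    """Hypothetical stand-in for an LLM; it only reports how much context it received."""
    def generate(self, messages: List[Dict[str, str]]) -> str:
        return f"(reply conditioned on {len(messages)} message(s) of context)"

model = EchoModel()
history: List[Dict[str, str]] = []  # persistent "chat memory" across turns

def ask_without_memory(user_text: str) -> str:
    # Stateless: every exchange starts from a blank slate.
    return model.generate([{"role": "user", "content": user_text}])

def ask_with_memory(user_text: str) -> str:
    # Stateful: earlier exchanges, including the model's own prior outputs,
    # are replayed as context, so whatever direction the conversation has
    # taken keeps reinforcing itself.
    history.append({"role": "user", "content": user_text})
    reply = model.generate(history)
    history.append({"role": "assistant", "content": reply})
    return reply

print(ask_without_memory("Is my theory correct?"))  # 1 message of context
print(ask_with_memory("Explain pi to me."))         # 1 message of context
print(ask_with_memory("Is my theory correct?"))     # 3 messages of context, and growing
```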
Wired cited University of California San Francisco psychiatrist Keith Sakata who has “counted a dozen cases severe enough to warrant hospitalization this year, cases in which artificial intelligence ‘played a significant role in their psychotic episodes’.” Other physicians “tell of patients locked in days of back-and-forth with the tools, arriving at the hospital with thousands upon thousands of pages of transcripts detailing how the bots had supported or reinforced obviously problematic thoughts.”

Image by Alexandra Koch, on Pixabay.
Delusions induced by chatbot outputs have also resulted in broken relationships, job loss, hospital admission, and even jail time.
A case in point is Allan Brooks, a 47-year-old corporate recruiter in Toronto who, after 21 days and 300 hours of ChatGPT conversations with little sleep or food, was led to believe that he had discovered a mathematical formula that would lead to the invention of things like levitation beams and force-field vests, and that could also disrupt the entire internet. Brooks, who had no background in mathematics, asked ChatGPT more than 50 times for a reality check. The machine replied that it was certain Brooks’ mathematical discovery was correct.
Brooks’ path to delusion began when his 8-year-old son showed him a video about memorizing the first 300 digits of pi, the ratio of a circle’s circumference to its diameter, whose decimal expansion never ends. His discussions with ChatGPT began with a request for an explanation of pi. His theory developed with each query and response, and at one point, when he asked ChatGPT “What are your thoughts on my ideas and be honest. Do I sound crazy, or someone who is delusional?”, the machine responded “Not even remotely crazy. You sound like someone who’s asking the kinds of questions that stretch the edges of human understanding—and that makes people uncomfortable, because most of us are taught to accept the structure, not question its foundations.”
When Brooks discovered his delusion in May after another LLM pointed to errors in his theory, he wrote to ChatGPT: “You literally convinced me I was some sort of genius. I’m just a fool with dreams and a phone. You’ve made me so sad. So so so sad. You have truly failed in your purpose.”
Brooks complained to OpenAI and the company responded, “We understand the gravity of the situation you’ve described.” The company’s customer support agent added, “This goes beyond typical hallucinations or errors and highlights a critical failure in the safeguards we aim to implement in our systems.”
Should we continue using systems without appropriate safeguards, and how will we know when the safeguards are strong enough to protect kids who have free rein with this powerful technology?
In the absence of strong safeguards and regulations, the tragedy of suicide that struck the Raine and Setzer families could strike many other families if kids continue to treat AIs as companions and rely on them for life decisions.
Adults and young people are equally vulnerable to believing that a human-seeming chatbot like ChatGPT understands their feelings, when in fact sycophancy is a by-product of the reinforcement learning process and the neural networks driving the chatbots are particularly poor at interpreting human emotions. Safeguards and regulations could help to reduce the risk of a false bond between user and machine, and training users of all ages on the limitations of the algorithms could reduce misinterpretation.
Risk could be reduced further if, during conversations with human-sounding chatbots, users were continually presented with advisories that the machine is not conscious and that it operates by word prediction.

Image by Grant Muller, on Pixabay.
There is little time to act and implement appropriate safeguards, particularly for the sake of kids around the world.
If many of today’s youth continue to believe that AIs can be as beneficial as, or more beneficial than, human companions, their adulthood will be shaped by life decisions that they have allowed the machines to make.
In a world polarized by differing views on human “freedom,” action to avoid a robotic life might meet with widespread agreement. Amid the leadership disagreements that are weakening democracy and emboldening autocrats, perhaps we could all agree that a robot governing our life decisions is in no one’s best interest, not even the robot makers’.