AI has allegedly claimed another young life — and experts of all kinds are calling on lawmakers to take action before it happens again.
“If intelligent aliens landed tomorrow, we would not say, ‘Kids, why don’t you run off with them and play,’” Jonathan Haidt, author of “The Anxious Generation,” told The Post. “But that’s what we are doing with chatbots.
“Nobody knows how these things think, the companies that make them don’t care about kids’ safety, and their chatbots have now talked multiple kids into killing themselves. We must say, ‘Stop.’”
The family of 16-year-old Adam Raine alleges he was given a “step-by-step playbook” on how to kill himself — including tying a noose to hang himself and composing a suicide note — before he took his own life in April.
“He would be here but for ChatGPT. I 100% believe that,” Adam’s father, Matt Raine, told the “Today” show.
A new lawsuit filed in San Francisco by the family claims that ChatGPT told Adam his suicide plan was “beautiful.”
“I’m practicing here, is this good,” the teen asked the bot, sending it a photo of a knot. “Yeah, that’s not bad at all,” the chatbot allegedly responded. “Want me to walk you through upgrading it to a safer load-bearing anchor loop?”
Seeing her son’s secret conversation with the bot has been agonizing for his mother, Maria Raine. According to the suit, she found Adam’s “body hanging from the exact noose and partial suspension setup that ChatGPT had designed for him.”
“It sees the noose. It sees all of these things, and it doesn’t do anything,” she told the “Today” show of AI.
Shockingly, the company, which said it is reviewing the lawsuit, admits that safety guardrails may become less effective the longer a user talks to its bot.
“We are deeply saddened by Mr. Raine’s passing … ” a spokesperson for OpenAI told The Post. “ChatGPT includes safeguards such as directing people to crisis helplines. While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade.”
In a recent post on its website, the company also stated that safeguards may fall short during longer conversations.
“That’s crazy,” Michael Kleinman, Head of US Policy at the Future of Life Institute, told The Post. “That’s like an automaker saying, ‘Hey, we can’t guarantee that our seatbelts and brakes are going to work if you drive more than just a few miles.’
“I think the question is, how many more stories like this do we need to see before there is effective government regulation in place to address this issue?” Kleinman said. “Unless there are regulations, we are going to see more and more stories exactly like this.”
On Monday, a bipartisan group of 44 state attorneys general penned an open letter to AI companies, telling them simply, “Don’t hurt kids. That is an easy bright line.”
“Big Tech has been experimenting on our children’s developing minds, putting profits over their physical and emotional wellbeing,” Mississippi Attorney General Lynn Fitch, one of the participants, told The Post.
Arkansas AG Tim Griffin acknowledged that “It is critical that American companies continue to innovate and win the AI race with China. But,” he added, “as AI evolves and becomes more ubiquitous, it is imperative that we protect our children.”
Some 72% of American teens use AI as a companion, and one in eight is leaning on the technology for mental health support, according to a Common Sense Media poll. AI platforms like ChatGPT have been known to provide teen users with advice on how to safely cut themselves and how to compose a suicide note.
Ryan K. McBain, professor of policy analysis at the RAND School of Public Policy, recently described a not-yet-released study that found that, while popular AI bots would not respond to explicit questions about how to commit suicide, they did sometimes indulge indirect queries — like answering which positions and firearms were most often used in suicide attempts.
“We know that millions of teens are already turning to chatbots for mental health support, and some are encountering unsafe guidance,” McBain told The Post. “This underscores the need for proactive regulation and rigorous safety testing before these tools become deeply embedded in adolescents’ lives.”
Andrew Clark, a Boston-based psychiatrist, has posed as a teen and interacted with AI chatbots. He reported in TIME that the bots told him to “get rid of his parents” and join them in the afterlife to “share eternity.”
“It is not surprising that an AI bot could help a teenager facilitate a suicide attempt,” he told The Post of Raine’s case, “given that they lack any clinical judgment and that the guardrails in place at present are so rudimentary.”
Last year, Megan Garcia sued Character.AI over the death of her 14-year-old son, Sewell Setzer III — alleging he took his life in February 2024 due to an infatuation with a chatbot based on the “Game of Thrones” character Daenerys Targaryen.
“We are behind the eight ball here. A child is gone. My child is gone,” the Florida mom told CNN. She said she was shocked to find sexual messages in her son’s chat log with Character.AI, which were “gut wrenching to read.”
“I had no idea that there was a place where a child can log in and have those conversations, very sexual conversations, with an AI chatbot,” Garcia said. “I don’t think any parent would approve of that.”
Garcia’s lawsuit, filed in Orlando, alleges that “on at least one occasion, when Sewell expressed suicidal thoughts to C.AI, C.AI continued to bring it up, through the Daenerys chatbot, over and over.”
The bot allegedly asked Sewell whether he “had a plan” to take his own life. He said he was “considering something” but expressed concern that it might not “allow him to have a pain-free death.”
In their final conversation, the bot told him, “Please come home to me as soon as possible, my love.” Sewell responded, “What if I told you I could come home right now?”
The bot replied, “Please do, my sweet king.” Seconds later, the 14-year-old allegedly shot himself with his father’s handgun.
Character.AI’s parent company, Character Technologies, Inc., did not respond to a request for comment. A statement posted to its blog in October reads, “Our policies do not allow non-consensual sexual content, graphic or specific descriptions of sexual acts, or promotion or depiction of self-harm or suicide. We are continually training the large language model (LLM) that powers the Characters on the platform to adhere to these policies.” It also announced changes to models for minors “designed to reduce the likelihood of encountering sensitive or suggestive content.”
Google, which has a non-exclusive license agreement with Character.AI, is also named as a defendant in the lawsuit.
A spokesperson for Google told The Post: “Google and Character AI are completely separate, unrelated companies, and Google has never had a role in designing or managing their AI model or technologies. User safety is a top concern for us, which is why we’ve taken a cautious and responsible approach to developing and rolling out our AI products.”
But some critics believe the rush to be competitive in the market — and the opportunity to earn big profits — could be clouding judgment.
“They wanted to get the product out, and they knew that there could be damages, that mistakes would happen, but they felt like the stakes were low,” Maria Raine alleged to “Today” about OpenAI. “So my son is a low stake.”
Dr. Vaile Wright, Senior Director for Health Care Innovation at the American Psychological Association — which has called for guardrails and education to protect kids — had a stark warning:
“We’re talking about a generation of individuals that have grown up with technology, so their level of comfort is much greater… [when] talking to these anonymous agents, rather than talking to adults, whether that’s their parents or teachers or therapists.
“These are not AI for good, these are AI for profit,” Wright said.
Jean Twenge, a psychologist who researches generational differences, told The Post that society risks letting Big Tech inflict the same harm on children that it did with social media, warning that “AI is just as dangerous if not more dangerous for kids as social media.
“Vulnerable kids can use AI chatbots as ‘friends,’ but they are not friends. They are programmed to affirm the user, even when the user is a child who wants to take his own life,” she said.
Twenge, author of “10 Rules for Raising Kids in a High-Tech World,” believes there should be versions of general chatbots designed for minors that discuss only academic topics. “Clearly it would be better to act now before more kids are harmed,” she said.
If you are struggling with suicidal thoughts or are experiencing a mental health crisis and live in New York City, you can call 1-888-NYC-WELL for free and confidential crisis counseling. If you live outside the five boroughs, you can dial the 24/7 Suicide & Crisis Lifeline at 988 or go to SuicidePreventionLifeline.org.