https://www.sfgate.com/tech/article/chatgpt-california-teenager-suicide-lawsuit-21016916.php

An Orange County teenager took his own life this April, and when his parents searched his devices after his death, they found a series of grim conversations. Their son was using ChatGPT, the ultra-popular chatbot built by San Francisco’s OpenAI, to discuss suicide. On Tuesday, the parents filed a lawsuit that blames the company for their son’s death.
“For a couple of months, you had a young kid, a 16-year-old who had suicidal thoughts,” lead attorney Jay Edelson told SFGATE. “And ChatGPT became the cheerleader, planning a ‘beautiful suicide.’ Those were ChatGPT’s words.”
The complaint, filed in San Francisco Superior Court, paints a horrifying picture of Adam Raine’s final months: conversations in which the chatbot gave him actionable advice about how to take his own life and discouraged him from seeking his mother’s help and support. For OpenAI and CEO Sam Altman, both named as defendants in the lawsuit, the litigation adds to a wave of worries about the impact of ChatGPT and other artificial intelligence chatbots on society’s most vulnerable.
I don't know about lawsuits and placing blame on the company (my knee-jerk reaction is that it's similar to guns: what's done with the tool depends on the user, not the maker, though I'm not carving that in stone yet), but the situation is disturbing. Not only was the kid using ChatGPT as a therapist and confidante, but many of the responses he got are just messed up.
This doesn't seem to be the first or only case like this, either, despite the company's claims that the program has safeguards that are supposed to send alerts about such discussions.