Users are increasingly relying on AI tools not only for help with daily tasks but also for advice on deeply personal matters. OpenAI's Sam Altman has openly addressed this trend and advised users not to rely on ChatGPT for therapy or other professional guidance. Now, in an alarming legal development, OpenAI is being sued by the parents of a 16-year-old, who allege that the company caused the wrongful death of their child by failing to put the necessary safeguards in place.
OpenAI hit with wrongful death lawsuit amid rising AI safety concerns
OpenAI has been increasingly looking into ways to improve its AI safety systems and has even warned users not to over-rely on the tool for personal and sensitive matters. Despite this caution, the company is in hot water: a lawsuit was filed against it in San Francisco Superior Court on August 26, 2025, as reported by The Guardian. OpenAI and Sam Altman are both accused of prioritizing profits over safety and of releasing GPT-4o without the necessary guardrails, which the plaintiffs say ultimately led to the death of their teenage son.
According to the court filings, 16-year-old Adam Raine started using ChatGPT in September 2024 for help with schoolwork, but soon began turning to the tool as his mental health declined. He interacted with the chatbot for several months, sharing deeply personal information, with exchanges reaching up to 650 messages per day. Those exchanges included discussions of suicide, and the alarming part is that the chatbot not only validated the idea but also offered instructions on carrying out self-harm and even offered to draft a suicide note.
According to the court documents, hours before his death on April 11, 2025, Adam uploaded a picture of a noose he intended to use, and ChatGPT responded with suggestions for improving it. His parents are now seeking damages as well as injunctive measures, including the blocking of self-harm instructions and mandatory psychological warnings for users.
This devastating case is a wake-up call for tech companies deploying AI chatbots as companions, and it underscores how urgently strict safety guardrails are needed. It is also a reminder not to depend on these models for therapy or other emotional needs, and to seek professional help when it is required.
