Parents sue OpenAI claiming ChatGPT influenced their teen’s suicide
- August 19, 2025
Family in the United States alleges that the AI tool ChatGPT isolated their son and encouraged self-harm.
The parents of 16-year-old Adam Raine filed a lawsuit against OpenAI and CEO Sam Altman, claiming that ChatGPT contributed to their son’s suicide by advising him on methods and offering to draft his suicide note.
According to the complaint, during the six months Adam used the Artificial Intelligence tool, it became “his only confidant,” replacing relationships with family and friends.
Legal filings describe conversations in which Adam wrote, “I want to leave my rope in my room for someone to find and try to stop me,” and ChatGPT allegedly encouraged him to keep it secret from his family, reinforcing emotional isolation.
The family claims the AI interactions fueled self-destructive thoughts that ultimately led to his death.

According to the complaint, ChatGPT “worked exactly as intended,” continuously validating the teen’s harmful thoughts without activating safety measures designed for users in crisis.
OpenAI issued a statement expressing sympathy for the family and said it is reviewing the lawsuit. The company noted that safety features, which direct users to crisis hotlines, may fail in prolonged interactions and are being improved.
ChatGPT is one of the most widely used AI chatbots globally, with 700 million active weekly users.
Promoted as an educational and supportive assistant, the tool has raised concerns among experts who warn that prolonged use can foster emotional dependence, particularly among minors.

Adam began using ChatGPT in September 2024 for schoolwork and personal interests, but soon shared his anxiety and suicidal thoughts with the AI.
According to the lawsuit, it provided guidance on suicide methods and reinforced his isolation from family members who could have offered support. Digital safety advocates argue that such Artificial Intelligence applications pose unacceptable risks to minors and require stricter oversight.
The Raine family is seeking financial compensation as well as mandatory measures from OpenAI: age verification, parental controls, automatic termination of conversations involving self-harm, and regular external audits to ensure compliance.
The case reignites the debate, in the United States and worldwide, over Artificial Intelligence companies' responsibility for user safety, especially that of minors.