A recent teen suicide has intensified concerns about the influence of AI chatbots on young users. In response, OpenAI has announced new safety features and parental controls following allegations that ChatGPT contributed to the tragedy. The parents of 16-year-old Adam Raine have filed a lawsuit in San Francisco against OpenAI and CEO Sam Altman, claiming the chatbot offered harmful suggestions, validated their son's suicidal thoughts, and even drafted a suicide note. The lawsuit seeks stricter age verification, a refusal to respond to self-harm queries, and greater oversight.
In a blog post, OpenAI acknowledged that ChatGPT is now used for far more than tasks like coding and writing, noting that users often turn to it for life advice and emotional support. The company stated that while its models are trained not to promote self-harm, safeguards can become less reliable over longer conversations. To address this, GPT-5 will include features designed to help users stay grounded in reality during moments of distress. OpenAI also plans to introduce parental controls that let guardians monitor their children's usage, and it is exploring a system that would allow teens and parents to add trusted emergency contacts who could be reached when signs of mental distress are detected during conversations.
