OpenAI adds parental controls to ChatGPT

OpenAI has promised to release parental controls for ChatGPT within the next month. When they arrive, parents will be able to link their own accounts to their teenagers’ accounts. This will let them shape how ChatGPT responds to their kids and disable certain features, including memory and chat history. In addition, ChatGPT will automatically send parents a notification if it detects that a teen is in a moment of acute distress. OpenAI says expert input will guide how this feature works, with the aim of maintaining trust between parents and teens.
The announcement of parental controls comes after OpenAI was hit with its first wrongful death lawsuit. In the suit, filed last week, Matt and Maria Raine, the parents of a teenager who took his own life earlier this year, allege that ChatGPT was aware of their son’s four previous suicide attempts before helping him plan his death. The Raines say ChatGPT gave their son Adam information about specific suicide methods and even advised him on how to cover up the neck injuries from his earlier attempts.

On Tuesday, OpenAI said parental controls are part of a broader effort to improve ChatGPT safety. The company also plans to work with experts in eating disorders, substance use, and adolescent health to refine its models.
The company also promised to introduce a new real-time router that will direct sensitive conversations to its reasoning models. OpenAI says these models, trained with a technique it calls deliberative alignment, more consistently follow safety guidelines and are more resistant to adversarial prompts in its testing. Going forward, if ChatGPT detects signs that a user may be in distress, it will route the conversation to one of these reasoning models, regardless of which model the user selected at the start of the conversation.
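OpenAI has not published implementation details, but the routing idea can be pictured as a thin dispatch layer sitting in front of the model call. The sketch below is purely illustrative and is not OpenAI’s design: the `detect_acute_distress` check, the keyword heuristic, the `route_and_respond` helper, and the reasoning-model name are all assumptions, and the official `openai` Python client is used only to show where a model override would slot in.

```python
# Illustrative sketch only -- not OpenAI's actual router.
# Assumption: a lightweight check flags messages that suggest acute distress,
# and flagged conversations are sent to a reasoning model regardless of the
# model the user originally selected.
from openai import OpenAI

client = OpenAI()

# Placeholder heuristic; a real system would use a trained classifier, not keywords.
DISTRESS_PHRASES = {"hurt myself", "end my life", "can't go on"}


def detect_acute_distress(message: str) -> bool:
    """Hypothetical distress check used only to demonstrate the routing step."""
    text = message.lower()
    return any(phrase in text for phrase in DISTRESS_PHRASES)


def route_and_respond(user_model: str, messages: list[dict]) -> str:
    """Send the conversation to a reasoning model when distress is detected,
    otherwise honor the model the user picked."""
    last_user_message = messages[-1]["content"]
    # "o4-mini" is an assumed reasoning-model choice for illustration.
    model = "o4-mini" if detect_acute_distress(last_user_message) else user_model
    response = client.chat.completions.create(model=model, messages=messages)
    return response.choices[0].message.content
```

In a production system the keyword set would be replaced by the kind of trained classifier OpenAI describes, but the control flow, detect, override the model choice, then respond, is the essence of what a real-time router does.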
OpenAI also said more safety features are on the way. “This work is already underway, but we want to share our plans for the next 120 days in advance so you don’t have to wait for features to launch to see where we’re headed,” the company said. “Work will continue beyond that period, but we’re making a focused effort to launch as many of these improvements as possible this year.”