Parents could soon have more control over how their children interact with ChatGPT. OpenAI says it will roll out parental controls for its AI chatbot aimed at giving parents more oversight. OpenAI’s announcement comes in the wake of a wrongful death lawsuit filed by two parents against the company over what they claim was ChatGPT’s role in their 16-year-old son’s suicide. The lawsuit comes at a time of mounting concern about how people interact with artificial intelligence chatbots, and about those chatbots’ tendency to mishandle sensitive, and potentially fatal, conversations.
In light of that, the changes OpenAI is making to ChatGPT might seem like a step in the right direction. Most notably, parents will be able to receive alerts from ChatGPT if it detects that their child is “in a moment of acute emotional distress.” However, experts argue these changes are insufficient to address the root of the concerns: that chatbots mishandle mental health crises and foster AI-driven delusions.