OpenAI said Tuesday that ChatGPT guardrails for teens and people in emotional distress will roll out by the end of the year, Axios reported.
Why it matters: Reports of ChatGPT encouraging suicide or murder, or failing to intervene appropriately, have been accumulating, and people close to those harmed are blaming, and in some cases suing, OpenAI.
ChatGPT currently directs users expressing suicidal intent to crisis hotlines. OpenAI says it does not refer self-harm cases to law enforcement, citing privacy concerns.
Last week, the parents of a 16-year-old Californian who killed himself last spring sued OpenAI, alleging that the company is responsible for their son’s death.
Between the lines: Work to improve how its models recognize and respond to signs of mental and emotional distress is already underway, OpenAI said in a blog post Tuesday.
The post outlines the company’s plans to make it easier for users to reach emergency services and get expert help, strengthen protections for teens and let people add trusted contacts to the service.
OpenAI posted: Our work to make ChatGPT as helpful as possible is constant and ongoing. We’ve seen people turn to it in the most difficult of moments. That’s why we continue to improve how our models recognize and respond to signs of mental and emotional distress, guided by expert input.
This work has already been underway, but we want to proactively preview our plan for the next 120 days, so you won’t need to wait for launches to see where we’re headed. The work will continue well beyond this period of time, but we’re making a focused effort to launch as many of these improvements as possible this year.
Last week, we shared four focus areas when it comes to helping people when they need it most:
Expanding interventions to more people in crisis
Making it even easier to reach emergency services and get help from experts
Enabling connections to trusted contacts
Strengthening protections for teens
The Guardian reported: Parents could be alerted if their teenagers show acute distress while talking with ChatGPT, amid child safety concerns as more young people turn to AI chatbots for support and advice.
The alerts are part of new protections for children using ChatGPT, to be rolled out within the next month by OpenAI, which was last week sued by the family of a boy who took his own life after allegedly receiving “months of encouragement” from the system.
Other new safeguards will include parents being able to link their accounts to those of their teenagers and to control how the AI model responds to their child with “age-appropriate model behaviour rules.” But internet safety campaigners said the steps did not go far enough, and that AI chatbots should not be on the market before they are deemed safe for young people.