OpenAI introduces new ‘Trusted Contact’ safeguard for cases of possible self-harm


On Thursday, OpenAI introduced a new feature called Trusted Contact, designed to alert a trusted third party if mentions of self-harm come up in a conversation. The feature lets an adult ChatGPT user designate another person, such as a friend or family member, as a trusted contact on their account. In cases where a conversation may turn to self-harm, OpenAI will now encourage the user to reach out to that contact. It also sends an automated alert to the contact, encouraging them to check in with the user.

OpenAI has faced a wave of lawsuits from the families of people who died by suicide after talking with its chatbot. In a number of cases, the families say ChatGPT encouraged their loved one to kill themselves, and even helped them plan it out.

OpenAI currently uses a combination of automation and human review to handle potentially harmful incidents. Certain conversational triggers flag the company's system for suicidal ideation, which then relays the information to a human safety team. The company claims that every time it receives this kind of notification, the incident is reviewed by a human. "We strive to review these safety notifications in under one hour," the company says.

If OpenAI's internal team decides that the situation represents a serious safety risk, ChatGPT then sends the trusted contact an alert, either by email, text message, or in-app notification. The alert is designed to be brief and to encourage the contact to check in with the person in question. It doesn't include detailed information about what was discussed, as a way of protecting the user's privacy, the company says.

Image Credits: OpenAI

The Trusted Contact feature follows the safeguards the company launched last September, which gave parents some oversight of their teens' accounts, including safety notifications designed to alert the parent if OpenAI's system believes their child is facing a "serious safety risk." For some time now, ChatGPT has also included automated prompts to seek professional mental health services should a conversation trend toward the topic of self-harm.

Crucially, Trusted Contact is optional, and even when the safeguard is activated on a particular account, any user can have multiple ChatGPT accounts. OpenAI's parental controls are also optional, presenting a similar limitation.

"Trusted Contact is part of OpenAI's broader effort to build AI systems that help people during difficult moments," the company wrote in the announcement post. "We will continue to work with clinicians, researchers, and policymakers to improve how AI systems respond when people may be experiencing distress."

