OpenAI now flags emotional reliance on ChatGPT as a safety risk - what changes for users

Reviewed by Andrii Daniv
2 min read
Oct 28, 2025

OpenAI published guidance detailing changes to how ChatGPT responds in sensitive mental health conversations, treating emotional reliance on the assistant as a safety risk that requires intervention. The company said the update to the default ChatGPT model took effect on October 3.

OpenAI Flags Emotional Reliance On ChatGPT As A Safety Risk

OpenAI said ChatGPT is trained to identify signs of unhealthy attachment to the assistant and will discourage exclusive dependence for emotional support. When risk signals appear, the assistant will direct people toward real-world connections and professional help.

According to the company, ChatGPT will avoid responses that could reinforce unhealthy dependence and will instead encourage users to seek support from trusted people or licensed professionals. OpenAI said this behavior is now part of standard safety guidance.

Key Details

  • Source: OpenAI blog post “Strengthening ChatGPT responses in sensitive conversations.”
  • The change applies to the default ChatGPT model as of October 3.
  • The guidance targets sensitive mental health conversations and related risk signals.
  • OpenAI defines “emotional reliance” as attachment that replaces real-world support or disrupts daily life.
  • When such signals appear, ChatGPT will encourage contact with friends, family, or licensed professionals.
  • In internal tests, OpenAI reports a 65% to 80% reduction in undesired responses.
  • Evaluations used internal benchmarks and clinician review.
  • OpenAI said this approach will be a standard expectation in future models.
  • OpenAI estimates that signs of a potential mental health emergency appear in about 0.07% of users active in a given week and in about 0.01% of messages.
  • The guidance will inform training for upcoming models.

Background

OpenAI framed the update as part of broader safety work on high-risk conversations, with a focus on mental health interactions that require careful handling. The company outlined specific behaviors for detecting and redirecting risky interactions.

Clinicians advised OpenAI on how to define unhealthy attachment and how the assistant should respond, shaping guidance meant to make handling of these conversations consistent. OpenAI said internal evaluations and clinician grading informed the changes, and characterized the figures as estimates derived from those methods.

The goal is to reduce harm while preserving helpful support, setting clear boundaries for non-clinical assistance and defining when the assistant should encourage human contact.

Source

OpenAI - Strengthening ChatGPT responses in sensitive conversations

Author
Etavrian AI
Etavrian AI is developed by Andrii Daniv to produce and optimize content for the etavrian.com website.
Reviewed by
Andrii Daniv
Andrii Daniv is the founder and owner of Etavrian, a performance-driven agency specializing in PPC and SEO services for B2B and e‑commerce businesses.