OpenAI Says ChatGPT Now Offers “Gentle Reminders” To Take Breaks During Long Sessions

OpenAI has announced that ChatGPT will now remind users to take breaks if they’re in a particularly long chat with the AI. The new feature is part of OpenAI’s ongoing attempts to cultivate a healthier relationship between users and the frequently compliant and overly encouraging AI assistant, Engadget reported. 

The company’s announcement suggests the “gentle reminders” will appear as pop-ups in chats that users will have to click or tap through to continue using ChatGPT. “Just Checking In,” OpenAI’s sample pop-up reads. “You’ve been chatting for a while — is this a good time for a break?” The system is reminiscent of the reminders some Nintendo Wii and Switch games will show you if you play for an extended period of time, though there’s an unfortunately dark context to the ChatGPT feature.

Some of the users whose delusions ChatGPT indulged already had a history of mental illness, but the chatbot still did a bad job of consistently shutting down unhealthy conversations. OpenAI acknowledges some of those shortcomings in its blog post, and says that ChatGPT will be updated in the future to respond more carefully to “high-stakes personal decisions.” 

The Verge reported: OpenAI, which is expected to launch its GPT-5 AI model this week, is making updates to ChatGPT that it says will improve the AI chatbot’s ability to detect mental or emotional distress. To do this, OpenAI is working with experts and advisory groups to improve ChatGPT’s response in these situations, allowing it to present “evidence-based resources when needed.”

In recent months, multiple reports have highlighted stories from people who say their loved ones have experienced mental health crises in situations where using the chatbot seemed to have an amplifying effect on their delusions. OpenAI rolled back an update in April that made ChatGPT too agreeable, even in potentially harmful situations. At the time, the company said the chatbot’s “sycophantic interactions can be uncomfortable, unsettling, and cause distress.”

OpenAI acknowledges that its GPT-4o model “fell short in recognizing signs of delusion or emotional dependency” in some instances. “We also know that AI can feel more responsive and personal than prior technologies, especially for vulnerable individuals experiencing mental or emotional distress,” OpenAI says.

Mashable reported: ChatGPT is getting a health upgrade, this time for users themselves.

In a new blog post ahead of the company’s reported GPT-5 announcement, OpenAI said it would be refreshing its generative AI chatbot with new features designed to foster healthier, more stable relationships between user and bot. Users who have spent prolonged periods of time in a single conversation, for example, will now be prompted with a gentle nudge to log off. 

The company is also doubling down on fixes to the bot’s sycophancy problem, building out its models to recognize mental and emotional distress.