ChatGPT will stop telling users to break up with their partners

ChatGPT will stop telling people they should break up with
their romantic partners, the popular artificial intelligence (AI) chatbot’s
developer, OpenAI, has said.
As part of changes in how the chatbot responds to users’
emotionally vulnerable requests, the American AI research company said ChatGPT
will soon stop offering clear-cut answers to “personal challenges” and instead
help people to mull over problems such as potential breakups.
“When you ask something like ‘Should I break up with my
boyfriend?’ ChatGPT shouldn’t give you an answer. It should help you think it
through—asking questions, weighing pros and cons,” OpenAI said in a blog post.
“New behaviour for high-stakes personal decisions is rolling
out soon.”
The company added: “Helping you thrive
means being there when you’re struggling... and guiding—not deciding—when you
face personal challenges.”
The update follows concerns that OpenAI’s large language
models, particularly GPT‑4o, released in May 2024, occasionally failed to
recognize signs of emotional dependency or delusion in user conversations.
OpenAI acknowledged these lapses and announced it is
developing tools to better detect mental or emotional distress and direct users
to “evidence-based” support resources.
Other changes include prompts encouraging users to take
breaks during extended conversations, to help people manage their time and
reduce over-reliance on the tool.
OpenAI said it is working with more than 90 physicians across 30
countries, as well as mental health and human-computer interaction experts, to
improve ChatGPT’s responses in sensitive situations.
“Asking an AI for advice during an emotional crisis is
becoming more common,” OpenAI said. “We want to make sure the answers
support—not steer—people during those moments.”
OpenAI has said ChatGPT will reach 700 million monthly users
this week, up from 500 million in March.
But the ground-breaking chatbot has been criticised by some
health researchers for allegedly worsening symptoms of mental illnesses
such as psychosis.
While thousands of users have been turning to AI chatbots
for therapy and counselling, experts warn that ChatGPT may be fueling
delusions in vulnerable people.
In April, OpenAI rolled back an update to the GPT‑4o model after
admitting it had made ChatGPT overly agreeable and altered its tone.