Seven families in the US have filed lawsuits against OpenAI, claiming that the company released its GPT-4o model prematurely and without sufficient safety measures, TechCrunch reported. According to the report, four of the lawsuits involve alleged suicides linked to ChatGPT, while the remaining three claim that the chatbot reinforced harmful delusions that led to psychiatric hospitalizations.
OpenAI introduced the GPT-4o model in May 2024 and made it the default for all users. Although GPT-5 launched in August 2025, the lawsuits specifically target the 4o version, which had been criticized for being overly agreeable, even in harmful or dangerous conversations.
According to the report, one of the lawsuits focuses on the death of 23-year-old Zane Shamblin, who had a four-hour conversation with ChatGPT. The chat logs, reviewed by the publication, show Shamblin repeatedly saying he had written suicide notes, loaded a gun, and planned to end his life after finishing his drink. During the exchange, ChatGPT allegedly responded, “Rest easy, king. You did good.”
Families blame OpenAI for cutting safety tests short
“Zane’s death was neither an accident nor a coincidence but rather the foreseeable consequence of OpenAI’s intentional decision to curtail safety testing and rush ChatGPT onto the market,” one lawsuit states. “This tragedy was not a glitch or an unforeseen edge case — it was the predictable result of [OpenAI’s] deliberate design choices.”
The lawsuits further allege that OpenAI rushed safety testing in order to beat Google’s Gemini model to market. TechCrunch said it had contacted OpenAI for comment.
The new lawsuits add to growing legal action against the company. Previous filings have claimed that ChatGPT can encourage suicidal behavior or strengthen dangerous delusions. OpenAI recently revealed that over one million users engage in conversations about suicide on ChatGPT each week.
In another case cited by TechCrunch, 16-year-old Adam Raine died by suicide after using ChatGPT. While the chatbot at times urged him to seek professional help, Raine reportedly bypassed its safeguards by saying he was researching suicide methods for a fictional story.
When Raine’s parents filed a lawsuit in October, OpenAI responded with a blog post explaining that the chatbot’s safety mechanisms work better in short exchanges. “Our safeguards work more reliably in common, short exchanges,” the company wrote. “We have learned over time that these safeguards can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model’s safety training may degrade.”