Experts are warning that ChatGPT could distribute harmful medical advice after a man developed a rare condition and came to believe his neighbour was poisoning him.
A 60-year-old man developed bromism after removing table salt from his diet following an interaction with the AI chatbot, according to an article in the journal Annals of Internal Medicine. The patient told doctors he had read about the negative effects of table salt and had asked the AI bot to help him eliminate it from his diet.
Bromism, also known as bromide toxicity, “was once a well-recognised toxidrome in the early 20th century” that “precipitated a range of presentations involving neuropsychiatric and dermatologic symptoms”, the study said.
Initially, the man believed his neighbour was poisoning him and was experiencing “psychotic symptoms”. He was noted to be paranoid about the water he was offered and tried to escape from the hospital within a day of presenting there. His symptoms improved after treatment.
He told doctors he had been taking sodium bromide over a three-month period after reading that table salt, or sodium chloride, “can be swapped with bromide, though likely for other purposes, such as cleaning”. Sodium bromide was used as a sedative by doctors in the early part of the 20th century.
The case, according to experts from the University of Washington in Seattle who authored the article, revealed “how the use of artificial intelligence can potentially contribute to the development of preventable adverse health outcomes”. The authors of the report said it was not possible to access the man’s ChatGPT log to determine exactly what he was told, but when they asked the system to give them a recommendation for replacing sodium chloride, the answer included bromide.
The response did not ask why the authors were seeking the information, nor did it provide a specific health warning. The case has left scientists fearing that “scientific inaccuracies” generated by ChatGPT and other AI apps could “fuel the spread of misinformation”, as such systems “lack the ability to critically discuss results”.
Last week, OpenAI announced the release of GPT-5, the fifth generation of the artificial intelligence technology that powers ChatGPT. The new model would be better at “flagging potential concerns” such as illnesses, OpenAI said, according to The Guardian. The company also stressed that ChatGPT was not a substitute for professional medical care.