
Belgian man dies by suicide following long chats about climate change with AI bot

By Mehul Reuben Das

A young Belgian man recently died by suicide after talking to a chatbot called ELIZA for several weeks, prompting demands for better protection of citizens and greater awareness of the risks of AI.

“My husband would still be here if it hadn’t been for these conversations with the chatbot,” the man’s wife told La Libre. She and her late husband were both in their thirties, lived comfortably, and had two small children.

Death by AI: How a chatbot drove a man to suicide

The first signs of trouble emerged about two years ago. The man became extremely anxious about the environment and sought refuge in conversations with ELIZA, a chatbot built on GPT-J, an open-source artificial intelligence language model created by EleutherAI. He took his own life after six weeks of intense, lengthy chats about the environment and climate change.

The family met last week with Mathieu Michel, Secretary of State for Digitalisation, in charge of Administrative Simplification, Privacy, and Building Regulation. “I am particularly moved by the tragedy of this family. What has occurred is a serious precedent that must be taken very seriously,” he stated on Tuesday.

He emphasised that this instance demonstrates the importance of “clearly defining responsibilities.”

The need to weigh risks

“With the rise of ChatGPT, the general public has become more aware than ever of the potential of artificial intelligence in our lives,” Michel said. While the opportunities are limitless, the risk of misuse must also be weighed.

The death has alarmed authorities, who have raised concern over a ‘serious precedent that must be taken very seriously’.

To prevent a similar tragedy in the future, Michel asserted that it is critical to determine the nature of the responsibilities that contributed to this one.

“Of course, we have yet to learn to live with algorithms,” he said, “but the use of any technology should never lead content publishers to shirk their own responsibilities.”


OpenAI has acknowledged that ChatGPT can generate harmful and biased results, but it hopes to mitigate the issue by gathering user feedback.

The need to increase awareness

In the long run, Michel believes it is critical to raise awareness of the effect of algorithms on people’s lives “by enabling everyone to understand the nature of the content people encounter online.”