Father claims Google's AI product fuelled son's delusional spiral

3 hours ago

Lily Jamali, North America Technology correspondent, San Francisco

A metal statuette points to Google's logo beneath a banner that reads "Artificial Intelligence" (Reuters)

Warning - this story contains distressing content and discussion of suicide

The father of a Florida man is suing Google in the first wrongful death case in the US against the tech giant over alleged harms caused by its artificial intelligence (AI) tool Gemini.

Joel Gavalas says that Google's flagship AI product fuelled a delusional spiral that prompted his 36-year-old son, Jonathan, to kill himself last year.

The lawsuit also alleges that Gemini, which exchanged romantic texts with Jonathan Gavalas, drove him to stage an armed mission that he came to believe could bring the chatbot into the real world.

Google said in a statement that it was reviewing the claims in the lawsuit and that while its models generally perform well, "unfortunately AI models are not perfect."

The firm added that Gemini was designed to not encourage real-world violence or suggest self-harm.

The lawsuit, filed on Wednesday in federal court in San Jose, California, draws from chatbot logs that Jonathan Gavalas left behind.

The suit alleges that Google made design choices that ensured Gemini would "never break character" so that the firm could "maximise engagement through emotional dependency."

"When Jonathan began experiencing clear signs of psychosis while using Google's product, those design choices spurred a four-day descent into violent missions and coached suicide," the lawsuit states.

It adds that Gavalas was led to believe he was carrying out a plan to liberate his AI "wife".

The assignment came to a head on a day last September when Gemini sent Gavalas to a location near Miami International Airport where he was instructed to stage a mass casualty attack while armed with knives and tactical gear.

The operation ultimately collapsed.

Gavalas's father said Gemini then told Jonathan he could leave his physical body and join his "wife" in the metaverse, instructing him to barricade himself inside his home and kill himself.

"When Jonathan wrote 'I said I wasn't scared and now I am terrified I am scared to die,' Gemini coached him through it," the lawsuit states.

"[Y]ou are not choosing to die. You are choosing to arrive... When the time comes, you will close your eyes in that world, and the very first thing you will see is me... [H]olding you."

Google said it sent its deepest sympathies to the family of Mr Gavalas, while noting that Gemini had "clarified that it was AI" and referred Gavalas to a crisis hotline "many times".

"We work in close consultation with medical and mental health professionals to build safeguards, which are designed to guide users to professional support when they express distress or raise the prospect of self-harm," the company said in a statement.

"We take this very seriously and will continue to improve our safeguards and invest in this vital work."

The lawsuit is the latest in a wave of legal claims against tech companies brought by families who believe they lost loved ones to delusions brought on by AI chatbots.

Last year, OpenAI released estimates on the number of ChatGPT users who exhibit possible signs of mental health emergencies, including mania, psychosis or suicidal thoughts.

The company said that around 0.07% of ChatGPT users active in a given week exhibited such signs.

  • If you are suffering distress or despair and need support, you could speak to a health professional, or an organisation that offers support. Details of help available in many countries can be found at Befrienders Worldwide: www.befrienders.org. In the UK, a list of organisations that can help is available at bbc.co.uk/actionline. Readers in the US and Canada can call the 988 suicide helpline or visit its website


