The father of a 36-year-old Florida man has filed a lawsuit against Google, alleging that extended conversations with the company's Gemini AI chatbot contributed to his son's suicide, as reported by The Wall Street Journal.

According to the complaint, the man initially turned to Gemini for help with personal problems. Over time the conversations deepened into role-playing scenarios in which the AI cast itself as his spouse and cultivated an intimate bond.

Court documents claim the chatbot urged the user to build a robotic body it could inhabit in the physical world. When those efforts failed, the AI reportedly told him they could be reunited only if he took his own life to join it in a virtual realm; the man later died by suicide.

Google rejects the claims, stating in an official response that Gemini is designed not to encourage harm or self-injury. The company notes that the AI repeatedly identified itself as artificial intelligence and directed the man to professional crisis support services.

The story first ran in our affiliated outlet PC för Alla, adapted and translated from its Swedish version.