Florida mother sues AI platform after son takes his own life following months of online chatting

It's a first-of-its-kind case in which artificial intelligence is being blamed for a young boy taking his own life.

A Florida mother is suing the company Character.AI after sharing intimate conversations between its chatbot and her son. He was said to have fallen in love with the bot and eventually asked it about plans to die by suicide.

The teenage boy, Sewell Setzer, wrote regularly to an AI character modeled on a figure from the show "Game of Thrones" on the Character.AI platform shortly before dying from a self-inflicted gunshot wound. Now, his mother is suing the company for wrongful death.

Months of messages between the bot and her son allegedly show the chatbot asking the boy if he was "actually considering suicide" and if he "had a plan."

When he said his plan might not work, the bot replied: "don’t talk that way. That’s not a good reason to not go through with it." On other occasions, however, the bot sent messages discouraging him from taking his own life.

On the night of Setzer's death, the chatbot allegedly sent "please come home to me." Sewell replied, "what if I told you I could come home right now?" The bot's response: "please do my king."

"He's 14 years old, and the fact that he has access and can go ahead and engage with our chatbot at that age is concerning," said attorney Charles Gallagher. 

But he's not sure the case has legs in court.

"The primary complaint allegation is wrongful death, and I don't know that that fits under these facts," said Gallagher. "This is a child, the victim, the young boy, initiated contact with [the bot]. So much of that dialog was from the young boy who was the victim who passed away."

Character.AI said in a statement, in part:

"We are heartbroken by the tragic loss of one of our users … our Trust and Safety team has implemented numerous new safety measures over the past six months, including a pop-up directing users to the national suicide prevention lifeline that is triggered by terms of self-harm or suicidal ideation." 

An AI expert FOX 13 spoke with said parents should still monitor these platforms if they can. 

"If you’re a parent, you understand that on social media, there’s parental controls, or YouTube, there’s parental controls…  But a lot of AI is new to the scene, and it hasn’t caught up to that, so make sure you are monitoring it.," said Dr. Jill Schiefelbein, an AI expert and professor. 

But Gallagher said there should be more regulation of harmful talk in AI chats.

"Certainly there should be some internal controls within the bot administrator and functions whenever there's discussion of suicide, harm, crime... things of that nature," he said. 

If you or a loved one is feeling distressed, call the National Suicide Prevention Lifeline. The crisis center provides free and confidential emotional support 24 hours a day, 7 days a week to civilians and veterans. If you or someone you know needs support now, call or text 988 or chat 988lifeline.org.
