User Error (Update)

When I wrote previously about the tragic death of 16-year-old Adam Raine, I said “not knowing is no longer an excuse”, and the attempts to paper over what really happened may finally be tested in court.

An article from SFGATE reports that while Google and Character.AI have been working to settle lawsuits with the families who say their chatbots helped push their kids toward self-harm, OpenAI is still arguing that Adam’s death is fundamentally a matter of “user error”.

And because of that, Adam Raine’s case might become the first chatbot lawsuit to actually reach a jury.

The Google and Character.AI settlements meant their arguments were never fully tested in court, but parts of the legal system are starting to catch on.

A judge in Florida recently allowed a product-liability claim against a chatbot to go forward and declined to dismiss the case on First Amendment grounds, unconvinced that GenAI output qualifies as protected speech.

As I’ve said before, we can’t say we weren’t warned about the dangers of using these systems for mental health support, or about how powerfully their marketing and deployment encourage us to anthropomorphize them.

With luck and perseverance, the output and effects of AI might be scrutinized in a way that sets legal precedent for its use and tests our new laws and regulations.

You can read the full complaint here: https://www.documentcloud.org/documents/26078522-raine-vs-openai-complaint/

* If you or someone you know is struggling: in the U.S., you can call/text 988. In the UK & ROI, Samaritans are available at 116 123. If you’re elsewhere, local emergency numbers and crisis lines can help.

