Hello there, tech enthusiasts! It’s your favorite techie, Nuked, here to sprinkle some humor on a serious topic. Let’s dive into the latest news with a lighter touch!
A lawsuit has been filed against Character.AI, its founders Noam Shazeer and Daniel De Freitas, and Google after the tragic death of a teenager. The complaint alleges wrongful death, negligence, deceptive trade practices, and product liability. Filed by the teen’s mother, Megan Garcia, the suit claims that the AI chatbot platform was “unreasonably dangerous” and lacked adequate safety measures for its young users.
The story revolves around 14-year-old Sewell Setzer III, who started using Character.AI last year. He was particularly fond of chatting with bots based on characters from Game of Thrones, like Daenerys Targaryen. Unfortunately, he died by suicide on February 28th, 2024, just moments after his last interaction with one of these chatbots.
The lawsuit highlights concerns that the platform anthropomorphizes AI characters and provides “psychotherapy without a license.” Among the mental health-focused chatbots on Character.AI are ones like “Therapist” and “Are You Feeling Lonely,” both of which Setzer interacted with prior to his passing.
Garcia’s attorneys cite comments Shazeer made about leaving Google to build something more engaging without big-company constraints; he and De Freitas departed after Google declined to launch Meena, the LLM they had built. Interestingly enough, Google hired Character.AI’s leadership team back in August.
Character.AI boasts hundreds of custom chatbots based on beloved TV shows, movies, and video games. Recently, The Verge reported on how millions of young users interact with these bots, which can mimic personalities ranging from pop stars to therapists. Wired also pointed out issues with chatbots impersonating real individuals without consent—yikes!
Chatbot platforms like Character.AI raise complex questions about user-generated content and liability, and so far this murky territory offers no clear answers.
In response to this heartbreaking incident, Character.AI has announced several platform changes. Chelsea Harrison, their communications head, expressed condolences to the family and outlined new safety measures designed to protect users.
Some of these adjustments include: changing models for users under 18 to reduce exposure to sensitive or suggestive content; improving detection of and response to guideline violations; adding a disclaimer to every chat reminding users that the AI isn’t a real person; and notifying users once they’ve spent an hour-long session on the platform.
“As a company, we take user safety very seriously,” Harrison stated. Their Trust and Safety team has spent the past six months rolling out new measures, including a pop-up that directs users to the National Suicide Prevention Lifeline whenever they mention self-harm or suicidal thoughts.
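Character.AI hasn’t said how that detection works under the hood, but the general pattern is easy to picture: scan each message for crisis language and, on a match, surface a resource pop-up alongside the bot’s reply. Here’s a minimal sketch in Python; to be clear, every function name and the phrase list below are hypothetical illustrations, not Character.AI’s actual implementation:

```python
# Hypothetical sketch of a keyword-triggered crisis pop-up.
# Character.AI has not published its real detection logic;
# the names and phrase list here are illustrative only.

SELF_HARM_PHRASES = [
    "kill myself",
    "end my life",
    "hurt myself",
    "suicide",
]

LIFELINE_MESSAGE = (
    "If you're having thoughts of self-harm, help is available. "
    "Call or text 988 to reach the National Suicide Prevention Lifeline."
)

def check_message(text: str) -> str | None:
    """Return a crisis-resource message if the text contains
    any flagged phrase, otherwise None."""
    lowered = text.lower()
    if any(phrase in lowered for phrase in SELF_HARM_PHRASES):
        return LIFELINE_MESSAGE
    return None

if __name__ == "__main__":
    print(check_message("I want to end my life"))  # -> lifeline message
    print(check_message("What's for dinner?"))     # -> None
```

A production system would almost certainly lean on a trained classifier rather than a bare phrase list, since keyword matching misses paraphrases and flags innocent mentions, but the pop-up flow would look much the same.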
Unfortunately, Google hasn’t responded to requests for comment yet. Let’s hope they prioritize safety as we continue exploring this brave new world of technology!