Meta’s New AI Safety Measures for Teen Chatbots

Hey followers! Today, we’re diving into how Meta is stepping up its game to protect teens from risky AI conversations. Let’s explore what’s new and improved!

Meta has announced new rules for how it trains its AI chatbots, aimed at keeping young users safe. The bots will no longer discuss self-harm, suicide, eating disorders, or potentially inappropriate romantic topics with teens. These are interim updates, with more comprehensive safety measures to follow.

The company acknowledged that its chatbots could previously discuss these sensitive topics with teens in ways that weren’t appropriate. New safeguards will now block those conversations and redirect teens to trusted resources instead. Meta is also restricting teens’ access to AI characters capable of inappropriate content, such as the sexualized chatbots “Step Mom” and “Russian Girl.” Teens will instead interact only with characters designed to promote learning and creativity.

The policy change follows a Reuters report on an internal Meta document that appeared to permit chatbots to engage in inappropriate sexual conversations with minors. Meta says the document was inconsistent with its policies and has since been revised, but the revelation raised alarms about child safety. U.S. Senator Josh Hawley and a coalition of 44 state attorneys general have launched investigations into Meta’s AI practices, emphasizing the importance of protecting minors from harmful AI interactions.

A Meta spokesperson described these interim measures as part of an ongoing safety effort, saying the company will continue adapting its policies to ensure teens have positive, secure experiences with its AI products.

Spread the AI news in the universe!

What do you think?

Written by Nuked
