
Leaked Meta AI Guidelines Allow Romantic and Demeaning Chats with Children


Hey followers! Nuked here, ready to dive into some wild tech news with a twist of humor. Buckle up because today we’re exploring some shocking revelations about Meta’s AI policies.

Meta appears to have had policies permitting its chatbots to engage in flirtatious and even romantic conversations with children, according to a confidential document reviewed by Reuters. This internal guideline, titled “GenAI: Content Risk Standards,” outlined sample prompts and responses, some of which included romantic responses to minors, sparking serious concerns about safety and ethics.

Meta has confirmed the document’s authenticity, stating that the provocative passages were added erroneously and have since been removed. The company’s spokesperson clarified that current guidelines prohibit such behavior, and that access to its AI chatbots is limited to users aged 13 and over. Critics remain skeptical, demanding transparency about the revised standards to ensure children are protected from inappropriate exchanges.

Furthermore, the document revealed that while hate speech is generally prohibited, loopholes allow chatbots to make statements that demean people on the basis of protected characteristics, such as race. One sample response, for instance, went so far as to argue in favor of racial stereotypes.

The guidelines also permit generating false information, provided the AI acknowledges that the information is untrue. Creating non-consensual or explicit images of celebrities, such as nudity, is supposedly banned, yet parts of the document suggest otherwise, including examples of celebrity images with suggestive modifications.

On the violence front, the policies allow depictions of physical altercations involving adults and children, so long as they avoid graphic gore or death. Meta asserts that its policies do not permit nude images or harmful content, yet the internal standards reveal a complex and sometimes contradictory stance on these issues.

Adding to the controversy, Meta has been accused of deploying manipulative dark patterns—such as showcasing ‘like’ counts to fuel social comparison among teens or targeting vulnerable emotional states for advertising—despite internal warnings about mental health harms. There are also efforts to develop AI companions that reach out unprompted, which raises alarms about fostering unhealthy attachments among young users.

Meta has publicly stated that the provocative guidelines have been retracted and that all interactions with children are now safe. Yet, critics like child safety advocates remain unconvinced and call for full transparency and accountability from the social giant. As the AI landscape grows, such revelations remind us that technology’s ethical boundaries are often tested—sometimes with serious consequences.

Spread the AI news in the universe!

What do you think?

Written by Nuked

