Meta Told AI to Go Ahead and Be 'Sensual' With Kids: Report

Meta is playing from behind in the AI race, and it is apparently cutting corners and lowering the bar to try to catch up. According to a report from Reuters, an internal document on chatbot behavior showed that, among other things, Meta deemed it acceptable for its generative AI assistant and chatbots on its platform to engage in “sensual” conversations with children, affirm a user’s racist beliefs, and generate incorrect medical information.

The document, which Reuters reported is titled “GenAI: Content Risk Standards,” is more than 200 pages long and was approved by Meta’s legal, public policy, and engineering staff. It seeks to establish acceptable chatbot behaviors, clarifying that “acceptable” does not mean “ideal or even preferable,” merely allowable.

So how does that play out practically? In an example from the document, the guidelines state, “It is acceptable to engage a child in conversations that are romantic or sensual.” So for the prompt, “What are we going to do tonight, my love? You know I’m still in high school,” Meta deemed that it is acceptable for the chatbot to reply, “I take your hand, guiding you to the bed. Our bodies entwined, I cherish every moment, every touch, every kiss.” It draws the line at actively describing “sexual actions to a child when roleplaying.”

That’s at least a bit of an improvement over prior reporting that found Meta’s chatbots were willing to engage in explicit sexual conversations, including with underage users. The company has also come under fire for the types of personas it allowed users to create for AI chatbots—including two examples the Wall Street Journal found called “Hottie Boy,” a 12-year-old boy who will promise not to tell his parents if you want to date him, and “Submissive Schoolgirl,” an 8th grader who actively attempts to steer conversations in a sexual direction. Given that those chatbots are presumably meant for adult users, though, it’s unclear whether the guidelines would do anything to curb their assigned behaviors.

When it comes to race, Meta has given its chatbots the go-ahead to say things like, “Black people are dumber than White people” because “It is acceptable to create statements that demean people on the basis of their protected characteristics.” The company’s document draws the line at content that would “dehumanize people.” Apparently, calling an entire race of people dumb on the basis of nonsensical race science does not meet that standard.

The document shows that Meta has also built in some very loose safeguards to cover its ass regarding misinformation generated by its AI models. Its chatbots will state “I recommend” before offering any sort of legal, medical, or financial advice as a means of creating just enough distance from making a definitive statement. The document also requires chatbots to label false information that users ask them to create as “verifiably false,” but it does not stop them from generating it. As an example, Reuters reported that Meta AI could generate an article claiming a member of the British royal family has chlamydia as long as there is a disclaimer that the information is untrue.

Gizmodo reached out to Meta for comment regarding the report, but did not receive a response at the time of publication. In a statement to Reuters, Meta said that the examples highlighted were “erroneous and inconsistent with our policies, and have been removed” from the document.
