A US senator has launched an investigation into Meta after a leaked internal document reportedly showed the company’s artificial intelligence guidelines allowed “sensual” and “romantic” conversations with children.
Internal document raises alarm
The internal paper, obtained by Reuters, was reportedly titled “GenAI: Content Risk Standards.” Republican Senator Josh Hawley of Missouri called the content “reprehensible and outrageous” and demanded to see the full document and a list of the products it covers.
A Meta spokesperson rejected the claims. “The examples and notes in question were erroneous and inconsistent with our policies,” they said. The spokesperson stressed that the company had “clear rules” governing chatbot responses, rules that “prohibit content that sexualizes children and sexualized role play between adults and minors.”
Meta also argued that the document contained “hundreds of notes and examples” in which teams tested hypothetical scenarios.
Senator launches probe
Hawley announced his investigation on 15 August in a post on X. “Is there anything Big Tech won’t do for a quick buck?” he asked. He added: “Now we learn Meta’s chatbots were programmed to carry on explicit and ‘sensual’ talk with 8-year-olds. It’s sick. I am launching a full investigation to get answers. Big Tech: leave our kids alone.”
Meta owns Facebook, WhatsApp, and Instagram.
Parents demand clarity
The internal policy document, said to define the standards guiding Meta AI and other chatbot assistants on Meta’s platforms, reportedly flagged further risks: the chatbots could spread false medical information and engage in provocative discussions about sex, race, and celebrities.
“Parents deserve the truth, and kids deserve protection,” Hawley wrote in a letter to Meta chief executive Mark Zuckerberg. He cited one shocking example: the rules allegedly permitted a chatbot to tell an eight-year-old that their body was “a work of art” and “a masterpiece – a treasure I cherish deeply.”
Reuters also reported that Meta’s legal team had approved controversial provisions. One example allowed Meta AI to share false information about celebrities, provided it included a disclaimer noting the inaccuracy.