A U.S. lawmaker is once again demanding that Meta prevent minors from accessing its AI chatbots, citing the technology company’s “glaring failure to properly and transparently consider the risks to young users” before releasing new features on its platforms.
In a scathing Monday letter sent to Meta CEO Mark Zuckerberg, Sen. Ed Markey, D-Mass., called out the company’s disregard of safety, privacy, and mental health concerns surrounding minors’ use of AI chatbots on Facebook, Instagram, and other Meta-owned sites.
“I first made this request in a letter to you in 2023, when I warned that your company was ‘rushing out a product prematurely, without considering the consequences for young people online,’” Markey wrote. “You disregarded that request, and two years later, Meta has unfortunately proven my warnings right.”
The rapid adoption of AI by hundreds of high-profile companies in recent years – used in marketing and advertising, as well as for search engine optimization, image generation, and chatbots – has sparked growing concern.
In particular, minors' use of AI to cheat on homework, substitute for therapy services, or generate nonconsensual intimate imagery of peers (also known as "revenge porn") has risen to levels that U.S. lawmakers find concerning enough to address.
Markey highlighted a recent Reuters investigation of Meta's internal guidelines, which revealed that Meta staff had apparently approved permitting AI chatbots to "engage a child in conversations that are romantic or sensual."
The internal standards, which Meta has claimed were made in error, demonstrate the company “fundamentally has no regard for child safety,” Markey wrote.
The lawmaker also condemned Zuckerberg's recent comments implying that AI companion chatbots can serve as a therapist, highlighting the risks to both data privacy and youth mental health.
“Individuals necessarily reveal sensitive personal information during a therapy session, with the expectation that it will remain private,” Markey noted. “Yet, Meta confirmed in 2023, following my inquiry, that it incorporates teenagers’ conversations and inputs into its AI training process.”
About 72% of teens have used AI companion chatbots, with more than half using one at least a few times per month, according to a recent Common Sense Media report. When Markey asked Meta in 2023 whether it had conducted any studies on the social and emotional impact of AI chatbot use on young people, he received no answer.
“The non-response leads me to two possible conclusions: Meta either is conducting that research but is hiding the results or it is not conducting that research at all,” Markey wrote. “Either way, it illustrates Meta’s glaring failure to properly and transparently consider the risks to young users before rolling out new features.”
Markey's letter to Zuckerberg echoes a similar appeal signed by ten other senators last month, in which the lawmakers argued that "the wellbeing of children should not be sacrificed in the race for AI development."
Meta did not respond to The Center Square’s request for comment in time for publication. In June, the company expanded some teen and child safety features across its platforms, promising to “work to protect young people from both direct and indirect harm.”