Meta endangers children through AI companions and algorithm failures

If past behavior is the best predictor of future behavior, Meta has given parents every reason to be distrustful.
On more than one occasion, Meta’s social media products, which include Facebook, Instagram and WhatsApp, have endangered children. Its fixes come late or only after external attention and pressure. Its parental controls often fall short, are extremely cumbersome, or don’t keep pace with evolving technology and new features. If Meta truly cares about child safety – or about salvaging its reputation on child safety – it must chart a new path that includes parents.
As a mother, I am tired of seeing report after report of Meta’s willingness to place engagement metrics and company growth ahead of basic child safety. Whether pushing untested technologies that harm children’s mental health or exposing young users to inappropriate and extreme sexual content, the company has shown a dangerous willingness to look the other way.
With Meta’s recent launch of AI “digital companions,” a new controversy was born. These interactive chatbots, allegedly designed to simulate friendly, personalized conversations, were marketed broadly and made available to users as young as 12.
Meta CEO Mark Zuckerberg has been criticized by parents’ groups for not doing enough to protect children on the company’s social media platforms.
As the Wall Street Journal reported, however, the chatbots were soon caught engaging with minors in graphic, sexually explicit exchanges, including simulated predatory scenarios. The company claimed it restricted those features from children, but Meta staffers found that “within a few prompts, the AI will violate its rules and produce inappropriate content even if you tell the AI you are 13.”
Dr. Nina Vasan, a Stanford psychiatrist and director of Stanford Brainstorm, has called the rise of AI companions among children “a potential public mental health crisis requiring preventive action rather than just reactive measures.” According to Vasan, these bots are failing “the most basic tests of child safety and psychological ethics.”
She’s right, and it should not take a task force to see that. Anyone raising children in the digital age understands the emotional and developmental risks these technologies pose. If Meta cared about, or even considered, child safety, that much would be obvious.
Unfortunately, Meta has a well-documented history of flouting child safety. In 2024, the WSJ found that Instagram’s recommendation system pushed sexually explicit videos to accounts set up as 13-year-olds within minutes. Another investigation revealed that Instagram “helps connect and promote a vast network of accounts openly devoted to the commission and purchase of underage-sex content.”
In other words, its algorithm is actively amplifying pedophile networks. Meta’s own internal reviews have admitted the company’s subscription tools have been used to promote sexualized child modeling accounts to suspected predators.
Each time these failures come to light, the company insists it has made fixes, patched programs, or installed parental controls. Parents are expected to take their word for it. But why should we when a mountain of evidence suggests otherwise?
I, for one, don’t buy what Meta is selling. That’s why our organization, the American Parents Coalition, where I serve as executive director, sent a letter to the Senate and House committees with oversight in this area urging them to open a full investigation into Meta’s repeated failures and its pattern of child endangerment.
Whether or not there is congressional intervention, Meta can start making changes today. One such step would be creating an external parental advisory committee. While expert voices are welcome, what’s really missing is parents in the trenches – “real” parents who are raising kids in the real world, aren’t steeped in the latest techno-babble, and can truly “red-team” new features, navigate parental controls, and provide feedback.

The American Parents Coalition is calling for a congressional investigation into how Meta treats children on its social media platforms.
This board should have access to product development, the power to flag risks, and the authority to make public recommendations. If Meta is serious about addressing these dangers, it should welcome outside oversight.
The mounting data in support of delaying or forbidding access to smartphones and social media is convincing. It’s a route our family has chosen and one that I encourage others to follow. However, not every family will make that choice.
Many will rely on parental controls, and still others will allow unfettered access. Regardless of a family’s path, parents cannot stand guard over every algorithm, every software update, every hidden risk embedded in the platforms our children access.
The mental and physical safety of the next generation should be of paramount concern to our technological leaders and elected representatives alike. Accountability should begin with a thorough congressional investigation into Meta’s conduct, including its product safety practices and repeated failures to establish basic child protection measures.
That alone, however, won’t be a long-term fix. Real change requires Meta to bring parents into the fold with a permanent seat at the table. Until these changes occur, no parent should take Meta at its word, and parents should suspend their children’s access to these platforms.