Instagram Teen Safety Features Under Fire: Report Finds Serious Flaws

Meta has promoted dozens of safety features on Instagram over the years, claiming they protect teenagers from harmful content. But according to a new report from child-safety advocacy groups, many of these protections are broken or ineffective, and some never existed at all. Researchers at Northeastern University confirmed the findings.

Most Teen Safety Tools Don’t Work as Promised

Of 47 safety features tested, only 8 worked fully as intended. The rest were flawed, outdated, or completely ineffective. Researchers found that:

  • Search-term blockers meant to stop teens from finding self-harm content were easily bypassed.

  • Anti-bullying filters failed to activate, even when tested with phrases Meta itself had flagged.

  • A feature designed to redirect teens from harmful binge content never triggered.

Some features did work, such as “quiet mode” for muting notifications at night, and parental approval tools for account changes.

Broken Promises and Grieving Parents

The report, titled “Teen Accounts, Broken Promises,” analyzed over a decade of Instagram’s youth safety announcements. The two advocacy groups behind the study, the Molly Rose Foundation (UK) and Parents for Safe Online Spaces (US), were founded by parents who lost children after exposure to bullying and self-harm content on social media.

Laura Edelson, a Northeastern University professor who reviewed the report, said:
“Using realistic testing scenarios, we can see that many of Instagram’s safety tools simply are not working.”

Meta Pushes Back

Meta strongly disputed the report. Spokesman Andy Stone called the findings “misleading” and claimed that teens using its protections saw less harmful content, received fewer unwanted contacts, and spent less time online at night. He insisted that Meta continues to improve its parental controls and welcomes constructive feedback.

Still, former Meta safety executive Arturo Bejar revealed that many safety tools were watered down during development. He accused Meta of ignoring internal data that showed serious risks to teens.

Reuters Confirms Key Findings

Reuters tested Instagram’s blockers and confirmed flaws. For example, the hashtag “skinny thighs” was blocked, but searching for the same term without the space surfaced eating-disorder content. Internal Meta documents also showed that automated detection systems for harmful content were not being maintained.

In addition, systems designed to block predators’ search terms weren’t updated regularly. Safety employees flagged these issues, but the problems persisted.

Political Pressure on Meta

US lawmakers are now investigating Meta’s practices. Senators are looking into reports that Meta’s chatbots could engage in inappropriate conversations with minors. Former employees also testified that the company downplayed research showing preteens in virtual reality were being exposed to predators.

In response, Meta announced that it is expanding its teen accounts to Facebook users worldwide and will partner with schools to improve online safety.

Instagram chief Adam Mosseri said:
“We want parents to feel good about their teens using social media.”