Meta’s Child Sex-Trafficking Problem

A newly unsealed court filing shows the tech giant connected children to predators on a scale much larger than previously understood

We’ve long known that Facebook and Instagram are major hubs for sex and labor trafficking.

In 2021, internal documents released by whistleblower Frances Haugen revealed, among other things, that Apple threatened to pull Facebook’s (now Meta’s) apps from the App Store because they were being used to trade and sell maids in the Middle East.

The company was able to assuage Apple’s concerns, insisting it was working on the problem and had appropriate policies in place. In general, Meta has maintained that it does not allow human trafficking on its platforms.

But a newly unsealed court filing contains shocking details that undermine that claim.

The court filing shows that Meta leadership knowingly chose to expose teen users—particularly young women and girls—en masse to human traffickers and other child predators in order to protect its bottom line.

These are quite possibly the most appalling revelations yet to emerge about the tech giant, and they come at a moment when Congress is considering a flurry of new bills to regulate Meta and other social media providers. They deserve the ear of every parent and every citizen in America.

The Hunt is On

The recently unsealed filing — part of a lawsuit brought against several social media platforms by school districts around the country — outlines in detail how Meta maneuvered aggressively from 2015 onward to maximize teen engagement despite major known safety risks.

Unless otherwise stated, all text in bold below directly quotes internal Meta documents unearthed during legal discovery. Unbolded quotations are from the filing as written by the plaintiffs.

The filing alleges that Meta’s leadership realized in 2015 that “the company was losing its youngest users — and treated that decline as an existential threat.” In response, and “at Zuckerberg’s direction, employees undertook a ‘lockdown sprint’ to launch Facebook Live in early 2016 as ‘the beachhead we need to expand into other use cases in videos and teens,’” with Zuckerberg himself “warning that notifying adults ‘will probably ruin the product from the start’ and instructing that the company ‘be very good about not notifying parents/teachers’” (pp. 15-16).

The launch of TikTok in 2018 led to a similar reaction. Meta considered the app “‘an existential threat’” and rushed to release Instagram Reels in response, despite knowing it would be doing so without adequate safety restrictions (p. 16).

Coincident with its years-long push to recapture the youth market, the company embarked on a no-holds-barred campaign to “embed Instagram and Facebook directly into school communities.”

Among other things, this involved developing new technical capabilities to determine when teen users were at school, to infer which school they attended, and to target push notifications to students at specific schools in what it called “‘school blasts’” (pp. 17-18).

The filing also reveals that Meta paid the National PTA and Scholastic six-figure sums to conduct outreach on its behalf, a choice motivated by the perception, in the company’s own words, that these organizations could “‘get [their] materials into the hands of parents, grandparents, and educators at scale.’”

It also details how the company presented Orwellian “‘safety roadshows’ at high schools across the country,” and that it “recruited and paid 13- to 17-year-old ‘teen tastemakers to act as [their] plug at … high schools’” in key markets (pp. 18-20).

Tinder for Pedophiles

If pushing questionably safe products on children and teens nationwide wasn’t bad enough, previously unknown internal research cited in the filing reveals that Meta did so despite the near certainty that this would mean putting large volumes of young people in harm’s way.

In particular, Meta’s recommendation algorithms put what the company internally calls “IIC violators”—IIC standing for inappropriate interactions with children—in direct contact with millions of minors.

At one point in 2023, the Instagram feature “Accounts You May Follow” recommended “‘nearly 2 million minors’” to “adult groomers” in the previous three months. Twenty-two percent of those recommendations, in turn, “‘resulted in a follow request.’”

The filing also discusses an internal 2022 audit, which found that Accounts You May Follow “recommended 1.4 million potential IIC violators to teenage users in a single day” (p. 53).

How do we know Meta could have stopped them? Because its own researchers recommended in 2019 that teen Instagram accounts be defaulted to private mode so that young users would not receive (among other things) “‘[u]nwanted messages that were sexual in nature’” (p. 55).

But instead of implementing these changes the moment they understood children were in danger, Meta leadership asked its growth team what the “‘growth and engagement impacts’” of the recommended protections would be.

Finding that the falloff in engagement would be too steep (“‘this will likely smash engagement, DAP, MAP, etc.’”, where DAP and MAP denote daily and monthly active people), Meta decided to put the issue off due to concerns over its bottom line.

The same question arose again a year later—still Meta did not act. By this point, researchers within the company had put together a more detailed proposal for defaulting teens to private accounts, which they called “Smart Defaults” (p. 55).

As before, the growth team remained skeptical, determining that “a true private-by-default would result in a loss of 1.5 million monthly active teens a year” (p. 56).

In response, Meta leadership decided to formally shelve private-by-default accounts, even as they acknowledged that “‘[a]ctors take advantage of our tools on Instagram to find and inappropriately engage with children,’” and that “placing teens into a default-private setting would have eliminated 5.4 million unwanted interactions a day over Instagram direct message” (p. 57).

Finally, under increasing pressure from virtually every team within the company—with the crucial exception, again, of the growth team—Meta launched a watered-down version of the private-by-default feature in March 2021.

It only applied, however, to new users under the age of 16, although it did prevent adults from DMing minors who didn’t follow them.

In practice, the change made little difference. Child predators could easily circumvent it by claiming to be minors using Instagram’s voluntary age-identification settings. In July 2021, after this new feature was launched, Meta found in an internally conducted survey that 13 percent of 13- to 15-year-olds had received “unwanted sexual advances on Instagram in the past seven days” (p. 58).

One would think that at a minimum Meta would deal swiftly with the most dangerous among this group — those looking to traffic minors for sex on its platforms. But no, and it’s here that things escalate from wildly irresponsible to truly depraved.

Perhaps the most stunning revelation from the November 21 filing is that the company, amid the extraordinary volume of contact it facilitated between children and predators, maintained a 17x strike policy for accounts engaged in “the ‘trafficking of humans for sex.’”

That is according to former Instagram Head of Safety and Wellbeing Vaishnavi Jayakumar, and it was also confirmed by internal documentation. As she explained in her deposition, “‘that means you could incur 16 violations for prostitution and sexual solicitation, and upon the 17th violation, your account would be suspended.’” “‘By any measure across the industry,’” she continued, putting it mildly, “‘[that] is a very high strike threshold’” (pp. 61-62).

Jayakumar’s testimony accords with previous reporting by The Guardian, which, back in 2023, had already unearthed details like these about Meta’s high tolerance for child sex trafficking on its platforms:

We talked to six other moderators who worked for companies that Meta subcontracted between 2016 and 2022. All made similar claims to Walker. Their efforts to flag and escalate possible child trafficking on Meta platforms often went nowhere, they said.

“On one post I reviewed, there was a picture of this girl that looked about 12, wearing the smallest lingerie you could imagine,” said one former moderator. “It listed prices for different things explicitly, like, a blowjob is this much. It was obvious that it was trafficking,” she told us.

She claims that her supervisor later told her no further action had been taken in this case.

In summary, from at least 2015 onward, Meta knowingly exposed teen users to millions of interactions with suspected child predators in order to protect its bottom line—including sex traffickers whose accounts had been flagged as many as 16 times.

Meta leadership made a conscious decision to permit conditions on its platforms that it knew meant increasing traffickers’ access to minors.

Calculating the Cost

Nor did the decision’s consequences stop at inappropriate messages. What even some of Meta’s harshest critics may not know is that, over the last six years, Facebook and Instagram have been implicated as recruiting platforms in numerous federal sex trafficking cases — including those involving minors.

According to the Human Trafficking Institute’s 2023 Federal Human Trafficking report, from 2019 to 2023, a remarkable ~23% of all sex trafficking victims in federal court cases whose recruitment locations could be identified were recruited from Facebook and Instagram (p. 63).1

On top of that, of those victims from 2019-2023 whose ages could be identified, just over half were minors (p. 39).

Those are just the cases caught and tried by federal authorities. While the true number of such incidents is notoriously hard to discern, several recent studies have attempted to estimate the “dark figure” of total sex trafficking victims by state, county, or city.

Specifically, since 2015, this figure has been estimated for Florida, Texas, Sacramento County, CA, and Greater New Orleans. Two of these studies, those on Florida (in 2024) and Sacramento County (from 2015-2020), also collected data about time of victimization, making it possible to roughly estimate the number of victims per year, rather than only those victimized at any point in their lives.

The average yearly sex trafficking prevalence reported across both studies was ~0.95 victims per 1,000 people per year. While it is important to note that trafficking prevalence might vary substantially across other states and counties, applying that rate from what little evidence we have to the U.S. population of roughly 340 million suggests a nationwide victim count of ~323,100 people per year.2

If we in turn assume that victims are recruited in the same year they experience other trafficking-related abuses, and that, as in federal court cases from 2019-2023, roughly half of victims nationwide are minors and roughly 23 percent are recruited from Facebook and Instagram, that implies that ~37,150 minors — almost all young women and girls — are recruited per year by sex traffickers on Meta platforms.
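To make the chain of assumptions explicit, here is the same arithmetic as a minimal, runnable Python sketch. Every input is an assumption carried over from the text above (the two prevalence studies, the federal case shares, and an approximate U.S. population of 340 million); none of these numbers comes from the filing itself.

```python
# Back-of-envelope extrapolation behind the ~323,100 and ~37,150 figures.
# Every input below is an assumption stated in the text, not a figure
# taken from the court filing itself.

US_POPULATION = 340_000_000   # approximate U.S. population (assumption)
PREVALENCE_PER_1000 = 0.95    # avg. victims per 1,000 people per year (FL + Sacramento studies)
MINOR_SHARE = 0.50            # share of identified-age federal victims who were minors
META_SHARE = 0.23             # share of identified-location federal victims recruited on Meta platforms

def minors_recruited_on_meta(platform_share: float = META_SHARE) -> float:
    """Estimated minors recruited per year on Meta platforms, for a given platform share."""
    victims_per_year = US_POPULATION * PREVALENCE_PER_1000 / 1000  # ~323,000 victims/year
    return victims_per_year * MINOR_SHARE * platform_share

print(f"{minors_recruited_on_meta():,.0f}")  # prints 37,145, in line with the ~37,150 cited above
```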

To be clear: this is an extrapolation from the small sample of victims whose recruitment location was identified in federal court. Recruitment via Facebook and Instagram may be overrepresented in this group.

It may also be underrepresented. For conservatism’s sake, let us assume victims recruited via Facebook and Instagram are overrepresented in this sample. Adjusting for that concern still yields unacceptably high numbers.

For instance, suppose we cut the 23 percent figure to 10 percent, and then, to account for any remaining methodological ambiguities, all the way down to five percent. That would still leave ~8,075 minors recruited by sex traffickers on Meta platforms every single year.
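Re-running the same hypothetical sketch from above with those reduced shares reproduces these conservative figures:

```python
# Sensitivity check: re-run the sketch above with more conservative platform shares.
print(f"{minors_recruited_on_meta(0.10):,.0f}")  # prints 16,150 per year at a 10% share
print(f"{minors_recruited_on_meta(0.05):,.0f}")  # prints 8,075 per year at a 5% share
```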

And that is only in the United States.

While this figure is only a crude estimate, we can be confident that, whatever the true value is, it would be lower had Meta not consistently chosen to prioritize its profit margins over children’s safety.

The company knew for years that minors were interacting with child predators on its platforms. It also knew that some of them were being trafficked. Still, it chose to accept this state of affairs in return for higher engagement.

Meta leadership, including Mark Zuckerberg and Instagram head Adam Mosseri, made decisions that placed company profits above the safety of — conservatively — thousands of young women and girls.

Legislators and parents must never forget that. It is a moral imperative that we hold Meta and its leaders accountable.

Too Little, Too Late

These are still only some of the disturbing new insights contained in the recent filing. We also learned that Meta buried internal research that found evidence of a causal relationship between the use of its platforms and mental illness (p. 27), and that it refused to automatically delete content identified with 100 percent certainty as child sexual abuse material (p. 72); and still more, which would take another whole article to cover in depth.

Perhaps angling to get ahead of future revelations like those in the late November filing, Meta in September 2024 launched “Instagram Teen Accounts,” which are set to private by default, along with other new protections.

While this is of course a welcome change, there is a small wrinkle — most of the new safety features don’t work. An independent analysis by former Meta engineer Arturo Béjar and Cybersecurity for Democracy, conducted from March through July 2025, tested 47 of the 53 new Teen Accounts safety features and found that 64 percent were either ineffective, meaning they could be trivially circumvented, or simply no longer available at all.

Critically, that includes features meant to limit contact between teens and adults. Among other things, the report found that teens were still encouraged to follow adults they didn’t know, and that once they did, those adults could message them.

Since Meta’s renewed teen-engagement push in 2015, even the most conservative estimates imply some tens of thousands of minors have been recruited by traffickers on Meta’s platforms, a tragedy the tech giant had every chance to prevent, and still hasn’t substantively corrected.

That they repeatedly chose not to, despite knowing the risks — that they were doing “safety roadshows” at schools, that they were recruiting “teen tastemakers” to hawk their products while all of this was going on — deserves to go down as one of the greatest acts of corporate wrongdoing in modern history, one belonging in the same camp as those of corporate tobacco producers, opioid sellers, chemical polluters, and climate deniers.

Meta has created a vast digital machine for putting the most defenseless among us in danger. What’s worse, that machine has worked its way into our homes, into our pockets, into our children’s bedrooms.

But now that we know what Meta is, and that its actions are worse than we could have imagined, there can no longer be any excuse for inaction.

Children should not be allowed on Meta platforms. So long as they are, thousands more will be delivered into the arms of traffickers.

See more here: afterbabel.com

Header image: Dezeen
