The WSJ just showed us WEF partner Meta directing users to child sex abuse content

Not only is this partner a huge problem (I will provide the full text of the WSJ article below), but “digital safety” is also a euphemism for censorship.

Of course, the issues with a digital identity and health passport are:

  1. additional functionalities will be rapidly added, putting your life and all your transactions on a connected net
  2. your online data can be stolen (especially your health and banking data)
  3. your money can be turned off or taxed at will
  4. your movement and purchases can be limited
  5. your friendship network and associates will all be noted and tracked
  6. your family could be made to suffer for behaviors the controllers do not like. In China, I am told, your social credit score determines whether you can rent or buy property and whether your children can attend a good school.

But let’s make it palatable: this is all about human rights. Wink, wink.

Tackling harmful content online represents a complex problem. It is an ongoing challenge to combat the likes of child sexual abuse and exploitation, terrorism and hate speech, misinformation and content related to self-harm and suicide.

As new technologies become available, the need to strike a balance between safety and privacy in the digital world and freedom of expression remains vital.

Delivering safe and secure online experiences is essential for global businesses, civil society groups and individuals alike. The World Economic Forum’s Global Coalition for Digital Safety is bringing together a diverse group of leaders to accelerate public-private cooperation to tackle harmful content and conduct online.

Members of the coalition have developed the Global Principles on Digital Safety, which define how human rights should be translated into the digital world. The Coalition Working Group includes representatives from Microsoft, WeProtect Global Alliance, Meta, Amazon, Google, OHCHR (the UN Office of the High Commissioner for Human Rights), Ofcom UK and Global Partners Digital.

The principles have been developed through consultations with governments and regulators, major social media and tech platforms, safety tech companies and representatives of civil society and academia.

The principles will serve as a guide for all stakeholders in the digital ecosystem to advance digital safety on policy, industry and social levels. Collaboration between governments, companies and those involved in civil and social initiatives is required to ensure a real-world impact is made.

To summarize, the principles include:

  • All supporters should collaborate to build a safe, trusted and inclusive online environment. [Includes everything about you.—Nass] Policy and decision-making should be based on evidence, insights and diverse perspectives. Transparency is important, and innovation should be fostered.
  • Supportive governments should distinguish between illegal content and content that is lawful but may be harmful, differentiate any regulatory measures accordingly, and ensure that law and policy respect and protect all users’ rights. They should also support victims and survivors of abuse or harm.
  • Supportive online service providers should commit to their human rights responsibilities and devise policies to ensure they meet them, make safety a priority throughout the business and incorporate it as standard when designing features, and collaborate with other online service providers. [WTF is “safety” anyway?]

The principles also recognize that civil society groups and non-government organizations play a critical role in advancing digital safety and promoting human rights; they provide valuable insights into the impact of online safety – and online freedom – on communities and individuals.

“Advancing digital safety is key to empowering people to make the most of the digital world. These principles were created as an actionable, positive step towards creating a safer digital experience for everyone and Microsoft remains committed to finding solutions that achieve this goal and respect fundamental human rights.”

— Courtney Gregoire, Chief Digital Safety Officer, Microsoft

The principles take into consideration, and seek to complement, existing principles such as the Voluntary Principles to Counter Online Child Sexual Abuse and Exploitation, the Christchurch Call to eliminate terrorist and violent extremist content online, the Santa Clara Principles, the Australian eSafety Commissioner’s Safety by Design principles, the Digital Trust & Safety Partnership’s best practices framework, and many others.

“Governments, the private sector and civil society all have an important role to play in preventing abuse and exploitation online, particularly when protecting the most vulnerable in society. This new set of principles provides an important framework for a more effective response to online harms, including our own work to end child sexual abuse online.”

— Iain Drennan, Executive Director, WeProtect Global Alliance

Meta (the new name for Facebook) was a major partner of the WEF (see above) in promoting digital safety.

But instead Meta, which owns Instagram, has been facilitating pedophile networks on its platform, according to detailed research by Stanford Internet Observatory staff (and these censors should know) and a UMass professor, as well as by experts in the digital landscape such as Alex Stamos, who founded and runs the Stanford program.

The Wall Street Journal did a very deep dive, and it leaves no room for Meta to wiggle out of this. Since the WSJ is behind a paywall, I will provide the entire June 7 article below.

Instagram, the popular social-media site owned by Meta Platforms, helps connect and promote a vast network of accounts openly devoted to the commission and purchase of underage-sex content, according to investigations by The Wall Street Journal and researchers at Stanford University and the University of Massachusetts Amherst.

Pedophiles have long used the internet, but unlike the forums and file-transfer services that cater to people who have an interest in illicit content, Instagram doesn’t merely host these activities. Its algorithms promote them. Instagram connects pedophiles and guides them to content sellers via recommendation systems that excel at linking those who share niche interests, the Journal and the academic researchers found.

Though out of sight for most on the platform, the sexualized accounts on Instagram are brazen about their interest. The researchers found that Instagram enabled people to search explicit hashtags such as #pedowhore and #preteensex and connected them to accounts that used the terms to advertise child-sex material for sale. Such accounts often claim to be run by the children themselves and use overtly sexual handles incorporating words such as “little slut for you.”

Instagram accounts offering to sell illicit sex material generally don’t publish it openly, instead posting “menus” of content. Certain accounts invite buyers to commission specific acts. Some menus include prices for videos of children harming themselves and “imagery of the minor performing sexual acts with animals,” researchers at the Stanford Internet Observatory found. At the right price, children are available for in-person “meet ups.” The promotion of underage-sex content violates rules established by Meta as well as federal law.

In response to questions from the Journal, Meta acknowledged problems within its enforcement operations and said it has set up an internal task force to address the issues raised. “Child exploitation is a horrific crime,” the company said, adding, “We’re continuously investigating ways to actively defend against this behavior.”

Meta said it has in the past two years taken down 27 pedophile networks and is planning more removals. Since receiving the Journal queries, the platform said it has blocked thousands of hashtags that sexualize children, some with millions of posts, and restricted its systems from recommending users search for terms known to be associated with sex abuse. It said it is also working on preventing its systems from recommending that potentially pedophilic adults connect with one another or interact with one another’s content.

Alex Stamos, the head of the Stanford Internet Observatory and Meta’s chief security officer until 2018, said that getting even obvious abuse under control would likely take a sustained effort.

“That a team of three academics with limited access could find such a huge network should set off alarms at Meta,” he said, noting that the company has far more effective tools to map its pedophile network than outsiders do. “I hope the company reinvests in human investigators,” he added.

Technical and legal hurdles make the full scale of the network hard for anyone outside Meta to measure precisely.

Because the laws around child-sex content are extremely broad, investigating even the open promotion of it on a public platform is legally sensitive.

In its reporting, the Journal consulted with academic experts on online child safety. Stanford’s Internet Observatory, a division of the university’s Cyber Policy Center focused on social-media abuse, produced an independent quantitative analysis of the Instagram features that help users connect and find content.

The Journal also approached UMass’s Rescue Lab, which evaluated how pedophiles on Instagram fit into the larger world of online child exploitation. Using different methods, both entities were able to quickly identify large-scale communities promoting criminal sex abuse.

Test accounts set up by researchers that viewed a single account in the network were immediately hit with “suggested for you” recommendations of purported child-sex-content sellers and buyers, as well as accounts linking to off-platform content trading sites. Following just a handful of these recommendations was enough to flood a test account with content that sexualizes children.

The Stanford Internet Observatory used hashtags associated with underage sex to find 405 sellers of what researchers labeled “self-generated” child-sex material—or accounts purportedly run by children themselves, some saying they were as young as 12. According to data gathered via Maltego, a network-mapping tool, 112 of those seller accounts collectively had 22,000 unique followers.

Underage-sex-content creators and buyers are just a corner of a larger ecosystem devoted to sexualized child content. Other accounts in the pedophile community on Instagram aggregate pro-pedophilia memes, or discuss their access to children. Current and former Meta employees who have worked on Instagram child-safety initiatives estimate the number of accounts that exist primarily to follow such content is in the high hundreds of thousands, if not millions.

A Meta spokesman said the company actively seeks to remove such users, taking down 490,000 accounts for violating its child safety policies in January alone.

“Instagram is an on-ramp to places on the internet where there’s more explicit child sexual abuse,” said Brian Levine, director of the UMass Rescue Lab, which researches online child victimization and builds forensic tools to combat it. Levine is an author of a 2022 report for the National Institute of Justice, the Justice Department’s research arm, on internet child exploitation.

Instagram, estimated to have more than 1.3 billion users, is especially popular with teens. The Stanford researchers found some similar sexually exploitative activity on other, smaller social platforms, but said they found that the problem on Instagram is particularly severe. “The most important platform for these networks of buyers and sellers seems to be Instagram,” they wrote in a report slated for release on June 7.

Instagram said that its internal statistics show that users see child exploitation in fewer than 1 in 10,000 posts viewed.

The effort by social-media platforms and law enforcement to fight the spread of child pornography online centers largely on hunting for confirmed images and videos, known as child sexual abuse material, or CSAM, which already are known to be in circulation. The National Center for Missing & Exploited Children (NCMEC), a U.S. nonprofit organization that works with law enforcement, maintains a database of digital fingerprints for such images and videos and a platform for sharing such data among internet companies.

Internet company algorithms check the digital fingerprints of images posted on their platforms against that list, and report back to the center when they detect them, as U.S. federal law requires. In 2022, the center received 31.9 million reports of child pornography, mostly from internet companies—up 47% from two years earlier.
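
The “digital fingerprints” described above are hashes. Purely to make that matching step concrete, here is a minimal Python sketch, assuming a simple in-memory set of known hashes; it is not Meta’s or NCMEC’s actual pipeline. Production systems rely on perceptual hashes such as PhotoDNA, which still match an image after resizing or re-encoding, whereas the plain SHA-256 used here catches only exact copies. The names and the placeholder hash value are hypothetical.

```python
# Minimal sketch of the hash-matching step: compare an upload's digital
# fingerprint against a list of known CSAM fingerprints and flag matches
# for reporting. Illustrative only; the placeholder hash is hypothetical,
# and real systems use perceptual hashing rather than SHA-256.
import hashlib
from pathlib import Path

# Hypothetical stand-in for the shared database of known fingerprints.
KNOWN_FINGERPRINTS: set[str] = {
    "9f2c0d41e5b7...",  # placeholder; a real list holds millions of hashes
}

def fingerprint(image_bytes: bytes) -> str:
    """Compute a fingerprint for an uploaded image (exact-match hash here)."""
    return hashlib.sha256(image_bytes).hexdigest()

def should_report(upload: Path) -> bool:
    """Return True if the upload matches a known fingerprint and should be reported."""
    return fingerprint(upload.read_bytes()) in KNOWN_FINGERPRINTS
```

In this sketch, a platform would run a check like this on every unencrypted upload and, on a match, file the report that federal law requires.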

Meta, with more than 3 billion users across its apps, which include Instagram, Facebook and WhatsApp, is able to detect these types of known images if they aren’t encrypted. Meta accounted for 85% of the child pornography reports filed to the center, including some 5 million from Instagram.

Meta’s automated screening for existing child exploitation content can’t detect new images or efforts to advertise their sale. Preventing and detecting such activity requires not just reviewing user reports but tracking and disrupting pedophile networks, say current and former staffers as well as the Stanford researchers. The goal is to make it difficult for such users to connect with each other, find content and recruit victims.

Such work is vital because law-enforcement agencies lack the resources to investigate more than a tiny fraction of the tips NCMEC receives, said Levine of UMass. That means the platforms have primary responsibility to prevent a community from forming and normalizing child sexual abuse. 

Meta has struggled with these efforts more than other platforms have, because of both weak enforcement and design features that promote content discovery of legal as well as illicit material, Stanford found.

The Stanford team found 128 accounts offering to sell child-sex-abuse material on Twitter, less than a third the number they found on Instagram, which has a far larger overall user base than Twitter. Twitter didn’t recommend such accounts to the same degree as Instagram, and it took them down far more quickly, the team found.

Among other platforms popular with young people, Snapchat is used mainly for its direct messaging, so it doesn’t help create networks. And TikTok’s platform is one where “this type of content does not appear to proliferate,” the Stanford report said.

Twitter didn’t respond to requests for comment. TikTok and Snapchat declined to comment.

David Thiel, chief technologist at the Stanford Internet Observatory, said, “Instagram’s problem comes down to content-discovery features, the ways topics are recommended and how much the platform relies on search and links between accounts.” Thiel, who previously worked at Meta on security and safety issues, added, “You have to put guardrails in place for something that growth-intensive to still be nominally safe, and Instagram hasn’t.”

The platform has struggled to oversee a basic technology: keywords. Hashtags are a central part of content discovery on Instagram, allowing users to tag and find posts of interest to a particular community—from broad topics such as #fashion or #nba to narrower ones such as #embroidery or #spelunking.

[Screenshot taken by the Stanford Internet Observatory showing the warning and click-through option when searching for a pedophilia-related hashtag on Instagram. Photo: Stanford Internet Observatory]

Pedophiles have their chosen hashtags, too. Search terms such as #pedobait and variations on #mnsfw (“minor not safe for work”) had been used to tag thousands of posts dedicated to advertising sex content featuring children, rendering them easily findable by buyers, the academic researchers found. Following queries from the Journal, Meta said it is in the process of banning such terms.

In many cases, Instagram has permitted users to search for terms that its own algorithms know may be associated with illegal material. In such cases, a pop-up screen for users warned that “These results may contain images of child sexual abuse,” and noted that production and consumption of such material causes “extreme harm” to children. The screen offered two options for users: “Get resources” and “See results anyway.”

In response to questions from the Journal, Instagram removed the option for users to view search results for terms likely to produce illegal images. The company declined to say why it had offered the option.

The pedophilic accounts on Instagram mix brazenness with superficial efforts to veil their activity, researchers found. Certain emojis function as a kind of code, such as an image of a map—shorthand for “minor-attracted person”—or one of “cheese pizza,” which shares its initials with “child pornography,” according to Levine of UMass. Many declare themselves “lovers of the little things in life.” 

Accounts identify themselves as “seller” or “s3ller,” and many state their preferred form of payment in their bios. These seller accounts often convey the child’s purported age by saying they are “on chapter 14,” or “age 31” followed by an emoji of a reverse arrow.

This is taken from a longer document; read the rest here: substack.com

Header image: Dezeen

Comments (1)

    Nigel

    All the social media sites operated by big tech are dangerous, and that includes LinkedIn and mainstream dating sites. Now A.I. will digest all that data for the Big Brother surveillance state.
