
Stanford Finds Instagram Serves as Hub For Self-generated Child Sex Abuse Marketplace

Alex Stamos, head of the Stanford Internet Observatory and former Facebook Chief Security Officer, told The Wall Street Journal: "That a team of three academics with limited access could find such a huge network should set off alarms at Meta."
Neil Campbell
Published: June 7, 2023
A file photo of girls posing with an unrelated costumed Instagram influencer themed around the War in Ukraine in Kiev on May 24. A newly released study by Stanford University’s Internet Observatory finds that Instagram is serving as a promotional hub for a self-generated child sex abuse marketplace where minors as young as 6 appear to be selling photos, videos, self harm content, and even live prostitution in exchange for gift cards. (Image: SERGEI SUPINSKY/AFP via Getty Images)

A new study published by Stanford University’s Internet Observatory (SIO) has found that Instagram is serving as the primary hub for a marketplace of self-generated child sex abuse content.

Published on June 6 on the Stanford website, the study is unique in that it specifically targeted the issue of self-generated underage sex abuse material, which a trio of researchers explained is distinct in that “an image or video appears to be created by the minor subject in the image” as opposed to being created by an adult groomer, handler, or abuser.

“In recent years, the creation and distribution…has increasingly become a commercial venture,” the Background section states, noting that seller behavior “often replicates the pattern of legitimate independent adult content production,” such as that found on the pornography platforms that have gone mainstream in recent years.


Notably, the SIO found that underage pornography self-creators are not only offering photography and videos of themselves, but often advertise “more dangerous services, such as in-person sexual encounters or media of bodily self-harm.”

Rather than accepting cash directly, such sellers primarily rely on a popular gift card and online goods marketplace to collect payment in kind. The SIO determined this after finding that a number of accounts exhibiting the likely behavior and characteristics of buyers shared identical usernames on that marketplace.

Although the topic of child sex abuse material most readily calls to mind its most heinous form involving young children, Stanford notes that in the self-generated marketplace, “Based on the bios, most self-identified as between the ages of 13 and 17.”

However, the SIO noted that “it is common for them to offer content of themselves from even younger ages, which is marketed at a premium.”

One Twitter screenshot included in the study, however, showed an ostensibly female account holder who identified as six years old and stated she was looking to meet in person, with “deposit & condoms required.”

Researchers also found that the Instagram and Twitter profiles uncovered in their dragnet would often link to chatrooms on the Discord and Telegram platforms that “had hundreds or thousands of users.”

While some such venues appeared to be managed by the seller, others were “multi-seller groups (who sometimes appear to redistribute third-party content).”

The study’s methodology paired an automated system that gathered data from Twitter via its commercial-level API with manual searches of obvious hashtags on Instagram. Results were processed through a number of tools, including PhotoDNA and Google’s SafeSearch API, which allow for algorithmic or artificial intelligence identification of violence and nudity, or of known child sex abuse material.
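For readers unfamiliar with these tools, the short Python sketch below shows how a single image might be screened with Google’s Cloud Vision SafeSearch detection, the general class of classifier the study describes. It is only an illustration under stated assumptions, not the researchers’ actual pipeline, and the file name “sample.jpg” is a hypothetical placeholder.

    # Minimal illustrative sketch (not the SIO pipeline): flag an image whose
    # SafeSearch "adult" or "violence" likelihood is high, using the Google
    # Cloud Vision API. Requires the google-cloud-vision package and
    # application credentials to be configured.
    from google.cloud import vision

    # Likelihood levels a moderation pipeline might treat as a flag.
    FLAG_LEVELS = {vision.Likelihood.LIKELY, vision.Likelihood.VERY_LIKELY}

    def flag_image(path: str) -> bool:
        """Return True if SafeSearch rates the image as likely adult or violent."""
        client = vision.ImageAnnotatorClient()
        with open(path, "rb") as f:
            image = vision.Image(content=f.read())
        annotation = client.safe_search_detection(image=image).safe_search_annotation
        return (
            vision.Likelihood(annotation.adult) in FLAG_LEVELS
            or vision.Likelihood(annotation.violence) in FLAG_LEVELS
        )

    if __name__ == "__main__":
        print(flag_image("sample.jpg"))  # "sample.jpg" is a placeholder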

Flagged data was submitted to the National Center for Missing and Exploited Children (NCMEC), which the study defines as “the legally designated clearinghouse for reports of child sexual abuse.”

In its initial sweep, Stanford found 405 sellers of self-generated content on Instagram and 128 on Twitter.

A June 7 article by The Wall Street Journal on the SIO’s findings elucidated that 112 of the 405 flagged Instagram accounts “collectively had 22,000 unique followers.”

Unidentified current and former employees of Meta who “worked on Instagram child-safety initiatives” were said to “estimate the number of accounts that exist primarily to follow such content is in the high hundreds of thousands, if not millions.”

SIO researchers noted that on Instagram, 58 accounts in the sellers’ follower network “appeared to be probable content buyers who used their real names, many of which were matched to Facebook, LinkedIn or TikTok profiles.”

After submitting content to NCMEC and reviewing the accounts a month later, Stanford found that only 31 of the seller accounts and 28 of the buyer accounts were still active on Instagram.

Only 22 of the original 128 were still active on Twitter.

“However, in the intervening time, hundreds of new…accounts were created, recreated or activated on both platforms, linked to the network as indicated by follower graph, hashtags and post/bio content,” the study lamented.

One of the researchers is the head of the SIO, Alex Stamos, who coincidentally also served as Chief Security Officer of Facebook (renamed Meta in 2021), Instagram’s parent company.

Stamos told The Wall Street Journal of his group’s findings, “That a team of three academics with limited access could find such a huge network should set off alarms at Meta.”

“I hope the company reinvests in human investigators,” Stamos added, noting that Meta’s internal tools are far more effective at finding the content his team was able to uncover from the outside.

WSJ noted that only after it inquired with Meta about its story did the platform take action: it “has blocked thousands of hashtags that sexualize children, some with millions of posts, and restricted its systems from recommending users search for terms known to be associated with sex abuse.”

The Journal’s article cut deeper. The outlet enlisted researchers from the University of Massachusetts Rescue Lab to examine the effects of Instagram’s algorithm on the child pornography network, using a test account method similar to that of a July 2021 exposé on TikTok’s propensity to algorithmically “rabbit hole” new users into a feed shuffling suicide promotion and drug abuse.

The Rescue Lab found that once a test account viewed an account in the SIO’s uncovered network, it was “immediately hit with ‘suggested for you’ recommendations of purported child-sex-content sellers and buyers, as well as accounts linking to off-platform content trading sites.”

“Following just a handful of these recommendations was enough to flood a test account with content that sexualizes children,” WSJ added.

In a September 2021 exposé, the WSJ obtained copies of internal Facebook and Instagram documents dated 2020 which showed that the company understood from its own research that its platform was harming teenage girls.

“Thirty-two percent of teen girls said that when they felt bad about their bodies, Instagram made them feel worse,” said one slide, which added, “Comparisons on Instagram can change how young women view and describe themselves.”

In all, company scientists conducted five studies over 18 months and found that the effect was unique to Instagram and not present on other social media platforms, ultimately presenting their findings to Mark Zuckerberg.

Director of the Rescue Lab, Brian Levine, told WSJ for the June 7 story that Instagram is an “on-ramp” to online venues where “there’s more explicit child sexual abuse.”

The trend appears to be rapidly accelerating. The Journal, paraphrasing the NCMEC, reported: “In 2022, the center received 31.9 million reports of child pornography, mostly from internet companies—up 47% from two years earlier.”

“Meta accounted for 85% of the child pornography reports filed to the center, including some 5 million from Instagram,” WSJ added.