
Apple Announces Plan to Surveil Devices for Child Porn Photos in iOS 15

Neil Campbell
Neil lives in Canada and writes about society and politics.
Published: August 7, 2021
Apple CEO Tim Cook delivers the keynote address at the Apple 2012 Worldwide Developers Conference (WWDC) at Moscone West on June 11, 2012 in San Francisco, California. Apple announced it will now surveil all iOS 15 and iPadOS 15 devices for “child sexual abuse material” using a cryptographic database of hashes and AI. (Image: Justin Sullivan/Getty Images)

Big Tech cartel crown jewel Apple announced plans to automatically surveil all devices for known child pornography images in the upcoming iOS 15 and iPadOS 15 upgrades. The scheme will implement a cryptographic hashing method that functions much like a recently announced plan by a consortium of Big Tech companies, including Facebook, Microsoft, YouTube, and Google, to automatically crack down on “white supremacy.”

In an update to the Apple website titled “Expanded Protections for Children,” the world’s most valuable company by market capitalization said it would deploy a series of three protections. The first is “new communications tools” that use “on-device machine learning” to alert parents when their children view potentially suggestive content, while keeping “private communications unreadable by Apple.”

In a description of how the function works, Apple says when an account marked as a child is sent a sexually explicit photo, “The photo will be blurred and the child will be warned, presented with helpful resources, and reassured it is okay if they do not want to view this photo.” 

“As an additional precaution, the child can also be told that, to make sure they are safe, their parents will get a message if they do view it. Similar protections are available if a child attempts to send sexually explicit photos. The child will be warned before the photo is sent, and the parents can receive a message if the child chooses to send it.”
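
The flow Apple describes amounts to an on-device classification step followed by a set of presentation rules. Apple has not published its implementation, so the sketch below is purely illustrative: the classifier stub, account fields, and function names are assumptions, not Apple’s code.

```python
# Hypothetical sketch of the Messages "communication safety" flow described
# above. The classifier stub, account fields, and function names are
# illustrative assumptions; Apple has not published its implementation.

from dataclasses import dataclass

@dataclass
class Account:
    is_child: bool
    parental_notifications: bool  # whether a parent opted in to alerts

def looks_sexually_explicit(image_bytes: bytes) -> bool:
    """Stand-in for Apple's on-device machine learning classifier."""
    return False  # the real check runs a neural model locally, not on a server

def handle_incoming_photo(account: Account, image_bytes: bytes) -> dict:
    """Decide how a received photo is presented to a child account."""
    if account.is_child and looks_sexually_explicit(image_bytes):
        return {
            "blur_photo": True,    # photo is shown blurred
            "warn_child": True,    # child is warned and offered resources
            # parent is messaged only if the child chooses to view anyway
            "notify_parent_on_view": account.parental_notifications,
        }
    return {"blur_photo": False, "warn_child": False,
            "notify_parent_on_view": False}
```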

‘Updated to intervene’

The update will also adjust Siri and Search to “provide parents and children expanded information and help if they encounter unsafe situations,” in addition to blocking “child sexual abuse material” (CSAM). Apple says the apps will be “updated to intervene when users perform searches for queries related to CSAM. These interventions will explain to users that interest in this topic is harmful and problematic, and provide resources from partners to get help with this issue.”

The root of the controversy is in the third prong of the protections for children, which will implement “new applications of cryptography to help limit the spread of CSAM online.” Apple says “CSAM detection will help Apple provide valuable information to law enforcement on collections of CSAM in iCloud Photos.”

Apple will use a cryptographic system that creates a “hash” of known CSAM material from the National Center for Missing and Exploited Children’s (NCMEC) database and will automatically check the photos stored on each iOS 15 and iPadOS 15 device against these hashes before they are uploaded to iCloud Photos. When a photo is identified as matching an NCMEC database hash, it will be uploaded along with a “cryptographic safety voucher that encodes the match result along with additional encrypted data about the image.”
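
In outline, the matching step is a set-membership test against a database of image fingerprints, with the result packaged into the voucher. The sketch below is a deliberate simplification under stated assumptions: it uses a plain SHA-256 digest and a cleartext match flag, whereas Apple’s system uses its perceptual “NeuralHash” (so resized or re-encoded copies of an image still match) and encrypts the match result so that neither the device nor Apple learns it until the reporting threshold is crossed.

```python
# Simplified, hypothetical sketch of on-device matching before iCloud upload.
# Apple's real design uses the perceptual NeuralHash and a blinded database,
# not the plain cryptographic digest and cleartext match flag shown here.

import hashlib

# Fingerprints derived from NCMEC's database of known images (placeholders).
KNOWN_CSAM_HASHES = {"<opaque digest 1>", "<opaque digest 2>"}

def fingerprint(image_bytes: bytes) -> str:
    # Illustrative stand-in: a cryptographic hash changes completely if one
    # pixel changes; NeuralHash is perceptual, so near-duplicates still match.
    return hashlib.sha256(image_bytes).hexdigest()

def prepare_upload(image_bytes: bytes) -> dict:
    """Attach a 'safety voucher' encoding the match result to the upload."""
    matched = fingerprint(image_bytes) in KNOWN_CSAM_HASHES
    voucher = {
        "match_result": matched,  # encrypted in the real system
        "image_data": "<additional encrypted data about the image>",
    }
    return {"photo": image_bytes, "safety_voucher": voucher}
```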

Privacy

Apple claims it protects user privacy through a technology called “threshold secret sharing,” which “ensures the contents of the safety vouchers cannot be interpreted by Apple unless the iCloud Photos account crosses a threshold of known CSAM content.”
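
Threshold secret sharing is a standard cryptographic building block; the classic construction is Shamir’s scheme, in which a secret is split into shares so that any t of them reconstruct it while fewer reveal nothing. Apple has not published its exact construction, but a minimal Shamir sketch conveys the idea: if each matching photo’s voucher carries one share of a per-account decryption key, the vouchers remain unreadable until enough matches accumulate.

```python
# Minimal sketch of Shamir threshold secret sharing; Apple's exact
# construction is not public, and the parameters here are illustrative.

import random

PRIME = 2**127 - 1  # a Mersenne prime; all arithmetic is in this field

def make_shares(secret: int, threshold: int, count: int):
    """Split `secret` into `count` shares; any `threshold` shares recover it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        y = 0
        for c in reversed(coeffs):  # Horner evaluation of the polynomial
            y = (y * x + c) % PRIME
        return y
    return [(x, f(x)) for x in range(1, count + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * -xj % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

# One share per matching photo's voucher: below the threshold, Apple cannot
# reconstruct the per-account key, so the voucher contents stay unreadable.
shares = make_shares(secret=123456789, threshold=3, count=5)
assert recover(shares[:3]) == 123456789  # 3 shares: key recovered
assert recover(shares[:2]) != 123456789  # 2 shares: no useful information
```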

When the threshold is crossed, an Apple employee will review the content manually and make relevant referrals to the NCMEC, which works with law enforcement agencies.

In form, the system works almost identically to the one used by the Global Internet Forum to Counter Terrorism (GIFCT), a consortium of 17 of the biggest Internet companies, including LinkedIn, Pinterest, Facebook, YouTube, Discord, and WhatsApp, which maintains a hashed database of content provided by a United Nations body, primarily material promulgated by officially recognized terrorist organizations such as al-Qaeda and the Taliban.
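
The common pattern is a shared fingerprint database: member companies contribute hashes of flagged content, and any member can test uploads for membership without the content itself changing hands. A bare-bones illustration follows; the class and method names are invented rather than GIFCT’s actual API, and real deployments fingerprint images and video perceptually rather than byte-for-byte.

```python
# Bare-bones illustration of the hash-sharing pattern; names are invented
# and do not reflect GIFCT's actual API.

import hashlib

class SharedHashDB:
    def __init__(self):
        self._entries = {}  # digest -> label of the contributing source

    def contribute(self, content: bytes, label: str) -> None:
        self._entries[hashlib.sha256(content).hexdigest()] = label

    def check(self, content: bytes):
        """Return the source label if the content is flagged, else None."""
        return self._entries.get(hashlib.sha256(content).hexdigest())

db = SharedHashDB()
db.contribute(b"<flagged propaganda file>", label="UN-provided list")
assert db.check(b"<flagged propaganda file>") == "UN-provided list"
assert db.check(b"<unrelated upload>") is None
```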

In addition to member companies, external companies, including names as big as Reddit and Verizon, have access to GIFCT’s collection of hashes.

However, on July 26 the GIFCT announced it would be expanding its reach beyond the original UN database to “add attacker manifestos – often shared by sympathizers after white supremacist violence – and other publications and links flagged by U.N. initiative Tech Against Terrorism.”

The expansion of powers would formally incorporate intelligence provided by the Five Eyes alliance between the intelligence communities of Australia, Canada, the UK, New Zealand, and the United States, and specifically takes aim at far-right groups such as the Proud Boys.

Anti-terrorism, or taking aim against dissent?

GIFCT Executive Director Nicholas Rasmussen told Reuters that “anyone looking at the terrorism or extremism landscape has to appreciate that there are other parts…that are demanding attention right now,” while Reuters itself summarized the comments as referring to “far-right or racially motivated violent extremism.”

While nobody would argue against efforts to quash the spread of terrorism, GIFCT’s academic research arm, the Global Network on Extremism and Technology, penned a July 22 article titled “Extremism Unmasked” in which it took aim at those who dissent against COVID-19 lockdowns and measures, claiming such dissenters are being groomed by “violent extremist organizations” looking to recruit members.

“COVID-19 health restrictions were seen as evidence of a deep state (or big government) conspiracy encroaching on individual freedoms, capitalising on chaos, panic, and fear to push conspiracies into the mainstream,” read the paper. 

“Whilst such narratives placed blame on different actors and incited varying levels of violence, many share a common anti-government conspiratorial framework. These cognitive and psychological phenomena are on display as various COVID-19 related grievances bring people together.”

Slippery slope

Regarding Apple’s new paradigm of cryptographic surveillance, Apple enthusiast website MacRumors noted that multiple security researchers and experts are concerned the framework, innocuous enough if used only for the purpose proclaimed today, is likely to become a slippery slope as governments, by way of their partnership with Big Tech, grow more and more totalitarian in quashing dissent.

However, Charles Arthur, a former editor at the UK’s The Guardian newspaper, noted that a crackdown on CSAM is nothing new. Citing his own articles, Arthur pointed out on Twitter that Facebook has been algorithmically scanning images for CSAM since 2011, and Google since 2008.

MacRumors said Apple updated its Privacy Policy in 2019 to scan uploaded material for CSAM, while in 2020 Chief Privacy Officer Jane Horvath said Apple was already using “screening technology to look for illegal images and then disables accounts if evidence of CSAM is detected.” The implication is that the only real change in today’s announcement is that the iOS 15 and iPadOS 15 detection regime will run on each device itself, as a de facto part of Apple’s walled garden.