
Twitter Drops AI Cropping Out Black People Following Racism Controversy

Jonathan Walker
Jonathan loves talking politics, economics and philosophy. He carries unique perspectives on everything, making him a rather odd mix of liberal-conservative with a streak of independent Austrian thought.
Published: May 26, 2021
Twitter's AI cropping tool was found to be biased against black people. (Image: Pixabay via Pexels)

Social media platform Twitter has dropped its automated artificial intelligence (AI) photo cropping feature following months of racism controversy. In a blog post, the company admitted that the saliency algorithm used in the cropping tool exhibited “unequal treatment based on demographic differences.”

Twitter first began using the saliency algorithm to crop images in 2018. The algorithm was trained on data about where human eyes look when viewing a picture, and worked by “estimating what a person might want to see first within a picture.”
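Twitter has not published the cropping code itself, but the mechanics it describes can be sketched: a model scores every pixel for saliency, and the crop window is centered on the highest-scoring point. Below is a minimal illustration of that idea in Python (a hypothetical function, assuming the crop window fits inside the image; it is not Twitter's implementation):

```python
import numpy as np

def crop_around_saliency(image: np.ndarray, saliency: np.ndarray,
                         crop_h: int, crop_w: int) -> np.ndarray:
    """Crop a (crop_h, crop_w) window centered on the most salient pixel."""
    # Find the pixel the model predicts a viewer would look at first.
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    # Center the window on that pixel, clamping it to the image bounds.
    top = min(max(y - crop_h // 2, 0), image.shape[0] - crop_h)
    left = min(max(x - crop_w // 2, 0), image.shape[1] - crop_w)
    return image[top:top + crop_h, left:left + crop_w]
```

Because everything downstream follows that single highest score, any systematic skew in the saliency model, such as consistently rating one face higher than another, determines which subject survives the crop.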

To test the algorithm for gender and racial bias, Twitter measured deviations from “demographic parity,” the gold standard under which “each image has a 50% chance of being salient,” or noticeable.

There was an eight percent difference from demographic parity in favor of females, and a four percent difference from demographic parity towards white individuals. When white and black women were compared, white women were favored by seven percent. White men were favored over black men in the algorithm by two percent.
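In plain terms, the test asks how far each group's observed share of “wins,” the times its subject was scored most salient, strays from the 50/50 ideal. A toy version of that calculation is sketched below (hypothetical counts, chosen only to mirror the roughly four-point racial gap reported above; not Twitter's actual evaluation code):

```python
# Hypothetical tallies of paired comparisons: which group's subject the
# saliency model scored higher. Real parity testing is more involved.
wins = {"white": 540, "black": 460}

total = sum(wins.values())
for group, count in wins.items():
    rate = count / total
    gap = (rate - 0.5) * 100  # percentage points away from demographic parity
    print(f"{group}: chosen {rate:.1%} of the time ({gap:+.1f} pts from parity)")
```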

“We considered the tradeoffs between the speed and consistency of automated cropping with the potential risks we saw in this research. One of our conclusions is that not everything on Twitter is a good candidate for an algorithm, and in this case, how to crop an image is a decision best made by people,” said the blog post.

Twitter also tested for “male gaze” after several users complained that the “image cropping chose a woman’s chest or legs as a salient feature.” However, the company did not find “evidence of objectification bias.”

The racial controversy over Twitter’s cropping tool attracted international attention last year when a user named Colin Madland, a Canadian PhD student, noticed that the algorithm would repeatedly choose to show him, rather than his black colleague, when pictures of the two men were displayed. “@Twitter is trash,” he said in a series of tweets discussing the issue in September 2020.

Madland also found that Zoom’s algorithm constantly removed his colleague’s head when using a virtual background. “Turns out @zoom_us has a crappy face-detection algorithm that erases black faces…and determines that a nice pale globe in the background must be a better face than what should be obvious,” he tweeted.

A study published by the Institute of Electrical and Electronics Engineers (IEEE) in February 2019 evaluated 11 commercial facial recognition systems designed to verify a person’s identity. “Lower (darker) skin reflectance was associated with lower efficiency (higher transaction times) and accuracy (lower mated similarity scores),” meaning the systems were faster and more accurate at processing and identifying people with lighter skin.

In July 2018, the American Civil Liberties Union (ACLU) published a study of facial analysis using Amazon’s ‘Rekognition’ tool, which was being used by law enforcement agencies in the United States. The study ran public photos of members of the House and Senate against 25,000 publicly available arrest photos.

The software incorrectly matched 28 members of Congress with other people who had been arrested for a crime. Almost 40 percent of the false matches were of people of color, even though people of color account for only about 20 percent of Congress. “Face surveillance also threatens to chill First Amendment-protected activity like engaging in protest or practicing religion, and it can be used to subject immigrants to further abuse from the government,” the report stated.
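For scale, the study's own percentages imply that roughly eleven of those 28 false matches involved people of color, about twice their share of Congress. A quick arithmetic check, using only the figures reported above:

```python
# Figures as reported in the ACLU study cited above.
false_matches = 28
poc_share_of_matches = 0.39      # "almost 40 percent" of false matches
poc_share_of_congress = 0.20     # roughly 20 percent of Congress

poc_matches = round(false_matches * poc_share_of_matches)          # ~11 people
overrepresentation = poc_share_of_matches / poc_share_of_congress  # ~2x

print(f"~{poc_matches} of {false_matches} false matches were people of color")
print(f"That is about {overrepresentation:.1f}x their share of Congress")
```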