Amazon’s Facial Recognition Comes Under Scrutiny

Facial recognition could be used to crack down on government dissidents and whistleblowers. (Image: freecodecamp.org / CC0 1.0)

With the FBI and other intelligence agencies increasing their use of facial recognition surveillance, various human rights groups have asked tech companies like Amazon to stop selling such products to the government. In addition to threatening personal freedoms, facial surveillance is also said to be heavily biased against certain communities.

Facial recognition

The organizations conducting the campaign include the Electronic Frontier Foundation, the American Civil Liberties Union (ACLU), Free Press, Human Rights Watch, and the Government Accountability Project, among others. In total, nearly 85 advocacy groups are participating in the campaign. The ACLU has been especially critical of Amazon’s “Rekognition” surveillance technology, which the company has been trying to sell to U.S. Immigration and Customs Enforcement (ICE). The FBI is said to be testing Rekognition as well.

“This marriage of Amazon’s face-surveillance technology to the FBI’s troves of Big Data about tens of millions of people threatens to supercharge the government’s ability to track and monitor all of us… Imagine a world in which secretive government agencies can track millions of faces — both in real time and through historical video footage — enabling them to identify political protesters, whistleblowers, and journalists’ confidential sources,” said Kade Crockford, Director of the Technology for Liberty Program at the ACLU of Massachusetts (Rolling Stone).

The ACLU has been critical of Amazon’s ‘Rekognition’ surveillance tech, which the company has been trying to sell to U.S. Immigration and Customs Enforcement (ICE). The FBI is said to be testing Rekognition. (Image: amazon / CC0 1.0)

Amazon first introduced its facial recognition system in 2016, though it was initially limited to still images. The company later enhanced the software to handle video, resulting in Rekognition. The potential for misuse of this technology is making even the company’s owners uneasy. Recently, a group of Amazon shareholders representing US$1.32 billion of the company’s assets filed a resolution seeking a ban on selling facial recognition software to government agencies until the board determines that the technology will not be misused.

“It’s a familiar pattern: A leading tech company marketing what is hailed as breakthrough technology without understanding or assessing the many real and potential harms of that product… Sales of Rekognition to government represent a considerable risk for the company and investors. That’s why it’s imperative those sales be halted immediately,” Michael Connor, Executive Director of Open MIC, a non-profit organization that organized the shareholder effort, said in a statement (GeekWire).

The bias

Another major issue haunting Rekognition is its alleged bias against people of color. Last year, the ACLU tested Rekognition by comparing photos of the 535 members of the U.S. Congress against a database of 25,000 mugshots. Of the false matches the software made, 40 percent were of people of color, even though only 20 percent of Congress members belong to this group. Such biases, however, are not limited to Amazon’s facial recognition.

Rekognition was found to be biased against people of color. (Image: pixabay / CC0 1.0)

A larger test was conducted by the Massachusetts Institute of Technology in February 2018 using photos of 1,270 people. The results showed that 93.6 percent of the faces misgendered by Microsoft’s facial recognition software were those of people with darker skin, while 95.9 percent of the faces misgendered by Face++ software were those of women.

“Automated systems are not inherently neutral. They reflect the priorities, preferences, and prejudices — the coded gaze — of those who have the power to mold artificial intelligence. We risk losing the gains made with the civil rights movement and women’s movement under the false assumption of machine neutrality. We must demand increased transparency and accountability,” according to Gender Shades.

Microsoft is reportedly conducting an internal evaluation, while Face++ has yet to respond to the results of the study. The problems are believed to stem from a lack of diversity in the photographs used to train the facial recognition systems.
