Ex-Google Ethicist Warns About the Looming Dangers of Tech

Tristan Harris, ex-ethicist at Google and co-founder of the Center for Humane Technology, warned that the entire digital space has become a ‘dark infrastructure.’ (Image: Collision Conf via Flickr)

Tristan Harris, an ex-ethicist at Google and the co-founder of the Center for Humane Technology, has warned that the entire digital space has become a “dark infrastructure,” with tech products and culture intentionally designed for mass deception — one of the looming dangers of tech. He was speaking before the U.S. House Subcommittee on Consumer Protection and Commerce at a hearing titled “Americans at Risk: Manipulation and Deception in the Digital Age.”

The dangers of tech

“Tech companies manipulate our sense of identity, self-worth, relationships, beliefs, actions, attention, memory, physiology, and even habit-formation processes, without proper responsibility… YouTube has north of 2 billion users, more than the followers of Islam. Tech platforms arguably have more psychological influence over two billion people’s daily thoughts and actions when considering that millions of people spend hours per day within the social world that tech has created, checking hundreds of times a day,” Harris said in the testimony (Sociable).

He pointed out that many of the problems and failures of modern society — like addiction, fake news, social isolation, declining mental health of teens, polarization, trust erosion, conspiracy thinking, and the breakdown of truth — are all the result of the “dark infrastructure” that affects people’s thinking. However, Harris said that there is no need to create more government agencies to tackle digital deception tactics like deepfakes, social media bots, and so on. Instead, he suggests that the existing agencies be better equipped and digitally updated.

According to Harris, problems like addiction, fake news, social isolation, declining mental health of teens, polarization, trust erosion, conspiracy thinking, and the breakdown of truth result from the ‘dark infrastructure.’ (Image: Screenshot via YouTube)

At the hearing, Justin (Gus) Hurwitz, an associate law professor at the University of Nebraska, argued that regulatory powers should be assigned to agencies like the Federal Trade Commission (FTC), which should identify practices that violate the FTC Act and take the necessary action against them.

However, Harris countered that the FTC alone may not be equipped to deal with these issues. He suggested that the government run educational campaigns to make the public aware of misinformation and deception.

But Hurwitz warned that such campaigns run the risk of being labeled a “dark pattern.” In layman’s terms, a “dark pattern” is a design choice made to nudge the audience toward a particular response. For instance, if a campaign used both a clean font and a blood-dripping font, the audience might unconsciously classify everything set in the bloody font as “bad.”

Dealing with deepfakes

With the U.S. elections coming up, Facebook has announced that it will be removing deepfakes and other manipulated videos from its platform. Any misleading media created with AI that replaces, merges, or superimposes content onto a video with the aim of passing it off as genuine will be considered a candidate for removal. However, if a video is clearly satire or parody, or has been altered only by deleting words or changing their order, it will “likely” not be removed.

With the U.S. elections coming up, Facebook has announced that it will be removing deepfakes and other such manipulated videos from its platform. (Image: Screenshot via YouTube)

As such, the recent viral video of Nancy Pelosi that was edited to make it look like she was stumbling over her words will be allowed to remain on the platform. “The doctored video of Speaker Pelosi does not meet the standards of this policy and would not be removed. Only videos generated by artificial intelligence to depict people saying fictional things will be taken down… Once the video of Speaker Pelosi was rated by a third-party fact-checker, we reduced its distribution, and critically, people who saw it, tried to share it, or already had, received warnings that it was false,” Facebook said in a statement (CNBC).

The U.S. elections are due in November 2020, and many worry that fake news and deepfakes will flood social media in a bid to influence the results. While Republicans have criticized Facebook for discriminating against conservative views, Democrats have been angry at the company for refusing to fact-check political ads.

