The CEO of Facebook, Mark Zuckerberg, defended his company’s decision to allow fake political ads to run on the platform. In an interview with CBS, the billionaire argued that blocking such ads would amount to censorship.
Blocking fake ads
“What I believe is that in a democracy, it’s really important that people can see for themselves what politicians are saying, so they can make their own judgments. And, you know, I don’t think that a private company should be censoring politicians or news,” Zuckerberg told CBS. Several U.S. lawmakers have urged Facebook to ensure that the 2020 presidential election is not compromised by a flood of paid fake news.
The interviewer pointed to a letter written by 200 Facebook employees, reminding Zuckerberg that free speech and paid speech are two different things. Though the CEO acknowledged that his employees had the right to express themselves, he rejected their appeal and reiterated that people should be left to judge politicians’ character for themselves.
“[Political advertising] doesn’t protect voices, but instead allows politicians to weaponize our platform by targeting people who believe that content posted by political figures is trustworthy… We want to work with our leadership to develop better solutions that both protect our business and the people who use our products,” the letter stated, as reported by The New York Times.
The employees also suggested a few actions Facebook could take to reduce the impact of false claims in political ads. These include restricting the ad options that allow user targeting, changing the visual design treatment for political ads, and setting spending limits on ads from individual politicians.
Even if Facebook were to block fake political ads, the next demand would likely be to block fake posts about politics as well. Experts are not sure this is even feasible. Such a censoring mechanism would require the company to fact-check thousands of ads and millions of posts, at great cost in time, money, and effort. Even then, many fake ads and posts could still get through, since there might not be enough counter-evidence to block them. To make matters worse, the company might become wary of publishing even reliable posts because it may not have “enough proof.”
In December, Facebook launched the Deepfake Detection Challenge (DFDC) aimed at preventing the spread of deepfake videos on the platform. The challenge will provide “entrants with the full release of a new, unique data set of 100,000-plus videos specially created to aid research on deepfakes. Participants will use the data set to create new and better models to detect manipulated media, and results will be scored for effectiveness,” Jerome Pesenti, vice president of Facebook AI, said in a statement.
The video data set includes a highly diverse group of paid actors, 54 percent of them female. The original videos of the actors were altered by the Facebook AI team through face swaps and voice modifications. Participants are required to make their work open source so that other researchers can test their solutions and advance deepfake detection further. In addition to Facebook, companies like Amazon and Microsoft are supporting the project. Facebook has donated $10 million in awards and grants. The challenge will remain live until March 2020.
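Pesenti says entries will be “scored for effectiveness.” One common way to score a binary detector like this is log loss, which rewards confident correct predictions and heavily penalizes confident wrong ones; the sketch below is illustrative of that idea, not necessarily Facebook’s exact scoring rule.

```python
import math

def log_loss(y_true, y_pred, eps=1e-15):
    """Average binary cross-entropy: lower means a better detector.

    y_true: 1 = deepfake, 0 = real video
    y_pred: the model's confidence that each video is a deepfake
    """
    total = 0.0
    for t, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1 - eps)  # clamp to avoid log(0)
        total += t * math.log(p) + (1 - t) * math.log(1 - p)
    return -total / len(y_true)

# Hypothetical predictions on four videos (two fake, two real)
labels = [1, 0, 1, 0]
confidences = [0.9, 0.1, 0.8, 0.2]
print(round(log_loss(labels, confidences), 3))  # → 0.164
```

A model that guesses 0.5 for everything scores about 0.693 (ln 2) under this metric, so any useful detector needs to score well below that.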