Mark Zuckerberg, CEO and co-founder of Facebook (FB), recently sat before dozens of U.S. senators and was questioned about his company’s most recent involvement in the data-misuse affair involving Cambridge Analytica, a UK-based political consultancy company that had access to the personal data of millions of FB users. Zuckerberg said:
“We didn’t take a broad enough view of our responsibility, and that was a big mistake. It was my mistake, and I’m sorry. I started Facebook, I run it, and I’m responsible for what happens here.”
Among the many questions the Facebook co-founder refrained from answering during his testimony, or could not answer, were the following:
- Who is FB’s biggest competition?
- Can FB track a user’s browsing activities, even after they have logged off?
- Can FB track activities across different devices, even when users aren’t logged onto FB?
- Does FB store over 96 categories of users’ information?
- Why, in 2015, after becoming aware that Cambridge Analytica and Dr. Kogan had misappropriated data from 87 million users, did FB not notify its users about the breach of their private information?
These and other questions were raised by Senator Kamala Harris, D-California.
Facebook is the world’s largest social media platform, with more than 2 billion users worldwide. Its users share private pictures, social interests, political opinions, and even their locations via the platform.
While Facebook has fallen under scrutiny over privacy claims in the past, this most recent case sets a stronger precedent for how companies like Facebook will be expected to handle the private information they collect about their users.
How the personal data misuse allegation began
Reports by U.S. media on March 17 connected Cambridge Analytica (CA) to the U.S. presidential elections, claiming that CA had been able to purchase data on 50 million Facebook users and allegedly used it to run targeted political ads among FB users.
The current figure, as stated by the Senate during its hearing with Mark Zuckerberg, stands at 87 million users.
According to Christopher Wylie, a former Cambridge Analytica employee, “This is what built the company,” referring to the initial Facebook data that Cambridge Analytica had come into possession of. The company had collected the data via a third-party survey app running on Facebook’s platform, using its API to pull data on the friends of the 270,000 respondents who took part in the survey, according to media reports. In a Reuters article, Facebook stated that it was suspending Cambridge Analytica because the company had violated data privacy policies.
The personal information the Cambridge Analytica app was able to obtain via Facebook formed the “foundational dataset” that was used in 2015 to allegedly run a match between Facebook user data and a voter register, according to Wylie. In response, Facebook took out full-page newspaper ads in the UK and the U.S., apologizing for failing to protect user information.
How much is privacy worth?
The question pursued most consistently during the current investigation is how safe the personal data you create every day really is – when you wake up, switch on the television, browse social media on your phone, and surf the Internet on your tablet or other smart devices.
In other words, how safe is the data you generate about your personal identity, and how is it being used by third parties to predict the best strategies to profit from your likes, objections, and hopes?
Privacy, and how and when it is violated, has no clear line or concrete legal statute, it seems. Identity beyond the scope of ID cards and passports remains a very abstract concept for many people to wrap their minds around.
The way we interact with one another through digital technology makes it necessary for agents like Facebook and Twitter to have a certain level of access to users’ personal information. But with information comes power, and therefore responsibility. It may well be for these reasons that most people don’t fully perceive the value personal information has, both to improve their lives and to be misused when a company like Cambridge Analytica stealthily steps over a vaguely defined line of privacy.
Should big data handlers be better regulated?
The question also being raised ever more frequently is to what extent companies that handle large amounts of personal data should be regulated. In general, the Facebook co-founder agreed to consider tighter regulations.
Should companies like Facebook allow consistent scrutiny by external auditors? And which players should be assigned the secondary privilege, and responsibility, of accessing the public’s very private information during such audits?
The U.S. federal government is already investigating Facebook. The question now is how much further it will go to regulate it.