What Facebook’s Data Mishandling Means for 87 Million Users

Two weeks ago, a whistleblower’s revelation that U.K.-based political data firm Cambridge Analytica accessed private information on 87 million Facebook users to manipulate voter sentiment in the 2016 U.S. presidential election prompted immediate and immense fallout for the world’s largest social media platform. That fallout, at present, shows no signs of abating.

Facebook’s market value has since plummeted by roughly $100 billion amid uncertainty surrounding the Federal Trade Commission’s newly launched investigation into whether the company’s handling of user data violated a 2011 consent decree. The New York and Massachusetts attorneys general also announced a joint investigation into how Cambridge Analytica, which was hired by then-presidential candidate Donald Trump’s campaign, used its data and why Facebook did not alert users or the public that their data had been handed over to a third party. In addition, Facebook shareholders have filed at least four lawsuits in federal court alleging, among other things, that the company breached its fiduciary duties by failing to prevent Cambridge Analytica’s initial misappropriation of data and by not notifying affected users in 2015, when it first learned of the theft.

“Right now, Facebook faces the largest crisis it has ever faced, and for good reason,” said Fordham Law Professor Olivier Sylvain, an expert on information and communications law and policy. “Its handling of user data has proven to be irresponsible, to say the least.”

That Facebook finds itself in its current financial and legal predicament might come as a shock to many of its 2 billion users worldwide, but Sylvain has warned for years that a virtual iceberg awaited platforms such as Facebook that reaped enormous profits from collecting and selling unprecedented amounts of user data with little to no oversight. The professor has raised alarms, specifically, about the broad scope of immunity for internet companies under Section 230 of the Communications Decency Act of 1996 (codified at 47 U.S.C. § 230), which has emboldened Facebook, Google, and others to treat user data in a “careless and sometimes lawless” fashion.

The Cambridge Analytica saga highlights the absurdity of Facebook’s longstanding position that Section 230 allowed the company to avoid taking affirmative steps to protect user data once it had handed that data over to another party, noted Sylvain, whose related essay is featured on the website of the Knight First Amendment Institute.

Critics assert that Facebook failed to act in a timely manner when it discovered that a University of Cambridge researcher, in violation of his agreement with the company, traded data on 87 million users to Cambridge Analytica, a firm backed by billionaire GOP donor Robert Mercer, a patron of the right-wing news outlet Breitbart. Cambridge Analytica then used the psychometric profiles at its disposal to disseminate fake news on Facebook and polarize the electorate, observed Sylvain, explaining that this stratagem marked a drastic departure from how past presidential campaigns sought to harness the power of data to win votes. Indeed, the overwhelming majority of the 87 million users whose data Cambridge Analytica accessed were ensnared not through any action of their own but because a Facebook friend took a personality quiz whose app, under Facebook’s then-permissive platform rules, collected the friends’ data as well.

Facebook CEO Mark Zuckerberg is scheduled to testify before Congress on April 11 about his company’s failure to protect user data. COO Sheryl Sandberg has indicated the company is “open to regulation.” It is unclear whether these remarks constitute a good faith effort to improve future user data practices or an attempt to shore up the company’s falling stock price.

“Zuckerberg is out front now because this is an existential threat to the company in ways we haven’t seen,” Sylvain said. “Maybe a drop in stock price is a market correction. On the other hand, there are those of us who care less about Facebook’s performance on the market as a measure of whether it performs its obligations to the public.”

Corporations that possess as much power as Facebook or Google must act as moderators and curators of content, according to Sylvain. Such a role risks losing some diversity of speech, but the trade-off is worthwhile if it means preventing companies from using psychometric information about users to sway voters, the professor said. Numerous countries have recently voiced alarm over weaponized data. U.K. officials asked Zuckerberg to address the Cambridge Analytica scandal before Parliament, but he declined. Australian and Indian government officials have also sought data-related information from Facebook.

Zuckerberg suggested in an interview with Vox published this week that his company would need to assume a greater role in determining what constitutes valid political speech.

“I think more than a lot of other companies, we’re in a position where we have to adjudicate those kinds of disputes between different members of our community,” Zuckerberg told Vox’s Ezra Klein. “And in order to do that, we’ve had to build a whole set of policies and governance around how that works.

“… With a community of more than 2 billion people all around the world, in every different country, where there are wildly different social and cultural norms, it’s just not clear to me that us sitting in an office here in California are best placed to always determine what the policies should be for people all around the world,” he continued, with an air of caution.

Facebook’s path forward in protecting user data and stopping the spread of fake political news will likely be shaped by some combination of outside forces: government regulation, litigation, market realities, user preferences, or even the ballot box, as impassioned young voters back candidates who pledge to fight for communications regulation.

“I am hopeful that all of this disclosure—and there’s still so much for us to learn—will mobilize people to care about these issues,” Sylvain said.
