Recovering Tech’s Humanity

Professor Olivier Sylvain’s essay, published in the Columbia Law Review, argues that any reform should treat social media companies as commercial businesses focused chiefly on optimizing for advertisers.

Tim Wu’s essay, Will Artificial Intelligence Eat the Law?, posits that automated decisionmaking systems may be taking the place of human adjudication in social media content moderation. Conventional adjudicative processes, he explains, are too slow or clumsy to keep up with the speed and scale of online information flows. Their eclipse, he concludes, is imminent, inevitable, and, on the whole, welcome.

But in at least two important ways, Wu’s essay masks significant challenges. First, by presuming the inevitability of automated decisionmaking systems in online companies’ distribution of user-generated content and data, Wu obscures the indispensable role that human managers at the Big Tech companies play in developing and selecting their business designs, algorithms, and operational techniques for managing content distribution. The companies deploy these resources to further their bottom-line interests in enlarging user engagement and dominating markets. In this way, social media content moderation is really only a tool for achieving the companies’ central objectives. Wu’s essay also says close to nothing about the various resources at work “behind the screens” that support this commercial mission. While he recognizes, for example, that tens of thousands of human reviewers exist, Wu downplays the companies’ role as managers of massive transnational production lines and employers of global labor forces. These workers and the proprietary infrastructure with which they engage are invaluable to the distribution of user-generated content and data.

Second, the claim that artificial intelligence is eclipsing law is premature, if not a little misleading. There is nothing inevitable about the private governance of online information flows when we do not yet know what law can do in this area. This is because courts have abjured their constitutional authority to impose legal duties on online intermediaries’ administration of third-party content. The prevailing judicial doctrine under section 230 of the Communications Act (as amended by the Communications Decency Act) (section 230) allows courts to adjudicate the question of intermediary liability for user-generated content only when the service at issue “contributes materially” to that content. This is to say that the common law has not had a meaningful hand in shaping intermediaries’ moderation of user-generated content because courts, citing section 230, have forsworn the law’s application. Defamation, fraud, and consumer protection law, for example, generally hold parties legally responsible for disseminating unlawful information that originates with third parties. But, under the prevailing section 230 doctrine, powerful companies like Facebook, Google, and Amazon have no legal obligation to block or remove user-generated content that they have no hand in “creat[ing]” or “develop[ing].”
