Algorithmic Injustice in the Age of the Digital Poorhouse

On May 3, Fordham Law School and the McGannon Communication Research Center honored Virginia Eubanks for her recent book, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. Eubanks, an associate professor of political science at the University at Albany, spoke with Ifeoma Ajunwa, assistant professor of labor and employment law at Cornell University, and Cathy O’Neil, a data scientist and author of Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, in a panel discussion moderated by Fordham Law Professor Olivier Sylvain.

In recent years, we’ve accepted that algorithms make simple suggestions for us—friends to add to our social network, where to eat, people to date. Sylvain noted how prevalent the algorithm has become in our daily lives, saying, “We’ve come to think that algorithms are self-justifying and inevitable, but we should be alert to their hazards. It’s especially high-stakes for those who rely on government services and services that are deployed by virtue of algorithmic processes.” O’Neil agreed and referred to the “veneer of objectivity” that these algorithms exude: we often assume that a computerized decision is therefore fair and impartial. The machines themselves may be impartial, but humans design these programs, which can operate on limited data sets that fail to represent and predict the trends within a given population. Ultimately, these incomplete and sometimes faulty systems are capable of automating decisions that govern the lives of many people, and those who design them have a deeper responsibility to the communities these algorithms affect.

While introducing her book, Eubanks quoted science fiction writer William Gibson, who said, “The future is already here—it’s just not very evenly distributed.” Gibson’s quote implies that it is the wealthy and upper middle classes who have this advance access, but Eubanks noted that we should also look to low-rights communities to observe new types of technology, whether among the poor and working classes, people living in public housing, or undocumented immigrants. Among these groups, the future of surveillance technology is already in place.

There are three issues at the core of these algorithm-governed benefit programs, she said. The first is that these algorithms are designed to perform what is essentially triage via an automated rationing system that operates on a commitment to austerity instead of addressing the roots of these inequalities. The second is bias—typically racial—that is almost always present in these decision-making systems, however much we hope to remove it by automating the processes. As an example, Eubanks emphasized that black children are still twice as likely as non-black children to be removed from their homes by protective services.

The third, and perhaps most overarching, issue is the way in which people discuss these algorithms. We have a tendency to view them as “administrative upgrades,” designed to lighten the burden for caseworkers and other administrators. In reality, however, they are decision-making tools capable of influencing policy, and we should discuss them—and even more importantly, regulate them—as such.

Automating Inequality likens the use of algorithms in governmental benefit programs to a much older American institution: the poorhouse, which was established to support those in need, but frequently succeeded only in worsening the prospects of its inhabitants. Likewise, the algorithms that govern many of America’s benefit systems often hinder the advancement and health of the very people they were designed to aid.

To illustrate the new “digital poorhouse,” Eubanks’ book tells the stories of those affected by different types of benefit decision-making algorithms. She traveled to Indiana, where so-called “modernization” processes for benefit eligibility have mistakenly denied many families the healthcare or welfare support on which they depend. In Los Angeles, the coordinated entry system attempts to combat homelessness using an algorithm that rates candidates by their vulnerability and matches them with the appropriate resources. In Allegheny County, Pennsylvania, a new family screening tool works to keep children safe by attempting to determine which ones are most likely to suffer abuse at home. Eubanks noted that these tools operate on limited and consequently biased data sets; the family screening algorithm collects information only from families who have interacted with the county and state for parenting support—namely, poor and working-class families. Upper- and middle-class families typically obtain support through private means, making their data far less likely to enter the county’s algorithms.

In professions like law and medicine, strict ethical standards apply, along with real penalties for violating them. In data science, however, there are no such strictures and thus no real penalties for flouting what are, at best, vague statements of ethical responsibility. Ajunwa pointed out that the people who design these decision-making programs now have as much power as doctors and lawyers, and just as strict codes of ethics were established for the medical and legal professions, one must be developed to guide programmers and data scientists.

Eubanks admitted that although her book paints a rather dark picture, she hopes that it will rouse readers to inform themselves about the automated systems that govern many aspects of our lives. “I see us engaged in a really difficult conversation that I hope will force us to face the deeper social issues that we have been trying to ignore,” she said.
