John Pfaff wrote an op-ed for The New York Times arguing that the U.S. Supreme Court needs empirical training.
Supreme Court justices have a tough job. They are required to hand down decisions that can affect millions of people and cost billions of dollars. And while some of the issues before them are purely legal, many turn on complex policy questions: Do black voters in the South still face substantial discrimination? How reliable are eyewitnesses? Are people convicted of sex offenses very likely to reoffend?
These questions can be answered only by understanding what the data says. Unfortunately, a report released by ProPublica on Tuesday suggests that the justices struggle with that task. Looking at a random sample of cases from 2011 to 2015, ProPublica found that the court cited faulty research or introduced its own errors in nearly a third of the 24 cases that relied on such facts.
In 2013, for example, Shelby County v. Holder invalidated a critical portion of the Voting Rights Act of 1965, making it arguably one of the most consequential cases in recent years. Chief Justice John G. Roberts Jr., arguing that the South had taken great strides that made the protections of the act unnecessary, based his opinion in part on a Senate Judiciary Committee analysis that misinterpreted how the Census Bureau reports race and ethnicity data and wrongly suggested that registration gaps between minorities and whites had shrunk significantly. Neither he nor his clerks caught the error.
The court has historically relied on amicus briefs, written by outside experts, to provide the broader empirical background it needs and to help compensate for its own institutional shortcomings. Unfortunately, these briefs are easily abused. In a 2014 article, Allison Orr Larsen, a law professor at William & Mary, pointed out that many amicus briefs include false or unsubstantiated empirical assertions, at least some of which make it into justices’ opinions. ProPublica similarly identified amicus-writing organizations that could not explain where specific numerical claims came from.
So what to do?
In the 1980s, the legal expert Kenneth Culp Davis proposed that the court create an outside research organization, akin to the Congressional Research Service, to do research on its behalf. However worthwhile, the idea went nowhere.
Perhaps a more viable idea is one that Mr. Davis rejected: establish a group of technical advisers to the court. A small team of social scientists and statisticians could help justices sift through empirical evidence. There is no shortage of scholars with Ph.D.s who would be eager to do that work for the court.
The court could take steps today, without any institutional change, by hiring clerks with empirical training instead of only recently minted J.D.s. Or, if an immediate and specific need arises that the current clerks can’t address, justices could be empowered to hire outside experts to assist them with particular issues.
In the end, the question is a comparative one. It’s not, “Is there a perfect solution?” but rather, “Can the court make its policy decisions better?” The answer to the latter question is clear.