In this New York Review of Books essay on the dark arts of AI, Sue Halpern draws at length on the work of Fordham Law Professor Chinmayi Sharma, who has been writing about creating a malpractice regime for engineers who develop artificial intelligence (AI).
A functional government, committed to safeguarding its citizens, might be keen to create a regulatory agency or pass comprehensive legislation, but we in the United States do not have such a government. In light of congressional dithering, regulatory capture, and a politicized judiciary, pundits and scholars have proposed other ways to ensure safe AI. Harding suggests that the Internet Corporation for Assigned Names and Numbers (ICANN), the international, nongovernmental group responsible for maintaining the Internet’s core functions, might be a possible model for international governance of AI. While it’s not a perfect fit, especially because AI assets are owned by private companies, and it would not have the enforcement mechanism of a government, a community-run body might be able, at least, to determine “the kinds of rules of the road that AI will need to adhere to in order to protect the future.”
In a similar vein, Marcus proposes the creation of something like the International Atomic Energy Agency or the International Civil Aviation Organization but notes that “we can’t really expect international AI governance to work until we get national AI governance to work first.” By far the most intriguing proposal has come from the Fordham law professor Chinmayi Sharma, who suggests that the way to ensure both the safety of AI and the accountability of its creators is to establish a professional licensing regime for engineers that would function in a similar way to medical licenses, malpractice suits, and the Hippocratic oath in medicine. “What if, like doctors,” she asks in the Washington University Law Review, “AI engineers also vowed to do no harm?”
Sharma’s concept, were it to be adopted, would overcome the obvious obstacles currently stymieing effective governance: it bypasses the tech companies, it does not require a new government bureaucracy, and it is nimble. It would accomplish this, she writes,
by establishing academic requirements at accredited universities; creating mandatory licenses to “practice” commercial AI engineering; erecting independent organizations that establish and update codes of conduct and technical practice guidelines; imposing penalties, suspensions or license revocations for failure to comply with codes of conduct and practice guidelines; and applying a customary standard of care, also known as a malpractice standard, to individual engineering decisions in a court of law.
Professionalization, she adds, quoting the network intelligence analyst Angela Horneman, “would force engineers to treat ethics ‘as both a software design consideration and a policy concern.’”
Sharma’s proposal, though unconventional, is no more or less aspirational than Marcus’s call for grassroots action to curb the excesses of Big Tech or Harding’s hope for an international, inclusive, community-run, nonbinding regulatory group. Were any of these to come to fruition, they would be likely targets of a Republican administration and its tech industry funders, whose ultimate goal, it seems, is a post-democracy world where they decide what’s best for the rest of us.
The danger of allowing them to set the terms of AI development now is that they will amass so much money and so much power that this will happen by default.
Read “The Coming Tech Autocracy” in The New York Review of Books.