
    The New York Review of Books: Work of Prof. Chinmayi Sharma Referenced at Length in Larger Conversation about AI

    By Newsroom | October 17, 2024 | Law School News

    The work of Fordham Law Professor Chinmayi Sharma, who has been writing about creating a malpractice regime for engineers who develop artificial intelligence (AI), was referenced at length by Sue Halpern in an op-ed on the dark arts of AI published in The New York Review of Books. Halpern writes:

    A functional government, committed to safeguarding its citizens, might be keen to create a regulatory agency or pass comprehensive legislation, but we in the United States do not have such a government. In light of congressional dithering, regulatory capture, and a politicized judiciary, pundits and scholars have proposed other ways to ensure safe AI. Harding suggests that the Internet Corporation for Assigned Names and Numbers (ICANN), the international, nongovernmental group responsible for maintaining the Internet’s core functions, might be a possible model for international governance of AI. While it’s not a perfect fit, especially because AI assets are owned by private companies, and it would not have the enforcement mechanism of a government, a community-run body might be able, at least, to determine “the kinds of rules of the road that AI will need to adhere to in order to protect the future.”

    In a similar vein, Marcus proposes the creation of something like the International Atomic Energy Agency or the International Civil Aviation Organization but notes that “we can’t really expect international AI governance to work until we get national AI governance to work first.” By far the most intriguing proposal has come from the Fordham law professor Chinmayi Sharma, who suggests that the way to ensure both the safety of AI and the accountability of its creators is to establish a professional licensing regime for engineers that would function in a similar way to medical licenses, malpractice suits, and the Hippocratic oath in medicine. “What if, like doctors,” she asks in the Washington University Law Review, “AI engineers also vowed to do no harm?”

    Sharma’s concept, were it to be adopted, would overcome the obvious obstacles currently stymieing effective governance: it bypasses the tech companies, it does not require a new government bureaucracy, and it is nimble. It would accomplish this, she writes,

    by establishing academic requirements at accredited universities; creating mandatory licenses to “practice” commercial AI engineering; erecting independent organizations that establish and update codes of conduct and technical practice guidelines; imposing penalties, suspensions or license revocations for failure to comply with codes of conduct and practice guidelines; and applying a customary standard of care, also known as a malpractice standard, to individual engineering decisions in a court of law.

    Professionalization, she adds, quoting the network intelligence analyst Angela Horneman, “would force engineers to treat ethics ‘as both a software design consideration and a policy concern.’”

    Sharma’s proposal, though unconventional, is no more or less aspirational than Marcus’s call for grassroots action to curb the excesses of Big Tech or Harding’s hope for an international, inclusive, community-run, nonbinding regulatory group. Were any of these to come to fruition, they would be likely targets of a Republican administration and its tech industry funders, whose ultimate goal, it seems, is a post-democracy world where they decide what’s best for the rest of us. The danger of allowing them to set the terms of AI development now is that they will amass so much money and so much power that this will happen by default.

    Read “The Coming Tech Autocracy” in The New York Review of Books.


