In this co-authored op-ed for Lawfare, Fordham Law Professor Chinmayi Sharma argues that open-weight AI models “aren’t the panacea for AI democratization, innovation, and accountability that their evangelists claim them to be.”
In light of the explosive growth of generative AI, which the general public has adopted faster than it did personal computers or the Internet, it is natural to worry about who controls this technology. Most of the major industry players—including leading AI labs such as OpenAI (makers of ChatGPT), Anthropic (Claude), and Google (Gemini)—rely on closed models whose details are kept private and whose operation depends entirely on the whims of these (increasingly profit-hungry) private companies.
A notable exception to this trend of closed AI models is Meta, whose advanced foundation model, Llama, has publicly available parameters. Meta has gone all-in on what it calls “open-source AI”—what we, for reasons explained below, call “open-access AI”—going so far as to argue that such models are not merely as good as but in fact superior to closed models. Indeed, while open-access models have traditionally lagged behind their closed counterparts in performance, Meta, along with other makers of open-access foundation models such as IBM and AI chip giant Nvidia, has largely closed the gap. Although competition between closed and open-access models will no doubt continue, it is plausible that, going forward, there will be no meaningful capability gap between open and closed models.
Meta’s argument in favor of openness in AI models relies heavily on an analogy to open-source software. In a statement accompanying one of the Llama releases, Meta founder and CEO Mark Zuckerberg drew a direct analogy to what is arguably open-source software’s greatest triumph: the development of the Linux operating system, which runs a majority of the world’s computers, including most phones (Android is built on the Linux kernel) and web servers. He is not alone.
But the comparison of open-access AI to open-source software is less compelling than it might first seem, because open-access models are substantially less open than the “open-source” moniker suggests. While there are indeed many analogies between these two types of software, there are also important disanalogies, ranging from the nature of the technology itself and the degree of access outside developers have to purportedly open models, to development costs, to the degree to which development is concentrated in a handful of companies. Some of the positive history of open-source software will likely carry over to open-access AI. But not all of it. Everyone—from industry techno-optimists releasing open-access models in the name of open-source software’s ideals to policymakers wary of the threats such models introduce—should be attentive to these differences in order to maximize the benefits and minimize the harms of open-access AI.
Read “Open-Access AI: Lessons From Open-Source Software” on Lawfare.