Exploring openness in AI: Insights from the Columbia Convening
Over the past year, a robust debate has emerged regarding the benefits and risks of open sourcing foundation models in AI. This discussion has often been characterized by high-level generalities or a narrow focus on specific technical attributes. One of the key challenges, and one the OSI community is addressing head-on, is defining Open Source within the context of foundation models.
The recent proceedings from the Columbia Convening on Openness in Artificial Intelligence, made available for the first time this week, propose a new framework to help inform practical and nuanced decisions about the openness of AI systems, including foundation models. They are a welcome addition to the process.
The Columbia Convening brought together experts and stakeholders to discuss the complexities and nuances of openness in AI. The goal was not to define Open Source AI but to illuminate the multifaceted nature of the issue. The proceedings reflect the February conversations and are based on the backgrounder text developed collaboratively with the working group.
One of the significant contributions of these proceedings is the framework for understanding openness across the AI stack. The framework summarizes previous work on the topic, analyzes the various reasons for pursuing openness, and outlines how openness varies in different parts of the AI stack, at both the model and system levels. This approach provides a common descriptive framework that supports a more nuanced and rigorous understanding of openness in AI, and it aims to enable further work on definitions of openness and safety in AI.
The proceedings emphasize the importance of treating safety safeguards, licenses, and documentation as attributes rather than components of the AI stack. This evolution from a model stack to a system stack underscores the dynamic nature of the AI field and the need for adaptable frameworks.
These proceedings are set to be released in time for the upcoming AI Safety Summit in South Korea. This timely release will help maintain momentum ahead of further discussions on openness at the French summit in 2025.
We’re happy to see like-minded individuals collaborating to discuss and solve the varied problems associated with openness in AI.