
Stefano Maffulli, Executive Director, The Open Source Initiative

State of Open: The UK in 2023

Phase Two “Show us the Money”

Part 2: “AI Openness”


The Open Source Initiative (OSI) recognises the evolving challenges AI and ML pose to traditional open source principles. By blurring the lines between software and data, these technologies raise legal questions around copyright, patents, and other areas of law, challenging established OSI-approved licences. OSI aims to address this with the “Deep Dive: AI” series, featuring webinars, an in-person session, and a report. The series seeks to define “Open Source AI” through a global drafting process, engaging experts, practitioners, and ethicists. The focus is on adapting open source principles to AI, exploring licensing models for responsible AI use, data protection, content transparency, and community collaboration. The initiative encourages stakeholders to contribute insights to shape the future of open source in the AI landscape.

Thought Leadership: “Open Source AI” Definition

The Open Source communities have relied for decades on copyright and its sibling, copyleft, but this approach is showing its limitations26 with modern technologies. Artificial Intelligence and Machine Learning pose an even larger challenge to the principles of Open Source than cloud and mobile did.

To start, AI and ML blur the boundaries between software and data. AI systems introduce new artefacts for which the applicability of copyright law is questionable. Generative AI systems also pose new and intricate legal challenges to many established understandings of patents and trade secrets. And the large quantities of data required to build functional ML systems attract other laws, from privacy protection to security to non-discrimination and accessibility laws, all the way to basic human rights protections. Many of the legal principles underpinning OSI Approved Licences are already being challenged in these contexts.

Despite the popularity of the term “Open Source AI,” it has no shared, agreed definition. And despite the popularity of software licences applied to ML models, not everybody agrees on the applicability of their terms.

To carry the principles of Open Source into the tech that comes next, we must think hard, carefully and quickly about how to adapt the guiding principles of “open” to the AI/ML field. OSI is calling for stakeholders to join the global drafting process for a definition of “Open Source AI”: Open Source as applied to AI/ML.

Looking for a solution

Deep Dive: AI is a series of events organised by the Open Source Initiative (OSI) as a wide consultation with communities of practice, researchers, and experts in ethics and human rights to explore the challenges and opportunities of Open Source AI. First produced in 2022, the 2023 series consists of three parts:

  • A webinar series featuring experts on AI and Open Source, airing between September and October (the call for speakers is open until August 4th);

  • An in-person session at All Things Open on October 16th; and
  • A report summarising the findings of the 2023 Deep Dive.

The goal of Deep Dive: AI is to help the Open Source community understand the implications of AI for Open Source. The series addresses a range of topics focused on the implications of applying Open Source principles in the development and use of AI models.

Deep Dive: AI is a timely and important initiative. As AI becomes more pervasive, it is essential that the Open Source community promote the development of AI systems that are used ethically, consistent with the ethos of Open Source. The OSI is soliciting input from the community throughout the series. The series is a valuable resource for anyone interested in the future of Open Source AI, especially with regard to licensing issues and the very definition of Open Source in an AI world.

The webinars will be particularly informative, and the session at All Things Open will offer a diverse range of perspectives on the future of AI and Open Source.

Focusing on Open Source principles

As AI technologies become more prevalent and influential, Open Source and AI developers are looking for licensing models that promote innovation while limiting harm. The emergence of AI has raised complex questions regarding ownership, accountability, and fairness. Developers and organisations utilising AI technologies are increasingly recognising the need to align their work with ethical principles and societal values.

Deep Dive: AI will address one of the primary challenges of this new reality: providing frameworks that promote the responsible use of AI through Open Source licensing, for example by curtailing malicious or harmful applications. Concerns that AI technologies can too easily violate privacy, enable surveillance, or facilitate discrimination are at the forefront for researchers and civil rights advocates, highlighting a tension with some of the core principles of Open Source.

Another aspect to be addressed is the protection of data used in AI models. Open Source licences are being updated to specify how data collected and processed by AI systems should be handled, ensuring compliance with privacy regulations and ethical data usage. Additionally, there is growing awareness around the potential risks associated with AI-generated content, such as deepfakes or misinformation.

Deep Dive: AI will explore collaboration models that incentivise contributions to AI projects while safeguarding the open nature of the systems. We encourage everyone concerned about the intersection of AI and Open Source to consider how they might update their licences to address the unique challenges posed by AI. Deep Dive: AI is here to help, exploring responsible use, data protection, content transparency, and community collaboration, and providing insights to help evolve licences that shape AI development in a manner aligned with societal values.
