A Fireside Chat: Mozilla’s Columbia Convening
Udbhav Tiwari, Director Global Product Policy, Mozilla

State of Open: The UK in 2024

Phase Two: The Open Manifesto Report

I’m Udbhav Tiwari, Mozilla’s Director of Global Product Policy, based in Berlin. I work in Mozilla’s public policy team, representing Mozilla’s mission with lawmakers and regulators, guided by Mozilla’s Manifesto, which says that the internet should be a global public resource that’s accessible to all.

I’ve worked with Mozilla for a little under five years, and started working in open source when I was in law school in India. The intranet within the university was run by the students, and that’s when I first started using open source tools as part of the broader student collective. Since then I’ve worked in civil society organisations, large tech companies and now with Mozilla.

Mozilla is headquartered in San Francisco, California, with 1,000 employees worldwide and 60-70 in the UK. It’s wholly owned by the Mozilla Foundation and isn’t publicly traded, which helps to ensure our work is in service of the mission. We produce software, including internet browsing products used all over the world.

How important is open source to Mozilla and what has been the main engagement around that?

In the early 2000s, when the Internet Explorer browser dominated the industry, Mozilla was created as an open source alternative. When the project launched, the community contributed to a full-page ad in the New York Times. Since then Mozilla has been deeply involved in open source. All of our development takes place in the open, and Mozilla also serves as the foundation for other open source projects.

How is Mozilla managing the openness conversation around AI?

Is open source the solution to AI concerns? Probably not. Is it one of the most important solutions? Absolutely, yes. We believe that many open source development practices enable better transparency and understanding and that’s the first thing needed for AI. The public also needs to be able to play a role in auditing AI systems, deciding what the risks are and proposing solutions.

But there are challenges. In the last couple of years, open source has gone through a rough patch: AI development processes are not the same as software ones, and this has made the benefits of openness more diffuse when it comes to AI development. We’re also seeing a lot of hyperbole about AI right now, and attempts to paint openness as a villain in the context of AI by painting pictures of risk. But we think the studies claiming open source AI is too risky often lack objectivity. If these assessments were done in a more objective and scientific manner, we’d see that very few things about open source AI models present a major risk that the internet doesn’t already pose for the world at large.

Encouragingly, over the last few months we’ve seen the tone on open source and AI shift: people are moving from being terrified to starting to see the positive opportunity.

The Columbia Convening

The Columbia Convening was a day-long event on 29 February 2024, and one of the first instances of the openness community coming together to discuss openness and AI. Through the Convening, we realised that when it comes to AI, openness is a spectrum, within which there are many people with different positions who are not communicating with each other. Mozilla connected with Columbia’s Institute of Global Politics, which has expertise from both a technical and a policy perspective, and the convening power to ensure that the communities talk to each other.

Was there anything surprising in the outcomes?

There were three primary outputs: a technical readout, a policy readout, and an updated version of a backgrounder shared with participants in advance, which set out a spectrum of openness. We superimposed that spectrum onto the components of AI systems: how each can be made more open, and the impacts of those components becoming more open. The most surprising output was just how much consensus there was on so many points. But of course there were areas where we saw differences of opinion. Data turned out to be one of these. Should making training data available be an essential part of open source AI? Or should disclosing a list of the data, rather than the data sets themselves, be enough?

How are the Convening and the framework different?

Our idea is an iterative expansion of previous work by others. Our model asks what kinds of open artefacts are currently being released and made available in the market, then uses that to create broader frameworks useful for lawmakers, policymakers, regulators and the technical community, enabling them to decide: ‘these are the parts I would like to make open, and these are the parts I may not be able to make open’.

How do you feel that this kind of framework can actually make a difference? 

Since sharing the output documents we have had fantastic responses. A lot of academic papers inspired the Convening, but we noticed there hadn’t been an output that collated all of these pieces together. So that is what we did. The reception has been very positive. We see that the level of discourse has risen as a result: people are more familiar with both the push and the pull in the current space, and are thinking about constructive ways of engaging with it rather than just wondering what it’s about.

What is the long term vision? 

Interestingly, people agree that open source should not be a free pass to escape regulatory obligations; responsible release practices are important, and part of understanding the risks that openly available AI models pose. This can serve as a really good way of mitigating some of those risks. We want to have an open conversation about it in the future, and the next Convening will likely focus on guardrails and safety. Now that we know what the lay of the land is, what are the things we can start doing, both in law and technically, to mitigate the risks arising from AI technologies?

What do you see happening next?

Open source has a history of working in a particular way, and it’s pretty clear that some of that has to evolve for use in the AI world. Implementing this in the developer community is a whole other track of work. Whether evolution means greater engagement while the norms stay the same, or the norms themselves changing, it’s a multi-year project and a very divisive one.

What would you like to see governments focusing on in AI, and how should they think about openness?

We believe that for open source AI to be successful, governments need to invest in creating public infrastructure: funding research labs, funding academic institutions, and creating sandboxes that allow for innovative models and exploration. The government has taken some encouraging steps towards this, and people are starting to look at the positive aspects. We’re also starting to see some of these lessons percolate down into countries that historically have not been pro-open source, even though the benefit has always been there. If applied to AI, we think those benefits will be multiplied.

First published by OpenUK in 2024 as part of State of Open: The UK in 2024 Phase Two “The Open Manifesto”

© OpenUK 2024
