
A Fireside Chat: Emad Mostaque, Founder, Stability AI

State of Open: The UK in 2024

Phase One: AI and Open Innovation

Emad Mostaque, a mathematician with a finance background, founded Stability AI to apply AI to global challenges. He advocates for open source software, believing it democratises access and fosters innovation, and Stability AI attracts talent globally by emphasising freedom for its researchers. His motivation stems from his son's autism diagnosis: he wanted to see how AI could help review existing research.

He prioritises open models to achieve Stability AI's vision. To balance openness with profitability, the company offers subscription services while remaining committed to open source. Safety is a priority, including collaboration with academics to mitigate risks. He foresees exponential AI adoption, with the UK well placed to lead, while Stability AI aims to democratise access to AI and drive innovation across sectors.


A Fireside Chat: Emad Mostaque, Founder, Stability AI

1. What is your personal background and how did you come to work in AI?

I am a mathematician and problem solver at heart. I started my career in finance, with a background in engineering, having studied mathematics and computer science at the University of Oxford. I later founded Stability AI with a view to using AI to solve some of the world's greatest challenges and to help humanity achieve its potential.

2. What is your background in and understanding of open source software?

Open source technology is what will power the world and help to level the playing field in this next revolution.

At Stability AI, we are committed to developing and releasing open models because we recognise and welcome the benefits of open source software. Open models are innovation enablers. They democratise access and allow grassroots developers anywhere in the world to develop specialised models tailored to specific needs so that one day every sector and every nation can have their own AI.

Our commitment to open models is driven by democratising access to this technology and empowering the grassroots developer community in order to ensure transparency and competition.

The grassroots development of new businesses outside the US is also essential to mitigate a likely geographical AI divide, and the development of local models will help to reduce bias and improve transparency.

3. You run a UK unicorn. Can you share your views on building in the UK in terms of finding talent and skills, taking investment and Government support?

We are hugely proud to be a British AI company with talent based both in the UK and all over the world.

We continue to attract some of the best and brightest talent in the world, who choose to work at Stability because our technology is cutting edge and our researchers have freedom to create.

4. Can you share the personal motivation behind founding Stability AI and how you’ve achieved your vision for the company in the world of open source AI?

When I worked at a hedge fund, I was a big investor in video games and AI. But my real interest in AI came when my son was diagnosed with autism. I wanted to see how AI could help to review existing research and detect commonalities.

Stability AI is now the leading independent multi-modal generative AI company. The goal is to make foundational AI technology accessible to all and to enable the development of multi-modal models for every sector and nation. This cannot be done without open models, which is why they are at the heart of Stability AI.

Achieving this vision takes a lot of hard work along with a lot of collaboration across our world-class teams. The goal of making this technology accessible to all has been a powerful force in driving us forward.

5. Stability AI is currently dealing with some challenges around copyright and intellectual property. Can you explain?

As with any groundbreaking technology, AI raises important questions about the integration of these tools into the digital economy.

We believe that the benefits of AI will accrue to jurisdictions with clear, fair, and practical rules governing AI development. We have been engaging with governments and regulators around the world, including in the United Kingdom, to assist them with these important questions as they consider the future of AI and intellectual property.

In March 2023 I was one of the first CEOs in the AI industry to sign an open letter calling for greater caution in the development of powerful AI models, and in May 2023 I sent an open letter to the US Senate Subcommittee on Privacy setting out suggestions for the future of oversight.

6. You’ve recently started offering a subscription service in order to standardise and change how customers can use your models for commercial purposes. How do you envision balancing the company’s commitment to openness with the need for profitability?

Having delivered best in class models at the cutting edge of generative AI, we are commercialising our offering in order to better serve enterprise customers whilst remaining committed to providing open models to small developers, academics and non-commercial entities.

This closer collaboration with companies will also ensure that we are creating useful models that not only help to solve problems and boost efficiency, but also augment creativity. This will make us even better and more relevant and ensure that we stay ahead of the curve.

We remain committed to releasing our models openly to empower researchers and developers to use our models and build upon this transformational technology.

We will continue to release open source models and open research through our grants and collaborations with non-commercial researchers and academics. Our membership programme has been deployed in close consultation with researchers and our community.

We will always be a foundation model powerhouse.

7. With the evolving landscape of AI technologies, how does Stability AI navigate the balance between encouraging open source innovation and prioritising safety to mitigate potential risks and malicious use of your AI tools?

Safety comes first, always. We have taken proactive steps and developed layers of mitigation including filtering datasets that our models are trained on to remove unsafe content, adding filters to intercept unsafe prompts or outputs and investing in content labelling features to help identify images generated on our platform.

We also collaborate with academics and NGOs and support their efforts to strengthen these guardrails. Our researchers are working closely with researchers at Johns Hopkins University, and we have granted compute power to jointly develop better defence mechanisms.

With half the world’s population set to vote in national elections this year, preventing the misuse of AI has never been more important. In addition to our existing safeguards, we are focused on mitigating disinformation and misinformation. We are also working with organisations in the US that provide solutions to the threats that disinformation, AI, deep fakes, and other emerging technologies could pose to elections.

The pace of innovation is accelerating and collaboration between regulators, law enforcement, technology platforms, AI developers and AI deployers is key to ensuring safety.

8. Why do you think music and image generating systems such as Dance Diffusion and Stable Diffusion are such popular tools?

What our research and product teams have achieved in such a short space of time is nothing short of extraordinary. Our models are the most downloaded and the most liked on Hugging Face and have been downloaded over 100 million times by developers. Nearly 300,000 developers and creators actively contribute to the Stability AI online community, highlighting the strength of our collaboration with the open source community.

I think one of the reasons our models are so popular is that we are focused on developing technology that is human augmenting. It is designed to enable humanity to do more by prompting a wave of productivity and creativity. SDXL Turbo can now generate 100 images a second, and our StableLM Zephyr model works without the internet while matching the performance of models 20 times its size. The fact that these models are openly available for researchers to build on is something we are incredibly proud of and is core to our ethos.

9. What do you believe the future holds for Stability AI?

I am hugely excited about the future of Stability AI. 2023 was the year of talking about AI. 2024 is going to be the year of action and we will see exponential adoption. It’s not a case of if, but when and the UK can lead the charge.

Open technology is already playing a huge role in promoting transparency, improving accessibility, lowering the barriers to entry and driving innovation, and we look forward to continuing to play our part as the leading developer of world-class models across modalities, including audio, video and 3D. We are focused on generative media, which means that every pixel is going to be digital.

We are going to see the increased adoption of AI across different sectors, from the creative industries to fintech, healthcare and beyond, driven by the development of specialised models for those sectors.

Open models allow for the development of local models too, which will help to mitigate bias. This, coupled with Stability’s focus on building open edge models so that anyone with a device can benefit from this technology, will help to democratise access to this technology, something which is at the heart of the open source movement.

Ultimately, we remain laser focused on delivering models that fit the needs of our customers and the research community and we are excited about what is to come.
