Margaret Hartnett, Co-founder, Progressio AI Ltd
State of Open: The UK in 2024
Phase Two: The Open Manifesto Report
Thought Leadership: An AI Act update
Following intense trilogue negotiations, the text of the provisional political agreement on the EU Artificial Intelligence Act (AI Act) was released in January. The text has since undergone several amendments as it progressed through approval by the European Parliament, the issuance of a corrigendum and, most recently, approval by the Council of the European Union. A particular area of confusion and concern has been the handling of Artificial Intelligence (AI) systems released under so-called ‘free and open-source licences’. The final text, approved in May by the Council of the European Union, is due to come into force in August and states that AI systems released under ‘free and open-source licences’ are excluded from the AI Act unless they satisfy any one of a series of conditions.
Thus, the application of the AI Act to an AI system released under a ‘free and open-source licence’ depends, at least in part, on the risk classification of the AI system. For example, under Article 2(12) and Article 5, a prohibited AI system released under a ‘free and open-source licence’ still falls within the scope of the AI Act. Where an AI system is harmful and abusive, contradicting the values of respect for human dignity, freedom, equality, democracy, the rule of law and fundamental rights (including the rights to non-discrimination, data protection and privacy, and the rights of the child), the AI system is prohibited regardless of the licence under which it is released. The application of the AI Act to a high-risk AI system released under a ‘free and open-source licence’, however, also depends on whether the AI system is placed on the market or put into service in the EU.
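This applicability test can be summarised as a simple decision procedure. The sketch below is an illustrative simplification of one reading of Article 2(12), not legal advice: the flags and their names are my own reduction of the conditions described above.

# Illustrative sketch only: a simplified reading of the Article 2(12)
# open-source exclusion, as described in the text above. Not legal advice.
from dataclasses import dataclass

@dataclass
class AISystem:
    open_source_licence: bool   # released under a 'free and open-source licence'
    prohibited: bool            # falls under Article 5 (prohibited AI practices)
    high_risk: bool             # classified as high-risk (Article 6 / Annex III)
    transparency_scope: bool    # falls under the Article 50 transparency rules
    on_eu_market: bool          # placed on the market or put into service in the EU

def ai_act_applies(s: AISystem) -> bool:
    """Simplified applicability test for open-source AI systems."""
    if not s.open_source_licence:
        return True   # the open-source exclusion is simply unavailable
    if s.prohibited:
        return True   # Article 5 systems are in scope regardless of licence
    if (s.high_risk or s.transparency_scope) and s.on_eu_market:
        return True   # in scope once placed on the market or put into service
    return False      # otherwise excluded from the AI Act

# Example: an open-source high-risk system placed on the EU market is in scope.
print(ai_act_applies(AISystem(True, False, True, False, True)))  # True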
Article 3(9) defines ‘placing on the market’ as the ‘first making available of an AI system or a general-purpose AI model on the Union market’. Further clarification can be found in Recital 103, which provides that ‘AI components that are provided against a price or otherwise monetised, including through the provision of technical support or other services’ … ‘should not benefit from the exceptions provided to free and open-source AI components’. Recital 103 further notes that making ‘AI components available through open repositories should not, in itself, constitute a monetisation’. Notably, Apple recently declined to bring a new AI product to the EU market, citing concerns arising from other EU legislation.
Beyond considering whether the AI Act applies to an AI system released under a ‘free and open-source licence’, it is also necessary to consider the obligations of the different stakeholders involved. The Act distinguishes between the providers and deployers of AI systems, with the providers of high-risk AI systems facing particularly onerous obligations. Complications arise, however, when considering the obligations of providers and deployers of AI systems that are released under ‘free and open-source licences’ and to which the AI Act applies. Take, for example, an AI-driven emotion recognition system or a biometric categorisation system, released under a ‘free and open-source licence’, being tested in real-world conditions prior to being placed on the market in the EU (Article 2(8)). In this case, while the provider of the AI system is exempted from the obligations they would otherwise face as the provider of a high-risk AI system (Article 6(2) and Annex III, Points 1(b) and 1(c)), the deployer of the AI system still has an obligation to inform the persons exposed to it of the operation of the AI system and to process their personal data in accordance with the GDPR (Article 50(3)). By contrast, providers of AI systems intended to interact directly with a person (Article 50(1)), for example a chatbot, must design and develop the AI system so that the person is informed that they are interacting with an AI system. In addition, if the AI system is placed on the market as a component of a high-risk AI system, the provider of the high-risk AI system must also comply with their other obligations under the AI Act (as set out in Article 16), and the deployer of the high-risk AI system must comply with their obligations under Article 26.
The AI Act imposes additional obligations on the providers of general-purpose AI models, defined as ‘an AI model … that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market’ (Article 3(63)). Such a model may be placed on the market in various ways and may be ‘integrated into a variety of downstream systems or applications’; the Act also states that while general-purpose AI models ‘are essential components of AI systems, they do not constitute AI systems on their own’. Excluded from the definition of general-purpose AI models are those AI models that ‘are used for research, development or prototyping activities before they are placed on the market’ (Article 3(63)).
Providers of general-purpose AI models face a number of obligations, including creating technical documentation and satisfying transparency obligations by creating and making available information and documentation to providers of AI systems who intend to integrate a general-purpose AI model that is placed on the market. However, the technical documentation and transparency obligations of providers of general-purpose AI models (apart from those that pose systemic risk) do not apply to providers of AI models released under a ‘free and open-source licence’ that allows for the access, usage, modification and distribution of the model, and whose parameters, including the weights, the information on the model architecture and the information on model usage, are made publicly available. Nevertheless, the provider of a general-purpose AI model released under a ‘free and open-source licence’ is still required to produce a summary of the content used for model training and to put in place a policy to comply with EU copyright law (Article 53(2) and Recital 104).
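The conditions attaching to this exemption, and the obligations that survive it, can be laid out as a checklist. The sketch below is again an illustrative simplification under the assumptions just described; the field and obligation names are my own shorthand, not terms from the Act.

# Illustrative sketch only: which obligations remain for a provider of a
# general-purpose AI (GPAI) model under the open-source exemption described
# above. Not legal advice.
from dataclasses import dataclass

@dataclass
class GPAIModel:
    licence_allows_use_mod_dist: bool  # access, usage, modification, distribution
    weights_public: bool               # parameters, including weights, published
    architecture_info_public: bool     # information on the model architecture
    usage_info_public: bool            # information on model usage
    systemic_risk: bool                # systemic-risk models get no exemption

def remaining_obligations(m: GPAIModel) -> set:
    obligations = {
        "technical documentation",
        "information for downstream providers",
        "EU copyright compliance policy",   # survives the exemption
        "training content summary",         # survives the exemption
    }
    exempt = (m.licence_allows_use_mod_dist
              and m.weights_public
              and m.architecture_info_public
              and m.usage_info_public
              and not m.systemic_risk)
    if exempt:
        obligations -= {"technical documentation",
                        "information for downstream providers"}
    return obligations

# Example: a fully open, non-systemic-risk model keeps only the copyright
# policy and training content summary obligations.
print(sorted(remaining_obligations(GPAIModel(True, True, True, True, False))))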
The AI Act recognises that an AI value chain may comprise multiple parties that, in addition to supplying AI systems, tools and services, also supply components or processes that are incorporated by an AI system provider into the AI system for the purpose of model training, model retraining, model testing and evaluation, integration into software, or other aspects of model development (Recital 88). With this in mind, the AI Act requires these parties to provide, by written agreement, the provider of a high-risk AI system with the necessary information, capabilities, technical access and other assistance to enable the provider to comply with the obligations under the AI Act (Article 25(4)). However, this obligation does not apply to third parties making accessible to the public tools, services, processes or components, other than general-purpose AI models, under a ‘free and open-source licence’.
The AI Act thus provides limited exemptions from various stakeholder obligations in connection with AI systems and general-purpose AI models released under a ‘free and open-source licence’. However, the AI Act does not include a formal definition of a ‘free and open-source licence’, and the term is not one historically used or defined in legislation. Instead, Recital 102 provides a broad description of a ‘free and open-source licence’ for the release of software and data, including models: a licence which allows them to be openly shared such that ‘users can freely access, use, modify and redistribute them or modified versions thereof’ and which ‘allows users to run, copy, distribute, study, change and improve software and data, including models under the condition that the original provider of the model is credited, the identical or comparable terms of distribution are respected’. However, Liesenfeld and Dingemanse* have argued that, with the rise of large language models and text-to-image generators, the traditional licence-based representation of open source is no longer sufficient for declaring models as being open. They further argue that, by adopting a licence-based definition of open-source AI, the AI Act has provided a pressure point that will be targeted by those seeking to avoid ‘the most onerous requirements of technical documentation and the attendant scientific and legal scrutiny’.
*Liesenfeld, A. and Dingemanse, M., ‘Rethinking open-source generative AI: open-washing and the EU AI Act’, FAccT ’24: Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, June 2024, pp. 1774–1787.
First published by OpenUK in 2024 as part of State of Open: The UK in 2024 Phase Two “The Open Manifesto”
© OpenUK 2024