The aim of this legislation, most of which will apply from 2026, is to promote innovation in Europe while curbing potential abuses
On Tuesday 21 May, the twenty-seven member states of the European Union gave their final approval to pioneering legislation regulating artificial intelligence (AI) systems. The initiative aims to promote innovation in Europe while limiting potential abuses. After arduous negotiations, the EU's co-legislators reached an agreement in early December, despite the concerns of some countries, such as France, which feared that too strict a framework would hamper the development of this promising sector.
The new rules, which will mainly come into force in 2026, impose constraints proportionate to the risks that AI systems pose to society. Low-risk systems will be subject to minimal transparency obligations, while high-risk systems, used in sensitive areas such as critical infrastructure, education, human resources and law enforcement, will have to meet strict requirements before they can be deployed in the EU.
Specific rules have been laid down for generative AI systems such as ChatGPT (OpenAI), to ensure the quality of the data used to train the algorithms and compliance with copyright law. Artificially generated content, whether audio, image or text, will have to be clearly labelled as such to prevent any manipulation of public opinion.
Mathieu Michel, Belgian Secretary of State for the Digital Economy, whose country holds the presidency of the Council of the EU until the end of June, said in a statement: ‘With this landmark legislation, the first of its kind in the world, Europe is underlining the importance of trust, transparency and responsibility, while allowing this fast-evolving technology to flourish and boost European innovation.’