
Announced April 2021; Entered into force August 2024

Regulation

EU

The AI Act aims to ensure that AI systems placed on the market or used within the EU are safe and respect existing law on fundamental rights and values. The regulation takes a risk-based approach, classifying AI systems into four categories: unacceptable risk, high risk, limited risk and minimal risk.

The AI Act was proposed by the Commission in April 2021. On August 1, 2024, it entered into force across all 27 EU member states. On February 2, 2025, the first provisions of the Act started to apply: a range of AI practices are now prohibited, and companies have a duty to introduce AI literacy into their organisations through appropriate training and awareness programmes. The next application date, August 2, 2025, will introduce requirements for providers of general-purpose AI models.

Announced September 2022

Regulation

EU

The AI Liability Directive introduces liability rules specific to damage caused by AI systems. The new rules introduce two main safeguards: first, the directive alleviates victims' burden of proof by introducing a ‘presumption of causality'; second, it helps victims access relevant evidence when damage has been caused.

The AI Liability Directive was proposed by the Commission in September 2022. The directive is awaiting agreement from the Council and the Parliament before it can become enforceable.

Adopted May 2019; Updated May 2024

Principles

OECD

The OECD drafted its Principles on Artificial Intelligence, which the OECD's 36 member countries and a number of partner countries (including Argentina, Brazil, Colombia, Costa Rica, Peru and Romania) adopted in May 2019. In May 2024 the principles were updated to include references to misinformation and disinformation, the rule of law and bias.

Published July 2025; Effective August 2, 2025

Principles

EU

The General-Purpose AI Code of Practice helps industry comply with the EU AI Act's legal obligations on the safety, transparency and copyright of general-purpose AI models. The Code consists of three chapters: Transparency, Copyright, and Safety and Security. The Transparency and Copyright chapters are addressed to all providers of general-purpose AI models, while the Safety and Security chapter is relevant only to providers of the most advanced models, supporting their legal obligations and compliance efforts.

The Code was published on July 10, 2025, and goes into effect on August 2, 2025.

Published October 30, 2023

Principles

G7

The leaders of the G7 countries issued International Guiding Principles on AI and a voluntary Code of Conduct for AI developers under the Hiroshima AI Process. The 11 guiding principles provide developers, deployers and users of AI with a blueprint for promoting safety and trustworthiness in the technology.