
What's new

Regulations

Announced April 2021

Regulation

EU

The AI Act aims to ensure that AI systems placed on the market and used within the EU are safe and respect existing law on fundamental rights and EU values. The regulation classifies AI systems into three risk categories: unacceptable risk, high risk and low risk.

The Commission proposed the AI Act in April 2021. The regulation awaits agreement between the Council and the Parliament before it can become enforceable.

Announced September 2022

Regulation

EU

The AI Liability Directive introduces liability rules specifically for damage caused by AI systems. The new rules introduce two main safeguards: first, the Directive eases victims' burden of proof by introducing a 'presumption of causality'; second, where damage has been caused, it helps victims gain access to relevant evidence.

The Commission proposed the AI Liability Directive in September 2022. The directive awaits agreement between the Council and the Parliament before it can become enforceable.

Updated May 2024

Principles

OECD

The OECD drafted Principles on Artificial Intelligence, which the OECD's 36 member countries and several partner countries (including Argentina, Brazil, Colombia, Costa Rica, Peru and Romania) adopted in May 2019. In May 2024 the principles were updated to include references to misinformation and disinformation, the rule of law and bias.

Published 30 October 2023

Principles

G7

The leaders of the G7 countries issued International Guiding Principles on AI and a voluntary Code of Conduct for AI developers under the Hiroshima AI Process. The 11 guiding principles provide developers, deployers and users of AI with a blueprint for promoting safety and trustworthiness in their technology.