Developing a global approach to AI governance
As the current and emerging risks of AI technologies become ever more apparent, regulators and policymakers around the world are paying close attention, with many seeking to introduce AI-specific legislation.
The European Union has for many years led the way on digital regulation in fields such as privacy and online harms, and is now looking to do the same with AI through the AI Act, a ground-breaking piece of legislation that seeks to establish the world’s first comprehensive cross-sector framework for regulating AI. Other jurisdictions, including the U.S., the UK, and China, are considering following the EU’s lead or developing their own approaches.
One of the main challenges for organizations that develop or use AI will therefore be to build a consistent and sustainable global AI governance framework that adequately manages AI risks while satisfying diverging regulatory standards.
Standards for AI governance
A focus on Europe
The AI Act, which entered into force on 1 August 2024, sets out a layered, risk-based approach that aims to achieve a safe and innovative AI landscape. Its impact on an organization hinges on two main factors: the nature and purpose of its AI systems, and its role within the AI supply chain. Rather than regulating all AI systems, the AI Act concentrates its strictest obligations on “high-risk” AI systems, a category the EU defines in the Act and can update at any time. Companies involved in developing, deploying, and distributing these high-risk AI systems must meet strict obligations. The AI Act also establishes a distinct framework for providers of General Purpose AI (GPAI), the versatile models that can be configured and deployed for a wide variety of purposes. Finally, transparency and AI literacy requirements apply broadly to many AI providers and deployers, regardless of their systems’ risk classification.
The UK has taken a very different approach to regulating AI, focusing on a set of cross-cutting principles to be supplemented by guidance from existing sector and domain regulators. The proposal seeks to strike a balance between the primary policy objective of creating a ‘pro-innovation’ environment for business and the development of trustworthy AI that addresses the most significant risks to individuals and society.
A focus on China
Rather than enacting a single, unified AI law, China has taken a bespoke approach, creating rules for specific types of algorithmic applications and AI services, e.g., recommendation algorithms, deep synthesis technology, and generative AI. On top of a global AI governance compliance framework, market players in China should also consider China-specific challenges. One important issue is content moderation: companies must filter illegal and inappropriate content so that AI output upholds “core socialist values” and does not endanger national security. Another is China’s requirements for international data transfers, which may limit the cross-border use of AI systems globally, although China’s regulators relaxed these requirements in March 2024 and introduced exemptions from the transfer review process.