Developing a global approach to AI governance
As the existing and future risks of AI technologies become ever more apparent, regulators and policymakers around the world are paying close attention, with many seeking to introduce AI-specific legislation.
The European Union has for many years led the way with digital regulations in fields such as privacy and online harms and is now looking to do the same with artificial intelligence through the AI Act. This is a ground-breaking piece of legislation, which seeks to establish the world’s first comprehensive cross-sector framework for regulating artificial intelligence. Other jurisdictions are considering following the EU’s lead or developing their own approach, including the U.S., UK, and China.
One of the main challenges for organizations developing or using AI will therefore be to establish a consistent and sustainable global AI governance framework that adequately manages AI risks while satisfying diverging regulatory standards.
Standards for AI governance
A focus on Europe
The EU’s AI Act is arguably the most ambitious current proposal to regulate AI and demands a high standard of compliance. Previous versions of the AI Act from the European Commission and Council focused predominantly on introducing obligations for ‘high-risk’ uses of AI, including requirements for comprehensive governance and risk management controls. However, the most recent amendments proposed by the European Parliament would significantly widen the scope of the AI Act by introducing specific rules for generative AI and a set of general principles governing the development and use of all AI systems, irrespective of the risks they may pose.
The UK has taken a very different approach to regulating AI, focusing on a set of basic principles to be supplemented by guidance from existing sector- and domain-specific regulators. The proposal seeks to strike a balance between the primary policy objective of creating a ‘pro-innovation’ environment for business and the development of trustworthy AI that addresses the most significant risks to individuals and society.
A focus on China
Lacking a unified AI law, China has taken a bespoke approach, creating rules for specific types of algorithmic applications and AI services, e.g., recommendation algorithms, deep synthesis technology, and generative AI. On top of a global AI governance compliance framework, market players in China should also consider China-specific challenges. One important issue is content moderation: companies must filter illegal and inappropriate content so that their services adhere to “socialist core values” and do not endanger national security. Another consideration is the requirements concerning international data transfers under Chinese law, which may limit the cross-border use of AI systems globally.