
09 January 2025

The month in 5 bytes

  • Welcome to 2025
  • The AI Report from the US House of Representatives
  • The United Nations Convention against Cybercrime
  • The South Korean AI Basic Act
  • The Final Report from the European Securities and Markets Authority on Crypto-Assets

Welcome to 2025

This week’s Consumer Electronics Show in Las Vegas, CES 2025, has just confirmed what technology predictions for the year ahead had told us all along: 2025 will be the year in which we move from AI conversations as we know them to AI agents that work autonomously at our side – artificial assistants, robots and other autonomous systems in our offices, our factories and on our streets. The questions that this incoming wave of artificial companions will trigger for law & ethics abound: How do we cope with this new form of “outsourcing”? Whom do we blame if anything goes wrong? And how do we ensure alignment with human values?

In sync with agentic AI, and the digital twins and replicas it will produce, questions of data protection as well as data commercialisation are likely to take centre stage. Elsewhere in the field of digital transformation, 2025 ushers in a re-invigoration of crypto assets of all types - with all their intricate legal implications, ranging from their issuance and trading to their custody and other support services to capital markets.

The AI Report from the US House of Representatives

A comprehensive AI report from the bipartisan House Task Force was finally released on December 17th last year. Over the past year, the AI Task Force held multiple hearings and roundtables with over one hundred experts, including business leaders, government officials, technical experts, legal scholars, and domain specialists to probe a range of critical issues at the heart of how AI intersects with key policy areas, spanning the fields of data privacy, national security, research and development, civil rights, education and the workforce, intellectual property, content authenticity, energy use and data centres, agriculture, healthcare, financial services, and more.

The fruits of several months of painstaking analysis were distilled into this 200-plus-page report, which sets out 66 core findings and 89 recommendations, laying the groundwork for future initiatives that the US Congress can take to address the most critical issues arising from the advancement of AI, with a long-term vision for the evolution of the technology in society. In addition to the detailed study of specific AI issues, the AI Task Force adopted several high-level principles to frame future AI policies and smooth the path to new congressional efforts to more robustly regulate the technology. These practical principles include recognising the novelty of emerging AI issues to avoid overlapping regulatory frameworks, promoting AI innovation, enhancing protection against AI risks and harms, empowering governments with AI technology, making effective use of sectoral regulatory structures, and taking an incremental and human-centred approach to AI policy.

This report is more than a mere compilation of insightful analysis gleaned from industry, academia, policymakers, and society at large. As a roadmap to steer the US Congress through the challenging task of legislating AI, it heralds the outline of what the AI industry can expect to emerge from the halls of Congress in the near future.

The United Nations Convention against Cybercrime

On December 9th 2024, the United Nations General Assembly adopted a landmark Convention against cybercrime in New York. Its core aim is not only to provide states with more effective means to prevent and combat cybercrime, but also to step up international cooperation in sharing electronic evidence of serious crimes. The key challenge that the Convention seeks to address is that while cybercriminals continue to wreak havoc on a global scale, leaving a trail of digital breadcrumbs across multiple countries, law enforcement remains strictly bound by jurisdictional lines. To counter this challenge effectively, the chapter on international cooperation establishes a global framework that enables signatories to provide mutual assistance in cross-border investigations, prosecutions, asset recovery, and judicial proceedings. The chapter also establishes international procedures for the preservation and acquisition of electronic evidence, access to data, and the interception of traffic data, as well as a global mechanism for the exchange of electronic evidence for serious crimes, including those covered by other UN Conventions and Protocols, such as the UN Convention against Transnational Organised Crime and its Protocols, or the UN Convention against Corruption.

The adoption of this Convention is no small step in the fight against transnational cybercrime. The forthcoming signing ceremony in Hanoi, Vietnam, marks the beginning of a critical chapter in the cross-border prosecution of transnational cybercrime, with potential ripple effects across multiple jurisdictions.

The South Korean AI Basic Act

At the end of 2024, South Korea’s National Assembly passed the Basic Act on the Development of Artificial Intelligence and the Establishment of Trust, or simply the AI Basic Act. The new law makes South Korea the second jurisdiction to establish a comprehensive regulatory framework for AI, following in the footsteps of the EU legislation. Similar to the European AI Act, the South Korean AI Basic Act seeks to strike a delicate balance between technological momentum and regulatory foresight. Fundamentally, the AI Basic Act follows a tiered approach based on varying levels of potential risk to life, human rights, and safety, and imposes tighter requirements on high-impact AI applications. These requirements can include risk management plans, impact assessments, and user protection strategies.

The AI Basic Act marks a defining milestone for South Korea in the global landscape of AI governance, positioning the country as the first market in Asia to regulate AI in a single piece of legislation. This is likely to prove a critical testing ground for neighbouring countries looking to regulate AI. Businesses entering the market for AI systems in South Korea will have a full year to prepare, as the AI Basic Act is set to take effect in January 2026.

The Final Report from the European Securities and Markets Authority on Crypto-Assets

On December 17th, the European Securities and Markets Authority (ESMA) published the Final Report containing guidelines on the conditions and criteria for the qualification of crypto-assets as financial instruments. The report is significant, as there is no fully harmonised understanding of the definition of “financial instrument” under the Markets in Financial Instruments Directive, or MiFID, in the EU. And while the legal uncertainty surrounding this topic has been flagged since the implementation of MiFID and MiFID II, further practical implications may emerge with the Markets in Crypto-Assets Regulation, or MiCA, regarding the classification of certain crypto-assets as financial instruments.

The report offers guidance on the classification of crypto-assets as transferable securities, money-market instruments, units in collective investment undertakings, derivative contracts, emission allowances, and hybrid tokens. Throughout the process, ESMA has aimed to follow two core principles: technological neutrality and substance over form. In the ebb and flow of an ever-shifting tokenised economy, two guiding principles and a set of instructive examples are a welcome starting point for navigating the complexities of digital markets in the year ahead.

Authored by Leo von Gerlach & Julio Carvalho