05 December 2024

The month in 5 bytes: December

  • US Treasury Department: Unveiling the “Final Rule” to shield national security
  • China’s AI strategy: The formation of a new AI committee
  • The first draft of the General-Purpose AI Code of Practice in the EU
  • Global cyber rules take shape: A declaration on international law in cyberspace
  • The AI Office: A consultation on future guidance for the AI Act
US Treasury Department: Unveiling the “Final Rule” to shield national security

In response to the growing threats posed by disruptive technologies and products considered critical to national security, the US Treasury Department issued the so-called “Final Rule” on November 15th, bringing to a halt US investments in Chinese companies involved in technological developments that could imperil the national security of the US. The Final Rule had been mandated by an Executive Order issued on August 9th last year, known as the “Outbound Order”, which declared a national emergency to address national security in sensitive areas of technological development.

The Outbound Order had already identified three key sectors of “sensitive technologies and products” to be covered by the forthcoming regulation: semiconductors and microelectronics, quantum information technologies, and AI systems. Within these areas, the Final Rule requires US persons to notify the Treasury Department if they – or their controlled foreign entity – engage in what the regulation considers a “notifiable transaction”, i.e. one that may threaten US national security. Additionally, the Final Rule prohibits transactions that may pose “a particularly acute national security threat” owing to their potential use in the military, intelligence, surveillance, or cyber capabilities of a country of concern. These rules also apply to US limited-partner investments in a non-US pooled investment fund, except where the US person has obtained a binding contractual assurance that the investment will not be used to engage in either a notifiable or a prohibited transaction. So far, the Final Rule has designated only China, along with the special administrative regions of Hong Kong and Macau, as a country of concern, but the list can be updated at any time.

China’s AI strategy: The formation of a new AI committee

Barely a week after the US Treasury Department announced the Final Rule, the Cyberspace Administration of China established an AI Special Committee at the World Internet Conference on November 20th. The committee consists of over 170 members, including international organisations, think tanks, research institutes, professional associations, and Western companies in the AI sector. The announcement of the committee foregrounded its commitment to the core principles of international cooperation and the global sharing of AI development results, just as the US has begun to rein in AI investments abroad.

The first draft of the General-Purpose AI Code of Practice in the EU

On November 14th the European AI Office published the first draft of the General-Purpose AI Code of Practice. The draft marks the first milestone of four rounds of drafting planned until April 2025 and reflects the collaborative efforts of specialised working groups gathering industry, academia, and civil society around a daunting task: to deliver a “future-proof” Code that holds up to the next generation of AI models. Each of these groups has focused on one of four key areas of the regulation: transparency & copyright-related rules, risk identification & assessment for systemic risk, technical risk mitigation for systemic risk, and governance risk mitigation for systemic risk. They all built on nearly 430 stakeholder submissions and the broad international consensus reflected in the G7 Code of Conduct, the Frontier AI Safety Commitments, and the Bletchley Declaration.

This first draft was then circulated to a balanced group of stakeholders, who had a full fortnight to provide written feedback to the Plenary. In light of the feedback received, the Chairs will now be able to refine the first draft into a more granular Code, to be published by May 1st next year. The final version will be an indispensable compliance tool for providers of general-purpose AI models seeking to safely navigate the AI Act.

Global cyber rules take shape: A declaration on international law in cyberspace

A common understanding on the application of international law to cyberspace was adopted by the EU Council on November 18th. The Declaration is a firm statement that cyberspace is far from a lawless domain, and it marks the first time that EU member states have formally adopted a declaration on the matter. It recognises that malicious behaviour and cyberattacks, including ransomware, are rapidly escalating in both sophistication and reach. The document also reaffirms support for the Cyber Programme of Action initiative, which will further enhance and solidify the United Nations framework of responsible state behaviour in cyberspace.

The core premise of the declaration is that establishing a common understanding of fundamental principles of international law will contribute to the emergence of global multi-stakeholder mechanisms for legal cooperation in cyberspace. These principles include respect for the sovereignty of states, non-intervention, due diligence – which obliges states not to knowingly allow their territory to be used for acts contrary to the rights of other states –, the prohibition of the use of force against the territorial integrity or political independence of any other state, and compliance with the rules of international humanitarian law and human rights law.

The AI Office: A consultation on future guidance for the AI Act

On November 13th the European AI Office launched a consultation inviting stakeholders – especially AI system providers, businesses, national authorities, academia, research institutions, and civil society – to contribute to the preparation of future guidance on the AI Act. Contributions received will feed into the Commission's guidance on the definition of AI systems and on prohibited AI practices under the AI Act, to be published in early 2025. The consultation seeks additional practical examples from stakeholders relating to the definition of AI systems and to prohibited AI practices, which can inform concrete use cases in the guidance. Interested stakeholders have until December 11th to submit their contributions and help the industry gear up for the application of the AI Act's provisions on prohibited practices on February 2nd next year.

Authored by Leo von Gerlach & Julio Carvalho