
United States of America


Announced January 2023

Guidance

United States

The NIST AI Risk Management Framework (AI RMF) is designed to better manage risks associated with AI in the U.S. The framework is intended for voluntary use and to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.

Passed April 2023

Regulation

New York City

The law prohibits employers and employment agencies from using “automated employment decision tools” in New York City to screen candidates for employment or assess employees for promotions unless such tools have been subject to independent bias audits, the results of which must be summarized and posted publicly on the employers’ or employment agencies’ websites. The law also requires employers and employment agencies to provide candidates and employees with disclosures regarding the use of automated employment decision tools.

The New York City Bias Audit Law (Local Law 144) was enacted by the NYC Council in November 2021. Originally due to come into effect on January 1, 2023, the enforcement date for Local Law 144 was pushed back to April 15, 2023 due to the high volume of comments received during the public hearing on the Department of Consumer and Worker Protection's (DCWP) proposed rules clarifying the requirements of the legislation. From April 15, 2023 onward, companies are prohibited from using automated tools to hire candidates or promote employees unless the tools have been independently audited for bias.

Announced March 2022

Regulation

United States

The law requires companies to risk-assess the AI systems they use and sell, creates new transparency obligations about when and how such systems can be used, and empowers consumers to make informed choices about the automation of critical decisions.

Rescinded January 2025

Principles

United States

The Executive Order aims to harness the benefits of AI while addressing the associated risks, placing an emphasis on establishing best practices and standards. The EO sets out a list of 8 guiding principles and priorities that executive departments and agencies should adhere to. Although its focus is on U.S. agencies, the EO has implications for AI developers more generally, as the U.S. Government will publish guidance on certain AI practices and will be able to request information from developers about their AI models.

The U.S. Government published the EO in October 2023; it was rescinded in January 2025.

Announced October 2022

Guidance

United States

The Blueprint for an AI Bill of Rights identifies 5 principles that should guide the design, use, and deployment of AI: safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration, and fallback.

In October 2022, the U.S. Government released the Blueprint for an AI Bill of Rights. It is non-binding, but the White House also announced that federal agencies will roll out related actions and guidance regarding their use of AI systems, including new policies regarding procurement.

Adopted May 2024

Principles

OECD

The OECD drafted the Principles on Artificial Intelligence, which the OECD's 36 member countries and partner countries (including Argentina, Brazil, Colombia, Costa Rica, Peru, and Romania) adopted in May 2019. In May 2024, the principles were updated to include references to misinformation and disinformation, the rule of law, and bias.

Published 30 October 2023

Principles

G7

The leaders of the G7 countries issued International Guiding Principles on AI and a voluntary Code of Conduct for AI developers under the Hiroshima AI process. They have outlined 11 guiding principles that provide developers, deployers, and users of AI with a blueprint for promoting safety and trustworthiness in their technology.

Passed May 2024

Regulation

Colorado

On May 17, 2024, Colorado Governor Jared Polis signed the Colorado Artificial Intelligence (AI) Act (CAIA), the first broadly scoped U.S. AI law. Similar to the EU AI Act, the CAIA takes a risk-based approach and focuses on high-risk AI systems. It requires developers and deployers of such systems to use reasonable care to avoid algorithmic discrimination in high-risk AI systems. Developers and deployers must disclose specified information to stakeholders. Deployers must also conduct impact assessments, implement risk management plans, and provide consumers with a mechanism to appeal adverse decisions. The Colorado Attorney General has exclusive authority to enforce and adopt rules implementing the CAIA. The CAIA takes effect on February 1, 2026.

Announced January 2025

Principles

United States

On January 20, President Trump issued an “initial rescission” order that rescinded President Biden’s October 30, 2023 AI executive order (EO 14110). Most of the Biden executive order had already been implemented, with a few exceptions (e.g., Commerce’s ongoing rulemaking to implement the AI EO’s dual-use foundation model reporting requirement). The initial rescission order signaled that there would be further unwinding of the actions taken under the Biden order; on January 23, 2025, more details emerged about how the rescission would be implemented.

In the January 23, 2025 Executive Order on Removing Barriers to American Leadership in Artificial Intelligence, President Trump:

  • Revoked the Biden AI Executive Order;  
  • Called for departments and agencies to revise or rescind all policies, directives, regulations, orders, and other actions taken under the Biden AI order that are inconsistent with enhancing America’s leadership in AI;
  • Directed the development of an AI Action Plan to sustain and enhance America’s AI dominance, led by the Assistant to the President for Science & Technology, the White House AI & Crypto Czar, and the National Security Advisor; and
  • Directed the White House to revise and reissue OMB AI memoranda to departments and agencies on the Federal Government’s acquisition and governance of AI to ensure that harmful barriers to America’s AI leadership are eliminated.

A Fact Sheet accompanying the Executive Order is also available. We expect the Trump Administration will continue to be active on AI and technology issues.

Passed June 2025

Regulation

Maine

On June 12, 2025, Maine Governor Janet Mills signed into law “An Act to Ensure Transparency in Consumer Transactions Involving Artificial Intelligence,” which will impose transparency requirements on the use of artificial intelligence (AI) chatbots in trade or commerce. The Act covers anyone who uses AI chatbots in trade or commerce, and defines AI chatbots as software applications, web interfaces, or computer programs that simulate human-like conversation and interaction through textual or aural communications. The Act prohibits using an AI chatbot or any other computer technology to engage in trade or commerce with a consumer in a manner that may mislead or deceive a reasonable consumer into believing they are engaging with a human being, unless the consumer is notified in a clear and conspicuous manner that they are not engaging with a human being.

Passed by state legislature June 2025

Regulation

New York

On June 12, 2025, the New York State legislature approved the “Responsible AI Safety and Education (RAISE) Act” which, if signed by Governor Kathy Hochul, would impose transparency requirements on large developers of frontier AI models. The Act applies to large developers of frontier models, meaning persons who have trained at least one frontier model and spent over $100 million in compute costs training frontier models. The Act defines frontier models as AI models trained using more than 10^26 computational operations where the compute cost exceeds $100 million, or AI models produced by applying knowledge distillation to a frontier model where the compute cost exceeds $5 million. The Act would impose several “transparency requirements” on frontier model training and use. If enacted, the Act will take effect 90 days after becoming law and will be enforced by the Attorney General, who may bring actions to recover civil penalties of up to $10 million for a first violation and $30 million for subsequent violations, as well as injunctive or declaratory relief.