Evolving legal and regulatory standards for AI security
The future of AI regulation and the cyber threat landscape are evolving in tandem. Adding to this complexity, AI, like many new technologies, is dual-use: it can help bad actors mount larger, more frequent, and more effective cyberattacks, while also serving organizations as a tool for enhanced threat detection and risk management. AI technologies also introduce new attack surfaces of their own. Bad actors may corrupt a model's training data (data poisoning) or manipulate its outputs at inference time through maliciously crafted, adversarial inputs.
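To illustrate the first of these threats, the minimal sketch below shows a label-flipping form of data poisoning: an attacker with write access to a training pipeline silently flips a fraction of the training labels, degrading the accuracy of the resulting model. The dataset, model, and 30% flip rate are illustrative assumptions chosen for brevity, not drawn from any real incident.

```python
# Minimal sketch of a label-flipping data-poisoning attack.
# Illustrative only: dataset, model, and flip rate are assumptions.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Clean baseline: train on unmodified labels.
clean_model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
print(f"clean accuracy:    {clean_model.score(X_test, y_test):.3f}")

# Poisoned run: an attacker who can tamper with the training pipeline
# flips 30% of the training labels to random incorrect classes.
rng = np.random.default_rng(0)
y_poisoned = y_train.copy()
idx = rng.choice(len(y_poisoned), size=int(0.3 * len(y_poisoned)), replace=False)
y_poisoned[idx] = (y_poisoned[idx] + rng.integers(1, 10, size=len(idx))) % 10

poisoned_model = LogisticRegression(max_iter=5000).fit(X_train, y_poisoned)
print(f"poisoned accuracy: {poisoned_model.score(X_test, y_test):.3f}")
```

Even a crude attack like this typically produces a visible drop in test accuracy; subtler, targeted poisoning can be far harder to detect because aggregate accuracy barely moves.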
In response to these rapid developments, regulators across the globe have released frameworks, announced requirements, and proposed new rules calling for stricter security practices and controls for AI. In the U.S., the Biden Administration has moved to address consumer concerns about the reliability and security of AI services through the White House's Blueprint for an AI Bill of Rights and NIST's AI Risk Management Framework. The UK and EU continue to lead with ambitious proposals, including the UK National AI Strategy, the EU AI Act, and the EU Cyber Resilience Act, the last of which will introduce specific security obligations for products with digital elements. The Cyberspace Administration of China now requires security assessments of generative AI services before they are introduced to the Chinese market.