Operational resilience and outsourcing
Respondents suggested that third-party providers of AI solutions should supply evidence supporting the responsible development, independent validation, and ongoing governance of their AI products, giving firms sufficient information to make their own risk assessments. Respondents argued that third-party providers do not always provide enough information to enable effective governance of some of their products. Given the scope and ubiquity of third-party AI applications, respondents commented that the risks posed by third-party exposure could lead to an increase in systemic risk. Some respondents said that not all firms have the necessary expertise to conduct adequate due diligence on third-party AI applications and models.
Fraud and money laundering
Respondents suggested that, as the technology develops, bad actors may gain greater access to AI tools for use in fraud and money laundering. For example, respondents noted that generative AI can easily be exploited to create deepfakes as a way to commit fraud. The technology may make such fraud more sophisticated, greater in scale, and harder to detect. This may in turn create risks to consumers and, if sufficient in magnitude, to financial stability.
Some respondents noted that the adoption of generative AI (GenAI) may increase rapidly in financial services. Respondents noted that the risks associated with the use of GenAI are not fully understood, especially risks related to bias, accuracy, reliability, and explainability. Respondents also suggested that, due to ‘hallucinations’ in GenAI outputs, there may be risks to firms and consumers who rely on or trust GenAI as a source of financial advice or information.
3. Legal requirements or guidance relevant to AI
Respondents remarked that, while existing regulation is sufficient to cover risks associated with AI, there are areas where clarificatory guidance on the application of existing regulation is needed (such as the accountability of different parties in outsourcing arrangements) and areas of novel risk that may require further guidance in the future. Some respondents suggested that guidance on best practices for responsible AI development and deployment would help firms ensure that they are adopting AI in a safe and responsible manner. Because AI capabilities change rapidly, regulators could respond by designing and maintaining ‘live’ regulatory guidance, for example periodically updated guidance and examples of best practice. Specific areas of law and regulation that might be adapted to address AI are summarized below.
Operational resilience
A number of respondents stressed the relevance and importance to AI of the existing regulatory framework relating to operational resilience and outsourcing, including the PRA’s supervisory statements SS1/21 – Operational resilience: Impact tolerances for important business services and SS2/21 – Outsourcing and third party risk management, as well as the FCA’s PS21/3 – Building operational resilience. Respondents also noted the relevance of the Bank, the PRA and the FCA’s DP3/22 – Operational resilience: Critical third parties to the UK financial sector.
SMCR in an AI context
Most respondents did not think that creating a new Prescribed Responsibility (PR) for AI, to be allocated to a Senior Management Function (SMF), would help enhance effective governance of AI. Most respondents thought that further guidance on how to interpret the ‘reasonable steps’ element of the SM&CR in an AI context would be helpful, although only if that guidance were practical and actionable.
Regulatory alignment
Some respondents noted legal and regulatory developments in other jurisdictions (including the proposed EU AI Act), and argued that international regulatory harmonization would be beneficial, where possible, particularly for multinational firms. One respondent noted that the development of adequate and flexible cooperation mechanisms supporting information-sharing (or lessons learnt) across jurisdictions could also minimize barriers and facilitate beneficial innovation.
Data regulation
Respondents highlighted legal requirements and guidance relating to data protection. One respondent noted that the way the UK General Data Protection Regulation (UK GDPR) interacts with AI might mean that automated decision-making could potentially be prohibited. One response noted regulatory guidance indicating that the ‘right to erasure’ under the UK GDPR extends to personal data used to train AI models, which could prove challenging in practice given the limited extent to which developers are able to separate and remove training data from a trained AI model. Other respondents argued that, although it is generally recognized that data protection laws apply to the use of AI, there may be a lack of understanding among suppliers, developers, and users, with the result that those actors may game or ignore the rules. Most respondents argued that there are areas of data regulation that are not sufficient to identify, manage, monitor, and control the risks associated with AI models. Some pointed to insufficient regulation of data access, data protection, and data privacy (for example, to monitor bias). Some respondents thought that regulation relating to data quality, data management, and operations is insufficient.
Several respondents sought clarification on what bias and fairness could mean in the context of AI models; more specifically, they asked how firms should interpret the Equality Act 2010 and the FCA Consumer Duty in this context. Other respondents asked for more clarity on how data protection and privacy rights interact with AI techniques.
Open banking was suggested as a way of improving data access within financial services and thus facilitating AI innovation and competition. A lack of access to high-quality data may be a barrier to firms’ adoption of AI. Open banking may help create a more level playing field by providing firms with larger and more diverse datasets, thereby enabling more effective competition.
4. Cross-sectoral and cross-jurisdictional coordination on AI
Many respondents emphasized the importance of cross-sectoral and cross-jurisdictional coordination as AI is a cross-cutting technology extending across sectoral boundaries. As a consequence, respondents encouraged authorities to ensure coherence and consistency in regulatory approaches across sectoral regulators, such as aligning key principles, metrics, and interpretation of key concepts. Some respondents suggested that the supervisory authorities work with other regulators to reduce and/or prevent regulatory overlaps and clarify the role of sectoral regulations and legislation.
5. Next steps
As set out in the responses to DP5/22, since many regulated firms operate in multiple jurisdictions, an internationally coordinated and harmonized regulatory response on AI is critical to ensuring that UK regulation does not put UK firms and markets at a disadvantage. Minimizing fragmentation and operational complexity will therefore be key. The supervisory authorities should support collaboration between financial services firms, regulators, academia, and technology practitioners with the aim of promoting competition. Respondents also noted that encouraging firms to collaborate in the development and deployment of AI, for example by sharing knowledge and resources, could help reduce costs and improve the quality of AI systems for financial services. Ongoing industry engagement will clearly be important as the regulatory framework for AI continues to develop. We will be closely monitoring developments, so please do get in touch with our financial services regulatory and technology specialists listed below with any questions.
6. Looking ahead
AI Safety Summit
At the Global AI Safety Summit, hosted by the United Kingdom on 1-2 November 2023, the Bletchley Declaration was signed by 28 countries, including the US and China, as well as the EU. The Declaration is aimed at promoting global co-operation on AI safety, including through risk-based AI policies across signatory countries, while respecting that legal frameworks may differ. It is likely that we will continue to see further collaboration on AI safety policies. While the Summit revolved around the use of AI generally, it demonstrates the significance of, and continued global interest in, the technology, and will likely have implications for the use and governance of AI in the financial services sector.
A follow-up “mini” virtual Summit co-hosted by the Republic of Korea is planned for May 2024, with an in-person Summit hosted by France to follow later in the year. These summits are intended to promote further collaboration between countries on AI safety.
Strategic approach by UK regulators
In its response to the consultation on the UK AI White Paper, published on 6 February 2024, the UK government indicated that it has asked a number of regulators, including the FCA and the Bank of England, to publish an update outlining their strategic approach to AI by 30 April 2024. The plans published by the regulators will influence how the government may wish to address any gaps (and introduce any targeted binding measures if necessary).
The Artificial Intelligence (Regulation) Bill in the UK
The Artificial Intelligence (Regulation) Bill was introduced as a Private Members’ Bill by Lord Holmes of Richmond in the House of Lords on 22 November 2023. The primary purpose of the Bill is to establish a framework for the regulation of AI in the UK. This involves putting AI regulatory principles on a statutory footing and establishing a central AI Authority responsible for overseeing the regulatory approach to AI. The Bill is currently progressing through Parliament, with its second reading in the House of Lords scheduled for 22 March 2024. Further details can be found in our blog post.
The EU AI Act
On 13 March 2024, the European Parliament adopted the EU AI Act. In contrast to the UK’s principles-based, sector-focused approach to regulating AI, the EU AI Act will regulate AI horizontally across all sectors in the EU, including the financial services sector. The EU AI Act takes a risk-based approach, setting out four risk classes with different requirements attached to each, covering different AI use cases, and banning AI systems that create an unacceptable risk. The EU AI Act still needs to be formally adopted by the Council; it is expected to come into force by August 2024, and most of its obligations will become fully applicable within 24 months of its entry into force.