In February, the Financial Conduct Authority (FCA) and the Alan Turing Institute announced plans for a year-long collaboration on artificial intelligence (AI) and machine learning transparency in financial services.
Acknowledging the potential for AI to transform the financial services sector, the FCA recognises that the technology can raise important ethical and regulatory questions, particularly where it affects consumers. Alongside the announcement, the FCA and the Alan Turing Institute set out a high-level framework for considering transparency in financial services, built around four guiding questions:
- Why is transparency important?
- What types of information are relevant?
- Who should have access to these types of information?
- When does it matter?
Given that the opportunities and risks associated with the use of AI models depend on context and use case, the FCA notes that AI transparency cannot be reduced to the single question “Which types of information should be made accessible?”. Instead, a more nuanced approach is required, one that answers the alternative question: “For a certain type of AI use case, who should be given access to what types of information and for what reason?”
The FCA suggests that, in answering the above question, decision-makers may find it helpful to develop a ‘transparency matrix’ that, for particular use cases, maps types of information against types of stakeholders, which can then be used to structure a systematic assessment of transparency interests.
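The FCA does not prescribe a format for such a matrix. Purely by way of illustration, the sketch below shows one way a firm might record it for a single, hypothetical use case; the information types, stakeholder groups and use case named here are assumptions chosen for the example, not categories set out by the FCA or the Alan Turing Institute.

```python
# Illustrative sketch only: a 'transparency matrix' for one hypothetical
# AI use case (a retail credit-scoring model). Rows are types of
# information; columns are stakeholder groups. Each cell records why
# that group may have a transparency interest.
from typing import Dict

USE_CASE = "Retail credit scoring"  # hypothetical use case

transparency_matrix: Dict[str, Dict[str, str]] = {
    "Purpose of the model": {
        "Consumers": "Know that an automated decision affects them",
        "Regulators": "Assess conduct and fair-treatment implications",
        "Internal audit": "Confirm use is consistent with firm policy",
    },
    "Input data used": {
        "Consumers": "Check accuracy of personal data relied upon",
        "Regulators": "Identify potential bias or exclusion",
        "Internal audit": "Verify data governance controls",
    },
    "Model performance and limitations": {
        "Consumers": "Understand reliability of decisions about them",
        "Regulators": "Evaluate the risk of consumer harm",
        "Internal audit": "Monitor ongoing model risk",
    },
}

# Print the matrix as a simple table to structure a transparency assessment.
stakeholders = ["Consumers", "Regulators", "Internal audit"]
print(f"Use case: {USE_CASE}\n")
for info_type, interests in transparency_matrix.items():
    print(info_type)
    for group in stakeholders:
        print(f"  {group}: {interests[group]}")
```

In practice the cells might instead record the level of access (full, summary, none) or the legal basis for disclosure; the point of the structure is simply to make the mapping of information types to stakeholder interests explicit for each use case.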
Thoughts
The FCA and Bank of England survey published in October 2019 concludes that the financial services sector is seeing rapidly growing interest in AI and that AI will play an important part in how financial services are designed and delivered in the future. Regulators are therefore taking a heightened interest as they weigh the potential benefits and harms of its adoption: for instance, in February 2020 the Information Commissioner’s Office launched its own consultation (closing 1 April 2020) on the use of AI, with draft proposals on how to audit risk, governance and accountability in AI applications, and in January 2020 the European Banking Authority published its report on big data and advanced analytics, which we summarised here.
As regulators work to keep pace with the rapid adoption of AI, this will remain an area of continuing development in the financial services sector and beyond.