Focusing on the AI regulation, assurance and financial crime compliance aspects of the report, some key takeaways include:
- Established use cases: AI is having a significant impact on KYC, AML monitoring and trade surveillance, helping firms automate routine tasks while improving the speed and accuracy of financial crime detection.
- Risk management: The probabilistic nature of LLMs presents a challenge for certain applications: unlike traditional deterministic systems, their outputs are not always accurate or repeatable. However, “firms are mindful that failing to adopt AI is itself a huge risk to future viability”.
- Regulation: The Group has not flagged regulation as a serious barrier to innovation at this stage; however, the prospect of potential future regulatory interventions “may be fuelling a degree of caution among firms”. There is a need for greater regulatory clarity and consistency so that developers and users of AI can plan and invest with confidence.
- AI risks and governance: The industry should continue to work together to develop its collective understanding of AI risks and identify best practices in risk management, governance and ethics, while taking into account regulatory expectations.
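The deterministic-versus-probabilistic distinction raised above can be sketched in a few lines of Python. This is an illustrative toy only: the function names, thresholds and scores are hypothetical and do not come from the report or any real monitoring system.

```python
# Illustrative sketch: a deterministic rules-based check versus a
# probabilistic, model-based check in transaction monitoring.
# All names and thresholds here are hypothetical.

def deterministic_flag(amount: float, limit: float = 10_000.0) -> bool:
    """A classic rules-based check: same input, same answer, every time,
    and the flag traces back to a single explainable condition."""
    return amount >= limit

def probabilistic_flag(risk_score: float, threshold: float = 0.8) -> bool:
    """A model-based check: the score comes from a statistical model, so it
    carries uncertainty and can shift across model versions or retraining."""
    return risk_score >= threshold

# The rules engine is fully predictable and auditable.
assert deterministic_flag(12_500.0) is True
assert deterministic_flag(9_999.0) is False

# Scores near the threshold are the hard case: small changes to the model
# or its inputs can move a transaction across the line, which is why firms
# pair such systems with human review and governance controls.
print(probabilistic_flag(0.79))  # just below threshold -> False
print(probabilistic_flag(0.83))  # above threshold -> True
```

The contrast is the point: a rule's output is fixed and explainable, while a model's score is an estimate, so governance, explainability tooling and human oversight carry more of the assurance burden.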
The recognition of AI's transformative capabilities is balanced by a focus on maintaining compliance and upholding responsible, ethical standards. Transparent and explainable AI helps mitigate the risks associated with black-box models, building understanding of and trust in AI decisions. Firms should also establish a clear governance framework, policies, a defined risk appetite and risk assessments for AI development and adoption.