AI & Machine Learning in Financial Crime Compliance for Financial Services
Introduction
AI and machine learning are rapidly reshaping financial crime compliance — from transaction monitoring and sanctions screening to surveillance and fraud detection. While these technologies promise major improvements in accuracy and efficiency, they also introduce new compliance, conduct, and governance risks.
This virtual classroom session draws upon the Bank of England and FCA’s 2024 joint survey, along with the Alan Turing Institute’s ethical AI framework, to explore how AI is being used in financial services practice and what businesses must do to meet regulatory expectations.
It will examine the materiality of use cases, explainability challenges, and the growing reliance on third-party AI tools — with a focus on risk-based implementation and effective governance.
Join us for this session to understand the opportunities and risks of AI in financial crime compliance and how your business can stay ahead in this evolving landscape.
What You Will Learn
This live and interactive course will cover the following:
- Understand core AI/ML methods used in financial crime detection: supervised vs unsupervised learning; anomaly detection and risk scoring; and LLMs in compliance tooling
- Explore real-world use cases in AML/CFT and fraud prevention: transaction monitoring and alert optimisation; sanctions screening and client onboarding; and behavioural surveillance and suspicious activity detection
- Analyse key findings from the BoE/FCA 2024 AI survey: 75% of firms use AI; 33% rely on third-party tools; 55% involve automated decision-making; and only 34% of firms fully understand the AI they use
- Address governance and oversight challenges: fragmented accountability and over-reliance on third-party models; model complexity; and explainability techniques and their limitations
- Balance benefits and risks: benefits seen in AML, fraud detection, and cybersecurity; systemic risks from third-party dependencies and common models; and constraints around data protection, talent shortages, and the limits of explainability
- Map regulatory and ethical expectations: SMCR accountability and AI-specific controls; alignment with UK GDPR, Consumer Duty, and BoE/FCA feedback statements (DP5/22, FS2/23); and applying the Turing Institute’s principles of fairness, safety, accountability and transparency
- Take away practical tools to: evaluate AI/ML use cases; challenge vendors and internal developers; and design proportionate oversight frameworks
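To make the anomaly-detection and risk-scoring concepts above concrete, here is a minimal, purely illustrative sketch in Python. It flags transactions whose amount deviates sharply from a customer's history using a z-score — a deliberately simple stand-in for the unsupervised techniques used in real transaction-monitoring systems; the function name, threshold, and data are hypothetical.

```python
import statistics

def score_transactions(amounts, threshold=3.0):
    """Flag transactions whose amount is an outlier relative to the
    customer's history, using a simple z-score. Illustrative only —
    production systems use far richer features and models."""
    mean = statistics.mean(amounts)
    stdev = statistics.pstdev(amounts)  # population standard deviation
    flagged = []
    for i, amount in enumerate(amounts):
        z = (amount - mean) / stdev if stdev else 0.0
        if abs(z) > threshold:
            flagged.append((i, amount, round(z, 2)))
    return flagged

# A customer's usual payments, plus one unusually large transfer.
history = [120, 95, 110, 130, 105, 98, 115, 5000]
alerts = score_transactions(history, threshold=2.0)
print(alerts)
```

Real monitoring tools would combine many such signals (velocity, counterparties, geography) and tune thresholds to manage the false-positive rates that the alert-optimisation discussion addresses.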
Recording of live sessions: shortly after the Learn Live session has taken place, you will be able to access the recording, should you wish to revisit the material discussed.