Financial criminals are leveraging AI to produce deepfake videos, cloned voices and forged documents that can evade both automated and human detection.
According to Deloitte, generative AI is expected to push US fraud losses to US$40 billion by 2027, a compound annual growth rate of 32%.
Banks are deploying AI in anti-financial crime (AFC) efforts to monitor transactions, generate suspicious activity reports and automate fraud detection.
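As a rough illustration of the transaction-monitoring piece, the sketch below scores transactions with an unsupervised anomaly detector and routes outliers to a human analyst. The features, synthetic data and thresholds are illustrative assumptions, not any bank's actual pipeline.

```python
# Minimal sketch of AI-assisted transaction monitoring with an unsupervised
# anomaly detector. Feature choices and thresholds are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic transaction features: amount (USD), hour of day, destination-country risk score.
normal = np.column_stack([
    rng.lognormal(mean=4.0, sigma=1.0, size=1000),   # typical amounts
    rng.integers(8, 20, size=1000),                  # business hours
    rng.uniform(0.0, 0.3, size=1000),                # low-risk destinations
])
suspicious = np.array([
    [50_000.0, 3, 0.9],   # large amount, 3 a.m., high-risk destination
    [75_000.0, 2, 0.8],
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Scores below 0 are treated as anomalies and escalated for human review.
for txn, score in zip(suspicious, model.decision_function(suspicious)):
    flag = "ESCALATE TO ANALYST" if score < 0 else "clear"
    print(f"txn={txn.tolist()} score={score:.3f} -> {flag}")
```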
AI-driven systems introduce a "black box" problem: their decision-making processes are opaque, which makes outcomes difficult to explain and audit.
Banks need careful planning, thorough testing, specialized compliance frameworks and human oversight to ensure AI accountability.
Human judgment remains essential in AFC investigations, and AI systems require explainable AI (XAI) tools to make AI-driven decisions understandable to regulators.
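As a hedged illustration of what "explainable" can mean in practice, the sketch below breaks a single fraud score down into per-feature contributions that an analyst or regulator can read. The features, data and model are assumptions for the example; dedicated XAI libraries such as SHAP would normally replace the hand-rolled breakdown.

```python
# Minimal sketch of making a fraud score explainable to a reviewer or regulator.
# Features, data and the decision threshold are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

FEATURES = ["amount_usd", "txns_last_24h", "country_risk"]

rng = np.random.default_rng(7)
X = rng.normal(size=(500, 3))
# Synthetic label: fraud is likelier with large amounts and high country risk.
y = (0.8 * X[:, 0] + 1.2 * X[:, 2] + rng.normal(scale=0.5, size=500) > 1.0).astype(int)

pipe = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)
scaler = pipe.named_steps["standardscaler"]
clf = pipe.named_steps["logisticregression"]

case = np.array([[2.5, 0.1, 1.8]])          # one flagged transaction
prob = pipe.predict_proba(case)[0, 1]
# Per-feature contribution to the log-odds: coefficient * standardized value.
contribs = clf.coef_[0] * scaler.transform(case)[0]

print(f"fraud probability: {prob:.2f}")
for name, c in sorted(zip(FEATURES, contribs), key=lambda t: -abs(t[1])):
    print(f"  {name:>15}: {c:+.2f} log-odds")
```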
Financial institutions can combine a rules-based approach with AI tools in a multi-layered system that leverages the strengths of both.
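One plausible shape for such a multi-layered system, sketched below under assumed rules and thresholds, is to run deterministic, auditable rules first and let a model score triage whatever the rules do not decide.

```python
# Minimal sketch of a layered rules + model pipeline. Rules and thresholds
# are illustrative assumptions, and the model layer is a stand-in function.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Txn:
    amount_usd: float
    country_risk: float       # 0 = low risk, 1 = high risk
    customer_tenure_days: int

def rule_layer(txn: Txn) -> Optional[str]:
    """Hard rules: explicit, explainable and easy to audit."""
    if txn.amount_usd > 100_000:
        return "BLOCK: amount over reporting threshold"
    if txn.country_risk > 0.8 and txn.customer_tenure_days < 30:
        return "HOLD: high-risk destination for new customer"
    return None  # no rule fired; defer to the model layer

def model_layer(txn: Txn) -> float:
    """Stand-in for a trained ML risk score in [0, 1]."""
    return min(1.0, 0.3 * txn.country_risk + txn.amount_usd / 200_000)

def triage(txn: Txn) -> str:
    decision = rule_layer(txn)
    if decision:
        return decision
    score = model_layer(txn)
    return f"REVIEW: model score {score:.2f}" if score > 0.6 else "ALLOW"

print(triage(Txn(150_000, 0.2, 900)))   # caught by the rule layer
print(triage(Txn(90_000, 0.9, 400)))    # escalated by the model layer
print(triage(Txn(500, 0.1, 1200)))      # allowed
```

The design choice here is that the rule layer stays fully explainable for regulators, while the model layer adds coverage for patterns the rules miss.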
Risk and compliance experts must be trained in AI so they can develop AI-specific compliance frameworks.
High-quality, secure data infrastructure is essential for AI implementation.
For banks, AI is a double-edged sword: a cautious, hybrid approach is needed to minimize its risks while capturing its efficiency gains.