
5 Reasons Why Explainable AI Matters to FCC Teams

The financial crime and compliance (FCC) landscape is on the cusp of a paradigm shift as AI automation begins to make a significant impact on how financial crime is detected. Explore the five reasons why explainable machine decisions are critical for FCC teams.

The Financial Crime and Compliance (FCC) landscape is on the cusp of a paradigm shift as AI automation begins to make a significant impact on how financial crime is detected. We previously covered this shift here, including the capability of advanced machine learning to undertake complex cognitive decision-making in high-risk domains such as Anti-Money Laundering (AML). A key element of this transformative capability is the ability of a machine solution, such as Nasdaq Automated Investigator for AML, to explain itself fully in human-friendly language.

Indeed, when AI can explain its decisions in a human-readable format, the value it provides to financial services organisations reaches beyond simply knowing how a decision was made, especially when a complex decision augments an expert analyst’s judgement.

The question at the heart of any regulatory audit or quality review involving an AI system is ‘Did the machine do what it was supposed to do?’. Explainable machine decision outputs allow this question to be answered, and below we set out several benefits of this capability for FCC teams.

Five reasons why explainable machine decisions are critical for financial crime and compliance teams:

  1. Regulatory transparency – the importance of making the right decisions goes without saying, but the only way to ensure regulatory transparency and confidence is to provide a human-readable explanation of how and why those decisions are reached. A clear audit trail that captures factors such as the importance rating of each decision factor (illustrated in the sketch after this list) is invaluable, and something many human analyst teams struggle to document.
  2. Quality assurance – the QA function of every financial services firm wants to create confidence in the outputs of any process in place, especially when it involves complex, high-risk judgements. Consistent, easy-to-read explanations of machine decision outputs make it possible for auditors, stewards, analysts and QA teams to rapidly and confidently sanity-check those outputs rather than taking them at face value.
  3. Continuous improvement – imperfect decisions will always occur, regardless of how far the technology outperforms current standards. Human understanding of the process that led to an imperfect decision is essential for recognising and implementing effective optimisations and fixes. This symbiotic relationship between human and machine is critical to the scoping, training and management of AI technology.
  4. Analyst insight – machine-based decisions will never fully replace expert risk analysts, because humans have a deep understanding of financial transactions beyond the raw numbers. Explained machine decisions are, however, a powerful way to augment analyst performance in handling high-volume alerts, offering insight into where to focus their valuable attention or which new avenues of enquiry to follow.
  5. Bias correction – biases can exist unnoticed within a system, and decisions are often trusted without the underlying data ever being interrogated for mistakes or misinterpretations. Clear explanations of all the factors that could allow unwanted bias to creep into decisions mean those factors can be assessed for risk impact and, where necessary, corrected as part of a continuous feedback loop.
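To make the idea of an “importance rating of decision factors” concrete, here is a minimal Python sketch of what a human-readable machine explanation could look like. It is illustrative only: the factor names, weights, threshold and the explain_decision helper are all hypothetical, and nothing here reflects how Nasdaq Automated Investigator for AML actually works internally.

```python
from dataclasses import dataclass


@dataclass
class Factor:
    """One hypothetical decision factor behind an AML alert decision."""
    name: str      # human-readable description of the factor
    weight: float  # importance rating of this factor in the decision
    value: float   # normalised observation for this alert, 0.0 to 1.0

    @property
    def contribution(self) -> float:
        # How much this factor pushed the overall risk score.
        return self.weight * self.value


def explain_decision(factors: list[Factor], threshold: float = 0.5) -> str:
    """Build a human-readable audit-trail explanation of a machine decision."""
    score = sum(f.contribution for f in factors)
    outcome = "ESCALATE to analyst" if score >= threshold else "CLOSE as low risk"
    lines = [
        f"Decision: {outcome} (risk score {score:.2f}, threshold {threshold:.2f})",
        "Factors by importance:",
    ]
    for fac in sorted(factors, key=lambda f: f.contribution, reverse=True):
        lines.append(
            f"  - {fac.name}: contribution {fac.contribution:.2f} "
            f"(weight {fac.weight:.2f} x value {fac.value:.2f})"
        )
    return "\n".join(lines)


# Entirely made-up factors and weights for one alert, for illustration only.
alert = [
    Factor("Transaction velocity vs. 90-day baseline", weight=0.40, value=0.90),
    Factor("Exposure to high-risk jurisdictions", weight=0.30, value=0.60),
    Factor("Cash intensity relative to peer group", weight=0.20, value=0.10),
    Factor("Deviation from declared customer profile", weight=0.10, value=0.50),
]

print(explain_decision(alert))
```

Output in this style pairs the decision with the ranked factors behind it in a single readable record – the kind of audit trail that lets QA teams, auditors and analysts sanity-check a machine’s output rather than taking it at face value.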

In many industries it will not always be necessary for a machine to explain fully whether it did what was expected of it, but in financial services the need is critical, and the value goes far beyond proving to an auditor that the right decision was taken. Explainable insights will revolutionise the way high-performing FCC teams operate and make a material difference by helping them focus on the investigations that really matter in financial crime.

Darren Innes

Nasdaq

With more than 20 years of experience in RegTech focused on AML, KYC and regulatory data, Darren Innes leads Nasdaq’s expansion of its globally-recognized trade surveillance into complementary areas.

