Manage Risk in Machine Learning Models

FICO Community
5 min read · Aug 19, 2019

By Chris Smith

The Model Risk Management Framework

During the early stages of my career in data and analytics, ‘drag and drop’ tools evolved, making it much easier to build a statistical model, a capability previously limited to data scientists with years of training and experience. This capability, in parallel with the rise of big data and, most importantly, the growing range of applications where models were being used within organizations, presented a risk to the industry. To quote an Office of the Comptroller of the Currency (OCC) supervisory guidance: “The use of models invariably presents model risk, which is the potential for adverse consequences from decisions based on incorrect or misused model outputs and reports” [1].

Model Risk Management Frameworks Emerge
As a direct consequence, model risk management (MRM) frameworks became a priority for both regulators and financial institutions, to ensure that the necessary governance was put in place to mitigate model risk. These MRM frameworks are now commonplace, particularly within banks, and apply to all models deployed within an organization, not just those carrying heightened risk, such as models used in the credit lending process. A typical MRM framework would incorporate components including:

  • Model oversight — inventory of models within an organization, with clear policies associated with model development and management
  • Model validation — ongoing measurement of model performance to ensure robustness (illustrated in the sketch after this list)
  • Model controls — documentation to ensure transparency, and checks to ensure data integrity
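
As a concrete illustration of the ongoing model validation component, the sketch below computes a population stability index (PSI) to flag drift between a model's development-time score distribution and its recent production scores. This is a minimal sketch: the score samples, the number of bins and the 0.25 alert threshold are illustrative assumptions, not requirements of any particular MRM framework.

```python
# Illustrative ongoing-validation check: population stability index (PSI)
# between development-time scores and recent production scores.
# The bin count and the 0.25 threshold are common rules of thumb, assumed here.
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare two score distributions; a larger PSI means more drift."""
    expected, actual = np.asarray(expected), np.asarray(actual)

    # Bin edges taken from the development (expected) score distribution.
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    # Widen the outer edges so production scores outside the development
    # range still fall into the first or last bin.
    edges[0] = min(edges[0], actual.min()) - 1e-9
    edges[-1] = max(edges[-1], actual.max()) + 1e-9

    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)

    # Guard against empty bins before taking logs.
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)

    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

# Hypothetical score samples: development sample vs. the current month in production.
rng = np.random.default_rng(0)
dev_scores = rng.normal(600, 50, 10_000)
prod_scores = rng.normal(585, 55, 10_000)

psi = population_stability_index(dev_scores, prod_scores)
verdict = "significant shift -- escalate for review" if psi > 0.25 else "stable"
print(f"PSI = {psi:.3f} ({verdict})")
```

In practice a check like this would run on a schedule against the model inventory, with the results feeding the documentation and escalation paths that the framework's model controls define.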

Machine Learning Model Development — Beginners Beware
To some extent, the series of events that created the requirement for MRM frameworks is now repeating with the emergence of Artificial Intelligence (AI) and specifically Machine Learning (ML) modeling techniques. For example, the accessibility of ML models is broadening, with open source software making it possible for people with relatively limited specialist knowledge to develop ML-based solutions. This is leading to an increased number of AI / ML applications being used across the credit and collections process.

The risk, however, is magnified given the nature of these ML techniques: outputs from these models are often difficult to interpret, and they carry an increased risk of inherent model bias. As a result, examples of AI bias driving incorrect, or even discriminatory, outcomes are widespread, whether it be recruiting tools with inherent gender or racial biases, or age-estimation tools that significantly over-estimate the age of segments of a population based on their outdoor working conditions. These are just a few examples that highlight the potential risk of deploying AI and ML tools and techniques without the necessary expertise, whether that expertise is industry-specific, mathematical or technological. Organizational oversight is therefore critical, so that the results of these models account for all relevant variables, are tested, and are understood.

Why Model Risk Management Framework Is Critical
The model risk quote from the OCC highlighted above is as relevant today as it was in 2011, especially with respect to AI / ML-based models. Perhaps this guidance also provides the solution: an effective model risk management framework.

In fact, the European Union’s recently published ‘Ethics Guidelines for Trustworthy AI’ [2] contains several guidelines that link very closely to the model risk management framework outlined above, including: oversight, data governance, transparency, and technical robustness.

Clearly, the existing MRM framework within an organization will require modification to account for the added complexity and reduced transparency of an ML-based model, but the broad principles remain the same.

By implementing a detailed governance structure, you can ensure that critical standards are in place for any ML model deployed across the organization, resulting in:

  • Solutions that are explainable to both internal stakeholders and regulators
  • Data governance processes to mitigate the risk of AI bias (see the bias-check sketch after this list)
  • Algorithms within models that are understood and, as a result, technically sound
  • Techniques designed to not only improve model performance, but also to minimize model risk
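
One way to put the data governance point above into practice is a simple disparate impact check on model-driven decisions. The sketch below computes an adverse impact ratio (each group's approval rate relative to the best-treated group); the column names, group labels and the four-fifths (0.8) rule-of-thumb threshold are assumptions made for this example, not prescriptions of any MRM framework.

```python
# Illustrative bias check: adverse impact ratio of model-driven approvals.
# Column names ("group", "approved"), the group labels and the 0.8
# "four-fifths" threshold are assumptions for this sketch.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B", "B", "B"],
    "approved": [ 1,   1,   0,   1,   1,   0,   0,   1,   0,   0 ],
})

approval_rates = decisions.groupby("group")["approved"].mean()
reference_rate = approval_rates.max()  # best-treated group as the reference

for group, rate in approval_rates.items():
    ratio = rate / reference_rate
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"group {group}: approval rate {rate:.2f}, impact ratio {ratio:.2f} [{flag}]")
```

A real deployment would run checks like this against the protected characteristics defined by the organization's policy and its regulators, and record the results as part of the model's documentation.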

Although some departments within an organization, such as collections, may not be as familiar with their company’s MRM framework, it is likely that one exists. At the very least, the analytics department is likely to have a formal process by which models are reviewed and approved. This is a great place to start when reviewing any ML-based solution developed either in-house or externally.

Weighing Performance vs. Risk
When developing or selecting an ML-based solution, it is important not only to focus on model / solution performance, but also to look at how the solution fits into an MRM framework and whether it has the appropriate levels of governance.

Does it provide the necessary levels of integrity, stability, and transparency to meet the requirements of both your organization and the regulator? If not, then the added benefit of a few basis points of model performance may not outweigh the added risk.
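
To make that trade-off concrete, the sketch below compares a transparent logistic regression (structurally close to a traditional scorecard) with a gradient-boosted ensemble on synthetic data and reports the AUC gap. The dataset and both models are placeholders for the assessment an organization would run on its own portfolio data; the point is the comparison pattern, not the specific numbers.

```python
# Illustrative performance-vs-transparency comparison on synthetic data.
# The dataset and both models stand in for a real portfolio and candidate
# solutions; only the comparison pattern is the point of this sketch.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, n_informative=8, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# Transparent benchmark: a logistic regression.
simple = LogisticRegression(max_iter=1000).fit(X_train, y_train)
# More complex challenger: a gradient-boosted ensemble.
challenger = GradientBoostingClassifier(random_state=42).fit(X_train, y_train)

auc_simple = roc_auc_score(y_test, simple.predict_proba(X_test)[:, 1])
auc_challenger = roc_auc_score(y_test, challenger.predict_proba(X_test)[:, 1])

print(f"Logistic regression AUC: {auc_simple:.4f}")
print(f"Gradient boosting AUC:   {auc_challenger:.4f}")
print(f"Uplift: {auc_challenger - auc_simple:+.4f} "
      "-- weigh this against the added validation and transparency burden.")
```

If the uplift is small, the simpler, more transparent model may be easier to validate, explain and govern within the existing MRM framework.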

If you’re interested in ways to bring in the advantages of ML, in how to use ML learnings within traditional scorecard solutions, or in learning more about explainable AI (xAI) tools, check out Designing the Perfect Match, written by two of my FICO colleagues, Ethan Dornhelm and Dr. Gerald Fahner, or download the white paper xAI Toolkit: Practical, Explainable Machine Learning.

In addition, FICO® Decision Central™ can support your MRM efforts by allowing you to automatically monitor the stability and effectiveness of your models while providing governance processes to manage the development, documentation, use, and evolution of every component that goes into making a decision. This includes ML algorithms, predictive models, optimization models, strategy trees, rule flows, and virtually any other element that impacts your decisions.

For more collections and recovery insight, follow this Customer Credit Lifecycle blog.

References
[1] OCC 2011–12. Board of Governors of the Federal Reserve System. “Supervisory Guidance on Model Risk Management.” Office of the Comptroller of the Currency. April 4, 2011. https://www.occ.gov/news-issuances/bulletins/2011/bulletin-2011-12a.pdf

[2] AI HLEG (High-Level Expert Group on Artificial Intelligence) set-up by the European Commission. “Ethics Guidelines for Trustworthy AI.” European Commission. April 8, 2019. https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai

Originally published on the FICO Community.
