Why Do We Need To Understand the Rationale Behind Machine Learning Predictions?

Royston D. Mai, MS
3 min read · Feb 8, 2022


Machine learning is a popular term these days. We all know that machine learning algorithms use historical data to predict new output values, but the reasons behind a model's outcomes are as important as the outcomes themselves. Understanding the rationale behind machine learning predictions gives us trust, and trust is essential to humans, especially while many people are still wary of machine intelligence.

Machine learning models are often used to assist users in making decisions. However, users need to have a certain level of confidence in a model before they can trust it. For instance, no doctor would operate on a patient solely because a model recommended it. Even in low-stakes situations, such as choosing a movie to watch on Netflix, users need to rely on a model that they trust.

Although many machine learning models are considered black boxes, understanding the reasoning behind their predictions can help users determine when to trust them and when not to.

Consider an example where a machine learning model predicts that a patient has the flu. An “explainer” then highlights the symptoms that mattered most to the model, explaining the prediction. However, many advanced machine learning models are black boxes, making it difficult to understand how they arrive at a prediction.
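To make this concrete, here is a minimal sketch, not the article's actual model: a toy “flu” classifier trained on entirely made-up symptom data, where, for a linear model, each coefficient multiplied by the patient's feature value gives a simple per-symptom contribution to the prediction. The feature names and data below are hypothetical, but the breakdown is the kind of output an explainer surfaces.

```python
# A minimal sketch (hypothetical data and features, not the article's model):
# a toy "flu" classifier plus a per-symptom contribution breakdown for one patient.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
symptoms = ["sneeze", "headache", "no fatigue", "age"]

# Synthetic training data: flu is driven mostly by sneezing plus headache here.
X = rng.integers(0, 2, size=(500, 3)).astype(float)       # binary symptoms
X = np.column_stack([X, rng.integers(18, 80, size=500)])  # age in years
y = ((X[:, 0] + X[:, 1] >= 2) ^ (rng.random(500) < 0.1)).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

# "Explain" one patient: for a linear model, coefficient * feature value
# is that feature's contribution to the log-odds of predicting flu.
patient = np.array([1.0, 1.0, 0.0, 45.0])
proba = model.predict_proba(patient.reshape(1, -1))[0, 1]
contributions = model.coef_[0] * patient

print(f"P(flu) = {proba:.2f}")
for name, c in sorted(zip(symptoms, contributions), key=lambda t: -abs(t[1])):
    print(f"{name:>10}: {c:+.2f}")
```

For a deep model the arithmetic is no longer this simple, which is exactly why dedicated explainers exist; the point is only that a per-prediction breakdown is what a doctor would want to see.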

This raises two distinct questions of trust: can I trust this specific prediction, and do I trust that the model makes reasonable predictions in general? With insight into why the model made its prediction, a doctor can make an informed decision about whether to trust it.

It seems intuitive that explaining the rationale behind individual predictions would make us better positioned to trust or mistrust the prediction, or the classifier as a whole. Even if we can’t necessarily understand how the model behaves in all cases, it may be possible (and indeed it is in most cases) to understand how it behaves in particular cases.


Finally, a word about accuracy. If you’ve worked with machine learning before, I’m sure you’re thinking something like this: “Of course my model will perform well in the real world, I have a very high cross-validation accuracy! Why should I bother trying to understand its predictions when I know it is correct 99 percent of the time?” Cross-validation accuracy can be quite deceiving, as anyone who has used machine learning in the real world (not just on a static dataset) will confirm.

Sometimes data that shouldn’t be available at prediction time accidentally leaks into the training data. Sometimes the way you gather data introduces correlations that will not exist in the real world, and the model exploits them. Many other tricky problems can give us a false picture of performance, even when running A/B tests. I am not saying you shouldn’t measure accuracy, only that it should not be your sole metric for assessing trust.
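As an illustration of the first failure mode, here is a small sketch with synthetic (hypothetical) data in which a feature that is essentially a noisy copy of the label leaks into training. Cross-validation looks nearly perfect, even though that signal would never be available for a real prediction.

```python
# A minimal sketch of target leakage on synthetic data: a near-copy of the
# label sneaks in as a feature, so cross-validation accuracy looks almost
# perfect even though the "signal" won't exist in production.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000

y = rng.integers(0, 2, size=n)
honest_features = rng.normal(size=(n, 5)) + 0.3 * y[:, None]  # weak real signal
leaked = y + rng.normal(scale=0.05, size=n)                   # near-copy of the label

X_leaky = np.column_stack([honest_features, leaked])
X_honest = honest_features

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("CV accuracy with leaked feature:  %.3f" % cross_val_score(clf, X_leaky, y, cv=5).mean())
print("CV accuracy with honest features: %.3f" % cross_val_score(clf, X_honest, y, cv=5).mean())
```

With the leaked column included, 5-fold accuracy sits close to 100 percent; drop it and the score falls back to the model’s honest performance. An explanation of individual predictions would immediately reveal that the model is leaning on a feature it should never have seen.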


In conclusion, trust is crucial for effective human interaction with machine learning systems, and we think explaining individual predictions is an effective way of assessing trust.

Next time, we will look at how to generate these explanations using LIME. Stay tuned!

Written by Royston D. Mai, MS

Simplified Data Science, Machine Learning, Marketing & Business For Everyone | https://www.linkedin.com/in/datt-mai/
