Organizations are becoming increasingly reliant on machine learning models to make decisions: allocating jobs, loans, or even university places, decisions that directly or indirectly affect a person's life. Similarly, machine learning algorithms make recommendations based on our choices, such as which movie to binge on, which similar products to browse on retail sites, or which apartment fits our budget.
As businesses increasingly adopt machine learning tools to ease their daily workload, there is a corresponding need to understand exactly how these models and algorithms perform their tasks, especially when many decisions are made without any human intervention.
Consider the example of choosing a candidate for a job within an organization. An ML model is used to select the 10 best candidates out of 100 applicants. Before putting faith in the model's suggestions, the recruiter would want to see how those 10 candidates were chosen. If the task were performed by a human, assessing their judgment would be easy: it would simply involve retracing their steps, looking for short summaries or highlighted and underlined key elements in a resume. ML models are different: depending on the depth of the algorithm's neural network, there may be little to retrace, and as a result less transparency. A human can understand a machine with 3 or even 30 gears, levers, and pulleys, but things become very difficult to follow once something has more than 300 moving parts.
The black box problem is nothing new in the world of AI, and neither is its growing prevalence in today's powerful machine learning solutions and sophisticated models. These models are quite capable of outperforming humans at complex tasks such as classifying images, transcribing speech, or translating languages. But the more complicated a model becomes, the lower its explainability.
In some machine learning applications, the black box issue does not matter: users simply want to leverage the machine's intelligence. If a simple, easy-to-understand model cannot perform a given task on par with a human, such as translating a menu from Spanish to English, the user must either accept a more opaque model or translate the text manually.
In other applications, the question of transparency is not considered at all. A model that picks out the 10,000 most loyal or recurring customer prospects from a list of millions, or chooses the best products to suggest to them, is an example of humans being completely out of the loop: checking every result manually would be far too time-consuming.
Creating transparency in sophisticated machine learning models is an area of ongoing research. Broadly speaking, there are five key approaches:
- Use simpler models. The downside is that this forgoes accuracy for explainability.
- Combine simpler and more sophisticated models. The sophisticated model formulates the recommendations while the simpler model gives the reasoning behind decisions. This approach holds promise but there are cases where the models disagree with each other.
- Use intermediate model states. Computer vision is a good example: elements in the model's intermediate layers activate on certain patterns and can be thought of as features that provide the reasoning behind an image classification.
- Use attention mechanisms. Many of the most sophisticated models include a process that directs 'attention' to the parts of the input that hold the most significance. These weights can be used to highlight the parts of an image or a text that contribute most to a particular suggestion or recommendation.
- Modify inputs. If removing a few words or masking part of an image changes the model's output significantly, those inputs likely play a significant role in the classification as well. They can be explored further by running the model on different variations of the input and highlighting the influential parts to the user.
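The second approach, pairing a sophisticated model with a simpler explanatory one, can be sketched in a few lines. In this illustrative example (all names and the black-box rule are hypothetical, not from any particular library), a one-split "decision stump" is fitted to mimic the outputs of an opaque classifier; the stump's chosen feature and threshold then serve as a human-readable explanation:

```python
import numpy as np

def complex_model(X):
    # Hypothetical black box: a nonlinear rule the user cannot inspect.
    return (X[:, 0] + 0.1 * np.sin(X[:, 1]) > 0.5).astype(int)

def fit_stump(X, y):
    """Fit a depth-1 decision stump that mimics the black box's outputs."""
    best = (0, 0.0, 0.0)  # (feature index, threshold, agreement rate)
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            agree = ((X[:, f] > t).astype(int) == y).mean()
            if agree > best[2]:
                best = (f, t, agree)
    return best

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = complex_model(X)                      # labels produced by the black box
feature, threshold, agreement = fit_stump(X, y)
# The stump recovers a simple rule ("feature 0 above ~0.5") that agrees
# with the complex model on most inputs and is easy to explain.
```

The caveat noted above applies here too: the stump and the black box disagree on the small fraction of inputs where the nonlinear term matters, which is exactly where the simple explanation breaks down.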
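The attention idea can also be shown concretely. The toy sketch below (the query/key setup is a simplified assumption, not a full transformer layer) computes softmax weights over query-key dot products; the input token whose key best matches the query receives the most 'attention':

```python
import numpy as np

def attention_weights(query, keys):
    """Softmax over query-key dot products: higher weight = more attention."""
    scores = keys @ query
    exp = np.exp(scores - scores.max())   # numerically stable softmax
    return exp / exp.sum()

# Toy setup: three input tokens; token 2's key aligns with the query.
query = np.array([1.0, 0.0])
keys = np.array([[0.1, 0.9],
                 [0.2, 0.8],
                 [3.0, 0.0]])
w = attention_weights(query, keys)
# The weights sum to 1 and concentrate on token 2, which is the part of
# the input a practitioner would highlight as most relevant.
```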
In the end, decisions made by humans can be explained easily in most situations. The same should become true of sophisticated algorithms, and it is the software provider's responsibility to push research on technical transparency forward, so as to increase trust in intelligent software and the decisions it makes.