Explainable AI, Explained

Explainable AI helps developers and users better understand artificial intelligence models and their decisions.

Written by Ellen Glover
Updated by Brennan Whitfield | Jun 23, 2023

Explainable AI is a set of techniques, principles and processes used to help the creators and users of artificial intelligence models understand how these models make decisions. This information can be used to improve model accuracy or to identify and address unwanted behaviors like biased decision-making. 

Explainable AI can be used to describe an AI model, its expected impact and any potential biases, as well as assess its accuracy and fairness. As artificial intelligence becomes more advanced, many consider explainable AI to be essential to the industry’s future.

What Is Explainable AI?

Explainable AI is a set of techniques, principles and processes that aim to help AI developers and users alike better understand AI models, both in terms of their algorithms and the outputs generated by them.

Explainable AI is important because amid the growing sophistication and adoption of AI, there remains one major ongoing problem: People don’t understand why AI models make the decisions they do. Not even the researchers and developers who are creating them.

AI algorithms often operate as black boxes, meaning they take inputs and provide outputs with no way to figure out their inner workings. Black box AI models don’t explain how they arrive at their conclusions, and the data they use and the trustworthiness of their results are not easy to understand — which is what explainable AI seeks to resolve.

 

How Does Explainable AI Work?

Typically, explainable AI seeks to explain one or more of the following: the data used to train the model (including why it was chosen), the predictions made by the model (and what specifically was considered in reaching each prediction) and the role of the algorithms used in the model. 

In the context of machine learning and artificial intelligence, explainability is the ability to understand “the ‘why’ behind the decision-making of the model,” according to Joshua Rubin, director of data science at Fiddler AI. Therefore, explainable AI requires “drilling into” the model in order to extract an answer as to why it made a certain recommendation or behaved in a certain way.

Researchers have developed several different approaches to explain AI systems, which are commonly organized into two broad categories: self-interpretable models and post-hoc explanations.

Self-interpretable models can be directly read and interpreted by a human; put simply, the model is the explanation. Some of the most common self-interpretable models are decision trees and regression models, such as logistic regression.
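To make this concrete, here is a minimal sketch of a self-interpretable model, assuming scikit-learn and its built-in Iris dataset. The shallow decision tree's learned rules can be printed and read directly, so the model itself serves as the explanation.

```python
# A minimal sketch of a self-interpretable model, assuming scikit-learn is available.
# The fitted decision tree can be printed as human-readable rules: the model is the explanation.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
X, y = data.data, data.target

# Keep the tree shallow so the printed rules stay readable.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text renders the learned if-then structure directly.
print(export_text(tree, feature_names=data.feature_names))
```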

Meanwhile, post-hoc explanations describe or model the algorithm after the fact to give an idea of how it works. These are often generated by separate software tools and can be applied to an algorithm without any knowledge of its inner workings, so long as it can be queried for outputs on specific inputs.

Explanations can also be formatted in different ways. Graphical formats are perhaps the most common, ranging from charts of data analyses to saliency maps that highlight which parts of an input most influenced a decision. Explanations can also be delivered verbally through speech or written up as reports.
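As an illustration of one common graphical format, the sketch below computes a simple gradient-based saliency map, assuming PyTorch and torchvision are installed; the untrained model and random input are stand-ins for a real image classifier and image.

```python
# A minimal sketch of a gradient-based saliency map, assuming PyTorch and torchvision;
# the untrained ResNet and the random "image" are placeholders for a real setup.
import torch
import torchvision.models as models

model = models.resnet18()  # stand-in; a real use case would load pretrained weights
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # placeholder input image

scores = model(image)
# Gradient of the top class score with respect to the input pixels.
scores.max().backward()

# Saliency: per-pixel maximum absolute gradient across the color channels.
saliency = image.grad.abs().max(dim=1).values.squeeze()
print(saliency.shape)  # torch.Size([224, 224])
```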


 

Types of Explainable AI Algorithms

Local Interpretable Model-Agnostic Explanation (LIME) 

One commonly used post-hoc explanation algorithm is called LIME, or local interpretable model-agnostic explanation. LIME takes an individual decision and, by querying the model at nearby points, builds an interpretable model that approximates the decision, then uses that simpler model to provide an explanation. 
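The sketch below illustrates the core LIME idea from scratch, assuming NumPy, scikit-learn and a stand-in black-box classifier; in practice most teams would use the open-source lime package rather than roll their own.

```python
# A from-scratch sketch of the core LIME idea, assuming numpy and scikit-learn;
# real projects would typically use the `lime` package instead.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# A stand-in black-box model trained on synthetic data.
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

x0 = X[0]  # the single prediction we want to explain

# 1. Perturb: sample points in the neighborhood of x0 and query the black box.
neighbors = x0 + rng.normal(scale=0.5, size=(1000, 4))
probs = black_box.predict_proba(neighbors)[:, 1]

# 2. Weight neighbors by proximity to x0 (closer points matter more).
weights = np.exp(-np.linalg.norm(neighbors - x0, axis=1) ** 2)

# 3. Fit a simple, interpretable surrogate model locally.
surrogate = Ridge(alpha=1.0).fit(neighbors, probs, sample_weight=weights)

# The surrogate's coefficients are the local explanation: how each feature
# pushes this particular prediction up or down.
print(dict(enumerate(surrogate.coef_.round(3))))
```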

 

SHapley Additive exPlanations (SHAP)

SHapley Additive exPlanations, or SHAP, is another common algorithm that explains a given prediction by mathematically computing how much each feature contributed to it. It is often paired with visualization tools that plot these contributions, making the output of a machine learning model more understandable.
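The following toy example computes exact Shapley values by brute force for a hypothetical three-feature scoring function, just to show the idea SHAP builds on; the shap package approximates these values efficiently for real models.

```python
# A brute-force illustration of the Shapley values behind SHAP, assuming only numpy.
# The feature names, instance and scoring function are hypothetical stand-ins.
from itertools import combinations
from math import factorial
import numpy as np

features = ["age", "income", "tenure"]          # hypothetical feature names
x = np.array([0.7, 0.2, 0.9])                   # the instance to explain
baseline = np.array([0.5, 0.5, 0.5])            # "missing" features fall back to a baseline

def model(v):
    # A stand-in black-box scoring function.
    return 3 * v[0] + 1 * v[1] - 2 * v[2]

def value(subset):
    # Model output when only the features in `subset` take their real values.
    v = baseline.copy()
    v[list(subset)] = x[list(subset)]
    return model(v)

n = len(features)
for i, name in enumerate(features):
    others = [j for j in range(n) if j != i]
    phi = 0.0
    # Average feature i's marginal contribution over every subset of the other features.
    for size in range(n):
        for subset in combinations(others, size):
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            phi += weight * (value(subset + (i,)) - value(subset))
    print(f"{name}: {phi:+.3f}")
```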

 

Morris Sensitivity Analysis 

Morris sensitivity analysis, also known as the Morris method, works as a one-step-at-a-time analysis, meaning only one input has its level adjusted per run. This is commonly used to determine which model inputs are important enough to warrant further analysis.
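Below is a simplified, from-scratch illustration of the one-at-a-time idea, assuming only NumPy and a stand-in model; full implementations (for example, in the SALib library) build structured trajectories and also report the spread of the effects.

```python
# A from-scratch sketch of Morris-style one-at-a-time screening, assuming only numpy;
# the model here is a hypothetical stand-in.
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Stand-in model: input 2 matters most, input 1 barely matters.
    return 4 * x[2] ** 2 + 2 * x[0] + 0.1 * x[1]

n_inputs, n_repeats, delta = 3, 50, 0.1
effects = np.zeros((n_repeats, n_inputs))

for r in range(n_repeats):
    base = rng.uniform(0, 1, size=n_inputs)
    f_base = model(base)
    for i in range(n_inputs):
        stepped = base.copy()
        stepped[i] += delta                                 # change one input at a time
        effects[r, i] = (model(stepped) - f_base) / delta   # elementary effect

# The mean absolute elementary effect ranks inputs by importance.
mu_star = np.abs(effects).mean(axis=0)
print({f"x{i}": round(v, 2) for i, v in enumerate(mu_star)})
```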

 

Contrastive Explanation Method (CEM) 

CEM provides explanations for classification models by identifying both the features that need to be present and the features that need to be absent for a given prediction (often called pertinent positives and pertinent negatives). It works by defining why a certain outcome occurred in contrast to another, helping developers answer the question “why did X occur instead of Y?”
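The heavily simplified sketch below captures only the contrastive spirit of that question (what would have had to change for Y instead of X?), assuming scikit-learn and a toy classifier; the actual CEM algorithm solves a regularized optimization problem to find pertinent positives and negatives.

```python
# A heavily simplified, contrastive-style sketch, assuming numpy and scikit-learn;
# this is NOT the full CEM algorithm, just an illustration of "why X instead of Y?".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] - X[:, 2] > 0).astype(int)
clf = LogisticRegression().fit(X, y)

x = X[0].copy()
original_class = clf.predict([x])[0]

# Greedily nudge one feature at a time until the predicted class flips;
# the accumulated changes hint at what would have had to differ for "Y" instead of "X".
changes = {}
for _ in range(20):
    if clf.predict([x])[0] != original_class:
        break
    # Pick the single-feature step that most reduces confidence in the original class.
    candidates = []
    for i in range(3):
        for step in (-0.25, 0.25):
            trial = x.copy()
            trial[i] += step
            candidates.append((clf.predict_proba([trial])[0, original_class], i, step))
    _, i, step = min(candidates)
    x[i] += step
    changes[i] = changes.get(i, 0) + step

print("Feature changes needed to flip the prediction:", changes)
```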

 

Scalable Bayesian Rule Lists (SBRL)

SBRLs help explain a model’s predictions by combining pre-mined frequent patterns into a decision list generated by a Bayesian statistics algorithm. This list is composed of “if-then” rules, where the antecedents are mined from the data set and the set of rules and their order are learned.
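Learning such a list is involved, but the output is easy to picture. The sketch below applies a hand-written (not mined) rule list to a hypothetical loan application, just to show the ordered if-then structure this kind of model produces.

```python
# A sketch of what a rule list looks like when applied; the rules here are hand-written
# for illustration, whereas SBRL mines the antecedents from data and learns the rules
# and their order with a Bayesian procedure.
def rule_list_predict(record):
    # Ordered "if-then" rules: the first matching antecedent wins.
    if record["age"] < 25 and record["prior_defaults"] > 0:
        return "deny", "age < 25 AND prior_defaults > 0"
    if record["income"] > 80_000:
        return "approve", "income > 80000"
    if record["debt_ratio"] > 0.6:
        return "deny", "debt_ratio > 0.6"
    return "approve", "default rule"

decision, reason = rule_list_predict(
    {"age": 31, "prior_defaults": 0, "income": 52_000, "debt_ratio": 0.7}
)
print(decision, "because", reason)
```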


 

Why Does Explainable AI Matter?

Explainable AI makes artificial intelligence models more manageable and understandable. This helps developers determine if an AI system is working as intended and quickly uncover any errors. It also helps build trust and confidence among an AI system’s users.

 

AI HAS BECOME MORE UBIQUITOUS IN EVERYDAY LIFE

Artificial intelligence has seeped into virtually every facet of society, from healthcare to finance to even the criminal justice system. This has led many to call for more transparency into how AI systems operate on a day-to-day basis.

Explainable AI is “getting applied for the purposes of increased performance and for increased efficiency in areas where you would have never thought there was AI involved,” Rubin told Built In. “[AI] is getting used more, and the kind of models that are being used are less intrinsically understandable.”

 

AI DECISIONS SOMETIMES COME WITH HARMFUL SOCIETAL RAMIFICATIONS

AI can have deep ramifications for people on the receiving end of its models. Facial recognition software used by some police departments has been known to lead to false arrests of innocent people. People of color seeking loans to purchase homes or refinance have been overcharged by millions due to AI tools used by lenders. And many employers use AI-enabled tools to screen job applicants, many of which have proven to be biased against people with disabilities and other protected groups.

And just because a problematic algorithm has been fixed or removed doesn’t mean the harm it caused goes away with it. Rather, harmful algorithms are “palimpsestic,” said Upol Ehsan, an explainable AI researcher at Georgia Tech. Their traces remain, leaving what he calls algorithmic imprints.

“There is no undo button,” Ehsan told Built In. “Even though algorithms are made of software, and we think of software as very malleable [and] deletable, the effects of these algorithmic systems are anything but that — they leave hard and persistent imprints on society and human lives.”

 

AI IS GETTING MORE REGULATED, REQUIRING MORE INDUSTRY ACCOUNTABILITY

As governments around the world continue working to regulate the use of artificial intelligence, explainability in AI will likely become even more important.

In the United States, President Joe Biden and his administration released a blueprint for an AI Bill of Rights, which includes guidelines for protecting personal data and limiting surveillance, among other things. And the Federal Trade Commission has been monitoring how companies collect data and use AI algorithms.

Looking ahead, explainable AI could help organizations mitigate the compliance, legal and security risks associated with a particular AI model.


 

Principles of Explainable AI

The National Institute of Standards and Technology (NIST), a government agency within the United States Department of Commerce, has developed four key principles of explainable AI.

 

Explanation

An AI system should be able to explain its output and provide supporting evidence. According to the NIST report, a system is explainable when it “supplies evidence, support or reasoning related to an outcome or a process of an AI system.” This principle does not prescribe any specific metric or quality for those explanations, which will likely vary depending on the given system, the scenario and who the user is.

As Ehsan puts it, “explainable AI is pluralistic.” There is no one-size-fits-all solution. “There are different types of fields. Different goals, different use cases.”

 

Meaningful

Whatever the given explanation is, it has to be meaningful and provided in a way that the intended users can understand. If there is a range of users with diverse knowledge and skill sets, the system should provide a range of explanations to meet the needs of those users.

 

Explanation Accuracy

The AI’s explanation needs to be clear and accurate, correctly reflecting the reason the system followed a given process and generated a particular output. This is distinctly different from how accurate the actual AI system is.

“Regardless of the system’s decision accuracy, the corresponding explanation may or may not accurately describe how the system came to its conclusion or action,” the NIST report continued. “Additionally, explanation accuracy needs to account for the level of detail in the explanation.”

 

Knowledge Limits

The AI system must operate within its “knowledge limits,” or the specific conditions for which it was designed. It should only function when it has sufficient confidence in its output, and declaring the system’s limits is recommended to “increase trust” and safeguard against “misleading, dangerous or unjust outputs.”


 

Explainable AI Use Cases

Managing Financial Processes and Preventing Algorithmic Bias

Finance is a heavily regulated industry, so explainable AI is necessary for holding AI models accountable. Artificial intelligence is used to help assign credit scores, assess insurance claims, improve investment portfolios and much more. If the algorithms used to build these tools are biased, and that bias seeps into the output, that can have serious implications for a user and, by extension, the company.

“There is risk associated with using a model to make, for example, a credit underwriting decision. You don’t just want to roll something out willy-nilly unless it’s met certain kinds of standards,” Rubin said.

It’s also important that other kinds of stakeholders understand a model’s decisions. In finance, this includes people like lending agents or fraud auditors — people who don’t necessarily need to know all of the technical details of the model, but who do their jobs better when they understand not only what a given model recommends, but also why it recommends it.

 

Operating and Understanding Autonomous Vehicles

Explainability is a high priority for autonomous vehicles, on both the research and corporate sides.

Autonomous vehicles operate on vast amounts of data in order to figure out both their own position in the world and the position of nearby objects, as well as the relationship between the two. And the system needs to be able to make split-second decisions based on that data in order to drive safely. Those decisions should be understandable to the people in the car, the authorities and insurance companies in case of any accidents.

Why did the car swerve left instead of right? What caused the brakes to be applied? All of these questions, and more, are what explainable AI attempts to answer.

 

Detecting Health Anomalies and Informing Treatment Decisions

The healthcare industry is one of artificial intelligence’s most ardent adopters, using it as a tool in diagnostics, preventative care, administrative tasks and more. And in a field as high stakes as healthcare, it’s important that both doctors and patients have peace of mind that the algorithms used are working properly and making the correct decisions.

For example, hospitals can use explainable AI for cancer detection and treatment, where algorithms show the reasoning behind a given model’s decision-making. This makes it easier not only for doctors to make treatment decisions, but also provide data-backed explanations to their patients.


 

Explainable AI’s Challenging Future

For all of its promise in terms of promoting trust, transparency and accountability in the artificial intelligence space, explainable AI certainly has some challenges, not least of which is the fact that there is no one way to think about explainability, or to define whether an explanation is doing exactly what it’s supposed to do.

“There is no fully generic notion of explanation,” said Zachary Lipton, an assistant professor of machine learning and operations research at Carnegie Mellon University. This runs the risk of the explainable AI field becoming so broad that it doesn’t actually explain much of anything at all.


Lipton likens it to “wastebasket diagnoses” in the medical industry, where a diagnosis is too vague or broad to have any real treatment or solution. “The problem is that you can’t come up with a cure for that category because it doesn’t describe a single disease. It’s too broad. The only way you can make progress is to divide and conquer.”

But, perhaps the biggest hurdle of explainable AI of all is AI itself, and the breakneck pace at which it is evolving. We’ve gone from machine learning models that look at structured, tabular data, to models that consume huge swaths of unstructured data, which makes understanding how the model works much more difficult — never mind explaining it in a way that makes sense. Interrogating the decisions of a model that makes predictions based on clear-cut things like numbers is a lot easier than interrogating the decisions of a model that relies on unstructured data like natural language or raw images.

Nevertheless, it is unlikely that the field of explainable AI is going anywhere anytime soon, particularly as artificial intelligence continues to become more entrenched in our everyday lives, and more heavily regulated.

“As long as there are high stakes involved, and there’s a need for accountability, and the truth remains that AI systems are fallible — they’re not perfect — explainability will always be a need,” Ehsan said. Whether explainability will come part and parcel with AI “remains to be seen,” he added, simply because AI has changed, and continues to change, so much. “What we meant by AI even 10 years ago is very different from what we mean by AI now. So what it means to be explainable could mean very different things even in 10 years.”
