AI is increasingly becoming an integral part of every sphere of daily life. From self-driving cars to chatbots and predictive analytics, AI is everywhere. Among enterprises, Gartner reports that AI adoption grew 270 percent over the past four years. Yet for most people, AI remains a black box, equal parts mystery and magic. Explainability in AI seeks to change exactly that. Let us look at why it is needed and how it works.
Let’s start with a visit to your family doctor. Yeah, I know we all hate hospital visits, or even the thought of them, so let’s get this over with fast. Suppose you are feeling a bit under the weather. Would you be satisfied with the visit if your physician prescribed some medicines and simply sent you on your way home?
Of course not! Ideally, you’d want them to explain what’s happening to you, identify the likely causes, and lay out how they will help you recover fully. This aspect of their service is explainability – helping you understand the problem and then walking you through the solution.
However, many artificial intelligence solutions available today lack precisely this aspect. Let’s see why this is a crucial issue.
The AI Black Box
Traditionally, AI solutions delivered predictable results and services to businesses because their ML models were well understood. However, as continuous learning and feedback loops entered the picture, AI models grew increasingly complex – so much so that they became too difficult for humans to follow.
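For contrast, here is a minimal sketch of how transparent a classical model can be; the scikit-learn dataset is purely illustrative:

```python
# A minimal sketch of a "well-understood" model: in plain linear regression,
# each coefficient states exactly how much one feature moves the prediction.
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression

data = load_diabetes()
model = LinearRegression().fit(data.data, data.target)

# The model's entire "reasoning" fits in one line per feature.
for name, coef in zip(data.feature_names, model.coef_):
    print(f"{name}: {coef:+.2f}")
```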
New-generation AI solutions are equipped with continuous learning capabilities: they use feedback on their output to improve accuracy, refining their own models and algorithms as they go. Using libraries like Graphviz, we can visualize what a model has learned, but that doesn’t help us understand why it learned it that way. To that aspect of their workings, we are virtually blind.
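The sketch below illustrates the gap (assuming scikit-learn and the graphviz Python package are installed): it renders a decision tree’s learned structure, but the picture alone never says why the model settled on those splits.

```python
# A sketch of visualizing *what* a model learned with Graphviz.
# The iris dataset and shallow tree are stand-ins for any fitted model.
import graphviz
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_graphviz

data = load_iris()
model = DecisionTreeClassifier(max_depth=3).fit(data.data, data.target)

# export_graphviz emits DOT source describing the tree's splits --
# the structure, not the reasoning behind it.
dot = export_graphviz(model, feature_names=data.feature_names,
                      class_names=list(data.target_names), filled=True)
graphviz.Source(dot).render("iris_tree", format="png")  # writes iris_tree.png
```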
Why is explainability the need of the hour?
Let’s give credit where it’s due. AI solutions have become exceedingly good at performing specialized niche tasks, such as autonomous driving, image recognition, manufacturing plant monitoring, and speech recognition.
In many of these tasks, AI consistently outperforms humans in both accuracy and efficiency. So, that brings us to the question:
If AI is more accurate and can work without human bias, why should we even bother explaining how it works?
The answer to this question is manifold:
Accuracy and Bias
Let’s take the case of COMPAS, an AI solution used by courts in the US to assess a defendant’s recidivism risk (the probability that they will re-offend). Investigations found that COMPAS was biased against people of color and would rate their recidivism risk as higher!
And that’s hardly a one-off incident. Apple Card’s credit assessment AI was reported to be consistently biased against women.
So, the basic presumption in our earlier question – without human bias – falls apart on closer inspection. AI can be biased, which is quite intuitive when you think about it: AI solutions learn from the data fed into them, and humans generate that data, unconsciously baking their own biases into it. Apple Card’s AI, for instance, likely learned from the historical work of countless human credit assessment officers and merely replicated it – their biases and preferences found their way into the AI and its decisions.
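A bias like this is often detectable with simple checks. Below is a hypothetical sketch – the data is invented and the 0.8 threshold is only the common “four-fifths rule” of thumb – comparing a model’s approval rates across two groups:

```python
# Hypothetical fairness check: compare approval rates across groups.
# The decisions below are invented; in practice they'd be model output.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [ 1,   1,   1,   0,   1,   0,   0,   0 ],
})

rates = decisions.groupby("group")["approved"].mean()
ratio = rates.min() / rates.max()   # "disparate impact" style ratio
print(rates)
print(f"impact ratio: {ratio:.2f}")  # under ~0.8 is a common red flag
```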
With AI solutions deployed in medicine, national defense, the judiciary, and other critical fields, it is crucial that all stakeholders affected by them understand how they work and can verify that they are fair.
Explainability in AI introduces trust and reliability into the AI solution’s capabilities.
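Post-hoc explanation tools are one way to build that trust. The sketch below uses SHAP, a popular open-source library, to attribute a single prediction to its input features. It shows one common technique under illustrative data – not a claim about how any particular vendor’s system works:

```python
# A sketch of per-prediction explanation with SHAP (one common technique).
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

data = load_diabetes()
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(data.data, data.target)

# TreeExplainer attributes a single prediction to each input feature.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data[:1])  # shape: (1, n_features)

# Rank features by how much they pushed this one prediction up or down.
order = np.argsort(np.abs(shap_values[0]))[::-1]
for i in order[:3]:
    print(f"{data.feature_names[i]}: {shap_values[0][i]:+.1f}")
```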
Legal/Regulatory Obligations
AI debacles like COMPAS and Apple Card have not escaped the attention of lawmakers. An increasingly data-regulated world of GDPR and CCPA compels organizations to be fully transparent about their AI solutions. Those solutions are under constant scrutiny, and organizations cannot risk attracting heavy penalties for non-compliance. So, explainability is the need of the hour.
Causality
Let’s revisit the COMPAS example. Does a defendant have the right to know why they are considered at risk of recidivism? Without a clear understanding of COMPAS’s models, the justice system’s only response would be, “Well, because this AI said so!”
Yeah, I know that was an extreme example. So, let’s try one more. You are operating a manufacturing plant, and your equipment monitoring AI tells you that there’s a problem with your pumps: they are vibrating too much. How does that help you? You would still have to assign a technician to identify the cause, which could lie in a different machine altogether, before anyone can fix it.
An AI whose models are designed to answer the “whys” and the “hows” can detect issues and then analyze its own decision models to zero in on the root cause. Such a solution might notify the plant manager that vibrations arising from a motor are radiating into the pumps and destabilizing them – saving plant engineers precious hours of diagnostics and repair.
In cases like these, explainability not only makes issue identification possible but also offers a quick, actionable path to resolution – as the sketch below illustrates.
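To make the pump scenario concrete, here is a hypothetical sketch that ranks candidate causes of a vibration alarm by correlation. The sensor names and data are invented, and this deliberately simplified stand-in is not UptimeAI’s actual method:

```python
# Hypothetical root-cause ranking: given a pump-vibration alarm, rank the
# other sensors by how strongly they move with the symptomatic signal.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
t = np.arange(500)
motor = np.sin(t / 10) + rng.normal(0, 0.1, 500)  # a vibrating motor...
readings = pd.DataFrame({
    "motor_vibration": motor,
    "pump_vibration":  0.8 * motor + rng.normal(0, 0.1, 500),  # ...shakes the pump
    "coolant_temp":    rng.normal(60.0, 2.0, 500),             # unrelated sensor
})

# Correlation with the alarming signal is a crude but explainable ranking.
suspects = readings.corr()["pump_vibration"].drop("pump_vibration").abs()
print(suspects.sort_values(ascending=False))  # motor_vibration tops the list
```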
The Road Ahead
AI is a black box full of paradoxes. For instance, the more accurate their models grow, the harder they become to understand. This paradox fuels the skepticism that focusing too much on explainability in AI may come at the cost of accuracy!
Therefore, developing AI that can grow highly complex and accurate while remaining explainable is the biggest challenge before present and future AI creators. Only then will AI be considered truly ‘intelligent’ by the masses, spurring its adoption.
At UptimeAI, we believe in maintaining the balance of accuracy and explainability to harness the power of AI. If the equipment monitoring example above seemed too good to be true, then you need to talk to us today about our virtual AI plant expert 🙂