The success of machine learning has prompted a great deal of discussion around the notion of explainable AI. In general, the better the machine learning system, the blacker the box – that is, the harder it is to decipher what is going on inside the model.

The performance of these models is outstanding, but their utility can be severely limited. You cannot put a black box into production against mission-critical tasks if you cannot explain what is going on. This is true across any industry – from hospitals to hedge funds. As a result, much of the recent work in machine learning has focused on perception-based problems, where the errors of the model are not mission-critical. Imagine a machine learning model that recommends images of cats – the user won't mind if a picture of a dog slips in. The bigger the stakes, however, the more important understanding the model's behavior becomes.

For AI to take hold in the enterprise, it has to be explainable.

The technical challenges, the operational challenges and the organizational changes associated with artificial intelligence pale in comparison to the challenge of trust.

Many companies are willing to open up – that is, to offer full transparency, or explainable AI. While transparency is a fundamental requirement, it has little bearing on the problem of trust.

What the world needs is justification, which is something entirely different from transparency.

To explain, let's define transparency. Transparency means identifying what algorithm was used and what parameters were learned from the data. We have seen some companies expose both – the source code of the algorithm and the learned parameters. While interesting, this does not provide any intuition as to what is going on. It allows one to "check the math," but it is not valuable to know that your computer can do matrix multiplication or basic linear algebra operations. This is akin to checking whether the Oracle database can join tables correctly.

This is not to suggest there is no utility in transparency – far from it. Knowing what has been done, with a level of precision that lets us replicate the work, has value. Transparency might also include information about why the calculations were designed in a particular way. This, however, is essentially QA; it does not provide any intuition into the reasons the machine has for its actions.

Let me give you an example – imagine that you train a three-layer neural network for a prediction task. A transparent system would provide the training parameters (e.g. momentum, regularization, etc.) as well as the final parameters (the two weight matrices between the three layers). Now, while this is perfectly inspectable – for every possible input you can essentially hand-verify the outputs – it isn't actually useful, since the verification amounts to ensuring that the library implements matrix multiplication correctly. The problem is that this exercise provides you with no intuition about why the model behaves the way it does. This example is easily extended to other non-trivial algorithms.
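To see how little intuition the exposed parameters buy, here is a minimal sketch of such a "transparent" network; the shape, weights and activations are hypothetical, not taken from any particular system:

```python
import numpy as np

# Hypothetical "transparent" disclosure: the two learned weight matrices of a
# three-layer network (4 inputs -> 5 hidden units -> 1 output), plus the code below.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 5))   # input-to-hidden weights
W2 = rng.normal(size=(5, 1))   # hidden-to-output weights

def predict(x):
    """Forward pass: every step is fully inspectable."""
    hidden = np.tanh(x @ W1)                     # hidden activations
    return 1.0 / (1.0 + np.exp(-(hidden @ W2)))  # sigmoid output

x = np.array([0.2, -1.3, 0.7, 0.05])
print(predict(x))  # we can "check the math" for any input we like...
# ...but nothing in W1 or W2 tells us *why* the model scores x this way.
```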

Beyond Transparency: Justification

The concept of justification is far more robust and is what is required to move AI into production. Like transparency, justification identifies the algorithm that was used and the parameters that were applied, but it also provides the additional ingredient of intuition – the ability to see what the machine is thinking: “when x, because y.”

Justification tells us, for every atomic operation, the reason or reasons behind it. For every classification, prediction, regression, event, anomaly or hotspot, we can identify matching examples in the data as proof. These are presented in human-understandable output and represent the variables – the ingredients – of the model.

Getting to the atomic level is the key to cracking the AI black box. So how might we achieve that in practice?

Machine learning is the practice of optimization – every algorithm maximizes or minimizes some objective. An important feature of optimization is the distinction between global and local optima. Finding a global optimum is hard because the mathematical conditions that tell us we are near an optimum cannot distinguish a global optimum from a local one. The challenge is that it is difficult to know whether the maximum you have found is the global one.

If this sounds obscure, consider the well-worn but highly effective example of climbing a hill in the fog. Your visibility is highly constrained – a few feet. How do you know when you are at the top? Is it when you start to descend? What if you crested a false summit? You wouldn’t know it, but you would claim victory as you began to descend.
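To make the false-summit problem concrete, here is a minimal hill-climbing sketch on a toy one-dimensional objective; the function, step size and starting points are illustrative assumptions rather than anything drawn from a real model:

```python
import numpy as np

def objective(x):
    # Two "summits": a false one near x = -1.5 and the true one near x = +1.5.
    return np.exp(-(x + 1.5) ** 2) + 2.0 * np.exp(-(x - 1.5) ** 2)

def hill_climb(x, step=0.01, iters=2000):
    """Local search 'in the fog': we can only see one step to either side."""
    for _ in range(iters):
        left, right = objective(x - step), objective(x + step)
        if left <= objective(x) >= right:
            break                                   # no visible way up, so we stop
        x = x - step if left > right else x + step  # take the uphill step
    return x

print(hill_climb(-2.0))  # stops near -1.5: a false summit, mistaken for victory
print(hill_climb(0.5))   # stops near +1.5: the true (global) summit
```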

But what if you had a GPS – a map and a way to locate yourself in the fog?

This is one of the areas where Topological Data Analysis (TDA) – a type of AI that can illuminate the black box – is particularly effective. Unlike other AI solutions, TDA produces visual "maps" of data. So in the example of climbing a hill in the fog, using TDA you would know whether you were at the global optimum (the summit) or merely at a local maximum (a false summit), because you could literally see your location in the network.

In fact, for every atomic operation or "action," with TDA we can find our location somewhere in the network. As a result, we always know where we are, where we came from and where (to the extent the prediction is correct) we are going next.

For example, a study that attempts to predict the likelihood of a person contracting lung cancer based on personal behavioral data might strongly predict a positive outcome for a particular person. In that case, transparency would tell you what inputs were used, what algorithm was applied and which parameters it learned, but not why the prediction was made.

By contrast, justification would tell you everything that transparency revealed, while also explaining the prediction by highlighting that the person is a heavy smoker or, more precisely, that the person is in a high-dimensional neighborhood where a statistically overwhelming number of other people are heavy smokers. This information builds intuition at the atomic decision level.
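As a rough illustration of that kind of neighborhood-based evidence (a generic nearest-neighbor sketch with made-up feature values, not the TDA machinery itself), a justification step could surface the concrete records behind a prediction:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Made-up behavioral features per person: [packs_per_week, years_smoking, exercise_hours]
X = np.array([
    [14.0, 30.0, 0.5],
    [10.0, 22.0, 1.0],
    [12.0, 25.0, 0.0],
    [ 0.0,  0.0, 5.0],
    [ 0.0,  0.0, 3.0],
])
is_heavy_smoker = np.array([True, True, True, False, False])

def justify(person, k=3):
    """Return the k most similar records and the share of heavy smokers among them."""
    nn = NearestNeighbors(n_neighbors=k).fit(X)
    _, idx = nn.kneighbors([person])
    neighbors = idx[0]
    return neighbors, is_heavy_smoker[neighbors].mean()

neighbors, share = justify([13.0, 28.0, 0.2])
print(f"nearest records: {neighbors}, heavy-smoker share: {share:.0%}")
# A justification presents these concrete neighbors as the evidence behind a
# "high risk" prediction, rather than just exposing the model's parameters.
```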

Justification differs from transparency in that it concerns the output of a methodology, rather than describing what was done computationally.

Furthermore, one can, with Topological Data Analysis, continue to move “upstream,” understanding the role of the local models, the groups within those local models, the rows within a node, and ultimately the single point of data. This is extremely powerful, not just for its ability to justify a model’s behavior no matter how complex it may be, but also for understanding how to repair the model. That is why so many organizations are now applying TDA as a microscope to their existing machine learning models to determine where those models are failing – even when they were not built with TDA.
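For readers who want to experiment with this kind of drill-down, here is a minimal sketch using the open-source KeplerMapper library (kmapper) together with scikit-learn; the toy data, projection, cover and clustering choices are arbitrary assumptions, not a description of any particular product:

```python
import kmapper as km
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA

# Toy data standing in for a real feature matrix (rows = records, columns = features).
X, _ = make_blobs(n_samples=500, centers=3, n_features=4, random_state=42)

mapper = km.KeplerMapper(verbose=0)

# Project to a low-dimensional "lens", then build the topological network over it.
lens = mapper.fit_transform(X, projection=PCA(n_components=2))
graph = mapper.map(
    lens,
    X,
    cover=km.Cover(n_cubes=10, perc_overlap=0.3),
    clusterer=DBSCAN(eps=1.5, min_samples=3),
)

# Drill-down: each node in the network resolves to the concrete rows it contains,
# which is what lets you trace behavior from the network level down to single records.
for node_id, row_indices in list(graph["nodes"].items())[:3]:
    print(node_id, "->", row_indices[:5], "...")

# Optional: write an interactive map to inspect in a browser.
mapper.visualize(graph, path_html="tda_map.html", title="Toy TDA map")
```

The important part is the loop near the end: every node in the network maps back to specific rows, which is what makes the row-level and single-point justification described above possible.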

Justification is not simply a “feature” of AI – it is core to the success of the technology. The amount of work underway to move us beyond explainable AI/transparency is a testament to its importance.

TDA gets us there today – without sacrificing performance. In the AI arms race, that is worth something.
