Explainable AI – AI 2.0?

by Mario Mallia Milanes | Feb 6, 2020

Introduction

During the last decade AI has moved to the forefront of computer science, helped along by advances in algorithms and the availability of Big Data. In the process we have become familiar with software that simulates human cognitive reasoning, and the demand for more of it keeps growing. Yet when one takes stock of the current arsenal of algorithms, many, if not all, are closed-box systems offering little to no justification for their outcomes. If artificial intelligence is to be used properly, especially in a support role, humans must be kept in the loop and given the means to follow how conclusions were reached. This would help people, experts and non-experts alike, gain trust in the systems assisting them.

One of the main inhibitors of progress towards explainability is the lack of consistent terminology and measurement metrics. Much of the literature reviewed calls for a unified approach to the research, but as the topic is in its infancy this will take time. Hopefully a common structure for xAI models will eventually be designed and accepted, which would facilitate interoperability.

Principles of Artificial Intelligence

Artificial Intelligence has brought both turmoil and reassurance to the computer-dominated world. Naturally, when one sees the possibilities opened up by this new way of working, the rumor mill starts to spin, churning out talk of takeover and disruption. That prospect is, thankfully, still far off; human collaboration with Artificial Intelligence is not. The audience of intelligent machines spans far and wide. Areas of daily life facilitated by artificial intelligence include:

  • Intra- and inter-Government collaboration;
  • Business and Government governance;
  • Public sector administration and public service delivery;
  • Civil society at large;
  • And, potentially, a multi-stakeholder audience.

For all this to function properly, rules are needed so that processes and interactions follow clear lines. Such principles include:

  • Privacy;
  • Explainability;
  • Fairness.

Needs

The algorithms used in the realm of Artificial Intelligence are often complex and efficient, but their main drawback is the lack of insight they give into their reasoning and behavior. In other words, explanation is starting to become mandatory. With the introduction of the GDPR in May 2018, explanation has also become enforceable: although the legislation does not expressly refer to a right to explanation, that does not mean it cannot be enforced[1]. What constitutes an explanation, however, is far from clear. Many rightly see it as a non-trivial problem that requires much debate, and apart from the legal and philosophical challenges, technical challenges abound too.

To date, many algorithms cannot offer a decent account of their line of deduction. We need to identify bias in data and ensure fairness as basic principles. The starting point should be a definition of the term explanation itself.

The Problems and Challenges We Face

In the ordinary sense, explanation implies that one can examine the inputs and outputs of a system and arrive at a plausible understanding of the mechanism by which a decision was taken. To date, many algorithms offer some sort of surface-level explanation, but we are still far from answering why a certain course of action was taken.

Many models lack the transparency necessary to make them understandable. Two examples clarify the point: how could a proper assessment of an accident be made in the case of a driverless car that took certain evasive action? And why was one diagnosis put forward in favor of another? Transparency is itself subjective. Are models which can be interpreted exclusively by experts or programmers deemed to be transparent? What about the user who faces the technology each day? Shouldn’t they have the right to understand, or even opt out of, such decisions?

Recapping our argument, we are left with more questions than answers, namely:

  • How should we produce models that are more explainable?
  • How should the interfaces to these models be designed?
  • What are the psychological requirements of an effective explanation?
  • How can we effectively measure explanation, or explainability itself?

Finally, we face another fork in the road: do we create new AI algorithms that are inherently explainable, or do we adapt what we already have?

“All this drive towards the enhancement of algorithms to make them clearer derives from the need to have rights respected and disallow free access to data.”

Goals We Have to Reach

As stressed earlier, explainability is complex. The position this work takes is that explainability enhances trust, which in turn facilitates the interaction between man and machine. We have identified nine characteristics necessary to enable explanation in algorithms:

  • Trustworthiness: This implies the confidence humans have in a given model or system;
  • Causality: Finding relationships within data sets. The ability to explain properly requires knowledge of such relationships. Elements within a dataset need to be correlated a priori;
  • Transferability: The possibility of applying one common explainability framework to all algorithms, in the hope of obtaining consistent, understandable output;
  • Informativeness: This is one of the targets of any AI algorithm. It is the capability to solve problems and assist in decision-making. Machines are capable of recursively going through data but rarely leave a trace of their trajectory;
  • Confidence: This characteristic is necessary if trust is to take hold. Algorithms must be robust and stable;
  • Fairness: Decisions taken by an algorithm must be just and open to scrutiny. The user must have clear visibility of any relationships within the data that could affect impartiality and proper ethical analysis;
  • Accessibility: Humans should be part of the system. People working with intelligent algorithms should be allowed to interact with the decision-making process. This must also be made available even to non-experts;
  • Interactivity: Human operators or co-decision makers are to have the capability to follow the decision process;
  • Privacy: This is one of the most important aspects of explainability, yet one that much of the literature reviewed shies away from. In practical terms, algorithms may have access to data that has been restricted by the user. The issue here is that such data, whether or not it is included in the decision process, may affect the resulting outcome. For the work of this thesis, privacy within algorithms shall not be entertained.

As can be seen from this short introduction to the subject, explanation poses many challenges and questions across many domains. I am of the conviction that effort must be made in all spheres to make explainability work for us.

Classification Terminology

As discussed previously, one of the issues that frequently bars the way to progress is the lack of a common parlance within explainable Artificial Intelligence. In this section we shall try to give some commonly used terms a proper definition.

  • Understandability: Humans can understand how a model works without needing to know the internal structure of the algorithm;
  • Comprehensibility: This can be expressed as a factor of algorithm complexity. This term defines the changes needed to a model to make it understandable;
  • Interpretability: The ability to provide meaning in terms a human can understand;
  • Transparency: This term can have several meanings; the one I prefer is the quality that makes an algorithm understandable by itself. Transparency has several characteristics that go with it, as follows:
    • Simulatable: the algorithm can be simulated by a human;
    • Decomposable: the algorithm can be deconstructed for better understanding;
    • Algorithmic transparency: The inner workings of the algorithm are open to scrutiny.

Approaches to Explainability

Currently, two methods are used to add explainability to algorithms: the embedded approach and the post-hoc approach.

Embedded Approach to Explainability

In this scenario explainability forms part of the algorithm itself. New algorithms would have to be devised in which explainability is an integral part of the model. Opaque algorithms would not be allowed under this scenario: users should be able to analyse the internal workings of an algorithm and also understand the rationale behind its output, which should be accompanied by human-readable justifications that facilitate understanding.
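To make the idea concrete, here is a minimal sketch of an embedded, interpretable-by-design model. It assumes scikit-learn, and the dataset and depth limit are purely illustrative choices; the point is that the fitted model can be printed as human-readable rules, so the explanation is part of the model itself rather than bolted on afterwards.

```python
# Minimal sketch of the embedded approach: the model is the explanation.
# Assumes scikit-learn; the dataset and depth limit are illustrative choices.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()

# A shallow decision tree is interpretable by construction.
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(data.data, data.target)

# The rationale behind any prediction can be read directly as if/then rules.
print(export_text(model, feature_names=list(data.feature_names)))
```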

Post-Hoc Approach to Explainability

Post-hoc models are those to which explainability is applied retroactively. The advantage in this case is that we can keep using current models and still benefit from an explanation. A common occurrence of this type can be found in recommender systems, where users are informed that their choices are usually accompanied by other options; such explanations are usually very dry and are frequently ignored. The algorithms usually targeted are opaque ones, typically Deep Neural Networks. Post-hoc explainability can be specifically designed for each type of artificially intelligent algorithm, but the disadvantage of such an approach is that each algorithm would need its own customised explanation method, ruling out a common methodology for explainability.
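As a small illustration of this per-algorithm flavour of post-hoc explanation, the sketch below (assuming scikit-learn; the dataset and model are arbitrary choices) reads the impurity-based feature importances that exist only for tree-based models. A deep neural network would need an entirely different, purpose-built technique, which is precisely the drawback just described.

```python
# Minimal sketch of model-specific post-hoc explanation (assumes scikit-learn).
# feature_importances_ is particular to tree-based models; other model families
# would need their own, custom explanation technique.
from sklearn.datasets import load_wine
from sklearn.ensemble import GradientBoostingClassifier

data = load_wine()
model = GradientBoostingClassifier(random_state=0).fit(data.data, data.target)

# Rank the features the fitted model leaned on most when making its decisions.
ranked = sorted(zip(data.feature_names, model.feature_importances_),
                key=lambda pair: pair[1], reverse=True)
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.3f}")
```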

Another post-hoc approach to explainability uses model-agnostic methods, which are designed to give a common look and feel to the resulting output or level of explanation. In this case the explanation is made to hook up seamlessly to existing algorithms and extract information from them. Currently the favoured approach is to use a second model that complements the first: both models run in tandem, the second collecting information from the first, which is then fed back to the user.
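One common realisation of this tandem idea is a global surrogate: a simple, interpretable model trained to mimic the predictions of the opaque one. The sketch below assumes scikit-learn; the black-box model, the surrogate's depth, and the dataset are all illustrative choices rather than a prescription.

```python
# Minimal sketch of a model-agnostic, post-hoc surrogate (assumes scikit-learn).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

# 1. The opaque model whose behaviour we want to explain.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# 2. The second model runs "in tandem": it is trained on the black box's
#    predictions rather than on the true labels, so it learns to mimic it.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# 3. Fidelity: how closely the surrogate tracks the black box on the same data.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"Surrogate fidelity to the black box: {fidelity:.2%}")

# 4. The surrogate's rules become the feedback shown to the user.
print(export_text(surrogate, feature_names=list(data.feature_names)))
```

A surrogate of this kind only ever approximates the black box, so reporting its fidelity alongside the explanation is what keeps the feedback to the user honest.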

Conclusion

All this drive towards the enhancement of algorithms to make them clearer derives from the need to have rights respected and to disallow free access to data. While the effects of enhanced algorithms are apparent, the lack of homogeneous development should not hinder work in other areas of research. To achieve greater transparency and control over algorithms there is naturally a trade-off we must accept: performance. The more opaque a system is, the harder it is to make transparent and the more overhead the explanation adds. At this point it is pertinent to point out that the complexity of an algorithm is not directly related to its accuracy. Data also plays an important part: it should be of good quality, contain informative features, add value, and be properly structured.
