
Explain yourself machine

By Joe Smallman - Last updated: Thursday, August 10, 2017

According to Stephen Hawking, artificial intelligence will be “either the best, or the worst thing, ever to happen to humanity”. The recent breakthroughs in artificial intelligence are largely due to developments in machine learning. However, one of the key challenges of machine learning is that when ‘the machine’ gets it wrong, it is very hard to understand why. Fortunately there are a few emerging techniques that allow us to probe the ‘mind’ of the machine, to get an insight into its decisions, and hopefully build our trust in it. At Cambridge Consultants we’re very interested in these emerging techniques, and are working to ensure that the result for humanity is at the more optimistic end of Hawking’s prediction.

Machine learning algorithms differ from more traditional algorithms in one key way: traditional algorithms consist of a set of pre-programmed rules whereas machine learning algorithms learn the rules for themselves using a (typically large) set of examples. One can think of the algorithm learning from what it has seen in the same way that a toddler learns how to tell the difference between a dog and a cat. In the dog and cat illustration, a developer of a traditional algorithm would, for example, have to write an algorithm to find eyes in the image, associate them with a body, and then measure how round the pupils are (cats obviously have untrustworthy slits). That’s not straightforward!
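To make the contrast concrete, here is a minimal sketch in Python (the features and toy data are purely illustrative, not from any real system). The traditional approach hard-codes the rule; the machine learning approach infers its own rule from labelled examples.

```python
# Traditional approach: the developer writes the rule by hand.
def classify_by_rule(pupil_roundness):
    # Threshold chosen by the programmer: round pupils -> dog, slits -> cat
    return "dog" if pupil_roundness > 0.7 else "cat"

# Machine learning approach: the rule is learned from labelled examples.
from sklearn.tree import DecisionTreeClassifier

# Toy training data: [pupil_roundness, ear_pointiness] for each animal
features = [[0.9, 0.2], [0.8, 0.3], [0.2, 0.9], [0.3, 0.8]]
labels = ["dog", "dog", "cat", "cat"]

model = DecisionTreeClassifier().fit(features, labels)  # learns its own rule
print(model.predict([[0.85, 0.25]]))  # -> ['dog'], a rule nobody hand-wrote
```

With a handful of hand-picked features this looks harmless, but a real image classifier learns millions of such decisions directly from pixels, which is exactly why its reasoning is so hard to read back.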

Understanding how machine learning algorithms process data and make decisions is not easy. In the same way you find it hard to explain the difference between a cat and a dog (it is hard – both are small, have four legs, are furry, and have a tail), a machine learning algorithm will make lots of uninterpretable decisions to come to its final answer. This isn’t necessarily a problem; we can treat them as black boxes and trust they have been trained well. But what are the limitations of the trained algorithm? What happens when they don’t work? How can we learn to trust machine learning algorithms and begin offloading human work onto them? Many experts and politicians are certainly worried about these issues. Next year the EU General Data Protection Regulation (GDPR) comes into force, and it includes a right to an explanation for decisions made by algorithms.

Gaining human-understandable insight into the workings of machine learning algorithms is a necessary step towards solving these problems, and is currently an active field of academic research. Two approaches that show promise are Attention Mechanisms and Local Interpretable Model-Agnostic Explanations (LIME). Attention Mechanisms analyse which parts of the input data the algorithm relies on for each decision. In LIME, the input data undergoes a set of subtle perturbations in order to understand which changes have a big impact on the final decision, shedding light on how the algorithm reaches its conclusion.
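To give a flavour of how the LIME idea works, here is a minimal sketch for a tabular black-box classifier (the function, parameter names, noise scale and kernel are our own simplifications, not the authors’ implementation): perturb the instance, ask the black box about the perturbed copies, and fit a simple weighted linear model that mimics the black box locally.

```python
# A minimal, illustrative sketch of the LIME idea for a tabular black box.
# `predict_proba`, the noise scale and the kernel width are assumptions here.
import numpy as np
from sklearn.linear_model import Ridge

def explain_locally(predict_proba, x, n_samples=1000, kernel_width=0.75):
    """Approximate the black box near instance x with a weighted linear model."""
    rng = np.random.default_rng(0)
    # Create many slightly perturbed copies of the instance
    perturbed = x + rng.normal(scale=0.1, size=(n_samples, x.shape[0]))
    # Ask the black box what it predicts for each perturbed copy
    preds = predict_proba(perturbed)[:, 1]
    # Weight each copy by how close it stayed to the original instance
    distances = np.linalg.norm(perturbed - x, axis=1)
    weights = np.exp(-(distances ** 2) / kernel_width ** 2)
    # Fit an interpretable surrogate; its coefficients indicate which
    # features drove the black box's decision in this neighbourhood
    surrogate = Ridge(alpha=1.0).fit(perturbed, preds, sample_weight=weights)
    return surrogate.coef_
```

For images, as in the wolf-husky example below, LIME perturbs the input by switching regions of the image (superpixels) on and off rather than adding noise, but the underlying idea is the same: the explanation is whichever parts of the input the local surrogate model leans on most.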

The wolf-husky example from Ribeiro et al. demonstrates how human insight is vital to gaining trust in otherwise opaque algorithms.

Figure 1: Taken from M.T. Ribeiro et al., “Why Should I Trust You?”: Explaining the Predictions of Any Classifier, Proc. 22nd ACM SIGKDD Int. Conf. on Knowledge Discovery and Data Mining (KDD 2016), 1135-1144

In this example, the husky shown in Figure 1 (a) is incorrectly classified as a wolf. The explanation image, Figure 1 (b), shows that the algorithm is basing this decision almost entirely on the presence of snow surrounding the husky. This indicates that the algorithm has been poorly trained using a biased dataset of images of wolves taken in the snow. What the authors created was an effective snow classifier – not a husky-wolf classifier. The algorithm got the classification wrong, but we, as humans, were able to at least partially understand why the algorithm came to its conclusion.

At Cambridge Consultants we are investing internally to understand and implement both of these approaches to interpretation. We aim to take these ideas further and make them robustly applicable to the problems that our clients need help solving. We want to develop smart algorithms for our clients and, crucially, we want to help our clients trust the algorithm outputs through human interpretation.

In a follow-up blog we’ll discuss how we’re using machine learning to classify your accent, and demonstrate how these interpretation techniques can be applied to understand the analysis of your speech.


Reference: M.T. Ribeiro et al., “Why Should I Trust You?”: Explaining the Predictions of Any Classifier, Proc. 22nd ACM SIGKDD Int. Conf. on Knowledge Discovery and Data Mining (KDD 2016), 1135-1144

