
Getting real with Machine Learning

By James Clemoes - Last updated: Tuesday, December 6, 2016

Everyone claims they’re doing it, most aren’t doing it, a few are doing it badly, and even fewer are doing it well.

With the ready availability of great Machine Learning frameworks, such as Torch (used by Facebook) and Google’s TensorFlow, the barrier to entry can seem deceptively low. Such tools are relatively straightforward to install, and before you know it you’re categorising digits or spotting cats. So you’re doing AI, right?
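
For illustration, here is roughly what that “hello world” of digit classification looks like – a minimal sketch using the Keras API that now ships with TensorFlow (the exact APIs have evolved since this article was written; the layer sizes and training settings below are illustrative, not a recipe):

    # A minimal sketch of "categorising digits": a small fully connected
    # network trained on the MNIST handwritten-digit data set.
    import tensorflow as tf

    # Load and normalise the 28x28 greyscale digit images.
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0

    # Flatten each image, one hidden layer, softmax over the ten digits.
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    model.fit(x_train, y_train, epochs=5)
    print(model.evaluate(x_test, y_test))

A couple of dozen lines will typically score well above 95% on the held-out test digits – which is exactly why the barrier to entry feels so low.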

That ease is a testament to the Machine Learning development process – the work required to build even those examples was likely tremendous, even though the network itself may be only 37 lines of code. Code that defines the structure of a Machine Learning network is often relatively compact – but it doesn’t show the time required to meticulously prepare data sets for training, to design training procedures, to prevent over-fitting or cheating, or to ensure the model generalises to new data. It turns out that building useful AI is very challenging, demanding a mix of heavy-duty data science, engineering, mathematics and infrastructure – searching a careers board for Machine Learning gives an insight into the diverse skills required. Even then there are no guarantees: driverless cars still crash occasionally, and Amazon recommends me My Little Pony play-sets.
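
To give a flavour of that unglamorous work, here is a hedged sketch of two of the habits mentioned above – holding out validation data the model never trains on, and stopping training before it starts memorising (over-fitting) the training set. The feature arrays are invented placeholders standing in for a properly prepared data set:

    # Placeholder data: in practice, preparing x and y is most of the work.
    import numpy as np
    import tensorflow as tf
    from sklearn.model_selection import train_test_split

    x = np.random.rand(1000, 20).astype("float32")   # stand-in features
    y = np.random.randint(0, 2, size=1000)           # stand-in labels

    # Hold out a validation split to measure generality to unseen data.
    x_train, x_val, y_train, y_val = train_test_split(x, y, test_size=0.2)

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
        tf.keras.layers.Dropout(0.5),   # a common guard against over-fitting
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])

    # Stop when *validation* loss stops improving, not training loss.
    early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss",
                                                  patience=3,
                                                  restore_best_weights=True)
    model.fit(x_train, y_train, validation_data=(x_val, y_val),
              epochs=100, callbacks=[early_stop])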

Commercially, many deployed systems still use the cutting edge of yesterday: logistic regression (used by a popular supermarket for loyalty card data), support vector machines and decision trees, to name a few – and with success. All are well-understood approaches, built to outperform hard-coded algorithms, but often with no more learning intelligence than an insect after you’ve trodden on it. This is because building real-world Deep Learning systems is hard, while much of the best expertise is closely guarded by academic-turned-commercial groups nestled inside big companies such as the internet giants. Indeed, such academics appear to be transferred like expensive football players. Set against this, Elon Musk and others are aiming to buck the trend by hiring prominent researchers for OpenAI, with its stated aim of “democratizing access to AI”.
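
For reference, “yesterday’s cutting edge” really is only a few lines with today’s tooling – a hedged sketch using scikit-learn, with invented placeholder features standing in for something like loyalty card data:

    # Classical models: logistic regression, an SVM and a decision tree.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import cross_val_score

    X = np.random.rand(500, 8)              # e.g. spend, visit frequency, basket size
    y = np.random.randint(0, 2, size=500)   # e.g. responded to an offer or not

    for name, clf in [("logistic regression", LogisticRegression()),
                      ("support vector machine", SVC()),
                      ("decision tree", DecisionTreeClassifier())]:
        scores = cross_val_score(clf, X, y, cv=5)
        print(name, "mean accuracy:", round(scores.mean(), 2))

Well understood, fast to train and easy to explain – which is precisely why they remain so widely deployed.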

The Mark 1 Perceptron machine, the first implementation of the perceptron (deep) algorithm, featuring 20×20 cadmium sulphide photocell pixels. Circa 1960. (Image: Wikipedia).

Curiously, the evolution of the world’s learning systems seems to mirror our own. Mobile, IoT and web services have provided multi-sensory input: machine retinas (Convolutional Neural Networks) now give vision to the 1960s-inspired brain-in-a-vat, the fully connected perceptron network. The goal has long been to create systems capable of decision making through true understanding in a human sense – this is the essence of Deep Learning. Many tech systems may now be at a tipping point, as suggested by the sharp increase in Machine Learning job posts. AI of this nature is just beginning to seep in through the cracks of innovation – giving rise to opportunity whilst posing a significant threat to those that lag behind. Either disrupt your industry with AI or risk being disrupted.

Many businesses have detailed data – be that unique impressions, retail loyalty card data, patient data, playlists, network load or a thousand other data points. Until now, offline human analysis or outsourced data mining have been the popular choices for data-driven optimisation, but these techniques have been rendered obsolete in a world of fledgling AI. Enter fully functional, aware, decision-making Deep Learning engines, built straight into the middle of the system. Suddenly expert systems become intelligent, proactive is favoured over reactive, and “within 3 working days” becomes 400 milliseconds. Fix your AI in stone and you’ve created an expert system in a fraction of the time it would take developers to build one by hand. Leave it learning and it will continuously adapt. Either way, it is a paradigm shift in system ability, in this author’s opinion.
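
The “machine retina feeding a brain-in-a-vat” arrangement described above translates almost literally into code. A hedged sketch, with illustrative layer sizes only – convolutional layers act as the retina, fully connected perceptron layers make the decision:

    import tensorflow as tf

    model = tf.keras.Sequential([
        # The "retina": convolutional feature detectors scanned across the image.
        tf.keras.layers.Conv2D(32, (3, 3), activation="relu",
                               input_shape=(64, 64, 3)),
        tf.keras.layers.MaxPooling2D((2, 2)),
        tf.keras.layers.Conv2D(64, (3, 3), activation="relu"),
        tf.keras.layers.MaxPooling2D((2, 2)),
        # The "brain-in-a-vat": fully connected perceptron layers on top.
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.summary()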

The specific flavour of AI that I’m concerned with is Deep Learning, defined by the presence of non-linear perceptron layers (also referred to as fully connected layers). With a small number of exceptions, large service companies don’t yet seem to have figured out how to expose the true capability of Deep Learning, and to exploit it commercially. A few have arrived at the party early, offering APIs – narrow windows – onto a multitude of problems, usually web-centric in nature. Such companies often employ world-leading Machine Learning specialists working on the cutting edge, but their creativity is bounded by JSON endpoints. Their tools may be great for improving the UX of an app, monitoring sentiment online or producing software ever more able to interface with humans, but they probably don’t solve the core business problems that companies actually need to solve. These challenges require custom artificial intelligence by their nature, short of the Holy Grail of HBaaS (Human Brain as a Service).

This is the opportunity that excites us here at Cambridge Consultants – to harness both Deep Learning and the creative inspiration of our brightest minds, applying new algorithms to long-standing commercial and industrial challenges. This work is experimental by nature, but it promises great reward: solving real-world challenges across a wide range of industries.