Deep learning made the headlines when AlphaGo, built by the UK-based DeepMind team, beat Lee Sedol, holder of 18 international titles, at the board game Go. Go is far more complex than games such as chess, where machines had previously crushed famous players: the number of potential moves explodes exponentially, so computers could not rely on the same search techniques that worked for chess. To learn Go, the computer had to play millions of games against itself, discovering new strategies that humans may never have considered.

Deep learning itself isn’t that new; researchers have been working on its algorithms for many years, refining the approach and developing new techniques. What has propelled it recently is the convergence of massively parallel processing, huge data sets, and demonstrably superior performance over traditional machine learning algorithms.

How does deep learning differ from traditional algorithms?

Let’s take a few examples. A credit scoring model based on logistic regression will typically use around ten to fifteen input parameters, such as age, income and time at address. More complex decision trees or neural networks used to detect fraud may use hundreds of parameters. Deep learning takes this to a whole new level and may use hundreds of thousands, or even millions, of parameters. This can only really work when there are thousands or even millions of examples with which to train the models.
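As a rough sketch of that difference in scale (the feature names and layer sizes below are invented purely for illustration, not taken from any real scoring model), compare the parameter count of a simple logistic regression scorer with that of even a modest fully connected image network:

# Illustrative only: feature names and layer sizes are made up for this sketch.

# A classic credit-scoring model: logistic regression over a handful of inputs.
credit_features = ["age", "income", "time_at_address", "existing_debt",
                   "employment_years"]           # typically 10-15 in practice
logistic_params = len(credit_features) + 1       # one weight per feature, plus a bias
print(f"Logistic regression parameters: {logistic_params}")

# A small image classifier: a 224x224 RGB image fed through two fully
# connected hidden layers into 10 classes. Even this toy layout has tens
# of millions of parameters.
layer_sizes = [224 * 224 * 3, 256, 128, 10]
deep_params = sum(n_in * n_out + n_out           # weights plus biases per layer
                  for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))
print(f"Deep network parameters: {deep_params:,}")

The point is not the exact numbers but the gap of several orders of magnitude, which is why deep models need so many training examples.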

The internet is an ideal place to find those examples. When you search for images of cats, dogs, trains and so on, it is probably a deep learning algorithm that has classified the images. Other uses extend to natural language processing, translation and facial recognition; Google and Facebook are known to be extensive users of these algorithms. Interestingly, humans were used to classify the initial images through techniques such as Captcha®, where the user confirms they’re human by identifying which images contain dogs, buildings, areas of water and so on. Each batch of images would include some known images, but also some that were unknown; once a few users agree on an unknown image, it can be marked as classified and the process repeated on new images. In this way, thousands of images can be quickly classified for use in algorithm training.
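As a very loose sketch of that consensus process (the agreement threshold and example votes here are invented, not taken from any real Captcha system), the bookkeeping can be as simple as tallying user answers per unknown image:

from collections import Counter

# Invented example: each unknown image accumulates labels from users who have
# already proved themselves on the known "control" images in the batch; an
# image is accepted once enough users agree on the same label.
AGREEMENT_THRESHOLD = 3

votes = {
    "img_001.jpg": ["dog", "dog", "dog", "cat"],
    "img_002.jpg": ["building", "water"],
}

def consensus_label(labels, threshold=AGREEMENT_THRESHOLD):
    """Return the agreed label if enough users chose it, otherwise None."""
    label, count = Counter(labels).most_common(1)[0]
    return label if count >= threshold else None

for image, labels in votes.items():
    result = consensus_label(labels)
    status = f"classified as '{result}'" if result else "needs more votes"
    print(f"{image}: {status}")

Once an image crosses the threshold it can join the labelled training set, and the cycle repeats on fresh unknowns.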

Consider also that the algorithm has no moral reasoning when it comes to human suffering (we’re a long way off Asimov’s Three Laws of Robotics). This means it doesn’t actually care whether you get your intended results or not!

What's next for deep learning?

Deep learning algorithms have been used in self-driving car trials and offer the promise of fewer casualties and fatalities on the roads. But in a world where nobody drives regularly, how would a machine cope when an accident is unavoidable?

Deep learning is now tackling jobs that had traditionally been thought of as uniquely human: driving, journalism, law, insurance underwriting and a whole range more.

However, it will also create new job functions, such as algorithm auditors and analysts who frame questions in a way the algorithm can understand. (I’m reminded of the sci-fi comedy series The Hitchhiker’s Guide to the Galaxy, in which the Ultimate Question was sought after a supercomputer revealed the Ultimate Answer to life, the universe and everything to be 42!)

This area is already developing fast and over the next few years it will do so at an exponential pace.

Find out more about deep learning and connect with me via Twitter @sukcag.

About Author

Colin Gray

Colin started his career training to be an actuary and holds a Certificate of Actuarial Techniques. Since moving to SAS, he has concentrated on the detection and prevention of fraud through the use of Analytics.
