We live in exciting times. Our relationships with machines, objects and things are quickly changing.
Since humankind lived in caves, we have imposed our will on passive tools with our hands and our voices. Our mice and keyboards do exactly as we tell them, and devices like the Amazon Echo can help us with simple tasks, like turning on lights, or more complex ones, like answering questions with analytics.
But with the rise of artificial intelligence (AI), the tides might turn. Can machines morph from passive objects into active participants that weave themselves into the fabric of our lives? Will machines drive us, or will we drive the machines? Will objects inform us of what they have done on our behalf, or will we continue to tell objects what to do? Could we become mere pawns in a life orchestrated by autonomous intelligence, as everything around us becomes smarter?
How close are we to such a reality?
The state of AI today
If you are worried about the machines taking over the world, you can sleep soundly. It will not happen based on the technology currently in use.
The trendy thing is to label as AI anything that does something remotely clever or unexpected, but in reality it is not AI. My calculator is better at arithmetic than I will ever be – it is not AI. A decision tree is not AI. An extra clause in an SQL query is not AI.
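To make the point concrete, here is a hypothetical loan-approval "decision tree" (the function name, thresholds and scenario are invented for illustration). It may look clever from the outside, but it is nothing more than fixed, hand-written rules – no learning, no adaptation, no intelligence:

```python
def approve_loan(income: float, credit_score: int) -> bool:
    """A hand-coded decision tree: deterministic if/else rules, not AI."""
    if credit_score >= 700:
        return True
    if credit_score >= 600 and income > 50_000:
        return True
    return False

# Identical input always produces the identical output, every time.
print(approve_loan(income=40_000, credit_score=720))  # True
print(approve_loan(income=40_000, credit_score=650))  # False
```

However many branches such a tree has, its behavior is fully specified in advance by whoever wrote the rules.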
We have seen incredible advances in algorithms that perform tasks a human could do with stunning accuracy. Until recently we thought the game of Go could not be mastered by a computer, and now a machine outperforms the best human players. In health care, algorithms can detect forms of cancer on medical images as well as radiologists can – something life-changing.
These algorithms have superhuman abilities because they do their work reliably, accurately, repeatedly and around the clock. Yet we are far from creating machines that can think or behave like a human.
Current AI systems are trained to perform a human task in a clever, computerized way, but they are trained to do one task – and one task alone. The system that can play Go cannot play solitaire or poker, and it will not acquire skills to do so. The software that drives an autonomous vehicle cannot operate the lights in your home.
This does not mean that this form of AI is not powerful. It has the potential to transform many industries – maybe every industry. But we should not get ahead of ourselves in terms of what can be accomplished. Systems that learn in a supervised, top-down fashion based on training data cannot grow beyond the contents of the data; they cannot create or innovate or reason.
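The point that supervised systems cannot grow beyond their training data can be sketched with a toy example (the data and labels here are invented for illustration). A one-nearest-neighbour classifier – a standard supervised technique – can only ever predict labels it has already seen; no input, however unusual, makes it invent a new answer:

```python
def predict(train_x, train_y, x):
    """1-nearest-neighbour: return the label of the closest training point.
    The output is always drawn from train_y -- never anything new."""
    nearest = min(range(len(train_x)), key=lambda i: abs(train_x[i] - x))
    return train_y[nearest]

# Toy training set: the model has only ever seen "low" and "high".
train_x = [1.0, 2.0, 8.0, 9.0]
train_y = ["low", "low", "high", "high"]

print(predict(train_x, train_y, 1.5))    # "low"
print(predict(train_x, train_y, 100.0))  # "high" -- it cannot say "unknown"
```

The same boundedness holds, in a less obvious form, for far more sophisticated supervised models: their outputs are interpolations over what the training data contained.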
The trust leap
Even if algorithms become intelligent, we do not have to let them run our lives. They can remain a decision support system. The ultimate trust leap is to let algorithms make decisions on your behalf.
But imagine if algorithms were autonomous. I believe that if we accept autonomy, then we will be ready to accept true AI. If an algorithm can make reliable, unbiased decisions that are demonstrably in your best interest in the long run, would you be comfortable handing over the reins and letting it make decisions without your input?
How well do we expect machines to perform when we let them loose? How quickly do we expect them to learn on the job? And where do they acquire morals along the way?
If these questions make you uncomfortable, you are not alone. I would prefer to be killed by my own stupidity rather than by the codified morals of a software engineer or the learned morals of an evolving algorithm.
The illusion of intelligence is all that we can handle, and it is all that we have to handle for now.
We want to get tricked by the machine, in a clever way. The rest is hype.
Preparing for the future
Is today’s form of AI intelligent? I argue that it is not.
Intelligence requires some form of creativity, innovation, intuition, independent problem solving and sentience. The systems we are building with deep learning cannot have these characteristics. I do not want to put a time frame on when AI will be intelligent. We thought we were close decades ago, and that machines would be acting and thinking like humans by now – but they are not. The technology we have today still cannot solve this problem.
There must be a disruptive technology shift to get us to true AI. I do not think we have found the solution yet – but we are looking for it.