Artificial intelligence: the good, the bad, and the ugly

When the likes of Elon Musk and Stephen Hawking go on record warning about the dangers of AI, it’s probably prudent to take notice. However, before rushing off into full panic mode, some definitions and perspective would be in order.

The type of artificial intelligence Musk and Hawking are referring to is known as Strong AI, or AGI (Artificial General Intelligence). This is the level at which a machine could readily pass itself off as indistinguishable from a human in cognitive, perceptual, learning, manipulative, planning, communication and creative functions: a thinking machine that can pass the Turing Test. We’ll close with some perspectives on Strong AI, but first let’s take a look at Weak AI, also known as Applied AI or Enhanced Intelligence (EI).

The good

Several methods, tools and approaches have been taken on the road to artificial intelligence, and SAS is all over this field: machine learning, analytical/Bayesian statistics, natural language processing, neural/deep neural networks, and cognitive computing, to name a few.

To see the difference between traditional software development approaches and machine learning, consider first how the earliest chess-playing computers were designed. You programmed in the rules of the game, the goal, and then layer upon layer of ‘If-Then-Else’ commands that captured the aspects of strategy gleaned from the game’s master players. The computer would then play at the level determined by the algorithmic representation of that domain knowledge, but no better. For the computer to up its game, you had to change the code, recompile, redeploy and rerun: a linear process that doesn’t scale well.
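
To make that contrast concrete, here is a minimal, hypothetical sketch of the hard-coded style (illustrative Python, not any actual chess engine): every bit of “knowledge” lives in explicit rules and weights a human wrote, and improving play means editing the code itself.

    # A hand-coded evaluation function in the old rule-based style.
    # All "knowledge" is fixed weights and if-then rules; the program
    # can never play better than the rules a human wrote into it.
    PIECE_VALUES = {'P': 1, 'N': 3, 'B': 3, 'R': 5, 'Q': 9}

    def evaluate(board):
        """Score a position from White's perspective.
        `board` is a list of (piece, color, square) tuples."""
        score = 0
        for piece, color, square in board:
            value = PIECE_VALUES.get(piece, 0)
            # One hard-coded heuristic among many: knights on the rim are dim.
            if piece == 'N' and square[0] in ('a', 'h'):
                value -= 0.5
            score += value if color == 'white' else -value
        return score

To teach this program anything new, a human edits the rules; nothing is ever learned from play.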

The machine learning and neural network approaches are illustrated in a short video clip of Google DeepMind’s DQN agent learning how to play Atari Breakout. Starting out, the algorithm knows only four things: the sensory input (the screen), the actions available in the environment (how to move the paddle), a measure of success (the score), and an objective (maximize future rewards). Not a single rule or tactic is programmed in; it learns those as it goes. From there, the software teaches itself how to play Breakout, eventually discovering the trick of tunneling through the bricks along the wall so the ball bounces behind them.
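
At the heart of that approach is the Q-learning update. Here is a minimal tabular sketch of the idea; the agent in the video approximates the Q function with a deep neural network rather than a table, and the state encoding and parameter values below are purely illustrative.

    import random
    from collections import defaultdict

    # Minimal tabular Q-learning: the agent starts with no rules at all
    # and improves only from (state, action, reward) experience.
    ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1  # learning rate, discount, exploration
    ACTIONS = ['left', 'stay', 'right']     # the paddle's possible moves
    Q = defaultdict(float)                  # Q[(state, action)] -> expected future reward

    def choose_action(state):
        # Epsilon-greedy: mostly exploit what has been learned, sometimes explore.
        if random.random() < EPSILON:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: Q[(state, a)])

    def update(state, action, reward, next_state):
        # Core Q-learning rule: nudge the estimate toward the observed
        # reward plus the best discounted value of the next state.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

Run that loop over millions of frames and the “strategy” emerges from experience rather than from code changes.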

Where is machine learning being applied?

  • Robotics that learn by doing rather than being specifically programmed for each task
  • Detecting satire in customer comments via text analytics (a toy sketch follows this list)
  • Personal agents that learn and understand your buying / entertainment preferences
  • Autonomous vehicles, self-landing airplanes and space shuttles, self-docking spacecraft
  • Patient diagnosis – not just chess-like expert systems, but learning and improving with each case
  • Speech recognition and language translation, without the non sequiturs and cultural faux pas
  • Management of smart cities: traffic and energy
  • Facial/visual recognition and other biometric applications
  • Smart implants: brain (to control Parkinson’s), pacemakers, insulin pumps, cochlear and retinal
  • Control of exoskeletons and prosthetic limbs
  • A DARPA project to reverse damage caused by brain injury with neuroprosthetics
  • Computer-aided interpretation of medical imaging
  • And on a lighter note, Roombas and robotic pets
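
To make one of those items concrete, here is a minimal sketch of the satire-detection idea using a standard text-classification baseline: TF-IDF features plus logistic regression in scikit-learn. The comments and labels are invented for illustration; real satire detection requires far richer data and features.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy labeled data (invented for illustration): 1 = satirical, 0 = sincere.
    comments = [
        "Oh sure, the third outage this week. Truly world-class reliability.",
        "Great product, arrived on time and works as described.",
        "Five stars for teaching me patience while I wait on hold for hours.",
        "The support team resolved my issue quickly. Thanks!",
    ]
    labels = [1, 0, 1, 0]

    # TF-IDF features plus logistic regression: a standard text baseline.
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(comments, labels)

    print(model.predict(["Wow, another 'minor' update that deleted my settings."]))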

That’s a pretty cool list, only partial and still growing, of both recent accomplishments and a peek at what lies in our near future. It can’t help but echo Ken Jennings’ Final Jeopardy answer upon losing to IBM’s Watson: “I, for one, welcome our new computer overlords.”

The bad

The fear over Strong AI is, of course, that of runaway, uncontrollable sentient machines: biological intelligence has outlived its usefulness, and the robots take the appropriate steps to disinfect their environment of the human virus.

But you don’t have to enter the realm of science fiction to find wickedness of this magnitude. A little common sense suggests that long before sentient machines start designing and building ever more powerful replicas of themselves, actual humans will either deliberately or accidentally program much weaker AI to perform equally nefarious deeds. Battlefield droids and weaponized biologics are under development today. The harm and destruction that cyber criminals will be able to realize as they hack into our increasingly complex, interconnected and interdependent infrastructure, systems, devices, economies and society is already an ongoing concern, and one that will only get worse.

Personally, I would consider it a great success were humanity to survive to the point where it needed to concern itself over self-conscious, self-replicating machines bent on global, nay, galactic, domination. It’s not the machines that should be our primary concern, but the humans that program, run and hack them, with AI likely playing a central role in assuring a secure future for humankind.

The ugly

So how likely is a future scenario where we need to concern ourselves with what machines think about us? For this year’s Edge question, “What do you think about machines that think?”, editor John Brockman gathered the opinions and judgments of several hundred of the greatest thinking machines of our species. These assessments run the gamut from “impossible, ever” to the inevitable emergence of a “singularity” between man and machine before the end of this century, and everything in between.

I’m no anthropocentric chauvinist; I don’t hold the human brain and consciousness to any standard that transcends the physical laws of our universe. But after considering all the arguments, I come away persuaded that several of them make powerful, compelling cases that Strong AI is not in the cards:

  • The human brain is not a Turing machine; it is not merely algorithmic, as all AI constructs so far have been. There are only countably many computable functions for AI to address and solve, but uncountably many non-computable functions (a short cardinality sketch appears after this list). Put another way, there is a higher order of infinity of mathematical truths that can never be proven, but can nonetheless be understood by the human mind.
  • The subjective experience of self is something we are far from understanding, let alone creating within a machine. Qualia such as color do not exist in nature; there are only 700 nm electromagnetic waves, which the brain interprets as RED.
  • The only mechanism known so far that can create a subjective experience is evolution, and many suspect that it will take the same on our part, directed biological evolution, for humans to build a self-aware intelligence. The human brain comprises some 100 billion neurons, each making 10,000 connections, for a total of 10^15 (one quadrillion) synapses, all in a compact three-pound package that consumes just 20 watts of energy. A single eukaryotic cell is vastly more complex than our most powerful CPU chip. Moore’s Law ain’t gonna get us there.
  • Meaning and metaphor. There is a difference, a big difference, perhaps an insurmountable gulf, between manipulating symbols and grasping their meaning. Consider a piece I wrote upon the fifth anniversary of my brother’s death, some years ago now:

“A permanent vase-like illusion: your brother, or emptiness? But time has a way of regenerating the devastated landscape.  A visit to his memory now finds that the birds and blossoms have returned, a melancholy meadow where it is forever late summer.”

What is a Turing machine to make of melancholy meadows where it is forever late summer? Metaphors are neither true nor false – that is not how we interpret them. Will a machine ever experience revulsion at the image of a swastika or a burning cross?  What sort of algorithm would it take to comprehend the gaps and relationships between symbol, language and meaning in Magritte’s “The Treachery of Images” (“Ceci n'est pas une pipe”)?
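
For readers who want the cardinality claim from the first bullet spelled out, here is the standard counting argument, sketched in LaTeX (textbook material, not original to this post):

    % Turing machines are countable: each has a finite description over a
    % finite alphabet, so the set of machines (and hence of computable
    % functions) injects into the naturals:
    \[
      |\{\text{Turing machines}\}| \le |\Sigma^{*}| = \aleph_0 .
    \]
    % But the set of all functions from the naturals to {0,1} is strictly
    % larger, by Cantor's diagonal argument:
    \[
      |\{0,1\}^{\mathbb{N}}| = 2^{\aleph_0} > \aleph_0 ,
    \]
    % so "almost all" such functions are non-computable by any machine.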

The Doomsday Clock currently sits at three minutes till midnight without any assistance from Evil AI – humanity has managed that all by itself. A noble planetary goal for the 21st century would be to see it moved back to before noon, and I, for one, welcome the help of Weak AI, our servant and partner in all its forms and applications, in urging it in that direction.

About Author

Leo Sadovy

Marketing Director

Leo Sadovy currently manages the Analytics Thought Leadership Program at SAS, enabling SAS’ thought leaders to be a catalyst for conversation and to share a vision and opinions that matter, via excellence in storytelling that addresses our clients’ business issues. Previously at SAS, Leo handled marketing for Analytic Business Solutions such as performance management, manufacturing and supply chain. Before joining SAS, he spent seven years as Vice President of Finance for a North American division of Fujitsu, managing a team focused on commercial operations, alliance partnerships and strategic planning. Prior to Fujitsu, Leo was with Digital Equipment Corporation for eight years in financial management and sales. He started his management career in laser optics fabrication for Spectra-Physics and later moved into a finance position at the General Dynamics F-16 fighter plant in Fort Worth, Texas. He has a Master’s in Analytics, an MBA in Finance and a Bachelor’s in Marketing, and is a SAS Certified Data Scientist and Certified AI and Machine Learning Professional. He and his wife Ellen live in North Carolina with their engineering graduate children, and among his unique life experiences he can count a singing performance at Carnegie Hall.
