Cognitive Computing - Part 1


Is cognitive computing an application of text mining?

If you have asked this question, you are not alone. In fact, lately I have heard it quite often. So what is cognitive computing, really? A cognitive computing system, as stated by Dr. John E. Kelly III, is one that has the ability to be trained to “… learn at scale, reason with purpose, and interact with humans naturally.”

What does that mean exactly? Perhaps the best-known example is IBM’s Watson winning Jeopardy!. Watson was able to understand the questions asked by a human, with all of the intonations, puns and expressions inherent in the asking of the questions and the questions themselves; search for the appropriate answer; assign its answer a confidence level that determined whether Watson would “buzz” in; and then provide the answer. Was it perfect? Of course not; just as there are no human experts who truly know everything in their fields, no cognitive computing system is perfect either. Every person, even the most knowledgeable expert, sometimes has to say, “I don’t know,” or “I think xyz, but I’m not completely sure.” The degree of certainty a human has about knowledge is hard to quantify and is based mostly on gut feeling, but the degree of certainty a cognitive computing system has for any answer is a confidence level. This can be thought of as a score that corresponds to the quality of the decision after the system has evaluated all of its options and information. Based on that number, the machine can decide whether to answer a question, or what the best answer is for a given question.
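The confidence-based decision described above can be sketched as a simple threshold rule. This is just an illustrative toy, not Watson’s actual scoring logic; the candidate answers, scores, and threshold below are invented for the example:

```python
# Toy sketch of confidence-based answering: score candidate answers,
# then "buzz in" only if the best score clears a confidence threshold.
# All candidates, scores, and the 0.7 threshold are invented for illustration.

def decide_to_answer(candidates, threshold=0.7):
    """candidates: dict mapping answer -> confidence score in [0, 1]."""
    best_answer = max(candidates, key=candidates.get)
    best_score = candidates[best_answer]
    if best_score >= threshold:
        return best_answer, best_score   # confident enough to buzz in
    return None, best_score              # effectively "I don't know"

scores = {"Toronto": 0.32, "Chicago": 0.81, "New York": 0.14}
answer, confidence = decide_to_answer(scores)
print(answer, confidence)  # Chicago 0.81
```

The key point is the explicit “I don’t know” branch: a low best score means declining to answer rather than guessing.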

The second part of cognitive computing systems is that these systems are able to learn. This promises great things for the future, because not only can everyone interact with the system without having to learn a specific language or a specific interface, but the machine also learns how to interact better with the user over time. How does such a system learn? Typically, a cognitive computing system is first fed information, usually in large amounts, as sets of inputs and outcomes. The inputs could include textual data, images, videos, speech, numbers, IoT data, etc. Next, as a human interacts with the system, grading the system’s responses, rating their accuracy, and providing feedback, the system refines its internal training model. To me, it’s analogous to a human studying for a test in a new subject, using tools such as textbooks and practice exams (Q&A, essentially) as study aids, and perhaps also working with a tutor.
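The grade-and-refine loop above can be sketched in a few lines. This is a deliberately minimal stand-in for a real training model; the `FeedbackLearner` class, its update rule, and the example answers are all invented for illustration:

```python
# Minimal sketch of learning from graded feedback: the system keeps a
# per-answer confidence that is nudged up or down each time a human
# grades a response. The update rule and data are invented for illustration.

class FeedbackLearner:
    def __init__(self, learning_rate=0.2):
        self.confidence = {}   # answer -> learned confidence in [0, 1]
        self.lr = learning_rate

    def respond(self, answer):
        return self.confidence.get(answer, 0.5)   # start out uncertain

    def grade(self, answer, correct):
        # Move confidence toward 1.0 when graded correct, toward 0.0 when not.
        current = self.respond(answer)
        target = 1.0 if correct else 0.0
        self.confidence[answer] = current + self.lr * (target - current)

learner = FeedbackLearner()
for _ in range(5):
    learner.grade("Paris", correct=True)    # repeated positive feedback
learner.grade("Prague", correct=False)      # one negative grade
print(round(learner.respond("Paris"), 2))   # 0.84
print(round(learner.respond("Prague"), 2))  # 0.4
```

Like the textbook-and-tutor analogy, the system’s behavior is shaped by accumulated feedback rather than by a fixed script.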

The third exciting component is that cognitive systems take advantage of what computers do really well. They can access and process enormous amounts of data very quickly, and the data does not have to be specially prepared for processing. Cognitive systems are built to handle unstructured data; this extends the sources of information for these systems far beyond the realm of traditional databases. The majority of data in the world is unstructured, which makes sense because unstructured data is the natural way humans communicate with each other. After all, structured data is an artificial construct, created so that analytical concepts could be applied to better understand the world. Given that unstructured data makes up the majority of the world’s information, that it will continue to expand in volume at high rates, and that a cognitive computing system can “read” and digest this huge corpus, the result can be powerful. Such a system can even “see” images in addition to “reading” text and “hearing” spoken language. Amazing! This addresses many of the challenges in text mining today by taking advantage of information that is often largely ignored.

So is Siri® a cognitive computing system?

Not quite, but it’s a start. What makes cognitive computing the next era of computing is that it is not a programmed computing system, the way personal computers, tablets, smartphones, and other gadgets are. It’s hard not to love Siri (after all, it can be fun to ask Siri such questions as “what is zero divided by zero?” or “what is the meaning of life?” – if you haven’t already asked one of these questions, take a break right now and try it), but Siri has been highly programmed, and provides scripted answers (I’m sure Apple employees had a lot of fun with this part of the development of Siri). It can learn a little from you, and most likely it will continue to evolve to learn even more from each individual user, but it is not truly a cognitive computing system. Some say it can be categorized as a “cognitive embedded capability,” which is important for cognitive computing. Any development efforts in the world of cognitive computing need to be embeddable in the technology and applications that users know and love, such as Siri being a part of the iPhone®. So while Siri has some capabilities that are on the road to a cognitive computing system, its emphasis is on its programmed functionality rather than on what it learns from a user. Instead of assessing information from multiple sources (including what it has been taught), offering hypotheses, and assigning confidence levels to potential answers, Siri is mostly programmed with canned responses or matches from its available information. After all, Watson beat former Jeopardy! champions, while Siri often cannot even find the nearby location to which I request directions.

Given all of this, what exactly can cognitive computing systems do today?

Stay tuned: This will be discussed in Part 2. Thanks for reading, and let us know your thoughts on cognitive computing. Will it change the world?
