Where in your business process can analytics and AI play a contributing role in enhancing your decision-making capability? At the information interpretation stage. As a framework for understanding where analytics and AI opportunities may arise, the simple diagram below illustrates the relationships between data, information and knowledge, and how you get from one end to the other. The key is how you come to understand the arrows (the verbs) that link the three primary concepts (the nouns).
Information is organized data. The same data can be organized differently to yield different information sets. Sorting your customer data by zip code yields a different information set than by age, gender, income, or name, with each sort/filter useful in its own right for different purposes.
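A trivial sketch of the point in plain Python, using invented customer records: the same three rows, organized three different ways, produce three different information sets.

```python
# Hypothetical customer records (invented for illustration).
customers = [
    {"name": "Ana",   "zip": "30329", "age": 52, "income": 88000},
    {"name": "Ben",   "zip": "10001", "age": 34, "income": 61000},
    {"name": "Carla", "zip": "30329", "age": 29, "income": 73000},
]

# Same data, three organizations, three information sets.
by_zip = sorted(customers, key=lambda c: c["zip"])         # regional mailings
by_age = sorted(customers, key=lambda c: c["age"])         # demographic cohorts
by_income = sorted(customers, key=lambda c: c["income"],   # high-value targeting
                   reverse=True)
```

Each ordering answers a different business question, even though no new data was collected.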
Organizing your data
We all know the basic data sources and types: transactional, streaming, audio/visual, social/web/internet, machine, flat files/reports, databases/lakes/warehouses, third-party, etc. This data is often already organized in some form or fashion, but likely not in a manner that can be readily consumed by analytics and AI, which work on deliberately organized information. To that end, the raw data is transformed via well-known tasks like sort, filter, transpose, split, standardize, impute, append, join and compute. The resulting information can be an output to a data warehouse or data mart, to another report, a graph/visualization, or a BI tool.
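A few of those transformation tasks (filter, impute, join, compute) can be sketched in plain Python; the sales records and region lookup table are invented for illustration, and a real pipeline would use an ETL tool or a library such as pandas.

```python
# Invented raw records, with the flaws real data typically has.
raw_sales = [
    {"cust_id": 1, "amount": 120.0},
    {"cust_id": 2, "amount": None},   # missing value to impute
    {"cust_id": 3, "amount": 300.0},
    {"cust_id": 3, "amount": -5.0},   # invalid record to filter out
]
regions = {1: "East", 2: "West", 3: "East"}  # lookup table to join

# Filter: drop records that fail a validity rule.
valid = [r for r in raw_sales if r["amount"] is None or r["amount"] >= 0]

# Impute: replace missing amounts with the mean of observed amounts.
observed = [r["amount"] for r in valid if r["amount"] is not None]
mean_amount = sum(observed) / len(observed)
for r in valid:
    if r["amount"] is None:
        r["amount"] = mean_amount

# Join: append the region from the lookup table.
for r in valid:
    r["region"] = regions[r["cust_id"]]

# Compute: aggregate into an information set a BI tool could consume.
totals = {}
for r in valid:
    totals[r["region"]] = totals.get(r["region"], 0.0) + r["amount"]
```

The output (`totals` by region) is the "deliberately organized information" that downstream analytics can consume.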
In the end, the quality of the resulting information is less dependent on the transformation techniques than on the quality of the data, hence the unqualified requirement to focus on and allocate sufficient resources and processes towards data quality, data governance and data integration. Artificial intelligence projects become doubly complicated once you introduce more than one data silo – while the downstream algorithm is largely unaffected, getting information to the algorithm becomes exponentially more difficult.
Who gets to interpret data?
The final step in turning information into knowledge is interpretation. What does the information mean? Until quite recently this function was the sole purview of humans. We would read the numbers, the gauge, the report or the graph, and decide if sales were improving or if the boiler was about to blow apart. We’re quite good at interpreting information and assigning meaning – so good in fact that we will often assign meaning to entirely arbitrary information sets (e.g. astrology).
But now, with analytics and AI, we can effectively offload much of the interpretation task, and we can derive interpretations (knowledge) that are better, faster and cheaper than can be done by a human.
You might think you see a seasonal pattern in your sales data, but forecasting analytics can break that down into level, trend and seasonality and use that to more accurately forecast future sales. Your experience tells you that certain market segments show consistent product attribute preferences, whereas clustering techniques can segment the entire market across all products and attributes. You might think you can efficiently allocate resources with pen, paper and spreadsheet, but optimization analytics can literally optimize delivery routes or staff schedules much more effectively. And now with AI for computer vision and text analytics we have analytics that can interpret visual scenes and speech to rival, and in some cases exceed, human capabilities in these areas.
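As a minimal sketch of the forecasting example, the toy decomposition below recovers level, trend and seasonality from an invented quarterly series, using an ordinary least-squares line for trend and per-position averages for seasonality. Production forecasting would use exponential smoothing or ARIMA from a statistical package; this only illustrates the idea of breaking the pattern into components.

```python
# Invented series built as level + trend*t + seasonal[t % period],
# so the components are exactly recoverable.
series = [9, 13.5, 8, 12.5, 11, 15.5, 10, 14.5, 13, 17.5, 12, 16.5]
period = 4
n = len(series)

# Trend: slope and intercept of a least-squares line through the series.
t_mean = (n - 1) / 2
y_mean = sum(series) / n
slope = (sum((t - t_mean) * (y - y_mean) for t, y in enumerate(series))
         / sum((t - t_mean) ** 2 for t in range(n)))
level = y_mean - slope * t_mean

# Seasonality: average detrended value at each position in the cycle.
detrended = [y - (level + slope * t) for t, y in enumerate(series)]
seasonal = [sum(detrended[i::period]) / len(detrended[i::period])
            for i in range(period)]

# Forecast the next full cycle from the recovered components.
forecast = [level + slope * t + seasonal[t % period]
            for t in range(n, n + period)]
```

Where an eyeballed "seasonal pattern" is a hunch, the decomposition quantifies each component and extends it forward.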
The Interpretation Process
Applying analytics and AI to information involves answering four main questions:
- Where in your process do you currently have human-based interpretation that could be augmented or better performed by analytics and AI? (i.e. better, faster, and/or cheaper)
- What’s the best / appropriate analytical methodology to employ?
- What upstream data management challenges do you need to overcome in order to support your chosen AI/analytics interpretation?
- Which steps in the process still require human intervention?
When thinking about human interpretation, don't overlook your own requirements for machine/AI interpretability. How critical is it for you to understand how the underlying algorithms arrive at their conclusions? In some cases the fact that they do work, and on average better than the comparable human performance, may be enough. The current discussion around the safety of autonomous vehicles highlights this issue quite well. That the ethical issues of who or what such a vehicle might hit in an extreme situation have not all been ironed out is not, by itself, a good justification for discarding a system that is overall safer than a human driver.
In other situations, such as financial regulation, the why and the how can be just as important as the what. You don’t necessarily want to regulate an already complex system with another poorly understood process.
The appropriate tool and methodology depends on how well YOU need to understand the interpretation. Keep in mind that while you can always ask a human how they came to their decision, you can’t always trust their response – we are not necessarily good judges of our own internal decision processes.
This segues into the final issue of weak links in complex systems, where we run into a phenomenon known as normal accidents: the system becomes complex beyond human understanding, to the point where deviations, breakdowns and accidents become inevitable. Three Mile Island and Chernobyl come to mind. As does Soviet lieutenant colonel Stanislav Petrov, the “Man Who Saved the World,” who declined to initiate a retaliatory nuclear strike when all the indicators showed that America had launched its missiles – signals later determined to be sunlight reflecting off high-altitude clouds, misread by the satellite early-warning system.
The lesson here is that tightly-coupled interpretation processes within larger systems can be accidents-in-waiting. Tightly-coupled systems are all the rage among efficiency experts, and in our digital transformation drive to link everything seamlessly, we are prone to constructing systems so complex that we cannot reasonably understand and predict all the interactions and feedback loops.
For the time being, there will be plenty of roles where humans continue to interpret information better, faster and cheaper than analytics and AI, other situations where human interpretation can be augmented although not replaced by AI, and others still where humans are better employed interpreting information at a higher level, after it has first been pre-processed, as it were, by analytics.
It’s not for nothing that SAS’ tagline is “The Power to Know.” And while knowledge may be the end-point of this process, the power in that knowledge ultimately comes from putting it to use: execution, making a decision. Knowing, and extracting value from that knowledge, are two different things, the latter dependent on your culture and your enterprise framework for action. Value comes from getting that knowledge into the hands of the decision makers on a timely, user-friendly basis. Better decisions, faster - that's the power in the knowledge.
Watch the webinar: Implementing AI Systems with Interpretability, Transparency and Trust