Text analytics through linguists’ eyes: When is a period not a full stop?

~ This article is co-authored by Biljana Belamaric Wilsey and Teresa Jade, both of whom are linguists in SAS' Text Analytics R&D.

When I learned to program in Python, I was reminded that you have to tell the computer everything explicitly; it does not understand the human world of nuance and ambiguity. This is as valuable a lesson in text analytics as in programming.

When I share with new acquaintances that we have a team of linguists at our analytics company, they are often puzzled as to what our job entails. I explain that we use our scientific understanding of language to ensure that the computer interprets the symbols of human language correctly; for example, what a word is or where a sentence ends. You might think these are easy tasks; after all, even young children have answers to these questions. But, in fact, teaching a computer the seemingly simple task of where a sentence ends across a wide range of human language texts quickly becomes complex, because a period is not always a full stop.

Take, for example, abbreviations like “Mr.” and “Mrs.” in English, “Dipl.-Ing.” in German, “par ex.” in French, “г.” in Russian, etc. In all of these cases and across most languages, the period does not necessarily signify the end of the sentence. Instead, it means information has been left out that we, as humans, can guess from context: “Mr.” really means “mister,” “Mrs.” refers to a married woman (did you know it is short for “mistress”?), “Dipl.-Ing.” stands for “Diplom-Ingenieur” (an engineering degree), “par ex.” stands for “par exemple” (“for example”) and “г.” most often stands for “год” (“year”) or “город” (“city”). You might think telling the computer to ignore the period in these cases is a good way to avoid interpreting the period as the end of the sentence. But that won’t work everywhere – just consider the first sentence of this paragraph, where the period comes after the abbreviation “etc.” but it also doubles as a sentence ender!
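The abbreviation problem can be made concrete with a deliberately naive sketch (Python here, purely for illustration, not SAS code; the abbreviation list is a tiny, hypothetical sample, and real tokenizers use far larger, language-specific inventories):

```python
# Naive sentence splitter: treat a period as a full stop unless the
# token it ends is a known abbreviation. The set below is a tiny,
# illustrative sample; real systems need per-language inventories.
ABBREVIATIONS = {"mr.", "mrs.", "dr.", "etc.", "e.g.", "i.e."}

def split_sentences(text):
    sentences, current = [], []
    for token in text.split():
        current.append(token)
        if token.endswith((".", "!", "?")) and token.lower() not in ABBREVIATIONS:
            sentences.append(" ".join(current))
            current = []
    if current:
        sentences.append(" ".join(current))
    return sentences

print(split_sentences("Mr. Smith arrived. He was late."))
# → ['Mr. Smith arrived.', 'He was late.']
```

Note that this sketch still gets the hard case wrong: a sentence that genuinely ends in “etc.” will be merged with the next one, which is exactly the ambiguity described above.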

The situation is no less complex with numerals. In some parts of the world, including the US, South Asia and Australia, periods are used to separate the decimals from the integer and commas are used to separate thousands, for example: “100,000.25.” But in other parts of the world, including Europe and most of South America, convention dictates that the roles of the period and comma are reversed: Commas are used for decimals whereas periods separate thousands, for example: “100.000,25.” In these cases, the entire numeral needs to be interpreted as one unit, and thousands of units of currency might be at stake.
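A minimal sketch of locale-aware numeral interpretation (Python, illustrative only; the separator parameters are assumptions you would configure per locale):

```python
def parse_number(text, decimal_sep=".", thousands_sep=","):
    # Drop grouping separators, then normalize the decimal separator
    # so the whole numeral is interpreted as one unit.
    return float(text.replace(thousands_sep, "").replace(decimal_sep, "."))

print(parse_number("100,000.25"))                                      # US convention
print(parse_number("100.000,25", decimal_sep=",", thousands_sep="."))  # European convention
# both → 100000.25
```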



Why I’m not worried by double negatives

Double negatives seem to be everywhere; I have noticed them a lot in music recently, from Pink Floyd’s "We don't need no education" to Rihanna’s "I wasn’t looking for nobody when you looked my way". My own favourite song with a double negative is Faithless’s "I can't get no sleep".

This last one is maybe the most appropriate, because I have been thinking about double negatives a lot recently. The Oxford Dictionary definition of a double negative is as follows:

  1. A negative statement containing two negative elements (for example "he didn't say nothing")
  2. A positive statement in which two negative elements are used to produce the positive force, usually for some particular rhetorical effect, for example "there is not nothing to worry about"

However, this definition misses the point that their usage can be extremely nuanced. Double negatives are often used in litotes, a figure of speech in which an understatement is used to emphasise a point by stating a negative to affirm a positive. Context is everything: the phrase "not bad" can be used to indicate a range of opinions from just average to brilliant. Double negatives can also diminish the harshness of an observation: "the service wasn’t the best" might be used by some people as a politer way of saying "the service was bad".

I've been thinking about double negatives and negation in language because I have recently worked on several projects with business-to-consumer companies, analysing their complaint and NPS survey feedback data. Maybe it's a particularly English trait, but my countrymen seem to use negation in this type of survey feedback a lot. An airline customer will say "their meal wasn't bad" or a bank customer is "not worried that their interest payment is delayed". Of course they negate their positives too, so the airline customer "is not pleased they had to queue at check in" and the bank customer "isn't happy with the mistake setting up a power of attorney".

This type of language usage can sometimes cause a problem for some text analysis solutions, because the primary approach they utilize is to summarize the words in documents mathematically. For example, SAS Text Analytics uses a “bag-of-words” vector representation of the documents. This is a powerful statistical approach that uses term frequency, but it ignores the context of each term. So if the same words are used in very different contexts, such as negations, there is a risk that documents will be classified under the wrong topic.

Fortunately, SAS Text Analytics also provides an extremely effective feature, which allows you to treat terms differently according to context. The approach is described further in this white paper, “Discovering What You Want: Using Custom Entities in Text Mining”.

I used this approach to define a library of approximately 6,500 positive and negative words and treated these differently if they were negated. You can almost think of this as a new, user-defined ‘part of speech’. This gives more information to the mathematical summarisation of the documents and ultimately discovers more useful topics with fewer false positives.
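As a rough illustration of the idea (not the custom-entities mechanism described in the white paper, and not SAS code), negated terms can be rewritten into distinct tokens before the bag-of-words is built, so that "bad" and a negated "bad" count as different terms. The word lists here are tiny, hypothetical stand-ins for the ~6,500-term library:

```python
import re

# Tiny, invented stand-in lists -- purely for illustration.
NEGATORS = {"not", "no", "never", "isn't", "wasn't", "don't", "can't"}
SENTIMENT_TERMS = {"bad", "good", "pleased", "happy", "worried"}

def tag_negated_terms(text):
    """Prefix sentiment terms with NOT_ when they follow a negator,
    so 'wasn't bad' and 'bad' become distinct bag-of-words features."""
    tokens = re.findall(r"[\w']+", text.lower())
    tagged, negate = [], False
    for tok in tokens:
        if tok in NEGATORS:
            negate = True
            tagged.append(tok)
        elif tok in SENTIMENT_TERMS and negate:
            tagged.append("NOT_" + tok)
            negate = False
        else:
            tagged.append(tok)
    return tagged

print(tag_negated_terms("The meal wasn't bad"))
# → ['the', 'meal', "wasn't", 'NOT_bad']
```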

I’m embarrassed to admit I only speak English, but interestingly I learnt whilst researching this blog post that double negation is not used the same way across different languages. For example, it is extremely uncommon in Germanic languages; in some languages, like Russian, it is required whenever anything is negated; whereas in other languages double negatives actually intensify the negation. However, aside from handling negations, this hybrid approach combining linguistic rules with algorithms can be used in lots of other ways too. For example, dealing with homonyms (same pronunciation, same spelling, different meaning, e.g. “lean” (thin) vs “lean” (rest against)) or heteronyms (different pronunciation, same spelling, different meaning, e.g. “close” (shut) vs “close” (near)), if these are used a lot in your corpus of documents.

Possibly the most beneficial use of all is to differentiate between language usage that may be specific to your corpus of documents. For example, an insurance assessor may be taking notes about an accident and write:

“… the customer works on an industrial estate and would like us to assess the damage on the car there”

or

“ … the accident happened early evening on the southern industrial estate”

In this example, I could build linguistic rules to identify the time and location of accidents. This may improve the accuracy of models that detect insurance fraud, if there is a correlation between crash-for-cash accidents, locations and times. There are lots of very specific examples like this, where this hybrid text mining approach, combining linguistic rules with machine learning, can significantly improve our text analysis results.
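To sketch what such rules might look like (in Python regular expressions, purely for illustration; real rules would be written in the text analytics tool's own rule syntax and be far more thorough, and the patterns below are invented):

```python
import re

# Hypothetical patterns for time-of-day and location phrases.
TIME_PATTERN = re.compile(r"\b(early|late)?\s?(morning|afternoon|evening|night)\b")
LOCATION_PATTERN = re.compile(r"\b(?:on|at|near) the ([\w\s]+?(?:estate|road|street|junction))\b")

def extract_accident_facts(note):
    """Pull a time expression and a location out of free-text claim notes."""
    time_match = TIME_PATTERN.search(note)
    loc_match = LOCATION_PATTERN.search(note)
    return {
        "time": time_match.group(0).strip() if time_match else None,
        "location": loc_match.group(1) if loc_match else None,
    }

note = "the accident happened early evening on the southern industrial estate"
print(extract_accident_facts(note))
# → {'time': 'early evening', 'location': 'southern industrial estate'}
```

The extracted fields are then structured variables that can feed a fraud model alongside the rest of the claim data.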


Behind the scenes in natural language processing: Overcoming key challenges of rule-based systems

A while ago, I started this series on natural language processing (NLP), and discussed some of the challenges of computers interpreting meaning in human language based on strings of characters. I also mentioned that today’s NLP systems can do some amazing things, including enabling the transformation of unstructured data into structured numerical and/or categorical data.

Why is this important? Because once the key information has been identified or a key pattern modeled, the newly created, structured data can be used in predictive models or visualized to explain events and trends in the world. In fact, one of the great benefits of working with unstructured data is that it is created directly by the people with the knowledge that is interesting to decision makers. Unstructured data directly reflects the interests, feelings, opinions and knowledge of customers, employees, patients, citizens, etc.

For example, if I am an automobile design engineer, and I have ready access to a good summary of what customers liked or didn’t like about last year’s vehicle and competitor vehicles in a similar class, then I have a better chance of creating a superior and more popular design this year.

My previous article, “Behind the scenes in natural language processing: Is machine learning the answer?,” mentioned that the two most-common approaches to NLP are rule-based (human-driven) systems or statistical (machine-driven or machine learning) systems. I began the discussion of rule-based systems by describing some benefits. But these systems also pose some challenges, which I will elaborate on here.



Event Stream Processing with Text Analytics

Is text analytics part of your current analytical framework?

For many SAS customers, the answer is yes, and they've uncovered significant value as a result.

As text data continues to explode both in volume and the rate at which it's being generated, SAS Event Stream Processing can be used to analyze not only high-velocity structured data, but also the text (by using text models in stream).

In some cases, standard batch processing delivers sufficient analytical insight for organizations. Yet what about those other situations where taking action immediately, as an event is happening, is critical? These sub-second actions and real-time alerts can save or make millions of dollars for a company.

Below, I describe techniques that highlight streaming analysis of text data (many of these elements apply to structured data as well). My hope is this will trigger ideas and use cases for you to think about within your company.

1.)   Data Quality and Cleansing

Anyone who has worked with social data (or any text data for that matter) understands that it can be cluttered with noise, encoding issues, abbreviations, misspellings, etc. If not corrected, this can lead to inaccurate results and even processing errors. So why not deploy Event Stream Processing to correct and transform variables before they hit your database? As you’d expect, not every data quality issue can be resolved on the frontend of data collection, but by applying known corrections upfront, you have the ability to enrich your data and enhance the value of data sitting within your database.
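As a rough sketch of this kind of upfront cleansing (Python, illustrative only; this is not an actual Event Stream Processing flow, and the correction table is an invented sample):

```python
import re

# Known corrections applied as events stream in; this table is an
# invented sample -- real deployments maintain much larger mappings.
CORRECTIONS = {"u": "you", "thx": "thanks", "pls": "please"}

def cleanse_event(text):
    """Normalize whitespace, strip control-character debris, and expand
    known abbreviations before the record reaches the database."""
    text = re.sub(r"[\x00-\x1f]", " ", text)   # drop encoding/control debris
    text = re.sub(r"\s+", " ", text).strip()   # collapse runs of whitespace
    tokens = [CORRECTIONS.get(t.lower(), t) for t in text.split()]
    return " ".join(tokens)

print(cleanse_event("thx  for\tthe   help,\x07 pls reply"))
# → thanks for the help, please reply
```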

Image 1: Diagram of an Event Stream Processing flow, integrating text analytics, pattern detection, and predictive modeling.

2.)   In-Stream Sentiment Analysis and Categorization

SAS has a powerful set of text analytics technologies that customers have been using for over 10 years. In the latest release of SAS Event Stream Processing (version 3.1, which comes out in May), customers who currently license SAS Sentiment Analysis, SAS Content Categorization, or SAS Contextual Analysis can now deploy these models against streaming data. This opens a window of opportunity to tag unstructured data on the fly (such as sentiment scoring, classifying documents, or extracting entities). These results then serve as inputs to event stream models for additional scoring, or can be used to generate alerts or prompts, or to take a specific action. To learn more about SAS Text Analytics, check out SAS Contextual Analysis, SAS Text Miner, and SAS Sentiment Analysis.

3.)   Embedded Modeling

In text analytics, the goal is to convert unstructured data into some structured format, such as flags, scores, categories, and entities. For many applications, these new variables are most valuable when they are used to enhance predictive models, trigger alerts, create risk scores, enrich content, and ultimately to track and report. Through embedded analytics, SAS DS2 code (as well as functions in C++, XML, and regular expressions) can be deployed within event stream processing flows, which means real-time scoring of both structured data and unstructured text using regression models, decision trees, and more.
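To make the scoring idea concrete, here is a hedged sketch (Python for readability; an actual deployment would use DS2 or C++ as described above) of a logistic regression score that combines a structured field with a text-derived sentiment flag. The coefficients and variable names are invented purely for illustration:

```python
import math

# Invented coefficients -- in practice these come from a trained model.
COEFFS = {"intercept": -2.0, "claim_amount": 0.0001, "negative_sentiment": 1.5}

def risk_score(claim_amount, negative_sentiment):
    """Logistic score per streaming event: a structured field
    (claim_amount) plus a text-derived flag (negative_sentiment)."""
    z = (COEFFS["intercept"]
         + COEFFS["claim_amount"] * claim_amount
         + COEFFS["negative_sentiment"] * negative_sentiment)
    return 1 / (1 + math.exp(-z))

print(round(risk_score(5000, 1), 3))
# → 0.5
```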

4.)   Integrated Data Sources

In many situations, insights from streaming data can only be realized when multiple data streams are integrated together. SAS Event Stream Processing allows users to join and merge data in stream, so that the calculations and models may be applied to the comprehensive dataset. For example, a large call center has streaming data in the form of customer complaints and service-related questions. Once a customer comment is received, SAS Event Stream Processing can extract the customer name and/or customer ID and match it to transactional history for that customer, while also categorizing the reason(s) for the complaint or question. This in turn can trigger a prompt to the agent to adopt a retention strategy or potentially upsell the customer to a new product or service.

Image 2: SAS Event Stream Processing Streamviewer

5.)   Emerging Issue Detection

As data floods into your organization, it is sometimes difficult to spot emerging trends and issues. Currently, many organizations run batch jobs to detect and resolve these issues. Because SAS Event Stream Processing can be both stateful and stateless, aggregations and advanced models can be used to identify emerging topics, categories, sentiment and other indicators in real time. These emerging issues can be detected using sophisticated pattern matching that supports detecting patterns based on the relationship of one event to one or more other events within a defined period of time. Thresholds can be set, and events can be used to determine the relevance and immediacy of any associated instruction or action. This changes the process from reactive to proactive, in the sense that an emerging issue can be monitored in real time.
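A stateful sliding window is one simple way to picture this. The sketch below (Python, illustrative only; not the ESP engine or its pattern language) flags a category when its share of recent events crosses a threshold:

```python
from collections import Counter, deque

class EmergingTopicDetector:
    """Count category labels over a sliding window of recent events and
    flag any category whose share of the window crosses a threshold."""
    def __init__(self, window_size=100, threshold=0.30):
        self.window = deque(maxlen=window_size)
        self.threshold = threshold

    def observe(self, category):
        """Process one streaming event; return True to raise an alert."""
        self.window.append(category)
        if len(self.window) < self.window.maxlen:
            return False  # wait until the window fills before alerting
        counts = Counter(self.window)
        return counts[category] / len(self.window) >= self.threshold

# E.g., alert when one complaint category dominates the last 10 events
detector = EmergingTopicDetector(window_size=10, threshold=0.5)
events = ["other"] * 4 + ["billing"] * 6
alerts = [detector.observe(category) for category in events]
# Only the final event trips the alert: 6 of the last 10 are "billing"
```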

Real-time systems such as SAS Event Stream Processing are used for a variety of purposes. By integrating this technology as a front end to key, time-sensitive deployments, organizations gain a competitive advantage in both time and quality.

To learn more about SAS Event Stream Processing, check out the following links, and feel free to contact us if you'd like more information.

Also, if you're out in Dallas at SAS Global Forum next week, be sure to stop by and check out SAS Event Stream Processing.


What’s it take to be a data scientist?

In February of this year, the Washington Business Journal reported that the US Government had appointed its first Chief Data Scientist, DJ Patil. With this, I think it’s now safe to say that data science is officially sanctioned as a new role in organizations: applying the necessary finesse, along with business acumen, to make sense of big data has formalized into a ‘new’ profession.

I talked to one of our own to find out his thoughts on what it takes to be a data scientist. And true to his ilk, SAS’s Adam Pilz applied text analytics to figure out what skills were being sought to fill this coveted role.

Adam Pilz, SAS

Crawling just over 7,000 public postings from a job website, Adam investigated the key elements companies were looking for in a data scientist. They must be highly educated to attain a job: a Master’s degree or greater was seen as a requirement for 81% of the advertised jobs, compared to 12% of the American population. Indeed, there is a clear distinction between the level of scholarship attained by the general public and that required of a data scientist.

In terms of the prowess of data scientists, he saw that the top 10 most desirable analytical skills mentioned by prospective employers were:

Table 1: The top 10 most desirable analytical skills

Adam suspects that the first two categories (machine learning and optimization) may simply be popular buzzwords added to job postings, and perhaps optimization may be the Human Resources department’s way of describing how to make things better – versus the mathematical method. If that holds true, then it’s possible that text analytics is the most sought-after skill in the data scientist market. At a minimum, it’s in the top three.

He saw that text analytics and forecasting were the fastest growing desirable skills. And of course, as with all text analysis, various synonyms were captured for each of the terms seen above. For example, content analysis, NLP, sentiment analysis, text classification, topic extraction, etc. are all included in the term ‘text analysis’.

‘Data wrangling’ is a fun term. It conjures up romantic notions of the wild west, within (no doubt) the Text Frontier – wrestling big data beasties, captured by casually (and similarly) dressed cowboys who are methodical in their approach (big buckle bragging rights will be seen at this year’s SAS Global Forum in Dallas, as a matter of fact).

Breaking this out further, Adam compared:

  • “lower level skills” = those that are lower in importance as education attainment increases, relative to
  • “higher level skills” = those that become more important to have as education increases.

In the two charts below, the skills are in ranked order of importance.

Importance of ‘lower level’ analytical skills with educational attainment

As education level increases (from left to right), skills like data wrangling, data visualization and basic statistics are not prominently featured as required skills for data scientists, as Masters and then again, PhDs are expected to focus their time on more sophisticated types of analysis.

Importance of ‘higher level’ analytical skills with educational attainment

Text analytics, on the other hand, jumps significantly in importance ranking as skill level rises, possibly because the outputs of such an analysis are highly sensitive to the methods used and thus impacted by subject matter expertise. Linear regression and design of experiments both become more important with increasing education, and generalized linear models show up as a required skill for PhDs.

I also asked Adam if he has seen any trends in the usage of the term ‘data scientist’. He said that “the level of education required to be a data scientist has remained the same for the last year, but there are important geographical differences”. Backing this up, he pointed to differences in the highest level of education mentioned in the job postings, segmenting on the highest level that was required. When looking at the entire US, he found that a Bachelor’s was the least sought-after degree for data science positions, seen in only 19% of the job postings, while a Master’s was the most cited educational requirement, garnering 54% of the advertised positions. PhDs claimed the remaining 27%.

Geographical differences in required skill level were found in Silicon Valley relative to the rest of the country. He saw that inside Silicon Valley, PhDs were required for 50% of the jobs listed (and Masters were required for 36%). This was in contrast to jobs outside of Silicon Valley, where PhDs and Masters were identified for 33% and 55% of job postings, respectively.

It’s been said that SAS has more PhDs on staff than does any single university in the United States. And if you’re using open source code for example, perhaps you do need more PhDs on staff to make sure that algorithms are behaving correctly. I know here at SAS we build that expertise right into the software.

I asked Adam what software he used to do this analysis. He initially used Base SAS® and found that once he’d written the code to tag terms, he was able to find them in the postings. However, he soon moved to SAS® Contextual Analysis. The difference? SAS Contextual Analysis highlighted the words tagged by each category, so he was able to search for specific terms and see what else people were talking about. He found that the text analytics software gave him insight into how different postings were saying similar things, in addition to suggesting what other phrases he might want to investigate, concluding that the text analytics approach was “more enlightenment than discovery”.

Adam did this research before coming to SAS – in his search for a new career. We are thrilled to have him and his data scientist prowess as part of the SAS family.

Regardless of title (Adam is described as a Solutions Architect), the skills attributed to data science have been held by those in the analytic field for some time.

Do you see yourself as a data scientist?

 


Don’t Second Guess – Depend on Prescriptive Analytics

I don’t know why I’m on this medical theme lately – maybe it’s because my parents are aging. They talk about bits falling off, take lots of naps and describe how body parts don’t work like they used to. They’ve gone to pre-packaged pills – their medications divided up by day and time of day by the local pharmacist. It’s helped a lot. I’ve got a lot more confidence that my Dad won’t (again!) take a sleeping pill first thing in the morning – before he gets in the car to drive. Ugh.

Confidently knowing what action needs to be taken because it’s pre-packaged is very appealing to many aspects of business too.  Wouldn’t it be nice to know that front-line workers make all their decisions in a particular situation based on expert advice that includes organizational policies and requirements? Even when it’s not people, but other systems or even devices making decisions.  Like in the Internet of Things, isn’t it necessary for those things to base action on situational understanding - triggering a specific (and appropriate) action for a particular scenario? Yes, particularly when we turn our decisions over to machines to make them for us.

Pre-packaged pills

Taking prescriptive actions includes the benefit of:

  • Consistency – under the same scenario conditions, the same action is taken
  • Repeatability – when the same situation arises, you can reuse the same logic
  • Efficiency – no additional energy is spent investigating the action to be taken, it’s prescribed.

And together, these things reduce the risk of the wrong decision being made and an inappropriate action being taken.

So where do you get the expertise in the prescribed action? My parents get it from a subject matter expert – the pharmacist, who, based on his training, directions from the doctor and knowledge of current medications, defines which pills go into which sealed envelope. In fact, their pharmacist was able to decrease their medications – simply because of his perspective on the buffet of pills they’d been prescribed over the years.

The Wrinklies

Organizations get the expertise from a few places. From their analytical experts who examine operational data to assess the pros and cons of different factors influencing behaviors and outcomes – summarized in advanced analytic models. From business analysts, who consider situational conditions, organizational policies, regulatory controls and analytical model scores in relation to decision objectives. They develop the business rules that define the conditions under which an analytical model is relevant. From their IT departments who have spent time collecting, cleansing and normalizing operational data to ensure currency, accuracy and availability. And from corporate executives who determine organizational policies and mandates to align stakeholder and compliance requirements.

In a recent IIA Research Brief, the difference between predictive and prescriptive analytics is detailed. The paper also goes into more depth about how you gain (likely untapped) prescriptive insight from unstructured text data – it’s amazing the direction that is often included in narrative. Going beyond the data discussion, it describes how prescriptive actions are codified using the discipline of enterprise decision management. And lastly, it explores the impact of big data in the form of streaming data – necessitating more operational and tactical decision discipline.

The Wrinklies (my nickname for my folks) have taken some of the guess-work out of their routine and we all agree, they are better off for it. Giving you more confidence, what operational activities in your organization would you like to see prescribed?


Seth Grimes with More on Text Analytics

Perhaps it’s the same for you - it’s getting harder to get to all the conferences I’d like to attend. One of the benefits of getting out there is a chance to learn about different perspectives in an industry. When someone has a broad perspective, particularly if they’ve been in an area for a number of years, their focal lens can often see unique trends.

Earlier this year, I was able to catch up with Seth Grimes and get his perspective on:

Seth Grimes, Alta Plana Corporation

  • The extent to which text analytics has become an essential part of data analysis
  • Why some organizations have not realized positive ROI – and how they can improve
  • What’s unique about text analytics
  • The hottest issues in the text analytics market over the next few years

Check out the free recording of our discussion so that you can hear what he had to say!

Seth Grimes of Alta Plana Corporation, an industry expert in text analytics who has been looking at this field since 2002 and has run his market survey since 2009, has seen text analytics evolve from an interesting concept into an applied discipline. He released his most recent study this year.

Unstructured text analysis is expected to grow even more next year – particularly in organizations that have started to understand big data. To date, many have focused on the more traditional structured side of big data. The next chapter is to understand the unstructured – text being a large part of that.

So as Seth says we can only expect “more”. What does more text analytics mean to you?


The prescription for unstructured data analysis is now legible

My Mum could have been a doctor – most can’t read her handwriting. It’s only because I’ve been trained to read it that I can.

The analysis of unstructured data is similar. Text analysts can quickly be overwhelmed to learn that you have to manually develop a training corpus: reading a sample of documents and marking each document by hand, defining the relevant categories for the software.

It’s a bit easier if you already have a starter taxonomy, but it can be trying to find one specific to your need. And even if you do find one, how do you define new concepts that you don’t even know exist in the materials? Well, you have to read a few (and are back to more manual effort). All this to get to the point of automating categorization (sigh).

There’s also the option of generating a taxonomy from reliable sources, like Wikipedia or DBPedia – and SAS® does that too. Some manual validation to ensure your document collection is addressed still has to be done.

There’s now an easier way.

SAS® Contextual Analysis is a new, highly intuitive text model development technology.  Machine learning algorithms are used to do the initial heavy lifting – removing much of the historic manual burden.  The software examines the entire collection – identifying the stems, misspellings, and more. The NLP is automatically done.

The software also automatically finds relevant topics. You can visually explore the results, adjust and refine what is discovered.

You can add concepts – the most common ones are even pre-defined for you. Or write your own.

And as the subject-matter expert, you decide what makes sense as categories – with the help of relevance metrics that the system generates.

For those of you familiar with SAS, SAS Contextual Analysis brings together some of the capabilities of SAS® Text Miner, with those of SAS® Enterprise Content Categorization – in one, guided, web interface.

Take a look for yourself. In this webinar (which features Jared Peterson, the product development manager for SAS Contextual Analysis) you can see how straightforward it now is to get insights from all the dark (text) data you’ve not looked at (possibly because it’s been too hard?).

We’ve found that the narrative in call center notes is almost always more informative about customer issues and concerns than the categories manually selected by call center agents. In fact, customers will often describe how to fix the issues they perceive, giving you the prescription to make them happy.

We’ve seen this for improving fraud investigations, improving debt collection, creating more efficient operations, creating more satisfied customers, improving patient care, reducing warranty costs, delivering relevant real-time advertising – the list goes on.

You know, it’s amazing what you can get used to.  Chronic pain from text analysis doesn’t have to be one of them.   Sure, some training will always be involved. But you can reduce the burden of manual effort and the ills of inconsistency and error in the process. And just get to the new insights sooner.

What would you want to know from your dark, text data?


Sharpening the Executive Edge with Text Analysis

~ Contributions from Elena Lytkina Botelho, Stephen Kincaid, Chris Trendler - ghSMART & Beverly Brown, Pamela Prentice and Dan Zaratsian - SAS ~

 

A fascinating talk at the SAS Global Forum Executive Conference focused on text analytics, one of the newer weapons in the arsenal for analytic understanding.

Dr. Goutam Chakraborty, a marketing professor at the Spears School of Business, Oklahoma State University, described how he’s seen text-based insights expand knowledge beyond the numbers. He spoke of previously unknown facts that made companies more responsive and effective.

A practical step-by-step guide to applying SAS Text Analytics

His work shows that most analytical models are pretty good at predicting future scenarios and describing conditions. However, competitive pressures and dynamic market conditions leave little room for substantial improvement in existing algorithms. Marginal gains are possible by tweaking existing algorithms, fine-tuning parameters, distributions and the like.

But sweeping advancements are likely when we incorporate new types of information into existing analytic paradigms.  Text data, for example.

In case studies from different industries, Dr. Chakraborty shared how quantitative returns jumped substantially when text analytics were applied to operational data.  For example:

  • Automating SMS text message classification and sentiment scores within mobile logistics applications reduced professional drivers’ response times.
  • Debt collection increased when call agents were armed with new intelligence from call center conversations.

Besides operational improvements, he also gave strategic planning examples. In one case, merging text-based insights with numeric data improved predictive accuracy of future conditions so intervention strategies were more effective. In another, fact-based understanding of reputation (by tracking the impact of controversial statements in social media) led to better social media strategy.

Text analytics extends existing analytic methods, answering questions such as:

  • Why is this happening?
  • What should we say?
  • Who needs to take action?

If the Q&A after Dr. Chakraborty’s talk was any indication, the audience agreed that text analytics could help them make better executive decisions.

Executives’ words reveal their fitness to lead

Did you know that text analytics can also help decipher what makes a good executive in the first place?

ghSMART is an elite consulting firm that helps CEOs lead at full power. Based on their branded assessment method (called SmartAssessments), along with their expertise in understanding human behavior, they answer the question: Who should run your business? And now they, too, are seeing how text analytics can be used to advance insights.

In a collaborative project with SAS, some of the study’s early findings are, indeed, quite intriguing. Based on analysis of anonymized transcripts of candidate interviews and SmartAssessment ratings, we’ve found:

  • Lower-rated candidates used the term ‘mentor’ (and its stem variants, mentoring, mentored, etc.)

Initially, the team believed this to be counter-intuitive.  One of the benefits of text analysis is that you can dig deeper into the rationale of a result – and understand what is driving statistically significant numbers.  It turned out that the distinction was really that lower-rated candidates described their mentors or expressed wanting to be a mentor, whereas higher-rated candidates talked of being a mentor to others in the interview.

Frequency of terms associated with lower-rated candidates (left) and higher-rated candidates (right).

  • Candidates with a consulting background were statistically more likely to successfully transition directly into executive-level positions than those without consulting experience.

Here’s a helpful nugget for aspiring executives: Acquire broad experience working with all aspects of a business (even if in a smaller company) prior to your CEO interview.

  • Lower-rated candidates frequently described some form of failure, setback, or disappointment throughout the interview.

Telling the truth is important.  Context is too.  What seems to matter in these preliminary results is how much the candidate focused on describing previous failures in relation to the length of the interview (specifically, the number of words in the full dialogue transcript).

Heat map (top) illustrating that the frequency of ‘mentor’ is, on average, correlated with higher-rated candidates. The bar chart (bottom) shows that more frequent mention of failures is, on average, less correlated with higher-rated candidates and more correlated with lower-rated candidates.

Actionable intelligence from analyzing text is helping organizations reduce risk, lower operational cost and inform both tactical and strategic decisions. And some exciting new research suggests that text analytics can also help decide who should be at the helm of the organization. Having the mindset of being a mentor as a CEO, for example, translates to being a more effective leader than simply being successful in the CEO role.

SAS and ghSMART continue to sift through interviews to see what sets Grade A executives apart from the rest. We look forward to sharing what we’ve learned later this year.


Predictive Coding - What's so predictive about it?

Recently, SAS announced support for White House efforts in the fight against patent trolls.  As indicated in the announcement, lawsuits filed by patent trolls cost innovators $500 billion in lost wealth from 1990 to 2010[1] - and are growing at an average rate of 22% a year.

Finding the right information in seas of documents is a challenge for many organizations, and patent search and litigation are no exception. Legal organizations are awash in hard drives filled with reports, emails, communications and the like.

Which brings me to the topic of predictive coding. In following some of the debate regarding the usefulness of this approach for alleviating some of the burden of manual review, I’ve asked: Why has this been called ‘predictive’ in the first place? For the legal profession, a field founded on facts, predictive notions might even be downright scary. And ‘predictive’ doesn’t really describe what this text analytics method does to improve legal searches.

According to Wikipedia, a prediction is “a statement about the way things will happen in the future often but not always based on experience or knowledge”. Well, that’s not what predictive coding does. This type of analysis uses computer software to analyze documents, with the goal of finding important or highly related content within existing material. There is nothing futuristic about it.

A patent troll

Predictive inference (in statistics) considers extending the characteristics of a sample to an entire population. Well, that’s not what predictive coding does, either. A document is examined to determine its membership in one or more topics, terms, themes or phrases, and a relevance score is computed, reflecting the probability of that membership.

In fact, defining the relevance of a document, describing its membership in a fact, taxonomy or topic, falls within the well-established field of categorization. Categorization of content is a descriptive analysis method: putting text/documents into relevant buckets. Descriptive analysis is different from predictive analysis; the first explains, while the second forecasts or projects. And probabilities are different from predictions; asking whether something will happen is different from asking when it might happen. But ‘descriptive coding’, while perhaps more accurate, isn’t very catchy. Established alternate names for this eDiscovery technique, such as ‘technology-assisted review’ or ‘computer-assisted review’, seem more helpful in describing what it is.
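To make the descriptive nature of categorization concrete, here is a minimal sketch, assuming a simple keyword-based scheme: each topic “bucket” is a list of indicative terms, and a document’s relevance score for a topic is the fraction of those terms it contains. The topic names, terms and threshold below are invented for illustration; real categorization software uses far richer linguistic rules and statistical models.

```python
# Hypothetical topic buckets, each defined by a few indicative terms.
TOPICS = {
    "contracts": ["agreement", "clause", "signature", "term"],
    "patents":   ["patent", "claim", "prior", "invention"],
}

def categorize(document, topics=TOPICS, threshold=0.25):
    """Return {topic: relevance score} for every topic whose score
    meets the threshold. The score is the fraction of the topic's
    indicative terms that appear in the document."""
    words = set(document.lower().split())
    scores = {}
    for topic, terms in topics.items():
        score = sum(1 for t in terms if t in words) / len(terms)
        if score >= threshold:
            scores[topic] = score
    return scores

print(categorize("The patent claim cites prior art for the invention"))
# → {'patents': 1.0}
```

Note that nothing here forecasts the future: the document is simply described by its membership in buckets, which is exactly the “descriptive, not predictive” point above.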

I’ve even gone so far as to interview lawyers on this topic. Their conclusion was that for extremely high-volume cases, and as a method of triage for certain types of documents, computer-assisted review can be quite helpful. The goal is to filter out materials that are unrelated to the case at hand. Ideally, the remaining, potentially relevant materials are grouped into different topics, providing context. Then an intensive search exercise occurs to isolate pertinent documents. Still, nothing futuristic.

So, one may ask: ‘How do you predict from text data, or any kind of document for that matter?’

Prediction from text happens once the text is numerically represented: structured in such a way that it retains the essence of the text’s meaning but is described as numbers (like the presence or absence of a term). There are sophisticated ways to do this, well defined in the field of text mining. Once documents are numerically structured, they are in the format needed for predictive models, which can test whether the terms, phrases, facts and themes are meaningful to future events.
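As a minimal sketch of that idea, here is the simplest possible encoding, term presence or absence, turned into a document-term matrix. The example documents and helper function are invented for illustration; real text-mining tools use richer weightings (counts, TF-IDF) and proper tokenization, but the principle is the same: text becomes rows of numbers a model can consume.

```python
def term_presence_matrix(documents):
    """Return (vocabulary, matrix) where matrix[i][j] is 1 if
    vocabulary[j] appears in documents[i], else 0."""
    tokenized = [doc.lower().split() for doc in documents]
    # Vocabulary = every distinct term across all documents, sorted
    # so column order is deterministic.
    vocabulary = sorted({term for tokens in tokenized for term in tokens})
    matrix = [[1 if term in tokens else 0 for term in vocabulary]
              for tokens in (set(t) for t in tokenized)]
    return vocabulary, matrix

docs = ["billing error on my account",
        "driver was late",
        "billing dispute not resolved"]
vocab, mat = term_presence_matrix(docs)
print(vocab)  # one column per distinct term
print(mat)    # one row of 0/1 features per document
```

Each row of the matrix is now a structured observation that can sit alongside numeric fields (mileage, tenure, sales) as input to a predictive model.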

For example:

Will customers leave in the future based on a dissatisfying experience that they had?

  • Say they’ve called into the 1-800 line and complained, or written emails. First you’d analyze the text to understand the issues. These ‘issues’ (whether they be topics, concepts or even linguistic rules) are translated into structured representations (as new variables or taxonomies). In turn, these new elements are used as input to a churn model, which estimates the probability that the customer will leave at some time in the future.

When might a car no longer be roadworthy, given its history of repairs, age, use, etc?

  • Text mining of service notes for that make/model, warranty claims, reported issues and the like creates structured, numeric variables. These new insights, along with other numeric information (like mileage), would be inputs into a model that predicts future failure of the vehicle.

When will demand for a product increase?

  • Monitor social media and identify the ‘buzz’ by crawling external information sources and extracting pertinent commentary. Use these identified elements, along with sales trend data, in a model to forecast when an increase in demand is expected.

… the list goes on…
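The churn example above can be sketched end to end, under heavy assumptions: the feature names, weights and bias below are invented for illustration, standing in for what a trained model (for example, logistic regression) would actually estimate from historical data.

```python
import math

def churn_probability(complained, mentioned_cancel, months_as_customer):
    """Toy churn score: text-derived flags (mined from calls/emails)
    sit alongside a numeric field in one feature vector."""
    features = [1.0 if complained else 0.0,        # text-derived flag
                1.0 if mentioned_cancel else 0.0,  # text-derived flag
                months_as_customer / 12.0]         # numeric field
    weights = [1.5, 2.0, -0.4]  # hypothetical "learned" weights
    bias = -1.0
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1 / (1 + math.exp(-z))  # logistic link gives a probability

p = churn_probability(complained=True, mentioned_cancel=True,
                      months_as_customer=6)
```

The point is not the toy weights but the pipeline: only after the text is reduced to numbers does the “predictive” part, the model, come into play.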

I enjoyed the recent award-winning Law Technology News article Predictive Coding Is So Yesterday by Joel Henry. I’d even go a step further and say that it really never was (predictive, anyway).

Text mining is a well-established discipline and, as many of our customers know, is a discovery process. Sound familiar? Based on the data, not human judgment, documents are classified with machine learning methods that identify clusters and topics, and even create taxonomies or profile how a term changes over time.

Text mining is, however, only part of the electronic data discovery technology solution described by Joel Henry. Today, text mining can help remove the burden of manually developing training sets, and it provides a method for active learning, in which machine-generated categories learn from human feedback.

ESI in Joel Henry’s article stands for ‘electronically stored information’. Having documents in electronic form is a requirement for any type of machine learning exercise.

SAS announced a commitment to converting 38 years of user documentation and technical papers to electronic form for IP.com, which, in turn, works with the US Patent and Trademark Office (USPTO). With the documentation in electronic form, IP.com will be able to publish, aggregate and analyze technical documentation, helping USPTO efforts to reduce the burden of patent troll litigation.

The future is predicted to be very bright for organizations committed to stemming abusive patent business practices, as well as for those who are making use of advanced analytics to address big data burdens.


[1] Findings from a Boston University School of Law study: http://www.bu.edu/law/news/BessenMeurer_patenttrolls.shtml
