Event Stream Processing with Text Analytics

Is text analytics part of your current analytical framework?

For many SAS customers, the answer is yes, and they've uncovered significant value as a result.

As text data continues to explode in both volume and velocity, SAS Event Stream Processing can be used to analyze not only high-velocity structured data, but also text (by using text models in stream).

In some cases, standard batch processing delivers sufficient analytical insight. Yet what about those other situations where taking action immediately, as an event is happening, is critical? These sub-second actions and real-time alerts can save or make millions of dollars for a company.

Below, I describe techniques that highlight streaming analysis of text data (many of these techniques apply to structured data as well). My hope is that this will trigger ideas and use cases for you to think about within your company.

1.)   Data Quality and Cleansing

Anyone who has worked with social data (or any text data, for that matter) understands that it can be cluttered with noise, encoding issues, abbreviations, misspellings, and so on. If not corrected, this can lead to inaccurate results and even processing errors. So why not deploy Event Stream Processing to correct and transform variables before they hit your database? As you'd expect, not every data quality issue can be resolved on the front end of data collection, but by applying known corrections upfront, you can enrich your data and enhance the value of the data sitting within your database.
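To make this concrete, here's a minimal sketch of the kind of upfront cleansing I'm describing. This is plain Python, not SAS Event Stream Processing syntax, and the correction rules and example text are invented for illustration:

```python
import re

# Hypothetical correction rules: known abbreviations and misspellings
ABBREVIATIONS = {"acct": "account", "cust": "customer", "svc": "service"}
MISSPELLINGS = {"recieve": "receive", "complaing": "complaining"}

def cleanse(text: str) -> str:
    """Apply known corrections to a raw text event before it is stored."""
    text = re.sub(r"http\S+", "", text)       # strip URLs (noise)
    text = re.sub(r"\s+", " ", text).strip()  # collapse whitespace
    words = []
    for w in text.split():
        key = w.lower()
        words.append(ABBREVIATIONS.get(key, MISSPELLINGS.get(key, w)))
    return " ".join(words)

print(cleanse("Cust   complaing about acct   http://x.co/123"))
# -> "customer complaining about account"
```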

Image 1: Diagram of an Event Stream Processing flow, integrating text analytics, pattern detection, and predictive modeling.


2.)   In-Stream Sentiment Analysis and Categorization

SAS has a powerful set of text analytics technologies that customers have been using for over 10 years. In the latest release of SAS Event Stream Processing (version 3.1, which comes out in May), customers who currently license SAS Sentiment Analysis, SAS Content Categorization, or SAS Contextual Analysis can deploy these models against streaming data. This opens a window of opportunity to tag unstructured data on the fly (scoring sentiment, classifying documents, or extracting entities, for example). These results then feed event stream models for additional scoring, or are used to generate alerts and prompts, or to take a specific action. To learn more about SAS Text Analytics, check out SAS Contextual Analysis, SAS Text Miner, and SAS Sentiment Analysis.
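For a feel of what in-stream tagging looks like, here's a toy sketch in generic Python. This is not one of the SAS text models; the lexicon, threshold logic, and events are all invented:

```python
# A toy lexicon-based sentiment scorer standing in for a deployed
# sentiment model; the lexicon is invented.
POSITIVE = {"great", "love", "fast", "helpful"}
NEGATIVE = {"broken", "slow", "terrible", "cancel"}

def score_event(event: dict) -> dict:
    """Tag a streaming event with sentiment; downstream windows can
    then aggregate, alert, or feed the tag into other models."""
    tokens = event["text"].lower().split()
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    event["sentiment"] = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    return event

for ev in ({"id": 1, "text": "Service was terrible and slow"},
           {"id": 2, "text": "Love the fast response"}):
    print(score_event(ev))
```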

3.)   Embedded Modeling

In text analytics, the goal is to convert unstructured data into some structured format, such as flags, scores, categories, and entities. For many applications, these new variables are most valuable when they are used to enhance predictive models, trigger alerts, create risk scores, enrich content, and ultimately to track and report. Through embedded analytics, SAS DS2 code (and functions in C++, XML, and regular expressions) can be deployed within event stream processing flows, which means real-time scoring of both structured data and unstructured text using regression models, decision trees, and more.
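As a rough illustration of the idea, here's how text-derived flags and structured fields might combine in a real-time regression score. This is plain Python with hypothetical coefficients, standing in for a model that would be trained offline and deployed in the stream:

```python
import math

# Hypothetical coefficients from a regression model trained offline;
# in a real deployment the exported model would be scored inside the
# event stream flow.
INTERCEPT = -2.0
WEIGHTS = {"negative_sentiment": 1.4, "mentions_cancel": 2.1, "prior_complaints": 0.6}

def churn_risk(features: dict) -> float:
    """Logistic-regression score combining text-derived flags
    with structured fields, computed per event."""
    z = INTERCEPT + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1 / (1 + math.exp(-z))

event_features = {
    "negative_sentiment": 1,   # derived from the text of the event
    "mentions_cancel": 1,      # entity/keyword extraction result
    "prior_complaints": 3,     # structured field joined in stream
}
print(f"churn risk: {churn_risk(event_features):.2f}")
```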

4.)   Integrated Data Sources

In many situations, insights from streaming data can only be realized when multiple data streams are integrated. SAS Event Stream Processing allows users to join and merge data in stream, so that calculations and models can be applied to the comprehensive dataset. For example, a large call center has streaming data in the form of customer complaints and service-related questions. Once a customer comment is received, SAS Event Stream Processing can extract the customer name and/or customer ID and match it to that customer's transaction history, while also categorizing the reason(s) for the complaint or question. This in turn can trigger a prompt to the agent to adopt a retention strategy or potentially upsell the customer to a new product or service.

Image 2: SAS Event Stream Processing Streamviewer
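Here's a minimal sketch of that call-center join, in toy Python, with an in-memory lookup standing in for a stateful window. The customer data and categorization rule are invented:

```python
# Toy stand-in for an in-stream join: comment events are matched to
# transaction history by customer ID, then categorized.
TRANSACTIONS = {  # hypothetical reference data kept in a stateful window
    "C1001": {"tenure_years": 6, "monthly_spend": 120.0},
}

def categorize(text: str) -> str:
    return "billing" if "bill" in text.lower() else "general"

def enrich(comment: dict) -> dict:
    history = TRANSACTIONS.get(comment["customer_id"], {})
    return {**comment, **history, "category": categorize(comment["text"])}

print(enrich({"customer_id": "C1001", "text": "My bill doubled this month"}))
```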

5.)   Emerging Issue Detection

As data floods into your organization, it is sometimes difficult to spot emerging trends and issues. Currently, many organizations run batch jobs to detect and resolve these issues. Because SAS Event Stream Processing can be both stateful and stateless, aggregations and advanced models can be used to identify emerging topics, categories, sentiment, and other indicators in real time. These emerging issues can be detected using sophisticated pattern matching, which identifies relationships between one event and one or more other events within a defined period of time. Thresholds can be set, and events can be used to determine the relevancy and immediacy of any associated instruction or action. This changes the process from reactive to proactive, in the sense that an emerging issue can be monitored in real time.
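A bare-bones version of the windowed threshold idea might look like this (generic Python; the window length, threshold, and events are invented for illustration):

```python
from collections import deque

# Sliding-window threshold detector: alert when a category is seen
# too often within a fixed time window.
WINDOW_SECONDS = 300
THRESHOLD = 3

events = deque()  # (timestamp, category) pairs inside the window

def observe(timestamp: float, category: str) -> bool:
    """Return True when `category` crosses THRESHOLD inside the window."""
    events.append((timestamp, category))
    while events and timestamp - events[0][0] > WINDOW_SECONDS:
        events.popleft()
    count = sum(1 for _, c in events if c == category)
    return count >= THRESHOLD

for t, cat in [(0, "outage"), (60, "billing"), (90, "outage"), (200, "outage")]:
    if observe(t, cat):
        print(f"ALERT at t={t}: emerging issue '{cat}'")
# -> ALERT at t=200: emerging issue 'outage'
```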

Real-time systems such as SAS Event Stream Processing are used for a variety of purposes. By integrating this technology as a front end to key, time-sensitive deployments, organizations gain a competitive advantage in both time and quality.

To learn more about SAS Event Stream Processing, check out the following links, and feel free to contact us if you'd like more information.

Also, if you're out in Dallas at SAS Global Forum next week, be sure to stop by and check out SAS Event Stream Processing.


What’s it take to be a data scientist?

In February of this year, the Washington Business Journal reported that the US Government appointed its first Chief Data Scientist, DJ Patil. With this, I think it's now safe to say that data science is officially sanctioned as a new discipline in organizations. The ability to apply the necessary finesse, along with business acumen, to make sense of big data has formalized into a 'new' profession.

I talked to one of our own to find out his thoughts on what it takes to be a data scientist. And true to his ilk, SAS's Adam Pilz applied text analytics to figure out what skills were being sought to fill this coveted role.

Adam Pilz, SAS


Crawling just over 7,000 public postings from a job website, Adam investigated the key elements companies were looking for in a data scientist. One finding: they must be highly educated to attain a job. A master's degree or higher was a requirement for 81% of the advertised jobs, compared with the roughly 12% of the American population who hold one. Indeed, there is a clear distinction between the level of scholarship attained by the general public and that required of a data scientist.

In terms of the prowess of data scientists? The top 10 analytical skills mentioned by prospective employers were:

Table 1: Top 10 analytical skills mentioned in data scientist job postings

Adam suspects that the first two categories (machine learning and optimization) may simply be popular buzzwords added to job postings, and perhaps 'optimization' may be the Human Resources department's way of describing how to make things better, rather than the mathematical method. If that holds true, then it's possible that text analytics is the most sought-after skill in the data scientist market. At a minimum, it's in the top three.

He saw that text analytics and forecasting were the fastest-growing desirable skills. And of course, as with all text analysis, various synonyms were captured for each of the terms seen above. For example, content analysis, NLP, sentiment analysis, text classification, topic extraction, etc. are all rolled up into the term 'text analysis' (a minimal sketch of this kind of synonym rollup follows).
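Here's a toy version of that rollup in Python. Only the synonym list comes from the post; the counting logic is my own illustration:

```python
# Map variant terms to one canonical skill before counting.
SYNONYMS = {
    "content analysis": "text analysis",
    "nlp": "text analysis",
    "sentiment analysis": "text analysis",
    "text classification": "text analysis",
    "topic extraction": "text analysis",
}

def count_skills(postings: list[str]) -> dict[str, int]:
    counts: dict[str, int] = {}
    for posting in postings:
        text = posting.lower()
        # count each canonical skill at most once per posting
        seen = {SYNONYMS[v] for v in SYNONYMS if v in text}
        for skill in seen:
            counts[skill] = counts.get(skill, 0) + 1
    return counts

print(count_skills(["Experience with NLP and sentiment analysis required",
                    "Must know topic extraction"]))
# -> {'text analysis': 2}
```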

'Data wrangling' is a fun term. It conjures up romantic notions of the Wild West, within (no doubt) the Text Frontier – wrestling big data beasties, captured by casually (and similarly) dressed cowboys who are methodical in their approach (big-buckle bragging rights will be seen at this year's SAS Global Forum in Dallas, as a matter of fact).

Breaking this out further, Adam compared:

  • "lower level skills" = those that decline in importance as educational attainment increases, relative to
  • "higher level skills" = those that become more important to have as education increases.

In the two charts below, the skills are in ranked order of importance.

Importance of ‘lower level’ analytical skills with educational attainment


As education level increases (from left to right), skills like data wrangling, data visualization, and basic statistics feature less prominently as required skills for data scientists, since master's graduates – and PhDs even more so – are expected to focus their time on more sophisticated types of analysis.

Importance of ‘higher level’ analytical skills with educational attainment


Text analytics, on the other hand, jumps significantly in importance ranking as skill level rises, possibly because the outputs of such an analysis are highly sensitive to the methods used and thus impacted by subject matter expertise. Linear regression and design of experiments both become more important with increasing education, and generalized linear models show up as a required skill for PhDs.

I also asked Adam if he has seen any trends in the usage of the term 'data scientist'. He said that "the level of education required to be a data scientist has remained the same for the last year, but there are important geographical differences". Backing this up, he segmented the postings by the highest level of education required. Looking at the entire US, he found that a bachelor's was the least sought-after degree for data science positions, cited in only 19% of the postings, while a master's was the most cited requirement, appearing in 54% of the advertised positions. PhDs claimed the remaining 27%.

The geographical differences showed up in Silicon Valley relative to the rest of the country. Inside Silicon Valley, PhDs were required for 50% of the jobs listed (and master's degrees for 36%). This was in contrast to jobs outside Silicon Valley, where PhDs and master's degrees were cited in 33% and 55% of postings, respectively.

It’s been said that SAS has more PhDs on staff than does any single university in the United States. And if you’re using open source code for example, perhaps you do need more PhDs on staff to make sure that algorithms are behaving correctly. I know here at SAS we build that expertise right into the software.

I asked Adam what software he used to do this analysis. He initially used Base SAS® and found that once he'd written the code to tag terms, he was able to find them in the postings. However, he soon moved to SAS® Contextual Analysis. The difference? SAS Contextual Analysis highlighted the words tagged by each category, so he was able to search for specific terms and see what else people were talking about. He found that the text analytics software gave him insight into how different postings were saying similar things, in addition to suggesting other phrases he might want to investigate, concluding that the text analytics approach was "more enlightenment than discovery."

Adam did this research before coming to SAS – in his search for a new career. We are thrilled to have him and his data scientist prowess as part of the SAS family.

Regardless of title (Adam is described as a Solutions Architect), the skills attributed to data science have been held by those in the analytic field for some time.

Do you see yourself as a data scientist?

 


Don’t Second Guess – Depend on Prescriptive Analytics

I don't know why I'm on this medical theme lately – maybe it's because my parents are aging. They talk about bits falling off, take lots of naps, and describe how body parts don't work like they used to. They've switched to pre-packaged pills, with the local pharmacist dividing up their medications by day and time of day. It's helped a lot. I've got a lot more confidence that my Dad won't (again!) take a sleeping pill first thing in the morning, before he gets in the car to drive. Ugh.

Confidently knowing what action needs to be taken because it’s pre-packaged is very appealing to many aspects of business too.  Wouldn’t it be nice to know that front-line workers make all their decisions in a particular situation based on expert advice that includes organizational policies and requirements? Even when it’s not people, but other systems or even devices making decisions.  Like in the Internet of Things, isn’t it necessary for those things to base action on situational understanding - triggering a specific (and appropriate) action for a particular scenario? Yes, particularly when we turn our decisions over to machines to make them for us.

Pre-packaged pills


Taking prescriptive actions includes the benefit of:

  • Consistency – under the same scenario conditions, the same action is taken
  • Repeatability – when the same situation arises, you can reuse the same logic
  • Efficiency – no additional energy is spent investigating the action to be taken, it’s prescribed.

And together, these things reduce the risk of the wrong decision being made and an inappropriate action being taken.

So where do you get the expertise behind the prescribed action? My parents get it from a subject matter expert: the pharmacist, who, based on his training, directions from the doctor, and knowledge of current medications, defines which pills go into which sealed envelope. In fact, their pharmacist was able to decrease their medications, simply because of his perspective on the buffet of pills they'd been prescribed over the years.

The Wrinklies


Organizations get the expertise from a few places. From their analytical experts who examine operational data to assess the pros and cons of different factors influencing behaviors and outcomes – summarized in advanced analytic models. From business analysts, who consider situational conditions, organizational policies, regulatory controls and analytical model scores in relation to decision objectives. They develop the business rules that define the conditions under which an analytical model is relevant. From their IT departments who have spent time collecting, cleansing and normalizing operational data to ensure currency, accuracy and availability. And from corporate executives who determine organizational policies and mandates to align stakeholder and compliance requirements.

A recent IIA Research Brief details the difference between predictive and prescriptive analytics. The paper also goes into more depth on how you gain (likely untapped) prescriptive insight from unstructured text data – it's amazing the direction that is often included in narrative. Going beyond the data discussion, it describes how prescriptive actions are codified using the discipline of enterprise decision management. And lastly, it explores the impact of big data in the form of streaming data, which necessitates more operational and tactical decision discipline.

The Wrinklies (my nickname for my folks) have taken some of the guesswork out of their routine, and we all agree they are better off for it. What operational activities in your organization would you like to see prescribed, to give you that same confidence?


Seth Grimes with More on Text Analytics

Perhaps it’s the same for you - it’s getting harder to get to all the conferences I’d like to attend. One of the benefits of getting out there is a chance to learn about different perspectives in an industry. When someone has a broad perspective, particularly if they’ve been in an area for a number of years, their focal lens can often see unique trends.

Earlier this year, I was able to catch up with Seth Grimes and get his perspective on:

Seth Grimes, Alta Plana Corporation


  • To what extent has text analytics become an essential part of data analysis?
  • Why have some organizations not realized positive ROI, and how can they improve?
  • What's unique about text analytics?
  • What are the hottest issues in the text analytics market over the next few years?

Check out the free recording of our discussion so that you can hear what he had to say!

Having watched this field since 2002, with his market survey running since 2009, Seth Grimes of Alta Plana Corporation – an industry expert in text analytics – has seen text analytics evolve from an interesting concept into an applied discipline. He released his most recent study this year.

Unstructured text analysis is expected to grow even more next year, particularly in organizations that have started to understand big data. To date, many have focused on the more traditional structured side of big data. The next chapter is understanding the unstructured side – text being a large part of that.

So as Seth says we can only expect “more”. What does more text analytics mean to you?


The prescription for unstructured data analysis is now legible

My Mum could have been a doctor – most can't read her handwriting. It's only because I've been trained to read it that I can.

The analysis of unstructured data is similar. Text analysts can quickly become overwhelmed when they learn they must manually develop a training corpus: reading a sample of documents and marking each one by hand, to define the relevant categories for the software.

It's a bit easier if you already have a starter taxonomy, but it can be trying to find one specific to your need. And even if you do find one, how do you define new concepts that you don't even know exist in the materials? Well, you have to read a few (and are back to more manual effort). All this to get to the point of automating categorization (sigh).

There’s also the option of generating a taxonomy from reliable sources, like Wikipedia or DBPedia – and SAS® does that too. Some manual validation to ensure your document collection is addressed still has to be done.

There’s now an easier way.

SAS® Contextual Analysis is a new, highly intuitive text model development technology.  Machine learning algorithms are used to do the initial heavy lifting – removing much of the historic manual burden.  The software examines the entire collection – identifying the stems, misspellings, and more. The NLP is automatically done.

The software also automatically finds relevant topics. You can visually explore the results, adjust and refine what is discovered.
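To give a flavor of what machine-driven topic discovery looks like in general – this is not SAS Contextual Analysis's implementation, just a generic sketch using scikit-learn's TF-IDF and NMF on invented documents:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

docs = [
    "battery drains too fast on this phone",
    "phone battery life is terrible",
    "delivery arrived late and the box was damaged",
    "late delivery, damaged packaging",
]

# Weight terms, then factor the document-term matrix into topics
tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(docs)
nmf = NMF(n_components=2, random_state=0).fit(X)

terms = tfidf.get_feature_names_out()
for i, weights in enumerate(nmf.components_):
    top = [terms[j] for j in weights.argsort()[::-1][:3]]
    print(f"topic {i}: {', '.join(top)}")
```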

You can add concepts – the most common ones are even pre-defined for you. Or write your own.

And as the subject-matter expert, you decide what makes sense as categories – with the help of relevance metrics that the system generates.

For those of you familiar with SAS, SAS Contextual Analysis brings together some of the capabilities of SAS® Text Miner, with those of SAS® Enterprise Content Categorization – in one, guided, web interface.

Take a look for yourself. In this webinar (which features Jared Peterson, the product development manager for SAS Contextual Analysis) you can see how straightforward it now is to get insights from all the dark (text) data you've not looked at (possibly because it's been too hard?).

We've found that the narrative in call center notes is almost always more informative about customer issues and concerns than the categories manually selected by call center agents. In fact, customers will often describe how to fix the issues they perceive, giving you the prescription to make them happy.

We’ve seen this for improving fraud investigations, improving debt collection, creating more efficient operations, creating more satisfied customers, improving patient care, reducing warranty costs, delivering  relevant real-time advertising, the list goes on.

You know, it’s amazing what you can get used to.  Chronic pain from text analysis doesn’t have to be one of them.   Sure, some training will always be involved. But you can reduce the burden of manual effort and the ills of inconsistency and error in the process. And just get to the new insights sooner.

What would you want to know from your dark, text data?


Sharpening the Executive Edge with Text Analysis

~ Contributions from Elena Lytkina Botelho, Stephen Kincaid, Chris Trendler - ghSMART & Beverly Brown, Pamela Prentice and Dan Zaratsian - SAS ~

 

A fascinating talk at the SAS Global Forum Executive Conference focused on text analytics, one of the newer weapons in the arsenal for analytic understanding.

Dr. Goutam Chakraborty, a marketing professor at the Spears School of Business, Oklahoma State University, described how he’s seen text-based insights expand knowledge beyond the numbers. He spoke of previously unknown facts that made companies more responsive and effective.

A practical step-by-step guide to applying SAS Text Analytics

His work shows that most analytical models are pretty good at predicting future scenarios and describing conditions. However, competitive pressures and dynamic market conditions leave little room for substantial improvement in existing algorithms. Marginal gains are possible by tweaking existing algorithms and fine-tuning parameters, distributions, and the like.

But sweeping advancements are likely when we incorporate new types of information into existing analytic paradigms.  Text data, for example.

In case studies from different industries, Dr. Chakraborty shared how quantitative returns jumped substantially when text analytics were applied to operational data.  For example:

  • Automating SMS text message classification and sentiment scores within mobile logistics applications reduced professional drivers’ response times.
  • Debt collection increased when call agents were armed with new intelligence from call center conversations.

Besides operational improvements, he also gave strategic planning examples. In one case, merging text-based insights with numeric data improved predictive accuracy of future conditions so intervention strategies were more effective. In another, fact-based understanding of reputation (by tracking the impact of controversial statements in social media) led to better social media strategy.

Text analytics extends existing analytic methods, answering questions such as:

  • Why is this happening?
  • What should we say?
  • Who needs to take action?

If the Q&A after Dr. Chakraborty’s talk was any indication, the audience agreed that text analytics could help them make better executive decisions.

Executives’ words reveal their fitness to lead

Did you know that text analytics can also help decipher what makes a good executive in the first place?

ghSMART is the elite consulting firm that helps CEOs lead at full power.  Based on their branded method of assessment (called SmartAssessments), along with their expertise in understanding human behavior, they answer the question: Who should run your business?  And now they, too, are seeing how text analytics can be used to advance insights.

In a collaborative project with SAS, the early findings of this study are indeed quite intriguing. Based on analysis of anonymized transcripts of candidate interviews and SmartAssessment ratings, we've found:

  • Lower-rated candidates used the term ‘mentor’ (and its stem variants, mentoring, mentored, etc.)

Initially, the team believed this to be counter-intuitive.  One of the benefits of text analysis is that you can dig deeper into the rationale of a result – and understand what is driving statistically significant numbers.  It turned out that the distinction was really that lower-rated candidates described their mentors or expressed wanting to be a mentor, whereas higher-rated candidates talked of being a mentor to others in the interview.

Frequency of terms associated with lower-rated candidates (left) and higher-rated candidates (right).

  • Candidates with a consulting background were statistically more likely to successfully transition directly into executive-level positions than those without consulting experience.

Here's a helpful nugget for aspiring executives: acquire broad experience working with all aspects of a business (even if in a smaller company) prior to your CEO interview.

  • Lower-rated candidates frequently described some form of failure, setback, or disappointment throughout the interview.

Telling the truth is important.  Context is too.  What seems to matter in these preliminary results is how much the candidate focused on describing previous failures in relation to the length of the interview (specifically, the number of words in the full dialogue transcript).

Heat map (top) illustrating that the frequency of 'mentor' is, on average, correlated with higher-rated candidates. The bar chart (bottom) shows that more frequent mention of failures is, on average, less correlated with higher-rated candidates and more correlated with lower-rated candidates.

Actionable intelligence from analyzing text is helping organizations reduce risk, lower operational cost, and inform both tactical and strategic decisions. And some exciting new research suggests that text analytics can also help decide who should be at the helm of the organization. Having the mindset of being a mentor as a CEO, for example, calibrates to being a more effective leader, rather than simply being successful in the CEO role.

 SAS and ghSMART continue to sift interviews to see what sets Grade A executives apart from the rest. We look forward to sharing what we’ve learned later this year.


Predictive Coding - What's so predictive about it?

Recently, SAS announced support for White House efforts in the fight against patent trolls.  As indicated in the announcement, lawsuits filed by patent trolls cost innovators $500 billion in lost wealth from 1990 to 2010[1] - and are growing at an average rate of 22% a year.

Finding the right information in seas of documents is a challenge for many organizations – patent search and litigation are no exception. Legal organizations are awash in hard drives filled with reports, emails, communications, and the like.

Which brings me to the topic of predictive coding.  In following some of the historic debate regarding the usefulness of this approach to help alleviate some of the burden of manual review, I’ve asked: Why has this been called ‘predictive’ in the first place?  For the legal profession, a field founded in facts – predictive notions might even be downright scary. And ‘predictive’ doesn’t really even describe what this text analytics method does to improve legal searches.

According to Wikipedia, a prediction is "a statement about the way things will happen in the future often but not always based on experience or knowledge". Well, that's not what predictive coding does. This type of analysis uses computer software to analyze documents, with the goal of finding important or highly related content within existing material. There is nothing futuristic about it.

A patent troll

Predictive inference (in statistics) considers extending the characteristics of a sample to an entire population. Well, that's not what predictive coding does either. A document is examined to determine membership in one or more topics, terms, themes, or phrases, and a relevance score is defined, reflecting a probability of membership.

In fact, defining the relevance of a document – describing its membership in a fact, taxonomy, or topic – falls within the well-established field of categorization. Categorization of content is a descriptive analysis method: putting text/documents into relevant buckets. Descriptive analysis is different from predictive analysis – the first explains, while the second forecasts or projects. And probabilities are different from predictions – asking whether something will happen is different from asking when it might happen. But 'descriptive coding', while perhaps more accurate, isn't really very catchy. Established alternate names for this eDiscovery technique, such as 'technology-assisted review' or 'computer-assisted review', seem more helpful in describing what it is.

I've even gone so far as to interview lawyers on this topic. Their conclusion was that, for extremely high-volume cases, and as a method of triage for certain types of documents, computer-assisted review can be quite helpful. The goal is to filter out materials that are unrelated to the case at hand. Ideally, the remaining, potentially relevant materials are grouped into different topics – providing context. And then an intensive search exercise occurs to isolate pertinent documents. Still – nothing futuristic.

So, one may ask: 'How do you predict from text data? Or any kind of document, for that matter?'

Prediction from text happens once it's numerically represented – structured in such a way that it retains the essence of the text's meaning, but described as numbers (like the presence or absence of a term). There are very sophisticated ways to do this, well defined in the field of text mining. Once documents are numerically structured, they are in the format needed for predictive models – to see if the terms, phrases, facts, and themes are meaningful to future events.
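A tiny sketch of that pipeline (Python with scikit-learn; the term list, documents, and outcomes are all invented): represent each document by term presence, then fit a model on those numeric columns.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

TERMS = ["cancel", "refund", "happy", "broken"]

def to_vector(doc: str) -> list[int]:
    """Presence/absence of each term: the numeric representation."""
    text = doc.lower()
    return [int(t in text) for t in TERMS]

docs = ["I want to cancel and get a refund",
        "happy with the service",
        "product arrived broken, cancel my order",
        "very happy, thanks"]
churned = [1, 0, 1, 0]  # hypothetical outcomes observed later

X = np.array([to_vector(d) for d in docs])
model = LogisticRegression().fit(X, churned)
print(model.predict_proba([to_vector("thinking about a refund")])[0][1])
```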

For example:

Will customers leave in the future based on a dissatisfying experience that they had?

  • Say they've called into the 1-800 line and complained, or written emails. First you'd analyze the text to understand the issues. These 'issues' (whether they be topics, concepts, or even linguistic rules) are translated into structured representations (as new variables or taxonomies). In turn, these new elements are used as input to a churn model, which estimates the probability that they will leave at some time in the future.

When might a car no longer be roadworthy, given its history of repairs, age, use, etc?

  • Text mining of service notes for that make/model, warranty claims, reported issues, and the like creates structured, numeric variables. These new insights, along with other numeric information (like mileage), become inputs into a model that identifies the future failing of the vehicle.

When will demand for a product increase?

  • Monitor social media, identify the ‘buzz’ – from crawling external information sources and extracting pertinent commentary. Use these identified elements, along with sales trend data, in a model to forecast when more demand is expected to happen.

… the list goes on…

I enjoyed the recent award-winning Law Technology News article, Predictive Coding Is So Yesterday, by Joel Henry. I'd even go a step further and say that it really never was (predictive, anyway).

Text mining is a well-established discipline – and as many of our customers know – is a discovery process. Sound familiar? Based on the data – not humans – documents are classified, with machine learning methods that identify clusters and topics, and even create taxonomies or profile how a term changes over time.

Text mining is, however, only part of the electronic data discovery technology solution described by Joel Henry. Today, text mining can help remove the burden of manually developing training sets, and it provides a method for active learning, in which machine-generated categories learn from human conditioning.

ESI in Joel Henry's article stands for 'electronically stored information'. Having documents in electronic form is a requirement for any type of machine learning exercise.

SAS announced a commitment to converting 38 years of user documentation and technical papers to electronic form for IP.com, which, in turn, works with the US Patent and Trademark Office (USPTO). With the documentation in electronic form, IP.com will be able to publish, aggregate, and analyze technical documentation, helping USPTO efforts to reduce the burden of patent troll litigation.

The future is predicted to be very bright for organizations committed to stemming abusive patent business practices, as well as for those who are making use of advanced analytics to address big data burdens.


[1] Findings from a Boston University School of Law study: http://www.bu.edu/law/news/BessenMeurer_patenttrolls.shtml


Visualizing Superbowl Tweets with Text Analytics: Post-game Analysis

A few Superbowl tweets:

"Everything worked out well with the Super Bowl in NY/NJ except the game. What a shocker tonight #TBS_SuperBowl"

"Best #superbowl #halftimeshow ever with #brunomars and #redhotchillipeppers . What talent from bruno...now thats a performer."

"Someone turn off the lights to make the #SuperBowl interesting!"

In my previous post, Visualizing Superbowl Tweets with Text Analytics, I discussed the initial trends and insights found within Superbowl-related tweets.

Now that Superbowl XLVIII is officially in the books, I'd like to take the analysis a step further with additional visualizations. Graph 1, below, shows the relationship between trending data-driven topics and associated hashtags. You'll notice clusters for security, JC Penney, Esurance, "boring game," and others.

Graph 1: Network Graph - Relationship between Data-Driven Topics and Twitter Hashtags

SAS Text Analytics uses a combination of natural language processing and statistics to automatically discover these data-driven topics within the Superbowl XLVIII tweets. Using SAS Visual Analytics to depict them, I list the top 15 topics below, in no particular order, with some example tweets that correspond to the network graph shown above:

  1. Pre-game coin toss ("awkward!!", "coin toss #fail")
  2. Sympathy for Peyton Manning ("feel your pain Manning, great season")
  3. Boredom ("This game is boring", "I'm going to sleep", etc.)
  4. Superbowl security ("lol superbowl security broadcasting it's wifi name and password on tv http://t.co/aOPMDmUf9x")
  5. Sodastream (commercials with Scarlett Johansson)
  6. Best halftime show ever
  7. Bruno Mars ("so classy", "Bruno Mars wins the Superbowl!")
  8. Budweiser ("Puppy Love" commercial)
  9. Happy Seahawks Fans
  10. Godaddy ("best super bowl commercial was the muscle bound spray tan fanatics. hilarious #GoDaddy")
  11. Esurance (post-game commercial with $1.5M give-away)
  12. J.C. Penney (clever social media marketing)
  13. Disappointed Broncos fans ("Broncos??? Hello?? U there?? Wake up and actually score")
  14. Terrible commercials ("#disappointing commercials", "Superbowl and it's commercials - both terrible")
  15. Sodastream Fizz Football Challenge Sweepstakes

As many of us know, the Superbowl offers a once-a-year opportunity for companies to create brand buzz through social media channels (hopefully positive, but sometimes the buzz is negative).

J.C. Penney's clever approach led to a viral event. During the first half of the game, J.C. Penney's corporate Twitter account tweeted two typo-laden tweets,

  • "Toughdown Seadawks!! Is sSeattle doing toa runaway wit h this???"
  • "Who kkmew theis was ghiong tob e a baweball ghamle. #lowsscorinh 5_0"

At first, people responded with:

  • "#sbmktg101 in a cost cutting move, JC Penney hires a toddler to tweet during game" 
  • "JC Penny's tweeter must be a Broncos fan, drinking his sorrows"

Apparently that was part of J.C. Penney's marketing tactic, which they foreshadowed earlier in the game with a tweet mentioning mittens. After the controversial tweets, they tweeted, "Oops...Sorry for the typos. We were #TweetingWithMittens. Wasn't it supposed to be colder? Enjoy the game! #GoTeamUSA pic.twitter.com/e8GvnTiEGl." The end result, nicely stated in this tweet, "J.C. Penny paid $0 for two fake drunk tweets and now have more mentions than a $3 million commercial."

One of the pre-game trending topics on Twitter was "Omaha", the code word Peyton Manning uses just before the snap. The graph below, Graph 2, shows the volume of tweets mentioning Omaha or any variant of Manning's code word, including misspellings (believe it or not, there are many misspellings of the word Omaha in the data, such as omah, omaah, and omahaaaa).
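For the curious, matching those variants is a small pattern-matching exercise. Here's a loose sketch in Python; the regular expression is my own illustration, not the one used in the analysis:

```python
import re

# A loose pattern for 'Omaha' and its creative misspellings.
OMAHA = re.compile(r"\bom+a+h*a*h*a*\b", re.IGNORECASE)

tweets = ["OMAHA! OMAHA!", "omahaaaa again",
          "he said omah like 40 times", "great catch"]
print(sum(bool(OMAHA.search(t)) for t in tweets))  # -> 3
```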

Graph 2: "Omaha" Trends on Twitter

There was even an over-under set at 27.5, so you could have bet on the number of times Manning would say the word. Yet the success story found on Twitter is seen clearly in the second spike: the Omaha Chamber of Commerce pledged $1,500 to Manning's charity each time he said Omaha.

What else is being said about Omaha and how does it relate to the Broncos and the Seahawks?

Graph 3: Network Graph for "Omaha"

The network graph shown above, Graph 3, visualizes the relationship between Twitter hashtags and each team. You'll notice several mentions of the charity, marketing opportunities for OmahaSteaks, and even hashtags related to last year's blackout.

How can SAS Text Analytics and SAS Visual Analytics help you identify patterns and emerging trends within your data? Are you creating opportunities to promote your brand (like J.C. Penney)? Are you able to quickly identify critical events (like the leaked Superbowl wifi username and password)? How well are you able to extract and understand emerging topics within your data? Most importantly, how does this information enable you to take action and make smarter business decisions?

See how SAS Sports Analytics is helping teams and leagues optimize pricing models, improve marketing ROI, attract fans - and keep them coming back! Learn more later this month at the 2014 MIT Sloan Sports Analytics Conference.

I’d be interested to know what you think you could discover if you had these analysis capabilities in your organization. Write back and let me know.


Visualizing Superbowl Tweets with Text Analytics

In the days leading up to Superbowl XLVIII there’s a unique opportunity to capture insightful trends and patterns within social media.

Much of text analytics involves analyzing customer conversations, whether the conversations exist within social media, emails, forums, blogs, survey responses, or call center transcripts.

These conversations, just like the Superbowl tweets, are time sensitive. What is relevant today may not be relevant in a month, a week, or within the next 24 hours (think viral events). Similarly, if you contact a customer one week after they express anger, you miss the window to intervene and incentivize your customer to stay with your organization.

Below are some of the current trends and insights based on Superbowl tweets from the past two weeks.

Graph 1:  Twitter volume over time for Denver (orange) vs Seattle (green). Also, what are the top hashtags and who are the most influential authors?

Graph 1: Overall Trends, Top Authors, and Top Hashtags

Graph 2:  Who is winning the “Twitter Superbowl” based on fan support?

Graph 2: Social Media Volume - Broncos VS Seattle

Graph 3:  Do fans mention the Seahawks or the Broncos within the context of winning? How about within the context of losing?

Graph 3: Social Media Volume - Winners VS Losers

Graph 4:  Where are the Seahawks and Broncos fans located?

Graph 4: Mapping Fans - Denver Broncos VS Seattle Seahawks

What does this have to do with your business? When analyzing text, there are a few key questions you may want to ask yourself:

Why are you analyzing text?

This question is fundamental, but is sometimes overlooked. Organizations know that they have all this textual data and need to be doing something with it, but often fail to define a solid objective that leads to ROI (More on ROI in upcoming blog posts).

  • Do you want to identify data-driven trends? (often seen in marketing and customer intelligence)
  • Are you looking for root cause or a needle in a haystack? (seen in fraud applications)
  • Do you need to extract entities or facts such as IDs, names, demographic information, etc.?
  • Are you using textual data to enhance your predictive models?
  • Do you want to identify key influencers around a given topic or event?

What topics or categories align to your business requirements?

It's important to approach this from two angles:

  1. Use a data-driven approach to identify naturally occurring topics based purely on the data. Text mining, clustering, and natural language processing all help to enhance the statistical discovery of topics.
  2. Provide your domain knowledge to the model, through business rules that target the categories and topics you are specifically interested in, based on your business requirements (a minimal sketch of such rules follows this list).
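Here's the minimal sketch of such business rules promised above (Python; the rules and example tweet are invented):

```python
# A toy business-rule categorizer: hand-written keyword rules encode
# domain knowledge, complementing the data-driven topics from step 1.
RULES = {
    "halftime show": ["halftime", "bruno mars"],
    "commercials": ["commercial", "esurance", "jc penney"],
    "game quality": ["boring", "blowout", "turn off the lights"],
}

def categorize(tweet: str) -> list[str]:
    text = tweet.lower()
    return [cat for cat, keys in RULES.items() if any(k in text for k in keys)]

print(categorize("Best halftime show ever, Bruno Mars killed it"))
# -> ['halftime show']
```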

What data sources are you using (and how did you collect the data)?

Poor data collection methods lead to data quality issues and a large dataset with low relevancy. If you are collecting any data from online sources or 3rd parties, it's important to understand the data collection process, filtering criteria, and queries, all of which could bias the data and introduce noise if not configured correctly.

  • What kind of web crawling techniques/tools are you using?
  • If you are using search terms to target and collect data, how did you choose these terms and are they limiting your results or introducing unnecessary noise?

What kind of action should the analysis elicit?

  • Do you need a dashboard to monitor trends, influencers and viral conversations?
  • Does your model trigger a promotional email, predict customer attrition, or flag a fraudulent event?
  • Can alerts help your social media team or call agents proactively reach out to customers with timely offers?

In the days leading up to the Superbowl, I will continue to update the analysis and give you insight into emerging trends and interesting findings. Please check out the software behind the analysis, SAS Text Analytics and SAS Visual Analytics.

Check out the Post-Game Analysis for more insights.

You can also download our whitepaper from last year's Superbowl or read Ken's recent post on measuring the economic impact of this year's Superbowl.


Behind the scenes in natural language processing: Is machine learning the answer?

When you think of the phrase “express yourself,” you may think of expressing your sense of style through your fashion decisions or home décor, but most of us probably think of expressing our thoughts, feelings, opinions, needs, desires, etc. through language. Whether writing or speaking, language helps us to connect our own inner reality with the external reality that we share with others. There is so much complexity wrapped up in each linguistic expression, that it is amazing we can get computers to do anything with language at all!

During my career, I have focused on language as seen from the perspective of a computer. When a computer looks at text, all it sees are strings of characters. It doesn't even distinguish between different types of characters like letters, numbers, punctuation, or white space. We humans have to tell the computer how to recognize meaningful patterns and cues so it can recognize constructs like words, sentences, and topics.
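A tiny illustration: until we hand the computer a pattern, a sentence is just one long character string. Here's a naive tokenization rule in Python (the pattern is deliberately simplistic, for illustration only):

```python
import re

sentence = "The couch, which seats three, cost $1,200."

# Naive rule: a token is a run of letters, a number (optionally with
# $, commas, decimals), or a single punctuation mark.
tokens = re.findall(r"[A-Za-z]+|\$?\d[\d,]*(?:\.\d+)?|[^\w\s]", sentence)
print(tokens)
# ['The', 'couch', ',', 'which', 'seats', 'three', ',', 'cost', '$1,200', '.']
```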

What we really want is for a computer to understand when a word is a meaningful concept alone, or whether other words are required for an object, action, or relationship to become clear. For example, the word 'couch' used as a noun has one meaning as a type of object in the real world, related to the action of sitting and part of a class of furniture.

