5 ways to tackle big data and maintain brand loyalty


I recently shared with you my 2013 industry predictions. With the anticipated increase in streaming data, social data, big data BI and analytics, and the growing focus on Hadoop clusters, one thing is certain: big data is here to stay. Knowing how best to tackle that data while maintaining brand loyalty will be critical.

So, as featured in the fourth-quarter 2012 issue of Loyalty Management magazine from Loyalty 360, I give you "5 ways to tackle big data & maintain brand loyalty."

Brand loyalty experts are often the most data-savvy individuals at a company. So the discussion of big data might seem like old news to you. It shouldn’t. The simple calculations that have driven loyalty programs are about to become old news. Staying abreast of the rapidly evolving big data landscape—including ways to crunch data faster, or in more complex ways, and at the most reasonable cost—is critical.

We still come across companies that are skeptical of big data because they associate it with the ground-breaking work of companies such as Google and Facebook, which, let's be honest, could throw a legion of engineers at any problem to program their way out of it. Most companies don't have those resources. But the solutions supporting big data are maturing rapidly enough to assist companies with much more modest budgets. The story of the next few years will be how seamlessly big data comes to be used in the enterprise. Here are five ideas for increasing your organization's maturity level in this area.


A good working definition of big data: it's when the data you want to work with reaches a volume or complexity that puts it outside your comfort zone. In the loyalty world, a good example comes from the telco industry. When wireless carriers first began trying to figure out which customers might be on the verge of bolting to another carrier, they looked at measures that were fairly crude and easy to mine: the names and numbers of subscribers who were using fewer minutes than they had previously. That particular measure is not producing the ROI it once did, so today leading telcos are using data to find influencers, the customers who persuade their friends to sign up with a specific carrier or for a certain plan. This type of analysis involves looking less at call volume and more at who is calling whom; it requires a much more complex use of basically the same data. The telcos that are doing it needed a more sophisticated big data solution to make it work.
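To make the who-calls-whom idea concrete, here is a minimal sketch in Python using made-up call records. A crude proxy for influence is simply how many distinct subscribers call each person; real telco analyses use far richer graph metrics, but the shift is the same, from counting minutes to analyzing the call graph.

```python
# Toy call-detail records as (caller, callee) pairs, hypothetical data
# standing in for the records a carrier would actually mine.
calls = [
    ("ann", "bob"), ("ann", "cat"), ("dan", "ann"),
    ("eve", "ann"), ("bob", "ann"), ("cat", "dan"),
]

# A crude influence proxy: how many distinct subscribers call each person.
inbound = {}
for caller, callee in calls:
    inbound.setdefault(callee, set()).add(caller)

influence = {person: len(callers) for person, callers in inbound.items()}
top = max(influence, key=influence.get)
print(top, influence[top])  # ann is called by 3 distinct subscribers
```

Note that the raw data is the same call log the carrier already had; only the question asked of it has changed.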


Sensors of all types have dramatically plunged in price and are spitting out a gold mine of useful data. For example, one auto insurer has created a unique loyalty promotion by mining sensor data to improve sales. It offers drivers the opportunity to place a sensing device in their cars in exchange for differentiated pricing. The sensor can detect speed, hard braking, and even whether the insured has been totally honest about how much driving they do. Suddenly, there is a way to find good drivers, and give them a loyalty-inducing discount, without relying strictly on historical records (e.g., claims data). But it also involves sifting through data of a magnitude greater than any the company has dealt with before.
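A sketch of the scoring idea, with invented telematics readings and an invented risk rule; the point is that drivers are rated on observed behavior rather than claims history, whatever the actual model looks like:

```python
# Hypothetical telematics samples per trip: (speed_mph, hard_brake).
# A real insurer streams millions of these; the scoring idea scales.
trips = [
    [(35, False), (40, False), (38, False)],  # smooth trip
    [(72, True), (65, False), (80, True)],    # risky trip
]

def risk_score(samples):
    """Fraction of samples showing speeding (>70 mph) or hard braking."""
    risky = sum(1 for speed, brake in samples if speed > 70 or brake)
    return risky / len(samples)

scores = [risk_score(t) for t in trips]
discount_eligible = [s < 0.25 for s in scores]
print(scores, discount_eligible)
```

The thresholds here (70 mph, a 0.25 risk cutoff) are placeholders; the real work is running a rule like this over a sensor stream far larger than any claims database.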


You need to branch out from using just the results from a loyalty program along with some basic demographic data. Retailers, try incorporating weather data to see if you can get more accurate store packs sent out to reduce your markdowns and increase the speed at which you turn over goods. Grocers, do you know the average size of your customers' cars and how that might influence trip frequency or their likelihood to make high-volume purchases?
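The mechanics of enriching loyalty or sales data with an outside feed are simple; here is a toy join of sales figures with a weather forecast, all numbers hypothetical, to show the shape of it:

```python
# Last week's units sold per store (e.g., umbrellas), by store.
weekly_units = {"store_a": 120, "store_b": 80}

# External feed: chance of rain next week, joined on the same store key.
rain_forecast = {"store_a": 0.9, "store_b": 0.1}

# Scale next week's allocation by the forecast: wetter markets get more
# stock, drier ones get less, reducing markdowns on unsold goods.
allocation = {
    store: round(units * (0.5 + rain_forecast[store]))
    for store, units in weekly_units.items()
}
print(allocation)  # {'store_a': 168, 'store_b': 48}
```

The scaling rule is a stand-in; the real decision model would be fitted from history, but the join on an external data source is the new ingredient.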


Buying one big, potent computer is a thing of the past. Distributed architecture using open source software like Hadoop is key to working with big data quickly and cost-effectively. Distributed architecture lets you solve problems with the divide-and-conquer approach.

The most iconic retail brand in the US is using Hadoop because it helps the retailer deal with its exponentially growing data. Moving forward, keep your eyes on Spark cluster computing. It is showing a lot of promise.
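The divide-and-conquer shape of a Hadoop job can be sketched in miniature: each "node" tallies its own partition of the data (the map step), and the partial tallies are then merged (the reduce step). This toy version runs on one machine; Hadoop runs the same pattern across many.

```python
from collections import Counter
from functools import reduce

# Data split into partitions, one per (imaginary) node.
partitions = [
    ["gold", "silver", "gold"],     # partition on node 1
    ["gold", "bronze"],             # partition on node 2
    ["silver", "silver", "gold"],   # partition on node 3
]

def map_count(partition):
    return Counter(partition)       # local tally, done independently per node

partials = [map_count(p) for p in partitions]   # runs in parallel in practice
totals = reduce(lambda a, b: a + b, partials)   # merge the partial tallies
print(totals)  # Counter({'gold': 4, 'silver': 3, 'bronze': 1})
```

Because each partition is tallied independently, adding data means adding nodes rather than buying a bigger machine, which is the economic argument for the distributed approach.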


2012 has been a year of big data experimentation for a lot of companies. And, yes, some projects didn't turn out quite like you'd planned. But stick with it; the payoff is real. In the pre-big data era, there was a retailer that spent most of its time just trying to organize its data in order to send out sales promotions, Web offers and catalogs. With high-performance analytics, that retailer is now personalizing catalog offerings like never before; selecting new store locations and estimating those stores' first-year sales; choosing up-selling opportunities that increase profits; and scheduling promotions to drive sales. In an economy that has been tough on brand loyalty, this company is succeeding. Are you?

Learn more from SAS "big data" experts in this special 32-page report on high-performance analytics.


About the Author

Paul Kent

Paul Kent is the Vice President of Big Data at SAS. He coordinates with customers, partners and R&D teams to make sure the SAS development roadmap is taking world-class mathematics to the biggest, baddest problems out there. Paul joined SAS in 1984 as a Technical Marketing Representative and eventually moved into the Research and Development division. He has used SAS for more than 20 years and has contributed to the development of SAS software components including PROC SQL, the WHERE clause, TCP/IP connectivity, and portions of the Output Delivery System (ODS). A strong customer advocate, Paul is widely recognized within the SAS community for his active participation in local and international users conferences.
