Hurwitz on SAS & Big Data: Experience Matters

Mike Ames and I recently had an opportunity to talk to Fern Halper and Judith Hurwitz from Hurwitz & Associates as part of a four-part blog series they are doing on vendor views on big data and big data analytics. You can view Fern's blog post about the SAS perspective here.

Here are the highlights from our conversation:

Decision making is the key - Although the technical aspects of big data are interesting, and Hadoop is all the rage, it's really about analyzing big data so that organizations can make better decisions faster - within the decision window of their competition.

One size fits all doesn't work - We spoke about how you can't take a one-size-fits-all approach to big data; the approach you use should be driven by your business goals, translated into analytical and technical requirements.

SAS High Performance Computing enables complete dataset support - For some scenarios, being able to factor the complete dataset into the analytics is key. This isn't just about bringing all forms of data into analytics - social media data, device data, etc. - it's about leveraging an infrastructure that allows you to perform analysis at a very detailed level. For example, doing pricing optimization at an individual SKU/store level vs. product category/region. It's also about being able to do variable selection on a massive vs. limited scale, being able to operationalize the analytics process for production, and so on.

The 4th V is Relevance - Everybody recognizes volume, variety and velocity, but what is critically important in big data scenarios is to identify relevance and to understand that relevance changes over time. Building an infrastructure that can complement complete dataset support by identifying and ensuring that the relevant data is constantly available to your analytics infrastructure is becoming increasingly important. We refer to this information management style as "stream it, score it, store it", where we leverage the power of analytics on the front end of the information lifecycle to identify relevant data. And instead of using a simple filter to do this, we leverage analytics to define relevance based on extensive organizational knowledge. That results in information that is relevant to your organization or business vs. a generic Google style search. As events occur that change the relevance of data, data that was previously secondary in nature can be factored into the analytical process.
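The "stream it, score it, store it" idea can be illustrated with a minimal sketch: score each incoming record for relevance as it arrives, and persist only what clears a threshold. Everything below - the field names, the weighting rule, and the threshold - is a hypothetical stand-in for the trained analytical models SAS describes, not an actual SAS implementation.

```python
# Illustrative sketch of "stream it, score it, store it": analytics sit at
# the front of the information lifecycle, deciding relevance before storage.
# The scoring rule, field names, and threshold are all assumptions.

RELEVANCE_THRESHOLD = 0.5

def score(record):
    """Toy relevance model weighting a few organizational signals.
    A real deployment would apply a trained analytical model here."""
    weights = {"is_customer": 0.6, "mentions_product": 0.4}
    return sum(w for key, w in weights.items() if record.get(key))

def stream_score_store(stream, store):
    """Score records as they stream in; store only the relevant ones."""
    for record in stream:
        record["relevance"] = score(record)
        if record["relevance"] >= RELEVANCE_THRESHOLD:
            store.append(record)
    return store

events = [
    {"id": 1, "is_customer": True, "mentions_product": True},
    {"id": 2, "is_customer": False, "mentions_product": False},
    {"id": 3, "is_customer": True, "mentions_product": False},
]
relevant = stream_score_store(events, [])
# Records 1 and 3 clear the threshold and are stored; record 2 is filtered out.
```

Because relevance is defined analytically rather than by a simple keyword filter, the threshold and weights can be re-derived as events change what matters to the organization - which is how previously secondary data can re-enter the analytical process.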

It's also great that Fern recognizes that although many vendors are jumping on the big data bandwagon, SAS "has been growing its big data capabilities through time, all of the technologies are delivered or supported based on a common framework or platform."

And I love her final quote: "Experience matters. Enough said for now."

Make sure that you read Fern's complete post located here and note that there are many other blog posts on big data and Hadoop in the Information Architect blog.

tags: big data, Big data analytics, hadoop, high-performance analytics, Hurwitz
