Four ways SAS makes Hadoop easy


There she blows! – there she blows! A hump like a snow-hill! It is Hadoop!

Bringing your data and processing to Hadoop can sometimes feel like an insurmountable task – but it doesn’t have to be that way. The same technologies and capabilities that have powered SAS Data Management for over a decade can make wielding the power of Apache Hadoop more like a pleasure cruise and less like hunting a great beast. From my experience working with SAS and Hadoop, I'll describe four ways SAS can make Hadoop easier.

Data access

Access to Hadoop can be challenging for a variety of reasons: location, security, data format, data transport and user skill set. SAS foundation tools (in particular, SAS/ACCESS® interface technologies) let users access data in a number of ways. These technologies are developed in partnership with Hadoop vendors to allow deep integration with the data system. SAS can increase efficiency by making native connections for data transfers to the Hadoop Distributed File System (HDFS) and allowing direct access to HDFS data. This implementation enables users to access their data in Hadoop from a desktop or a remote server’s web user interface. Security can be applied on the server, on the client, or both – depending on IT security requirements. In turn, IT has the flexibility to keep the data available while preventing sensitive data from being intermingled.
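
As a minimal sketch, assigning a SAS library to a Hive schema through SAS/ACCESS Interface to Hadoop looks like this – the server, port, schema and credentials below are all hypothetical:

    /* Assign a libref to Hive via SAS/ACCESS Interface to Hadoop. */
    /* Server, port, schema and credentials are hypothetical.      */
    libname hdp hadoop
        server="hive-node.example.com"    /* HiveServer2 host         */
        port=10000                        /* default HiveServer2 port */
        schema=sales                      /* Hive database to use     */
        user=sasdemo password=XXXXXXXX;

    /* Hive tables now behave like any other SAS library member */
    proc print data=hdp.customers(obs=10);
    run;

Once the libref is assigned, SAS procedures and DATA steps can read and write Hive tables directly, with SAS pushing work down into Hadoop where it can.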

In the early days of Hadoop, there were limited options for formatting data – Hadoop offered few data types at that time. But our customers overcame those challenges by using SAS formats to convert native Hadoop data types – string, for example – into types better suited to their processing. As Hadoop matured to support new data types, SAS complemented HDFS by providing formats better suited for analytics (the high-performance data format) and analytic data preparation (the SAS Scalable Performance Data Engine). These formats provide coherent storage, the required data types and metadata – increasing the efficiency of data processing tasks.
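
As a rough sketch, writing a table to HDFS through the SAS Scalable Performance Data (SPD) Engine is just another LIBNAME assignment – the paths and table names here are hypothetical:

    /* SPD Engine library in HDFS; HDFSHOST=DEFAULT picks up the */
    /* Hadoop cluster configuration already known to SAS.        */
    libname spdehdp spde '/user/sasdata' hdfshost=default;

    /* Store a staged table in an analytics-ready format in HDFS */
    data spdehdp.transactions;
        set work.transactions_stage;   /* assumed local staging table */
    run;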

Data profiling

Houston, we have a problem…we’re just not sure where. Often, finding the problem can be just as challenging as fixing it. In the world of database management systems, we have a mature data dictionary for storing descriptive statistics about the data: table storage volume, column data types, average column value, standard deviation, null count or null percentage per column, and median value per column, just to name a few. This metadata does not exist in a single place or a unified form in Hadoop.

SAS metadata is a big strength of our data management offerings. SAS can gather metadata across many systems to make activities like data migration, data processing and lineage tracking easy. Data profiling gives users the capability to pull the metadata in Hadoop to assess the quality of their data. Are there any patterns in the data? How complete is the data? Are there any trends in the quality of the data? Does the data contain personally identifiable information (PII)? These are questions that can be answered using SAS metadata and profiling tools.
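
As a simple illustration, the descriptive procedures SAS can push into Hadoop already answer several of these profiling questions – the library and column names below are hypothetical:

    /* Value patterns and missing counts for a Hive column */
    proc freq data=hdp.customers;
        tables state / missing;
    run;

    /* Completeness and distribution statistics for a measure */
    proc means data=hdp.orders n nmiss mean std median;
        var order_amount;
    run;

Dedicated SAS profiling tools go further, collecting statistics like these across many tables into a single, persistent metadata view.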

Data integration

Hadoop is built for challenges involving large volumes of data – it's not well suited for small data problems. To support all the data required by the business, it's key to use a tool set that lives in both the big data and the small data arenas. SAS Data Management provides both extract, transform, load (ETL) and extract, load, transform (ELT) processing capabilities. This allows SAS to transform and blend data outside of Hadoop from files, database management systems (DBMS), streaming data or master data systems, just to name a few. Who is doing the work is just as important as where the work is taking place. Whether the work is being done by the enterprise, the business unit or a collaboration between the two, SAS technologies enable a seamless alliance for working with data in Hadoop.
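
As a hedged sketch of that blending, a single PROC SQL step can join a small reference table from a DBMS with a large Hive table – the connection details and table names are invented for illustration:

    /* Small reference data from Oracle, big data from Hive. */
    /* Both connections below are hypothetical.              */
    libname ora oracle user=scott password=XXXXXXXX path=orcl;
    libname hdp hadoop server="hive-node.example.com" port=10000
        user=sasdemo password=XXXXXXXX;

    proc sql;
        create table hdp.enriched_orders as
        select h.order_id, h.order_amount, r.region_name
        from hdp.orders as h
             inner join ora.regions as r
             on h.region_id = r.region_id;
    quit;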

Data quality

Once users understand their data quality issues, they can work to correct them. That’s the easy part, right? Hadoop has been around for just over a decade, yet its SQL function sets still lag behind those of most DBMSs. Then there's the issue of the coding languages that are available: an Oracle DBMS developer may run into trouble working with data sets migrated from Oracle into Hadoop Hive using HiveQL, MapReduce and Apache Spark. Further, there are no native data quality procedures in Hadoop today.

The SAS Quality Knowledge Base provides a rich set of files that store definitions for performing various data cleansing tasks. Standardization, semantic parsing, clustering and field extraction are as easy as function calls in the SAS programming language.
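
For example, here's a minimal sketch of those function calls in a DATA step – it assumes a configured Quality Knowledge Base, and the ENUSA locale, definition names and columns are placeholders:

    /* SAS Data Quality function calls backed by the QKB.     */
    /* Locale, definitions and columns below are assumptions. */
    data work.clean_customers;
        set hdp.customers;
        std_name = dqStandardize(name, 'Name', 'ENUSA'); /* standardization */
        gender   = dqGender(name, 'Name', 'ENUSA');      /* gender analysis */
        match_cd = dqMatch(name, 'Name', 95, 'ENUSA');   /* match code      */
    run;

With the SAS Data Quality Accelerator for Hadoop, equivalent cleansing can run inside the cluster instead of pulling data back to SAS.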

Overcoming the challenges

Many customers encounter these four bottlenecks when they first start a Hadoop migration. With SAS technologies, we can make Hadoop approachable and accessible for a variety of enterprise and business needs. And you won't need to learn complex new skills to succeed.

Download a free paper – Bringing the Power of SAS to Hadoop

About Author

Clark Bradley

Principal Technical Architect

Clark Bradley provides field enablement and sales support leadership to SAS field teams in the areas of data management, data integration and data quality in big data environments. His responsibilities include analyzing data architectures and complex SQL, investigating Hadoop technologies, and tuning systems and subsystems across SMP and MPP platforms.

Comments

    • Clark Bradley:

      Thank you for the comment! Preparing data for analytics is an important task. Making Apache Hadoop more approachable can allow users to reach business insights more quickly.

    • Clark Bradley:

      The SAS Quality Knowledge Base (powered by the SAS Data Quality Accelerator for Hadoop) is the secret sauce to distributed DQ tasks in Hadoop, whether working at rest in HDFS or in memory with Apache Spark!

  1. Hi
    We are implementing a big data, cloud-based environment with Hadoop and Spark. We want our SAS users to be able to interact with it. In particular, we want to use SAS to aggregate results for input to other systems.

    Are you saying that we can do this natively with SAS? That is, what do we have to invest in or do to use SAS over this new infrastructure?
    Thanks

    • Clark Bradley:

      Hi Andrew,

      Thanks for the background on your project. Yes, you have options for interacting with Hadoop from SAS.

      At the SAS foundation level, your users can interact with HDFS, Pig and MapReduce through the HADOOP procedure (PROC HADOOP). For users needing to interact with the various HiveQL engines available, there are SAS/ACCESS Interface to Hadoop (for Hive), SAS/ACCESS Interface to Impala and SAS/ACCESS Interface to HAWQ. Each works against a different HiveQL engine provided by Hadoop distributions, with the distinct ability to push SAS descriptive statistical procedures into Hadoop (FREQ, SUMMARY/MEANS, RANK, TABULATE, REPORT, SORT).
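
      As a quick sketch of the PROC HADOOP route – the credentials, paths and Pig script below are placeholders, and a Hadoop client configuration is assumed:

          /* Fileref pointing at a Pig script on the SAS server (hypothetical) */
          filename pigcode '/local/scripts/orders.pig';

          proc hadoop username='sasdemo' password=XXXXXXXX verbose;
              /* Stage a local file into HDFS */
              hdfs mkdir='/user/sasdemo/staging';
              hdfs copyfromlocal='/local/data/orders.csv'
                   out='/user/sasdemo/staging/orders.csv';
              /* Run the Pig script against the staged data */
              pig code=pigcode;
          run;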

      Stepping up to the in-database capabilities, SAS offers:

      • SAS Code Accelerator for Hadoop allows users to develop transformations in an object-oriented coding language (PROC DS2) – see the sketch after this list.
      • SAS Data Quality Accelerator for Hadoop allows users to apply data quality transformations to data in Hadoop, such as Match Code, Identification Analysis, Standardization, Parsing, Pattern Analysis, Gender Analysis and Field Extraction, to name a few.
      • The Spark engine for SAS Data Loader allows users the option to push their transformations and quality expressions into Spark for faster processing.
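
      Here's a minimal, hedged sketch of the kind of DS2 thread program the Code Accelerator can push into the cluster – the library, table and column names are hypothetical:

          proc ds2 ds2accel=yes;              /* request in-database execution */
          thread score_th / overwrite=yes;
              dcl char(4) tier;               /* derived column */
              method run();
                  set hdp.orders;             /* hypothetical Hive table */
                  if order_amount > 1000 then tier = 'HIGH';
                  else tier = 'STD';
              end;
          endthread;

          data hdp.scored_orders (overwrite=yes);
              dcl thread score_th t;
              method run();
                  set from t;                 /* gather rows from the thread */
              end;
          enddata;
          run;
          quit;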

      For business users who aren't coders, all of these capabilities are available through SAS Data Loader for Hadoop, which enables collaboration between native SAS users and business users through a wizard-driven user interface that generates the code submitted to Hadoop. Below is a white paper covering the capabilities:

      https://www.sas.com/content/dam/SAS/en_us/doc/factsheet/sas-data-loader-hadoop-107474.pdf
