Defining a custom color palette for BI Dashboard bar charts

Out of the box, the simple bar graph uses the same color for each bar.

A frequent request from users is to define a standard color palette for dashboard indicators so that each bar keeps the same color in every indicator and from refresh to refresh. As with most dashboard tricks, like the dynamic dashboard range, it is all about the data. The steps are to create a numeric value for each bar's categorical variable (such as region), create a range that matches these numeric values and defines a color for each, and then use a range-enabled bar chart to leverage these colors while still displaying the bar's numerical value (such as sales total).

Let's walk through this in more detail.

Step 1. Set up the data

For this example, I used a SQL query to pull in the total sales for three regions in the data set. Then with a SQL case expression, I added a 'color' column where 1 is for Africa, 2 is for Asia, and 3 is for Canada.
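The query described above might be sketched as follows; the table name sales_fact and the columns region and sales are hypothetical stand-ins for your own data set:

```sas
/* Hypothetical table (sales_fact) and columns (region, sales) - adjust to your data. */
proc sql;
   create table region_sales as
   select region,
          sum(sales) as total_sales,
          case region
             when 'Africa' then 1
             when 'Asia'   then 2
             when 'Canada' then 3
             else .
          end as color
   from sales_fact
   group by region;
quit;
```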

The important change to note is that on the Data Mapping tab, the color column will be changed so that the value displayed is the total sales. This means the graph will use the color value to plot, but the actual data value shown to the user will be the total sales.

Step 2. Define a custom color range

Create a range that defines which colors to use for which corresponding number. In the example screenshot below, I chose to have Orange for Africa (or defined as 1 in the data), Blue for Asia (stored as 2), and Red for Canada (and marked as a 3).

Step 3. Use a bar chart that leverages range

The final step required a bit of exploration, but what worked best was the "Bar chart with bullet" indicator. I used the data and range defined in the prior steps, and then specified that the 'color' value was the range (#1 in the screenshot below), 'region' was the x-axis value (#2), the bar value was 'color' (#3), and the bullet value was 'total_sales' (#4). You'll notice the chart's y-axis label states 'color(millions)'. Just add the term 'Total Sales' to the y-axis title field to correct this as well.

Testing recommendations for SAS BI Dashboard & SAS Web Report Studio

With limited time and budget, you can still thoroughly test SAS reports built in SAS BI Dashboard and SAS Web Report Studio by considering how all the tools were used to build the end report. Understanding the functionality within each part of the chain from raw data to final report helps you build test cases that effectively verify accuracy without clicking every single button on the final report.

  1. Raw data to summarized data table
    Sometimes this is not considered at all, but the ETL scripts written to create the summarized data are worth at least a peer code review. Another option is to use SAS Enterprise Guide to generate PROC SUMMARY or PROC FREQ results that compare the raw data to the summarized table. The point here is that instead of verifying that the data in each web report is correct, verify that the numbers in the summary table itself are accurate.
  2. Metadata and Information Maps
    For subsequent steps in creating the data source, metadata should be confirmed (especially SAS formats, for any improper truncation of values) and custom columns in SAS Information Maps should be tested. Prompts and filters built in SAS Information Map Studio can be verified at this point. To get the most value out of the time spent, the important thing for the tester to understand is how the prompt or filter was built. I recommend more testing when multiple filters are combined (expressed in Boolean AND/OR logic), and error-insert testing with the simple prompts/filters (one variable used in the selection). Error inserting means entering prompted values that you know will NOT return results, to ensure that the end user either is not allowed to enter incorrect values or is presented with results in a way the end user has requested or is expecting.
  3. BI Dashboard and Web Reports
    Understanding the design of these reports is critical for the tester. What content uses which information maps, which ranges are leveraged in which indicators, and where interactions exist between indicators are all required to build successful test cases. If any measures are defined in the web report itself, these should be verified as accurate, but all other data elements pulled straight from the table/information map should already have been validated in one of the above two sections.
  4. Links between BI Dashboard and Web Reports
    Beyond the design of individual reports, the interaction between tools needs to be verified. Note that testing the linkage from one combination of prompted values is typically enough to verify that the functionality is working as expected. If different data sources are used to create the dashboard and the Web Report Studio prompt, it is much more efficient to verify that the data values are identical using PROC FREQ statements than to click your way through tens or even hundreds of possible values in the user interface.
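As a sketch of that PROC FREQ idea (the librefs and table names here are hypothetical), you can print the distinct prompt values from each source and compare the lists side by side:

```sas
/* Hypothetical librefs (dash, wrs) and tables - compare distinct prompt values. */
proc freq data=dash.dashboard_source;
   tables region / nocum nopercent;
run;

proc freq data=wrs.webreport_source;
   tables region / nocum nopercent;
run;
```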

Of course, the most important thing is to do SOME testing of your reports before handing them off to your user community. The last thing anyone wants is an email from an executive questioning the accuracy or complaining that the report simply doesn't work. Losing their trust just makes your job harder in the future.

What other recommendations do you have for testing SAS BI Dashboard and SAS Web Report Studio reports? Please share so we can all produce accurate & functional results!

Use OLAP data sources from SAS Stored Processes

So are you now dreaming big? Coming up with other ideas for how SAS Stored Processes can be leveraged in your installation of SAS?

As I mentioned in last week's post, you could use OLAP cubes as a data source for your SAS Stored Process. Here is how I would suggest getting started.

Step 1. Open and slice the OLAP Cube in SAS Enterprise Guide

This allows you to navigate through the cube and generate the tedious MDX code for that specific view, limiting how much you have to type.

Step 2. Select the menu item Edit View -> Edit in MDX Viewer

Now you can view all of the MDX code associated with this view. Copy this entire code for the next step.

Step 3. Place the MDX code in a PROC SQL statement

proc sql;
connect to olap (host="machine" port=5451);

create table stp_datasource as
select * from connection to olap
(
/*********************PUT PASTED MDX CODE HERE**********************/

 [DateDimension].[Fiscal_Month_Num].Members }
FROM [SalesSummary]
         [ProductDimension].[All ProductDimension].[Wholesale].[Carob N Almonds])

/************STOP THE PASTING OF MDX CODE BY THIS POINT****************/
);
disconnect from olap;
quit;

Step 4. Build cascading filters for the where clause

I created cascading filters, one for TYPE and one for PRODUCT, and replaced the member reference in the WHERE clause with the following code:

        [ProductDimension].[All ProductDimension].[&type].[&product])

Step 5. Register and test the SAS Stored Process

Voilà! You have successfully queried an OLAP cube for use in your stored process! Now that you have the basic steps, what dreaming are you doing? How will you use OLAP cube data for stored processes?

I have doubts

One of my new work friends says this frequently. The statement loses something in translation from Portuguese: it is not that he does not believe me, it is just that he has questions about how things happen so he can learn. I wonder which version of this statement I get from others, especially when it comes to the "magic" of using a SAS Stored Process within a SAS Information Map. In summary, I get two main questions about the tip/example I posted back in '09 on using Stored Processes to do SQL pass-through from Information Maps:

  1. The first concern is with concurrency
    What happens when multiple people hit the SAS Information Map at the same time? In the example, the most important statement is:
    libname BIOUT (work);
    This reassigns the library BIOUT to the location of WORK. If you are not familiar with SAS, WORK is an individual location for each instance of SAS that is distinct, temporary, and completely removed when the SAS session ends normally. Therefore, reassigning your BIOUT location to (work) ensures that the table created dynamically by the SAS Stored Process is a file that exists only for your run.
  2. Next is the question that no physical file exists
    Again, with the use of libname BIOUT (work); everything is cleared when the session closes. So a physical file does exist, but only briefly.

Now that you have no doubts about using this technique, how do you plan to leverage the power of SAS Stored Processes? If you need help getting started, I previously posted four ideas on how SAS Information Maps can leverage Stored Processes "Using Stored Processes to Improve Information Maps".

Code wizards make the magic happen

The wizard Harry Potter makes magic by just waving his wand around and shouting out spells. Using SAS Enterprise Guide (and you are welcome to shout out your own commands too), you can create some magic by allowing the Stored Process Wizard to write code for you. On Step 2, the "Include code for" button allows you to choose whether the spell will include stored process macros, global macro variables, and references to libnames. The key is to remember that in magic and wizardry there can be a man behind the curtain: you will not see this code when editing the stored process from Enterprise Guide, but the actual .sas file on the server contains all of it.

In the below screenshot is the original code (squared off in red) with the three default additions created by the SAS Enterprise Guide Stored Process Wizard.

  1. *ProcessBody
    This was a required element for the SAS Workspace Server in 9.2 and prior versions; it alerted the server to where the code began.
  2. %STPBEGIN and %STPEND Statements
    Provides all of the header/footer information to enable output to ODS locations so that you don't have to code it yourself. There are instances where I recommend (and some where you are required) not to select this option, such as when generating custom HTML layouts.
  3. ExtendValidMemName
    Enables the use of an extended validmemname for 9.3+ versions. This will appear for all stored processes created from SAS Enterprise Guide 5.1, no matter what is selected in the "Include code for" button.
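Putting the three additions together, the generated .sas file on the server looks roughly like the sketch below (the PROC PRINT step stands in for your own program):

```sas
*ProcessBody;                   /* marks where the code begins (9.2 and earlier) */
options validmemname=extend;    /* ExtendValidMemName, for 9.3+ versions         */

%stpbegin;                      /* opens the ODS destinations for the output     */

/* ... your original program goes here ... */
proc print data=sashelp.class;
run;

%stpend;                        /* closes the ODS destinations                   */
```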

When I attempt to edit the code using the Stored Process Manager from SAS Enterprise Guide, I only see the two lines of code. In the majority of cases, this will work out just fine. But if you are trying to code something custom, just stay aware of what magic is happening around you!

And the difference between Business Intelligence and Business Analytics is ...

After some discussion with SAS Product Marketing yesterday, we devised this response to everyone's question about the Business Analytic differentiator that SAS provides.

Business Intelligence is one of the key components from SAS, tying into the suite of analytic techniques, information handling, and performance infrastructures needed to deliver insights through relevant Business Intelligence offerings. The majority of Business Intelligence systems provide general reporting capabilities (dashboards, web reports, etc.) to a wider group of users.

Business Analytical systems provide everything BI can do plus an additional level of functionality – the statistical analysis tools to complete tasks such as forecasting, regression, and modeling. Business Analytics from SAS includes ALL the key areas needed to successfully implement data driven decision making: Information Management, Analytics, Business Intelligence and tailored business solutions. The Business Analytics Framework is empowered by the High Performance Analytics capabilities from SAS.


'Intelligence' and 'Analytics' can appear to be very similar terms, but in my mind (and in my thesaurus) they are not. 'Intelligence' is another word for information, news, communication. Therefore the tools included within a Business Intelligence system provide that information or news to effectively communicate to a large audience. 'Analysis', synonymous with examination, evaluation, investigation, and scrutiny, is the actual task of slicing and dicing data, understanding past-present-future by exploring everything available. SAS Business Analytics includes High Performance Analytics capabilities; when these are given to business analysts, they can perform all the investigation and scrutiny their hearts can handle!

What do you think?

Any SAS program can become a SAS Stored Process

Questions (or search strings) lead people to my blog or my email box on a daily basis and many of these are related to what SAS Stored Processes can or cannot do.

  • Can a SAS Stored Process query an RDBMS?
  • Can a SAS Stored Process create a data table?
  • Could it create an OLAP cube?
  • Can a SAS Stored Process create a PDF file (tip: use _ODSDEST), how about an Excel file?

Thankfully all I have to say is 'YES!' If a SAS program can do it, then so can a SAS Stored Process (and with little to no tweaking). That is what makes SAS Stored Processes so incredibly awesome: the SAS developer can switch roles from bottleneck ('Run this job for me now' person) to enabler, creating reusable content that anyone can use immediately and with a couple minutes of click-this training.
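As a quick sketch of the _ODSDEST tip: setting that reserved macro variable before %STPBEGIN routes the stored process output to PDF instead of the default destination (the PROC MEANS step is just a placeholder for your own reporting code).

```sas
%global _odsdest;
%let _odsdest=PDF;   /* tell %STPBEGIN to send output to the PDF destination */

%stpbegin;
/* any reporting code - PROC MEANS here is just a placeholder */
proc means data=sashelp.class;
run;
%stpend;
```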

Just remember, because all .sas files COULD become SAS Stored Processes doesn't mean that they all SHOULD become SAS Stored Processes. For example, .sas files that need to be run automatically and/or require no user interactions are not necessarily candidates for a SAS Stored Process. But .sas programs that require some modification to run for a specific need and/or are run when requested are absolutely good candidates. Can you think of other .sas programs that should (or should not) be considered for the SAS Stored Process job?

For more examples of what SAS Stored Process programs look like and can do, check out The 50 Keys to Learning SAS Stored Processes book. There are 35 SAS Stored Process examples covered and key after key of developing them.

Leverage metadata user synchronization macros for other uses

Six user import and synchronization macros are available in the SAS Foundation directory. They are documented as tools for creating programs that retrieve authentication provider (such as LDAP) account information, compare it with what is stored in SAS metadata, and then synchronize the two.

Try this code in BASE SAS (after updating the options statement). The program takes a few moments to run, but it returns a set of metadata extract tables.

options metaserver=servername metauser="username" metapass="userpassword";
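The step that actually produces the extract tables (not shown above) is a single call to the %MDUEXTR macro; pointing it at WORK writes the canonical extract tables there:

```sas
/* Extract identity metadata into canonical tables in the WORK library */
%mduextr(libref=work);
```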

In all, there are over 20 resulting tables now available within your work library.

With METABROWSE and other metadata calls no longer needed, what use cases can you think up with all of this data so quickly available?

Here are some thoughts that I had:

  1. Looking at the list of tables, Grpmems (which contains only the unique group key and the unique person key) could be joined with the Group_info and Person_info tables to get a complete picture of who is a member of which group.
  2. Using the Email table, I could quickly grab a list of users' email addresses to send email messages (even using BASE SAS).
  3. The location table (which contains a person's address) could be leveraged as a permanent data table (updated daily/weekly) to auto populate custom SAS Stored Process forms.

And remember, these are just ideas for the one user synchronization macro mentioned here (%MDUEXTR)! There are five more. What can you dream up?

How did you find that metadata?

A common question, really, and one that I get after posting code like yesterday's All SAS Stored Process Locations Report. The neat trick is just to type METABROWSE in a BASE SAS command window. (There are two other commands: METACON and METAFIND. Does anyone have tips on when those would be useful?)

BASE SAS will automatically provide a connection screen to establish a link into the SAS Metadata Server.

After connecting successfully, you can navigate through a folder structure type environment.
Comparing the code from yesterday with the METABROWSE view, you will notice that the same terms appear. Below is a sample where I counted all the objects within the Directory folder. I got each Name and DirectoryName at this point, but only if the Name was equal to "SP Source Directory" did I move into the second do loop.

Then, to move into the do loop without getting lost in my METABROWSE screen, I right-clicked on one of the "SP Source Directory" items in the windowing environment and selected Explore from Here (Number 1 in the diagram).
I then (Number 2) retrieved the unique metadata identifier for each occurrence in the Files list and grabbed the corresponding FileName attribute from that specific File.

Also, the rc= functions used in the data step (including metadata_getattr, metadata_getnasn, and metadata_getnobj) are all defined in the SAS Language Interfaces to Metadata Documentation. Have you found other uses for METABROWSE? What other techniques have you leveraged to generate reports on your metadata?

Where is the SAS code for that stored process?

Stored process code can exist within the metadata itself in version 9.3 (the benefits of which I discussed earlier). But for all other versions (and as an option in version 9.3), the SAS code is stored as a .sas file within the server's file system (or mapped folder/drive structure). When editing stored process code directly, you need to know where the stored process .sas file resides.

Here are a few places to find that .sas file location. Do you have any other ways?
1. Through the SAS Management Console properties screen, select the Execution tab.

2. Using PROC STP (SAS 9.3)

 options metaserver='machine'
         metaport=8561
         metauser='XXXXXXXXXX'
         metapass='XXXXXXXXXX';
 proc stp program='/Projects/Book Examples/12_d_AMOParetoEdits';
    list group=execution;
 run;

The output then provides:
NOTE: PROC_STP: ====== Stored Process: /Projects/Book Examples/12_d_AMOParetoEdits ======
NOTE: PROC_STP: ====== Metadata Listing for /Projects/Book Examples/12_d_AMOParetoEdits ======
NOTE: Logical Server: SASApp - Logical Stored Process Server
NOTE: Source Code Repository: C:\SAS\Stp Book
NOTE: Source File:
NOTE: Result Type: Packages No
NOTE: Streaming Yes
NOTE: PROC_STP: ====== End Metadata Listing for /Projects/Book Examples/12_d_AMOParetoEdits ======
NOTE: PROC_STP: ====== Stored Process: /Projects/Book Examples/12_d_AMOParetoEdits Return Status
= 0 ======
NOTE: PROCEDURE STP used (Total process time):
real time 0.63 seconds
cpu time 0.04 seconds

3. By running a metadata call
In the following example, I have a SAS program that pulls out a full list of all the stored processes registered in the metadata, each corresponding .sas file, and where on the server these physical files reside.

/*insert an options statement here to connect to your metadata server/port.*/
data new(keep=dirname stpname stpcode);
  label dirname="Directory Name"
        stpname="Stored Process Name"
        stpcode=".sas File Name";
  length uri name dirname stpuri stpcode stpassn stpname $256;

  n=1; nobj=1;
  do while(nobj >= 0);
    /* walk every Directory object registered in metadata */
    nobj=metadata_getnobj("omsobj:Directory?@Id contains '.'",n,uri);
    put nobj;
    if (nobj < 0) then leave;
    rc=metadata_getattr(uri,"Name",name);
    rc=metadata_getattr(uri,"DirectoryName",dirname);
    if name = "SP Source Directory" then do;
      rc1=1; x=1;
      do while(rc1>0);
        /* each File associated with the directory is a stored process .sas file */
        rc1=metadata_getnasn(uri,"Files",x,stpuri);
        if rc1 > 0 then do;
          rc2=metadata_getattr(stpuri,"FileName",stpcode);
          rc3=metadata_getnasn(stpuri,"AssociatedTransformation",1,stpassn);
          rc4=metadata_getattr(stpassn,"Name",stpname);
          output;
        end;
        x+1;
      end;
    end;
    n+1;
  end;
run;

title 'All Stored Process Locations';
proc print data=new label noobs;
run;

Included below is a screenshot of the resulting report.

  • About this blog

    Angela Hall manages a team of SAS Fraud Framework implementers within the SAS Solutions On-Demand organization. Angela is co-author of the books Building Business Intelligence Using SAS: Content Development Examples and The 50 Keys to Learning SAS Stored Processes.