There is a job category unfamiliar to most people that plays a crucial role in the creation of analytics software. Most can surmise that SAS hires software developers with backgrounds in statistics, econometrics, forecasting or operations research to create our analytical software; however, most do not realize there is another group of people who work closely with individual developers to test their code. For the analytics products at SAS these people are called analytics testers. What do they do?

At SAS, verifying the correctness of procedural output is termed “numeric validation.” This process consists of independently checking and verifying all the numeric output created by a developer in a SAS procedure or function. Just as SAS has invested in a large stable of talented developers with advanced degrees in specialized areas of statistics, econometrics, forecasting, operations research and mathematics, SAS has also invested in an equivalent stable of analytics testers with advanced degrees in the same specialty areas. One primary responsibility of an analytics tester is to ensure numeric correctness independent of the developer, which they typically do by replicating the method in alternate code. Think of dueling PhDs racing to implement the same algorithm, but in different ways. Numeric validation requires that their results agree, because agreement provides greater assurance that the implementation and output of the algorithm are correct.
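To make the idea concrete, here is a minimal sketch (in Python, purely for illustration; it is not SAS code) of what dueling implementations look like: two independently derived routines for the same quantity, here a sample variance, whose results are compared within a numeric tolerance. The function names and data are hypothetical.

```python
import math

def variance_two_pass(xs):
    """Textbook two-pass formula: compute the mean first, then squared deviations."""
    n = len(xs)
    mean = sum(xs) / n
    return sum((x - mean) ** 2 for x in xs) / (n - 1)

def variance_welford(xs):
    """Welford's one-pass streaming algorithm: a genuinely different code path."""
    mean = 0.0
    m2 = 0.0
    for i, x in enumerate(xs, start=1):
        delta = x - mean
        mean += delta / i
        m2 += delta * (x - mean)   # uses the updated mean
    return m2 / (len(xs) - 1)

# The "numeric validation" step: two independent implementations must agree.
data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
assert math.isclose(variance_two_pass(data), variance_welford(data), rel_tol=1e-12)
```

When the two values diverge beyond tolerance, neither side is presumed correct; the disagreement itself is the signal that starts the investigation described next.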

But what happens when they do not agree? This happens quite often during the software development process. As you might imagine, this leads to a lot of head-scratching and white-boarding to figure out the disparity. When the numbers differ, both the developer's and the tester's results are called into question. Who is correct? Did the developer implement the statistical algorithm in the C language the same way the analytics tester did in SAS/IML, or vice versa? Did the two interpret the algorithm differently from the source material, such as a journal article? Resolving the discrepancy can take time, because many algorithms and mathematical approaches are subtle and abstract, requiring years of training to interpret correctly. Codifying those subtleties and abstractions into closed-form, robust and well-tested C code that produces the correct values for SAS customers can be quite arduous, and so is numeric validation of the results through an independent pathway.

Let me give you a concrete example. One of the analytics testers for SAS/STAT, responsible for testing and validating a heavily used SAS procedure in drug trials, identified an issue with the cumulative incidence function (CIF) that was part of a new feature under development. Her numbers did not match output from the procedure the developer created. She notified the developer, and thus began a three-week exercise analyzing why their numbers differed. The tester had to write a 900-line SAS/IML program to independently calculate the CIF, because there were no other independent means to validate it. After much discussion, the developer determined that the analytics tester's approach was technically correct and adjusted his C code for the procedure accordingly.
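For readers unfamiliar with the CIF: in a competing-risks study, the CIF for one event type accumulates, at each event time, the overall probability of having survived all causes so far multiplied by the hazard of the event of interest. The sketch below (Python, an Aalen-Johansen-style estimator with standard textbook conventions) is a hypothetical miniature of the kind of independent calculation the tester performed; it stands in no way for her 900-line SAS/IML program or for the procedure's actual C code.

```python
def cumulative_incidence(times, causes, event_cause):
    """CIF for `event_cause`. causes: 0 = censored; 1, 2, ... = event types.

    Returns a list of (time, CIF) step points at each observed event time.
    """
    data = sorted(zip(times, causes))       # order by event/censoring time
    n_at_risk = len(data)
    surv = 1.0                              # overall survival S(t-), all causes
    cif = 0.0
    out = []
    i = 0
    while i < len(data):
        t = data[i][0]
        d_cause = 0                         # events of interest at time t
        d_any = 0                           # events of any cause at time t
        j = i
        while j < len(data) and data[j][0] == t:
            if data[j][1] == event_cause:
                d_cause += 1
            if data[j][1] != 0:
                d_any += 1
            j += 1
        if d_cause > 0:
            # CIF increment: survival just before t times cause-specific hazard
            cif += surv * d_cause / n_at_risk
        if d_any > 0:
            # update overall (all-cause) Kaplan-Meier survival
            surv *= 1.0 - d_any / n_at_risk
            out.append((t, cif))
        n_at_risk -= (j - i)                # drop events and censorings at t
        i = j
    return out

# Tiny illustrative dataset: subjects failing from cause 1 or 2, or censored (0).
points = cumulative_incidence([1, 2, 3, 4], [1, 0, 2, 1], event_cause=1)
```

The key subtlety, and the kind of detail that sparks three-week investigations, is that the CIF increment uses the *all-cause* survival probability, not the cause-specific one; naively applying one minus a cause-specific Kaplan-Meier estimate overstates the incidence.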

On the surface one might think this is just two statisticians arguing over a seemingly arcane issue, but the computation is critical in a field of statistics referred to as survival analysis. Biostatisticians and medical researchers use survival analysis to determine which factors increase the probability of survival for subjects in medical studies. Life-altering decisions are made based on the results of this analysis, so it is not hyperbole to say that numeric validation can be a life and death matter.

SAS analytic software is used the world over to drive decisions in nearly every area of scientific research, business, and government policy setting. Ensuring SAS software is well-tested and numerically correct is key to the integrity of those decisions, and analytics testers are a core part of that process. A few years ago SAS CEO Jim Goodnight said, “SAS was, is and always will be a collection of people who put the needs of the customer first, produce quality software unmatched by any other, and thrive in an innovative workplace that serves as a model across the globe.” This week at SAS is the second annual Quality Week, which is dedicated to raising awareness of the importance of software quality and empowering employees to help solve quality challenges. As SAS employees turn their attention to quality this week, I am proud to lead a team whose particular contribution to quality is numeric validation.


Jim Metcalf has been at SAS since 1993 and has worked as a software tester, applications developer, director of product management, director of web application development, and director of web application testing and validation. For the past two years he has served as senior director of testing and validation in the Advanced Analytics Research and Development Division of SAS. Prior to coming to SAS, Jim worked for Shell Oil Company and used SAS to help explore for oil. Jim holds B.S. and M.S. degrees in geophysics from the University of Oklahoma.