“Bolder” statistics with Karen Copeland

Karen Copeland, Ph.D., owner and sole employee of Boulder Statistics, a statistical consultancy, will be the guest on Analytically Speaking on June 8.

Dr. Karen Copeland will be our featured guest on Analytically Speaking on June 8. She is the owner of Boulder Statistics, a successful consultancy serving a wide array of industry sectors around the world — medical devices, diagnostics, chemicals, marketing, environmental, consumer and food products, pharmaceuticals, and web analytics, among them. When Karen named her company, she may not have intended it to be a play on words, but I think it’s fitting. She has made some bold steps in her career.

She works with scientists and engineers and enjoys the diverse projects she is given, as well as the challenge of learning new methods to be an effective problem-solver. With more than 20 years of applying statistical methods and working in academia and industry before starting her consultancy, she has some interesting stories and experiences to share. She also co-authored the books The Analysis of Means: A Graphical Method for Comparing Means, Rates, and Proportions and Introductory Statistics for Engineering Experimentation, as well as a number of journal articles.

You may recognize Karen from her popular posts to the JMP Blog over the last few years, including her recent posts on model visualization.

Karen has a great deal of technical expertise — on such topics as analysis of means, experimental design and data visualization — but she also has other important skills that contribute to her success, such as effective communication and the ability to see the big picture, knowing which questions to ask and identifying the best path forward. We'll cover those subjects in our live interview.

We hope you will join us June 8. If you can’t join the live webcast, you can always catch the archived version, which is usually available by the following day.


Remaking a mosquito trends chart

Recreating graphs is a hobby of mine. It helps me both test the limits of JMP and sharpen my own data handling and visualization skills. This time, there was a third benefit: finding a significant data error in the published chart.

I recently saw this interesting mosquito trends chart as part of an article, “When the mosquitoes will be biting in your state,” on the Washington Post’s Wonk Blog. It shows Google search trends for the word “mosquito” by state, with each state on a different scale.

[Image: The Washington Post's original mosquito trends chart]

It’s not a typical analytical graph, but I thought the layout would be a good test of Graph Builder’s small multiples grouping, and I was intrigued by the overall lack of geographic pattern. For instance, the article mentioned that Tennessee is quite unlike its neighbors, and the same can be said for other states. The main point of using a map is to show geographic patterns, but the connections are pretty weak here.

Getting the Data

The article nicely includes a link to the Google Trends page for mosquito trends for the whole United States, and that page nicely has a Download as CSV menu item.

However, that’s where the niceness ends for getting the data. Each state trend is on a separate page, there’s no separate URL for the download, and the CSV file is not really a pure CSV file. At least the state URLs followed a pattern, so I wrote a script to open each state in a separate tab in a web browser (a sketch of that idea follows the snippet below). Then I had to manually click the Download as CSV menu item for each tab. Each “CSV” followed a regular pattern, which included some descriptive text and multiple embedded tables. An example snippet:

Web Search interest: Mosquito
Wisconsin (United States) 2014-2015

Interest over time
Week,Mosquito
2014-01-05 - 2014-01-11,4
2014-01-12 - 2014-01-18,3
2014-01-19 - 2014-01-25,4
...
2015-12-27 - 2016-01-02,4

Top metros for Mosquito
Metro,Mosquito
Duluth MN-Superior WI,100
Wausau-Rhinelander WI,85
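The tab-opening script can be a simple loop over the state pages. Here is a minimal JSL sketch; the URL below is just a placeholder, not the real Google Trends pattern, and the state-code list is truncated:

state codes = {"AL", "AK", "AZ"}; // ... and so on through the remaining states plus DC
For( k = 1, k <= N Items( state codes ), k++,
   Web( "https://www.google.com/trends/..." || state codes[k] ) // opens one browser tab per state
);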

Fortunately, JMP’s text import wizard let me tell it how many lines to skip and how many to read. After doing it once in the wizard, I looked at the generated script and was able to put it in a loop to read the other files the same way:
Open( "report (" || Char(i) || ").csv",
   Columns( Column( "Week", Character), Column( "Mosquito") ),
   Import Settings( Column Names Start( 5 ),
      Data Starts( 6 ), Lines To Read( 104 ) )
);
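Wrapped in a loop, the same read can handle every file. A minimal sketch, assuming the downloads are named "report (1).csv" through "report (51).csv" and sit in the current directory (adjust the names and count to match what your browser saved):

For( i = 1, i <= 51, i++,
   Open( "report (" || Char( i ) || ").csv",
      Columns( Column( "Week", Character ), Column( "Mosquito" ) ),
      Import Settings( Column Names Start( 5 ), Data Starts( 6 ), Lines To Read( 104 ) )
   )
); // the resulting tables can then be combined with Tables > Concatenate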

After doing all that and splitting the first field into separate start and end dates, I could concatenate all the tables into one big table containing all the state data, which is two years of weekly data.

First Look at the Data

We can see all the data fairly well in this small multiples line chart overlaid by year.

[Image: small multiples line chart of weekly "mosquito" search interest by state, overlaid by year]

We don’t have the geographic arrangement, but we can already see enough to compare against the original, and two features stood out to me:

  1. Sometimes the years were quite different within a state (especially Arizona and Hawaii).
  2. Tennessee does not have the spikiness of the original.

The first observation suggests we probably need more years of data to establish what the article calls a “typical” year, and I haven’t pursued that yet. The second item was more of a mystery. I initially thought it was an artifact of binning, since the original chart shows data by month instead of by week, but that didn’t hold up as I looked at the data more closely. I double-checked my graph against the Google Trends page for Tennessee, and they agreed. I continued on, hoping to discover the source of the discrepancy.

Arranging the States

The above chart uses the Group Wrap role in Graph Builder to lay out the states in a grid alphabetically. For complete control, I assigned each state a row and column value and used those values in the Group Y and Group X roles.
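In script form, the layout amounts to something like this sketch (Row and Col are my placeholder names for the layout columns; Week Start, Mosquito and Year stand in for the data columns):

Graph Builder(
   Variables(
      X( :Week Start ), Y( :Mosquito ),
      Group X( :Col ), Group Y( :Row ), // one cell per state, at its assigned grid position
      Overlay( :Year )
   ),
   Elements( Line( X, Y ) )
);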

[Image: the states laid out in a custom grid using the Group X and Group Y roles]

In an effort to approximate the original chart, I switched from the overlaid lines to smoothed area charts. (I didn’t do bars because I still haven’t tried converting weeks to months – or maybe there's a way to get monthly data from Google Trends.) That was enough to notice that some of my states looked like other states in the original. The original Tennessee and Ohio look a lot like my South Dakota and North Dakota; the original Pennsylvania looks a lot like my Oregon. Luckily, a pattern occurred to me: the original chart’s states are off by one alphabetically!

Almost, anyway. After further study, I realized only the states after District of Columbia were off by one. Our charts agreed on the other states. Coincidentally, I also had a similar error in my data where I had downloaded the DC data twice, and my initial charts were off by one before DC. Weird. After some Twitter messages, the author confirmed my findings and quickly updated the Wonk Blog post graph and commentary.

Scaling the Data

In the data from Google Trends, each state’s interest levels are scaled so the maximum value is 100. That means the magnitudes are not comparable from state to state, as you would expect them to be in a small multiples chart. Only the patterns (e.g., spikiness) can be compared in this case.

How could I convert the state data to be on the same scale? The Google Trends page for the whole US includes a list of summary levels for each state. If those summaries represent each state’s average value, we can make the adjustment with a scale factor, as in the rough sketch below.
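Here is a minimal JSL sketch of that rescaling. It assumes the concatenated table has columns named State, Mosquito (the 0-100 weekly value) and Summary (the per-state level from the US-wide Trends page); the column names are mine, not from the original data.

dt = Current Data Table();
dt << New Column( "Mosquito Scaled",
   Numeric,
   "Continuous",
   Formula( :Mosquito * :Summary / Col Mean( :Mosquito, :State ) ) // make each state's mean equal its summary level
);

Here’s the result with all the states on a common scale.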

[Image: smoothed area charts with all the states on a common scale]

Notice I also used a different state layout of my own design, trying to give my home state of North Carolina a truer positioning. Within the constraints of equal-sized rectangles, it's impossible to preserve all the geographic properties. The layout in the original chart is from Jon Schwabish at PolicyViz and has the nice property that it resembles the overall shape of the country.

Given the thinness of many of the curves, I added a background color based on each state's average value, which, besides allowing a big-picture view of the yearly pattern, helps anchor the areas in their cells and makes the labeling clearer.

[Image: the common-scale chart with cell backgrounds colored by each state's average value]

Insights Gained

Though the Wonk Blog article makes it clear that the Google Trends data may not be representative of any real-world trends, by redoing the analysis I did gain a few insights I couldn't get from the original article and graph:

  • Some states have significant year-to-year variation.
  • Monthly versus weekly aggregation may be an issue.
  • The state data was normalized by the maximum value.
  • The Dakota spikes are really high (if my re-scaling is right).

I realize it’s not always practical to do our own remakes of graphs and analyses, but it’s a great way to really understand the nuances of the data.


AMA Advanced Research Techniques (ART) early registration ends Thursday!

The co-chairs of last year's AMA ART Forum kick off the last day of the 2015 conference. Join us this year in Boston!

I’ve posted here in the JMP Blog about the American Marketing Association’s Advanced Research Techniques (ART) Forum and the impressive work that’s presented there every year. As co-chair, I am doubly excited for this year’s conference, which will take place June 26 – 29 in Boston, MA.

We had an amazing group of paper submissions that we used to build the program schedule. Take a look. You'll see sessions on such topics as Social Networking, Market Segmentation, and analyzing Large and Unstructured data. I’ll be chairing one of two sessions on Choice Modeling, and my conference co-chair, Rex Du, will be chairing a session on Marketing Metrics. We’ll also have a full slate of tutorials to choose from on the Sunday before the official conference begins. Among the tutorials are Introduction to Discrete Choice Experiments, Probability Models for Customer-Base Analysis, and Leveraging Online Search Trends in Marketing Research.

One of the things I love about ART Forum is the interaction between academic researchers and marketing research professionals. The conference sessions and format are designed to encourage discussion. In every session, academics present alongside experts who are working in industry. Post-presentation discussions from market leaders show how to work through problems that can arise when implementing a new technique, as well as how to assess the gains from doing so. This year, our presenters have made a special effort to make sure white papers and how-to guides are part of the conference take-home materials.

Between the tutorials for basic training and the cutting-edge research techniques on display at the main conference, ART Forum is a great training opportunity for people who are new to marketing research as well as for experienced professionals.

Early bird registration ends this Thursday, so register soon. I hope to see you in Boston next month!


When should programming come into play in statistics courses?


Does exposure to coding in statistics courses dampen students' enthusiasm — for both programming and statistics?

Both academically and professionally, more courses are being offered and developed to make more people comfortable with data, analysis and risk assessment. This necessitates some use of statistics, and software is pretty much a tool of the trade. Software — some new, some enhanced, some commercial and some open source — is increasingly available to broader audiences and is ever-changing.

For the quantitative courses I took in college, I had to learn some coding languages to use SAS, SPSS and SHAZAM. I was not a fan of learning JCL and other programming languages initially and found learning the syntax of the languages an impediment to understanding statistical concepts.

On the positive side, even my limited coding skills later proved useful in my career. But for many of my classmates, the exposure to coding dampened their enthusiasm — for both programming and statistics. Once I was exposed to the highly visual and interactive experience that JMP provides for data exploration and analysis, I wondered whether I would have understood statistical concepts more quickly, and whether my classmates would have had greater enthusiasm for statistics, had we used JMP.

More intro stats courses are being offered as MOOCs. Many universities are evolving their curricula to include business analytics and other courses to appeal more broadly to engage more people in statistical thinking. Professionally, more basic data analysis courses are being offered as well. In light of all this, it’s interesting to see which software is used: spreadsheets, interactive visual software like JMP, some SAS interfaces, interfaces to R, Minitab, etc., as well as language-based approaches like R, SAS, Python and others.

What factors affect which software is used in courses?


I wonder if I would have understood statistical concepts more quickly if I had had access to JMP in college.

Having written a blog post about teaching statistics with JMP and continuing to engage with academics on how they teach statistical concepts, I’m curious about the motivating factors in choosing software for use by students with such varied levels of numeracy. Often, cost is the driving factor. Open source software is freely available. Excel is so ubiquitous that it is essentially perceived as free (but many recognize the limitations of spreadsheets).

Another motivating factor of some intro-level courses may be to leave the students with more marketable skills, and knowing a popular programming language is certainly such a skill (in addition to knowing about data analysis, of course).

Yet another consideration could be inertia: the software that is already installed, what has been used before and what the instructor already knows.

Teaching how to think statistically

But beyond these factors, many instructors truly want to engage more students to see and feel the power of data, to experience what it is to “think statistically.” They recognize that many people will appreciate and benefit from understanding statistical concepts, but may never go on to learn any programming languages. They may be capable of statistical thinking without knowing how to program. Obvious examples would be doctors and judges, whose recommendations and decisions can powerfully affect people's lives.

I recently finished reading Risk Savvy: How to Make Good Decisions by Gerd Gigerenzer. For many important decisions regarding our health, finances and more, he shares well-founded research on how we can better assess risk to make better decisions. For example, he has done a lot of work with doctors on communicating probabilities to patients more clearly (in short, he advises translating probabilities into natural frequencies). For more along these lines, David Spiegelhalter, who has done a great deal to educate the masses about understanding uncertainty and the many things to consider in presenting risk to decision-makers, has written a great blog post with interactive graphics, "2845 ways to spin the Risk."

Understanding risk is part of thinking statistically, an important skill in this data-rich era. When it comes to attracting the broadest audience and giving more people a foundational understanding of important statistical concepts, there is considerable evidence that interactive data visualization plays an important role. Through dynamic and interactive graphs, learning becomes play.

Observations from statistics professors

Many professors/instructors offer compelling reasons for taking a visual path (and choosing JMP) as a means to introduce more people to statistical thinking. For example, here are a few excerpts from an interview last year with Christian Hildebrand, Assistant Professor of Marketing Analytics at the Geneva School of Economics and Management:

  • “[Students] said ‘Wow, I never knew that statistics could even be fun!’ That’s when I realized that the statistical software is not just a medium, it is an environment that can actually help in understanding statistical concepts better.  JMP was a big amplifier for that."
  • "With the software focusing so heavily on visualization, it’s much easier for you to really understand what is the issue in the data. It's critical for students to understand their data better by interacting with the data in a software environment like JMP. "
  • "What students really loved about the software was that they had a very intuitive way of learning. This intuition is very important because statistics is very much cognitive, and you have to learn the basics. At the same time, it is very important to still be creative and to think about new hypotheses, and very often you learn that out of the data. The capabilities you have with JMP — with the rich visualization capabilities — those are key to understand statistical concepts better.”

Peter Goos, Full Professor at the University of Antwerp in the Department of Environment, Technology and Management, and David Meintrup, Professor of Mathematics and Statistics at the Ingolstadt University of Applied Sciences, co-authored Statistics with JMP: Graphs, Descriptive Statistics and Probability. In their preface, they say:

"We chose JMP as supporting software because it is powerful yet easy to use…. We believe that introductory courses in statistics and probability should use such software so that the enthusiasm of students is not nipped in the bud. Indeed, we find that, because of the way students can easily interact with JMP, it can actually spark enthusiasm for statistics and probability in class."

David Meintrup also recently shared this story: "I always end the first session on JMP with Graph Builder. The first time my students see how to interactively create a map of the unemployment rate in Europe over the years 2000-2015, they are blown away. I can see how their facial expression changes, and from that point on I don't need to worry about motivation anymore."

Iddo Gal, Senior Lecturer and past Chair of the Department of Human Services at the University of Haifa, and past President of the International Association for Statistical Education, shared this:

"In 2015, I attended the JMP workshop (three hours) in our IASE Satellite in Rio, and remember being particularly impressed with these tools, which far exceed options in other packages, and for me can help our participants see what is unique about it and also does not require strong formal/procedural skills. I also recall how the local (Brazilian) statisticians were taken by surprise — they said they work so hard to impart the technical [formulaic, statistical] underpinnings of multivariate stuff and running traditional analyses, and their students struggle with traditional outputs — yet within 15 minutes into the visualization portion of the JMP workshop, all of a sudden, they realized how their students can view things so much easier and understand and see what is coming out.”

Earlier this year, in an interview, Jason Brinkley, biostatistician and senior research methodologist at American Institutes for Research, discussed some of his experiences teaching with JMP, drawing on his 2014 Discovery Summit paper, Using JMP as a Catalyst for Teaching Data-Driven Decision Making to High School Students. Though the course targeted high school students who were gifted in math and science, Jason explained that this hands-on approach was well received, especially by the students who had not yet taken Advanced Placement Statistics. They could see and feel the power of data, and this piqued their interest. Jason said, “You could see the passion start to come up from the students, not necessarily about the research but about the data.”

What about you?

For those of you in the noble profession of teaching, how do you teach statistical concepts to a broad audience? Is some level of programming involved from the beginning, do you take a more visual approach, or do you give the students options to choose the tools they use?

For those of you who were/are students, how were you introduced to statistics? Did you have to learn a programming language first or did you learn via an interactive tool like JMP? If the former, do you think you would’ve understood the concepts more quickly if you’d had a more visual introduction? If the latter, did you later invest in learning a language (perhaps JSL?) anyway because it helped you do more with your data?

Thanks for your interest and I look forward to hearing from you!


Does the pedigree of a thoroughbred racehorse still matter?

You may have heard that a horse named Nyquist won the Kentucky Derby recently. Nyquist was the favorite going into the race, though he was not without his doubters. Many expert race prognosticators questioned his stamina, and I was curious about the basis for those comments.

My due diligence revealed that breeders (and race handicappers) have a language of their own when speaking of these great thoroughbred horses. For example, in evaluating a horse’s potential, they speak of “dosage,” which at first glance appears to imply the use of performance-enhancing drugs. However, the term has been in use since at least the early part of the 20th century (in France and England), and it refers to the horse’s pedigree.

Here is a brief explanation of commonly used terms:

Dosage. A system designed to predict the distance potential (stamina) of horses based on the sires in the first four generations of their pedigrees. Categories range from most speed/least stamina to least speed/most stamina, and points are assigned over this range to form the Dosage Profile.

Two key statistics can be generated from the Dosage Profile:

The Dosage Index (DI). DI is the ratio of points in the “speed wing” to points in the “stamina wing.” The average DI for racehorses in North America is 2.4. Nyquist’s DI is 7, indicating that he has 7 times as much speed as stamina in his pedigree. Since 1940, only one other horse (Strike the Gold, 1991) has had a DI that high!

Center of Distribution (CD). CD provides a point of reference relative to the horse’s pedigree, with a metric ranging from +2.00 to -2.00. A positive CD indicates more speed in the pedigree, and a negative CD indicates more stamina. The average racehorse in North America has a CD of 0.70. Nyquist’s CD is 1, indicating that he should have more speed than the average horse.

The fundamental theory behind dosage is that the higher the DI and CD, the lower the distance potential of the horse; conversely, the lower the DI and CD, the greater the horse's distance (stamina) potential.
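For the curious, here is a small JSL sketch of how DI and CD are typically computed from a Dosage Profile. It assumes the standard five point categories (Brilliant, Intermediate, Classic, Solid, Professional), with the Classic points split evenly between the speed and stamina wings; the point values below are made up for illustration and are not Nyquist's actual profile.

b = 10; i = 4; c = 4; s = 2; p = 0; // hypothetical Dosage Profile points
speed = b + i + c / 2;   // "speed wing" points
stamina = c / 2 + s + p; // "stamina wing" points
di = speed / stamina;    // Dosage Index
cd = (2 * b + i - s - 2 * p) / (b + i + c + s + p); // Center of Distribution, ranging from +2 to -2
Show( di, cd );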

The rule of thumb has been that a horse with a DI higher than 4.0 and a CD higher than 1.25 will not perform well and would not win the Triple Crown races like the Kentucky Derby, the Preakness or the Belmont Stakes. For 50 years, this rule of thumb appeared to be valid, as there were only two race winners with a DI of at least 4 from 1940 to 1990, but something changed the game in 1991.

[Chart: Dosage Index of Kentucky Derby winners since 1940]

In 1991, a horse named Strike the Gold kicked down the barn door of pedigree metrics and ushered in a new era in high-stakes horse racing as he won the Kentucky Derby with a DI of 9. I sought to analyze this change using control charts from JMP’s Quality and Process platform.

Control charts use the data to differentiate what is typical from what should be considered special. In creating my control chart, I realized that 1991 wasn’t the first time that such a change had occurred since 1940; it’s just the first one that emphasized that something was rendering metrics like DI and CD outdated.

In the chart below, note the change in the average value of DI for each time period. What’s causing these changes? New training methods? Something else?

[Chart: control chart of Kentucky Derby winners' DI, showing shifts in the average across time periods]

On to the Preakness!

Given what we have seen in Kentucky Derby winners since 1940, can we throw out the historical data in this post-1990 world of racing? Does the DI rule of thumb still hold true? And, does pedigree still matter in the world of horse racing?

Here’s a graph depicting more than 30,000 horse races since 1980, and it indicates that the rule of thumb still holds true: The longer the race, the lower the DI.

[Chart: DI versus race distance for more than 30,000 races since 1980]

The DI metric has seen some interesting activity in the Preakness as well. But since 1990, it’s actually been trending lower. And since 1940, the average DI for the Preakness (a 9.5-furlong race) winner is 2.82, a number that falls within historical expectations (according to the chart above).

[Chart: DI for Preakness winners over time]

Conclusions

For more than 50 years, racehorses in the Kentucky Derby behaved as expected relative to their pedigree-related metrics. At least in the Kentucky Derby, something has apparently changed, and horses that previously would not have even been considered contenders have been winning! Might better training methods be enabling speed-bred horses to run the distance? Might it be drugs? What do you think?

Regardless, the numbers indicated that Nyquist would not win the 10-furlong Kentucky Derby. On Saturday, he’ll run in the Preakness, which is a 9.5-furlong race. The numbers again say “No,” but this horse has a calmness about him that’s rare. We'll have to watch the race and see how it turns out!


Graph Builder tutorial materials

I'll be leading a pre-conference tutorial on Graph Builder at this year's Discovery Summit conference in Cary. We'll start with the basics and then walk through more advanced ways to create effective visualizations.

We did a similar tutorial this spring in Amsterdam, and materials from that course are posted in the JMP User Community. The materials include pictures and source files (data tables and scripts) for recreating 100 different graphs, some simple and some advanced.

Take a look, and if you're interested, sign up for the live tutorial in September.


Discovery Summit: Best-in-class analytics conference

Four times a year, we host Discovery Summit, where scientists, engineers, statisticians and researchers exchange best practices in data exploration and learn about new and proven statistical techniques.

Past attendees have called the event a “best-in-class conference to benchmark best practices in analytics” with sessions that are “immediately relevant to daily work.”

Save your seat among them.  The conference is Sept. 19-23 at SAS world headquarters, and there’s no better time to learn from fellow JMP users and to grow your network of analytically minded people.

As always, you’ll have a chance to meet with developers in person, hear real-world case studies from power users and find inspiration in thought leader keynotes.

You can also add training courses or tutorials to your week. Training courses, led by SAS education instructors, combine lectures, software demonstrations, question-and-answer sessions and hands-on computer workshops for an interactive learning experience. Tutorials, led by JMP developers, are a rare opportunity for you to go in-depth on specific topics with the experts themselves.

And here’s the inside scoop: Sign up soon because the first 225 people to register will have the opportunity to attend the opening dinner held at the home of SAS co-founder and JMP chief architect John Sall.

The QbD Column: Split-plot experiments

Split-plot experiments are experiments with hard-to-change factors that are difficult to randomize and can be applied only at the block level. Once the level of a hard-to-change factor is set, we can run several combinations of the other factors while keeping that level fixed.

To illustrate the idea, we refer in this blog post to an example from a pre-clinical research QbD (Quality by Design) experiment. As mentioned in the first post in this series, QbD is about product, process and clinical understanding. Here, we focus on deriving clinical understanding by applying experimental design methods.

The experiment compared, on animal models, several methods for the treatment of severe chronic skin irritations[1]. Each treatment involved an orally administered antibiotic along with a cream that is applied topically to the affected site. There were two types of antibiotics, and the cream was tested at four different concentrations of the active ingredient and three timing strategies.

The experiment was run using four experimental animals, each of which had eight sites located on its back from the neck down. Thus, the sites are “blocked” by animal. For each animal, we can randomly decide which sites should be treated with which concentration-by-timing option. The antibiotics are different: they are taken orally, so each animal could get just one antibiotic, which then applies to all the sites on that animal.

The analysis included a number of outcomes, and the most important were those that tracked the size of the irritated area over time, as a fraction of the initial size at that site. The primary CQA (Critical Quality Attribute) summarizing the improvement over time is the area under the curve[2] (AUC), and that is the response that we will analyze in this blog post. The AUC is an overall measure of the rate of healing, with low (high) values when healing is rapid (slow).

For more details on split-plot experiments, a great source is the introductory paper by Jones and Nachtsheim[3].

Why is split-plot structure important?

In the topical cream treatment study, the animals form experimental blocks. The basic reason for considering blocks in the data analysis is that we expect results from different sites on the same animal to be similar to one another, but different from those for sites on other animals. We take advantage of this property when we compare timing and concentration. Those comparisons are at the “within animal” level, which neutralizes the inter-animal variation and thus improves precision. For the antibiotics, the differences between animals will affect our comparison. The fact that we think of each animal as a block means that we do expect to see such differences. We need to take this into account both in designing the experiment and in analyzing the results.

What are whole plots and sub plots?

We use the term “whole plots” to refer to the block-level units and “sub plots” to refer to the units that are nested within each whole plot. In the example above, the animals serve as whole plots and the sites as sub plots. The terminology goes back to Sir R.A. Fisher, the pioneer of the statistical design of experiments.  Fisher worked at an agricultural research station in Rothamsted, UK, in the early 20th century. Typical experiments at this station involved comparing types of crops, planting times, and schedules of irrigation and fertilization.

Fisher observed that some of these factors could be applied only to large plots of land, whereas others could be applied at a much finer spatial resolution. So “whole plots” to Fisher were the large pieces of land, and “sub plots” were the small pieces that made up a whole plot. Some experiments have more than two such levels. Nowadays, we continue to use these terms, even though split-plotting affects many kinds of experiments, not just field trials in agriculture, and the “whole units” and “sub units” usually are not plots of land. In the QbD context, they consist of animal models, like in the example used here, batches of material or setup of production processes.

When does split-plotting occur?

There are many possible sources of split-plot structure in an experiment. Sometimes, as above, we have “repeat measurements,” but at different conditions, of the same experimental subject. Sometimes the experiment involves several factors that are difficult to set to varying experimental levels, such as a column or bioreactor temperature. In that case, it is common to set the hard-to-change factors to one level and leave them at that level for several consecutive observations, in which the other factors are varied. This leads to a split-plot experiment, with a new whole plot each time the hard-to-change factor(s) are set to new levels.

Sometimes a production process naturally leads to this sort of nesting. For example, consider an experiment to improve production of proteins for a biological drug. The process begins by growing cells in a medium; then the cells are transferred to wells in a plate where they produce protein. An experiment might include some factors that affect the growth phase and others that affect only protein production. Dividing the cells in a flask among several different wells makes it possible to test the production factors at a split-plot level.

How do I design a split-plot experiment?

The Custom Design Tool in JMP makes it easy to create a split-plot design. First, enter the factors in your experiment. The split-plot structure is specified using the column labeled “Changes” in the factor table. The possible entries there are “easy,” “hard” and “very hard,” corresponding to three levels of nesting among the factors. Factors that can be assigned at the individual observation level are declared as “easy” (the default setting). Factors that can be applied only to blocks of observations are declared as “hard.”

There may be a third level consisting of factors that can be applied only to “blocks of blocks,” and these factors are labeled as “very hard.” Figure 1 shows the factor table for our experiment. Antibiotic is the hard-to-change factor because it is applied to the animal, not to the individual sites. Timing and concentration are numerical factors, but the company wished to compare all the levels under consideration directly, without extrapolating from a regression model, so we decided to declare all the factors as categorical. Note that a concentration of 0 means applying a base cream with no addition of the compound being tested.

Figure 1: Factor definition for the experiment

The next step is to specify any special constraints. For example, some factor combinations may be impossible to test, or there might be some inequality constraints that limit the factor space. Then you need to declare which model terms you want to estimate, including main effects and interactions. In our experiment, the company wanted to estimate the main effects of the three factors. They wanted information on the two-factor interactions but did not consider it essential; however, the experiment is large enough to permit us to estimate all these terms. If there were fewer runs, we could indicate the “desired but not crucial” status by clicking on the estimability entry for these terms and choosing the “if possible” option. See Figure 2.

Figure 2: Model definition for the experiment

We are then asked to specify the number of whole plots, i.e., the number of animals available.  For our study, there were four animals. Finally, we need to specify the number of runs. The Custom Design tool recommends a default sample size, tells us the minimum possible size and allows us to specify a size. In our experiment, it was possible to stage eight sites on each animal, for a total of 32 runs.

Clicking on the “Make Design” button generates the design shown in Table 1. There are four whole plots with each antibiotic assigned to two of them. There are 12 combinations of timing and concentration, but only eight sites on each animal. So it is important to make an efficient choice of which of these treatment combinations will be assigned to each site. Moreover, there is no textbook solution for this allocation problem. This is a setting where the algorithmic approach in JMP is extremely helpful.

Table 1. The 32-run design for three factors in four whole plots of eight runs each. The factor “antibiotic” can only be applied at the “whole plot” level.

The design found by JMP uses the two-hour time scheme for 12 sites and each of the other schemes for 10 sites. Each concentration is used eight times. Each timing by concentration option is used either two or three times. (Note that we would need 36 runs to have equal repetition of the combinations, but the experiment has only 32 sites.) The design automatically includes a Whole Plots column – in our experiment, this tells us which animal is studied, so we changed the name of the column to "Animal."

How does a split-plot experiment affect power?

The power analysis in Table 2 is instructive. We see that the power for the timing and concentration factors is much higher than for the antibiotic. The higher power is because we are able to compare levels of these factors “within animal,” thus removing any variability between animals. For the antibiotics, on the other hand, the comparison is affected by the variation between animals, so that the relevant sample size is actually four (the number of animals) and not 32 (the number of sites).

Table 2. Power analysis for the 32-run design.

It is important to realize that the power analysis must make some assumptions. These include the size of the factor effects (the Anticipated Coefficients) and the magnitude of the variances. The entry for RMSE in the table is for the site-to-site variation. There is also an assumption about the ratio of the “between animal” variation to the “within animal” variation. The default assumption is that they are roughly the same size. If you thought that most of the variation was between animals, the default should be changed to a number greater than 1. To do so, click on the red triangle next to Custom Design, select Advanced Options, and then Split-Plot Variance Ratio.

How do I analyze a split-plot experiment?

The analysis follows the same general structure as for other designed experiments; see the earlier blog posts in this series. The major difference is that we need to add the factor Animal to the model as a “random effect.” It is this random effect term that tells JMP that the experiment is split-plot. Use Fit Model under the Analyze menu. If you launch it from the design table, the list of model terms will automatically include the random effect. If you access the data differently, you will need to add Animal to the list of model effects and declare it as a random effect by highlighting the term and clicking on the “Attributes” triangle next to the list of model terms. The first option there is “Random Effects.”
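As a rough JSL sketch (column names follow this example; the Model script saved to the design table will already include the random whole-plot term), the model specification looks something like this:

Fit Model(
   Y( :AUC ),
   Effects(
      :Antibiotic, :Timing, :Concentration,
      :Antibiotic * :Timing, :Antibiotic * :Concentration, :Timing * :Concentration,
      :Animal & Random // the whole-plot term, declared as a random effect
   ),
   Personality( "Standard Least Squares" ),
   Method( "REML" ),
   Run()
);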

What happened in the skin irritation experiment?

We analyze the results on AUC.  Effective treatment combinations will have low values of AUC. Table 3 shows tests assessing whether the factors have significant effects. There is a clear effect associated with concentration (p-value=0.002). The effect for timing has a p-value of 0.076, so there is an indication of an effect, but much weaker than for concentration. The F-statistic for comparing the two antibiotics is larger than the one for timing. However, it has a p-value of 0.078, close to the one for timing. The reason is that the antibiotic comparison is at the “whole plot” level and so has more uncertainty, and much lower power, than the comparisons of timing strategies and concentrations.

Table 3. Effect tests.

None of the interactions is strong. So concentration is clearly the dominant factor. Table 4 and Figure 3 summarize the estimated effect of concentration on AUC. There is a clear relationship, with higher concentration leading to lower AUC, hence faster healing.

Table 4. Estimated mean AUC for the four concentrations.

Figure 3. Plot of the estimated mean AUC by concentration.

The analysis includes a random effect for “Animal,” reflecting the team’s belief that part of the variation is at the “inter-animal” level. Table 5 shows estimates of the “within animal” (residual) and the “between animal” variances.  The Var Component column lists the estimated variance components: 0.0029 at the “within animal” level and 0.0010 at the “between animal” level. The first column gives the between animal variance as a fraction of the “within animal” variance, estimated to be about 0.34 for our experiment.

Table 5. The estimated variance components.

What are the take-home messages?

The topical cream study provided valuable information: the cream is more effective at higher concentrations. The use of multiple sites per animal permitted “within animal” comparisons of the concentrations and timing, so that the positive effect of increasing concentration could be discovered with a small number of animals. The “between animal” variation was only about one-third as large as the “within animal” variation. This was a surprise, as we had expected substantial inter-animal variation. Of course, the estimate of inter-animal variation is based on a very small sample, and thus quite variable, so we will still be careful to take account of split-plot structure in future experiments like this one. Consequently, factors that must be administered by animal, rather than by site, will be detectable only if they have very strong effects or if the number of animals is increased.

Coming attractions

The next post in this series will look at the application of QbD to design analytic methods using Fractional Factorial and Definitive Screening Designs.

References

[1] For more information on testing topical dermatological agents, see the FDA “Guidance for Industry” document at http://www.fda.gov/ohrms/dockets/ac/00/backgrd/3661b1c.pdf

[2] In the example, we use data normalized to [0,1] after dividing all by the largest AUC.

[3] Jones, B. and Nachtsheim, C. (2009). Split-Plot Designs: What, Why, and How, Journal of Quality Technology, 41(4), pp. 340-361.

About the Authors

This blog post is brought to you by members of the KPA Group: Ron Kenett, David Steinberg and Benny Yoskovich.



Discovery Summit China focuses on global trends in data analysis

Feng-Bin Sun of Tesla delivers a keynote speech at Discovery Summit China.

About 200 experts, analysts, and JMP users and fans from all trades and professions gathered in Shenzhen for Discovery Summit China 2016.

The conference focused on the latest global trends in data analysis and its application.

Attendees came from government, banking, automotive, pharmaceutical, energy, semiconductor, electronic and public service organizations, to name a few. The annual analytics event took place at the Four Seasons Hotel in Shenzhen on April 29.

The day began with three keynote talks, featuring:

  • SAS co-founder and Executive Vice President John Sall on the JMP story. The JMP creator detailed the design and evolution of the software over 27 years, from release 1 to release 12, and noted that JMP 13 will be out in September.
  • Feng-Bin Sun of Tesla on data analysis in high-tech product research and development. He provided an overview of data analysis trends as seen in leading companies and examples featuring product reliability.
  • Author Kaiser Fung on why numbersense is a priceless asset in data science. He likened the data analysis process to running an obstacle course full of trapdoors, dead ends and diversions, explaining why the best analysts have a keen sense of direction as they navigate data.

Experienced JMP users led breakout sessions on the following topics:

  • Statistical modeling in crop research.
  • Continuous improvement in data analysis.
  • Producing China’s first integrated circuit package substrate.
  • Multiple correspondence analysis.
  • Design of experiments in high-tech.
  • Groundbreaking virtual product packaging.

During those presentations and in question-and-answer sessions, summit attendees participated in thorough and lively discussions about global trends in data analysis and its application, as well as best practices in data analysis and data-driven decision making.

John Sall (center) answers questions during an Ask the Experts session.

During the Ask the Experts sessions, attendees spoke one-on-one with JMP developers to learn tips and tricks, see demonstrations of new or unfamiliar features, and offer suggestions for upcoming versions of the software.

Comments from attendees reflected the high quality of the presentations and deep interest in JMP. "This is a great event; I've learned a lot from the presenters," said Wendy Yang from Dow Chemical. Ying Zhang from BUCM said, "I did not know that JMP could be so excellent for statistical education, and I will consider using JMP when writing dissertations.” Chongfa Yang from Hainan University called the conference "one of the best data analysis events I've ever attended."

Those who wanted an opportunity to go in-depth and hands-on with JMP for design of experiments (DOE) and predictive modeling attended pre-conference training at Shenzhen University.

"I have gained a lot from the training course and meetings during the past few days,” said attendee Liangqing Zhu, from Cargill. “I feel more confident to encourage my colleagues to love data analysis."

The day concluded with a Chinese-style feast and entertainment.

The third annual conference in China will take place in Beijing in 2017.

John Sall talks about The Design of JMP in his keynote speech.

Kaiser Fung explains why "numbersense" is priceless in his keynote talk.

Jianfeng Ding talks with attendees during an Ask the Experts session.

Kaiser Fung signs copies of his book during the one-day analytics conference.

Like all Discovery Summit conferences, this one was highly interactive.

The evening entertainment included Chinese opera.

JMP Clinical is coming to PharmaSUG!

Everyone’s favorite mash-up of JMP and SAS software will be at PharmaSUG in the Mile-High City, May 8-11.

Stop by our booth in the exhibition hall to see demos of JMP and JMP Clinical, as well as of JMP Genomics, another JMP and SAS combination. You can be among the first to see the new interface and features of JMP Clinical 6.0!

In addition, attend these PharmaSUG sessions to take a deeper dive into many of the features that JMP Clinical has to offer:

  • Paper AD02: “Efficient safety assessment in clinical trials using the computer-generated AE narratives of JMP Clinical” (May 9, 1:45-2:35 p.m., Location: Centennial G). Learn how JMP Clinical leverages CDISC standards and the Velocity Template Engine to generate truly customizable narratives for patient safety directly from source data sets.
  • Demo Theater: “Assessing data integrity in clinical trials using JMP Clinical” (May 10, 3:30-4:30 p.m.). Fraud is an important subset of topics involving data quality. Unlike other data quality findings in clinical trials that may arise from carelessness, poor planning or mechanical failure, fraud is distinguished by the deliberate intention of the perpetrator to mislead others. Despite the availability of statistical and graphical tools to identify unusual data, fraud itself is extremely difficult to diagnose. However, whether or not data abnormalities are the result of misconduct, the early identification, resolution and documentation of any lapse in data quality is important to protect patients and the integrity of the clinical trial. This presentation will describe examples from the literature and provide numerous practical illustrations using JMP Clinical.

Finally, stop by and chat with Richard Zink at the SAS booth (May 10, 11:00-11:30 a.m.). He’ll be autographing copies of his two SAS Press books.

Don’t yet have a copy of these books? Stop by the SAS Press booth, where you can purchase copies at 20% off or pick up a free excerpt of either title.
