On Feb. 8, my colleague, Professor Douglas Montgomery of Arizona State University, and I presented a webinar for the American Statistical Association. Our first demonstration dealt with designing an experiment for six factors, each at two levels, in 24 runs. One natural way to construct such a design would be to
JMP 10 is coming in March. In my next few posts, I plan to share the main new capabilities in the area of experiment design. The most visible of these new features is the Evaluate Design item on the DOE menu. What does the Evaluate Design feature do?
In my two previous posts, I introduced the correlation cell plot for design evaluation and then showed how to use the plot to compare designs. Here, I want to use the same plot to show why definitive screening designs are, well, definitive. For a complete technical description of definitive screening
What is a correlation cell plot? In my previous post, I proposed a new graphical diagnostic tool for evaluating designed experiments. The suggested graph is a cell plot in which each colored square shows the pairwise correlation between two model terms. If, as in Figure 1, there are 45 terms in
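The idea behind such a plot can be sketched in a few lines: build the model-term columns for a design, then compute the pairwise correlations that each colored square would display. The sketch below uses a small 2^3 full factorial as an illustration (the design and term names are my assumptions, not taken from the post), with NumPy standing in for JMP's built-in plot.

```python
import numpy as np

# Illustrative two-level full factorial in coded (-1/+1) units.
# Factors are labeled A, B, C purely for this example.
X = np.array([[a, b, c]
              for a in (-1, 1)
              for b in (-1, 1)
              for c in (-1, 1)], dtype=float)

# Model terms: main effects plus all two-factor interactions.
terms = {
    "A": X[:, 0], "B": X[:, 1], "C": X[:, 2],
    "AB": X[:, 0] * X[:, 1],
    "AC": X[:, 0] * X[:, 2],
    "BC": X[:, 1] * X[:, 2],
}
M = np.column_stack(list(terms.values()))

# Absolute pairwise correlations between term columns -- each entry of this
# matrix would be one colored cell (0 = white, 1 = darkest shade).
corr = np.abs(np.corrcoef(M, rowvar=False))
print(np.round(corr, 2))
```

Because a full factorial is orthogonal, every off-diagonal correlation here is zero, so the matrix is the identity; a fractional or supersaturated design would show nonzero cells wherever terms are aliased. Rendering the matrix as actual colored squares is a one-liner with, e.g., matplotlib's `imshow`.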
One concern I often hear about the use of software for optimal design of experiments is that the algorithm producing the design is a “black box.” To use the design, an investigator has to trust the black box. An optimal design is one created to maximize some scalar measure of
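To make "maximize some scalar measure" concrete: one common such criterion (my example, not necessarily the one the post goes on to name) is D-optimality, which scores a design by the determinant of its information matrix X'X. A minimal sketch comparing two hypothetical candidate designs by that criterion:

```python
import numpy as np

def d_criterion(X):
    """D-criterion: determinant of the information matrix X'X (larger is better)."""
    return np.linalg.det(X.T @ X)

# Candidate A: a 4-run orthogonal two-level design (intercept + 2 factors).
A = np.array([[1, -1, -1],
              [1, -1,  1],
              [1,  1, -1],
              [1,  1,  1]], dtype=float)

# Candidate B: same run budget, but one point duplicated at the expense of
# another, which makes the columns correlated.
B = np.array([[1, -1, -1],
              [1, -1, -1],
              [1,  1, -1],
              [1,  1,  1]], dtype=float)

print(d_criterion(A))  # 64.0 -- the orthogonal design
print(d_criterion(B))  # 32.0 -- the correlated design carries less information
```

An optimal-design algorithm such as coordinate exchange simply searches the candidate space for the matrix that maximizes this (or another) scalar score, which is why the result can feel like a "black box": the criterion is simple, but the search over designs is not.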
Experiments for most of us are demonstrations of scientific principles. We recall the science class where we put litmus paper into a beaker of lemon juice and watched it turn pink. In scientific research, many investigators still construct experiments to add support to a current hypothesis or perhaps disprove it.