Tag: Design of Experiments (DOE)

New in JMP 10 DOE: Individual run replication

Replication is one of the four basic principles of experiment design introduced by R. A. Fisher. The other three are the factorial principle, randomization and blocking. The value of replication is that it provides an estimate of the run-to-run variability in the response that is unaffected if the model is...
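The excerpt cuts off here, so as a minimal sketch of the idea (mine, not from the post): runs replicated at identical factor settings yield a pooled "pure error" variance estimate that does not depend on which model terms are fit. The settings and responses below are hypothetical.

```python
# A minimal sketch (not from the post) of pure error: replicate runs at
# identical factor settings give a variability estimate that does not
# depend on which model terms are fit. Data below are hypothetical.
import numpy as np

groups = {                      # factor setting -> replicated responses
    "(-1, -1)": [10.2, 9.8],
    "(-1, +1)": [14.1, 13.5],
    "(+1, +1)": [17.9, 18.6],
}

ss, df = 0.0, 0
for y in groups.values():
    y = np.asarray(y, dtype=float)
    ss += np.sum((y - y.mean()) ** 2)   # within-group sum of squares
    df += len(y) - 1                    # one df spent on each group mean
print(f"pure-error variance estimate: {ss / df:.3f}")
```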

What price orthogonality?

On Feb. 8, my colleague, Professor Douglas Montgomery of Arizona State University, and I presented a webinar for the American Statistical Association. Our first demonstration dealt with designing an experiment for six factors each having two levels in 24 runs. One natural way to construct such a design would be to...
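As a small illustration of what orthogonality buys (my own sketch on a smaller 2^3 full factorial, not the webinar's six-factor, 24-run demonstration): the ±1 main-effect columns of a full factorial are mutually orthogonal, so X'X is a multiple of the identity and the least-squares effect estimates are uncorrelated.

```python
# An illustration of what orthogonality buys, shown on a smaller 2^3
# full factorial rather than the webinar's six-factor, 24-run case.
import numpy as np
from itertools import product

X = np.array(list(product([-1, 1], repeat=3)), dtype=float)  # 8 runs
print(X.T @ X)  # 8 * identity: the main-effect columns are orthogonal,
                # so least-squares effect estimates are uncorrelated.
```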

New in JMP 10 for experiment design: Evaluate Design

JMP 10 is coming in March. In my next few posts, I plan to share the main new capabilities in the area of experiment design. The most visible of these new features is the Evaluate Design item on the DOE menu. What does the Evaluate Design feature do? Evaluate Design...

Introducing definitive screening designs

In my two previous posts, I introduced the correlation cell plot for design evaluation and then showed how to use the plot to compare designs. Here, I want to use the same plot to show why definitive screening designs are, well, definitive. For a complete technical description of definitive screening...
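For readers without the full post, here is a rough sketch of the fold-over structure behind these designs, following the published pairs-plus-center-run construction; the particular order-4 conference matrix below is just one valid example. Main effects come out mutually orthogonal and orthogonal to all quadratic effects.

```python
# A sketch of the fold-over structure behind definitive screening
# designs; this order-4 conference matrix (zero diagonal, +-1 entries,
# C.T @ C = 3I) is one valid example, used purely for illustration.
import numpy as np

C = np.array([
    [ 0,  1,  1,  1],
    [-1,  0,  1, -1],
    [-1, -1,  0,  1],
    [-1,  1, -1,  0],
], dtype=float)

# Fold-over pairs (c, -c) plus a single center run: 9 runs, 4 factors.
D = np.vstack([C, -C, np.zeros((1, 4))])

main, quad = D, D**2
print(main.T @ quad)  # all zeros: main effects orthogonal to quadratics
print(main.T @ main)  # 6 * identity: main effects mutually orthogonal
```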

Comparing screening experiments using correlation cell plots

What is a correlation cell plot? In my previous post, I proposed a new graphic diagnostic tool for evaluating designed experiments. The suggested graph is a cell plot showing the correlation between each pair of model terms as a colored square. If, as in Figure 1, there are 45 terms in...
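A minimal sketch of such a cell plot (my own illustration with a hypothetical 12-run, five-factor fraction, not the post's JMP output): build the model-term columns, take the absolute pairwise correlations, and draw them as colored squares.

```python
# A sketch of a correlation cell plot for a hypothetical 12-run,
# five-factor fraction; not the JMP implementation from the post.
import numpy as np
import matplotlib.pyplot as plt
from itertools import combinations, product

rng = np.random.default_rng(0)
full = np.array(list(product([-1.0, 1.0], repeat=5)))       # 2^5 runs
X = full[rng.choice(len(full), size=12, replace=False)]     # a fraction

# Model terms: five main effects plus all two-factor interactions.
cols = [X[:, i] for i in range(5)]
names = [f"X{i + 1}" for i in range(5)]
for i, j in combinations(range(5), 2):
    cols.append(X[:, i] * X[:, j])
    names.append(f"X{i + 1}*X{j + 1}")
M = np.column_stack(cols)

R = np.abs(np.corrcoef(M, rowvar=False))  # |r|; 0 means orthogonal

fig, ax = plt.subplots()
im = ax.imshow(R, cmap="Blues", vmin=0, vmax=1)
ax.set_xticks(range(len(names)))
ax.set_xticklabels(names, rotation=90, fontsize=7)
ax.set_yticks(range(len(names)))
ax.set_yticklabels(names, fontsize=7)
fig.colorbar(im, label="|correlation|")
plt.tight_layout()
plt.show()
```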

Bradley Jones wins ASQ Brumbaugh Award again

For the second time in three years, the American Society for Quality has named Bradley Jones, Principal Research Fellow at JMP, a recipient of the ASQ Brumbaugh Award. Jones and his co-author, Chris Nachtsheim of the Carlson School of Management at the University of Minnesota, won the award for their...

A new graph for screening design evaluation

One concern I often hear about the use of software for optimal design of experiments is that the algorithm producing the design is a "black box." To use the design, an investigator has to trust the black box. An optimal design is one created to maximize some scalar measure of...
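The excerpt stops mid-definition, but one common example of such a scalar measure is the D-criterion, det(X'X) for the model matrix X; the sketch below is my illustration, not the post's algorithm.

```python
# One common scalar measure is the D-criterion, det(X'X) for the model
# matrix X; a sketch evaluating it for a small orthogonal design.
import numpy as np
from itertools import product

runs = np.array(list(product([-1.0, 1.0], repeat=2)))  # 2^2 design
X = np.column_stack([np.ones(len(runs)), runs])        # intercept + mains
print(np.linalg.det(X.T @ X))  # 64.0; a D-optimal search maximizes this
```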

Experiment design – and how!

Experiments for most of us are demonstrations of scientific principles. We recall the science class where we put litmus paper into a beaker of lemon juice and watched it turn pink. In scientific research, many investigators still construct experiments to add support to a current hypothesis or perhaps disprove it.
