How do I get JMP 12?

In case you haven’t heard, JMP 12 is coming very soon. I hope you’re as excited about this release as we are. If you are, you’re probably ready to figure out how to get it for yourself. There are a few different ways, depending on the kind of license you have.

No license
If you’re not yet a JMP user, get a license for yourself.

Annual license
If you have a license that renews annually, an upgrade to JMP 12 is included as a part of your license. Ask your software administrator to request the upgrade to JMP 12 at jmp.com/upgrade on or after March 19, 2015. If you don’t know who your software administrator is, contact us with your Site ID number, and we can point you in the right direction.

Single-user, perpetual license
This type of license will require you to purchase an upgrade to JMP 12. Contact us, and we can get you set up.

Don’t know what kind of license you’ve got?
It’s easy to figure out what kind of license you’ve got. Look in the “About JMP…” window. You’ll find this window in the Help menu on Windows and in the JMP menu on a Macintosh.


If you see a Site ID, then you have an annual license:

Annual license on Windows


Annual license on Macintosh


If you see a Serial Number, then you have a single-user, perpetual license.

Single-user, perpetual license on Windows


Single-user, perpetual license on Mac


Still don't know what to do?

We are always here to help, and we are only an email or phone call away.


Finding the best process operating conditions using 2 optimisation approaches

Scientists and engineers often need to find the best settings or operating conditions for their processes or products to maximise yield or performance. I will show you how the optimisation capabilities in JMP can help you work out the best settings to use. Somewhat surprisingly, the particular settings that are predicted to give the highest yield or best performance will not always be the best place to operate that process in the long run. Most processes and products are subject to some degree of drift or variation, and the best operating conditions need to take account of that.

You may be familiar with maximise desirability in the context of process optimisation, but simulation experiment is a little-known gem within the JMP Prediction Profiler. If you are trying to find the most robust factor settings for a process, then you need to know about simulation experiment. I will show you how useful simulation experiment can be and how it goes beyond what maximise desirability can achieve.

The goal of most designed experiments is to identify and quantify how much particular factors or inputs affect the responses or outputs from that process. A secondary goal is often to use this understanding to choose factor settings that will give the most desirable response or output values.

Once we have run a designed experiment and built a model that describes the relationship between the factors and responses, we can then use that model to find the optimum factor settings that will give the most satisfactory values for the responses we are interested in. There are several different ways of performing this optimisation process in JMP, and these methods are described in detail in chapters 8 and 9 of the Profilers book, which can be found under Help > Books within JMP.

I want to focus on two of these methods, maximise desirability and simulation experiment, as they can in some situations lead to very different solutions. The example I am going to use to illustrate this is a 13-run definitive screening design (DSD) with five factors and one response. These three-level definitive screening designs are a new class of screening designs that can very efficiently allow you to identify the important main effects. They can also allow you to build a full response surface model with two-factor interactions and quadratic terms if there are only three active main effects. Bradley Jones, the inventor of these designs, describes them in more detail in his excellent blog post on the subject.

A 13-run DSD is shown below.

DSD

The quickest and easiest way to build a model for this experiment is to run the built-in Screening script highlighted in blue in the top left-hand panel of the JMP data table. The model we obtain contains three main effects – Modifier, Temperature and Time – and a two-factor interaction term between Modifier and Temperature, plus a quadratic term for Time. The Prediction Profiler for this model is shown below. I have also turned on the Monte Carlo simulator and the Contour Profiler.

Initial 2

Using the initial factor settings (the mid-point of the three-dimensional factor space), we see the critical output is predicted to have a value of 75.3, and we have 50 percent of the points from the Monte Carlo simulation below the lower spec limit. The contour plot shows us sitting almost exactly on that lower spec limit.

When we move to the settings determined by maximise desirability (below), the critical output increases to 82.1 and the percentage of points below the lower spec limit drops to 4.6 percent. The contour plot shows that we are now sitting in the top left-hand corner where the highest value of the Critical Output is predicted to be.

Max desire 2

If we now look at the settings determined by simulation experiment (below), we have moved to the top right-hand corner of the contour plot where the contour lines are farther apart. We haven’t quite achieved as high a predicted value for the critical output. It is now 79.8, but the percentage of points below the lower spec limit is substantially reduced to 1.8 percent. When we compare the settings for the maximise desirability solution vs. the simulation experiment solution, we can see that the main difference is that the simulation experiment has chosen a high setting for Modifier, which exploits the two-factor interaction between Modifier and Temperature and makes the Critical Output insensitive to changes in Temperature (the Temperature line in the Profiler is now flat). The Critical Output distribution becomes much tighter with considerably fewer points out of spec, leading to a more robust process.

Sim expt 2

Let’s take a look at how simulation experiment found this more robust solution. Simulation experiment explores the factor space differently from maximise desirability. Rather than searching the factor space for the settings that give the most desirable value for the critical output, it searches instead for the settings that minimise the Defect Rate calculated by the Monte Carlo simulation. It still uses the same model as maximise desirability, but it now uses that model to run a series of Monte Carlo simulations to determine how the Defect Rate varies within the factor space. It uses a Space Filling design to do this and models the defect rate using a Gaussian process. To launch simulation experiment, go to the red-triangle menu in the Simulator outline within the Prediction Profiler.

Sim expt dialog 2

When you run simulation experiment, it performs a Monte Carlo simulation at each of the factor settings specified by the Space Filling design and records the defect rate obtained for each of those simulation runs to a table. That table is shown below. Each row represents a Monte Carlo simulation run with different factor settings. The table also contains a built-in script that will model the Defect Rate (it actually models the log defect rate since that is a better response to use). We can then find the optimum settings that minimise the defect rate (using maximise desirability in the defect rate Profiler) and then save those settings to the original Prediction Profiler for the critical output.
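If you would like a feel for the idea outside of JMP, here is a rough Python sketch of the same workflow: run a Monte Carlo simulation at each candidate factor setting (with noise added to the factors) and pick the setting with the lowest defect rate. The model coefficients, spec limit and factor noise below are made up for illustration (they are not the fitted DSD model from this post), and a plain random sample stands in for the space-filling design.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical fitted model (a stand-in for the DSD model in the post):
# factors coded -1..+1; coefficients invented for illustration.
def predict(mod, temp, time):
    return (75 + 2.0 * mod + 1.5 * temp + 2.5 * time
            + 2.0 * mod * temp - 1.5 * time ** 2)

LOWER_SPEC = 75.0   # assumed lower spec limit
FACTOR_SD = 0.15    # assumed transmitted variation in each factor

def defect_rate(setting, n=5000):
    """Monte Carlo defect rate at one candidate setting with factor noise."""
    noise = rng.normal(0.0, FACTOR_SD, size=(n, 3))
    m, t, ti = np.clip(setting + noise, -1, 1).T
    return np.mean(predict(m, t, ti) < LOWER_SPEC)

# "Space-filling" candidates: here just a uniform random sample of the cube.
candidates = rng.uniform(-1, 1, size=(300, 3))
rates = np.array([defect_rate(c) for c in candidates])
best = candidates[rates.argmin()]
print("most robust setting:", np.round(best, 2), "defect rate:", rates.min())
```

Note that the winning setting is judged on its defect rate under noise, not on its nominal predicted value, which is exactly why it can differ from the maximise desirability answer.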

Gaussian process 2


To see the simulation experiment demonstrated in more detail, watch this video:

The key difference between maximise desirability and simulation experiment is that maximise desirability doesn’t take account of the natural variation in the factors when choosing the optimum factor settings. Simulation experiment takes account of that natural variation in the factor settings and finds the most robust settings to minimise the defect rate. The difference is nicely illustrated by a drawing my daughter made for me, showing how JMP can make complex problems simple.

Drawing


New Journal Text Sledgehammer add-in available

JMP journals are a great way to organize, annotate and present analysis results.

You may not know this, but the color, font, size and style of the text content in journals can be formatted. This is an attractive feature, but since each block of text — and each attribute — must be addressed individually, the process can be prohibitively time-consuming for large journals.

The new Journal Text Sledgehammer add-in changes this, as it permits convenient bulk formatting of journal text. All you need to do is select the text blocks you want to edit, click a few buttons, and all of the selected text will be formatted as you wish.

  • If you make no selections in the journal, all of the text boxes in the journal will be formatted. Otherwise, formatting changes will only be applied to the selected items.
  • Any time an Outline box is highlighted, formatting changes will be applied to all of the text boxes contained within it.

Sledgehammer1

Another great feature of the Sledgehammer — and, incidentally, the origin of its name — is that it can open or close all of the journal’s Outline Boxes with a single click. This is a real timesaver when you are trying to locate something in a large, hierarchically organized journal. In fact, I use the Sledgehammer for this purpose as much as any other.

The Sledgehammer add-in is now available for download on the JMP File Exchange (a free SAS profile is required). If you work with journals (and you should!), download it and give it a test run. You’ll wonder how you got along without it.


Coming in JMP 12: Probability and Multiple Choice Profilers in the Choice platform

The JMP Profiler is a powerful tool for visualizing your model. With one click, you can see what the model predicts when you change a product’s features or adjust one of your assumptions. It’s also a powerful communication tool. Your audience doesn’t need a statistics background to understand the model’s message.

In JMP 11 and earlier, the Profiler in the Choice platform was a Utility Profiler — it showed how your product’s utility changed with changes in features. To an economist or a marketer, utility is a pretty straightforward concept. Higher utility means a more desirable product that people are more likely to buy.

“How much more likely?” and “More likely compared to what?” your marketing manager might ask. Until JMP 12, your answer might have been, “Let me get back to you.”

In JMP 12 you have the Probability Profiler and the Multiple Choice Profiler, two new tools to help visualize comparisons between competing products and predict market share for proposed new products.

Let's look at a quick example. Maybe I’m planning to offer a thick crust pizza with mozzarella cheese and pepperoni to compete with my competitor’s pizza, which has a thick crust with jack cheese and pepperoni.

Probability Profiler--New in JMP 12

The Probability Profiler transforms the Utility Profiler into a comparison between two products.

The Probability Profiler makes it easy to see that my proposed product is a good idea. When choosing between my pizza and my competitor’s, there’s a 92 percent chance that the “Female” market segment will choose mine.
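For the curious, the choice probability comes from a logit model: with two alternatives, the probability of choosing one product is a logistic function of the difference in their utilities. A tiny Python illustration (the utility values are hypothetical, not the fitted values from this example; a utility gap of about 2.4 happens to reproduce a 92 percent share):

```python
import math

def choice_probability(u_mine, u_competitor):
    """P(choose my product) under a two-alternative logit model."""
    return math.exp(u_mine) / (math.exp(u_mine) + math.exp(u_competitor))

# Hypothetical utilities for the pizza example (not the actual fitted values):
u_mine, u_theirs = 2.4, 0.0
print(round(choice_probability(u_mine, u_theirs), 2))  # → 0.92
```

With more than two alternatives, the denominator simply sums exp(utility) over every product, which is what the Multiple Choice Profiler described below is doing behind the scenes.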


Subject effects are listed with the Baseline product. Comparisons are made for the same subject.


There aren’t many markets where consumers have only two choices. You might know about multiple competitor products and want to design the pizza with maximum choice probability against all of them. That’s a job for the Multiple Choice Profiler.

Instead of having one baseline model and one alternative, the Multiple Choice Profiler produces a set of linked Profilers — one for each alternative (the one below shows three, but you can specify more or fewer). The selectors at the top set the market segment, and the profile sliders set the properties of the different products. There’s also a chart just below the header to visually show which product would have the highest predicted market share.

The Multiple Choice Profiler

The Multiple Choice Profiler lets you compare several products with different attributes.

Now it’s easy to see that a thin crust pizza with mozzarella cheese and no toppings is a clear winner against the other alternatives.

Look for more in-depth posts about how to use these new Profilers after the software becomes available in March.

Editor's note: This post is part of a series of previews of JMP 12 written by the people who develop the software.


Discovery Summit is coming to Europe

In just one month, Brussels, Belgium -- the center of European politics -- will be the host city of Europe's first JMP Discovery Summit, March 23-25.

JMP users from all over Europe can expect an exciting agenda:

  • European customers will present their statistical discoveries and share best practices.
  • Distinguished analytical thinkers such as Dick De Veaux, Beau Lotto, Bradley Jones and Peter Goos will deliver keynote speeches.
  • John Sall, SAS co-founder and chief architect of JMP, will introduce the newest version of the software: JMP 12.

Sign up today to join the conference. You will meet JMP users and JMP developers, and get inspired by new ways to analyze your data.

Also, don't miss out on the training courses we offer prior to the conference: JMP Scripting Language and Design of Experiments. Register together with your colleagues and get the group discount.

We are looking forward to seeing you in Brussels!

Can’t attend Discovery Europe? You don't have to miss out entirely: We will broadcast John Sall’s keynote speech. Sign up for it today to learn first-hand about the brand-new features that JMP 12 has to offer.


Coming in JMP 12: New Destructive Degradation platform

The reliability of components, devices and complex systems is a critical aspect of the quality experience as viewed, and judged, by consumers. Reliability, which is often defined as “Quality over Time,” requires many different analytical techniques depending on the type of data being analyzed and the goals of the study.

Degradation analysis addresses a special form of reliability data where a “failure” is defined at some point of deterioration in performance, but not necessarily a hard failure (the device no longer works). Hence, these are often referred to as “soft failures.” One common example is the navigation lights on the wing of an aircraft. They may still be operational, but the lumen output has degraded below a minimum standard. When this occurs, the lamp is deemed a failure. The lumen output can be “repeatedly measured” over time.

What if the degradation process requires destruction of the unit in order to obtain a measurement? Examples include breaking strength or adhesive performance, such as the deterioration of diaper adhesive tape over lengthy storage times. Other examples could be the seal integrity of packaged foods sealed with heat or adhesive bonds. Another aspect is known as disruptive measurements, where devices have to be disassembled for testing and then reassembled, which can affect the reliability. In such cases, only one measurement per unit can be obtained. With increased sample sizes, alternative methods become available: Enter the new Destructive Degradation platform in JMP 12.

The Destructive Degradation platform allows you to fit a model with improved ease and efficiency by selecting from 1) predefined models that include built-in starting values, 2) data distributions and 3) common transformations.

After selecting the desired model from a pre-built list of models in the “model library,” known as the Path Definition, you can visually compare the path shape examples associated with each model type to the degradation plot, which helps you select the model that best matches the expected behavior. Multiple models may be generated for comparison by altering the underlying distributions and transformations. To determine the best statistical fit, you would then use the model comparison criteria AICc or BIC.

Destructive_Degradation_JMP
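If you are curious how those comparison criteria are computed, the standard formulas are easy to reproduce. A small Python sketch (the log-likelihood values below are invented purely for illustration; in JMP these criteria are reported for you):

```python
import math

def aic_bic(log_likelihood, k, n):
    """Standard AIC, small-sample-corrected AICc, and BIC, from a model's
    log-likelihood, number of estimated parameters k, and sample size n.
    Smaller values indicate a better statistical fit."""
    aic = -2 * log_likelihood + 2 * k
    aicc = aic + (2 * k * (k + 1)) / (n - k - 1)
    bic = -2 * log_likelihood + k * math.log(n)
    return aic, aicc, bic

# Illustrative comparison of two hypothetical degradation models:
m1 = aic_bic(log_likelihood=-120.0, k=4, n=30)
m2 = aic_bic(log_likelihood=-118.5, k=6, n=30)
print("model 1 (AIC, AICc, BIC):", [round(x, 1) for x in m1])
print("model 2 (AIC, AICc, BIC):", [round(x, 1) for x in m2])
```

In this made-up comparison, the second model fits slightly better (higher log-likelihood) but pays a penalty for its extra parameters, so both AICc and BIC still prefer the simpler model.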

Once you have generated the models of interest, you can view the Model List section of the report that allows you to easily scroll to any model you generated. You can compare and quickly determine predictions for each model.

Destructive_Degradation_JMP_2

A new suite of Prediction Profilers is available so you can perform evaluations and what-if analysis on degradation, crossing times and crossing probabilities. All Profilers include confidence intervals.

Destructive_Degradation_JMP_3

Previously, you could perform destructive degradation analysis using the existing JMP Degradation platform (which is still available in JMP 12). However, you needed to supply a JSL formula with starting values. Now, you simply match the shape of possible distribution models to the plot: single line, multiple lines, curvature, etc. The new Destructive Degradation platform eliminates all the heavy lifting so you can focus on the analysis. JMP continues to make life simpler for engineers to perform complex statistical analysis with ease, saving time in the never-ending process of product improvement.


Cleaning up and visualizing my food log data with JMP 12

In an earlier blog post, I shared that I used the JMP 12 version of the Recode platform to clean up food item names in a data table containing nearly four years of food log information. I was able to halve the number of unique food item names that appeared in my ~35,000-row table, ending up with ~900 unique food items. Even if you don't keep a food log, I'm sure you can envision how useful this kind of cleanup and consolidation could be when working with your own large data tables! I gave a few more details in my e-poster presentation at Discovery Summit 2014 (which you can find on the JMP User Community in PDF form), but when I wrote my first Recode blog post, it wasn't quite time to share the many new features of Recode in JMP 12 and how I used them to streamline my data cleanup process.

Now that preview posts for JMP 12 have officially kicked off, I wanted to give some more information about how I used Recode enhancements to significantly reduce the time needed to combine and group my food items. I was fortunate to be able to test many of the new Recode features on my food log data from a very early stage in their development. I have to give a big thanks to Recode developer James Preiss for his patience with my frequent emails and visits to his office while he was working on this new feature! I found that many of my requests lined up well with user suggestions we had received for Recode. I really enjoyed watching these suggestions solidify in the platform as it was under active development.

Let's consider an example that I used frequently when discussing Recode ideas with James: grouping and cleaning a set of food items whose names contained the key word "chocolate." I described in my earlier post how in JMP 11, I used Data Filter on the underlying table to find all food items containing the word chocolate. With that subset of the table in view, I then scrolled up and down in the Recode dialog to locate those items and pasted a common item name into the edit box for all related items. Changing an established group name could be tedious, unless I was ready to create, apply, and edit my Recode script.

Recode in JMP 12 offers a number of time-saving shortcuts for grouping items. The filter field in the Recode dialog makes it easy to find chocolate-containing item names within my long list. I found that automatic grouping by text edit distance worked well for grouping short food item names (e.g., "Nestle After Eight Chocolate Mints" and "After Eight Chocolate Mints" in the example below) and also helped combine nearly identical names truncated at slightly different lengths in my food log files.

Recode in JMP 12
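To get a feel for how grouping by string similarity works, here is a rough Python sketch using the standard library's similarity ratio. This is only a stand-in for Recode's actual edit-distance matching, and the threshold and food names are illustrative:

```python
import difflib

def group_similar(names, threshold=0.8):
    """Greedily group names whose similarity ratio to a group's first
    member exceeds a threshold. A rough stand-in for Recode's
    grouping by text edit distance."""
    groups = []
    for name in names:
        for group in groups:
            ratio = difflib.SequenceMatcher(
                None, name.lower(), group[0].lower()).ratio()
            if ratio >= threshold:
                group.append(name)
                break
        else:
            groups.append([name])  # no close match: start a new group
    return groups

items = ["Nestle After Eight Chocolate Mints",
         "After Eight Chocolate Mints",
         "Milk, Chocolate",
         "Chocolate Milk"]
for g in group_similar(items):
    print(g)
```

Notice that the two After Eight variants land in one group, while "Milk, Chocolate" and "Chocolate Milk" stay separate — character-level similarity is not the same as semantic similarity, which is why Recode's manual grouping option (described next in the post) is still needed.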

Recode's new manual grouping option was very useful for consolidating longer names that didn't match up at the text edit threshold I set. I could control- or shift-click to select a set of related items and right-click to group them, choosing one value as the group name. If the grouped items were distant from one another alphabetically, they would be automatically reordered to appear together. If I decided to change the group name to a new value (e.g., "Milk, Chocolate"), I could simply edit the group name to apply it to all grouped items.

Recode example in JMP 12

Once I had cleaned up all my individual item names, I created a set of 30 or so food groupings that were meaningful to me for classifying my food items. I created two different classification variables for each item. My Food Category variable contained a comma-delimited list of all categories to which a food item belonged, saved as a multiple response column. The Primary Food Category variable contained a single category: the group in which I thought the food best belonged. For "whole" foods, values in these two columns were often the same, but they differed for complex food combinations. While I placed sugar snap peas under Vegetable in both food category columns, I put a salted caramel mocha into the Primary Food Category I called CoffeeMilk. In contrast, the comma-delimited Food Category list for a mocha listed the values "Chocolate, Coffee, Milk." I used my multiple response Food Category variable in the JMP Data Filter when I wanted to select all foods that belonged to a particular food group. I used my Primary Food Category as a grouping variable in graphical summaries and in cases where it made less sense for items to belong to more than one group.

I tried out a variety of different JMP graph types with this data table, and my favorite visualization was the treemap. I created item-level treemaps without a grouping variable to see which individual foods contributed most to my calorie totals over the past four years, but I found that using my primary food group categories as a second grouping level in the treemap was very helpful in comparing the contributions of food items next to other similar foods. Here is a treemap including grouped foods I ate over all four years. I used the Local Data Filter option under the Graph Builder red triangle menu Script section when I wanted to restrict the view to specific meals or years as desired.

Graph Builder treemap in JMP 12

To create a treemap using the existing JMP 11 split layout, open Graph Builder and:

  • Drag Calories to the Y axis
  • Drag Cleaned Item Name to the X axis, then drag Primary Food Category so that the X axis reads Primary Food Category / Cleaned Item Name
  • Click on the Treemap icon at the top of the Graph Builder window
  • Drag Primary Food Category to the Color box
  • Uncheck the Show Legend option under the Graph Builder red triangle menu

(Once you have JMP 12, you only need to choose Squarify from the Layout pulldown under the Treemap properties pane on the left to get the new algorithm.)

I created a simplified version of this treemap for my e-poster showing total calories eaten by year, with the size of the squares representing the number of calories eaten from each primary food category.  This graph helped me understand how my eating patterns at the food group level had shifted over time. One of the most obvious changes I observed was that I used to eat a lot more items in the Bread category than I do now. Digging into my data more deeply at the meal level revealed that this was primarily due to changes in my typical breakfast, which used to include scones with coffee but is now usually chocolate Greek yogurt. Like many people, I tend to develop a favored first meal of the day and stick with it until I get tired of eating it. Becoming aware of this shift caused me to question why I stopped baking my favorite maple oat nut scones, and I recently went back to making them more often!

treemap in JMP 12

To replicate this year by year treemap using the JMP 11 split algorithm:

  • Drag Calories to the Y axis
  • Drag Primary Food Category to the X axis
  • Click on the Treemap icon at the top of the Graph Builder window
  • Drag Primary Food Category to the Color box
  • Uncheck the Show Legend option under the Graph Builder red triangle menu
  • Drag Year to the Wrap drop zone

(Again, in JMP 12, you can choose Squarify from the Layout pulldown under the Treemap properties pane.)

If you saw my earlier post about my weight loss and maintenance journey, then you may be surprised to see how many "junk" foods show up in my food log. I'll admit I eat dark chocolate almost every day in addition to my favorite cocoa powder/toffee nut syrup/plain Greek yogurt/caramel sauce breakfast mixture that's been repinned hundreds of times on Pinterest! I've found that it's not necessary for me to cut out so-called "junk" foods entirely while maintaining my weight in my preferred range. Keeping a close eye on the total number of calories I consume has turned out to be much more critical to maintaining my weight long-term.

Check out the first blog post in this series to learn more about my interest in quantified self (QS) data analysis. You can download a PDF version of my JMP Discovery Summit 2014 e-poster with more examples of treemaps I created from my data, and download my JMP add-in to import your own BodyMedia® files here. I haven't attempted to generalize the food item recoding process with a script because I think the foods included in a food log will vary too greatly for a general script to be useful. But if you want to replicate my approach to create a consolidated set of item names in your own food log using Recode, grab your copy of JMP 12 when it comes out and see my earlier blog post.

P.S. It’s free to join the JMP User Community, where you can learn from JMP users all over the world! 


Coming in JMP Pro 12: Interactive model building

The Generalized Regression platform was introduced in JMP Pro 11 for fitting penalized regression models. Our focus for JMP Pro 12 has been to make model building an easy and natural process using the Generalized Regression platform (we like to call it Genreg for short). This post will focus on the new feature that I am most excited about in Genreg: the interactive solution path.

As noted in previous posts, a penalized regression fit does not result in a single regression model. Instead, we end up with a sequence of candidate models from which we choose the best fitting model based on a validation method (like cross-validation). The best way to summarize the sequence of candidate models is to plot the solution path as in Figure 1, a lasso fit of the diabetes data in the sample data folder in JMP.

Figure 1: Lasso solution path for the diabetes data


On the left side of Figure 1, we see a summary of how each variable enters the model and changes as a function of the lasso penalty. I have labeled two of the paths and made them bold for emphasis. Here BMI (body mass index) is the first variable to enter the model for predicting the progression of diabetes. As the lasso penalty is relaxed (moving from left to right in the graph), the coefficient for BMI steadily increases until it levels off around 500. HDL (the good cholesterol) is the fourth variable to enter our model. It enters the model with a negative coefficient but actually has a positive coefficient by the end of the path. This sign change reminds us why variable selection is so important: Choosing one model instead of another can mean the difference between concluding that HDL cholesterol speeds up, slows down or has no impact on diabetes progression. The best model (based on the Bayesian Information Criterion) is marked by the vertical red line.

On the right side of Figure 1, we see how well the candidate models fit the data as a function of the penalty. Here, we are using the BIC for validation (smaller is better), but the results for cross-validation can be summarized in the same way. As we relax the penalty (moving left to right), the model improves to a point, and then it starts to get worse/overfit. Once again, we mark the best solution using a vertical red line.
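If you would like to poke at a solution path outside of JMP Pro, a version of the same diabetes data ships with scikit-learn, and its lasso_path function traces the coefficient paths in much the same way. A rough sketch (scikit-learn's preprocessing and penalty grid differ from Genreg's, so the details of the path may not match the figures here exactly):

```python
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.linear_model import lasso_path

# A version of the diabetes data used in the post ships with scikit-learn.
X, y = load_diabetes(return_X_y=True)
feature_names = load_diabetes().feature_names

# Trace the lasso solution path: coefs has one row per feature and
# one column per penalty value (alphas are ordered large to small).
alphas, coefs, _ = lasso_path(X, y)

# Order in which variables enter the model: for each feature, find the
# first penalty index at which its coefficient becomes nonzero.
first_nonzero = [np.argmax(np.abs(c) > 0) if np.any(c) else len(alphas)
                 for c in coefs]
entry_order = np.argsort(first_nonzero)
print([feature_names[i] for i in entry_order])
```

As in the post, bmi (body mass index) is the first variable to enter as the penalty relaxes; the s-columns are blood serum measurements (s3 corresponds to HDL).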

So what is special about the solution path in JMP Pro 12? It's interactive! That means that we can click on the vertical line in Figure 1 and drag it to explore all of the candidate models in the solution path. As we use the handle in the solution path to change the model, everything in the report is updated to reflect the new model: parameter estimates, residual plots, Profilers and so on. This allows us to quickly explore candidate models that are not necessarily the best-fitting, but are still interesting or useful. For example, maybe there is a much simpler model that performs nearly as well as the best. Now we can quickly locate that simpler model and use it. Alternatively, there are situations where we would want to drag the handle to the right and use a larger model that performs similarly to the best. By using a larger model, we can feel more confident that we have identified the factors that truly are influencing the response variable.

Now let's look at an example of using the interactive solution path to build a logistic regression model for the South African heart disease data. Figure 2 shows the solution path and a portion of the parameter estimates table for a lasso fit tuned using 5-fold cross-validation. We can see that the validation likelihood flattens out around the best model, meaning that we have an opportunity to back up to a more parsimonious model that still fits very well. In fact, JMP even provides a green shaded zone where the performance of the models is similar to the best model.

Figure 2: Best Model for the Heart Disease Data


In Figure 3, we have zoomed in on the right side of the solution path so that the range of models in the green zone is more obvious. We have also backed up to the smallest model inside the green zone. Notice that the parameter estimates have changed, and two of the interactions have dropped out of the model. Our new model has less than half as many non-zero terms as the best model, so it is substantially easier to interpret while still fitting very well.

Figure 3: A Much Simpler Model for the Heart Disease Data


It helps to see the interactive solution path in action. Figure 4 is an animation of building a regression model for a particularly interesting data set (you may have to click on the figure to see the animation). For more information about creating unique data sets like what you see in Figure 4, check out the website of NC State University professor Leonard Stefanski.

Figure 4: Interactive Solution Path in Action


We have added a variety of exciting new features to the Genreg platform, but I am most excited about the interactive solution path. The interactivity allows us to quickly and easily build regression models in JMP Pro. Some of the other highlights in Genreg include:

  • Substantially improved computation times
  • Forward Selection
  • More distributions for modeling the response (Exponential, Beta, Beta-binomial, Cauchy, and more)
  • Quantile regression
  • Inverse prediction
  • More diagnostic plots
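Of these, quantile regression may be the least familiar. Instead of minimizing squared error (which targets the mean), it minimizes the pinball (check) loss, which targets a chosen quantile of the response. A tiny pure-Python sketch with made-up numbers shows why; searching over the sample values is enough for this illustration:

```python
# Pinball (check) loss for quantile q: the constant that minimizes it over a
# sample is that sample's q-quantile -- the idea behind quantile regression.
def pinball(y, pred, q):
    return sum((q * (v - pred)) if v >= pred else ((1 - q) * (pred - v))
               for v in y)

y = [1, 2, 3, 4, 100]  # skewed sample: the mean is pulled up, the median is not
best = min(y, key=lambda c: pinball(y, c, q=0.5))
print(best)  # 3, the median
```

With q = 0.5 the minimizer is the median; choosing q = 0.9 instead would target the 90th percentile, which is what makes quantile regression useful for skewed responses.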

How late is this year's Chinese New Year?

This year, Chinese New Year arrives on Feb. 19. From what I can remember, this is quite late. But how late is it exactly? I was curious to find out, so I turned to the Internet and found a website that has collected Chinese New Year information since 1900.

I used Internet Open to import the data into JMP, and then used Date Time functions in a formula to create two columns, Month and Month_Day, for my analysis. This is what my Chinese New Year data from 1900 to 2015 looks like:

Chinese New Year Data

After that, my analysis is straightforward and is as easy as a cup of Oolong tea.

  • The earliest Chinese New Year’s Day arrived on Jan. 21, 1966. The two latest arrivals occurred on Feb. 20, in 1920 and 1985. In the last 116 years, Chinese New Year has arrived within a span of one month, from Jan. 21 to Feb. 20.
  • The mode is Jan. 31; on this day, Chinese New Year was celebrated six times. Six other dates shared the No. 2 spot: Jan. 23, Feb. 2, Feb. 5, Feb. 6, Feb. 10 and Feb. 13. Roughly speaking, the odds of celebrating Chinese New Year in February instead of January are 2:1.
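Once the dates are in month/day form, the mode and the February-to-January odds take only a few lines to compute. Here is a quick Python sketch of the idea, using a small hypothetical handful of dates rather than the full 1900–2015 table:

```python
from collections import Counter

# A hypothetical handful of Chinese New Year dates as (month, day) pairs
dates = [(1, 31), (1, 31), (2, 19), (2, 5), (2, 10), (2, 13)]

counts = Counter(dates)
mode_date, mode_n = counts.most_common(1)[0]  # most frequent date
feb = sum(1 for m, _ in dates if m == 2)      # February arrivals
jan = sum(1 for m, _ in dates if m == 1)      # January arrivals
print(mode_date, mode_n, f"{feb}:{jan}")
```

The same tally over all 116 years is what yields Jan. 31 as the mode and the roughly 2:1 February-to-January odds described above.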

Chinese New Year Day Distribution

  • The sampling distribution of Chinese New Year dates is neither normal nor uniform. So, while this year’s Chinese New Year may not be the latest ever, it is the latest since 1996. And this year, it looks like this:
2015 is the Year of the Sheep

Happy Year of the Sheep! (Image used courtesy of Art of Pic.)

We wish you all a Happy Chinese New Year!


Coming in JMP Pro 12: Four great features of Covering Arrays

Covering arrays are a powerful tool for designing test cases that efficiently exercise deterministic systems. For such systems, a particular input always generates the same output, so standard statistical designs are usually inefficient. It turns out that failures in these systems are typically precipitated by combinations of just a few factors. A covering array ensures that all combinations of a specified number of factors (called the strength) are covered with as few test cases as possible.
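To make "strength" concrete, here is a small Python sketch (not JMP code) that checks whether a design covers every t-way combination of factor levels. The 4-run array shown is a classic strength-2 covering array for three two-level factors: every pair of levels for every pair of factors appears in at least one run.

```python
from itertools import combinations, product

def covers(design, levels, t=2):
    """Check that every t-way combination of factor levels appears in some run."""
    k = len(levels)
    for cols in combinations(range(k), t):
        needed = set(product(*(range(levels[c]) for c in cols)))
        seen = {tuple(run[c] for c in cols) for run in design}
        if needed - seen:
            return False
    return True

# A strength-2 covering array for 3 two-level factors in only 4 runs
# (full factorial would need 8 runs):
design = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
print(covers(design, [2, 2, 2], t=2))  # True
```

Dropping any one of the four runs breaks coverage, which is the sense in which a covering array uses as few test cases as possible.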

In his recent post, Bradley Jones described covering arrays as the most exciting new DOE feature coming in the upcoming release of JMP.

The new Covering Array platform is in JMP Pro 12, and it includes these four great features:

1. The “Optimize” button: The platform tries to construct the smallest possible design, and in some cases the designs are optimal. However, when the design created by the platform is not optimal, the platform enables an “Optimize” button that may be used to further reduce the size of the design. You get to specify the number of optimization iterations. You can cancel optimization at any point, and the optimizer will return the best design found.

2. The “Restrict Factor Level Combinations” control: When defining a design, it is sometimes necessary to restrict certain factor level combinations (i.e., disallowed combinations). For example, if you were testing a browser-based application, you would want to make sure that your design did not contain incompatible browser/operating system pairs (say, IE10 on OSX). The “Restrict Factor Level Combinations” control provides two ways of specifying such restrictions; the idea is to offer a control flexible enough to accommodate both experienced and casual users. You may use JSL (JMP Scripting Language) to define restrictions as a Boolean expression.

Correctly specifying restrictions as a Boolean expression can sometimes be challenging. To make this easier, you may use a tool similar to a Data Filter to specify restrictions.


Note that designs that restrict factor level combinations may also be optimized.
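Outside JMP, the same idea — a Boolean expression that screens out disallowed factor level combinations — can be sketched in Python. The browser and OS lists and the specific restrictions below are hypothetical, chosen only to mirror the browser/OS example above:

```python
from itertools import product

browsers = ["IE10", "Chrome", "Firefox", "Safari"]
oses = ["Windows", "OSX", "Linux"]

def allowed(browser, os):
    # Boolean restriction, analogous to a JSL disallowed-combinations
    # expression (hypothetical rules for illustration):
    if browser == "IE10" and os != "Windows":
        return False  # IE10 only runs on Windows
    if browser == "Safari" and os == "Linux":
        return False  # Safari is not available on Linux
    return True

# Keep only the factor level combinations the restriction allows
candidates = [pair for pair in product(browsers, oses) if allowed(*pair)]
```

A design constructed over `candidates` can never contain an incompatible pair, which is exactly what the platform's restriction control guarantees.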

3. The “Analysis” table script: If you create a design data table, a Response column is added, and an “Analysis” table script is available. The outcome for each run can be pass, fail, or missing. If at least one of the outcomes is a failure, the Analysis script may be used to identify potential factor/level combinations that could have precipitated the failure. This identification problem is often referred to as the “failure localization” problem.

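The logic behind failure localization can be sketched in a few lines of Python — this is not JMP's actual algorithm, and the runs below are made up, but the principle is the same: collect the t-way combinations that appear only in failing runs, since those are the candidate causes.

```python
from itertools import combinations

# Each run: (tuple of factor levels, outcome); outcome is "pass", "fail", or None
runs = [
    ((0, 0, 0), "pass"),
    ((0, 1, 1), "fail"),
    ((1, 0, 1), "pass"),
    ((1, 1, 0), "pass"),
]

def suspects(runs, t=2):
    """Return t-way combinations seen only in failing runs."""
    passing, failing = set(), set()
    for levels, outcome in runs:
        combos = {(cols, tuple(levels[c] for c in cols))
                  for cols in combinations(range(len(levels)), t)}
        if outcome == "pass":
            passing |= combos
        elif outcome == "fail":
            failing |= combos
    # Combinations that also occur in a passing run are exonerated
    return failing - passing

found = suspects(runs)
```

Here every pairwise combination in the failing run survives, because none of them occurs in a passing run — with more runs, the passing set typically exonerates most pairs and narrows the list of suspects.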

4. The “Load Design” menu option: This option allows you to load a covering array design from a JMP data table. Imagine that you have a design, and you are interested in assessing the coverage properties of the design. Or perhaps you would like to see if the number of runs can be reduced while preserving coverage. As long as your design is in a format that can be opened by JMP as a data table, then you can use this option to load it into the platform.


You can even load a design with disallowed combinations as long as the disallowed combinations are specified as a Boolean expression in a “Disallowed Combinations” table property.

For more on covering arrays, read Brad’s discussion in his post for this series and Ryan Lekivetz’s post, which describes how he cleverly used covering arrays in a home improvement project.

Editor's note: This post is part of a series of previews of JMP 12 written by the people who develop the software.
