AMA Advanced Research Techniques (ART) Forum is next week!

There’s a saying in publishing that you measure a book’s impact in two ways: how many people buy it, and how many people read it. There’s a similar saying in statistics: You measure a technique’s impact by how many people know about it and by how many people actually use it. One of the things I love about being a developer at JMP is that I get to take statistical techniques that might otherwise be difficult or time-consuming to use in practice and make them accessible to a wide audience of users.

That is also why I’m excited to be on the committee for the American Marketing Association’s Advanced Research Techniques (ART) Forum taking place next week in San Diego. ART is dedicated to bringing academics and applied researchers together, and the conference sessions and format are designed to encourage discussion. In every session, academic researchers present alongside applied researchers working in industry. Post-presentation discussions led by market leaders show how to work through problems that can arise when implementing a new technique, as well as how to analyze the gains from doing so.

The conference also hosts tutorials that serve as refresher courses for established techniques. This year, the topics include Machine Learning for Marketing, Knowledge Representation, Choice Modeling, and others. This makes the ART Forum a great training opportunity for people who are new to marketing research, as well as for experienced professionals.

Last year, when I attended ART for the first time, I remember thinking, “These folks are exactly the kind of market researchers I want to work with in this field.” This year in San Diego, I’ll be participating as part of the committee, and I couldn’t be more excited.

There’s still time to register if you’re interested in attending this year, and if you’re already registered, I’ll see you in San Diego!

Using a covering array to identify the cause of a failure

My last blog entry discussed using a covering array to test the Preferences for the Categorical platform. While the hope is that all of the tests pass, in this post we consider what we can do when one of those tests fails. When you use “Make Table” after creating a design in the Covering Array platform, there are two important pieces to pay attention to: the first column, labeled “Response” and initially filled with missing values, and a table script called “Analysis”. The Response column uses values of 1 and 0 to record whether a particular run passed or failed according to what we’re measuring. For the Categorical platform, a value of 1 is recorded if the platform behaves as expected, and 0 if there’s a fault of some kind. The “Analysis” script is to be used once the Response column is filled in.

[Figure: covering array data table with the Response column and the Analysis table script]

When performing the test, ideally we observe all 1’s in the Response column. In the data table provided on the File Exchange, I went through and collected some hypothetical data: each of the runs passed except for the last one. What do we do with this failure? It would be nice to narrow down the potential causes.

Analysis

The first place to look for a cause is at single factors. Since each preference option occurs in more than one row and every other run passed, no single factor can be the cause. The next likely candidate is a 2-option cause. We have 45 Choose 2 = 990 different 2-option combinations in that row, which doesn’t seem very informative. However, many of these combinations appear elsewhere in the design, and the platform passed those tests, so they can be eliminated as potential causes. Going through the list of potential causes and eliminating those that appear elsewhere would be a tedious task, which is exactly what the Analysis script takes care of for us. Running that script:

[Figure: Analysis report listing the potential 2-option causes]

The Analysis report shows the potential causes for the failure: the list is greatly reduced, from the 990 pairs contained in the failing row down to just 16. What’s more, the follow-up testing has been simplified to checking just those pairs.
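
To make the elimination logic concrete, here is a minimal JSL sketch of the kind of bookkeeping the Analysis script automates. It assumes a single failing run and a table whose only non-factor column is Response; the real script is more general than this.

    dt = Current Data Table();

    // Every column except Response is a factor
    facts = dt << Get Column Names( "String" );
    Remove From( facts, Contains( facts, "Response" ) );

    // Locate the (single) failing run
    failRow = 0;
    For Each Row( If( :Response == 0, failRow = Row() ) );

    // A pair of settings from the failing run is a suspect unless
    // the same pair also appears somewhere in a passing run
    suspects = {};
    nf = N Items( facts );
    For( i = 1, i <= nf - 1, i++,
        For( j = i + 1, j <= nf, j++,
            vi = Column( dt, facts[i] )[failRow];
            vj = Column( dt, facts[j] )[failRow];
            seenInPass = 0;
            For( r = 1, r <= N Rows( dt ), r++,
                If( Column( dt, "Response" )[r] == 1 &
                    Column( dt, facts[i] )[r] == vi &
                    Column( dt, facts[j] )[r] == vj,
                    seenInPass = 1;
                    Break();
                )
            );
            If( !seenInPass,
                Insert Into( suspects,
                    facts[i] || "=" || Char( vi ) || ", " || facts[j] || "=" || Char( vj ) )
            );
        )
    );
    Show( N Items( suspects ) ); // 16 for the hypothetical data above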

Any additional information makes this process even easier. In our example, the tester knows that the “ChiSquare Test Choices” were recently updated, and can look at those two cases first. It’s worth noting that clicking on any of the potential causes highlights the corresponding failure row and columns. This is useful if you’re dealing with many rows and/or columns and want a quick way to subset the table.

Final Thoughts

We went from a task of testing preferences that looked impossible to something that gave reasonable coverage in just 16 runs. We could go even further by creating a strength 3 covering array – with some optimizing, I found a design that had 65 runs (and the 4-coverage was over 96%). Constraints where some combinations cannot be run together can also be accommodated with disallowed combinations. Any luck using covering arrays in your own work? Leave me a comment and let me know. Thanks for reading!

Why are some dogs adopted faster than others?

Carah Gilmore entered her school's science fair competition and advanced to higher levels with her project analyzing data about pet adoptions. (Images courtesy of Ryan Gilmore)

Here at JMP, we love pets. So we were thrilled to hear that a young scientist used our software to explore data about pet adoptions from local animal shelters. The project is adorably titled "Furever Friends."

How young is this scientist? She is 10 years old, and her name is Carah Gilmore. Her father, Ryan Gilmore, works at SAS as a Senior Application Developer.

Carah's school, Mills Park Elementary in Cary, NC, strongly recommended that students take part in the science fair, so Carah chose a topic that would help a charity dear to her heart.

"My dad and I take pictures for an organization named Rescue Ur Forever Friend to help pets get adopted quicker. I was wondering how I could help, so I thought of this project – what factors determine how fast a dog gets adopted," the rising sixth-grader says.

To identify those factors, she needed some data. She was able to get data from Rescue Ur Forever Friend and Second Chance Pet Adoptions. Ryan says the final data table had around 1,400 rows and 12 columns.

Carah looked at adoption time by breed.

Ryan had worked in technical support for JMP and knew the software very well, so he decided to teach Carah how to use Graph Builder and Tabulate to perform her analysis.

But before starting on her analysis, Carah had to do a bit of data preparation.

"The data from both organizations was given as Excel files. Carah combined the data into a single Excel file, which was then imported into JMP. New columns were created to compute the length of stay and age of the animals. Other columns were also created to categorize the dogs based on color and breed," Ryan says.

What Carah learned

Black Dog Syndrome is the idea that black dogs spend more time waiting for a new home than lighter-colored dogs. Carah aimed to test that hypothesis and see what else she could learn.

"My results showed that it took longer for black dogs to be adopted," Carah says.

Carah's graph from her science fair project showing that black dogs take more days to get adopted than dogs of other colors.

Black dogs took 83 days on average to be adopted, whereas brown dogs took an average of 65 days. Gray dogs fared the best, waiting only 38 days on average for a new home.

Carah also found that female dogs were adopted more quickly than male dogs. As might be expected, large dogs took more days to be adopted than medium or small dogs. But surprisingly, looking at size alone, extra-large dogs spent the fewest days in the shelter before adoption.

Competing at science fairs, and beyond

At her elementary school science fair in January, Carah placed third, which qualified her to compete at the regional science fair the following month. She placed in the top eight among more than 100 students there and competed at the state level in March. While she did not place in the state competition, she did learn a lot along the way and helped spread the word about pet adoption.

What was the best part of doing this project for Carah?

"Helping the organizations. Creating the formulas and graphs in JMP. Also, finding out how long it does take different dogs to get adopted," Carah says.

The science fairs are over for this year, but she hopes all who have seen her findings (including you, dear reader) will "try to adopt a rescue dog and take a look at the black dogs because they’re just like the other dogs.”

Bravo, Carah!

New in JMP 12: Data Table support for images (part 2)

Author’s note: Today’s example applies to a map created in Graph Builder, but you can use this approach in the majority of platforms in JMP.

In part one of this two-part series, I showed an example of the support for images in data tables in JMP 12. Today, I’ll show you how this capability allows you to create multilayered reports and graphs in Graph Builder, with just a bit of scripting.

[Figure: US map colored by average temperature; hovering over a state reveals its temperature trend over time]

In the map above, each state is colored by its average temperature — blue is cooler, red is warmer. Hovering over any state reveals a graph of the state’s temperature trends over time. Here, we have the best of both worlds: an easily digestible overview of the data, with additional details hidden but immediately available.

Graph Builder in JMP allows us to save images of the graphs it creates to data tables, but the trend graph above was not created in Graph Builder — it was created using Overlay Plot. What happens when we want to save images we created outside of Graph Builder?

Fortunately, a bit of JSL is all we need. In today’s example, we will walk through the code needed to create a map like the one above.

First, let’s look at the raw data: average annual temperatures for each of the 50 states in the US. This is our source table, and will be used to create the trend plots.

[Figure: source data table of average annual temperatures for each state]

The map itself relies on the table below. To create it, we will:

  • Summarize the original table, averaging temperatures by state.
  • Create a new column, of Expression data type, to house the images of the trend plots created from the original table.
  • Copy the images from each of the trend plots into the summary table.
  • Turn on labeling for the State and Mean(Temp F) columns, to reveal these when we hover on the map.

[Figure: summary table with State, Mean(Temp F), and Graph columns]

Step by step, here is what it looks like in JSL.

[Figure: JSL code, lines 1-13]

  • Line 1 assigns a handle to the source table.
  • Lines 3 – 11 create the overlay plot report.
  • Line 13 assigns a handle to the overlay plot, so we can reference the individual outline boxes within it.
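
Since the code appears only as a screenshot, here is a hedged reconstruction of what those lines might look like. The file and column names (State Temperatures.jmp, Year, Temp F) are assumptions based on the description above, and the way the report handle is obtained may differ from the original code.

    // Line 1: assign a handle to the source table (file name assumed)
    dt = Open( "State Temperatures.jmp" );

    // Lines 3-11: create the overlay plot report, one graph per state
    op = dt << Overlay Plot(
        X( :Year ),
        Y( :Temp F ),
        By( :State )
    );

    // Line 13: assign a handle to the display tree that holds the 50
    // outline boxes; one plausible way is via the first By group's report
    rep = Report( op[1] ) << Parent;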

After running the code above, the report looks like this. (If your report differs, it could be due to differences in our preference settings.)

[Figure: overlay plot report with one temperature-trend graph per state]

The next four lines of JSL create the summary table and prepare it for use with the images.

[Figure: JSL code, lines 15-21]

  • Line 15 creates the summary table.
  • Line 17 deletes the N Rows column from the summary table.
  • Line 19 creates a new column, Graph, of expression data type.
  • Line 21 turns on the column labeling properties for the State and Graph columns.
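
Again, a minimal sketch of what those statements might look like, using the same assumed column names (the labeling is shown as two statements here; the original may combine them):

    // Line 15: create the summary table, averaging Temp F by State
    dtSum = dt << Summary( Group( :State ), Mean( :Temp F ) );

    // Line 17: delete the N Rows column that Summary adds
    dtSum << Delete Columns( "N Rows" );

    // Line 19: create a new column, Graph, of Expression data type
    dtSum << New Column( "Graph", Expression );

    // Line 21: turn on labeling so hovering reveals these columns
    dtSum:State << Label( 1 );
    dtSum:Graph << Label( 1 );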

All that remains is to loop through the report, saving the pictures to the summary table.

[Figure: JSL code, lines 23-33]

  • After initializing our counter in line 23, we cycle through a loop. Since the 1 in line 25 always evaluates to “True”, we need a way to break out of the loop. The Try ( … , … ) statement lets us do that.
  • Try ( … , … ) will attempt to process whatever code appears before the comma. If an error is encountered, it processes the code after the comma (or just ignores the error, if there is no code after the comma—the second argument is actually optional.)
  • Here, line 27 attempts to get a handle on the current outline box in the report, while line 28 takes a picture of that outline box, and stores the picture in the current row of the summary table. Since the summary table and the report both sort the states in alphabetical order, this simple approach works.
  • Line 32 increments the index, so that the next time through the loop, we attempt to get the next outline box. Eventually the index hits 51, and since Rep[Outlinebox(51)] does not exist, we get an error and move into the 2nd half of the Try ( … , … ) statement, exiting the loop via the Break ().
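
Putting that together, the loop might look something like this (a sketch consistent with the description above, not the original code):

    // Line 23: initialize the index
    i = 1;

    // Lines 25-33: loop until we run out of outline boxes
    While( 1,
        Try(
            ob = rep[Outline Box( i )];          // line 27: current outline box
            dtSum:Graph[i] = ob << Get Picture;  // line 28: store its picture in row i
        ,
            Break()  // Outline Box( i ) does not exist, so exit the loop
        );
        i++;  // line 32: move on to the next outline box
    );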

The summary table should now look like this:

[Figure: summary table, now with a trend-plot image in each row of the Graph column]

To create the map, invoke Graph Builder and drag the State and Mean(Temp F) columns to the Shape and Color zones, respectively. That’s all there is to it!

[Figure: finished Graph Builder map, with each state colored by its average temperature]
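
If you prefer to stay in JSL, the equivalent Graph Builder call might look roughly like this (a sketch; the column name again follows the assumptions above):

    // Map of states, shaped by State and colored by average temperature
    dtSum << Graph Builder(
        Variables(
            Shape( :State ),
            Color( :Name( "Mean(Temp F)" ) )
        ),
        Elements( Map Shapes( Legend( 1 ) ) )
    );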

I hope you have enjoyed today’s post; I am excited to see and hear about the interesting applications you will come up with!

P.S. I’ve placed the source table and script file in the JMP User Community’s File Exchange. (A free SAS profile is needed to access the files.)

Testing software preferences with a covering array

In any computer software, it’s not unusual to have a set of preferences that allows users to customize settings, and JMP is no exception. For example, in JMP 12, the Categorical platform allows a great deal of the output to be tailored to the user’s preferences. Here’s what we see under Preferences for the Categorical platform:

[Figure: Preferences for the Categorical platform]

There are a number of check boxes, drop-down options and editable number boxes. Imagine trying to test whether certain combinations cause a failure in the software. Even keeping the free-form numeric choices to a minimum, we still have two options with four choices (levels), four options with three choices, and 39 binary options. If we were to test every possible combination, we would have a whopping 4^2 * 3^4 * 2^39 = 712,483,534,798,848 combinations. Obviously, testing every possible combination is not feasible.

However, it turns out that faults in software are usually due to the interaction of just a few options. For example, if I wanted to test only two check boxes, I would have four cases to consider: both boxes checked, both unchecked, and the two cases with one box checked and the other not. Following this idea, why not first consider all pairwise options? Even checking just the two-option possibilities, a little math shows that I have 4,690 combinations to consider.
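
If you want to verify that “little math” yourself, here is a quick JSL check: the pairwise count sums, over every pair of options, the product of their numbers of levels.

    // Full factorial: two 4-level, four 3-level and 39 two-level options
    full = 4 ^ 2 * 3 ^ 4 * 2 ^ 39; // 712,483,534,798,848

    // Pairwise combinations: sum of levels[i] * levels[j] over all pairs i < j
    levels = {4, 4, 3, 3, 3, 3};
    For( k = 1, k <= 39, k++, Insert Into( levels, 2 ) );
    pairs = 0;
    n = N Items( levels );
    For( i = 1, i <= n - 1, i++,
        For( j = i + 1, j <= n, j++,
            pairs += levels[i] * levels[j]
        )
    );
    Show( full, pairs ); // pairs = 4690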

A different way to test?

Luckily, in JMP 12 we have covering arrays! They can help us derive test scenarios for testing software. With a covering array, if I specify strength 2, I know that all of the two-option combinations will be covered. I use DOE->Covering Array (JMP Pro only) and add factors corresponding to each option. Part of my factor setup is shown below, and I’ve included the full Factors table on the File Exchange. You can load it in the Covering Array platform by having the factors table open and selecting Load Factors under the red triangle menu.

[Figure: factor setup in the Covering Array platform]

Keeping Strength = 2 means that I want to ensure that all combinations of all pairs of options are included somewhere in the design. When I chose Continue and then Make Design, I ended up with a 20-run design (your results may vary, but usually the run size will be in the range of 19-21 runs). It’s astounding to think that with 45 options, all combinations of all possible pairs appear in just 20 runs.

Can I do it in fewer runs?

Since the two factors with the largest number of values have four levels each, the lower bound on the size of this strength-2 covering array is 4 x 4 = 16 runs. For this example, four extra runs don’t seem like a big deal, but there are cases where a 20 percent savings in the number of runs is substantial in terms of time and money. It’s easy to try to find a smaller design using the Optimize button. Selecting 10,000 iterations, I was able to find the 16-run design that’s on the File Exchange.

But wait, there’s more!

Let’s take a closer look at the Metrics from the Covering Array platform:

[Figure: Metrics report from the Covering Array platform]

The coverage numbers represent the percentage of combinations involving t factors that are covered by the design (i.e., how many of the possible combinations appear somewhere together in the design). If the Categorical platform passes the set of tests defined by the covering array, we’ve covered all combinations of any pair of options, hence the 100 percent coverage for t=2. But if everything passes, then we’ve also tested (and passed) over 85 percent of the combinations of three options, and 57 percent of those of four options.

This seems too good to be true

With all those options, do 16 runs really give me all that information? Keep in mind that I only know that all those combinations appear in the design. If everything passes, we know that there are no faults due to two options together, as well as none due to many of the three- and four-option combinations.

What happens if I do see a failure? Then the covering array has helped us identify that there’s a problem, but not the direct cause of that problem. Can it help us find the cause of the failure? It turns out that the covering array contains a lot of information based not just on the failure, but also on the successful tests. Next time, we’ll look at how it can help us find a cause.

IoT, DOE and more

Internet of Things (IoT) and design of experiments (DOE) are just two of the things you will hear about if you tune in to the Analytically Speaking webcast with Dennis J.K. Lin and Bradley Jones on June 10.

Both are invited speakers at the 32nd Quality & Productivity Research Conference, which takes place the same week at nearby North Carolina State University and which JMP is co-sponsoring. They will step away from the conference to talk with us for an hour about some of their many contributions to statistics, and in particular design of experiments.

A professor of Statistics at Penn State University, Dennis recently spoke at the first-ever Discovery Summit China, where he talked about statistics for IoT. Earlier in the year, Brad, who is Principal Research Fellow at JMP, spoke at the inaugural Discovery Summit Europe with Peter Goos on some of the new DOE capabilities in JMP 12.

If you can’t make it for the live webcast of our discussion, you can always watch it from the archives later on.

Probability and Multiple Choice Profiler in JMP 12 Choice platform

In an earlier post, I introduced the Probability and Multiple Choice Profiler, two new tools in the Choice Platform that help visualize comparisons between competing products and predict market share for proposed new products. This post covers step-by-step instructions for how to open and use the profilers in JMP 12.

Let’s consider the pizza data from the JMP sample data library. There are two versions of that example that have the same information, but different data formats. Let’s start with the multiple table example contained in Pizza Profiles.jmp, Pizza Responses.jmp, and Pizza Subject.jmp. These three tables mimic how choice data are often collected, with one table detailing the different factors in the experiment, another that collects the responses, and a third table for demographic information on participants.

Running the script attached to the Pizza Profiles.jmp table produces a report with Choice model effects and Likelihood ratio tests on the resulting estimates. (Note: JMP performs the likelihood ratio tests by default if they can be calculated quickly. If the tests are not produced by default, you can always request Likelihood Ratio statistics from the red triangle menu.)

The effect tests show which factors are statistically significant, and the signs on the parameter estimates show whether that factor makes subjects more or less likely to buy the product. From this report, we can see that Jack cheese is less popular than Mozzarella, and there is a statistically significant interaction between Gender and preferences for Thick crust and Pepperoni vs No Toppings.

The Utility Profiler (just called “The Profiler” in JMP 11) can help explain how these interactions affect consumer choices. Selecting Utility Profiler from the red triangle menu brings up the following:

[Figure: Utility Profiler with Gender set to M]

Changing the Gender from M to F allows us to clearly see the effect of the interactions:

[Figure: Utility Profiler with Gender set to F]

Men are more likely to choose Thick crust pizza with Pepperoni. Women are more likely to choose Thick crust pizza with no toppings.

“How much more likely?” and “More likely compared to what?” are questions that your marketing manager might ask you. Until JMP 12, your answer might have been, “Let me get back to you.” Now all you need is the Probability Profiler.

In JMP 12, the Probability Profiler is right below the Utility Profiler in the Choice platform’s red triangle menu. When you select the Probability Profiler, you are comparing your proposed new product to a “Baseline” product. Maybe I’m planning to offer a Thick crust pizza with Mozzarella cheese and Pepperoni to compete with my competitor’s pizza, which is a Thick crust with Jack cheese and Pepperoni.

[Figure: Probability Profiler comparing the proposed pizza to the baseline]

The Probability Profiler makes it easy to see that my proposed product is a good idea. When choosing between my pizza and my competitor’s, there’s a 92 percent chance that the “Female” market segment will choose mine. Changing Gender in the Baseline settings to M shows that the “Male” market segment has a 96 percent chance of choosing my pizza. Settings related to subjects (i.e., people) will always appear in the Baseline settings, so you’re always comparing apples to apples when you compare choice probabilities.

[Figure: Probability Profiler with Baseline Gender set to M]

There aren’t many markets where consumers have only two choices. You might know about multiple competitor products and want to design the pizza with maximum choice probability against all of them. That’s a job for the Multiple Choice Profiler, which is in the red triangle menu just below the Probability Profiler.

Selecting Multiple Choice Profiler from the red triangle menu brings up a dialog asking how many choices you want to profile. Three is the default, but you can have more.

[Figure: dialog asking how many choices to profile]

Now, instead of having one Baseline model and one alternative, the Multiple Choice Profiler produces a set of linked profilers — one for each alternative. The selectors at the top allow you to choose the values of the subject variables that make your market segments, and the profile sliders set the properties of the different products. There’s also a chart just below the header to visually show which product would have the highest predicted market share.

[Figure: Multiple Choice Profiler with three linked alternatives]

Now it’s easy to see that a Thin crust pizza with Mozzarella cheese and no toppings is a winner against the other alternatives.

All of the above examples have shown an analysis with multiple tables. If you usually perform a one-table analysis, JMP will place your subject effects in the Multiple Choice Profiler. (There is no way for JMP to know whether an effect was intended as a subject effect or a profile effect, so JMP makes the choice that has the most flexibility for you, the user.)

[Figure: Multiple Choice Profiler for a one-table analysis, with subject effects included]

The probability comparisons in the multiple choice model are restricted so that they must sum to one. That means the probability comparisons from a Choice model only make sense if you are comparing within a market segment. If you change the values of the subject variables, be sure to make them the same for all the choice probabilities. Otherwise, your predicted probabilities (and therefore market shares) will not be correct.
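
Behind these numbers is the standard multinomial logit form: each alternative’s predicted share is its exponentiated utility, normalized over all alternatives in the comparison, which is why the shares must sum to one. A tiny JSL illustration with made-up utility values:

    // Made-up utilities for three alternatives within one market segment
    utilities = [1.8, 0.6, 0.2];
    probs = Exp( utilities ) / Sum( Exp( utilities ) );
    Show( probs ); // shares are positive and sum to 1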

The Utility Profiler has always been a useful tool for visualizing your model. JMP 12 adds the Probability Profiler and the Multiple Choice Profiler, two new tools to help visualize comparisons between different products and predict market share for proposed new products.

Using a space filling design and Google Maps to plan my commute: Part 2

I did learn some things about my commute using a space filling design. (Photo by Geoffrey Arduini via Unsplash: https://unsplash.com/geoffreyarduini)

In my previous blog post, I created a 150-run space filling design to collect travel times over various departure times in the morning and evening. I wanted to see if I could use this designed experiment to learn something useful about my commute.

Google Maps gives a range of times for a given trip; I’ve taken the midpoint of each range. While I might want to validate later just how accurate these times are, using this computer-generated response means I can collect all the data in one sitting. If I instead collected the data from the actual drives I’m making, it would take me 30 weeks, and the data would be much noisier. My boss also told me that he might have issues with my commute being dictated by data collection for a designed experiment.

So, I took the fast and prudent approach and collected the data from Google Maps. Now what?

After some data exploration, it was clear that to make Fit Model work, I’d probably want to try higher powers, maybe some spline effects, transformations and so on. But all I really wanted was a fit that captures some of the intricacies while giving me access to the Prediction Profiler, so a neural net seemed like the perfect choice. I wasn’t worried about overfitting so much as getting appropriate results, and in this case I could cheat because I had a sense of what the fit should look like from investigating morning and evening times separately. I tried various choices of nodes and validation methods through use of a covering array, ultimately using 10 TanH and Gaussian nodes in the first layer and 10 Linear nodes in the second, with validation through Random Holdback. Instead of telling you what I see in the results, why not just share them with you?
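
For readers who script their fits, a launch like the one described might look roughly like this in JSL. The column names are hypothetical, and the exact node and validation options may differ in your version, so treat this as a sketch rather than the original script.

    // Hypothetical sketch; multi-layer networks require JMP Pro
    nn = Current Data Table() << Neural(
        Y( :Commute Minutes ),
        X( :Day, :Morning Departure, :Evening Departure ),
        Validation Method( "Holdback", 0.3 ),
        Fit(
            NTanH( 10 ), NGaussian( 10 ), // first layer
            NLinear2( 10 )                // second layer
        )
    );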

Here's what my Profiler looks like as a static image, but you'll want to explore the interactive version; click in the graphs or enter different values for Day, morning departure or evening departure.

[Figure: Prediction Profiler of predicted commute time by Day, morning departure and evening departure]

I created this Profiler by saving the prediction formula from the Neural platform, and then saving as Interactive HTML 5 from Graph->Profiler using that prediction formula.

What I Learned

I’ll let you play with the results, but here are a couple of things I found interesting that corroborate trends I’ve observed:

  • For Monday to Wednesday, leaving by 7:30 a.m. gives the shortest commute. Every 10-minute delay after that costs roughly an extra minute on the road.
  • On Thursday and Friday, the longest predicted commute is for departures at 8:07 a.m. and falls off from there. The commuting time at 9:00 a.m. is roughly the same as at 7:30 a.m.
  • The afternoon commute is worst at around 5:30 p.m. Monday through Wednesday. On Thursday and Friday, the longest commuting time results from leaving a little before 5:10 p.m.
  • The Thursday and Friday commute generally takes less time than commutes earlier in the week.

Final Thoughts

I had a lot of fun using the new JMP 12 features in space filling designs and the Interactive HTML Profiler. While the accuracy of the Google Maps travel times remains to be seen, I was a bit surprised by the variation in the predicted commuting time from day to day.

Another interesting experiment would be to add another factor to compare different routes. A future blog post will show how to add restrictions to morning and evening departure times. After all, if you leave later in the morning, you will probably also need to leave later in the evening.

Using my muscle map as a selection filter for workout data

JMP 12 introduced the ability to create a selection filter, or a graph that filters other graphs in a report. I have used this feature quite a bit since its introduction, and I love the flexibility it provides! I hope you had a chance to see developer Dan Schikore's post on selection filtering in JMP 12, which included some examples of using geographic maps as selection filters.

In my previous post, I described how I created a set of muscle shape files for visualizing my workout data. Graphs with custom map shapes like mine can also be used as selection filters, or they can be targets of selection filtering by another graph.

Creating a selection filter

To create a report containing a selection filter, I usually start by using the Combine Windows feature to place two or more graphs in the same report window. I like to place the graph acting as a filter on the left, since I'm accustomed to the Local Data Filter being there.

  • First, place the graph windows next to one another in the approximate arrangement you want.
  • On Windows, you can check the boxes in the lower right-hand corner of graph windows you want to place together, and choose Combine Windows from the pull down menu.
  • On Mac, you can choose Combine Windows... from under the Window menu and click boxes for the reports you want to combine. For more information about Mac Combine Windows, see Chris Butler's post on the topic.
  • Within the combined report, choose Edit Application from the Report red triangle menu.
  • When the application opens for editing in App Builder, right-click on the graph you want to use as the filter and choose Use as Selection Filter.
  • Run the application from its red triangle menu to open a new window where the selected graph will now be a live filter.

I used the steps above to create a combined report that used my custom muscle map as a selection filter for a stacked bar chart of total weight lifted for exercises, grouped by primary body part. I described in an earlier post how I calculated total weight lifted using a JMP formula for all my unique rep-set combinations, and in another post, I shared how I grouped exercises using the Recode platform in JMP.

Filtering one graph with another graph

First, here is a view of the combined report without any selection applied. As a reminder, the muscle map shown on the left is colored by the proportion of total weight I lifted for different primary body part areas over my entire data set. On the right, I created a stacked bar chart in Graph Builder, using primary body part as an Overlay variable. The various colors correspond to the different body areas worked by exercises. (I'm still in the process of entering my data, so there are a few months in the middle without much data.)

[Figure: combined report before selection filtering]

In the view below, I selected two body parts in my filter graphic: shoulders and back. The stacked bar chart updated accordingly to show total weight lifted data for only those areas. Using filtered views like this one makes it easier to see how my training patterns for specific body areas have changed over time. According to the unfiltered graph above, I lifted similar monthly amounts of total weight during 1998 and 1999 as in recent months, but a greater proportion of that total weight is now lifted during shoulder and back exercises.

[Figure: combined report after selecting shoulders and back in the muscle map]

I have also created reports where my custom map is filtered by another summary graph. For example, on the left below is a bar graph of total weight lifted by unique workout session. From this graph, I can zero in on sessions where total weight lifted was higher or lower than usual. By using this graph as a filter, I can select bars corresponding to specific workouts of interest, then drill down into a view of one or more days and compare the proportion of total weight lifted by body area for those selected sessions by filtering my muscle map.

Which of these things is not like the other?

My filtered body maps for the four workouts I selected remind me of the classic Sesame Street song that asks "Which of these things is not like the others?" I chose this set of four workouts because they were from similarly structured days of the same program that worked the same body parts. Although the total weight lifted patterns for three of the sessions looked very similar, it turns out that I chose bodyweight bench pushups as a chest exercise on 1/5/2015. Since I'm never quite sure how to quantify the amount of body weight lifted for exercises like that one, I usually put a 1 in the weight column, resulting in a low total weight lifted number for chest in that session. I could change the filtered shape graph to reflect a different metric, like total weight lifted by primary body part rather than a proportion, but then the colors would become much more sensitive to the total number of sets I completed.

[Figure: muscle maps filtered by four selected workout sessions]

More to come...

I'm still working on entering my historical workout data into JMP and looking forward to presenting at Discovery Summit in San Diego this September. JMP Developer Dan Schikore will be presenting a poster on selection filters at the conference, so I hope you will have a chance to learn more about this useful feature!

I'll also be speaking at the upcoming QS15 quantified self conference in San Francisco and hosting an office hours session there. Stay tuned, because recently, I began experimenting with a Push Strength monitoring band that collects additional metrics about the sets and reps I perform, including velocity, power, sets, reps, and timing of the eccentric and concentric phase of the lifts. I can export the data for my workouts as CSV and visualize it in JMP, providing me some interesting and much higher resolution information about what I'm doing in the gym.

Process Capability in JMP 12

In my previous blog post, I announced the new Process Capability platform in JMP 12 and shared some of the ideas that steered its development. Process capability analysis is used in many industries for assessing how well a stable process is performing relative to its specifications; it helps quality practitioners understand the current state of their process so they can make adjustments and reduce process variation, thereby improving quality and consistency. In this post, I will show some of the new features in Process Capability with an example.
