Using images to bring JMP 12 graphs to life

Earlier this year, I had the opportunity to speak at a National Wear Red Day lunch-and-learn at SAS. I was invited to share my data and experiences as we marked a day devoted to raising awareness of the sobering statistics about cardiovascular disease risk among women.

Heart attack and stroke are responsible for one of every three deaths in women, and kill more women than any other disease by a wide margin. On the bright side, many important risk factors for these diseases are lifestyle-related and largely under our control. Over the past decade, National Wear Red Day has promoted awareness of these risk factors and helped bring to light gender inequalities in cardiovascular research and health care. Many women are now actively reducing their risks by adopting the Life's Simple 7 lifestyle changes recommended by the American Heart Association.

Although I have blogged about my personal diet and fitness data and presented a Discovery Summit 2014 e-poster on the topic, this lunch-and-learn represented the first time I publicly discussed the connection between my own lifestyle data and my risk factors for heart disease and stroke. I shared this connection in part because cardiovascular disease has touched my own family. I lost one grandmother when her doctor misdiagnosed her symptoms as anxiety, failing to recognize key heart attack symptoms experienced more often in women than men. My other grandmother experienced a series of debilitating mini-strokes late in her life. I share these sad stories to help explain why I am passionate about encouraging more women to take active steps towards positive health changes and do what they can to maximize their chances of living to see their children, grandchildren and great-grandchildren grow up.

If you read the first post in my fitness and food blog series, you saw how I took advantage of a new JMP 12 feature to embed pictures of myself in my weight data table. I pinned several representative pictures to my historical weight graph to highlight the changes in my body weight over the past 15 years. My weight tracks predictably with my other cardiovascular risk markers, like blood pressure, body fat, waist circumference and blood cholesterol composition.

Unlike a cholesterol test, however, tracking my weight is easy to do at home; as a result, I have lots of historical data! When I showed my weight graph during my talk on National Wear Red Day, it immediately resonated with the women in the audience. I heard from several people that seeing my pictures (second graph below) made the ups and downs in the chart much more meaningful than the version without them (first graph below).

Weight graph without picture 12-21-14

Weight Graph Grad School to Present 9-9-14

Obviously, I have been through many weight fluctuations over the years. I first actively tried to lose weight through dieting and exercise in middle school. I recently obtained my medical records from my undergraduate years, and they reflected the same pattern I have seen during other stressful periods in my life: I gained weight (12 pounds in the first semester alone) and continued to pack on the pounds as the years went by, gaining a total of 30 pounds during my undergraduate years. It has taken me many subsequent ups and downs through graduate study, parenthood and my early working years to learn that adjusting to life's stress does not have to mean giving up on my weight, health and fitness goals.

In the past, I shelved healthy habits and stopped tracking during holidays or when my time was tight. I then felt like a failure, and this negative thinking led me down a path of declining fitness, increasing weight and rising blood pressure. In reviewing my data in notebooks, I realized that I have always been most successful when I am actively tracking my efforts. Several years back, I recommitted to adopting data collection habits permanently, and this has helped me take the emotion out of my weight loss and maintenance efforts. Even in such an emotionally charged area, data can just be data, and continuing to collect it motivates me to keep my established habits going even during busy times. I track my food and workouts regardless of schedule, and I have learned to both scale down my routine during hectic stretches and enjoy social eating events without stress. My yearly health metrics now reflect my long-term commitment to positive lifestyle habits.

As I mentioned earlier, increases in my weight always lead to my other risk biomarkers heading in the wrong direction. It is hard to statistically separate the effects of diet and exercise on my biomarkers, since I tend to adopt a constellation of healthy lifestyle behaviors when actively working to address my weight. However, I observed that my risk biomarker numbers tend to be at their best levels when I am within my current, healthy maintenance weight range, and my cholesterol composition is not quite as good when I rise above that range.

Lab data

As I showed in my talk, and as you can see from the pink shading in the graph above, hearing the bad news about my heart disease risk biomarkers in early 2008 didn't prompt me to take action right away. In fact, my weight rose another 10 pounds over the next 18 months (during which I avoided yearly blood work) before I reached the point where I was ready to make the changes I needed. The important thing is that I did manage to make positive changes: Although my total cholesterol has stayed fairly consistent around 200, you can see from the graph above that the composition of my good and bad cholesterol has changed greatly since 2008. My ratio of total to HDL ("good") cholesterol was 3.8 in 2008, close to the worrisome threshold of 4, but it now sits near 2 after I raised my HDL by 40 points and lowered my LDL by 40 points.

In looking at this data, I wish I had access to multiple measurements at the same time to get a sense of my variation. Did having my blood cholesterol test on the Monday after my 2015 holiday break affect my numbers at all, compared to waiting a week post-vacation? Since I don't have diagnosed cholesterol issues, I don't have an easy or inexpensive way to get more frequent cholesterol lab tests, and home blood cholesterol tests are expensive and don't provide the kind of detailed information I get from my yearly blood work. Although I don't have an identical twin to serve as a replicate, my youngest sister is almost exactly my height, and we look and are built very similarly. She shared with me her own cholesterol improvements after her 35-pound weight loss: She increased her HDL 6 points, reduced her LDL 25 points and reduced her triglycerides 33 points between June 2010 and November 2014, while her Hemoglobin A1c levels (an important biomarker for type 1 diabetics like her) dropped significantly.

I helped coach my sister through the changes she has made, and we in turn helped coach my mom and other sister as they have both improved their heart disease risk through lifestyle changes. My mother lost weight, became more active and improved her own cholesterol numbers. Between me, my mom and my two sisters, we have lost a combined total of more than 200 pounds over the past several years using the same strategies I have shown in my previous blog posts: reducing calorie intake and increasing activity to achieve a calorie deficit.

I started this blog post by talking about family, and will end talking about it, too. I think we sometimes forget the positive ripples that achieving health improvements can have on our social networks. I shared how my changes influenced my immediate family, but it goes even further than that. Once my mother achieved her own positive changes, she became a positive influence on a friend's post-stroke weight loss efforts, and her friend has now lost more than 100 pounds and adopted several new exercise activities.

To add pictures to your own data table in JMP 12 (a short JSL sketch follows these steps):

  • Create a new column and change its Data Type to Expression.
  • Drag or copy/paste your pictures into appropriate cells in your Expression column.
  • If desired, change the marker color of the rows that contain pictures so that you can easily identify them in graphs.
  • Hover over points with pictures and pin the hover labels to keep them visible.
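
If you prefer scripting, here is a minimal JSL sketch of the same steps. The column name, image path and row number are hypothetical placeholders, and the table is assumed to already have rows:

    // Add an image to an Expression column (JMP 12)
    dt = Current Data Table();
    dt << New Column( "Picture", Expression );  // step 1: Expression data type
    dt:Picture[1] = New Image( "$DOCUMENTS/weight_photo.jpg" );  // step 2: put a picture in row 1
    Color Of( Row State( 1 ) ) = 3;  // step 3: color index 3 (red) flags the row in graphs
    // Step 4 is interactive: hover over the point in your graph and click the
    // pin icon on the hover label to keep it visible.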

Keep in mind that the size of your pictures will affect the size of your table when open in memory or saved to your file system. If you have the space to save large images in your table, yet want to use smaller versions in your hover labels, you can open the Column Info dialog and select Expression Role near the bottom of the Column Properties list to modify the displayed size of the picture, as shown. Also new in JMP 12 is the ability to change the height of data table cells, so you can adjust the size of the thumbnail image shown in your table.

Column Info Expression Role

I also used images in one of the final graphs I showed in my Discovery Summit 2014 e-poster to track my pregnancy weight changes alongside pictures of my expanding belly. I later created an expanded version of this chart (which didn't end up on my poster) that included summary information about average calories per food item and average number of food items logged per week during my pregnancy. I noted a definite decline in the diversity of my food log during March and April 2011 and a rise in the average calories per item logged.

Weight graph for blog

What was going on here? I suspected I knew the answer, recalling that early in my pregnancy, I struggled with an aversion to coffee, dairy, and many green, fresh vegetables. I reviewed my detailed food logs and summarized them with treemaps using the local Data Filter in JMP to restrict date ranges, as described in this post. My suspicions were confirmed: During the months when my stomach was most unsettled by nausea, I ate more combination foods and starchy carbs than usual, which were more calorie dense than my usual food choices.

Eating patterns early in pregnancy treemap

Eating patterns later in pregnancy treemap

What uses can you think of for pictures in your data tables, either for your own personal data or work-related projects? I think the possibilities are simply endless!


Discovery Summit Europe live blog: Beau Lotto

In the final keynote of Discovery Summit Europe in Brussels, we hear from Beau Lotto, renowned neuroscientist and Director of the Change Lab at University College London.

View the live blog of this speech.

See photos and tweets from the conference at jmp.com/live.


Discovery Summit Europe live blog: Bradley Jones and Peter Goos

On the second full day of Discovery Summit Europe, Bradley Jones, JMP Principal Research Fellow, and Peter Goos, Professor of Technology at the University of Antwerp, deliver a keynote speech on design of experiments.

View the live blog of this speech.

See photos and tweets from the conference at jmp.com/live.


Discovery Summit Europe live blog: Dick De Veaux

Williams College statistics professor and data mining expert Dick De Veaux gives a keynote speech at Discovery Summit Europe 2015 in Brussels, Belgium.

View the live blog of this speech.

See photos and tweets from the conference at jmp.com/live.


Discovery Summit Europe live blog: John Sall and Chris Gotwalt

JMP creator John Sall, who is also SAS Co-Founder and EVP, gives the opening keynote speech of Discovery Summit Europe 2015 in Brussels, Belgium. Sall is joined by Chris Gotwalt, Director of Statistical R&D for JMP, in a speech titled "Addressing the Challenges of Data Variety." The speech marks the official launch of JMP 12. To see a recording of this speech, watch the webcast premiere on Wednesday, March 25.

View the live blog of this speech.

See photos and tweets from the conference at jmp.com/live.


Flow and Frontier in JMP 12

Long lists of improvements go into each new version of our software, and usually there are one or two themes that characterize the release. JMP 12 launches this week, and the themes of this new version are flow and frontier.

By flow, I mean workflow, the way we can smooth out the steps you need to take to get work done. Flow also refers to a state of consciousness, described by Mihaly Csikszentmihalyi as an uninterrupted, engaged state of deep enjoyment and creativity.

By frontier, I mean that we push the edges of what's possible and expand into new feature space.

Workflow improvements include:

  • Making database access involving multiple tables easy and fast, with Query Builder.
  • Making cleaning up data much easier, with a Recode feature that automatically finds categories that should be combined, and an outlier facility that makes it easy to locate and deal with outliers.
  • Making it easy to find which effects are important in fitted models and to remove those that are not.
  • Making it easy to publish results as PowerPoint or HTML.

These workflow improvements can save you a lot of steps and make your path much easier, so that you spend most of your time analyzing the data, not overcoming obstacles.

Enhancements that push the frontier include:

  • Enabling data to have any expression type, such as images.
  • Allowing multivariate platforms to handle very wide cases with many thousands of variables, which arise increasingly in modern applications such as gene expression data and multisensor data.
  • Implementing Covering Arrays to construct test designs that cover high-degree interactions of features.
  • Generalizing model fitting with more distributions, more selection and shrinkage features that fit faster.
  • Enabling fast fitting of complex mixed models when they have a huge number of levels in random effects.
  • Pushing current features to cover more situations: extending process capability to understand short-run behavior, extending correspondence analysis with multiple correspondence analysis, extending PLS with PLS-DA for categorical responses, extending space-filling designs with categorical factors and extending definitive screening designs with blocking factors.

Pushing these frontiers can have a major impact in uncovering discoveries.

With these improvements, we cover more ground, and provide smoother paths to the ground we already prepared. It all adds up to a more productive experience in a wider variety of application areas.

When I think of how smooth the JMP workflow is, I remember a remarkable video Julian Parris made, called “Speed Stats.” Julian is from our academic team, and he wanted to show students and faculty just how easy it could be to answer a long list of questions, live, in just 10 minutes with tools that put you in the groove. It is a remarkable illustration of flow. You may not quite achieve Julian’s smoothness, but once you get enough experience, you can come close.


P.S. You can see a demo of many of these enhancements in a recording of my speech from Discovery Summit Europe.


Reflections on my ongoing diet and fitness project

I've blogged quite a lot recently about using Graph Builder to visualize my diet and fitness data, and you can see all posts in this series. While creating my Discovery Summit 2014 e-poster about this project, I significantly broadened my skills as a JMP user. It was the first time I wrote a custom JSL script to automate the import, combination and formatting of data from a large set of files (~50 Excel files of activity data and ~50 text files of food log data). This time investment really paid off, since I have been able to apply the skills I learned to other work-related data import and visualization tasks.

In addition to gaining greater experience with JMP and JSL, I learned a lot about myself and my habits through this project. One of my most important revelations was that even my carefully collected self-report data contained flaws and biases related to my device usage and food logging patterns. This discovery has influenced my thoughts about large-scale efforts to aggregate diet and fitness data across individuals to understand weight loss patterns and also affected how I continue to collect my own data.

I have blogged before about how I wore my armband activity monitor less during the summer months in an effort to avoid strap tan lines, and how this usage pattern influenced the completeness of my activity data. It seems reasonable that every person using an activity monitor could have a unique wear pattern that would influence the completeness and accuracy of their own data. My device gives me a daily "Percent Onbody" metric, which helps me compare days with similar hours of wear. Although this might still not be perfect, it is a step in the right direction. Below is the graph I included in my poster to illustrate my own wear patterns.

Seasonal compliance

Similarly, food logging patterns are likely to vary by individual. Previous research has confirmed that people have trouble recalling the details of foods they ate days or even hours ago, much less recalling diet patterns from the day, week or year before. Diet study participants tend to under-report food consumption. Portion size estimation is another source of error in food logging, and one that may differ from person to person. Whether intentional or unintentional, food logging frequency, timeliness and accuracy are sure to vary across a group of people. (You can read a lot more about these and other issues in a freely available chapter published in 2013 titled Dietary Assessment Methodology.)

I have noted that logging my foods before or just after a meal helps minimize my recall bias, so I do that whenever possible. By exploring my own food log data in Graph Builder, I was able to identify several different groups of outlier days where my own logging data was incomplete for various reasons. Even a large-scale study whose participants used a food logging app like I do would be faced with a choice: Take data from participants at face value knowing logging compliance and data quality might vary widely, or undertake assessment of logging compliance patterns by individual. Without knowing the underlying truth about each individual's eating patterns, it would be extremely difficult to assess whether data was incomplete or incorrect, and how that might affect overall study conclusions.

Density of cals consumed and meals

I have seen many examples of mismatches between quoted serving sizes and actual food weights on food labels, and I have read many articles indicating that calorie counts on menus are often unreliable. Similar-looking food items can have widely varying calorie counts due to differences in largely hidden ingredients like sugar, butter or oil. When possible, I weigh or measure portions to improve the accuracy of my food log data. To compensate for underestimation of calories in packaged or restaurant-prepared foods in my own data collection, I often add 10% to the serving size I log for an item, although I know errors can sometimes be much greater than that!

Errors in estimating consumption are compounded by errors in calculating exercise burn when it comes to the deficit and surplus numbers that ultimately govern weight loss. Most apps that log food also allow you to log exercise, thereby "earning" more calories to eat. Unfortunately, standard estimates of calorie burn for activities often do not match real-life circumstances. They may overestimate true burn for some individuals or include baseline calories that would have been burned even if no exercise was done. The bottom line is that people usually think exercise burns far more calories than it does. Especially for short women, whose exercise calorie burn is lowest, relying exclusively on exercise for weight loss is a deeply flawed strategy, as I have unfortunately proved to myself several times over a lifetime of weight struggles.
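
To make the compounding concrete, here is a tiny JSL sketch with hypothetical numbers showing how modest errors on each side can all but erase a perceived deficit:

    // Hypothetical example: a perceived 500-calorie daily deficit
    intake logged = 1800;  // calories logged as eaten
    burn logged = 2300;    // calories estimated as burned
    Show( burn logged - intake logged );  // perceived deficit: 500

    // Suppose intake is under-logged by 10% and burn over-estimated by 15%
    intake actual = intake logged * 1.10;  // 1980
    burn actual = burn logged / 1.15;      // 2000
    Show( burn actual - intake actual );   // actual deficit: only 20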

During my recent and successful weight loss phases, I have focused on maintaining a deficit between my intake and my burn. I achieve the deficit by eating less than I burn, and I adjust that balance as required to achieve my desired outcome. I shared weight gain and weight loss data from my last pregnancy and post-baby weight loss in an earlier post; a Graph Builder graph clearly showed that when I was in a surplus, I gained weight, and when I was in a deficit, I lost weight. My gain during pregnancy was well above and beyond the amount attributable to baby and water weight, accounting for about 20 pounds of excess body fat that took me months to lose. I have seen the same connection between calorie balance and weight change during my last few years of weight maintenance. Although my weight fluctuations are much smaller now, the same basic concepts still hold.

Main poster graph

All that I have learned up to this point about self-reported food log data and activity measurements has opened my eyes to the limitations of my own data and to the challenges faced by researchers conducting large-scale weight loss studies based on self-reported data. Such studies aim to draw broad conclusions by averaging across a heterogeneous set of individuals with varied genetics, lifestyles and reporting patterns.

I have now collected enough free-living data in my own n=1 study to quantify what works for me to lose weight and to maintain within a healthy range -- an understanding that largely eluded me up to this point in my life. Not surprisingly, I have converged on the same deficit strategy commonly employed in weight loss studies that treat people like caged rats, closely quantifying their intake and activity to prove that negative calorie balance is the critical factor that causes weight loss. I'm truly grateful that I didn't need to live in a cage to learn what I have over the past few years. In many ways, learning what I have from my data has helped set me free.


Top 10 things about JMP 12

JMP 12 Montage

JMP 12 arrives next week, and I hope you've had a chance to read the series of posts by JMP developers about what's coming in this new version.

I’ve been using JMP 12 during the entire development cycle (about 18 months now), and I am impressed by how this version has grown from ideas in their infancy to fully formed, mature and useful features in such a short time.

So when you get your copy of JMP 12, try out some of these new features and enhancements, and let me know what your favorites are.

  1. Query Builder. This is exciting for anyone who has to access data in databases and may not enjoy writing SQL code to join database tables and build queries. The interface gives you a nice preview of the query, lets you set up custom filters, perform sampling and even lets you set up a “prompt on run” that will let you pick the specific parts of the database you wish to investigate when you run the query. If working with data in databases has previously been challenging for you, Query Builder in JMP 12 simplifies ad hoc queries to data in a database.
  2. Make Validation Column modeling utility in JMP Pro. This is a nice utility that has been added to the Modeling Utilities submenu of the Cols menu. It lets you easily partition your data set into training, validation and test subsets without having to go through the exercise of creating a new column and generating a random subset from there. You can also choose to make a stratified sample in the utility launch options. And if you forget to generate a validation column but launch a platform with a validation column role, you can click the validation button and generate a column directly from the launch. If you are building a lot of validated models in JMP Pro, this is a huge time-saver.
  3. Illuminated drop zones in Graph Builder. Graph Builder is my go-to place for generating graphs in JMP. However, it was sometimes difficult to know all the possible locations where I could drop a column when building my graph. In JMP 12, all the drop zones “light up” as soon as you grab a variable from the list. This makes setting up things like nested hierarchies very easy.
  4. PowerPoint export. The “Save As” menu has a new output file type: “PowerPoint Presentation.” This is a big help when you need to quickly turn a JMP report into a PowerPoint presentation (a short scripting sketch follows this list). The export feature does some useful things. First, it takes the JMP report titles and makes them editable titles on the slides. Second, it converts tabular output into editable tables in PowerPoint (letting you add emphasis or bolding to interesting values in your table). And third, it outputs vector graphics, which you can resize and enlarge without loss of quality or clarity. (No more Paste Special!)
  5. Images in data tables. Many times, I’ve wished that I could just put a thumbnail of something into a data table. Sometimes, a picture is a really useful way of capturing information about the corresponding data in your data table: a chemical structure, an image of a part or a screenshot of a waveform. JMP 12 makes it easy to incorporate images in data tables just by dragging and dropping. This is afforded by the new expression column data type, which in addition to images can also store JSL scripts, matrices and associative arrays in data table cells.
  6. Recode in JMP 12. Even though it has the same name as Recode in JMP 11, it is almost entirely different. Perhaps the greatest feature in JMP 12 Recode is the “group similar values” option, which groups nearly identical categories automatically based on edit distance. It is amazing how well this works and how much time it can save, especially when you have many unique categories.
  7. Selection filters. You can use graphs to filter other graphs by using selection filtering in JMP 12. Simply create an instant app by combining multiple windows together and then edit the application you have created. Right-click on the graph that you want to function as the filter and select “use as selection filter.” Now your graph will filter the other graphs, letting you create more information-rich dashboards.
  8. Interactive HTML reports. These were introduced in JMP 11 and are a great way to share JMP graphs and reports, especially with those who do not have access to JMP. In JMP 12, you can also save the Bubble Plot and Profiler as interactive HTML output.
  9. Generalized Regression platform in JMP Pro. Introduced in JMP Pro 11, this is a great way to add modern modeling techniques to your analysis workflow. In JMP Pro 12, everything about Generalized Regression has been improved. You can fit large and/or complex models far faster than before, and the ability to refine your model fit with an interactive graph is quite nice. In JMP Pro 12, you can also employ forward stepwise model fitting and fit models using quantile regression.
  10. Explore Missing Values and Explore Outliers utilities. These are two modeling utilities that you will want to check out. They help you easily profile and deal with these common problems in raw data. Combined with Recode and the rest of the utilities and modeling utilities in JMP 12, these tools are quite useful for reducing the time it takes to clean up your data so you can get going with your analysis tasks.
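
As a quick illustration of item 4, PowerPoint export can also be scripted with the Save Presentation message. This is a minimal sketch using a JMP sample data table; the output path is a hypothetical placeholder:

    // Run a simple report and save it as a PowerPoint presentation (JMP 12)
    dt = Open( "$SAMPLE_DATA/Big Class.jmp" );
    biv = dt << Bivariate( Y( :height ), X( :weight ) );
    Report( biv ) << Save Presentation( "$DOCUMENTS/height_vs_weight.pptx" );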

These are my favorite things about JMP 12. Now it’s your turn to find out what your favorite features or improvements will be. Enjoy JMP 12!


Coming in JMP Pro 12: Variograms in Fit Mixed

In JMP Pro 11, we introduced the Fit Mixed platform for fitting models with a variety of covariance structures and random effects. With JMP Pro 12, we have improved on this platform, with noticeable changes coming to models with spatial covariance structures. These changes are detailed below.

Enhanced speed

Fitting models with a spatial covariance carries a high computational cost, particularly as the number of observations increases. For JMP Pro 12, the algorithm we use to fit models in the Fit Mixed platform has been improved by Chris Gotwalt, the Director of Statistical R&D at JMP. His work further reduced the number of computations required to fit all models in Fit Mixed, and it has greatly reduced the time required to fit spatial models in particular.

In our testing, we have seen speedups of anywhere from 5 to 15 times when fitting these spatial models. In practical terms, fits that previously took more than an hour to complete now finish in just over 10 minutes!

Variograms

We have also added variograms in JMP Pro 12, a visual tool to examine and diagnose the existence of spatial correlation. The variogram is available when you fit an isotropic spatial model (an AR(1) or one of Power, Exponential, Gaussian, or Spherical with or without a nugget). The variogram below visualizes the change in covariance as observation locations move apart in time or space.

Exponential Variogram

The variogram plots the semivariance (half the variance) of the difference between observations at two locations versus the distance between the locations. As depicted in the variogram above, we can see that observations close together have strong correlation, hence a small semivariance.

This correlation decreases as distance increases, leading the semivariance to increase. The semivariance stops increasing at what is known as the range: the distance at which observations have little to no correlation. In the variogram above, this occurs at a distance of approximately 0.25.

The maximum value of the semivariance is known as the sill, and in most cases with the models fit in JMP Pro, this is simply the variance of the observations. In the presence of non-spatial error, a nugget can be included to estimate this variability. The nugget effect is depicted in the variogram below.

Exponential with Nugget Variogram

The semivariance is zero at distance zero, but it then jumps up to the nugget as soon as the distance between locations is positive. In the variogram above, the nugget is approximately 0.5.
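
In symbols, using the standard geostatistics definitions (JMP's internal parameterization may differ slightly), the semivariance of observations Z at two locations separated by distance h is

    \gamma(h) = \tfrac{1}{2} \, \mathrm{Var}\!\left[ Z(s + h) - Z(s) \right],

and the exponential model with a nugget c_0, for example, takes the form

    \gamma(h) = c_0 + \sigma^2 \left( 1 - e^{-h/\rho} \right), \quad h > 0,

so the sill is c_0 + \sigma^2, which \gamma(h) approaches as h grows beyond the range.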

Residual-only models

The variogram is also available when you fit a residual-only model. This option is accessed under Marginal Model Inference:

Variogram Option

After you select this option and specify the column(s) to use for computing distance in time or space, the empirical variogram is shown. The variogram's red triangle menu allows you to fit one or more spatial models to this empirical variogram.

Empirical Variogram Models

Empirical Variogram Model Fits

In this case, the variogram can be used to determine whether a spatial covariance should be added to the residual-only model. In the absence of spatial correlation, the empirical variogram would likely appear flat, suggesting no change in correlation over distance.

Above, we see an increase in the empirical variogram as distance increases, and one of the Exponential, Gaussian, or Spherical models may be suitable (the Power model does not appear to fit the empirical variogram well). After using Fit Mixed to formally fit a model with each of these correlation structures, we find that the Exponential model is preferred because it has the smallest corrected Akaike Information Criterion (AICc) and Bayesian Information Criterion (BIC). In other words, the variogram is used to identify candidate models, which are then formally fit with Fit Mixed, and an AICc/BIC comparison of those fits chooses the best model.
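
For reference, with maximized log-likelihood \ell, k estimated parameters and n observations, the standard definitions of these criteria are

    \mathrm{AICc} = -2\ell + 2k + \frac{2k(k+1)}{n - k - 1}, \qquad \mathrm{BIC} = -2\ell + k \ln n,

where smaller values indicate a better trade-off between fit and complexity.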

We hope you will find the enhanced speed and variogram addition useful, and we look forward to any questions or feedback you have in the comments!


Coming in JMP 12: Ternary Plot enhancements


Figure 1: The Ternary Plot platform in JMP

JMP is used to analyze and visualize data with a wide variety of types and relationships. The term mixture is used to describe a collection of variables that sum to a constant value. Mixtures may be observed in gases, liquids or solids, or they may be used as part of a design when optimizing a product for effectiveness, cost, or other features. While components of a mixture can be plotted using any graph platform in JMP, the Ternary Plot uses the special relationship between three components of a mixture to display three axes in a 2D graph without loss of information. Figure 1 illustrates the JMP Ternary Plot. Due to the relationship between the components, the domain is bounded by the limits [0,sum] for each of the three axes. The shaded region represents constraints on the range of values for each of the components. Ternary plots are also used in the Mixture Profiler, which allows you to explore higher-dimensional mixtures by choosing three components and displaying a slice of the full domain.
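
For instance, the platform can be launched from JSL with the three mixture components as X variables. Here is a minimal sketch with a made-up three-row table; the column names and values are invented, and each row sums to 1:

    // Three mixture components that sum to a constant, shown on a ternary frame
    dt = New Table( "Mixture",
        Add Rows( 3 ),
        New Column( "x1", Numeric, Continuous, Set Values( [0.2, 0.5, 0.3] ) ),
        New Column( "x2", Numeric, Continuous, Set Values( [0.3, 0.2, 0.4] ) ),
        New Column( "x3", Numeric, Continuous, Set Values( [0.5, 0.3, 0.3] ) )
    );
    tp = dt << Ternary Plot( X( :x1, :x2, :x3 ) );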

In previous versions of JMP, ternary frames did not support all of the features of a rectangular frame, and in some cases interactions behaved as though the frame were rectangular. One of the goals of JMP 12 was to make the ternary frame behave more like other graphs in JMP, as described below.


Figure 2: Result after performing a zoom with the magnifier tool

Tool manipulation

JMP supports a variety of tools for interacting with the contents of a display frame. Some tools are aware of the ternary data and change behavior to match. The most notable exception in previous versions of JMP was the magnifier tool, which used a rectangular zoom region and resulted in a view of a triangle clipped by a rectangle. In JMP 12, the magnification region is a triangle, and the view always retains a triangular shape, as shown in Figure 2.

The hand tool performs translation in the graph, and in previous releases would allow you to translate the entire ternary triangle out of view. With JMP 12, the ternary frame is constant in size and location -- only the axes are changed when performing a translation. The fixed component bounds of [0,sum] are enforced during interactive translation.

Axis manipulation

In addition to tool manipulation within a rectangular frame, axes of graphs also support direct interaction. In JMP 12, these actions are also supported for ternary frames:

  • Double-click on an axis to bring up the axis property dialog
    • Reference lines
    • Major and minor gridlines
  • Click in the middle of the axis to translate the axis
  • Click toward the extremes of the axis to zoom in or out

The relationship between the ternary variables is reflected in these transformations. Translating or zooming one axis will necessarily affect at least one of the other axes. As with tool manipulation, the bounded extents [0,sum] of each axis are enforced to maintain valid coordinate ranges within the ternary frame.

Drawing


Figure 3: USDA Soil Classification Taxonomy

The improved support for ternary frames is not only on the surface. In JMP 12, ternary plots are drawn directly from the ternary representation rather than being transformed into a rectangular coordinate frame for drawing. This makes it much easier to support additional and custom geometry within the ternary frame. In addition to built-in features such as reference lines, you can add custom geometry to a ternary frame in the same way that you would with a rectangular frame. Due to the relationship between the ternary components, it is only necessary to specify the first two coordinates when drawing to a ternary frame. Figure 3 illustrates a case where text, lines and polygons are added to a ternary frame to show the USDA soil classification regions.
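
As a sketch of that idea, a graphics script can draw onto the ternary frame using only the first two mixture components as coordinates. The coordinates and label below are invented for illustration (they are not actual USDA boundaries), and tp is assumed to hold a Ternary Plot report, such as the one scripted earlier:

    // Add custom geometry to a ternary frame (JMP 12)
    framebox = Report( tp )[Frame Box( 1 )];
    framebox << Add Graphics Script(
        Pen Color( "red" );
        Line( {0.1, 0.2}, {0.4, 0.3} );       // endpoints given as (x1, x2) pairs
        Text( {0.25, 0.3}, "region label" );  // x3 is implied by x1 + x2 + x3 = sum
    );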

In a future blog post, I will provide a more detailed look into the changes for ternary frames. Please share in the comments to let me know how you use mixtures with JMP.
