Give your boss the right reasons to attend Discovery Summit Europe

Are you considering a trip to Amsterdam for Discovery Summit Europe? You’ll likely need permission from your manager, so we’ve put together a kit to help you justify the conference.

The kit includes a letter template for formal requests. Every decision maker will require different information before giving the OK, so be sure to alter this template to fit your needs. The red text indicates where you will need to fill in details.

For those of you asking for permission in person, the kit includes talking points that cover answers to questions you might face, such as:

  • What is Discovery Summit?
  • How much does it cost?
  • How is Discovery Summit different from training and tutorials?
  • You’ve just started using JMP; won’t this be above your level?
  • You’ve been using JMP for years; shouldn’t you know all the answers by now?

We hope this helps demonstrate the value of attending the Summit and that you will receive approval to attend in no time!

Sailing and the art of data quality assessment

Gerhard Svolba is a colleague at SAS who is not only an experienced analyst and a caring father, but also a SAS Press author and an enthusiastic sailor. He has done valuable research on detecting data quality problems and their consequences for data analysis.

The start of the regatta (Photo used courtesy of Gerhard Svolba)

As a statistician, I’m well aware of the importance and the burden of a solid data quality assessment. And because I am into windsurfing and scuba diving, Gerhard and I always have a lot to talk about when we meet. Recently, he told me that he had recorded his last sailing regatta and offered to provide me with the log file. I accepted, and a few days later I received a JMP data table (very friendly!) with the data from his GPX logger, along with some recommendations for what to look for in the data.

The data table with logging data

To me, there is no data file as boring as a GPX log. It’s just a timestamp, coordinates, speed and compass heading.

What can you expect from a file like that? Even worse, my standard initial analysis – bringing the data into Graph Builder – just showed some odd lines.

Austria and Lake Neusiedl

Well, zooming way out revealed that the regatta took place on a lake in the easternmost part of Austria. And yes, Gerhard is Austrian; you should not only listen to what he has to say but also enjoy his wonderful accent.

Back to the regatta itself. In sailing, it is interesting to see how a race unfolds, as there are only a few buoys to pass and no prescribed track between them. With the help of the Local Data Filter, I could use the timestamp to follow the boat’s movements.

The points in Graph Builder, the Local Data Filter and Graph Options

In the graph showing all the waypoints, I first set “lock scales” from the hot spot menu because I didn’t want to zoom into the selected area but rather follow the route over the entire course.

Parts of the course displayed at different slider positions

With the slider, I selected a tiny time slice and moved it across the scale. I could see how they kept their boat in the western part of the area, then made a long leg toward the southeast, followed by a few tacking maneuvers before a sharp bend to the north (probably around a buoy). They went straight north, then northwest, and then started a second round with a different tacking strategy.

Time range with no recordings

To my big surprise, all points vanished after the second round, and I found a sizable time period with no activity at all. Nothing is more fun than showing colleagues their mistakes! So I called Gerhard and told him about my findings.

“Well,” he said, “I forgot to tell you: The data is from three races, one after the other.” Good to know.

Now I wanted to identify the different races by a variable in the data set. There’s nothing easier than that with the interactive capabilities of JMP. I just moved the left slider handle to the origin of the scale, so that both rounds of the first race were shown. From the context menu, I picked “Name Selection in Column,” named the column and assigned the value 1. Then I did the same for the next two races with the values 2 and 3, respectively, and calculated the sum of the three in a fourth column, which identifies the race for every row. Now I was able to overlay the races in one graph.
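
If you would rather script this kind of labeling than point and click, the same idea is easy to sketch outside JMP. Here is a minimal pandas version, assuming a hypothetical regatta_log.csv with a timestamp column and made-up cut-off times between the races:

    import pandas as pd

    # Hypothetical file and column name; in practice the cut-off times
    # would be read off the recording gaps found with the Local Data Filter.
    log = pd.read_csv("regatta_log.csv", parse_dates=["timestamp"])
    cuts = pd.to_datetime(["2015-06-13 11:30", "2015-06-13 14:00"])

    # Label each waypoint with its race number, 1 through 3.
    log["race"] = 1
    log.loc[log["timestamp"] > cuts[0], "race"] = 2
    log.loc[log["timestamp"] > cuts[1], "race"] = 3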

Graphical subsetting and labeling of data points

Overlay graphs with three identified races

So far, after just playing around with the slider and making a phone call, I had learned that my data came from three races. Once I added some subject-matter knowledge, I was also able to learn things for which I have no data at all: I know where the buoys were placed, and I have pretty good information about which direction the wind was blowing.

A sailboat should reach its highest speed over the ground when sailing with the wind from behind. The data has compass heading and speed, so I looked at these.

North-northwest courses selected in Distribution

The wind blew from northerly to northwesterly directions. Taking into account that a sailboat does not provide solid ground, I selected a range of compass headings around north, including the high degrees from 300 to 360 and the low ones below 20, to find the fastest parts of the course. To double-check my selection, I looked at the positional graph and adjusted my selection a bit so that it really covered the complete distance traveled without tacking.
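
Because compass headings wrap around at 360 degrees, a “northerly” selection is really the union of two ranges. Continuing the hypothetical pandas sketch from above, with an assumed heading column, the mask looks like this:

    # Headings between 300 and 360 degrees, or below 20 degrees, count as
    # roughly northbound; the OR handles the wrap-around at 360/0.
    northbound = (log["heading"] >= 300) | (log["heading"] < 20)
    fast_legs = log[northbound]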

North compass readings on southbound tacks?

Imagine my surprise when I found that some of the selected data points appeared on southbound legs of the course, even though I thought I had selected only northern courses. Now I had another reason to call Gerhard, and this time the answer was not so easy. The GPX logger exports its data as an unformatted stream. Usually, headings are reported with two decimal digits. But if the logged direction was exactly a whole number, the logger simply skipped the decimal places. The data import didn’t account for that behavior and always interpreted the last two digits as decimals. For example, 34815 was correctly imported as 348.15 degrees, but 348 was falsely imported as 3.48 degrees.
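
The bug is easy to reproduce in a few lines. Here is an illustrative sketch of the faulty import rule next to a repaired one; the repair rule is my guess at this particular logger’s behavior, not the actual import code:

    def naive_heading(raw):
        """The faulty import: always treat the last two digits as decimals."""
        return int(raw) / 100           # "34815" -> 348.15, but "348" -> 3.48

    def corrected_heading(raw):
        """Assume the logger dropped the '.00' on whole-number headings,
        so a token that is already a legal heading is taken at face value.
        Short tokens such as "20" stay ambiguous (0.20 vs. 20 degrees);
        only the position graph can settle those cases."""
        value = int(raw)
        return float(value) if value < 360 else value / 100

    print(naive_heading("348"), corrected_heading("348"))   # 3.48 vs. 348.0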

What is interesting about this finding is that it was only revealed by the combination of some logic (selecting northern courses) and a graph (showing the sailboat’s positions). Logic alone would not have found this error. Without the interactive graphs in JMP, we could not have uncovered this problem. And for the logic part, there was no need to write code; all it took was sliding some bars in the histogram.

I will not go so far as to claim that JMP makes it fun to assess the quality of a given data set. But JMP at least makes it quite easy. It offers insight you can’t get using other strategies or tools, and it is fast. And as this data reminded me, finding a problem is often easier than identifying its cause.

Visualizing historical biomarker data with JMP

I shared in my previous post that I scoured my baby book and tracking notebooks and requested various medical records to gather historical information about my weight fluctuations over the years. I used this data to construct JMP graphs with annotations and pinned hover labels containing pictures (thanks to the new expression columns feature in JMP 12).

The process of creating and exploring these graphs gave me a lot of insight into how stress-related overeating patterns contributed to my past weight gains. I also rediscovered that self-tracking was an essential component to all of my successful weight loss and maintenance efforts since college.

I tracked during all of my successful weight loss efforts (annotations in blue), but things often went in the other direction when I wasn't tracking (annotations in red).

I collected weight data because it was easy to do at home. But excess weight is one of several major independent risk factors for heart disease. With the 13th Wear Red Day approaching on Feb. 5, it’s a perfect time to recall that heart disease is the No. 1 killer of women in the US and revisit the American Heart Association website to learn more about weight and the other risk factors that contribute to heart disease. Wear Red Day helps raise awareness that the majority of heart disease deaths can be prevented by adopting lifestyle changes like losing weight, getting regular exercise, and improving eating habits. Discussing your lifestyle factors, family history, and current blood pressure and cholesterol numbers with your doctor can be a helpful step in understanding your own personal risks.

Experts generally agree that carrying excess weight stresses the cardiovascular system directly and has a negative impact on risk biomarkers like blood pressure and cholesterol. While regular exercise can improve good cholesterol and reduce heart disease risk, exercise alone might not be enough to combat the negative effects of obesity on general longevity. One recently published study followed 1.3 million Swedish men for an average of 30 years and found little longevity benefit for obese yet very fit individuals. On average, those young men died 30% earlier than their thin yet unfit counterparts when all causes of death were considered. This is less surprising when you consider that obesity has been associated with a diverse list of health problems extending far beyond heart disease.

Moving from large N to N=1

As a scientist, I find the trends and risks uncovered in large populations interesting, but I am also cautious about automatically extending their conclusions to myself. Large studies like the one I mention above seek to eliminate variation by selecting subjects from a sub-population whose age, gender and ethnicity may in fact be quite different from mine. Broader population studies, which adjust statistically for demographic diversity or focus on group-level trends, may mask differences in individual responses due to genetic variation, microbiome composition and environmental exposures (just to name a few).

Since I can’t replicate myself, I can’t do controlled experiments to determine whether lifestyle changes I make actually influence my own personal health risks in the future. I have to research what is known about generally applicable risk biomarkers, measure my own changes in those biomarkers in response to lifestyle changes over time, and hope that the statistical relationships between those biomarkers and future outcomes found in large-scale studies hold true for me.

Learning from my own past biomarker data

While reviewing my medical records to fill in missing weight data, I rediscovered historical data on my blood pressure and blood cholesterol, two major risk factors for heart disease. Requesting my medical records from past health care providers was free, costing me only a stamp and the time it took to fill out a request form. If you want to explore your own health history, I strongly encourage you to collect your own records and put your data into JMP!

As I shared in an earlier blog post, my first recorded set of detailed cholesterol measurements is from 2008. That year, I found out that although my total cholesterol number was near 200, my HDL (good) cholesterol number was only 52 mg/dL, close to the low threshold of 50 that is considered to raise heart disease risk. I was on an upward weight swing at the time, exercising occasionally, but exercise was not enough to balance out my routine overeating habit. My blood pressure was borderline high, but I wasn’t yet ready to face the choice posed by my health care provider: Lose weight and change my ways, or face the reality of blood pressure medication in the near future. Since I had gained another 10 pounds by the time I was due for my 2009 checkup, I never even scheduled it.

Something major happened in my biomarker data gap. I hit an important turning point and began to make changes.

I'm not proud of the fact that I avoided cholesterol and blood pressure tests from 2008 to late 2010, but I am thankful that something major happened during that data gap. In mid-2009, I saw pictures from a family vacation (some of which are included in the first graph above) and faced the reality of my weight problem. I resolved to change my habits for the better.

I reduced my daily calorie intake, started tracking my meals in a notebook, and added regular strength training and walking workouts. By my next checkup, my lifestyle changes had paid off. I had dropped 45 pounds, and my cholesterol composition had shifted dramatically for the better. My formerly borderline high blood pressure was now normal. Although my weight did rise again during my second pregnancy in 2011, I used the same strategies to shed the extra baby weight and return to my maintenance weight zone where my risk biomarkers have remained relatively stable.

An alternative graph of the data

I like the view above because it shows my data points over time. However, the simplified version below has its own advantages, showing that weight alone is not quite enough to explain my changing patterns. Although my post-baby weight in early 2012 was nearly identical to my non-pregnant weight a year before, my blood cholesterol level was clearly being influenced by other factors, likely a combination of caloric restriction and nursing a small child.

An alternative view of my cholesterol test data by weight.

Pondering the unknowns of biomarker measurement variability

I weigh myself each morning and measure my body fat because smartphone-connected sensors make the process quick and easy. I can connect up and down trends to weekly, monthly and seasonal changes in my eating and exercise habits. This daily monitoring practice has given me useful insights into how I can manage my fluctuations and stay within my comfort zone.

Unfortunately, the risk biomarker measurements I have collected from doctor’s visits are too few and far between to provide much understanding of the changes that could be happening as a result of shifts in my diet and activity patterns over time. If I had a sensor that permitted inexpensive replicate measures of various blood biomarkers on-demand and at home, I could probably begin to tease apart the technical and biological variation in the system and connect blood biomarker changes to trends in my eating and exercise habits. My limited experience with at-home measurements of daily fasting blood sugar and blood pressure has convinced me that there is clear variation in the system, but I don’t have the kind of replication required to separate the impact of technical and biological components of that variation.
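
For anyone who does have replicates, separating the two sources is a classic variance-components calculation. Here is a rough numpy sketch using the one-way random-effects ANOVA decomposition on made-up duplicate glucose readings:

    import numpy as np

    # Made-up data: two back-to-back readings (technical replicates) of
    # fasting glucose on each of five mornings.
    readings = np.array([[92, 95], [101, 99], [88, 91], [97, 96], [105, 102]], dtype=float)
    n_days, n_reps = readings.shape
    day_means = readings.mean(axis=1)

    # Within-day scatter reflects the meter itself (technical variance).
    ms_within = ((readings - day_means[:, None]) ** 2).sum() / (n_days * (n_reps - 1))
    # Between-day scatter mixes biology with the meter.
    ms_between = n_reps * ((day_means - day_means.mean()) ** 2).sum() / (n_days - 1)

    tech_sd = ms_within ** 0.5
    bio_sd = max((ms_between - ms_within) / n_reps, 0.0) ** 0.5
    print(f"technical SD ~ {tech_sd:.1f} mg/dL, biological SD ~ {bio_sd:.1f} mg/dL")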

I measured my fasting glucose for a few weeks surrounding Thanksgiving 2014. Not surprisingly, there was variability from day to day!

Regardless, testing my blood sugar over several days (including Thanksgiving 2014) provided some helpful insights. Chocolate Greek yogurt and cheesecake barely affected my blood sugar levels, while I was shocked to see a 130-point rise in my blood sugar after eating a single piece of pizza and a honey cinnamon pretzel!

But as interesting as these insights into my personal blood sugar response patterns were to me, it turns out that they might be completely irrelevant to you. In fact, a recent study that used microbiome testing, extensive food logging and continuous glucose monitoring of 800 participants demonstrated that blood sugar responses to food are far from uniform across the population. Remarkably, some participants showed high and reproducible blood sugar spikes when they consumed foods generally considered to be "healthy," like bananas and tomatoes, yet displayed little to no blood sugar response after consuming "junky" foods like cookies. Others showed opposite responses. It was fantastic to see such well-collected evidence for something I have believed for quite some time now: There is no single eating approach that will work for everyone, and it takes some investigative work to find the right eating patterns that work for YOU.

Is understanding biomarker variability worth it?

Since I don’t live in that dream world where daily blood tests are easy and cheap, I am left to decide how much of my budget I want to dedicate to extra testing in the absence of a diagnosed medical problem. If it were really important to me to understand the variability of my lab results on a shorter time scale, I do have options today. At-home tests for LDL, HDL and triglycerides are available. Or I could choose a direct-to-consumer (DTC) blood test option offering online ordering and blood draws at a service provider’s lab. DTC services like Inside Tracker go several steps further, calculating optimal biomarker levels based on personal attributes and offering personalized diet and exercise recommendations for athletes and others interested in optimizing their biomarker levels.

The question becomes whether I can justify the extra cost of repeated blood biomarker testing to understand the variability in my system, especially since my worrisome numbers appear to be a thing of the past. Intuitively, it seems obvious that routine monitoring of seemingly healthy individuals like me could reveal much about the pre-symptomatic patterns that lead to disease over time. Frequent monitoring is certainly worth the extra cost for professional athletes, who must be on the lookout for nutrient deficiencies and signs of overtraining that could impact their performance.

But outside of competitive athletics, frequent testing of healthy individuals remains a controversial topic. If you don’t believe me, check out what happened to basketball team owner Mark Cuban after he tweeted last year that he has quarterly blood work and recommends it for anyone who can afford it. The headlines ranged from “Mark Cuban doesn’t understand health care” to “Mark Cuban is half right on blood tests” to “Mark Cuban understands the future of health care.”

So which is it? Given the money pouring into the fitness wearables market and the shifting focus of many large health care companies and hospital systems to electronic medical records, home monitoring and digital tools and apps, I side with Mark and the big data revolution on this. I suspect that frequent biomarker tests are going to be the norm in the future, for healthy and unhealthy individuals alike. I look forward to the day when I can track my blood biomarkers as easily and cheaply at home as I can track other metrics.

For now, let Wear Red Day be your inspiration to keep track of whatever biomarker data you can get, whether it's weight readings from your scale or data from your medical records.

10 things you didn’t know about JMP: Tips and tricks

A fellow consultant has advised that, when working on submissions for a regulatory agency, you should “model” your submission on previous ones (that is, leverage previous successful work to your benefit). In that spirit, I fully admit to “modeling” this blog post on those by Jeff Perkinson (10 Things You Don’t Know About JMP and 10 More Things You Don’t Know About JMP).

With each release, the depth of JMP increases. Here are 10 things that you may not know you can do with JMP. A few of these options are rather new (they first appeared in JMP 12), while others are a bit older (JMP 9 and earlier).

  1. New Formula Column
  2. Set Quantile Increments
  3. Group Columns
  4. Edit Feature in the Effect Summary of Fit Model
  5. Analysis of Means
  6. Think It Should Be There? Try a Right-Click
  7. Make Combined Data Table
  8. Nonlinear Curve Comparison
  9. Column Viewer
  10. Arrange in Rows

New Formula Column

Select one or more columns in the data table, right-click in the column heading area to open the column menu, and select New Formula Column. The options available depend on the type of columns selected (numeric or character) and on how many are selected (single vs. multiple). A new column with the selected formula will be added to your data table. This is a great way to concatenate character columns or to transform data (such as with log transformations).
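
For comparison, both examples from this tip are one-liners in a scripting environment. A small pandas sketch, with hypothetical column names:

    import numpy as np
    import pandas as pd

    df = pd.DataFrame({"first": ["Ada", "Grace"], "last": ["Lovelace", "Hopper"],
                       "conc": [1.2, 34.5]})
    df["full_name"] = df["first"] + " " + df["last"]   # concatenate character columns
    df["log_conc"] = np.log(df["conc"])                # log transformation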


Set Quantile Increments

The default quantiles in the Distribution platform are 0, 0.5, 2.5, 10, 25, 50, 75, 90, 97.5, 99.5 and 100%. These are easy to change. Under the red triangle menu for the variable of interest, select Display Options > Set Quantile Increment. This pops up a dialog box in which to enter the desired increment (note that this setting cannot be broadcast to other variables). Custom Quantiles, on the other hand, provides specific quantiles along with their confidence intervals.
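
The same idea is compact in code, too. A small numpy sketch computing quantiles at a 5% increment on stand-in data:

    import numpy as np

    x = np.random.default_rng(1).normal(size=500)   # stand-in data
    probs = np.linspace(0.0, 1.0, 21)               # 0%, 5%, ..., 100%
    print(np.quantile(x, probs).round(2))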


Group Columns

In large tables, grouping columns can help you organize your table and speed the entry of variables into dialogs (such as those for fitting models). Highlight the columns you wish to group in the columns panel, right-click for the menu and select Group Columns. Once grouped, you can click the group name to rename it as appropriate.


Edit Feature in the Effect Summary of Fit Model

The Effect Summary Report appears when you use any of the following model platforms:

  • Standard Least Squares
  • Nominal Logistic
  • Ordinal Logistic
  • Proportional Hazard
  • Parametric Survival
  • Generalized Linear Model

For a single response, the p-values will match those in the Effect Tests table. For multiple responses, the Effect Summary is an overall summary reporting the minimum p-value across models for each effect. Adding or removing an effect applies to all models. Highlight an effect and click Remove to remove it; click the Add button to add effects. To add interactions or cross terms, use the Edit button. Thus, the Add and Edit buttons are similar, except that Add is limited to adding main effects to the model.

Analysis of Means

While Graph Builder is awesome, don’t neglect the Fit Y by X platform and all of the analysis capabilities found there. For example, with a categorical X (i.e., groups) and a continuous Y (i.e., a measurement), analysis of means (ANOM) can be used to compare group means. ANOM is a multiple comparison technique that compares the mean of each group to the overall mean, with the results displayed in an easy-to-read chart. If a group mean falls beyond the decision limits, then it differs from the overall mean. Thus, you not only discover whether the means differ but also which means differ, and the direction and magnitude of the difference. ANOM for Proportions is the option to use for categorical groups when the outcome is dichotomous.
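
To make the chart a little less of a black box, here is a rough numpy/scipy sketch of the decision limits for a balanced design. JMP uses exact ANOM critical values; the Bonferroni-adjusted t quantile below is a common approximation that I am substituting for them:

    import numpy as np
    from scipy import stats

    # Made-up balanced data: 4 groups of 10 measurements each.
    rng = np.random.default_rng(7)
    groups = [rng.normal(m, 1.0, size=10) for m in (10.0, 10.5, 9.8, 11.2)]

    k, n = len(groups), len(groups[0])
    grand = np.mean(np.concatenate(groups))
    mse = np.mean([g.var(ddof=1) for g in groups])       # pooled within-group variance
    h = stats.t.ppf(1 - 0.05 / (2 * k), df=k * (n - 1))  # Bonferroni stand-in for ANOM h
    margin = h * np.sqrt(mse * (k - 1) / (k * n))

    for i, g in enumerate(groups, 1):
        verdict = "differs" if abs(g.mean() - grand) > margin else "ok"
        print(f"group {i}: mean {g.mean():5.2f}, limits {grand:.2f} +/- {margin:.2f} -> {verdict}")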


Think It Should Be There? Try a Right-Click

Looking for options to customize a graph, adjust an axis or format a table? They are often just a right-click away. Position your cursor over the graph, axis or table and right-click to see them.

Make Combined Data Table

Make Data Table and Make Combined Data Table are right-click options for tables of results in platforms. The combined option finds all reports of the selected type and combines them into one data table.

Nonlinear Curve Comparison

The Nonlinear platform has an option for comparing the parameter estimates of a set of curves. For instance, are the calibration curves from three lots the same or different? To find out, use a grouping column when launching the Nonlinear platform. Once you have fit a model, the Compare Parameter Estimates option will be under the red triangle menu for that model.
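
The underlying idea, fitting the same curve to each group and then comparing the estimates, can be sketched with scipy. A toy version with a hypothetical calibration model and made-up data for three lots:

    import numpy as np
    from scipy.optimize import curve_fit

    def calib(x, a, b):
        """Hypothetical calibration model: exponential rise to a plateau."""
        return a * (1 - np.exp(-b * x))

    x = np.linspace(0.5, 10, 12)
    rng = np.random.default_rng(3)
    lots = {lot: calib(x, a, b) + rng.normal(0, 0.05, x.size)
            for lot, (a, b) in {"A": (2.0, 0.5), "B": (2.1, 0.5), "C": (1.6, 0.8)}.items()}

    # Fit the curve separately for each lot and compare the estimates.
    for lot, y in lots.items():
        (a_hat, b_hat), cov = curve_fit(calib, x, y, p0=(1.0, 1.0))
        a_se, b_se = np.sqrt(np.diag(cov))
        print(f"lot {lot}: a = {a_hat:.2f} +/- {a_se:.2f}, b = {b_hat:.2f} +/- {b_se:.2f}")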


Column Viewer

To quickly obtain numerical summaries of a group of columns, use the Columns Viewer from the Cols menu. Once you have your numerical summary, you can easily run distributions on selected columns. Remember that a histogram and/or box plot will give you a much better understanding of your data than a table of numerical summaries alone.
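
The rough scripting analogue of that numerical summary is a one-liner in pandas:

    import pandas as pd

    df = pd.DataFrame({"height": [1.62, 1.75, 1.80], "weight": [55.0, 72.5, 81.0]})
    print(df.describe())   # count, mean, std, min, quartiles, max per column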


Arrange in Rows

Platforms such as Distribution and Fit Y by X have an Arrange in Rows option when you have multiple reports in a single window. It lets you choose the number of reports to display in each row, which helps you make better use of your screen space (there is also an Order by Goodness of Fit option in the modeling platforms).

Analyzing Manning and Brady over the years

If you follow pro football (and probably even if you don't), you know that last night the Denver Broncos beat the New England Patriots to earn their spot in Super Bowl 50. But as if a shot at the Super Bowl wasn't big enough, this game meant even more than usual. It was the 17th meeting in one of the great rivalries in sports: Peyton Manning versus Tom Brady. Brady owns the lead with 11 wins in their 17 meetings and four Super Bowl championships to Manning's one, but I think most people would agree that they are two of the best quarterbacks of all time.

With both players in their late thirties (Brady is 38, and Manning is 39), one has to wonder how many more times these greats will face each other. That being the case, now seems like a good time to look back at how Manning and Brady have performed over the life of their careers.

I'm more of a casual football fan (if you've read my other posts, you know I'm a much bigger basketball fan), so I wasn't sure which metrics would be best for evaluating quarterbacks. I decided to stick with two: completion percentage and yards per pass attempt. There is no shortage of ways to evaluate a quarterback, but for our purposes I thought these numbers were a good way to gauge accuracy and effectiveness.

If we plot these statistics over their careers so far, do we notice any trends? It looks like they both started slowly and improved with experience, not exactly surprising or exciting. Instead, let's use a statistical model to see if we can gain more insight into how these quarterbacks have changed over time.

One way of modeling something like completion percentage is to assume that after adjusting for things like home-field advantage and opponent, the completion rate is piecewise constant over time. For example, maybe my true completion percentage is 50% over my first 20 games, then everything clicks and I am a 60% passer for the next 50 games until I have a serious shoulder injury and I drop down to a 45% passer for the remainder of my career. We can fit this model using a variation of the fused lasso that I described when analyzing LeBron James's dominance in the playoffs last year.
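
Here is a minimal sketch of that piecewise-constant idea, fit as a 1D fused lasso (total-variation) problem with cvxpy on made-up completion percentages; the actual analysis used a more elaborate variant with game-level adjustments:

    import cvxpy as cp
    import numpy as np

    # Made-up per-game completion percentages with two true change points.
    rng = np.random.default_rng(0)
    truth = np.concatenate([np.full(20, 50.0), np.full(50, 60.0), np.full(30, 45.0)])
    y = truth + rng.normal(0, 5, size=truth.size)

    beta = cp.Variable(y.size)
    lam = 20.0  # larger lambda -> fewer estimated change points
    cp.Problem(cp.Minimize(cp.sum_squares(y - beta) + lam * cp.norm1(cp.diff(beta)))).solve()

    # The fit is piecewise constant; jumps mark the estimated change points.
    print("change points near games:", np.flatnonzero(np.abs(np.diff(beta.value)) > 0.5) + 1)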

Now if we look at the models for Brady’s and Manning’s completion percentages, we get a much clearer picture. Manning’s completion percentage improved steadily until it peaked right around his 200th game, then dipped sharply right around his 275th game. Brady’s trajectory is broadly similar; he hit his prime right around his 100th game.

The models for yards per attempt tell a similar story. Most notably, Manning's yards per attempt drops dramatically right around game number 275. Brady had a rough start in terms of yards per attempt but got up to speed quickly.

Tom Brady and Peyton Manning are two of the best quarterbacks of all time, and their rivalry has been a lot of fun to watch. By using a variation of the fused lasso model, we are better able to understand how their performance has changed over the course of their careers so far.

Neither player is as accurate or efficient as he was in his prime, but both are still playing at a very high level. Could we use these models to say that one of these players is better than the other? I don't think so. Making such an evaluation is far more complicated. But it is still fun to look back at how these two players have changed over the years.

So, you like hearing from our developers

Last year, you heard from many people in JMP Development here in this blog. And it turns out you liked that best! Almost all of the top 10 posts of 2015 were written by R&D folks, and that's not a surprise. Our developers have tons of useful information and examples, and they are passionate about their work and exploring data.

The subjects of the top posts were varied. They included a timely contribution to a pop-culture conversation, the remaking of a graph about the impact of vaccination, analyses and visualizations of diet and fitness data (three on this topic!), a statistical model of a star basketball player's performance -- as well as details about a couple of new features in JMP 12, which was released last March.

Without further ado, here's what you, our readers, liked best in 2015 as determined by numbers of views:

  1. What color is The Dress? JMP can tell you!
    Remember the frenzy over The Dress? (Is it blue and black, or white and gold?) John Ponte presented a calm approach to the question using the Image Analyzer, which you can download for yourself.
  2. AMA Advanced Research Techniques (ART) Forum is next week!
    The ART Forum is a conference that brings together marketing academics and applied researchers. Melinda Thielbar told us about the meeting, as I'm sure she will do again this year, since she has now become its chair. Congrats, Melinda!
  3. Why are some dogs adopted faster than others?
    A rising sixth-grader explored patterns in dog adoption. It was a fun story that I got to write.
  4. Graph makeover: Measles heat map
    Xan Gregg got data on measles in the 20th century, used JMP to recreate a graph published in The Wall Street Journal and offered his own version.
  5. Interactive HTML Bubble Plot platform in JMP 12
    Dynamic, animated bubble plots that you can share with anyone are a user fave. John Powell showed lots of cool examples of this new feature in JMP 12, and shared videos and a detailed guide for interacting with bubble plots in a browser.
  6. Reflections on my ongoing diet and fitness project
    Analyzing food log data and activity measurements presents challenges and limitations, and Shannon Conners explained what she had learned about these issues and what works best for her for weight loss.
  7. Cleaning up and visualizing my food log data with JMP 12
    In this one, Shannon used Recode to group the different kinds of chocolate (among other things) in her food log data and showed a treemap that revealed her changing relationship to bread.
  8. Using my muscle map as a selection filter for workout data
    Yes, it's Shannon again! Here, she used a custom muscle map to quickly view her weight workouts over time and see such things as total weight lifted and muscles targeted.
  9. Did LeBron James step up his game in the playoffs?
    Clay Barker modeled LeBron James' points, rebounds and assists to show how special the basketball star's performance in the NBA finals and playoffs was.
  10. Coming in JMP 12: Overhauled Recode for easier data cleaning
    James Preiss gave a peek at the major changes that he worked on in Recode -- before he left for grad school. We miss you, James!

Thanks for reading in 2015! You can look forward to more posts by our amazing developers again this year. And feel free to let me know what you're particularly interested in knowing about JMP.

The power of passion and curiosity

Walt Disney once said, “We keep moving forward, opening up new doors and doing new things, because we're curious and curiosity keeps leading us down new paths.”

Curiosity and passion can be powerful levers in opening new doors, making new discoveries and doing better — so says Jason Brinkley, Senior Researcher at American Institutes for Research, who is our first guest of the fifth year of Analytically Speaking. Jason has a healthy dose of passion and curiosity. He shares a very thoughtful perspective on a range of analytical topics.

Collaboration and culture

As a researcher who has collaborated with many subject-matter experts, he has some noteworthy advice on successful partnerships with experts in other disciplines and what it takes to get more value from data — in many cases, really transforming behaviors and enabling better decisions.

He shares thoughts on what it takes to develop a culture of curiosity, the importance of data visualization, and how we can use passion and curiosity to overcome criticism and fear of trying new approaches to problem-solving. As Jason says:

“It’s not about right versus wrong, but how to do better.”

Jason leads by example, bringing an open mind and a drive to continually add to his already diverse analytic skill set. He approaches alternative methods and techniques with curiosity instead of suspicion and judgment. Here’s a quick preview of that conversation:

Did we pique your curiosity? We hope you’ll join us Jan. 27 to watch the premiere of this prerecorded webcast. Alternatively, you can watch this and previous Analytically Speaking webcasts on demand.

Here’s to more passion and curiosity in 2016!

Your Discovery Summit Europe tip sheet

You’ve never been to Discovery Summit? Let’s make this right! Plan to join us in Amsterdam for Discovery Summit Europe, and we will help you make the most of the conference, thanks to feedback from those who have attended before.

Here's what you need to know:

  1. Breakout sessions are cited by past attendees as the top reason for going to the conference. So, by all means, review the agenda before you arrive. These talks run four at a time, so you’ll have some tough choices to make. We’ve seen standing room only for some topics – you don’t want to be caught not knowing where you want to go.
  2. Networking is usually rated as the second most important reason to attend. So be prepared to make friends! Some of the best learning happens during hallway conversations and social time. (By the way, our first social event will be a welcome dinner at the National Maritime Museum. Check it out.)
  3. This conference features four stellar keynote speakers. But I already told you about them. You didn’t miss that blog post, did you?
  4. We have a mobile app! Be sure you download it before you travel to Amsterdam. You can use the app to read about the developers you’d like to meet, connect with fellow attendees and build your agenda. You can also check in to sessions to earn badges in an interactive game. Yes, winners get a prize.
  5. Want to go more in-depth on certain topics? We offer pre-conference training led by SAS instructors and developer tutorials led by the software masterminds themselves.

I leave you with one last piece of advice, straight from a former attendee: “Get some rest BEFORE you go. You will be too busy with good stuff to rest while at the conference.”

There you have it.

It's time to update your copy of JMP 12 and JMP Pro 12

Are you using JMP 12 or JMP Pro 12? If so, please read on...

A maintenance update for JMP 12 and JMP Pro 12 is now available, and it’s recommended for all users and sites.

What's in JMP 12.2?

JMP 12.2 includes bug fixes and a few new features, including:

  • A SQL Query() function that supports a Version argument to indicate compatibility with JMP versions.
  • Sample scripts for Generalized Regression that illustrate the shrinkage effect of varying Alpha and the tuning parameter, Lambda, in the elastic net fit for a single predictor (a rough sketch of the same idea appears after this list).
  • Documentation in Japanese and Simplified Chinese.
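
The shrinkage idea behind those sample scripts can be sketched in a few lines. Below is a scikit-learn stand-in (not the JMP scripts themselves), where l1_ratio plays the role of JMP’s Alpha and the alpha penalty plays the role of Lambda, shown shrinking a single predictor’s slope:

    import numpy as np
    from sklearn.linear_model import ElasticNet

    # Stand-in data: one predictor with a true slope of 2.
    rng = np.random.default_rng(5)
    x = rng.normal(size=(100, 1))
    y = 2.0 * x[:, 0] + rng.normal(0, 1, 100)

    # l1_ratio ~ JMP's Alpha (lasso/ridge mix); alpha ~ Lambda (penalty strength).
    for lam in (0.01, 0.1, 0.5, 1.0, 2.0):
        fit = ElasticNet(alpha=lam, l1_ratio=0.5).fit(x, y)
        print(f"lambda = {lam:4.2f} -> slope shrinks to {fit.coef_[0]:.3f}")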

Read the release notes on our software updates page for a complete list of improvements and new features, and make sure your copy of JMP 12 is updated.

Learn more, including how to tell what type of JMP license you have, at jmp.com/update.

And thanks for using JMP!

Top JMP add-ins of 2015

It's nearly the end of the year, and we are taking a look at the activity in the JMP User Community. Last time, I shared the top content among Discussions posts. Today, we have a list of the most popular JMP add-ins, courtesy of community manager Stan Koprowski.

Never used a JMP add-in? An add-in extends JMP by adding new features, functions and applications. It's a JSL script that you can run from the JMP Add-Ins menu. You can share the add-in with other JMP users, and that's what we are looking at here -- those add-ins shared in the File Exchange in the User Community.

The most popular JMP add-ins, in order of number of downloads, are below. Maybe you will find that you need some of these!

Top 10 Most-Downloaded Add-Ins

  1. Full Factorial Repeated Measures ANOVA Add-In
  2. Statistical Dot Plots for JMP 10
  3. Venn Diagram
  4. Custom Map Creator
  5. Interactive Binning (V2)
  6. Text to Columns, Version 2
  7. Capability animation: standard deviation impact
  8. HTML5 Auto-Publishing
  9. Anderson-Darling Normality Test
  10. Method Comparison

While all of these add-ins were submitted by my current or former co-workers, I wanted to give some attention to the top add-ins submitted by those outside our company. A special thanks to you for contributing these add-ins to the User Community!

Top 10 Most-Downloaded Add-Ins by Non-Employees

  1. Add-In: Spatial Data Analysis
  2. JMP Addins for Frequency Analysis and Dual Seasonality Time Series Analysis
  3. ROC-Curve & partial Area Under the Curve Analysis
  4. Model Diagnostics and Transformations
  5. Scoping Design DoE JMP Add-In
  6. Window Manager
  7. Add-In to Explore the Implications of Measurement System Quality (ICC or Intra Class Correlation) and Process Capability (Cp)
  8. Icon Names Addin
  9. Univariate Binning using the Normal Mixtures Distribution
  10. Capability Analysis JMP Add-In

I look forward to seeing what new add-ins the JMP users share in 2016!
