Coming in JMP 12: Ternary Plot enhancements

The Ternary Plot platform in JMP

Figure 1: The Ternary Plot platform in JMP

JMP is used to analyze and visualize data with a wide variety of types and relationships. The term mixture is used to describe a collection of variables that sum to a constant value. Mixtures may be observed in gases, liquids or solids, or they may be used as part of a design when optimizing a product for effectiveness, cost, or other features. While components of a mixture can be plotted using any graph platform in JMP, the Ternary Plot uses the special relationship between three components of a mixture to display three axes in a 2D graph without loss of information. Figure 1 illustrates the JMP Ternary Plot. Due to the relationship between the components, the domain is bounded by the limits [0,sum] for each of the three axes. The shaded region represents constraints on the range of values for each of the components. Ternary plots are also used in the Mixture Profiler, which allows you to explore higher-dimensional mixtures by choosing three components and displaying a slice of the full domain.
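
Because the components sum to a constant, any two of them determine the third, which is what lets three axes share a single 2D plane. Here is a small JSL sketch of that idea; the projection formulas assume a unit-sum mixture and the usual equilateral layout, and are an illustration rather than the platform's internal code:

```jsl
// Unit-sum mixture: the third component is implied by the other two
x1 = 0.2;
x2 = 0.5;
x3 = 1 - x1 - x2; // 0.3

// A common projection of a ternary point onto 2D Cartesian coordinates
px = x2 + x3 / 2;        // horizontal position
py = Sqrt( 3 ) / 2 * x3; // vertical position
```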

In previous versions of JMP, ternary frames did not support all of the features of a rectangular frame, and some interactions behaved as if the frame were rectangular rather than respecting the ternary geometry. One of the goals of JMP 12 was to make the ternary frame behave more like other graphs in JMP, as described below.

Result after performing a zoom with the magnifier tool

Figure 2: Result after performing a zoom with the magnifier tool

Tool manipulation

JMP supports a variety of tools for interacting with the contents of a display frame. Some tools are aware of the ternary data and change behavior to match. The most notable exception in previous versions of JMP is the magnifier tool, which used a rectangular zoom region and resulted in a view of a triangle clipped by a rectangle. In JMP 12, the magnification region is a triangle, and the view always retains a triangular shape, as shown in Figure 2.

The hand tool performs translation in the graph, and in previous releases would allow you to translate the entire ternary triangle out of view. With JMP 12, the ternary frame is constant in size and location -- only the axes change when performing a translation. The fixed component bounds of [0,sum] are enforced during interactive translation.

Axis manipulation

In addition to tool manipulation within a rectangular frame, axes of graphs also support direct interaction. In JMP 12, these actions are also supported for ternary frames:

  • Double-click on an axis to bring up the axis property dialog
    • Reference lines
    • Major and minor gridlines
  • Click in the middle of the axis to translate the axis
  • Click toward the extremes of the axis to zoom in or out

The relationship between the ternary variables is reflected in these transformations. Translating or zooming on one axis necessarily affects at least one of the other axes. As with tool manipulation, the bounded extents [0,sum] of each axis are enforced to maintain valid coordinate ranges within the ternary frame.


USDA Soil Classification Taxonomy

Figure 3: USDA Soil Classification Taxonomy

The improved support for ternary frames is not only on the surface. In JMP 12, ternary plots are drawn directly from the ternary representation rather than transforming into a rectangular coordinate frame for drawing. This means that it is much easier to support additional geometry and custom geometry within the ternary frame. In addition to the built-in features such as reference lines, you can add custom geometry to a ternary frame in the same way that you would with a rectangular frame. Due to the relationship between the ternary components, it is only necessary to specify the first two coordinates when drawing to a ternary frame. Figure 3 illustrates a case where text, lines, and polygons are added to a ternary frame to show the USDA Soil classification regions.
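
A graphics script attached to the frame is one way to do this. The sketch below is hypothetical (the column names, coordinates, and label are made up), but it follows the usual frame box scripting pattern, with each point given by its first two mixture components:

```jsl
// Hypothetical sketch: draw a segment and a label on a ternary frame
tp = Ternary Plot( Y( :X1, :X2, :X3 ) );
frame = Report( tp )[Frame Box( 1 )];
frame << Add Graphics Script(
	Pen Color( "blue" );
	Line( {0.10, 0.20}, {0.40, 0.30} ); // first two components only
	Text( {0.30, 0.35}, "Region A" );
);
```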

In a future blog post, I will provide a more detailed look into the changes for ternary frames. Please share in the comments to let me know how you use mixtures with JMP.

Post a Comment

#OneLessPie chart on Pi Day 2015

Last year, we launched the #OneLessPie initiative to use Pi Day (March 14 or 3/14) as a catalyst to improve the data visualization landscape. Many experts have criticized pie charts as a poor way to communicate information, and we listed some of the pitfalls of pie charts last year. But we wanted to encourage people to pair criticism with action by replacing ill-advised pie charts with more effective visualizations.

Participation in last year's effort included:

This year we want to continue to raise the bar, so to speak, and we're giving you a few days of advance notice this time. If you've been bothered by a deceptive pie chart, take action on or around Pi Day (this Saturday) and leave a comment here or tweet with the #onelesspie tag. See the above posts from last year for more motivation and examples.

I'll kick things off with a makeover I did a few weeks ago of a kind of pie chart. The social media site Reddit has a couple subreddits devoted to visualizations. r/dataisbeautiful is for "visualizations that effectively convey information," while r/dataisugly is for "butchered visualizations and misleading charts." The following visual had the distinction of being top rated on both subreddits, which epitomizes some of the love/hate dichotomy people have with circular charts.


These and other circular charts share many of the faults of pie charts. Interpreting data encoded around a circle is error-prone in general. Even for data that is naturally circular (wind readings), circular charts proved inferior in an experiment in the paper, Graphical Inference for Infovis.

Regarding the letter frequency chart, the task I most wanted to do was compare each language to the others. I tried a few line charts and line/bar combinations but ended up with the bullet chart below. In each column, the blue bar is the letter frequency for that column's language, and the gray bar is the frequency for the other six languages. The rows are ordered by overall frequency.



You can see, for instance, that "t" and "h" are disproportionately more common in written English, and that "o" is more common in Portuguese and Italian but less common in German and Turkish.

I'll count this as #OneLessPie.


A passion for powerful market research methods

If you want to learn about some of the best (and worst) practices in market research, you will want to tune in to Analytically Speaking with Walter Paczkowski, PhD, founder of Data Analytics Corp., on March 18.

Walter’s passion for using and sharing effective market research methods is evidenced by the success of his company, his work as an educator, and his efforts to improve the way market research is done. He teaches quantitative courses at Rutgers and The College of New Jersey, and he is a Business Knowledge Series instructor for SAS teaching “Analyzing Marketing Data: Going Beyond Tabs and Bar Charts with JMP.”

Formerly at AT&T Bell Labs, Walter worked with Daniel McFadden, Nobel Laureate in Economics for contributions in discrete choice modeling. So, as Walter says, he learned from the best. Walter has had a lot of valuable input into the Choice Analysis Platform enhancements in JMP 12, including the Probability Profiler and the Multiple Choice Profiler. For more on that, including screenshots, see developer Melinda Theilbar’s recent post on Probability and Multiple Choice Profilers in the Choice Platform.

Walter sees a lot of over-reliance on spreadsheets for survey analysis, as well as simple summary statistics displayed in static pie and bar charts (usually, not the best ways to graphically encode the data). To really extract insights from survey data, Walter advocates interactive, dynamic tools to explore the data — that’s when you often find things you didn’t expect.

Walter is also a fan of looking at relationships across a series of questions. He thinks the new Multiple Correspondence Analysis Platform in JMP provides a great way to quickly and dynamically explore relationships between categorical variables, which are common in market research surveys. For more details on what’s coming in the new JMP 12 MCA platform, please see developer Jianfeng Ding’s post.

Join us for the live webcast with Walter on March 18 to hear from an expert and see some interactive, dynamic and visual enhancements to analysis for consumer and market research. If you can’t make it, you can always view the archived webcast on demand at your convenience. We hope you will tune in — not only will you get to hear from an expert, but you will also get to see some of the newest market research enhancements in JMP 12.


Coming in JMP 12: Multiple Correspondence Analysis

In multivariate analysis, a key step is dimension reduction: summarizing a large number of variables with a small number of factors that capture most of their variability. In JMP, the Principal Components (PC) platform performs dimension reduction for continuous variables. However, when we have categorical variables, we cannot use the PC platform. The new Multiple Correspondence Analysis (MCA) platform in JMP 12 takes multiple categorical variables as input and seeks to identify associations between the levels of those variables.

Multiple correspondence analysis is frequently used in the social sciences; it is particularly popular in France and Japan. It can be used in survey analysis to identify question agreement. It is also used in consumer research to identify potential markets for products. Microarray studies in genetics also use MCA to identify potential relationships between genes.

Example of Multiple Correspondence Analysis

The Car sample data in JMP contains data collected from car polls. The data include aspects about the individuals polled, such as sex, marital status and age. The data also include aspects about the car that respondents own, such as the country of origin, the size, and the type. We may want to explore relationships between sex, marital status, country and size of car to identify consumer preferences.
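
The launch itself is a one-liner in JSL. A sketch against the Car Poll sample data (the platform and column names below are my assumptions; check the script generated by your copy of JMP 12):

```jsl
// Sketch: launching MCA on the Car Poll sample data
dt = Open( "$SAMPLE_DATA/Car Poll.jmp" );
mca = dt << Multiple Correspondence Analysis(
	Y( :sex, :marital status, :country, :size )
);
```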

A key part of correspondence analysis is the multidimensional map produced as part of the output. The correspondence map allows you to visualize the relationships among categories spatially on dimensional axes. In other words, you can see which categories are close to other categories on empirically derived dimensions.


The Correspondence Analysis map from Car Poll data shows the cloud of categories of the four variables as projected onto the two principal axes. From this cloud, you can see that Americans have a strong association with the large car size, while Japanese are highly associated with the small car size. Also, males and single people are strongly associated with the small car type, and females and married people are associated with the medium car size. This information could be used in market research to identify target audiences for advertisements.

Once JMP 12 becomes available later this month, I hope you can explore the new MCA platform with your own categorical variables. I look forward to your feedback.


JMP graphics tip: Alphabetic markers

Someone recently asked me about using letters instead of built-in symbols in JMP scatterplots. In case others are wondering the same thing, here's the long answer.

In addition to the 32 built-in symbols, you can use any character as a marker for a scatterplot. The easiest way to set a letter as the symbol for a row is with the "Other..." item in the Marker submenu. Here's an example with Big Class. After selecting all males, right-click in the row-state area to get:


That brings up a dialog where you can type a letter.


Setting the male rows to "M" and the female rows to "F" yields:


And the scatterplot looks like:


Any character in the current Marker Font (set in Preferences) is allowed, so there is a wide variety of Unicode characters available, and you can change the font preference to a symbol font for even more specialty options. Special characters can be tricky to type from the keyboard, but you can usually find them in a Unicode table and paste them into the dialog.

You can also set them through scripting. Here's a scripting example that uses the Mars/Venus male/female symbols, showing two different techniques for specifying the character.

dt = Open( "$SAMPLE_DATA/Big Class.jmp" );  
r = dt << Select Where( :sex == "M" );  
r << Markers( "\!U2642" ); // hexadecimal Unicode escape  
r = dt << Select Where( :sex == "F" );  
r << Markers( "♀" ); // actual character  

Data table:


Scatter plot:



Graph makeover: Disposable income change chart

Kaiser Fung recently critiqued this chart of changes in disposable income.

He put forth the idea of using categorized slope graphs instead:

How do we do something like that in JMP? We can do paneling and lines with variable color in Graph Builder; we just need to get the data into the right form. As usual, a key step in making a slope graph in Graph Builder is using Table > Stack to get the two data columns into a column of labels and a column of data. Then the label column goes on the X axis, and the data column goes on the Y axis.
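
In JSL, that reshaping step looks roughly like this (table and column names are hypothetical):

```jsl
// Sketch: stack two measurement columns into a label column and a data column
stacked = dt << Stack(
	Columns( :Period1, :Period2 ),
	Output Table( "stacked" ),
	Stacked Data Column( "Data" ),
	Source Label Column( "Label" )
);
// "Label" then goes on the X axis and "Data" on the Y axis in Graph Builder
```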

First, we need to get the data. Sometimes when I'm doing a quick remake, the simplest thing to do is to read the data right off the graph. There are programs and websites to help with that. This time, I used the online app Web Plot Digitizer, which produced a separate CSV file for each series. The values are only estimates of the original values, of course.

After opening the two CSV files and setting the column names to "x" and "y" I wrote the following script to combine them and calculate some derived values.

dt1 = Data Table( "data" );  
dt2 = Data Table( "data (1)" );  
dt = New Table( "changes",  
  New Column( "Percentile", Format( "Percent", 8, 0 ), Set Values( (1 :: 100) / 100 ) ),  
  New Column( "Base", Format( "Percent", 8, 1 ) ),  
  New Column( "Change2010", Format( "Percent", 8, 1 ) ),  
  New Column( "Change2013", Format( "Percent", 8, 1 ) ),  
  New Column( "Net2010", Format( "Percent", 8, 1 ) ),  
  New Column( "Net2013", Format( "Percent", 8, 1 ) ),  
  New Column( "Group", Character ),  
  New Column( "Rank" )  
);  
For Each Row(  
  dt:Base = 1.0;  
  dt:Change2010 = Interpolate( dt:Percentile, dt2:x << Get Values, dt2:y << Get Values );  
  dt:Change2013 = Interpolate( dt:Percentile, dt1:x << Get Values, dt1:y << Get Values );  
  dt:Net2010 = dt:Base * (1 + dt:Change2010);  
  dt:Net2013 = dt:Net2010 * (1 + dt:Change2013);  
  dt:Group = If(  
    dt:Percentile <= 0.1, "Bottom 10%",  
    dt:Percentile <= 0.9, "Middle 80%",  
    "Top 10%"  
  );  
  dt:Rank = If(  
    dt:Percentile <= 0.1, dt:Percentile / .1,  
    dt:Percentile <= 0.9, (dt:Percentile - .1) / .8,  
    (dt:Percentile - .9) / .1  
  );  
);  

The interpolation is to align the "x" values on even percentile values. The Group and Rank are to control how we group and color in Graph Builder. The Base and Net values are because I wanted to try something a little different than what Kaiser did. Just seeing loss and gain percentages is not enough to see the net effect, so I explicitly compute the net effect and graph those instead.

Note: This calculation comes with a big if: It assumes the percentiles are the same across both periods; that is, that the people in a given percentile for the 2007-2010 period are in the same percentile for the 2010-2013 period. I don't know whether that's the case, though it's unlikely the percentiles shifted much during the first period, given that every percentile went down at mostly uniform rates.

After stacking the three columns, Base2007, Net2010, and Net2013, I can create the lines chart. I put Group into the Group X role and Rank into the Color role, added a reference line at 100% and updated the labels.


I just realized that even though I need the base value for the calculations, I don't actually need to show the base values (they're all the same). After using a Local Data Filter to exclude Base2007 and following Kaiser's example to thin out the middle group, I get:




Attention procrastinators: The time has come

Ask most anyone I work with, and they'll tell you that I need a deadline to make something happen.

If you're the same way, then the time has come for you.

The call for papers for Discovery Summit 2015 in San Diego closes Monday morning, March 9. That gives you less than 96 hours to write a couple of paragraphs explaining how you will:

  • demonstrate how you've used JMP to make important discoveries, or
  • share your favorite JMP tips and tricks, or
  • explain how you've applied JMP to advance analytic excellence at your organization.

As a procrastinator, you'll be pleased to find that right now all you need to create is an abstract describing your presentation. You'll have until September to actually create the presentation, so you've got plenty of time to enjoy putting that work off.

Submit your abstract now, and let the next round of procrastination begin immediately.





What? Are you still here reading this? You really are a procrastinator.

Not that I should encourage that kind of behavior, but, if you need some inspiration, check out the papers from Discovery Summit 2014 to see what the conference steering committee is looking for.

Now, go submit your abstract!


Graph makeover: Measles heat map

The Wall Street Journal recently published a nice, interactive graphic piece called "Battling Infectious Diseases in the 20th Century: The Impact of Vaccines," which contains a series of graphs showing the incidence of selected infectious diseases by state and year. Here's the one for measles.


My first impression was, "Wow, it looks like the country had the measles, and the vaccine cleared it up." To the extent that a visualization should communicate a point, this one succeeds at showing the broad impact of the vaccine. However, I looked at it some more, and a few details bothered me.

  • There's a lot of detail needed to make a single point (poor data/ink ratio).
  • The rainbow-like color scheme makes it hard to quickly compare values. That is, there's no perceptual ordering of blue, green and yellow.
  • The color gradient goes to 4000, but the data only goes to 3000.
  • Half the state names are missing.
  • The states are ordered by their postal codes but displayed with standard abbreviations, which is why Alaska is before Alabama, for instance.
  • The data stops at 2002, before the current upswing that puts vaccinations in the news in the first place.

Robert Allison ("How to make infectious diseases look better") and Andy Kirk ("Is it the visualisation or the data we like") have also taken critical looks at this visualization. I was more interested in getting the data and trying other ways of visualizing it in JMP.

Getting the Data

I downloaded CSV files from Project Tycho, which was more trouble than I expected. Apparently, there's an easier bulk download, but I didn't find it. More than 100 manual downloads later, I had the raw data files. After getting the data, I encountered two surprises:

  • The data is weekly instead of yearly.
  • There's missing data throughout -- not just where the WSJ graph indicates.

I like to make a few quick, exploratory graphs to make sure the data looks reasonable. Here's one of all the weekly incidence rates. Even though I cranked down the transparency, it's still not enough to tell that most of the data is under 10 cases per week per 100,000 persons. You can see the outliers though, and I labeled a few of them. I don't know the subject well enough to know if those outliers are suspicious or not.


weekly incidence.png

Handling Missing Data

Before I could start making real graphs, I needed a plan for dealing with the missing data. The WSJ graph essentially treats all missing values as 0, unless the whole year is missing -- then they show the year as a slightly different color (white instead of light blue). I decided to take a different but still artificial approach. I figured the missing data in the early years is likely unknown, but in the later years it seems more likely that the data is really 0 and just not reported because the disease was so rare by then. So I maintain missingness up to some year (1975) and treat missing values as 0 after that date. That will at least avoid the splotchiness on the right side of the graph.
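
That rule is simple to express in JSL. A sketch with hypothetical column names, where the cutoff year is the adjustable part:

```jsl
// Keep missingness through the cutoff year; treat missing as 0 afterward
cutoff = 1975;
For Each Row(
	If( Is Missing( :Cases ) & :Year > cutoff,
		:Cases = 0
	)
);
```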

Recreating the Original

Before I went too far with exploration, I decided to try to reproduce the WSJ graph as it is in JMP. I had to:

  • Summarize the data into years.
  • Join the data with another data table of state abbreviations.
  • Create a custom color theme.
  • Create a heat map in Graph Builder.
  • Add a reference line for the vaccination introduction date.
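
The heat map step itself can be sketched in JSL along these lines (column names are hypothetical; the reference line marks the 1963 introduction of the measles vaccine):

```jsl
// Sketch: yearly incidence heat map with a vaccine reference line
gb = dt << Graph Builder(
	Variables( X( :Year ), Y( :State ), Color( :Rate ) ),
	Elements( Heatmap( X, Y ) ),
	SendToReport(
		Dispatch( {}, "Year", ScaleBox,
			{Add Ref Line( 1963, "Solid", "Black", "Vaccine" )} )
	)
);
```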

heat map original.png

The result is pretty close. The state ordering is slightly different because I didn't bother sorting the states by postal code instead of the displayed abbreviation. The colors may differ because I didn't copy the colors carefully. And of course, the missing data treatment changes some missing years to 0.

After recreating the color theme, I could see some of the logic in it. Rainbow color themes are normally bad for continuous values because different hues are perceived categorically instead of continuously. The WSJ color scheme is essentially three different schemes:

  • A continuous yellow-orange-red scheme for "high" levels.
  • A solid green for "moderate" levels.
  • A continuous white-blue scheme for "low" levels.

I think the greens are what bothers my perception most, so I thought I would try the same idea without the green.

Heat map original no green.png

We still have to pick some arbitrary cut-off, but I like it better without the green because there are fewer categories to sort out. This coloring also limits the range to 3000.

Changing the Colors

How about a truly continuous color theme? Robert Allison uses a white-to-red scheme, which seems reasonable. The problem with a single continuous range is that the data is skewed, so a linear color application would only highlight differences on one end of the scale. We can get around that by using a piece-wise linear scale and by customizing the color theme.
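
The piece-wise idea is just to spend half of the color range on the low end, where most of the data lives. As arithmetic, with hypothetical breakpoints:

```jsl
// Sketch: map a skewed rate onto [0, 1] with a knee at 100 cases per 100,000
knee = 100;
top = 3000;
scaled = If( :Rate <= knee,
	0.5 * :Rate / knee,                        // lower half of the theme
	0.5 + 0.5 * (:Rate - knee) / (top - knee)  // upper half
);
```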

Heat map red.png

Strangely, given the diminished range of colors, the nationwide dip in 1945 is easier to see.

We might also try ordering the states by average incidence instead of alphabetically. This makes it easier to see that the states aren't as uniformly affected as it seemed.

Heat map red and ordered.png

So far, I've been showing the yearly incidence rates, like the WSJ original. However, given the amount of missing data, it makes sense to show weekly averages like Allison does so that we only average the data that we have. Showing the yearly rates is like treating the missing weeks' values as 0. Here is the same graph of the weekly rates:

Heap map weekly.png

The differences are subtle. One example is Georgia in 1934, where only 27 weeks had data, and those weeks' values were quite high. In the original, Georgia in 1934 looked only average because it got watered down by all the missing values that were treated as zeroes, but here it looks high. The truth is probably in between -- it's unlikely the missing values were either 0 or as high as the non-missing values.

The strict missing-is-missing approach is not without its own pitfalls. For comparison, here is Allison's version showing weekly averages. Note the apparent spike in measles in Vermont in 1997. That's really just one week of data. All the other weeks' data is missing. So it looks like a bad year, but maybe it was only a bad week.


I don't know the best way to address the variable amount of missing data. I tried making the squares shrink when their data was missing. It's a truer representation, but I'm not sure the extra complexity is worth it. Besides, you can barely tell the size difference most of the time.

Weekly missing sized2.png

Exploring Other Views

My original thought was to make a nationwide bar chart over time. This proved to be more complicated than I thought because I needed yearly population values in order to properly aggregate the state incidence rates. The raw data only includes counts and rates per 100,000. I figured I could estimate state populations from the count divided by the rate (method 1), but that fails when the rate is 0 or very small. My other approach (method 2) was to interpolate from a few historical population values I found at the Census Bureau site. After trying both, I compared them with a plot:

Population check.png

That horizontal line at 10 (million) is from years with a count of 1 and a rate of 0.01. The rates are only provided to two decimal places. Similar issues cause the outlier at 70: That year, Texas had several weeks with a count of 1 and a rate of 0.00. I ended up using a formula that mixed both estimates, only considering the computed ratio when the rate wasn't very small.
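
The two estimates and the mixing rule reduce to a few formulas. A JSL sketch with hypothetical names (rates are per 100,000 and reported to two decimals, hence the guard against tiny rates):

```jsl
// Method 1: back out population from count and rate; unreliable when the rate rounds toward 0
pop1 = If( :Rate > 0.05, :Count / :Rate * 100000, . );
// Method 2: interpolate between known census populations at the row's year
pop2 = Interpolate( :Year, censusYears, censusPops );
// Mixed estimate: trust the ratio only when the rate is not tiny
pop = If( Is Missing( pop1 ), pop2, pop1 );
```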

Here is the national incidence rate bar chart, which makes the original point pretty well and makes it easier to see the year-to-year variation, including the quiet 1945.

national bars.png

I'm not fond of spiky line graphs, but here's one to provide context for the next graph. Each line is a state.


I prefer the less-is-more philosophy that smoothers bring. You lose the range in this case, but you get a better sense of the general trend and of a few outlier states.



Closing Thoughts

For this study, I'm sharing scripts of all the processing and graphing steps I took, in the spirit of reproducible research. Measles Study.ZIP is attached to my original version of this post on The Plot Thickens in the JMP User Community. Having the scripts also made it easy to try out different missing value strategies. The ZIP file contains the source CSV files plus my JSL scripts and final JMP data table, which itself contains scripts for the graphs. Run the script called "Measles Study" to recreate the files. You can change the "Missing is zero after" variable to some year other than 1975 to try a different missing value cut-off, or none at all.

Though it took a bit of work to get this far, I can see more work to be done for a better analysis. For instance, there is a seasonal pattern to the cases, which might be used to impute some of the missing values. It might be interesting to look for clusters of states or outbreak patterns. Visually, maybe range bands would be better than lines. I'd like to explore Andy Kirk's idea of trying the color schemes on data without such a sharp drop-off. How about using national data after 2002? ...

However, I have to stop somewhere. Besides, with all the data attached perhaps you, dear reader, will continue the exploration.


Coming in JMP 12: Menu customization on Mac

For the past several versions, JMP users have been able to customize their menu bar on Windows. They can add new menu items, rearrange the existing ones, or simplify their user interface by removing items they don’t use. Now Mac users can do these same things in JMP 12.


Users can create new menu items to run simple JSL snippets or complex scripts, allowing them to tailor JMP to their specific application. Here’s an example that adds a new command to launch Principal Components with specific options preselected.

We hope this new flexibility allows you to streamline your JMP workflow and increase your proficiency.


What color is The Dress? JMP can tell you!

The Internet was abuzz last week over a picture of a blue and black dress. Or was that a white and gold dress? That was the question. What color is that dress? Well, as a guy, my immediate response was, "Who cares?" What I wanted to know is: Is that dress on sale? How much is this viral fashion going to end up costing me? But then, as a software developer who does image analysis, I also thought this phenomenon was really cool. How is it that people see the same dress as very different colors? Can I use JMP to tell me why? Can JMP tell me what the actual colors are?

About a month ago, I wrote a JMP script called Image Analyzer, which you can download from the JMP File Exchange (requires a free SAS profile). You can run the script and select an image. The image will be read into JMP and displayed in a window, and a data table will be generated, where each pixel in the image is represented by a row of data in the table. Each row contains an x and y value, which is the location of the pixel in the image; red, green and blue values representing the color in the RGB color space; an intensity value; and hue, lightness and saturation values representing the color in the HLS color space.

The unique thing about this new script over others I have written in the past is that it keeps a connection between the image and the data table, so that selections in one are reflected in the other. In fact, you can run an analysis in JMP, and the graph, the data table and the image will all show the same corresponding selected data.

So let's try the script out with the dress.


Now that I have read in the image and created a data table, I can do some analysis. My colleague, Craige Hales, was also interested in analyzing the dress, and he started by using Clustering. If I select Analyze->Multivariate Methods->Cluster, I get the following graph:


The Cluster analysis indicates three distinct clusters, which I have identified as 1, 2 and 3. Selecting each cluster individually shows the corresponding pixels in the image, and I have shown the three resulting images with the same labeling. Notice that cluster 3 does not highlight any pixels in the dress, but rather identifies areas in the background. That means the dress is actually made up of two fairly distinct colors.

But what are the actual colors? For that, let's look at the actual color values in the data table. Often when working with color, it is beneficial to look at the colors in the HLS color space instead of the RGB color space. So this time, I use Analyze->Distribution and select H, L and S as my distribution columns.


From the Distribution, the Hue (H) again identifies two very distinct colors. Since the HLS values are in a normalized color space, I can interpret the value of 0.65 to be, in fact, blue. Similarly, I can interpret the value of 0.1 to be gold. So that would explain why people see blue and gold in the dress. But why do people usually see either blue and black or white and gold and not just blue and gold? The answer to that lies in the other two Distributions, lightness (L) and saturation (S).
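
Because the HLS values are normalized to [0, 1], you can render the two hues directly to sanity-check the interpretation. A sketch, with arbitrary mid-range lightness and full saturation:

```jsl
// Hue 0.65 at full saturation reads as blue; hue 0.1 reads as gold
blueish = HLS Color( 0.65, 0.5, 1 );
goldish = HLS Color( 0.10, 0.5, 1 );
Show( blueish, goldish );
```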

If we select the bars for the Hue indicating a blue color, we see that the lightness values are mostly above the mid-range, while the saturation values are typically around the 20 percent range, maxing out at about one-third. A low saturation means there is not a lot of color present, but rather it is muted by gray. And a mid- to higher lightness value would indicate that the gray color is closer to white. So someone who focuses on the hue would see blue, whereas someone who is more influenced by the lightness and saturation would see white.


Similarly, we can do the same for the other color. We can select the bars for Hue indicating a gold color. We see a similar pattern for the saturation, indicating the color is muted by gray. But this time, the lightness is predominantly low, indicating that the gray value is closer to black than to white. So someone who focuses on the hue would see gold, whereas someone who is more influenced by the lightness and saturation would see black.


What is interesting is that one group of people seems to be influenced by the lightness and saturation. This causes the blue to appear as white and the black to appear as gold. The reverse is true for the other group: They appear to filter out the light, focus more on the hue, and correctly see the blue and the black.

I have to admit that I am one of those people who clearly see white and gold even though the manufacturer of the dress has indicated it is really blue and black. What colors do you see? And, more importantly, have you found it on sale (in case my wife asks)?
