Wide data discriminant analysis

With multivariate methods, you have to do things differently when you have wide data (many thousands of columns). The Achilles' heel of the traditional approaches is that they start with a covariance matrix, which grows with the square of the number of columns. Genomics research, for example, [...]
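To make that quadratic growth concrete, here is a minimal back-of-the-envelope sketch (the column counts are hypothetical, chosen only for illustration) of the memory a dense covariance matrix of 8-byte doubles would need:

```python
# A p-by-p covariance matrix of 8-byte doubles needs 8 * p * p bytes,
# so memory grows with the square of the column count.
def covariance_matrix_bytes(p):
    return 8 * p * p

for p in (100, 10_000, 1_000_000):
    gib = covariance_matrix_bytes(p) / 2**30
    print(f"{p:>9} columns -> {gib:,.3f} GiB")
```

At ten thousand columns the matrix already needs hundreds of megabytes; at a million columns it is far beyond any desktop machine, which is why wide-data methods avoid forming it at all.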


Handling outliers at scale

In an earlier blog post, we looked at cleaning up dirty data in categories. This time, we look at cleaning dirty data in the form of outliers in continuous columns. In industry, it’s not unusual to have most of your values in a narrow range (for example, between .1 and [...]
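One common way to flag outliers when most values sit in a narrow range is a median/MAD rule; this is a minimal sketch of that general idea, not the post's specific method, and the cutoff k and the 1.4826 scale factor are just conventional choices:

```python
import statistics

def mad_outliers(values, k=3.5):
    """Flag values more than k robust (MAD-based) deviations from the median."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    scale = 1.4826 * mad or 1.0          # guard against a zero MAD
    return [v for v in values if abs(v - med) / scale > k]

# Made-up example: readings mostly near 0.1, plus one wild value.
data = [0.12, 0.15, 0.11, 0.14, 0.13, 9.8]
print(mad_outliers(data))
```

Because the median and MAD barely move when a few values are wild, the rule keeps working even when the outliers themselves are extreme.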


Accessing data at scale from databases

Many JMP users get their data from databases. A few releases ago, we introduced an interactive wizard import dialog to make it easier to import from text files. In a subsequent release, we created a feature that lets you import Web page tables into JMP data tables. In JMP 11, [...]


Cleaning categories at scale with Recode

Data entered manually is usually not clean and consistent. Even when data is entered through multiple-choice fields rather than text-entry fields, it might need additional work when it is combined with data from sources that do not use the same categories. Sometimes the same categories are spelled differently, abbreviated [...]
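The core of any recode operation is a mapping from the messy spellings to a canonical category. A minimal sketch (the mapping table and values are made up for illustration; this is the general technique, not JMP's Recode dialog itself):

```python
# Map inconsistent spellings and abbreviations to one canonical category.
RECODE = {
    "calif.": "California",
    "ca": "California",
    "california": "California",
    "n.y.": "New York",
    "ny": "New York",
}

def recode(value):
    """Return the canonical category, or the original value if unmapped."""
    v = value.strip()
    return RECODE.get(v.lower(), v)

print([recode(x) for x in [" CA", "california", "NY", "Texas"]])
```

Unmapped values pass through unchanged, so the table can be grown incrementally as new variants turn up in combined data.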


“The desktop computer is dead” and other myths

The desktop or laptop is now in decline, squeezed from one side by mobile platforms and from the other side by the cloud. As a developer of desktop software, I believe it is time to address the challenges to our viability. Is software for the desktop PC now the living [...]


Big real data is different from big simulated data: Benchmarking

To benchmark computer performance on statistical methods with big data, we can just generate random data and measure performance on that, right? Well, simulated data may not behave the same as real data. Let’s find out. Logistic regression: Suppose that we are benchmarking logistic regression. So [...]


It’s not just what you say, but what you don’t say: Informative missing values

Sometimes emptiness is meaningful. If a loan applicant leaves his debt and salary fields empty, don’t you think that emptiness is meaningful? If a job applicant leaves the previous-job field empty, don’t you think that emptiness is meaningful? If a political candidate fills out a form that has an [...]
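One standard way to let a model use that emptiness is to add an is-missing indicator column alongside each imputed value, so "left it blank" becomes a predictor in its own right. A minimal sketch of that general technique (the field names and the median imputation are illustrative assumptions, not the post's specific recipe):

```python
import statistics

def add_missing_indicator(rows, field):
    """Impute missing values with the median and record missingness as 0/1."""
    observed = [r[field] for r in rows if r[field] is not None]
    fill = statistics.median(observed)
    for r in rows:
        r[field + "_missing"] = int(r[field] is None)   # the informative part
        if r[field] is None:
            r[field] = fill
    return rows

# Hypothetical loan applicants; one left the debt field empty.
applicants = [{"debt": 12000}, {"debt": None}, {"debt": 30000}]
print(add_missing_indicator(applicants, "debt"))
```

The indicator column carries the "chose not to answer" signal even after the blank itself has been filled in.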


Big Data always has significant differences but not always practical differences: Practical significance and equivalence

When you have millions of observations of real data and fit a simple relationship between two variables, a non-significant test is strong evidence of fraud. The one kind of data that is reliably non-significant for very tall data tables is simulated data. We live [...]
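The reason everything tests significant in tall tables is that the z-statistic for even a trivial effect grows like the square root of n. A minimal sketch (a made-up 0.01-sigma mean shift between two groups, with known unit variance assumed so the standard error is simply sqrt(2/n)):

```python
import math
import random

random.seed(0)

def z_for_mean_difference(n, shift):
    """z-statistic for the difference of two group means, unit variance assumed."""
    a = [random.gauss(0.0, 1.0) for _ in range(n)]
    b = [random.gauss(shift, 1.0) for _ in range(n)]
    mean_a, mean_b = sum(a) / n, sum(b) / n
    return (mean_b - mean_a) / math.sqrt(2.0 / n)

# A 0.01-sigma shift is practically negligible, yet its expected z-statistic
# is 0.01 * sqrt(n / 2): with enough rows it clears any significance threshold.
for n in (100, 10_000, 1_000_000):
    print(n, round(z_for_mean_difference(n, 0.01), 2))
```

That is the gap between statistical and practical significance: at a million rows the shift is "significant" while remaining far too small to matter, which is where equivalence tests earn their keep.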


Bad data happens to good people: Robust to outliers

In semiconductor data, it is common for probe measurements that encounter an electrical short to fall far out in the distribution, i.e., to be outliers. When we test whether means are the same, these outlying values inflate our estimate of the standard deviation σ. Remember that the [...]
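A quick sketch of that inflation, comparing the classical standard deviation against a robust MAD-based estimate (the probe readings are made up, and the 1.4826 factor is the usual constant that makes the MAD consistent for a normal σ):

```python
import statistics

# Hypothetical probe measurements near 1.0, plus one shorted reading.
probe = [1.01, 0.99, 1.02, 0.98, 1.00, 1.01, 0.99, 47.0]

sd = statistics.stdev(probe)             # classical sigma: blown up by the outlier
med = statistics.median(probe)
mad = statistics.median(abs(v - med) for v in probe)
robust_sd = 1.4826 * mad                 # robust sigma estimate: barely moved

print(round(sd, 3), round(robust_sd, 3))
```

A single bad reading drags the classical estimate up by orders of magnitude, while the robust estimate still reflects the spread of the good measurements, so tests that use it keep their power.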


Not just filtering coincidences: False discovery rate

Purely random data has a 5% chance of testing significant at the 0.05 level. Choose the most significant p-values from many tests of random data, and you will be selecting the tests that are significant by chance alone. Suppose we have a process that we know is stable and consistent. We measure lots of [...]
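The standard tool for this multiplicity problem is the Benjamini-Hochberg false discovery rate procedure: sort the m p-values, find the largest rank k with p(k) ≤ (k/m)·q, and reject everything up to that rank. A minimal sketch (the p-values are made up for illustration):

```python
def bh_reject(pvalues, q=0.05):
    """Benjamini-Hochberg: return indices of hypotheses rejected at FDR level q."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= rank / m * q:   # step-up comparison against (k/m)*q
            k = rank
    return sorted(order[:k])             # indices of rejected hypotheses

pvals = [0.001, 0.008, 0.039, 0.041, 0.20, 0.74]
print(bh_reject(pvals))
```

Unlike taking everything below 0.05, the procedure tightens the cutoff as the number of tests grows, so it controls the expected fraction of selected "discoveries" that are really just coincidences.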
