Tag: Big Statistics

Wide data discriminant analysis

With multivariate methods, you have to do things differently when you have wide (many thousands of columns) data. The Achilles' heel of the traditional approaches with wide data is that they start with a covariance matrix, which grows with the square of the number of columns. Genomics research, for example, …
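To put rough numbers on that quadratic growth, here is a back-of-the-envelope sketch (assuming dense double-precision storage; the column counts are illustrative):

```python
# Bytes needed for a dense p x p covariance matrix of 8-byte doubles.
def covariance_matrix_bytes(p: int) -> int:
    return p * p * 8

for p in (1_000, 10_000, 100_000):
    print(f"{p:>7} columns -> {covariance_matrix_bytes(p) / 1e9:g} GB")
```

At 1,000 columns the matrix is a trivial 8 MB; at 100,000 columns it is 80 GB before any computation has even started.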

Handling outliers at scale

In an earlier blog post, we looked at cleaning up dirty data in categories. This time, we look at cleaning dirty data in the form of outliers for continuous columns. In industry, it’s not unusual to have most of your values in a narrow range (for example between .1 and …

Accessing data at scale from databases

Many JMP users get their data from databases. A few releases ago, we introduced an interactive wizard import dialog to make it easier to import from text files. In a subsequent release, we created a feature that lets you import Web page tables into JMP data tables. In JMP 11, …

Cleaning categories at scale with Recode

Data entered manually is usually not clean and consistent. Even when data is entered through multiple-choice fields rather than free-text fields, it can need additional work when it is combined with data that does not use the same categories across sources. Sometimes the same categories are spelled differently, abbreviated …
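As a minimal sketch of the idea behind recoding (the mapping table and category values here are hypothetical, not Recode's actual interface):

```python
# Hypothetical cleanup table: variant spellings and abbreviations
# all collapse to one canonical category.
recode_map = {
    "calif.": "California",
    "CA": "California",
    "california": "California",
    "N. Carolina": "North Carolina",
    "NC": "North Carolina",
}

def recode(value: str, mapping: dict) -> str:
    # Leave values without a rule unchanged rather than guessing.
    return mapping.get(value.strip(), value.strip())

cleaned = [recode(v, recode_map) for v in ["CA", "calif.", "NC", "Texas"]]
```

The fallback matters: a recode step should only change values it has an explicit rule for, so unrecognized categories survive for a later cleanup pass.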

“The desktop computer is dead” and other myths

The desktop or laptop is now in decline, squeezed from one side by mobile platforms and from the other side by the cloud. As a developer of desktop software, I believe it is time to address the challenges to our viability. Is software for the desktop PC now the living …

Big real data is different from big simulated data: Benchmarking

To benchmark computer performance on statistical methods with big data, we can just generate random data and measure performance on that, right? Well, simulated data may not behave the same as real data. Let’s find out. Logistic regression: Suppose that we are benchmarking logistic regression. So …

Bad data happens to good people: Robust to outliers

In semiconductor data, it is common for probe measurements that encounter an electrical short to fall far out in the distribution, i.e., to be outliers. When we test that means are the same, these outlying values inflate our estimate of the standard deviation (σ). Remember that the …
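A toy illustration of that inflation (the measurement values are made up, and the MAD-based estimate shown is one standard robust alternative, not necessarily the method the post goes on to use):

```python
import statistics

# Ten in-range probe measurements plus one electrical-short outlier.
values = [1.0, 1.1, 0.9, 1.05, 0.95, 1.02, 0.98, 1.03, 0.97, 1.01, 50.0]

# Classical estimate: the single outlier inflates sigma enormously.
sigma_classical = statistics.stdev(values)

# Robust alternative: median absolute deviation (MAD), scaled by
# 1.4826 so it is consistent with sigma for normally distributed data.
med = statistics.median(values)
mad = statistics.median(abs(v - med) for v in values)
sigma_robust = 1.4826 * mad
```

The classical estimate lands around 15 because of one bad probe, while the robust estimate stays near the spread of the good measurements.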

Not just filtering coincidences: False discovery rate

A test on purely random data has a 5% chance of coming out significant at the 0.05 level. Choose the most significant p-values from many tests of random data, and you will be selecting tests that are significant by chance alone. Suppose we have a process that we know is stable and consistent. We measure lots of …
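A small simulation makes the point (uniform p-values stand in for tests on purely random data, and the Benjamini-Hochberg step-up rule shown is one standard way to control the false discovery rate, not necessarily the post's):

```python
import random

random.seed(1)

# Under a true null hypothesis, p-values are uniform on [0, 1], so
# about 5% of 10,000 tests on random data clear alpha = 0.05 by chance.
pvals = [random.random() for _ in range(10_000)]
naive_hits = sum(p < 0.05 for p in pvals)  # roughly 500 "discoveries"

def benjamini_hochberg(pvals, q=0.05):
    # Step-up rule: find the largest k with p_(k) <= k*q/m,
    # then reject the k smallest p-values.
    m = len(pvals)
    k_max = 0
    for k, p in enumerate(sorted(pvals), start=1):
        if p <= k * q / m:
            k_max = k
    return k_max
```

On this null data the naive cutoff flags hundreds of coincidences, while the step-up rule typically flags little or nothing.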
