With multivariate methods, you have to do things differently when you have wide data (many thousands of columns). The Achilles’ heel of the traditional approaches is that they start with a covariance matrix, which grows with the square of the number of columns. Genomics research, for example,
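To see why the covariance matrix becomes the bottleneck, a back-of-the-envelope calculation (a sketch, assuming 8-byte doubles and a dense p × p matrix) makes the quadratic growth concrete:

```python
# Memory for a dense p x p covariance matrix of 8-byte doubles.
def cov_gigabytes(p):
    return p * p * 8 / 1e9

print(cov_gigabytes(1_000))     # a thousand columns: trivially small
print(cov_gigabytes(100_000))   # a hundred thousand columns: 80 GB, impractical
```

Ten times as many columns means a hundred times the memory, which is why wide-data methods avoid forming the covariance matrix at all.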
In an earlier blog post, we looked at cleaning up dirty data in categories. This time, we look at cleaning dirty data in the form of outliers for continuous columns. In industry, it’s not unusual to have most of your values in a narrow range (for example between .1 and
Many JMP users get their data from databases. A few releases ago, we introduced an interactive wizard import dialog to make it easier to import from text files. In a subsequent release, we created a feature that lets you import Web page tables into JMP data tables. In JMP 11,
Data entered manually is usually not clean and consistent. Even when data is entered by multiple-choice fields rather than by text-entry fields, it might need additional work when it is combined with data that may not use the same categories across sources. Sometimes the same categories are spelled differently, abbreviated
The desktop or laptop is now in decline, squeezed from one side by mobile platforms and from the other side by the cloud. As a developer of desktop software, I believe it is time to address the challenges to our viability. Is software for the desktop PC now the living
To benchmark computer performance on statistical methods with big data, we can just generate random data and measure performance on that, right? Well, maybe not: simulated data may not behave the same as real data. Let’s find out.

Logistic Regression

Suppose that we are benchmarking logistic regression. So
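A minimal benchmark of this kind might look like the following sketch (assuming NumPy and a hand-rolled Newton-Raphson fit; the sizes and coefficients are made up for illustration, not taken from the post):

```python
import time
import numpy as np

def fit_logistic(X, y, iters=10):
    """Fit logistic regression by Newton-Raphson (no intercept, for brevity)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))     # predicted probabilities
        W = p * (1.0 - p)                       # per-row IRLS weights
        grad = X.T @ (y - p)
        hess = (X * W[:, None]).T @ X
        beta += np.linalg.solve(hess, grad)
    return beta

rng = np.random.default_rng(0)
n, k = 100_000, 5
X = rng.standard_normal((n, k))                 # purely simulated predictors
true_beta = np.array([0.5, -0.25, 0.0, 1.0, 0.1])
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ true_beta))).astype(float)

start = time.perf_counter()
beta_hat = fit_logistic(X, y)
elapsed = time.perf_counter() - start
print(f"fit took {elapsed:.3f}s, beta_hat = {np.round(beta_hat, 2)}")
```

The caveat the post raises applies here: random Gaussian predictors are well-conditioned and balanced, so timings on them can flatter an algorithm relative to messy real data.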
Sometimes emptiness is meaningful. If a loan applicant leaves his debt and salary fields empty, don’t you think that emptiness is meaningful? If a job applicant leaves the previous-job field empty, don’t you think that emptiness is meaningful? If a political candidate fills out a form that has an
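One common way to treat emptiness as informative is to encode the missingness itself as a feature alongside an imputed value. A minimal sketch, assuming NumPy and hypothetical applicant records (the numbers are invented for illustration):

```python
import numpy as np

# Hypothetical applicant records: np.nan marks fields left empty.
debt   = np.array([12000.0, np.nan, 5000.0, np.nan])
salary = np.array([55000.0, 60000.0, np.nan, np.nan])

# Encode the emptiness itself as an indicator feature...
debt_missing   = np.isnan(debt).astype(int)
salary_missing = np.isnan(salary).astype(int)

# ...and impute the numeric column so models can still use it.
debt_filled = np.where(np.isnan(debt), np.nanmedian(debt), debt)

print(debt_missing)   # which applicants left the debt field empty
print(debt_filled)    # empty entries filled with the column median
```

A model fit on both the filled values and the indicators can then learn whether leaving a field blank is itself predictive.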
When you have millions of observations of real data and fit a simple relationship between two variables, a non-significant test is strong evidence of fraud. The one kind of data that is reliably non-significant for very tall data tables is simulated data. We live
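A quick simulation (a sketch, assuming NumPy; the effect size of 0.01 is an arbitrary choice to stand in for the faint associations real data carry) illustrates why: at a million rows, even a tiny real effect is overwhelmingly significant, while independent simulated noise is not.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

def t_stat(x, y):
    """t statistic for testing zero correlation between x and y."""
    r = np.corrcoef(x, y)[0, 1]
    return r * np.sqrt((n - 2) / (1 - r * r))

x = rng.standard_normal(n)
noise_y = rng.standard_normal(n)             # pure simulated noise
weak_y = 0.01 * x + rng.standard_normal(n)   # tiny but real association

print("noise t:", t_stat(x, noise_y))   # behaves like N(0,1): rarely significant
print("weak  t:", t_stat(x, weak_y))    # around 10: wildly significant
```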
In semiconductor data, it is common for probe measurements that encounter an electrical short to fall far out in the distribution, i.e., to be outliers. When we test that means are the same, these outlying values inflate our estimate of the standard deviation (σ). Remember that the
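The inflation is easy to demonstrate. In this sketch (assuming NumPy; the probe values are invented, with a couple of short-circuit readings appended), the ordinary standard deviation blows up while a robust scale estimate, the scaled MAD, barely moves:

```python
import numpy as np

rng = np.random.default_rng(2)
probe = rng.normal(loc=1.0, scale=0.05, size=200)   # normal probe readings
shorted = np.append(probe, [25.0, 30.0])            # two electrical shorts

print("clean sigma:   ", probe.std(ddof=1))         # about 0.05
print("inflated sigma:", shorted.std(ddof=1))       # dominated by the shorts

# Robust scale: median absolute deviation, scaled to estimate sigma.
mad_sigma = 1.4826 * np.median(np.abs(shorted - np.median(shorted)))
print("robust sigma:  ", mad_sigma)                 # still near 0.05
```

With σ inflated this way, real differences in means get buried, which is the problem the post goes on to address.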
Purely random data has a 5% chance of appearing significant. Choose the most significant p-values from many tests of random data, and you will be selecting exactly the tests that are significant by chance alone. Suppose we have a process that we know is stable and consistent. We measure lots of
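This is straightforward to simulate. In the sketch below (assuming NumPy; the test counts and sample sizes are arbitrary), a thousand z-tests on pure noise come out significant about 5% of the time, yet the smallest p-value looks impressively "real":

```python
import math
import numpy as np

rng = np.random.default_rng(3)
n_tests, n = 1000, 50

def p_value(sample):
    """Two-sided z-test of mean zero (sigma known to be 1)."""
    z = sample.mean() * math.sqrt(len(sample))
    phi = 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0)))  # normal CDF
    return 2.0 * (1.0 - phi)

pvals = np.array([p_value(rng.standard_normal(n)) for _ in range(n_tests)])

print("fraction significant at 0.05:", (pvals < 0.05).mean())  # near 0.05
print("smallest p-value:", pvals.min())                        # tiny, by chance alone
```

Cherry-picking that smallest p-value out of a thousand null tests is exactly the selection trap the post describes.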