This is the final post in my series of machine learning best practices. If you missed the earlier posts, start at the beginning, or read the whole series. While post four in the series was about combining different types of models, this
This is the seventh post in my series of machine learning best practices. Catch up by reading the first post or the whole series now. Generalization is a learned model’s ability to fit well to new, unseen data rather than just the data it was trained on. Overfitting refers to a model that fits
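To make the distinction concrete, here is a minimal sketch (my own illustration using scikit-learn and a synthetic dataset, not code from the post): an unconstrained decision tree can memorize its training data, while a depth-limited tree tends to generalize better to held-out data.

```python
# Illustrative sketch: an overfit model scores well on training data
# but worse on new, unseen data (weak generalization).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An unconstrained tree can memorize the training data...
overfit = DecisionTreeClassifier(max_depth=None).fit(X_train, y_train)
# ...while a depth-limited tree generalizes better to unseen data.
simpler = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)

for name, model in [("unconstrained", overfit), ("depth-limited", simpler)]:
    print(name,
          "train:", round(model.score(X_train, y_train), 3),
          "test:", round(model.score(X_test, y_test), 3))
```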
In a recent article, “Price-bots can collude against consumers,” the Economist discusses the consumer effects of prices set by price-bots. The article opens with an example of gasoline pricing strategies on Martha’s Vineyard. With only a small number of gas stations on the island, the price-bots can check all competitors’ prices frequently
This is the sixth post in my series of machine learning best practices. If you’re coming across the series for the first time, you can go back to the beginning or read the whole series. Aristotle was likely one of the first data scientists, studying empiricism by learning through
This is the fifth post in my series of machine learning best practices. Hyperparameters are the algorithm options one "turns and tunes" when building a learning model. They cannot be learned by the training algorithm itself, so they must be set before the model is trained. A lot of manual
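As a quick illustration of tuning in practice (a sketch assuming scikit-learn and a synthetic dataset, neither of which comes from the post), hyperparameters such as a boosted ensemble's learning rate and tree depth are fixed before training and are typically chosen by searching over candidate values:

```python
# Hyperparameters are set before training; a grid search tries
# candidate settings and keeps the best by cross-validation.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=500, random_state=0)

param_grid = {
    "learning_rate": [0.01, 0.1],   # hyperparameter: shrinkage per tree
    "max_depth": [2, 4],            # hyperparameter: depth of each tree
}
search = GridSearchCV(GradientBoostingClassifier(random_state=0),
                      param_grid, cv=5).fit(X, y)
print(search.best_params_)
```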
This is the fourth post in my series of 10 machine learning best practices. It’s common to build models on historical training data and then apply the model to new data to make decisions. This process is called model deployment or scoring. I often hear data scientists say, “It took
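Here is a hedged sketch of that train-then-score pattern (scikit-learn and Python's pickle module are my stand-ins for whatever deployment stack a team actually uses): the model is built once on historical data, persisted, then loaded later to score new records.

```python
# Train on historical data, persist the model, then load it later
# to score new records -- the "deployment or scoring" step.
import pickle
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X_hist, y_hist = make_classification(n_samples=500, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_hist, y_hist)

# Persist the trained model (in production: a model registry, PMML, etc.)
blob = pickle.dumps(model)

# Later, in the scoring environment: load and apply to new data.
scorer = pickle.loads(blob)
X_new, _ = make_classification(n_samples=5, random_state=1)
print(scorer.predict_proba(X_new)[:, 1])  # scores for new records
```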
This is the third post in my series of machine learning techniques and best practices. If you missed the earlier posts, read the first one now, or review the whole machine learning best practices series. Data scientists commonly use machine learning algorithms, such as gradient boosting and decision forests, that automatically build
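For readers who want to try those algorithms, here is a small sketch (the library and synthetic data are illustrative choices of mine, not the post's): both gradient boosting and random forests grow many decision trees automatically from the training data.

```python
# Both methods automatically build ensembles of decision trees.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, random_state=0)

for model in (GradientBoostingClassifier(random_state=0),
              RandomForestClassifier(random_state=0)):
    scores = cross_val_score(model, X, y, cv=5)
    print(type(model).__name__, round(scores.mean(), 3))
```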
This is the second post in my series of machine learning best practices. If you missed it, read the first post, Machine learning best practices: the basics. As we go along, all ten tips will be archived at this machine learning best practices page. Machine learning commonly requires the use of
I started my training in machine learning at the University of Tennessee in the late 1980s. Of course, we didn’t call it machine learning then, and we didn’t call ourselves data scientists yet either. We used terms like statistics, analytics, data mining and data modeling. Regardless of what you call
When building models, data scientists and statisticians often talk about penalty, regularization and shrinkage. What do these terms mean and why are they important? According to Wikipedia, regularization "refers to a process of introducing additional information in order to solve an ill-posed problem or to prevent overfitting. This information usually
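As a concrete example of a penalty at work (a sketch of ridge regression in scikit-learn, my choice of illustration): the penalty term alpha * ||coef||^2 is the "additional information" added to the least-squares objective, and it shrinks the fitted coefficients relative to an unpenalized model.

```python
# Regularization in action: the ridge penalty alpha * ||coef||^2
# shrinks coefficients relative to ordinary least squares.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression, Ridge

X, y = make_regression(n_samples=50, n_features=10, noise=10.0,
                       random_state=0)

ols = LinearRegression().fit(X, y)
ridge = Ridge(alpha=10.0).fit(X, y)  # alpha controls penalty strength

print("OLS   coef norm:", round(np.linalg.norm(ols.coef_), 2))
print("Ridge coef norm:", round(np.linalg.norm(ridge.coef_), 2))  # smaller
```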