Tag: ensemble models
This is the third post in my series on machine learning techniques and best practices. If you missed the earlier posts, read the first one now, or review the whole machine learning best practices series. Data scientists commonly use machine learning algorithms, such as gradient boosting and decision forests, that automatically build
Ensemble methods are commonly used to boost predictive accuracy by combining the predictions of multiple machine learning models. The traditional wisdom has been to combine so-called “weak” learners. However, a more modern approach is to create an ensemble of a well-chosen collection of strong yet diverse models. Building powerful ensemble models
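The idea of combining a well-chosen set of strong yet diverse models can be sketched with a soft-voting ensemble. This is a minimal illustration, assuming scikit-learn and a synthetic dataset; the specific models and settings are my own choices, not from the post.

```python
# Minimal sketch: a voting ensemble of diverse strong learners.
# Dataset and model choices are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Three individually strong but structurally diverse models,
# combined by averaging their predicted probabilities ("soft" voting).
ensemble = VotingClassifier(
    estimators=[
        ("gb", GradientBoostingClassifier(random_state=0)),
        ("rf", RandomForestClassifier(random_state=0)),
        ("lr", LogisticRegression(max_iter=1000)),
    ],
    voting="soft",
)
ensemble.fit(X_train, y_train)
print(round(ensemble.score(X_test, y_test), 2))
```

The diversity matters: averaging probabilities from models that make different kinds of errors tends to cancel out individual mistakes.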
I’ve often heard people say of weather forecasters: “They have the best job…they just report what the models are telling them, and when they’re wrong they can always blame it on Mother Nature throwing us a curve.” While that’s true, this glass-half-empty crowd is failing to appreciate how amazing the
My esteemed colleague and recently published author Jared Dean shared some thoughts on how ensemble models help make better predictions. For predictive modeling, Jared explains the value of two main forms of ensembles: bagging and boosting. It should not be surprising that the idea of combining predictions from more than one
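The contrast between the two forms is easy to see in code. Below is a minimal sketch, assuming scikit-learn; the dataset and hyperparameters are illustrative assumptions, not taken from Jared Dean's post.

```python
# Minimal sketch contrasting bagging and boosting with decision trees.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier

X, y = make_classification(n_samples=500, random_state=1)

# Bagging: trees are trained independently on bootstrap samples of the
# data, and their predictions are averaged (reduces variance).
bagging = BaggingClassifier(n_estimators=50, random_state=1)
bagging.fit(X, y)

# Boosting: trees are trained sequentially, each one weighting the
# examples the previous trees got wrong (reduces bias).
boosting = AdaBoostClassifier(n_estimators=50, random_state=1)
boosting.fit(X, y)
```

Both build many weak trees, but bagging trains them in parallel on resampled data while boosting trains them in sequence on reweighted data.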
In 1965's Subterranean Homesick Blues, Bob Dylan taught us: "You don't need a weatherman / To know which way the wind blows." In 1972's You Don't Mess Around with Jim, Jim Croce taught us: "You don't spit into the wind." By combining these two teachings, one can logically conclude that:
My oldest son is in the school band, and they are getting ready for their spring concert. Their fall concert was wonderful; hearing dozens of students with their specific instruments playing together creates beautiful, rich-sounding music. The depth of sound from orchestral or symphonic music is unmatched. In data