Most model assessment metrics, such as Lift, AUC, KS, and ASE, require the target/label to be present in the data. This is always the case at the time of model training. But how can I ensure that the developed model can also be applied to new data for prediction?
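To make the point concrete, here is a minimal sketch (not code from the post itself) of a pairwise AUC computation: the metric compares predicted scores against the true labels, so it simply cannot be evaluated on scoring data where the label column does not exist.

```python
def auc(y_true, y_score):
    """Pairwise AUC: fraction of (positive, negative) pairs ranked correctly."""
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    # A tie between a positive and a negative score counts as half a win.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# At training time the labels are available, so AUC can be computed:
print(auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75

# At scoring time y_true does not exist, so AUC (and likewise Lift,
# KS, and ASE) cannot be computed on the new data.
```

The same limitation applies to any label-based metric, which is why validating a model on unlabeled scoring data requires a different approach.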
This post describes a fully automated validation pipeline for analytical models, part of an analytical platform that was recently set up in a customer project.
Let us now take a look at a well-known metaphor for test case development in the software industry. We are referring to the idea of the “test pyramid.”
In total, there are four posts in this blog series; this first post describes some basic principles of the DevOps (or ModelOps) approach.
In a previous post, I discussed using discrete-event simulation to validate an optimization model and its underlying assumptions. A similar approach can be used to validate queueing models as well. And when it is found that the assumptions required for a queueing model are not a good fit for the
The primary objective of many discrete-event simulation projects is system investigation. Output data from the simulation model are used to better understand the operation of the system (whether that system is real or theoretical), as well as to conduct various "what-if"-type analyses. However, I recently worked on another model
Last year, my SAS Simulation Studio R&D team began a discrete-event simulation modeling project of a neonatal intensive care unit (NICU) with two doctors from Duke University’s Division of Neonatal-Perinatal Medicine. After several initial meetings discussing such things as necrotizing enterocolitis (NEC), retinopathy of prematurity (ROP), patent ductus arteriosus (PDA), and