After another long hiatus, I'm finally publishing the next article in the ¡Explícate! series. In it we will see how game theory lends us a hand in better interpreting our machine learning models, using the ideas of Nobel laureate in economics Lloyd Shapley. We will come to understand the concepts behind
Tag: model interpretability
In the preceding two posts, we looked at issues around interpretability of modern black-box machine-learning models and introduced SAS® Model Studio within SAS® Visual Data Mining and Machine Learning. Now we turn our attention to programmatic interpretability.
In the second of a three-part series of posts, SAS' Funda Gunes and her colleague Ricky Tharrington summarize model-agnostic model interpretability in SAS Viya.
A monotonic relationship exists when a model's output increases or stays constant as one of its inputs increases. Relationships can be monotonically increasing or decreasing, depending on whether the output moves in the same direction as the input or the opposite one. A common example is in credit risk, where you would expect someone's risk score to increase with the amount of debt they have relative to their income.
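A minimal sketch of what checking such a relationship might look like in practice: the helper below (a hypothetical function, not part of any SAS or scikit-learn API) sweeps one feature across a grid while holding the rest of the data fixed, and checks that the average prediction never decreases. The toy "risk score" model is likewise an assumption for illustration.

```python
import numpy as np

def is_monotone_increasing(model_predict, X, feature_idx, grid):
    """Return True if mean predictions are non-decreasing as the
    given feature sweeps the grid, other features held fixed."""
    preds = []
    for v in grid:
        X_mod = X.copy()
        X_mod[:, feature_idx] = v          # overwrite one feature everywhere
        preds.append(model_predict(X_mod).mean())
    return all(b >= a for a, b in zip(preds, preds[1:]))

# Toy risk model: score rises with debt-to-income ratio (feature 0)
model = lambda X: 0.5 * X[:, 0] + 0.1 * X[:, 1]
X = np.random.rand(100, 2)
print(is_monotone_increasing(model, X, 0, np.linspace(0, 1, 11)))  # True
```

For a real black-box model you would pass its `predict` method in place of the lambda; some gradient boosting libraries can also enforce such constraints at training time rather than merely checking them afterwards.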
In the first of a three-part series of posts, SAS' Funda Gunes and her colleague Ricky Tharrington summarize model-agnostic model interpretability in SAS Viya.
We have updated our software for improved interpretability since this post was written. For the latest on this topic, read our new series on model-agnostic interpretability. While some machine learning models – like decision trees – are transparent, the majority of models used today – like deep neural networks, random forests, gradient boosting
Assessing a model's accuracy usually is not enough for a data scientist who wants to know more about how a model is working. Often
Don't jump into modelling. First, understand and explore your data! This is common advice for many data scientists. If your data set is messy,
As machine learning takes its place in many recent advances in science and technology, the interpretability of machine learning models grows in importance. We
"The Role of Model Interpretability in Data Science" is a recent post on Medium.com by Carl Anderson, Director of Data Science at the fashion eyeware company Warby Parker. Anderson argues that data scientists should be willing to make small sacrifices in model quality in order to deliver a model that