Introduction to SHAP with Python
How to create and interpret SHAP plots: waterfall, force, decision, mean SHAP, and beeswarm
SHAP is one of the most powerful Python packages for understanding and debugging your models. With a few lines of code, we can identify and visualise important relationships in our model.
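To give a sense of what "a few lines of code" looks like, here is a minimal sketch (my own illustration, not taken from the article) that fits an XGBoost model on the California housing data from scikit-learn and uses the shap package to produce the waterfall, force, mean SHAP (bar) and beeswarm plots mentioned above.

```python
import shap
import xgboost
from sklearn.datasets import fetch_california_housing

# Load a small regression dataset and fit a model to explain
X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = xgboost.XGBRegressor(n_estimators=100).fit(X, y)

# Compute SHAP values for every prediction
explainer = shap.Explainer(model, X)
shap_values = explainer(X)

# Local explanations: one prediction at a time
shap.plots.waterfall(shap_values[0])              # waterfall plot for the first row
shap.plots.force(shap_values[0], matplotlib=True) # force plot for the same row

# Global explanations: aggregated over the whole dataset
shap.plots.bar(shap_values)       # mean |SHAP| value per feature
shap.plots.beeswarm(shap_values)  # distribution of SHAP values per feature
```

The dataset, model and plot choices here are only assumptions for illustration; the same pattern of building an Explainer, computing SHAP values and calling the plotting functions applies to other models and data.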
Interpretable vs Explainable Machine Learning
The difference between an interpretable and explainable model and why it’s probably not that important
When you first dive into the field of interpretable machine learning, you will notice similar terms flying around. Interpretability vs explainability. Interpretations vs explanations. We can't even seem to decide on the name for the field: is it IML or XAI?
What is Algorithm Fairness?
An introduction to the field that aims at understanding and preventing unfairness in machine learning
At first, the concept of an unfair machine learning model may seem like a contradiction. How can machines, with no concept of race, ethnicity, gender or religion, actively discriminate against certain groups? But algorithms do discriminate and, if left unchecked, they will continue to make decisions that perpetuate historical injustices. This is where the field of algorithm fairness comes in.
The Art of Explaining Predictions
How to explain your model in a human-friendly way
An important part of a data scientist's role is to explain model predictions. Often, the person receiving the explanation will be non-technical. If you start talking about cost functions, hyperparameters or p-values, you will be met with blank stares. We need to translate these technical concepts into layman's terms. This process can be more challenging than building the model itself.