1.1 Motivation

Machine learning is a vague name. There is some learning and there are some machines, but what the heck is going on? What does the term really mean? And is it possible that its meaning evolves over time?

  • A few years ago I would have said that the term refers to machines learning from humans. In supervised learning problems, a human being creates a labeled dataset and machines are tuned/trained to predict the correct labels from data.
  • Recently we have seen more and more examples of machines learning from other machines. Self-playing neural nets like AlphaGo Zero (Silver et al. 2017) learn from themselves with blazing speed. Humans are involved in designing the learning environment, but labeling turns out to be very expensive or infeasible, so we are looking for other ways to learn from partial labels, fuzzy labels, or no labels at all.
  • I can imagine that in the near future humans will learn from machines. Well-trained black boxes may teach us how to be better at playing Go, at reading PET (positron emission tomography) images, or at diagnosing patients.

As human supervision over learning decreases, understanding black boxes becomes more important. To make this future possible, we need tools that extract useful information from black-box models.

DALEX is the tool for this.

1.1.1 Why DALEX?

In recent years we have observed an increasing interest in tools for knowledge extraction from complex machine learning models; see, for example, Štrumbelj and Kononenko (2011), Tzeng and Ma (2005), Puri et al. (2017), and Zeiler and Fergus (2014).

There are several useful R packages for knowledge extraction from R models; see, for example, pdp (Greenwell 2017), ALEPlot (Apley 2017), randomForestExplainer (Paluszynska and Biecek 2017), xgboostExplainer (Foster 2017), live (Staniak and Biecek 2017), and others.

Do we need yet another R package to better understand ML models? I think so. Several features of the DALEX package make it unique.

  • Scope. DALEX is a wrapper around a large number of very good tools and model explainers. It offers a wide range of state-of-the-art techniques for model exploration. Some of these techniques are more useful for understanding model predictions; others are more handy for understanding model structure.
  • Consistency. DALEX offers a consistent grammar across various techniques for model explanation. It is a wrapper that smooths out the differences across R packages.
  • Model agnostic. DALEX explainers are model agnostic. One can use them for linear models, tree ensembles, or other structures, so we are not limited to any particular family of black-box models.
  • Model comparisons. One can learn a lot from a single black-box model, but one can learn much more by contrasting models with different structures, such as linear models and ensembles of trees. All DALEX explainers support model comparisons.
  • Visual consistency. Each DALEX explainer can be plotted with the generic plot() function. These visual explanations are based on the ggplot2 package (Wickham 2009), which produces elegant, customizable, and consistent graphs. A minimal sketch of this workflow follows the list.
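To give a flavour of this consistent grammar, below is a minimal sketch of how two structurally different models can be wrapped and compared. It assumes the apartments and apartmentsTest datasets shipped with DALEX and the randomForest package; the explain() constructor and the model_performance() explainer are described in detail in Chapters 2 and 3, and their exact arguments may differ between DALEX versions.

```r
# A minimal sketch of the DALEX grammar (illustration only; see Chapter 2 for details).
# Assumes the apartments / apartmentsTest datasets shipped with DALEX
# and the randomForest package.
library("DALEX")
library("randomForest")

# Two models with very different structures
model_lm <- lm(m2.price ~ construction.year + surface + floor +
                 no.rooms + district, data = apartments)
model_rf <- randomForest(m2.price ~ construction.year + surface + floor +
                           no.rooms + district, data = apartments)

# One consistent entry point: wrap each model in an explainer
explainer_lm <- explain(model_lm, data = apartmentsTest,
                        y = apartmentsTest$m2.price, label = "lm")
explainer_rf <- explain(model_rf, data = apartmentsTest,
                        y = apartmentsTest$m2.price, label = "rf")

# The same explainer and the same generic plot() work for both models
mp_lm <- model_performance(explainer_lm)
mp_rf <- model_performance(explainer_rf)
plot(mp_lm, mp_rf)
```

Because both explainers share the same structure, adding a third model to the comparison requires only one more explain() call.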

Chapter 2 presents the overall architecture of the DALEX package. Chapter 3 presents explainers that explore global model performance, variable importance, and feature effects. Chapter 4 presents explainers that explore feature attributions for single predictions and validate the reliability of a model's predictions.

In this document we focus on three primary use cases for DALEX explainers: validation, understanding, and improvement.

1.1.2 To validate

Explainers presented in Section 3.1 help to understand model performance and to compare the performance of different models.

Explainers presented in Section 4.1 help to identify outliers or observations with particularly large residuals.

Explainers presented in Section 4.2 help to understand which key features influence model predictions.

1.1.3 To understand

Explainers presented in Section 3.2 help to understand which variables are the most important in the model. Explainers presented in Section 4.2 help to understand which features influence single predictions. They are useful in identifying the key influencers behind the black box.

Explainers presented in Section 3.3 help to understand how particular features affect model predictions.
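To make the "understand" use case more concrete, here is a minimal sketch that reuses the explainers from the earlier sketch and computes permutation-based variable importance, the explainer covered in Section 3.2. The variable_importance() function and the loss_root_mean_square loss follow the DALEX API described later in this book; newer DALEX releases expose the same explainer as model_parts().

```r
# A minimal sketch: which variables matter most in each model?
# Reuses explainer_lm and explainer_rf created in the earlier sketch.
vi_lm <- variable_importance(explainer_lm, loss_function = loss_root_mean_square)
vi_rf <- variable_importance(explainer_rf, loss_function = loss_root_mean_square)

# The same generic plot() compares variable importance across both models
plot(vi_lm, vi_rf)
```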

1.1.4 To improve

Explainers presented in Section 3.3 help to perform feature engineering based on model conditional responses.

Explainers presented in Section 4.2 help to understand which variables result in incorrect model decisions. These explainers are useful in identifying and correcting biases in the training data.