The R6 class SHAP calculates the famous Shapley values based on game theory for an instance to be explained. It is a model-agnostic method that can be applied to any predictive model. This means, in particular, that SHAP can be applied not only to objects of the Converter class but also to any other model. The only requirement is the argument pred_fun, which generates predictions with the model for given data. However, this function is pre-implemented for models created with nn_sequential, keras_model, neuralnet or Converter. Internally, the suggested package fastshap is utilized and applied to a data.frame.

The R6 class can also be initialized using the helper function run_shap, so that no prior knowledge of R6 classes is required.

Note: Even signal and image data are initially transformed into a data.frame using as.data.frame(), and only then is fastshap::explain applied. In other words, a custom pred_fun may need to convert the data.frame back into an array as necessary.
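For example, a custom pred_fun for a hypothetical image model expecting inputs of shape c(3, 32, 32) could reshape the data.frame before predicting. This is only a sketch and assumes that the column order produced by as.data.frame() matches the flattened array:

pred_fun <- function(newdata, ...) {
  # fastshap passes a data.frame; convert it back into the array format
  # expected by the model (here: 3 channels and 32 x 32 pixels)
  x <- array(as.matrix(newdata), dim = c(nrow(newdata), 3, 32, 32))
  torch::as_array(model(torch::torch_tensor(x)))
}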

Methods

Public methods

Method new()

Create a new instance of the SHAP R6 class. When initialized, the method SHAP is applied to the given data and the results are stored in the field result.

Usage

SHAP$new(
  model,
  data,
  data_ref,
  pred_fun = NULL,
  output_idx = NULL,
  output_label = NULL,
  channels_first = TRUE,
  input_dim = NULL,
  input_names = NULL,
  output_names = NULL,
  ...
)

Arguments

model

(any prediction model)
A fitted model for a classification or regression task that is intended to be interpreted. A Converter object can also be passed. In order for the package to know how to make predictions with the given model, a prediction function must also be passed with the argument pred_fun. However, for models created by nn_sequential, keras_model, neuralnet or Converter, this is already pre-implemented and does not need to be specified.

data

(array, data.frame or torch_tensor)
The individual instances to be explained by the method. These must have the same format as the input data of the passed model and have to be either a matrix, an array, a data.frame or a torch_tensor. If no value is specified, all instances in the dataset data_ref will be explained.
Note: For the model-agnostic methods, only models with a single input and output layer are allowed!

data_ref

(array, data.frame or torch_tensor)
The dataset to which the method is to be applied. These must have the same format as the input data of the passed model and have to be either a matrix, an array, a data.frame or a torch_tensor.
Note: For the model-agnostic methods, only models with a single input and output layer are allowed!

pred_fun

(function)
Prediction function for the model. This argument is only needed if model is not a model created by nn_sequential, keras_model, neuralnet or Converter. The first argument of pred_fun has to be newdata, e.g.,

function(newdata, ...) model(newdata)

output_idx

(integer, list or NULL)
These indices specify the output nodes for which the method is to be applied. In order to allow models with multiple output layers, there are the following possibilities to select the indices of the output nodes in the individual output layers:

  • An integer vector of indices: If the model has only one output layer, the values correspond to the indices of the output nodes, e.g., c(1,3,4) for the first, third and fourth output node. If there are multiple output layers, the indices of the output nodes from the first output layer are considered.

  • A list of integer vectors of indices: If the method is to be applied to output nodes from different layers, a list can be passed that specifies the desired indices of the output nodes for each output layer. Unwanted output layers have the entry NULL instead of a vector of indices, e.g., list(NULL, c(1,3)) for the first and third output node in the second output layer.

  • NULL (default): The method is applied to all output nodes in the first output layer but is limited to the first ten as the calculations become more computationally expensive for more output nodes.
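For illustration, the corresponding calls could look as follows (a sketch; model, data and data_ref stand for a fitted model with sufficiently many output nodes and suitable datasets):

# An integer vector: nodes 1 and 3 of the (first) output layer
shap <- run_shap(model, data, data_ref = data_ref, output_idx = c(1, 3))

# A list: nodes 1 and 3 of the second output layer (only relevant for
# models with multiple output layers)
shap <- run_shap(model, data, data_ref = data_ref,
                 output_idx = list(NULL, c(1, 3)))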

output_label

(character, factor, list or NULL)
These values specify the output nodes for which the method is to be applied. Only values that were previously passed with the argument output_names in the converter can be used. In order to allow models with multiple output layers, there are the following possibilities to select the names of the output nodes in the individual output layers:

  • A character vector or factor of labels: If the model has only one output layer, the values correspond to the labels of the output nodes named in the passed Converter object, e.g., c("a", "c", "d") for the first, third and fourth output node if the output names are c("a", "b", "c", "d"). If there are multiple output layers, the names of the output nodes from the first output layer are considered.

  • A list of character/factor vectors of labels: If the method is to be applied to output nodes from different layers, a list can be passed that specifies the desired labels of the output nodes for each output layer. Unwanted output layers have the entry NULL instead of a vector of labels, e.g., list(NULL, c("a", "c")) for the first and third output node in the second output layer.

  • NULL (default): The method is applied to all output nodes in the first output layer but is limited to the first ten as the calculations become more computationally expensive for more output nodes.
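Analogously, output nodes can be selected by their labels, for example (a sketch, reusing the example names "a" to "d" from above):

# Select the output nodes labeled "a" and "c" in the first output layer
shap <- run_shap(model, data, data_ref = data_ref, output_label = c("a", "c"))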

channels_first

(logical(1))
The channel position of the given data (argument data). If TRUE, the channel axis is placed at the second position between the batch size and the rest of the input axes, e.g., c(10,3,32,32) for a batch of ten images with three channels and a height and width of 32 pixels. Otherwise (FALSE), the channel axis is at the last position, i.e., c(10,32,32,3). If the data has no channel axis, use the default value TRUE.
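For illustration, the two conventions for a batch of ten 32 x 32 RGB images created with torch (a sketch with random values):

img_channels_first <- torch::torch_randn(10, 3, 32, 32)  # channels_first = TRUE
img_channels_last  <- torch::torch_randn(10, 32, 32, 3)  # channels_first = FALSE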

input_dim

(integer)
The model input dimension excluding the batch dimension. It can be specified as a vector of integers, but has to be in the format "channels first".

input_names

(character, factor or list)
The input names of the model excluding the batch dimension. For a model with a single input layer and input axis (e.g., for tabular data), the input names can be specified as a character vector or factor, e.g., for a dense layer with 3 input features use c("X1", "X2", "X3"). If the model input consists of multiple axes (e.g., for signal and image data), use a list of character vectors or factors for each axis in the format "channels first", e.g., use list(c("C1", "C2"), c("L1","L2","L3","L4","L5")) for a 1D convolutional input layer with signal length 5 and 2 channels.
Note: This argument is optional and otherwise the names are generated automatically. But if this argument is set, all found input names in the passed model will be disregarded.

output_names

(character or factor)
A character vector with the names for the output dimensions excluding the batch dimension, e.g., for a model with 3 output nodes use c("Y1", "Y2", "Y3"). Instead of a character vector you can also use a factor to set an order for the plots.
Note: This argument is optional and otherwise the names are generated automatically. But if this argument is set, all found output names in the passed model will be disregarded.

...

Other arguments forwarded to fastshap::explain.


Method clone()

The objects of this class are cloneable with this method.

Usage

SHAP$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.

Examples

#----------------------- Example 1: Torch -----------------------------------
library(torch)

# Create nn_sequential model and data
model <- nn_sequential(
  nn_linear(5, 12),
  nn_relu(),
  nn_linear(12, 2),
  nn_softmax(dim = 2)
)
data <- torch_randn(25, 5)

# Calculate Shapley values for the first 10 instances and set the
# feature and outcome names
shap <- SHAP$new(model, data[1:10, ], data_ref = data,
                 input_names = c("Car", "Cat", "Dog", "Plane", "Horse"),
                 output_names = c("Buy it!", "Don't buy it!"))

# You can also use the helper function `run_shap` for initializing
# an R6 SHAP object
shap <- run_shap(model, data[1:10, ], data_ref = data,
                 input_names = c("Car", "Cat", "Dog", "Plane", "Horse"),
                 output_names = c("Buy it!", "Don't buy it!"))

# Get the result as an array for the first two instances
get_result(shap)[1:2,, ]
#> , , Buy it!
#> 
#>               Car          Cat         Dog        Plane       Horse
#> [1,] -0.008805692  0.034070820 -0.02459812 -0.035755724 -0.03845069
#> [2,]  0.048597306 -0.002759516  0.00000000  0.002144039 -0.07650542
#> 
#> , , Don't buy it!
#> 
#>              Car          Cat          Dog       Plane       Horse
#> [1,] -0.02542257 -0.038758606 1.927227e-02  0.02593446 -0.01201844
#> [2,] -0.01413250 -0.008205563 6.130338e-05 -0.01806748  0.02788341
#> 

# Plot the result for both classes
plot(shap, output_idx = c(1, 2))


# Show the boxplot over all 10 instances
boxplot(shap, output_idx = c(1, 2))


# We can also forward some arguments to fastshap::explain, e.g., nsim to
# get more accurate values
shap <- run_shap(model, data[1:10, ], data_ref = data,
                 input_names = c("Car", "Cat", "Dog", "Plane", "Horse"),
                 output_names = c("Buy it!", "Don't buy it!"),
                 nsim = 10)

# Plot the boxplots again
boxplot(shap, output_idx = c(1, 2))


#----------------------- Example 2: Converter object --------------------------
# We can do the same with a Converter object (all feature and outcome names
# will be extracted by the SHAP method!)
conv <- convert(model,
                input_dim = c(5),
                input_names = c("Car", "Cat", "Dog", "Plane", "Horse"),
                output_names = c("Buy it!", "Don't buy it!"))

# Calculate Shapley values for the first 10 instances
shap <- run_shap(conv, data[1:10], data_ref = data)

# Plot the result for both classes
plot(shap, output_idx = c(1, 2))


#----------------------- Example 3: Other model -------------------------------
if (require("neuralnet") & require("ranger")) {
  library(neuralnet)
  library(ranger)
  data(iris)

  # Fit a random forest using the ranger package
  model <- ranger(Species ~ ., data = iris, probability = TRUE)

  # There is no pre-implemented predict function for ranger models, i.e.,
  # we have to define it ourselves.
  pred_fun <- function(newdata, ...) {
    predict(model, newdata, ...)$predictions
  }

  # Calculate Shapley values for the instances of index 1 and 111 and add
  # the outcome labels
  shap <- run_shap(model, iris[c(1, 111), -5], data_ref = iris[, -5],
                   pred_fun = pred_fun,
                   output_names = levels(iris$Species),
                   nsim = 10)

  # Plot the result for the first two classes and all selected instances
  plot(shap, data_idx = 1:2, output_idx = 1:2)

  # Get the result as a torch_tensor
  get_result(shap, "torch_tensor")
}
#> torch_tensor
#> (1,.,.) = 
#>   0.0259 -0.0199 -0.0529
#>   0.0014  0.0000 -0.0034
#>   0.3300 -0.1866 -0.1146
#>   0.3310 -0.2348 -0.1964
#> 
#> (2,.,.) = 
#>  -0.0368 -0.0272  0.0716
#>  -0.0006  0.0114 -0.0090
#>  -0.1842 -0.1129  0.3146
#>  -0.1398 -0.1397  0.3183
#> [ CPUFloatType{2,4,3} ]