Calculate SmoothGrad by averaging gradients over noisy samples of the input.
This function provides a lightweight alternative to run_smoothgrad
and SmoothGrad that is more efficient for torch models, as it avoids
model conversion overhead and uses native torch autograd directly. It can
therefore be used with any torch::nn_module, without restrictions on
architecture or layers.
torch_smoothgrad(
model,
data,
output_idx = NULL,
n = 50,
noise_level = 0.1,
times_input = FALSE,
dtype = "float",
return_object = FALSE
)

model
(nn_module)
A torch model.

data
(torch_tensor, array, or matrix)
Input data.

output_idx
(integer)
Index or indices of output nodes. Default: NULL (all outputs).

n
(integer(1))
Number of noisy samples. Default: 50.

noise_level
(numeric(1))
Standard deviation of the noise relative to the input range:
\(\sigma = (\max(x) - \min(x)) \cdot\) noise_level. Default: 0.1.

times_input
(logical(1))
If TRUE, multiplies the averaged gradients by the input (SmoothGrad×Input).
Default: FALSE.

dtype
(character(1))
Data type: "float" or "double". Default: "float".

return_object
(logical(1))
If TRUE, returns an InterpretingMethod object with
methods like plot() and get_result(). If FALSE (default), returns
a raw torch_tensor.
If return_object = FALSE (default): a torch_tensor
containing the smoothed gradients with shape (batch_size, ..., n_outputs).
If return_object = TRUE: an InterpretingMethod object.
SmoothGrad computes gradients for n noisy versions of each input
and averages them. With \(\epsilon_i \sim \mathcal{N}(0, \sigma^2)\):
$$\frac{1}{n} \sum_{i=1}^n \frac{\partial f(x + \epsilon_i)}{\partial x}$$
This reduces noise in gradient-based explanations.
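The averaging above can be sketched with native torch autograd alone. This is a minimal illustration, not the package implementation: `manual_smoothgrad` is a hypothetical name, and for brevity it sums over all output nodes in a single backward pass, whereas the real function keeps a separate output dimension.

```r
library(torch)

# Minimal SmoothGrad sketch using native torch autograd.
# Assumption: `manual_smoothgrad` and its signature are illustrative only.
manual_smoothgrad <- function(model, x, n = 50, noise_level = 0.1) {
  # sigma scaled to the input range, as described for noise_level
  sigma <- noise_level * (x$max()$item() - x$min()$item())
  grad_sum <- torch_zeros_like(x)
  for (i in seq_len(n)) {
    x_noisy <- (x + sigma * torch_randn_like(x))$requires_grad_(TRUE)
    out <- model(x_noisy)$sum()  # scalar, so one backward pass suffices
    g <- autograd_grad(out, x_noisy)[[1]]
    grad_sum <- grad_sum + g
  }
  grad_sum / n  # average over the n noisy samples
}

model <- nn_sequential(nn_linear(10, 3))
x <- torch_randn(5, 10)
sg <- manual_smoothgrad(model, x)  # same shape as x: (5, 10)
```

Because each noisy sample needs its own forward and backward pass, the loop costs roughly n times a single gradient computation, which is why n trades smoothness against runtime.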
Smilkov, D. et al. (2017). SmoothGrad: removing noise by adding noise. arXiv:1706.03825
Other direct torch methods:
torch_expgrad(),
torch_grad(),
torch_intgrad()
library(torch)
model <- nn_sequential(nn_linear(10, 3))
data <- torch_randn(5, 10)
# Standard SmoothGrad
smooth_grads <- torch_smoothgrad(model, data)
# SmoothGrad×Input
smooth_grads <- torch_smoothgrad(model, data, times_input = TRUE)
# More samples for smoother result
smooth_grads <- torch_smoothgrad(model, data, n = 100)
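# Wrap the result for downstream analysis (return_object = TRUE);
# get_result() here follows the InterpretingMethod interface
# described above and is assumed, not demonstrated
result <- torch_smoothgrad(model, data, return_object = TRUE)
res_array <- get_result(result)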