This class stores all layers converted to torch in a module that can be used like the original model (but torch-based). In addition, it provides other functions that are useful for interpreting individual predictions or explaining the entire model. This model is part of the Converter class and is the core of all the necessary calculations in the methods provided by this package.
ConvertedModel(modules_list, graph, input_nodes, output_nodes, dtype = "float")
modules_list (list)
A list of all accepted layers created by the Converter class during initialization.
graph (list)
The graph argument gives a way to pass an input through the model, which is especially relevant for non-sequential architectures. It can be seen as a list of steps specifying the order in which the layers from modules_list must be applied. Each step is a list with the following elements (see the sketch after this list):
$current_nodes
This list describes the current position and the multiplicity of the respective intermediate values while passing through the model. For example, list(1, 3, 3) means that in this step one output from the first layer and two outputs from the third layer (the numbers correspond to the list indices from the modules_list argument) are available for the calculation of the current layer with index used_node.
$used_node
The index of the layer from the modules_list argument which will be applied in this step.
$used_idx
The indices of the outputs from current_nodes which are used as inputs of the current layer (used_node).
$times
The frequency of the output value, i.e., how often the output is used as an input for subsequent layers.
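For illustration, a single step of the graph argument might look like the following sketch (the layer indices are hypothetical and not taken from a real model):

    # One step of the `graph` list (hypothetical indices, for illustration only):
    # the outputs of layers 1 and 3 are available (the latter twice), layer 4 is
    # applied next using the first and second of these outputs as inputs, and
    # its own output is consumed once by subsequent layers.
    step <- list(
      current_nodes = list(1, 3, 3),
      used_node = 4,
      used_idx = c(1, 2),
      times = 1
    )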
input_nodes (numeric)
A vector of layer indices describing the input layers, i.e., they are used as the starting point for the calculations.
output_nodes (numeric)
A vector of layer indices describing the output layers.
dtype (character(1))
The data type for all the calculations and defined tensors. Use either 'float' for torch::torch_float or 'double' for torch::torch_double.
forward()
The forward method of the whole model, i.e., it calculates the output \(y = f(x)\) of a given input \(x\). In doing so, all intermediate values are stored in the individual torch modules from modules_list.
self(x,
     channels_first = TRUE,
     save_input = FALSE,
     save_preactivation = FALSE,
     save_output = FALSE,
     save_last_layer = FALSE)
x
The input torch tensor for this model.
channels_first
If the input tensor x is given in the format 'channels first', use TRUE. Otherwise, if the channels are last, use FALSE and the input will be transformed into the format 'channels first'. Default: TRUE.
save_input
Logical value indicating whether the inputs of each layer are to be saved. Default: FALSE.
save_preactivation
Logical value indicating whether the preactivations of each layer are to be saved. Default: FALSE.
save_output
Logical value indicating whether the outputs of each layer are to be saved. Default: FALSE.
save_last_layer
Logical value indicating whether the inputs, preactivations and outputs of the last layer are to be saved. Default: FALSE.
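A usage sketch, assuming the converted model of a Converter object conv is stored in the field conv$model and the input is a channels-first image batch (both names are illustrative assumptions, not part of this documentation):

    library(torch)

    # Hypothetical example: a batch of 16 RGB images in channels-first format
    x <- torch_randn(16, 3, 32, 32)

    # Forward pass through the converted model; additionally store the
    # preactivations of each layer for later calculations
    y <- conv$model(x, channels_first = TRUE, save_preactivation = TRUE)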
update_ref()
This method updates the intermediate values in each module from the list modules_list for the reference input x_ref and returns its output in the same way as the forward method.
self$update_ref(x_ref,
     channels_first = TRUE,
     save_input = FALSE,
     save_preactivation = FALSE,
     save_output = FALSE,
     save_last_layer = FALSE)
x_ref
The reference input for the model.
channels_first
If the tensor x_ref is given in the format 'channels first', use TRUE. Otherwise, if the channels are last, use FALSE and the input will be transformed into the format 'channels first'. Default: TRUE.
save_input
Logical value indicating whether the inputs of each layer are to be saved. Default: FALSE.
save_preactivation
Logical value indicating whether the preactivations of each layer are to be saved. Default: FALSE.
save_output
Logical value indicating whether the outputs of each layer are to be saved. Default: FALSE.
save_last_layer
Logical value indicating whether the inputs, preactivations and outputs of the last layer are to be saved. Default: FALSE.
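A corresponding sketch for a reference input, again assuming the converted model is accessible as conv$model (a zero baseline is used here purely for illustration):

    # Hypothetical example: zero baseline as reference input
    x_ref <- torch_zeros(1, 3, 32, 32)

    # Store the reference intermediate values in each module alongside those
    # of the actual input from the preceding forward pass
    y_ref <- conv$model$update_ref(x_ref, save_preactivation = TRUE)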
set_dtype()
This method changes the data type for all the layers in modules_list. Use either 'float' for torch::torch_float or 'double' for torch::torch_double.
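For example, switching all layers and tensors to double precision might look like this sketch (the field conv$model is again an illustrative assumption):

    # Hypothetical example: use double precision for all subsequent calculations
    conv$model$set_dtype("double")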