Layers

Core layers

- template<typename data_t = real_t>
  class elsa::ml::Input : public elsa::ml::Layer<real_t>

  Input layer. A layer representing a network's input.

  Author: David Tellenbach

  Public Functions

  - Input(const VolumeDescriptor &inputDescriptor, index_t batchSize = 1, const std::string &name = "")

    Construct an input layer.

    Parameters:
    - inputDescriptor: Descriptor for the layer's input
    - batchSize: Batch size
    - name: The layer's name. This parameter is optional and defaults to "none"

  - index_t getBatchSize() const

    Returns: the batch size
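The following is a minimal construction sketch for an Input layer. It follows elsa's usual IndexVector_t/VolumeDescriptor pattern; the umbrella header "elsa.h" and the exact include layout of the ml module are assumptions, not taken from this section:

    #include "elsa.h" // assumed umbrella header; the ml module may require additional includes

    using namespace elsa;

    int main()
    {
        // Describe a 28x28 input volume
        IndexVector_t size(2);
        size << 28, 28;
        VolumeDescriptor inputDescriptor(size);

        // Input layer with a batch size of 16
        ml::Input<real_t> input(inputDescriptor, /* batchSize */ 16, "input");

        // getBatchSize() returns the batch size passed at construction, i.e. 16
        index_t batchSize = input.getBatchSize();
        return batchSize == 16 ? 0 : 1;
    }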
- template<typename data_t = real_t>
  class elsa::ml::Dense : public elsa::ml::Trainable<real_t>

  Just your regular densely-connected NN layer.

  Dense implements the operation $ \text{output} = \text{activation}(\text{input} \cdot \text{kernel} + \text{bias}) $, where activation is the element-wise activation function passed as the activation argument, kernel is a weights matrix created by the layer, and bias is a bias vector created by the layer (only applicable if use_bias is true).

  Author: David Tellenbach
  Public Functions

  - Dense(index_t units, Activation activation, bool useBias = true, Initializer kernelInitializer = Initializer::GlorotNormal, Initializer biasInitializer = Initializer::Zeros, const std::string &name = "")

    Construct a Dense layer.

    Parameters:
    - units: The number of units (neurons) of the layer. This is also the dimensionality of the output space.
    - activation: Activation function to use.
    - useBias: Whether the layer uses a bias vector.
    - kernelInitializer: Initializer for the kernel weights matrix. This parameter is optional and defaults to Initializer::GlorotNormal.
    - biasInitializer: Initializer for the bias vector. This parameter is optional and defaults to Initializer::Zeros.
    - name: The name of this layer.

  - Dense() = default

    Default constructor.

  - index_t getNumberOfUnits() const

    Returns: the number of units (also called neurons) of this layer.

  - void computeOutputDescriptor() override

    Compute this layer's output descriptor.
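A sketch of constructing a Dense layer and querying it; Activation::Relu is assumed to be among the values of the Activation enum, which is not listed in this section (includes and namespace as in the Input sketch above):

    // 128-unit dense layer with ReLU activation; the initializers keep their
    // documented defaults (GlorotNormal for the kernel, Zeros for the bias)
    ml::Dense<real_t> dense(/* units */ 128, ml::Activation::Relu);

    // getNumberOfUnits() reports the output dimensionality, i.e. 128
    index_t units = dense.getNumberOfUnits();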
Activation layers

Use these layers if you want to specify an activation function decoupled from a dense or convolutional layer. Otherwise, use the activation parameter of those layers directly.
- template<typename data_t = real_t>
  struct Sigmoid : public elsa::ml::ActivationBase<real_t>

  Sigmoid activation function, $ \text{sigmoid}(x) = 1 / (1 + \exp(-x)) $.

  Author: David Tellenbach

- template<typename data_t = real_t>
  struct Relu : public elsa::ml::ActivationBase<real_t>

  Applies the rectified linear unit activation function.

  With default values, this returns the standard ReLU activation $ \max(x, 0) $, the element-wise maximum of 0 and the input tensor.

  Author: David Tellenbach

- template<typename data_t = real_t>
  struct Tanh : public elsa::ml::ActivationBase<real_t>

  Hyperbolic tangent activation function.

  Author: David Tellenbach

- template<typename data_t = real_t>
  struct ClippedRelu : public elsa::ml::ActivationBase<real_t>

  Clipped ReLU activation function.

  Author: David Tellenbach

- template<typename data_t = real_t>
  struct Elu : public elsa::ml::ActivationBase<real_t>

  Exponential Linear Unit.

  ELUs have negative values, which pushes the mean of the activations closer to zero. Mean activations that are closer to zero enable faster learning, as they bring the gradient closer to the natural gradient. ELUs saturate to a negative value when the argument gets smaller. Saturation means a small derivative, which decreases the variation and the information that is propagated to the next layer.

  Author: David Tellenbach
- template<typename data_t = real_t>
  class elsa::ml::Softmax : public elsa::ml::Layer<real_t>

  A Softmax layer.

  Author: David Tellenbach
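A sketch of the decoupled style described above: a Dense layer constructed without a fused non-linearity, followed by standalone activation layers. Activation::Identity and the default constructors of the activation structs are assumptions; neither is documented in this section:

    // Dense layer whose fused activation is (assumed) the identity
    ml::Dense<real_t> dense(128, ml::Activation::Identity);

    // Standalone activation layers, constructed as separate network nodes
    ml::Relu<real_t> relu;       // element-wise max(x, 0)
    ml::Softmax<real_t> softmax; // normalizes outputs to a probability distribution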
Initializer

- enum elsa::ml::Initializer

  Initializer that can be used to initialize trainable parameters in a network layer.

  Values:
  - enumerator Ones

    Ones initialization. Initialize data with $ 1 $.

  - enumerator Zeros

    Zeros initialization. Initialize data with $ 0 $.

  - enumerator Uniform

    Uniform initialization. Initialize data with random samples from a uniform distribution on the interval $ [-1, 1] $.

  - enumerator Normal

    Normal initialization. Initialize data with random samples from a standard normal distribution, i.e., a normal distribution with mean $ 0 $ and standard deviation $ 1 $.

  - enumerator TruncatedNormal

    Truncated normal initialization. Initialize data with random samples from a truncated standard normal distribution, i.e., a normal distribution with mean $ 0 $ and standard deviation $ 1 $ where values further than $ 2 $ standard deviations from the mean are discarded.

  - enumerator GlorotUniform

    Glorot uniform initialization. Initialize data with random samples from a uniform distribution on the interval $ \left[ -\sqrt{\frac{6}{\text{fanIn} + \text{fanOut}}}, \sqrt{\frac{6}{\text{fanIn} + \text{fanOut}}} \right] $.

  - enumerator GlorotNormal

    Glorot normal initialization. Initialize data with random samples from a truncated normal distribution with mean $ 0 $ and standard deviation $ \sqrt{\frac{2}{\text{fanIn} + \text{fanOut}}} $.

  - enumerator HeNormal

    He normal initialization. Initialize data with random samples from a truncated normal distribution with mean $ 0 $ and standard deviation $ \sqrt{\frac{2}{\text{fanIn}}} $.

  - enumerator HeUniform

    He uniform initialization. Initialize data with random samples from a uniform distribution on the interval $ \left[ -\sqrt{\frac{6}{\text{fanIn}}}, \sqrt{\frac{6}{\text{fanIn}}} \right] $.

  - enumerator RamLak

    RamLak filter initialization. Initialize data with the values of the RamLak filter, the discrete version of the Ramp filter in the spatial domain. The values are given by

    \[ \text{data}[i] = \begin{cases} \frac{1}{i^2 \pi^2}, & i \text{ even} \\ \frac{1}{4}, & i = \frac{\text{size}-1}{2} \\ 0, & i \text{ odd}. \end{cases} \]
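Initializers are selected per layer at construction time. The sketch below overrides the documented Dense defaults to pair He initialization with a ReLU activation, a common combination; Activation::Relu is again an assumed enum value:

    // Explicitly select He-normal kernel initialization and zero-initialized biases
    ml::Dense<real_t> hidden(/* units */ 256, ml::Activation::Relu,
                             /* useBias */ true,
                             /* kernelInitializer */ ml::Initializer::HeNormal,
                             /* biasInitializer */ ml::Initializer::Zeros,
                             "hidden1");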
Convolutional layers

- enum elsa::ml::Padding

  Padding type for Pooling and Convolutional layers.

  Values:

  - enumerator Valid

    Do not pad the input.

  - enumerator Same

    Pad the input such that the output shape matches the input shape.
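For reference, the sketch below spells out the usual output-size arithmetic behind the two padding modes (the standard convolution convention, assumed rather than quoted from elsa's implementation). For Padding::Same with a stride of 1, the output size equals the input size, as documented above:

    // Spatial output size of a convolution for the two padding modes
    index_t convOutputSize(index_t inputSize, index_t filterSize, index_t stride, ml::Padding padding)
    {
        if (padding == ml::Padding::Valid)
            return (inputSize - filterSize) / stride + 1; // no padding
        else
            return (inputSize + stride - 1) / stride;     // Same: ceil(inputSize / stride)
    }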
- template<typename data_t = real_t>
  struct elsa::ml::Conv1D : public elsa::ml::Conv<real_t>

  1D convolution layer (e.g. temporal convolution).

  This layer creates a convolution kernel that is convolved with the layer input over a single spatial (or temporal) dimension. If useBias is true, a bias vector is added to the outputs.

  Author: David Tellenbach

  Public Functions

  - Conv1D(index_t numberOfFilters, const VolumeDescriptor &filterDescriptor, Activation activation, index_t strides = 1, Padding padding = Padding::Valid, bool useBias = true, Initializer kernelInitializer = Initializer::GlorotUniform, Initializer biasInitializer = Initializer::Zeros, const std::string &name = "")

    Construct a Conv1D layer.

    Parameters:
    - numberOfFilters: The number of filters for the convolution
    - filterDescriptor: A VolumeDescriptor describing the shape of all filters.
    - activation: The activation function finally applied to the outputs.
    - strides: The strides for the convolution. This parameter is optional and defaults to 1.
    - padding: The input padding that is applied before the convolution. This parameter is optional and defaults to Padding::Valid.
    - useBias: True if the layer uses a bias vector, false otherwise. This parameter is optional and defaults to true.
    - kernelInitializer: The initializer used for all convolutional filters. This parameter is optional and defaults to Initializer::GlorotUniform.
    - biasInitializer: The initializer used to initialize the bias vector. If useBias is false this has no effect. This parameter is optional and defaults to Initializer::Zeros.
    - name: The name of this layer.
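A construction sketch for Conv1D. The layout of the filter descriptor (filter width by channels) is an assumption; the documentation above only states that it describes the shape of all filters:

    // Filter descriptor: width 5, 1 channel (assumed layout)
    IndexVector_t filterSize(2);
    filterSize << 5, 1;
    VolumeDescriptor filterDescriptor(filterSize);

    // 16 filters, ReLU activation, stride 1, no padding
    ml::Conv1D<real_t> conv1d(/* numberOfFilters */ 16, filterDescriptor,
                              ml::Activation::Relu,
                              /* strides */ 1, ml::Padding::Valid);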
- template<typename data_t = real_t>
  struct elsa::ml::Conv2D : public elsa::ml::Conv<real_t>

  2D convolution layer (e.g. spatial convolution over images).

  This layer implements a spatial convolution layer with a given number of filters that is convolved over the spatial dimensions of an image. If useBias is true, a bias vector is added to the outputs.

  Author: David Tellenbach

  Public Functions

  - Conv2D(index_t numberOfFilters, const VolumeDescriptor &filterDescriptor, Activation activation, index_t strides = 1, Padding padding = Padding::Valid, bool useBias = true, Initializer kernelInitializer = Initializer::GlorotUniform, Initializer biasInitializer = Initializer::Zeros, const std::string &name = "")

    Construct a Conv2D layer.

    Parameters:
    - numberOfFilters: The number of filters for the convolution
    - filterDescriptor: A VolumeDescriptor describing the shape of all filters.
    - activation: The activation function finally applied to the outputs.
    - strides: The strides for the convolution. This parameter is optional and defaults to 1.
    - padding: The input padding that is applied before the convolution. This parameter is optional and defaults to Padding::Valid.
    - useBias: True if the layer uses a bias vector, false otherwise. This parameter is optional and defaults to true.
    - kernelInitializer: The initializer used for all convolutional filters. This parameter is optional and defaults to Initializer::GlorotUniform.
    - biasInitializer: The initializer used to initialize the bias vector. If useBias is false this has no effect. This parameter is optional and defaults to Initializer::Zeros.
    - name: The name of this layer.

  - Conv2D(index_t numberOfFilters, const std::array<index_t, 3> &filterSize, Activation activation, index_t strides = 1, Padding padding = Padding::Valid, bool useBias = true, Initializer kernelInitializer = Initializer::GlorotUniform, Initializer biasInitializer = Initializer::Zeros, const std::string &name = "")

    Construct a Conv2D layer.

    Parameters:
    - numberOfFilters: The number of filters for the convolution
    - filterSize: The size of all filters as an std::array, e.g. {w, h, c}.
    - activation: The activation function finally applied to the outputs.
    - strides: The strides for the convolution. This parameter is optional and defaults to 1.
    - padding: The input padding that is applied before the convolution. This parameter is optional and defaults to Padding::Valid.
    - useBias: True if the layer uses a bias vector, false otherwise. This parameter is optional and defaults to true.
    - kernelInitializer: The initializer used for all convolutional filters. This parameter is optional and defaults to Initializer::GlorotUniform.
    - biasInitializer: The initializer used to initialize the bias vector. If useBias is false this has no effect. This parameter is optional and defaults to Initializer::Zeros.
    - name: The name of this layer.
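A sketch using the array-based Conv2D constructor, which takes the filter size directly as {w, h, c} as documented above (add <array> to the includes of the earlier sketches; Activation::Relu remains an assumed enum value):

    // 32 filters of size 3x3 over a single channel, with "same" padding
    ml::Conv2D<real_t> conv2d(/* numberOfFilters */ 32,
                              /* filterSize {w, h, c} */ std::array<index_t, 3>{3, 3, 1},
                              ml::Activation::Relu,
                              /* strides */ 1, ml::Padding::Same);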
- template<typename data_t = real_t>
  struct elsa::ml::Conv3D : public elsa::ml::Conv<real_t>

  3D convolution layer (e.g. spatial convolution over volumes).

  This layer implements a spatial convolution layer with a given number of filters that is convolved over the spatial dimensions of a volume. If useBias is true, a bias vector is added to the outputs.

  Author: David Tellenbach

  Public Functions

  - Conv3D(index_t numberOfFilters, const VolumeDescriptor &filterDescriptor, Activation activation, index_t strides = 1, Padding padding = Padding::Valid, bool useBias = true, Initializer kernelInitializer = Initializer::GlorotUniform, Initializer biasInitializer = Initializer::Zeros, const std::string &name = "")

    Construct a Conv3D layer.

    Parameters:
    - numberOfFilters: The number of filters for the convolution
    - filterDescriptor: A VolumeDescriptor describing the shape of all filters.
    - activation: The activation function finally applied to the outputs.
    - strides: The strides for the convolution. This parameter is optional and defaults to 1.
    - padding: The input padding that is applied before the convolution. This parameter is optional and defaults to Padding::Valid.
    - useBias: True if the layer uses a bias vector, false otherwise. This parameter is optional and defaults to true.
    - kernelInitializer: The initializer used for all convolutional filters. This parameter is optional and defaults to Initializer::GlorotUniform.
    - biasInitializer: The initializer used to initialize the bias vector. If useBias is false this has no effect. This parameter is optional and defaults to Initializer::Zeros.
    - name: The name of this layer.
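The Conv3D constructor mirrors Conv2D; in the sketch below the only difference is that the filter descriptor covers three spatial dimensions plus channels (this layout is an assumption, as above):

    // Filter descriptor: 3x3x3 spatial filter, 1 channel (assumed layout)
    IndexVector_t filterSize3d(4);
    filterSize3d << 3, 3, 3, 1;
    VolumeDescriptor filterDescriptor3d(filterSize3d);

    ml::Conv3D<real_t> conv3d(/* numberOfFilters */ 8, filterDescriptor3d,
                              ml::Activation::Relu);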
- template<typename data_t = real_t>
  struct elsa::ml::Conv2DTranspose : public elsa::ml::ConvTranspose<real_t>

  Transposed convolution layer (sometimes called deconvolution).

  The need for transposed convolutions generally arises from the desire to use a transformation going in the opposite direction of a normal convolution, i.e., from something that has the shape of the output of some convolution to something that has the shape of its input, while maintaining a connectivity pattern that is compatible with said convolution.

  Author: David Tellenbach

  Public Functions

  - Conv2DTranspose(index_t numberOfFilters, const VolumeDescriptor &filterDescriptor, Activation activation, index_t strides = 1, Padding padding = Padding::Valid, bool useBias = true, Initializer kernelInitializer = Initializer::GlorotUniform, Initializer biasInitializer = Initializer::Zeros, const std::string &name = "")

    Construct a Conv2DTranspose layer.

    Parameters:
    - numberOfFilters: The number of filters for the convolution
    - filterDescriptor: A VolumeDescriptor describing the shape of all filters.
    - activation: The activation function finally applied to the outputs.
    - strides: The strides for the convolution. This parameter is optional and defaults to 1.
    - padding: The input padding that is applied before the convolution. This parameter is optional and defaults to Padding::Valid.
    - useBias: True if the layer uses a bias vector, false otherwise. This parameter is optional and defaults to true.
    - kernelInitializer: The initializer used for all convolutional filters. This parameter is optional and defaults to Initializer::GlorotUniform.
    - biasInitializer: The initializer used to initialize the bias vector. If useBias is false this has no effect. This parameter is optional and defaults to Initializer::Zeros.
    - name: The name of this layer.
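A sketch of a transposed convolution used the way the description above suggests, i.e. mapping from a convolution's output shape back towards its input shape. A stride of 2 is the typical choice for doubling spatial resolution in decoder-style networks; the filter-descriptor layout is again an assumption:

    // 3x3 transposed-convolution filters over 16 channels (assumed {w, h, c} layout)
    IndexVector_t tFilterSize(3);
    tFilterSize << 3, 3, 16;
    VolumeDescriptor tFilterDescriptor(tFilterSize);

    ml::Conv2DTranspose<real_t> deconv(/* numberOfFilters */ 16, tFilterDescriptor,
                                       ml::Activation::Relu,
                                       /* strides */ 2, ml::Padding::Same);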
Merging layers

- template<typename data_t = real_t>
  class elsa::ml::Sum : public elsa::ml::Merging<real_t>

  Sum layer.

  This layer takes a list of inputs, all of the same shape, and returns a single output, also of the same shape, which is the sum of all inputs.
- template<typename data_t = real_t>
  class elsa::ml::Concatenate : public elsa::ml::Merging<real_t>

  Layer that concatenates a list of inputs.

  The input of this layer is a list of DataContainers, all of the same shape except for the concatenation axis. It outputs a single DataContainer that is the concatenation of all inputs along that axis.

  Public Functions

  - Concatenate(index_t axis, std::initializer_list<Layer<data_t>*> inputs, const std::string &name = "")

    Construct a Concatenate layer.

  - void computeOutputDescriptor() override

    Compute this layer's output descriptor.
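A sketch of merging two branches with Concatenate; the constructor takes the concatenation axis and the input layers as pointers, as documented above. How the branches themselves are wired into a network is outside the scope of this snippet (Activation::Relu as before is an assumed enum value):

    // Two branches to be merged (construction only; wiring omitted)
    ml::Dense<real_t> branchA(64, ml::Activation::Relu);
    ml::Dense<real_t> branchB(64, ml::Activation::Relu);

    // Concatenate the branch outputs along axis 0
    ml::Concatenate<real_t> merged(/* axis */ 0, {&branchA, &branchB}, "merged");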
Reshaping layers

- template<typename data_t = real_t>
  class elsa::ml::Reshape : public elsa::ml::Layer<real_t>

  A reshape layer.

  Reshapes the input while leaving the data unchanged.

  Author: David Tellenbach

- template<typename data_t = real_t>
  class elsa::ml::Flatten : public elsa::ml::Layer<real_t>

  A flatten layer.

  Flattens the input while leaving the data unchanged.

  Author: David Tellenbach
- enum elsa::ml::Interpolation

  Type of the interpolation for upsampling.

  Values:

  - enumerator NearestNeighbour

    Perform nearest-neighbour interpolation.

  - enumerator Bilinear

    Perform bilinear interpolation.
- template<typename data_t = real_t>
  struct elsa::ml::UpSampling1D : public elsa::ml::UpSampling<real_t, LayerType::UpSampling1D, 1>

  Upsampling layer for 1D inputs.

  Repeats each temporal step size times along the time axis.

  Author: David Tellenbach

  Public Functions

  - UpSampling1D(const std::array<index_t, 1> &size, Interpolation interpolation = Interpolation::NearestNeighbour, const std::string &name = "")

    Construct an UpSampling1D layer.

    Parameters:
    - size: The upsampling factor for dim1.
    - interpolation: The interpolation used to upsample the input. This parameter is optional and defaults to Interpolation::NearestNeighbour.
    - name: The name of this layer.
- template<typename data_t = real_t>
  struct elsa::ml::UpSampling2D : public elsa::ml::UpSampling<real_t, LayerType::UpSampling2D, 2>

  Upsampling layer for 2D inputs.

  Repeats the rows and columns of the data by size[0] and size[1] respectively.

  Author: David Tellenbach

  Public Functions

  - UpSampling2D(const std::array<index_t, 2> &size, Interpolation interpolation = Interpolation::NearestNeighbour, const std::string &name = "")

    Construct an UpSampling2D layer.

    Parameters:
    - size: The upsampling factors for dim1 and dim2.
    - interpolation: The interpolation used to upsample the input. This parameter is optional and defaults to Interpolation::NearestNeighbour.
    - name: The name of this layer.
- template<typename data_t = real_t>
  struct elsa::ml::UpSampling3D : public elsa::ml::UpSampling<real_t, LayerType::UpSampling3D, 3>

  Upsampling layer for 3D inputs.

  Repeats the 1st, 2nd and 3rd dimensions of the data by size[0], size[1] and size[2] respectively.

  Author: David Tellenbach

  Public Functions

  - UpSampling3D(const std::array<index_t, 3> &size, Interpolation interpolation = Interpolation::NearestNeighbour, const std::string &name = "")

    Construct an UpSampling3D layer.

    Parameters:
    - size: The upsampling factors for dim1, dim2 and dim3.
    - interpolation: The interpolation used to upsample the input. This parameter is optional and defaults to Interpolation::NearestNeighbour.
    - name: The name of this layer.
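A construction sketch for the upsampling layers, using UpSampling2D as the example (includes and namespace as in the earlier sketches, plus <array>):

    // Double both spatial dimensions using nearest-neighbour interpolation
    ml::UpSampling2D<real_t> upsample(/* size */ std::array<index_t, 2>{2, 2},
                                      ml::Interpolation::NearestNeighbour,
                                      "upsample");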