Model

enum elsa::ml::MlBackend

Backend to execute model primitives.

Values:

enumerator Auto

Automatically choose the fastest backend available.

enumerator Dnnl

Use the Dnnl backend, also known as oneDNN, which is optimized for CPUs.

enumerator Cudnn

Use the Cudnn backend, which is optimized for Nvidia GPUs.

template<typename data_t = real_t, MlBackend Backend = MlBackend::Auto>
class elsa::ml::Model

A machine-learning model that can be used for training and inference.

Author

David Tellenbach

Template Parameters
  • data_t: The type of all coefficients used in the model. This parameter is optional and defaults to real_t.

  • Backend: The MlBackend that will be used for inference and training. This parameter is optional and defaults to MlBackend::Auto.
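
For example, a minimal sketch of the two ways to instantiate the template (the alias names are illustrative only):

    // Explicitly select float coefficients and the CPU-oriented Dnnl backend.
    using CpuModel = elsa::ml::Model<float, elsa::ml::MlBackend::Dnnl>;

    // Rely on the defaults: data_t = real_t and Backend = MlBackend::Auto.
    using DefaultModel = elsa::ml::Model<>;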

Public Functions

Model() = default

Default constructor.

Model(std::initializer_list<Input<data_t>*> inputs, std::initializer_list<Layer<data_t>*> outputs, const std::string &name = "model")

Construct a model by specifying a set of inputs and outputs.

Parameters
  • inputs: a list of input layers

  • outputs: a list of layers that serve as this model’s outputs

  • name: a name for this model
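
A minimal usage sketch, assuming in1 and in2 point to already constructed Input<elsa::real_t> layers and out1 and out2 point to the layers that terminate the network (all variable names here are illustrative only):

    // The model stores the raw layer pointers, so the layers are presumably
    // expected to outlive the model object.
    elsa::ml::Model<elsa::real_t> model({in1, in2}, {out1, out2}, "two-branch-model");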

Model(Input<data_t> *input, Layer<data_t> *output, const std::string &name = "model")

Construct a model by specifying a single input and a single output.

Parameters
  • input: an input layer

  • output: a layer that serves as this model’s output

  • name: a name for this model
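
A corresponding sketch for the single-input, single-output case, again with illustrative variable names:

    // input is an Input<elsa::real_t>*, output is a Layer<elsa::real_t>*;
    // both were created and connected beforehand.
    elsa::ml::Model<elsa::real_t> model(input, output, "simple-model");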

void compile(const Loss<data_t> &loss, Optimizer<data_t> *optimizer)

Compile the model by specifying a loss function and an optimizer.
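
A hedged sketch of the call: the loss is passed by const reference, the optimizer by pointer. ConcreteLoss and ConcreteOptimizer stand in for whatever Loss<data_t> and Optimizer<data_t> implementations the library provides; they are not confirmed type names.

    ConcreteLoss<elsa::real_t> loss;           // placeholder for a Loss<real_t> implementation
    ConcreteOptimizer<elsa::real_t> optimizer; // placeholder for an Optimizer<real_t> implementation
    model.compile(loss, &optimizer);           // loss by reference, optimizer by pointer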

const Loss<data_t> &getLoss() const

Get a constant reference to the model’s loss function.

Optimizer<data_t> *getOptimizer()

Get this model’s optimizer.

index_t getBatchSize() const

Get this model’s batch size.

std::string getName() const

Get this model’s name.

std::vector<Input<data_t>*> getInputs()

Get a list of this model’s inputs.

std::vector<Layer<data_t>*> getOutputs()

Get a list of this model’s outputs.

History fit(const std::vector<DataContainer<data_t>> &x, const std::vector<DataContainer<data_t>> &y, index_t epochs)

Train the model by providing inputs x and labels y.

Return

a Model::History object

Parameters
  • x: model input

  • y: labels

  • epochs: training epochs
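
A usage sketch, assuming x and y are std::vector<elsa::DataContainer<elsa::real_t>> instances holding the training inputs and the matching labels (how these containers are filled is up to the caller):

    // Train for ten epochs and keep the returned training history.
    auto history = model.fit(x, y, /* epochs: */ 10);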

DataContainer<data_t> predict(const DataContainer<data_t> &x)

Perform inference for a given input x.
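
A matching sketch for inference, reusing the first training input from the fit() example above (any DataContainer with a compatible shape should presumably work):

    elsa::DataContainer<elsa::real_t> prediction = model.predict(x.front());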

Friends

template<typename T, MlBackend B>
friend std::ostream &operator<<(std::ostream &os, const Model<T, B> &model)

Pretty print this model.
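
A minimal usage sketch (requires <iostream>):

    std::cout << model << '\n'; // writes a human-readable summary of the model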

struct History