models module¶
- class models.Model(layers)¶
Bases:
object
Model class.
- layers¶
List of layers of model.
- Type
list
- reg_loss¶
The sum of the regularization losses of all layers of the model.
- Type
float
- compiled¶
Flag showing if the model is compiled.
- Type
bool
- metrics_dict¶
The dictionary of the training and validation metric values over training.
- Type
None or dict
- loss_dict¶
The dictionary of the training and validation loss values over training.
- Type
None or dict
- cost_dict¶
The dictionary of the training and validation cost values over training. Note that cost = data loss + regularization loss.
- Type
None or dict
- metrics¶
The list of metrics for evaluating the model on the training and validation data during training.
- Type
None or list
- __init__(layers)¶
Constructor.
- forward(x)¶
Forward propagates signal through the model.
- backward(y)¶
Back-propagates signal through the model.
- get_reg_loss()¶
Returns the overall regularization loss of the layers in the model.
- get_gradients()¶
Returns the gradients of all parameters of all layers.
- get_trainable_params()¶
Returns all trainable parameters of all layers.
- set_trainable_params(trainable_params)¶
Sets all trainable parameters of all layers.
- compile_model(optimizer, loss, metrics)¶
Compiles the model.
- fit(x_train, y_train, x_val, y_val, n_epochs, batch_size)¶
Fits the model to the data.
- __repr__()¶
Returns the string representation of the class.
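The overall workflow - construct a Model from a list of layers, compile it, then propagate signal forward - can be sketched as below. The Layer class, its scaling behavior, and the string arguments to compile_model are illustrative stand-ins, not the library's actual classes:

```python
class Layer:
    """Toy layer that scales its input by a fixed factor (illustrative only)."""
    def __init__(self, factor):
        self.factor = factor

    def forward(self, x):
        return [v * self.factor for v in x]


class Model:
    """Minimal sketch of the documented Model class."""
    def __init__(self, layers):
        self.layers = layers
        self.compiled = False

    def compile_model(self, optimizer, loss, metrics):
        self.optimizer, self.loss, self.metrics = optimizer, loss, metrics
        self.compiled = True

    def forward(self, x):
        # Iterate over layers in ascending order in the self.layers list.
        for layer in self.layers:
            x = layer.forward(x)
        return x


model = Model([Layer(2.0), Layer(3.0)])
model.compile_model(optimizer="sgd", loss="cross_entropy", metrics=["accuracy"])
scores = model.forward([1.0, 2.0])
print(scores)  # [6.0, 12.0]
```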
- backward(y, **params)¶
Back-propagates signal through the model.
- Parameters
y (numpy.ndarray) – Labels of the input data to the model; shape is (batch_size, ).
- Returns
- Return type
Notes
Iterates over layers in descending order in the self.layers list.
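The descending-order traversal can be sketched as follows: the gradient signal enters at the last layer and flows toward the first. The ToyLayer class and its pass-through backward method are hypothetical stand-ins:

```python
class ToyLayer:
    """Toy layer that records the order in which backward visits it."""
    def __init__(self, name, log):
        self.name, self.log = name, log

    def backward(self, g):
        self.log.append(self.name)  # record visit order
        return g  # pass the gradient signal upstream unchanged


log = []
layers = [ToyLayer("first", log), ToyLayer("middle", log), ToyLayer("last", log)]

g = "dL/dscores"
for layer in reversed(layers):  # descending order over self.layers
    g = layer.backward(g)

print(log)  # ['last', 'middle', 'first']
```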
- compile_model(optimizer, loss, metrics)¶
Compiles the model.
- Parameters
- Returns
- Return type
Notes
Sets self.compiled to True. If compile_model is not called first, self.fit will raise an AssertionError.
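The compiled-flag guard can be sketched as below; the method bodies are illustrative, not the library's code:

```python
class Model:
    """Sketch of the compile-before-fit contract."""
    def __init__(self, layers):
        self.layers = layers
        self.compiled = False

    def compile_model(self, optimizer, loss, metrics):
        self.optimizer, self.loss, self.metrics = optimizer, loss, metrics
        self.compiled = True  # fit checks this flag

    def fit(self, x_train, y_train):
        assert self.compiled, "Call compile_model before fit."
        # ... training loop would go here ...
        return {}


model = Model(layers=[])
try:
    model.fit(x_train=None, y_train=None)
except AssertionError as e:
    print(e)  # Call compile_model before fit.
```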
- compute_metrics(y, scores, postfix=None)¶
- fit(x_train, y_train, x_val, y_val, n_epochs, batch_size, verbose, aug_func)¶
Fits the model to the data.
- Parameters
x_train (numpy.ndarray) – Training data of shape (batch_size, in_dim), where in_dim is the input dimension of the first layer of the model.
y_train (numpy.ndarray) – True labels of the training data; shape is (batch_size, ).
x_val (numpy.ndarray) – Validation data of shape (batch_size, in_dim), where in_dim is the input dimension of the first layer of the model.
y_val (numpy.ndarray) – True labels of the validation data; shape is (batch_size, ).
n_epochs (int) – The number of epochs to train for.
batch_size (int) – The batch size of the mini-batch gradient descent algorithm; x_train.shape[0] has to be divisible by batch_size.
verbose (int) – The degree to which training progress is printed to the console. 2: print all, 1: print some, 0: do not print.
aug_func (func) – Data augmentation function using imgaug.
- Returns
The history of training and validation loss, metrics, and learning rates. dict is {**self.metrics_dict, **self.loss_dict, **self.lr_dict}
- Return type
dict
Notes
None
- Raises
AssertionError – If the model has not yet been compiled with the self.compile_model method.
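The divisibility requirement on batch_size exists because the mini-batch loop slices the training set into equally sized batches with no remainder. A pure-Python sketch (the helper name iter_minibatches is an assumption, not the library's API):

```python
def iter_minibatches(x_train, y_train, batch_size):
    """Yield equally sized (x, y) mini-batches; requires exact divisibility."""
    n = len(x_train)
    assert n % batch_size == 0, "x_train.shape[0] must be divisible by batch_size"
    for start in range(0, n, batch_size):
        yield x_train[start:start + batch_size], y_train[start:start + batch_size]


x = list(range(6))
y = [0, 1, 0, 1, 0, 1]
batches = list(iter_minibatches(x, y, batch_size=2))
print(len(batches))  # 3
print(batches[0])    # ([0, 1], [0, 1])
```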
- fit_rnn(x_train, y_train, x_val, y_val, n_epochs, batch_size, verbose, callbacks)¶
- forward(x, **params)¶
Forward propagates signal through the model.
- Parameters
x (numpy.ndarray) – Input data to the model; shape is (batch_size, in_dim), where in_dim is the input dimension of the first layer of the model.
params (dict) – Dict of params for forward pass such as train or test mode, seed, etc.
- Returns
scores – Activation of the last layer of the model - the scores of the network. Shape is (batch_size, out_dim), where out_dim is the output dimension of the last layer of the model - usually the same as the number of classes.
- Return type
numpy.ndarray
Notes
Iterates over layers in ascending order in the self.layers list.
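One plausible use of the params dict is threading mode information (e.g. train vs. test for dropout-style layers) through the forward pass. The layer below and the "mode" key are illustrative assumptions:

```python
class ScaleAtTest:
    """Toy layer that halves activations only in test mode (illustrative)."""
    def forward(self, x, **params):
        if params.get("mode") == "test":
            return [v * 0.5 for v in x]
        return x


layer = ScaleAtTest()
print(layer.forward([2.0, 4.0], mode="train"))  # [2.0, 4.0]
print(layer.forward([2.0, 4.0], mode="test"))   # [1.0, 2.0]
```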
- get_gradients()¶
Returns the gradients of all parameters of all layers.
- Parameters
None –
- Returns
grads – The list of dictionaries of the gradients of all parameters of all layers of the model. At idx is the dictionary of gradients of layer idx in the self.layers list. Each dictionary has two keys - dw and db.
- Return type
list
Notes
Iterates over layers in ascending order in the self.layers list.
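The grads structure - one dict per layer, keyed dw and db, in the same order as self.layers - can be sketched with a minimal layer stub (DenseStub is a hypothetical stand-in):

```python
class DenseStub:
    """Stand-in layer holding precomputed gradients."""
    def __init__(self, dw, db):
        self.dw, self.db = dw, db

    def get_gradients(self):
        return {"dw": self.dw, "db": self.db}


layers = [DenseStub([0.1], [0.2]), DenseStub([0.3], [0.4])]
grads = [layer.get_gradients() for layer in layers]  # ascending order

print(grads[0]["dw"])            # [0.1]
print(sorted(grads[1].keys()))   # ['db', 'dw']
```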
- get_reg_loss()¶
Returns the overall regularization loss of the layers in the model.
- Parameters
None –
- Returns
The sum of the regularization losses of all layers of the model.
- Return type
float
Notes
None
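The overall regularization loss is simply the sum of the per-layer reg_loss values; a sketch, assuming each layer exposes a reg_loss attribute as described in the attributes above:

```python
class LayerStub:
    """Stand-in layer exposing a regularization loss."""
    def __init__(self, reg_loss):
        self.reg_loss = reg_loss


def get_reg_loss(layers):
    # Sum of the regularization losses of all layers of the model.
    return sum(layer.reg_loss for layer in layers)


print(get_reg_loss([LayerStub(0.25), LayerStub(0.5)]))  # 0.75
```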
- get_trainable_params()¶
Returns all trainable parameters of all layers.
- Parameters
None –
- Returns
trainable_params – The list of dictionaries of the trainable parameters of all layers of the model. At idx is the dictionary of trainable parameters of layer idx in the self.layers list. Each dictionary has two keys - w and b.
- Return type
list
Notes
Iterates over layers in ascending order in the self.layers list.
- set_trainable_params(trainable_params)¶
Sets all trainable parameters of all layers.
- Parameters
trainable_params (list) – The list of dictionaries of the trainable parameters of all layers of the model. At idx is the dictionary of trainable parameters of layer idx in the self.layers list. Each dictionary has two keys - w and b.
- Returns
- Return type
Notes
Iterates over layers in ascending order in the self.layers list.
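The get/set pair forms a round trip: get_trainable_params extracts one {w, b} dict per layer, the optimizer updates the values, and set_trainable_params writes the same structure back in the same order. A sketch with a hypothetical DenseStub layer:

```python
class DenseStub:
    """Stand-in layer holding trainable parameters w and b."""
    def __init__(self, w, b):
        self.w, self.b = w, b


def get_trainable_params(layers):
    # One dict per layer, keys w and b, ascending order over layers.
    return [{"w": layer.w, "b": layer.b} for layer in layers]


def set_trainable_params(layers, trainable_params):
    # Write the same structure back, ascending order over layers.
    for layer, p in zip(layers, trainable_params):
        layer.w, layer.b = p["w"], p["b"]


layers = [DenseStub([1.0], [0.0])]
params = get_trainable_params(layers)
params[0]["w"] = [0.9]  # e.g. the result of an optimizer step
set_trainable_params(layers, params)
print(layers[0].w)  # [0.9]
```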