activations module¶
- class activations.Activation¶
Bases:
object
Activation parent class.
- cache¶
Run-time cache of attributes such as gradients.
- Type
dict
- __init__()¶
Constructor.
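A minimal sketch of what this parent class plausibly looks like, assuming nothing beyond the documented cache attribute (hypothetical code, not the module's actual source):

    class Activation:
        """Activation parent class."""

        def __init__(self):
            # Run-time cache of attributes such as gradients, populated
            # by the subclasses' forward and backward passes.
            self.cache = {}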
- class activations.LinearActivation¶
Bases:
activations.Activation
Linear activation. Usually followed by CategoricalHingeLoss. Inherits everything from class Activation.
- cache¶
Run-time cache of attributes such as gradients.
- Type
dict
- __init__()¶
Constructor.
- forward(z)¶
Activates the linear transformation of the layer and forward propagates the activation. Activation is linear.
- backward(g)¶
Backpropagates the incoming gradient into the layer, based on the linear activation.
- __repr__()¶
Returns the string representation of the class.
- backward(g)¶
Backpropagates the incoming gradient into the layer, based on the linear activation.
- Parameters
g (numpy.ndarray) – Incoming gradient to the activation. Shape is unknown here, but will usually be (batch size, this layer output dim = next layer input dim)
- Returns
Gradient of activation. Shape is unknown here, but will usually be (batch size, this layer output dim = next layer input dim)
- Return type
numpy.ndarray
- forward(z)¶
Activates the linear transformation of the layer and forward propagates the activation. Activation is linear.
- Parameters
z (numpy.ndarray) – Linear transformation of layer. Shape is unknown here, but will usually be (batch size, this layer output dim = next layer input dim)
- Returns
Linear activation.
- Return type
numpy.ndarray
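Building on the Activation sketch above, the documented behaviour reduces to the identity map in both directions. A hypothetical implementation (not the module's actual source; the __repr__ string is an assumption):

    class LinearActivation(Activation):
        def forward(self, z):
            # Identity: the activation equals the linear transformation.
            return z

        def backward(self, g):
            # The local derivative is 1, so the gradient passes through
            # unchanged.
            return g

        def __repr__(self):
            return "LinearActivation"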
- class activations.ReLUActivation¶
Bases:
activations.Activation
ReLU activation. Can be followed by virtually anything. Inherits everything from class Activation.
- cache¶
Run-time cache of attributes such as gradients.
- Type
dict
- __init__()¶
Constructor.
- forward(z)¶
Activates the linear transformation of the layer and forward propagates the activation. Activation is rectified linear.
- backward(g_in)¶
Backpropagates the incoming gradient into the layer, based on the rectified linear activation.
- __repr__()¶
Returns the string representation of the class.
- backward(g_in)¶
Backpropagates the incoming gradient into the layer, based on the rectified linear activation.
- Parameters
g_in (numpy.ndarray) – Incoming gradient to the activation. Shape is unknown here, but will usually be (batch size, this layer output dim = next layer input dim)
- Returns
Gradient of activation. Shape is unknown here, but will usually be (batch size, this layer output dim = next layer input dim)
- Return type
numpy.ndarray
- forward(z)¶
Activates the linear transformation of the layer and forward propagates the activation. Activation is rectified linear.
- Parameters
z (numpy.ndarray) – Linear transformation of layer. Shape is unknown here, but will usually be (batch size, this layer output dim = next layer input dim)
- Returns
ReLU activation.
- Return type
numpy.ndarray
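In the same hypothetical sketch style, a ReLU activation only needs to cache the pre-activation z so that backward can mask the incoming gradient (the cache key "z" is an assumption):

    import numpy as np

    class ReLUActivation(Activation):
        def forward(self, z):
            # Cache the pre-activation; backward needs it to mask the gradient.
            self.cache["z"] = z
            return np.maximum(0, z)

        def backward(self, g_in):
            # ReLU's derivative is 1 where z > 0 and 0 elsewhere.
            return g_in * (self.cache["z"] > 0)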
- class activations.SoftmaxActivation¶
Bases:
activations.Activation
Softmax activation. Usually activation of last layer and forward propagates into a CategoricalCrossEntropyLoss. Inherits everything from class Activation.
- cache¶
Run-time cache of attributes such as gradients.
- Type
dict
- __init__()¶
Constructor.
- forward(z)¶
Activates the linear transformation of the layer and forward propagates the activation. Activation is softmax.
- backward(g_in)¶
Backpropagates the incoming gradient into the layer, based on the softmax activation.
- __repr__()¶
Returns the string representation of the class.
- backward(g_in)¶
Backpropagates the incoming gradient into the layer, based on the softmax activation.
- Parameters
g_in (numpy.ndarray) – Incoming gradient to the activation. Shape is unknown here, but will usually be (batch size, )
- Returns
Gradient of activation. Shape is unknown here, but will usually be (batch size, )
- Return type
numpy.ndarray
- forward(z)¶
Activates the linear transformation of the layer and forward propagates the activation. Activation is softmax.
- Parameters
z (numpy.ndarray) – Linear transformation of layer. Shape is unknown here, but will usually be (batch size, this layer output dim = number of classes)
- Returns
Softmax activation. Shape is (batch size, this layer output dim = number of classes)
- Return type
numpy.ndarray
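The forward pass can be sketched directly; subtracting the row-wise maximum before exponentiating is the standard numerically stable variant and leaves the result unchanged. The (batch size, ) gradient shape documented above suggests the actual backward is specialised for CategoricalCrossEntropyLoss, so the hypothetical sketch below instead shows the generic per-sample Jacobian product and assumes g_in has shape (batch size, number of classes):

    import numpy as np

    class SoftmaxActivation(Activation):
        def forward(self, z):
            # Softmax is shift-invariant, so subtract the row max for
            # numerical stability.
            e = np.exp(z - z.max(axis=1, keepdims=True))
            a = e / e.sum(axis=1, keepdims=True)
            self.cache["a"] = a
            return a

        def backward(self, g_in):
            # Per-sample Jacobian J = diag(a) - a a^T, hence
            # J @ g = a * (g - sum(a * g)).
            a = self.cache["a"]
            return a * (g_in - (a * g_in).sum(axis=1, keepdims=True))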
- class activations.TanhActivation¶
Bases:
activations.Activation
Tanh activation. Can be followed by virtually anything. Inherits everything from class Activation.
- cache¶
Run-time cache of attributes such as gradients.
- Type
dict
- __init__()¶
Constructor.
- forward(z)¶
Activates the linear transformation of the layer and forward propagates the activation. Activation is tanh.
- backward(g_in)¶
Backpropagates the incoming gradient into the layer, based on the tanh activation.
- __repr__()¶
Returns the string representation of the class.
- backward(g_in)¶
Backpropagates the incoming gradient into the layer, based on the tanh activation.
- Parameters
g_in (numpy.ndarray) – Incoming gradient to the activation. Shape is unknown here, but will usually be (batch size, this layer output dim = next layer input dim)
- Returns
Gradient of activation. Shape is unknown here, but will usually be (batch size, this layer output dim = next layer input dim)
- Return type
numpy.ndarray
- forward(z)¶
Activates the linear transformation of the layer and forward propagates the activation. Activation is tanh.
- Parameters
z (numpy.ndarray) – Linear transformation of layer. Shape is unknown here, but will usually be (batch size, this layer output dim = next layer input dim)
- Returns
Tanh activation.
- Return type
numpy.ndarray
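A final hypothetical sketch: caching the tanh output is enough, since the derivative can be written in terms of it.

    import numpy as np

    class TanhActivation(Activation):
        def forward(self, z):
            # Cache the output: d tanh(z)/dz = 1 - tanh(z)**2.
            a = np.tanh(z)
            self.cache["a"] = a
            return a

        def backward(self, g_in):
            a = self.cache["a"]
            return g_in * (1.0 - a ** 2)

Chained together, a layer would call forward(z) on its activation during the forward pass and feed the upstream gradient through backward(g_in) during backpropagation, e.g.:

    act = TanhActivation()
    a = act.forward(np.random.randn(32, 64))  # (batch size, output dim)
    g = act.backward(np.ones_like(a))         # same shape as a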