Documentation ¶
Index ¶
- func Test()
- type Acts
- type Args
- type Elman
- func (n *Elman) BPTT(input, expected *m.Dense) (dErrdIH, dErrdHH, dErrdHO *m.Dense)
- func (n *Elman) Forward(input *m.Dense) (sums []*Sums, acts []*Acts)
- func (n *Elman) GetError(prevErrs, currSums *m.Vector, w *m.Dense) *m.Vector
- func (n *Elman) GetHidden(prevHidden, sample *m.Vector) (sums, acts *m.Vector)
- func (n *Elman) GetOutError(outActs, outSums, expected *m.Vector) *m.Vector
- func (n *Elman) GetOutput(currHidden *m.Vector) (sums, acts *m.Vector)
- func (n *Elman) RunEpochs(numEpochs int, input, expected *m.Dense)
- type Sums
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
Types ¶
type Acts ¶
Acts keeps the activations of each neuron in the output and hidden layers. So, Acts.Hid.At(2, 0) is the activation of the 3rd neuron in the hidden layer.
type Elman ¶
type Elman struct {
	NumInp int
	NumHid int
	NumOut int
	Depth  int      // Number of steps down the unfolded network
	IH     *m.Dense // Weights from input to hidden layer
	HH     *m.Dense // Weights from hidden to hidden layer
	HO     *m.Dense // Weights from hidden to output layer
	// contains filtered or unexported fields
}
Elman is a simple recurrent neural network which has recurrent connections from the hidden-layer neurons at time step (t-1) to the hidden-layer neurons at time step (t). We use this simplified model (without the option to add an arbitrary number of hidden layers) to reduce the number of obscure indices and to work only with named entities. We also use no biases (again, for simplicity).
func (*Elman) BPTT ¶
BPTT implements the Backpropagation Through Time algorithm to learn the network's weights. As BPTT is a variation of standard backpropagation, it might be useful to compare this code with the basicNN code and look for similarities. Note that there is no separate Update() method; all weights are updated "on the go".
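The "on the go" update can be sketched like this. This is a deliberately simplified stand-in, not the package's implementation: it uses plain slices instead of *m.Dense, covers only the hidden-to-output weights, and the helper name updateHO and the learning rate lr are assumptions.

```go
package main

import "fmt"

// updateHO sketches the "updated on the go" idea: as soon as the output
// errors for one time step are known, each hidden-to-output weight is
// immediately adjusted against its gradient (outErr_i * hidAct_j),
// instead of waiting for a separate Update() pass.
func updateHO(ho [][]float64, outErrs, hidActs []float64, lr float64) {
	for i, e := range outErrs {
		for j, a := range hidActs {
			ho[i][j] -= lr * e * a // gradient-descent step for one weight
		}
	}
}

func main() {
	ho := [][]float64{{0.2, 0.4}}
	updateHO(ho, []float64{0.1}, []float64{1.0, 0.5}, 0.5)
	fmt.Println(ho)
}
```

In full BPTT the same kind of step is applied at every level of the unfolded network, using the errors GetError propagates backwards through time.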
func (*Elman) Forward ¶
Forward accumulates sums and activations for each layer for each training sample.
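The accumulation can be sketched as a loop that carries the hidden state from one sample to the next. This is a plain-slice sketch, not the package's code: step and forward are hypothetical stand-ins for GetHidden and Forward, sums and the output layer are omitted for brevity.

```go
package main

import (
	"fmt"
	"math"
)

// step is one hidden-state update: h = tanh(IH·x + HH·hPrev).
func step(ih, hh [][]float64, x, hPrev []float64) []float64 {
	h := make([]float64, len(ih))
	for i := range ih {
		var s float64
		for j, v := range x {
			s += ih[i][j] * v // input contribution
		}
		for j, v := range hPrev {
			s += hh[i][j] * v // recurrent contribution
		}
		h[i] = math.Tanh(s)
	}
	return h
}

// forward carries the hidden state across the whole input sequence and
// records each step's activations, much as Forward accumulates []*Acts.
func forward(ih, hh [][]float64, inputs [][]float64) [][]float64 {
	acts := make([][]float64, 0, len(inputs))
	h := make([]float64, len(ih)) // zero initial hidden state
	for _, x := range inputs {
		h = step(ih, hh, x, h)
		acts = append(acts, h)
	}
	return acts
}

func main() {
	ih := [][]float64{{1, 0}, {0, 1}}
	hh := [][]float64{{0.5, 0}, {0, 0.5}}
	fmt.Println(forward(ih, hh, [][]float64{{1, 0}, {0, 1}}))
}
```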
func (*Elman) GetError ¶
GetError returns errors for each neuron in any single layer (L) using the errors in the layer just after it (L+1). The errors of (L+1) are propagated backwards to (L) using the same (L-to-L+1) weights that we used when passing (L)-activations to (L+1). Of course, we need to get a transposed version of (L-to-L+1) weights to make the matrix operations possible. After this backward-pass we multiply the (L)-errors by sigmoidPrime(L-sums), just as in GetOutError().
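The backward pass above can be sketched with plain slices (the package itself works on *m.Vector and *m.Dense; getError here is a hypothetical stand-in). Summing over the rows of w for a fixed column is exactly the transposed multiplication the doc describes.

```go
package main

import (
	"fmt"
	"math"
)

// sigmoidPrime is the sigmoid derivative evaluated at a pre-activation sum.
func sigmoidPrime(x float64) float64 {
	s := 1 / (1 + math.Exp(-x))
	return s * (1 - s)
}

// getError backpropagates the (L+1)-errors to layer L:
// err_L[j] = sigmoidPrime(sums_L[j]) * Σ_i w[i][j] * err_{L+1}[i],
// where w[i][j] is the weight from neuron j in L to neuron i in (L+1).
func getError(prevErrs, currSums []float64, w [][]float64) []float64 {
	errs := make([]float64, len(currSums))
	for j := range errs {
		var backprop float64
		for i := range prevErrs {
			backprop += w[i][j] * prevErrs[i] // transposed multiplication
		}
		errs[j] = backprop * sigmoidPrime(currSums[j])
	}
	return errs
}

func main() {
	w := [][]float64{{1, -1}} // one neuron in (L+1), two in L
	errs := getError([]float64{0.5}, []float64{0, 0}, w)
	fmt.Println(errs) // sigmoidPrime(0) = 0.25, so [0.125 -0.125]
}
```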
func (*Elman) GetHidden ¶
GetHidden calculates current hidden state as follows:
- Multiplies inputToHidden matrix by the input sample (same as getting a weighted sum of inputs for each hidden neuron);
- Multiplies hiddenToHidden matrix by previous hidden layer (same as getting a weighted sum of inputs for each hidden neuron);
- Sums the results of steps 1 and 2 and applies the activation function (hyperbolic tangent in this case).
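The three steps above can be sketched with plain slices (the package works on *m.Dense and *m.Vector; getHidden here is a hypothetical stand-in):

```go
package main

import (
	"fmt"
	"math"
)

// getHidden computes the current hidden state: sums = IH·sample + HH·prevHidden,
// acts = tanh(sums), following the three steps in the doc comment.
func getHidden(ih, hh [][]float64, sample, prevHidden []float64) (sums, acts []float64) {
	n := len(ih)
	sums = make([]float64, n)
	acts = make([]float64, n)
	for i := 0; i < n; i++ {
		for j, x := range sample {
			sums[i] += ih[i][j] * x // step 1: weighted sum of inputs
		}
		for j, h := range prevHidden {
			sums[i] += hh[i][j] * h // step 2: weighted sum of previous hidden state
		}
		acts[i] = math.Tanh(sums[i]) // step 3: activation
	}
	return sums, acts
}

func main() {
	ih := [][]float64{{1, 0}, {0, 1}}
	hh := [][]float64{{0.5, 0}, {0, 0.5}}
	sums, acts := getHidden(ih, hh, []float64{1, 0}, []float64{0, 0})
	fmt.Println(sums, acts)
}
```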
func (*Elman) GetOutError ¶
GetOutError returns the output layer error as (output activations − expected activations) ⊙ sigmoidPrime(output sums).
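The formula maps directly onto an element-wise loop. This is a plain-slice sketch (the package works on *m.Vector; getOutError and the helpers are hypothetical stand-ins):

```go
package main

import (
	"fmt"
	"math"
)

func sigmoid(x float64) float64 { return 1 / (1 + math.Exp(-x)) }

// sigmoidPrime is the sigmoid derivative evaluated at a pre-activation sum.
func sigmoidPrime(x float64) float64 {
	s := sigmoid(x)
	return s * (1 - s)
}

// getOutError mirrors GetOutError element-wise:
// err_i = (outActs_i - expected_i) * sigmoidPrime(outSums_i).
func getOutError(outActs, outSums, expected []float64) []float64 {
	errs := make([]float64, len(outActs))
	for i := range errs {
		errs[i] = (outActs[i] - expected[i]) * sigmoidPrime(outSums[i])
	}
	return errs
}

func main() {
	sums := []float64{0.5, -1.0}
	acts := []float64{sigmoid(0.5), sigmoid(-1.0)}
	expected := []float64{1.0, 0.0}
	fmt.Println(getOutError(acts, sums, expected))
}
```

Note the sign convention: an activation below its target yields a negative error, one above it a positive error.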