gone

package
v0.0.0-...-1eaf84e
Published: Jul 4, 2023 License: MIT Imports: 10 Imported by: 0

Documentation

Index

Constants

This section is empty.

Variables

var (
	// ErrWeightsNotMatch is an error for when the parents don't have the same amount of weights.
	ErrWeightsNotMatch = errors.New("gone: parents must have the exact same amount of weights")
)

Functions

This section is empty.

Types

type Activation

type Activation struct {
	Name   activationName
	F      func(x float64) float64
	FPrime func(x float64) float64
}

Activation is an activation function; it contains the function f(x) and its derivative f'(x)
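Bundling a function with its derivative is what backpropagation needs at each layer. A standalone sketch of the pattern (the real struct's Name field uses the unexported activationName type, so this sketch substitutes a plain string):

```go
package main

import (
	"fmt"
	"math"
)

// activation pairs a forward function F with its derivative FPrime,
// mirroring the Activation struct documented above.
type activation struct {
	Name   string
	F      func(x float64) float64
	FPrime func(x float64) float64
}

func sigmoid() activation {
	f := func(x float64) float64 { return 1 / (1 + math.Exp(-x)) }
	return activation{
		Name: "sigmoid",
		F:    f,
		// The sigmoid derivative can be written in terms of f itself:
		// f'(x) = f(x) * (1 - f(x)).
		FPrime: func(x float64) float64 { s := f(x); return s * (1 - s) },
	}
}

func main() {
	a := sigmoid()
	fmt.Println(a.F(0))      // 0.5
	fmt.Println(a.FPrime(0)) // 0.25
}
```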

func Identity

func Identity() Activation

Identity is the identity (linear) function f(x) = x

func ReLU

func ReLU() Activation

ReLU is the rectified linear unit activation function, f(x) = max(0, x)

func Sigmoid

func Sigmoid() Activation

Sigmoid is a sigmoid activation function

func Softmax

func Softmax() Activation

Softmax is a softmax activation function. NOT IMPLEMENTED YET

type DataSample

type DataSample struct {
	Inputs  []float64
	Targets []float64
}

DataSample represents a single training sample (inputs and targets)

type DataSet

type DataSet []DataSample

DataSet represents a slice of all the entries in a data set

func (DataSet) Batch

func (t DataSet) Batch(current int, batchSize int) DataSet

Batch chunks the slice, returning the batch at index current with at most batchSize samples

func (DataSet) Shuffle

func (t DataSet) Shuffle()

Shuffle shuffles the data in a random order
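A plausible reading of Batch, sketched standalone (assumed semantics, not confirmed by the source: current is a zero-based batch index, and the final batch may be shorter than batchSize):

```go
package main

import "fmt"

type sample struct{ Inputs, Targets []float64 }

// batch returns the current-th chunk of at most batchSize samples,
// an assumed re-implementation of DataSet.Batch for illustration.
func batch(data []sample, current, batchSize int) []sample {
	start := current * batchSize
	if start >= len(data) {
		return nil
	}
	end := start + batchSize
	if end > len(data) {
		end = len(data) // last batch may be partial
	}
	return data[start:end]
}

func main() {
	data := make([]sample, 5)
	fmt.Println(len(batch(data, 0, 2))) // 2
	fmt.Println(len(batch(data, 2, 2))) // 1 (last, partial batch)
}
```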

type Layer

type Layer struct {
	Nodes     int
	Activator Activation
}

Layer represents a layer in a neural network

type Loss

type Loss struct {
	Name   lossName
	F      func(y, yHat matrigo.Matrix) float64
	FPrime func(y, yHat matrigo.Matrix) matrigo.Matrix
}

Loss is a loss function; it contains the function f(y, yHat) and its derivative f'(y, yHat)

func MSE

func MSE() Loss

MSE is the Mean Squared Error loss function
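Mean Squared Error averages the squared differences between targets y and predictions yHat. A scalar sketch over plain slices (the package's Loss.F operates on matrigo.Matrix values instead):

```go
package main

import "fmt"

// mse computes the mean of the squared differences between the
// target values y and the predictions yHat.
func mse(y, yHat []float64) float64 {
	var sum float64
	for i := range y {
		d := y[i] - yHat[i]
		sum += d * d
	}
	return sum / float64(len(y))
}

func main() {
	fmt.Println(mse([]float64{1, 2}, []float64{1, 2})) // 0
	fmt.Println(mse([]float64{0, 0}, []float64{2, 2})) // 4
}
```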

type Mutator

type Mutator func(val float64) float64

Mutator is a function for mutating genes

func GaussianMutation

func GaussianMutation(mutationRate float64, stdenv, mean float64) Mutator

GaussianMutation applies a normally distributed (Gaussian) mutation. mutationRate should be a number in the range [0.0, 1.0] and represents the probability that a mutation occurs
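A standalone sketch of the assumed behavior: with probability mutationRate, add Gaussian noise to the gene; otherwise leave it unchanged. (The noise formula here, val + NormFloat64()*stdenv + mean, is an assumption; the parameter names mirror the documented signature.)

```go
package main

import (
	"fmt"
	"math/rand"
)

// gaussianMutation returns a Mutator-shaped closure: with
// probability mutationRate it perturbs the value with normally
// distributed noise, otherwise it returns the value unchanged.
func gaussianMutation(mutationRate, stdenv, mean float64) func(val float64) float64 {
	return func(val float64) float64 {
		if rand.Float64() >= mutationRate {
			return val // no mutation this time
		}
		return val + rand.NormFloat64()*stdenv + mean
	}
}

func main() {
	never := gaussianMutation(0, 1, 0) // mutationRate 0: never mutates
	fmt.Println(never(3.5) == 3.5)     // true
}
```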

type NeuralNetwork

type NeuralNetwork struct {
	Weights      []matrigo.Matrix
	Biases       []matrigo.Matrix
	Activations  []matrigo.Matrix
	LearningRate float64
	Layers       []Layer
	DebugMode    bool
	Loss         Loss
}

NeuralNetwork represents a neural network

func Load

func Load(filename string) (*NeuralNetwork, error)

Load loads a neural network from a file

func New

func New(learningRate float64, loss Loss, layers ...Layer) *NeuralNetwork

New creates a neural network

func (*NeuralNetwork) Copy

func (n *NeuralNetwork) Copy() *NeuralNetwork

Copy makes a deep copy of the network

func (*NeuralNetwork) Crossover

func (firstParent *NeuralNetwork) Crossover(secondParent *NeuralNetwork) (*NeuralNetwork, error)

Crossover applies a crossover between two neural networks, taking random parts of both parents to create a child
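The idea can be sketched as uniform crossover over flat weight slices; the package's ErrWeightsNotMatch suggests both parents must have the same number of weights. (The real method works on the networks' weight and bias matrices; this is an illustrative stand-in.)

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
)

var errLen = errors.New("parents must have the exact same amount of weights")

// crossover builds a child by picking each weight at random from
// one of the two parents (uniform crossover over flat slices).
func crossover(a, b []float64) ([]float64, error) {
	if len(a) != len(b) {
		return nil, errLen
	}
	child := make([]float64, len(a))
	for i := range child {
		if rand.Intn(2) == 0 {
			child[i] = a[i]
		} else {
			child[i] = b[i]
		}
	}
	return child, nil
}

func main() {
	a := []float64{1, 1, 1}
	b := []float64{2, 2, 2}
	child, err := crossover(a, b)
	fmt.Println(err == nil, len(child)) // true 3
	for _, v := range child {
		fmt.Println(v == 1 || v == 2) // true: every gene comes from a parent
	}
}
```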

func (*NeuralNetwork) Mutate

func (n *NeuralNetwork) Mutate(mutator Mutator)

Mutate randomly mutates some weights and biases

func (*NeuralNetwork) Predict

func (n *NeuralNetwork) Predict(data []float64) []float64

Predict is the feedforward process

func (*NeuralNetwork) Save

func (n *NeuralNetwork) Save(filename string) error

Save saves the neural network to a file

func (*NeuralNetwork) SetDebugMode

func (n *NeuralNetwork) SetDebugMode(b bool)

SetDebugMode toggles debug mode

func (*NeuralNetwork) Train

func (n *NeuralNetwork) Train(optimizer Optimizer, dataSet DataSet, epochs int)

Train trains the neural network using backpropagation

type Optimizer

type Optimizer func(n *NeuralNetwork, dataSet DataSet) float64

Optimizer is the optimizer function type (it returns the error, i.e. the loss, for the data set).

func GD

func GD() Optimizer

GD is normal (full-batch) gradient descent: it updates the weights once after processing the entire data set

func MBGD

func MBGD(batchSize int) Optimizer

MBGD is Mini-Batch Gradient Descent (batch training): it updates the weights after each mini-batch of batchSize samples

func SGD

func SGD() Optimizer

SGD is Stochastic Gradient Descent (on-line training): it updates the weights after every sample
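The three optimizers differ only in how many samples each weight update sees: GD uses the whole data set, MBGD uses chunks of batchSize, and SGD is the batchSize = 1 special case. A counting sketch of the assumed update schedule:

```go
package main

import "fmt"

// updatesPerEpoch counts the weight updates one epoch performs for
// a given batch size: ceil(n / batchSize). GD corresponds to
// batchSize == n, SGD to batchSize == 1, MBGD to anything between.
func updatesPerEpoch(n, batchSize int) int {
	return (n + batchSize - 1) / batchSize
}

func main() {
	n := 10
	fmt.Println(updatesPerEpoch(n, n)) // 1  (GD: once per epoch)
	fmt.Println(updatesPerEpoch(n, 3)) // 4  (MBGD: once per mini-batch)
	fmt.Println(updatesPerEpoch(n, 1)) // 10 (SGD: once per sample)
}
```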
