godl

package module
v0.0.0-...-61db93c
Published: May 14, 2022 License: Apache-2.0 Imports: 22 Imported by: 0

README

GoDL

godl is a Go deep learning framework written on top of Gorgonia.
godl is to Gorgonia what Keras is to TensorFlow.
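
A minimal quick-start sketch. The import path is assumed to be the module path of this package, and all shapes, sizes, and hyperparameters are arbitrary placeholders:

package main

import (
	"log"

	"gorgonia.org/tensor"

	"github.com/dcu/godl" // assumed module path; use the one shown at the top of this page
)

func main() {
	m := godl.NewModel()

	// A small fully connected network: 4 inputs -> 16 hidden -> 3 outputs.
	net := godl.Sequential(m,
		godl.Linear(m, godl.LinearOpts{InputDimension: 4, OutputDimension: 16, WithBias: true}),
		godl.Linear(m, godl.LinearOpts{InputDimension: 16, OutputDimension: 3, WithBias: true}),
	)

	// Placeholder float32 tensors standing in for real training/validation data.
	trainX := tensor.New(tensor.WithShape(120, 4), tensor.Of(tensor.Float32))
	trainY := tensor.New(tensor.WithShape(120, 3), tensor.Of(tensor.Float32))
	valX := tensor.New(tensor.WithShape(30, 4), tensor.Of(tensor.Float32))
	valY := tensor.New(tensor.WithShape(30, 3), tensor.Of(tensor.Float32))

	err := godl.Train(m, net, trainX, trainY, valX, valY, godl.TrainOpts{
		Epochs:    10,
		BatchSize: 32,
		CostFn:    godl.CategoricalCrossEntropyLoss(godl.CrossEntropyLossOpt{Reduction: godl.ReductionMean}),
	})
	if err != nil {
		log.Fatal(err)
	}
}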

API Stability

The API is not stable and can change at any moment. I'm writing this framework mostly to learn, so I don't provide any guarantees that it will work for you. Use it at your own risk.

Roadmap

The following items are on the current roadmap; some of them need to be implemented in Gorgonia first.

  • Data loader
  • Base storage (save/load)
  • CLI to scaffold a project
  • Embeddings
  • Dense/Linear/FC
  • Losses
      • Cross Entropy
      • MSE
      • BCE
      • BinaryXent
      • CTC Losses
  • Pooling
      • MaxPool
      • AvgPool
      • GlobalMaxPool
      • GlobalAvgPool
  • Normalization
      • Batch Norm
      • Ghost Batch Norm
      • GroupNorm
      • LayerNorm
  • Recurrent Layers
      • LSTM
      • Bidirectional
      • GRU
      • ConvLSTM2D
  • Reshaping
      • ZeroPadding
      • UpSampling
  • Convolutional
      • Conv2D
      • DepthWiseConv2D
  • Applications
      • TabNet
      • VGG16
      • VGGFace2 (in progress)
      • VGG19
      • ResNet50
      • ResNet101
      • YOLO
      • BERT
  • Future
      • Support ONNX
      • Support hdf5 files

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func ErrorF

func ErrorF(lt LayerType, template string, args ...interface{}) error

func HandleErr

func HandleErr(err error, where string, args ...interface{})

HandleErr panics if the given err is not nil

func InBatches

func InBatches(x tensor.Tensor, batchSize int, cb func(v tensor.Tensor))
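
InBatches has no doc comment; the sketch below assumes it slices x along the first axis and invokes cb once per batch of at most batchSize rows:

x := tensor.New(tensor.WithShape(100, 4), tensor.Of(tensor.Float32))

godl.InBatches(x, 10, func(v tensor.Tensor) {
	fmt.Println(v.Shape()) // one batch at a time
})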

func MustBeGreatherThan

func MustBeGreatherThan(lt LayerType, context string, v interface{}, base interface{})

func Train

func Train(m *Model, module Module, trainX, trainY, validateX, validateY tensor.Tensor, opts TrainOpts) error

Train trains the model with the given data

func Validate

func Validate(m *Model, x, y *gorgonia.Node, costVal, predVal gorgonia.Value, validateX, validateY tensor.Tensor, opts TrainOpts) error

Types

type ActivationAxisModule

type ActivationAxisModule struct {
	// contains filtered or unexported fields
}

func (*ActivationAxisModule) Forward

func (m *ActivationAxisModule) Forward(inputs ...*Node) Nodes

func (*ActivationAxisModule) Name

func (m *ActivationAxisModule) Name() string

type ActivationModule

type ActivationModule struct {
	// contains filtered or unexported fields
}

func (*ActivationModule) Forward

func (m *ActivationModule) Forward(inputs ...*Node) Nodes

func (*ActivationModule) Name

func (m *ActivationModule) Name() string

type AvgPool2DModule

type AvgPool2DModule struct {
	// contains filtered or unexported fields
}

func AvgPool2D

func AvgPool2D(nn *Model, opts AvgPool2DOpts) *AvgPool2DModule

AvgPool2D applies the average pool operation to the given image

func (*AvgPool2DModule) Forward

func (m *AvgPool2DModule) Forward(inputs ...*Node) Nodes

func (*AvgPool2DModule) Name

func (m *AvgPool2DModule) Name() string

type AvgPool2DOpts

type AvgPool2DOpts struct {
	Kernel  tensor.Shape
	Padding []int
	Stride  []int
}

type BatchNormModule

type BatchNormModule struct {
	// contains filtered or unexported fields
}

func BatchNorm1d

func BatchNorm1d(nn *Model, opts BatchNormOpts) *BatchNormModule

BatchNorm1d defines the batch norm operation for tensors with shape (B, N)

func BatchNorm2d

func BatchNorm2d(nn *Model, opts BatchNormOpts) *BatchNormModule

BatchNorm2d defines the batch norm operation for tensors with shape (B, C, W, H)

func (*BatchNormModule) Forward

func (m *BatchNormModule) Forward(inputs ...*Node) Nodes

func (*BatchNormModule) Name

func (m *BatchNormModule) Name() string

type BatchNormOpts

type BatchNormOpts struct {
	Momentum            float64
	Epsilon             float64
	ScaleInit, BiasInit gorgonia.InitWFn

	ScaleName, BiasName string

	InputSize int
}

BatchNormOpts are the options to configure a batch normalization
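
As a rough illustration (the values below are arbitrary, not documented defaults, and m and x are an existing *Model and *Node), a BatchNorm1d module for (B, N) activations can be configured like this:

bn := godl.BatchNorm1d(m, godl.BatchNormOpts{
	InputSize: 64, // N, the number of features per example
	Momentum:  0.01,
	Epsilon:   1e-5,
})
out := bn.Forward(x) // x is a *godl.Node with shape (B, 64)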

type ConfusionMatrix

type ConfusionMatrix map[MatchType]int

func (ConfusionMatrix) Accuracy

func (cmat ConfusionMatrix) Accuracy() float64

func (ConfusionMatrix) F1Score

func (cmat ConfusionMatrix) F1Score() float64

func (ConfusionMatrix) MissRate

func (cmat ConfusionMatrix) MissRate() float64

func (ConfusionMatrix) Precision

func (cmat ConfusionMatrix) Precision() float64

func (ConfusionMatrix) Recall

func (cmat ConfusionMatrix) Recall() float64

func (ConfusionMatrix) String

func (cmat ConfusionMatrix) String() string
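
The metric formulas aren't documented here, but assuming the standard definitions (accuracy = (TP+TN)/total, precision = TP/(TP+FP), recall = TP/(TP+FN)), a ConfusionMatrix can be built and queried directly since it is a plain map:

cm := godl.ConfusionMatrix{
	godl.MatchTypeTruePositive:  90,
	godl.MatchTypeTrueNegative:  80,
	godl.MatchTypeFalsePositive: 10,
	godl.MatchTypeFalseNegative: 20,
}
fmt.Println(cm.Accuracy(), cm.Precision(), cm.Recall(), cm.F1Score())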

type Conv2dModule

type Conv2dModule struct {
	// contains filtered or unexported fields
}

func Conv2d

func Conv2d(m *Model, opts Conv2dOpts) *Conv2dModule

Conv2d applies a conv2d operation to the input

func (*Conv2dModule) Forward

func (m *Conv2dModule) Forward(inputs ...*Node) Nodes

func (*Conv2dModule) Name

func (m *Conv2dModule) Name() string

type Conv2dOpts

type Conv2dOpts struct {
	InputDimension  int
	OutputDimension int

	KernelSize tensor.Shape
	Pad        []int
	Stride     []int
	Dilation   []int

	WithBias bool

	WeightsInit, BiasInit gorgonia.InitWFn
	WeightsName, BiasName string
	FixedWeights          bool
}

Conv2dOpts are the options for the conv2d operation
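
A hedged sketch of a small convolutional block; kernel, padding, and stride values are illustrative, and unset fields are assumed to fall back to the module's defaults:

// convBlock builds a Conv2d + MaxPool2D stack on the given model.
func convBlock(m *godl.Model) godl.ModuleList {
	conv := godl.Conv2d(m, godl.Conv2dOpts{
		InputDimension:  3,  // input channels
		OutputDimension: 16, // output channels
		KernelSize:      tensor.Shape{3, 3},
		Pad:             []int{1, 1},
		Stride:          []int{1, 1},
		WithBias:        true,
	})
	pool := godl.MaxPool2D(m, godl.MaxPool2DOpts{
		Kernel: tensor.Shape{2, 2},
		Stride: []int{2, 2},
	})
	return godl.Sequential(m, conv, pool)
}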

type CostFn

type CostFn func(output Nodes, target *Node) *Node

func CategoricalCrossEntropyLoss

func CategoricalCrossEntropyLoss(opts CrossEntropyLossOpt) CostFn

CategoricalCrossEntropyLoss applies a softmax followed by the categorical cross entropy (CCE) loss

func CrossEntropyLoss

func CrossEntropyLoss(opts CrossEntropyLossOpt) CostFn

CrossEntropyLoss implements the cross entropy loss function

func MSELoss

func MSELoss(opts MSELossOpts) CostFn

MSELoss defines the mean squared error cost function

type CrossEntropyLossOpt

type CrossEntropyLossOpt struct {
	Reduction Reduction
}

type DataLoader

type DataLoader struct {
	FeaturesShape tensor.Shape
	Rows          int
	Batches       int
	CurrentBatch  int
	// contains filtered or unexported fields
}

func NewDataLoader

func NewDataLoader(x tensor.Tensor, y tensor.Tensor, opts DataLoaderOpts) *DataLoader

NewDataLoader creates a data loader with the given data and options

func (DataLoader) HasNext

func (dl DataLoader) HasNext() bool

HasNext returns true if there are more batches to fetch

func (*DataLoader) Next

func (dl *DataLoader) Next() (tensor.Tensor, tensor.Tensor)

Next returns the next batch

func (*DataLoader) Reset

func (dl *DataLoader) Reset()

Reset resets the iterator

func (*DataLoader) Shuffle

func (dl *DataLoader) Shuffle() error

Shuffle shuffles the data

type DataLoaderOpts

type DataLoaderOpts struct {
	Shuffle   bool
	BatchSize int
	Drop      bool
}
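
A typical iteration loop based on the methods above (the meaning of Drop, dropping the last incomplete batch, is assumed):

// iterate runs one pass over the data in batches of 32.
func iterate(x, y tensor.Tensor) {
	dl := godl.NewDataLoader(x, y, godl.DataLoaderOpts{
		BatchSize: 32,
		Shuffle:   true,
		Drop:      true,
	})
	for dl.HasNext() {
		xb, yb := dl.Next()
		_, _ = xb, yb // feed the batch to a training/eval step here
	}
	dl.Reset() // rewind for the next epoch
}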

type EmbeddingGeneratorModule

type EmbeddingGeneratorModule struct {
	// contains filtered or unexported fields
}

func EmbeddingGenerator

func EmbeddingGenerator(m *Model, inputDims int, catDims []int, catIdxs []int, catEmbDim []int, opts EmbeddingOpts) *EmbeddingGeneratorModule

func (*EmbeddingGeneratorModule) Forward

func (m *EmbeddingGeneratorModule) Forward(inputs ...*Node) Nodes

type EmbeddingModule

type EmbeddingModule struct {
	// contains filtered or unexported fields
}

func Embedding

func Embedding(m *Model, embeddingSize int, embeddingDim int, opts EmbeddingOpts) *EmbeddingModule

Embedding implements an embedding layer

func (*EmbeddingModule) Forward

func (m *EmbeddingModule) Forward(inputs ...*Node) Nodes

type EmbeddingOpts

type EmbeddingOpts struct {
	WeightsInit gorgonia.InitWFn
}

type GLUModule

type GLUModule struct {
	// contains filtered or unexported fields
}

func GLU

func GLU(nn *Model, opts GLUOpts) *GLUModule

GLU implements a Gated Linear Unit Block

func (*GLUModule) Forward

func (m *GLUModule) Forward(inputs ...*Node) Nodes

type GLUOpts

type GLUOpts struct {
	InputDimension   int
	OutputDimension  int
	VirtualBatchSize int
	Activation       activation.Function
	Linear           *LinearModule
	WeightsInit      gorgonia.InitWFn
	WithBias         bool
	Momentum         float64
}

GLUOpts are the supported options for GLU

type GhostBatchNormModule

type GhostBatchNormModule struct {
	// contains filtered or unexported fields
}

func GhostBatchNorm

func GhostBatchNorm(nn *Model, opts GhostBatchNormOpts) *GhostBatchNormModule

GhostBatchNorm implements Ghost Batch Normalization (https://arxiv.org/pdf/1705.08741.pdf). Momentum defaults to 0.01 if 0 is passed; Epsilon defaults to 1e-5 if 0 is passed.

func (*GhostBatchNormModule) Forward

func (m *GhostBatchNormModule) Forward(inputs ...*Node) Nodes

type GhostBatchNormOpts

type GhostBatchNormOpts struct {
	Momentum         float64
	Epsilon          float64
	VirtualBatchSize int
	OutputDimension  int

	ScaleInit, BiasInit gorgonia.InitWFn
}

GhostBatchNormOpts contains config options for the ghost batch normalization

type GlobalAvgPool2DModule

type GlobalAvgPool2DModule struct {
	// contains filtered or unexported fields
}

func GlobalAvgPool2D

func GlobalAvgPool2D(nn *Model) *GlobalAvgPool2DModule

GlobalAvgPool2D applies the global average pool operation to the given image

func (*GlobalAvgPool2DModule) Forward

func (m *GlobalAvgPool2DModule) Forward(inputs ...*Node) Nodes

func (*GlobalAvgPool2DModule) Name

func (m *GlobalAvgPool2DModule) Name() string

type GlobalMaxPool2DModule

type GlobalMaxPool2DModule struct {
	// contains filtered or unexported fields
}

func GlobalMaxPool2D

func GlobalMaxPool2D(nn *Model) *GlobalMaxPool2DModule

GlobalMaxPool2D applies the global max pool operation to the given image

func (*GlobalMaxPool2DModule) Forward

func (m *GlobalMaxPool2DModule) Forward(inputs ...*Node) Nodes

func (*GlobalMaxPool2DModule) Name

func (m *GlobalMaxPool2DModule) Name() string

type LayerType

type LayerType string

func AddLayer

func AddLayer(typ string) LayerType

type LinearModule

type LinearModule struct {
	// contains filtered or unexported fields
}

func Linear

func Linear(nn *Model, opts LinearOpts) *LinearModule

func (*LinearModule) Forward

func (m *LinearModule) Forward(inputs ...*Node) (out Nodes)

func (*LinearModule) Name

func (m *LinearModule) Name() string

type LinearOpts

type LinearOpts struct {
	Activation      activation.Function
	Dropout         float64
	OutputDimension int
	InputDimension  int

	WeightsInit           gorgonia.InitWFn
	BiasInit              gorgonia.InitWFn
	WithBias              bool
	WeightsName, BiasName string
	FixedWeights          bool
}

LinearOpts contains the optional parameters for a Linear layer

type MSELossOpts

type MSELossOpts struct {
	Reduction Reduction
}

type MatchType

type MatchType int
const (
	MatchTypeTruePositive MatchType = iota
	MatchTypeTrueNegative
	MatchTypeFalsePositive
	MatchTypeFalseNegative
)

type MaxPool2DModule

type MaxPool2DModule struct {
	// contains filtered or unexported fields
}

func MaxPool2D

func MaxPool2D(nn *Model, opts MaxPool2DOpts) *MaxPool2DModule

MaxPool2D applies the max pool operation to the given image

func (*MaxPool2DModule) Forward

func (m *MaxPool2DModule) Forward(inputs ...*Node) Nodes

func (*MaxPool2DModule) Name

func (m *MaxPool2DModule) Name() string

type MaxPool2DOpts

type MaxPool2DOpts struct {
	Kernel  tensor.Shape
	Padding []int
	Stride  []int
}

type Model

type Model struct {
	Logger  *log.Logger
	Storage *storage.Storage
	// contains filtered or unexported fields
}

Model is the container that all modules are built on; it holds the underlying graph and its learnables

func NewModel

func NewModel() *Model

NewModel creates a new model for the neural network

func (*Model) AddBias

func (t *Model) AddBias(lt LayerType, shape tensor.Shape, opts NewWeightsOpts) *gorgonia.Node

func (*Model) AddLearnable

func (t *Model) AddLearnable(lt LayerType, typ string, shape tensor.Shape, opts NewWeightsOpts) *gorgonia.Node

func (*Model) AddWeights

func (t *Model) AddWeights(lt LayerType, shape tensor.Shape, opts NewWeightsOpts) *gorgonia.Node

func (Model) CheckArity

func (m Model) CheckArity(lt LayerType, nodes []*gorgonia.Node, arity int) error

CheckArity checks whether the number of given nodes matches the expected arity

func (*Model) CreateWeightsNode

func (t *Model) CreateWeightsNode(shape tensor.Shape, opts NewWeightsOpts) *gorgonia.Node

func (*Model) Learnables

func (m *Model) Learnables() gorgonia.Nodes

Learnables returns all learnables in the model

func (*Model) Predictor

func (m *Model) Predictor(module Module, opts PredictOpts) (Predictor, error)
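
Predictor is only documented by its signature; a usage sketch, assuming the returned function runs the module on a single input tensor of the given shape (net is a previously built module, e.g. from Sequential):

predict, err := m.Predictor(net, godl.PredictOpts{
	InputShape: tensor.Shape{1, 4},
})
if err != nil {
	log.Fatal(err)
}

yHat, err := predict(sampleX) // sampleX is a tensor.Tensor matching InputShape
if err != nil {
	log.Fatal(err)
}
fmt.Println(yHat)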

func (Model) PrintWatchables

func (m Model) PrintWatchables()

func (*Model) Run

func (m *Model) Run(vmOpts ...gorgonia.VMOpt) error

Run runs the virtual machine in prediction mode

func (*Model) TrainGraph

func (m *Model) TrainGraph() *gorgonia.ExprGraph

TrainGraph returns the graph for the model

func (*Model) Watch

func (m *Model) Watch(name string, node *gorgonia.Node)

Watch watches the given node

func (*Model) WeightsCount

func (t *Model) WeightsCount() int64

WeightsCount returns the number of learnables

func (*Model) WriteSVG

func (m *Model) WriteSVG(path string) error

WriteSVG writes an SVG representation of the graph to the given path

type Module

type Module interface {
	Forward(inputs ...*Node) Nodes
	Name() string
}
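
Any type with these two methods can be passed to Sequential or Train. Below is a minimal custom module, shown only to illustrate the shape of the interface; it assumes imports of gorgonia.org/gorgonia and the godl package (godl already ships Rectify(), and Node/Nodes are aliases for the Gorgonia types):

// ReLUModule applies gorgonia.Rectify to its single input.
type ReLUModule struct{}

func (ReLUModule) Name() string { return "ReLU" }

func (ReLUModule) Forward(inputs ...*godl.Node) godl.Nodes {
	out := gorgonia.Must(gorgonia.Rectify(inputs[0]))
	return godl.Nodes{out}
}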

func Rectify

func Rectify() Module

func Sigmoid

func Sigmoid() Module

func SoftMax

func SoftMax(axis ...int) Module

func SparseMax

func SparseMax(axis ...int) Module

func Tanh

func Tanh() Module

type ModuleList

type ModuleList []Module

func Sequential

func Sequential(m *Model, modules ...Module) ModuleList

Sequential runs the given layers one after the other

func (*ModuleList) Add

func (m *ModuleList) Add(mods ...Module)

func (ModuleList) Forward

func (m ModuleList) Forward(inputs ...*Node) (out Nodes)

func (ModuleList) Name

func (m ModuleList) Name() string

type NewWeightsOpts

type NewWeightsOpts struct {
	UniqueName string
	Value      gorgonia.Value
	InitFN     gorgonia.InitWFn

	// Fixed indicates that the weights won't be learnable. By default the weights are learnable
	Fixed bool
}

NewWeightsOpts defines the options to create a weights node. Value has priority; if it is not defined, InitFN is used; if that is also not defined, Glorot/Xavier(1.0) is used. If UniqueName is empty, an automatic one will be assigned.
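
For example, assuming m is an existing *Model (GlorotN is Gorgonia's Glorot/Xavier normal initializer, and the layer type name here is arbitrary):

w := m.AddWeights(godl.AddLayer("Custom"), tensor.Shape{64, 32}, godl.NewWeightsOpts{
	InitFN: gorgonia.GlorotN(1.0), // used because Value is not set
})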

type Node

type Node = gorgonia.Node

type Nodes

type Nodes = gorgonia.Nodes

type PredictOpts

type PredictOpts struct {
	InputShape tensor.Shape
	DevMode    bool
}

type Predictor

type Predictor func(x tensor.Tensor) (gorgonia.Value, error)

type Reduction

type Reduction string
const (
	ReductionNone Reduction = "none"
	ReductionSum  Reduction = "sum"
	ReductionMean Reduction = "mean"
)

func (Reduction) Func

func (r Reduction) Func() func(*gorgonia.Node, ...int) (*gorgonia.Node, error)

type TrainOpts

type TrainOpts struct {
	Epochs    int
	BatchSize int

	// DevMode detects common issues like exploding and vanishing gradients at the cost of performance
	DevMode bool

	WriteGraphFileTo string

	// WithLearnablesHeatmap writes images representing heatmaps for the weights. Use it to debug.
	WithLearnablesHeatmap bool

	// Solver defines the solver to use. It uses gorgonia.AdamSolver by default if none is passed
	Solver gorgonia.Solver

	// ValidateEvery indicates the number of epochs to run before running a validation. Defaults to 1 (every epoch)
	ValidateEvery int

	CostObserver       func(epoch int, totalEpoch, batch int, totalBatch int, cost float32)
	ValidationObserver func(confMat ConfusionMatrix, cost float32)
	MatchTypeFor       func(predVal, targetVal []float32) MatchType
	CostFn             CostFn
}

TrainOpts are the options to train the model
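
A fuller TrainOpts showing the observer hooks; all values are illustrative:

opts := godl.TrainOpts{
	Epochs:        50,
	BatchSize:     64,
	ValidateEvery: 1,
	CostFn:        godl.MSELoss(godl.MSELossOpts{Reduction: godl.ReductionMean}),
	CostObserver: func(epoch, totalEpoch, batch, totalBatch int, cost float32) {
		log.Printf("epoch %d/%d batch %d/%d cost=%.4f", epoch, totalEpoch, batch, totalBatch, cost)
	},
	ValidationObserver: func(confMat godl.ConfusionMatrix, cost float32) {
		log.Printf("validation cost=%.4f\n%v", cost, confMat)
	},
}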

Directories

Path Synopsis
examples
