gorgonia: github.com/gorgonia/gorgonia

package gorgonia

`import "github.com/gorgonia/gorgonia"`

Package gorgonia is a library that helps facilitate machine learning in Go. Write and evaluate mathematical equations involving multidimensional arrays easily. Do differentiation with them just as easily.

Autodiff showcases automatic differentiation

Code:

```g := NewGraph()

var x, y, z *Node
var err error

// define the expression
x = NewScalar(g, Float64, WithName("x"))
y = NewScalar(g, Float64, WithName("y"))
if z, err = Add(x, y); err != nil {
	log.Fatal(err)
}

// set initial values then run
Let(x, 2.0)
Let(y, 2.5)

// by default, LispMachine performs forward mode and backwards mode execution
m := NewLispMachine(g)
defer m.Close()
if err = m.RunAll(); err != nil {
	log.Fatal(err)
}

fmt.Printf("z: %v\n", z.Value())

// after execution, the gradients are accessible from the nodes
if xgrad, err := x.Grad(); err == nil {
	fmt.Printf("dz/dx: %v\n", xgrad)
}

if ygrad, err := y.Grad(); err == nil {
	fmt.Printf("dz/dy: %v\n", ygrad)
}```

Output:

```z: 4.5
dz/dx: 1
dz/dy: 1
```

Basic example of representing mathematical equations as graphs.

In this example, we want to represent the following equation

```z = x + y
```

Code:

```g := NewGraph()

var x, y, z *Node
var err error

// define the expression
x = NewScalar(g, Float64, WithName("x"))
y = NewScalar(g, Float64, WithName("y"))
if z, err = Add(x, y); err != nil {
	log.Fatal(err)
}

// create a VM to run the program on
machine := NewTapeMachine(g)
defer machine.Close()

// set initial values then run
Let(x, 2.0)
Let(y, 2.5)
if err = machine.RunAll(); err != nil {
	log.Fatal(err)
}

fmt.Printf("%v", z.Value())```

Output:

```4.5
```

Code:

```xV, yV, bs := prep()
concurrentTraining(xV, yV, bs, epochs)

fmt.Printf("x:\n%1.1v", xV)
fmt.Printf("y:\n%1.1v", yV)

// Output:
// x:
// ⎡    6      7      8      9  ... 5e+01  5e+01  5e+01  5e+01⎤
// ⎢7e+01  7e+01  7e+01  7e+01  ... 1e+02  1e+02  1e+02  1e+02⎥
// ⎢1e+02  1e+02  1e+02  1e+02  ... 2e+02  2e+02  2e+02  2e+02⎥
// ⎢2e+02  2e+02  2e+02  2e+02  ... 2e+02  2e+02  2e+02  2e+02⎥
// .
// .
// .
// ⎢4e+07  4e+07  4e+07  4e+07  ... 4e+07  4e+07  4e+07  4e+07⎥
// ⎢4e+07  4e+07  4e+07  4e+07  ... 4e+07  4e+07  4e+07  4e+07⎥
// ⎢4e+07  4e+07  4e+07  4e+07  ... 4e+07  4e+07  4e+07  4e+07⎥
// ⎣4e+07  4e+07  4e+07  4e+07  ... 4e+07  4e+07  4e+07  4e+07⎦
// y:
// [-1e+02  -4e+02  -7e+02  -9e+02  ... -2e+08  -2e+08  -2e+08  -2e+08]
```

Linear Regression Example

The formula for a straight line is

```y = mx + c
```

We want to find an `m` and a `c` that fit the equation well. We'll do it in both float32 and float64 to showcase the extensibility of Gorgonia

Code:

```package main

import (
	"fmt"
	"log"
	"math/rand"
	"runtime"

	. "gorgonia.org/gorgonia"
	"gorgonia.org/tensor"
)

const (
	vecSize = 1000000
)

// manually generate a fake dataset which is y=2x+random
func xy(dt tensor.Dtype) (x tensor.Tensor, y tensor.Tensor) {
	var xBack, yBack interface{}
	switch dt {
	case Float32:
		xBack = tensor.Range(tensor.Float32, 1, vecSize+1).([]float32)
		yBackC := tensor.Range(tensor.Float32, 1, vecSize+1).([]float32)

		for i, v := range yBackC {
			yBackC[i] = v*2 + rand.Float32()
		}
		yBack = yBackC
	case Float64:
		xBack = tensor.Range(tensor.Float64, 1, vecSize+1).([]float64)
		yBackC := tensor.Range(tensor.Float64, 1, vecSize+1).([]float64)

		for i, v := range yBackC {
			yBackC[i] = v*2 + rand.Float64()
		}
		yBack = yBackC
	}

	x = tensor.New(tensor.WithBacking(xBack), tensor.WithShape(vecSize))
	y = tensor.New(tensor.WithBacking(yBack), tensor.WithShape(vecSize))
	return
}

func random(dt tensor.Dtype) interface{} {
	rand.Seed(13370)
	switch dt {
	case tensor.Float32:
		return rand.Float32()
	case tensor.Float64:
		return rand.Float64()
	default:
		panic("Unhandled dtype")
	}
}

func linregSetup(Float tensor.Dtype) (m, c *Node, machine VM) {
	var xT, yT Value
	xT, yT = xy(Float)

	g := NewGraph()
	x := NewVector(g, Float, WithShape(vecSize), WithName("x"), WithValue(xT))
	y := NewVector(g, Float, WithShape(vecSize), WithName("y"), WithValue(yT))
	m = NewScalar(g, Float, WithName("m"), WithValue(random(Float)))
	c = NewScalar(g, Float, WithName("c"), WithValue(random(Float)))

	// the predicted value: pred = m*x + c
	pred := Must(Add(Must(Mul(x, m)), c))
	se := Must(Square(Must(Sub(pred, y))))
	cost := Must(Mean(se))

	if _, err := Grad(cost, m, c); err != nil {
		log.Fatalf("Failed to backpropagate: %v", err)
	}

	// machine := NewLispMachine(g)  // you can use a LispMachine, but it'll be VERY slow.
	machine = NewTapeMachine(g, BindDualValues(m, c))
	return m, c, machine
}

func linregRun(m, c *Node, machine VM, iter int, autoCleanup bool) (retM, retC Value) {
	if autoCleanup {
		defer machine.Close()
	}
	model := []ValueGrad{m, c}
	solver := NewVanillaSolver(WithLearnRate(0.001), WithClip(5)) // good idea to clip

	if CUDA {
		runtime.LockOSThread()
		defer runtime.UnlockOSThread()
	}
	var err error
	for i := 0; i < iter; i++ {
		if err = machine.RunAll(); err != nil {
			fmt.Printf("Error during iteration: %v: %v\n", i, err)
			break
		}

		if err = solver.Step(model); err != nil {
			log.Fatal(err)
		}

		machine.Reset() // Reset is necessary in a loop like this
	}
	return m.Value(), c.Value()
}

func linearRegression(Float tensor.Dtype, iter int) (retM, retC Value) {
	defer runtime.GC()
	m, c, machine := linregSetup(Float)
	return linregRun(m, c, machine, iter, true)
}

// Linear Regression Example
//
// The formula for a straight line is
//		y = mx + c
// We want to find an `m` and a `c` that fit the equation well. We'll do it in both float32 and float64 to showcase the extensibility of Gorgonia
func main() {
	var m, c Value
	// Float32
	m, c = linearRegression(Float32, 500)
	fmt.Printf("float32: y = %3.3fx + %3.3f\n", m, c)

	// Float64
	m, c = linearRegression(Float64, 500)
	fmt.Printf("float64: y = %3.3fx + %3.3f\n", m, c)
}
```

Code:

```xV, yV, _ := prep()
nonConcurrentTraining(xV, yV, epochs)

fmt.Printf("x:\n%1.1v", xV)
fmt.Printf("y:\n%1.1v", yV)```

Output:

```x:
⎡    6      7      8      9  ... 5e+01  5e+01  5e+01  5e+01⎤
⎢7e+01  7e+01  7e+01  7e+01  ... 1e+02  1e+02  1e+02  1e+02⎥
⎢1e+02  1e+02  1e+02  1e+02  ... 2e+02  2e+02  2e+02  2e+02⎥
⎢2e+02  2e+02  2e+02  2e+02  ... 2e+02  2e+02  2e+02  2e+02⎥
.
.
.
⎢4e+07  4e+07  4e+07  4e+07  ... 4e+07  4e+07  4e+07  4e+07⎥
⎢4e+07  4e+07  4e+07  4e+07  ... 4e+07  4e+07  4e+07  4e+07⎥
⎢4e+07  4e+07  4e+07  4e+07  ... 4e+07  4e+07  4e+07  4e+07⎥
⎣4e+07  4e+07  4e+07  4e+07  ... 4e+07  4e+07  4e+07  4e+07⎦
y:
[-1e+02  -4e+02  -7e+02  -9e+02  ... -2e+08  -2e+08  -2e+08  -2e+08]
```

SymbolicDiff showcases symbolic differentiation

Code:

```g := NewGraph()

var x, y, z *Node
var err error

// define the expression
x = NewScalar(g, Float64, WithName("x"))
y = NewScalar(g, Float64, WithName("y"))
if z, err = Add(x, y); err != nil {
	log.Fatal(err)
}

// symbolically differentiate z with regards to x and y
var grads Nodes
if grads, err = Grad(z, x, y); err != nil {
	log.Fatal(err)
}

// create a VM to run the program on
machine := NewTapeMachine(g)
defer machine.Close()

// set initial values then run
Let(x, 2.0)
Let(y, 2.5)
if err = machine.RunAll(); err != nil {
	log.Fatal(err)
}

fmt.Printf("z: %v\n", z.Value())

// both the gradient bound to the node and the symbolic gradient node are accessible
if xgrad, err := x.Grad(); err == nil {
	fmt.Printf("dz/dx: %v | %v\n", xgrad, grads[0].Value())
}

if ygrad, err := y.Grad(); err == nil {
	fmt.Printf("dz/dy: %v | %v\n", ygrad, grads[1].Value())
}```

Output:

```z: 4.5
dz/dx: 1 | 1
dz/dy: 1 | 1
```

Constants ¶

`const CUDA = false`

CUDA indicates if this build is using CUDA

`const DEBUG = false`

DEBUG indicates if this build is in debug mode. It is not.

Variables ¶

```var (
Float64 = tensor.Float64
Float32 = tensor.Float32
Int     = tensor.Int
Int64   = tensor.Int64
Int32   = tensor.Int32
Byte    = tensor.Uint8
Bool    = tensor.Bool

Ptr = tensor.UnsafePointer // equivalent to interface{}. Ugh Ugh Ugh

)```

func BatchNorm¶Uses

`func BatchNorm(x, scale, bias *Node, momentum, epsilon float64) (retVal, γ, β *Node, op *BatchNormOp, err error)`

func Binomial32¶Uses

`func Binomial32(trials, prob float64, s ...int) []float32`

Binomial32 returns a []float32 drawn from a binomial distribution given the trial and probability parameters.

func Binomial64¶Uses

`func Binomial64(trials, prob float64, s ...int) []float64`

Binomial64 returns a []float64 drawn from a binomial distribution given the trial and probability parameters.

`func Broadcast(a, b *Node, pattern BroadcastPattern) (*Node, *Node, error)`

Broadcast applies the pattern to the input nodes and returns two nodes suitable for a binary operator. Broadcast works somewhat like NumPy's broadcasting, except it is exposed as a function.

func Compile¶Uses

`func Compile(g *ExprGraph) (prog *program, locMap map[*Node]register, err error)`

Compile takes a graph and outputs a program suitable for *tapeMachine to run

func CompileFunction¶Uses

`func CompileFunction(g *ExprGraph, inputs, outputs Nodes) (prog *program, locMap map[*Node]register, err error)`

CompileFunction takes a graph, subsets it based on the input and output nodes provided, and outputs a program suitable for *tapeMachine to run. It is analogous to theano.Function(). If any input nodes are unused or unreachable, this function will return an error.

func DebugDerives¶Uses

`func DebugDerives()`

DebugDerives turns on the derivation debug option when printing a graph

func DimSizersToShapes¶Uses

`func DimSizersToShapes(ds []DimSizer) ([]tensor.Shape, error)`

DimSizersToShapes is a convenience function to convert a slice of DimSizer to a slice of tensor.Shape. It will return an error if any of them isn't a tensor.Shape

func DontDebugDerives¶Uses

`func DontDebugDerives()`

DontDebugDerives turns off derivation debug option when printing a graph. It is off by default

func FmtNodeMap¶Uses

`func FmtNodeMap(m interface{}) mapFmt`

FmtNodeMap is a convenience function to print map[*Node]<T>

The fmt flag that makes it all nicely formatted is "-". Because a map consists of two types (key's type and val's type), and the Go fmt verb doesn't quite allow us to do something like "%ds", a hack is introduced to enable nicer printing of map[*Node]<T>

Here's the hack: The "#" flag is used to indicate if the map will use the Node's ID or Name when formatting the map.

```%-v 	nodeName:%v
%-#v	nodeID:%v
%-d 	nodeName:%x
%-#d 	nodeID: %x
%-p 	nodeName:%p
%-#p	nodeID:%p
```

If the "-" flag is not found, then the formatter returns the default Go format for map[<T>]<T2>

func Gaussian32¶Uses

`func Gaussian32(mean, stdev float64, s ...int) []float32`

Gaussian32 returns a []float32 drawn from a gaussian distribution as defined by the mean and stdev

func Gaussian64¶Uses

`func Gaussian64(mean, stdev float64, s ...int) []float64`

Gaussian64 returns a []float64 drawn from a gaussian distribution as defined by the mean and stdev

func GlorotEtAlN32¶Uses

`func GlorotEtAlN32(gain float64, s ...int) []float32`

GlorotEtAlN32 returns float32 weights sampled from a normal distribution using the methods specified in Glorot et. al (2010). See also: http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf

func GlorotEtAlN64¶Uses

`func GlorotEtAlN64(gain float64, s ...int) []float64`

GlorotEtAlN64 returns float64 weights sampled from a normal distribution using the methods specified in Glorot et. al (2010). See also: http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf

func GlorotEtAlU32¶Uses

`func GlorotEtAlU32(gain float64, s ...int) []float32`

GlorotEtAlU32 returns float32 weights sampled from a uniform distribution using the methods specified in Glorot et. al (2010). See also: http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf

For best results, use:

```1.0 for gain for weights that will be used in linear and/or sigmoid units
math.Sqrt(2.0) for gain for weights that will be used in ReLU units
math.Sqrt(2.0 / (1+alpha*alpha)) for ReLU that are leaky with alpha
```

func GlorotEtAlU64¶Uses

`func GlorotEtAlU64(gain float64, s ...int) []float64`

GlorotEtAlU64 returns float64 weights sampled from a uniform distribution using the methods specified in Glorot et. al (2010). See also: http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf

For best results, use:

```1.0 for gain for weights that will be used in linear and/or sigmoid units
math.Sqrt(2.0) for gain for weights that will be used in ReLU units
math.Sqrt(2.0 / (1+alpha*alpha)) for ReLU that are leaky with alpha
```

func GraphCollisionStats¶Uses

`func GraphCollisionStats() (int, int, int)`

GraphCollisionStats returns graph-collision-related debugging statistics.

func HeEtAlN64¶Uses

`func HeEtAlN64(gain float64, s ...int) []float64`

HeEtAlN64 returns float64 weights sampled from a normal distro, using the methods described in He et al (2015). The formula is:

```randn(n) * sqrt(2/n)
```

For best results, use:

```1.0 for gain for weights that will be used in linear and/or sigmoid units
math.Sqrt(2.0) for gain for weights that will be used in ReLU units
math.Sqrt(2.0 / (1+alpha*alpha)) for ReLU that are leaky with alpha
```

func HeEtAlU64¶Uses

`func HeEtAlU64(gain float64, s ...int) []float64`

HeEtAlU64 returns float64 weights sampled from a uniform distro, using the methods described in He et al (2015). The formula is:

```randn(n) * sqrt(2/n)
```

For best results, use:

```1.0 for gain for weights that will be used in linear and/or sigmoid units
math.Sqrt(2.0) for gain for weights that will be used in ReLU units
math.Sqrt(2.0 / (1+alpha*alpha)) for ReLU that are leaky with alpha
```

func Let¶Uses

`func Let(n *Node, be interface{}) error`

Let binds a Value to a node that is a variable. A variable is represented as a *Node with no Op. It is equivalent to:

```x = 2
```

func NewLispMachine¶Uses

`func NewLispMachine(g *ExprGraph, opts ...VMOpt) *lispMachine`

NewLispMachine creates a VM that executes the graph as it is traversed. Depending on the VMOpts passed in this VM is also capable of performing automatic differentiation.

func NewTapeMachine¶Uses

`func NewTapeMachine(g *ExprGraph, opts ...VMOpt) *tapeMachine`

NewTapeMachine creates a VM that compiles a graph into a prog.

func ReturnNode¶Uses

`func ReturnNode(n *Node)`

ReturnNode returns a node to the pool. It does not check that the *Node has been removed from the graph. USE WITH CAUTION.

func ReturnType¶Uses

`func ReturnType(t hm.Type)`

ReturnType ...

func S¶Uses

`func S(start int, opt ...int) tensor.Slice`

S creates a tensor.Slice. end and step are optional; they are passed in as the first and second optional parameters respectively.

Default end is start+1. Default step is 1, unless end == start+1, in which case it defaults to 0.

func SetDerivOf¶Uses

`func SetDerivOf(deriv, of *Node)`

SetDerivOf is used to hack around the fundamental limitations of Gorgonia.

Specifically it is used to set a node as the derivative of another node, used in the cuDNN version of batch norm.

The cuDNN BatchNorm operation produces the derivatives for the scale and bias as a side effect of calculating the derivative of the input. Because Gorgonia's Ops are modelled as pure functions (with no tuple returns), this causes a bit of trouble. With clever use of scratch-space ops, multiple return values can be simulated, but this causes derivatives to not be set correctly.

func SetOptimizationLevel¶Uses

`func SetOptimizationLevel(i int)`

SetOptimizationLevel sets the fast math optimization level. By default, fast math is turned off, and this function is a no-op.

Use the `fastmath` build tag to use fast math

func TypeOf¶Uses

`func TypeOf(v Value) hm.Type`

TypeOf returns the Type of the value

func Uniform32¶Uses

`func Uniform32(low, high float64, s ...int) []float32`

Uniform32 returns a []float32 drawn from a uniform distribution between [low, high) that is provided

func Uniform64¶Uses

`func Uniform64(low, high float64, s ...int) []float64`

Uniform64 returns a []float64 drawn from a uniform distribution between [low, high) that is provided

func UnsafeLet¶Uses

`func UnsafeLet(n *Node, be interface{}) error`

UnsafeLet binds a Value to any node, not just a variable node. This means that you can use it to change any node's value at the runtime of the graph. UNSAFE!

Additional notes: if `be` is a tensor.Slice, and the node's op is a sliceOp or sliceIncrOp, the op's slice will be replaced with the new slice.

func Use¶Uses

`func Use(b BLAS)`

Use defines which BLAS implementation gorgonia should use. The default is Gonum's Native. These are the other options:

```Use(blastoise.Implementation())
Use(cubone.Implementation())
Use(cgo.Implementation)
```

Note the differences in the brackets. The blastoise and cubone ones are functions.

func UseNonStable¶Uses

`func UseNonStable()`

UseNonStable turns off the stabilization functions when building graphs.

func UseStabilization¶Uses

`func UseStabilization()`

UseStabilization sets the global option to invoke stabilization functions when building the graph. Numerical stabilization is on by default

func ValueClose¶Uses

`func ValueClose(a, b Value) bool`

ValueClose checks whether two values are close to one another. It's predominantly used as an alternative equality test for floats

func ValueEq¶Uses

`func ValueEq(a, b Value) bool`

ValueEq is the equality function for values

func WalkGraph¶Uses

`func WalkGraph(start *Node) <-chan *Node`

WalkGraph walks a graph. It returns a channel of *Nodes, so be sure to consume the channel or there may be a deadlock

func WithGraphName¶Uses

`func WithGraphName(name string) graphconopt`

WithGraphName is a ExprGraph construction option that provides a name.

```type ADOp interface {
Op

DoDiff(ctx ExecutionContext, inputs Nodes, output *Node) error
}```

An ADOp is an Op that supports automatic differentiation.

```type AdaGradSolver struct {
// contains filtered or unexported fields
}```

`func NewAdaGradSolver(opts ...SolverOpt) *AdaGradSolver`

`func (s *AdaGradSolver) Step(model []ValueGrad) (err error)`

Step steps through each node in the model and applies the Adaptive Gradient gradient descent algorithm on the value.

This function will error out if the nodes do not have an associated Grad value.

```type AdamSolver struct {
// contains filtered or unexported fields
}```

AdamSolver is the Adaptive Moment Estimation solver (basically RMSProp on steroids). Paper: http://arxiv.org/abs/1412.6980

We overload the purpose of the existing *dualValue data structure. Instead of just holding a value and its derivative, the cache's *dualValues hold the means of the gradients (in .Value) and the variances of the gradients (in .d).

`func NewAdamSolver(opts ...SolverOpt) *AdamSolver`

NewAdamSolver creates an AdamSolver with these default values:

```eta (learn rate)	  	: 0.001
eps (smoothing factor)		: 1e-8
beta1				: 0.9
beta2 				: 0.999
batch				: 1
```

`func (s *AdamSolver) Step(model []ValueGrad) (err error)`

Step steps through each node in the model and applies the Adaptive Moment Estimation gradient descent algorithm on the value.

This function will error out if the nodes do not have an associated Grad value.

type Arena¶Uses

```type Arena interface {
Get(dev Device, size int64) (tensor.Memory, error)       // Get returns a NoOpError when it cannot get a memory. Please allocate
GetFromValue(dev Device, v Value) (tensor.Memory, error) // Gets a memory and copies the values into the memory and returns it.
Put(dev Device, mem tensor.Memory, size int64)           // puts the memory back into the arena
PutValue(dev Device, v Value)                            // puts the memory back into the arena

// Transfers memory from device to device
Transfer(toDev, fromDev Device, v Value, synchronous bool) (retVal Value, err error)
}```

Arena is a representation of a pool of tensor.Memory

type AutoDiffError¶Uses

`type AutoDiffError struct{}`

AutoDiffError is an error which should be passed if the function is not differentiable. This is useful for Op implementations

func (AutoDiffError) Error¶Uses

`func (err AutoDiffError) Error() string`

type B¶Uses

`type B bool`

B represents a bool value.

func (*B) Data¶Uses

`func (v *B) Data() interface{}`

Data returns the original representation of the Value

func (*B) Dtype¶Uses

`func (v *B) Dtype() tensor.Dtype`

Dtype returns the Dtype of the value

func (*B) Format¶Uses

`func (v *B) Format(s fmt.State, c rune)`

Format implements fmt.Formatter

func (*B) MemSize¶Uses

`func (v *B) MemSize() uintptr`

MemSize satisfies the tensor.Memory interface

func (*B) Pointer¶Uses

`func (v *B) Pointer() unsafe.Pointer`

Pointer returns the pointer as an unsafe.Pointer. Satisfies the tensor.Memory interface

func (*B) Shape¶Uses

`func (v *B) Shape() tensor.Shape`

Shape returns a scalar shape for all scalar values

func (*B) Size¶Uses

`func (v *B) Size() int`

Size returns 0 for all scalar Values

func (*B) Uintptr¶Uses

`func (v *B) Uintptr() uintptr`

Uintptr satisfies the tensor.Memory interface

type BLAS¶Uses

```type BLAS interface {
blas.Float32
blas.Float64
}```

BLAS represents all the possible implementations of BLAS. The default is Gonum's Native

func WhichBLAS¶Uses

`func WhichBLAS() BLAS`

WhichBLAS returns the BLAS that gorgonia uses.

type BarzilaiBorweinSolver¶Uses

```type BarzilaiBorweinSolver struct {
// contains filtered or unexported fields
}```

BarzilaiBorweinSolver performs gradient descent in the steepest-descent direction. It solves 0 = F(x) by iterating x_{i+1} = x_i - eta * Grad(F)(x_i), where the learn rate eta is calculated by the Barzilai-Borwein method:

```eta(x_i) = <(x_i - x_{i-1}), (Grad(F)(x_i) - Grad(F)(x_{i-1}))> / ||(Grad(F)(x_i) - Grad(F)(x_{i-1}))||^2
```

The input learn rate is used for the first iteration. TODO: Check out stochastic implementations, e.g. "Barzilai-Borwein Step Size for Stochastic Gradient Descent" https://arxiv.org/abs/1605.04131

func NewBarzilaiBorweinSolver¶Uses

`func NewBarzilaiBorweinSolver(opts ...SolverOpt) *BarzilaiBorweinSolver`

func (*BarzilaiBorweinSolver) Step¶Uses

`func (s *BarzilaiBorweinSolver) Step(model []ValueGrad) (err error)`

Step steps through each node in the model and applies the Barzilai-Borwein gradient descent algorithm on the value.

This function will error out if the nodes do not have an associated Grad value.

type BatchNormOp¶Uses

```type BatchNormOp struct {
// contains filtered or unexported fields
}```

BatchNormOp is a batch normalization process as described by Ioffe and Szegedy (2015) - http://arxiv.org/abs/1502.03167

Normalization is done as:

```γ(x - μ) / σ + β
```

γ is the scaling factor and β is the offset factor. These are created by BatchNorm()

func (*BatchNormOp) Arity¶Uses

`func (op *BatchNormOp) Arity() int`

func (*BatchNormOp) CallsExtern¶Uses

`func (op *BatchNormOp) CallsExtern() bool`

func (*BatchNormOp) DiffWRT¶Uses

`func (op *BatchNormOp) DiffWRT(inputs int) []bool`

func (*BatchNormOp) Do¶Uses

`func (op *BatchNormOp) Do(values ...Value) (retVal Value, err error)`

func (*BatchNormOp) DoDiff¶Uses

`func (op *BatchNormOp) DoDiff(ctx ExecutionContext, inputs Nodes, output *Node) error`

func (*BatchNormOp) Hashcode¶Uses

`func (op *BatchNormOp) Hashcode() uint32`

func (*BatchNormOp) InferShape¶Uses

`func (op *BatchNormOp) InferShape(ns ...DimSizer) (tensor.Shape, error)`

func (*BatchNormOp) OverwritesInput¶Uses

`func (op *BatchNormOp) OverwritesInput() int`

func (*BatchNormOp) Reset¶Uses

`func (op *BatchNormOp) Reset() error`

func (*BatchNormOp) ReturnsPtr¶Uses

`func (op *BatchNormOp) ReturnsPtr() bool`

func (*BatchNormOp) SetTesting¶Uses

`func (op *BatchNormOp) SetTesting()`

func (*BatchNormOp) SetTraining¶Uses

`func (op *BatchNormOp) SetTraining()`

func (*BatchNormOp) String¶Uses

`func (op *BatchNormOp) String() string`

func (*BatchNormOp) SymDiff¶Uses

`func (op *BatchNormOp) SymDiff(inputs Nodes, output *Node, grad *Node) (retVal Nodes, err error)`

func (*BatchNormOp) Type¶Uses

`func (op *BatchNormOp) Type() hm.Type`

func (*BatchNormOp) UsePreallocDo¶Uses

`func (op *BatchNormOp) UsePreallocDo(prealloc Value, inputs ...Value) (retVal Value, err error)`

func (*BatchNormOp) WriteHash¶Uses

`func (op *BatchNormOp) WriteHash(h hash.Hash)`

type Batched¶Uses

```type Batched interface {
WorkAvailable() <-chan struct{}
DoWork()
}```

type BatchedBLAS¶Uses

```type BatchedBLAS interface {
Batched
BLAS
}```

type BatchedDevice¶Uses

```type BatchedDevice interface {
Batched
Retval() interface{}
Errors() error
}```

type BestDoer¶Uses

```type BestDoer interface {
Op

BestDo(prealloc Value, vals ...Value) (Value, error)
}```

type BinaryOp¶Uses

```type BinaryOp interface {
Op

IsBinary() bool
}```

A BinaryOp is an Op that takes only two inputs

`type BroadcastPattern byte`

BroadcastPattern is actually a bit array. It's split into 2 nibbles - the left nibble represents the left operand, the right nibble represents the right operand:

```xxxx|xxxx
```

The least significant bit of each nibble is elem 0. Concrete examples:

```00000010 (0x02) = broadcast axis 1 of the right operand
00000001 (0x01) = broadcast axis 0 of the right operand
00000101 (0x05) = broadcast axis 0 AND axis 2 of the right operand
00010000 (0x10) = broadcast axis 0 of the left operand
00110000 (0x30) = broadcast axis 0 and axis 1 of the left operand
```

You get the drill.

Do note that the current limitation of the BroadcastPattern allows only up to 4 dimensions per operand.
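The nibble layout above can be decoded with plain bit operations. A sketch (decodeBroadcastPattern is a hypothetical helper for illustration, not part of the package):

```go
package main

import "fmt"

// decodeBroadcastPattern lists which axes of each operand a pattern
// byte marks for broadcasting: the low nibble holds the right
// operand's axes, the high nibble the left operand's, and the least
// significant bit of each nibble is axis 0.
func decodeBroadcastPattern(p byte) (left, right []int) {
	for axis := 0; axis < 4; axis++ {
		if p&(1<<uint(axis)) != 0 {
			right = append(right, axis)
		}
		if p&(1<<uint(axis+4)) != 0 {
			left = append(left, axis)
		}
	}
	return left, right
}

func main() {
	l, r := decodeBroadcastPattern(0x30)
	fmt.Println("left:", l, "right:", r) // left: [0 1] right: []
}
```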

`func NewBroadcastPattern(leftAxes, rightAxes []byte) BroadcastPattern`

type CLDoer¶Uses

```type CLDoer interface {
CLDo(inputs ...Value) (Value, error)
}```

CLDoer uses OpenCL to perform the Op. As of now, there are NO Ops that support OpenCL

```type CUDAADOp interface {
CUDADoDiff(extern External, dev Device, inputs Nodes, output *Node) error
}```

```type CUDADoer interface {
CUDADo(extern External, dev Device, prealloc Value, inputs ...Value) (retVal Value, err error)
}```

CUDADoer uses CUDA to perform the Op.

type CloneErrorer¶Uses

```type CloneErrorer interface {
Clone() (interface{}, error)
}```

CloneErrorer represents any type that can clone itself and return an error if necessary

type Cloner¶Uses

```type Cloner interface {
Clone() interface{}
}```

Cloner represents any type that can clone itself.

type CopierFrom¶Uses

```type CopierFrom interface {
CopyFrom(src interface{}) error
}```

CopierFrom represents any type that can copy data from the source provided.

type CopierTo¶Uses

```type CopierTo interface {
CopyTo(dest interface{}) error
}```

CopierTo represents any type that can copy data to the destination.

type Device¶Uses

`type Device int`

Device represents the device on which the code will be executed. In this build, all code will run on the CPU

```const (
CPU Device = 0 // CPU the only device the graph will be executed on
)```

func (Device) Alloc¶Uses

`func (d Device) Alloc(extern External, size int64) (tensor.Memory, error)`

Alloc allocates memory on the device. This is currently a NO-OP in this build

func (Device) Free¶Uses

`func (d Device) Free(extern External, mem tensor.Memory, sie uint) error`

Free frees the memory on the device. This is currently a NO-OP in this build

func (Device) IsGPU¶Uses

`func (d Device) IsGPU() bool`

IsGPU will always return false in this build

func (Device) String¶Uses

`func (d Device) String() string`

String implements fmt.Stringer and runtime.Stringer

type DimSizer¶Uses

```type DimSizer interface {
DimSize(int) (int, error)
}```

DimSizer is any type (typically a tensor.Shape) that allows querying for a dimension size given an input dimension.

func ShapesToDimSizers¶Uses

`func ShapesToDimSizers(shapes []tensor.Shape) []DimSizer`

ShapesToDimSizers is a convenience function to convert a slice of tensor.Shape to a slice of DimSizer

type Dtyper¶Uses

```type Dtyper interface {
Dtype() tensor.Dtype
}```

Dtyper represents any type (typically a Value) that knows its own Dtype

type ExecutionContext¶Uses

```type ExecutionContext struct {
External
Device
}```

ExecutionContext informs how an op should be executed

type ExprGraph¶Uses

```type ExprGraph struct {
// contains filtered or unexported fields
}```

ExprGraph is a data structure for a directed acyclic graph (of expressions). This structure is the main entry point for Gorgonia.

func NewGraph¶Uses

`func NewGraph(opts ...graphconopt) *ExprGraph`

NewGraph creates a new graph. Duh

`func (g *ExprGraph) AddNode(n *Node) (retVal *Node)`

AddNode adds n to the graph. It panics if the added node ID matches an existing node ID.

func (*ExprGraph) AllNodes¶Uses

`func (g *ExprGraph) AllNodes() Nodes`

AllNodes is like Nodes, but returns Nodes instead of []graph.Node. Nodes() has been reserved for the graph.Directed interface, so this one is named AllNodes instead

func (*ExprGraph) ByName¶Uses

`func (g *ExprGraph) ByName(name string) (retVal Nodes)`

ByName returns nodes that have the name provided. Bear in mind that the name that is compared to is the internal name, not the result of calling node.Name(). The reason for doing this is for ease of finding only names that are user-supplied, instead of autogenerated names

func (*ExprGraph) Clone¶Uses

`func (g *ExprGraph) Clone() interface{}`

Clone clones the graph. All nodes get cloned, and their values are cloned as well.

func (*ExprGraph) Constant¶Uses

`func (g *ExprGraph) Constant(v Value) *Node`

Constant returns a constant that may be found in the graph. If no such constant is found, a new one is created instead

func (*ExprGraph) Edge¶Uses

`func (g *ExprGraph) Edge(u, v int64) graph.Edge`

Edge returns the edge from u to v if such an edge exists and nil otherwise. The node v must be directly reachable from u as defined by the From method.

func (*ExprGraph) ExactSubgraphRoots¶Uses

`func (g *ExprGraph) ExactSubgraphRoots(ns ...*Node) *ExprGraph`

ExactSubgraphRoots creates a subgraph from the roots provided. The difference between SubgraphRoots and ExactSubgraphRoots is that ExactSubGraphRoots will not attempt to discover if any nodes are missing.

Given a function like the following:

```z = x + y
set(x, -x.Grad) // setting the value of x to the negative of the gradient
```

When SubgraphRoots is used on z, the `-x.Grad` will be included. When using ExactSubgraphRoots, only `x` and `y` are included in the subgraph

func (*ExprGraph) From¶Uses

`func (g *ExprGraph) From(nodeid int64) graph.Nodes`

From returns all nodes in g that can be reached directly from n.

func (*ExprGraph) Has¶Uses

`func (g *ExprGraph) Has(nodeid int64) bool`

Has returns whether the node exists within the graph.

func (*ExprGraph) HasEdgeBetween¶Uses

`func (g *ExprGraph) HasEdgeBetween(x, y int64) bool`

HasEdgeBetween returns whether an edge exists between nodes x and y without considering direction.

func (*ExprGraph) HasEdgeFromTo¶Uses

`func (g *ExprGraph) HasEdgeFromTo(u, v int64) bool`

HasEdgeFromTo returns whether an edge exists in the graph from u to v.

func (*ExprGraph) Inputs¶Uses

`func (g *ExprGraph) Inputs() (retVal Nodes)`

Inputs returns a list of nodes which are inputs (that is to say, the user is required to set a value in it)

func (*ExprGraph) Node¶Uses

`func (g *ExprGraph) Node(id int64) graph.Node`

Node returns the node in the graph with the given ID.

func (*ExprGraph) Nodes¶Uses

`func (g *ExprGraph) Nodes() graph.Nodes`

Nodes returns all the nodes in the graph.

func (*ExprGraph) RemoveNode¶Uses

`func (g *ExprGraph) RemoveNode(node graph.Node)`

RemoveNode removes n from the graph, as well as any edges attached to it. If the node is not in the graph it is a no-op.

func (*ExprGraph) Roots¶Uses

`func (g *ExprGraph) Roots() (retVal Nodes)`

Roots returns a list of nodes that are not children of any other nodes

func (*ExprGraph) SetEdge¶Uses

`func (g *ExprGraph) SetEdge(e graph.Edge)`

SetEdge adds e, an edge from one node to another. If the nodes do not exist, they are added. It will panic if the IDs of the e.From and e.To are equal.

func (*ExprGraph) String¶Uses

`func (g *ExprGraph) String() string`

func (*ExprGraph) Subgraph¶Uses

`func (g *ExprGraph) Subgraph(ns ...*Node) *ExprGraph`

Subgraph subsets a graph. This function has overloaded meanings - If only one node is passed in, it assumes that the one node is the root, otherwise, it treats ns as the subset of nodes to be included in the subgraph

func (*ExprGraph) SubgraphRoots¶Uses

`func (g *ExprGraph) SubgraphRoots(ns ...*Node) *ExprGraph`

SubgraphRoots creates a subgraph, assuming the provided nodes are roots to the new subgraph.

func (*ExprGraph) To¶Uses

`func (g *ExprGraph) To(nid int64) graph.Nodes`

To returns all nodes in g that can reach directly to n.

func (*ExprGraph) ToDot¶Uses

`func (g *ExprGraph) ToDot() string`

ToDot generates the graph in Graphviz format. It generates output for the entire graph, which may contain multiple trees with different roots. TODO: This is getting unwieldy. Perhaps refactor into a ToDot(...Opt)?

func (*ExprGraph) UnbindAll¶Uses

`func (g *ExprGraph) UnbindAll()`

UnbindAll unbinds all the values from the nodes

func (*ExprGraph) UnbindAllNonInputs¶Uses

`func (g *ExprGraph) UnbindAllNonInputs()`

UnbindAllNonInputs unbinds all the values from nodes that aren't input nodes

```type ExternMetadata struct {
tensor.Engine
// contains filtered or unexported fields
}```

ExternMetadata is used to hold metadata about external execution devices. In this build, it's an empty struct because the default build doesn't use external devices to execute the graph on

`func (m *ExternMetadata) Cleanup()`

Cleanup cleans up the ancillary allocations made during the calling of batched external device function.

The reason for this method is due to the fact that there is currently no way to free memory while the context is still running without causing some weirdness to the CUDA calls.

This is a No-op in this build

`func (m *ExternMetadata) DoWork() error`

DoWork flushes any batched cgo calls. In this build it only flushes the batched BLAS calls.

`func (m *ExternMetadata) Get(dev Device, size int64) (tensor.Memory, error)`

Get allocates a memory of the size. In this build it returns a NoOpError.

`func (m *ExternMetadata) GetFromValue(dev Device, v Value) (tensor.Memory, error)`

GetFromValue allocates a memory of the size of v. In this build it returns a NoOpError, and v itself

`func (m ExternMetadata) HasFunc(name string) bool`

HasFunc will always return false in this build

`func (m *ExternMetadata) Put(dev Device, mem tensor.Memory, size int64)`

Put puts a previously allocated memory slab of the provided size back into the pool. Currently this is a No-op in this build.

`func (m *ExternMetadata) PutValue(dev Device, v Value)`

PutValue puts a previously allocated value into the pool. In this build, it is a noop.

`func (m *ExternMetadata) Reset()`

`func (m *ExternMetadata) Signal()`

Signal sends a signal down the workavailable channel, telling the VM to call the DoWork method. Signal is a synchronous method

`func (m *ExternMetadata) Sync() chan struct{}`

Sync returns the sync channel

`func (m *ExternMetadata) Transfer(toDev, fromDev Device, v Value, synchronous bool) (retVal Value, err error)`

Transfer transfers a value from device to device. In this build, it's a noop, returning the input value, and a nil error

`func (m *ExternMetadata) WorkAvailable() <-chan bool`

WorkAvailable returns a channel of empty struct, which is used to signal to the VM when there is work available. The VM will then call the DoWork method.

type External¶Uses

```type External interface {
Arena
Signal() // signals the machine to do work
Sync() chan struct{}
}```

External is a representation of an external device (cuda/cgo/openCL), conceptually modelled as a machine.

type ExternalOp¶Uses

```type ExternalOp struct {
Op
ExecutionContext

Prealloc  Value
Incr      Value // is this a Incr? IncrDoers have higher precedence over PreallocDo
UseUnsafe bool  // Is this an unsafe op? Lowest of all "special" Dos
}```

ExternalOp is an op that contains an external context. This allows for ops to be run without needing a VM

func NewAddOp¶Uses

`func NewAddOp(a, b *Node, ctx ExecutionContext) *ExternalOp`

func NewExternalOp¶Uses

`func NewExternalOp(op Op, ctx ExecutionContext, prealloc Value) *ExternalOp`

NewExternalOp creates a new *ExternalOp.

func NewHadamardProdOp¶Uses

`func NewHadamardProdOp(a, b *Node, ctx ExecutionContext) *ExternalOp`

func NewSubOp¶Uses

`func NewSubOp(a, b *Node, ctx ExecutionContext) *ExternalOp`

NewSubOp creates a new *ExternalOp that wraps a sub op

func (*ExternalOp) DetermineDevice¶Uses

`func (op *ExternalOp) DetermineDevice(inputs Nodes, output *Node) error`

func (*ExternalOp) Do¶Uses

`func (op *ExternalOp) Do(vals ...Value) (Value, error)`

Do performs the op.

func (*ExternalOp) String¶Uses

`func (op *ExternalOp) String() string`

type F32¶Uses

`type F32 float32`

F32 represents a float32 value.

func (*F32) Data¶Uses

`func (v *F32) Data() interface{}`

Data returns the original representation of the Value

func (*F32) Dtype¶Uses

`func (v *F32) Dtype() tensor.Dtype`

Dtype returns the Dtype of the value

func (*F32) Format¶Uses

`func (v *F32) Format(s fmt.State, c rune)`

Format implements fmt.Formatter

func (*F32) MemSize¶Uses

`func (v *F32) MemSize() uintptr`

MemSize satisfies the tensor.Memory interface

func (*F32) Pointer¶Uses

`func (v *F32) Pointer() unsafe.Pointer`

Pointer returns the pointer as an unsafe.Pointer. Satisfies the tensor.Memory interface

func (*F32) Shape¶Uses

`func (v *F32) Shape() tensor.Shape`

Shape returns a scalar shape for all scalar values

func (*F32) Size¶Uses

`func (v *F32) Size() int`

Size returns 0 for all scalar Values

func (*F32) Uintptr¶Uses

`func (v *F32) Uintptr() uintptr`

Uintptr satisfies the tensor.Memory interface

type F64¶Uses

`type F64 float64`

F64 represents a float64 value.

func (*F64) Data¶Uses

`func (v *F64) Data() interface{}`

Data returns the original representation of the Value

func (*F64) Dtype¶Uses

`func (v *F64) Dtype() tensor.Dtype`

Dtype returns the Dtype of the value

func (*F64) Format¶Uses

`func (v *F64) Format(s fmt.State, c rune)`

Format implements fmt.Formatter

func (*F64) MemSize¶Uses

`func (v *F64) MemSize() uintptr`

MemSize satisfies the tensor.Memory interface

func (*F64) Pointer¶Uses

`func (v *F64) Pointer() unsafe.Pointer`

Pointer returns the pointer as an unsafe.Pointer. Satisfies the tensor.Memory interface

func (*F64) Shape¶Uses

`func (v *F64) Shape() tensor.Shape`

Shape returns a scalar shape for all scalar values

func (*F64) Size¶Uses

`func (v *F64) Size() int`

Size returns 0 for all scalar Values

func (*F64) Uintptr¶Uses

`func (v *F64) Uintptr() uintptr`

Uintptr satisfies the tensor.Memory interface

type I¶Uses

`type I int`

I represents an int value.

func (*I) Data¶Uses

`func (v *I) Data() interface{}`

Data returns the original representation of the Value

func (*I) Dtype¶Uses

`func (v *I) Dtype() tensor.Dtype`

Dtype returns the Dtype of the value

func (*I) Format¶Uses

`func (v *I) Format(s fmt.State, c rune)`

Format implements fmt.Formatter

func (*I) MemSize¶Uses

`func (v *I) MemSize() uintptr`

MemSize satisfies the tensor.Memory interface

func (*I) Pointer¶Uses

`func (v *I) Pointer() unsafe.Pointer`

Pointer returns the pointer as an unsafe.Pointer. Satisfies the tensor.Memory interface

func (*I) Shape¶Uses

`func (v *I) Shape() tensor.Shape`

Shape returns a scalar shape for all scalar values

func (*I) Size¶Uses

`func (v *I) Size() int`

Size returns 0 for all scalar Values

func (*I) Uintptr¶Uses

`func (v *I) Uintptr() uintptr`

Uintptr satisfies the tensor.Memory interface

type I32¶Uses

`type I32 int32`

I32 represents an int32 value.

func (*I32) Data¶Uses

`func (v *I32) Data() interface{}`

Data returns the original representation of the Value

func (*I32) Dtype¶Uses

`func (v *I32) Dtype() tensor.Dtype`

Dtype returns the Dtype of the value

func (*I32) Format¶Uses

`func (v *I32) Format(s fmt.State, c rune)`

Format implements fmt.Formatter

func (*I32) MemSize¶Uses

`func (v *I32) MemSize() uintptr`

MemSize satisfies the tensor.Memory interface

func (*I32) Pointer¶Uses

`func (v *I32) Pointer() unsafe.Pointer`

Pointer returns the pointer as an unsafe.Pointer. Satisfies the tensor.Memory interface

func (*I32) Shape¶Uses

`func (v *I32) Shape() tensor.Shape`

Shape returns a scalar shape for all scalar values

func (*I32) Size¶Uses

`func (v *I32) Size() int`

Size returns 0 for all scalar Values

func (*I32) Uintptr¶Uses

`func (v *I32) Uintptr() uintptr`

Uintptr satisfies the tensor.Memory interface

type I64¶Uses

`type I64 int64`

I64 represents an int64 value.

func (*I64) Data¶Uses

`func (v *I64) Data() interface{}`

Data returns the original representation of the Value

func (*I64) Dtype¶Uses

`func (v *I64) Dtype() tensor.Dtype`

Dtype returns the Dtype of the value

func (*I64) Format¶Uses

`func (v *I64) Format(s fmt.State, c rune)`

Format implements fmt.Formatter

func (*I64) MemSize¶Uses

`func (v *I64) MemSize() uintptr`

MemSize satisfies the tensor.Memory interface

func (*I64) Pointer¶Uses

`func (v *I64) Pointer() unsafe.Pointer`

Pointer returns the pointer as an unsafe.Pointer. Satisfies the tensor.Memory interface

func (*I64) Shape¶Uses

`func (v *I64) Shape() tensor.Shape`

Shape returns a scalar shape for all scalar values

func (*I64) Size¶Uses

`func (v *I64) Size() int`

Size returns 0 for all scalar Values

func (*I64) Uintptr¶Uses

`func (v *I64) Uintptr() uintptr`

Uintptr satisfies the tensor.Memory interface

type IncrDoer¶Uses

```type IncrDoer interface {
IncrDo(toIncr Value, inputs ...Value) error
}```

IncrDoer increments the toIncr with the result of doing

type InitWFn¶Uses

`type InitWFn func(dt tensor.Dtype, s ...int) interface{}`

InitWFn is a type of helper function to help initialize weights vector/matrices. It generates the backing required for the tensors.

It's typically used in closures
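As a plain-Go illustration of that closure pattern (a simplified, hypothetical sketch: the parameters are captured once and the returned function generates a float64 backing per shape; the real InitWFn also takes a tensor.Dtype):

```go
package main

import (
	"fmt"
	"math/rand"
)

// initWFn mirrors the shape of InitWFn, minus the tensor.Dtype parameter
// (a simplification for illustration).
type initWFn func(s ...int) []float64

// gaussianLike shows why such functions are typically used in closures:
// mean and stdev are captured once, and the returned function generates a
// fresh backing slice each time it is called with a shape.
func gaussianLike(mean, stdev float64) initWFn {
	return func(s ...int) []float64 {
		size := 1
		for _, d := range s {
			size *= d
		}
		backing := make([]float64, size)
		for i := range backing {
			backing[i] = rand.NormFloat64()*stdev + mean
		}
		return backing
	}
}

func main() {
	initFn := gaussianLike(0, 1)
	w := initFn(2, 2) // backing for a 2x2 matrix
	fmt.Println(len(w)) // 4
}
```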

func Gaussian¶Uses

`func Gaussian(mean, stdev float64) InitWFn`

Gaussian creates an InitWFn with the specified parameters. Example Usage:

```w := NewMatrix(g, Float64, WithName("w"), WithShape(2,2), WithInit(Gaussian(0, 1)))
```

This will create a backing slice of []float64 with a length of 4, with values drawn from a Gaussian distribution

func GlorotN¶Uses

`func GlorotN(gain float64) InitWFn`

GlorotN creates an InitWFn that populates a Value with weights normally sampled using Glorot et al.'s algorithm

func GlorotU¶Uses

`func GlorotU(gain float64) InitWFn`

GlorotU creates an InitWFn that populates a Value with weights uniformly sampled using Glorot et al.'s algorithm

func Ones¶Uses

`func Ones() InitWFn`

func RangedFrom¶Uses

`func RangedFrom(start int) InitWFn`

RangedFrom creates an InitWFn that populates a Value starting from the provided start value, incrementing by 1 for each subsequent element.
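A plain-Go sketch of what such a ranged backing looks like for float64 (hypothetical analogue; the real InitWFn handles multiple dtypes):

```go
package main

import "fmt"

// rangedFrom fills a float64 backing with start, start+1, start+2, ...
func rangedFrom(start, size int) []float64 {
	backing := make([]float64, size)
	for i := range backing {
		backing[i] = float64(start + i)
	}
	return backing
}

func main() {
	fmt.Println(rangedFrom(3, 4)) // [3 4 5 6]
}
```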

func Uniform¶Uses

`func Uniform(low, high float64) InitWFn`

Uniform creates an InitWFn with the specified parameters. Example Usage:

```w := NewMatrix(g, Float64, WithName("w"), WithShape(2,2), WithInit(Uniform(-1, 1)))
```

This will create a backing slice of []float64 with a length of 4, with values drawn from a uniform distribution

func ValuesOf¶Uses

`func ValuesOf(val interface{}) InitWFn`

func Zeroes¶Uses

`func Zeroes() InitWFn`

Zeroes creates an InitWfn that populates a Value with... zeroes. I don't know what you expected.

type Momentum¶Uses

```type Momentum struct {
// contains filtered or unexported fields
}```

Momentum is the stochastic gradient descent optimizer with momentum item.

func NewMomentum¶Uses

`func NewMomentum(opts ...SolverOpt) *Momentum`

NewMomentum creates a new Momentum with sane-ish default values

func (*Momentum) Step¶Uses

`func (s *Momentum) Step(model []ValueGrad) (err error)`

Step steps through each node in the model and applies the Momentum stochastic gradient descent algorithm on the value.

This function will error out if the nodes do not have an associated Grad value.

type Namer¶Uses

```type Namer interface {
Name() string
}```

Namer is anything that has a name

type NoOpError¶Uses

```type NoOpError interface {
NoOp() bool
}```

NoOpError is an error returned when an operation does nothing.

type NoRetOp¶Uses

```type NoRetOp interface {
Op

ReturnsNothing() bool
}```

A NoRetOp is an Op that reads a value, but does not return any value. It's a representation of an impure function

type Node¶Uses

```type Node struct {
// contains filtered or unexported fields
}```

A Node is a node in the computation graph

func Abs¶Uses

`func Abs(a *Node) (*Node, error)`

Abs performs a pointwise abs.

func Add¶Uses

`func Add(a, b *Node) (*Node, error)`

func ApplyOp¶Uses

`func ApplyOp(op Op, children ...*Node) (retVal *Node, err error)`

ApplyOp is the generic function application - for when no specialization is required

func ApplyOpWithName¶Uses

`func ApplyOpWithName(op Op, name string, children ...*Node) (retVal *Node, err error)`

ApplyOpWithName applies the op, and then gives the node the given name

func At¶Uses

`func At(a *Node, coords ...int) (retVal *Node, err error)`

At is a symbolic operation for getting a value at the provided coordinates. If the input is a scalar, all the coordinates MUST be 0, or else an error will be returned.

func BatchedMatMul¶Uses

`func BatchedMatMul(a, b *Node) (retVal *Node, err error)`

BatchedMatMul returns a node representing the batched mat mul operation

func BinaryXent¶Uses

`func BinaryXent(output, target *Node) (retVal *Node, err error)`

BinaryXent is a convenience function for doing binary crossentropy stuff. The formula is as below:

```-(y * logprob) +  (1-y)(1-logprob)
```

func BinomialRandomNode¶Uses

`func BinomialRandomNode(g *ExprGraph, dt tensor.Dtype, trials, prob float64, shape ...int) *Node`

BinomialRandomNode creates an input node that has a random op, so that every time the node is passed, random values will be plucked from a binomial distribution with the trials and prob provided. The type of the node depends on the shape passed in. To get a scalar value at run time, don't pass in any shapes

Whilst technically the number of trials of a binomial distribution should be a discrete value (you can't have half a trial), to keep the API uniform, trials is passed in as a float64, but will be truncated to an int at runtime.

func Ceil¶Uses

`func Ceil(a *Node) (*Node, error)`

Ceil performs a pointwise ceil.

func Concat¶Uses

`func Concat(axis int, ns ...*Node) (retVal *Node, err error)`

Concat performs a concatenate on the provided axis and inputs.

func Conv1d¶Uses

`func Conv1d(in, filter *Node, kernel, pad, stride, dilation int) (*Node, error)`

Conv1d is a 1D convolution. It relies on Conv2d

func Conv2d¶Uses

`func Conv2d(im, filter *Node, kernelShape tensor.Shape, pad, stride, dilation []int) (retVal *Node, err error)`

Conv2d is a simple 2D convolution, to be used for CPU computation only. If CuDNN is used, use the CUDAConv2D function. These are the properties the inputs must fulfil:

```im: must have a 4D shape. Expected format is BCHW (batch, channel, height, width)
filter: must have a 4D shape: (batch, kernel, height, width)
kernelShape: shape of the filter kernel
pad: len(pad) == 2
stride: len(stride) == 2
dilation: len(dilation) == 2
```
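These constraints imply the usual convolution output-size arithmetic per spatial axis. As a plain-Go sketch (the standard formula, not lifted from gorgonia's source):

```go
package main

import "fmt"

// convOutSize is the standard convolution output-size arithmetic for one
// spatial axis: (in + 2*pad - dilation*(kernel-1) - 1) / stride + 1.
func convOutSize(in, kernel, pad, stride, dilation int) int {
	return (in+2*pad-dilation*(kernel-1)-1)/stride + 1
}

func main() {
	// A "same" convolution: 3x3 kernel, pad 1, stride 1 keeps a 28-wide axis at 28.
	fmt.Println(convOutSize(28, 3, 1, 1, 1)) // 28
	// Stride 2 roughly halves it.
	fmt.Println(convOutSize(28, 3, 1, 2, 1)) // 14
}
```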

func Cos¶Uses

`func Cos(a *Node) (*Node, error)`

Cos performs a pointwise cos.

func Cube¶Uses

`func Cube(a *Node) (*Node, error)`

Cube performs a pointwise cube.

func Div¶Uses

`func Div(a, b *Node) (retVal *Node, err error)`

Div is a shortcut function for HadamardDiv for scalar values. For matrix/tensor values, the matrix division operation is not yet handled, and will panic.

func Dropout¶Uses

`func Dropout(x *Node, prob float64) (retVal *Node, err error)`

Dropout is a convenience function to implement dropout. It randomly zeroes out elements of a *Tensor with the given probability, using draws from a uniform distribution

func Eq¶Uses

`func Eq(a, b *Node, retSame bool) (*Node, error)`

Eq performs a pointwise eq operation.

```retSame indicates if the data type of the return value should be the same as the input data type. If retSame is false, the return type defaults to Bool.
```

func Exp¶Uses

`func Exp(a *Node) (*Node, error)`

Exp performs a pointwise exp.

func Expm1¶Uses

`func Expm1(a *Node) (*Node, error)`

Expm1 performs a pointwise expm1.

func Floor¶Uses

`func Floor(a *Node) (*Node, error)`

Floor performs a pointwise floor.

func GaussianRandomNode¶Uses

`func GaussianRandomNode(g *ExprGraph, dt tensor.Dtype, mean, stdev float64, shape ...int) *Node`

GaussianRandomNode creates an input node that has a random op, so that every time the node is passed, random values will be plucked from a Gaussian distribution with the mean and stdev provided. The type of the node depends on the shape passed in. To get a scalar value at run time, don't pass in any shapes

func Gt¶Uses

`func Gt(a, b *Node, retSame bool) (*Node, error)`

Gt performs a pointwise gt operation.

```retSame indicates if the data type of the return value should be the same as the input data type. If retSame is false, the return type defaults to Bool.
```

func Gte¶Uses

`func Gte(a, b *Node, retSame bool) (*Node, error)`

Gte performs a pointwise gte operation.

```retSame indicates if the data type of the return value should be the same as the input data type. If retSame is false, the return type defaults to Bool.
```

func HadamardDiv¶Uses

`func HadamardDiv(a, b *Node) (*Node, error)`

func HadamardProd¶Uses

`func HadamardProd(a, b *Node) (*Node, error)`

func Im2Col¶Uses

`func Im2Col(n *Node, kernel, pad, stride, dilation tensor.Shape) (retVal *Node, err error)`

Im2Col converts a BCHW image block to columns. The kernel, pad and stride parameters must be shapes of size 2, no more, no less. This poor naming scheme clearly comes from Matlab

func Inverse¶Uses

`func Inverse(a *Node) (*Node, error)`

Inverse performs a pointwise inverse.

func InverseSqrt¶Uses

`func InverseSqrt(a *Node) (*Node, error)`

InverseSqrt performs a pointwise inversesqrt.

func Log¶Uses

`func Log(a *Node) (*Node, error)`

Log performs a pointwise log.

func Log1p¶Uses

`func Log1p(a *Node) (*Node, error)`

Log1p performs a pointwise log1p.

func Log2¶Uses

`func Log2(a *Node) (*Node, error)`

Log2 performs a pointwise log2.

func LogSumExp¶Uses

`func LogSumExp(a *Node, axis int) (retVal *Node, err error)`

LogSumExp performs addition in the log domain

func Lt¶Uses

`func Lt(a, b *Node, retSame bool) (*Node, error)`

Lt performs a pointwise lt operation.

```retSame indicates if the data type of the return value should be the same as the input data type. If retSame is false, the return type defaults to Bool.
```

func Lte¶Uses

`func Lte(a, b *Node, retSame bool) (*Node, error)`

Lte performs a pointwise lte operation.

```retSame indicates if the data type of the return value should be the same as the input data type. If retSame is false, the return type defaults to Bool.
```

func Max¶Uses

`func Max(a *Node, along ...int) (retVal *Node, err error)`

Max performs a max() on the input and the provided axes.

func MaxPool2D¶Uses

`func MaxPool2D(x *Node, kernel tensor.Shape, pad, stride []int) (*Node, error)`

func Mean¶Uses

`func Mean(a *Node, along ...int) (retVal *Node, err error)`

Mean performs a mean() on the input and the provided axes.

func Mul¶Uses

`func Mul(a, b *Node) (retVal *Node, err error)`

Mul is the general handler for multiplication of nodes. It is extremely overloaded. Only use if you know what you're doing

If any of the nodes is a ScalarType, the call is redirected to HadamardProd(). If both nodes are vectors (that is, they have a shape of (x, 1) or (1, x)), a vector dot product is used. If only one of the nodes is a vector, a matrix-vector multiplication is used, and, most importantly, a transpose is applied when necessary. If both nodes are matrices, matrix multiplication is done.

func Must¶Uses

`func Must(n *Node, err error, opts ...NodeConsOpt) *Node`

Must indicates a node must be created. If no node was created, or an error occurred, it subsumes the error and immediately panics

func Ne¶Uses

`func Ne(a, b *Node, retSame bool) (*Node, error)`

Ne performs a pointwise ne operation.

```retSame indicates if the data type of the return value should be the same as the input data type. If retSame is false, the return type defaults to Bool.
```

func Neg¶Uses

`func Neg(a *Node) (*Node, error)`

Neg performs a pointwise neg.

func NegNegOptimization¶Uses

`func NegNegOptimization(a *Node) (retVal *Node, err error)`

NegNegOptimization optimizes away -(-x), returning x directly

func NewConstant¶Uses

`func NewConstant(v interface{}, opts ...NodeConsOpt) *Node`

NewConstant takes in any reasonable value and makes it a constant node.

func NewMatrix¶Uses

`func NewMatrix(g *ExprGraph, t tensor.Dtype, opts ...NodeConsOpt) *Node`

NewMatrix creates a Node representing a variable that holds a matrix (nxm)

func NewScalar¶Uses

`func NewScalar(g *ExprGraph, t tensor.Dtype, opts ...NodeConsOpt) *Node`

NewScalar creates a Node representing a variable that holds a scalar value

func NewTensor¶Uses

`func NewTensor(g *ExprGraph, t tensor.Dtype, dims int, opts ...NodeConsOpt) *Node`

NewTensor creates a Node representing a variable that holds a tensor (any n-dimensional array with dimensions greater than 2)

func NewUniqueNode¶Uses

`func NewUniqueNode(opts ...NodeConsOpt) *Node`

NewUniqueNode creates a new unique node in a graph. If no graph was specified in the construction options then it will just return a graphless node.

func NewVector¶Uses

`func NewVector(g *ExprGraph, t tensor.Dtype, opts ...NodeConsOpt) *Node`

NewVector creates a Node representing a variable that holds a vector (nx1 matrix)

func NodeFromAny¶Uses

`func NodeFromAny(g *ExprGraph, any interface{}, opts ...NodeConsOpt) *Node`

NodeFromAny creates a Node from a tensor.Tensor, automatically filling in shape and type info

func Norm¶Uses

`func Norm(a *Node, axis, p int) (retVal *Node, err error)`

Norm returns the p-norm of a Value. Use p=2 if you want to use unordered norms.

This is a simpler version of the norms found in the Tensor package, which specializes and optimizes even more (well, given it's adapted from Numpy, it is clearly way more optimized)

func OneHotVector¶Uses

`func OneHotVector(id, classes int, t tensor.Dtype, opts ...NodeConsOpt) *Node`

OneHotVector creates a node representing a one hot vector
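A one hot vector's backing is all zeroes except a single 1 at the class index. A plain-Go sketch of that data (illustrative only; OneHotVector itself returns a graph *Node):

```go
package main

import "fmt"

// oneHot builds the backing of a one hot vector: classes elements, all
// zero except a 1 at index id.
func oneHot(id, classes int) []float64 {
	v := make([]float64, classes)
	v[id] = 1
	return v
}

func main() {
	fmt.Println(oneHot(2, 4)) // [0 0 1 0]
}
```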

func OuterProd¶Uses

`func OuterProd(a, b *Node) (retVal *Node, err error)`

OuterProd returns a Node representing the outer product of two vectors. This function will return an error if both input nodes are not vectors

func Pow¶Uses

`func Pow(a, b *Node) (*Node, error)`

Pow performs a pointwise pow operation.

func Read¶Uses

`func Read(n *Node, into *Value) (retVal *Node)`

Read is one of those special snowflake tumblrina *Nodes. It allows for extraction of the value of the *Node at runtime into a Value. Note that a *Value (a pointer to a Value) is passed into this function, not a Value.

func Rectify¶Uses

`func Rectify(x *Node) (retVal *Node, err error)`

Rectify is a convenience function for creating rectified linear units activation functions. This function uses >=, which is the canonical version. If you want to use >, you can create your own by just following this.

func ReduceAdd¶Uses

`func ReduceAdd(nodes Nodes, opts ...NodeConsOpt) (retVal *Node, err error)`

ReduceAdd takes a slice of *Nodes, and folds them into one by adding

func ReduceMul¶Uses

`func ReduceMul(nodes Nodes, opts ...NodeConsOpt) (retVal *Node, err error)`

ReduceMul is like foldl(*, nodes)
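The foldl(*, nodes) semantics can be sketched in plain Go on float64 values (an analogy for the node-level fold, not gorgonia's implementation):

```go
package main

import "fmt"

// foldl combines a slice left to right with a binary function, starting
// from the first element — the shape of foldl(*, nodes).
func foldl(f func(a, b float64) float64, xs []float64) float64 {
	acc := xs[0]
	for _, x := range xs[1:] {
		acc = f(acc, x)
	}
	return acc
}

func main() {
	mul := func(a, b float64) float64 { return a * b }
	fmt.Println(foldl(mul, []float64{2, 3, 4})) // 24
}
```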

func Reshape¶Uses

`func Reshape(n *Node, to tensor.Shape) (retVal *Node, err error)`

Reshape reshapes a node and returns a new node with the new shape

func Set¶Uses

`func Set(a, b *Node) (retVal *Node)`

Set is the equivalent of doing this:

```a = b
```

where a and b are both variables

func Sigmoid¶Uses

`func Sigmoid(a *Node) (*Node, error)`

Sigmoid performs a pointwise sigmoid.

func Sign¶Uses

`func Sign(a *Node) (*Node, error)`

Sign performs a pointwise sign.

func Sin¶Uses

`func Sin(a *Node) (*Node, error)`

Sin performs a pointwise sin.

func SizeOf¶Uses

`func SizeOf(axis int, x *Node) (retVal *Node, err error)`

SizeOf returns the size of a value along an axis

func Slice¶Uses

`func Slice(n *Node, slices ...tensor.Slice) (retVal *Node, err error)`

Slice slices a *Node. For T[:] slices, pass in nil. Will error out if node's type is not a Tensor

func SoftMax¶Uses

`func SoftMax(a *Node) (retVal *Node, err error)`

SoftMax performs softmax on the input. Specifically this is used:

```e^(a[i]) / sum((e^(a[i])))
```

For a more numerically stable SoftMax, use StableSoftMax.

func Softplus¶Uses

`func Softplus(a *Node) (*Node, error)`

Softplus performs a pointwise softplus.

func Sqrt¶Uses

`func Sqrt(a *Node) (*Node, error)`

Sqrt performs a pointwise sqrt.

func Square¶Uses

`func Square(a *Node) (*Node, error)`

Square performs a pointwise square.

func StableSoftMax¶Uses

`func StableSoftMax(a *Node) (retVal *Node, err error)`

StableSoftMax performs a numerically stable softmax on the input. Specifically this is the formula used:

```e^(a - max(a)) / sum(e^(a - max(a)))
```

func Sub¶Uses

`func Sub(a, b *Node) (*Node, error)`

Sub performs a pointwise sub operation.

func Sum¶Uses

`func Sum(a *Node, along ...int) (retVal *Node, err error)`

Sum performs a sum() on the input and the provided axes.

func Tanh¶Uses

`func Tanh(a *Node) (*Node, error)`

Tanh performs a pointwise tanh.

func Tensordot¶Uses

`func Tensordot(aAxes []int, bAxes []int, a, b *Node) (retVal *Node, err error)`

Tensordot performs a tensor contraction of a and b along specified axes.

func Transpose¶Uses

`func Transpose(n *Node, axes ...int) (retVal *Node, err error)`

Transpose performs a transpose on the input and provided permutation axes.

func UniformRandomNode¶Uses

`func UniformRandomNode(g *ExprGraph, dt tensor.Dtype, low, high float64, shape ...int) *Node`

UniformRandomNode creates an input node that has a random op, so that every time the node is passed, random values will be plucked from a uniform distribution. The type of the node depends on the shape passed in. To get a scalar value at run time, don't pass in any shapes

func (*Node) Clone¶Uses

`func (n *Node) Clone() (retVal interface{})`

Clone clones the node. There are some caveats:

```- the graph is not copied over
- the node essentially does not belong to a collection
- there is no ID
- the children are not cloned
```

func (*Node) CloneTo¶Uses

`func (n *Node) CloneTo(g *ExprGraph) *Node`

CloneTo clones the node into a new graph. If CloneTo() is called on the same graph as the n, it will return n. The reason this is done is because at any given time, every node should be unique in the *ExprGraph.

TODO: clone children as well (this means that CloneTo() is currently only suitable for input nodes)

func (*Node) Device¶Uses

`func (n *Node) Device() Device`

Device returns the device the data will be on

func (*Node) Dims¶Uses

`func (n *Node) Dims() int`

Dims indicates how many dimensions the node's result has

func (*Node) Dtype¶Uses

`func (n *Node) Dtype() tensor.Dtype`

Dtype returns the dtype of the node

func (*Node) Grad¶Uses

`func (n *Node) Grad() (Value, error)`

func (*Node) GradOnDevice¶Uses

`func (n *Node) GradOnDevice(dev Device, extern External) (retVal Value, allocOnExtern bool, err error)`

GradOnDevice gets the gradient value of the node as a Value but on the desired device. In this build the device is always CPU, so it's equivalent to calling .Grad()

func (*Node) Graph¶Uses

`func (n *Node) Graph() *ExprGraph`

Graph returns the graph of the node

func (*Node) Hashcode¶Uses

`func (n *Node) Hashcode() uint32`

Hashcode provides the hash for the tree, assuming that the node is the root of the tree. The original implementation, by Vatine (who's apparently 80 years old and using SO!?!), is here:

```http://stackoverflow.com/questions/1988665/hashing-a-tree-structure
```
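The recursive tree-hashing scheme that answer describes can be sketched in plain Go: a subtree's hash mixes the node's own label with its children's hashes, in order. (The names here are hypothetical; gorgonia's actual Hashcode hashes the node's Op and type information.)

```go
package main

import (
	"encoding/binary"
	"fmt"
	"hash/fnv"
)

// node is a toy expression-tree node for illustrating tree hashing.
type node struct {
	label    string
	children []*node
}

// hash mixes this node's label with its children's hashes, recursively,
// so structurally identical trees get identical hashes.
func (n *node) hash() uint32 {
	h := fnv.New32a()
	h.Write([]byte(n.label))
	for _, c := range n.children {
		var buf [4]byte
		binary.LittleEndian.PutUint32(buf[:], c.hash())
		h.Write(buf[:])
	}
	return h.Sum32()
}

func main() {
	add1 := &node{label: "+", children: []*node{{label: "x"}, {label: "y"}}}
	add2 := &node{label: "+", children: []*node{{label: "x"}, {label: "y"}}}
	fmt.Println(add1.hash() == add2.hash()) // true: structurally equal trees hash equal
}
```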

func (*Node) ID¶Uses

`func (n *Node) ID() int64`

ID returns the ID of the node. This satisfies the gonum/graph.Node interface

func (*Node) IsColVec¶Uses

`func (n *Node) IsColVec() bool`

IsColVec indicates if a node represents a Column Vector. This is based on the type of the node, not the actual value associated with the node

func (*Node) IsMatrix¶Uses

`func (n *Node) IsMatrix() bool`

IsMatrix indicates if a node represents a matrix. This is based on the type of the node, not the actual value associated with the node

func (*Node) IsRowVec¶Uses

`func (n *Node) IsRowVec() bool`

IsRowVec indicates if a node represents a Row Vector. This is based on the type of the node, not the actual value associated with the node

func (*Node) IsScalar¶Uses

`func (n *Node) IsScalar() bool`

IsScalar indicates if a node represents a scalar value. This is based on the type of the node, not the actual value associated with the node

func (*Node) IsVar¶Uses

`func (n *Node) IsVar() bool`

IsVar returns true if the node represents a differentiable variable (i.e. it's an argument to the function that is not a statement)

func (*Node) IsVec¶Uses

`func (n *Node) IsVec() bool`

IsVec returns whether this node is a vector

func (*Node) IsVector¶Uses

`func (n *Node) IsVector() bool`

IsVector indicates if a node represents a vector value. This is based on the type of the node, not the actual value associated with the node

func (*Node) Name¶Uses

`func (n *Node) Name() string`

Name returns the name of the node. If a name was specified and it is too long, the short name will be used instead (except in inputs)

The short name is typically of the form: OpName(%1, %2 ...), making it read more like a function call

func (*Node) Op¶Uses

`func (n *Node) Op() Op`

Op returns the Op of the node

func (*Node) RestrictedToDot¶Uses

`func (n *Node) RestrictedToDot(up, down int) string`

RestrictedToDot prints the graphviz-compatible string but does not print the entire tree. up and down indicate how many levels to look up and how many levels to look down

func (*Node) Shape¶Uses

`func (n *Node) Shape() tensor.Shape`

Shape returns the shape of the node

func (*Node) Strides¶Uses

`func (n *Node) Strides() []int`

Strides returns the strides of the value of the node

func (*Node) String¶Uses

`func (n *Node) String() string`

String() implements the fmt.Stringer interface

func (*Node) ToDot¶Uses

`func (n *Node) ToDot() string`

ToDot returns the graph as a graphviz compatible string

func (*Node) Type¶Uses

`func (n *Node) Type() hm.Type`

Type returns the type of the node

func (*Node) Value¶Uses

`func (n *Node) Value() Value`

Value returns the value bound to the node. May return nil

func (*Node) ValueOnDevice¶Uses

`func (n *Node) ValueOnDevice(dev Device, extern External) (retVal Value, allocOnExtern bool, err error)`

ValueOnDevice gets the value of the node as a Value but on the desired device. In this build the device is always CPU, so it's equivalent to calling .Value()

func (*Node) WriteHash¶Uses

`func (n *Node) WriteHash(h hash.Hash32)`

WriteHash writes the hash to the provided Hash32.

type NodeConsOpt¶Uses

`type NodeConsOpt func(*Node)`

NodeConsOpt is a function that provides construction options for any Node.

func In¶Uses

`func In(g *ExprGraph) NodeConsOpt`

In is a node construction option to set a node's graph. A `*Node`'s graph is immutable. If the graph has already been set, a check will be made that the specified *ExprGraph and the *ExprGraph set in the *Node are the same. If they are not, the function will panic.

func WithChildren¶Uses

`func WithChildren(children Nodes) NodeConsOpt`

WithChildren sets the children of a node to the specified children. This construction option does NOT check whether children already exist, and will overwrite any existing children.

func WithGrad¶Uses

`func WithGrad(any interface{}) NodeConsOpt`

WithGrad is a node construction option that binds the value to the *Node. This function may panic if:

```- There isn't already a value associated with the node (.boundTo == nil)
- The type of the Value does not match the value of the node.
```

func WithGroupName¶Uses

`func WithGroupName(name string) NodeConsOpt`

WithGroupName is a node construction option to group a *Node within a particular group. This option is useful for debugging with graphs.

func WithInit¶Uses

`func WithInit(fn InitWFn) NodeConsOpt`

WithInit is a node construction option to initialize a *Node with the InitWFn provided.

func WithName¶Uses

`func WithName(name string) NodeConsOpt`

WithName is a node construction option that gives the *Node the provided name. This is especially useful in debugging graphs.

func WithOp¶Uses

`func WithOp(op Op) NodeConsOpt`

WithOp is a node construction option to set a node's Op to the specified Op. `Op`s in `*Node`s are immutable once set and cannot be changed. If the node already has an Op specified, a check will be made to see if the provided Op and the one already in the `*Node` are the same. Note that comparison of Ops is done using the `Hashcode()` method of Ops, and hash collisions MAY occur. If the two Ops differ, this function will panic.

func WithShape¶Uses

`func WithShape(shp ...int) NodeConsOpt`

WithShape is a node construction option to initialize a *Node with a particular shape. This function panics if the shape's dimensions do not match the specified dimensions of the *Node.

func WithType¶Uses

`func WithType(t hm.Type) NodeConsOpt`

WithType is a node construction option to set a node to the specified type. Types in *Node are immutable once set. If the type has already been specified in the node, a check will be made to see if both types are the same. If they are not, it will panic.

func WithValue¶Uses

`func WithValue(any interface{}) NodeConsOpt`

WithValue is a node construction option that binds the value to the *Node. This function may panic if:

```- Gorgonia was unable to convert the interface{} into a Value.
- The type of the Value does not match the type of the node.
```

type NodeSet¶Uses

`type NodeSet map[*Node]struct{}`

NodeSet is the primary type that represents a set of *Nodes

func NewNodeSet¶Uses

`func NewNodeSet(a ...*Node) NodeSet`

NewNodeSet creates and returns a NodeSet populated with the given nodes (an empty set if none are provided).

func (NodeSet) Add¶Uses

`func (set NodeSet) Add(i *Node) bool`

func (NodeSet) Cardinality¶Uses

`func (set NodeSet) Cardinality() int`

Cardinality returns how many items are currently in the set.

func (*NodeSet) Clear¶Uses

`func (set *NodeSet) Clear()`

Clear clears the entire set to be the empty set.

func (NodeSet) Clone¶Uses

`func (set NodeSet) Clone() NodeSet`

Clone returns a clone of the set. Does NOT clone the underlying elements.

func (NodeSet) Contains¶Uses

`func (set NodeSet) Contains(i *Node) bool`

Contains determines if a given item is already in the set.

func (NodeSet) ContainsAll¶Uses

`func (set NodeSet) ContainsAll(i ...*Node) bool`

ContainsAll determines if the given items are all in the set

func (NodeSet) Difference¶Uses

`func (set NodeSet) Difference(other NodeSet) NodeSet`

Difference returns a new set with items in the current set but not in the other set

func (NodeSet) Equal¶Uses

`func (set NodeSet) Equal(other NodeSet) bool`

Equal determines if two sets are equal to each other. If they both are the same size and have the same items they are considered equal. Order of items is not relevant for sets to be equal.

func (NodeSet) Intersect¶Uses

`func (set NodeSet) Intersect(other NodeSet) NodeSet`

Intersect returns a new set with items that exist only in both sets.

func (NodeSet) IsSubset¶Uses

`func (set NodeSet) IsSubset(other NodeSet) bool`

IsSubset determines if every item in the other set is in this set.

func (NodeSet) IsSuperset¶Uses

`func (set NodeSet) IsSuperset(other NodeSet) bool`

IsSuperset determines if every item of this set is in the other set.

func (NodeSet) Iter¶Uses

`func (set NodeSet) Iter() <-chan *Node`

Iter returns a channel of type *Node that you can range over.

func (NodeSet) Remove¶Uses

`func (set NodeSet) Remove(i *Node)`

Remove allows the removal of a single item in the set.

func (NodeSet) SymmetricDifference¶Uses

`func (set NodeSet) SymmetricDifference(other NodeSet) NodeSet`

SymmetricDifference returns a new set with items in the current set or the other set but not in both.

func (NodeSet) ToSlice¶Uses

`func (set NodeSet) ToSlice() Nodes`

ToSlice returns the elements of the current set as a slice

func (NodeSet) Union¶Uses

`func (set NodeSet) Union(other NodeSet) NodeSet`

Union returns a new set with all items in both sets.

type Nodes¶Uses

`type Nodes []*Node`

Nodes is a slice of nodes, but it also acts as a set of nodes by implementing the Sort interface

func Backpropagate¶Uses

`func Backpropagate(outputs, gradOutputs, wrt Nodes) (retVal Nodes, err error)`

Backpropagate backpropagates errors by performing reverse-mode symbolic differentiation, starting from the outputs and working its way towards the inputs.

This is the rough algorithm:

```1. Filter out nodes that are unreachable
2. Forwards analysis, where a list of nodes affecting the output is added to consideration
3. Backwards analysis, where a list of nodes affected by differentiating the output are added to the consideration
4. If there is a difference in both sets, it will cause an error (both sets should be the same)
5. Traverse the graph from output towards input. On each visit, perform the symbolic differentiation
```

For most cases, Grad() should be used instead of Backpropagate(), as Grad() performs several checks covering the general use case before calling Backpropagate().

func Grad¶Uses

`func Grad(cost *Node, WRTs ...*Node) (retVal Nodes, err error)`

Grad takes a scalar cost node and a list of with-regards-to nodes, and returns the gradient of the cost with regards to each of them.

func Sort¶Uses

`func Sort(g *ExprGraph) (sorted Nodes, err error)`

Sort topologically sorts an ExprGraph: the root of the graph will be first

func UnstableSort¶Uses

`func UnstableSort(g *ExprGraph) (sorted Nodes, err error)`

func (Nodes) Add¶Uses

`func (ns Nodes) Add(n *Node) Nodes`

func (Nodes) AllSameGraph¶Uses

`func (ns Nodes) AllSameGraph() bool`

AllSameGraph returns true if all the nodes in the slice belong to the same graph. Note that constants do not have to belong to the same graph.

func (Nodes) Contains¶Uses

`func (ns Nodes) Contains(want *Node) bool`

Contains checks if the wanted node is in the set

func (Nodes) Difference¶Uses

`func (ns Nodes) Difference(other Nodes) Nodes`

Difference is ns - other. Bear in mind it is NOT commutative

func (Nodes) Equals¶Uses

`func (ns Nodes) Equals(other Nodes) bool`

Equals returns true if two slices of Nodes are equal

func (Nodes) Format¶Uses

`func (ns Nodes) Format(s fmt.State, c rune)`

Format implements fmt.Formatter, which allows Nodes to be differently formatted depending on the verbs

func (Nodes) Intersect¶Uses

`func (ns Nodes) Intersect(other Nodes) Nodes`

Intersect performs an intersection with other Nodes

func (Nodes) Len¶Uses

`func (ns Nodes) Len() int`

func (Nodes) Less¶Uses

`func (ns Nodes) Less(i, j int) bool`

func (Nodes) Set¶Uses

`func (ns Nodes) Set() Nodes`

Set returns a uniquified slice. It mutates the slice.

func (Nodes) Swap¶Uses

`func (ns Nodes) Swap(i, j int)`

type Op¶Uses

```type Op interface {

// Arity returns the number of inputs the Op expects. -1 indicates that it's n-ary and will be determined at runtime
Arity() int

// Informs the type of the Op (not the node). This will be used by the type system to infer the final type of the node
Type() hm.Type

// returns the output shape as a function of the inputs
InferShape(...DimSizer) (tensor.Shape, error)

// executes the op
Do(...Value) (Value, error)

// indicates if the Op will return a pointer (allowing possible inplace edits) or by value
// if it's false, the return value of the Op will be a copy of its input
ReturnsPtr() bool

// Does this op potentially call external (cgo or cuda) functions (thereby requiring extra overhead for Go's trampolining thing)
CallsExtern() bool

// overwriteInput() is a method which states which input the output will be overwriting.
// This allows for some efficiency gains as the underlying arrays wouldn't have to be re-allocated.
// The method returns an int instead of a bool because potentially different operations may be allowed
// to overwrite certain inputs. For example, consider an operation to increment a value:
// the IncrementOp would be a unary operator, and assuming we would like to overwrite the input,
// the retVal of overwriteInput() will be 0 (inputs[0]).
// -1 is returned if overwriting of input is disallowed
OverwritesInput() int

/* Other methods */
WriteHash(h hash.Hash)
Hashcode() uint32
fmt.Stringer
}```

An Op is a symbolic representation of an operation. Think of Ops as functions: they take one or more inputs and produce an output

All Ops have type signatures that look like this:

```OpName :: (Floats a) ⇒ Tensor a → Tensor a → Tensor a
```

type RMSPropSolver¶Uses

```type RMSPropSolver struct {
// contains filtered or unexported fields
}```

RMSPropSolver is a solver that implements Geoffrey Hinton's RMSProp gradient descent optimization algorithm. http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf

func NewRMSPropSolver¶Uses

`func NewRMSPropSolver(opts ...SolverOpt) *RMSPropSolver`

NewRMSPropSolver creates an RMSProp solver with these default values:

```eta (learn rate)	  : 0.001
eps (smoothing factor): 1e-8
rho (decay factor)    : 0.999
```

func (*RMSPropSolver) Step¶Uses

`func (s *RMSPropSolver) Step(model []ValueGrad) (err error)`

Step steps through each node in the model and applies the RMSProp gradient descent algorithm on the value.

This function will error out if the nodes do not have an associated Grad value.

type ReductionOp¶Uses

```type ReductionOp interface {
Op

IsReduction() bool
}```

ReductionOp changes the shape of the node

type SDOp¶Uses

```type SDOp interface {
Op

// DiffWRT indicates if the op is differentiable with regards to the given number of inputs
// returns []bool to indicate which input it is differentiable to
DiffWRT(inputs int) []bool

// SymDiff symbolically differentiates the op
SymDiff(inputs Nodes, output, grad *Node) (retVal Nodes, err error)
}```

A SDOp is an Op that supports symbolic differentiation

type Scalar¶Uses

```type Scalar interface {
Value
// contains filtered or unexported methods
}```

Scalar represents a scalar (non-array-based) value. Do note that it's the pointers of the scalar types (F64, F32, etc) that implement the Scalar interface. The main reason is primarily due to optimizations with regards to memory allocation and copying for device interoperability.

type Solver¶Uses

```type Solver interface {
}```

Solver is anything that does gradient updates. The name Solver is borrowed from Caffe; it is a much shorter name than GradientUpdaters

type SolverOpt¶Uses

`type SolverOpt func(s Solver)`

SolverOpt is a function that provides construction options for a Solver

func WithBatchSize¶Uses

`func WithBatchSize(batch float64) SolverOpt`

WithBatchSize sets the batch size for the solver. Currently only Adam and Vanilla (basic SGD) have batch size support

func WithBeta1¶Uses

`func WithBeta1(beta1 float64) SolverOpt`

WithBeta1 sets the beta1 param of the solver. Only works with Adam

func WithBeta2¶Uses

`func WithBeta2(beta2 float64) SolverOpt`

WithBeta2 sets the beta2 param of the solver. Only works with Adam

func WithClip¶Uses

`func WithClip(clip float64) SolverOpt`

WithClip clips the gradient if it gets too large. By default, solvers do not have any clipping attached

func WithEps¶Uses

`func WithEps(eps float64) SolverOpt`

WithEps sets the smoothing factor for the solver.

func WithL1Reg¶Uses

`func WithL1Reg(l1reg float64) SolverOpt`

WithL1Reg adds a L1 regularization parameter to the solver. By default, the solvers do not use any regularization param

func WithL2Reg¶Uses

`func WithL2Reg(l2reg float64) SolverOpt`

WithL2Reg adds a L2 regularization parameter to the solver. By default, the solvers do not use any regularization param

func WithLearnRate¶Uses

`func WithLearnRate(eta float64) SolverOpt`

WithLearnRate sets the learn rate or step size for the solver.

func WithMomentum¶Uses

`func WithMomentum(momentum float64) SolverOpt`

func WithRho¶Uses

`func WithRho(rho float64) SolverOpt`

WithRho sets the decay parameter of the RMSProp solver

type StandardEngine¶Uses

```type StandardEngine struct {
tensor.StdEng
}```

StandardEngine is the default CPU engine for gorgonia

func (StandardEngine) Transpose¶Uses

`func (e StandardEngine) Transpose(a tensor.Tensor, expStrides []int) error`

type SymDiffError¶Uses

```type SymDiffError struct {
// contains filtered or unexported fields
}```

SymDiffError provides the context at which an error occurred

func (SymDiffError) Error¶Uses

`func (err SymDiffError) Error() string`

func (SymDiffError) Grad¶Uses

`func (err SymDiffError) Grad() *Node`

func (SymDiffError) Grads¶Uses

`func (err SymDiffError) Grads() map[*Node]Nodes`

func (SymDiffError) Node¶Uses

`func (err SymDiffError) Node() *Node`

func (SymDiffError) Nodes¶Uses

`func (err SymDiffError) Nodes() Nodes`

type SymbolicEngine¶Uses

`type SymbolicEngine struct{}`

type Tensor¶Uses

```type Tensor interface {
Shape() tensor.Shape
Strides() []int
Dtype() tensor.Dtype
Dims() int
Size() int
DataSize() int

// Data access related
RequiresIterator() bool
Iterator() tensor.Iterator

// ops
tensor.Slicer
At(...int) (interface{}, error)
SetAt(v interface{}, coord ...int) error
Reshape(...int) error
T(axes ...int) error
UT()
Transpose() error // Transpose actually moves the data
Apply(fn interface{}, opts ...tensor.FuncOpt) (tensor.Tensor, error)

// data related interface
tensor.Zeroer
tensor.MemSetter
tensor.Dataer
tensor.Eq
tensor.Cloner

IsScalar() bool
ScalarValue() interface{}

// engine/memory related stuff
// all Tensors should be able to be expressed of as a slab of memory
// Note: the size of each element can be acquired by T.Dtype().Size()
Engine() tensor.Engine      // Engine can be nil
MemSize() uintptr           // the size in memory
Uintptr() uintptr           // the pointer to the first element, as a uintptr
Pointer() unsafe.Pointer    // the pointer to the first element, as an unsafe.Pointer
IsNativelyAccessible() bool // Can Go access the memory
IsManuallyManaged() bool    // Must Go manage the memory

// formatters
fmt.Formatter
fmt.Stringer

// all Tensors are serializable to these formats
WriteNpy(io.Writer) error
gob.GobEncoder
gob.GobDecoder
}```

type TensorType¶Uses

```type TensorType struct {
Dims int // dims

Of  hm.Type
}```

TensorType is a type constructor for tensors.

Think of it as something like this:

```data Tensor a = Tensor d a
```

The shape of the Tensor is not part of TensorType. Shape checking is relegated to runtime (the dynamic part of the program).

func (TensorType) Apply¶Uses

`func (t TensorType) Apply(sub hm.Subs) hm.Substitutable`

Apply applies the substitutions on the types. Satisfies the hm.Type interface.

func (TensorType) Eq¶Uses

`func (t TensorType) Eq(other hm.Type) bool`

Eq is the equality function of this type. The type of Tensor has to be the same, and for now, only the dimensions are compared. Shape may be compared in the future for tighter type inference. Satisfies the hm.Type interface.

func (TensorType) Format¶Uses

`func (t TensorType) Format(state fmt.State, c rune)`

Format implements fmt.Formatter. It is also required to satisfy the hm.Type interface.

func (TensorType) FreeTypeVar¶Uses

`func (t TensorType) FreeTypeVar() hm.TypeVarSet`

FreeTypeVar returns any free (unbound) type variables in this type. Satisfies the hm.Type interface.

func (TensorType) Name¶Uses

`func (t TensorType) Name() string`

Name returns the name of the type, which will always be "Tensor". Satisfies the hm.Type interface.

func (TensorType) Normalize¶Uses

`func (t TensorType) Normalize(k, v hm.TypeVarSet) (hm.Type, error)`

Normalize normalizes the type variable names (if any) in the TensorType. Satisfies the hm.Type interface.

func (TensorType) String¶Uses

`func (t TensorType) String() string`

String implements fmt.Stringer and runtime.Stringer. Satisfies the hm.Type interface.

func (TensorType) Types¶Uses

`func (t TensorType) Types() hm.Types`

Types returns a list of types that TensorType contains - in this case, the type of Tensor (float64, float32, etc). Satisfies the hm.Type interface.

type Typer¶Uses

```type Typer interface {
Type() hm.Type
}```

Typer represents any type (typically a Op) that knows its own Type

type U8¶Uses

`type U8 byte`

U8 represents a byte value.

func (*U8) Data¶Uses

`func (v *U8) Data() interface{}`

Data returns the original representation of the Value

func (*U8) Dtype¶Uses

`func (v *U8) Dtype() tensor.Dtype`

Dtype returns the Dtype of the value

func (*U8) Format¶Uses

`func (v *U8) Format(s fmt.State, c rune)`

Format implements fmt.Formatter

func (*U8) MemSize¶Uses

`func (v *U8) MemSize() uintptr`

MemSize satisfies the tensor.Memory interface

func (*U8) Pointer¶Uses

`func (v *U8) Pointer() unsafe.Pointer`

Pointer returns the pointer as an unsafe.Pointer. Satisfies the tensor.Memory interface

func (*U8) Shape¶Uses

`func (v *U8) Shape() tensor.Shape`

Shape returns a scalar shape for all scalar values

func (*U8) Size¶Uses

`func (v *U8) Size() int`

Size returns 0 for all scalar Values

func (*U8) Uintptr¶Uses

`func (v *U8) Uintptr() uintptr`

Uintptr satisfies the tensor.Memory interface

type UnaryOp¶Uses

```type UnaryOp interface {
Op

IsUnary() bool
}```

A UnaryOp is an Op that takes only one input

type UnsafeDoer¶Uses

```type UnsafeDoer interface {
UnsafeDo(inputs ...Value) (Value, error)
}```

UnsafeDoer is an op that will overwrite the underlying value.

type UsePreallocDoer¶Uses

```type UsePreallocDoer interface {
UsePreallocDo(prealloc Value, inputs ...Value) (Value, error)
}```

UsePreallocDoer is an op that works when a preallocated value is provided

type VM¶Uses

```type VM interface {
RunAll() error
Reset()

// Close closes all the machine resources (CUDA, if any, loggers if any)
Close() error
}```

VM represents a structure that can execute a graph or program. There are two VMs (both unexported):

```- *tapeMachine
- *lispMachine
```

The *tapeMachine pre-compiles a graph into a list of instructions, then executes the instructions linearly and sequentially. The main tradeoff is dynamism: graphs cannot be dynamically created on the fly, as a re-compilation process is required (and compilation is relatively expensive). However, graphs executed with the *tapeMachine run much faster, as plenty of optimizations have been done in the code generation stage.

The *lispMachine allows for graphs to be dynamically built and executed upon. The tradeoff is that executing a graph on *lispMachine is generally slower than on *tapeMachine, given the same static "image" of a graph.

type VMOpt¶Uses

`type VMOpt func(m VM)`

VMOpt is a VM creation option

func BindDualValues¶Uses

`func BindDualValues(nodes ...*Node) VMOpt`

BindDualValues is an option for *tapeMachine only. This is useful to set when using a Solver

func ExecuteBwdOnly¶Uses

`func ExecuteBwdOnly() VMOpt`

ExecuteBwdOnly creates a VM that will execute a graph by doing back propagation only. The assumption is of course, that the forward graph has already been executed, and there are already values associated with the nodes. This option is only for *lispMachine. Try it on any other VMs and it will panic.

func ExecuteFwdOnly¶Uses

`func ExecuteFwdOnly() VMOpt`

ExecuteFwdOnly creates a VM that will execute a graph forwards only - it will not do back propagation. This option is only for *lispMachine. Try it on any other VMs and it will panic.

func LogBothDir¶Uses

`func LogBothDir() VMOpt`

LogBothDir logs both directions of the execution of the graph. This option is only available for *lispMachine.

func LogBwd¶Uses

`func LogBwd() VMOpt`

LogBwd logs the backwards execution of a graph. This option is only for *lispMachine. Try it on any other VMs and it will panic.

func LogFwd¶Uses

`func LogFwd() VMOpt`

LogFwd logs the forward execution of a graph. This option is only for *lispMachine. Try it on any other VMs and it will panic.

func TraceExec¶Uses

`func TraceExec() VMOpt`

TraceExec is an option for *tapeMachine only. It stores an immutable copy of the executed value into the node, instead of a mutable value, which may be clobbered

func UseCudaFor¶Uses

`func UseCudaFor(ops ...string) VMOpt`

UseCudaFor is an option for *tapeMachine. This function is NO-OP unless the program is built with the `cuda` tag.

func WithEngine¶Uses

`func WithEngine(e tensor.Engine) VMOpt`

func WithInfWatch¶Uses

`func WithInfWatch() VMOpt`

WithInfWatch creates a VM that will watch for Infs when executing. It watches for both +Inf and -Inf; this is not configurable. This slows the execution down.

func WithLogger¶Uses

`func WithLogger(logger *log.Logger) VMOpt`

WithLogger creates a VM with the supplied logger. If the logger is nil, a default logger, writing to os.Stderr, will be created.

func WithManualGradient¶Uses

`func WithManualGradient() VMOpt`

WithManualGradient allows the user to set the gradient of the root, before backprop. The root gradients should be set using the SetDeriv method

func WithNaNWatch¶Uses

`func WithNaNWatch() VMOpt`

WithNaNWatch creates a VM that will watch for NaNs when executing. This slows the execution down.

func WithPrecompiled¶Uses

`func WithPrecompiled(prog *program, locMap map[*Node]register) VMOpt`

WithPrecompiled is an option to pass in compiled programs. This is useful for users who use the CompileFunction function

func WithValueFmt¶Uses

`func WithValueFmt(format string) VMOpt`

WithValueFmt defines how the logger will output the values. It defaults to "%3.3f"

func WithWatchlist¶Uses

`func WithWatchlist(list ...interface{}) VMOpt`

WithWatchlist creates a VM with a watchlist. When the execution touches the things in the watchlist, the VM's logger will log it. This allows for watching and fine-tuning of the algorithm. If nothing is passed in, the VM will default to watching and logging every single execution object.

The watchlist allows for different things to be watched, depending on VM type:

```*lispMachine will ONLY take *Node
*tapeMachine will take int (for register IDs) or *Node.
```

type Value¶Uses

```type Value interface {
Shape() tensor.Shape // Shape  returns the shape of the Value. Scalar values return ScalarShape()
Size() int           // Size represents the number of elements in the Value. Note that in cases such as a *tensor.Dense, the underlying slice MAY have more elements than the Size() reports. This is correct.
Data() interface{}   // Data returns the original representation of the Value
Dtype() tensor.Dtype // Dtype returns the Dtype of the value

tensor.Memory
fmt.Formatter
}```

Value represents a value that Gorgonia accepts. At this point it is implemented by:

```- all scalar value types (F64, F32... etc)
- *tensor.Dense
- *dualValue
```

A Value is essentially anything that knows its own type and shape. Most importantly though, a Value is a pointer and can be converted into a tensor.Memory. This is done for the sake of interoperability with external devices like cgo, CUDA, or OpenCL. This also means that, for the most part, Values will be allocated on the heap. There are some performance tradeoffs in this decision, but ultimately it is better than having to manually manage blocks of memory.

func CloneValue¶Uses

`func CloneValue(v Value) (Value, error)`

CloneValue clones a value. For scalars, since Go copies scalars, it returns itself

func Copy¶Uses

`func Copy(dest, src Value) (Value, error)`

Copy copies the src values into dest values. For scalars, it just returns itself

func ScalarAsTensor¶Uses

`func ScalarAsTensor(v Value, dims int, e tensor.Engine) Value`

ScalarAsTensor returns the tensor representation of a scalar. It is particularly useful as a "reshape" of tensors of sorts

The Value passed in must be a Scalar, a tensor.Tensor, or a *dualValue. Anything else will panic.

func ZeroValue¶Uses

`func ZeroValue(v Value) Value`

ZeroValue returns the zero value of a type

type ValueCloser¶Uses

```type ValueCloser interface {
ValueClose(interface{}) bool
}```

ValueCloser represents any type that can perform a close-value check

type ValueEqualer¶Uses

```type ValueEqualer interface {
ValueEq(Value) bool
}```

ValueEqualer represents any type that can perform an equal-value check

type ValueGrad¶Uses

```type ValueGrad interface {
Valuer
}```

ValueGrad is any type that has a value and a grad. This is used for Solvers

func NodesToValueGrads¶Uses

`func NodesToValueGrads(in Nodes) (out []ValueGrad)`

NodesToValueGrads is a utility function that converts a Nodes to a slice of ValueGrad for the solvers

type Valuer¶Uses

```type Valuer interface {
Value() Value
}```

Valuer is any type that can return a Value

type VanillaSolver¶Uses

```type VanillaSolver struct {
// contains filtered or unexported fields
}```

VanillaSolver is your bog standard stochastic gradient descent optimizer. There are no fancy features to this

func NewVanillaSolver¶Uses

`func NewVanillaSolver(opts ...SolverOpt) *VanillaSolver`

NewVanillaSolver creates a new VanillaSolver with sane-ish default values

func (*VanillaSolver) Step¶Uses

`func (s *VanillaSolver) Step(model []ValueGrad) (err error)`

Step steps through each node in the model and applies the most basic gradient descent algorithm on the value.

This function will error out if the nodes do not have an associated Grad value.

type ZeroValuer¶Uses

```type ZeroValuer interface {
Value
ZeroValue() Value
}```

ZeroValuer is a Value that can provide the zero-value of its type

type Zeroer¶Uses

```type Zeroer interface {
Value
Zero()
}```

Zeroer is a Value that can zero itself

Directories ¶

Path	Synopsis
blase	Package blase is a thin wrapper over Gonum's BLAS interface that provides a queue so that cgo calls are batched.
cuda

Package gorgonia imports 35 packages. Updated 2019-04-24.