go-pytorch

Go binding for the Pytorch C++ API. This binding is used by the Pytorch agent in MLModelScope to perform model inference in Go.

Installation

Download and install go-pytorch:

go get -v github.com/rai-project/go-pytorch

The binding requires Pytorch C++ (libtorch) and other Go packages.

Pytorch C++ (libtorch) Library

The Pytorch C++ library is expected to be under /opt/libtorch.

To install Pytorch C++ on your system, you can

  1. download a pre-built binary from the Pytorch website: choose Pytorch Build = Stable (1.3), Your OS = <fill>, Package = LibTorch, Language = C++ and CUDA = <fill>, then download the cxx11 ABI version. Unzip the downloaded archive and copy it to /opt/libtorch (or modify the corresponding CFLAGS and LDFLAGS paths if using a custom location).

  2. build it from source: refer to our scripts or the LIBRARY INSTALLATION section in the dockerfiles.

  • The default BLAS is OpenBLAS. The default OpenBLAS path on macOS is /usr/local/opt/openblas if installed through Homebrew (openblas is keg-only, which means it is not symlinked into /usr/local, because macOS provides BLAS and LAPACK in the Accelerate framework).

  • The default Pytorch C++ installation path is /opt/libtorch for Linux, Darwin, and ppc64le without PowerAI.

  • The default CUDA path is /usr/local/cuda

See lib.go for details.

If you get an error about not being able to write to /opt, then perform the following:

sudo mkdir -p /opt/libtorch
sudo chown -R `whoami` /opt/libtorch

If you are using Pytorch docker images or other library paths, change the CGO_CFLAGS, CGO_CXXFLAGS and CGO_LDFLAGS environment variables. Refer to Using cgo with the go command.

For example,

    export CGO_CFLAGS="${CGO_CFLAGS} -I/tmp/libtorch/include"
    export CGO_CXXFLAGS="${CGO_CXXFLAGS} -I/tmp/libtorch/include"
    export CGO_LDFLAGS="${CGO_LDFLAGS} -L/tmp/libtorch/lib"
Go Packages

You can install the dependencies through go get.

cd $GOPATH/src/github.com/rai-project/go-pytorch
go get -u -v ./...

Or use Dep.

dep ensure -v

This installs the dependencies into vendor/.

Configure Environment Variables

Configure the linker environment variables since the Pytorch C++ library is under a non-system directory. Place the following in either your ~/.bashrc or ~/.zshrc file:

Linux

export LIBRARY_PATH=$LIBRARY_PATH:/opt/libtorch/lib
export LD_LIBRARY_PATH=/opt/libtorch/lib:$LD_LIBRARY_PATH

macOS

export LIBRARY_PATH=$LIBRARY_PATH:/opt/libtorch/lib
export DYLD_LIBRARY_PATH=/opt/libtorch/lib:$DYLD_LIBRARY_PATH

Check the Build

Run go build to check the dependency installation and library path setup. On Linux, the default is to use the GPU; if you don't have a GPU, run go build -tags nogpu instead of go build.

Note: the CGO interface passes Go pointers to the C API, which the CGO runtime reports as an error. Disable the check by placing

export GODEBUG=cgocheck=0

in your ~/.bashrc or ~/.zshrc file and then run either source ~/.bashrc or source ~/.zshrc
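
As a further sanity check that the binding compiles and links against libtorch, you can build a minimal program that only imports the package and prints its exported Version variable (see the Variables section below). This is just a sketch; the import alias and package name pytorch are assumed from the module path.

package main

import (
	"fmt"

	pytorch "github.com/rai-project/go-pytorch"
)

func main() {
	// If this builds and runs, the CGO flags and the libtorch library paths are set up correctly.
	fmt.Println("go-pytorch version:", pytorch.Version)
}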

Examples

Examples of using the Go Pytorch binding to perform model inference are under examples.

batch_mlmodelscope

This example shows how to use the MLModelScope tracer to profile the inference.

Refer to Set up the external services to start the tracer.

Then run the example by

  cd example/batch_mlmodelscope
  go build
  ./batch

Now you can go to localhost:16686 to look at the trace of that inference.

batch_nvprof

This example shows how to use nvprof to profile the inference. You need GPU and CUDA to run this example.

  cd example/batch_nvprof
  go build
  nvprof --profile-from-start off ./batch_nvprof

Refer to Profiler User's Guide for using nvprof.

Credits

Parts of the implementation have been borrowed from orktes/go-torch.

Documentation

Constants

This section is empty.

Variables

var (
	Version   = "0.0.1"
	BuildDate = "undefined"
	GitCommit = "undefined"
)

Functions

func GetError

func GetError() error

func GetErrorString

func GetErrorString() string

func HasError

func HasError() bool

func PanicOnError

func PanicOnError()

func PrintTensors

func PrintTensors(inputs ...*Tensor)

PrintTensors prints the contents of the given tensors

func ResetError

func ResetError()
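
The error helpers above appear to expose a shared error state set by the underlying C++ calls. A minimal sketch of how they might be combined after calling into the binding (hedged: the import alias and the exact behavior of the pending error are assumptions based only on the signatures above):

package main

import (
	"fmt"

	pytorch "github.com/rai-project/go-pytorch"
)

func main() {
	// ... calls into the binding would go here ...

	if pytorch.HasError() {
		// GetError returns the pending error as a Go error value;
		// GetErrorString returns the same information as a plain string.
		err := pytorch.GetError()
		fmt.Println("torch error:", err)

		// ResetError clears the pending error so later calls start clean.
		pytorch.ResetError()
	}

	// Alternatively, PanicOnError presumably panics if an error is pending.
	pytorch.PanicOnError()
}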

Types

type DType

type DType C.Torch_DataType

DType tensor scalar data type

const (
	UnknownType DType = C.Torch_Unknown
	// Byte byte tensors (go type uint8)
	Byte DType = C.Torch_Byte
	// Char char tensor (go type int8)
	Char DType = C.Torch_Char
	// Int int tensor (go type int32)
	Int DType = C.Torch_Int
	// Long long tensor (go type int64)
	Long DType = C.Torch_Long
	// Float tensor (go type float32)
	Float DType = C.Torch_Float
	// Double tensor  (go type float64)
	Double DType = C.Torch_Double
)

type DeviceKind

type DeviceKind C.Torch_DeviceKind
const (
	UnknownDeviceKind DeviceKind = C.UNKNOWN_DEVICE_KIND
	CPUDeviceKind     DeviceKind = C.CPU_DEVICE_KIND
	CUDADeviceKind    DeviceKind = C.CUDA_DEVICE_KIND
)

type Error

type Error struct {
	// contains filtered or unexported fields
}

Error errors returned by torch functions

func (*Error) Error

func (te *Error) Error() string

type Predictor

type Predictor struct {
	// contains filtered or unexported fields
}

func New

func New(ctx context.Context, opts ...options.Option) (*Predictor, error)

func (*Predictor) Close

func (p *Predictor) Close()

func (*Predictor) DisableProfiling

func (p *Predictor) DisableProfiling() error

func (*Predictor) EnableProfiling

func (p *Predictor) EnableProfiling() error

func (*Predictor) EndProfiling

func (p *Predictor) EndProfiling() error

func (*Predictor) Predict

func (p *Predictor) Predict(ctx context.Context, inputs []tensor.Tensor) error

func (*Predictor) ReadPredictionOutput

func (p *Predictor) ReadPredictionOutput(ctx context.Context) ([]tensor.Tensor, error)

func (*Predictor) ReadProfile

func (p *Predictor) ReadProfile() (string, error)

func (*Predictor) StartProfiling

func (p *Predictor) StartProfiling(name, metadata string) error
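
For orientation, here is a hedged sketch of a typical Predictor lifecycle built only from the signatures above: create it with New, run Predict, read the outputs with ReadPredictionOutput, and Close it when done. The options for New (model file, device, batch size) come from an MLModelScope options package whose constructors are not documented on this page and are therefore omitted, and the tensor.Tensor inputs are assumed to be gorgonia.org/tensor values; both are assumptions, not verified against this module.

package main

import (
	"context"
	"log"

	pytorch "github.com/rai-project/go-pytorch"
	"gorgonia.org/tensor" // assumption: the tensor.Tensor used in the signatures above
)

func main() {
	ctx := context.Background()

	// Real use would pass options.Option values (model file, device, batch size);
	// they are omitted here because their constructors are not part of this page.
	p, err := pytorch.New(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer p.Close()

	// A single 1x3x224x224 float32 input, the shape used by many image models.
	input := tensor.New(
		tensor.WithShape(1, 3, 224, 224),
		tensor.WithBacking(make([]float32, 1*3*224*224)),
	)

	if err := p.Predict(ctx, []tensor.Tensor{input}); err != nil {
		log.Fatal(err)
	}

	outputs, err := p.ReadPredictionOutput(ctx)
	if err != nil {
		log.Fatal(err)
	}
	log.Println("number of outputs:", len(outputs))
}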

type Tensor

type Tensor struct {
	// contains filtered or unexported fields
}

Tensor holds a multi-dimensional array of elements of a single data type.

func NewTensor

func NewTensor(value interface{}, device DeviceKind) (*Tensor, error)

NewTensor converts from a Go value to a Tensor. Valid values are scalars, slices, and arrays. Every element of a slice must have the same length so that the resulting Tensor has a valid shape.

func NewTensorWithShape

func NewTensorWithShape(value interface{}, shape []int64, dt DType, device DeviceKind) (*Tensor, error)

NewTensorWithShape converts a single dimensional Go array or slice into a Tensor with given shape

func (*Tensor) DType

func (t *Tensor) DType() DType

DType returns the tensor's data type

func (*Tensor) Shape

func (t *Tensor) Shape() []int64

Shape returns the tensor's shape

func (*Tensor) Value

func (t *Tensor) Value() interface{}

Value returns the tensor's value as a Go type
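
Putting the Tensor constructors and accessors together, a minimal sketch (the expected outputs in the comments follow directly from the arguments; the import alias is an assumption):

package main

import (
	"fmt"
	"log"

	pytorch "github.com/rai-project/go-pytorch"
)

func main() {
	// Reshape a flat float32 slice into a 2x3 CPU tensor.
	data := []float32{1, 2, 3, 4, 5, 6}
	t, err := pytorch.NewTensorWithShape(data, []int64{2, 3}, pytorch.Float, pytorch.CPUDeviceKind)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(t.Shape())                  // [2 3]
	fmt.Println(t.DType() == pytorch.Float) // true
	fmt.Println(t.Value())                  // the contents converted back to a Go value

	// NewTensor infers shape and element type from a (possibly nested) Go value.
	t2, err := pytorch.NewTensor([][]float32{{1, 2}, {3, 4}}, pytorch.CPUDeviceKind)
	if err != nil {
		log.Fatal(err)
	}
	pytorch.PrintTensors(t, t2)
}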

type Tuple

type Tuple []interface{}

Tuple a tuple type

func NewTuple

func NewTuple(vals ...interface{}) (Tuple, error)

NewTuple returns a new tuple for given values (go types, torch.Tensor, torch.Tuple)

func (Tuple) Get

func (t Tuple) Get(index int) interface{}

Get returns the value at the given tuple index (otherwise returns nil)
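
A short sketch of Tuple construction and indexed access, using only the two functions documented above (the nil case follows from the Get documentation; the import alias is an assumption):

package main

import (
	"fmt"
	"log"

	pytorch "github.com/rai-project/go-pytorch"
)

func main() {
	// Tuples can hold plain Go values as well as torch.Tensor and torch.Tuple values.
	tup, err := pytorch.NewTuple(int64(42), float32(0.5))
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(tup.Get(0)) // 42
	fmt.Println(tup.Get(1)) // 0.5
	fmt.Println(tup.Get(9)) // nil, since that index is not in the tuple
}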

Directories

Path Synopsis
example
