predictor

package
v1.2.8
Published: Jun 14, 2022 License: NCSA Imports: 35 Imported by: 0

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func ElementTypeSliceToTensor

func ElementTypeSliceToTensor(data [][]ElementType, shape []int64) (*tf.Tensor, error)

func Float32SliceToTensor

func Float32SliceToTensor(data [][]float32, shape []int64) (*tf.Tensor, error)
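
The *SliceToTensor helpers convert a two-dimensional Go slice into a *tf.Tensor with the given shape. A minimal sketch using Float32SliceToTensor; the import path is an assumption, and the shape is assumed to describe the same layout as the slice:

package main

import (
	"fmt"
	"log"

	// Assumed import path; adjust to wherever this predictor package lives.
	predictor "github.com/c3sr/tensorflow/predictor"
)

func main() {
	// A 2x3 batch of float32 values.
	data := [][]float32{
		{0.1, 0.2, 0.3},
		{0.4, 0.5, 0.6},
	}
	// The shape is assumed to match the slice layout: 2 rows of 3 elements.
	tensor, err := predictor.Float32SliceToTensor(data, []int64{2, 3})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(tensor.Shape()) // *tf.Tensor reports its shape as []int64
}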

func Float64SliceToTensor

func Float64SliceToTensor(data [][]float64, shape []int64) (*tf.Tensor, error)

func Int16SliceToTensor

func Int16SliceToTensor(data [][]int16, shape []int64) (*tf.Tensor, error)

func Int32SliceToTensor

func Int32SliceToTensor(data [][]int32, shape []int64) (*tf.Tensor, error)

func Int64SliceToTensor

func Int64SliceToTensor(data [][]int64, shape []int64) (*tf.Tensor, error)

func Int8SliceToTensor

func Int8SliceToTensor(data [][]int8, shape []int64) (*tf.Tensor, error)

func NewGeneralPredictor added in v1.1.0

func NewGeneralPredictor(model dlframework.ModelManifest, os ...options.Option) (common.Predictor, error)

NewGeneralPredictor creates a predictor for the given model manifest, applying any supplied options.

func Uint16SliceToTensor

func Uint16SliceToTensor(data [][]uint16, shape []int64) (*tf.Tensor, error)

func Uint32SliceToTensor

func Uint32SliceToTensor(data [][]uint32, shape []int64) (*tf.Tensor, error)

func Uint64SliceToTensor

func Uint64SliceToTensor(data [][]uint64, shape []int64) (*tf.Tensor, error)

func Uint8SliceToTensor

func Uint8SliceToTensor(data [][]uint8, shape []int64) (*tf.Tensor, error)

Types

type Device

type Device struct {
	Name, Type       string
	MemoryLimitBytes int64
}

Device structure contains information about a device associated with a session, as returned by ListDevices()

func (Device) String

func (d Device) String() string

String describes d and implements fmt.Stringer.

type ElementType

type ElementType generic.Type
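
ElementType is a genny-style placeholder (generic.Type is the marker type from github.com/cheekybits/genny/generic), which suggests the typed *SliceToTensor helpers above are generated instantiations of the ElementTypeSliceToTensor template. A hypothetical go:generate directive of the kind that would produce them; the file names are assumptions:

// Hypothetical directive; the file names are assumptions, the type list mirrors the helpers above.
//go:generate genny -in=element_type_slice_to_tensor.go -out=generated_slice_to_tensor.go gen "ElementType=int8,int16,int32,int64,uint8,uint16,uint32,uint64,float32,float64"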

type GeneralPredictor added in v1.1.0

type GeneralPredictor struct {
	common.Base
	// contains filtered or unexported fields
}

GeneralPredictor runs inference over a TensorFlow model; it embeds common.Base and is the concrete type behind the common.Predictor returned by NewGeneralPredictor.

func (*GeneralPredictor) Close added in v1.1.0

func (p *GeneralPredictor) Close() error

Close releases the resources held by the predictor.

func (*GeneralPredictor) Load added in v1.1.0

Load loads the model so that it is ready for prediction.

func (*GeneralPredictor) Modality added in v1.1.0

func (p *GeneralPredictor) Modality() (dlframework.Modality, error)

Modality returns the modality of the loaded model.

func (*GeneralPredictor) Predict added in v1.1.0

func (p *GeneralPredictor) Predict(ctx context.Context, data interface{}, opts ...options.Option) error

Predict runs inference on data using the loaded model, applying any supplied options.

func (*GeneralPredictor) ReadPredictedFeaturesAsMap added in v1.1.0

func (p *GeneralPredictor) ReadPredictedFeaturesAsMap(ctx context.Context) (map[string]interface{}, error)

ReadPredictedFeaturesAsMap returns the features predicted by the last Predict call as a map.

func (*GeneralPredictor) Reset added in v1.1.0

func (p *GeneralPredictor) Reset(ctx context.Context) error

Reset clears the predictor's prediction state so it can be reused.

func (*GeneralPredictor) SetDesiredOutput added in v1.2.1

func (p *GeneralPredictor) SetDesiredOutput(modality dlframework.Modality)

SetDesiredOutput lets post-processing use a different output modality. The model must still produce output in a format that the desired modality's post-processing can handle.
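
A minimal sketch of the GeneralPredictor lifecycle using only the methods documented above (create, Predict, ReadPredictedFeaturesAsMap, Close). The import paths, the type assertion from common.Predictor, and the shape of the input are assumptions:

package example

import (
	"context"
	"fmt"

	"github.com/c3sr/dlframework"                      // assumed import path for ModelManifest
	tfpredictor "github.com/c3sr/tensorflow/predictor" // assumed import path for this package
)

// runOnce walks the documented lifecycle: construct, predict, read features, close.
func runOnce(ctx context.Context, manifest dlframework.ModelManifest, input interface{}) (map[string]interface{}, error) {
	cp, err := tfpredictor.NewGeneralPredictor(manifest) // options are variadic and omitted here
	if err != nil {
		return nil, err
	}

	// NewGeneralPredictor returns a common.Predictor; assert the concrete type
	// documented on this page so the methods below are in scope.
	p, ok := cp.(*tfpredictor.GeneralPredictor)
	if !ok {
		return nil, fmt.Errorf("unexpected predictor type %T", cp)
	}
	defer p.Close()

	// Optionally call p.SetDesiredOutput(modality) here to change the
	// post-processing modality before running Predict.
	if err := p.Predict(ctx, input); err != nil {
		return nil, err
	}
	return p.ReadPredictedFeaturesAsMap(ctx)
}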

type Graph

type Graph = tf.Graph

type Operation

type Operation = tf.Operation

type Output

type Output = tf.Output

type Session

type Session struct {
	// contains filtered or unexported fields
}

Session drives a TensorFlow graph computation.

When a Session is created with a given target, a new Session object is bound to the universe of resources specified by that target. Those resources are available to this session to perform computation described in the GraphDef. After creating the session with a graph, the caller uses the Run() API to perform the computation and potentially fetch outputs as Tensors. A Session allows concurrent calls to Run().

func NewSession

func NewSession(graph *Graph, options *SessionOptions) (*Session, error)

NewSession creates a new execution session with the associated graph. options may be nil to use the default options.

func (*Session) Close

func (s *Session) Close() error

Close a session. This contacts any other processes associated with this session, if applicable. Blocks until all previous calls to Run have returned.

func (*Session) ListDevices

func (s *Session) ListDevices() ([]Device, error)

ListDevices returns the list of devices associated with a Session.

func (*Session) Run

func (s *Session) Run(ctx context.Context, feeds map[Output]*Tensor, fetches []Output, targets []*Operation, runOpts *proto.RunOptions, graphPath string) ([]*Tensor, error)
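
Run resembles the upstream TensorFlow Go binding's Session.Run, with an added context, RunOptions, and graph path. A hedged sketch of feeding one tensor and fetching one output; the GraphDef bytes, the operation names, and the empty trailing arguments are assumptions:

package example

import (
	"context"
	"log"

	tfpredictor "github.com/c3sr/tensorflow/predictor"  // assumed import path for this package
	tf "github.com/tensorflow/tensorflow/tensorflow/go" // Graph, Operation, Output, and Tensor alias this package's types
)

// runGraph builds a graph from a serialized GraphDef, runs it through the
// package's Session wrapper, and fetches a single output tensor.
func runGraph(ctx context.Context, graphDef []byte, input interface{}) error {
	graph := tf.NewGraph()
	if err := graph.Import(graphDef, ""); err != nil {
		return err
	}

	// A nil *SessionOptions selects the defaults (local runtime, default config).
	sess, err := tfpredictor.NewSession(graph, nil)
	if err != nil {
		return err
	}
	defer sess.Close()

	tensor, err := tf.NewTensor(input)
	if err != nil {
		return err
	}

	// "input" and "output" are placeholder operation names; real models define
	// their own. runOpts and graphPath are left empty in this sketch.
	fetched, err := sess.Run(ctx,
		map[tfpredictor.Output]*tfpredictor.Tensor{
			graph.Operation("input").Output(0): tensor,
		},
		[]tfpredictor.Output{graph.Operation("output").Output(0)},
		nil, // no extra target operations
		nil, // *proto.RunOptions
		"",  // graphPath
	)
	if err != nil {
		return err
	}
	log.Printf("fetched %d output tensor(s)", len(fetched))
	return nil
}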

type SessionOptions

type SessionOptions struct {
	// Target indicates the TensorFlow runtime to connect to.
	//
	// If 'target' is empty or unspecified, the local TensorFlow runtime
	// implementation will be used.  Otherwise, the TensorFlow engine
	// defined by 'target' will be used to perform all computations.
	//
	// "target" can be either a single entry or a comma separated list
	// of entries. Each entry is a resolvable address of one of the
	// following formats:
	//   local
	//   ip:port
	//   host:port
	//   ... other system-specific formats to identify tasks and jobs ...
	//
	// NOTE: at the moment 'local' maps to an in-process service-based
	// runtime.
	//
	// Upon creation, a single session affines itself to one of the
	// remote processes, with possible load balancing choices when the
	// "target" resolves to a list of possible processes.
	//
	// If the session disconnects from the remote process during its
	// lifetime, session calls may fail immediately.
	Target string

	// Config is a binary-serialized representation of the
	// tensorflow.ConfigProto protocol message
	// (https://www.tensorflow.org/code/tensorflow/core/protobuf/config.proto).
	Config []byte
}

SessionOptions contains configuration information for a session.
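
The Config field carries a binary-serialized tensorflow.ConfigProto. A sketch of filling it; the import path for the generated ConfigProto bindings is an assumption:

package example

import (
	"google.golang.org/protobuf/proto"

	tfpredictor "github.com/c3sr/tensorflow/predictor" // assumed import path for this package
	// Assumed import path for the generated tensorflow.ConfigProto bindings.
	pb "github.com/tensorflow/tensorflow/tensorflow/go/core/protobuf/for_core_protos_go_proto"
)

// sessionOptions builds SessionOptions whose Config is a binary-serialized
// tensorflow.ConfigProto, as the field's documentation requires.
func sessionOptions() (*tfpredictor.SessionOptions, error) {
	cfg, err := proto.Marshal(&pb.ConfigProto{
		AllowSoftPlacement: true, // fall back to CPU when a GPU kernel is unavailable
	})
	if err != nil {
		return nil, err
	}
	return &tfpredictor.SessionOptions{
		Target: "", // an empty Target selects the local, in-process runtime
		Config: cfg,
	}, nil
}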

type Tensor

type Tensor = tf.Tensor

type Trace

type Trace struct {
	// contains filtered or unexported fields
}

func (*Trace) Publish

func (t *Trace) Publish(ctx context.Context, opts ...opentracing.StartSpanOption) error

Notes on start and end times from the NodeExecStats proto: for GPU, there is no difference between op_end_rel_micros and all_end_rel_micros; both are kernel times. For CPU, op_end_rel_micros is the kernel time, while all_end_rel_micros includes some post-processing. Also, there is currently no way to measure the execution time of async ops accurately.
