local

package
v0.0.0-...-0f4c570
Published: Mar 1, 2018 License: Apache-2.0 Imports: 51 Imported by: 0

Documentation

Index

Constants

This section is empty.

Variables

var DefaultS3Region = "us-east-1"

DefaultS3Region is the region used for S3 requests when a bucket's region is undiscoverable (e.g., when the caller lacks permission for the GetBucketLocation API call).

Amazon generally defaults to us-east-1 when a region is unspecified (or undiscoverable), but this variable can be overridden if a different default is desired.
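The fallback behavior described above can be sketched as follows. This is an illustration of the pattern, not the package's implementation; `regionFor` and `lookupRegion` are hypothetical names.

```go
package main

import (
	"errors"
	"fmt"
)

// DefaultS3Region mirrors the package variable: the region used when a
// bucket's region cannot be discovered.
var DefaultS3Region = "us-east-1"

// lookupRegion stands in for a GetBucketLocation call that may fail,
// e.g., for lack of permissions. (Hypothetical helper.)
func lookupRegion(bucket string) (string, error) {
	return "", errors.New("AccessDenied: GetBucketLocation")
}

// regionFor falls back to DefaultS3Region when discovery fails.
func regionFor(bucket string) string {
	if region, err := lookupRegion(bucket); err == nil {
		return region
	}
	return DefaultS3Region
}

func main() {
	DefaultS3Region = "eu-west-1" // override the default if desired
	fmt.Println(regionFor("some-bucket"))
}
```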

Functions

This section is empty.

Types

type Executor

type Executor struct {
	// RunID is the ID of the run, in the form <username>@grailbio.com/<hash>.
	RunID string
	// ID is the ID of the executor. It is the URI of the executor and also
	// the prefix used in any Docker containers whose execs are
	// children of this executor.
	ID string
	// Prefix is the filesystem prefix used to access paths on disk. This is
	// defined so that the executor can run inside of a Docker container
	// (which has the host's filesystem exported at this prefix).
	Prefix string
	// Dir is the root directory of this executor. All of its state is contained
	// within it.
	Dir string
	// Client is the Docker client used by this executor.
	Client *client.Client
	// Authenticator is used to pull images that are stored on Amazon's ECR
	// service.
	Authenticator ecrauth.Interface
	// AWSImage is a Docker image that contains the 'aws' tool.
	// This is used to implement S3 interns and externs.
	AWSImage string
	// AWSCreds is an AWS credentials provider, used for S3 operations
	// and "$aws" passthroughs.
	AWSCreds *credentials.Credentials
	// Log is this executor's logger where operational status is printed.
	Log *log.Logger

	// DigestLimiter limits the number of concurrent digest operations
	// performed while installing files into this executor's repository.
	DigestLimiter *limiter.Limiter

	// S3FileLimiter controls the number of S3 file downloads that may
	// proceed concurrently.
	S3FileLimiter *limiter.Limiter

	// ExternalS3 defines whether to use external processes (AWS CLI tool
	// running in docker) for S3 operations. At the moment, this flag only
	// works for interns.
	ExternalS3 bool

	// FileRepository is the (file-based) object repository used by this
	// Executor. It may be provided by the user, or else it is set to a
	// default implementation when (*Executor).Start is called.
	FileRepository *file.Repository
	// contains filtered or unexported fields
}

Executor is a small management layer on top of exec. It implements reflow.Executor. Executor assumes that it has local access to the file system (perhaps with a prefix).

Executor stores its state to disk and, when recovered, re-instantiates all execs (which in turn recover).

func (*Executor) Execs

func (e *Executor) Execs(ctx context.Context) ([]reflow.Exec, error)

Execs returns all execs managed by this executor.

func (*Executor) Get

func (e *Executor) Get(ctx context.Context, id digest.Digest) (reflow.Exec, error)

Get returns the exec named ID, or an errors.NotExist if the exec does not exist.

func (*Executor) Kill

func (e *Executor) Kill(ctx context.Context) error

Kill disposes of the executor and all of its execs. It also sets the executor's "dead" flag, so that all future operations on the executor return an error.

func (*Executor) Put

Put idempotently defines a new exec with a given ID and config. The exec may be (deterministically) rewritten.
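Idempotent Put can be sketched as a keyed lookup-or-create: repeated calls with the same ID return the same exec. The types below are simplified stand-ins, not the package's API.

```go
package main

import (
	"fmt"
	"sync"
)

type exec struct{ id string }

// execStore sketches idempotent Put: the first call for an ID creates
// the exec; later calls return the existing one. (Simplified stand-in
// for the executor's behavior.)
type execStore struct {
	mu    sync.Mutex
	execs map[string]*exec
}

func (s *execStore) Put(id string) *exec {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.execs == nil {
		s.execs = make(map[string]*exec)
	}
	if x, ok := s.execs[id]; ok {
		return x
	}
	x := &exec{id: id}
	s.execs[id] = x
	return x
}

func main() {
	var s execStore
	a := s.Put("sha256-abc")
	b := s.Put("sha256-abc")
	fmt.Println(a == b) // repeated Put returns the same exec
}
```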

func (*Executor) Remove

func (e *Executor) Remove(ctx context.Context, id digest.Digest) error

Remove removes the exec named id.

func (*Executor) Repository

func (e *Executor) Repository() reflow.Repository

Repository returns the repository attached to this executor.

func (*Executor) Resources

func (e *Executor) Resources() reflow.Resources

Resources reports the total capacity of this executor.

func (*Executor) SetResources

func (e *Executor) SetResources(r reflow.Resources)

SetResources sets the resources reported by Resources() to r.

func (*Executor) Start

func (e *Executor) Start() error

Start initializes the executor and recovers previously stored state. It re-initializes all stored execs.

func (*Executor) URI

func (e *Executor) URI() string

URI returns the executor's ID.

type Manifest

type Manifest struct {
	Type  execType
	State execState

	Created time.Time

	Result    reflow.Result
	Config    reflow.ExecConfig   // The object config used to create this object.
	Docker    types.ContainerJSON // Docker inspect output.
	Resources reflow.Resources
	Stats     stats
	Gauges    reflow.Gauges
}

Manifest stores the state of an exec. It is serialized to JSON and stored on disk so that executors are restartable, and can recover from crashes.

type Pool

type Pool struct {
	// Dir is the filesystem root of the pool. Everything under this
	// path is assumed to be owned and managed by the pool.
	Dir string
	// Prefix is prepended to paths constructed by allocs. This is to
	// permit running the pool manager inside of a Docker container.
	Prefix string
	// Client is the Docker client. We assume that the Docker daemon
	// runs on the same host from which the pool is managed.
	Client *client.Client
	// Authenticator is used to authenticate ECR image pulls.
	Authenticator interface {
		Authenticates(ctx context.Context, image string) (bool, error)
		Authenticate(ctx context.Context, cfg *types.AuthConfig) error
	}
	// AWSImage is the name of the image that contains the 'aws' tool.
	// This is used to implement directory syncing via s3.
	AWSImage string
	// AWSCreds is a credentials provider used to mint AWS credentials.
	// They are used to access AWS services.
	AWSCreds *credentials.Credentials
	// Log is this pool's logger where operational status is printed.
	Log *log.Logger
	// DigestLimiter controls the number of digest operations that may
	// proceed concurrently.
	DigestLimiter *limiter.Limiter
	// S3FileLimiter controls the number of S3 file downloads that may
	// proceed concurrently.
	S3FileLimiter *limiter.Limiter
	// contains filtered or unexported fields
}

Pool implements a resource pool on top of a Docker client. The pool itself must run on the same machine as the Docker instance as it performs local filesystem operations that must be reflected inside the container.

Pool keeps all state on disk, as follows:

Prefix/Dir/state.json
	Stores the set of currently active allocs, together with their
	resource requirements.

Prefix/Dir/allocs/<id>/
	The root directory for the alloc with id. The state under
	this directory is managed by an executor instance.

func (*Pool) Alloc

func (p *Pool) Alloc(ctx context.Context, id string) (pool.Alloc, error)

Alloc looks up an alloc by ID.

func (*Pool) Allocs

func (p *Pool) Allocs(ctx context.Context) ([]pool.Alloc, error)

Allocs lists all the active allocs in the pool.

func (*Pool) ID

func (p *Pool) ID() string

ID returns the ID of the pool. It is always "local".

func (*Pool) Offer

func (p *Pool) Offer(ctx context.Context, id string) (pool.Offer, error)

Offer looks up an offer by ID.

func (*Pool) Offers

func (p *Pool) Offers(ctx context.Context) ([]pool.Offer, error)

Offers enumerates all the current offers of this pool. The local pool always returns either no offers, when there are no more available resources, or 1 offer comprising the entirety of available resources.
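The documented all-or-nothing semantics can be sketched as follows; resources are a single number here rather than reflow.Resources, and `offers` is a hypothetical helper.

```go
package main

import "fmt"

// offers sketches the local pool's behavior: no offers when nothing is
// available, or exactly one offer comprising all available resources.
func offers(available int) []int {
	if available <= 0 {
		return nil
	}
	return []int{available}
}

func main() {
	fmt.Println(len(offers(0)), len(offers(16))) // prints "0 1"
}
```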

func (*Pool) Start

func (p *Pool) Start() error

Start starts the pool. If the pool has a state snapshot, Start will restore the pool's previous state. Start will also make sure that all zombie allocs are collected.

func (*Pool) StopIfIdleFor

func (p *Pool) StopIfIdleFor(d time.Duration) bool

StopIfIdleFor stops the pool if it has been idle for at least the duration d. It returns whether the pool was stopped.

Directories

Path Synopsis
Package pooltest tests pools.
Package testutil provides utilities for testing code that involves pools.
