reflow: github.com/grailbio/reflow

package reflow

import "github.com/grailbio/reflow"

Package reflow implements the core data structures and (abstract) runtime for Reflow.

Reflow is a system for distributed program execution. The programs are described by Flows, which are an abstract specification of the program's execution. Each Flow node can take any number of other Flows as dependent inputs and perform some (local) execution over these inputs in order to compute some output value.

Reflow supports a limited form of dynamic dependencies: a Flow may evaluate to a list of values, each of which may be executed independently. This mechanism also provides parallelism.

The system orchestrates Flow execution by evaluating the flow in the manner of an abstract syntax tree; see Eval for more details.
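The tree-structured evaluation described above can be illustrated with a toy sketch. This is not reflow's actual Flow or Eval implementation; the node type and eval function here are invented for illustration: each node computes its value only after its dependencies, and results are memoized so shared dependencies are evaluated once.

```go
package main

import "fmt"

// node is a toy stand-in for a Flow: it computes a value from the
// values of its dependencies.
type node struct {
	name string
	deps []*node
	op   func(inputs []string) string
}

// eval evaluates a node bottom-up, memoizing results so shared
// dependencies are computed only once.
func eval(n *node, memo map[*node]string) string {
	if v, ok := memo[n]; ok {
		return v
	}
	inputs := make([]string, len(n.deps))
	for i, d := range n.deps {
		inputs[i] = eval(d, memo)
	}
	v := n.op(inputs)
	memo[n] = v
	return v
}

func main() {
	fetch := &node{name: "intern", op: func([]string) string { return "data" }}
	process := &node{name: "exec", deps: []*node{fetch}, op: func(in []string) string {
		return in[0] + "+processed"
	}}
	fmt.Println(eval(process, map[*node]string{})) // "data+processed"
}
```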

Index

Package Files

assertion.go cache.go digester.go doc.go executor.go fileset.go repository.go

Variables

var Digester = digest.Digester(crypto.SHA256)

func AssertExact Uses

func AssertExact(_ context.Context, source, target []*Assertions) bool

AssertExact implements Assert for an exact match. That is, for each key in target, the value must match exactly what's in source, and target must not contain keys missing from source.

func AssertNever Uses

func AssertNever(_ context.Context, _, _ []*Assertions) bool

AssertNever implements Assert for an always-successful match (i.e., it never asserts anything).

func PrettyDiff Uses

func PrettyDiff(lefts, rights []*Assertions) string

PrettyDiff returns a pretty-printable string representing the differences between the sets of Assertions in lefts and rights. Specifically, only these differences are relevant:

- any key present in any of the rights but not in lefts;
- any entry (in any of the rights) with a mismatching assertion (in any of the lefts).

TODO(swami): Add unit tests.

type Arg Uses

type Arg struct {
    // Out is true if this is an output argument.
    Out bool
    // Fileset is the fileset used as an input argument.
    Fileset *Fileset `json:",omitempty"`
    // Index is the output argument index.
    Index int
}

Arg represents an exec argument (either input or output).

type Assert Uses

type Assert func(ctx context.Context, source, target []*Assertions) bool

Assert reports whether the target set of assertions is compatible with the source set. Compatibility is directional: Assert strictly determines whether target is compatible with source, so swapping the arguments may not yield the same result.
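The directional compatibility described above can be sketched with plain maps standing in for reflow's *Assertions (an illustrative toy, not the package's implementation), following the documented AssertExact rule: every key in target must appear in source with exactly the same value.

```go
package main

import "fmt"

// assertExact is a toy sketch of the documented AssertExact semantics:
// every key in target must be present in source with the same value.
// Compatibility is directional, so swapping arguments can change the
// answer.
func assertExact(source, target map[string]string) bool {
	for k, v := range target {
		sv, ok := source[k]
		if !ok || sv != v {
			return false
		}
	}
	return true
}

func main() {
	src := map[string]string{"etag": "abc", "size": "10"}
	tgt := map[string]string{"etag": "abc"}
	fmt.Println(assertExact(src, tgt)) // true: target is a matching subset of source
	fmt.Println(assertExact(tgt, src)) // false: "size" is missing from the (swapped) source
}
```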

type AssertionGenerator Uses

type AssertionGenerator interface {
    // Generate computes assertions for a given AssertionKey.
    Generate(ctx context.Context, key AssertionKey) (*Assertions, error)
}

AssertionGenerator generates assertions based on an AssertionKey. Implementations are specific to a namespace and generate assertions for a given subject.

type AssertionGeneratorMux Uses

type AssertionGeneratorMux map[string]AssertionGenerator

AssertionGeneratorMux multiplexes a number of AssertionGenerator implementations based on the namespace.

func (AssertionGeneratorMux) Generate Uses

func (am AssertionGeneratorMux) Generate(ctx context.Context, key AssertionKey) (*Assertions, error)

Generate implements the AssertionGenerator interface for AssertionGeneratorMux.

type AssertionKey Uses

type AssertionKey struct {
    Subject, Namespace string
}

AssertionKey represents a subject within a namespace whose properties can be asserted:

- Subject is the unique entity within the Namespace to which this Assertion applies (e.g., the full path to a blob object, a Docker image, etc).
- Namespace is the namespace to which the subject of this Assertion belongs (e.g., "blob" for blob objects, "docker" for Docker images, etc).

func (AssertionKey) Less Uses

func (a AssertionKey) Less(b AssertionKey) bool

Less returns whether a is lexicographically smaller than the given AssertionKey b.

type Assertions Uses

type Assertions struct {
    // contains filtered or unexported fields
}

Assertions represent a collection of AssertionKeys with specific values for various of their properties. Assertions are constructed in one of the following ways:

NewAssertions: creates an empty Assertions, typically for subsequent AddFrom operations.
AssertionsFromEntry: creates Assertions from a single entry mapping an AssertionKey
to various properties (within the key's Namespace) of the named Subject in the key.
AssertionsFromMap: creates Assertions from a mapping of AssertionKey to properties.
MergeAssertions: merges a list of Assertions into a single Assertions.
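The merge-with-conflict-detection behavior documented for MergeAssertions and AddFrom can be sketched with nested maps (key -> property -> value) standing in for *Assertions. This is an illustrative toy under that assumption, not reflow's implementation:

```go
package main

import "fmt"

// mergeAssertions is a toy sketch of the documented MergeAssertions
// behavior: it merges a list of assertion maps and fails if any key's
// property would map to conflicting values as a result of the merge.
func mergeAssertions(list ...map[string]map[string]string) (map[string]map[string]string, error) {
	out := map[string]map[string]string{}
	for _, m := range list {
		for key, props := range m {
			if out[key] == nil {
				out[key] = map[string]string{}
			}
			for p, v := range props {
				if old, ok := out[key][p]; ok && old != v {
					return nil, fmt.Errorf("conflict for %s.%s: %q vs %q", key, p, old, v)
				}
				out[key][p] = v
			}
		}
	}
	return out, nil
}

func main() {
	a := map[string]map[string]string{"blob/s3://b/x": {"etag": "e1"}}
	b := map[string]map[string]string{"blob/s3://b/x": {"etag": "e2"}}
	if _, err := mergeAssertions(a, b); err != nil {
		fmt.Println("merge failed:", err) // conflicting values for the same property
	}
}
```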

func AssertionsFromEntry Uses

func AssertionsFromEntry(k AssertionKey, v map[string]string) *Assertions

AssertionsFromEntry creates an Assertions from a single entry. It is similar to AssertionsFromMap and exists for convenience.

func AssertionsFromMap Uses

func AssertionsFromMap(m map[AssertionKey]map[string]string) *Assertions

AssertionsFromMap creates an Assertions from a given mapping of AssertionKey to a map representing its property names and corresponding values.

func MergeAssertions Uses

func MergeAssertions(list ...*Assertions) (*Assertions, error)

MergeAssertions merges a list of Assertions into a single Assertions. Returns an error if the same key maps to a conflicting value as a result of the merge.

func NewAssertions Uses

func NewAssertions() *Assertions

NewAssertions creates a new Assertions object.

func NonEmptyAssertions Uses

func NonEmptyAssertions(list ...*Assertions) ([]*Assertions, int)

NonEmptyAssertions returns the list of non-empty Assertions from the given list and a total size.

func (*Assertions) AddFrom Uses

func (s *Assertions) AddFrom(list ...*Assertions) error

AddFrom adds to this Assertions from the given list of Assertions. Returns an error if the same key maps to a conflicting value as a result of the adding. AddFrom panics if s is nil.

func (*Assertions) Digest Uses

func (s *Assertions) Digest() digest.Digest

Digest returns the assertions' digest.

func (*Assertions) Equal Uses

func (s *Assertions) Equal(t *Assertions) bool

Equal returns whether the given Assertions is equal to this one.

func (*Assertions) Filter Uses

func (s *Assertions) Filter(t *Assertions) (*Assertions, []AssertionKey)

Filter returns new Assertions mapping keys from t with values from s (panics if s is nil) and a list of AssertionKeys that exist in t but are missing in s.

func (*Assertions) IsEmpty Uses

func (s *Assertions) IsEmpty() bool

IsEmpty returns whether this Assertions is empty, which it is if it's a nil reference or has no entries.

func (*Assertions) MarshalJSON Uses

func (s *Assertions) MarshalJSON() ([]byte, error)

MarshalJSON defines a custom marshal method for converting Assertions to JSON.

func (*Assertions) PrettyDiff Uses

func (s *Assertions) PrettyDiff(t *Assertions) string

PrettyDiff returns a pretty-printable string representing the differences in the given Assertions that conflict with this one. Specifically, only these differences are relevant:

- any key present in t but not in s;
- any entry with a mismatching assertion in t and s.

func (*Assertions) Short Uses

func (s *Assertions) Short() string

Short returns a short string representation of the assertions.

func (*Assertions) String Uses

func (s *Assertions) String() string

String returns a full, human-readable string representing the assertions.

func (*Assertions) UnmarshalJSON Uses

func (s *Assertions) UnmarshalJSON(b []byte) error

UnmarshalJSON defines a custom unmarshal method for Assertions.

func (*Assertions) WriteDigest Uses

func (s *Assertions) WriteDigest(w io.Writer)

WriteDigest writes the digestible material for s to w. The io.Writer is assumed to be produced by a Digester, and hence infallible. Errors are not checked.

type Cache Uses

type Cache interface {
    // Lookup returns the value associated with a (digest) key.
    // Lookup returns an error flagged errors.NotExist when there
    // is no such value.
    //
    // Lookup should also check to make sure that the objects
    // actually exist, and provide a reasonable guarantee that they'll
    // be available for transfer.
    //
    // TODO(marius): allow the caller to maintain a lease on the desired
    // objects so that garbage collection can (safely) be run
    // concurrently with flows. This isn't a correctness concern (the
    // flows may be restarted), but rather one of efficiency.
    Lookup(context.Context, digest.Digest) (Fileset, error)

    // Transfer transmits the file objects associated with value v
    // (usually retrieved by Lookup) to the repository dst. Transfer
    // should be used in place of direct (cache) repository access since
    // it may apply additional policies (e.g., rate limiting, etc.)
    Transfer(ctx context.Context, dst Repository, v Fileset) error

    // NeedTransfer returns the set of files in the Fileset v that are absent
    // in the provided repository.
    NeedTransfer(ctx context.Context, dst Repository, v Fileset) ([]File, error)

    // Write stores the Value v, whose file objects exist in Repository repo,
    // under the key id. If the repository is nil no objects are transferred.
    Write(ctx context.Context, id digest.Digest, v Fileset, repo Repository) error

    // Delete removes the value named by id from this cache.
    Delete(ctx context.Context, id digest.Digest) error

    // Repository returns this cache's underlying repository. It should
    // not be used for data transfer during the course of evaluation; see
    // Transfer.
    Repository() Repository
}

A Cache stores Values and their associated File objects for later retrieval. Caches may be temporary: objects are not guaranteed to persist.

type Exec Uses

type Exec interface {
    // ID returns the digest of the exec. This is equivalent to the Digest of the value computed
    // by the Exec.
    ID() digest.Digest

    // URI names execs in a process-agnostic fashion.
    URI() string

    // Result returns the exec's result after it has been completed.
    Result(ctx context.Context) (Result, error)

    // Inspect inspects the exec. It can be called at any point in the Exec's lifetime.
    Inspect(ctx context.Context) (ExecInspect, error)

    // Wait awaits completion of the Exec.
    Wait(ctx context.Context) error

    // Logs returns the standard error and/or standard output of the Exec.
    // If it is called during execution, and if follow is true, it follows
    // the logs until completion of execution.
    // Completed Execs return the full set of available logs.
    Logs(ctx context.Context, stdout, stderr, follow bool) (io.ReadCloser, error)

    // Shell invokes /bin/bash inside an Exec. It can be invoked only when
    // the Exec is executing. r provides the shell input. The returned read
    // closer has the shell output. The caller has to close the read closer
    // once done.
    // TODO(pgopal) - Implement shell for zombie execs.
    Shell(ctx context.Context) (io.ReadWriteCloser, error)

    // Promote installs this exec's objects into the alloc's repository.
    // Promote assumes that the Exec is complete, i.e., Wait returned successfully.
    Promote(context.Context) error
}

An Exec computes a Value. It is created from an ExecConfig; the Exec interface permits waiting on its completion and inspecting both its results and its ongoing execution.

type ExecConfig Uses

type ExecConfig struct {
    // The type of exec: "exec", "intern", "extern"
    Type string

    // A human-readable name for the exec.
    Ident string

    // intern, extern: the URL from which data is fetched or to which
    // data is pushed.
    URL string

    // exec: the docker image used to perform an exec
    Image string

    // The docker image that is specified by the user
    OriginalImage string

    // exec: the Sprintf-able command that is to be run inside of the
    // Docker image.
    Cmd string

    // exec: the set of arguments (one per %s in Cmd) passed to the command
    // extern: the single argument which is to be exported
    Args []Arg

    // exec: the resource requirements for the exec
    Resources

    // NeedAWSCreds indicates the exec needs AWS credentials defined in
    // its environment: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and
    // AWS_SESSION_TOKEN will be available with the user's default
    // credentials.
    NeedAWSCreds bool

    // NeedDockerAccess indicates that the exec needs access to the host docker daemon
    NeedDockerAccess bool

    // OutputIsDir tells whether an output argument (by index)
    // is a directory.
    OutputIsDir []bool `json:",omitempty"`
}

ExecConfig contains all the necessary information to perform an exec.
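The Sprintf-able Cmd described above takes one %s per Arg. A toy sketch of how such a command might be rendered follows; the path layout (/arg/N for materialized inputs, /return/N for outputs) is invented for illustration and is not reflow's actual convention:

```go
package main

import "fmt"

// arg is a toy stand-in for reflow's Arg: either an input with a
// materialized fileset path (hypothetical layout) or an output named
// by index.
type arg struct {
	out   bool
	path  string // input: materialized fileset path (assumption)
	index int    // output: argument index
}

// renderCmd fills each %s in the Sprintf-able command with a path
// standing in for the corresponding Arg.
func renderCmd(cmd string, args []arg) string {
	vals := make([]interface{}, len(args))
	for i, a := range args {
		if a.out {
			vals[i] = fmt.Sprintf("/return/%d", a.index) // hypothetical output path
		} else {
			vals[i] = a.path
		}
	}
	return fmt.Sprintf(cmd, vals...)
}

func main() {
	cmd := "gzip -c %s > %s/out.gz"
	args := []arg{{path: "/arg/0"}, {out: true, index: 0}}
	fmt.Println(renderCmd(cmd, args)) // gzip -c /arg/0 > /return/0/out.gz
}
```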

func (ExecConfig) String Uses

func (e ExecConfig) String() string

type ExecInspect Uses

type ExecInspect struct {
    Created time.Time
    Config  ExecConfig
    State   string        // "created", "waiting", "running", .., "zombie"
    Status  string        // human readable status
    Error   *errors.Error `json:",omitempty"` // non-nil on runtime error
    Profile Profile

    // Gauges are used to export realtime exec stats. They are used only
    // while the Exec is in running state.
    Gauges Gauges
    // Commands running from top, for live inspection.
    Commands []string
    // Docker inspect output.
    Docker types.ContainerJSON
    // ExecError stores exec result errors.
    ExecError *errors.Error `json:",omitempty"`
}

ExecInspect describes the current state of an Exec.

func (ExecInspect) Runtime Uses

func (e ExecInspect) Runtime() time.Duration

Runtime computes the exec's runtime based on Docker's timestamps.

type Executor Uses

type Executor interface {
    // Put creates a new Exec at id. It is idempotent.
    Put(ctx context.Context, id digest.Digest, exec ExecConfig) (Exec, error)

    // Get retrieves the Exec named id.
    Get(ctx context.Context, id digest.Digest) (Exec, error)

    // Remove deletes an Exec.
    Remove(ctx context.Context, id digest.Digest) error

    // Execs lists all Execs known to the Executor.
    Execs(ctx context.Context) ([]Exec, error)

    // Load fetches missing files into the executor's repository. Load fetches
    // resolved files from the specified backing repository and unresolved files
    // directly from the source. The resolved fileset is returned and is available
    // on the executor on successful return. The client has to explicitly unload the
    // files to free them.
    Load(ctx context.Context, repo *url.URL, fileset Fileset) (Fileset, error)

    // Unload the data from the executor's repository. Any use of the unloaded files
    // after the successful return of Unload is undefined.
    Unload(ctx context.Context, fileset Fileset) error

    // Resources indicates the total amount of resources available at the Executor.
    Resources() Resources

    // Repository returns the Repository associated with this Executor.
    Repository() Repository
}

Executor manages Execs and their values.

type File Uses

type File struct {
    // The digest of the contents of the file.
    ID  digest.Digest

    // The size of the file.
    Size int64

    // Source stores a URL for the file from which it may
    // be retrieved.
    Source string `json:",omitempty"`

    // ETag stores an optional entity tag for the Source file.
    ETag string `json:",omitempty"`

    // LastModified stores the file's last modified time.
    LastModified time.Time `json:",omitempty"`

    // ContentHash is the digest of the file contents and can be present
    // for unresolved (ie reference) files.
    // ContentHash is expected to equal ID once this file is resolved.
    ContentHash digest.Digest `json:",omitempty"`

    // Assertions are the set of assertions representing the state
    // of all the dependencies that went into producing this file.
    // Unlike Etag/Size etc which are properties of this File,
    // Assertions can include properties of other subjects that
    // contributed to producing this File.
    Assertions *Assertions `json:",omitempty"`
}

File represents a File inside of Reflow. A file is said to be resolved if it contains the digest of the file's contents (ID). Otherwise, a File is said to be a reference, in which case it must contain a source and etag and may contain a ContentHash. Any type of File (resolved or reference) can contain Assertions. TODO(swami): Split into resolved/reference files explicitly.

func (File) Digest Uses

func (f File) Digest() digest.Digest

Digest returns the file's digest: if the file is a reference and its ContentHash is unset, the digest comprises the reference's source, etag, and assertions. Reference files return ContentHash if it is set (which is assumed to be the digest of the file's contents). Resolved files return ID, which is the digest of the file's contents.
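The digest-selection precedence documented above can be sketched with a toy file type (strings in place of digest.Digest; not reflow's File, and the reference-field hash below is an invented placeholder):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// toyFile sketches the documented Digest selection logic.
type toyFile struct {
	ID          string // digest of contents; set iff the file is resolved
	Source      string
	ETag        string
	ContentHash string // optional, for reference files
}

func (f toyFile) isRef() bool { return f.ID == "" }

// digest mirrors the documented precedence: resolved files return ID;
// reference files return ContentHash when set, and otherwise a digest
// computed from the reference fields (here a placeholder hash).
func (f toyFile) digest() string {
	if !f.isRef() {
		return f.ID
	}
	if f.ContentHash != "" {
		return f.ContentHash
	}
	sum := sha256.Sum256([]byte(f.Source + "\x00" + f.ETag))
	return fmt.Sprintf("sha256:%x", sum[:4])
}

func main() {
	ref := toyFile{Source: "s3://bucket/x", ETag: "abc"}
	fmt.Println(ref.digest()) // derived from the reference fields
	fmt.Println(toyFile{ID: "sha256:f2c59c40"}.digest())
}
```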

func (File) Equal Uses

func (f File) Equal(g File) bool

Equal returns whether files f and g represent the same content. Since equality is a property of the file's contents, assertions are ignored.

func (File) IsRef Uses

func (f File) IsRef() bool

IsRef returns whether this file is a file reference.

func (File) Short Uses

func (f File) Short() string

func (File) String Uses

func (f File) String() string

type Fileset Uses

type Fileset struct {
    List []Fileset       `json:",omitempty"`
    Map  map[string]File `json:"Fileset,omitempty"`
}

Fileset is the result of an evaluated flow. A Fileset is either a list of Filesets or a map of paths to Files.
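The recursive list-or-map shape above can be sketched with a toy version of the type (path -> digest strings in place of Files; an illustration, not reflow's implementation), including the documented Flatten and Pullup behaviors:

```go
package main

import "fmt"

// fileset is a toy version of the documented Fileset shape: either a
// list of filesets or a map of paths to file digests.
type fileset struct {
	List []fileset
	Map  map[string]string
}

// flatten shallowly flattens v, per the documented Flatten: a list
// value yields its elements; anything else yields a unary list.
func (v fileset) flatten() []fileset {
	if len(v.List) > 0 {
		return v.List
	}
	return []fileset{v}
}

// pullup merges the whole tree into a single toplevel map fileset,
// per the documented Pullup.
func (v fileset) pullup() fileset {
	out := fileset{Map: map[string]string{}}
	var walk func(fileset)
	walk = func(f fileset) {
		for p, id := range f.Map {
			out.Map[p] = id
		}
		for _, c := range f.List {
			walk(c)
		}
	}
	walk(v)
	return out
}

func main() {
	v := fileset{List: []fileset{
		{Map: map[string]string{"a.txt": "d1"}},
		{Map: map[string]string{"b.txt": "d2"}},
	}}
	fmt.Println(len(v.flatten()), len(v.pullup().Map)) // 2 2
}
```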

func (*Fileset) AddAssertions Uses

func (v *Fileset) AddAssertions(as ...*Assertions) error

AddAssertions adds the given assertions to all files in this Fileset.

func (Fileset) AnyEmpty Uses

func (v Fileset) AnyEmpty() bool

AnyEmpty tells whether this value, or any of its constituent values contain no files.

func (Fileset) Assertions Uses

func (v Fileset) Assertions() []*Assertions

Assertions returns a list of all Assertions across all the Files in this Fileset.

func (Fileset) Diff Uses

func (v Fileset) Diff(w Fileset) (string, bool)

Diff deep-compares two filesets, assuming they have the same structure, and returns a pretty diff of the differences (if any) along with a boolean indicating whether they differ.

func (Fileset) Digest Uses

func (v Fileset) Digest() digest.Digest

Digest returns a digest representing the value. Digests preserve semantics: two values with the same digest are considered to be equivalent.

func (Fileset) Empty Uses

func (v Fileset) Empty() bool

Empty tells whether this value is empty, that is, it contains no files.

func (Fileset) Equal Uses

func (v Fileset) Equal(w Fileset) bool

Equal reports whether v is equal to w.

func (Fileset) Files Uses

func (v Fileset) Files() []File

Files returns the set of Files that comprise the value.

func (Fileset) Flatten Uses

func (v Fileset) Flatten() []Fileset

Flatten is a convenience function to flatten (shallowly) the value v, returning a list of Values. If the value is a list value, the list is returned; otherwise a unary list of the value v is returned.

func (Fileset) N Uses

func (v Fileset) N() int

N returns the number of files (not necessarily unique) in this value.

func (Fileset) Pullup Uses

func (v Fileset) Pullup() Fileset

Pullup merges this value (tree) into a single toplevel fileset.

func (Fileset) Short Uses

func (v Fileset) Short() string

Short returns a short, human-readable string representing the value. Its intended use is for pretty-printed output. In particular, hashes are abbreviated, and lists display only the first member, followed by ellipsis. For example, a list of values is printed as:

list<val<sample.fastq.gz=f2c59c40>, ...50MB>

func (Fileset) Size Uses

func (v Fileset) Size() int64

Size returns the total size of this value.

func (Fileset) String Uses

func (v Fileset) String() string

String returns a full, human-readable string representing the value v. Unlike Short, string is fully descriptive: it contains the full digest and lists are complete. For example:

list<val<sample.fastq.gz=sha256:f2c59c40a1d71c0c2af12d38a2276d9df49073c08360d72320847efebc820160>,
  val<sample2.fastq.gz=sha256:59eb82c49448e349486b29540ad71f4ddd7f53e5a204d50997f054d05c939adb>>

func (Fileset) Subst Uses

func (v Fileset) Subst(sub map[digest.Digest]File) (out Fileset, resolved bool)

Subst substitutes the files in the fileset using the provided mapping of File object digests to Files: any unresolved file f in the fileset tree is substituted by sub[f.Digest()]. Subst also returns whether the fileset is fully resolved after substitution.

func (Fileset) WriteDigest Uses

func (v Fileset) WriteDigest(w io.Writer)

WriteDigest writes the digestible material for v to w. The io.Writer is assumed to be produced by a Digester, and hence infallible. Errors are not checked.

type Gauges Uses

type Gauges map[string]float64

Gauges stores a set of named gauges.

func (Gauges) Snapshot Uses

func (g Gauges) Snapshot() Gauges

Snapshot returns a snapshot of the gauge values g.

type Profile Uses

type Profile map[string]struct {
    Max, Mean, Var float64
    N              int64
    First, Last    time.Time
}

Profile stores keyed statistical summaries (max, mean, variance, and N, along with first/last observation times).

func (Profile) String Uses

func (p Profile) String() string

type Repository Uses

type Repository interface {
    // Collect removes from this repository any objects not in the
    // Liveset.
    Collect(context.Context, liveset.Liveset) error

    // CollectWithThreshold removes from this repository any objects that are not
    // in the live set and that are either in the dead set or were created no more
    // recently than the threshold time.
    CollectWithThreshold(ctx context.Context, live liveset.Liveset, dead liveset.Liveset, threshold time.Time, dryrun bool) error

    // Stat returns the File metadata for the blob with the given digest.
    // It returns errors.NotExist if the blob does not exist in this
    // repository.
    Stat(context.Context, digest.Digest) (File, error)

    // Get streams the blob named by the given Digest.
    // If it does not exist in this repository, an error with code
    // errors.NotFound will be returned.
    Get(context.Context, digest.Digest) (io.ReadCloser, error)

    // Put streams a blob to the repository and returns its
    // digest when completed.
    Put(context.Context, io.Reader) (digest.Digest, error)

    // WriteTo writes a blob identified by a Digest directly to a
    // foreign repository named by a URL. If the repository is
    // unable to write directly to the foreign repository, an error
    // with flag errors.NotSupported is returned.
    WriteTo(context.Context, digest.Digest, *url.URL) error

    // ReadFrom reads a blob identified by a Digest directly from a
    // foreign repository named by a URL. If the repository is
    // unable to read directly from the foreign repository, an error
    // with flag errors.NotSupported is returned.
    ReadFrom(context.Context, digest.Digest, *url.URL) error

    // URL returns the URL of this repository, or nil if it does not
    // have one. The returned URL may be used for direct transfers via
    // WriteTo or ReadFrom.
    URL() *url.URL
}

Repository defines an interface used for servicing blobs of data that are named-by-hash.

type Requirements Uses

type Requirements struct {
    // Min is the smallest amount of resources that must be allocated
    // to satisfy the requirements.
    Min Resources
    // Width is the width of the requirements. A width of zero indicates
    // a "narrow" job: minimum describes the exact resources needed.
    // Widths greater than zero are "wide" requests: they require some
    // multiple of the minimum requirement. The distinction between a
    // width of zero and a width of one is a little subtle: width
    // represents the smallest acceptable width, and thus a width of 1
    // can be taken as a hint to allocate a higher multiple of the
    // minimum requirements, whereas a width of 0 represents a precise
    // requirement: allocating any more is likely to be wasteful.
    Width int
}

Requirements stores resource requirements, comprising the minimum amount of acceptable resources and a width.

func (*Requirements) Add Uses

func (r *Requirements) Add(s Requirements)

Add adds the provided requirements s to the requirements r: r's minimum requirements are set to the larger of the two, and the two widths are added.

func (*Requirements) AddParallel Uses

func (r *Requirements) AddParallel(s Resources)

AddParallel adds the provided resources s to the requirements, and also increases the requirement's width by one.

func (*Requirements) AddSerial Uses

func (r *Requirements) AddSerial(s Resources)

AddSerial adds the provided resources s to the requirements.
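The Add and Width semantics documented above can be sketched with a scalar toy (real Resources are multi-dimensional maps; this models only the documented Add rule and the narrow/wide distinction, and is not reflow's implementation):

```go
package main

import "fmt"

// requirements is a scalar toy sketch of the documented Requirements
// semantics.
type requirements struct {
	min   float64 // smallest acceptable allocation, e.g. GiB of memory
	width int     // 0 = narrow (exact), >0 = wide (multiples acceptable)
}

// add combines s into r per the documented Add: r's minimum becomes
// the larger of the two minimums, and the two widths are added.
func (r *requirements) add(s requirements) {
	if s.min > r.min {
		r.min = s.min
	}
	r.width += s.width
}

// wide reports whether the requirements represent a wide request.
func (r requirements) wide() bool { return r.width > 0 }

func main() {
	// A narrow 4-unit job combined with a wide 2-unit job of width 3:
	// the minimum stays at the larger (4), the widths sum to 3.
	r := requirements{min: 4}
	r.add(requirements{min: 2, width: 3})
	fmt.Println(r.min, r.width, r.wide()) // 4 3 true
}
```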

func (Requirements) Equal Uses

func (r Requirements) Equal(s Requirements) bool

Equal reports whether r and s represent the same requirements.

func (*Requirements) Max Uses

func (r *Requirements) Max() Resources

Max is the maximum amount of resources represented by this resource request.

func (Requirements) String Uses

func (r Requirements) String() string

String renders a human-readable representation of r.

func (*Requirements) Wide Uses

func (r *Requirements) Wide() bool

Wide returns whether these requirements represent a wide resource request.

type Resources Uses

type Resources map[string]float64

Resources describes a set of labeled resources. Each resource is described by a string label and assigned a value. The zero value of Resources represents resources with zero for every label.
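A toy reimplementation over map[string]float64 illustrates the map-based semantics, following the Add and Available behaviors documented for this type (a sketch, not reflow's implementation; absent labels are treated as zero, matching the zero value described above):

```go
package main

import "fmt"

// resources is a toy version of the documented Resources type.
type resources map[string]float64

// add sets r to the sum x[key]+y[key] for all keys and returns r,
// per the documented Add.
func (r resources) add(x, y resources) resources {
	for k := range r {
		delete(r, k)
	}
	for k, v := range x {
		r[k] = v
	}
	for k, v := range y {
		r[k] += v
	}
	return r
}

// available reports whether s can be satisfied out of r: every label
// demanded by s must be present in r in at least that quantity.
func (r resources) available(s resources) bool {
	for k, want := range s {
		if r[k] < want {
			return false
		}
	}
	return true
}

func main() {
	r := resources{"mem": 16, "cpu": 4}
	fmt.Println(r.available(resources{"mem": 8, "cpu": 2})) // true
	fmt.Println(r.available(resources{"mem": 8, "cpu": 8})) // false: not enough cpu
}
```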

func (*Resources) Add Uses

func (r *Resources) Add(x, y Resources) *Resources

Add sets r to the sum x[key]+y[key] for all keys and returns r.

func (Resources) Available Uses

func (r Resources) Available(s Resources) bool

Available reports whether the resources s are available in r.

func (Resources) Equal Uses

func (r Resources) Equal(s Resources) bool

Equal tells whether the resources r and s are equal in all dimensions of both r and s.

func (*Resources) Max Uses

func (r *Resources) Max(x, y Resources) *Resources

Max sets r to the maximum max(x[key], y[key]) for all keys and returns r.

func (*Resources) Min Uses

func (r *Resources) Min(x, y Resources) *Resources

Min sets r to the minimum min(x[key], y[key]) for all keys and returns r.

func (*Resources) Scale Uses

func (r *Resources) Scale(s Resources, factor float64) *Resources

Scale sets r to the scaled resources s[key]*factor for all keys and returns r.

func (Resources) ScaledDistance Uses

func (r Resources) ScaledDistance(u Resources) float64

ScaledDistance returns the distance between two resources computed as a sum of the differences in memory, cpu and disk with some predefined scaling.

func (*Resources) Set Uses

func (r *Resources) Set(s Resources) *Resources

Set sets r[key]=s[key] for all keys and returns r.

func (Resources) String Uses

func (r Resources) String() string

String renders a Resources. All nonzero-valued labels are included; mem, cpu, and disk are always included regardless of their value.

func (*Resources) Sub Uses

func (r *Resources) Sub(x, y Resources) *Resources

Sub sets r to the difference x[key]-y[key] for all keys and returns r.

type Result Uses

type Result struct {
    // Fileset is the fileset produced by an exec.
    Fileset Fileset `json:",omitempty"`

    // Err is error produced by an exec.
    Err *errors.Error `json:",omitempty"`
}

Result is the result of an exec.

func (Result) Equal Uses

func (r Result) Equal(s Result) bool

Equal tells whether r is equal to s.

func (Result) Short Uses

func (r Result) Short() string

Short renders an abbreviated human-readable string of this result.

func (Result) String Uses

func (r Result) String() string

String renders a human-readable string of this result.

type Transferer Uses

type Transferer interface {
    // Transfer transfers a set of files from the src to the dst
    // repository. A transfer manager may apply policies (e.g., rate
    // limits and concurrency limits) to these transfers.
    Transfer(ctx context.Context, dst, src Repository, files ...File) error

    NeedTransfer(ctx context.Context, dst Repository, files ...File) ([]File, error)
}

Transferer defines an interface used for management of transfers between multiple repositories.

Directories

Path                  Synopsis
assoc                 Package assoc defines data types for associative maps used within Reflow.
assoc/dydbassoc       Package dydbassoc implements an assoc.Assoc based on AWS's DynamoDB.
assoc/test
batch                 Package batch implements support for running batches of reflow (stateful) evaluations.
blob                  Package blob implements a set of generic interfaces used to implement blob storage implementations such as S3, GCS, and local file system implementations.
blob/s3blob           Package s3blob implements the blob interfaces for S3.
blob/testblob         Package testblob implements a blobstore appropriate for testing.
bootstrap
bootstrap/client      Package client implements a remote client for the reflow bootstrap.
bootstrap/common
ec2authenticator      Package ec2authenticator implements Docker repository authentication for ECR using an AWS SDK session and a root.
ec2cluster            Package ec2cluster implements support for maintaining elastic clusters of Reflow instances on EC2.
ec2cluster/instances
ec2cluster/test
ec2cluster/volume     Package volume implements support for maintaining (EBS) volumes on an EC2 instance by watching the disk usage of the underlying disk device and resizing the EBS volumes whenever necessary based on provided parameters.
errors                Package errors provides a standard error definition for use in Reflow.
flow
infra
internal/ecrauth      Package ecrauth provides an interface and utilities for authenticating AWS EC2 ECR Docker repositories.
internal/execimage
internal/fs
internal/iputil
internal/s3walker
internal/scanner      Package scanner provides a scanner and tokenizer for UTF-8-encoded text.
internal/walker
lang                  Package lang implements the reflow language.
liveset
liveset/bloomlive
local
local/pooltest        Package pooltest tests pools.
local/testutil        Package testutil provides utilities for testing code that involves pools.
log                   Package log implements leveling and teeing on top of Go's standard logs package.
pool                  Package pool implements resource pools for reflow.
pool/client           Package client implements a remoting client for reflow pools.
pool/server           Package server exposes a pool implementation for remote access.
reflowlet
repository            Package repository provides common ways to dial reflow.Repository implementations; it also provides some common utilities for working with repositories.
repository/blobrepo   Package blobrepo implements a generic reflow.Repository on top of a blob.Bucket.
repository/client     Package client implements a repository REST client.
repository/filerepo   Package filerepo implements a filesystem-backed repository.
repository/http
repository/s3
repository/s3/test
repository/server      Package server implements a Repository REST server.
repository/test
rest                  Package rest provides a framework for serving and accessing hierarchical resource-based APIs.
runner
sched                 Package sched implements task scheduling for Reflow.
syntax                Package syntax implements the Reflow language.
taskdb                Package taskdb defines interfaces and data types for storing and querying reflow runs and tasks.
taskdb/dynamodbtask   Package dynamodbtask implements the taskdb.TaskDB interface for the AWS DynamoDB backend.
test/flow             Package flow contains a number of constructors for Flow nodes that are convenient for testing.
test/regress
test/testutil
tool
trace                 Package trace provides a tracing system for Reflow events.
trace/xraytrace
types                 Package types contains data structures and algorithms for dealing with value types in Reflow.
values                Package values defines data structures for representing (runtime) values in Reflow.
wg                    Package wg implements a channel-enabled WaitGroup.

Package reflow imports 19 packages and is imported by 52 packages. Updated 2020-04-19.