package content

v0.11.1
Published: Mar 18, 2021 License: MIT Imports: 21 Imported by: 0

Documentation

Index

Constants

View Source
const (
	// DefaultBlobMediaType specifies the default blob media type
	DefaultBlobMediaType = ocispec.MediaTypeImageLayer
	// DefaultBlobDirMediaType specifies the default blob directory media type
	DefaultBlobDirMediaType = ocispec.MediaTypeImageLayerGzip
)
View Source
const (
	// AnnotationDigest is the annotation key for the digest of the uncompressed content
	AnnotationDigest = "io.deis.oras.content.digest"
	// AnnotationUnpack is the annotation key for indication of unpacking
	AnnotationUnpack = "io.deis.oras.content.unpack"
)
View Source
const (
	// what you get for a blank digest
	BlankHash = digest.Digest("sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855")
)
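BlankHash is simply the SHA-256 digest of zero input bytes. A quick stdlib check (the `emptyDigest` helper is illustrative, not part of this package):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// emptyDigest hashes zero bytes and renders the result the way the
// digest package does: "sha256:" followed by the hex sum.
func emptyDigest() string {
	sum := sha256.Sum256(nil)
	return fmt.Sprintf("sha256:%x", sum)
}

func main() {
	// Prints the same value as BlankHash.
	fmt.Println(emptyDigest())
}
```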
View Source
const (
	// DefaultBlocksize is the default size of each slice of bytes read in each write through
	// in gunzip and untar. It is simply the same size as used by io.Copy().
	DefaultBlocksize = 32768
)
View Source
const (
	// OCIImageIndexFile is the file name of the index from the OCI Image Layout Specification
	// Reference: https://github.com/opencontainers/image-spec/blob/master/image-layout.md#indexjson-file
	OCIImageIndexFile = "index.json"
)
View Source
const (
	// TempFilePattern specifies the pattern to create temporary files
	TempFilePattern = "oras"
)

Variables

View Source
var (
	ErrNotFound           = errors.New("not_found")
	ErrNoName             = errors.New("no_name")
	ErrUnsupportedSize    = errors.New("unsupported_size")
	ErrUnsupportedVersion = errors.New("unsupported_version")
)

Common errors

View Source
var (
	ErrPathTraversalDisallowed = errors.New("path_traversal_disallowed")
	ErrOverwriteDisallowed     = errors.New("overwrite_disallowed")
)

FileStore errors

Functions

func NewGunzipWriter added in v0.9.0

func NewGunzipWriter(writer content.Writer, opts ...WriterOpt) content.Writer

NewGunzipWriter wraps a writer with a gunzip, so that the stream is gunzipped

By default, it calculates the hash when writing. If the option `skipHash` is true, it will skip doing the hash. Skipping the hash is intended to be used only if you are confident about the validity of the data being passed to the writer, and wish to save on the hashing time.

func NewIoContentWriter added in v0.9.0

func NewIoContentWriter(writer io.Writer, opts ...WriterOpt) content.Writer

NewIoContentWriter creates a new IoContentWriter.

By default, it calculates the hash when writing. If the option `skipHash` is true, it will skip doing the hash. Skipping the hash is intended to be used only if you are confident about the validity of the data being passed to the writer, and wish to save on the hashing time.

func NewPassthroughMultiWriter added in v0.10.0

func NewPassthroughMultiWriter(writers func(name string) (content.Writer, error), f func(r io.Reader, getwriter func(name string) io.Writer, done chan<- error), opts ...WriterOpt) content.Writer

func NewPassthroughWriter added in v0.9.0

func NewPassthroughWriter(writer content.Writer, f func(r io.Reader, w io.Writer, done chan<- error), opts ...WriterOpt) content.Writer

NewPassthroughWriter creates a pass-through writer that allows for processing the content via an arbitrary function. The function should do whatever processing it wants, reading from the Reader to the Writer. When done, it must indicate completion by sending an error or nil on the done channel.

func NewUntarWriter added in v0.9.0

func NewUntarWriter(writer content.Writer, opts ...WriterOpt) content.Writer

NewUntarWriter wraps a writer with an untar, so that the stream is untarred

By default, it calculates the hash when writing. If the option `skipHash` is true, it will skip doing the hash. Skipping the hash is intended to be used only if you are confident about the validity of the data being passed to the writer, and wish to save on the hashing time.

func NewUntarWriterByName added in v0.10.0

func NewUntarWriterByName(writers func(string) (content.Writer, error), opts ...WriterOpt) content.Writer

NewUntarWriterByName wraps multiple writers with an untar, so that the stream is untarred and passed to the appropriate writer, based on the filename. If a filename is not found, it is up to the passed func to determine how to process it.

func ResolveName

func ResolveName(desc ocispec.Descriptor) (string, bool)

ResolveName resolves the name from the descriptor

Types

type DecompressStore added in v0.9.0

type DecompressStore struct {
	// contains filtered or unexported fields
}

DecompressStore wraps another store to decompress content and extract it from tar, if needed. By default, a FileStore will simply take each artifact and write it to a file, just as a MemoryStore will write it into memory. If the artifact is gzipped or tarred, you might want to store the actual object inside the tar or gzip. Wrap your Store with DecompressStore, and it will check the media type and, if relevant, gunzip and/or untar.

For example:

fileStore := NewFileStore(rootPath)
decompressStore := store.NewDecompressStore(fileStore, WithBlocksize(blocksize))

The above example works if there is no tar, i.e. each artifact is just a single file, perhaps gzipped, or if there is only one file in each tar archive. In other words, it works when each content.Writer has only one target output stream. However, if you have multiple files in each tar archive, where each archive is an artifact layer, then you need a way to select how to handle each file in the archive. In other words, when each content.Writer has more than one target output stream. In that case, use the following example:

multiStore := NewMultiStore(rootPath) // some store that can handle different filenames
decompressStore := store.NewDecompressStore(multiStore, WithBlocksize(blocksize), WithMultiWriterIngester())

func NewDecompressStore added in v0.9.0

func NewDecompressStore(ingester ctrcontent.Ingester, opts ...WriterOpt) DecompressStore

func (DecompressStore) Writer added in v0.9.0

Writer gets a writer

type FileStore

type FileStore struct {
	DisableOverwrite          bool
	AllowPathTraversalOnWrite bool

	// Reproducible enables stripping times from added files
	Reproducible bool
	// contains filtered or unexported fields
}

FileStore provides content from the file system

func NewFileStore

func NewFileStore(rootPath string, opts ...WriterOpt) *FileStore

NewFileStore creates a new file store

func (*FileStore) Add

func (s *FileStore) Add(name, mediaType, path string) (ocispec.Descriptor, error)

Add adds a file reference

func (*FileStore) Close added in v0.4.0

func (s *FileStore) Close() error

Close frees up resources used by the file store

func (*FileStore) MapPath

func (s *FileStore) MapPath(name, path string) string

MapPath maps name to path

func (*FileStore) ReaderAt

func (s *FileStore) ReaderAt(ctx context.Context, desc ocispec.Descriptor) (content.ReaderAt, error)

ReaderAt provides contents

func (*FileStore) ResolvePath

func (s *FileStore) ResolvePath(name string) string

ResolvePath returns the path by name

func (*FileStore) Writer

func (s *FileStore) Writer(ctx context.Context, opts ...content.WriterOpt) (content.Writer, error)

Writer begins or resumes the active writer identified by desc

type IoContentWriter added in v0.9.0

type IoContentWriter struct {
	// contains filtered or unexported fields
}

IoContentWriter is a writer that wraps an io.Writer, so that results can be streamed to an open io.Writer. For example, it can be used to pull a layer and write it to a file, or to a device.

func (*IoContentWriter) Close added in v0.9.0

func (w *IoContentWriter) Close() error

func (*IoContentWriter) Commit added in v0.9.0

func (w *IoContentWriter) Commit(ctx context.Context, size int64, expected digest.Digest, opts ...content.Opt) error

Commit commits the blob (but no roll-back is guaranteed on an error). size and expected can be zero-value when unknown. Commit always closes the writer, even on error. ErrAlreadyExists aborts the writer.

func (*IoContentWriter) Digest added in v0.9.0

func (w *IoContentWriter) Digest() digest.Digest

Digest may return an empty digest or panic until committed.

func (*IoContentWriter) Status added in v0.9.0

func (w *IoContentWriter) Status() (content.Status, error)

Status returns the current state of the write

func (*IoContentWriter) Truncate added in v0.9.0

func (w *IoContentWriter) Truncate(size int64) error

Truncate updates the size of the target blob

func (*IoContentWriter) Write added in v0.9.0

func (w *IoContentWriter) Write(p []byte) (n int, err error)

type Memorystore

type Memorystore struct {
	// contains filtered or unexported fields
}

Memorystore provides content from memory

func NewMemoryStore

func NewMemoryStore() *Memorystore

NewMemoryStore creates a new memory store

func (*Memorystore) Add

func (s *Memorystore) Add(name, mediaType string, content []byte) ocispec.Descriptor

Add adds content

func (*Memorystore) Get

Get finds the content from the store

func (*Memorystore) GetByName

func (s *Memorystore) GetByName(name string) (ocispec.Descriptor, []byte, bool)

GetByName finds the content from the store by name (i.e. AnnotationTitle)

func (*Memorystore) ReaderAt

func (s *Memorystore) ReaderAt(ctx context.Context, desc ocispec.Descriptor) (content.ReaderAt, error)

ReaderAt provides contents

func (*Memorystore) Set

func (s *Memorystore) Set(desc ocispec.Descriptor, content []byte)

Set adds the content to the store

func (*Memorystore) Writer

func (s *Memorystore) Writer(ctx context.Context, opts ...content.WriterOpt) (content.Writer, error)

Writer begins or resumes the active writer identified by desc

type MultiReader added in v0.9.0

type MultiReader struct {
	// contains filtered or unexported fields
}

MultiReader reads content from multiple stores. It finds content by asking each underlying store for it, which each store does based on the hash.

Example:

fileStore := NewFileStore(rootPath)
memoryStore := NewMemoryStore()
// load up content in fileStore and memoryStore
multiStore := MultiReader([]content.Provider{fileStore, memoryStore})

You can now use multiStore anywhere that a content.Provider is accepted

func (*MultiReader) AddStore added in v0.9.0

func (m *MultiReader) AddStore(store ...content.Provider)

AddStore adds one or more stores to read from

func (MultiReader) ReaderAt added in v0.9.0

ReaderAt gets a reader

type MultiWriterIngester added in v0.10.0

type MultiWriterIngester interface {
	ctrcontent.Ingester
	Writers(ctx context.Context, opts ...ctrcontent.WriterOpt) (func(string) (ctrcontent.Writer, error), error)
}

MultiWriterIngester is an ingester that can provide a single writer or multiple writers for a single descriptor. It is useful when the target of a descriptor can have multiple items within it, e.g. a layer that is a tar file with multiple files, each of which should go to a different stream, some of which should not be handled at all.

type OCIStore added in v0.6.0

type OCIStore struct {
	content.Store
	// contains filtered or unexported fields
}

OCIStore provides content from the file system with the OCI-Image layout. Reference: https://github.com/opencontainers/image-spec/blob/master/image-layout.md

func NewOCIStore added in v0.6.0

func NewOCIStore(rootPath string) (*OCIStore, error)

NewOCIStore creates a new OCI store

func (*OCIStore) AddReference added in v0.6.0

func (s *OCIStore) AddReference(name string, desc ocispec.Descriptor)

AddReference adds or updates a reference in the index.

func (*OCIStore) DeleteReference added in v0.6.0

func (s *OCIStore) DeleteReference(name string)

DeleteReference deletes a reference from the index.

func (*OCIStore) ListReferences added in v0.6.0

func (s *OCIStore) ListReferences() map[string]ocispec.Descriptor

ListReferences lists all references in the index.

func (*OCIStore) LoadIndex added in v0.6.0

func (s *OCIStore) LoadIndex() error

LoadIndex reads the index.json from the file system

func (*OCIStore) SaveIndex added in v0.6.0

func (s *OCIStore) SaveIndex() error

SaveIndex writes the index.json to the file system

type PassthroughMultiWriter added in v0.10.0

type PassthroughMultiWriter struct {
	// contains filtered or unexported fields
}

PassthroughMultiWriter is a single writer that passes through to multiple writers, allowing the passthrough function to select which writer to use.

func (*PassthroughMultiWriter) Close added in v0.10.0

func (pmw *PassthroughMultiWriter) Close() error

func (*PassthroughMultiWriter) Commit added in v0.10.0

func (pmw *PassthroughMultiWriter) Commit(ctx context.Context, size int64, expected digest.Digest, opts ...content.Opt) error

Commit commits the blob (but no roll-back is guaranteed on an error). size and expected can be zero-value when unknown. Commit always closes the writer, even on error. ErrAlreadyExists aborts the writer.

func (*PassthroughMultiWriter) Digest added in v0.10.0

func (pmw *PassthroughMultiWriter) Digest() digest.Digest

Digest may return an empty digest or panic until committed.

func (*PassthroughMultiWriter) Status added in v0.10.0

func (pmw *PassthroughMultiWriter) Status() (content.Status, error)

Status returns the current state of the write

func (*PassthroughMultiWriter) Truncate added in v0.10.0

func (pmw *PassthroughMultiWriter) Truncate(size int64) error

Truncate updates the size of the target blob, but has no meaningful effect on a multiwriter

func (*PassthroughMultiWriter) Write added in v0.10.0

func (pmw *PassthroughMultiWriter) Write(p []byte) (n int, err error)

type PassthroughWriter added in v0.9.0

type PassthroughWriter struct {
	// contains filtered or unexported fields
}

PassthroughWriter takes an input stream and passes it through to an underlying writer, while providing the ability to manipulate the stream before it gets passed through.

func (*PassthroughWriter) Close added in v0.9.0

func (pw *PassthroughWriter) Close() error

func (*PassthroughWriter) Commit added in v0.9.0

func (pw *PassthroughWriter) Commit(ctx context.Context, size int64, expected digest.Digest, opts ...content.Opt) error

Commit commits the blob (but no roll-back is guaranteed on an error). size and expected can be zero-value when unknown. Commit always closes the writer, even on error. ErrAlreadyExists aborts the writer.

func (*PassthroughWriter) Digest added in v0.9.0

func (pw *PassthroughWriter) Digest() digest.Digest

Digest may return an empty digest or panic until committed.

func (*PassthroughWriter) Status added in v0.9.0

func (pw *PassthroughWriter) Status() (content.Status, error)

Status returns the current state of the write

func (*PassthroughWriter) Truncate added in v0.9.0

func (pw *PassthroughWriter) Truncate(size int64) error

Truncate updates the size of the target blob

func (*PassthroughWriter) Write added in v0.9.0

func (pw *PassthroughWriter) Write(p []byte) (n int, err error)

type ProvideIngester added in v0.6.0

type ProvideIngester interface {
	content.Provider
	content.Ingester
}

ProvideIngester is the interface that groups the basic Read and Write methods.

type WriterOpt added in v0.9.0

type WriterOpt func(*WriterOpts) error

func WithBlocksize added in v0.9.0

func WithBlocksize(blocksize int) WriterOpt

WithBlocksize sets the blocksize used by the processor of data. The default is DefaultBlocksize, which is the same as that used by io.Copy. Includes a safety check to ensure the caller doesn't actively set it to <= 0.

func WithErrorOnNoName added in v0.11.1

func WithErrorOnNoName() WriterOpt

WithErrorOnNoName tells the writer to return an error when the descriptor does not have a valid name, instead of passing the data to a nil writer. Some ingesters, when creating a Writer, do not return an error by default in that case.

func WithIgnoreNoName deprecated added in v0.11.0

func WithIgnoreNoName() WriterOpt

WithIgnoreNoName tells the writer not to return an error when the descriptor does not have a valid name, but rather to pass the data to a nil writer. Some ingesters, when creating a Writer, return an error by default in that case.

Deprecated: Use WithErrorOnNoName

func WithInputHash added in v0.9.0

func WithInputHash(hash digest.Digest) WriterOpt

WithInputHash provides the expected input hash to a writer. Writers may suppress their own calculation of a hash on the stream, taking this hash instead. If the Writer processes the data before passing it on to another Writer layer, this is the hash of the *input* stream.

To have a blank hash, use WithInputHash(BlankHash).

func WithMultiWriterIngester added in v0.10.0

func WithMultiWriterIngester() WriterOpt

WithMultiWriterIngester indicates that the passed ingester also implements MultiWriterIngester and should be used as such. If this is set but the ingester does not implement MultiWriterIngester, calling Writer should return an error.

func WithOutputHash added in v0.9.0

func WithOutputHash(hash digest.Digest) WriterOpt

WithOutputHash provides the expected output hash to a writer. Writers may suppress their own calculation of a hash on the stream, taking this hash instead. If the Writer processes the data before passing it on to another Writer layer, this is the hash of the *output* stream.

To have a blank hash, use WithOutputHash(BlankHash).

type WriterOpts added in v0.9.0

type WriterOpts struct {
	InputHash           *digest.Digest
	OutputHash          *digest.Digest
	Blocksize           int
	MultiWriterIngester bool
	IgnoreNoName        bool
}

func DefaultWriterOpts added in v0.9.0

func DefaultWriterOpts() WriterOpts
