package awss3

v0.0.0-...-f448fd0
Published: Dec 1, 2021 License: Apache-2.0 Imports: 15 Imported by: 0

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type Backend

type Backend struct {
	BucketURL string
	Session   *session.Session
	Client    *s3.S3
	// contains filtered or unexported fields
}

func New

func New(connectionString string) (*Backend, error)

func (*Backend) Delete

func (b *Backend) Delete(repoKey, path string) error

func (*Backend) Finalise

func (b *Backend) Finalise(repoKey string, cacheID int, parts []s.CachePart) (string, error)

func (*Backend) GenerateArchiveURL

func (b *Backend) GenerateArchiveURL(scheme, host, repoKey, path string) (string, error)

func (*Backend) GetFilePath

func (b *Backend) GetFilePath(key string) (string, error)

func (*Backend) Setup

func (b *Backend) Setup() error

func (*Backend) Type

func (b *Backend) Type() string

func (*Backend) Write

func (b *Backend) Write(repoKey string, cacheID int, r io.Reader, start, end int, size int64) (string, int64, error)

S3 provides UploadPartCopy, which lets a multipart upload build its parts from existing objects. This means we can upload chunks as UUID-named objects and store each object's name in a database together with its start and end offsets, so that when finalising we know the order in which to concatenate the files.

The reason we don't just use a regular multipart upload directly is that chunks are uploaded in parallel, so a later chunk may arrive before an earlier one. Multipart upload parts must be numbered 1-10000 and are assembled in sorted order; because our data does not arrive in order, we can't reliably assign part numbers as chunks come in.
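
A minimal sketch of this concatenation step, using the AWS SDK for Go v1 (the same github.com/aws/aws-sdk-go packages this backend is built on). The chunkPart struct, the finaliseSketch function, and the example bucket/key names are assumptions made for illustration, not the package's actual implementation:

	package main

	import (
		"fmt"
		"sort"

		"github.com/aws/aws-sdk-go/aws"
		"github.com/aws/aws-sdk-go/aws/session"
		"github.com/aws/aws-sdk-go/service/s3"
	)

	// chunkPart is a hypothetical record loaded from the database: the UUID key
	// the chunk was written under, plus the byte range it covers.
	type chunkPart struct {
		Key   string
		Start int
		End   int
	}

	// finaliseSketch assembles the final archive object from previously uploaded
	// chunk objects using UploadPartCopy, copying each chunk in start-offset order.
	func finaliseSketch(svc *s3.S3, bucket, destKey string, parts []chunkPart) error {
		// Order chunks by where they belong in the file, not by arrival time.
		sort.Slice(parts, func(i, j int) bool { return parts[i].Start < parts[j].Start })

		mpu, err := svc.CreateMultipartUpload(&s3.CreateMultipartUploadInput{
			Bucket: aws.String(bucket),
			Key:    aws.String(destKey),
		})
		if err != nil {
			return err
		}

		completed := make([]*s3.CompletedPart, 0, len(parts))
		for i, p := range parts {
			partNumber := int64(i + 1) // 1-10000; known now because we sorted by offset
			out, err := svc.UploadPartCopy(&s3.UploadPartCopyInput{
				Bucket:     aws.String(bucket),
				Key:        aws.String(destKey),
				UploadId:   mpu.UploadId,
				PartNumber: aws.Int64(partNumber),
				// UUID keys contain no characters that need URL-encoding.
				CopySource: aws.String(bucket + "/" + p.Key),
			})
			if err != nil {
				return err
			}
			completed = append(completed, &s3.CompletedPart{
				ETag:       out.CopyPartResult.ETag,
				PartNumber: aws.Int64(partNumber),
			})
		}

		_, err = svc.CompleteMultipartUpload(&s3.CompleteMultipartUploadInput{
			Bucket:          aws.String(bucket),
			Key:             aws.String(destKey),
			UploadId:        mpu.UploadId,
			MultipartUpload: &s3.CompletedMultipartUpload{Parts: completed},
		})
		return err
	}

	func main() {
		sess := session.Must(session.NewSession())
		// In the real backend the part list comes from the database; hard-coded here.
		parts := []chunkPart{{Key: "chunk-1f2e3d", Start: 0, End: 5 * 1024 * 1024}}
		if err := finaliseSketch(s3.New(sess), "example-bucket", "cache/archive.tar", parts); err != nil {
			fmt.Println("finalise failed:", err)
		}
	}

Note that the part numbers only become known at finalise time, after sorting by start offset, which is exactly why they can't be assigned while chunks are still arriving out of order.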

Write uploads a part of a file to S3.
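
A minimal sketch of what that chunk upload could look like with the SDK's s3manager uploader (used because the incoming data is an io.Reader rather than a ReadSeeker). The writeChunkSketch function, the github.com/google/uuid dependency, the key layout, and the offset bookkeeping are assumptions for illustration, not the package's actual implementation:

	package main

	import (
		"fmt"
		"io"
		"strings"

		"github.com/aws/aws-sdk-go/aws"
		"github.com/aws/aws-sdk-go/aws/session"
		"github.com/aws/aws-sdk-go/service/s3/s3manager"
		"github.com/google/uuid"
	)

	// writeChunkSketch uploads one chunk of a cache entry as its own UUID-named
	// object. The caller records the returned key together with the chunk's
	// start/end offsets so the finalise step can copy the chunks back together.
	func writeChunkSketch(uploader *s3manager.Uploader, bucket, repoKey string, cacheID int, r io.Reader) (string, error) {
		// Each chunk gets a unique key; the final archive is only assembled later.
		key := fmt.Sprintf("%s/%d/%s", repoKey, cacheID, uuid.NewString())

		_, err := uploader.Upload(&s3manager.UploadInput{
			Bucket: aws.String(bucket),
			Key:    aws.String(key),
			Body:   r, // an io.Reader is fine; the uploader buffers it into parts
		})
		if err != nil {
			return "", err
		}
		return key, nil
	}

	func main() {
		sess := session.Must(session.NewSession())
		uploader := s3manager.NewUploader(sess)

		key, err := writeChunkSketch(uploader, "example-bucket", "owner/repo", 42,
			strings.NewReader("chunk bytes received from the client"))
		if err != nil {
			fmt.Println("write failed:", err)
			return
		}
		// Store key with the chunk's start and end offsets in the database here.
		fmt.Println("uploaded chunk as", key)
	}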
