rclone: github.com/ncw/rclone/backend/s3

package s3

import "github.com/ncw/rclone/backend/s3"

Package s3 provides an interface to Amazon S3 object storage


Package Files

s3.go v2sign.go

func NewFs

func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error)

NewFs constructs an Fs from the path, bucket:path

type Fs

type Fs struct {
    // contains filtered or unexported fields
}

Fs represents a remote s3 server

func (*Fs) Copy

func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error)

Copy src to this remote using server-side copy operations.

This is stored with the remote path given

It returns the destination Object and a possible error

Will only be called if src.Fs().Name() == f.Name()

If it isn't possible then return fs.ErrorCantCopy

func (*Fs) Features

func (f *Fs) Features() *fs.Features

Features returns the optional features of this Fs

func (*Fs) Hashes

func (f *Fs) Hashes() hash.Set

Hashes returns the supported hash sets.

func (*Fs) List

func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error)

List the objects and directories in dir into entries. The entries can be returned in any order but should be for a complete directory.

dir should be "" to list the root, and should not have trailing slashes.

This should return ErrDirNotFound if the directory isn't found.

func (*Fs) ListR

func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (err error)

ListR lists the objects and directories of the Fs starting from dir recursively, passing entries to callback.

dir should be "" to start from the root, and should not have trailing slashes.

This should return ErrDirNotFound if the directory isn't found.

It should call callback for each tranche of entries read. These need not be returned in any particular order. If callback returns an error then the listing will stop immediately.

Don't implement this unless you have a more efficient way of listing recursively than doing a directory traversal.

func (*Fs) Mkdir

func (f *Fs) Mkdir(ctx context.Context, dir string) error

Mkdir creates the bucket if it doesn't exist

func (*Fs) Name

func (f *Fs) Name() string

Name of the remote (as passed into NewFs)

func (*Fs) NewObject

func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error)

NewObject finds the Object at remote. If it can't be found it returns the error fs.ErrorObjectNotFound.

func (*Fs) Precision

func (f *Fs) Precision() time.Duration

Precision of the remote

func (*Fs) Put

func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error)

Put the Object into the bucket

func (*Fs) PutStream

func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error)

PutStream uploads the contents of in, which may be of indeterminate size, to the remote path with the modTime given.

func (*Fs) Rmdir

func (f *Fs) Rmdir(ctx context.Context, dir string) error

Rmdir deletes the bucket if the fs is at the root

Returns an error if it isn't empty

func (*Fs) Root

func (f *Fs) Root() string

Root of the remote (as passed into NewFs)

func (*Fs) String

func (f *Fs) String() string

String converts this Fs to a string

type Object

type Object struct {
    // contains filtered or unexported fields
}

Object describes an s3 object

func (*Object) Fs

func (o *Object) Fs() fs.Info

Fs returns the parent Fs

func (*Object) GetTier

func (o *Object) GetTier() string

GetTier returns storage class as string

func (*Object) Hash

func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error)

Hash returns the MD5 sum of an object as a lowercase hex string

func (*Object) MimeType

func (o *Object) MimeType(ctx context.Context) string

MimeType of an Object if known, "" otherwise

func (*Object) ModTime

func (o *Object) ModTime(ctx context.Context) time.Time

ModTime returns the modification time of the object

It attempts to read the object's mtime from its metadata and, if that isn't present, falls back to the LastModified time returned in the HTTP headers

func (*Object) Open

func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error)

Open an object for read

func (*Object) Remote

func (o *Object) Remote() string

Remote returns the remote path

func (*Object) Remove

func (o *Object) Remove(ctx context.Context) error

Remove an object

func (*Object) SetModTime

func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error

SetModTime sets the modification time of the local fs object

func (*Object) SetTier

func (o *Object) SetTier(tier string) (err error)

SetTier changes the storage class of the object

func (*Object) Size

func (o *Object) Size() int64

Size returns the size of an object in bytes

func (*Object) Storable

func (o *Object) Storable() bool

Storable returns a boolean indicating if this object is storable

func (*Object) String

func (o *Object) String() string

String returns a string version of the Object

func (*Object) Update

func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error

Update the Object from in with modTime and size

type Options

type Options struct {
    Provider              string        `config:"provider"`
    EnvAuth               bool          `config:"env_auth"`
    AccessKeyID           string        `config:"access_key_id"`
    SecretAccessKey       string        `config:"secret_access_key"`
    Region                string        `config:"region"`
    Endpoint              string        `config:"endpoint"`
    LocationConstraint    string        `config:"location_constraint"`
    ACL                   string        `config:"acl"`
    BucketACL             string        `config:"bucket_acl"`
    ServerSideEncryption  string        `config:"server_side_encryption"`
    SSEKMSKeyID           string        `config:"sse_kms_key_id"`
    StorageClass          string        `config:"storage_class"`
    UploadCutoff          fs.SizeSuffix `config:"upload_cutoff"`
    ChunkSize             fs.SizeSuffix `config:"chunk_size"`
    DisableChecksum       bool          `config:"disable_checksum"`
    SessionToken          string        `config:"session_token"`
    UploadConcurrency     int           `config:"upload_concurrency"`
    ForcePathStyle        bool          `config:"force_path_style"`
    V2Auth                bool          `config:"v2_auth"`
    UseAccelerateEndpoint bool          `config:"use_accelerate_endpoint"`
}

Options defines the configuration for this backend

Package s3 imports 36 packages. Updated 2019-09-19.