rclone: github.com/ncw/rclone/backend/googlecloudstorage

package googlecloudstorage

import "github.com/ncw/rclone/backend/googlecloudstorage"

Package googlecloudstorage provides an interface to Google Cloud Storage

Index

Package Files

googlecloudstorage.go

func NewFs

func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error)

NewFs constructs an Fs from the path, bucket:path

type Fs

type Fs struct {
    // contains filtered or unexported fields
}

Fs represents a remote storage server

func (*Fs) Copy

func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error)

Copy src to this remote using server-side copy operations.

This is stored with the remote path given

It returns the destination Object and a possible error

Will only be called if src.Fs().Name() == f.Name()

If it isn't possible then return fs.ErrorCantCopy

func (*Fs) Features

func (f *Fs) Features() *fs.Features

Features returns the optional features of this Fs

func (*Fs) Hashes

func (f *Fs) Hashes() hash.Set

Hashes returns the supported hash sets.

func (*Fs) List

func (f *Fs) List(dir string) (entries fs.DirEntries, err error)

List the objects and directories in dir into entries. The entries can be returned in any order but should be for a complete directory.

dir should be "" to list the root, and should not have trailing slashes.

This should return ErrDirNotFound if the directory isn't found.

func (*Fs) ListR

func (f *Fs) ListR(dir string, callback fs.ListRCallback) (err error)

ListR lists the objects and directories of the Fs starting from dir recursively into out.

dir should be "" to start from the root, and should not have trailing slashes.

This should return ErrDirNotFound if the directory isn't found.

It should call callback for each tranche of entries read. These need not be returned in any particular order. If callback returns an error then the listing will stop immediately.

Don't implement this unless you have a more efficient way of listing recursively than doing a directory traversal.

func (*Fs) Mkdir

func (f *Fs) Mkdir(dir string) (err error)

Mkdir creates the bucket if it doesn't exist

func (*Fs) Name

func (f *Fs) Name() string

Name of the remote (as passed into NewFs)

func (*Fs) NewObject

func (f *Fs) NewObject(remote string) (fs.Object, error)

NewObject finds the Object at remote. If it can't be found it returns the error fs.ErrorObjectNotFound.

func (*Fs) Precision

func (f *Fs) Precision() time.Duration

Precision returns the precision of the remote's modification times

func (*Fs) Put

func (f *Fs) Put(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error)

Put the object into the bucket

Copy the reader into the new object, which is returned

The new object may have been created if an error is returned

func (*Fs) PutStream

func (f *Fs) PutStream(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error)

PutStream uploads an object of indeterminate size to the remote path with the given modTime

func (*Fs) Rmdir

func (f *Fs) Rmdir(dir string) (err error)

Rmdir deletes the bucket if the fs is at the root

Returns an error if the bucket isn't empty: "Error 409: The bucket you tried to delete was not empty."

func (*Fs) Root

func (f *Fs) Root() string

Root of the remote (as passed into NewFs)

func (*Fs) String

func (f *Fs) String() string

String converts this Fs to a string

type Object

type Object struct {
    // contains filtered or unexported fields
}

Object describes a storage object

Will definitely have info but maybe not meta

func (*Object) Fs

func (o *Object) Fs() fs.Info

Fs returns the parent Fs

func (*Object) Hash

func (o *Object) Hash(t hash.Type) (string, error)

Hash returns the MD5 sum of an object, returning a lowercase hex string

func (*Object) MimeType

func (o *Object) MimeType() string

MimeType of an Object if known, "" otherwise

func (*Object) ModTime

func (o *Object) ModTime() time.Time

ModTime returns the modification time of the object

It attempts to read the object's mtime metadata and, if that isn't present, falls back to the LastModified returned in the HTTP headers

func (*Object) Open

func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error)

Open an object for read

func (*Object) Remote

func (o *Object) Remote() string

Remote returns the remote path

func (*Object) Remove

func (o *Object) Remove() (err error)

Remove an object

func (*Object) SetModTime

func (o *Object) SetModTime(modTime time.Time) (err error)

SetModTime sets the modification time of the object

func (*Object) Size

func (o *Object) Size() int64

Size returns the size of an object in bytes

func (*Object) Storable

func (o *Object) Storable() bool

Storable reports whether this object is storable

func (*Object) String

func (o *Object) String() string

String returns a string version of the Object

func (*Object) Update

func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error

Update the object with the contents of the io.Reader, modTime and size

The new object may have been created if an error is returned

type Options

type Options struct {
    ProjectNumber             string `config:"project_number"`
    ServiceAccountFile        string `config:"service_account_file"`
    ServiceAccountCredentials string `config:"service_account_credentials"`
    ObjectACL                 string `config:"object_acl"`
    BucketACL                 string `config:"bucket_acl"`
    BucketPolicyOnly          bool   `config:"bucket_policy_only"`
    Location                  string `config:"location"`
    StorageClass              string `config:"storage_class"`
}

Options defines the configuration for this backend

Package googlecloudstorage imports 30 packages and is imported by 1 package. Updated 2019-06-18.