
package blob

import "gocloud.dev/blob"

Package blob provides an easy and portable way to interact with blobs within a storage location, hereafter called a "bucket".

It supports operations like reading and writing blobs (using standard interfaces from the io package), deleting blobs, and listing blobs in a bucket.

Subpackages contain distinct implementations of blob for various providers, including Cloud and on-prem solutions. For example, "fileblob" supports blobs backed by a filesystem. Your application should import one of these provider-specific subpackages and use its exported function(s) to create a *Bucket; do not use the NewBucket function in this package. For example:

bucket, err := fileblob.OpenBucket("path/to/dir", nil)
if err != nil {
    return fmt.Errorf("could not open bucket: %v", err)
}
buf, err := bucket.ReadAll(context.Background(), "myfile.txt")
...

Then, write your application code using the *Bucket type, and you can easily reconfigure your initialization code to choose a different provider. You can develop your application locally using fileblob, or deploy it to multiple Cloud providers. You may find http://github.com/google/wire useful for managing your initialization code.

Alternatively, you can construct a *Bucket using blob.Open by providing a URL that's supported by a blob subpackage that you have linked into your application.

Code:

// Connect to a bucket when your program starts up.
// This example uses the file-based implementation in fileblob, and creates
// a temporary directory to use as the root directory.
dir, cleanup := newTempDir()
defer cleanup()
bucket, err := fileblob.OpenBucket(dir, nil)
if err != nil {
    log.Fatal(err)
}

// We now have a *blob.Bucket! We can write our application using the
// *blob.Bucket type, and have the freedom to change the initialization code
// above to choose a different provider later.

// In this example, we'll write a blob and then read it.
ctx := context.Background()
if err := bucket.WriteAll(ctx, "foo.txt", []byte("Go Cloud"), nil); err != nil {
    log.Fatal(err)
}
b, err := bucket.ReadAll(ctx, "foo.txt")
if err != nil {
    log.Fatal(err)
}
fmt.Println(string(b))

Output:

Go Cloud

Package Files

blob.go

Constants

const DefaultSignedURLExpiry = 1 * time.Hour

DefaultSignedURLExpiry is the default duration for SignedURLOptions.Expiry.

func ErrorAs

func ErrorAs(err error, i interface{}) bool

ErrorAs converts i to provider-specific types. See Bucket.As for more details.

func IsNotExist

func IsNotExist(err error) bool

IsNotExist returns true iff err indicates that the referenced blob does not exist.

func IsNotImplemented

func IsNotImplemented(err error) bool

IsNotImplemented returns true iff err indicates that the provider does not support the given operation.

func Register

func Register(scheme string, fn FromURLFunc)

Register is for use by provider implementations. It allows providers to register an instantiation function for URLs with the given scheme. It is expected to be called from the provider implementation's package init function.

fn will be called from Open, with a bucket name and options parsed from the URL. All option keys will be lowercased.

Register panics if a provider has already registered for scheme.

type Attributes

type Attributes struct {
    // ContentType is the MIME type of the blob. It will not be empty.
    ContentType string
    // Metadata holds key/value pairs associated with the blob.
    // Keys are guaranteed to be in lowercase, even if the backend provider
    // has case-sensitive keys (although note that Metadata written via
    // this package will always be lowercased). If there are duplicate
    // case-insensitive keys (e.g., "foo" and "FOO"), only one value
    // will be kept, and it is undefined which one.
    Metadata map[string]string
    // ModTime is the time the blob was last modified.
    ModTime time.Time
    // Size is the size of the blob's content in bytes.
    Size int64
    // MD5 is an MD5 hash of the blob contents or nil if not available.
    MD5 []byte
    // contains filtered or unexported fields
}

Attributes contains attributes about a blob.

func (*Attributes) As

func (a *Attributes) As(i interface{}) bool

As converts i to provider-specific types. See Bucket.As for more details.

type Bucket

type Bucket struct {
    // contains filtered or unexported fields
}

Bucket provides an easy and portable way to interact with blobs within a "bucket", including read, write, and list operations. To create a Bucket, use constructors found in provider-specific subpackages.

func NewBucket

func NewBucket(b driver.Bucket) *Bucket

NewBucket creates a new *Bucket based on a specific driver implementation. Most end users should use subpackages to construct a *Bucket instead of this function; see the package documentation for details. It is intended for use by provider implementations.

func Open

func Open(ctx context.Context, urlstr string) (*Bucket, error)

Open creates a *Bucket from a URL. See the package documentation in provider-specific subpackages for more details on supported scheme(s) and URL parameter(s).

Code:

// Connect to a bucket using a URL.
// This example uses the file-based implementation, which registers for
// the "file" scheme.
dir, cleanup := newTempDir()
defer cleanup()

ctx := context.Background()
if _, err := blob.Open(ctx, "file:///nonexistentpath"); err == nil {
    log.Fatal("Expected an error opening nonexistent path")
}
fmt.Println("Got expected error opening a nonexistent path")

// Ensure the path has a leading slash; fileblob ignores the URL's
// Host field, so URLs should always start with "file:///". On
// Windows, the leading "/" will be stripped, so "file:///c:/foo"
// will refer to c:/foo.
urlpath := url.PathEscape(filepath.ToSlash(dir))
if !strings.HasPrefix(urlpath, "/") {
    urlpath = "/" + urlpath
}
if _, err := blob.Open(ctx, "file://"+urlpath); err != nil {
    log.Fatal(err)
}
fmt.Println("Got a bucket for valid path")

Output:

Got expected error opening a nonexistent path
Got a bucket for valid path

func (*Bucket) As

func (b *Bucket) As(i interface{}) bool

As converts i to provider-specific types.

This function (like the other As functions in this package) is inherently provider-specific, and using it will make that part of your application non-portable, so use it with care.

See the documentation for the subpackage used to instantiate Bucket to see which type(s) are supported.

Usage:

1. Declare a variable of the provider-specific type you want to access.

2. Pass a pointer to it to As.

3. If the type is supported, As will return true and copy the provider-specific type into your variable. Otherwise, it will return false.

Provider-specific types that are intended to be mutable will be exposed as a pointer to the underlying type.

See https://github.com/google/go-cloud/blob/master/internal/docs/design.md#as for more background.

Code:

// Connect to a bucket when your program starts up.
// This example uses the file-based implementation.
dir, cleanup := newTempDir()
defer cleanup()

// Create the file-based bucket.
bucket, err := fileblob.OpenBucket(dir, nil)
if err != nil {
    log.Fatal(err)
}
// This example uses As to try to fill in a string variable. As will return
// false because fileblob doesn't support any types for Bucket.As.
// See the package documentation for your provider (e.g., gcsblob or s3blob)
// to see what type(s) it supports.
var providerSpecific string
if bucket.As(&providerSpecific) {
    fmt.Println("fileblob supports the `string` type for Bucket.As")
    // Use providerSpecific.
} else {
    fmt.Println("fileblob does not support the `string` type for Bucket.As")
}

// This example sets WriterOptions.BeforeWrite to be called before the
// provider starts writing. In the callback, it uses asFunc to try to fill in
// a *string. Again, asFunc will return false because fileblob doesn't support
// any types for Writer.
fn := func(asFunc func(i interface{}) bool) error {
    var mutableProviderSpecific *string
    if asFunc(&mutableProviderSpecific) {
        fmt.Println("fileblob supports the `*string` type for WriterOptions.BeforeWrite")
        // Use mutableProviderSpecific.
    } else {
        fmt.Println("fileblob does not support the `*string` type for WriterOptions.BeforeWrite")
    }
    return nil
}
ctx := context.Background()
if err := bucket.WriteAll(ctx, "foo.txt", []byte("Go Cloud"), &blob.WriterOptions{BeforeWrite: fn}); err != nil {
    log.Fatal(err)
}

Output:

fileblob does not support the `string` type for Bucket.As
fileblob does not support the `*string` type for WriterOptions.BeforeWrite

func (*Bucket) Attributes

func (b *Bucket) Attributes(ctx context.Context, key string) (Attributes, error)

Attributes returns attributes for the blob stored at key.

If the blob does not exist, Attributes returns an error for which IsNotExist will return true.

func (*Bucket) Delete

func (b *Bucket) Delete(ctx context.Context, key string) error

Delete deletes the blob stored at key.

If the blob does not exist, Delete returns an error for which IsNotExist will return true.

func (*Bucket) List

func (b *Bucket) List(opts *ListOptions) *ListIterator

List returns a ListIterator that can be used to iterate over blobs in a bucket, in lexicographical order of UTF-8 encoded keys. The underlying implementation fetches results in pages.

A nil ListOptions is treated the same as the zero value.

List is not guaranteed to include all recently-written blobs; some providers are only eventually consistent.

Code:

// Connect to a bucket when your program starts up.
// This example uses the file-based implementation.
dir, cleanup := newTempDir()
defer cleanup()

// Create the file-based bucket.
bucket, err := fileblob.OpenBucket(dir, nil)
if err != nil {
    log.Fatal(err)
}

// Create some blob objects for listing: "foo[0..4].txt".
ctx := context.Background()
for i := 0; i < 5; i++ {
    if err := bucket.WriteAll(ctx, fmt.Sprintf("foo%d.txt", i), []byte("Go Cloud"), nil); err != nil {
        log.Fatal(err)
    }
}

// Iterate over them.
// This will list the blobs created above because fileblob is strongly
// consistent, but is not guaranteed to work on all providers.
iter := bucket.List(nil)
for {
    obj, err := iter.Next(ctx)
    if err == io.EOF {
        break
    }
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(obj.Key)
}

Output:

foo0.txt
foo1.txt
foo2.txt
foo3.txt
foo4.txt

Code:

// Connect to a bucket when your program starts up.
// This example uses the file-based implementation.
dir, cleanup := newTempDir()
defer cleanup()

// Create the file-based bucket.
bucket, err := fileblob.OpenBucket(dir, nil)
if err != nil {
    log.Fatal(err)
}

// Create some blob objects in a hierarchy.
ctx := context.Background()
for _, key := range []string{
    "dir1/subdir/a.txt",
    "dir1/subdir/b.txt",
    "dir2/c.txt",
    "d.txt",
} {
    if err := bucket.WriteAll(ctx, key, []byte("Go Cloud"), nil); err != nil {
        log.Fatal(err)
    }
}

// list lists files in b starting with prefix. It uses the delimiter "/",
// and recurses into "directories", adding 2 spaces to indent each time.
// It will list the blobs created above because fileblob is strongly
// consistent, but is not guaranteed to work on all providers.
var list func(context.Context, *blob.Bucket, string, string)
list = func(ctx context.Context, b *blob.Bucket, prefix, indent string) {
    iter := b.List(&blob.ListOptions{
        Delimiter: "/",
        Prefix:    prefix,
    })
    for {
        obj, err := iter.Next(ctx)
        if err == io.EOF {
            break
        }
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%s%s\n", indent, obj.Key)
        if obj.IsDir {
            list(ctx, b, obj.Key, indent+"  ")
        }
    }
}
list(ctx, bucket, "", "")

Output:

d.txt
dir1/
  dir1/subdir/
    dir1/subdir/a.txt
    dir1/subdir/b.txt
dir2/
  dir2/c.txt

func (*Bucket) NewRangeReader

func (b *Bucket) NewRangeReader(ctx context.Context, key string, offset, length int64, opts *ReaderOptions) (*Reader, error)

NewRangeReader returns a Reader to read content from the blob stored at key. It reads at most length bytes starting at offset (>= 0). If length is negative, it will read till the end of the blob.

If the blob does not exist, NewRangeReader returns an error for which IsNotExist will return true. Attributes is a lighter-weight way to check for existence.

A nil ReaderOptions is treated the same as the zero value.

The caller must call Close on the returned Reader when done reading.

Code:

// Connect to a bucket when your program starts up.
// This example uses the file-based implementation.
dir, cleanup := newTempDir()
defer cleanup()
// Write a file to read using the bucket.
err := ioutil.WriteFile(filepath.Join(dir, "foo.txt"), []byte("Hello, World!\n"), 0666)
if err != nil {
    log.Fatal(err)
}
// Create the file-based bucket.
bucket, err := fileblob.OpenBucket(dir, nil)
if err != nil {
    log.Fatal(err)
}

// Open a reader using the blob's key at a specific offset and length.
ctx := context.Background()
r, err := bucket.NewRangeReader(ctx, "foo.txt", 1, 4, nil)
if err != nil {
    log.Fatal(err)
}
defer r.Close()
// The blob reader implements io.Reader, so we can use any function that
// accepts an io.Reader.
if _, err := io.Copy(os.Stdout, r); err != nil {
    log.Fatal(err)
}

Output:

ello

func (*Bucket) NewReader

func (b *Bucket) NewReader(ctx context.Context, key string, opts *ReaderOptions) (*Reader, error)

NewReader is a shortcut for NewRangeReader with offset=0 and length=-1.

Code:

// Connect to a bucket when your program starts up.
// This example uses the file-based implementation.
dir, cleanup := newTempDir()
defer cleanup()
// Write a file to read using the bucket.
err := ioutil.WriteFile(filepath.Join(dir, "foo.txt"), []byte("Hello, World!\n"), 0666)
if err != nil {
    log.Fatal(err)
}
// Create the file-based bucket.
bucket, err := fileblob.OpenBucket(dir, nil)
if err != nil {
    log.Fatal(err)
}

// Open a reader using the blob's key.
ctx := context.Background()
r, err := bucket.NewReader(ctx, "foo.txt", nil)
if err != nil {
    log.Fatal(err)
}
defer r.Close()
// The blob reader implements io.Reader, so we can use any function that
// accepts an io.Reader.
if _, err := io.Copy(os.Stdout, r); err != nil {
    log.Fatal(err)
}

Output:

Hello, World!

func (*Bucket) NewWriter

func (b *Bucket) NewWriter(ctx context.Context, key string, opts *WriterOptions) (*Writer, error)

NewWriter returns a Writer that writes to the blob stored at key. A nil WriterOptions is treated the same as the zero value.

If a blob with this key already exists, it will be replaced. The blob being written is not guaranteed to be readable until Close has been called; until then, any previous blob will still be readable. Even after Close is called, newly written blobs are not guaranteed to be returned from List; some providers are only eventually consistent.

The returned Writer will store ctx for later use in Write and/or Close. To abort a write, cancel ctx; otherwise, it must remain open until Close is called.

The caller must call Close on the returned Writer, even if the write is aborted.

Code:

// Connect to a bucket when your program starts up.
// This example uses the file-based implementation.
dir, cleanup := newTempDir()
defer cleanup()
bucket, err := fileblob.OpenBucket(dir, nil)
if err != nil {
    log.Fatal(err)
}

// Open a writer using the key "foo.txt" and the default options.
ctx := context.Background()
w, err := bucket.NewWriter(ctx, "foo.txt", nil)
if err != nil {
    log.Fatal(err)
}

// The blob writer implements io.Writer, so we can use any function that
// accepts an io.Writer. A writer must always be closed.
_, printErr := fmt.Fprintln(w, "Hello, World!")
closeErr := w.Close()
if printErr != nil {
    log.Fatal(printErr)
}
if closeErr != nil {
    log.Fatal(closeErr)
}
// Copy the written blob to stdout.
r, err := bucket.NewReader(ctx, "foo.txt", nil)
if err != nil {
    log.Fatal(err)
}
defer r.Close()
if _, err := io.Copy(os.Stdout, r); err != nil {
    log.Fatal(err)
}
// Since we didn't specify a WriterOptions.ContentType for NewWriter, blob
// auto-determined one using http.DetectContentType.
fmt.Println(r.ContentType())

Output:

Hello, World!
text/plain; charset=utf-8

func (*Bucket) ReadAll

func (b *Bucket) ReadAll(ctx context.Context, key string) ([]byte, error)

ReadAll is a shortcut for creating a Reader via NewReader with nil ReaderOptions, and reading the entire blob.

func (*Bucket) SignedURL

func (b *Bucket) SignedURL(ctx context.Context, key string, opts *SignedURLOptions) (string, error)

SignedURL returns a URL that can be used to GET the blob for the duration specified in opts.Expiry.

A nil SignedURLOptions is treated the same as the zero value.

It is valid to call SignedURL for a key that does not exist.

If the provider implementation does not support this functionality, SignedURL will return an error for which IsNotImplemented will return true.

func (*Bucket) WriteAll

func (b *Bucket) WriteAll(ctx context.Context, key string, p []byte, opts *WriterOptions) error

WriteAll is a shortcut for creating a Writer via NewWriter and writing p.

type FromURLFunc

type FromURLFunc func(context.Context, *url.URL) (driver.Bucket, error)

FromURLFunc is intended for use by provider implementations. It allows providers to convert a parsed URL from Open to a driver.Bucket.

type ListIterator

type ListIterator struct {
    // contains filtered or unexported fields
}

ListIterator iterates over List results.

func (*ListIterator) Next

func (i *ListIterator) Next(ctx context.Context) (*ListObject, error)

Next returns a *ListObject for the next blob. It returns (nil, io.EOF) if there are no more.

type ListObject

type ListObject struct {
    // Key is the key for this blob.
    Key string
    // ModTime is the time the blob was last modified.
    ModTime time.Time
    // Size is the size of the blob's content in bytes.
    Size int64
    // MD5 is an MD5 hash of the blob contents or nil if not available.
    MD5 []byte
    // IsDir indicates that this result represents a "directory" in the
    // hierarchical namespace, ending in ListOptions.Delimiter. Key can be
    // passed as ListOptions.Prefix to list items in the "directory".
    // Fields other than Key and IsDir will not be set if IsDir is true.
    IsDir bool
    // contains filtered or unexported fields
}

ListObject represents a single blob returned from List.

func (*ListObject) As

func (o *ListObject) As(i interface{}) bool

As converts i to provider-specific types. See Bucket.As for more details.

type ListOptions

type ListOptions struct {
    // Prefix indicates that only blobs with a key starting with this prefix
    // should be returned.
    Prefix string
    // Delimiter sets the delimiter used to define a hierarchical namespace,
    // like a filesystem with "directories".
    //
    // An empty delimiter means that the bucket is treated as a single flat
    // namespace.
    //
    // A non-empty delimiter means that any result with the delimiter in its key
    // after Prefix is stripped will be returned with ListObject.IsDir = true,
    // ListObject.Key truncated after the delimiter, and zero values for other
    // ListObject fields. These results represent "directories". Multiple results
    // in a "directory" are returned as a single result.
    Delimiter string

    // BeforeList is a callback that will be called before each call to
    // the underlying provider's list functionality.
    // asFunc converts its argument to provider-specific types.
    // See Bucket.As for more details.
    BeforeList func(asFunc func(interface{}) bool) error
}

ListOptions sets options for listing blobs via Bucket.List.

type Reader

type Reader struct {
    // contains filtered or unexported fields
}

Reader reads bytes from a blob. It implements io.ReadCloser, and must be closed after reads are finished.

func (*Reader) As

func (r *Reader) As(i interface{}) bool

As converts i to provider-specific types. See Bucket.As for more details.

func (*Reader) Close

func (r *Reader) Close() error

Close implements io.Closer (https://golang.org/pkg/io/#Closer).

func (*Reader) ContentType

func (r *Reader) ContentType() string

ContentType returns the MIME type of the blob.

func (*Reader) ModTime

func (r *Reader) ModTime() time.Time

ModTime returns the time the blob was last modified.

func (*Reader) Read

func (r *Reader) Read(p []byte) (int, error)

Read implements io.Reader (https://golang.org/pkg/io/#Reader).

func (*Reader) Size

func (r *Reader) Size() int64

Size returns the size of the blob content in bytes.

type ReaderOptions

type ReaderOptions struct{}

ReaderOptions sets options for NewReader and NewRangeReader. It is provided for future extensibility.

type SignedURLOptions

type SignedURLOptions struct {
    // Expiry sets how long the returned URL is valid for.
    // Defaults to DefaultSignedURLExpiry.
    Expiry time.Duration
}

SignedURLOptions sets options for SignedURL.

type Writer

type Writer struct {
    // contains filtered or unexported fields
}

Writer writes bytes to a blob.

It implements io.WriteCloser (https://golang.org/pkg/io/#WriteCloser), and must be closed after all writes are done.

func (*Writer) Close

func (w *Writer) Close() error

Close closes the blob writer. The write operation is not guaranteed to have succeeded until Close returns with no error. Close may return an error if the context provided to create the Writer is canceled or reaches its deadline.

func (*Writer) Write

func (w *Writer) Write(p []byte) (n int, err error)

Write implements the io.Writer interface (https://golang.org/pkg/io/#Writer).

Writes may happen asynchronously, so the returned error can be nil even if the actual write eventually fails. The write is only guaranteed to have succeeded if Close returns no error.

type WriterOptions

type WriterOptions struct {
    // BufferSize changes the default size in bytes of the chunks that
    // Writer will upload in a single request; larger blobs will be split into
    // multiple requests.
    //
    // This option may be ignored by some provider implementations.
    //
    // If 0, the provider implementation will choose a reasonable default.
    //
    // If the Writer is used to do many small writes concurrently, using a
    // smaller BufferSize may reduce memory usage.
    BufferSize int

    // ContentType specifies the MIME type of the blob being written. If not set,
    // it will be inferred from the content using the algorithm described at
    // http://mimesniff.spec.whatwg.org/.
    ContentType string

    // ContentMD5 may be used as a message integrity check (MIC).
    // https://tools.ietf.org/html/rfc1864
    ContentMD5 []byte

    // Metadata holds key/value strings to be associated with the blob, or nil.
    // Keys may not be empty, and are lowercased before being written.
    // Duplicate case-insensitive keys (e.g., "foo" and "FOO") will result in
    // an error.
    Metadata map[string]string

    // BeforeWrite is a callback that will be called exactly once, before
    // any data is written (unless NewWriter returns an error, in which case
    // it will not be called at all). Note that this is not necessarily during
    // or after the first Write call, as providers may buffer bytes before
    // sending an upload request.
    //
    // asFunc converts its argument to provider-specific types.
    // See Bucket.As for more details.
    BeforeWrite func(asFunc func(interface{}) bool) error
}

WriterOptions sets options for NewWriter.

Directories

Path        Synopsis
driver      Package driver defines a set of interfaces that the blob package uses to interact with the underlying blob services.
drivertest  Package drivertest provides a conformance test for implementations of driver.
fileblob    Package fileblob provides a blob implementation that uses the filesystem.
gcsblob     Package gcsblob provides a blob implementation that uses GCS.
s3blob      Package s3blob provides a blob implementation that uses S3.

Package blob imports 14 packages and is imported by 4 packages. Updated 2018-12-18.