blob

package v0.18.1-0...-1aa001a

Published: Feb 27, 2020 License: Apache-2.0 Imports: 24 Imported by: 0

Documentation

Overview

Package blob provides an easy and portable way to interact with blobs within a storage location. Subpackages contain driver implementations of blob for supported services.

See https://github.com/kainoaseto/go-cloud/howto/blob/ for a detailed how-to guide.

Errors

The errors returned from this package can be inspected in several ways:

The Code function from github.com/kainoaseto/go-cloud/gcerrors will return an error code, also defined in that package, when invoked on an error.

The Bucket.ErrorAs method can retrieve the driver error underlying the returned error.
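
As a brief sketch (assuming this fork's gcerrors package mirrors gocloud.dev/gcerrors, with a Code function and a NotFound code), a missing blob can be detected like this:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/kainoaseto/go-cloud/blob"
	"github.com/kainoaseto/go-cloud/gcerrors"

	_ "github.com/kainoaseto/go-cloud/blob/memblob"
)

func main() {
	ctx := context.Background()

	// Open an in-memory bucket so the sketch is self-contained.
	b, err := blob.OpenBucket(ctx, "mem://")
	if err != nil {
		log.Fatal(err)
	}
	defer b.Close()

	// Reading a key that was never written should yield a NotFound error.
	_, err = b.ReadAll(ctx, "no-such-key")
	if gcerrors.Code(err) == gcerrors.NotFound {
		fmt.Println("blob does not exist")
	} else if err != nil {
		log.Fatal(err)
	}
}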

OpenCensus Integration

OpenCensus supports tracing and metric collection for multiple languages and backend providers. See https://opencensus.io.

This API collects OpenCensus traces and metrics for the following methods:

  • Attributes
  • Copy
  • Delete
  • NewRangeReader, from creation until the call to Close. (NewReader and ReadAll are included because they call NewRangeReader.)
  • NewWriter, from creation until the call to Close.

All trace and metric names begin with the package import path. The traces add the method name. For example, "github.com/kainoaseto/go-cloud/blob/Attributes". The metrics are "completed_calls", a count of completed method calls by driver, method and status (error code); and "latency", a distribution of method latency by driver and method. For example, "github.com/kainoaseto/go-cloud/blob/latency".

It also collects the following metrics:

  • github.com/kainoaseto/go-cloud/blob/bytes_read: the total number of bytes read, by driver.
  • github.com/kainoaseto/go-cloud/blob/bytes_written: the total number of bytes written, by driver.

To enable trace collection in your application, see "Configure Exporter" at https://opencensus.io/quickstart/go/tracing. To enable metric collection in your application, see "Exporting stats" at https://opencensus.io/quickstart/go/metrics.
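
For example, a minimal sketch of enabling metric collection for this package by registering its predefined views (OpenCensusViews, documented under Variables below) with go.opencensus.io/stats/view; exporter configuration is omitted and follows the quickstart links above:

package main

import (
	"log"

	"github.com/kainoaseto/go-cloud/blob"
	"go.opencensus.io/stats/view"
)

func main() {
	// Register the predefined views so completed_calls, latency, bytes_read,
	// and bytes_written are aggregated; an exporter must also be configured
	// (see the quickstart links above) for the data to go anywhere.
	if err := view.Register(blob.OpenCensusViews...); err != nil {
		log.Fatal(err)
	}
}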

Example
package main

import (
	"context"
	"fmt"
	"io/ioutil"
	"log"
	"os"

	"github.com/kainoaseto/go-cloud/blob/fileblob"

	_ "github.com/kainoaseto/go-cloud/blob/gcsblob"
	_ "github.com/kainoaseto/go-cloud/blob/s3blob"
)

func main() {
	// Connect to a bucket when your program starts up.
	// This example uses the file-based implementation in fileblob, and creates
	// a temporary directory to use as the root directory.
	dir, cleanup := newTempDir()
	defer cleanup()
	bucket, err := fileblob.OpenBucket(dir, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer bucket.Close()

	// We now have a *blob.Bucket! We can write our application using the
	// *blob.Bucket type, and have the freedom to change the initialization code
	// above to choose a different service-specific driver later.

	// In this example, we'll write a blob and then read it.
	ctx := context.Background()
	if err := bucket.WriteAll(ctx, "foo.txt", []byte("Go Cloud Development Kit"), nil); err != nil {
		log.Fatal(err)
	}
	b, err := bucket.ReadAll(ctx, "foo.txt")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(b))

}

func newTempDir() (string, func()) {
	dir, err := ioutil.TempDir("", "go-cloud-blob-example")
	if err != nil {
		panic(err)
	}
	return dir, func() { os.RemoveAll(dir) }
}
Output:

Go Cloud Development Kit
Example (OpenFromURL)
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/kainoaseto/go-cloud/blob"
	_ "github.com/kainoaseto/go-cloud/blob/memblob"
)

func main() {
	ctx := context.Background()

	// Connect to a bucket using a URL.
	// This example uses "memblob", the in-memory implementation.
	// We need to add a blank import line to register the memblob driver's
	// URLOpener, which implements blob.BucketURLOpener:
	// import _ "github.com/kainoaseto/go-cloud/blob/memblob"
	// memblob registers for the "mem" scheme.
	// All blob.OpenBucket URLs also work with "blob+" or "blob+bucket+" prefixes,
	// e.g., "blob+mem://" or "blob+bucket+mem://".
	b, err := blob.OpenBucket(ctx, "mem://")
	if err != nil {
		log.Fatal(err)
	}
	defer b.Close()

	// Now we can use b to read or write to blobs in the bucket.
	if err := b.WriteAll(ctx, "my-key", []byte("hello world"), nil); err != nil {
		log.Fatal(err)
	}
	data, err := b.ReadAll(ctx, "my-key")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(data))
}
Output:

hello world
Example (OpenFromURLWithPrefix)
package main

import (
	"context"
	"log"

	"github.com/kainoaseto/go-cloud/blob"
	_ "github.com/kainoaseto/go-cloud/blob/memblob"
)

func main() {
	// PRAGMA: This example is used on github.com/kainoaseto/go-cloud; PRAGMA comments adjust how it is shown and can be ignored.
	// PRAGMA: On github.com/kainoaseto/go-cloud, hide lines until the next blank line.
	ctx := context.Background()

	// Connect to a bucket using a URL, using the "prefix" query parameter to
	// target a subfolder in the bucket.
	// The prefix should end with "/", so that the resulting bucket operates
	// in a subfolder.
	b, err := blob.OpenBucket(ctx, "mem://?prefix=a/subfolder/")
	if err != nil {
		log.Fatal(err)
	}
	defer b.Close()

	// Bucket operations on <key> will be translated to "a/subfolder/<key>".
}
Output:

Constants

const DefaultSignedURLExpiry = 1 * time.Hour

DefaultSignedURLExpiry is the default duration for SignedURLOptions.Expiry.

Variables

var NewBucket = newBucket

NewBucket is intended for use by drivers only. Do not use in application code.

var (

	// OpenCensusViews are predefined views for OpenCensus metrics.
	// The views include counts and latency distributions for API method calls,
	// and total bytes read and written.
	// See the example at https://godoc.org/go.opencensus.io/stats/view for usage.
	OpenCensusViews = append(
		oc.Views(pkgName, latencyMeasure),
		&view.View{
			Name:        pkgName + "/bytes_read",
			Measure:     bytesReadMeasure,
			Description: "Sum of bytes read from the service.",
			TagKeys:     []tag.Key{oc.ProviderKey},
			Aggregation: view.Sum(),
		},
		&view.View{
			Name:        pkgName + "/bytes_written",
			Measure:     bytesWrittenMeasure,
			Description: "Sum of bytes written to the service.",
			TagKeys:     []tag.Key{oc.ProviderKey},
			Aggregation: view.Sum(),
		})
)

Functions

This section is empty.

Types

type Attributes

type Attributes struct {
	// CacheControl specifies caching attributes that services may use
	// when serving the blob.
	// https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Cache-Control
	CacheControl string
	// ContentDisposition specifies whether the blob content is expected to be
	// displayed inline or as an attachment.
	// https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Disposition
	ContentDisposition string
	// ContentEncoding specifies the encoding used for the blob's content, if any.
	// https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Encoding
	ContentEncoding string
	// ContentLanguage specifies the language used in the blob's content, if any.
	// https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Language
	ContentLanguage string
	// ContentType is the MIME type of the blob. It will not be empty.
	// https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Type
	ContentType string
	// Metadata holds key/value pairs associated with the blob.
	// Keys are guaranteed to be in lowercase, even if the backend service
	// has case-sensitive keys (although note that Metadata written via
	// this package will always be lowercased). If there are duplicate
	// case-insensitive keys (e.g., "foo" and "FOO"), only one value
	// will be kept, and it is undefined which one.
	Metadata map[string]string
	// ModTime is the time the blob was last modified.
	ModTime time.Time
	// Size is the size of the blob's content in bytes.
	Size int64
	// MD5 is an MD5 hash of the blob contents or nil if not available.
	MD5 []byte
	// contains filtered or unexported fields
}

Attributes contains attributes about a blob.

func (*Attributes) As

func (a *Attributes) As(i interface{}) bool

As converts i to driver-specific types. See https://github.com/kainoaseto/go-cloud/concepts/as/ for background information, the "As" examples in this package for examples, and the driver package documentation for the specific types supported for that driver.

Example
package main

import (
	"context"
	"fmt"
	"log"

	"cloud.google.com/go/storage"
	"github.com/kainoaseto/go-cloud/blob"

	_ "github.com/kainoaseto/go-cloud/blob/gcsblob"
	_ "github.com/kainoaseto/go-cloud/blob/s3blob"
)

func main() {
	// This example is specific to the gcsblob implementation; it demonstrates
	// access to the underlying cloud.google.com/go/storage.ObjectAttrs type.
	// The types exposed for As by gcsblob are documented in
	// https://godoc.org/github.com/kainoaseto/go-cloud/blob/gcsblob#hdr-As
	ctx := context.Background()

	b, err := blob.OpenBucket(ctx, "gs://my-bucket")
	if err != nil {
		log.Fatal(err)
	}
	defer b.Close()

	attrs, err := b.Attributes(ctx, "gopher.png")
	if err != nil {
		log.Fatal(err)
	}

	var oa storage.ObjectAttrs
	if attrs.As(&oa) {
		fmt.Println(oa.Owner)
	}
}
Output:

type Bucket

type Bucket struct {
	// contains filtered or unexported fields
}

Bucket provides an easy and portable way to interact with blobs within a "bucket", including read, write, and list operations. To create a Bucket, use constructors found in driver subpackages.

func OpenBucket

func OpenBucket(ctx context.Context, urlstr string) (*Bucket, error)

OpenBucket opens the bucket identified by the URL given.

See the URLOpener documentation in driver subpackages for details on supported URL formats, and https://github.com/kainoaseto/go-cloud/concepts/urls/ for more information.

In addition to driver-specific query parameters, OpenBucket supports the following query parameters:

  • prefix: wraps the resulting Bucket using PrefixedBucket with the given prefix.

func PrefixedBucket

func PrefixedBucket(bucket *Bucket, prefix string) *Bucket

PrefixedBucket returns a *Bucket based on bucket with all keys modified to have prefix, which will usually end with a "/" to target a subdirectory in the bucket.

bucket will be closed and no longer usable after this function returns.

Example
package main

import (
	"github.com/kainoaseto/go-cloud/blob"

	_ "github.com/kainoaseto/go-cloud/blob/gcsblob"
	_ "github.com/kainoaseto/go-cloud/blob/s3blob"
)

func main() {
	// PRAGMA: This example is used on github.com/kainoaseto/go-cloud; PRAGMA comments adjust how it is shown and can be ignored.
	// PRAGMA: On github.com/kainoaseto/go-cloud, hide lines until the next blank line.
	var bucket *blob.Bucket

	// Wrap the bucket using blob.PrefixedBucket.
	// The prefix should end with "/", so that the resulting bucket operates
	// in a subfolder.
	bucket = blob.PrefixedBucket(bucket, "a/subfolder/")

	// The original bucket is no longer usable; it has been closed.
	// The wrapped bucket should be closed when done.
	defer bucket.Close()

	// Bucket operations on <key> will be translated to "a/subfolder/<key>".
}
Output:

func (*Bucket) As

func (b *Bucket) As(i interface{}) bool

As converts i to driver-specific types. See https://github.com/kainoaseto/go-cloud/concepts/as/ for background information, the "As" examples in this package for examples, and the driver package documentation for the specific types supported for that driver.

Example
package main

import (
	"context"
	"log"

	"cloud.google.com/go/storage"
	"github.com/kainoaseto/go-cloud/blob"

	_ "github.com/kainoaseto/go-cloud/blob/gcsblob"
	_ "github.com/kainoaseto/go-cloud/blob/s3blob"
)

func main() {
	// This example is specific to the gcsblob implementation; it demonstrates
	// access to the underlying cloud.google.com/go/storage.Client type.
	// The types exposed for As by gcsblob are documented in
	// https://godoc.org/github.com/kainoaseto/go-cloud/blob/gcsblob#hdr-As

	// This URL will open the bucket "my-bucket" using default credentials.
	ctx := context.Background()
	b, err := blob.OpenBucket(ctx, "gs://my-bucket")
	if err != nil {
		log.Fatal(err)
	}
	defer b.Close()

	// Access storage.Client fields via gcsClient here.
	var gcsClient *storage.Client
	if b.As(&gcsClient) {
		email, err := gcsClient.ServiceAccount(ctx, "project-name")
		if err != nil {
			log.Fatal(err)
		}
		_ = email
	} else {
		log.Println("Unable to access storage.Client through Bucket.As")
	}
}
Output:

func (*Bucket) Attributes

func (b *Bucket) Attributes(ctx context.Context, key string) (_ *Attributes, err error)

Attributes returns attributes for the blob stored at key.

If the blob does not exist, Attributes returns an error for which gcerrors.Code will return gcerrors.NotFound.
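
A compile-style sketch of reading attributes, following the placeholder-bucket pattern used by the other examples on this page:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/kainoaseto/go-cloud/blob"
)

func main() {
	ctx := context.Background()
	var bucket *blob.Bucket

	// Fetch the attributes of the blob stored at "foo.txt".
	attrs, err := bucket.Attributes(ctx, "foo.txt")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(attrs.ContentType, attrs.Size, attrs.ModTime)
}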

func (*Bucket) Close

func (b *Bucket) Close() error

Close releases any resources used for the bucket.

func (*Bucket) Copy

func (b *Bucket) Copy(ctx context.Context, dstKey, srcKey string, opts *CopyOptions) (err error)

Copy the blob stored at srcKey to dstKey. A nil CopyOptions is treated the same as the zero value.

If the source blob does not exist, Copy returns an error for which gcerrors.Code will return gcerrors.NotFound.

If the destination blob already exists, it is overwritten.
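
A compile-style sketch of copying within a bucket, again with a placeholder bucket:

package main

import (
	"context"
	"log"

	"github.com/kainoaseto/go-cloud/blob"
)

func main() {
	ctx := context.Background()
	var bucket *blob.Bucket

	// Copy the blob at "from.txt" to "to.txt" with default options.
	if err := bucket.Copy(ctx, "to.txt", "from.txt", nil); err != nil {
		log.Fatal(err)
	}
}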

func (*Bucket) Delete

func (b *Bucket) Delete(ctx context.Context, key string) (err error)

Delete deletes the blob stored at key.

If the blob does not exist, Delete returns an error for which gcerrors.Code will return gcerrors.NotFound.

Example
package main

import (
	"context"
	"log"

	"github.com/kainoaseto/go-cloud/blob"

	_ "github.com/kainoaseto/go-cloud/blob/gcsblob"
	_ "github.com/kainoaseto/go-cloud/blob/s3blob"
)

func main() {
	// PRAGMA: This example is used on github.com/kainoaseto/go-cloud; PRAGMA comments adjust how it is shown and can be ignored.
	// PRAGMA: On github.com/kainoaseto/go-cloud, hide lines until the next blank line.
	ctx := context.Background()
	var bucket *blob.Bucket

	if err := bucket.Delete(ctx, "foo.txt"); err != nil {
		log.Fatal(err)
	}
}
Output:

func (*Bucket) ErrorAs

func (b *Bucket) ErrorAs(err error, i interface{}) bool

ErrorAs converts err to driver-specific types. ErrorAs panics if i is nil or not a pointer. ErrorAs returns false if err == nil. See https://github.com/kainoaseto/go-cloud/concepts/as/ for background information.

Example
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/kainoaseto/go-cloud/blob"

	_ "github.com/kainoaseto/go-cloud/blob/gcsblob"
	_ "github.com/kainoaseto/go-cloud/blob/s3blob"
)

func main() {
	// This example is specific to the s3blob implementation; it demonstrates
	// access to the underlying awserr.Error type.
	// The types exposed for ErrorAs by s3blob are documented in
	// https://godoc.org/github.com/kainoaseto/go-cloud/blob/s3blob#hdr-As

	ctx := context.Background()

	b, err := blob.OpenBucket(ctx, "s3://my-bucket")
	if err != nil {
		log.Fatal(err)
	}
	defer b.Close()

	_, err = b.ReadAll(ctx, "nosuchfile")
	if err != nil {
		var awsErr awserr.Error
		if b.ErrorAs(err, &awsErr) {
			fmt.Println(awsErr.Code())
		}
	}
}
Output:

func (*Bucket) Exists

func (b *Bucket) Exists(ctx context.Context, key string) (bool, error)

Exists returns true if a blob exists at key, false if it does not exist, or an error. It is a shortcut for calling Attributes and checking if it returns an error with code gcerrors.NotFound.
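
A compile-style sketch of an existence check, again with a placeholder bucket:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/kainoaseto/go-cloud/blob"
)

func main() {
	ctx := context.Background()
	var bucket *blob.Bucket

	// Check whether "foo.txt" exists without reading its content.
	exists, err := bucket.Exists(ctx, "foo.txt")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("foo.txt exists:", exists)
}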

func (*Bucket) List

func (b *Bucket) List(opts *ListOptions) *ListIterator

List returns a ListIterator that can be used to iterate over blobs in a bucket, in lexicographical order of UTF-8 encoded keys. The underlying implementation fetches results in pages.

A nil ListOptions is treated the same as the zero value.

List is not guaranteed to include all recently-written blobs; some services are only eventually consistent.

Example
package main

import (
	"context"
	"fmt"
	"io"
	"io/ioutil"
	"log"
	"os"

	"github.com/kainoaseto/go-cloud/blob/fileblob"

	_ "github.com/kainoaseto/go-cloud/blob/gcsblob"
	_ "github.com/kainoaseto/go-cloud/blob/s3blob"
)

func main() {
	// Connect to a bucket when your program starts up.
	// This example uses the file-based implementation.
	dir, cleanup := newTempDir()
	defer cleanup()

	// Create the file-based bucket.
	bucket, err := fileblob.OpenBucket(dir, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer bucket.Close()

	// Create some blob objects for listing: "foo[0..4].txt".
	ctx := context.Background()
	for i := 0; i < 5; i++ {
		if err := bucket.WriteAll(ctx, fmt.Sprintf("foo%d.txt", i), []byte("Go Cloud Development Kit"), nil); err != nil {
			log.Fatal(err)
		}
	}

	// Iterate over them.
	// This will list the blobs created above because fileblob is strongly
	// consistent, but is not guaranteed to work on all services.
	iter := bucket.List(nil)
	for {
		obj, err := iter.Next(ctx)
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(obj.Key)
	}

}

func newTempDir() (string, func()) {
	dir, err := ioutil.TempDir("", "go-cloud-blob-example")
	if err != nil {
		panic(err)
	}
	return dir, func() { os.RemoveAll(dir) }
}
Output:

foo0.txt
foo1.txt
foo2.txt
foo3.txt
foo4.txt
Example (WithDelimiter)
package main

import (
	"context"
	"fmt"
	"io"
	"io/ioutil"
	"log"
	"os"

	"github.com/kainoaseto/go-cloud/blob"
	"github.com/kainoaseto/go-cloud/blob/fileblob"

	_ "github.com/kainoaseto/go-cloud/blob/gcsblob"
	_ "github.com/kainoaseto/go-cloud/blob/s3blob"
)

func main() {
	// Connect to a bucket when your program starts up.
	// This example uses the file-based implementation.
	dir, cleanup := newTempDir()
	defer cleanup()

	// Create the file-based bucket.
	bucket, err := fileblob.OpenBucket(dir, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer bucket.Close()

	// Create some blob objects in a hierarchy.
	ctx := context.Background()
	for _, key := range []string{
		"dir1/subdir/a.txt",
		"dir1/subdir/b.txt",
		"dir2/c.txt",
		"d.txt",
	} {
		if err := bucket.WriteAll(ctx, key, []byte("Go Cloud Development Kit"), nil); err != nil {
			log.Fatal(err)
		}
	}

	// list lists files in b starting with prefix. It uses the delimiter "/",
	// and recurses into "directories", adding 2 spaces to indent each time.
	// It will list the blobs created above because fileblob is strongly
	// consistent, but is not guaranteed to work on all services.
	var list func(context.Context, *blob.Bucket, string, string)
	list = func(ctx context.Context, b *blob.Bucket, prefix, indent string) {
		iter := b.List(&blob.ListOptions{
			Delimiter: "/",
			Prefix:    prefix,
		})
		for {
			obj, err := iter.Next(ctx)
			if err == io.EOF {
				break
			}
			if err != nil {
				log.Fatal(err)
			}
			fmt.Printf("%s%s\n", indent, obj.Key)
			if obj.IsDir {
				list(ctx, b, obj.Key, indent+"  ")
			}
		}
	}
	list(ctx, bucket, "", "")

}

func newTempDir() (string, func()) {
	dir, err := ioutil.TempDir("", "go-cloud-blob-example")
	if err != nil {
		panic(err)
	}
	return dir, func() { os.RemoveAll(dir) }
}
Output:

d.txt
dir1/
  dir1/subdir/
    dir1/subdir/a.txt
    dir1/subdir/b.txt
dir2/
  dir2/c.txt

func (*Bucket) NewRangeReader

func (b *Bucket) NewRangeReader(ctx context.Context, key string, offset, length int64, opts *ReaderOptions) (_ *Reader, err error)

NewRangeReader returns a Reader to read content from the blob stored at key. It reads at most length bytes starting at offset (>= 0). If length is negative, it will read until the end of the blob.

If the blob does not exist, NewRangeReader returns an error for which gcerrors.Code will return gcerrors.NotFound. Exists is a lighter-weight way to check for existence.

A nil ReaderOptions is treated the same as the zero value.

The caller must call Close on the returned Reader when done reading.

Example
package main

import (
	"context"
	"io"
	"log"
	"os"

	"github.com/kainoaseto/go-cloud/blob"

	_ "github.com/kainoaseto/go-cloud/blob/gcsblob"
	_ "github.com/kainoaseto/go-cloud/blob/s3blob"
)

func main() {
	// PRAGMA: This example is used on github.com/kainoaseto/go-cloud; PRAGMA comments adjust how it is shown and can be ignored.
	// PRAGMA: On github.com/kainoaseto/go-cloud, hide lines until the next blank line.
	ctx := context.Background()
	var bucket *blob.Bucket

	// Open the key "foo.txt" for reading at offset 1024 and read up to 4096 bytes.
	r, err := bucket.NewRangeReader(ctx, "foo.txt", 1024, 4096, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer r.Close()
	// Copy from the read range to stdout.
	if _, err := io.Copy(os.Stdout, r); err != nil {
		log.Fatal(err)
	}
}
Output:

func (*Bucket) NewReader

func (b *Bucket) NewReader(ctx context.Context, key string, opts *ReaderOptions) (*Reader, error)

NewReader is a shortcut for NewRangeReader with offset=0 and length=-1.

Example
package main

import (
	"context"
	"fmt"
	"io"
	"log"
	"os"

	"github.com/kainoaseto/go-cloud/blob"

	_ "github.com/kainoaseto/go-cloud/blob/gcsblob"
	_ "github.com/kainoaseto/go-cloud/blob/s3blob"
)

func main() {
	// PRAGMA: This example is used on github.com/kainoaseto/go-cloud; PRAGMA comments adjust how it is shown and can be ignored.
	// PRAGMA: On github.com/kainoaseto/go-cloud, hide lines until the next blank line.
	ctx := context.Background()
	var bucket *blob.Bucket

	// Open the key "foo.txt" for reading with the default options.
	r, err := bucket.NewReader(ctx, "foo.txt", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer r.Close()
	// Readers also have a limited view of the blob's metadata.
	fmt.Println("Content-Type:", r.ContentType())
	fmt.Println()
	// Copy from the reader to stdout.
	if _, err := io.Copy(os.Stdout, r); err != nil {
		log.Fatal(err)
	}
}
Output:

func (*Bucket) NewWriter

func (b *Bucket) NewWriter(ctx context.Context, key string, opts *WriterOptions) (_ *Writer, err error)

NewWriter returns a Writer that writes to the blob stored at key. A nil WriterOptions is treated the same as the zero value.

If a blob with this key already exists, it will be replaced. The blob being written is not guaranteed to be readable until Close has been called; until then, any previous blob will still be readable. Even after Close is called, newly written blobs are not guaranteed to be returned from List; some services are only eventually consistent.

The returned Writer will store ctx for later use in Write and/or Close. To abort a write, cancel ctx; otherwise, it must remain open until Close is called.

The caller must call Close on the returned Writer, even if the write is aborted.

Example
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/kainoaseto/go-cloud/blob"

	_ "github.com/kainoaseto/go-cloud/blob/gcsblob"
	_ "github.com/kainoaseto/go-cloud/blob/s3blob"
)

func main() {
	// PRAGMA: This example is used on github.com/kainoaseto/go-cloud; PRAGMA comments adjust how it is shown and can be ignored.
	// PRAGMA: On github.com/kainoaseto/go-cloud, hide lines until the next blank line.
	ctx := context.Background()
	var bucket *blob.Bucket

	// Open the key "foo.txt" for writing with the default options.
	w, err := bucket.NewWriter(ctx, "foo.txt", nil)
	if err != nil {
		log.Fatal(err)
	}
	_, writeErr := fmt.Fprintln(w, "Hello, World!")
	// Always check the return value of Close when writing.
	closeErr := w.Close()
	if writeErr != nil {
		log.Fatal(writeErr)
	}
	if closeErr != nil {
		log.Fatal(closeErr)
	}
}
Output:

Example (Cancel)
package main

import (
	"context"
	"log"

	"github.com/kainoaseto/go-cloud/blob"

	_ "github.com/kainoaseto/go-cloud/blob/gcsblob"
	_ "github.com/kainoaseto/go-cloud/blob/s3blob"
)

func main() {
	// PRAGMA: This example is used on github.com/kainoaseto/go-cloud; PRAGMA comments adjust how it is shown and can be ignored.
	// PRAGMA: On github.com/kainoaseto/go-cloud, hide lines until the next blank line.
	ctx := context.Background()
	var bucket *blob.Bucket

	// Create a cancelable context from the existing context.
	writeCtx, cancelWrite := context.WithCancel(ctx)
	defer cancelWrite()

	// Open the key "foo.txt" for writing with the default options.
	w, err := bucket.NewWriter(writeCtx, "foo.txt", nil)
	if err != nil {
		log.Fatal(err)
	}

	// Assume some writes happened and we encountered an error.
	// Now we want to abort the write.

	if err != nil {
		// First cancel the context.
		cancelWrite()
		// You must still close the writer to avoid leaking resources.
		w.Close()
	}
}
Output:

func (*Bucket) ReadAll

func (b *Bucket) ReadAll(ctx context.Context, key string) (_ []byte, err error)

ReadAll is a shortcut for creating a Reader via NewReader with nil ReaderOptions, and reading the entire blob.

func (*Bucket) SignedURL

func (b *Bucket) SignedURL(ctx context.Context, key string, opts *SignedURLOptions) (string, error)

SignedURL returns a URL that can be used to GET the blob for the duration specified in opts.Expiry.

A nil SignedURLOptions is treated the same as the zero value.

It is valid to call SignedURL for a key that does not exist.

If the driver does not support this functionality, SignedURL will return an error for which gcerrors.Code will return gcerrors.Unimplemented.
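
A compile-style sketch of requesting a signed GET URL with a non-default expiry; the key and duration are illustrative:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/kainoaseto/go-cloud/blob"
)

func main() {
	ctx := context.Background()
	var bucket *blob.Bucket

	// Request a GET URL for "foo.txt" that stays valid for 15 minutes.
	// Drivers without signing support return a gcerrors.Unimplemented error.
	url, err := bucket.SignedURL(ctx, "foo.txt", &blob.SignedURLOptions{
		Expiry: 15 * time.Minute,
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(url)
}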

func (*Bucket) WriteAll

func (b *Bucket) WriteAll(ctx context.Context, key string, p []byte, opts *WriterOptions) (err error)

WriteAll is a shortcut for creating a Writer via NewWriter and writing p.

If opts.ContentMD5 is not set, WriteAll will compute the MD5 of p and use it as the ContentMD5 option for the Writer it creates.

type BucketURLOpener

type BucketURLOpener interface {
	OpenBucketURL(ctx context.Context, u *url.URL) (*Bucket, error)
}

BucketURLOpener represents types that can open buckets based on a URL. The opener must not modify the URL argument. OpenBucketURL must be safe to call from multiple goroutines.

This interface is generally implemented by types in driver packages.

type CopyOptions

type CopyOptions struct {
	// BeforeCopy is a callback that will be called before the copy is
	// initiated.
	//
	// asFunc converts its argument to driver-specific types.
	// See https://github.com/kainoaseto/go-cloud/concepts/as/ for background information.
	BeforeCopy func(asFunc func(interface{}) bool) error
}

CopyOptions sets options for Copy.

type ListIterator

type ListIterator struct {
	// contains filtered or unexported fields
}

ListIterator iterates over List results.

func (*ListIterator) Next

func (i *ListIterator) Next(ctx context.Context) (*ListObject, error)

Next returns a *ListObject for the next blob. It returns (nil, io.EOF) if there are no more.

type ListObject

type ListObject struct {
	// Key is the key for this blob.
	Key string
	// ModTime is the time the blob was last modified.
	ModTime time.Time
	// Size is the size of the blob's content in bytes.
	Size int64
	// MD5 is an MD5 hash of the blob contents or nil if not available.
	MD5 []byte
	// IsDir indicates that this result represents a "directory" in the
	// hierarchical namespace, ending in ListOptions.Delimiter. Key can be
	// passed as ListOptions.Prefix to list items in the "directory".
	// Fields other than Key and IsDir will not be set if IsDir is true.
	IsDir bool
	// contains filtered or unexported fields
}

ListObject represents a single blob returned from List.

func (*ListObject) As

func (o *ListObject) As(i interface{}) bool

As converts i to driver-specific types. See https://github.com/kainoaseto/go-cloud/concepts/as/ for background information, the "As" examples in this package for examples, and the driver package documentation for the specific types supported for that driver.

Example
package main

import (
	"context"
	"io"
	"log"

	"cloud.google.com/go/storage"
	"github.com/kainoaseto/go-cloud/blob"

	_ "github.com/kainoaseto/go-cloud/blob/gcsblob"
	_ "github.com/kainoaseto/go-cloud/blob/s3blob"
)

func main() {
	// This example is specific to the gcsblob implementation; it demonstrates
	// access to the underlying cloud.google.com/go/storage.ObjectAttrs type.
	// The types exposed for As by gcsblob are documented in
	// https://godoc.org/github.com/kainoaseto/go-cloud/blob/gcsblob#hdr-As

	ctx := context.Background()

	b, err := blob.OpenBucket(ctx, "gs://my-bucket")
	if err != nil {
		log.Fatal(err)
	}
	defer b.Close()

	iter := b.List(nil)
	for {
		obj, err := iter.Next(ctx)
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		// Access storage.ObjectAttrs via oa here.
		var oa storage.ObjectAttrs
		if obj.As(&oa) {
			_ = oa.Owner
		}
	}
}
Output:

type ListOptions

type ListOptions struct {
	// Prefix indicates that only blobs with a key starting with this prefix
	// should be returned.
	Prefix string
	// Delimiter sets the delimiter used to define a hierarchical namespace,
	// like a filesystem with "directories". It is highly recommended that you
	// use "" or "/" as the Delimiter. Other values should work through this API,
	// but service UIs generally assume "/".
	//
	// An empty delimiter means that the bucket is treated as a single flat
	// namespace.
	//
	// A non-empty delimiter means that any result with the delimiter in its key
	// after Prefix is stripped will be returned with ListObject.IsDir = true,
	// ListObject.Key truncated after the delimiter, and zero values for other
	// ListObject fields. These results represent "directories". Multiple results
	// in a "directory" are returned as a single result.
	Delimiter string

	// BeforeList is a callback that will be called before each call to
	// the underlying service's list functionality.
	// asFunc converts its argument to driver-specific types.
	// See https://github.com/kainoaseto/go-cloud/concepts/as/ for background information.
	BeforeList func(asFunc func(interface{}) bool) error
}

ListOptions sets options for listing blobs via Bucket.List.

Example
package main

import (
	"context"
	"io"
	"log"

	"cloud.google.com/go/storage"
	"github.com/kainoaseto/go-cloud/blob"

	_ "github.com/kainoaseto/go-cloud/blob/gcsblob"
	_ "github.com/kainoaseto/go-cloud/blob/s3blob"
)

func main() {
	// This example is specific to the gcsblob implementation; it demonstrates
	// access to the underlying cloud.google.com/go/storage.Query type.
	// The types exposed for As by gcsblob are documented in
	// https://godoc.org/github.com/kainoaseto/go-cloud/blob/gcsblob#hdr-As

	ctx := context.Background()

	b, err := blob.OpenBucket(ctx, "gs://my-bucket")
	if err != nil {
		log.Fatal(err)
	}
	defer b.Close()

	beforeList := func(as func(interface{}) bool) error {
		// Access storage.Query via q here.
		var q *storage.Query
		if as(&q) {
			_ = q.Delimiter
		}
		return nil
	}

	iter := b.List(&blob.ListOptions{Prefix: "", Delimiter: "/", BeforeList: beforeList})
	for {
		obj, err := iter.Next(ctx)
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		_ = obj
	}
}
Output:

type Reader

type Reader struct {
	// contains filtered or unexported fields
}

Reader reads bytes from a blob. It implements io.ReadCloser, and must be closed after reads are finished.

func (*Reader) As

func (r *Reader) As(i interface{}) bool

As converts i to driver-specific types. See https://github.com/kainoaseto/go-cloud/concepts/as/ for background information, the "As" examples in this package for examples, and the driver package documentation for the specific types supported for that driver.

Example
package main

import (
	"context"
	"log"

	"cloud.google.com/go/storage"
	"github.com/kainoaseto/go-cloud/blob"

	_ "github.com/kainoaseto/go-cloud/blob/gcsblob"
	_ "github.com/kainoaseto/go-cloud/blob/s3blob"
)

func main() {
	// This example is specific to the gcsblob implementation; it demonstrates
	// access to the underlying cloud.google.com/go/storage.Reader type.
	// The types exposed for As by gcsblob are documented in
	// https://godoc.org/github.com/kainoaseto/go-cloud/blob/gcsblob#hdr-As

	ctx := context.Background()

	b, err := blob.OpenBucket(ctx, "gs://my-bucket")
	if err != nil {
		log.Fatal(err)
	}
	defer b.Close()

	r, err := b.NewReader(ctx, "gopher.png", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer r.Close()

	// Access storage.Reader via sr here.
	var sr *storage.Reader
	if r.As(&sr) {
		_ = sr.Attrs
	}
}
Output:

func (*Reader) Close

func (r *Reader) Close() error

Close implements io.Closer (https://golang.org/pkg/io/#Closer).

func (*Reader) ContentType

func (r *Reader) ContentType() string

ContentType returns the MIME type of the blob.

func (*Reader) ModTime

func (r *Reader) ModTime() time.Time

ModTime returns the time the blob was last modified.

func (*Reader) Read

func (r *Reader) Read(p []byte) (int, error)

Read implements io.Reader (https://golang.org/pkg/io/#Reader).

func (*Reader) Size

func (r *Reader) Size() int64

Size returns the size of the blob content in bytes.

func (*Reader) WriteTo

func (r *Reader) WriteTo(w io.Writer) (int64, error)

WriteTo reads from r and writes to w until there's no more data or an error occurs. The return value is the number of bytes written to w.

It implements the io.WriterTo interface.

type ReaderOptions

type ReaderOptions struct {
	// BeforeRead is a callback that will be called exactly once, before
	// any data is read (unless NewReader returns an error before then, in which
	// case it may not be called at all).
	//
	// asFunc converts its argument to driver-specific types.
	// See https://github.com/kainoaseto/go-cloud/concepts/as/ for background information.
	BeforeRead func(asFunc func(interface{}) bool) error
}

ReaderOptions sets options for NewReader and NewRangeReader.

type SignedURLOptions

type SignedURLOptions struct {
	// Expiry sets how long the returned URL is valid for.
	// Defaults to DefaultSignedURLExpiry.
	Expiry time.Duration

	// Method is the HTTP method that can be used on the URL; one of "GET", "PUT",
	// or "DELETE". Defaults to "GET".
	Method string

	// ContentType specifies the Content-Type HTTP header the user agent is
	// permitted to use in the PUT request. It must match exactly. See
	// EnforceAbsentContentType for behavior when ContentType is the empty string.
	// If a bucket does not implement this verification, then it returns an
	// Unimplemented error.
	//
	// Must be empty for non-PUT requests.
	ContentType string

	// If EnforceAbsentContentType is true and ContentType is the empty string,
	// then PUTing to the signed URL will fail if the Content-Type header is
	// present. Not all buckets support this: ones that do not will return an
	// Unimplemented error.
	//
	// If EnforceAbsentContentType is false and ContentType is the empty string,
	// then PUTing without a Content-Type header will succeed, but it is
	// implementation-specific whether providing a Content-Type header will fail.
	//
	// Must be false for non-PUT requests.
	EnforceAbsentContentType bool
}

SignedURLOptions sets options for SignedURL.

type URLMux

type URLMux struct {
	// contains filtered or unexported fields
}

URLMux is a URL opener multiplexer. It matches the scheme of the URLs against a set of registered schemes and calls the opener that matches the URL's scheme. See https://github.com/kainoaseto/go-cloud/concepts/urls/ for more information.

The zero value is a multiplexer with no registered schemes.

func DefaultURLMux

func DefaultURLMux() *URLMux

DefaultURLMux returns the URLMux used by OpenBucket.

Driver packages can use this to register their BucketURLOpener on the mux.
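
A hedged sketch of how a driver package might register an opener in its init function; the URLOpener type and the "mydriver" scheme below are illustrative only, not part of this package:

package main

import (
	"context"
	"net/url"

	"github.com/kainoaseto/go-cloud/blob"
)

// URLOpener is a hypothetical driver-side opener; the type name and the
// "mydriver" scheme are illustrative, not part of this package.
type URLOpener struct{}

// OpenBucketURL satisfies blob.BucketURLOpener. A real driver would build
// and return its bucket from the URL here.
func (o *URLOpener) OpenBucketURL(ctx context.Context, u *url.URL) (*blob.Bucket, error) {
	return nil, nil
}

func init() {
	// After registration, blob.OpenBucket can handle "mydriver://..." URLs.
	blob.DefaultURLMux().RegisterBucket("mydriver", &URLOpener{})
}

func main() {}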

func (*URLMux) BucketSchemes

func (mux *URLMux) BucketSchemes() []string

BucketSchemes returns a sorted slice of the registered Bucket schemes.

func (*URLMux) OpenBucket

func (mux *URLMux) OpenBucket(ctx context.Context, urlstr string) (*Bucket, error)

OpenBucket calls OpenBucketURL with the URL parsed from urlstr. OpenBucket is safe to call from multiple goroutines.

func (*URLMux) OpenBucketURL

func (mux *URLMux) OpenBucketURL(ctx context.Context, u *url.URL) (*Bucket, error)

OpenBucketURL dispatches the URL to the opener that is registered with the URL's scheme. OpenBucketURL is safe to call from multiple goroutines.

func (*URLMux) RegisterBucket

func (mux *URLMux) RegisterBucket(scheme string, opener BucketURLOpener)

RegisterBucket registers the opener with the given scheme. If an opener already exists for the scheme, RegisterBucket panics.

func (*URLMux) ValidBucketScheme

func (mux *URLMux) ValidBucketScheme(scheme string) bool

ValidBucketScheme returns true iff scheme has been registered for Buckets.

type Writer

type Writer struct {
	// contains filtered or unexported fields
}

Writer writes bytes to a blob.

It implements io.WriteCloser (https://golang.org/pkg/io/#WriteCloser), and must be closed after all writes are done.

func (*Writer) Close

func (w *Writer) Close() (err error)

Close closes the blob writer. The write operation is not guaranteed to have succeeded until Close returns with no error. Close may return an error if the context provided to create the Writer is canceled or reaches its deadline.

func (*Writer) ReadFrom

func (w *Writer) ReadFrom(r io.Reader) (int64, error)

ReadFrom reads from r and writes to w until EOF or error. The return value is the number of bytes read from r.

It implements the io.ReaderFrom interface.
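
Because Reader implements io.WriterTo and Writer implements io.ReaderFrom, io.Copy can delegate to those methods instead of allocating its own buffer. A compile-style sketch with placeholder buckets:

package main

import (
	"context"
	"io"
	"log"

	"github.com/kainoaseto/go-cloud/blob"
)

func main() {
	ctx := context.Background()
	var srcBucket, dstBucket *blob.Bucket

	r, err := srcBucket.NewReader(ctx, "foo.txt", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer r.Close()

	w, err := dstBucket.NewWriter(ctx, "foo.txt", nil)
	if err != nil {
		log.Fatal(err)
	}
	// io.Copy delegates to r.WriteTo (or w.ReadFrom) when available.
	if _, err := io.Copy(w, r); err != nil {
		w.Close()
		log.Fatal(err)
	}
	// The copy is only guaranteed to have succeeded if Close returns nil.
	if err := w.Close(); err != nil {
		log.Fatal(err)
	}
}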

func (*Writer) Write

func (w *Writer) Write(p []byte) (int, error)

Write implements the io.Writer interface (https://golang.org/pkg/io/#Writer).

Writes may happen asynchronously, so the returned error can be nil even if the actual write eventually fails. The write is only guaranteed to have succeeded if Close returns no error.

type WriterOptions

type WriterOptions struct {
	// BufferSize changes the default size in bytes of the chunks that
	// Writer will upload in a single request; larger blobs will be split into
	// multiple requests.
	//
	// This option may be ignored by some drivers.
	//
	// If 0, the driver will choose a reasonable default.
	//
	// If the Writer is used to do many small writes concurrently, using a
	// smaller BufferSize may reduce memory usage.
	BufferSize int

	// CacheControl specifies caching attributes that services may use
	// when serving the blob.
	// https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Cache-Control
	CacheControl string

	// ContentDisposition specifies whether the blob content is expected to be
	// displayed inline or as an attachment.
	// https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Disposition
	ContentDisposition string

	// ContentEncoding specifies the encoding used for the blob's content, if any.
	// https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Encoding
	ContentEncoding string

	// ContentLanguage specifies the language used in the blob's content, if any.
	// https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Language
	ContentLanguage string

	// ContentType specifies the MIME type of the blob being written. If not set,
	// it will be inferred from the content using the algorithm described at
	// http://mimesniff.spec.whatwg.org/.
	// https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Type
	ContentType string

	// ContentMD5 is used as a message integrity check.
	// If len(ContentMD5) > 0, the MD5 hash of the bytes written must match
	// ContentMD5, or Close will return an error without completing the write.
	// https://tools.ietf.org/html/rfc1864
	ContentMD5 []byte

	// Metadata holds key/value strings to be associated with the blob, or nil.
	// Keys may not be empty, and are lowercased before being written.
	// Duplicate case-insensitive keys (e.g., "foo" and "FOO") will result in
	// an error.
	Metadata map[string]string

	// BeforeWrite is a callback that will be called exactly once, before
	// any data is written (unless NewWriter returns an error, in which case
	// it will not be called at all). Note that this is not necessarily during
	// or after the first Write call, as drivers may buffer bytes before
	// sending an upload request.
	//
	// asFunc converts its argument to driver-specific types.
	// See https://github.com/kainoaseto/go-cloud/concepts/as/ for background information.
	BeforeWrite func(asFunc func(interface{}) bool) error
}

WriterOptions sets options for NewWriter.

Example
package main

import (
	"context"
	"fmt"
	"log"

	"cloud.google.com/go/storage"
	"github.com/kainoaseto/go-cloud/blob"

	_ "github.com/kainoaseto/go-cloud/blob/gcsblob"
	_ "github.com/kainoaseto/go-cloud/blob/s3blob"
)

func main() {
	// This example is specific to the gcsblob implementation; it demonstrates
	// access to the underlying cloud.google.com/go/storage.Writer type.
	// The types exposed for As by gcsblob are documented in
	// https://godoc.org/github.com/kainoaseto/go-cloud/blob/gcsblob#hdr-As

	ctx := context.Background()

	b, err := blob.OpenBucket(ctx, "gs://my-bucket")
	if err != nil {
		log.Fatal(err)
	}
	defer b.Close()

	beforeWrite := func(as func(interface{}) bool) error {
		var sw *storage.Writer
		if as(&sw) {
			fmt.Println(sw.ChunkSize)
		}
		return nil
	}

	options := blob.WriterOptions{BeforeWrite: beforeWrite}
	if err := b.WriteAll(ctx, "newfile.txt", []byte("hello\n"), &options); err != nil {
		log.Fatal(err)
	}
}
Output:

Directories

Path	Synopsis
azureblob	Package azureblob provides a blob implementation that uses Azure Storage’s BlockBlob.
driver	Package driver defines interfaces to be implemented by blob drivers, which will be used by the blob package to interact with the underlying services.
drivertest	Package drivertest provides a conformance test for implementations of driver.
fileblob	Package fileblob provides a blob implementation that uses the filesystem.
gcsblob	Package gcsblob provides a blob implementation that uses GCS.
memblob	Package memblob provides an in-memory blob implementation.
s3blob	Package s3blob provides a blob implementation that uses S3.
