go-cloud: github.com/google/go-cloud/blob

package blob

import "github.com/google/go-cloud/blob"

Package blob provides an easy way to interact with blob objects within a bucket. It uses the standard io packages to handle reads and writes.

Package Files

blob.go

func IsNotExist

func IsNotExist(err error) bool

IsNotExist reports whether an error is a driver.Error with kind NotFound.
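
For example, a caller can distinguish a missing object from other failures when reading. A minimal sketch, assuming a *Bucket named bucket and a context ctx obtained as in the examples below ("no-such-key.txt" is a hypothetical key):

r, err := bucket.NewReader(ctx, "no-such-key.txt")
switch {
case blob.IsNotExist(err):
    // The object is missing; treat it as a normal, expected condition.
    fmt.Println("object does not exist")
    return
case err != nil:
    log.Fatal(err)
}
defer r.Close()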

type Bucket

type Bucket struct {
    // contains filtered or unexported fields
}

Bucket manages the underlying blob service and provides read, write, and delete operations on objects within it.

func NewBucket

func NewBucket(b driver.Bucket) *Bucket

NewBucket creates a new Bucket for a group of objects served by a blob service.
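
In practice, application code usually obtains a *Bucket from a provider package (such as fileblob, used in the examples below) rather than constructing a driver.Bucket itself. A minimal sketch, where "/tmp/myblobs" is an arbitrary local directory:

// fileblob.NewBucket wraps a filesystem-backed driver in a *blob.Bucket.
bucket, err := fileblob.NewBucket("/tmp/myblobs")
if err != nil {
    log.Fatal(err)
}
_ = bucket // use bucket for reads, writes, and deletes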

func (*Bucket) Delete

func (b *Bucket) Delete(ctx context.Context, key string) error

Delete deletes the object associated with key. It returns an error if that object does not exist, which can be checked by calling IsNotExist.
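
A minimal sketch of deleting an object while tolerating a key that is already gone, assuming bucket and ctx as in the examples below:

if err := bucket.Delete(ctx, "foo.txt"); err != nil {
    if blob.IsNotExist(err) {
        // The object was already deleted (or never existed); nothing to do.
    } else {
        log.Fatal(err)
    }
}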

func (*Bucket) NewRangeReader

func (b *Bucket) NewRangeReader(ctx context.Context, key string, offset, length int64) (*Reader, error)

NewRangeReader returns a Reader that reads part of an object, reading at most length bytes starting at the given offset. If length is 0, it will read only the metadata. If length is negative, it will read until the end of the object. It returns an error if that object does not exist, which can be checked by calling IsNotExist.

The caller must call Close on the returned Reader when done reading.

Code:

// Connect to a bucket when your program starts up.
// This example uses the file-based implementation.
dir, cleanup := newTempDir()
defer cleanup()
// Write a file to read using the bucket.
err := ioutil.WriteFile(filepath.Join(dir, "foo.txt"), []byte("Hello, World!\n"), 0666)
if err != nil {
    log.Fatal(err)
}
// Create the file-based bucket.
bucket, err := fileblob.NewBucket(dir)
if err != nil {
    log.Fatal(err)
}

// Open a reader using the blob's key, reading at most 4 bytes starting at offset 1.
ctx := context.Background()
r, err := bucket.NewRangeReader(ctx, "foo.txt", 1, 4)
if err != nil {
    log.Fatal(err)
}
defer r.Close()
// The blob reader implements io.Reader, so we can use any function that
// accepts an io.Reader.
if _, err := io.Copy(os.Stdout, r); err != nil {
    log.Fatal(err)
}

Output:

ello

func (*Bucket) NewReader

func (b *Bucket) NewReader(ctx context.Context, key string) (*Reader, error)

NewReader returns a Reader to read from the object associated with key. It returns an error if that object does not exist, which can be checked by calling IsNotExist.

The caller must call Close on the returned Reader when done reading.

Code:

// Connect to a bucket when your program starts up.
// This example uses the file-based implementation.
dir, cleanup := newTempDir()
defer cleanup()
// Write a file to read using the bucket.
err := ioutil.WriteFile(filepath.Join(dir, "foo.txt"), []byte("Hello, World!\n"), 0666)
if err != nil {
    log.Fatal(err)
}
// Create the file-based bucket.
bucket, err := fileblob.NewBucket(dir)
if err != nil {
    log.Fatal(err)
}

// Open a reader using the blob's key.
ctx := context.Background()
r, err := bucket.NewReader(ctx, "foo.txt")
if err != nil {
    log.Fatal(err)
}
defer r.Close()
// The blob reader implements io.Reader, so we can use any function that
// accepts an io.Reader.
if _, err := io.Copy(os.Stdout, r); err != nil {
    log.Fatal(err)
}

Output:

Hello, World!

func (*Bucket) NewWriter

func (b *Bucket) NewWriter(ctx context.Context, key string, opt *WriterOptions) (*Writer, error)

NewWriter returns a Writer that writes to an object associated with key.

A new object will be created unless an object with this key already exists; in that case, the previous object with the same key is replaced. The object is not guaranteed to be available until Close has been called.

The call may store ctx for later use in Write and/or Close. The ctx must remain valid (not canceled) until the returned Writer is closed.

The caller must call Close on the returned Writer when done writing.

Code:

// Connect to a bucket when your program starts up.
// This example uses the file-based implementation.
dir, cleanup := newTempDir()
defer cleanup()
bucket, err := fileblob.NewBucket(dir)
if err != nil {
    log.Fatal(err)
}

// Open a writer using the key "foo.txt" and the default options.
ctx := context.Background()
// fileblob doesn't support custom content-type yet, see
// https://github.com/google/go-cloud/issues/111.
w, err := bucket.NewWriter(ctx, "foo.txt", &blob.WriterOptions{
    ContentType: "application/octet-stream",
})
if err != nil {
    log.Fatal(err)
}
// The blob writer implements io.Writer, so we can use any function that
// accepts an io.Writer. A writer must always be closed.
_, printErr := fmt.Fprintln(w, "Hello, World!")
closeErr := w.Close()
if printErr != nil {
    log.Fatal(printErr)
}
if closeErr != nil {
    log.Fatal(closeErr)
}
// Copy the written blob to stdout.
r, err := bucket.NewReader(ctx, "foo.txt")
if err != nil {
    log.Fatal(err)
}
defer r.Close()
if _, err := io.Copy(os.Stdout, r); err != nil {
    log.Fatal(err)
}

Output:

Hello, World!

type Reader

type Reader struct {
    // contains filtered or unexported fields
}

Reader implements io.ReadCloser to read a blob. It must be closed after reads are finished.

func (*Reader) Close

func (r *Reader) Close() error

Close implements io.ReadCloser to close this reader.

func (*Reader) ContentType

func (r *Reader) ContentType() string

ContentType returns the MIME type of the blob object.

func (*Reader) ModTime

func (r *Reader) ModTime() time.Time

ModTime returns the modification time of the blob object. This is optional and will be the zero time.Time value if unknown.

func (*Reader) Read

func (r *Reader) Read(p []byte) (int, error)

Read implements io.ReadCloser to read from this reader.

func (*Reader) Size

func (r *Reader) Size() int64

Size returns the content size of the blob object.
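
The accessors above (ContentType, ModTime, and Size) can be used to inspect a blob's metadata. A minimal sketch, assuming bucket and ctx as in the earlier examples and a blob stored under "foo.txt":

r, err := bucket.NewReader(ctx, "foo.txt")
if err != nil {
    log.Fatal(err)
}
defer r.Close()
// Report the blob's metadata before reading its content.
fmt.Printf("content type: %s\n", r.ContentType())
fmt.Printf("size: %d bytes\n", r.Size())
if t := r.ModTime(); !t.IsZero() {
    fmt.Printf("modified: %s\n", t)
}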

type Writer

type Writer struct {
    // contains filtered or unexported fields
}

Writer implements io.WriteCloser to write to a blob. It must be closed after all writes are done.

func (*Writer) Close

func (w *Writer) Close() error

Close flushes any buffered data and completes the write. It is the caller's responsibility to call Close after finishing the write and to handle any error it returns.

func (*Writer) Write

func (w *Writer) Write(p []byte) (n int, err error)

Write implements the io.Writer interface.

The writes happen asynchronously, which means the returned error can be nil even if the actual write fails. Use the error returned from Close to check and handle errors.
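
A minimal sketch of this pattern, assuming w is a *Writer returned by NewWriter:

if _, err := w.Write([]byte("payload")); err != nil {
    // An error here is already final, but a nil error is not a guarantee of success.
    log.Fatal(err)
}
// Close reports whether the data actually reached the blob service.
if err := w.Close(); err != nil {
    log.Fatal(err)
}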

type WriterOptions

type WriterOptions struct {
    // BufferSize changes the default size, in bytes, of the maximum part the
    // Writer can write in a single request; larger objects will be split into
    // multiple requests.
    //
    // How this option is honored varies with the underlying blob service. If the
    // value is zero, a reasonable default is used. If the value is negative,
    // buffering is disabled where the service supports it (the Writer writes the
    // object as a whole); otherwise the value is reset to the default. It may be
    // a no-op when the service does not support it at all.
    //
    // If the Writer is used to write small objects concurrently, set the buffer
    // size to a smaller value to avoid high memory usage.
    BufferSize int

    // ContentType specifies the MIME type of the object being written. If not set,
    // then it will be inferred from the content using the algorithm described at
    // http://mimesniff.spec.whatwg.org/
    ContentType string
}

WriterOptions controls Writer behaviors.
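
For example, a caller uploading a large object might tune both fields. A minimal sketch; the 8 MiB buffer size and the key "big.bin" are arbitrary illustrations, and bucket and ctx are assumed as in the earlier examples:

opts := &blob.WriterOptions{
    // Ask the Writer to upload in parts of at most 8 MiB per request,
    // where the underlying service supports it.
    BufferSize:  8 << 20,
    ContentType: "application/octet-stream",
}
w, err := bucket.NewWriter(ctx, "big.bin", opts)
if err != nil {
    log.Fatal(err)
}
// ... write data to w ...
if err := w.Close(); err != nil {
    log.Fatal(err)
}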

Directories

Path        Synopsis
driver      Package driver defines a set of interfaces that the blob package uses to interact with the underlying blob services.
fileblob    Package fileblob provides a bucket implementation that operates on the local filesystem.
gcsblob     Package gcsblob provides an implementation of the blob API on GCS.
s3blob      Package s3blob provides an implementation of the blob API on S3.

Package blob imports 7 packages and is imported by 6 packages. Updated 2018-08-14.