cloud.google.com/go/bigquery/storage/apiv1beta1

package storage

import "cloud.google.com/go/bigquery/storage/apiv1beta1"

Package storage is an auto-generated package for the BigQuery Storage API.

NOTE: This package is in beta. It is not stable, and may be subject to change.

Use of Context

The ctx passed to NewBigQueryStorageClient is used for authentication requests and for creating the underlying connection, but is not used for subsequent calls. Individual methods on the client use the ctx given to them.

To close the open connection, use the Close() method.

For information about setting deadlines, reusing contexts, and more, please visit godoc.org/cloud.google.com/go.
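
A minimal lifecycle sketch of the pattern described above, assuming the context and time packages are imported: create the client once, close it with defer, and give each call its own context.

ctx := context.Background()
c, err := storage.NewBigQueryStorageClient(ctx)
if err != nil {
    // TODO: Handle error.
}
defer c.Close()

// Give an individual call its own deadline; the ctx used at
// construction time is not consulted for this call.
callCtx, cancel := context.WithTimeout(ctx, 30*time.Second)
defer cancel()
_ = callCtx // TODO: Pass callCtx to individual client methods.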

Package Files

big_query_storage_client.go doc.go

func DefaultAuthScopes

func DefaultAuthScopes() []string

DefaultAuthScopes reports the default set of authentication scopes to use with this package.
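
For illustration, the returned scopes can be listed directly (assuming the fmt package is imported); the set includes BigQuery OAuth2 scopes such as https://www.googleapis.com/auth/bigquery.

for _, scope := range storage.DefaultAuthScopes() {
    fmt.Println(scope)
}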

type BigQueryStorageCallOptions

type BigQueryStorageCallOptions struct {
    CreateReadSession             []gax.CallOption
    ReadRows                      []gax.CallOption
    BatchCreateReadSessionStreams []gax.CallOption
    FinalizeStream                []gax.CallOption
    SplitReadStream               []gax.CallOption
}

BigQueryStorageCallOptions contains the retry settings for each method of BigQueryStorageClient.
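
Retry settings can also be overridden for a single call by passing gax options to the method. A sketch, assuming c, ctx, and req are set up as in the CreateReadSession example below, and that the gax (github.com/googleapis/gax-go/v2), codes (google.golang.org/grpc/codes), and time packages are imported; the backoff values are illustrative, not the package defaults.

opt := gax.WithRetry(func() gax.Retryer {
    // Retry only on Unavailable, with exponential backoff.
    return gax.OnCodes([]codes.Code{codes.Unavailable}, gax.Backoff{
        Initial:    100 * time.Millisecond,
        Max:        30 * time.Second,
        Multiplier: 1.3,
    })
})
resp, err := c.CreateReadSession(ctx, req, opt)
if err != nil {
    // TODO: Handle error.
}
_ = resp

Options passed at call time are appended to the corresponding entry in CallOptions and generally take precedence over it.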

type BigQueryStorageClient

type BigQueryStorageClient struct {

    // The call options for this service.
    CallOptions *BigQueryStorageCallOptions
    // contains filtered or unexported fields
}

BigQueryStorageClient is a client for interacting with the BigQuery Storage API.

Methods, except Close, may be called concurrently. However, fields must not be modified concurrently with method calls.

func NewBigQueryStorageClient

func NewBigQueryStorageClient(ctx context.Context, opts ...option.ClientOption) (*BigQueryStorageClient, error)

NewBigQueryStorageClient creates a new BigQuery Storage client.

The BigQuery Storage API can be used to read data stored in BigQuery.

Code:

ctx := context.Background()
c, err := storage.NewBigQueryStorageClient(ctx)
if err != nil {
    // TODO: Handle error.
}
// TODO: Use client.
_ = c

func (*BigQueryStorageClient) BatchCreateReadSessionStreams

func (c *BigQueryStorageClient) BatchCreateReadSessionStreams(ctx context.Context, req *storagepb.BatchCreateReadSessionStreamsRequest, opts ...gax.CallOption) (*storagepb.BatchCreateReadSessionStreamsResponse, error)

BatchCreateReadSessionStreams creates additional streams for a ReadSession. This API can be used to dynamically adjust the parallelism of a batch processing task upwards by adding additional workers.

Code:

ctx := context.Background()
c, err := storage.NewBigQueryStorageClient(ctx)
if err != nil {
    // TODO: Handle error.
}

req := &storagepb.BatchCreateReadSessionStreamsRequest{
    // TODO: Fill request struct fields.
}
resp, err := c.BatchCreateReadSessionStreams(ctx, req)
if err != nil {
    // TODO: Handle error.
}
// TODO: Use resp.
_ = resp

func (*BigQueryStorageClient) Close

func (c *BigQueryStorageClient) Close() error

Close closes the connection to the API service. The user should invoke this when the client is no longer required.

func (*BigQueryStorageClient) Connection

func (c *BigQueryStorageClient) Connection() *grpc.ClientConn

Connection returns the client's connection to the API service.

func (*BigQueryStorageClient) CreateReadSession

func (c *BigQueryStorageClient) CreateReadSession(ctx context.Context, req *storagepb.CreateReadSessionRequest, opts ...gax.CallOption) (*storagepb.ReadSession, error)

CreateReadSession creates a new read session. A read session divides the contents of a BigQuery table into one or more streams, which can then be used to read data from the table. The read session also specifies properties of the data to be read, such as a list of columns or a push-down filter describing the rows to be returned.

A particular row can be read by at most one stream. When the caller has reached the end of each stream in the session, then all the data in the table has been read.

Read sessions automatically expire 24 hours after they are created and do not require manual clean-up by the caller.

Code:

ctx := context.Background()
c, err := storage.NewBigQueryStorageClient(ctx)
if err != nil {
    // TODO: Handle error.
}

req := &storagepb.CreateReadSessionRequest{
    // TODO: Fill request struct fields.
}
resp, err := c.CreateReadSession(ctx, req)
if err != nil {
    // TODO: Handle error.
}
// TODO: Use resp.
_ = resp

func (*BigQueryStorageClient) FinalizeStream

func (c *BigQueryStorageClient) FinalizeStream(ctx context.Context, req *storagepb.FinalizeStreamRequest, opts ...gax.CallOption) error

FinalizeStream triggers the graceful termination of a single stream in a ReadSession. This API can be used to dynamically adjust the parallelism of a batch processing task downwards without losing data.

This API does not delete the stream -- it remains visible in the ReadSession, and any data processed by the stream is not released to other streams. However, no additional data will be assigned to the stream once this call completes. Callers must continue reading data on the stream until the end of the stream is reached so that data which has already been assigned to the stream will be processed.

This method will return an error if there are no other live streams in the Session, or if SplitReadStream() has been called on the given Stream.

Code:

ctx := context.Background()
c, err := storage.NewBigQueryStorageClient(ctx)
if err != nil {
    // TODO: Handle error.
}

req := &storagepb.FinalizeStreamRequest{
    // TODO: Fill request struct fields.
}
err = c.FinalizeStream(ctx, req)
if err != nil {
    // TODO: Handle error.
}
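
The drain requirement described above implies a specific shutdown order: finalize first, then keep reading the already-open ReadRows stream until the server ends it. A sketch, assuming stream is an open BigQueryStorage_ReadRowsClient for the finalized stream and that the io package is imported.

// After FinalizeStream succeeds, no new rows will be assigned,
// but rows already assigned must still be read to completion.
for {
    resp, err := stream.Recv()
    if err == io.EOF {
        break
    }
    if err != nil {
        // TODO: Handle error.
    }
    _ = resp // TODO: Process the remaining rows.
}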

func (*BigQueryStorageClient) ReadRows

func (c *BigQueryStorageClient) ReadRows(ctx context.Context, req *storagepb.ReadRowsRequest, opts ...gax.CallOption) (storagepb.BigQueryStorage_ReadRowsClient, error)

ReadRows reads rows from the table in the format prescribed by the read session. Each response contains one or more table rows, up to a maximum of 10 MiB per response; read requests which attempt to read individual rows larger than this will fail.

Each request also returns a set of stream statistics reflecting the estimated total number of rows in the read stream. This number is computed based on the total table size and the number of active streams in the read session, and may change as other streams continue to read data.

Code:

ctx := context.Background()
c, err := storage.NewBigQueryStorageClient(ctx)
if err != nil {
    // TODO: Handle error.
}

req := &storagepb.ReadRowsRequest{
    // TODO: Fill request struct fields.
}
stream, err := c.ReadRows(ctx, req)
if err != nil {
    // TODO: Handle error.
}
for {
    resp, err := stream.Recv()
    if err == io.EOF {
        break
    }
    if err != nil {
        // TODO: Handle error.
    }
    // TODO: Use resp.
    _ = resp
}
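
An end-to-end sketch tying the calls together: create a session, then read each returned stream in its own goroutine, which is safe given the concurrency guarantees noted above. The request field names (TableReference, Parent, RequestedStreams, ReadPosition) are written from the v1beta1 protos and should be checked against the generated storagepb package; the project, dataset, and table values are hypothetical, and the io and sync packages are assumed to be imported.

session, err := c.CreateReadSession(ctx, &storagepb.CreateReadSessionRequest{
    TableReference: &storagepb.TableReference{
        ProjectId: "my-project", // hypothetical
        DatasetId: "my_dataset", // hypothetical
        TableId:   "my_table",   // hypothetical
    },
    Parent:           "projects/my-project",
    RequestedStreams: 2,
})
if err != nil {
    // TODO: Handle error.
}

var wg sync.WaitGroup
for _, s := range session.GetStreams() {
    wg.Add(1)
    go func(s *storagepb.Stream) {
        defer wg.Done()
        rows, err := c.ReadRows(ctx, &storagepb.ReadRowsRequest{
            ReadPosition: &storagepb.StreamPosition{Stream: s},
        })
        if err != nil {
            return // TODO: Handle error.
        }
        for {
            resp, err := rows.Recv()
            if err == io.EOF {
                break
            }
            if err != nil {
                return // TODO: Handle error.
            }
            _ = resp // TODO: Decode the serialized rows (Avro by default).
        }
    }(s)
}
wg.Wait()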

func (*BigQueryStorageClient) SplitReadStream

func (c *BigQueryStorageClient) SplitReadStream(ctx context.Context, req *storagepb.SplitReadStreamRequest, opts ...gax.CallOption) (*storagepb.SplitReadStreamResponse, error)

SplitReadStream splits a given read stream into two Streams. These streams are referred to as the primary and the residual of the split. The original stream can still be read from in the same manner as before. Both of the returned streams can also be read from, and the total rows returned by both child streams will be the same as the rows read from the original stream.

Moreover, the two child streams will be allocated back to back in the original Stream. Concretely, it is guaranteed that for streams Original, Primary, and Residual, that Original[0-j] = Primary[0-j] and Original[j-n] = Residual[0-m] once the streams have been read to completion.

This method is guaranteed to be idempotent.

Code:

ctx := context.Background()
c, err := storage.NewBigQueryStorageClient(ctx)
if err != nil {
    // TODO: Handle error.
}

req := &storagepb.SplitReadStreamRequest{
    // TODO: Fill request struct fields.
}
resp, err := c.SplitReadStream(ctx, req)
if err != nil {
    // TODO: Handle error.
}
// TODO: Use resp.
_ = resp
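
As an illustrative follow-up, the resp from the example above might be consumed like this; the PrimaryStream and RemainderStream accessors are assumptions written from the v1beta1 response proto and should be verified against storagepb.

// Hand each half of the split to its own reader, e.g. keep the
// primary on the current worker and give the remainder to a new one.
primary := resp.GetPrimaryStream()     // assumed accessor
remainder := resp.GetRemainderStream() // assumed accessor
_, _ = primary, remainder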

Package storage imports 15 packages and is imported by 1 package. Updated 2019-08-23.