bqwriter


A Go package to write data into Google BigQuery concurrently and at high throughput. By default the InsertAll() API is used (REST API under the hood), but you can configure it to use the Storage Write API (gRPC under the hood) as well.

The InsertAll API is easier to configure and works pretty much out of the box. It is nevertheless recommended to use the Storage API, as it is faster and comes at a lower cost. The latter does however require a bit more configuration on your side, including a Proto schema file. See the Storage example below on how to do this.

A third API is available as well, and it works a bit differently from the other two. A streamer using the batch API expects the data to be written as an io.Reader from which encoded rows can be read, so they can be batch-loaded into BigQuery. Its purpose therefore differs from the other two clients: it is meant for environments where a lot of data has to be loaded into BigQuery at once, rather than as part of a constant high-speed, high-throughput stream.

See https://cloud.google.com/bigquery/docs/batch-loading-data for more information about the Batch API, and see the Batch example below on how to create and use the Batch-driven Streamer.

Note that (gcloud) Authorization is implemented in the most basic manner. Write Error Handling is currently not possible either, making the Streamer a fire-and-forget BQ writer. Please read the sections on these two topics for more information; the Contributing section explains how you can actively help to get this supported if desired.

Install

import "github.com/OTA-Insight/bqwriter"

To install the package, do not clone the repo. Instead:

  1. Change to your project directory:

     cd /path/to/my/project

  2. Get the package using the official Go tooling, which will also add it to your go.mod file for you:

     go get github.com/OTA-Insight/bqwriter

NOTE: This package is under development, and may occasionally make backwards-incompatible changes.

Go Versions Supported

We currently support Go versions 1.17 and newer.

Examples

In this section you'll find some quick examples to help you get started, complementing the official documentation at https://pkg.go.dev/github.com/OTA-Insight/bqwriter.

The Streamer client is safe for concurrent use and can be used from as many goroutines as you wish; no external locking or other concurrency mechanism is required on your side. To keep these examples as small as possible they are written in a linear, synchronous fashion, but you are encouraged to use the Streamer client from multiple goroutines in order to write rows at a sufficiently high throughput.

Note that for the Batch-driven Streamer it is not unusual to run it with a single worker routine. The batch delay can also be disabled for it, as no flushing is required anyhow.

Please also note that errors are not handled gracefully in these examples, to keep them small and narrow in scope.

For extra reference you can also find some more examples, albeit less pragmatic ones, in the ./internal/test/integration directory.

Basic InsertAll Streamer
import (
    "context"
    "time"

    "github.com/OTA-Insight/bqwriter"
)

// TODO: use more specific context
ctx := context.Background()

// create a BQ (stream) writer thread-safe client,
bqWriter, err := bqwriter.NewStreamer(
    ctx,
    "my-gcloud-project",
    "my-bq-dataset",
    "my-bq-table",
    nil, // use default config
)
if err != nil {
    // TODO: handle error gracefully
    panic(err)
}
// do not forget to close, to close all background resources opened
// when creating the BQ (stream) writer client
defer bqWriter.Close()

// You can now start writing data to your BQ table
bqWriter.Write(&myRow{Timestamp: time.Now().UTC(), Username: "test"})
// NOTE: only write one row at a time using `(*Streamer).Write`,
// multiple rows can be written using one `Write` call per row.

You build a Streamer client with an optional StreamerConfig, as you can see in the above example. The entire config is optional and has sane defaults, but note that there is a lot you can configure in it prior to actually building the streamer. Please consult https://pkg.go.dev/github.com/OTA-Insight/bqwriter#StreamerConfig for more information.

The myRow structure used in this example is one way to pass the information of a single row to the (*Streamer).Write method. This structure implements the ValueSaver interface. An example of this:

import (
	"time"

	"cloud.google.com/go/bigquery"
	"cloud.google.com/go/civil"
)

type myRow struct {
	Timestamp time.Time
	Username  string
}

func (mr *myRow) Save() (row map[string]bigquery.Value, insertID string, err error) {
	return map[string]bigquery.Value{
		"timestamp": civil.DateTimeOf(mr.Timestamp),
		"username":  mr.Username,
	}, "", nil
}

You can also pass in a struct directly, in which case the schema is inferred automatically from its exported fields. This flexibility comes with a runtime cost, as reflection has to be applied.

A raw struct can also be stored using the StructSaver type. In that case you get the benefit of being able to write any kind of struct while also passing in the schema to be used, so it doesn't have to be inferred, and you keep exact control over each field on top of that.

If you have the choice, however, we do recommend implementing ValueSaver for your row struct, as this gives you the best of both worlds, while also giving you the easy built-in ability to define a unique insertID per row, which helps prevent potential duplicates that can otherwise occur when rows that failed temporarily are retried.
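For illustration, here is a minimal sketch (reusing the myRow struct and bqWriter client from the example above; the schema fields are illustrative) of wrapping a row in a bigquery.StructSaver with an explicitly defined schema, so the schema does not have to be inferred on every write:

import (
    "time"

    "cloud.google.com/go/bigquery"
)

// an explicit schema matching the myRow struct
var myRowSchema = bigquery.Schema{
    {Name: "Timestamp", Type: bigquery.TimestampFieldType},
    {Name: "Username", Type: bigquery.StringFieldType},
}

// wrap the row in a StructSaver for exact control over the schema (and, optionally, the insertID)
bqWriter.Write(&bigquery.StructSaver{
    Schema: myRowSchema,
    Struct: &myRow{Timestamp: time.Now().UTC(), Username: "test"},
})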

Custom InsertAll Streamer

Using the same myRow structure from the previous example, here is how we create a Streamer client with a more custom configuration:

import (
    "context"
    "time"

    "github.com/OTA-Insight/bqwriter"
)

// TODO: use more specific context
ctx := context.Background()

// create a BQ (stream) writer thread-safe client,
bqWriter, err := bqwriter.NewStreamer(
    ctx,
    "my-gcloud-project",
    "my-bq-dataset",
    "my-bq-table",
    &bqwriter.StreamerConfig{
        // use 5 background worker threads
        WorkerCount: 5,
        // make writes fail for invalid rows and unknown values,
        // rather than silently skipping the faulty rows/values (the default behaviour),
        // in which case these errors are only logged using the configured logger
        // and the faulty (batched) rows are dropped
        InsertAllClient: &bqwriter.InsertAllClientConfig{
            FailOnInvalidRows:    true,
            FailForUnknownValues: true,
        },
    },
)
if err != nil {
    // TODO: handle error gracefully
    panic(err)
}
// do not forget to close, to close all background resources opened
// when creating the BQ (stream) writer client
defer bqWriter.Close()

// You can now start writing data to your BQ table
bqWriter.Write(&myRow{Timestamp: time.Now().UTC(), Username: "test"})
// NOTE: only write one row at a time using `(*Streamer).Write`,
// multiple rows can be written using one `Write` call per row.
Storage Streamer

If you can, you should use the Storage-driven Streamer. The InsertAll API is now considered legacy, and is more expensive and less efficient to use compared to the Storage API.

Here follows an example of how you can create such a storage-API-driven BigQuery streamer.

import (
    "context"

    "github.com/OTA-Insight/bqwriter"
    "google.golang.org/protobuf/reflect/protodesc"

    // TODO: define actual path to pre-compiled protobuf Go code
    "path/to/my/proto/package/protodata"
)

// TODO: use more specific context
ctx := context.Background()

// create proto descriptor to use for storage client
protoDescriptor := protodesc.ToDescriptorProto((&protodata.MyCustomProtoMessage{}).ProtoReflect().Descriptor())
// NOTE:
//  - the storage writer API expects proto2 semantics, proto3 shouldn't be used (yet);
//  - NormalizeDescriptor (https://pkg.go.dev/cloud.google.com/go/bigquery/storage/managedwriter/adapt#NormalizeDescriptor)
//    should be used in order to get a descriptor in which nested types are resolved,
//    so that it works nicely with nested messages;
//    - this means the line above would change to:
//      `protoDescriptor, err := adapt.NormalizeDescriptor((&protodata.MyCustomProtoMessage{}).ProtoReflect().Descriptor())`,
//      which does require the `"cloud.google.com/go/bigquery/storage/managedwriter/adapt"` package to be imported;
//  - well-known types cannot be used, you'll need to use type conversions instead
//    (https://cloud.google.com/bigquery/docs/write-api#data_type_conversions),
//    e.g. int64 (micro epoch) instead of the well-known Google Timestamp proto type;

// create a BQ (stream) writer thread-safe client,
bqWriter, err := bqwriter.NewStreamer(
    ctx,
    "my-gcloud-project",
    "my-bq-dataset",
    "my-bq-table",
    &bqwriter.StreamerConfig{
        // use 5 background worker threads
        WorkerCount: 5,
        // create the streamer using a Protobuf message encoder for the data
        StorageClient: &bqwriter.StorageClientConfig{
            ProtobufDescriptor: protoDescriptor,
        },
    },
)
if err != nil {
    // TODO: handle error gracefully
    panic(err)
}
// do not forget to close, to close all background resources opened
// when creating the BQ (stream) writer client
defer bqWriter.Close()

// TODO: populate fields of the proto message
msg := new(protodata.MyCustomProtoMessage)

// You can now start writing data to your BQ table
bqWriter.Write(msg)
// NOTE: only write one row at a time using `(*Streamer).Write`,
// multiple rows can be written using one `Write` call per row.

You must define the StorageClientConfig, as demonstrated in the previous example, in order to create a Streamer client using the Storage API. Note that you cannot create a blank StorageClientConfig or rely on any kind of default, as you are required to configure it with either a bigquery.Schema or a descriptorpb.DescriptorProto, with the latter being preferred and used over the former.

The schema or Protobuf descriptor is used to encode the data into the correct format, Protobuf-encoded binary data, prior to writing.

  • BigQuerySchema can be used to configure a data encoder for the StorageClient based on a dynamically defined BigQuery schema, allowing any struct, JsonMarshaler, Json-encoded byte slice, Stringer (text proto) or string (also text proto) to be encoded as a valid protobuf message based on the given BigQuery schema;
  • ProtobufDescriptor can be used to configure a data encoder for the StorageClient based on a pre-compiled protobuf schema, allowing any proto Message adhering to this descriptor to be encoded;

ProtobufDescriptor is preferred, as you might have to pay a performance penalty should you use the BigQuerySchema instead.
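As a minimal sketch (the schema fields are illustrative), a BigQuerySchema-driven storage Streamer could be configured as follows, at the cost of the aforementioned performance penalty:

import (
    "context"

    "cloud.google.com/go/bigquery"
    "github.com/OTA-Insight/bqwriter"
)

// TODO: use more specific context
ctx := context.Background()

// create a storage-API-driven streamer which encodes rows
// based on a dynamically defined BigQuery schema
bqWriter, err := bqwriter.NewStreamer(
    ctx,
    "my-gcloud-project",
    "my-bq-dataset",
    "my-bq-table",
    &bqwriter.StreamerConfig{
        StorageClient: &bqwriter.StorageClientConfig{
            BigQuerySchema: &bigquery.Schema{
                {Name: "timestamp", Type: bigquery.TimestampFieldType},
                {Name: "username", Type: bigquery.StringFieldType},
            },
        },
    },
)
if err != nil {
    // TODO: handle error gracefully
    panic(err)
}
defer bqWriter.Close()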

You can check out ./internal/test/integration/temporary_data_proto2.proto for an example of a proto message that can be sent over the wire. The BigQuery schema for that definition can be found in ./internal/test/integration/tmpdata.go. Finally, you can get inspired by ./internal/test/integration/generate.go to learn how to generate the required Go code, so you can configure your streamer with the right proto descriptor and send rows of data using your proto definitions.
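For reference, compiling such a proto2 definition to Go code typically boils down to a single protoc invocation. The exact flags depend on your setup, but assuming protoc and the protoc-gen-go plugin are installed it might look like this:

$ protoc --go_out=. --go_opt=paths=source_relative temporary_data_proto2.proto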

Batch Streamer

The batch streamer can be used if you want to upload a big dataset to BigQuery without any additional cost.

Here follows an example of how you can create such a batch-API-driven BigQuery client.

import (
    "context"
    "os"
    "path/filepath"

    "github.com/OTA-Insight/bqwriter"
)

func main() {
    ctx := context.Background()
	
    // By using new(bqwriter.BatchClientConfig) we create a config with bigquery.JSON as the default format,
    // in which case the schema is auto-detected from the data.
    // Possible options are:
    // - BigQuerySchema: schema to use when uploading the data to BigQuery.
    // - SourceFormat: format of the data we want to send.
    // - FailForUnknownValues: treat records that have unknown values as invalid records.
    // - WriteDisposition: defines the write disposition to use for the BigQuery table.
    batchConfig := new(bqwriter.BatchClientConfig)
	
    // create a BQ (stream) writer thread-safe client.
    bqWriter, err := bqwriter.NewStreamer(
        ctx,
        "my-gcloud-project",
        "my-bq-dataset",
        "my-bq-table",
        &bqwriter.StreamerConfig{
            BatchClient: batchConfig,
        },
    )

    if err != nil {
        // TODO: handle error gracefully
        panic(err)
    }
		
    // do not forget to close, to close all background resources opened
    // when creating the BQ (stream) writer client
    defer bqWriter.Close()

    // a batch-driven BQ Streamer expects an io.Reader,
    // the source of the data isn't strictly defined as long as the source
    // format is supported. Usually you would fetch the data from large files
    // as this is where the batch client really shines
    files, err := filepath.Glob("/usr/joe/my/data/path/exported_data_*.json")
    if err != nil {
        // TODO: handle error gracefully
        panic(err)
    }
    for _, fp := range files {
        file, err := os.Open(fp)
        if err != nil {
            // TODO: handle error gracefully
            panic(err)
        }

        // Write the data to BigQuery.
        err = bqWriter.Write(file)
        if err != nil {
            // TODO: handle error gracefully
            panic(err)
        }
    }
}

You must pass a non-nil BatchClientConfig, as demonstrated in the previous example, in order to create a Batch-driven client.

Note that a blank BatchClientConfig (e.g. created using new) comes with sane defaults: bigquery.JSON is used as the SourceFormat and the schema is auto-detected from the data. Define a SourceFormat (and, where required, a BigQuerySchema) explicitly if you want to upload data in another format.

When using the JSON format, make sure the casing of your fields matches exactly the fields defined in the BigQuery schema of the desired target table. While field names are normally considered case-insensitive, mismatched casing does cause "duplicate field" issues as part of the batch-load io.Reader decode process, such as the following one:

Job returned an error status {Location: "query"; Message: "Duplicate(Case Insensitive) field names: value and Value. Table: tmp_2e6895b9_b44b_4b5c_9941_def9a10e85d5_source"; Reason: "invalidQuery"}

Fix the casing of your JSON definition (e.g. use "Value" rather than "value" if that is what the schema defines) and this error should go away.

BatchClientConfig options (a complete example follows after this list):

  • BigQuerySchema can be used to configure a data encoder for the batch client based on a dynamically defined BigQuery schema, allowing any struct, JsonMarshaler, Json-encoded byte slice, Stringer (text proto) or string (also text proto) to be encoded as a valid protobuf message based on the given BigQuery schema.

    The BigQuerySchema is required for all SourceFormats except bigquery.CSV and bigquery.JSON, as these 2 formats auto-detect the schema from the content.

  • SourceFormat is used to define the format of the data we will send. Possible options are:

    • bigquery.CSV
    • bigquery.Avro
    • bigquery.JSON
    • bigquery.Parquet
    • bigquery.ORC
  • FailForUnknownValues causes records containing unknown values to be treated as invalid records.

    Defaults to false, silently ignoring unknown values and publishing the rows with those values removed.

  • WriteDisposition can be used to define the write disposition for the BigQuery table. Possible options are:

    • bigquery.WriteAppend
    • bigquery.WriteTruncate
    • bigquery.WriteEmpty

    Defaults to bigquery.WriteAppend, which will append the data to the table.
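
As a minimal sketch (the values are illustrative), a more explicit BatchClientConfig based on the options above could look like this:

import "cloud.google.com/go/bigquery"

batchConfig := &bqwriter.BatchClientConfig{
    // upload CSV data, for which the schema is auto-detected from the content
    SourceFormat: bigquery.CSV,
    // fail for records with unknown values instead of silently dropping those values
    FailForUnknownValues: true,
    // overwrite the table contents rather than appending to them
    WriteDisposition: bigquery.WriteTruncate,
}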

Future improvements

Currently the package does not support any additional options that the different SourceFormats might have; feel free to open a feature request to add support for these.

Authorization

The streamer client will use Google Application Default Credentials for authorization credentials used in calling the API endpoints. This will allow your application to run in many environments without requiring explicit configuration.

Please open an issue should you require more advanced forms of authorization. The issue should come with an example, a clear statement of intent, and motivation on why this is a useful contribution to this package. Even if you wish to contribute this patch yourself, it is nonetheless best to create an issue first, so that we can all be aligned on the specifics. Good communication is key here.

It was a deliberate choice to not support these advanced authorization methods for now: the package authors didn't have a need for them, and it kept the API as simple and small as possible. Some more advanced authorization is nonetheless already possible, as NewStreamer accepts google.golang.org/api/option.ClientOption values via its variadic opts parameter.
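For example, a minimal sketch (assuming the underlying clients honour the given option; the credentials file path is illustrative) of passing explicit credentials:

import (
    "context"

    "github.com/OTA-Insight/bqwriter"

    "google.golang.org/api/option"
)

// TODO: use more specific context
ctx := context.Background()

bqWriter, err := bqwriter.NewStreamer(
    ctx,
    "my-gcloud-project",
    "my-bq-dataset",
    "my-bq-table",
    nil, // use default config
    option.WithCredentialsFile("/path/to/service-account.json"),
)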

To conclude: we currently do not support advanced ways of authorization, but we're open to including support for these if there is sufficient interest. The Contributing section explains how you can actively help to get this supported if desired.

Instrumentation

We currently support the ability to plug in your own logger, which is then used instead of the standard logger that prints to STDERR. It is used for debug statements as well as unhandled errors. Debug statements aren't used everywhere, but any unhandled error that isn't propagated is logged using the configured logger.

You can find the interface you would need to implement to support your own Logger at https://godoc.org/github.com/OTA-Insight/bqwriter/log#Logger.

The internal client of the Storage-API-driven Streamer also tracks stats regarding its gRPC functionality. This is implemented and utilized via the https://github.com/census-instrumentation/opencensus-go package.

If you use OpenCensus for your own project it will work out of the box.

In case your project uses another data-ingestion system, you can nonetheless get these statistics into your system of choice by registering an exporter which exports the stats to the system used by your project. Please see https://github.com/census-instrumentation/opencensus-go#views as a starting point on how to register a view yourself. OpenCensus comes with a bunch of exporters already, all listed at https://github.com/census-instrumentation/opencensus-go#exporters. You can however also implement your own.
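
As a minimal sketch (assuming the go.opencensus.io module is used directly), registering the default gRPC client views makes the Storage-driven Streamer's gRPC stats available to whatever exporter you register:

import (
    "log"

    "go.opencensus.io/plugin/ocgrpc"
    "go.opencensus.io/stats/view"
)

func init() {
    // make the default gRPC client metrics (sent/received bytes, latency, ...) available
    if err := view.Register(ocgrpc.DefaultClientViews...); err != nil {
        log.Fatalf("failed to register gRPC client views: %v", err)
    }
    // next, register the exporter of your choice, for example:
    // view.RegisterExporter(myExporter) // myExporter: your own or a pre-built view.Exporter
}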

The official Google Cloud API will most likely switch to OpenCensus's successor, OpenTelemetry, once the latter becomes stable. For now, however, OpenCensus is used.

Note that this extra form of instrumentation is only applicable to a Streamer using the Storage API. The InsertAll- and Batch-driven Streamers do not provide any form of stats tracking.

Please see also https://github.com/googleapis/google-cloud-go/issues/5100#issuecomment-966461501 for more information on how you can hook up a built-in or your own system into the tracking system for any storage API driven streamer.

Write Error handling

The current version of bqwriter is written with a fire-and-forget philosophy in mind. Actual write errors occur on async worker goroutines and are only logged. Already today you can plug in your own logger implementation in order to get these logs into your alerting systems.

Please file a detailed feature request with a real use case as part of the verbose description should you be in need of being able to handle errors.

One possible approach would be to allow a channel or callback to be defined in the StreamerConfig which would receive a specific data structure for any write failure. This could contain the data which failed to write, any kind of offset/insertID, as well as the actual error which occurred. The details would however have to be worked out as part of the proposal.

Besides a valid use case to motivate this proposal, we would also need to think carefully about how to make the returned errors actionable. Returning them only so the user can log/print them is a bit silly, as that is already the behaviour today. The real value of this proposal would come from being able to retry inserting the data (if that makes sense within its context, as defined by at least the error type) in an easy and safe manner, with actual aid to help prevent duplicates. The Google Cloud API provides offsets and insertIDs for this purpose, but the question is how we would integrate this, and also whether this really does prevent duplicates or not.

The Contributing section explains how you can actively help to get this supported if desired.

Contributing

Contributions are welcome. Please, see the CONTRIBUTING document for details.

Please note that this project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms. See Contributor Code of Conduct for more information.

Developer Instructions

As a developer you need to agree to the Contributor Code of Conduct. See the previous Contributing section for more info regarding contributing to this project. In this section we'll also assume that you've read and understood the Install and Examples sections.

Please take your time and complete the forms with sufficient details when filing issues and proposals. Pull requests (PRs) should only be created once a related issue/proposal has been created and agreed upon. Also take your time and complete the PR description with sufficient detail when you're ready to create a PR.

Tests

Using GitHub Actions, this codebase is tested automatically for each commit/PR.

  • $ go test -v ./...:
    • run against the Min and Max Go versions
    • all tests are expected to pass
  • $ golangci-lint run:
    • run against latest Go version only
    • is expected to generate no warnings or errors of any kind

For each contribution that you make you'll have to ensure that all these tests pass. Please do not modify any existing tests unless required because of some kind of breaking change. If you do have to modify (or delete) existing tests then please document this in full detail, with proper motivation, as part of your PR description. Ensure your added and modified code is also sufficiently tested and covered.

Next to this, the maintainers of this repository (see CODEOWNERS) also run integration tests against a real production-like BigQuery table within the actual Google Cloud infrastructure. These test the streamer for all implementations: insertAll, storage, storage-json (a regular storage client, but using a bigquery.Schema so as to be able to insert JsonMarshalled data) and batch.

You can run these tests yourself as well using the following internal cmd tool:

$ go run ./internal/test/integration --help
Usage of ./internal/test/integration/tmp/exe:
  -dataset string
        BigQuery dataset to write data to (default "benchmarks_bqwriter")
  -debug
        enable to show debug logs
  -iterations int
        how many values to write to each of the different streamer tests (default 100)
  -project string
        BigQuery project to write data to (default "oi-bigquery")
  -streamers string
        csv of streamers to test, one or multiple of following options: insertall, storage, storage-json, batch
  -table string
        BigQuery table to write data to (default "tmp")
  -workers int
        how many workers to use to run tests in parallel (default 12)

Most likely you'll need to pass the --project, --dataset and --table flags to use a BigQuery table for which you have sufficient permissions and that is used only for temporary testing purposes such as these.

Running these tests yourself is not required as part of a contribution, but you are welcome to run them if you are interested in doing so.

FAQ

My insertAll streamer seems to insert 1 row per request instead of batching; how is this possible?

Make sure your configuration matches your bandwidth needs. Do not use more workers (WorkerCount) than you need, for example. Also make sure the MaxBatchDelay and BatchSize values are configured appropriately.
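
As a minimal sketch (the values are illustrative and depend on your traffic), a configuration that favours batching could look like this:

import (
    "context"
    "time"

    "github.com/OTA-Insight/bqwriter"
)

// TODO: use more specific context
ctx := context.Background()

bqWriter, err := bqwriter.NewStreamer(
    ctx,
    "my-gcloud-project",
    "my-bq-dataset",
    "my-bq-table",
    &bqwriter.StreamerConfig{
        // fewer workers means each worker fills its batch faster
        WorkerCount: 2,
        // flush a batch at the latest after 5 seconds, even when it isn't full yet
        MaxBatchDelay: 5 * time.Second,
        InsertAllClient: &bqwriter.InsertAllClientConfig{
            // batch up to 500 rows per insertAll request
            BatchSize: 500,
        },
    },
)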

Documentation

Overview

Package bqwriter provides a compact Streamer API in order to write data concurrently to BigQuery using the insertAll or Storage API.

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type BatchClientConfig added in v0.5.0

type BatchClientConfig struct {
	// BigQuerySchema can be used in order to use a data encoder for the batchClient
	// based on a dynamically defined BigQuery schema in order to be able to encode any struct,
	// JsonMarshaler, Json-encoded byte slice, Stringer (text proto) or string (also text proto)
	// as a valid protobuf message based on the given BigQuery Schema.
	//
	// This config is only required if autoDetect is false.
	BigQuerySchema *bigquery.Schema

	// SourceFormat is used to define the format that the data is that we will send.
	// Possible options are:
	//   - bigquery.CSV
	//   - bigquery.Avro
	//   - bigquery.JSON
	//   - bigquery.Parquet
	//   - bigquery.ORC
	//
	// The default SourceFormat is bigquery.JSON
	SourceFormat bigquery.DataFormat

	// FailForUnknownValues causes records containing such values
	// to be treated as invalid records.
	//
	// Defaults to false, making it ignore any invalid values, silently ignoring these errors,
	// and publishing the rows with the unknown values removed from them.
	FailForUnknownValues bool

	// WriteDisposition can be used to define what the write disposition should be to the bigquery table.
	// Possible options are:
	//   - bigquery.WriteAppend
	//   - bigquery.WriteTruncate
	//   - bigquery.WriteEmpty
	//
	// Defaults to bigquery.WriteAppend, which will append the data to the table.
	WriteDisposition bigquery.TableWriteDisposition
}

BatchClientConfig is used to configure a batch (load) driven Streamer Client. All properties have sane defaults as defined and used by this Go package.

A non-nil BatchClientConfig instance has to be passed in to the BatchClient property of a StreamerConfig in order to indicate the Streamer should be built using the Batch (load) Client under the hood.

type InsertAllClientConfig

type InsertAllClientConfig struct {
	// FailOnInvalidRows causes rows containing invalid data to fail
	// if there is an attempt to insert an invalid row.
	//
	// Defaults to false, making it ignore any invalid rows, silently ignoring these errors.
	FailOnInvalidRows bool

	// FailForUnknownValues causes records containing such values
	// to be treated as invalid records.
	//
	// Defaults to false, making it ignore any invalid values, silently ignoring these errors,
	// and publishing the rows with the unknown values removed from them.
	FailForUnknownValues bool

	// BatchSize defines the amount of rows (data) batched by a worker, prior to the worker
	// actually writing it to BQ. Should a worker have rows left in its local cache when closing,
	// it will flush/write these rows prior to closing.
	//
	// Defaults to constant.DefaultBatchSize if n == 0,
	// use a negative value or an explicit value of 1
	// in case you want to write each row directly.
	BatchSize int
}

InsertAllClientConfig is used to configure an InsertAll client API driven Streamer Client. All properties have sane defaults as defined and used by this Go package.

type StorageClientConfig

type StorageClientConfig struct {
	// BigQuerySchema can be used in order to use a data encoder for the StorageClient
	// based on a dynamically defined BigQuery schema in order to be able to encode any struct,
	// JsonMarshaler, Json-encoded byte slice, Stringer (text proto) or string (also text proto)
	// as a valid protobuf message based on the given BigQuery Schema.
	//
	// This config is required only if ProtobufDescriptor is not defined
	// and it will be ignored in case ProtobufDescriptor is defined. The latter is recommended
	// as a BigQuery Schema based encoder has a possible performance penalty.
	BigQuerySchema *bigquery.Schema

	// ProtobufDescriptor can be used in order to use a data encoder for the StorageClient
	// based on a pre-compiled protobuf schema in order to be able to encode any proto Message
	// adhering to this descriptor.
	//
	// This config is required only if BigQuerySchema is not defined.
	// It is however recommended to use the ProtobufDescriptor,
	// as a BigQuerySchema based encoder has a possible performance penalty.
	ProtobufDescriptor *descriptorpb.DescriptorProto
}

StorageClientConfig is used to configure a storage client API driven Streamer Client. All properties have sane defaults as defined and used by this Go package.

A non-nil StorageClientConfig instance has to be passed in to the StorageClient property of a StreamerConfig in order to indicate the Streamer should be built using the Storage API Client under the hood.

type Streamer

type Streamer struct {
	// contains filtered or unexported fields
}

Streamer is a simple BQ stream-writer, allowing you to write data to a BQ table concurrently.

func NewStreamer

func NewStreamer(ctx context.Context, projectID, dataSetID, tableID string, cfg *StreamerConfig, opts ...option.ClientOption) (*Streamer, error)

NewStreamer creates a new Streamer Client. StreamerConfig is optional, all other parameters are required.

An error is returned in case the Streamer Client couldn't be created for some unexpected reason, most likely something going wrong within the layer of actually interacting with GCloud.

func (*Streamer) Close

func (s *Streamer) Close()

Close closes the streamer and all its worker goroutines.

func (*Streamer) Write

func (s *Streamer) Write(data interface{}) error

Write a row of data to a BQ table within the streamer's project. The row will be written as soon as all previously queued rows have been written and a worker goroutine becomes available to write it.

Jobs that failed to write but which are retryable can be retried on the same goroutine in an exponential back-off approach, should the streamer be configured to do so.

func (*Streamer) WriteWithContext added in v0.8.0

func (s *Streamer) WriteWithContext(ctx context.Context, data interface{}) error

WriteWithContext is the same as Write but allows you to pass a context that will be used to determine when the write should be aborted.
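
A minimal usage sketch (reusing the bqWriter client and myRow type from the README examples above):

ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
if err := bqWriter.WriteWithContext(ctx, &myRow{Timestamp: time.Now().UTC(), Username: "test"}); err != nil {
    // TODO: handle error gracefully
}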

type StreamerConfig

type StreamerConfig struct {
	// WorkerCount defines the amount of workers to be used,
	// each on their own goroutine and with an opt-out channel buffer per routine.
	// Use a negative value in case you just want a single worker (same as defining it as 1 explicitly).
	//
	// Defaults to constant.DefaultWorkerCount if not defined explicitly.
	WorkerCount int

	// WorkerQueueSize defines the size of the job queue per worker used
	// in order to allow the Streamer's users to write rows even if all workers are currently
	// too busy to accept new incoming rows.
	//
	// Use a negative value in order to provide no buffer for the workers at all,
	// not recommended, but nonetheless a possibility for you to choose.
	//
	// Defaults to constant.MaxTotalElapsedRetryTime if not defined explicitly
	WorkerQueueSize int

	// MaxBatchDelay defines the max amount of time a worker batches rows,
	// prior to writing the batched rows, even when not yet full.
	//
	// Defaults to constant.DefaultMaxBatchDelay if d == 0.
	MaxBatchDelay time.Duration

	// Logger allows you to attach a logger to be used by the streamer,
	// instead of the default built-in STDERR logging implementation,
	// with the latter being used as the default in case this logger isn't defined explicitly.
	Logger log.Logger

	// InsertAllClient allows you to overwrite any or all of the defaults used to configure an
	// InsertAll client API driven Streamer Client. Note that this optional configuration is ignored
	// altogether in case StorageClient is defined as a non-nil value.
	InsertAllClient *InsertAllClientConfig

	// StorageClient allows you to create a Storage API driven Streamer Client.
	// You can do so using `new(StorageClientConfig)` in order to create a StorageClient
	// with all possible configurations configured using their defaults as defined by this Go package.
	StorageClient *StorageClientConfig

	// BatchClient allows you to create a Batch API driven Streamer client.
	// You can do so using `new(BatchClientConfig)` in order to create a BatchClient
	// with all possible configurations configured using their defaults as defined by this Go package.
	BatchClient *BatchClientConfig
}

StreamerConfig is used to build a Streamer (client). All configurations found in this structure are optional and have sane defaults defined for them. All required parameters are to be passed in as separate arguments to the NewStreamer constructor method.
