Documentation ¶
Overview ¶
Package artifactexporter handles uploading artifacts to BigQuery. It replaces the legacy artifact exporter in bqexp.
Index ¶
Constants ¶
const ArtifactRequestOverhead = 100
ArtifactRequestOverhead is the overhead (in bytes) applied to each artifact when calculating the request size.
const MaxRBECasBatchSize = 2 * 1024 * 1024 // 2 MB
MaxRBECasBatchSize is the batch size limit when we read artifact content. TODO(nqmtuan): Call the Capacity API to find out the exact size limit for batch operations. For now, hardcode to 2 MB. This should stay under the limit, since BatchUpdateBlobs in BatchCreateArtifacts can handle up to 10 MB.
const MaxShardContentSize = bqutil.RowMaxBytes - 10*1024
MaxShardContentSize is the maximum content size in a BQ row. Artifact content bigger than this size needs to be sharded. Leave 10 KB for other fields; the rest is content.
Variables ¶
var (
ErrInvalidUTF8 = fmt.Errorf("invalid UTF-8 character")
)
var ExportArtifactsTask = tq.RegisterTaskClass(tq.TaskClass{
	ID:            "export-artifacts",
	Prototype:     &taskspb.ExportArtifacts{},
	Kind:          tq.Transactional,
	Queue:         "artifactexporter",
	RoutingPrefix: "/internal/tasks/artifactexporter",
})
ExportArtifactsTask describes how to route export-artifacts tasks.
Functions ¶
func InitServer ¶
InitServer initializes an artifactexporter server.
Types ¶
type BQExportClient ¶
type BQExportClient interface {
InsertArtifactRows(ctx context.Context, rows []*bqpb.TextArtifactRow) error
}
BQExportClient is the interface for exporting artifacts.
type Client ¶
type Client struct {
// contains filtered or unexported fields
}
Client provides methods to export artifacts to BigQuery via the BigQuery Write API.
func (*Client) InsertArtifactRows ¶
InsertArtifactRows inserts the given rows into BigQuery.