
package storageccl

import "github.com/cockroachdb/cockroach/pkg/ccl/storageccl"


Package Files

export.go export_storage.go import.go key_rewriter.go writebatch.go


const (
    // S3AccessKeyParam is the query parameter for access_key in an S3 URI.
    S3AccessKeyParam = "AWS_ACCESS_KEY_ID"
    // S3SecretParam is the query parameter for the 'secret' in an S3 URI.
    S3SecretParam = "AWS_SECRET_ACCESS_KEY"
    // S3TempTokenParam is the query parameter for session_token in an S3 URI.
    S3TempTokenParam = "AWS_SESSION_TOKEN"
    // S3EndpointParam is the query parameter for the 'endpoint' in an S3 URI.
    S3EndpointParam = "AWS_ENDPOINT"
    // S3RegionParam is the query parameter for the 'region' in an S3 URI.
    S3RegionParam = "AWS_REGION"

    // AzureAccountNameParam is the query parameter for account_name in an azure URI.
    AzureAccountNameParam = "AZURE_ACCOUNT_NAME"
    // AzureAccountKeyParam is the query parameter for account_key in an azure URI.
    AzureAccountKeyParam = "AZURE_ACCOUNT_KEY"

    // GoogleBillingProjectParam is the query parameter for the billing project
    // in a gs URI.
    GoogleBillingProjectParam = "GOOGLE_BILLING_PROJECT"

    // AuthParam is the query parameter for the cluster settings named
    // key in a URI.
    AuthParam = "AUTH"

    // CredentialsParam is the query parameter for the base64-encoded contents of
    // the Google Application Credentials JSON file.
    CredentialsParam = "CREDENTIALS"
)
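The constants above are query-parameter names carried on storage URIs. As a rough illustration, the hypothetical helper below (not part of this package) shows how S3 credentials would ride along as query parameters on an `s3://` URI using only the standard library:

```go
package main

import (
	"fmt"
	"net/url"
)

// makeS3URI is a hypothetical helper: it attaches the access-key and
// secret query parameters (S3AccessKeyParam, S3SecretParam) to an
// s3:// URI. Bucket and path names are made up for illustration.
func makeS3URI(bucket, path, accessKey, secret string) string {
	q := url.Values{}
	q.Set("AWS_ACCESS_KEY_ID", accessKey)  // S3AccessKeyParam
	q.Set("AWS_SECRET_ACCESS_KEY", secret) // S3SecretParam
	u := url.URL{Scheme: "s3", Host: bucket, Path: path, RawQuery: q.Encode()}
	return u.String()
}

func main() {
	fmt.Println(makeS3URI("my-bucket", "/backups/1", "AKIA123", "shh"))
}
```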

func ExportStorageConfFromURI

func ExportStorageConfFromURI(path string) (roachpb.ExportStorage, error)

ExportStorageConfFromURI generates an ExportStorage config from a URI string.

func ImportBufferConfigSizes

func ImportBufferConfigSizes(st *cluster.Settings, isPKAdder bool) (int64, int64, int64)

ImportBufferConfigSizes determines the minimum, maximum and step size for the BulkAdder buffer used in import.

func MakeLocalStorageURI

func MakeLocalStorageURI(path string) (string, error)

MakeLocalStorageURI converts a local path (absolute or relative) to a valid nodelocal URI.
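A minimal sketch of what such a conversion could look like, using only the standard library (the real implementation lives in export_storage.go and may differ in details):

```go
package main

import (
	"fmt"
	"net/url"
	"path/filepath"
)

// makeNodelocalURI sketches converting a local path (absolute or
// relative) into a nodelocal URI: resolve the path to an absolute one,
// then wrap it in a URL with the nodelocal scheme.
func makeNodelocalURI(path string) (string, error) {
	p, err := filepath.Abs(path)
	if err != nil {
		return "", err
	}
	return (&url.URL{Scheme: "nodelocal", Path: p}).String(), nil
}

func main() {
	uri, err := makeNodelocalURI("backups/1")
	if err != nil {
		panic(err)
	}
	fmt.Println(uri) // e.g. nodelocal:///current/working/dir/backups/1
}
```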

func MaxImportBatchSize

func MaxImportBatchSize(st *cluster.Settings) int64

MaxImportBatchSize determines the maximum size of the payload in an AddSSTable request. It uses the ImportBatchSize setting directly unless the specified value would exceed the maximum Raft command size, in which case it returns the maximum batch size that will fit within a Raft command.
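The clamping described above can be sketched as follows; the overhead constant is an assumption for illustration, not the value the package uses:

```go
package main

import "fmt"

// maxImportBatchSize sketches the clamp: use the configured import
// batch size unless it (plus some slack for request overhead) would
// exceed the maximum Raft command size, in which case back off so the
// batch still fits in one Raft command.
func maxImportBatchSize(importBatchSize, maxRaftCmdSize int64) int64 {
	const commandOverhead = 1 << 20 // assumed 1 MiB of request overhead
	if importBatchSize+commandOverhead > maxRaftCmdSize {
		return maxRaftCmdSize - commandOverhead
	}
	return importBatchSize
}

func main() {
	fmt.Println(maxImportBatchSize(32<<20, 64<<20)) // fits: used directly
	fmt.Println(maxImportBatchSize(96<<20, 64<<20)) // clamped to fit
}
```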

func ParseWorkloadConfig

func ParseWorkloadConfig(uri *url.URL) (*roachpb.ExportStorage_Workload, error)

ParseWorkloadConfig parses a workload config URI to a proto config.

func SHA512ChecksumData

func SHA512ChecksumData(data []byte) ([]byte, error)

SHA512ChecksumData returns the SHA512 checksum of data.
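The equivalent computation with the standard library's crypto/sha512 (a sketch, not this package's code) produces the same 64-byte digest:

```go
package main

import (
	"crypto/sha512"
	"fmt"
)

// sha512Checksum returns the 64-byte SHA-512 digest of data, mirroring
// what SHA512ChecksumData computes.
func sha512Checksum(data []byte) []byte {
	sum := sha512.Sum512(data)
	return sum[:]
}

func main() {
	fmt.Printf("%x\n", sha512Checksum([]byte("hello")))
}
```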

func SanitizeExportStorageURI

func SanitizeExportStorageURI(path string) (string, error)

SanitizeExportStorageURI returns the export storage URI with sensitive credentials stripped.
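A simplified sketch of such stripping, redacting the sensitive query parameters named in the constants above (the real function also handles scheme-specific fields and may differ in which parameters it touches):

```go
package main

import (
	"fmt"
	"net/url"
)

// sanitizeURI redacts sensitive credential query parameters from a
// storage URI. The parameter names match the constants above; the
// "redacted" placeholder is an illustrative choice.
func sanitizeURI(path string) (string, error) {
	u, err := url.Parse(path)
	if err != nil {
		return "", err
	}
	q := u.Query()
	for _, p := range []string{
		"AWS_SECRET_ACCESS_KEY", "AWS_SESSION_TOKEN",
		"AZURE_ACCOUNT_KEY", "CREDENTIALS",
	} {
		if q.Get(p) != "" {
			q.Set(p, "redacted")
		}
	}
	u.RawQuery = q.Encode()
	return u.String(), nil
}

func main() {
	s, err := sanitizeURI("s3://bucket/path?AWS_ACCESS_KEY_ID=k&AWS_SECRET_ACCESS_KEY=secret")
	if err != nil {
		panic(err)
	}
	fmt.Println(s)
}
```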

type ExportStorage

type ExportStorage interface {

    // Conf should return the serializable configuration required to reconstruct
    // this ExportStorage implementation.
    Conf() roachpb.ExportStorage

    // ReadFile should return a Reader for requested name.
    ReadFile(ctx context.Context, basename string) (io.ReadCloser, error)

    // WriteFile should write the content to requested name.
    WriteFile(ctx context.Context, basename string, content io.ReadSeeker) error

    // Delete removes the named file from the store.
    Delete(ctx context.Context, basename string) error

    // Size returns the length of the named file in bytes.
    Size(ctx context.Context, basename string) (int64, error)
}

ExportStorage provides functions to read and write files in some storage, namely various cloud storage providers, for example to store backups. Generally an implementation is instantiated pointing to some base path or prefix and then gets and puts files using the various methods to interact with individual files contained within that path or prefix. However, implementations must also allow callers to provide the full path to a given file as the "base" path, and then read or write it with the methods below by simply passing an empty filename. Implementations that use stdlib's `path.Join` to concatenate their base path with the provided filename will find its semantics well suited to this -- it elides empty components and does not append surplus slashes.

func ExportStorageFromURI

func ExportStorageFromURI(
    ctx context.Context, uri string, settings *cluster.Settings,
) (ExportStorage, error)

ExportStorageFromURI returns an ExportStorage for the given URI.

func MakeExportStorage

func MakeExportStorage(
    ctx context.Context, dest roachpb.ExportStorage, settings *cluster.Settings,
) (ExportStorage, error)

MakeExportStorage creates an ExportStorage from the given config.

type KeyRewriter

type KeyRewriter struct {
    // contains filtered or unexported fields
}

KeyRewriter rewrites old table IDs to new table IDs. It is able to descend into interleaved keys, and is able to function on partial keys for spans and splits.

func MakeKeyRewriter

func MakeKeyRewriter(descs map[sqlbase.ID]*sqlbase.TableDescriptor) (*KeyRewriter, error)

MakeKeyRewriter makes a KeyRewriter from a map of descs keyed by original ID.

func MakeKeyRewriterFromRekeys

func MakeKeyRewriterFromRekeys(rekeys []roachpb.ImportRequest_TableRekey) (*KeyRewriter, error)

MakeKeyRewriterFromRekeys makes a KeyRewriter from Rekey protos.

func (*KeyRewriter) RewriteKey

func (kr *KeyRewriter) RewriteKey(key []byte, isFromSpan bool) ([]byte, bool, error)

RewriteKey modifies key (possibly in place), changing all table IDs to their new values, including any interleaved table children and prefix ends. It works by inspecting the key for table and index IDs, then using the corresponding table and index descriptors to determine whether interleaved data is present and, if so, to find the next prefix of an interleaved child; it then calls itself recursively until all interleaved children have been rekeyed. If it encounters a table ID for which it has no configured rewrite, it returns the prefix of the key that it was able to rewrite. The returned boolean is true if and only if all of the table IDs found in the key were rewritten. If isFromSpan is true, failures in value decoding are assumed to result from valid span manipulations, such as PrefixEnd or Next having altered the trailing byte(s) and corrupted the value encoding. In that case we cannot decode the value (to determine how much further to scan for table IDs), but since these manipulations only touch the trailing byte, we are likely at the end of the key anyway and need not search for further table IDs to replace.



Package storageccl imports 52 packages and is imported by 14 packages. Updated 2019-09-16.