replication

package
v0.0.0-...-932836e
Published: Jul 23, 2020 License: Apache-2.0 Imports: 9 Imported by: 0

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func NewReplicatorServer

func NewReplicatorServer(replicator BlobReplicator) replicator_pb.ReplicatorServer

NewReplicatorServer creates a gRPC stub for the Replicator service that forwards all calls to BlobReplicator.

Types

type BlobReplicator

type BlobReplicator interface {
	// Replicate a single object between backends, while at the same
	// time giving a handle back to it.
	ReplicateSingle(ctx context.Context, digest digest.Digest) buffer.Buffer
	// Replicate a set of objects between backends.
	ReplicateMultiple(ctx context.Context, digests digest.Set) error
}

BlobReplicator provides the strategy that is used by MirroredBlobAccess to replicate objects between storage backends. This strategy is called into when MirroredBlobAccess detects that a certain object is only present in one of the two backends.

func NewLocalBlobReplicator

func NewLocalBlobReplicator(source blobstore.BlobAccess, sink blobstore.BlobAccess) BlobReplicator

NewLocalBlobReplicator creates a BlobReplicator that can be used to let MirroredBlobAccess repair inconsistencies between backends directly.

This replicator tends to be sufficient for the Action Cache (AC), but it may be inefficient for the Content Addressable Storage (CAS). If MirroredBlobAccess is used by many clients, each with high concurrency, this replicator may trigger redundant replications and cause load spikes. A separate replication daemon (bb_replicator) should be used for such setups.

func NewQueuedBlobReplicator

func NewQueuedBlobReplicator(source blobstore.BlobAccess, base BlobReplicator, existenceCache *digest.ExistenceCache) BlobReplicator

NewQueuedBlobReplicator creates a decorator for BlobReplicator that serializes and deduplicates requests. It can be used to place a limit on the amount of replication traffic.

TODO: The current implementation is a bit simplistic, in that it does not guarantee fairness. Should all requests be processed in FIFO order? Alternatively, should we replicate objects with most waiters first?

func NewRemoteBlobReplicator

func NewRemoteBlobReplicator(source blobstore.BlobAccess, client grpc.ClientConnInterface) BlobReplicator

NewRemoteBlobReplicator creates a BlobReplicator that forwards requests to a remote gRPC service. This service may be used to deduplicate and queue replication actions globally.
