pull

package
v0.40.4
Published: May 19, 2022 License: Apache-2.0 Imports: 22 Imported by: 0

Documentation

Constants

const (
	Listed = iota
	DownloadStart
	DownloadStats
	DownloadSuccess
	DownloadFailed
)

Variables

var ErrDBUpToDate = errors.New("the database does not need to be pulled as it's already up to date")

ErrDBUpToDate is the error returned from NewPuller when there is no work to do.

var ErrIncompatibleSourceChunkStore = errors.New("the chunk store of the source database does not implement NBSCompressedChunkStore.")

ErrIncompatibleSourceChunkStore is the error returned from NewPuller when the source ChunkStore does not implement `NBSCompressedChunkStore`.
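
Both sentinel errors can be distinguished from ordinary failures with errors.Is. The sketch below is illustrative only: the chunksPerTF value is arbitrary, and the chunks/hash/pull import paths it relies on are assumptions rather than part of this listing.

// pullIfNeeded is a hedged sketch, not part of this package. It shows how the
// sentinel errors returned from NewPuller might be handled.
func pullIfNeeded(ctx context.Context, tempDir string, srcCS, sinkCS chunks.ChunkStore, walkAddrs pull.WalkAddrs, rootHash hash.Hash, statsCh chan pull.Stats) error {
	// 256*1024 chunks per table file is an arbitrary illustrative value.
	plr, err := pull.NewPuller(ctx, tempDir, 256*1024, srcCS, sinkCS, walkAddrs, rootHash, statsCh)
	switch {
	case errors.Is(err, pull.ErrDBUpToDate):
		return nil // the sink already has everything reachable from rootHash
	case errors.Is(err, pull.ErrIncompatibleSourceChunkStore):
		return err // the source ChunkStore cannot serve compressed chunks
	case err != nil:
		return err
	}
	return plr.Pull(ctx)
}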

var ErrNoData = errors.New("no data")

Functions

func Clone

func Clone(ctx context.Context, srcCS, sinkCS chunks.ChunkStore, eventCh chan<- TableFileEvent) error
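A minimal sketch of driving Clone with a consumer of the event channel. The channel buffering, goroutine structure, and import paths are assumptions, not a prescribed pattern.

// cloneStore is a hedged sketch, not part of this package: it clones table
// files from srcCS to sinkCS while observing TableFileEvents.
func cloneStore(ctx context.Context, srcCS, sinkCS chunks.ChunkStore) error {
	eventCh := make(chan pull.TableFileEvent, 128)
	done := make(chan struct{})
	go func() {
		defer close(done)
		for evt := range eventCh {
			_ = evt // see the TableFileEvent sketch below for handling evt.EventType
		}
	}()
	err := pull.Clone(ctx, srcCS, sinkCS, eventCh)
	close(eventCh) // safe in this sketch: Clone has returned and no longer sends
	<-done
	return err
}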

func Pull

func Pull(ctx context.Context, srcCS, sinkCS chunks.ChunkStore, walkAddrs WalkAddrs, sourceHash hash.Hash, progressCh chan PullProgress) error

Pull objects that descend from sourceHash from srcCS to sinkCS.
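
A hedged sketch of calling Pull with a progress consumer. The walkAddrs argument must be able to enumerate the references of each chunk (see the WalkAddrs sketch below); buffer sizes and import paths are assumptions.

// pullFrom is an illustrative sketch, not part of this package.
func pullFrom(ctx context.Context, srcCS, sinkCS chunks.ChunkStore, walkAddrs pull.WalkAddrs, sourceHash hash.Hash) error {
	progressCh := make(chan pull.PullProgress, 16)
	done := make(chan struct{})
	go func() {
		defer close(done)
		for p := range progressCh {
			log.Printf("pulled %d of %d known chunks (~%d bytes written)",
				p.DoneCount, p.KnownCount, p.ApproxWrittenBytes)
		}
	}()
	err := pull.Pull(ctx, srcCS, sinkCS, walkAddrs, sourceHash, progressCh)
	close(progressCh) // safe in this sketch: Pull has returned and no longer sends
	<-done
	return err
}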

func PullWithoutBatching

func PullWithoutBatching(ctx context.Context, srcCS, sinkCS chunks.ChunkStore, walkAddrs WalkAddrs, sourceHash hash.Hash, progressCh chan PullProgress) error

PullWithoutBatching effectively removes the batching of chunk retrieval done on each level of the tree. This means all chunks from one level of the tree will be retrieved from the underlying chunk store in a single call, which pushes the optimization problem down to the chunk store, where smarter decisions can be made.
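
Under the same assumptions as the pullFrom sketch above, switching to the unbatched variant is only a change of function name:

// Identical wiring to the pullFrom sketch; only the call changes.
err := pull.PullWithoutBatching(ctx, srcCS, sinkCS, walkAddrs, sourceHash, progressCh)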

Types

type CloneTableFileEvent

type CloneTableFileEvent int

type CmpChnkAndRefs

type CmpChnkAndRefs struct {
	// contains filtered or unexported fields
}

CmpChnkAndRefs holds a CompressedChunk and all of its references.

type FilledWriters

type FilledWriters struct {
	// contains filtered or unexported fields
}

FilledWriters stores a CmpChunkTableWriter that has been filled and is ready to be flushed. In the future we will likely add the md5 of the data to this structure so it can be used to verify table upload calls.

type PullProgress

type PullProgress struct {
	DoneCount, KnownCount, ApproxWrittenBytes uint64
}
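
A small hypothetical helper, not part of this package, showing one way to turn a PullProgress value into a completion percentage:

// percentDone is a hypothetical helper. KnownCount can keep growing as more of
// the tree is discovered, so the value may move backwards during a pull.
func percentDone(p pull.PullProgress) float64 {
	if p.KnownCount == 0 {
		return 0
	}
	return 100 * float64(p.DoneCount) / float64(p.KnownCount)
}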

type Puller

type Puller struct {
	// contains filtered or unexported fields
}

Puller is used to sync data between two Databases.

func NewPuller

func NewPuller(ctx context.Context, tempDir string, chunksPerTF int, srcCS, sinkCS chunks.ChunkStore, walkAddrs WalkAddrs, rootChunkHash hash.Hash, statsCh chan Stats) (*Puller, error)

NewPuller creates a new Puller instance to do the syncing. If a nil Puller is returned without an error, there is nothing to pull and the sinkCS is already up to date.
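
A hedged, self-contained sketch of the full lifecycle: construct the Puller, drain the stats channel, then run Pull. The github.com/dolthub/dolt/go/store/... import paths, the chunksPerTF value, and every identifier not shown in this listing are assumptions.

package example

import (
	"context"
	"errors"
	"log"

	"github.com/dolthub/dolt/go/store/chunks"
	"github.com/dolthub/dolt/go/store/datas/pull"
	"github.com/dolthub/dolt/go/store/hash"
)

// syncRoot is an illustrative sketch, not part of this package.
func syncRoot(ctx context.Context, tempDir string, srcCS, sinkCS chunks.ChunkStore, walkAddrs pull.WalkAddrs, rootHash hash.Hash) error {
	statsCh := make(chan pull.Stats, 16)
	go func() {
		for s := range statsCh {
			log.Printf("sent %d bytes (%.1f B/s); fetched %d of %d source chunks",
				s.FinishedSendBytes, s.SendBytesPerSec, s.FetchedSourceChunks, s.TotalSourceChunks)
		}
	}()

	// 256*1024 chunks per table file is an arbitrary illustrative value.
	plr, err := pull.NewPuller(ctx, tempDir, 256*1024, srcCS, sinkCS, walkAddrs, rootHash, statsCh)
	if errors.Is(err, pull.ErrDBUpToDate) {
		return nil // nothing to pull
	}
	if err != nil {
		return err
	}

	plr.Logf("pulling chunks reachable from %s", rootHash.String())
	err = plr.Pull(ctx)
	close(statsCh) // closed only after Pull returns in this sketch
	return err
}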

func (*Puller) Logf

func (p *Puller) Logf(fmt string, args ...interface{})

func (*Puller) Pull

func (p *Puller) Pull(ctx context.Context) error

Pull executes the sync operation.

type Stats

type Stats struct {
	FinishedSendBytes uint64
	BufferedSendBytes uint64
	SendBytesPerSec   float64

	TotalSourceChunks        uint64
	FetchedSourceChunks      uint64
	FetchedSourceBytes       uint64
	FetchedSourceBytesPerSec float64
}
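
A hypothetical formatting helper, not part of this package, for logging a Stats snapshot; it assumes the standard fmt package and the import path used in the sketches above.

// formatStats renders a Stats value as a single log line.
func formatStats(s pull.Stats) string {
	return fmt.Sprintf(
		"send: %d bytes finished, %d buffered (%.1f B/s); source: %d/%d chunks, %d bytes (%.1f B/s)",
		s.FinishedSendBytes, s.BufferedSendBytes, s.SendBytesPerSec,
		s.FetchedSourceChunks, s.TotalSourceChunks, s.FetchedSourceBytes, s.FetchedSourceBytesPerSec)
}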

type TableFileEvent

type TableFileEvent struct {
	EventType  CloneTableFileEvent
	TableFiles []nbs.TableFile
	Stats      []iohelp.ReadStats
}
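
An illustrative consumer for events received from the channel passed to Clone. It assumes the exported constants at the top of this page are the possible EventType values.

// logEvent is a hypothetical consumer, not part of this package.
func logEvent(evt pull.TableFileEvent) {
	switch evt.EventType {
	case pull.Listed:
		log.Printf("source listed %d table files", len(evt.TableFiles))
	case pull.DownloadStart:
		log.Printf("starting download of %d table files", len(evt.TableFiles))
	case pull.DownloadStats:
		log.Printf("download progress: %d read-stat samples", len(evt.Stats))
	case pull.DownloadSuccess:
		log.Printf("finished downloading %d table files", len(evt.TableFiles))
	case pull.DownloadFailed:
		log.Printf("download failed for %d table files", len(evt.TableFiles))
	}
}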

type WalkAddrs

type WalkAddrs func(chunks.Chunk, func(hash.Hash, bool) error) error
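
A sketch of a WalkAddrs implementation. A real walker must decode each chunk in whatever serialization format the stores use and report every hash it references; chunkRefs below is a hypothetical stand-in for that decoder, and the meaning of the callback's bool argument is not documented in this listing, so the false passed here is an assumption.

// walkAddrsSketch is illustrative only; chunkRefs is a hypothetical decoder
// returning the []hash.Hash referenced by a chunk.
func walkAddrsSketch(c chunks.Chunk, cb func(h hash.Hash, flag bool) error) error {
	for _, ref := range chunkRefs(c) {
		if err := cb(ref, false); err != nil {
			return err
		}
	}
	return nil
}

A function with this signature is assignable to WalkAddrs and can be passed to Pull, PullWithoutBatching, or NewPuller.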
