walg

package module
v0.1.7
Published: Feb 19, 2018 License: Apache-2.0 Imports: 38 Imported by: 0

README

WAL-G

WAL-G is an archival restoration tool for Postgres.

WAL-G is the successor of WAL-E with a number of key differences. WAL-G uses LZ4 compression, multiple processors and non-exclusive base backups for Postgres. More information on the design and implementation of WAL-G can be found on the Citus Data blog post "Introducing WAL-G by Citus: Faster Disaster Recovery for Postgres".

Installation

A precompiled binary of the latest version of WAL-G for Linux AMD64 can be obtained under the Releases tab.

To decompress the binary, use:

tar -zxvf wal-g.linux-amd64.tar.gz

For other systems, please consult the Development section for information on building WAL-G from source.

Configuration

Required

To connect to Amazon S3, WAL-G requires that these variables be set:

  • WALE_S3_PREFIX (e.g., s3://bucket/path/to/folder)

WAL-G determines AWS credentials the same way other AWS tools do. You can set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY (optionally with AWS_SECURITY_TOKEN), use ~/.aws/credentials (optionally with AWS_PROFILE), or set nothing to fetch credentials automatically from the EC2 metadata service.
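
For example, static credentials can be supplied through the environment (the values below are placeholders):

AWS_ACCESS_KEY_ID: "<aws-key>"
AWS_SECRET_ACCESS_KEY: "<aws-secret>"
WALE_S3_PREFIX: "s3://my-bucket/path/to/folder"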

WAL-G uses the usual PostgreSQL environment variables to configure its connection, including PGHOST, PGPORT, PGUSER, and PGPASSWORD/PGPASSFILE/~/.pgpass.

WAL-G can connect over a UNIX socket; this mode is preferred for localhost connections. Set PGHOST=/var/run/postgresql to use it. WAL-G will connect over TCP if PGHOST is an IP address.

Optional

WAL-G can automatically determine the S3 bucket's region using s3:GetBucketLocation, but if you wish to avoid this API call or forbid it from the applicable IAM policy, specify:

  • AWS_REGION (e.g., us-west-2)

Concurrency values can be configured using:

  • WALG_DOWNLOAD_CONCURRENCY

To configure how many goroutines to use during backup-fetch and wal-fetch, use WALG_DOWNLOAD_CONCURRENCY. By default, WAL-G uses the minimum of the number of files to extract and 10.

  • WALG_UPLOAD_CONCURRENCY

To configure how many concurrent streams to use during backup uploading, use WALG_UPLOAD_CONCURRENCY. By default, WAL-G uses 10 streams.

  • AWS_ENDPOINT

Overrides the default hostname for connecting to an S3-compatible service, e.g., http://s3-like-service:9000.

  • AWS_S3_FORCE_PATH_STYLE

Enables path-style addressing (i.e., http://s3.amazonaws.com/BUCKET/KEY) when connecting to an S3-compatible service that lacks support for sub-domain-style bucket URLs (i.e., http://BUCKET.s3.amazonaws.com/KEY). Defaults to false.

Example: Using Minio.io S3-compatible storage

AWS_ACCESS_KEY_ID: "<minio-key>"
AWS_SECRET_ACCESS_KEY: "<minio-secret>"
WALE_S3_PREFIX: "s3://my-minio-bucket/sub-dir"
AWS_ENDPOINT: "http://minio:9000"
AWS_S3_FORCE_PATH_STYLE: "true"
AWS_REGION: us-east-1

  • WALG_S3_STORAGE_CLASS

To configure the S3 storage class used for backup files, use WALG_S3_STORAGE_CLASS. By default, WAL-G uses the "STANDARD" storage class. Other supported values include "STANDARD_IA" for Infrequent Access and "REDUCED_REDUNDANCY" for Reduced Redundancy.

  • WALE_GPG_KEY_ID

Configures the GPG key to use for encryption and decryption. By default, no encryption is used. The public keyring is cached in the file "/.walg_key_cache".

  • WALG_DELTA_MAX_STEPS

A delta backup stores the difference between the previously taken backup and the present state. WALG_DELTA_MAX_STEPS determines how many delta backups may sit between full backups. Defaults to 0. The restoration process automatically fetches all necessary deltas along with the base backup and composes a valid restored backup (you still need the WALs written after the start of the last backup to restore a consistent cluster). Delta computation is based on file-system ModTime and the LSN of pages in data files.

  • WALG_DELTA_ORIGIN

Configures the base for the next delta backup (only if WALG_DELTA_MAX_STEPS is not exceeded). WALG_DELTA_ORIGIN can be LATEST (chaining increments) or LATEST_FULL (for workloads where the volatile part of the data is compact and chaining brings no benefit: each delta overwrites the previous one). Defaults to LATEST.
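
For example, a configuration sketch allowing up to six deltas between full backups, each based on the latest full backup:

WALG_DELTA_MAX_STEPS: "6"
WALG_DELTA_ORIGIN: LATEST_FULL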

Usage

WAL-G currently supports these commands:

  • backup-fetch

When fetching base backups, the user should pass in the name of the backup and a path to a directory to extract to. If this directory does not exist, WAL-G will create it and any dependent subdirectories.

wal-g backup-fetch ~/extract/to/here example-backup

WAL-G can also fetch the latest backup using:

wal-g backup-fetch ~/extract/to/here LATEST

  • backup-push

When uploading backups to S3, the user should pass in the path containing the backup started by Postgres as in:

wal-g backup-push /backup/directory/path

If the backup is pushed from a replication standby, WAL-G tracks the server's timeline. In case of promotion to master or a timeline switch, the backup is uploaded but not finalized, and WAL-G exits with an error. In that case the logs contain the information necessary to finalize the backup. You can use the backed-up data if you clearly understand the risks involved.

  • wal-fetch

When fetching WAL archives from S3, the user should pass in the archive name and the name of the file to download to. This file should not exist as WAL-G will create it for you.

WAL-G also prefetches WAL files ahead of the requested WAL file. These files are cached in the ./.wal-g/prefetch directory. Cached files older than the most recently requested WAL file are deleted to prevent cache bloat. When a cached file is served via wal-fetch, it is removed from the cache, and the cache is refilled with new files.

wal-g wal-fetch example-archive new-file-name

  • wal-push

When uploading WAL archives to S3, the user should pass in the absolute path to where the archive is located.

wal-g wal-push /path/to/archive

  • backup-list

Lists the names and creation times of available backups.

  • delete

Deletes backups and the WALs preceding them. By default, delete performs a dry run. To actually execute the deletion, add the --confirm flag at the end of the command.

delete can operate in two modes: retain and before.

retain [FULL|FIND_FULL] %number%

If FULL is specified, WAL-G keeps the specified number of full backups and everything in between.

before [FIND_FULL] %name%

If FIND_FULL is specified, WAL-G calculates the minimum set of backups needed to keep all deltas alive. If FIND_FULL is not specified and the call would produce orphaned deltas, the call fails and prints the list of orphaned deltas.

retain 5 will fail if the 5th backup is a delta.

retain FULL 5 will keep 5 full backups and all of their deltas.

retain FIND_FULL 5 will find the full backup necessary for the 5th backup.

before base_000010000123123123 will fail if base_000010000123123123 is a delta.

before FIND_FULL base_000010000123123123 will keep everything after the base of base_000010000123123123.
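
For example, to keep the five most recent full backups and all of their deltas, first as a dry run and then for real:

wal-g delete retain FULL 5
wal-g delete retain FULL 5 --confirm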

Development

Installing

To compile and build the binary:

go get github.com/wal-g/wal-g
make all

Users can also install WAL-G by using make install. Setting the GOBIN environment variable before installing lets the user choose the installation location. By default, make install puts the compiled binary in go/bin.

export GOBIN=/usr/local/bin
make install

Testing

WAL-G relies heavily on unit tests. These tests do not require S3 configuration as the upload/download parts are tested using mocked objects. For more information on testing, please consult test_tools.

WAL-G will perform a round-trip compression/decompression test that generates a directory for data (e.g., data...), compressed files (e.g., compressed), and extracted files (e.g., extracted). These directories are only cleaned up if the files in the original data directory match the files in the extracted one.

Test coverage can be obtained using:

go test -v -coverprofile=coverage.out
go tool cover -html=coverage.out

Authors

See also the list of contributors who participated in this project.

License

This project is licensed under the Apache License, Version 2.0, but the lzo support is licensed under GPL 3.0+. Please refer to the LICENSE.md file for more details.

Acknowledgements

WAL-G would not have happened without the support of Citus Data.

WAL-G came into existence as a result of the collaboration between a summer engineering intern at Citus, Katie Li, and Daniel Farina, the original author of WAL-E who currently serves as a principal engineer on the Citus Cloud team. Citus Data also has an open source extension to Postgres that distributes database queries horizontally to deliver scale and performance.

Documentation

Constants

const (
	BlockSize uint16 = 8192
)

const SentinelSuffix = "_backup_stop_sentinel.json"

const (
	WalSegmentSize = uint64(16 * 1024 * 1024) // xlog.c line 113
)

Variables

var Compressed uint32

Compressed is used to log the compression ratio.

var CrypterUseMischief = errors.New("Crypter is not checked before use")

var DeleteConfirmed bool

var DeleteDryrun bool

var EXCLUDE = make(map[string]Empty)

EXCLUDE is a list of members excluded from the bundled backup.

var InvalidBlock = errors.New("Block is not valid")

var LatestNotFound = errors.New("No backups found")

var MAXBACKOFF = float64(32)

MAXBACKOFF is the maximum backoff time in seconds for uploads.

var MAXRETRIES = 7

MAXRETRIES is the maximum number of retries for uploads.

var SentinelNotUploaded = errors.New("Sentinel was not uploaded due to timeline change during backup")

var Uncompressed uint32

Uncompressed is used to log the compression ratio.

Functions

func ApplyFileIncrement added in v0.1.3

func ApplyFileIncrement(fileName string, increment io.Reader) error

func CheckType

func CheckType(path string) string

CheckType grabs the file extension from PATH.

func Configure

func Configure() (*TarUploader, *Prefix, error)

Configure connects to S3 and creates an uploader. It makes sure that a valid session has started; if invalid, returns AWS error and `<nil>` values.

Requires these environment variables to be set: WALE_S3_PREFIX

The upload part size in the S3 uploader can also be configured.
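
A minimal usage sketch, assuming WALE_S3_PREFIX and AWS credentials are set in the environment (not part of the package itself):

package main

import (
	"log"

	walg "github.com/wal-g/wal-g"
)

func main() {
	// Configure reads WALE_S3_PREFIX and AWS credentials from the environment.
	tu, pre, err := walg.Configure()
	if err != nil {
		log.Fatalf("walg: %v", err)
	}
	_ = tu  // *walg.TarUploader, shared by subsequent uploads
	_ = pre // *walg.Prefix, identifies the S3 bucket and server path
}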

func Connect

func Connect() (*pgx.Conn, error)

Connect establishes a connection to postgres using a UNIX socket. Must export PGHOST and run with `sudo -E -u postgres`. If PGHOST is not set or if the connection fails, an error is returned and the connection is `<nil>`.

Example: PGHOST=/var/run/postgresql or PGHOST=10.0.0.1
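
A brief sketch (imports as in the Configure example above; assumes the process runs as the postgres user with PGHOST exported):

conn, err := walg.Connect()
if err != nil {
	log.Fatalf("connect: %v", err)
}
defer conn.Close()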

func CreateUploader

func CreateUploader(svc s3iface.S3API, partsize, concurrency int) s3manageriface.UploaderAPI

CreateUploader returns an uploader with customizable concurrency and partsize.

func DecompressLz4

func DecompressLz4(d io.Writer, s io.Reader) (int64, error)

DecompressLz4 decompresses a .lz4 file. Returns an error upon failure.

func DecompressLzo

func DecompressLzo(d io.Writer, s io.Reader) error

DecompressLzo decompresses an .lzo file. Returns the first error encountered.

func DeleteBackupsBefore added in v0.1.3

func DeleteBackupsBefore(backups []BackupTime, skipline int, pre *Prefix)

func DeleteBeforeTarget added in v0.1.3

func DeleteBeforeTarget(target string, bk *Backup, pre *Prefix, find_full bool, backups []BackupTime, dryRun bool)

func DeleteWALBefore added in v0.1.3

func DeleteWALBefore(bt BackupTime, pre *Prefix)

func DeltaFetchRecursion added in v0.1.3

func DeltaFetchRecursion(backupName string, pre *Prefix, dirArc string) (lsn *uint64)

DeltaFetchRecursion composes a Backup object and recursively searches for the necessary base backup.

func DownloadWALFile added in v0.1.7

func DownloadWALFile(pre *Prefix, walFileName string, location string)

func ExtractAll

func ExtractAll(ti TarInterpreter, files []ReaderMaker) error

ExtractAll handles all files passed in. Supports `.lzo`, `.lz4`, and `.tar`. File type `.nop` is used for testing purposes. Each file is extracted in its own goroutine, and ExtractAll waits for all goroutines to finish. Returns the first error encountered.
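
A hedged restore sketch; the target directory is a placeholder, and the readers slice would in practice be populated with one entry per tar part:

// ti writes extracted files under the target directory.
ti := &walg.FileTarInterpreter{NewDir: "/restore/target"}
var readers []walg.ReaderMaker // e.g. *walg.S3ReaderMaker values
if err := walg.ExtractAll(ti, readers); err != nil {
	log.Fatalf("extract: %v", err)
}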

func FormatName

func FormatName(s string) (string, error)

FormatName grabs the name of the WAL file and returns it in the form of `base_...`. If no match is found, returns an empty string and a `NoMatchAvailableError`.

func GetBackupPath added in v0.1.4

func GetBackupPath(prefix *Prefix) *string

func GetDeltaConfig added in v0.1.3

func GetDeltaConfig() (max_deltas int, from_full bool)

func GetKeyRingId added in v0.1.3

func GetKeyRingId() string

func GetPubRingArmour added in v0.1.3

func GetPubRingArmour(keyId string) ([]byte, error)

GetPubRingArmour reads the armoured version of the key by calling the GPG process.

func GetSecretRingArmour added in v0.1.3

func GetSecretRingArmour(keyId string) ([]byte, error)

func HandleBackupFetch added in v0.1.3

func HandleBackupFetch(backupName string, pre *Prefix, dirArc string, mem bool) (lsn *uint64)

func HandleBackupList added in v0.1.3

func HandleBackupList(pre *Prefix)

func HandleBackupPush added in v0.1.3

func HandleBackupPush(dirArc string, tu *TarUploader, pre *Prefix)

func HandleDelete added in v0.1.3

func HandleDelete(pre *Prefix, args []string)

func HandleTar

func HandleTar(bundle TarBundle, path string, info os.FileInfo, crypter Crypter) error

HandleTar creates the underlying tar writer and handles one given file. Does not follow symlinks. If a file is in EXCLUDE, it will not be included in the final tarball. EXCLUDED directories are created, but their contents are not written to local disk.

func HandleWALFetch added in v0.1.3

func HandleWALFetch(pre *Prefix, walFileName string, location string, triggerPrefetch bool)

func HandleWALPrefetch added in v0.1.3

func HandleWALPrefetch(pre *Prefix, walFileName string, location string)

func HandleWALPush added in v0.1.3

func HandleWALPush(tu *TarUploader, dirArc string)

func IsPagedFile added in v0.1.3

func IsPagedFile(info os.FileInfo, fileName string) bool

func MoveFileAndCreateDirs added in v0.1.3

func MoveFileAndCreateDirs(incrementalPath string, targetPath string, fileName string) (err error)

func NextWALFileName added in v0.1.3

func NextWALFileName(name string) (nextname string, err error)

func ParseLsn added in v0.1.3

func ParseLsn(lsnStr string) (lsn uint64, err error)

func ParsePageHeader added in v0.1.3

func ParsePageHeader(data []byte) (lsn uint64, valid bool)

func ParseWALFileName added in v0.1.3

func ParseWALFileName(name string) (timelineId uint32, logSegNo uint64, err error)

func PrepareDirs added in v0.1.3

func PrepareDirs(fileName string, targetPath string) error

func PrintDeleteUsageAndFail added in v0.1.3

func PrintDeleteUsageAndFail()

func ReadDatabaseFile added in v0.1.3

func ReadDatabaseFile(fileName string, lsn *uint64, isNew bool) (io.ReadCloser, bool, int64, error)

func ResolveSymlink

func ResolveSymlink(path string) string

func UnwrapBackup added in v0.1.3

func UnwrapBackup(bk *Backup, dirArc string, pre *Prefix, sentinel S3TarBallSentinelDto)

UnwrapBackup does the job of unpacking a Backup object.

func UploadWALFile added in v0.1.5

func UploadWALFile(tu *TarUploader, dirArc string)

func WALFileName added in v0.1.3

func WALFileName(lsn uint64, conn *pgx.Conn) (string, uint32, error)

Types

type Archive

type Archive struct {
	Prefix  *Prefix
	Archive *string
}

Archive contains information associated with a WAL archive.

func (*Archive) CheckExistence

func (a *Archive) CheckExistence() (bool, error)

CheckExistence checks that the specified WAL file exists.

func (*Archive) GetArchive

func (a *Archive) GetArchive() (io.ReadCloser, error)

GetArchive downloads the specified archive from S3.
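
A hedged sketch of checking for and streaming one archive (pre comes from Configure; the key name is a placeholder):

name := "000000010000000000000001.lz4"
arc := &walg.Archive{Prefix: pre, Archive: &name}
if exists, err := arc.CheckExistence(); err == nil && exists {
	body, err := arc.GetArchive()
	if err != nil {
		log.Fatalf("get archive: %v", err)
	}
	defer body.Close()
	// read the compressed WAL segment from body
}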

type Backup

type Backup struct {
	Prefix *Prefix
	Path   *string
	Name   *string
	Js     *string
}

Backup contains information about a valid backup generated and uploaded by WAL-G.

func (*Backup) CheckExistence

func (b *Backup) CheckExistence() (bool, error)

CheckExistence checks that the specified backup exists.

func (*Backup) GetBackups added in v0.1.3

func (b *Backup) GetBackups() ([]BackupTime, error)

GetBackups receives backup descriptions and sorts them by time.

func (*Backup) GetKeys

func (b *Backup) GetKeys() ([]string, error)

GetKeys returns all the keys for the files in the specified backup.

func (*Backup) GetLatest

func (b *Backup) GetLatest() (string, error)

GetLatest sorts the backups by last modified time and returns the latest backup key.
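
A short sketch (pre comes from Configure; Path is the backup prefix computed by GetBackupPath):

bk := &walg.Backup{Prefix: pre, Path: walg.GetBackupPath(pre)}
latest, err := bk.GetLatest()
if err != nil {
	log.Fatalf("latest: %v", err)
}
log.Println("latest backup:", latest)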

func (*Backup) GetWals added in v0.1.3

func (b *Backup) GetWals(before string) ([]*s3.ObjectIdentifier, error)

GetWals returns all WAL file keys less than the provided key.

type BackupFileDescription added in v0.1.3

type BackupFileDescription struct {
	IsIncremented bool // should never be both incremented and Skipped
	IsSkipped     bool
	MTime         time.Time
}

type BackupFileList added in v0.1.3

type BackupFileList map[string]BackupFileDescription

type BackupTime

type BackupTime struct {
	Name        string
	Time        time.Time
	WalFileName string
}

BackupTime is used to sort backups by latest modified time.

func GetBackupTimeSlices added in v0.1.3

func GetBackupTimeSlices(backups []*s3.Object) []BackupTime

GetBackupTimeSlices converts S3 objects to backup descriptions.

type BgUploader added in v0.1.5

type BgUploader struct {
	// contains filtered or unexported fields
}

BgUploader holds the state of a concurrent WAL upload.

func (*BgUploader) Start added in v0.1.5

func (u *BgUploader) Start(walFilePath string, maxParallelWorkers int32, tu *TarUploader)

Start begins by checking what's inside archive_status.

func (*BgUploader) Stop added in v0.1.5

func (u *BgUploader) Stop()

func (*BgUploader) Upload added in v0.1.5

func (u *BgUploader) Upload(info os.FileInfo)

type Bundle

type Bundle struct {
	MinSize            int64
	Sen                *Sentinel
	Tb                 TarBall
	Tbm                TarBallMaker
	Crypter            OpenPGPCrypter
	Timeline           uint32
	Replica            bool
	IncrementFromLsn   *uint64
	IncrementFromFiles BackupFileList
}

A Bundle represents the directory to be walked. Contains at least one TarBall if walk has started. Each TarBall will be at least MinSize bytes. The Sentinel is used to ensure complete uploaded backups; in this case, pg_control is used as the sentinel.

func (*Bundle) CheckTimelineChanged added in v0.1.3

func (b *Bundle) CheckTimelineChanged(conn *pgx.Conn) bool

func (*Bundle) GetIncrementBaseFiles added in v0.1.3

func (b *Bundle) GetIncrementBaseFiles() BackupFileList

func (*Bundle) GetIncrementBaseLsn added in v0.1.3

func (b *Bundle) GetIncrementBaseLsn() *uint64

func (*Bundle) GetTarBall

func (b *Bundle) GetTarBall() TarBall

func (*Bundle) HandleLabelFiles

func (bundle *Bundle) HandleLabelFiles(conn *pgx.Conn) (uint64, error)

HandleLabelFiles creates the `backup_label` and `tablespace_map` files and uploads them to S3 by stopping the backup. Returns an error upon failure.

func (*Bundle) HandleSentinel

func (bundle *Bundle) HandleSentinel() error

HandleSentinel uploads the compressed tar file of `pg_control`. Will only be called after the rest of the backup is successfully uploaded to S3. Returns an error upon failure.

func (*Bundle) NewTarBall

func (b *Bundle) NewTarBall()

func (*Bundle) StartBackup added in v0.1.3

func (b *Bundle) StartBackup(conn *pgx.Conn, backup string) (backupName string, lsn uint64, version int, err error)

StartBackup starts a non-exclusive base backup immediately. When finishing the backup, `backup_label` and `tablespace_map` contents are not immediately written to a file but returned instead. Returns empty string and an error if backup fails.

func (*Bundle) TarWalker

func (bundle *Bundle) TarWalker(path string, info os.FileInfo, err error) error

TarWalker walks files provided by the passed in directory and creates compressed tar members labeled as `part_00i.tar.lzo`.

To see which files and directories are Skipped, please consult 'structs.go'. Excluded directories will be created but their contents will not be included in the tar bundle.

type CachedKey added in v0.1.3

type CachedKey struct {
	KeyId string `json:"keyId"`
	Body  []byte `json:"body"`
}

type Cleaner added in v0.1.3

type Cleaner interface {
	GetFiles(directory string) ([]string, error)
	Remove(file string)
}

type Crypter added in v0.1.3

type Crypter interface {
	IsUsed() bool
	Encrypt(writer io.WriteCloser) (io.WriteCloser, error)
	Decrypt(reader io.ReadCloser) (io.Reader, error)
}

type DelayWriteCloser added in v0.1.3

type DelayWriteCloser struct {
	// contains filtered or unexported fields
}

Encryption starts writing its header immediately, but in many places the writer is instantiated long before the pipe is ready. DelayWriteCloser therefore delays encryption initialization until the first actual write. If no write occurs, initialization is still performed, to handle zero-byte files correctly.

func (*DelayWriteCloser) Close added in v0.1.3

func (d *DelayWriteCloser) Close() error

func (*DelayWriteCloser) Write added in v0.1.3

func (d *DelayWriteCloser) Write(p []byte) (n int, err error)

type DeleteCommandArguments added in v0.1.3

type DeleteCommandArguments struct {
	// contains filtered or unexported fields
}

func ParseDeleteArguments added in v0.1.3

func ParseDeleteArguments(args []string, fallBackFunc func()) (result DeleteCommandArguments)

type Empty

type Empty struct{}

Empty is used for channel signaling.

type EmptyWriteIgnorer

type EmptyWriteIgnorer struct {
	io.WriteCloser
}

EmptyWriteIgnorer handles 0 byte write in LZ4 package to stop pipe reader/writer from blocking.

func (EmptyWriteIgnorer) Write

func (e EmptyWriteIgnorer) Write(p []byte) (int, error)

type ExponentialTicker

type ExponentialTicker struct {
	MaxRetries int

	MaxWait float64
	// contains filtered or unexported fields
}

ExponentialTicker is used for exponential backoff for uploading to S3. If the max wait time is reached, retries will occur after max wait time intervals up to max retries.

func NewExpTicker

func NewExpTicker(retries int, wait float64) *ExponentialTicker

NewExpTicker creates a new ExponentialTicker with configurable max number of retries and max wait time.

func (*ExponentialTicker) Sleep

func (et *ExponentialTicker) Sleep()

Sleep waits for the current backoff interval, in seconds.

func (*ExponentialTicker) Update

func (et *ExponentialTicker) Update()

Update increases the running count of retries by 1 and exponentially increases the wait time until the max wait time is reached.
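
A hedged retry-loop sketch (tryUpload is a hypothetical operation standing in for an S3 upload attempt):

et := walg.NewExpTicker(walg.MAXRETRIES, walg.MAXBACKOFF)
for i := 0; i <= et.MaxRetries; i++ {
	if err := tryUpload(); err == nil {
		break
	}
	et.Update() // grow the wait exponentially, capped at MaxWait
	et.Sleep()  // back off before the next attempt
}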

type FileSystemCleaner added in v0.1.3

type FileSystemCleaner struct{}

func (FileSystemCleaner) GetFiles added in v0.1.3

func (this FileSystemCleaner) GetFiles(directory string) (files []string, err error)

func (FileSystemCleaner) Remove added in v0.1.3

func (this FileSystemCleaner) Remove(file string)

type FileTarInterpreter

type FileTarInterpreter struct {
	NewDir             string
	Sentinel           S3TarBallSentinelDto
	IncrementalBaseDir string
}

FileTarInterpreter extracts input to disk.

func (*FileTarInterpreter) Interpret

func (ti *FileTarInterpreter) Interpret(tr io.Reader, cur *tar.Header) error

Interpret extracts a tar file to disk and creates needed directories. Returns the first error encountered. Calls fsync after each file is written successfully.

type IncrementalPageReader added in v0.1.3

type IncrementalPageReader struct {
	// contains filtered or unexported fields
}

IncrementalPageReader constructs a difference map during initialization and then re-reads the file. The diff map can be of 1Gb/PostgresBlockSize elements == 512Kb.

func (*IncrementalPageReader) AdvanceFileReader added in v0.1.3

func (pr *IncrementalPageReader) AdvanceFileReader() error

func (*IncrementalPageReader) Close added in v0.1.3

func (pr *IncrementalPageReader) Close() error

func (*IncrementalPageReader) DrainMoreData added in v0.1.3

func (pr *IncrementalPageReader) DrainMoreData() error

func (*IncrementalPageReader) Initialize added in v0.1.3

func (pr *IncrementalPageReader) Initialize() (size int64, err error)

func (*IncrementalPageReader) Read added in v0.1.3

func (pr *IncrementalPageReader) Read(p []byte) (n int, err error)

type Lz4CascadeClose

type Lz4CascadeClose struct {
	*lz4.Writer
	Underlying io.WriteCloser
}

Lz4CascadeClose bundles multiple closures into one function. Calling Close() will close the lz4 and underlying writer.

func (*Lz4CascadeClose) Close

func (lcc *Lz4CascadeClose) Close() error

Close returns the first encountered error from closing the lz4 writer or the underlying writer.

type Lz4CascadeClose2 added in v0.1.3

type Lz4CascadeClose2 struct {
	*lz4.Writer
	Underlying  io.WriteCloser
	Underlying2 io.WriteCloser
}

Lz4CascadeClose2 cascades closes across two independent closers. This peculiar behavior is required to handle the OpenPGP Writer's behavior.

func (*Lz4CascadeClose2) Close added in v0.1.3

func (lcc *Lz4CascadeClose2) Close() error

Close returns the first encountered error from closing the lz4 writer or the underlying writer.

type Lz4Error

type Lz4Error struct {
	// contains filtered or unexported fields
}

Lz4Error is used to catch specific errors from Lz4PipeWriter when uploading to S3. Will not retry upload if this error occurs.

func (Lz4Error) Error

func (e Lz4Error) Error() string

type LzPipeWriter

type LzPipeWriter struct {
	Input  io.Reader
	Output io.Reader
}

LzPipeWriter allows for flexibility of using compressed output. Input is read and compressed to a pipe reader.

func (*LzPipeWriter) Compress

func (p *LzPipeWriter) Compress(crypter Crypter)

Compress compresses input to a pipe reader. Output must be used or pipe will block.

type NilWriter added in v0.1.3

type NilWriter struct{}

NilWriter is a writer to /dev/null.

func (*NilWriter) Write added in v0.1.3

func (this *NilWriter) Write(p []byte) (n int, err error)

type NoMatchAvailableError

type NoMatchAvailableError struct {
	// contains filtered or unexported fields
}

NoMatchAvailableError is used to signal no match found in string.

func (NoMatchAvailableError) Error

func (e NoMatchAvailableError) Error() string

type OpenPGPCrypter added in v0.1.3

type OpenPGPCrypter struct {
	// contains filtered or unexported fields
}

OpenPGPCrypter encapsulates the specifics of the cipher method: keys, infrastructure information, etc. If many encryption methods come into use, it would be worth extracting an interface.

func (*OpenPGPCrypter) ConfigureGPGCrypter added in v0.1.3

func (crypter *OpenPGPCrypter) ConfigureGPGCrypter()

ConfigureGPGCrypter performs internal OpenPGPCrypter initialization.

func (*OpenPGPCrypter) Decrypt added in v0.1.3

func (crypter *OpenPGPCrypter) Decrypt(reader io.ReadCloser) (io.Reader, error)

Decrypt creates a decrypted reader from an ordinary reader.

func (*OpenPGPCrypter) Encrypt added in v0.1.3

func (crypter *OpenPGPCrypter) Encrypt(writer io.WriteCloser) (io.WriteCloser, error)

Encrypt creates an encrypting writer from an ordinary writer.

func (*OpenPGPCrypter) IsUsed added in v0.1.3

func (crypter *OpenPGPCrypter) IsUsed() bool

IsUsed checks whether use of the Crypter is necessary. Must be called prior to any other crypter call.

type Prefix

type Prefix struct {
	Svc    s3iface.S3API
	Bucket *string
	Server *string
}

Prefix contains the S3 service client, bucket, and server string.

type RaskyReader

type RaskyReader struct {
	R io.Reader
}

RaskyReader handles cases when the Rasky lzo package crashes. Occurs if byte size is too small (1-5).

func (*RaskyReader) Read

func (r *RaskyReader) Read(p []byte) (int, error)

Read ensures all bytes get read for the Rasky package.

type ReadCascadeClose added in v0.1.3

type ReadCascadeClose struct {
	io.Reader
	io.Closer
}

ReadCascadeClose composes an io.ReadCloser from two parts.

type ReaderMaker

type ReaderMaker interface {
	Reader() (io.ReadCloser, error)
	Format() string
	Path() string
}

ReaderMaker is the generic interface used by extract. It allows for ease of handling different file formats.

type S3ReaderMaker

type S3ReaderMaker struct {
	Backup     *Backup
	Key        *string
	FileFormat string
}

S3ReaderMaker creates readers for backup files stored in S3.

func (*S3ReaderMaker) Format

func (s *S3ReaderMaker) Format() string

func (*S3ReaderMaker) Path

func (s *S3ReaderMaker) Path() string

func (*S3ReaderMaker) Reader

func (s *S3ReaderMaker) Reader() (io.ReadCloser, error)

Reader creates a new S3 reader for each S3 object.

type S3TarBall

type S3TarBall struct {
	Lsn              *uint64
	IncrementFromLsn *uint64
	IncrementFrom    string
	Files            BackupFileList
	// contains filtered or unexported fields
}

S3TarBall represents a tar file that is going to be uploaded to S3.

func (*S3TarBall) BaseDir

func (s *S3TarBall) BaseDir() string

func (*S3TarBall) CloseTar

func (s *S3TarBall) CloseTar() error

CloseTar closes the tar writer, flushing any unwritten data to the underlying writer before also closing the underlying writer.

func (*S3TarBall) Finish

func (s *S3TarBall) Finish(sentinel *S3TarBallSentinelDto) error

Finish writes an empty .json file and uploads it with the backup name. Finish waits until all tar file parts have been uploaded. The JSON file is only uploaded if all other parts of the backup are present in S3; otherwise, an alert is printed with the corresponding error.

func (*S3TarBall) GetFiles added in v0.1.3

func (b *S3TarBall) GetFiles() BackupFileList

func (*S3TarBall) Nop

func (s *S3TarBall) Nop() bool

func (*S3TarBall) Number

func (s *S3TarBall) Number() int

func (*S3TarBall) SetFiles added in v0.1.3

func (b *S3TarBall) SetFiles(files BackupFileList)

func (*S3TarBall) SetSize

func (s *S3TarBall) SetSize(i int64)

func (*S3TarBall) SetUp

func (s *S3TarBall) SetUp(crypter Crypter, names ...string)

SetUp creates a new tar writer and starts upload to S3. Upload will block until the tar file is finished writing. If a name for the file is not given, the default name is of the form `part_....tar.lz4`.

func (*S3TarBall) Size

func (s *S3TarBall) Size() int64

func (*S3TarBall) StartUpload

func (s *S3TarBall) StartUpload(name string, crypter Crypter) io.WriteCloser

StartUpload creates a lz4 writer and runs upload in the background once a compressed tar member is finished writing.

func (*S3TarBall) Trim

func (s *S3TarBall) Trim() string

func (*S3TarBall) Tw

func (s *S3TarBall) Tw() *tar.Writer

type S3TarBallMaker

type S3TarBallMaker struct {
	BaseDir          string
	Trim             string
	BkupName         string
	Tu               *TarUploader
	Lsn              *uint64
	IncrementFromLsn *uint64
	IncrementFrom    string
	Files            BackupFileList
	// contains filtered or unexported fields
}

S3TarBallMaker creates tarballs that are uploaded to S3.

func (*S3TarBallMaker) Make

func (s *S3TarBallMaker) Make() TarBall

Make returns a tarball with required S3 fields.

type S3TarBallSentinelDto added in v0.1.3

type S3TarBallSentinelDto struct {
	LSN               *uint64
	IncrementFromLSN  *uint64 `json:"DeltaFromLSN,omitempty"`
	IncrementFrom     *string `json:"DeltaFrom,omitempty"`
	IncrementFullName *string `json:"DeltaFullName,omitempty"`
	IncrementCount    *int    `json:"DeltaCount,omitempty"`

	Files BackupFileList

	PgVersion int
	FinishLSN *uint64
}

func (*S3TarBallSentinelDto) IsIncremental added in v0.1.3

func (dto *S3TarBallSentinelDto) IsIncremental() bool

type Sentinel

type Sentinel struct {
	Info os.FileInfo
	// contains filtered or unexported fields
}

Sentinel is used to signal completion of a walked directory.

type TarBall

type TarBall interface {
	SetUp(crypter Crypter, args ...string)
	CloseTar() error
	Finish(sentinel *S3TarBallSentinelDto) error
	BaseDir() string
	Trim() string
	Nop() bool
	Number() int
	Size() int64
	SetSize(int64)
	Tw() *tar.Writer
	SetFiles(files BackupFileList)
	GetFiles() BackupFileList
}

A TarBall represents one tar file.

type TarBallMaker

type TarBallMaker interface {
	Make() TarBall
}

TarBallMaker is used to allow for flexible creation of different TarBalls.

type TarBundle

type TarBundle interface {
	NewTarBall()
	GetTarBall() TarBall
	GetIncrementBaseLsn() *uint64
	GetIncrementBaseFiles() BackupFileList
}

TarBundle represents one completed directory.

type TarInterpreter

type TarInterpreter interface {
	Interpret(r io.Reader, hdr *tar.Header) error
}

TarInterpreter behaves differently for different file types.

type TarUploader

type TarUploader struct {
	Upl          s3manageriface.UploaderAPI
	MaxRetries   int
	MaxWait      float64
	StorageClass string
	Success      bool
	// contains filtered or unexported fields
}

TarUploader contains fields associated with uploading tarballs. Multiple tarballs can share one uploader. Must call CreateUploader() in 'upload.go'.

func NewTarUploader

func NewTarUploader(svc s3iface.S3API, bucket, server, region string, r int, w float64) *TarUploader

NewTarUploader creates a new tar uploader without the actual S3 uploader. CreateUploader() is used to configure byte size and concurrency streams for the uploader.

func (*TarUploader) Clone added in v0.1.7

func (tu *TarUploader) Clone() *TarUploader

func (*TarUploader) Finish

func (tu *TarUploader) Finish()

Finish waits for all waiting parts to be uploaded. If an error occurs, prints alert to stderr.

func (*TarUploader) UploadWal

func (tu *TarUploader) UploadWal(path string) (string, error)

UploadWal compresses a WAL file using LZ4 and uploads to S3. Returns the first error encountered and an empty string upon failure.
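
A hedged end-to-end sketch (the WAL path is a placeholder):

tu, _, err := walg.Configure()
if err != nil {
	log.Fatalf("walg: %v", err)
}
location, err := tu.UploadWal("/path/to/archive/000000010000000000000001")
if err != nil {
	log.Fatalf("upload: %v", err)
}
tu.Finish() // wait for all pending parts
log.Println("uploaded to", location)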

type TimeSlice

type TimeSlice []BackupTime

TimeSlice is a slice of backups sortable by last modified time.

func (TimeSlice) Len

func (p TimeSlice) Len() int

func (TimeSlice) Less

func (p TimeSlice) Less(i, j int) bool

func (TimeSlice) Swap

func (p TimeSlice) Swap(i, j int)
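
Because TimeSlice implements sort.Interface, backups can be ordered with the standard library. A brief sketch (assumes bk is a *walg.Backup configured with a Prefix, as in the GetLatest example):

backups, err := bk.GetBackups()
if err != nil {
	log.Fatalf("list: %v", err)
}
sort.Sort(walg.TimeSlice(backups)) // order by modified time, per Less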

type UnsetEnvVarError

type UnsetEnvVarError struct {
	// contains filtered or unexported fields
}

UnsetEnvVarError indicates that environment variables required by WAL-G are unset.

func (UnsetEnvVarError) Error

func (e UnsetEnvVarError) Error() string

type UnsupportedFileTypeError

type UnsupportedFileTypeError struct {
	Path       string
	FileFormat string
}

UnsupportedFileTypeError is used to signal file types that are unsupported by WAL-G.

func (UnsupportedFileTypeError) Error

func (e UnsupportedFileTypeError) Error() string

type WalFiles

type WalFiles interface {
	CheckExistence() (bool, error)
}

WalFiles represents any file generated by WAL-G.

type ZeroReader

type ZeroReader struct{}

ZeroReader generates a slice of zeroes. Used to pad the tar in cases where the length of a file changes.

func (*ZeroReader) Read

func (z *ZeroReader) Read(p []byte) (int, error)

Directories

Path Synopsis
cmd
