restic

package v0.0.0-...-53483ec

Published: Feb 27, 2023 License: BSD-2-Clause Imports: 35 Imported by: 3

Documentation

Overview

Package restic is the top level package for the restic backup program, please see https://github.com/restic/restic for more information.

This package exposes the main objects that are handled in restic.

Constants

const RepoVersion = 1

RepoVersion is the version that is written to the config when a repository is newly created with Init().

Variables

var ErrNoSnapshotFound = errors.New("no snapshot found")

ErrNoSnapshotFound is returned when no snapshot for the given criteria could be found.

Functions

func ApplyPolicy

func ApplyPolicy(list Snapshots, p ExpirePolicy) (keep, remove Snapshots, reasons []KeepReason)

ApplyPolicy returns the snapshots from list that are to be kept and removed according to the policy p. list is sorted in the process. reasons contains the reason to keep each snapshot; it is in the same order as keep.
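
For illustration, a minimal sketch of applying a retention policy (assuming the usual import path github.com/restic/restic/internal/restic; the policy values and the "keep" tag are placeholders, and the snapshot list would normally come from a repository, e.g. via FindFilteredSnapshots):

	package main

	import (
		"fmt"

		"github.com/restic/restic/internal/restic"
	)

	func main() {
		// Normally loaded from a repository; left empty in this sketch.
		var snapshots restic.Snapshots

		policy := restic.ExpirePolicy{
			Last:  3,                          // keep the three most recent snapshots
			Daily: 7,                          // plus the last seven daily snapshots
			Tags:  []restic.TagList{{"keep"}}, // plus everything tagged "keep"
		}

		keep, remove, reasons := restic.ApplyPolicy(snapshots, policy)
		fmt.Printf("keeping %d, removing %d snapshots\n", len(keep), len(remove))
		for i, sn := range keep {
			fmt.Printf("%v kept because of %v\n", sn, reasons[i].Matches)
		}
	}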

func CiphertextLength

func CiphertextLength(plaintextSize int) int

CiphertextLength returns the encrypted length of a blob with plaintextSize bytes.

func Find

func Find(ctx context.Context, be Lister, t FileType, prefix string) (string, error)

Find loads the list of all files of type t and searches for names which start with prefix. If none is found, an empty string and a *NoIDByPrefixError are returned. If more than one is found, an empty string and a *MultipleIDMatchesError are returned.
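
A hedged sketch of resolving an abbreviated snapshot ID; be stands for any opened backend (every Backend satisfies Lister), and the helper name and prefix parameter are illustrative only:

	package example

	import (
		"context"

		"github.com/restic/restic/internal/restic"
	)

	// findSnapshotFile resolves the full snapshot file name for an ID prefix.
	func findSnapshotFile(ctx context.Context, be restic.Lister, prefix string) (string, error) {
		name, err := restic.Find(ctx, be, restic.SnapshotFile, prefix)
		if err != nil {
			// err may be a *restic.NoIDByPrefixError or a *restic.MultipleIDMatchesError.
			return "", err
		}
		return name, nil
	}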

func FindUsedBlobs

func FindUsedBlobs(ctx context.Context, repo TreeLoader, treeIDs IDs, blobs BlobSet, p *progress.Counter) error

FindUsedBlobs traverses the trees identified by treeIDs and adds all seen blobs (trees and data blobs) to the set blobs. Already seen tree blobs will not be visited again.

func FixTime

func FixTime(t time.Time) time.Time

FixTime returns a time.Time which can safely be used to marshal as JSON. If the timestamp is earlier than year zero, the year is set to zero. In the same way, if the year is larger than 9999, the year is set to 9999. Other than the year nothing is changed.

func ForAllLocks

func ForAllLocks(ctx context.Context, repo Repository, excludeID *ID, fn func(ID, *Lock, error) error) error

ForAllLocks reads all locks in parallel and calls the given callback. It is guaranteed that the function is not run concurrently. If the callback returns an error, this function is cancelled and also returns that error. If a lock ID is passed via excludeID, it will be ignored.

func ForAllSnapshots

func ForAllSnapshots(ctx context.Context, repo Repository, excludeIDs IDSet, fn func(ID, *Snapshot, error) error) error

ForAllSnapshots reads all snapshots in parallel and calls the given function. It is guaranteed that the function is not run concurrently. If the called function returns an error, this function is cancelled and also returns this error. If a snapshot ID is in excludeIDs, it will be ignored.
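
A small sketch of collecting all snapshots of a repository with ForAllSnapshots; the helper is hypothetical and passes an empty exclude set:

	package example

	import (
		"context"

		"github.com/restic/restic/internal/restic"
	)

	// collectSnapshots gathers every snapshot of the repository into a slice.
	func collectSnapshots(ctx context.Context, repo restic.Repository) (restic.Snapshots, error) {
		var list restic.Snapshots
		err := restic.ForAllSnapshots(ctx, repo, restic.NewIDSet(), func(id restic.ID, sn *restic.Snapshot, err error) error {
			if err != nil {
				return err
			}
			list = append(list, sn)
			return nil
		})
		return list, err
	}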

func Getxattr

func Getxattr(path, name string) ([]byte, error)

Getxattr retrieves extended attribute data associated with path.

func GroupSnapshots

func GroupSnapshots(snapshots Snapshots, options string) (map[string]Snapshots, bool, error)

GroupSnapshots takes a list of snapshots and a grouping criterion and groups the snapshots accordingly.

func IsAlreadyLocked

func IsAlreadyLocked(err error) bool

IsAlreadyLocked returns true iff err is an instance of ErrAlreadyLocked.

func Listxattr

func Listxattr(path string) ([]string, error)

Listxattr retrieves a list of names of extended attributes associated with the given path in the file system.

func NewBlobBuffer

func NewBlobBuffer(size int) []byte

NewBlobBuffer returns a buffer that is large enough to hold a blob of size plaintext bytes, including the crypto overhead.

func PlaintextLength

func PlaintextLength(ciphertextSize int) int

PlaintextLength returns the plaintext length of a blob with ciphertextSize bytes.
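
A small sketch of how NewBlobBuffer, CiphertextLength and PlaintextLength relate; the expected output follows from the documented behaviour and is not verified here:

	package main

	import (
		"fmt"

		"github.com/restic/restic/internal/restic"
	)

	func main() {
		const plaintextSize = 4096

		// NewBlobBuffer includes the crypto overhead on top of the plaintext size.
		buf := restic.NewBlobBuffer(plaintextSize)
		fmt.Println(len(buf) == restic.CiphertextLength(plaintextSize)) // expected: true
		fmt.Println(restic.PlaintextLength(len(buf)) == plaintextSize)  // expected: true
	}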

func PrefixLength

func PrefixLength(ctx context.Context, be Lister, t FileType) (int, error)

PrefixLength returns the number of bytes required so that all prefixes of all names of type t are unique.

func ReadAt

func ReadAt(ctx context.Context, be Backend, h Handle, offset int64, p []byte) (n int, err error)

ReadAt reads from the backend handle h at the given position.

func ReaderAt

func ReaderAt(ctx context.Context, be Backend, h Handle) io.ReaderAt

ReaderAt returns an io.ReaderAt for a file in the backend. The returned reader should not escape the calling function, to avoid unexpected interactions with the embedded context.

func RemoveAllLocks

func RemoveAllLocks(ctx context.Context, repo Repository) error

RemoveAllLocks removes all locks forcefully.

func RemoveStaleLocks

func RemoveStaleLocks(ctx context.Context, repo Repository) error

RemoveStaleLocks deletes all locks detected as stale from the repository.

func Setxattr

func Setxattr(path, name string, data []byte) error

Setxattr associates name and data together as an attribute of path.

func StreamTrees

func StreamTrees(ctx context.Context, wg *errgroup.Group, repo TreeLoader, trees IDs, skip func(tree ID) bool, p *progress.Counter) <-chan TreeItem

StreamTrees iteratively loads the given trees and their subtrees. The skip method is guaranteed to always be called from the same goroutine. To shutdown the started goroutines, either read all items from the channel or cancel the context. Then `Wait()` on the errgroup until all goroutines were stopped.
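
A hedged sketch of consuming StreamTrees; repo is any TreeLoader, roots holds root tree IDs, and passing nil for the progress counter is assumed to be acceptable:

	package example

	import (
		"context"

		"golang.org/x/sync/errgroup"

		"github.com/restic/restic/internal/restic"
	)

	// walkTrees streams all trees reachable from the given roots and visits
	// each TreeItem exactly once (hypothetical helper).
	func walkTrees(ctx context.Context, repo restic.TreeLoader, roots restic.IDs) error {
		wg, ctx := errgroup.WithContext(ctx)
		seen := restic.NewIDSet()

		ch := restic.StreamTrees(ctx, wg, repo, roots, func(tree restic.ID) bool {
			// skip is always called from the same goroutine, so the set
			// needs no additional locking here.
			if seen.Has(tree) {
				return true
			}
			seen.Insert(tree)
			return false
		}, nil)

		// Read all items from the channel, then wait for the goroutines.
		var firstErr error
		for item := range ch {
			if item.Error != nil && firstErr == nil {
				firstErr = item.Error
				continue
			}
			_ = item.Tree // the decoded tree for item.ID
		}
		if err := wg.Wait(); err != nil {
			return err
		}
		return firstErr
	}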

func TestDisableCheckPolynomial

func TestDisableCheckPolynomial(t testing.TB)

TestDisableCheckPolynomial disables the check of the polynomial used for the chunker.

func TestSetLockTimeout

func TestSetLockTimeout(t testing.TB, d time.Duration)

TestSetLockTimeout can be used to reduce the lock wait timeout for tests.

Types

type Backend

type Backend interface {
	// Location returns a string that describes the type and location of the
	// repository.
	Location() string

	// Hasher may return a hash function for calculating a content hash for the backend
	Hasher() hash.Hash

	// Test returns a boolean value indicating whether a File with the given name and type exists.
	Test(ctx context.Context, h Handle) (bool, error)

	// Remove removes a File described by h.
	Remove(ctx context.Context, h Handle) error

	// Close the backend
	Close() error

	// Save stores the data from rd under the given handle.
	Save(ctx context.Context, h Handle, rd RewindReader) error

	// Load runs fn with a reader that yields the contents of the file at h at the
	// given offset. If length is larger than zero, only a portion of the file
	// is read.
	//
	// The function fn may be called multiple times during the same Load invocation
	// and therefore must be idempotent.
	//
	// Implementations are encouraged to use backend.DefaultLoad
	Load(ctx context.Context, h Handle, length int, offset int64, fn func(rd io.Reader) error) error

	// Stat returns information about the File identified by h.
	Stat(ctx context.Context, h Handle) (FileInfo, error)

	// List runs fn for each file in the backend which has the type t. When an
	// error occurs (or fn returns an error), List stops and returns it.
	//
	// The function fn is called exactly once for each file during successful
	// execution and at most once in case of an error.
	//
	// The function fn is called in the same Goroutine that List() is called
	// from.
	List(ctx context.Context, t FileType, fn func(FileInfo) error) error

	// IsNotExist returns true if the error was caused by a non-existing file
	// in the backend.
	IsNotExist(err error) bool

	// Delete removes all data in the backend.
	Delete(ctx context.Context) error
}

Backend is used to store and access data.

Backend operations that return an error will be retried when a Backend is wrapped in a RetryBackend. To prevent that from happening, the operations should return a github.com/cenkalti/backoff/v4.PermanentError. Errors from the context package need not be wrapped, as context cancellation is checked separately by the retrying logic.
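
A hedged sketch of reading a whole file through Backend.Load, assuming (as the Load documentation implies) that a length of zero requests the complete file:

	package example

	import (
		"context"
		"io"

		"github.com/restic/restic/internal/restic"
	)

	// loadAll reads the complete contents of the file identified by h.
	func loadAll(ctx context.Context, be restic.Backend, h restic.Handle) ([]byte, error) {
		var buf []byte
		err := be.Load(ctx, h, 0, 0, func(rd io.Reader) error {
			// fn may be called several times during one Load, so it must be
			// idempotent: discard any previous partial result and read again.
			data, err := io.ReadAll(rd)
			if err != nil {
				return err
			}
			buf = data
			return nil
		})
		if err != nil {
			return nil, err
		}
		return buf, nil
	}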

type Blob

type Blob struct {
	BlobHandle
	Length uint
	Offset uint
}

Blob is one part of a file or a tree.

func (*Blob) Decrypt

func (blob *Blob) Decrypt(reader io.ReaderAt, key *crypto.Key) ([]byte, error)

Decrypt decrypts the blob contents.

Does not check content validity.

func (*Blob) DecryptAndCheck

func (blob *Blob) DecryptAndCheck(reader io.ReaderAt, key *crypto.Key, check bool) ([]byte, error)

DecryptAndCheck decrypts the blob contents, optionally checking if the content is valid.

func (*Blob) DecryptFromPack

func (blob *Blob) DecryptFromPack(path string, key *crypto.Key) ([]byte, error)

DecryptFromPack decrypts the blob contents from the pack file at path.

Does not check content validity.

func (Blob) String

func (b Blob) String() string

type BlobHandle

type BlobHandle struct {
	ID   ID
	Type BlobType
}

BlobHandle identifies a blob of a given type.

func NewRandomBlobHandle

func NewRandomBlobHandle() BlobHandle

func TestParseHandle

func TestParseHandle(s string, t BlobType) BlobHandle

TestParseHandle parses s as an ID, panics if that fails, and creates a BlobHandle with t.

func (BlobHandle) String

func (h BlobHandle) String() string

type BlobHandles

type BlobHandles []BlobHandle

BlobHandles is an ordered list of BlobHandles that implements sort.Interface.

func (BlobHandles) Len

func (h BlobHandles) Len() int

func (BlobHandles) Less

func (h BlobHandles) Less(i, j int) bool

func (BlobHandles) String

func (h BlobHandles) String() string

func (BlobHandles) Swap

func (h BlobHandles) Swap(i, j int)

type BlobSet

type BlobSet map[BlobHandle]struct{}

BlobSet is a set of blobs.

func NewBlobSet

func NewBlobSet(handles ...BlobHandle) BlobSet

NewBlobSet returns a new BlobSet, populated with handles.

func (BlobSet) Delete

func (s BlobSet) Delete(h BlobHandle)

Delete removes h from the set.

func (BlobSet) Equals

func (s BlobSet) Equals(other BlobSet) bool

Equals returns true iff s equals other.

func (BlobSet) Has

func (s BlobSet) Has(h BlobHandle) bool

Has returns true iff h is contained in the set.

func (BlobSet) Insert

func (s BlobSet) Insert(h BlobHandle)

Insert adds h to the set.

func (BlobSet) Intersect

func (s BlobSet) Intersect(other BlobSet) (result BlobSet)

Intersect returns a new set containing the handles that are present in both sets.

func (BlobSet) List

func (s BlobSet) List() BlobHandles

List returns a sorted slice of all BlobHandle in the set.

func (BlobSet) Merge

func (s BlobSet) Merge(other BlobSet)

Merge adds the blobs in other to the current set.

func (BlobSet) String

func (s BlobSet) String() string

func (BlobSet) Sub

func (s BlobSet) Sub(other BlobSet) (result BlobSet)

Sub returns a new set containing all handles that are present in s but not in other.
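
For illustration, a small sketch of typical BlobSet operations; the handles are derived from placeholder data via Hash:

	package main

	import (
		"fmt"

		"github.com/restic/restic/internal/restic"
	)

	func main() {
		data := restic.BlobHandle{ID: restic.Hash([]byte("foo")), Type: restic.DataBlob}
		tree := restic.BlobHandle{ID: restic.Hash([]byte("bar")), Type: restic.TreeBlob}

		s := restic.NewBlobSet(data)
		s.Insert(tree)

		other := restic.NewBlobSet(data)
		fmt.Println(s.Has(tree))               // true
		fmt.Println(s.Sub(other).List())       // only the tree handle remains
		fmt.Println(s.Intersect(other).List()) // only the data handle remains
	}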

type BlobType

type BlobType uint8

BlobType specifies what a blob stored in a pack is.

const (
	InvalidBlob BlobType = iota
	DataBlob
	TreeBlob
	NumBlobTypes // Number of types. Must be last in this enumeration.
)

These are the blob types that can be stored in a pack.

func (BlobType) MarshalJSON

func (t BlobType) MarshalJSON() ([]byte, error)

MarshalJSON encodes the BlobType into JSON.

func (BlobType) String

func (t BlobType) String() string

func (*BlobType) UnmarshalJSON

func (t *BlobType) UnmarshalJSON(buf []byte) error

UnmarshalJSON decodes the BlobType from JSON.

type ByteReader

type ByteReader struct {
	*bytes.Reader
	Len int64
	// contains filtered or unexported fields
}

ByteReader implements a RewindReader for a byte slice.

func NewByteReader

func NewByteReader(buf []byte, hasher hash.Hash) *ByteReader

NewByteReader prepares a ByteReader that can then be used to read buf.

func (*ByteReader) Hash

func (b *ByteReader) Hash() []byte

Hash returns a hash of the data if requested by the backend.

func (*ByteReader) Length

func (b *ByteReader) Length() int64

Length returns the number of bytes read from the reader after Rewind is called.

func (*ByteReader) Rewind

func (b *ByteReader) Rewind() error

Rewind restarts the reader from the beginning of the data.
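
A hedged sketch of storing a byte slice via Backend.Save using a ByteReader; it assumes NewByteReader accepts whatever the backend's Hasher() returns (including nil):

	package example

	import (
		"context"

		"github.com/restic/restic/internal/restic"
	)

	// saveRaw stores data in the backend under the given handle.
	func saveRaw(ctx context.Context, be restic.Backend, h restic.Handle, data []byte) error {
		// ByteReader implements RewindReader, so the backend can re-read the
		// data if a retry is needed.
		rd := restic.NewByteReader(data, be.Hasher())
		return be.Save(ctx, h, rd)
	}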

type Config

type Config struct {
	Version           uint        `json:"version"`
	ID                string      `json:"id"`
	ChunkerPolynomial chunker.Pol `json:"chunker_polynomial"`
}

Config contains the configuration for a repository.

func CreateConfig

func CreateConfig() (Config, error)

CreateConfig creates a config file with a randomly selected polynomial and ID.

func LoadConfig

func LoadConfig(ctx context.Context, r JSONUnpackedLoader) (Config, error)

LoadConfig loads, checks, and returns the config for a repository.

func TestCreateConfig

func TestCreateConfig(t testing.TB, pol chunker.Pol) (cfg Config)

TestCreateConfig creates a config for use within tests.

type Duration

type Duration struct {
	Hours, Days, Months, Years int
}

Duration is similar to time.Duration, except it only supports larger ranges like hours, days, months, and years.

func ParseDuration

func ParseDuration(s string) (Duration, error)

ParseDuration parses a duration from a string. The format is:

6y5m234d37h
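
For illustration, parsing a duration in this format; the expected field values follow from the format description:

	package main

	import (
		"fmt"

		"github.com/restic/restic/internal/restic"
	)

	func main() {
		d, err := restic.ParseDuration("2y5m7d3h")
		if err != nil {
			panic(err)
		}
		fmt.Println(d.Years, d.Months, d.Days, d.Hours) // expected: 2 5 7 3
		fmt.Println(d.Zero())                           // expected: false
	}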

func (*Duration) Set

func (d *Duration) Set(s string) error

Set calls ParseDuration and updates d.

func (Duration) String

func (d Duration) String() string

func (Duration) Type

func (d Duration) Type() string

Type returns the type of Duration, usable within github.com/spf13/pflag and in help texts.

func (Duration) Zero

func (d Duration) Zero() bool

Zero returns true if the duration is empty (all values are set to zero).

type ErrAlreadyLocked

type ErrAlreadyLocked struct {
	// contains filtered or unexported fields
}

ErrAlreadyLocked is returned when NewLock or NewExclusiveLock are unable to acquire the desired lock.

func (ErrAlreadyLocked) Error

func (e ErrAlreadyLocked) Error() string

type ExpirePolicy

type ExpirePolicy struct {
	Last          int       // keep the last n snapshots
	Hourly        int       // keep the last n hourly snapshots
	Daily         int       // keep the last n daily snapshots
	Weekly        int       // keep the last n weekly snapshots
	Monthly       int       // keep the last n monthly snapshots
	Yearly        int       // keep the last n yearly snapshots
	Within        Duration  // keep snapshots made within this duration
	WithinHourly  Duration  // keep hourly snapshots made within this duration
	WithinDaily   Duration  // keep daily snapshots made within this duration
	WithinWeekly  Duration  // keep weekly snapshots made within this duration
	WithinMonthly Duration  // keep monthly snapshots made within this duration
	WithinYearly  Duration  // keep yearly snapshots made within this duration
	Tags          []TagList // keep all snapshots that include at least one of the tag lists.
}

ExpirePolicy configures which snapshots should be automatically removed.

func (ExpirePolicy) Empty

func (e ExpirePolicy) Empty() bool

Empty returns true iff no policy has been configured (all values zero).

func (ExpirePolicy) String

func (e ExpirePolicy) String() (s string)

func (ExpirePolicy) Sum

func (e ExpirePolicy) Sum() int

Sum returns the maximum number of snapshots to be kept according to this policy.

type ExtendedAttribute

type ExtendedAttribute struct {
	Name  string `json:"name"`
	Value []byte `json:"value"`
}

ExtendedAttribute is a tuple storing the xattr name and value.

type FileInfo

type FileInfo struct {
	Size int64
	Name string
}

FileInfo contains information about a file in the backend.

type FileReader

type FileReader struct {
	io.ReadSeeker
	Len int64
	// contains filtered or unexported fields
}

FileReader implements a RewindReader for an open file.

func NewFileReader

func NewFileReader(f io.ReadSeeker, hash []byte) (*FileReader, error)

NewFileReader wraps f in a *FileReader.

func (*FileReader) Hash

func (f *FileReader) Hash() []byte

Hash returns a hash of the data if requested by the backend.

func (*FileReader) Length

func (f *FileReader) Length() int64

Length returns the length of the file.

func (*FileReader) Rewind

func (f *FileReader) Rewind() error

Rewind seeks to the beginning of the file.

type FileType

type FileType string

FileType is the type of a file in the backend.

const (
	PackFile     FileType = "data" // use data, as packs are stored under /data in repo
	KeyFile      FileType = "key"
	LockFile     FileType = "lock"
	SnapshotFile FileType = "snapshot"
	IndexFile    FileType = "index"
	ConfigFile   FileType = "config"
)

These are the different data types a backend can store.

type Handle

type Handle struct {
	Type              FileType
	ContainedBlobType BlobType
	Name              string
}

Handle is used to store and access data in a backend.

func (Handle) String

func (h Handle) String() string

func (Handle) Valid

func (h Handle) Valid() error

Valid returns an error if h is not valid.

type HardlinkIndex

type HardlinkIndex struct {
	Index map[HardlinkKey]string
	// contains filtered or unexported fields
}

HardlinkIndex contains a list of inodes, the devices these inodes are on, and the associated file names.

func NewHardlinkIndex

func NewHardlinkIndex() *HardlinkIndex

NewHardlinkIndex creates a new index for hard links.

func (*HardlinkIndex) Add

func (idx *HardlinkIndex) Add(inode uint64, device uint64, name string)

Add adds a link to the index.

func (*HardlinkIndex) GetFilename

func (idx *HardlinkIndex) GetFilename(inode uint64, device uint64) string

GetFilename obtains the filename from the index.

func (*HardlinkIndex) Has

func (idx *HardlinkIndex) Has(inode uint64, device uint64) bool

Has checks whether the link already exists in the index.

func (*HardlinkIndex) Remove

func (idx *HardlinkIndex) Remove(inode uint64, device uint64)

Remove removes a link from the index.

type HardlinkKey

type HardlinkKey struct {
	Inode, Device uint64
}

HardlinkKey is a composed key for finding inodes on a specific device.

type ID

type ID [idSize]byte

ID references content within a repository.

func FindLatestSnapshot

func FindLatestSnapshot(ctx context.Context, repo Repository, targets []string, tagLists []TagList, hostnames []string, timeStampLimit *time.Time) (ID, error)

FindLatestSnapshot finds the latest snapshot, with optional target/directory, tag, hostname, and timestamp filters.

func FindSnapshot

func FindSnapshot(ctx context.Context, repo Repository, s string) (ID, error)

FindSnapshot takes a string and tries to find a snapshot whose ID matches the string as closely as possible.

func Hash

func Hash(data []byte) ID

Hash returns the ID for data.

func IDFromHash

func IDFromHash(hash []byte) (id ID)

IDFromHash returns the ID for the hash.

func NewRandomID

func NewRandomID() ID

NewRandomID returns a randomly generated ID. When reading from rand fails, the function panics.

func ParseID

func ParseID(s string) (ID, error)

ParseID converts the given string to an ID.
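
A small sketch of computing, formatting and parsing IDs; the input bytes are placeholders:

	package main

	import (
		"fmt"

		"github.com/restic/restic/internal/restic"
	)

	func main() {
		id := restic.Hash([]byte("hello world"))
		fmt.Println(id.String()) // full hex representation
		fmt.Println(id.Str())    // shortened form

		parsed, err := restic.ParseID(id.String())
		if err != nil {
			panic(err)
		}
		fmt.Println(parsed.Equal(id)) // expected: true
	}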

func TestParseID

func TestParseID(s string) ID

TestParseID parses s as an ID and panics if that fails.

func (*ID) DirectoryPrefix

func (id *ID) DirectoryPrefix() string

func (ID) Equal

func (id ID) Equal(other ID) bool

Equal compares an ID to another ID.

func (ID) EqualString

func (id ID) EqualString(other string) (bool, error)

EqualString compares this ID to another one, given as a string.

func (ID) IsNull

func (id ID) IsNull() bool

IsNull returns true iff id only consists of null bytes.

func (ID) MarshalJSON

func (id ID) MarshalJSON() ([]byte, error)

MarshalJSON returns the JSON encoding of id.

func (*ID) Str

func (id *ID) Str() string

Str returns the shortened string version of id.

func (ID) String

func (id ID) String() string

func (*ID) UnmarshalJSON

func (id *ID) UnmarshalJSON(b []byte) error

UnmarshalJSON parses the JSON-encoded data and stores the result in id.

type IDSet

type IDSet map[ID]struct{}

IDSet is a set of IDs.

func NewIDSet

func NewIDSet(ids ...ID) IDSet

NewIDSet returns a new IDSet, populated with ids.

func (IDSet) Delete

func (s IDSet) Delete(id ID)

Delete removes id from the set.

func (IDSet) Equals

func (s IDSet) Equals(other IDSet) bool

Equals returns true iff s equals other.

func (IDSet) Has

func (s IDSet) Has(id ID) bool

Has returns true iff id is contained in the set.

func (IDSet) Insert

func (s IDSet) Insert(id ID)

Insert adds id to the set.

func (IDSet) Intersect

func (s IDSet) Intersect(other IDSet) (result IDSet)

Intersect returns a new set containing the IDs that are present in both sets.

func (IDSet) List

func (s IDSet) List() IDs

List returns a slice of all IDs in the set.

func (IDSet) Merge

func (s IDSet) Merge(other IDSet)

Merge adds the IDs in other to the current set.

func (IDSet) String

func (s IDSet) String() string

func (IDSet) Sub

func (s IDSet) Sub(other IDSet) (result IDSet)

Sub returns a new set containing all IDs that are present in s but not in other.

type IDs

type IDs []ID

IDs is an ordered list of IDs that implements sort.Interface.

func (IDs) Len

func (ids IDs) Len() int

func (IDs) Less

func (ids IDs) Less(i, j int) bool

func (IDs) String

func (ids IDs) String() string

func (IDs) Swap

func (ids IDs) Swap(i, j int)

func (IDs) Uniq

func (ids IDs) Uniq() (list IDs)

Uniq returns list without duplicate IDs. The returned list retains the order of the original list so that the order of the first occurrence of each ID stays the same.

type JSONUnpackedLoader

type JSONUnpackedLoader interface {
	LoadJSONUnpacked(context.Context, FileType, ID, interface{}) error
}

JSONUnpackedLoader loads unpacked JSON.

type KeepReason

type KeepReason struct {
	Snapshot *Snapshot `json:"snapshot"`

	// description of which criteria matched, e.g. "daily", "monthly"
	Matches []string `json:"matches"`

	// the counters after evaluating the current snapshot
	Counters struct {
		Last    int `json:"last,omitempty"`
		Hourly  int `json:"hourly,omitempty"`
		Daily   int `json:"daily,omitempty"`
		Weekly  int `json:"weekly,omitempty"`
		Monthly int `json:"monthly,omitempty"`
		Yearly  int `json:"yearly,omitempty"`
	} `json:"counters"`
}

KeepReason specifies why a particular snapshot was kept, and the counters at that point in the policy evaluation.

type Lister

type Lister interface {
	List(context.Context, FileType, func(FileInfo) error) error
}

Lister allows listing files in a backend.

type Lock

type Lock struct {
	Time      time.Time `json:"time"`
	Exclusive bool      `json:"exclusive"`
	Hostname  string    `json:"hostname"`
	Username  string    `json:"username"`
	PID       int       `json:"pid"`
	UID       uint32    `json:"uid,omitempty"`
	GID       uint32    `json:"gid,omitempty"`
	// contains filtered or unexported fields
}

Lock represents a process locking the repository for an operation.

There are two types of locks: exclusive and non-exclusive. There may be many different non-exclusive locks, but at most one exclusive lock, which can only be acquired while no non-exclusive lock is held.

A lock must be refreshed regularly so that it is not considered stale; this is done by calling Refresh at regular intervals.

func LoadLock

func LoadLock(ctx context.Context, repo Repository, id ID) (*Lock, error)

LoadLock loads and unserializes a lock from a repository.

func NewExclusiveLock

func NewExclusiveLock(ctx context.Context, repo Repository) (*Lock, error)

NewExclusiveLock returns a new, exclusive lock for the repository. If another lock (normal or exclusive) is already held by another process, ErrAlreadyLocked is returned.

func NewLock

func NewLock(ctx context.Context, repo Repository) (*Lock, error)

NewLock returns a new, non-exclusive lock for the repository. If an exclusive lock is already held by another process, ErrAlreadyLocked is returned.
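
A hedged sketch of running work under a non-exclusive repository lock; the helper is hypothetical, and long-running operations would additionally call Refresh at regular intervals:

	package example

	import (
		"context"
		"fmt"

		"github.com/restic/restic/internal/restic"
	)

	// withLock runs fn while holding a non-exclusive lock on the repository.
	func withLock(ctx context.Context, repo restic.Repository, fn func() error) error {
		lock, err := restic.NewLock(ctx, repo)
		if err != nil {
			if restic.IsAlreadyLocked(err) {
				return fmt.Errorf("repository is already locked by another process: %w", err)
			}
			return err
		}
		defer func() {
			_ = lock.Unlock() // remove the lock file when done
		}()

		return fn()
	}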

func (*Lock) Refresh

func (l *Lock) Refresh(ctx context.Context) error

Refresh refreshes the lock by creating a new file in the backend with a new timestamp. Afterwards the old lock is removed.

func (*Lock) Stale

func (l *Lock) Stale() bool

Stale returns true if the lock is stale. A lock is stale if the timestamp is older than 30 minutes or if it was created on the current machine and the process isn't alive any more.

func (Lock) String

func (l Lock) String() string

func (*Lock) Unlock

func (l *Lock) Unlock() error

Unlock removes the lock from the repository.

type MasterIndex

type MasterIndex interface {
	Has(BlobHandle) bool
	Lookup(BlobHandle) []PackedBlob
	Count(BlobType) uint
	PackSize(ctx context.Context, onlyHdr bool) map[ID]int64

	// Each returns a channel that yields all blobs known to the index. When
	// the context is cancelled, the background goroutine terminates. This
	// blocks any modification of the index.
	Each(ctx context.Context) <-chan PackedBlob
}

MasterIndex keeps track of which blobs are stored within files.

type MultipleIDMatchesError

type MultipleIDMatchesError struct {
	// contains filtered or unexported fields
}

A MultipleIDMatchesError is returned by Find() when multiple IDs with a given prefix are found.

func (*MultipleIDMatchesError) Error

func (e *MultipleIDMatchesError) Error() string

type NoIDByPrefixError

type NoIDByPrefixError struct {
	// contains filtered or unexported fields
}

A NoIDByPrefixError is returned by Find() when no ID for a given prefix could be found.

func (*NoIDByPrefixError) Error

func (e *NoIDByPrefixError) Error() string

type Node

type Node struct {
	Name               string              `json:"name"`
	Type               string              `json:"type"`
	Mode               os.FileMode         `json:"mode,omitempty"`
	ModTime            time.Time           `json:"mtime,omitempty"`
	AccessTime         time.Time           `json:"atime,omitempty"`
	ChangeTime         time.Time           `json:"ctime,omitempty"`
	UID                uint32              `json:"uid"`
	GID                uint32              `json:"gid"`
	User               string              `json:"user,omitempty"`
	Group              string              `json:"group,omitempty"`
	Inode              uint64              `json:"inode,omitempty"`
	DeviceID           uint64              `json:"device_id,omitempty"` // device id of the file, stat.st_dev
	Size               uint64              `json:"size,omitempty"`
	Links              uint64              `json:"links,omitempty"`
	LinkTarget         string              `json:"linktarget,omitempty"`
	ExtendedAttributes []ExtendedAttribute `json:"extended_attributes,omitempty"`
	Device             uint64              `json:"device,omitempty"` // in case of Type == "dev", stat.st_rdev
	Content            IDs                 `json:"content"`
	Subtree            *ID                 `json:"subtree,omitempty"`

	Error string `json:"error,omitempty"`

	Path string `json:"-"`
}

Node is a file, directory or other item in a backup.

func NodeFromFileInfo

func NodeFromFileInfo(path string, fi os.FileInfo) (*Node, error)

NodeFromFileInfo returns a new node from the given path and FileInfo. It returns the first error that is encountered, together with a node.
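
A small sketch of building a Node for a path on disk; the helper is hypothetical and uses os.Lstat so that symlinks are not followed:

	package example

	import (
		"os"

		"github.com/restic/restic/internal/restic"
	)

	// nodeForPath builds a Node describing the file system item at path.
	func nodeForPath(path string) (*restic.Node, error) {
		fi, err := os.Lstat(path)
		if err != nil {
			return nil, err
		}
		return restic.NodeFromFileInfo(path, fi)
	}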

func (*Node) CreateAt

func (node *Node) CreateAt(ctx context.Context, path string, repo Repository) error

CreateAt creates the node at the given path but does NOT restore node metadata.

func (Node) Equals

func (node Node) Equals(other Node) bool

func (Node) GetExtendedAttribute

func (node Node) GetExtendedAttribute(a string) []byte

GetExtendedAttribute gets the extended attribute.

func (Node) MarshalJSON

func (node Node) MarshalJSON() ([]byte, error)

func (Node) RestoreMetadata

func (node Node) RestoreMetadata(path string) error

RestoreMetadata restores node metadata.

func (Node) RestoreTimestamps

func (node Node) RestoreTimestamps(path string) error

func (Node) String

func (node Node) String() string

func (*Node) UnmarshalJSON

func (node *Node) UnmarshalJSON(data []byte) error

type Nodes

type Nodes []*Node

Nodes is a slice of nodes that can be sorted.

func (Nodes) Len

func (n Nodes) Len() int

func (Nodes) Less

func (n Nodes) Less(i, j int) bool

func (Nodes) Swap

func (n Nodes) Swap(i, j int)

type PackedBlob

type PackedBlob struct {
	Blob
	PackID ID
}

PackedBlob is a blob stored within a file.

type Progress

type Progress struct {
	OnStart  func()
	OnUpdate ProgressFunc
	OnDone   ProgressFunc
	// contains filtered or unexported fields
}

Progress reports progress on an operation.

func NewProgress

func NewProgress() *Progress

NewProgress returns a new progress reporter. When Start() is called, the function OnStart is executed once. Afterwards the function OnUpdate is called when new data arrives, or at least once per update interval. The function OnDone is called when Done() is called. Both functions are called synchronously and can use shared state.
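
For illustration, a minimal sketch of wiring up a Progress reporter; the reported numbers are placeholders:

	package main

	import (
		"fmt"
		"time"

		"github.com/restic/restic/internal/restic"
	)

	func main() {
		p := restic.NewProgress()
		p.OnUpdate = func(s restic.Stat, runtime time.Duration, ticker bool) {
			fmt.Printf("%d files, %d bytes after %v\n", s.Files, s.Bytes, runtime)
		}
		p.OnDone = func(s restic.Stat, runtime time.Duration, ticker bool) {
			fmt.Printf("done: %v\n", s)
		}

		p.Start()
		p.Report(restic.Stat{Files: 1, Bytes: 4096})
		p.Done()
	}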

func (*Progress) Done

func (p *Progress) Done()

Done closes the progress report.

func (*Progress) Report

func (p *Progress) Report(s Stat)

Report adds the statistics from s to the current state and tries to report the accumulated statistics via the feedback channel.

func (*Progress) Reset

func (p *Progress) Reset()

Reset resets all statistic counters to zero.

func (*Progress) Start

func (p *Progress) Start()

Start resets and runs the progress reporter.

type ProgressFunc

type ProgressFunc func(s Stat, runtime time.Duration, ticker bool)

ProgressFunc is used to report progress back to the user.

type Repository

type Repository interface {

	// Backend returns the backend used by the repository
	Backend() Backend

	Key() *crypto.Key

	SetIndex(MasterIndex) error

	Index() MasterIndex
	SaveFullIndex(context.Context) error
	SaveIndex(context.Context) error
	LoadIndex(context.Context) error

	Config() Config

	LookupBlobSize(ID, BlobType) (uint, bool)

	// List calls the function fn for each file of type t in the repository.
	// When an error is returned by fn, processing stops and List() returns the
	// error.
	//
	// The function fn is called in the same Goroutine List() was called from.
	List(ctx context.Context, t FileType, fn func(ID, int64) error) error

	// ListPack returns the list of blobs saved in the pack id and the length of
	// the pack header.
	ListPack(context.Context, ID, int64) ([]Blob, uint32, error)

	Flush(context.Context) error

	SaveUnpacked(context.Context, FileType, []byte) (ID, error)
	SaveJSONUnpacked(context.Context, FileType, interface{}) (ID, error)

	LoadJSONUnpacked(ctx context.Context, t FileType, id ID, dest interface{}) error
	// LoadAndDecrypt loads and decrypts the file with the given type and ID,
	// using the supplied buffer (which must be empty). If the buffer is nil, a
	// new buffer will be allocated and returned.
	LoadAndDecrypt(ctx context.Context, buf []byte, t FileType, id ID) (data []byte, err error)

	LoadBlob(context.Context, BlobType, ID, []byte) ([]byte, error)
	SaveBlob(context.Context, BlobType, []byte, ID, bool) (ID, bool, error)

	LoadTree(context.Context, ID) (*Tree, error)
	SaveTree(context.Context, *Tree) (ID, error)
}

Repository stores data in a backend. It provides high-level functions and transparently encrypts/decrypts data.

type RewindReader

type RewindReader interface {
	io.Reader

	// Rewind rewinds the reader so the same data can be read again from the
	// start.
	Rewind() error

	// Length returns the number of bytes that can be read from the Reader
	// after calling Rewind.
	Length() int64

	// Hash return a hash of the data if requested by the backed.
	Hash() []byte
}

RewindReader allows resetting the Reader to the beginning of the data.

type Snapshot

type Snapshot struct {
	Time     time.Time `json:"time"`
	Parent   *ID       `json:"parent,omitempty"`
	Tree     *ID       `json:"tree"`
	Paths    []string  `json:"paths"`
	Hostname string    `json:"hostname,omitempty"`
	Username string    `json:"username,omitempty"`
	UID      uint32    `json:"uid,omitempty"`
	GID      uint32    `json:"gid,omitempty"`
	Excludes []string  `json:"excludes,omitempty"`
	Tags     []string  `json:"tags,omitempty"`
	Original *ID       `json:"original,omitempty"`
	// contains filtered or unexported fields
}

Snapshot is the state of a resource at one point in time.

func LoadSnapshot

func LoadSnapshot(ctx context.Context, repo Repository, id ID) (*Snapshot, error)

LoadSnapshot loads the snapshot with the id and returns it.

func NewSnapshot

func NewSnapshot(paths []string, tags []string, hostname string, time time.Time) (*Snapshot, error)

NewSnapshot returns an initialized snapshot struct for the current user and time.
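
A hedged sketch of creating a snapshot record for an already stored tree and saving it with SaveJSONUnpacked; the helper and hostname are illustrative only:

	package example

	import (
		"context"
		"time"

		"github.com/restic/restic/internal/restic"
	)

	// saveSnapshot stores a new snapshot record pointing at an existing tree.
	func saveSnapshot(ctx context.Context, repo restic.Repository, tree restic.ID, paths, tags []string) (restic.ID, error) {
		sn, err := restic.NewSnapshot(paths, tags, "examplehost", time.Now())
		if err != nil {
			return restic.ID{}, err
		}
		sn.Tree = &tree
		return repo.SaveJSONUnpacked(ctx, restic.SnapshotFile, sn)
	}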

func TestCreateSnapshot

func TestCreateSnapshot(t testing.TB, repo Repository, at time.Time, depth int, duplication float32) *Snapshot

TestCreateSnapshot creates a snapshot filled with fake data. The fake data is generated deterministically from the timestamp `at`, which is also used as the snapshot's timestamp. The tree's depth can be specified with the parameter depth. The parameter duplication is the probability that the same blob will be saved again.

func (*Snapshot) AddTags

func (sn *Snapshot) AddTags(addTags []string) (changed bool)

AddTags adds the given tags to the snapshot's tags, preventing duplicates. It returns true if any changes were made.

func (*Snapshot) HasHostname

func (sn *Snapshot) HasHostname(hostnames []string) bool

HasHostname returns true if either

  • the snapshot hostname is in the list of the given hostnames, or
  • the list of given hostnames is empty

func (*Snapshot) HasPaths

func (sn *Snapshot) HasPaths(paths []string) bool

HasPaths returns true if the snapshot has all of the paths.

func (*Snapshot) HasTagList

func (sn *Snapshot) HasTagList(l []TagList) bool

HasTagList returns true if either

  • the snapshot satisfies at least one TagList, so there is a TagList in l for which all tags are included in sn, or
  • l is empty

func (*Snapshot) HasTags

func (sn *Snapshot) HasTags(l []string) bool

HasTags returns true if the snapshot has all the tags in l.

func (Snapshot) ID

func (sn Snapshot) ID() *ID

ID returns the snapshot's ID.

func (*Snapshot) RemoveTags

func (sn *Snapshot) RemoveTags(removeTags []string) (changed bool)

RemoveTags removes the given tags from the snapshot's tags and returns true if any changes were made.

func (Snapshot) String

func (sn Snapshot) String() string

type SnapshotGroupKey

type SnapshotGroupKey struct {
	Hostname string   `json:"hostname"`
	Paths    []string `json:"paths"`
	Tags     []string `json:"tags"`
}

SnapshotGroupKey is the structure for identifying groups in a grouped snapshot list. This is used by GroupSnapshots().

type Snapshots

type Snapshots []*Snapshot

Snapshots is a list of snapshots.

func FindFilteredSnapshots

func FindFilteredSnapshots(ctx context.Context, repo Repository, hosts []string, tags []TagList, paths []string) (Snapshots, error)

FindFilteredSnapshots yields Snapshots filtered from the list of all snapshots.

func (Snapshots) Len

func (sn Snapshots) Len() int

Len returns the number of snapshots in sn.

func (Snapshots) Less

func (sn Snapshots) Less(i, j int) bool

Less returns true iff the ith snapshot has been made after the jth.

func (Snapshots) Swap

func (sn Snapshots) Swap(i, j int)

Swap exchanges the two snapshots.

type Stat

type Stat struct {
	Files  uint64
	Dirs   uint64
	Bytes  uint64
	Trees  uint64
	Blobs  uint64
	Errors uint64
}

Stat captures newly done parts of the operation.

func (*Stat) Add

func (s *Stat) Add(other Stat)

Add accumulates other into s.

func (Stat) String

func (s Stat) String() string

type TagList

type TagList []string

TagList is a list of tags.

func (*TagList) Set

func (l *TagList) Set(s string) error

Set updates the TagList's value.

func (TagList) String

func (l TagList) String() string

func (TagList) Type

func (TagList) Type() string

Type returns a description of the type.

type TagLists

type TagLists []TagList

TagLists consists of several TagList.

func (TagLists) Flatten

func (l TagLists) Flatten() (tags TagList)

Flatten returns the list of all tags provided in the TagLists.

func (*TagLists) Set

func (l *TagLists) Set(s string) error

Set updates the TagLists' value.

func (TagLists) String

func (l TagLists) String() string

func (TagLists) Type

func (TagLists) Type() string

Type returns a description of the type.

type Tree

type Tree struct {
	Nodes []*Node `json:"nodes"`
}

Tree is an ordered list of nodes.

func NewTree

func NewTree(capacity int) *Tree

NewTree creates a new tree object with the given initial capacity.

func (*Tree) Equals

func (t *Tree) Equals(other *Tree) bool

Equals returns true if t and other have exactly the same nodes.

func (*Tree) Find

func (t *Tree) Find(name string) *Node

Find returns a node with the given name, or nil if none could be found.

func (*Tree) Insert

func (t *Tree) Insert(node *Node) error

Insert adds a new node at the correct place in the tree.
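
For illustration, a small sketch of building a Tree; the node type string "file" is assumed to be the conventional value for regular files:

	package main

	import (
		"fmt"

		"github.com/restic/restic/internal/restic"
	)

	func main() {
		tree := restic.NewTree(2)

		// Insert places each node at the correct position, keeping the tree ordered.
		for _, name := range []string{"b.txt", "a.txt"} {
			node := &restic.Node{Name: name, Type: "file"}
			if err := tree.Insert(node); err != nil {
				panic(err)
			}
		}

		fmt.Println(tree.Find("a.txt") != nil) // expected: true
		fmt.Println(tree)                      // uses Tree.String()
	}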

func (*Tree) Sort

func (t *Tree) Sort()

Sort sorts the nodes by name.

func (*Tree) String

func (t *Tree) String() string

func (*Tree) Subtrees

func (t *Tree) Subtrees() (trees IDs)

Subtrees returns a slice of all subtree IDs of the tree.

type TreeItem

type TreeItem struct {
	ID
	Error error
	*Tree
}

TreeItem is used to return either an error or the tree for a tree ID.

type TreeLoader

type TreeLoader interface {
	LoadTree(context.Context, ID) (*Tree, error)
	LookupBlobSize(id ID, tpe BlobType) (uint, bool)
}

TreeLoader loads a tree from a repository.
