api

package
v1.1.0 Latest
Warning

This package is not in the latest version of its module.

Published: Sep 4, 2014 License: LGPL-3.0 Imports: 18 Imported by: 7

Documentation

Overview

Package api contains types, interfaces and functions shared among commands and codecs.

As such, it provides the common ground between them.

Index

Constants

const (
	StatisticalResultInterval  = 125 * time.Millisecond
	StatisticalLoggingInterval = 1 * time.Second
	TimeEpsilon                = 40 * time.Millisecond
)
const IndexBaseName = "godi"

Variables

var (
	FilterSymlinks = FileFilter{/* contains filtered or unexported fields */}
	FilterHidden   = FileFilter{/* contains filtered or unexported fields */}
	FilterSeals    = FileFilter{/* contains filtered or unexported fields */}
	FilterVolatile = FileFilter{/* contains filtered or unexported fields */}
)

Functions

func Aggregate

func Aggregate(results <-chan Result, done <-chan bool,
	resultHandler func(Result, chan<- Result) bool,
	finalizer func(chan<- Result),
	stats *Stats) <-chan Result

Aggregate is a general-purpose implementation to gather FileInfo results.
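The signature suggests a fan-in pattern: drain results, let a handler forward or transform each one, then run a finalizer. A minimal sketch of that shape (types simplified, stats and the handler's bool return omitted — this is an illustrative re-implementation, not the package's code):

```go
package main

import "fmt"

// Result stands in for the package's Result interface (simplified here).
type Result string

// aggregate drains results, lets resultHandler forward each one to the
// output channel, and runs finalizer once the input is exhausted.
func aggregate(results <-chan Result, done <-chan bool,
	resultHandler func(Result, chan<- Result) bool,
	finalizer func(chan<- Result)) <-chan Result {
	out := make(chan Result)
	go func() {
		defer close(out)
		for r := range results {
			select {
			case <-done: // abort as soon as possible when cancelled
				return
			default:
			}
			resultHandler(r, out)
		}
		finalizer(out)
	}()
	return out
}

func main() {
	in := make(chan Result, 2)
	in <- "a"
	in <- "b"
	close(in)
	out := aggregate(in, make(chan bool),
		func(r Result, o chan<- Result) bool { o <- r; return true },
		func(o chan<- Result) { o <- "final" })
	for r := range out {
		fmt.Println(r)
	}
}
```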

func AppendUniqueString added in v1.1.0

func AppendUniqueString(dest []string, elm string) []string

AppendUniqueString appends elm to dest if it is not yet contained in dest.
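The contract is simple enough to sketch in full; this is an illustrative re-implementation of the documented behaviour, not the package source:

```go
package main

import "fmt"

// appendUniqueString mirrors the documented contract of api.AppendUniqueString:
// append elm only if dest doesn't already contain it.
func appendUniqueString(dest []string, elm string) []string {
	for _, s := range dest {
		if s == elm {
			return dest
		}
	}
	return append(dest, elm)
}

func main() {
	s := []string{"a", "b"}
	s = appendUniqueString(s, "b") // already present, unchanged
	s = appendUniqueString(s, "c") // new, appended
	fmt.Println(s) // [a b c]
}
```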

func Gather

func Gather(files <-chan FileInfo, results chan<- Result, stats *Stats,
	makeResult func(*FileInfo, *FileInfo, error) Result,
	rctrl *gio.ReadChannelController,
	wctrls gio.RootedWriteControllers)

Gather drains FileInfos from the files channel, reads them using rctrl, and generates hashes. It creates a Result using makeResult() and sends it down the results channel. If wctrls is set, a parallel writer is set up which writes the bytes used for hashing to all controllers at the same time; this will be as slow as the slowest device.

func Generate

func Generate(rctrls io.RootedReadControllers,
	runner Runner,
	generate func([]string, chan<- FileInfo, chan<- Result)) <-chan Result

Generate does all the boilerplate required to be a valid generator. It will produce as many generators as there are devices; each is handed a list of trees to handle.

func IndexPath added in v1.1.0

func IndexPath(tree string, extension string) string

IndexPath returns a path to an index file with the given extension, residing at tree.

func IndexTrackingResultHandlerAdapter added in v1.1.0

func IndexTrackingResultHandlerAdapter(indices *[]string, handler func(r Result)) func(r Result)

IndexTrackingResultHandlerAdapter returns a handler which tracks seal/index files, writing their paths into the provided slice, and calls the given handler afterwards.

func ParseSources added in v1.1.0

func ParseSources(items []string, allowFiles bool) (res []string, err error)

ParseSources parses all valid source items from the given list. They may be either files or directories. The returned list may be shorter, as contained paths are skipped automatically. Paths will be normalized.
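The "contained paths are skipped" behaviour can be sketched on its own; this hypothetical helper only shows normalization and containment pruning, omitting the existence checks and the allowFiles switch:

```go
package main

import (
	"fmt"
	"path/filepath"
	"sort"
	"strings"
)

// skipContained sketches one documented behaviour of api.ParseSources:
// paths contained in another listed path are dropped, and paths are
// normalized first.
func skipContained(items []string) []string {
	cleaned := make([]string, 0, len(items))
	for _, p := range items {
		cleaned = append(cleaned, filepath.Clean(p))
	}
	sort.Strings(cleaned) // parents sort before their children
	var res []string
	for _, p := range cleaned {
		if len(res) > 0 && strings.HasPrefix(p, res[len(res)-1]+string(filepath.Separator)) {
			continue // contained in a previously accepted tree
		}
		res = append(res, p)
	}
	return res
}

func main() {
	fmt.Println(skipContained([]string{"/a/b/", "/a", "/c"}))
}
```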

func StartEngine

func StartEngine(runner Runner,
	aggregateHandler func(Result)) (err error)

StartEngine launches the given Runner. The Runner's Init must have been called beforehand, as we don't know the required values here. The aggregate handler receives a result of the respective stage and may perform whatever operation fits. StartEngine returns the last error received in either the generator or the aggregation stage.

Types

type BasicResult

type BasicResult struct {
	Finfo FileInfo
	Msg   string
	Err   error
	Prio  Importance
}

BasicResult implements information about any operation. It's the minimum we need to work with.

func (*BasicResult) Error

func (s *BasicResult) Error() error

func (*BasicResult) FileInformation

func (s *BasicResult) FileInformation() *FileInfo

func (*BasicResult) Info

func (s *BasicResult) Info() (string, Importance)
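BasicResult's methods plausibly map straight onto its fields. A self-contained sketch of the pattern (Importance reduced to a few levels, FileInformation omitted for brevity — illustrative only, not the package source):

```go
package main

import (
	"errors"
	"fmt"
)

type Importance uint8

const (
	Info Importance = iota
	Warn
	Error
)

// Result mirrors the relevant part of the package's Result interface.
type Result interface {
	Info() (string, Importance)
	Error() error
}

// BasicResult sketches how the struct can satisfy Result by mapping each
// method straight onto a field.
type BasicResult struct {
	Msg  string
	Err  error
	Prio Importance
}

func (s *BasicResult) Error() error               { return s.Err }
func (s *BasicResult) Info() (string, Importance) { return s.Msg, s.Prio }

func main() {
	var r Result = &BasicResult{Msg: "copied 1 file", Prio: Info, Err: errors.New("boom")}
	msg, prio := r.Info()
	fmt.Println(msg, prio, r.Error())
}
```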

type BasicRunner added in v1.1.0

type BasicRunner struct {
	// Items we work on
	Items []string
	// A map of readers which maps from a root to the reader to use to read files that share the same root
	RootedReaders io.RootedReadControllers
	// A channel to let everyone know we should finish as soon as possible - this is done by closing the channel
	Done chan bool

	// our statistics instance
	Stats Stats

	// The maximum log-level. We just keep this value here because the cli makes a difference between CHECK and RUN!
	// This member shouldn't be needed, as logging is not done by the runner anyway - it's all done by result handlers.
	// Only they are concerned, which is a function of the CLI entirely.
	// TODO(st): Fork codegangsta/cli and make the fix, use the fork from that point on.
	Level   Importance
	Filters []FileFilter
}

BasicRunner is a partial implementation of a Runner which can be shared among the various commands.

func (*BasicRunner) CancelChannel added in v1.1.0

func (b *BasicRunner) CancelChannel() chan bool

func (*BasicRunner) InitBasicRunner added in v1.1.0

func (b *BasicRunner) InitBasicRunner(numReaders int, items []string, maxLogLevel Importance, filters []FileFilter)

InitBasicRunner initializes our readers and items with the given information, including our cancel channel.

func (*BasicRunner) LogLevel added in v1.1.0

func (b *BasicRunner) LogLevel() Importance

func (*BasicRunner) NumChannels added in v1.1.0

func (b *BasicRunner) NumChannels() int

func (*BasicRunner) Statistics added in v1.1.0

func (b *BasicRunner) Statistics() *Stats

type FileFilter added in v1.1.0

type FileFilter struct {
	// contains filtered or unexported fields
}

FileFilter is a utility to encapsulate a file filter. These exist in special modes to filter entire classes of files, and as fnmatch-compatible strings.

func ParseFileFilter added in v1.1.0

func ParseFileFilter(name string) (FileFilter, error)

ParseFileFilter returns a new FileFilter matching the given string. Every string which is not a special kind of filter will be interpreted as an fnmatch filter. An error is returned if the glob is invalid.
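The glob branch can be sketched with the standard library's `path.Match`, which also reports `ErrBadPattern` for invalid globs, mirroring the documented error behaviour. The special-mode filters (hidden, symlinks, seals, volatile) are omitted; this is an illustration, not the package's parser:

```go
package main

import (
	"fmt"
	"path"
)

// globFilter sketches the fnmatch branch of api.ParseFileFilter: any string
// that isn't a special filter name is treated as a glob pattern.
type globFilter struct{ pattern string }

func parseGlobFilter(pattern string) (globFilter, error) {
	// Validate the glob up front; path.Match returns ErrBadPattern
	// for malformed patterns.
	if _, err := path.Match(pattern, ""); err != nil {
		return globFilter{}, err
	}
	return globFilter{pattern}, nil
}

func (f globFilter) Matches(name string) bool {
	ok, _ := path.Match(f.pattern, name)
	return ok
}

func main() {
	f, err := parseGlobFilter("*.mov")
	fmt.Println(err, f.Matches("clip.mov"), f.Matches("clip.jpg"))
}
```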

func (*FileFilter) Matches added in v1.1.0

func (f *FileFilter) Matches(name string, mode os.FileMode) bool

func (FileFilter) String added in v1.1.0

func (f FileFilter) String() string

type FileHashMismatch added in v1.1.0

type FileHashMismatch struct {
	Path string
}

FileHashMismatch is returned if a file hash didn't match; it's used primarily in the verify implementation.

func (*FileHashMismatch) Error added in v1.1.0

func (f *FileHashMismatch) Error() string

type FileInfo

type FileInfo struct {

	// path to file to handle
	Path string

	// Path relative to the directory it was found in
	RelaPath string

	// Provides information about the type of the file
	Mode os.FileMode

	// size of file
	Size int64

	// hashes of file
	Sha1 []byte
	MD5  []byte
}

A struct holding information about a file to process, including its path, type, size, and hashes.

func (*FileInfo) Root added in v1.1.0

func (f *FileInfo) Root() string

Root computes the root of this file: the top-level directory used to specify all files to process.

type FileSizeMismatch added in v1.1.0

type FileSizeMismatch struct {
	Path      string
	Want, Got int64
}

FileSizeMismatch is returned if the file size we read didn't match the file size we were supposed to read.

func (*FileSizeMismatch) Error added in v1.1.0

func (f *FileSizeMismatch) Error() string

type HashStatAdapter added in v1.1.0

type HashStatAdapter struct {
	// contains filtered or unexported fields
}

HashStatAdapter intercepts Write calls and updates the stats accordingly. It implements only what we need, forwarding the calls as needed.

func (*HashStatAdapter) Reset added in v1.1.0

func (h *HashStatAdapter) Reset()

func (*HashStatAdapter) Sum added in v1.1.0

func (h *HashStatAdapter) Sum(b []byte) []byte

func (*HashStatAdapter) Write added in v1.1.0

func (h *HashStatAdapter) Write(b []byte) (int, error)

type Importance added in v1.1.0

type Importance uint8
const (
	Info Importance = iota
	Warn
	Error
	PeriodicalStatistics // Made very important to assure it's basically always shown, unless logging is off
	Valuable
	LogDisabled
)

func ParseImportance added in v1.1.0

func ParseImportance(p string) (Importance, error)

ParseImportance parses a priority from the given string. An error is returned if this fails.

func (Importance) MayLog added in v1.1.0

func (p Importance) MayLog(op Importance) bool

MayLog returns true if the given priority may be logged as seen from our log-level. Results may always be logged.
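The ordered Importance constants suggest a simple threshold comparison; a hedged sketch of that idea (the package may special-case some priorities, so treat this as illustrative only):

```go
package main

import "fmt"

type Importance uint8

const (
	Info Importance = iota
	Warn
	Error
	PeriodicalStatistics
	Valuable
	LogDisabled
)

// mayLog sketches the documented semantics: an operation's priority op is
// logged if it is at least as important as the configured level p.
func mayLog(p, op Importance) bool {
	if p == LogDisabled {
		return false
	}
	return op >= p
}

func main() {
	fmt.Println(mayLog(Warn, Info))  // false: below the configured level
	fmt.Println(mayLog(Warn, Error)) // true
}
```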

func (Importance) String added in v1.1.0

func (p Importance) String() string

type Result

type Result interface {
	// Return a string indicating the result, which can also state an error.
	// The priority shows the kind of result message, allowing you to filter them effectively.
	Info() (string, Importance)

	// Return an error instance indicating what exactly went wrong
	Error() error

	// Return the FileInformation we represent
	FileInformation() *FileInfo
}

type Runner

type Runner interface {

	// Initialize required members to deal with controlled reading and writing. numReaders and numWriters
	// can be assumed to be valid
	// Sets the items we are supposed to be working on - must be checked by implementation, as they are
	// very generic in nature
	Init(numReaders, numWriters int, items []string, maxLogLevel Importance, filters []FileFilter) error

	// Return the number of io-channels the runner may be using in parallel per device
	NumChannels() int

	// Return the minimum allowed level for logging
	// TODO(st): get rid of this method !
	LogLevel() Importance

	// Statistics returns the commands shared statistics structure
	Statistics() *Stats

	// CancelChannel returns the channel to close when the operation should stop prematurely
	// NOTE: Only valid after Init was called, and it's an error to call it beforehand
	CancelChannel() chan bool

	// Launches generators, gatherers and an aggregator, setting up their connections to fit.
	// Must close FileInfo channel when done
	// Must listen for SIGTERM|SIGINT signals and abort if received
	// May report errors or information about the progress through generateResult, which must NOT be closed when done. Return nothing
	// if there is nothing to report
	// Must listen on done and return asap
	Generate() (aggregationResult <-chan Result)

	// Will be launched as go routine and perform whichever operation on the FileInfo received from input channel
	// Produces one result per input FileInfo and returns it in the given results channel
	// Must listen for SIGTERM|SIGINT signals
	// Use the wait group to mark when done, which is when the results channel needs to be closed.
	// Must listen on done and return asap
	Gather(rctrl *io.ReadChannelController, files <-chan FileInfo, results chan<- Result)

	// Aggregate the result channel and produce whatever you have to produce from the result of the Gather steps
	// When you are done, place a single result instance into accumResult and close the channel
	// You must listen on done to know if the operation was aborted prematurely. This information should be useful
	// for your result.
	Aggregate(results <-chan Result) <-chan Result
}

Runner is an interface to help implement types which read one or more data streams and run an operation on them whose result is aggregated and provided to the caller.

type StatisticsFilter added in v1.1.0

type StatisticsFilter struct {
	LastResultShownAt    time.Time     // time at which you have used a result, whichever prio
	FirstStatisticsAfter time.Duration // time after which the first message will show
}

StatisticsFilter is a utility type to determine whether a statistical result should be shown. Assign the last time you used any result to this instance.

func (*StatisticsFilter) OK added in v1.1.0

func (s *StatisticsFilter) OK(prio Importance) bool

OK returns true if we can use the statistical information. You have to check whether your result message has the right prio.

type Stats added in v1.1.0

type Stats struct {
	io.Stats

	BytesHashed uint64 // Total of bytes hashed so far, counting all active hashers
	NumHashers  uint32 // Amount of hashers running in parallel

	//GENERATOR INFORMATION
	NumSkippedFiles uint32 // Amount of files we skipped right away
	StopTheEngines  uint32 // Amount of gather procs which had write errors on all destinations

	// AGGREGATION
	// Aggregation step is single-threaded - no atomic operation needed
	ErrCount       uint // Amount of errors that hit the aggregation step
	NumUndoneFiles uint // Amount of files removed during undo
	WasCancelled   bool // is true if the user cancelled

}

A structure to keep information about what is currently going on. It is meant to be used as a shared resource by multiple threads, which is why thread-safe counters are used. Implementations must keep these numbers up to date, while async processors digest and present the data in some form.

func (*Stats) CopyTo added in v1.1.0

func (s *Stats) CopyTo(d *Stats)

CopyTo is similar to io.Stats.CopyTo(), but includes our fields.

func (*Stats) DeltaString added in v1.1.0

func (s *Stats) DeltaString(d *Stats, td time.Duration, sep string) string

DeltaString prints performance metrics as a single line full of useful information, similar to io.Stats.DeltaString, but may add additional information.

func (*Stats) String added in v1.1.0

func (s *Stats) String() (out string)

String generates a string with general information
