Documentation ¶
Overview ¶
Package api contains types, interfaces and functions shared among commands and codecs.
As such, it provides the common ground between them.
Index ¶
- Constants
- Variables
- func Aggregate(results <-chan Result, done <-chan bool, ...) <-chan Result
- func AppendUniqueString(dest []string, elm string) []string
- func Gather(files <-chan FileInfo, results chan<- Result, stats *Stats, ...)
- func Generate(rctrls io.RootedReadControllers, runner Runner, ...) <-chan Result
- func IndexPath(tree string, extension string) string
- func IndexTrackingResultHandlerAdapter(indices *[]string, handler func(r Result) bool) func(r Result) bool
- func ParseSources(items []string, allowFiles bool) (res []string, err error)
- func StartEngine(runner Runner, aggregateHandler func(Result) bool) (err error)
- type BasicResult
- type BasicRunner
- type FileFilter
- type FileHashMismatch
- type FileInfo
- type FileSizeMismatch
- type HashStatAdapter
- type Importance
- type Result
- type Runner
- type Stats
Constants ¶
const IndexBaseName = "godi"
Variables ¶
var (
	FilterSymlinks = FileFilter{/* contains filtered or unexported fields */}
	FilterHidden   = FileFilter{/* contains filtered or unexported fields */}
	FilterSeals    = FileFilter{/* contains filtered or unexported fields */}
	FilterVolatile = FileFilter{/* contains filtered or unexported fields */}
)
Functions ¶
func Aggregate ¶
func Aggregate(results <-chan Result, done <-chan bool, resultHandler func(Result, chan<- Result) bool, finalizer func(chan<- Result), stats *Stats) <-chan Result
Aggregate is a general-purpose implementation that gathers FileInfo results
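The shape of such an aggregation loop - drain a results channel, hand each item to a handler, and run a finalizer when the input closes or a done signal arrives - can be sketched as follows. All names here are illustrative; this is not the package's actual implementation.

```go
package main

import "fmt"

type result struct {
	msg string
	err error
}

// aggregate drains results, hands each value to handler, and runs finalizer
// when the input channel closes or done is signalled (a simplified sketch).
func aggregate(results <-chan result, done <-chan bool,
	handler func(result, chan<- result) bool,
	finalizer func(chan<- result)) <-chan result {
	out := make(chan result)
	go func() {
		defer close(out)
		for {
			select {
			case r, ok := <-results:
				if !ok {
					finalizer(out)
					return
				}
				handler(r, out)
			case <-done:
				finalizer(out)
				return
			}
		}
	}()
	return out
}

func main() {
	in := make(chan result, 2)
	in <- result{msg: "a"}
	in <- result{msg: "b"}
	close(in)
	out := aggregate(in, make(chan bool),
		func(r result, o chan<- result) bool { o <- r; return true },
		func(o chan<- result) { o <- result{msg: "final"} })
	for r := range out {
		fmt.Println(r.msg)
	}
}
```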
func AppendUniqueString ¶ added in v1.0.0
func AppendUniqueString(dest []string, elm string) []string
Appends elm to dest if it is not yet contained in it
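The documented behaviour is a small linear scan; a minimal sketch (not the package's own code) looks like this:

```go
package main

import "fmt"

// appendUniqueString appends elm only if dest does not already contain it.
func appendUniqueString(dest []string, elm string) []string {
	for _, s := range dest {
		if s == elm {
			return dest
		}
	}
	return append(dest, elm)
}

func main() {
	s := []string{"a", "b"}
	s = appendUniqueString(s, "a") // already present, no change
	s = appendUniqueString(s, "c") // appended
	fmt.Println(s)
}
```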
func Gather ¶
func Gather(files <-chan FileInfo, results chan<- Result, stats *Stats, makeResult func(*FileInfo, *FileInfo, error) Result, rctrl *gio.ReadChannelController, wctrls gio.RootedWriteControllers)
Drains FileInfos from the files channel, reads them using rctrl and generates hashes. Creates a Result using makeResult() and sends it down the results channel. If wctrls is set, we will set up a parallel writer which writes the bytes used for hashing to all controllers at the same time, which will be as slow as the slowest device
func Generate ¶
func Generate(rctrls io.RootedReadControllers, runner Runner, generate func([]string, chan<- FileInfo, chan<- Result)) <-chan Result
Generate does all boilerplate required to be a valid generator. It will produce as many generators as there are devices; each is handed a list of trees to handle
func IndexTrackingResultHandlerAdapter ¶ added in v1.0.0
func IndexTrackingResultHandlerAdapter(indices *[]string, handler func(r Result) bool) func(r Result) bool
Returns a handler which will track seal/index files and call the given handler afterwards, writing the indices into the provided slice
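The adapter pattern this function uses - wrap a handler in a closure that records something before delegating - can be sketched as follows. This is a simplified illustration; the real adapter only records seal/index files.

```go
package main

import "fmt"

type result struct {
	path string
}

// indexTracking wraps handler so every seen path is also recorded in *indices
// before the original handler runs.
func indexTracking(indices *[]string, handler func(r result) bool) func(r result) bool {
	return func(r result) bool {
		*indices = append(*indices, r.path)
		return handler(r)
	}
}

func main() {
	var seen []string
	h := indexTracking(&seen, func(r result) bool { return true })
	h(result{path: "some-index"})
	fmt.Println(seen)
}
```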
func ParseSources ¶ added in v1.0.0
func ParseSources(items []string, allowFiles bool) (res []string, err error)
Parses all valid source items from the given list. They may either be files or directories. The returned list may be shorter, as contained paths are skipped automatically. Paths will be normalized.
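The containment-skipping behaviour - normalize each path, then drop any path that lies inside another entry - can be sketched as below. This sketch omits the filesystem checks (existence, file vs. directory) the real function performs.

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// parseSources normalizes the given paths and drops entries that are
// contained in another entry.
func parseSources(items []string) []string {
	norm := make([]string, 0, len(items))
	for _, it := range items {
		norm = append(norm, filepath.Clean(it))
	}
	var res []string
	for i, p := range norm {
		contained := false
		for j, q := range norm {
			// p is inside q if p starts with q plus a path separator
			if i != j && strings.HasPrefix(p, q+string(filepath.Separator)) {
				contained = true
				break
			}
		}
		if !contained {
			res = append(res, p)
		}
	}
	return res
}

func main() {
	fmt.Println(parseSources([]string{"/a", "/a/b", "/c/"}))
}
```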
func StartEngine ¶
func StartEngine(runner Runner, aggregateHandler func(Result) bool) (err error)
Runner.Init must have been called beforehand, as we don't know the values here. The handlers receive a result of the respective stage and may perform whichever operation they need. Returns the last error received in either the generator or the aggregation stage.
Types ¶
type BasicResult ¶
type BasicResult struct {
	Finfo FileInfo
	Msg   string
	Err   error
	Prio  Importance
}
Implements information about any operation. It's the minimum we need to work
func (*BasicResult) Error ¶
func (s *BasicResult) Error() error
func (*BasicResult) FileInformation ¶
func (s *BasicResult) FileInformation() *FileInfo
func (*BasicResult) Info ¶
func (s *BasicResult) Info() (string, Importance)
type BasicRunner ¶ added in v1.0.0
type BasicRunner struct {
	// Items we work on
	Items []string
	// A map of readers which maps from a root to the reader to use to read files that share the same root
	RootedReaders io.RootedReadControllers
	// A channel to let everyone know we should finish as soon as possible - this is done by closing the channel
	Done chan bool
	// Our statistics instance
	Stats Stats
	// The maximum log-level. We just keep this value here because the cli makes a difference between CHECK and RUN!
	// This member shouldn't be needed, as logging is not done by the runner anyway - it's all done by result handlers.
	// Only they are concerned, which is a function of the CLI entirely.
	// TODO(st): Fork codegangsta/cli and make the fix, use the fork from that point on.
	Level   Importance
	Filters []FileFilter
}
A partial implementation of a runner, which can be shared between the various commands
func (*BasicRunner) CancelChannel ¶ added in v1.0.0
func (b *BasicRunner) CancelChannel() chan bool
func (*BasicRunner) InitBasicRunner ¶ added in v1.0.0
func (b *BasicRunner) InitBasicRunner(numReaders int, items []string, maxLogLevel Importance, filters []FileFilter)
Initializes our readers and items with the given information, including our cancel channel
func (*BasicRunner) LogLevel ¶ added in v1.0.0
func (b *BasicRunner) LogLevel() Importance
func (*BasicRunner) NumChannels ¶ added in v1.0.0
func (b *BasicRunner) NumChannels() int
func (*BasicRunner) Statistics ¶ added in v1.0.0
func (b *BasicRunner) Statistics() *Stats
type FileFilter ¶ added in v1.0.0
type FileFilter struct {
// contains filtered or unexported fields
}
A utility to encapsulate a file-filter. These exist in special modes to filter entire classes of files, and as fnmatch-compatible strings
func FileFilterFromString ¶ added in v1.0.0
func FileFilterFromString(name string) (FileFilter, error)
Returns a new FileFilter matching the given string. Every string which is not a special kind of filter will be interpreted as an fnmatch filter. err is returned if the glob is invalid
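The fnmatch-style matching described above is available in Go's standard library as path.Match, which is one way such a filter can be built. This is only a sketch; the real FileFilter also supports the special modes (symlinks, hidden files, seals, volatile files) listed under Variables.

```go
package main

import (
	"fmt"
	"path"
)

// matchFilter reports whether name matches the given glob pattern.
// path.Match returns ErrBadPattern for an invalid glob, mirroring the
// documented error behaviour.
func matchFilter(pattern, name string) (bool, error) {
	return path.Match(pattern, name)
}

func main() {
	ok, err := matchFilter("*.mov", "clip.mov")
	fmt.Println(ok, err)
}
```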
func (*FileFilter) Matches ¶ added in v1.0.0
func (f *FileFilter) Matches(name string, mode os.FileMode) bool
func (FileFilter) String ¶ added in v1.0.0
func (f FileFilter) String() string
type FileHashMismatch ¶ added in v1.0.0
type FileHashMismatch struct {
Path string
}
Returned if a file hash didn't match - it's used primarily in the verify implementation
func (*FileHashMismatch) Error ¶ added in v1.0.0
func (f *FileHashMismatch) Error() string
type FileInfo ¶
type FileInfo struct {
	// Path of the file to handle
	Path string
	// Path relative to the directory it was found in
	RelaPath string
	// Provides information about the type of the file
	Mode os.FileMode
	// Size of the file
	Size int64
	// Hashes of the file
	Sha1 []byte
	MD5  []byte
}
A struct holding information about a single file, including its path, mode, size and hashes
type FileSizeMismatch ¶ added in v1.0.0
Returned if the file size we read didn't match the file size we were supposed to read
func (*FileSizeMismatch) Error ¶ added in v1.0.0
func (f *FileSizeMismatch) Error() string
type HashStatAdapter ¶ added in v1.0.0
type HashStatAdapter struct {
// contains filtered or unexported fields
}
Intercepts Write calls and updates the stats accordingly. Implements only what we need, forwarding the calls as needed
func (*HashStatAdapter) Reset ¶ added in v1.0.0
func (h *HashStatAdapter) Reset()
func (*HashStatAdapter) Sum ¶ added in v1.0.0
func (h *HashStatAdapter) Sum(b []byte) []byte
type Importance ¶ added in v1.0.0
type Importance uint8
const (
	Progress Importance = iota
	Info
	Warn
	Error
	Valuable
	LogDisabled
)
func ImportanceFromString ¶ added in v1.0.0
func ImportanceFromString(p string) (Importance, error)
Parses a priority from the given string. error will be set if this fails
func (Importance) MayLog ¶ added in v1.0.0
func (p Importance) MayLog(op Importance) bool
MayLog returns true if the given priority may be logged as seen from our log-level. Results may always be logged
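The level comparison behind MayLog amounts to an ordered check against the constants above. A minimal sketch, assuming the simple "at least as important as the configured level" rule (the real method additionally always allows result messages):

```go
package main

import "fmt"

type importance uint8

const (
	progress importance = iota
	info
	warn
	errLevel
	valuable
	logDisabled
)

// mayLog reports whether a message of importance op may be logged when the
// configured minimum level is level.
func mayLog(level, op importance) bool {
	return op >= level
}

func main() {
	fmt.Println(mayLog(warn, errLevel)) // errors pass a warn-level filter
	fmt.Println(mayLog(warn, info))     // info is filtered out
}
```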
func (Importance) String ¶ added in v1.0.0
func (p Importance) String() string
type Result ¶
type Result interface {
	// Info returns a string indicating the result, which can also state an error.
	// The priority shows the kind of result message, allowing you to filter them effectively
	Info() (string, Importance)
	// Error returns an error instance indicating what exactly went wrong
	Error() error
	// FileInformation returns the FileInfo we represent
	FileInformation() *FileInfo
}
type Runner ¶
type Runner interface {
	// Initialize required members to deal with controlled reading and writing. numReaders and numWriters
	// can be assumed to be valid.
	// Sets the items we are supposed to be working on - these must be checked by the implementation, as they are
	// very generic in nature
	Init(numReaders, numWriters int, items []string, maxLogLevel Importance, filters []FileFilter) error

	// Return the amount of io-channels the runner may be using in parallel per device
	NumChannels() int

	// Return the minimum allowed level for logging
	// TODO(st): get rid of this method!
	LogLevel() Importance

	// Statistics returns the command's shared statistics structure
	Statistics() *Stats

	// CancelChannel returns the channel to close when the operation should stop prematurely.
	// NOTE: Only valid after Init was called; it's an error to call it beforehand
	CancelChannel() chan bool

	// Launches generators, gatherers and an aggregator, setting up their connections to fit.
	// Must close the FileInfo channel when done.
	// Must listen for SIGTERM|SIGINT signals and abort if received.
	// May report errors or information about the progress through generateResult, which must NOT be closed
	// when done. Return nothing if there is nothing to report.
	// Must listen on done and return as soon as possible
	Generate() (aggregationResult <-chan Result)

	// Will be launched as a goroutine to perform whichever operation on the FileInfo received from the input channel.
	// Produces one Result per input FileInfo and returns it in the given results channel.
	// Must listen for SIGTERM|SIGINT signals.
	// Use the wait group to mark when done, which is when the results channel needs to be closed.
	// Must listen on done and return as soon as possible
	Gather(rctrl *io.ReadChannelController, files <-chan FileInfo, results chan<- Result)

	// Aggregate the results channel and produce whatever you have to produce from the result of the Gather steps.
	// When you are done, place a single Result instance into accumResult and close the channel.
	// You must listen on done to know if the operation was aborted prematurely. This information should be useful
	// for your result.
	Aggregate(results <-chan Result) <-chan Result
}
An interface to help implement types which read one or more data streams and run an operation on them, whose result is aggregated and provided to the caller.
type Stats ¶ added in v1.0.0
type Stats struct {
	io.Stats
	BytesHashed uint64 // Total bytes hashed so far, counting all active hashers
	NumHashers  uint32 // Amount of hashers running in parallel

	// GENERATOR INFORMATION
	NumSkippedFiles uint32 // Amount of files we skipped right away
	StopTheEngines  uint32 // Amount of gather procs which had write errors on all destinations

	// AGGREGATION
	// The aggregation step is single-threaded - no atomic operations needed
	ErrCount       uint // Amount of errors that hit the aggregation step
	NumUndoneFiles uint // Amount of files removed during undo
	WasCancelled   bool // True if the user cancelled
}
A structure to keep information about what is currently going on. It is meant to be used as a shared resource by multiple threads, which is why thread-safe counters are used. Implementations must keep these numbers up-to-date, while async processors digest and present the data in some form
func (*Stats) DeltaString ¶ added in v1.0.0
Prints performance metrics as a single line full of useful information, similar to io.Stats.DeltaString, but may add additional information