Documentation ¶
Overview ¶
Package floc makes it easy to orchestrate goroutines. The goal of the project is to make running goroutines in parallel, and synchronizing them, simple.
Floc pursues the following objectives:
-- Split the overall work into a number of small jobs. Floc cannot force you to do that, but doing so grants many advantages, from simpler testing to better control over execution.
-- Make the end algorithms clearer and simpler by expressing them as combinations of jobs. In short, floc lets you express jobs through jobs.
-- Provide better control over execution with one entry point and one exit point. That is achieved by allowing any job to finish execution with Cancel or Complete.
-- Make parallelism and synchronization of jobs simple.
-- Add as little overhead as possible compared to the direct use of goroutines and sync primitives.
The package organizes the middleware used for flow building into subpackages.
-- `guard` contains middleware which helps protect the flow from panics and unpredictable behavior.
-- `pred` contains basic predicates for AND, OR, and NOT logic.
-- `run` provides middleware for designing the flow, i.e. for running jobs sequentially, in parallel, in background, and so on.
Here is a quick example of what the package is capable of.

// The job computes something complex and writes the results
// in background.
job := run.Sequence(
	run.Background(WriteToDisk),
	run.While(pred.Not(TestComputed),
		run.Sequence(
			run.Parallel(
				ComputeSomething,
				ComputeSomethingElse,
				guard.Panic(ComputeDangerousThing),
			),
			run.Parallel(
				PrepareForWrite,
				UpdateComputedFlag,
			),
		),
	),
	CompleteWithSuccess,
)

// The entry point: produce the result.
floc.Run(flow, state, update, job)

// The exit point: consume the result.
result, data := flow.Result()
Index ¶
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func NewFlowWithDisable ¶
func NewFlowWithDisable(parent Flow) (Flow, DisableFunc)
NewFlowWithDisable creates a new instance of the flow, containing the parent flow, and a disable function which disables further calls to Complete and Cancel.
func NewFlowWithResume ¶
func NewFlowWithResume(parent Flow) (Flow, ResumeFunc)
NewFlowWithResume creates a new instance of the flow, containing the parent flow, and a resume function which allows resuming execution of the flow.
Types ¶
type DisableFunc ¶
type DisableFunc func()
DisableFunc, when invoked, disables calls to Complete and Cancel.
type Flow ¶
type Flow interface {
	Releaser

	// Done returns a channel that's closed when the flow is done.
	// Successive calls to Done return the same value.
	Done() <-chan struct{}

	// Complete finishes the flow with success status and stops
	// execution of further jobs if any.
	Complete(data interface{})

	// Cancel cancels the execution of the flow.
	Cancel(data interface{})

	// IsFinished tests if execution of the flow is either completed or canceled.
	IsFinished() bool

	// Result returns the result code and the result data of the flow. The call
	// to the function is effective only if the flow is finished.
	Result() (result Result, data interface{})
}
Flow provides the control over execution of the flow.
type Job ¶
Job is the prototype of a function which does some piece of the overall work. Through the parameters it receives, the implementation can control the execution of the flow and read/write the state, either directly or with the update function.
type Releaser ¶
type Releaser interface {
// Release should be called once when the object is not needed anymore.
Release()
}
Releaser is responsible for releasing underlying resources.
type Result ¶
type Result int32
Result is the result of flow execution.
func (Result) Int32 ¶
Int32 returns the underlying value as int32. That is handy while working with atomic operations.
func (Result) IsCanceled ¶
IsCanceled tests if the result is Canceled.
func (Result) IsCompleted ¶
IsCompleted tests if the result is Completed.
func (Result) IsFinished ¶
IsFinished tests if the result is either Completed or Canceled.
type ResultSet ¶
type ResultSet struct {
// contains filtered or unexported fields
}
ResultSet is the set of possible results. It is a simple implementation of a set, with no check for duplicate values, covering only the basic needs of floc.
func NewResultSet ¶
NewResultSet constructs the set from the given results. The function validates all result values first and panics on any invalid result.
type ResumeFunc ¶
type ResumeFunc func() Flow
ResumeFunc, when invoked, resumes the execution of the flow. It is effective only if the flow was Canceled or Completed. The function returns the parent Flow.
type State ¶
type State interface {
	Releaser

	// Returns the contained data.
	Data() (data interface{})

	// Returns the contained data with read-only locker.
	DataWithReadLocker() (data interface{}, readLocker sync.Locker)

	// Returns the contained data with read/write locker.
	DataWithWriteLocker() (data interface{}, writeLocker sync.Locker)

	// Returns the contained data with read-only and read/write lockers.
	DataWithReadAndWriteLockers() (data interface{}, readLocker, writeLocker sync.Locker)
}
State is the container of data shared amongst jobs. Depending on the implementation, the data may or may not be thread-safe.
The state is aware that the contained data may implement the Releaser interface. So if the contained data implements Releaser, a call to state.Release() is propagated to data.Release() as well.
type Data struct{}

func (Data) Release() {
	fmt.Println("Data released")
}

state := floc.NewState(Data{})
state.Release()

// Output: Data released
func NewState ¶
func NewState(data interface{}) State
NewState creates a new instance of the state container, which can hold arbitrary data. The data can be of a primitive type, a complex structure, or even an interface or a function. What the state should contain depends on the task.
type Events struct {
	HeaderReady bool
	BodyReady   bool
	DataReady   bool
}

state := floc.NewState(new(Events))
The container can hold a nil value as well if no contained data is required.
state := floc.NewState(nil)
Source Files ¶
Directories ¶
Path | Synopsis
---|---
guard | Package guard contains jobs which help protect execution of the flow from crashing or from unpredictable behavior.
pred | Package pred provides predicates for basic logic.
run | Package run is the collection of jobs which make up the architecture of the flow.