package scheduler

v0.0.0-...-ae3a0a2
Published: Dec 14, 2022 License: MPL-2.0 Imports: 29 Imported by: 0

Documentation

Index

Constants

const (
	AnnotationForcesCreate            = "forces create"
	AnnotationForcesDestroy           = "forces destroy"
	AnnotationForcesInplaceUpdate     = "forces in-place update"
	AnnotationForcesDestructiveUpdate = "forces create/destroy update"
)

const (
	UpdateTypeIgnore            = "ignore"
	UpdateTypeCreate            = "create"
	UpdateTypeDestroy           = "destroy"
	UpdateTypeMigrate           = "migrate"
	UpdateTypeCanary            = "canary"
	UpdateTypeInplaceUpdate     = "in-place update"
	UpdateTypeDestructiveUpdate = "create/destroy update"
)

UpdateTypes denote the type of update to occur against the task group.

const (
	FilterConstraintHostVolumes                    = "missing compatible host volumes"
	FilterConstraintCSIPluginTemplate              = "CSI plugin %s is missing from client %s"
	FilterConstraintCSIPluginUnhealthyTemplate     = "CSI plugin %s is unhealthy on client %s"
	FilterConstraintCSIPluginMaxVolumesTemplate    = "CSI plugin %s has the maximum number of volumes on client %s"
	FilterConstraintCSIVolumesLookupFailed         = "CSI volume lookup failed"
	FilterConstraintCSIVolumeNotFoundTemplate      = "missing CSI Volume %s"
	FilterConstraintCSIVolumeNoReadTemplate        = "CSI volume %s is unschedulable or has exhausted its available reader claims"
	FilterConstraintCSIVolumeNoWriteTemplate       = "CSI volume %s is unschedulable or is read-only"
	FilterConstraintCSIVolumeInUseTemplate         = "CSI volume %s has exhausted its available writer claims"
	FilterConstraintCSIVolumeGCdAllocationTemplate = "" /* 141-byte string literal not displayed */
	FilterConstraintDrivers                        = "missing drivers"
	FilterConstraintDevices                        = "missing devices"
	FilterConstraintsCSIPluginTopology             = "did not meet topology requirement"
)

const (
	// SchedulerVersion is the version of the scheduler. Changes to the
	// scheduler that are incompatible with prior schedulers will increment this
	// version. It is used to disallow dequeueing when the versions do not match
	// across the leader and the dequeueing scheduler.
	SchedulerVersion uint16 = 1
)

Variables

var BuiltinSchedulers = map[string]Factory{
	"service":  NewServiceScheduler,
	"batch":    NewBatchScheduler,
	"system":   NewSystemScheduler,
	"sysbatch": NewSysBatchScheduler,
}

BuiltinSchedulers contains the registered built-in schedulers that are available.
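
A minimal sketch of looking up and instantiating one of the built-in schedulers by job type. The hclog import path matches Nomad's log.Logger usage, and the state and planner values are assumed to be supplied by the caller; NewScheduler (below) wraps the same lookup with error handling.

import (
	"fmt"

	log "github.com/hashicorp/go-hclog"
	"github.com/hashicorp/nomad/scheduler"
)

// newBuiltinScheduler resolves a Factory from BuiltinSchedulers and invokes it.
func newBuiltinScheduler(jobType string, logger log.Logger, eventsCh chan<- interface{},
	state scheduler.State, planner scheduler.Planner) (scheduler.Scheduler, error) {
	factory, ok := scheduler.BuiltinSchedulers[jobType]
	if !ok {
		return nil, fmt.Errorf("no built-in scheduler for job type %q", jobType)
	}
	return factory(logger, eventsCh, state, planner), nil
}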

Functions

func Annotate

func Annotate(diff *structs.JobDiff, annotations *structs.PlanAnnotations) error

Annotate takes the diff between the old and new version of a Job along with the scheduler's plan annotations, and adds annotations to the diff to aid human understanding of the plan. A brief usage sketch follows the list below.

Currently, the following are annotated:

Task group changes are annotated with:

  • count up and count down changes
  • update counts (creates, destroys, migrates, etc.)

Task changes are annotated with:

  • forces create/destroy update
  • forces in-place update
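
A hedged sketch of calling Annotate; the diff and annotations inputs are hypothetical and would come from diffing two job versions and from the scheduler's plan, respectively.

import (
	"github.com/hashicorp/nomad/nomad/structs"
	"github.com/hashicorp/nomad/scheduler"
)

// annotateDiff decorates a job diff in place with the scheduler's annotations.
func annotateDiff(jobDiff *structs.JobDiff, ann *structs.PlanAnnotations) error {
	// After this call, the task group and task diffs carry the annotations
	// listed above (count changes, update counts, forced update kinds).
	return scheduler.Annotate(jobDiff, ann)
}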

func NewAllocReconciler

func NewAllocReconciler(logger log.Logger, allocUpdateFn allocUpdateType, batch bool,
	jobID string, job *structs.Job, deployment *structs.Deployment,
	existingAllocs []*structs.Allocation, taintedNodes map[string]*structs.Node, evalID string,
	evalPriority int, supportsDisconnectedClients bool) *allocReconciler

NewAllocReconciler creates a new reconciler that should be used to determine the changes required to bring the cluster state in line with the declared jobspec.

func NewPropertySet

func NewPropertySet(ctx Context, job *structs.Job) *propertySet

NewPropertySet returns a new property set used to guarantee unique property values for new allocation placements.

Types

type BasePreemptionResource

type BasePreemptionResource struct {
	// contains filtered or unexported fields
}

BasePreemptionResource implements PreemptionResource for CPU/Memory/Disk

func (*BasePreemptionResource) Distance

func (b *BasePreemptionResource) Distance() float64

func (*BasePreemptionResource) MeetsRequirements

func (b *BasePreemptionResource) MeetsRequirements() bool

type BinPackIterator

type BinPackIterator struct {
	// contains filtered or unexported fields
}

BinPackIterator is a RankIterator that scores potential options based on a bin-packing algorithm.

func NewBinPackIterator

func NewBinPackIterator(ctx Context, source RankIterator, evict bool, priority int, schedConfig *structs.SchedulerConfiguration) *BinPackIterator

NewBinPackIterator returns a BinPackIterator which tries to fit tasks potentially evicting other tasks based on a given priority.

func (*BinPackIterator) Next

func (iter *BinPackIterator) Next() *RankedNode

func (*BinPackIterator) Reset

func (iter *BinPackIterator) Reset()

func (*BinPackIterator) SetJob

func (iter *BinPackIterator) SetJob(job *structs.Job)

func (*BinPackIterator) SetTaskGroup

func (iter *BinPackIterator) SetTaskGroup(taskGroup *structs.TaskGroup)

type CSIVolumeChecker

type CSIVolumeChecker struct {
	// contains filtered or unexported fields
}

func NewCSIVolumeChecker

func NewCSIVolumeChecker(ctx Context) *CSIVolumeChecker

func (*CSIVolumeChecker) Feasible

func (c *CSIVolumeChecker) Feasible(n *structs.Node) bool

func (*CSIVolumeChecker) SetJobID

func (c *CSIVolumeChecker) SetJobID(jobID string)

func (*CSIVolumeChecker) SetNamespace

func (c *CSIVolumeChecker) SetNamespace(namespace string)

func (*CSIVolumeChecker) SetVolumes

func (c *CSIVolumeChecker) SetVolumes(allocName string, volumes map[string]*structs.VolumeRequest)

type ComputedClassFeasibility

type ComputedClassFeasibility byte
const (
	// EvalComputedClassUnknown is the initial state until the eligibility has
	// been explicitly marked to eligible/ineligible or escaped.
	EvalComputedClassUnknown ComputedClassFeasibility = iota

	// EvalComputedClassIneligible is used to mark the computed class as
	// ineligible for the evaluation.
	EvalComputedClassIneligible

	// EvalComputedClassEligible is used to mark the computed class as
	// eligible for the evaluation.
	EvalComputedClassEligible

	// EvalComputedClassEscaped signals that the computed class cannot
	// determine eligibility because a constraint exists that is not captured
	// by computed node classes.
	EvalComputedClassEscaped
)

type ConstraintChecker

type ConstraintChecker struct {
	// contains filtered or unexported fields
}

ConstraintChecker is a FeasibilityChecker which returns nodes that match a given set of constraints. This is used to filter on job, task group, and task constraints.

func NewConstraintChecker

func NewConstraintChecker(ctx Context, constraints []*structs.Constraint) *ConstraintChecker

NewConstraintChecker creates a ConstraintChecker for a set of constraints
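
A sketch of checking a single node against a set of constraints; the ctx value is assumed to be an EvalContext (documented below) and the inputs are illustrative.

import (
	"github.com/hashicorp/nomad/nomad/structs"
	"github.com/hashicorp/nomad/scheduler"
)

// nodeMeetsConstraints reports whether the option node satisfies all of the
// given constraints.
func nodeMeetsConstraints(ctx scheduler.Context, option *structs.Node,
	constraints []*structs.Constraint) bool {
	checker := scheduler.NewConstraintChecker(ctx, constraints)
	return checker.Feasible(option)
}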

func (*ConstraintChecker) Feasible

func (c *ConstraintChecker) Feasible(option *structs.Node) bool

func (*ConstraintChecker) SetConstraints

func (c *ConstraintChecker) SetConstraints(constraints []*structs.Constraint)

type Context

type Context interface {
	// State is used to inspect the current global state
	State() State

	// Plan returns the current plan
	Plan() *structs.Plan

	// Logger provides a way to log
	Logger() log.Logger

	// Metrics returns the current metrics
	Metrics() *structs.AllocMetric

	// Reset is invoked after making a placement
	Reset()

	// ProposedAllocs returns the proposed allocations for a node which are
	// the existing allocations, removing evictions, and adding any planned
	// placements.
	ProposedAllocs(nodeID string) ([]*structs.Allocation, error)

	// RegexpCache is a cache of regular expressions
	RegexpCache() map[string]*regexp.Regexp

	// VersionConstraintCache is a cache of version constraints
	VersionConstraintCache() map[string]VerConstraints

	// SemverConstraintCache is a cache of semver constraints
	SemverConstraintCache() map[string]VerConstraints

	// Eligibility returns a tracker for node eligibility in the context of the
	// eval.
	Eligibility() *EvalEligibility

	// SendEvent provides best-effort delivery of scheduling and placement
	// events.
	SendEvent(event interface{})
}

Context is used to track contextual information used for placement

type ContextualIterator

type ContextualIterator interface {
	SetJob(*structs.Job)
	SetTaskGroup(*structs.TaskGroup)
}

ContextualIterator is an iterator that can have the job and task group set on it.

type DeviceChecker

type DeviceChecker struct {
	// contains filtered or unexported fields
}

DeviceChecker is a FeasibilityChecker which returns whether a node has the devices necessary to schedule a task group.

func NewDeviceChecker

func NewDeviceChecker(ctx Context) *DeviceChecker

NewDeviceChecker creates a DeviceChecker

func (*DeviceChecker) Feasible

func (c *DeviceChecker) Feasible(option *structs.Node) bool

func (*DeviceChecker) SetTaskGroup

func (c *DeviceChecker) SetTaskGroup(tg *structs.TaskGroup)

type DistinctHostsIterator

type DistinctHostsIterator struct {
	// contains filtered or unexported fields
}

DistinctHostsIterator is a FeasibleIterator which returns nodes that pass the distinct_hosts constraint. The constraint ensures that multiple allocations do not exist on the same node.

func NewDistinctHostsIterator

func NewDistinctHostsIterator(ctx Context, source FeasibleIterator) *DistinctHostsIterator

NewDistinctHostsIterator creates a DistinctHostsIterator from a source.

func (*DistinctHostsIterator) Next

func (iter *DistinctHostsIterator) Next() *structs.Node

func (*DistinctHostsIterator) Reset

func (iter *DistinctHostsIterator) Reset()

func (*DistinctHostsIterator) SetJob

func (iter *DistinctHostsIterator) SetJob(job *structs.Job)

func (*DistinctHostsIterator) SetTaskGroup

func (iter *DistinctHostsIterator) SetTaskGroup(tg *structs.TaskGroup)

type DistinctPropertyIterator

type DistinctPropertyIterator struct {
	// contains filtered or unexported fields
}

DistinctPropertyIterator is a FeasibleIterator which returns nodes that pass the distinct_property constraint. The constraint ensures that multiple allocations do not use the same value of the given property.

func NewDistinctPropertyIterator

func NewDistinctPropertyIterator(ctx Context, source FeasibleIterator) *DistinctPropertyIterator

NewDistinctPropertyIterator creates a DistinctPropertyIterator from a source.

func (*DistinctPropertyIterator) Next

func (iter *DistinctPropertyIterator) Next() *structs.Node

func (*DistinctPropertyIterator) Reset

func (iter *DistinctPropertyIterator) Reset()

func (*DistinctPropertyIterator) SetJob

func (iter *DistinctPropertyIterator) SetJob(job *structs.Job)

func (*DistinctPropertyIterator) SetTaskGroup

func (iter *DistinctPropertyIterator) SetTaskGroup(tg *structs.TaskGroup)

type DriverChecker

type DriverChecker struct {
	// contains filtered or unexported fields
}

DriverChecker is a FeasibilityChecker which returns whether a node has the drivers necessary to schedule a task group.

func NewDriverChecker

func NewDriverChecker(ctx Context, drivers map[string]struct{}) *DriverChecker

NewDriverChecker creates a DriverChecker from a set of drivers

func (*DriverChecker) Feasible

func (c *DriverChecker) Feasible(option *structs.Node) bool

func (*DriverChecker) SetDrivers

func (c *DriverChecker) SetDrivers(d map[string]struct{})

type EvalCache

type EvalCache struct {
	// contains filtered or unexported fields
}

EvalCache is used to cache certain things during an evaluation

func (*EvalCache) RegexpCache

func (e *EvalCache) RegexpCache() map[string]*regexp.Regexp

func (*EvalCache) SemverConstraintCache

func (e *EvalCache) SemverConstraintCache() map[string]VerConstraints

func (*EvalCache) VersionConstraintCache

func (e *EvalCache) VersionConstraintCache() map[string]VerConstraints

type EvalContext

type EvalContext struct {
	EvalCache
	// contains filtered or unexported fields
}

EvalContext is a Context used during an Evaluation

func NewEvalContext

func NewEvalContext(eventsCh chan<- interface{}, s State, p *structs.Plan, log log.Logger) *EvalContext

NewEvalContext constructs a new EvalContext
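
A sketch of constructing an EvalContext whose events channel is drained by a logging goroutine; the buffer size and the snapshot, plan, and logger inputs are assumptions.

import (
	log "github.com/hashicorp/go-hclog"
	"github.com/hashicorp/nomad/nomad/structs"
	"github.com/hashicorp/nomad/scheduler"
)

// newLoggingEvalContext builds an EvalContext and logs any events (for
// example PortCollisionEvent) delivered via SendEvent.
func newLoggingEvalContext(s scheduler.State, p *structs.Plan, logger log.Logger) *scheduler.EvalContext {
	eventsCh := make(chan interface{}, 16) // assumed buffer size
	go func() {
		// Drain events for the life of the process (sketch; no shutdown handling).
		for ev := range eventsCh {
			logger.Debug("scheduler event", "event", ev)
		}
	}()
	return scheduler.NewEvalContext(eventsCh, s, p, logger)
}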

func (*EvalContext) Eligibility

func (e *EvalContext) Eligibility() *EvalEligibility

func (*EvalContext) Logger

func (e *EvalContext) Logger() log.Logger

func (*EvalContext) Metrics

func (e *EvalContext) Metrics() *structs.AllocMetric

func (*EvalContext) Plan

func (e *EvalContext) Plan() *structs.Plan

func (*EvalContext) ProposedAllocs

func (e *EvalContext) ProposedAllocs(nodeID string) ([]*structs.Allocation, error)

func (*EvalContext) Reset

func (e *EvalContext) Reset()

func (*EvalContext) SendEvent

func (e *EvalContext) SendEvent(event interface{})

func (*EvalContext) SetState

func (e *EvalContext) SetState(s State)

func (*EvalContext) State

func (e *EvalContext) State() State

type EvalEligibility

type EvalEligibility struct {
	// contains filtered or unexported fields
}

EvalEligibility tracks eligibility of nodes by computed node class over the course of an evaluation.

func NewEvalEligibility

func NewEvalEligibility() *EvalEligibility

NewEvalEligibility returns an eligibility tracker for the context of an evaluation.

func (*EvalEligibility) GetClasses

func (e *EvalEligibility) GetClasses() map[string]bool

GetClasses returns a mapping of the tracked classes to their eligibility, across the job and task groups.

func (*EvalEligibility) HasEscaped

func (e *EvalEligibility) HasEscaped() bool

HasEscaped returns whether any of the constraints in the passed job have escaped computed node classes.

func (*EvalEligibility) JobStatus

func (e *EvalEligibility) JobStatus(class string) ComputedClassFeasibility

JobStatus returns the eligibility status of the job.

func (*EvalEligibility) QuotaLimitReached

func (e *EvalEligibility) QuotaLimitReached() string

QuotaLimitReached returns the quota name if the quota limit has been reached.

func (*EvalEligibility) SetJob

func (e *EvalEligibility) SetJob(job *structs.Job)

SetJob takes the job being evaluated and calculates the escaped constraints at the job and task group level.

func (*EvalEligibility) SetJobEligibility

func (e *EvalEligibility) SetJobEligibility(eligible bool, class string)

SetJobEligibility sets the eligibility status of the job for the computed node class.

func (*EvalEligibility) SetQuotaLimitReached

func (e *EvalEligibility) SetQuotaLimitReached(quota string)

SetQuotaLimitReached marks that the quota limit has been reached for the given quota

func (*EvalEligibility) SetTaskGroupEligibility

func (e *EvalEligibility) SetTaskGroupEligibility(eligible bool, tg, class string)

SetTaskGroupEligibility sets the eligibility status of the task group for the computed node class.

func (*EvalEligibility) TaskGroupStatus

func (e *EvalEligibility) TaskGroupStatus(tg, class string) ComputedClassFeasibility

TaskGroupStatus returns the eligibility status of the task group.

type Factory

type Factory func(log.Logger, chan<- interface{}, State, Planner) Scheduler

Factory is used to instantiate a new Scheduler

type FeasibilityChecker

type FeasibilityChecker interface {
	Feasible(*structs.Node) bool
}

FeasibilityChecker is used to check if a single node meets feasibility constraints.

type FeasibilityWrapper

type FeasibilityWrapper struct {
	// contains filtered or unexported fields
}

FeasibilityWrapper is a FeasibleIterator that wraps both job and task group FeasibilityCheckers so that feasibility checking can be skipped if the computed node class has previously been marked eligible or ineligible.

func NewFeasibilityWrapper

func NewFeasibilityWrapper(ctx Context, source FeasibleIterator,
	jobCheckers, tgCheckers, tgAvailable []FeasibilityChecker) *FeasibilityWrapper

NewFeasibilityWrapper returns a FeasibleIterator based on the passed source and FeasibilityCheckers.

func (*FeasibilityWrapper) Next

func (w *FeasibilityWrapper) Next() *structs.Node

Next returns an eligible node, only running the FeasibilityCheckers as needed based on the source's computed node class.

func (*FeasibilityWrapper) Reset

func (w *FeasibilityWrapper) Reset()

func (*FeasibilityWrapper) SetTaskGroup

func (w *FeasibilityWrapper) SetTaskGroup(tg string)

type FeasibleIterator

type FeasibleIterator interface {
	// Next yields a feasible node or nil if exhausted
	Next() *structs.Node

	// Reset is invoked when an allocation has been placed
	// to reset any stale state.
	Reset()
}

FeasibleIterator is used to iteratively yield nodes that match feasibility constraints. The iterators may manage some state for performance optimizations.
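
For illustration, a small helper that drains a FeasibleIterator until it is exhausted; this consumption loop is an assumed usage pattern, not an API from the package.

import (
	"github.com/hashicorp/nomad/nomad/structs"
	"github.com/hashicorp/nomad/scheduler"
)

// collectFeasible gathers every node the iterator yields. Next returns nil
// once the source is exhausted, which ends the loop.
func collectFeasible(iter scheduler.FeasibleIterator) []*structs.Node {
	var nodes []*structs.Node
	for option := iter.Next(); option != nil; option = iter.Next() {
		nodes = append(nodes, option)
	}
	return nodes
}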

func NewQuotaIterator

func NewQuotaIterator(_ Context, source FeasibleIterator) FeasibleIterator

type FeasibleRankIterator

type FeasibleRankIterator struct {
	// contains filtered or unexported fields
}

FeasibleRankIterator is used to consume from a FeasibleIterator and return nodes with a base ranking.

func NewFeasibleRankIterator

func NewFeasibleRankIterator(ctx Context, source FeasibleIterator) *FeasibleRankIterator

NewFeasibleRankIterator is used to return a new FeasibleRankIterator from a FeasibleIterator source.

func (*FeasibleRankIterator) Next

func (iter *FeasibleRankIterator) Next() *RankedNode

func (*FeasibleRankIterator) Reset

func (iter *FeasibleRankIterator) Reset()

type GenericScheduler

type GenericScheduler struct {
	// contains filtered or unexported fields
}

GenericScheduler is used for 'service' and 'batch' type jobs. This scheduler is designed for long-lived services, and as such spends more time attempting to make a high quality placement. This is the primary scheduler for most workloads. It also supports a 'batch' mode to optimize for fast decision making at the cost of quality.

func (*GenericScheduler) Process

func (s *GenericScheduler) Process(eval *structs.Evaluation) (err error)

Process is used to handle a single evaluation

type GenericStack

type GenericStack struct {
	// contains filtered or unexported fields
}

GenericStack is the Stack used for the Generic scheduler. It is designed to make better placement decisions at the cost of performance.

func NewGenericStack

func NewGenericStack(batch bool, ctx Context) *GenericStack

NewGenericStack constructs a stack used for selecting service placements
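
A sketch of driving a GenericStack through one placement decision; the ctx, node list, job, and task group are assumed inputs, and a nil result means no feasible node was found.

import (
	"github.com/hashicorp/nomad/nomad/structs"
	"github.com/hashicorp/nomad/scheduler"
)

// selectNode runs a single task group through a service stack.
func selectNode(ctx scheduler.Context, nodes []*structs.Node,
	job *structs.Job, tg *structs.TaskGroup) *scheduler.RankedNode {
	stack := scheduler.NewGenericStack(false, ctx) // false: service mode, not batch
	stack.SetNodes(nodes)
	stack.SetJob(job)
	return stack.Select(tg, &scheduler.SelectOptions{})
}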

func (*GenericStack) Select

func (s *GenericStack) Select(tg *structs.TaskGroup, options *SelectOptions) *RankedNode

func (*GenericStack) SetJob

func (s *GenericStack) SetJob(job *structs.Job)

func (*GenericStack) SetNodes

func (s *GenericStack) SetNodes(baseNodes []*structs.Node)

type Harness

type Harness struct {
	State *state.StateStore

	Planner Planner

	Plans        []*structs.Plan
	Evals        []*structs.Evaluation
	CreateEvals  []*structs.Evaluation
	ReblockEvals []*structs.Evaluation
	// contains filtered or unexported fields
}

Harness is a lightweight testing harness for schedulers. It manages a state store copy and provides the planner interface. It can be extended for various testing uses or for invoking the scheduler without side effects.
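
A skeletal test using the Harness; the node and job setup is elided, and the evaluation field values below are illustrative assumptions rather than required ones.

import (
	"testing"

	"github.com/hashicorp/nomad/nomad/structs"
	"github.com/hashicorp/nomad/scheduler"
)

func TestServicePlacement(t *testing.T) {
	h := scheduler.NewHarness(t)

	// ... upsert nodes and a job into h.State (elided) ...

	eval := &structs.Evaluation{
		ID:          "00000000-0000-0000-0000-000000000001", // hypothetical ID
		Namespace:   structs.DefaultNamespace,
		Priority:    50,
		TriggeredBy: structs.EvalTriggerJobRegister,
		JobID:       "example",
		Type:        structs.JobTypeService,
		Status:      structs.EvalStatusPending,
	}

	if err := h.Process(scheduler.NewServiceScheduler, eval); err != nil {
		t.Fatalf("process failed: %v", err)
	}
	// h.Plans, h.Evals, and h.CreateEvals now hold what the scheduler produced.
}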

func NewHarness

func NewHarness(t testing.TB) *Harness

NewHarness is used to make a new testing harness

func NewHarnessWithState

func NewHarnessWithState(t testing.TB, state *state.StateStore) *Harness

NewHarnessWithState creates a new harness with the given state for testing purposes.

func (*Harness) AssertEvalStatus

func (h *Harness) AssertEvalStatus(t testing.TB, state string)

func (*Harness) CreateEval

func (h *Harness) CreateEval(eval *structs.Evaluation) error

func (*Harness) NextIndex

func (h *Harness) NextIndex() uint64

NextIndex returns the next index

func (*Harness) OptimizePlan

func (h *Harness) OptimizePlan(optimize bool)

OptimizePlan is used only by the Harness to set the optimizePlan field, since the Harness doesn't have access to a Server object.

func (*Harness) Process

func (h *Harness) Process(factory Factory, eval *structs.Evaluation) error

Process is used to process an evaluation given a factory function to create the scheduler

func (*Harness) ReblockEval

func (h *Harness) ReblockEval(eval *structs.Evaluation) error

func (*Harness) Scheduler

func (h *Harness) Scheduler(factory Factory) Scheduler

Scheduler is used to return a new scheduler from a snapshot of current state using the harness for planning.

func (*Harness) ServersMeetMinimumVersion

func (h *Harness) ServersMeetMinimumVersion(_ *version.Version, _ bool) bool

func (*Harness) Snapshot

func (h *Harness) Snapshot() State

Snapshot is used to snapshot the current state

func (*Harness) SubmitPlan

func (h *Harness) SubmitPlan(plan *structs.Plan) (*structs.PlanResult, State, error)

SubmitPlan is used to handle plan submission

func (*Harness) UpdateEval

func (h *Harness) UpdateEval(eval *structs.Evaluation) error

type HostVolumeChecker

type HostVolumeChecker struct {
	// contains filtered or unexported fields
}

HostVolumeChecker is a FeasibilityChecker which returns whether a node has the host volumes necessary to schedule a task group.

func NewHostVolumeChecker

func NewHostVolumeChecker(ctx Context) *HostVolumeChecker

NewHostVolumeChecker creates a HostVolumeChecker from a set of volumes

func (*HostVolumeChecker) Feasible

func (h *HostVolumeChecker) Feasible(candidate *structs.Node) bool

func (*HostVolumeChecker) SetVolumes

func (h *HostVolumeChecker) SetVolumes(volumes map[string]*structs.VolumeRequest)

SetVolumes takes the volumes required by a task group and updates the checker.

type JobAntiAffinityIterator

type JobAntiAffinityIterator struct {
	// contains filtered or unexported fields
}

JobAntiAffinityIterator is used to apply an anti-affinity to allocating alongside other allocations from this job. This is used to help distribute load across the cluster.

func NewJobAntiAffinityIterator

func NewJobAntiAffinityIterator(ctx Context, source RankIterator, jobID string) *JobAntiAffinityIterator

NewJobAntiAffinityIterator is used to create a JobAntiAffinityIterator that applies the given penalty for co-placement with allocs from this job.

func (*JobAntiAffinityIterator) Next

func (iter *JobAntiAffinityIterator) Next() *RankedNode

func (*JobAntiAffinityIterator) Reset

func (iter *JobAntiAffinityIterator) Reset()

func (*JobAntiAffinityIterator) SetJob

func (iter *JobAntiAffinityIterator) SetJob(job *structs.Job)

func (*JobAntiAffinityIterator) SetTaskGroup

func (iter *JobAntiAffinityIterator) SetTaskGroup(tg *structs.TaskGroup)

type LimitIterator

type LimitIterator struct {
	// contains filtered or unexported fields
}

LimitIterator is a RankIterator used to limit the number of options that are returned before we artificially end the stream.

func NewLimitIterator

func NewLimitIterator(ctx Context, source RankIterator, limit int, scoreThreshold float64, maxSkip int) *LimitIterator

NewLimitIterator returns a LimitIterator with a fixed limit of returned options. Up to maxSkip options whose score is below scoreThreshold are skipped if there are additional options available in the source iterator.

func (*LimitIterator) Next

func (iter *LimitIterator) Next() *RankedNode

func (*LimitIterator) Reset

func (iter *LimitIterator) Reset()

func (*LimitIterator) SetLimit

func (iter *LimitIterator) SetLimit(limit int)

type MaxScoreIterator

type MaxScoreIterator struct {
	// contains filtered or unexported fields
}

MaxScoreIterator is a RankIterator used to return only the single result with the highest score. This iterator consumes all of the possible inputs and returns only the highest-ranking result.

func NewMaxScoreIterator

func NewMaxScoreIterator(ctx Context, source RankIterator) *MaxScoreIterator

NewMaxScoreIterator returns a MaxScoreIterator over the given source

func (*MaxScoreIterator) Next

func (iter *MaxScoreIterator) Next() *RankedNode

func (*MaxScoreIterator) Reset

func (iter *MaxScoreIterator) Reset()

type NetworkChecker

type NetworkChecker struct {
	// contains filtered or unexported fields
}

NetworkChecker is a FeasibilityChecker which returns whether a node has the network resources necessary to schedule the task group

func NewNetworkChecker

func NewNetworkChecker(ctx Context) *NetworkChecker

func (*NetworkChecker) Feasible

func (c *NetworkChecker) Feasible(option *structs.Node) bool

func (*NetworkChecker) SetNetwork

func (c *NetworkChecker) SetNetwork(network *structs.NetworkResource)

type NetworkPreemptionResource

type NetworkPreemptionResource struct {
	// contains filtered or unexported fields
}

NetworkPreemptionResource implements PreemptionResource for network assignments. It only looks at the MBits needed.

func (*NetworkPreemptionResource) Distance

func (n *NetworkPreemptionResource) Distance() float64

func (*NetworkPreemptionResource) MeetsRequirements

func (n *NetworkPreemptionResource) MeetsRequirements() bool

type NodeAffinityIterator

type NodeAffinityIterator struct {
	// contains filtered or unexported fields
}

NodeAffinityIterator is used to resolve any affinity rules in the job or task group, and apply a weighted score to nodes if they match.

func NewNodeAffinityIterator

func NewNodeAffinityIterator(ctx Context, source RankIterator) *NodeAffinityIterator

NewNodeAffinityIterator is used to create a NodeAffinityIterator that applies a weighted score according to whether nodes match any affinities in the job or task group.

func (*NodeAffinityIterator) Next

func (iter *NodeAffinityIterator) Next() *RankedNode

func (*NodeAffinityIterator) Reset

func (iter *NodeAffinityIterator) Reset()

func (*NodeAffinityIterator) SetJob

func (iter *NodeAffinityIterator) SetJob(job *structs.Job)

func (*NodeAffinityIterator) SetTaskGroup

func (iter *NodeAffinityIterator) SetTaskGroup(tg *structs.TaskGroup)

type NodeReschedulingPenaltyIterator

type NodeReschedulingPenaltyIterator struct {
	// contains filtered or unexported fields
}

NodeReschedulingPenaltyIterator is used to apply a penalty to a node that had a previous failed allocation for the same job. This is used when attempting to reschedule a failed alloc.

func NewNodeReschedulingPenaltyIterator

func NewNodeReschedulingPenaltyIterator(ctx Context, source RankIterator) *NodeReschedulingPenaltyIterator

NewNodeReschedulingPenaltyIterator is used to create a NodeReschedulingPenaltyIterator that applies a scoring penalty for placement onto the penalty nodes (see SetPenaltyNodes).

func (*NodeReschedulingPenaltyIterator) Next

func (iter *NodeReschedulingPenaltyIterator) Next() *RankedNode

func (*NodeReschedulingPenaltyIterator) Reset

func (iter *NodeReschedulingPenaltyIterator) Reset()

func (*NodeReschedulingPenaltyIterator) SetPenaltyNodes

func (iter *NodeReschedulingPenaltyIterator) SetPenaltyNodes(penaltyNodes map[string]struct{})

type Planner

type Planner interface {
	// SubmitPlan is used to submit a plan for consideration.
	// This will return a PlanResult or an error. It is possible
	// that this will result in a state refresh as well.
	SubmitPlan(*structs.Plan) (*structs.PlanResult, State, error)

	// UpdateEval is used to update an evaluation. This should update
	// a copy of the input evaluation since that should be immutable.
	UpdateEval(*structs.Evaluation) error

	// CreateEval is used to create an evaluation. This should set the
	// PreviousEval to that of the current evaluation.
	CreateEval(*structs.Evaluation) error

	// ReblockEval takes a blocked evaluation and re-inserts it into the blocked
	// evaluation tracker. This update occurs only in-memory on the leader. The
	// evaluation must exist in a blocked state prior to this being called such
	// that on leader changes, the evaluation will be reblocked properly.
	ReblockEval(*structs.Evaluation) error

	// ServersMeetMinimumVersion returns whether the Nomad servers in the
	// worker's region are at least on the given Nomad version. The
	// checkFailedServers parameter specifies whether version for the failed
	// servers should be verified.
	ServersMeetMinimumVersion(minVersion *version.Version, checkFailedServers bool) bool
}

Planner interface is used to submit a task allocation plan.

type PortCollisionEvent

type PortCollisionEvent struct {
	Reason      string
	Node        *structs.Node
	Allocations []*structs.Allocation

	// TODO: this is a large struct, but may be required to debug unexpected
	// port collisions. Re-evaluate its need in the future if the bug is fixed
	// or not caused by this field.
	NetIndex *structs.NetworkIndex
}

PortCollisionEvent is an event that can happen during scheduling when an unexpected port collision is detected.

func (*PortCollisionEvent) Copy

func (ev *PortCollisionEvent) Copy() *PortCollisionEvent

func (*PortCollisionEvent) Sanitize

func (ev *PortCollisionEvent) Sanitize() *PortCollisionEvent

type PreemptionResource

type PreemptionResource interface {
	// MeetsRequirements returns true if the available resources match needed resources
	MeetsRequirements() bool

	// Distance returns values in the range [0, MaxFloat], lower is better
	Distance() float64
}

PreemptionResource interface is implemented by different types of resources.

type PreemptionResourceFactory

type PreemptionResourceFactory func(availableResources *structs.ComparableResources, resourceAsk *structs.ComparableResources) PreemptionResource

PreemptionResourceFactory returns a new PreemptionResource

func GetBasePreemptionResourceFactory

func GetBasePreemptionResourceFactory() PreemptionResourceFactory

GetBasePreemptionResourceFactory returns a preemption resource factory for CPU/Memory/Disk

func GetNetworkPreemptionResourceFactory

func GetNetworkPreemptionResourceFactory() PreemptionResourceFactory

GetNetworkPreemptionResourceFactory returns a preemption resource factory for network assignments

type PreemptionScoringIterator

type PreemptionScoringIterator struct {
	// contains filtered or unexported fields
}

PreemptionScoringIterator is used to score nodes according to the combination of preemptible allocations on them.

func (*PreemptionScoringIterator) Next

func (iter *PreemptionScoringIterator) Next() *RankedNode

func (*PreemptionScoringIterator) Reset

func (iter *PreemptionScoringIterator) Reset()

type Preemptor

type Preemptor struct {
	// contains filtered or unexported fields
}

Preemptor is used to track existing allocations and find suitable allocations to preempt

func NewPreemptor

func NewPreemptor(jobPriority int, ctx Context, jobID *structs.NamespacedID) *Preemptor

func (*Preemptor) PreemptForDevice

func (p *Preemptor) PreemptForDevice(ask *structs.RequestedDevice, devAlloc *deviceAllocator) []*structs.Allocation

PreemptForDevice tries to find allocations to preempt to meet the devices needed. This is called once per device request when assigning devices to the task.

func (*Preemptor) PreemptForNetwork

func (p *Preemptor) PreemptForNetwork(networkResourceAsk *structs.NetworkResource, netIdx *structs.NetworkIndex) []*structs.Allocation

PreemptForNetwork tries to find allocations to preempt to meet network resources. This is called once per task when assigning a network to the task. While finding allocations to preempt, this only considers allocations that share the same network device.

func (*Preemptor) PreemptForTaskGroup

func (p *Preemptor) PreemptForTaskGroup(resourceAsk *structs.AllocatedResources) []*structs.Allocation

PreemptForTaskGroup computes a list of allocations to preempt to accommodate the resources asked for. Only allocs whose job priority is at least 10 lower than jobPriority are considered. This method is meant only for finding preemptible allocations based on CPU/Memory/Disk.

func (*Preemptor) SetCandidates

func (p *Preemptor) SetCandidates(allocs []*structs.Allocation)

SetCandidates initializes the candidate set from which preemptions are chosen

func (*Preemptor) SetNode

func (p *Preemptor) SetNode(node *structs.Node)

SetNode sets the node

func (*Preemptor) SetPreemptions

func (p *Preemptor) SetPreemptions(allocs []*structs.Allocation)

SetPreemptions initializes a map tracking existing counts of preempted allocations per job/task group. This is used while scoring preemption options.
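
A sketch of the full Preemptor flow when evaluating one node; all inputs are assumed to come from the surrounding placement attempt.

import (
	"github.com/hashicorp/nomad/nomad/structs"
	"github.com/hashicorp/nomad/scheduler"
)

// findPreemptions asks whether evicting lower-priority allocations on the
// node would free enough CPU/memory/disk for the requested resources.
func findPreemptions(ctx scheduler.Context, jobPriority int, jobID *structs.NamespacedID,
	node *structs.Node, current []*structs.Allocation,
	ask *structs.AllocatedResources) []*structs.Allocation {
	p := scheduler.NewPreemptor(jobPriority, ctx, jobID)
	p.SetNode(node)
	p.SetCandidates(current)
	return p.PreemptForTaskGroup(ask)
}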

type RankIterator

type RankIterator interface {
	// Next yields a ranked option or nil if exhausted
	Next() *RankedNode

	// Reset is invoked when an allocation has been placed
	// to reset any stale state.
	Reset()
}

RankIterator is used to iteratively yield nodes along with ranking metadata. The iterators may manage some state for performance optimizations.
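
To illustrate how RankIterators compose, a sketch that chains a static source through score normalization and a final max-score pass; the pre-scored inputs are assumed.

import (
	"github.com/hashicorp/nomad/scheduler"
)

// pickBest normalizes the scores of a fixed set of ranked nodes and returns
// the single highest-scoring option, or nil if there are none.
func pickBest(ctx scheduler.Context, ranked []*scheduler.RankedNode) *scheduler.RankedNode {
	source := scheduler.NewStaticRankIterator(ctx, ranked)
	normalized := scheduler.NewScoreNormalizationIterator(ctx, source)
	return scheduler.NewMaxScoreIterator(ctx, normalized).Next()
}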

func NewPreemptionScoringIterator

func NewPreemptionScoringIterator(ctx Context, source RankIterator) RankIterator

NewPreemptionScoringIterator creates an iterator that scores nodes based on the net aggregate priority of the allocations that would be preempted.

type RankedNode

type RankedNode struct {
	Node           *structs.Node
	FinalScore     float64
	Scores         []float64
	TaskResources  map[string]*structs.AllocatedTaskResources
	TaskLifecycles map[string]*structs.TaskLifecycleConfig
	AllocResources *structs.AllocatedSharedResources

	// Proposed is used to cache the proposed allocations on the
	// node. This can be shared between iterators that require it.
	Proposed []*structs.Allocation

	// PreemptedAllocs is used by the BinpackIterator to identify allocs
	// that should be preempted in order to make the placement
	PreemptedAllocs []*structs.Allocation
}

RankedNode is used to provide a score and various ranking metadata along with a node when iterating. This state can be modified as various rank methods are applied.

func (*RankedNode) GoString

func (r *RankedNode) GoString() string

func (*RankedNode) ProposedAllocs

func (r *RankedNode) ProposedAllocs(ctx Context) ([]*structs.Allocation, error)

func (*RankedNode) SetTaskResources

func (r *RankedNode) SetTaskResources(task *structs.Task,
	resource *structs.AllocatedTaskResources)

type RejectPlan

type RejectPlan struct {
	Harness *Harness
}

RejectPlan is used to always reject the entire plan and force a state refresh

func (*RejectPlan) CreateEval

func (r *RejectPlan) CreateEval(*structs.Evaluation) error

func (*RejectPlan) ReblockEval

func (r *RejectPlan) ReblockEval(*structs.Evaluation) error

func (*RejectPlan) ServersMeetMinimumVersion

func (r *RejectPlan) ServersMeetMinimumVersion(minVersion *version.Version, checkFailedServers bool) bool

func (*RejectPlan) SubmitPlan

func (r *RejectPlan) SubmitPlan(*structs.Plan) (*structs.PlanResult, State, error)

func (*RejectPlan) UpdateEval

func (r *RejectPlan) UpdateEval(eval *structs.Evaluation) error

type Scheduler

type Scheduler interface {
	// Process is used to handle a new evaluation. The scheduler is free to
	// apply any logic necessary to make the task placements. The state and
	// planner will be provided prior to any invocations of process.
	Process(*structs.Evaluation) error
}

Scheduler is the top level instance for a scheduler. A scheduler is meant to only encapsulate business logic, pushing the various plumbing into Nomad itself. They are invoked to process a single evaluation at a time. The evaluation may result in task allocations which are computed optimistically, as there are many concurrent evaluations being processed. The task allocations are submitted as a plan, and the current leader will coordinate the commits to prevent oversubscription or improper allocations based on stale state.
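
A minimal custom Scheduler sketch showing the shape a Factory must produce; the no-op Process body is illustrative only.

import (
	log "github.com/hashicorp/go-hclog"
	"github.com/hashicorp/nomad/nomad/structs"
	"github.com/hashicorp/nomad/scheduler"
)

// noopScheduler satisfies Scheduler but never makes placements.
type noopScheduler struct {
	logger  log.Logger
	state   scheduler.State
	planner scheduler.Planner
}

// NewNoopScheduler has the Factory signature, so it could be registered
// alongside the built-in factories.
func NewNoopScheduler(logger log.Logger, eventsCh chan<- interface{},
	state scheduler.State, planner scheduler.Planner) scheduler.Scheduler {
	return &noopScheduler{logger: logger, state: state, planner: planner}
}

// Process handles a single evaluation by ignoring it.
func (s *noopScheduler) Process(eval *structs.Evaluation) error {
	s.logger.Debug("ignoring evaluation", "eval_id", eval.ID)
	return nil
}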

func NewBatchScheduler

func NewBatchScheduler(logger log.Logger, eventsCh chan<- interface{}, state State, planner Planner) Scheduler

NewBatchScheduler is a factory function to instantiate a new batch scheduler

func NewScheduler

func NewScheduler(name string, logger log.Logger, eventsCh chan<- interface{}, state State, planner Planner) (Scheduler, error)

NewScheduler is used to instantiate and return a new scheduler given the scheduler name, initial state, and planner.

func NewServiceScheduler

func NewServiceScheduler(logger log.Logger, eventsCh chan<- interface{}, state State, planner Planner) Scheduler

NewServiceScheduler is a factory function to instantiate a new service scheduler

func NewSysBatchScheduler

func NewSysBatchScheduler(logger log.Logger, eventsCh chan<- interface{}, state State, planner Planner) Scheduler

func NewSystemScheduler

func NewSystemScheduler(logger log.Logger, eventsCh chan<- interface{}, state State, planner Planner) Scheduler

NewSystemScheduler is a factory function to instantiate a new system scheduler.

type ScoreNormalizationIterator

type ScoreNormalizationIterator struct {
	// contains filtered or unexported fields
}

ScoreNormalizationIterator is used to combine scores from various prior iterators into one final score. The current implementation averages the scores together.

func NewScoreNormalizationIterator

func NewScoreNormalizationIterator(ctx Context, source RankIterator) *ScoreNormalizationIterator

NewScoreNormalizationIterator is used to create a ScoreNormalizationIterator that averages scores from various iterators into a final score.

func (*ScoreNormalizationIterator) Next

func (iter *ScoreNormalizationIterator) Next() *RankedNode

func (*ScoreNormalizationIterator) Reset

func (iter *ScoreNormalizationIterator) Reset()

type SelectOptions

type SelectOptions struct {
	PenaltyNodeIDs map[string]struct{}
	PreferredNodes []*structs.Node
	Preempt        bool
	AllocName      string
}

type SetStatusError

type SetStatusError struct {
	Err        error
	EvalStatus string
}

SetStatusError is used to set the status of the evaluation to the given error

func (*SetStatusError) Error

func (s *SetStatusError) Error() string

type SpreadIterator

type SpreadIterator struct {
	// contains filtered or unexported fields
}

SpreadIterator is used to spread allocations across a specified attribute according to preset weights

func NewSpreadIterator

func NewSpreadIterator(ctx Context, source RankIterator) *SpreadIterator

func (*SpreadIterator) Next

func (iter *SpreadIterator) Next() *RankedNode

func (*SpreadIterator) Reset

func (iter *SpreadIterator) Reset()

func (*SpreadIterator) SetJob

func (iter *SpreadIterator) SetJob(job *structs.Job)

func (*SpreadIterator) SetTaskGroup

func (iter *SpreadIterator) SetTaskGroup(tg *structs.TaskGroup)

type Stack

type Stack interface {
	// SetNodes is used to set the base set of potential nodes
	SetNodes([]*structs.Node)

	// SetJob is used to set the job for selection
	SetJob(job *structs.Job)

	// Select is used to select a node for the task group
	Select(tg *structs.TaskGroup, options *SelectOptions) *RankedNode
}

Stack is a chained collection of iterators. The stack is used to make placement decisions. Different schedulers may customize the stack they use to vary the way placements are made.

type State

type State interface {
	// Config returns the configuration of the state store
	Config() *state.StateStoreConfig

	// Nodes returns an iterator over all the nodes.
	// The type of each result is *structs.Node
	Nodes(ws memdb.WatchSet) (memdb.ResultIterator, error)

	// AllocsByJob returns the allocations by JobID
	AllocsByJob(ws memdb.WatchSet, namespace, jobID string, all bool) ([]*structs.Allocation, error)

	// AllocsByNode returns all the allocations by node
	AllocsByNode(ws memdb.WatchSet, node string) ([]*structs.Allocation, error)

	// AllocByID returns the allocation
	AllocByID(ws memdb.WatchSet, allocID string) (*structs.Allocation, error)

	// AllocsByNodeTerminal returns all the allocations by node filtering by terminal status
	AllocsByNodeTerminal(ws memdb.WatchSet, node string, terminal bool) ([]*structs.Allocation, error)

	// NodeByID is used to look up a node by ID
	NodeByID(ws memdb.WatchSet, nodeID string) (*structs.Node, error)

	// JobByID is used to look up a job by ID
	JobByID(ws memdb.WatchSet, namespace, id string) (*structs.Job, error)

	// DeploymentsByJobID returns the deployments associated with the job
	DeploymentsByJobID(ws memdb.WatchSet, namespace, jobID string, all bool) ([]*structs.Deployment, error)

	// JobByIDAndVersion returns the job associated with id and specific version
	JobByIDAndVersion(ws memdb.WatchSet, namespace, id string, version uint64) (*structs.Job, error)

	// LatestDeploymentByJobID returns the latest deployment matching the given
	// job ID
	LatestDeploymentByJobID(ws memdb.WatchSet, namespace, jobID string) (*structs.Deployment, error)

	// SchedulerConfig returns config options for the scheduler
	SchedulerConfig() (uint64, *structs.SchedulerConfiguration, error)

	// CSIVolumeByID fetches a CSI volume, including its controller jobs
	CSIVolumeByID(memdb.WatchSet, string, string) (*structs.CSIVolume, error)

	// CSIVolumesByNodeID returns an iterator over the CSI volumes attached
	// to the given node
	CSIVolumesByNodeID(memdb.WatchSet, string, string) (memdb.ResultIterator, error)

	// LatestIndex returns the greatest index value for all indexes.
	LatestIndex() (uint64, error)
}

State is an immutable view of the global state. This allows schedulers to make intelligent decisions based on allocations of other schedulers and to enforce complex constraints that require more information than is available to a local state scheduler.

type StateEnterprise

type StateEnterprise interface {
}

StateEnterprise are the available state store methods for the enterprise version.

type StaticIterator

type StaticIterator struct {
	// contains filtered or unexported fields
}

StaticIterator is a FeasibleIterator which returns nodes in a static order. This is used at the base of the iterator chain only for testing due to deterministic behavior.

func NewRandomIterator

func NewRandomIterator(ctx Context, nodes []*structs.Node) *StaticIterator

NewRandomIterator constructs a static iterator from a list of nodes after shuffling them in place with the Fisher-Yates algorithm.

func NewStaticIterator

func NewStaticIterator(ctx Context, nodes []*structs.Node) *StaticIterator

NewStaticIterator constructs a static iterator from a list of nodes.

func (*StaticIterator) Next

func (iter *StaticIterator) Next() *structs.Node

func (*StaticIterator) Reset

func (iter *StaticIterator) Reset()

func (*StaticIterator) SetNodes

func (iter *StaticIterator) SetNodes(nodes []*structs.Node)

type StaticRankIterator

type StaticRankIterator struct {
	// contains filtered or unexported fields
}

StaticRankIterator is a RankIterator that returns a static set of results. This is largely only useful for testing.

func NewStaticRankIterator

func NewStaticRankIterator(ctx Context, nodes []*RankedNode) *StaticRankIterator

NewStaticRankIterator returns a new static rank iterator over the given nodes

func (*StaticRankIterator) Next

func (iter *StaticRankIterator) Next() *RankedNode

func (*StaticRankIterator) Reset

func (iter *StaticRankIterator) Reset()

type SystemScheduler

type SystemScheduler struct {
	// contains filtered or unexported fields
}

SystemScheduler is used for 'system' and 'sysbatch' jobs. This scheduler is designed for jobs that should be run on every client. The 'system' mode will ensure those jobs continuously run regardless of successful task exits, whereas 'sysbatch' considers the task complete on success.

func (*SystemScheduler) Process

func (s *SystemScheduler) Process(eval *structs.Evaluation) (err error)

Process is used to handle a single evaluation.

type SystemStack

type SystemStack struct {
	// contains filtered or unexported fields
}

SystemStack is the Stack used for the System scheduler. It is designed to attempt to make placements on all nodes.

func NewSystemStack

func NewSystemStack(sysbatch bool, ctx Context) *SystemStack

NewSystemStack constructs a stack used for selecting system and sysbatch job placements.

sysbatch is used to determine which scheduler config option is used to control the use of preemption.

func (*SystemStack) Select

func (s *SystemStack) Select(tg *structs.TaskGroup, options *SelectOptions) *RankedNode

func (*SystemStack) SetJob

func (s *SystemStack) SetJob(job *structs.Job)

func (*SystemStack) SetNodes

func (s *SystemStack) SetNodes(baseNodes []*structs.Node)

type VerConstraints

type VerConstraints interface {
	Check(v *version.Version) bool
	String() string
}

VerConstraints is the interface implemented by both go-version constraints and semver constraints.
