gonum: gonum.org/v1/gonum/optimize

package optimize

import "gonum.org/v1/gonum/optimize"

Package optimize implements algorithms for finding the optimum value of functions.

Package Files

backtracking.go bfgs.go bisection.go cg.go cmaes.go doc.go errors.go functionconvergence.go global.go gradientdescent.go guessandcheck.go interfaces.go lbfgs.go linesearch.go local.go minimize.go morethuente.go neldermead.go newton.go printer.go stepsizers.go termination.go types.go

Variables

var (
    // ErrZeroDimensional signifies an optimization was called with an input of length 0.
    ErrZeroDimensional = errors.New("optimize: zero dimensional input")

    // ErrLinesearcherFailure signifies that a Linesearcher has iterated too
    // many times. This may occur if the gradient tolerance is set too low.
    ErrLinesearcherFailure = errors.New("linesearch: failed to converge")

    // ErrNonDescentDirection signifies that LinesearchMethod has received a
    // search direction from a NextDirectioner in which the function is not
    // decreasing.
    ErrNonDescentDirection = errors.New("linesearch: non-descent search direction")

    // ErrNoProgress signifies that LinesearchMethod cannot make further
    // progress because there is no change in location after Linesearcher step
    // due to floating-point arithmetic.
    ErrNoProgress = errors.New("linesearch: no change in location after Linesearcher step")

    // ErrLinesearcherBound signifies that a Linesearcher reached a step that
    // lies out of allowed bounds.
    ErrLinesearcherBound = errors.New("linesearch: step out of bounds")
)

func ArmijoConditionMet Uses

func ArmijoConditionMet(currObj, initObj, initGrad, step, decrease float64) bool

ArmijoConditionMet returns true if the Armijo condition (aka sufficient decrease) has been met. Under normal conditions, the following should be true, though this is not enforced:

- initGrad < 0
- step > 0
- 0 < decrease < 1
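
For illustration, a minimal sketch of a manual backtracking loop built on ArmijoConditionMet. Here phi is a hypothetical merit function φ(step), and the decrease constant is a typical choice rather than a package default:

const decrease = 1e-4
initObj := phi(0)  // φ(0)
initGrad := -2.0   // φ'(0); must be negative for a descent direction
step := 1.0
for !optimize.ArmijoConditionMet(phi(step), initObj, initGrad, step, decrease) {
    step *= 0.5 // contract the step until sufficient decrease is achieved
}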

func StrongWolfeConditionsMet Uses

func StrongWolfeConditionsMet(currObj, currGrad, initObj, initGrad, step, decrease, curvature float64) bool

StrongWolfeConditionsMet returns true if the strong Wolfe conditions have been met. The strong Wolfe conditions ensure sufficient decrease in the function value, and sufficient decrease in the magnitude of the projected gradient. Under normal conditions, the following should be true, though this is not enforced:

- initGrad < 0
- step > 0
- 0 <= decrease < curvature < 1

func WeakWolfeConditionsMet Uses

func WeakWolfeConditionsMet(currObj, currGrad, initObj, initGrad, step, decrease, curvature float64) bool

WeakWolfeConditionsMet returns true if the weak Wolfe conditions have been met. The weak Wolfe conditions ensure sufficient decrease in the function value, and a sufficient increase in the value of the projected gradient. Under normal conditions, the following should be true, though this is not enforced:

- initGrad < 0
- step > 0
- 0 <= decrease < curvature < 1

type BFGS Uses

type BFGS struct {
    // Linesearcher selects suitable steps along the descent direction.
    // Accepted steps should satisfy the strong Wolfe conditions.
    // If Linesearcher == nil, an appropriate default is chosen.
    Linesearcher Linesearcher
    // contains filtered or unexported fields
}

BFGS implements the Broyden–Fletcher–Goldfarb–Shanno optimization method. It is a quasi-Newton method that performs successive rank-one updates to an estimate of the inverse Hessian of the objective function. It exhibits super-linear convergence when in proximity to a local minimum. It has memory cost that is O(n^2) relative to the input dimension.

func (*BFGS) Init Uses

func (b *BFGS) Init(loc *Location) (Operation, error)

func (*BFGS) InitDirection Uses

func (b *BFGS) InitDirection(loc *Location, dir []float64) (stepSize float64)

func (*BFGS) Iterate Uses

func (b *BFGS) Iterate(loc *Location) (Operation, error)

func (*BFGS) Needs Uses

func (*BFGS) Needs() struct {
    Gradient bool
    Hessian  bool
}

func (*BFGS) NextDirection Uses

func (b *BFGS) NextDirection(loc *Location, dir []float64) (stepSize float64)

type Backtracking Uses

type Backtracking struct {
    DecreaseFactor    float64 // Constant factor in the sufficient decrease (Armijo) condition.
    ContractionFactor float64 // Step size multiplier at each iteration (step *= ContractionFactor).
    // contains filtered or unexported fields
}

Backtracking is a Linesearcher that uses backtracking to find a point that satisfies the Armijo condition with the given decrease factor. If the Armijo condition has not been met, the step size is decreased by ContractionFactor.

The Armijo condition only requires the gradient at the beginning of each major iteration (not at successive step locations), and so Backtracking may be a good linesearch for functions with expensive gradients. Backtracking is not appropriate for optimizers that require the Wolfe conditions to be met, such as BFGS.

Both DecreaseFactor and ContractionFactor must be between zero and one, otherwise Backtracking will panic. If either DecreaseFactor or ContractionFactor is zero, it will be set to a reasonable default.
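
A configuration sketch pairing Backtracking with GradientDescent (which, unlike BFGS, does not need the Wolfe conditions). The zero-valued DecreaseFactor falls back to the package default; the problem p and starting point x are assumed to be defined elsewhere:

method := &optimize.GradientDescent{
    Linesearcher: &optimize.Backtracking{
        ContractionFactor: 0.5, // halve the step on each backtracking iteration
    },
}
result, err := optimize.Local(p, x, nil, method)
fmt.Println(result.F, err)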

func (*Backtracking) Init Uses

func (b *Backtracking) Init(f, g float64, step float64) Operation

func (*Backtracking) Iterate Uses

func (b *Backtracking) Iterate(f, _ float64) (Operation, float64, error)

type Bisection Uses

type Bisection struct {
    // CurvatureFactor is the constant factor in the curvature condition.
    // Smaller values result in a more exact line search.
    // A set value must be in the interval (0, 1), otherwise Init will panic.
    // If it is zero, it will be defaulted to 0.9.
    CurvatureFactor float64
    // contains filtered or unexported fields
}

Bisection is a Linesearcher that uses a bisection to find a point that satisfies the strong Wolfe conditions with the given curvature factor and a decrease factor of zero.

func (*Bisection) Init Uses

func (b *Bisection) Init(f, g float64, step float64) Operation

func (*Bisection) Iterate Uses

func (b *Bisection) Iterate(f, g float64) (Operation, float64, error)

type CG Uses

type CG struct {
    // Linesearcher must satisfy the strong Wolfe conditions at every iteration.
    // If Linesearcher == nil, an appropriate default is chosen.
    Linesearcher Linesearcher
    // Variant implements the particular CG formula for computing β_k.
    // If Variant is nil, an appropriate default is chosen.
    Variant CGVariant
    // InitialStep estimates the initial line search step size, because the CG
    // method does not generate well-scaled search directions.
    // If InitialStep is nil, an appropriate default is chosen.
    InitialStep StepSizer

    // IterationRestartFactor determines the frequency of restarts based on the
    // problem dimension. The negative gradient direction is taken whenever
    // ceil(IterationRestartFactor*(problem dimension)) iterations have elapsed
    // without a restart. For medium- and large-scale problems
    // IterationRestartFactor should be set to 1; for low-dimensional problems
    // a larger value should be chosen. Note that if the ceil function returns 1,
    // CG will be identical to gradient descent.
    // If IterationRestartFactor is 0, it will be set to 6.
    // CG will panic if IterationRestartFactor is negative.
    IterationRestartFactor float64
    // AngleRestartThreshold sets the threshold angle for restart. The method
    // is restarted if the cosine of the angle between two consecutive
    // gradients is smaller than or equal to AngleRestartThreshold, that is, if
    //  ∇f_k·∇f_{k+1} / (|∇f_k| |∇f_{k+1}|) <= AngleRestartThreshold.
    // A value of AngleRestartThreshold closer to -1 (successive gradients in
    // exact opposite directions) will tend to reduce the number of restarts.
    // If AngleRestartThreshold is 0, it will be set to -0.9.
    // CG will panic if AngleRestartThreshold is not in the interval [-1, 0].
    AngleRestartThreshold float64
    // contains filtered or unexported fields
}

CG implements the nonlinear conjugate gradient method for solving nonlinear unconstrained optimization problems. It is a line search method that generates the search directions d_k according to the formula

d_{k+1} = -∇f_{k+1} + β_k*d_k,   d_0 = -∇f_0.

Variants of the conjugate gradient method differ in the choice of the parameter β_k. The conjugate gradient method usually requires fewer function evaluations than the gradient descent method and no matrix storage, but L-BFGS is usually more efficient.

CG implements a restart strategy that takes the steepest descent direction (i.e., d_{k+1} = -∇f_{k+1}) whenever any of the following conditions holds:

- A certain number of iterations has elapsed without a restart. This number
  is controllable via IterationRestartFactor and if equal to 0, it is set to
  a reasonable default based on the problem dimension.
- The angle between the gradients at two consecutive iterations ∇f_k and
  ∇f_{k+1} is too large.
- The direction d_{k+1} is not a descent direction.
- β_k returned from CGVariant.Beta is equal to zero.

The line search for CG must yield step sizes that satisfy the strong Wolfe conditions at every iteration, otherwise the generated search direction might fail to be a descent direction. The line search should be more stringent compared with those for Newton-like methods, which can be achieved by setting the gradient constant in the strong Wolfe conditions to a small value.

See also William Hager, Hongchao Zhang, A survey of nonlinear conjugate gradient methods. Pacific Journal of Optimization, 2 (2006), pp. 35-58, and references therein.
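
A configuration sketch, not a recommendation: CG with an explicit variant, a More-Thuente line search with a small curvature factor (a stricter strong Wolfe condition, as suggested above), and a first-order initial step estimate. Zero-valued fields keep their documented defaults; p and x are assumed to be defined elsewhere:

method := &optimize.CG{
    Variant:      &optimize.PolakRibierePolyak{},
    Linesearcher: &optimize.MoreThuente{CurvatureFactor: 0.1},
    InitialStep:  &optimize.FirstOrderStepSize{},
}
result, err := optimize.Local(p, x, nil, method)
fmt.Println(result.F, err)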

func (*CG) Init Uses

func (cg *CG) Init(loc *Location) (Operation, error)

func (*CG) InitDirection Uses

func (cg *CG) InitDirection(loc *Location, dir []float64) (stepSize float64)

func (*CG) Iterate Uses

func (cg *CG) Iterate(loc *Location) (Operation, error)

func (*CG) Needs Uses

func (*CG) Needs() struct {
    Gradient bool
    Hessian  bool
}

func (*CG) NextDirection Uses

func (cg *CG) NextDirection(loc *Location, dir []float64) (stepSize float64)

type CGVariant Uses

type CGVariant interface {
    // Init is called at the first iteration and provides a way to initialize
    // any internal state.
    Init(loc *Location)
    // Beta returns the value of the scaling parameter that is computed
    // according to the particular variant of the CG method.
    Beta(grad, gradPrev, dirPrev []float64) float64
}

CGVariant calculates the scaling parameter, β, used for updating the conjugate direction in the nonlinear conjugate gradient (CG) method.
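
As an illustration of the interface, the following sketch implements a user-defined variant that reproduces the Fletcher-Reeves formula; it is not part of the package, and floats refers to gonum.org/v1/gonum/floats:

// myFR computes β_k = |∇f_{k+1}|^2 / |∇f_k|^2, caching |∇f_k|^2 between calls.
type myFR struct {
    prevNorm2 float64
}

func (v *myFR) Init(loc *optimize.Location) {
    v.prevNorm2 = floats.Dot(loc.Gradient, loc.Gradient)
}

func (v *myFR) Beta(grad, _, _ []float64) float64 {
    norm2 := floats.Dot(grad, grad)
    beta := norm2 / v.prevNorm2
    v.prevNorm2 = norm2
    return beta
}

It can then be supplied to CG as &optimize.CG{Variant: &myFR{}}.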

type CmaEsChol Uses

type CmaEsChol struct {
    // InitStepSize sets the initial size of the covariance matrix adaptation.
    // If InitStepSize is 0, a default value of 0.5 is used. InitStepSize cannot
    // be negative, or CmaEsChol will panic.
    InitStepSize float64
    // Population sets the population size for the algorithm. If Population is
    // 0, a default value of 4 + math.Floor(3*math.Log(float64(dim))) is used.
    // Population cannot be negative or CmaEsChol will panic.
    Population int
    // InitMean is the initial mean of the multivariate normal for sampling
    // input locations. If InitMean is nil, the zero vector is used. If InitMean
    // is not nil, it must have length equal to the problem dimension.
    InitMean []float64
    // InitCholesky specifies the Cholesky decomposition of the covariance
    // matrix for the initial sampling distribution. If InitCholesky is nil,
    // a default value of I is used. If it is non-nil, then it must have
    // InitCholesky.Size() be equal to the problem dimension.
    InitCholesky *mat.Cholesky
    // StopLogDet sets the threshold for stopping the optimization if the
    // distribution becomes too peaked. The log determinant is a measure of the
    // (log) "volume" of the normal distribution, and when it is too small
    // the samples are almost the same. If the log determinant of the covariance
    // matrix becomes less than StopLogDet, the optimization run is concluded.
    // If StopLogDet is 0, a default value of dim*log(1e-16) is used.
    // If StopLogDet is NaN, the stopping criterion is not used, though
    // this can cause numeric instabilities in the algorithm.
    StopLogDet float64
    // ForgetBest, when true, does not track the best overall function value found,
    // instead returning the new best sample in each iteration. If ForgetBest
    // is false, then the minimum value returned will be the lowest across all
    // iterations, regardless of when that sample was generated.
    ForgetBest bool
    // Src allows a random number generator to be supplied for generating samples.
    // If Src is nil the generator in golang.org/x/exp/rand is used.
    Src *rand.Rand
    // contains filtered or unexported fields
}

CmaEsChol implements the covariance matrix adaptation evolution strategy (CMA-ES) based on the Cholesky decomposition. The full algorithm is described in

Krause, Oswin, Dídac Rodríguez Arbonès, and Christian Igel. "CMA-ES with
optimal covariance update and storage complexity." Advances in Neural
Information Processing Systems. 2016.
https://papers.nips.cc/paper/6457-cma-es-with-optimal-covariance-update-and-storage-complexity.pdf

CMA-ES is a global optimization method that progressively adapts a population of samples, combining techniques from local and global optimization. Specifically, the CMA-ES algorithm uses an initial multivariate normal distribution to generate a population of input locations. The input locations with the lowest function values are used to update the parameters of the normal distribution, a new set of input locations is generated, and this procedure is iterated until convergence.

As the normal distribution is progressively updated according to the best samples, the mean of the distribution tends to move in a gradient-descent-like fashion while the covariance shrinks, so a single run may settle into a local minimum. It is recommended that the algorithm be run multiple times (with different InitMean) to have a better chance of finding the global minimum.

The CMA-ES-Chol algorithm differs from the standard CMA-ES algorithm in that it directly updates the Cholesky decomposition of the normal distribution. This changes the runtime from O(dimension^3) to O(dimension^2*population). The evolution of the multivariate normal will be similar to the baseline CMA-ES algorithm, but the covariance update equation is not identical.

For more information about the CMA-ES algorithm, see

https://en.wikipedia.org/wiki/CMA-ES
https://arxiv.org/pdf/1604.00772.pdf
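
A sketch of running CmaEsChol through Global on a 10-dimensional, gradient-free problem. The field values are illustrative rather than defaults, and p is assumed to be a Problem with only Func set:

dim := 10
cma := &optimize.CmaEsChol{
    InitMean:     make([]float64, dim), // start the search distribution at the origin
    InitStepSize: 1.5,
    Population:   40,
}
result, err := optimize.Global(p, dim, nil, cma)
fmt.Println(result.F, err)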

func (*CmaEsChol) Done Uses

func (cma *CmaEsChol) Done()

func (*CmaEsChol) InitGlobal Uses

func (cma *CmaEsChol) InitGlobal(dim, tasks int) int

func (*CmaEsChol) IterateGlobal Uses

func (cma *CmaEsChol) IterateGlobal(task int, loc *Location) (Operation, error)

func (*CmaEsChol) Needs Uses

func (cma *CmaEsChol) Needs() struct{ Gradient, Hessian bool }

func (*CmaEsChol) Status Uses

func (cma *CmaEsChol) Status() (Status, error)

Status returns the status of the method.

type ConstantStepSize Uses

type ConstantStepSize struct {
    Size float64
}

ConstantStepSize is a StepSizer that returns the same step size for every iteration.

func (ConstantStepSize) Init Uses

func (c ConstantStepSize) Init(_ *Location, _ []float64) float64

func (ConstantStepSize) StepSize Uses

func (c ConstantStepSize) StepSize(_ *Location, _ []float64) float64

type DaiYuan Uses

type DaiYuan struct {
    // contains filtered or unexported fields
}

DaiYuan implements the Dai-Yuan variant of the CG method that computes the scaling parameter β_k according to the formula

β_k = |∇f_{k+1}|^2 / d_k·y_k,

where y_k = ∇f_{k+1} - ∇f_k.

func (*DaiYuan) Beta Uses

func (dy *DaiYuan) Beta(grad, gradPrev, dirPrev []float64) (beta float64)

func (*DaiYuan) Init Uses

func (dy *DaiYuan) Init(loc *Location)

type ErrFunc Uses

type ErrFunc float64

ErrFunc is returned when an initial function value is invalid. The error state may be either +Inf or NaN. ErrFunc satisfies the error interface.

func (ErrFunc) Error Uses

func (err ErrFunc) Error() string

type ErrGrad Uses

type ErrGrad struct {
    Grad  float64 // Grad is the invalid gradient value.
    Index int     // Index is the position at which the invalid gradient was found.
}

ErrGrad is returned when an initial gradient is invalid. The error gradient may be either ±Inf or NaN. ErrGrad satisfies the error interface.

func (ErrGrad) Error Uses

func (err ErrGrad) Error() string

type FirstOrderStepSize Uses

type FirstOrderStepSize struct {
    // InitialStepFactor sets the step size for the first iteration to be InitialStepFactor / |g|_∞.
    // If InitialStepFactor is zero, it will be set to one.
    InitialStepFactor float64
    // MinStepSize is the lower bound on the estimated step size.
    // MinStepSize times GradientAbsTol should always be greater than machine epsilon.
    // If MinStepSize is zero, it will be set to 1e-3.
    MinStepSize float64
    // MaxStepSize is the upper bound on the estimated step size.
    // If MaxStepSize is zero, it will be set to 1.
    MaxStepSize float64
    // contains filtered or unexported fields
}

FirstOrderStepSize estimates the initial line search step size based on the assumption that the first-order change in the function will be the same as that obtained at the previous iteration. That is, the initial step size s^0_k is chosen so that

s^0_k ∇f_k⋅p_k = s_{k-1} ∇f_{k-1}⋅p_{k-1}

This is useful for line search methods that do not produce well-scaled descent directions, such as gradient descent or conjugate gradient methods.

func (*FirstOrderStepSize) Init Uses

func (fo *FirstOrderStepSize) Init(loc *Location, dir []float64) (stepSize float64)

func (*FirstOrderStepSize) StepSize Uses

func (fo *FirstOrderStepSize) StepSize(loc *Location, dir []float64) (stepSize float64)

type FletcherReeves Uses

type FletcherReeves struct {
    // contains filtered or unexported fields
}

FletcherReeves implements the Fletcher-Reeves variant of the CG method that computes the scaling parameter β_k according to the formula

β_k = |∇f_{k+1}|^2 / |∇f_k|^2.

func (*FletcherReeves) Beta Uses

func (fr *FletcherReeves) Beta(grad, _, _ []float64) (beta float64)

func (*FletcherReeves) Init Uses

func (fr *FletcherReeves) Init(loc *Location)

type FunctionConverge Uses

type FunctionConverge struct {
    Absolute   float64
    Relative   float64
    Iterations int
    // contains filtered or unexported fields
}

FunctionConverge tests for the convergence of function values. See comment in Settings.

func (*FunctionConverge) FunctionConverged Uses

func (fc *FunctionConverge) FunctionConverged(f float64) Status

func (*FunctionConverge) Init Uses

func (fc *FunctionConverge) Init(f float64)

type GlobalMethod Uses

type GlobalMethod interface {
    Needser
    // InitGlobal communicates the input dimension and maximum number of tasks,
    // and returns the number of concurrent processes. The return must be less
    // than or equal to tasks.
    InitGlobal(dim, tasks int) int

    // IterateGlobal retrieves information from the location associated with
    // the given task ID, and returns the next operation to perform with that
    // Location. IterateGlobal may assume that the same pointer is associated
    // with the same task.
    IterateGlobal(task int, loc *Location) (Operation, error)

    // Done communicates that the optimization has concluded to allow for shutdown.
    // After Done is called, no more calls to IterateGlobal will be made.
    Done()
}

GlobalMethod is an optimization method which seeks to find the global minimum of an objective function.

At the beginning of the optimization, InitGlobal is called to communicate the dimension of the input and maximum number of concurrent tasks. The actual number of concurrent tasks will be set from the return of InitGlobal, which must not be greater than the input tasks.

During the optimization, a reverse-communication interface is used between the GlobalMethod and the caller. GlobalMethod acts as a client that asks the caller to perform needed operations given the return from IterateGlobal. This allows and enforces automation of maintaining statistics and checking for (various types of) convergence.

The return from IterateGlobal can be an Evaluation, a MajorIteration or NoOperation.

An evaluation is one or more of the Evaluation operations (FuncEvaluation, GradEvaluation, etc.) combined with the bitwise or operator. In an evaluation operation, the requested fields of Problem will be evaluated at the value in Location.X, filling the corresponding fields of Location. These values can be retrieved and used upon the next call to IterateGlobal with that task id. The GlobalMethod interface requires that entries of Location are not modified aside from the commanded evaluations. Thus, the type implementing GlobalMethod may use multiple Operations to set the Location fields at a particular x value.

When IterateGlobal declares MajorIteration, the caller updates the optimal location to the values in Location, and checks for convergence. The type implementing GlobalMethod must make sure that the fields of Location are valid and consistent.

IterateGlobal must not return InitIteration and PostIteration operations. These are reserved for the clients to be passed to Recorders. A Method must also not combine the Evaluation operations with the Iteration operations.
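
To make the protocol concrete, the following is a sketch of a trivial GlobalMethod (not part of the package) that samples uniform random points in [-1, 1]^dim. For each task it alternates between requesting a function evaluation and declaring a MajorIteration; rand refers to math/rand:

// randomSearch samples random locations and reports each as a candidate optimum.
type randomSearch struct {
    evaluated []bool // per-task flag: true if an evaluation has been requested
}

func (*randomSearch) Needs() struct{ Gradient, Hessian bool } {
    return struct{ Gradient, Hessian bool }{}
}

func (r *randomSearch) InitGlobal(dim, tasks int) int {
    r.evaluated = make([]bool, tasks)
    return tasks
}

func (r *randomSearch) IterateGlobal(task int, loc *optimize.Location) (optimize.Operation, error) {
    if r.evaluated[task] {
        // loc.F now holds the value at loc.X; report it as a candidate optimum.
        r.evaluated[task] = false
        return optimize.MajorIteration, nil
    }
    for i := range loc.X {
        loc.X[i] = 2*rand.Float64() - 1
    }
    r.evaluated[task] = true
    return optimize.FuncEvaluation, nil
}

func (*randomSearch) Done() {}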

type GradientDescent Uses

type GradientDescent struct {
    // Linesearcher selects suitable steps along the descent direction.
    // If Linesearcher is nil, a reasonable default will be chosen.
    Linesearcher Linesearcher
    // StepSizer determines the initial step size along each direction.
    // If StepSizer is nil, a reasonable default will be chosen.
    StepSizer StepSizer
    // contains filtered or unexported fields
}

GradientDescent implements the steepest descent optimization method that performs successive steps along the direction of the negative gradient.

func (*GradientDescent) Init Uses

func (g *GradientDescent) Init(loc *Location) (Operation, error)

func (*GradientDescent) InitDirection Uses

func (g *GradientDescent) InitDirection(loc *Location, dir []float64) (stepSize float64)

func (*GradientDescent) Iterate Uses

func (g *GradientDescent) Iterate(loc *Location) (Operation, error)

func (*GradientDescent) Needs Uses

func (*GradientDescent) Needs() struct {
    Gradient bool
    Hessian  bool
}

func (*GradientDescent) NextDirection Uses

func (g *GradientDescent) NextDirection(loc *Location, dir []float64) (stepSize float64)

type GuessAndCheck Uses

type GuessAndCheck struct {
    Rander distmv.Rander
    // contains filtered or unexported fields
}

GuessAndCheck is a global optimizer that evaluates the function at random locations. Not a good optimizer, but useful for comparison and debugging.

func (*GuessAndCheck) Done Uses

func (g *GuessAndCheck) Done()

func (*GuessAndCheck) InitGlobal Uses

func (g *GuessAndCheck) InitGlobal(dim, tasks int) int

func (*GuessAndCheck) IterateGlobal Uses

func (g *GuessAndCheck) IterateGlobal(task int, loc *Location) (Operation, error)

func (*GuessAndCheck) Needs Uses

func (g *GuessAndCheck) Needs() struct{ Gradient, Hessian bool }

type HagerZhang Uses

type HagerZhang struct {
    // contains filtered or unexported fields
}

HagerZhang implements the Hager-Zhang variant of the CG method that computes the scaling parameter β_k according to the formula

β_k = (y_k - 2 d_k |y_k|^2/(d_k·y_k))·∇f_{k+1} / (d_k·y_k),

where y_k = ∇f_{k+1} - ∇f_k.

func (*HagerZhang) Beta Uses

func (hz *HagerZhang) Beta(grad, gradPrev, dirPrev []float64) (beta float64)

func (*HagerZhang) Init Uses

func (hz *HagerZhang) Init(loc *Location)

type HestenesStiefel Uses

type HestenesStiefel struct {
    // contains filtered or unexported fields
}

HestenesStiefel implements the Hestenes-Stiefel variant of the CG method that computes the scaling parameter β_k according to the formula

β_k = max(0, ∇f_{k+1}·y_k / d_k·y_k),

where y_k = ∇f_{k+1} - ∇f_k.

func (*HestenesStiefel) Beta Uses

func (hs *HestenesStiefel) Beta(grad, gradPrev, dirPrev []float64) (beta float64)

func (*HestenesStiefel) Init Uses

func (hs *HestenesStiefel) Init(loc *Location)

type LBFGS Uses

type LBFGS struct {
    // Linesearcher selects suitable steps along the descent direction.
    // Accepted steps should satisfy the strong Wolfe conditions.
    // If Linesearcher is nil, a reasonable default will be chosen.
    Linesearcher Linesearcher
    // Store is the size of the limited-memory storage.
    // If Store is 0, it will be defaulted to 15.
    Store int
    // contains filtered or unexported fields
}

LBFGS implements the limited-memory BFGS method for gradient-based unconstrained minimization.

It stores a modified version of the inverse Hessian approximation H implicitly from the last Store iterations while the normal BFGS method stores and manipulates H directly as a dense matrix. Therefore LBFGS is more appropriate than BFGS for large problems as the cost of LBFGS scales as O(Store * dim) while BFGS scales as O(dim^2). The "forgetful" nature of LBFGS may also make it perform better than BFGS for functions with Hessians that vary rapidly spatially.
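
A small configuration sketch: increasing Store trades memory for a more accurate inverse Hessian approximation. p and x are assumed to be defined elsewhere:

method := &optimize.LBFGS{
    Store: 30, // keep 30 correction pairs instead of the default 15
}
result, err := optimize.Local(p, x, nil, method)
fmt.Println(result.F, err)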

func (*LBFGS) Init Uses

func (l *LBFGS) Init(loc *Location) (Operation, error)

func (*LBFGS) InitDirection Uses

func (l *LBFGS) InitDirection(loc *Location, dir []float64) (stepSize float64)

func (*LBFGS) Iterate Uses

func (l *LBFGS) Iterate(loc *Location) (Operation, error)

func (*LBFGS) Needs Uses

func (*LBFGS) Needs() struct {
    Gradient bool
    Hessian  bool
}

func (*LBFGS) NextDirection Uses

func (l *LBFGS) NextDirection(loc *Location, dir []float64) (stepSize float64)

type LinesearchMethod Uses

type LinesearchMethod struct {
    // NextDirectioner specifies the search direction of each linesearch.
    NextDirectioner NextDirectioner
    // Linesearcher performs a linesearch along the search direction.
    Linesearcher Linesearcher
    // contains filtered or unexported fields
}

LinesearchMethod represents an abstract optimization method in which a function is optimized through successive line search optimizations.

func (*LinesearchMethod) Init Uses

func (ls *LinesearchMethod) Init(loc *Location) (Operation, error)

func (*LinesearchMethod) Iterate Uses

func (ls *LinesearchMethod) Iterate(loc *Location) (Operation, error)

type Linesearcher Uses

type Linesearcher interface {
    // Init initializes the Linesearcher and a new line search. Value and
    // derivative contain φ(0) and φ'(0), respectively, and step contains the
    // first trial step length. It returns an Operation that must be one of
    // FuncEvaluation, GradEvaluation, FuncEvaluation|GradEvaluation. The
    // caller must evaluate φ(step), φ'(step), or both, respectively, and pass
    // the result to Linesearcher in value and derivative arguments to Iterate.
    Init(value, derivative float64, step float64) Operation

    // Iterate takes in the values of φ and φ' evaluated at the previous step
    // and returns the next operation.
    //
    // If op is one of FuncEvaluation, GradEvaluation,
    // FuncEvaluation|GradEvaluation, the caller must evaluate φ(step),
    // φ'(step), or both, respectively, and pass the result to Linesearcher in
    // value and derivative arguments on the next call to Iterate.
    //
    // If op is MajorIteration, a sufficiently accurate minimum of φ has been
    // found at the previous step and the line search has concluded. Init must
    // be called again to initialize a new line search.
    //
    // If err is nil, op must not specify another operation. If err is not nil,
    // the values of op and step are undefined.
    Iterate(value, derivative float64) (op Operation, step float64, err error)
}

Linesearcher is a type that can perform a line search. It tries to find an (approximate) minimum of the objective function along the search direction dir_k starting at the most recent location x_k, i.e., it tries to minimize the function

φ(step) := f(x_k + step * dir_k) where step > 0.

Typically, a Linesearcher will be used in conjunction with LinesearchMethod for performing gradient-based optimization through sequential line searches.
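
As a sketch of the protocol only (not a usable line search), the following Linesearcher accepts the first trial step unconditionally: Init requests φ and φ' at the trial step, and the next call to Iterate declares MajorIteration. It does not enforce the Armijo or Wolfe conditions:

type acceptFirst struct {
    step float64
}

func (a *acceptFirst) Init(value, derivative, step float64) optimize.Operation {
    a.step = step
    // Ask the caller to evaluate φ(step) and φ'(step).
    return optimize.FuncEvaluation | optimize.GradEvaluation
}

func (a *acceptFirst) Iterate(value, derivative float64) (optimize.Operation, float64, error) {
    // φ and φ' at a.step are now available; accept the step and end the search.
    return optimize.MajorIteration, a.step, nil
}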

type Location Uses

type Location struct {
    X        []float64
    F        float64
    Gradient []float64
    Hessian  *mat.SymDense
}

Location represents a location in the optimization procedure.

type Method Uses

type Method interface {
    // Init initializes the method based on the initial data in loc, updates it
    // and returns the first operation to be carried out by the caller.
    // The initial location must be valid as specified by Needs.
    Init(loc *Location) (Operation, error)

    // Iterate retrieves data from loc, performs one iteration of the method,
    // updates loc and returns the next operation.
    Iterate(loc *Location) (Operation, error)

    Needser
}

A Method can optimize an objective function.

It uses a reverse-communication interface between the optimization method and the caller. Method acts as a client that asks the caller to perform needed operations via Operation returned from Init and Iterate methods. This provides independence of the optimization algorithm on user-supplied data and their representation, and enables automation of common operations like checking for (various types of) convergence and maintaining statistics.

A Method can command an Evaluation, a MajorIteration or NoOperation operations.

An evaluation operation is one or more of the Evaluation operations (FuncEvaluation, GradEvaluation, etc.) which can be combined with the bitwise or operator. In an evaluation operation, the requested fields of Problem will be evaluated at the point specified in Location.X. The corresponding fields of Location will be filled with the results that can be retrieved upon the next call to Iterate. The Method interface requires that entries of Location are not modified aside from the commanded evaluations. Thus, the type implementing Method may use multiple Operations to set the Location fields at a particular x value.

Instead of an Evaluation, a Method may declare MajorIteration. In a MajorIteration, the values in the fields of Location are treated as a potential optimizer. The convergence of the optimization routine (GradientThreshold, etc.) is checked at this new best point. In a MajorIteration, the fields of Location must be valid and consistent.

A Method must not return InitIteration and PostIteration operations. These are reserved for the clients to be passed to Recorders. A Method must also not combine the Evaluation operations with the Iteration operations.
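
The following sketch shows the reverse-communication loop that Local performs on behalf of a Method. Statistics, Recorder calls and most convergence checks are omitted; p, x0 and method are assumed to be defined elsewhere, and floats refers to gonum.org/v1/gonum/floats:

loc := &optimize.Location{
    X:        append([]float64(nil), x0...),
    Gradient: make([]float64, len(x0)),
}
loc.F = p.Func(loc.X)
p.Grad(loc.Gradient, loc.X)

op, err := method.Init(loc)
for err == nil {
    if op&optimize.FuncEvaluation != 0 {
        loc.F = p.Func(loc.X)
    }
    if op&optimize.GradEvaluation != 0 {
        p.Grad(loc.Gradient, loc.X)
    }
    if op == optimize.MajorIteration && floats.Norm(loc.Gradient, math.Inf(1)) < 1e-6 {
        break // loc holds a sufficiently converged candidate optimum
    }
    op, err = method.Iterate(loc)
}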

type MoreThuente Uses

type MoreThuente struct {
    // DecreaseFactor is the constant factor in the sufficient decrease
    // (Armijo) condition.
    // It must be in the interval [0, 1). The default value is 0.
    DecreaseFactor float64
    // CurvatureFactor is the constant factor in the Wolfe conditions. Smaller
    // values result in a more exact line search.
    // A set value must be in the interval (0, 1). If it is zero, it will be
    // defaulted to 0.9.
    CurvatureFactor float64
    // StepTolerance sets the minimum acceptable width for the linesearch
    // interval. If the relative interval length is less than this value,
    // ErrLinesearcherFailure is returned.
    // It must be non-negative. If it is zero, it will be defaulted to 1e-10.
    StepTolerance float64

    // MinimumStep is the minimum step that the linesearcher will take.
    // It must be non-negative and less than MaximumStep. Defaults to no
    // minimum (a value of 0).
    MinimumStep float64
    // MaximumStep is the maximum step that the linesearcher will take.
    // It must be greater than MinimumStep. If it is zero, it will be defaulted
    // to 1e20.
    MaximumStep float64
    // contains filtered or unexported fields
}

MoreThuente is a Linesearcher that finds steps that satisfy both the sufficient decrease and curvature conditions (the strong Wolfe conditions).

References:

- More, J.J. and D.J. Thuente: Line Search Algorithms with Guaranteed Sufficient
  Decrease. ACM Transactions on Mathematical Software 20(3) (1994), 286-307

func (*MoreThuente) Init Uses

func (mt *MoreThuente) Init(f, g float64, step float64) Operation

func (*MoreThuente) Iterate Uses

func (mt *MoreThuente) Iterate(f, g float64) (Operation, float64, error)

type Needser Uses

type Needser interface {
    // Needs specifies information about the objective function needed by the
    // optimizer beyond just the function value. The information is used
    // internally for initialization and must match evaluation types returned
    // by Init and Iterate during the optimization process.
    Needs() struct {
        Gradient bool
        Hessian  bool
    }
}

type NelderMead Uses

type NelderMead struct {
    InitialVertices [][]float64
    InitialValues   []float64
    Reflection      float64 // Reflection parameter (>0)
    Expansion       float64 // Expansion parameter (>1)
    Contraction     float64 // Contraction parameter (>0, <1)
    Shrink          float64 // Shrink parameter (>0, <1)
    SimplexSize     float64 // size of auto-constructed initial simplex
    // contains filtered or unexported fields
}

NelderMead is an implementation of the Nelder-Mead simplex algorithm for gradient-free nonlinear optimization (not to be confused with Dantzig's simplex algorithm for linear programming). The implementation follows the algorithm described in

http://epubs.siam.org/doi/pdf/10.1137/S1052623496303470

If an initial simplex is provided through InitialVertices and InitialValues, it is used and the initial location is ignored. If InitialVertices and InitialValues are both nil, an initial simplex will be generated automatically, using the initial location as one vertex and placing each additional vertex SimplexSize away along one dimension.

If the simplex update parameters (Reflection, etc.) are zero, they will be set automatically based on the dimension according to the recommendations in

http://www.webpages.uidaho.edu/~fuchang/res/ANMS.pdf
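
A gradient-free sketch: NelderMead needs only Func, and leaving the parameters zero lets the method choose them automatically. The objective here is a hypothetical two-dimensional bowl:

p := optimize.Problem{
    Func: func(x []float64) float64 {
        return x[0]*x[0] + 10*x[1]*x[1]
    },
}
result, err := optimize.Local(p, []float64{5, -3}, nil, &optimize.NelderMead{})
fmt.Println(result.X, err)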

func (*NelderMead) Init Uses

func (n *NelderMead) Init(loc *Location) (Operation, error)

func (*NelderMead) Iterate Uses

func (n *NelderMead) Iterate(loc *Location) (Operation, error)

func (*NelderMead) Needs Uses

func (*NelderMead) Needs() struct {
    Gradient bool
    Hessian  bool
}

type Newton Uses

type Newton struct {
    // Linesearcher is used for selecting suitable steps along the descent
    // direction d. Accepted steps should satisfy at least one of the Wolfe,
    // Goldstein or Armijo conditions.
    // If Linesearcher == nil, an appropriate default is chosen.
    Linesearcher Linesearcher
    // Increase is the factor by which a scalar tau is successively increased
    // so that (H + tau*I) is positive definite. Larger values reduce the
    // number of trial Hessian factorizations, but also reduce the second-order
    // information in H.
    // Increase must be greater than 1. If Increase is 0, it is defaulted to 5.
    Increase float64
    // contains filtered or unexported fields
}

Newton implements a modified Newton's method for Hessian-based unconstrained minimization. It applies regularization when the Hessian is not positive definite, and it can converge to a local minimum from any starting point.

Newton iteratively forms a quadratic model to the objective function f and tries to minimize this approximate model. It generates a sequence of locations x_k by means of

solve H_k d_k = -∇f_k for d_k,
x_{k+1} = x_k + α_k d_k,

where H_k is the Hessian matrix of f at x_k and α_k is a step size found by a line search.

Away from a minimizer H_k may not be positive definite and d_k may not be a descent direction. Newton implements a Hessian modification strategy that adds successively larger multiples of identity to H_k until it becomes positive definite. Note that the repeated trial factorization of the modified Hessian involved in this process can be computationally expensive.

If the Hessian matrix cannot be formed explicitly or if the computational cost of its factorization is prohibitive, BFGS or L-BFGS quasi-Newton method can be used instead.
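
A sketch of a Hessian-based minimization with Newton on a simple quadratic; Hess fills the provided symmetric matrix in place, and mat refers to gonum.org/v1/gonum/mat:

p := optimize.Problem{
    Func: func(x []float64) float64 {
        dx, dy := x[0]-1, x[1]+2
        return dx*dx + 4*dy*dy
    },
    Grad: func(grad, x []float64) {
        grad[0] = 2 * (x[0] - 1)
        grad[1] = 8 * (x[1] + 2)
    },
    Hess: func(hess mat.MutableSymmetric, x []float64) {
        hess.SetSym(0, 0, 2)
        hess.SetSym(0, 1, 0)
        hess.SetSym(1, 1, 8)
    },
}
result, err := optimize.Local(p, []float64{10, 10}, nil, &optimize.Newton{})
fmt.Println(result.X, err)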

func (*Newton) Init Uses

func (n *Newton) Init(loc *Location) (Operation, error)

func (*Newton) InitDirection Uses

func (n *Newton) InitDirection(loc *Location, dir []float64) (stepSize float64)

func (*Newton) Iterate Uses

func (n *Newton) Iterate(loc *Location) (Operation, error)

func (*Newton) Needs Uses

func (n *Newton) Needs() struct {
    Gradient bool
    Hessian  bool
}

func (*Newton) NextDirection Uses

func (n *Newton) NextDirection(loc *Location, dir []float64) (stepSize float64)

type NextDirectioner Uses

type NextDirectioner interface {
    // InitDirection initializes the NextDirectioner at the given starting location,
    // putting the initial direction in place into dir, and returning the initial
    // step size. InitDirection must not modify Location.
    InitDirection(loc *Location, dir []float64) (step float64)

    // NextDirection updates the search direction and step size. Location is
    // the location seen at the conclusion of the most recent linesearch. The
    // next search direction is put in place into dir, and the next step size
    // is returned. NextDirection must not modify Location.
    NextDirection(loc *Location, dir []float64) (step float64)
}

NextDirectioner implements a strategy for computing a new line search direction at each major iteration. Typically, a NextDirectioner will be used in conjunction with LinesearchMethod for performing gradient-based optimization through sequential line searches.

type Operation Uses

type Operation uint64

Operation represents the set of operations commanded by Method at each iteration. It is a bitmap of various Iteration and Evaluation constants. Individual constants must NOT be combined by the bitwise OR operator except for the Evaluation operations.

const (
    // NoOperation specifies that no evaluation or convergence check should
    // take place.
    NoOperation Operation = 0
    // InitIteration is sent to Recorder to indicate the initial location.
    // All fields of the location to record must be valid.
    // Method must not return it.
    InitIteration Operation = 1 << (iota - 1)
    // PostIteration is sent to Recorder to indicate the final location
    // reached during an optimization run.
    // All fields of the location to record must be valid.
    // Method must not return it.
    PostIteration
    // MajorIteration indicates that the next candidate location for
    // an optimum has been found and convergence should be checked.
    MajorIteration
    // FuncEvaluation specifies that the objective function
    // should be evaluated.
    FuncEvaluation
    // GradEvaluation specifies that the gradient
    // of the objective function should be evaluated.
    GradEvaluation
    // HessEvaluation specifies that the Hessian
    // of the objective function should be evaluated.
    HessEvaluation
)

Supported Operations.
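
For example, a Method may request both the function value and the gradient in a single operation, and the caller tests the individual bits:

op := optimize.FuncEvaluation | optimize.GradEvaluation
if op&optimize.GradEvaluation != 0 {
    // Evaluate the gradient at Location.X and store it in Location.Gradient.
}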

func (Operation) String Uses

func (op Operation) String() string

type PolakRibierePolyak Uses

type PolakRibierePolyak struct {
    // contains filtered or unexported fields
}

PolakRibierePolyak implements the Polak-Ribiere-Polyak variant of the CG method that computes the scaling parameter β_k according to the formula

β_k = max(0, ∇f_{k+1}·y_k / |∇f_k|^2),

where y_k = ∇f_{k+1} - ∇f_k.

func (*PolakRibierePolyak) Beta Uses

func (pr *PolakRibierePolyak) Beta(grad, gradPrev, _ []float64) (beta float64)

func (*PolakRibierePolyak) Init Uses

func (pr *PolakRibierePolyak) Init(loc *Location)

type Printer Uses

type Printer struct {
    Writer          io.Writer
    HeadingInterval int
    ValueInterval   time.Duration
    // contains filtered or unexported fields
}

Printer writes column-format output to the specified writer as the optimization progresses. By default, it writes to os.Stdout.

func NewPrinter Uses

func NewPrinter() *Printer

func (*Printer) Init Uses

func (p *Printer) Init() error

func (*Printer) Record Uses

func (p *Printer) Record(loc *Location, op Operation, stats *Stats) error

type Problem Uses

type Problem struct {
    // Func evaluates the objective function at the given location. Func
    // must not modify x.
    Func func(x []float64) float64

    // Grad evaluates the gradient at x and stores the result in-place in grad.
    // Grad must not modify x.
    Grad func(grad []float64, x []float64)

    // Hess evaluates the Hessian at x and stores the result in-place in hess.
    // Hess must not modify x.
    Hess func(hess mat.MutableSymmetric, x []float64)

    // Status reports the status of the objective function being optimized and any
    // error. This can be used to terminate early, for example when the function is
    // not able to evaluate itself. The user can use one of the pre-provided Status
    // constants, or may call NewStatus to create a custom Status value.
    Status func() (Status, error)
}

Problem describes the optimization problem to be solved.
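
A minimal sketch of a Problem for the hypothetical objective f(x) = (x[0]-1)^2 + (x[1]+2)^2. Grad writes the gradient in place and neither closure modifies x:

p := optimize.Problem{
    Func: func(x []float64) float64 {
        dx, dy := x[0]-1, x[1]+2
        return dx*dx + dy*dy
    },
    Grad: func(grad, x []float64) {
        grad[0] = 2 * (x[0] - 1)
        grad[1] = 2 * (x[1] + 2)
    },
}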

type QuadraticStepSize Uses

type QuadraticStepSize struct {
    // Threshold determines that the initial step size should be estimated by
    // quadratic interpolation when the relative change in the objective
    // function is larger than Threshold. Otherwise the initial step size is
    // set to 2*previous step size.
    // If Threshold is zero, it will be set to 1e-12.
    Threshold float64
    // InitialStepFactor sets the step size for the first iteration to be InitialStepFactor / |g|_∞.
    // If InitialStepFactor is zero, it will be set to one.
    InitialStepFactor float64
    // MinStepSize is the lower bound on the estimated step size.
    // MinStepSize times GradientAbsTol should always be greater than machine epsilon.
    // If MinStepSize is zero, it will be set to 1e-3.
    MinStepSize float64
    // MaxStepSize is the upper bound on the estimated step size.
    // If MaxStepSize is zero, it will be set to 1.
    MaxStepSize float64
    // contains filtered or unexported fields
}

QuadraticStepSize estimates the initial line search step size as the minimum of a quadratic that interpolates f(x_{k-1}), f(x_k) and ∇f_{k-1}⋅p_{k-1}. This is useful for line search methods that do not produce well-scaled descent directions, such as gradient descent or conjugate gradient methods. The step size is bounded away from zero.

func (*QuadraticStepSize) Init Uses

func (q *QuadraticStepSize) Init(loc *Location, dir []float64) (stepSize float64)

func (*QuadraticStepSize) StepSize Uses

func (q *QuadraticStepSize) StepSize(loc *Location, dir []float64) (stepSize float64)

type Recorder Uses

type Recorder interface {
    Init() error
    Record(*Location, Operation, *Stats) error
}

A Recorder can record the progress of the optimization, for example to print the progress to StdOut or to a log file. A Recorder must not modify any data.
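
As an illustration, a sketch of a Recorder (not part of the package) that logs the function value at every MajorIteration; log refers to the standard library package:

type majorLogger struct{}

func (majorLogger) Init() error { return nil }

func (majorLogger) Record(loc *optimize.Location, op optimize.Operation, stats *optimize.Stats) error {
    if op == optimize.MajorIteration {
        log.Printf("major iteration %d: f = %v", stats.MajorIterations, loc.F)
    }
    return nil
}

It can be attached to a run via Settings.Recorder.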

type Result Uses

type Result struct {
    Location
    Stats
    Status Status
}

Result represents the answer of an optimization run. It contains the optimum location as well as the Status at convergence and Statistics taken during the run.

func Global Uses

func Global(p Problem, dim int, settings *Settings, method GlobalMethod) (*Result, error)

Global uses a global optimizer to search for the global minimum of a function. A maximization problem can be transformed into a minimization problem by multiplying the function by -1.

The first argument represents the problem to be minimized. Its fields are routines that evaluate the objective function, gradient, and other quantities related to the problem. The objective function, p.Func, must not be nil. The optimization method used may require other fields to be non-nil as specified by method.Needs. Global will panic if these are not met. The method can be determined automatically from the supplied problem, as described below.

If p.Status is not nil, it is called before every evaluation. If the returned Status is other than NotTerminated or if the error is not nil, the optimization run is terminated.

The third argument contains the settings for the minimization. The DefaultSettingsGlobal function can be called for a Settings struct with the default values initialized. If settings == nil, the default settings are used. Global optimization methods typically do not make assumptions about the number and location of local minima. Thus, the only convergence metric used is the function values found at major iterations of the optimization. Bounds on the length of the run, such as the maximum number of allowed function evaluations, are still obeyed.

The final argument is the optimization method to use. If method == nil, then an appropriate default is chosen based on the properties of the other arguments (dimension, gradient-free or gradient-based, etc.).

If method implements Statuser, method.Status is called before every call to method.Iterate. If the returned Status is not NotTerminated or the error is non-nil, the optimization run is terminated.

Global returns a Result struct and any error that occurred. See the documentation of Result for more information.

Be aware that the default behavior of Global is to find the minimum. For certain functions and optimization methods, this process can take many function evaluations. The Settings input struct can be used to limit this, for example by modifying the maximum runtime or maximum function evaluations.

Global cannot guarantee strict adherence to the bounds specified in Settings when performing concurrent evaluations and updates.

func Local Uses

func Local(p Problem, initX []float64, settings *Settings, method Method) (*Result, error)

Local finds a local minimum of a minimization problem using a sequential algorithm. A maximization problem can be transformed into a minimization problem by multiplying the function by -1.

The first argument represents the problem to be minimized. Its fields are routines that evaluate the objective function, gradient, and other quantities related to the problem. The objective function, p.Func, must not be nil. The optimization method used may require other fields to be non-nil as specified by method.Needs. Local will panic if these are not met. The method can be determined automatically from the supplied problem, as described below.

If p.Status is not nil, it is called before every evaluation. If the returned Status is not NotTerminated or the error is not nil, the optimization run is terminated.

The second argument is the initial location at which to start the minimization. The initial location must be supplied, and must have a length equal to the problem dimension.

The third argument contains the settings for the minimization. It is here that gradient tolerance, etc. are specified. The DefaultSettings function can be called for a Settings struct with the default values initialized. If settings == nil, the default settings are used. See the documentation for the Settings structure for more information. The optimization Method used may also contain settings, see documentation for the appropriate optimizer.

The final argument is the optimization method to use. If method == nil, then an appropriate default is chosen based on the properties of the other arguments (dimension, gradient-free or gradient-based, etc.). The optimization methods in this package are designed such that reasonable defaults occur if options are not specified explicitly. For example, the code

method := &optimize.BFGS{}

creates a pointer to a new BFGS struct. When Local is called, the settings in the method will be populated with default values. The methods are also designed such that they can be reused in future calls to Local.

If method implements Statuser, method.Status is called before every call to method.Iterate. If the returned Status is not NotTerminated or the error is non-nil, the optimization run is terminated.

Local returns a Result struct and any error that occurred. See the documentation of Result for more information.

Be aware that the default behavior of Local is to find the minimum. For certain functions and optimization methods, this process can take many function evaluations. If you would like to put limits on this, for example maximum runtime or maximum function evaluations, modify the Settings input struct.

Code:

p := optimize.Problem{
    Func: functions.ExtendedRosenbrock{}.Func,
    Grad: functions.ExtendedRosenbrock{}.Grad,
}

x := []float64{1.3, 0.7, 0.8, 1.9, 1.2}
settings := optimize.DefaultSettings()
settings.Recorder = nil
settings.GradientThreshold = 1e-12
settings.FunctionConverge = nil

result, err := optimize.Local(p, x, settings, &optimize.BFGS{})
if err != nil {
    log.Fatal(err)
}
if err = result.Status.Err(); err != nil {
    log.Fatal(err)
}
fmt.Printf("result.Status: %v\n", result.Status)
fmt.Printf("result.X: %v\n", result.X)
fmt.Printf("result.F: %v\n", result.F)
fmt.Printf("result.Stats.FuncEvaluations: %d\n", result.Stats.FuncEvaluations)

Output:

result.Status: GradientThreshold
result.X: [1 1 1 1 1]
result.F: 0
result.Stats.FuncEvaluations: 35

type Settings Uses

type Settings struct {
    UseInitialData  bool          // Use supplied information about the conditions at the initial x.
    InitialValue    float64       // Function value at the initial x.
    InitialGradient []float64     // Gradient at the initial x.
    InitialHessian  *mat.SymDense // Hessian at the initial x.

    // FunctionThreshold is the threshold for acceptably small values of the
    // objective function. FunctionThreshold status is returned if
    // the objective function is less than this value.
    // The default value is -inf.
    FunctionThreshold float64

    // GradientThreshold determines the accuracy to which the minimum is found.
    // GradientThreshold status is returned if the infinity norm of
    // the gradient is less than this value.
    // Has no effect if gradient information is not used.
    // The default value is 1e-6.
    GradientThreshold float64

    // FunctionConverge tests that the function value decreases by a
    // significant amount over the specified number of iterations.
    //
    // If f < f_best and
    //  f_best - f > FunctionConverge.Relative * maxabs(f, f_best) + FunctionConverge.Absolute
    // then a significant decrease has occurred, and f_best is updated.
    //
    // If there is no significant decrease for FunctionConverge.Iterations
    // major iterations, FunctionConvergence status is returned.
    //
    // If this is nil or if FunctionConverge.Iterations == 0, it has no effect.
    FunctionConverge *FunctionConverge

    // MajorIterations is the maximum number of iterations allowed.
    // IterationLimit status is returned if the number of major iterations
    // equals or exceeds this value.
    // If it equals zero, this setting has no effect.
    // The default value is 0.
    MajorIterations int

    // Runtime is the maximum runtime allowed. RuntimeLimit status is returned
    // if the duration of the run is longer than this value. Runtime is only
    // checked at iterations of the Method.
    // If it equals zero, this setting has no effect.
    // The default value is 0.
    Runtime time.Duration

    // FuncEvaluations is the maximum allowed number of function evaluations.
    // FunctionEvaluationLimit status is returned if the total number of calls
    // to Func equals or exceeds this number.
    // If it equals zero, this setting has no effect.
    // The default value is 0.
    FuncEvaluations int

    // GradEvaluations is the maximum allowed number of gradient evaluations.
    // GradientEvaluationLimit status is returned if the total number of calls
    // to Grad equals or exceeds this number.
    // If it equals zero, this setting has no effect.
    // The default value is 0.
    GradEvaluations int

    // HessEvaluations is the maximum allowed number of Hessian evaluations.
    // HessianEvaluationLimit status is returned if the total number of calls
    // to Hess equals or exceeds this number.
    // If it equals zero, this setting has no effect.
    // The default value is 0.
    HessEvaluations int

    Recorder Recorder

    // Concurrent represents how many concurrent evaluations are possible.
    Concurrent int
}

Settings represents settings of the optimization run. It contains initial settings, convergence information, and Recorder information. In general, users should use DefaultSettings rather than constructing a Settings literal.

If UseInitialData is true, InitialValue, InitialGradient and InitialHessian specify function information at the initial location.

If Recorder is nil, no information will be recorded.

func DefaultSettings Uses

func DefaultSettings() *Settings

DefaultSettings returns a new Settings struct containing the default settings.

func DefaultSettingsGlobal Uses

func DefaultSettingsGlobal() *Settings

DefaultSettingsGlobal returns the default settings for Global optimization.

type Stats Uses

type Stats struct {
    MajorIterations int           // Total number of major iterations
    FuncEvaluations int           // Number of evaluations of Func
    GradEvaluations int           // Number of evaluations of Grad
    HessEvaluations int           // Number of evaluations of Hess
    Runtime         time.Duration // Total runtime of the optimization
}

Stats contains the statistics of the run.

type Status Uses

type Status int

Status represents the status of the optimization. Programs should not rely on the underlying numeric value of the Status being constant.

const (
    NotTerminated Status = iota
    Success
    FunctionThreshold
    FunctionConvergence
    GradientThreshold
    StepConvergence
    FunctionNegativeInfinity
    MethodConverge
    Failure
    IterationLimit
    RuntimeLimit
    FunctionEvaluationLimit
    GradientEvaluationLimit
    HessianEvaluationLimit
)

func NewStatus Uses

func NewStatus(name string, early bool, err error) Status

NewStatus returns a unique Status variable to represent a custom status. NewStatus is intended to be called only during package initialization, and calls to NewStatus are not thread safe.

NewStatus takes in three arguments, the string that should be output from Status.String, a boolean if the status indicates early optimization conclusion, and the error to return from Err (if any).

func (Status) Early Uses

func (s Status) Early() bool

Early returns true if the status indicates that the optimization ended before a minimum was found. For example, if the maximum number of iterations was reached, a minimum was not found, whereas if the gradient threshold was reached, a minimum was found.

func (Status) Err Uses

func (s Status) Err() error

Err returns the error associated with an early ending to the minimization. If Early returns false, Err will return nil.

func (Status) String Uses

func (s Status) String() string

type Statuser Uses

type Statuser interface {
    Status() (Status, error)
}

Statuser can report the status and any error. It is intended for methods as an additional error reporting mechanism apart from the errors returned from Init and Iterate.

type StepSizer Uses

type StepSizer interface {
    Init(loc *Location, dir []float64) float64
    StepSize(loc *Location, dir []float64) float64
}

A StepSizer sets the next step size of the optimization given the last Location. The returned step size must be positive.

Directories

Path       Synopsis
functions  Package functions provides objective functions for testing optimization algorithms.

Package optimize imports 12 packages and is imported by 2 packages. Updated 2017-12-15.