
package skipper

import ""

Package skipper provides an HTTP routing library with flexible configuration as well as a runtime update of the routing rules.

Skipper works as an HTTP reverse proxy that is responsible for mapping incoming requests to multiple HTTP backend services, based on routes that are selected by the request attributes. At the same time, both the requests and the responses can be augmented by a filter chain that is specifically defined for each route. Optionally, it can provide a circuit breaker mechanism individually for each backend host.

Skipper can load and update the route definitions from multiple data sources without being restarted.

It provides a default executable command with a few built-in filters, however, its primary use case is to be extended with custom filters, predicates or data sources. For further information read 'Extending Skipper'.

Skipper took its core design and inspiration from Vulcand.


Skipper is 'go get' compatible. If needed, create a 'go workspace' first:

mkdir ws
cd ws
export GOPATH=$(pwd)
export PATH=$PATH:$GOPATH/bin

Get the Skipper packages:

go get

Create a file with a route:

echo 'hello: Path("/hello") -> ""' > example.eskip

Optionally, verify the syntax of the file:

eskip check example.eskip

Start Skipper and make an HTTP request:

skipper -routes-file example.eskip &
curl localhost:9090/hello

Routing Mechanism

The core of Skipper's request processing is implemented by a reverse proxy in the 'proxy' package. The proxy receives the incoming request and queries the routing engine for the most specific matching route. When a route matches, the request is passed through all the filters defined by it. The filters can modify the request or execute any kind of program logic. Once the request has been processed by all the filters, it is forwarded to the backend endpoint of the route. The response from the backend goes once again through all the filters in reverse order. Finally, it is mapped as the response to the original incoming request.

Besides the default proxying mechanism, it is possible to define routes without a real network backend endpoint. One of these cases is called a 'shunt' backend, in which case one of the filters needs to handle the request providing its own response (e.g. the 'static' filter). Actually, filters themselves can instruct the request flow to shunt by calling the Serve(*http.Response) method of the filter context.

Another case of a route without a network backend is the 'loopback'. A loopback route can be used to match a request, modified by filters, against the lookup tree with different conditions and then execute a different route. One example scenario is using a single route as an entry point to calculate an A/B testing decision, and then matching the updated request metadata against the actual destination route. This way the calculation is executed only for those requests that don't contain information about a previously calculated decision.
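
A sketch of such a setup in eskip. The Traffic predicate and the setPath filter used here are assumed from Skipper's built-in set, and the route names and backend addresses are made up for illustration:

```
// 10% of requests to / are rewritten and re-matched via loopback
decision: Path("/") && Traffic(.1) -> setPath("/variant-b") -> <loopback>;

// the looped-back request then matches this route
variantB: Path("/variant-b") -> "https://b.example.org";

// everybody else
baseline: Path("/") -> "https://a.example.org";
```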

For further details, see the 'proxy' and 'filters' package documentation.

Matching Requests

Finding a request's route happens by matching the request attributes to the conditions in the route's definitions. Such definitions may have the following conditions:

- method

- path (optionally with wildcards)

- path regular expressions

- host regular expressions

- headers

- header regular expressions

It is also possible to create custom predicates with any other matching criteria.

The relation between the conditions in a route definition is 'and', meaning, that a request must fulfill each condition to match a route.
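
For example, a route that matches only when all three of its conditions hold could look like this in eskip (the host and backend address are illustrative):

```
api: Path("/api/:id") && Method("POST") && Header("Accept", "application/json")
  -> "https://api.example.org";
```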

For further details, see the 'routing' package documentation.

Filters - Augmenting Requests

Filters are applied in order of definition to the request and in reverse order to the response. They are used to modify request and response attributes, such as headers, or execute background tasks, like logging. Some filters may handle the requests without proxying them to service backends. Filters, depending on their implementation, may accept/require parameters, that are set specifically to the route.
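
The ordering can be illustrated with a small, self-contained Go sketch; the filter type here is a simplified stand-in for the interface in the 'filters' package, not the real API:

```go
package main

import "fmt"

// filter is a simplified stand-in for a skipper filter: it can act on
// the request and on the response.
type filter interface {
	request(req *string)
	response(rsp *string)
}

// tagFilter appends its tag, making the execution order visible.
type tagFilter struct{ tag string }

func (f tagFilter) request(req *string)  { *req += " " + f.tag }
func (f tagFilter) response(rsp *string) { *rsp += " " + f.tag }

// apply runs the chain in definition order on the request and in
// reverse order on the response, mirroring what the proxy does.
func apply(chain []filter, req, rsp *string) {
	for _, f := range chain {
		f.request(req)
	}
	for i := len(chain) - 1; i >= 0; i-- {
		chain[i].response(rsp)
	}
}

func main() {
	chain := []filter{tagFilter{"a"}, tagFilter{"b"}, tagFilter{"c"}}
	req, rsp := "request:", "response:"
	apply(chain, &req, &rsp)
	fmt.Println(req) // request: a b c
	fmt.Println(rsp) // response: c b a
}
```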

For further details, see the 'filters' package documentation.

Service Backends

Each route has one of the following backends: HTTP endpoint, shunt, loopback or dynamic.

Backend endpoints can be any HTTP service. They are specified by their network address, including the protocol scheme, the domain name or the IP address, and optionally the port number: e.g. "". (The path and query are sent from the original request, or set by filters.)

A shunt route means that Skipper handles the request alone and doesn't make requests to a backend service. In this case, it is the responsibility of one of the filters to generate the response.

A loopback route executes the routing mechanism on current state of the request from the start, including the route lookup. This way it serves as a form of an internal redirect.

A dynamic route means that the final target will be defined in a filter. One of the filters in the chain must set the target backend URL explicitly.
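
One route of each backend kind, sketched in eskip. The status, setPath and setDynamicBackendUrl filters are assumed from Skipper's built-in filter set (verify them in the filters documentation), and the addresses are made up:

```
network: Path("/images") -> "https://images.example.org";
static: Path("/healthz") -> status(200) -> <shunt>;
rewrite: Path("/old-path") -> setPath("/new-path") -> <loopback>;
dyn: Path("/special") -> setDynamicBackendUrl("https://fallback.example.org") -> <dynamic>;
```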

Route Definitions

Route definitions consist of the following:

- request matching conditions (predicates)

- filter chain (optional)

- backend

The eskip package implements the in-memory and text representations of route definitions, including a parser.

(Note to contributors: in order to stay compatible with 'go get', the generated part of the parser is stored in the repository. When changing the grammar, 'go generate' needs to be executed explicitly to update the parser.)

For further details, see the 'eskip' package documentation.

Authentication and Authorization

Skipper has filter implementations of basic auth and OAuth2, and it can be integrated with tokeninfo based OAuth2 providers.

Data Sources

Skipper's route definitions are loaded from one or more data sources. It can receive incremental updates from those data sources at runtime. It provides the following data clients:

- Kubernetes: Skipper can be used as part of a Kubernetes Ingress Controller implementation. In this scenario, Skipper uses the Kubernetes API's Ingress extensions as a source for routing.

- Innkeeper: the Innkeeper service implements a storage for large sets of Skipper routes, with an HTTP+JSON API, OAuth2 authentication and role management. See the 'innkeeper' package documentation.

- etcd: Skipper can load routes and receive updates from etcd clusters. See the 'etcd' package.

- static file: package eskipfile implements a simple data client, which can load route definitions from a static file in eskip format. Currently, it loads the routes on startup. It doesn't support runtime updates.

Skipper can use additional data sources, provided by extensions. Sources must implement the DataClient interface in the routing package.
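
The pattern can be sketched with local stand-in types. This is only an approximation: the real DataClient in the routing package works with eskip route values, and its exact method signatures should be checked there.

```go
package main

import "fmt"

// route is a minimal stand-in for an eskip route definition.
type route struct {
	id, backend string
}

// dataClient approximates the shape of routing.DataClient: an initial
// full load, plus incremental updates reporting changed routes and
// deleted route ids.
type dataClient interface {
	loadAll() ([]*route, error)
	loadUpdate() (upserted []*route, deletedIDs []string, err error)
}

// staticClient serves a fixed route table and never reports updates,
// similar in spirit to the eskipfile data client.
type staticClient struct{ routes []*route }

func (c *staticClient) loadAll() ([]*route, error) { return c.routes, nil }

func (c *staticClient) loadUpdate() ([]*route, []string, error) {
	return nil, nil, nil // no runtime updates
}

func main() {
	var dc dataClient = &staticClient{routes: []*route{
		{id: "hello", backend: "http://localhost:8080"},
	}}

	all, err := dc.loadAll()
	if err != nil {
		panic(err)
	}
	fmt.Println(len(all), all[0].id) // 1 hello
}
```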

Circuit Breaker

Skipper provides circuit breakers, configured either globally, based on backend hosts or based on individual routes. It supports two types of circuit breaker behavior: open on N consecutive failures, or open on N failures out of M requests. For details, see the 'circuit' package documentation.
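
Assuming the consecutiveBreaker and rateBreaker filters from Skipper's built-in set (names and parameters should be verified against the filters documentation), the two behaviors could be configured per route like this:

```
// open after 15 consecutive failures
payments: Path("/payments") -> consecutiveBreaker(15) -> "https://payments.example.org";

// open after 30 failures out of the last 300 requests
search: Path("/search") -> rateBreaker(30, 300) -> "https://search.example.org";
```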

Running Skipper

Skipper can be started with the default executable command 'skipper', or as a library built into an application. The easiest way to start Skipper as a library is to execute the 'Run' function of the current, root package.

Each option accepted by the 'Run' function is wired in the default executable as well, as a command line flag. E.g. EtcdUrls becomes -etcd-urls as a comma separated list. For command line help, enter:

skipper -help

An additional utility, eskip, can be used to verify, print, update and delete routes from/to files or etcd (Innkeeper on the roadmap). See the cmd/eskip command package, and/or enter in the command line:

eskip -help

Extending Skipper

Skipper doesn't use dynamically loaded plugins, however, it can be used as a library, and it can be extended with custom predicates, filters and/or custom data sources.

Custom Predicates

To create a custom predicate, one needs to implement the PredicateSpec interface in the routing package. Instances of the PredicateSpec are used internally by the routing package to create the actual Predicate objects as referenced in eskip routes, with concrete arguments.

Example, randompredicate.go:

package main

import (
    "math/rand"
    "net/http"

    "github.com/zalando/skipper/routing"
)

type randomSpec struct{}

type randomPredicate struct {
    chance float64
}

func (s *randomSpec) Name() string { return "Random" }

func (s *randomSpec) Create(args []interface{}) (routing.Predicate, error) {
    p := &randomPredicate{.5}
    if len(args) > 0 {
        if c, ok := args[0].(float64); ok {
            p.chance = c
        }
    }

    return p, nil
}

func (p *randomPredicate) Match(_ *http.Request) bool {
    return rand.Float64() < p.chance
}
In the above example, a custom predicate is created that can be referenced in eskip definitions with the name 'Random':

Random(.33) -> "";
* -> ""

Custom Filters

To create a custom filter, we need to implement the Spec interface of the filters package. 'Spec' is the specification of a filter, and it is used to create concrete filter instances while the raw route definitions are processed.

Example, hellofilter.go:

package main

import (
    "fmt"

    "github.com/zalando/skipper/filters"
)

type helloSpec struct{}

type helloFilter struct {
    who string
}

func (s *helloSpec) Name() string { return "hello" }

func (s *helloSpec) CreateFilter(config []interface{}) (filters.Filter, error) {
    if len(config) == 0 {
        return nil, filters.ErrInvalidFilterParameters
    }

    if who, ok := config[0].(string); ok {
        return &helloFilter{who}, nil
    }

    return nil, filters.ErrInvalidFilterParameters
}

func (f *helloFilter) Request(ctx filters.FilterContext) {}

func (f *helloFilter) Response(ctx filters.FilterContext) {
    ctx.Response().Header.Set("X-Hello", fmt.Sprintf("Hello, %s!", f.who))
}
The above example creates a filter specification, and in the routes where it is included, the filter instances will set the 'X-Hello' header on each response. The name of the filter is 'hello', and in a route definition it is referenced as:

* -> hello("world") -> ""

Custom Build

The easiest way to create a custom Skipper variant is to implement the required filters (as in the example above) by importing the Skipper package, and starting it with the 'Run' command.

Example, hello.go:

package main

import (
    "log"

    "github.com/zalando/skipper"
    "github.com/zalando/skipper/filters"
    "github.com/zalando/skipper/routing"
)

func main() {
    log.Fatal(skipper.Run(skipper.Options{
        Address:          ":9090",
        RoutesFile:       "routes.eskip",
        CustomPredicates: []routing.PredicateSpec{&randomSpec{}},
        CustomFilters:    []filters.Spec{&helloSpec{}}}))
}

A file containing the routes, routes.eskip:

    Random(.05) -> hello("fish?") -> "";
    * -> hello("world") -> ""

Start the custom router:

go run hello.go

Proxy Package Used Individually

The 'Run' function in the root Skipper package starts its own listener but it doesn't provide the best composability. The proxy package, however, provides a standard http.Handler, so it is possible to use it in a more complex solution as a building block for routing.

Logging and Metrics

Skipper provides detailed logging of failures, and access logs in Apache log format. Skipper also collects detailed performance metrics, and exposes them on a separate listener endpoint for pulling snapshots.

For details, see the 'logging' and 'metrics' packages documentation.

Performance Considerations

The router's performance depends on the environment and on the used filters. Under ideal circumstances, and without filters, the biggest time factor is the route lookup. Skipper is able to scale to thousands of routes with logarithmic performance degradation. However, this comes at the cost of increased memory consumption, due to storing the whole lookup tree in a single structure.

Benchmarks for the tree lookup can be run by:

go test -bench=Tree

In case more aggressive scaling is needed, it is possible to set up Skipper in a cascade model, with multiple Skipper instances for specific route segments.


Package Files

doc.go plugins.go skipper.go


const DefaultPluginDir = "./plugins"

func Run

func Run(o Options) error

Run skipper.

type Options

type Options struct {
    // WaitForHealthcheckInterval sets the time that skipper waits
    // for the loadbalancer in front to become unhealthy. Defaults
    // to 0.
    WaitForHealthcheckInterval time.Duration

    // StatusChecks is an experimental feature. It defines a
    // comma separated list of HTTP URLs to send GET requests to,
    // which must return 200 before skipper becomes ready.
    StatusChecks []string

    // WhitelistedHealthCheckCIDR appends the whitelisted IP range to the internal IPs range for healthcheck purposes
    WhitelistedHealthCheckCIDR []string

    // Network address that skipper should listen on.
    Address string

    // EnableTCPQueue enables controlling the
    // concurrently processed requests at the TCP listener.
    EnableTCPQueue bool

    // ExpectedBytesPerRequest is used by the TCP LIFO listener.
    // It defines the expected average memory required to process an incoming
    // request. It is used only when MaxTCPListenerConcurrency is not defined.
    // It is used together with the memory limit defined in
    // /sys/fs/cgroup/memory/memory.limit_in_bytes.
    ExpectedBytesPerRequest int

    // MaxTCPListenerConcurrency is used by the TCP LIFO listener.
    // It defines the max number of concurrently accepted connections, excluding
    // the pending ones in the queue.
    // When undefined and EnableTCPQueue is true, the concurrency is
    // derived from the memory limit and ExpectedBytesPerRequest.
    MaxTCPListenerConcurrency int

    // MaxTCPListenerQueue is used by the TCP LIFO listener.
    // It defines the maximum number of pending connections waiting in the queue.
    MaxTCPListenerQueue int

    // List of custom filter specifications.
    CustomFilters []filters.Spec

    // Urls of nodes in an etcd cluster, storing route definitions.
    EtcdUrls []string

    // Path prefix for skipper related data in the etcd storage.
    EtcdPrefix string

    // Timeout used for a single request when querying for updates
    // in etcd. This is independent of, and an addition to,
    // SourcePollTimeout. When not set, the internally defined 1s
    // is used.
    EtcdWaitTimeout time.Duration

    // Skip TLS certificate check for etcd connections.
    EtcdInsecure bool

    // If set this value is used as Bearer token for etcd OAuth authorization.
    EtcdOAuthToken string

    // If set this value is used as username for etcd basic authorization.
    EtcdUsername string

    // If set this value is used as password for etcd basic authorization.
    EtcdPassword string

    // If set, enables skipper to generate routes based on ingress resources in the kubernetes cluster
    Kubernetes bool

    // If set, makes skipper authenticate with the kubernetes API server using the service account
    // assigned to the skipper POD.
    // If omitted, skipper will rely on kubectl proxy to authenticate with the API server
    KubernetesInCluster bool

    // Kubernetes API base URL. Only makes sense if KubernetesInCluster is set to false. If omitted and
    // skipper is not running in-cluster, the default API URL will be used.
    KubernetesURL string

    // KubernetesHealthcheck, when Kubernetes ingress is set, indicates
    // whether an automatic healthcheck route should be generated. The
    // generated route will report healthiness when the Kubernetes API
    // calls are successful. The healthcheck endpoint is accessible from
    // internal IPs, with the path /kube-system/healthz.
    KubernetesHealthcheck bool

    // KubernetesHTTPSRedirect, when Kubernetes ingress is set, indicates
    // whether an automatic redirect route should be generated to redirect
    // HTTP requests to their HTTPS equivalent. The generated route will
    // match requests with the X-Forwarded-Proto and X-Forwarded-Port
    // headers, expected to be set by the load-balancer.
    KubernetesHTTPSRedirect bool

    // KubernetesHTTPSRedirectCode overrides the default redirect code (308)
    // when used together with -kubernetes-https-redirect.
    KubernetesHTTPSRedirectCode int

    // KubernetesIngressClass is a regular expression, that will make
    // skipper load only the ingress resources that have a matching
    // annotation. For backwards compatibility,
    // the ingresses without an annotation, or an empty annotation, will
    // be loaded, too.
    KubernetesIngressClass string

    // KubernetesRouteGroupClass is a regular expression, that will make skipper
    // load only the RouteGroup resources that have a matching
    // annotation. Any RouteGroups without the
    // annotation, or with an empty annotation, will be loaded too.
    KubernetesRouteGroupClass string

    // PathMode controls the default interpretation of ingress paths in cases
    // when the ingress doesn't specify it with an annotation.
    KubernetesPathMode kubernetes.PathMode

    // KubernetesNamespace is used to switch between monitoring ingresses in the cluster-scope or limit
    // the ingresses to only those in the specified namespace. Defaults to "" which means monitor ingresses
    // in the cluster-scope.
    KubernetesNamespace string

    // KubernetesEnableEastWest enables cluster internal service to service communication, aka east-west traffic
    KubernetesEnableEastWest bool

    // KubernetesEastWestDomain sets the cluster internal domain used to create additional routes in skipper, defaults to skipper.cluster.local
    KubernetesEastWestDomain string

    // *DEPRECATED* API endpoint of the Innkeeper service, storing route definitions.
    InnkeeperUrl string

    // *DEPRECATED* Fixed token for innkeeper authentication. (Used mainly in
    // development environments.)
    InnkeeperAuthToken string

    // *DEPRECATED* Filters to be prepended to each route loaded from Innkeeper.
    InnkeeperPreRouteFilters string

    // *DEPRECATED* Filters to be appended to each route loaded from Innkeeper.
    InnkeeperPostRouteFilters string

    // *DEPRECATED* Skip TLS certificate check for Innkeeper connections.
    InnkeeperInsecure bool

    // *DEPRECATED* OAuth2 URL for Innkeeper authentication.
    OAuthUrl string

    // *DEPRECATED* Directory where oauth credentials are stored, with file names:
    // client.json and user.json.
    OAuthCredentialsDir string

    // *DEPRECATED* The whitespace separated list of OAuth2 scopes.
    OAuthScope string

    // File containing static route definitions.
    RoutesFile string

    // File containing route definitions with file watch enabled. (For the skipper
    // command this option is used when starting it with the -routes-file flag.)
    WatchRoutesFile string

    // InlineRoutes can define routes as eskip text.
    InlineRoutes string

    // Polling timeout of the routing data sources.
    SourcePollTimeout time.Duration

    // DefaultFilters will be applied to all routes automatically.
    DefaultFilters *eskip.DefaultFilters

    // Deprecated. See ProxyFlags. When used together with ProxyFlags,
    // the values will be combined with |.
    ProxyOptions proxy.Options

    // Flags controlling the proxy behavior.
    ProxyFlags proxy.Flags

    // Tells the proxy the maximum number of idle connections it can
    // keep alive.
    IdleConnectionsPerHost int

    // Defines how often the idle connections maintained by the proxy
    // are closed.
    CloseIdleConnsPeriod time.Duration

    // Defines ReadTimeoutServer for server http connections.
    ReadTimeoutServer time.Duration

    // Defines ReadHeaderTimeout for server http connections.
    ReadHeaderTimeoutServer time.Duration

    // Defines WriteTimeout for server http connections.
    WriteTimeoutServer time.Duration

    // Defines IdleTimeout for server http connections.
    IdleTimeoutServer time.Duration

    // Defines MaxHeaderBytes for server http connections.
    MaxHeaderBytes int

    // Enable connection state metrics for server http connections.
    EnableConnMetricsServer bool

    // TimeoutBackend sets the TCP client connection timeout for
    // proxy http connections to the backend.
    TimeoutBackend time.Duration

    // ResponseHeaderTimeout sets the HTTP response timeout for
    // proxy http connections to the backend.
    ResponseHeaderTimeoutBackend time.Duration

    // ExpectContinueTimeoutBackend sets the HTTP timeout to expect a
    // response for status Code 100 for proxy http connections to
    // the backend.
    ExpectContinueTimeoutBackend time.Duration

    // KeepAliveBackend sets the TCP keepalive for proxy http
    // connections to the backend.
    KeepAliveBackend time.Duration

    // DualStackBackend sets if the proxy TCP connections to the
    // backend should be dual stack.
    DualStackBackend bool

    // TLSHandshakeTimeoutBackend sets the TLS handshake timeout
    // for proxy connections to the backend.
    TLSHandshakeTimeoutBackend time.Duration

    // MaxIdleConnsBackend sets MaxIdleConns, which limits the
    // number of idle connections to all backends, 0 means no
    // limit.
    MaxIdleConnsBackend int

    // DisableHTTPKeepalives sets DisableKeepAlives, which forces
    // a backend to always create a new connection.
    DisableHTTPKeepalives bool

    // Flag indicating to ignore trailing slashes in paths during route
    // lookup.
    IgnoreTrailingSlash bool

    // Priority routes that are matched against the requests before
    // the standard routes from the data clients.
    PriorityRoutes []proxy.PriorityRoute

    // Specifications of custom, user defined predicates.
    CustomPredicates []routing.PredicateSpec

    // Custom data clients to be used together with the default etcd and Innkeeper.
    CustomDataClients []routing.DataClient

    // CustomHttpHandlerWrap provides ability to wrap http.Handler created by skipper.
    // http.Handler is used for accepting incoming http requests.
    // It allows adding additional logic (for example tracing) by providing a wrapper function
    // which accepts the original skipper handler as an argument and returns a wrapped handler.
    CustomHttpHandlerWrap func(http.Handler) http.Handler

    // CustomHttpRoundTripperWrap provides ability to wrap http.RoundTripper created by skipper.
    // http.RoundTripper is used for making outgoing requests to the backends.
    // It allows adding additional logic (for example tracing) by providing a wrapper function
    // which accepts the original skipper http.RoundTripper as an argument and returns a wrapped round tripper.
    CustomHttpRoundTripperWrap func(http.RoundTripper) http.RoundTripper

    // WaitFirstRouteLoad prevents starting the listener before the first batch
    // of routes has been applied.
    WaitFirstRouteLoad bool

    // SuppressRouteUpdateLogs indicates to log only summaries of the routing updates
    // instead of full details of the updated/deleted routes.
    SuppressRouteUpdateLogs bool

    // Dev mode. Currently this flag disables prioritization of the
    // consumer side over the feeding side during the routing updates to
    // populate the updated routes faster.
    DevMode bool

    // Network address for the support endpoints
    SupportListener string

    // Deprecated: Network address for the /metrics endpoint
    MetricsListener string

    // Skipper provides a set of metrics with different keys which are exposed via HTTP in JSON
    // You can customize those key names with your own prefix
    MetricsPrefix string

    // EnableProfile exposes profiling information on /profile of the
    // metrics listener.
    EnableProfile bool

    // Flag that enables reporting of the Go garbage collector statistics exported in debug.GCStats
    EnableDebugGcMetrics bool

    // Flag that enables reporting of the Go runtime statistics exported in runtime and specifically runtime.MemStats
    EnableRuntimeMetrics bool

    // If set, detailed response time metrics will be collected
    // for each route, additionally grouped by status and method.
    EnableServeRouteMetrics bool

    // If set, detailed response time metrics will be collected
    // for each host, additionally grouped by status and method.
    EnableServeHostMetrics bool

    // If set, detailed response time metrics will be collected
    // for each backend host
    EnableBackendHostMetrics bool

    // EnableAllFiltersMetrics enables collecting combined filter
    // metrics per each route. Without the DisableMetricsCompatibilityDefaults,
    // it is enabled by default.
    EnableAllFiltersMetrics bool

    // EnableCombinedResponseMetrics enables collecting response time
    // metrics combined for every route.
    EnableCombinedResponseMetrics bool

    // EnableRouteResponseMetrics enables collecting response time
    // metrics per each route. Without the DisableMetricsCompatibilityDefaults,
    // it is enabled by default.
    EnableRouteResponseMetrics bool

    // EnableRouteBackendErrorsCounters enables counters for backend
    // errors per each route. Without the DisableMetricsCompatibilityDefaults,
    // it is enabled by default.
    EnableRouteBackendErrorsCounters bool

    // EnableRouteStreamingErrorsCounters enables counters for streaming
    // errors per each route. Without the DisableMetricsCompatibilityDefaults,
    // it is enabled by default.
    EnableRouteStreamingErrorsCounters bool

    // EnableRouteBackendMetrics enables backend response time metrics
    // per each route. Without the DisableMetricsCompatibilityDefaults, it is
    // enabled by default.
    EnableRouteBackendMetrics bool

    // EnableRouteCreationMetrics enables the OriginMarker to track route creation times. Disabled by default
    EnableRouteCreationMetrics bool

    // When set, makes the histograms use an exponentially decaying sample
    // instead of the default uniform one.
    MetricsUseExpDecaySample bool

    // Use custom buckets for prometheus histograms.
    HistogramMetricBuckets []float64

    // The following options, for backwards compatibility, are true
    // by default: EnableAllFiltersMetrics, EnableRouteResponseMetrics,
    // EnableRouteBackendErrorsCounters, EnableRouteStreamingErrorsCounters,
    // EnableRouteBackendMetrics. With this compatibility flag, the default
    // for these options can be set to false.
    DisableMetricsCompatibilityDefaults bool

    // Implementation of a Metrics handler. If provided this is going to be used
    // instead of creating a new one based on the Kind of metrics wanted. This
    // is useful in case you want to report metrics to a custom aggregator.
    MetricsBackend metrics.Metrics

    // Output file for the application log. Default value: /dev/stderr.
    // When /dev/stderr or /dev/stdout is passed in, it will be resolved
    // to os.Stderr or os.Stdout.
    // Warning: passing an arbitrary file will try to open it for append
    // on start and use it, or fail on start, but the current
    // implementation doesn't support any more proper handling
    // of temporary failures or log-rolling.
    ApplicationLogOutput string

    // Application log prefix. Default value: "[APP]".
    ApplicationLogPrefix string

    // Enables logs in JSON format
    ApplicationLogJSONEnabled bool

    // ApplicationLogJsonFormatter, when set and JSON logging is enabled, is passed along to the underlying
    // Logrus logger for application logs. To enable structured logging, use ApplicationLogJSONEnabled.
    ApplicationLogJsonFormatter *log.JSONFormatter

    // Output file for the access log. Default value: /dev/stderr.
    // When /dev/stderr or /dev/stdout is passed in, it will be resolved
    // to os.Stderr or os.Stdout.
    // Warning: passing an arbitrary file will try to open it for append
    // on start and use it, or fail on start, but the current
    // implementation doesn't support any more proper handling
    // of temporary failures or log-rolling.
    AccessLogOutput string

    // Disables the access log.
    AccessLogDisabled bool

    // Enables logs in JSON format
    AccessLogJSONEnabled bool

    // AccessLogStripQuery, when set, causes the query strings to be stripped
    // from the request URI in the access logs.
    AccessLogStripQuery bool

    // AccessLogJsonFormatter, when set and JSON logging is enabled, is passed along to the underlying
    // Logrus logger for access logs. To enable structured logging, use AccessLogJSONEnabled.
    AccessLogJsonFormatter *log.JSONFormatter

    DebugListener string

    // Path of certificate(s) when using TLS, multiple may be given comma separated
    CertPathTLS string
    // Path of key(s) when using TLS, multiple may be given comma separated. For
    // multiple keys, the order must match the one given in CertPathTLS
    KeyPathTLS string

    // TLS Settings for Proxy Server
    ProxyTLS *tls.Config

    // Client TLS to connect to Backends
    ClientTLS *tls.Config

    // Flush interval for upgraded Proxy connections
    BackendFlushInterval time.Duration

    // Experimental feature to handle protocol Upgrades for Websockets, SPDY, etc.
    ExperimentalUpgrade bool

    // ExperimentalUpgradeAudit enables audit log of both the request line
    // and the response messages during web socket upgrades.
    ExperimentalUpgradeAudit bool

    // MaxLoopbacks defines the maximum number of loops that the proxy can execute when the routing table
    // contains loop backends (<loopback>).
    MaxLoopbacks int

    // EnableBreakers enables the usage of the breakers in the route definitions without initializing any
    // by default. It is a shortcut for setting the BreakerSettings to:
    // 	[]circuit.BreakerSettings{{Type: BreakerDisabled}}
    EnableBreakers bool

    // BreakerSettings contain global and host specific settings for the circuit breakers.
    BreakerSettings []circuit.BreakerSettings

    // EnableRatelimiters enables the usage of the ratelimiter in the route definitions without initializing any
    // by default. It is a shortcut for setting the RatelimitSettings to:
    // 	[]ratelimit.Settings{{Type: DisableRatelimit}}
    EnableRatelimiters bool

    // RatelimitSettings contain global and host specific settings for the ratelimiters.
    RatelimitSettings []ratelimit.Settings

    // EnableRouteLIFOMetrics enables metrics for the individual route LIFO queues, if any.
    EnableRouteLIFOMetrics bool

    // OpenTracing enables opentracing
    OpenTracing []string

    // OpenTracingInitialSpan can override the default initial, pre-routing, span name.
    // Default: "ingress".
    OpenTracingInitialSpan string

    // OpenTracingExcludedProxyTags can disable a tag so that it is not recorded. By default every tag is included.
    OpenTracingExcludedProxyTags []string

    // OpenTracingLogFilterLifecycleEvents flag is used to enable/disable the logs for events marking request and
    // response filters' start & end times.
    OpenTracingLogFilterLifecycleEvents bool

    // OpenTracingLogStreamEvents flag is used to enable/disable the logs that marks the
    // times when response headers & payload are streamed to the client
    OpenTracingLogStreamEvents bool

    // OpenTracingBackendNameTag enables an additional tracing tag containing a backend name
    // for a route when it's available (e.g. for RouteGroups)
    OpenTracingBackendNameTag bool

    // PluginDir defines the directory to load plugins from. DEPRECATED, use PluginDirs
    PluginDir string
    // PluginDirs defines the directories to load plugins from
    PluginDirs []string

    // FilterPlugins loads additional filters from modules. The first value in each []string
    // needs to be the plugin name (as on disk, without path, without ".so" suffix). The
    // following values are passed as arguments to the plugin while loading.
    FilterPlugins [][]string

    // PredicatePlugins loads additional predicates from modules. See FilterPlugins above
    // for what each []string should contain.
    PredicatePlugins [][]string

    // DataClientPlugins loads additional data clients from modules. See FilterPlugins above
    // for what each []string should contain.
    DataClientPlugins [][]string

    // Plugins combine multiple types of the above plugin types in one plugin (where
    // necessary because of shared data between e.g. a filter and a data client).
    Plugins [][]string

    // DefaultHTTPStatus is the HTTP status used when no routes are found
    // for a request.
    DefaultHTTPStatus int

    // EnablePrometheusMetrics enables Prometheus format metrics.
    // This option is *deprecated*. The recommended way to enable prometheus metrics is to
    // use the MetricsFlavours option.
    EnablePrometheusMetrics bool

    // An instance of a Prometheus registry. It allows registering and serving custom metrics when skipper is used as a
    // library.
    // A new registry is created if this option is nil.
    PrometheusRegistry *prometheus.Registry

    // MetricsFlavours sets the metrics storage and exposed format
    // of metrics endpoints.
    MetricsFlavours []string

    // LoadBalancerHealthCheckInterval enables and sets the
    // interval at which to schedule health checks for dead or
    // unhealthy routes.
    LoadBalancerHealthCheckInterval time.Duration

    // ReverseSourcePredicate changes how the client IP is identified
    // within the X-Forwarded-For header wherever IP whitelisting is
    // applied. Amazon's ALB, for example, writes the client IP as the
    // last item of the X-Forwarded-For list; in that case set this
    // to true.
    ReverseSourcePredicate bool

    // OAuthTokeninfoURL sets the URL to be queried for
    // information for all auth.NewOAuthTokeninfo*() filters.
    OAuthTokeninfoURL string

    // OAuthTokeninfoTimeout sets the timeout duration for calls to the OAuth tokeninfo service.
    OAuthTokeninfoTimeout time.Duration

    // OAuthTokenintrospectionTimeout sets the timeout duration for calls to the OAuth token introspection service.
    OAuthTokenintrospectionTimeout time.Duration

    // OIDCSecretsFile is the path to the file containing the key used to encrypt the OpenID token.
    OIDCSecretsFile string

    // SecretsRegistry is used to store and load secrets for encryption and decryption.
    SecretsRegistry *secrets.Registry

    // CredentialsPaths lists directories or files where credentials are stored, one secret per file.
    CredentialsPaths []string

    // CredentialsUpdateInterval sets the interval to update secrets
    CredentialsUpdateInterval time.Duration

    // ApiUsageMonitoringEnable activates the API usage monitoring feature (feature toggle).
    ApiUsageMonitoringEnable                bool
    ApiUsageMonitoringRealmKeys             string
    ApiUsageMonitoringClientKeys            string
    ApiUsageMonitoringRealmsTrackingPattern string
    // *DEPRECATED* ApiUsageMonitoringDefaultClientTrackingPattern
    ApiUsageMonitoringDefaultClientTrackingPattern string

    // DefaultFiltersDir enables the default filters mechanism and sets the directory where the filters are located.
    DefaultFiltersDir string

    // WebhookTimeout sets the timeout duration for calls to a custom webhook auth service.
    WebhookTimeout time.Duration

    // MaxAuditBody sets the maximum read size of the body read by the audit log filter
    MaxAuditBody int

    // EnableSwarm enables skipper fleet communication, required by e.g.
    // the cluster ratelimiter
    EnableSwarm bool
    // redis based swarm
    SwarmRedisURLs         []string
    SwarmRedisReadTimeout  time.Duration
    SwarmRedisWriteTimeout time.Duration
    SwarmRedisPoolTimeout  time.Duration
    SwarmRedisMinIdleConns int
    SwarmRedisMaxIdleConns int
    // swim based swarm
    SwarmKubernetesNamespace          string
    SwarmKubernetesLabelSelectorKey   string
    SwarmKubernetesLabelSelectorValue string
    SwarmPort                         int
    SwarmMaxMessageBuffer             int
    SwarmLeaveTimeout                 time.Duration
    // swim based swarm for local testing
    SwarmStaticSelf  string
    SwarmStaticOther string
    // contains filtered or unexported fields

Options to start skipper.
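The ReverseSourcePredicate option above decides whether the client IP is taken from the first or the last entry of the X-Forwarded-For header. A minimal stdlib sketch of the two strategies (clientIP is an illustrative helper, not skipper's implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// clientIP picks the client IP from an X-Forwarded-For header value.
// With reverse=true (cf. ReverseSourcePredicate) the last entry is
// used, which matches proxies like AWS ALB that append the client IP
// as the last item of the list.
func clientIP(xff string, reverse bool) string {
	parts := strings.Split(xff, ",")
	for i := range parts {
		parts[i] = strings.TrimSpace(parts[i])
	}
	if reverse {
		return parts[len(parts)-1]
	}
	return parts[0]
}

func main() {
	h := "203.0.113.7, 10.0.0.1, 198.51.100.2"
	fmt.Println(clientIP(h, false)) // 203.0.113.7
	fmt.Println(clientIP(h, true))  // 198.51.100.2
}
```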


Directories

circuit - Package circuit implements circuit breaker functionality for the proxy.
cmd/eskip - This utility can be used to verify, print, update or delete eskip formatted routes from and to different data sources.
cmd/skipper - This command provides an executable version of skipper with the default set of filters.
dataclients/kubernetes - Package kubernetes implements Kubernetes Ingress support for Skipper.
dataclients/kubernetes/definitions - Package definitions provides type definitions, parsing, marshaling and validation for Kubernetes resources used by Skipper.
dataclients/routestring - Package routestring provides a DataClient implementation for setting route configuration in the form of a simple eskip string.
eskip - Package eskip implements an in-memory representation of Skipper routes and a DSL for describing Skipper route expressions, route definitions and complete routing tables.
eskipfile - Package eskipfile implements the DataClient interface for reading the skipper route definitions from an eskip formatted file.
etcd - Package etcd implements a DataClient for reading the skipper route definitions from an etcd service.
etcd/etcdtest - Package etcdtest implements an easy startup script to start a local etcd instance for testing purposes.
filters - Package filters contains definitions for skipper filtering and a default, built-in set of filters.
filters/accesslog - Package accesslog provides request filters that give the ability to override the AccessLogDisabled setting.
filters/apiusagemonitoring - Package apiusagemonitoring provides filters gathering metrics around API calls.
filters/auth - Package auth provides authentication related filters.
filters/builtin - Package builtin provides a small, generic set of filters.
filters/circuit - Package circuit provides filters to control the circuit breaker settings on the route level.
filters/cookie - Package cookie implements filters to append cookies to requests or responses.
filters/cors - Package cors implements the origin header for CORS.
filters/diag - Package diag provides a set of network throttling filters for diagnostic purposes.
filters/filtertest - Package filtertest implements mock versions of the Filter, Spec and FilterContext interfaces used during tests.
filters/flowid - Package flowid implements a filter used for identifying incoming requests through their complete lifecycle for logging and monitoring purposes.
filters/log - Package log provides a request logging filter, usable also for audit logging.
filters/ratelimit - Package ratelimit provides filters to control the rate limiter settings on the route level.
filters/scheduler - Package scheduler implements filter logic that changes the HTTP request scheduling behavior of the proxy.
filters/sed - Package sed provides stream editor filters for request and response payload.
filters/serve - Package serve provides a wrapper of net/http.Handler to be used as a filter.
filters/tee - Package tee provides a unix-like tee feature for routing.
filters/tracing - Package tracing provides filters to instrument distributed tracing.
innkeeper - Package innkeeper implements a DataClient for reading skipper route definitions from an Innkeeper service.
loadbalancer - Package loadbalancer implements load balancer algorithms that are applied by the proxy.
logging - Package logging implements application log instrumentation and Apache combined access log.
metrics - Package metrics implements collection of common performance metrics.
net - Package net provides generic network related functions used across Skipper, which might be useful also in contexts other than Skipper.
oauth - Package oauth implements an authentication client to be used with OAuth2 authentication services.
pathmux - Package pathmux implements a tree lookup for values associated to paths.
predicates/auth - Package auth implements custom predicates to match based on the content of the HTTP Authorization header.
predicates/cookie - Package cookie implements a predicate to check parsed cookie headers by name and value.
predicates/cron - Package cron implements custom predicates to match routes only when the system time matches the given cron-like expressions.
predicates/interval - Package interval implements custom predicates to match routes only during some period of time.
predicates/methods - Package methods implements a custom predicate to match routes based on the HTTP method of the request.
predicates/query - Package query implements a custom predicate to match routes based on the query parameters of the URL.
predicates/source - Package source implements a custom predicate to match routes based on the source IP of a request.
predicates/traffic - Package traffic implements a predicate to control the matching probability for a given route by setting its weight.
proxy - Package proxy implements an HTTP reverse proxy based on continuously updated skipper routing rules.
ratelimit - Package ratelimit implements rate limiting functionality for the proxy.
rfc - Package rfc provides standards related functions.
routing - Package routing implements matching of HTTP requests to a continuously updatable set of skipper routes.
routing/testdataclient - Package testdataclient provides a test implementation for the DataClient interface of the skipper/routing package.
scheduler - Package scheduler provides a registry to be used as a postprocessor for the routes that use a LIFO filter.
script - Package script provides Lua scripting for skipper.
script/base64 - Package base64 provides an easy way to encode and decode base64.
secrets - Package secrets implements features to create, get, update and rotate secrets, and encryption/decryption across a fleet of skipper instances.
swarm - Package swarm implements the exchange of information between Skipper instances using a gossip protocol called SWIM.
tracing - Package tracing handles opentracing support for skipper.
tracing/tracingtest - Package tracingtest provides an OpenTracing implementation for testing purposes.
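Several of the packages listed above (filters/ratelimit, filters/circuit, predicates/source) surface as filter and predicate names in eskip route definitions. An illustrative route combining them, using skipper's built-in clientRatelimit and consecutiveBreaker filters and the Source predicate (arguments and backend address are made up for the example):

```
api: Path("/api") && Source("10.0.0.0/8")
  -> clientRatelimit(3, "1m")
  -> consecutiveBreaker(15)
  -> "https://api.example.org";
```

This route matches requests to /api from the 10.0.0.0/8 network, limits each client to 3 requests per minute, opens a circuit breaker after 15 consecutive failures, and forwards to the backend.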

Package skipper imports 54 packages and is imported by 6 packages. Updated 2020-11-24.