util

package
v0.0.0-...-ee1864c
Published: Jul 28, 2018 License: Apache-2.0 Imports: 34 Imported by: 0

Documentation

Index

Constants

This section is empty.

Variables

var (
	// Logger is a shared go-kit logger.
	// TODO: Change all components to take a non-global logger via their constructors.
	Logger = log.NewNopLogger()
)

Functions

func DefaultValues

func DefaultValues(rs ...Registerer)

DefaultValues initialises a set of configs (Registerers) with their defaults.

func Event

func Event() log.Logger

Event is the log-like API for event sampling

func GetFirstAddressOf

func GetFirstAddressOf(name string) (string, error)

GetFirstAddressOf returns the first IPv4 address of the supplied interface name.

func InitEvents

func InitEvents(freq int)

InitEvents initializes event sampling with the given frequency; zero turns sampling off.
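For illustration, a minimal sketch of wiring the two together (the exact sampling semantics of freq, e.g. keeping one event in every freq, are an assumption here, not confirmed by this doc):

util.InitEvents(1000) // assumed: keep roughly 1 in 1000 events; 0 disables sampling
util.Event().Log("msg", "chunk flushed", "user", "fake")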

func InitLogger

func InitLogger(cfg *server.Config)

InitLogger initialises the global go-kit logger (util.Logger) and overrides the default logger for the server.

func Max64

func Max64(a, b int64) int64

Max64 returns the maximum of two int64s

func MergeNSampleSets

func MergeNSampleSets(sampleSets ...[]model.SamplePair) []model.SamplePair

MergeNSampleSets merges and dedupes n sets of already sorted sample pairs.

func MergeSampleSets

func MergeSampleSets(a, b []model.SamplePair) []model.SamplePair

MergeSampleSets merges and dedupes two sets of already sorted sample pairs.
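A sketch of the contract, using model from github.com/prometheus/common/model (both inputs must already be sorted by timestamp):

a := []model.SamplePair{{Timestamp: 1, Value: 10}, {Timestamp: 3, Value: 30}}
b := []model.SamplePair{{Timestamp: 2, Value: 20}, {Timestamp: 3, Value: 30}}
merged := util.MergeSampleSets(a, b)
// merged holds timestamps 1, 2, 3; the duplicate pair at t=3 appears once.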

func Min

func Min(a, b int) int

Min returns the minimum of two ints

func Min64

func Min64(a, b int64) int64

Min64 returns the minimum of two int64s

func NewPrometheusLogger

func NewPrometheusLogger(l logging.Level) (log.Logger, error)

NewPrometheusLogger creates a new instance of PrometheusLogger which exposes Prometheus counters for various log levels.

func ParseProtoReader

func ParseProtoReader(ctx context.Context, reader io.Reader, req proto.Message, compression CompressionType) ([]byte, error)

ParseProtoReader parses a compressed proto from an io.Reader.

func RegisterFlags

func RegisterFlags(rs ...Registerer)

RegisterFlags registers flags with the provided Registerers

func SerializeProtoResponse

func SerializeProtoResponse(w http.ResponseWriter, resp proto.Message, compression CompressionType) error

SerializeProtoResponse serializes a protobuf response into an HTTP response.

func SplitFiltersAndMatchers

func SplitFiltersAndMatchers(allMatchers []*labels.Matcher) (filters, matchers []*labels.Matcher)

SplitFiltersAndMatchers splits off empty matchers, which are treated as filters; see #220.

func WithContext

func WithContext(ctx context.Context, l log.Logger) log.Logger

WithContext returns a Logger that has information about the current user in its details.

e.g.

log := util.WithContext(ctx, util.Logger)
log.Log("msg", "Could not chunk chunks", "err", err)

func WithUserID

func WithUserID(userID string, l log.Logger) log.Logger

WithUserID returns a Logger that has information about the current user in its details.

func WriteJSONResponse

func WriteJSONResponse(w http.ResponseWriter, v interface{})

WriteJSONResponse writes some JSON as an HTTP response.

Types

type Backoff

type Backoff struct {
	// contains filtered or unexported fields
}

Backoff implements exponential backoff with randomized wait times

func NewBackoff

func NewBackoff(ctx context.Context, cfg BackoffConfig) *Backoff

NewBackoff creates a Backoff object. Pass a Context that can also terminate the operation.

func (*Backoff) Err

func (b *Backoff) Err() error

Err returns the reason for terminating the backoff, or nil if it didn't terminate

func (*Backoff) NumRetries

func (b *Backoff) NumRetries() int

NumRetries returns the number of retries so far

func (*Backoff) Ongoing

func (b *Backoff) Ongoing() bool

Ongoing returns true if the caller should keep going.

func (*Backoff) Reset

func (b *Backoff) Reset()

Reset resets the Backoff to its initial condition.

func (*Backoff) Wait

func (b *Backoff) Wait()

Wait sleeps for the backoff time, then increases the retry count and backoff time. It returns immediately if the Context is terminated.

type BackoffConfig

type BackoffConfig struct {
	MinBackoff time.Duration // start backoff at this level
	MaxBackoff time.Duration // increase exponentially to this level
	MaxRetries int           // give up after this many; zero means infinite retries
}

BackoffConfig configures a Backoff
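A typical retry loop built from these pieces might look like this (a sketch; doSomething is a placeholder for the fallible operation):

cfg := util.BackoffConfig{
	MinBackoff: 100 * time.Millisecond,
	MaxBackoff: 10 * time.Second,
	MaxRetries: 5,
}
b := util.NewBackoff(ctx, cfg)
for b.Ongoing() {
	if err := doSomething(ctx); err == nil {
		return nil
	}
	b.Wait() // sleeps, then bumps the retry count and backoff time
}
return b.Err() // reports why the loop stopped: retries exhausted or ctx terminated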

type CompressionType

type CompressionType int

CompressionType for encoding and decoding requests and responses.

const (
	NoCompression CompressionType = iota
	FramedSnappy
	RawSnappy
)

Values for CompressionType

func CompressionTypeFor

func CompressionTypeFor(version string) CompressionType

CompressionTypeFor a given version of the Prometheus remote storage protocol. See https://github.com/prometheus/prometheus/issues/2692.
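Putting the proto helpers together, an HTTP handler might look roughly like this (a sketch: client.WriteRequest stands in for any proto.Message, process is hypothetical, and the version header name is an assumption):

func handler(w http.ResponseWriter, r *http.Request) {
	compression := util.CompressionTypeFor(r.Header.Get("X-Prometheus-Remote-Write-Version"))

	var req client.WriteRequest // any proto.Message works here
	if _, err := util.ParseProtoReader(r.Context(), r.Body, &req, compression); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}

	resp := process(&req) // hypothetical business logic returning a proto.Message
	if err := util.SerializeProtoResponse(w, resp, compression); err != nil {
		// the response may be partially written by this point; just give up
		return
	}
}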

type DayValue

type DayValue struct {
	model.Time
	// contains filtered or unexported fields
}

DayValue is a model.Time that can be used as a flag. NB it only parses days!

func NewDayValue

func NewDayValue(t model.Time) DayValue

NewDayValue makes a new DayValue; will round t down to the nearest midnight.

func (*DayValue) IsSet

func (v *DayValue) IsSet() bool

IsSet returns true if the DayValue has been set.

func (*DayValue) Set

func (v *DayValue) Set(s string) error

Set implements flag.Value

func (DayValue) String

func (v DayValue) String() string

String implements flag.Value
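Because DayValue implements flag.Value, it can be registered directly; a sketch (the flag name is illustrative, and the accepted day format, e.g. 2006-01-02, is an assumption):

var start util.DayValue
flag.Var(&start, "query.start-day", "day to start the query from (UTC)")
flag.Parse()
if start.IsSet() {
	fmt.Println("starting from", start.String())
}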

type HashBucketHistogram

type HashBucketHistogram interface {
	prometheus.Metric
	prometheus.Collector

	Observe(string, uint32)
	Stop()
}

HashBucketHistogram is used to track a histogram of per-bucket rates.

For instance, I want to know that 50% of rows are getting X QPS or lower and 99% are getting Y QPS or lower. At first glance, this would involve tracking write rate per row, and periodically sticking those numbers in a histogram. To make this fit in memory: instead of per-row, we keep N buckets of counters and hash the key to a bucket. Then every second we update a histogram with the bucket values (and zero the buckets).

Note, we want this metric to be relatively independent of the number of hash buckets and QPS of the service - we're trying to measure how well load balanced the write load is. So we normalise the values in the hash buckets such that if all buckets are '1', then we have even load. We do this by multiplying the number of ops per bucket by the number of buckets, and dividing by the number of ops.
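Concretely, the normalisation amounts to the following arithmetic (a sketch, not the actual implementation):

// normalised = opsInBucket * numBuckets / totalOps
// e.g. 4 buckets seeing [25, 25, 25, 25] of 100 ops each observe
//   25 * 4 / 100 = 1.0 (perfectly even load)
// while [70, 10, 10, 10] observe 2.8, 0.4, 0.4, 0.4 (skewed load)
normalised := float64(opsInBucket) * float64(numBuckets) / float64(totalOps)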

func NewHashBucketHistogram

func NewHashBucketHistogram(opts HashBucketHistogramOpts) HashBucketHistogram

NewHashBucketHistogram makes a new HashBucketHistogram

type HashBucketHistogramOpts

type HashBucketHistogramOpts struct {
	prometheus.HistogramOpts
	HashBuckets int
}

HashBucketHistogramOpts are the options for making a HashBucketHistogram

type Op

type Op interface {
	Key() string
	Priority() int64 // The larger the number the higher the priority.
}

Op is an operation on the priority queue.

type PriorityQueue

type PriorityQueue struct {
	// contains filtered or unexported fields
}

PriorityQueue is a priority queue.

func NewPriorityQueue

func NewPriorityQueue() *PriorityQueue

NewPriorityQueue makes a new priority queue.

func (*PriorityQueue) Close

func (pq *PriorityQueue) Close()

Close signals that the queue should be closed when it is empty. A closed queue will not accept new items.

func (*PriorityQueue) Dequeue

func (pq *PriorityQueue) Dequeue() Op

Dequeue returns the op with the highest priority, blocking while the queue is empty; it returns nil once the queue is closed.

func (*PriorityQueue) DiscardAndClose

func (pq *PriorityQueue) DiscardAndClose()

DiscardAndClose closes the queue and removes all the items from it.

func (*PriorityQueue) Enqueue

func (pq *PriorityQueue) Enqueue(op Op) bool

Enqueue adds an operation to the queue in priority order. Returns true if added; false if the operation was already on the queue.

func (*PriorityQueue) Length

func (pq *PriorityQueue) Length() int

Length returns the length of the queue.
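A sketch of an Op implementation and the basic flow (flushOp is hypothetical):

type flushOp struct {
	key      string
	priority int64
}

func (o flushOp) Key() string     { return o.key }
func (o flushOp) Priority() int64 { return o.priority } // larger number = served first

pq := util.NewPriorityQueue()
pq.Enqueue(flushOp{key: "series-1", priority: 10})
pq.Enqueue(flushOp{key: "series-2", priority: 99})

op := pq.Dequeue() // returns the series-2 op first; blocks while the queue is empty
pq.Close()         // once drained, Dequeue returns nil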

type PrometheusLogger

type PrometheusLogger struct {
	// contains filtered or unexported fields
}

PrometheusLogger exposes Prometheus counters for each of go-kit's log levels.

func (*PrometheusLogger) Log

func (pl *PrometheusLogger) Log(kv ...interface{}) error

Log increments the appropriate Prometheus counter depending on the log level.

type Registerer

type Registerer interface {
	RegisterFlags(*flag.FlagSet)
}

Registerer is a thing that can RegisterFlags
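A sketch of a config type satisfying Registerer (the field and flag names are illustrative; RegisterFlags presumably registers onto flag.CommandLine):

type Config struct {
	ListenAddr string
}

func (c *Config) RegisterFlags(f *flag.FlagSet) {
	f.StringVar(&c.ListenAddr, "server.listen-addr", ":8080", "address to listen on")
}

var cfg Config
util.RegisterFlags(&cfg)
flag.Parse()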

type SampleStreamIterator

type SampleStreamIterator struct {
	// contains filtered or unexported fields
}

SampleStreamIterator is a struct and not just a renamed type because otherwise the Metric field and Metric() method would clash.

func NewSampleStreamIterator

func NewSampleStreamIterator(ss *model.SampleStream) SampleStreamIterator

NewSampleStreamIterator creates a SampleStreamIterator

func (SampleStreamIterator) Close

func (it SampleStreamIterator) Close()

Close implements the SeriesIterator interface.

func (SampleStreamIterator) Metric

func (it SampleStreamIterator) Metric() metric.Metric

Metric implements the SeriesIterator interface.

func (SampleStreamIterator) RangeValues

func (it SampleStreamIterator) RangeValues(in metric.Interval) []model.SamplePair

RangeValues implements the SeriesIterator interface.

func (SampleStreamIterator) ValueAtOrBeforeTime

func (it SampleStreamIterator) ValueAtOrBeforeTime(ts model.Time) model.SamplePair

ValueAtOrBeforeTime implements the SeriesIterator interface.

type URLValue

type URLValue struct {
	*url.URL
}

URLValue is a url.URL that can be used as a flag.

func (*URLValue) Set

func (v *URLValue) Set(s string) error

Set implements flag.Value

func (URLValue) String

func (v URLValue) String() string

String implements flag.Value
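URLValue plugs into the flag package the same way as DayValue; a sketch (the flag name is illustrative):

var am util.URLValue
flag.Var(&am, "ruler.alertmanager-url", "URL of the Alertmanager to send alerts to")
flag.Parse()
fmt.Println(am.String())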
