metrics

package module
v0.0.0-...-0250d8a
Published: Apr 27, 2021 License: GPL-3.0 Imports: 16 Imported by: 3

README

(badges: Build Status, Go Report, GoDoc)

Description

This is a handy metrics library for high-load Go applications, with export to prometheus (passive export) and/or to StatsD (active export). The primary method is the passive export: a special page where all the metrics can be fetched.

How to use

Count the number of HTTP requests of each method (and derive the request rate by measuring the rate of this count):

metrics.Count(`requests`, metrics.Tags{
    `method`: request.Method,
}).Increment()
Measure the latency
startTime := time.Now()

[... do your routines here ...]

metrics.TimingBuffered(`latency`, nil).ConsiderValue(time.Since(startTime))
Export the metrics for prometheus
import "github.com/trafficstars/statuspage"

func sendMetrics(w http.ResponseWriter, r *http.Request) {
    statuspage.WriteMetricsPrometheus(w)
}

func main() {
[...]
    http.HandleFunc("/metrics.prometheus", sendMetrics)
[...]
}
Export the metrics to StatsD

import (
	"github.com/trafficstars/metrics"
)

func newStatsdSender(address string) (*statsdSender, error) {
[... init ...]
}

func (sender *statsdSender) SendInt64(metric metrics.Metric, key string, value int64) error {
[... send the metric to statsd ...]
}

func (sender *statsdSender) SendUint64(metric metrics.Metric, key string, value uint64) error {
[... send the metric to statsd ...]
}

func (sender *statsdSender) SendFloat64(metric metrics.Metric, key string, value float64) error {
[... send the metric to statsd ...]
}

func main() {
[...]
    metricsSender, err := newStatsdSender(`localhost:8125`)
    if err != nil {
        log.Fatal(err)
    }
    metrics.SetSender(metricsSender)
[...]
}

(if buffering is required, it should be implemented on the sender side)

Hello world

package main

import (
        "fmt"
        "math/rand"
        "net/http"
        "time"

        "github.com/trafficstars/metrics"
        "github.com/trafficstars/statuspage"
)

func hello(w http.ResponseWriter, r *http.Request) {
    answerInt := rand.Intn(10)

    startTime := time.Now()

    // just a metric
    tags := metrics.Tags{`answer_int`: answerInt}
    metrics.Count(`hello`, tags).Increment()

    time.Sleep(time.Millisecond)
    fmt.Fprintf(w, "Hello world! The answerInt == %v\n", answerInt)

    // just one more metric
    tags["endpoint"] = "hello"
    metrics.TimingBuffered(`latency`, tags).ConsiderValue(time.Since(startTime))
}

func sendMetrics(w http.ResponseWriter, r *http.Request) {
    startTime := time.Now()

    statuspage.WriteMetricsPrometheus(w)

    metrics.TimingBuffered(`latency`, metrics.Tags{
		"endpoint": "sendMetrics",
    }).ConsiderValue(time.Since(startTime))
}

func main() {
    http.HandleFunc("/", hello)
    http.HandleFunc("/metrics.prometheus", sendMetrics) // here we export metrics for prometheus
    http.ListenAndServe(":8000", nil)
}

Framework "echo"

The same as above, but just use our handler:

// import "github.com/trafficstars/statuspage/handler/echostatuspage"

r := echo.New()
r.GET("/status.prometheus", echostatuspage.StatusPrometheus)

Aggregative metrics

Aggregative metrics are similar to prometheus' summary. Three methods of summarizing/aggregating the observed values are available:

  • Simple.
  • Flow.
  • Buffered.

There are two types of aggregative metrics:

  • Timing (receives time.Duration as the argument to method ConsiderValue).
  • Gauge (receives float64 as the argument to method ConsiderValue).

ConsiderValue is an analog of prometheus' Observe.

So the following aggregative metrics are available:

  • TimingFlow
  • TimingBuffered
  • TimingSimple
  • GaugeFlow
  • GaugeBuffered
  • GaugeSimple
Slicing

An aggregative metric has aggregative/summarized statistics for a few periods at the same time:

  • Last -- is the very last value ever received via ConsiderValue.
  • Current -- is the statistics for the current second (which is not complete, yet)
  • 1S -- is the statistics for the previous second
  • 5S -- is the statistics for the previous 5 seconds
  • 1M -- is the statistics for the previous minute
  • ...
  • 6H -- is the statistics for the last 6 hours
  • 1D -- is the statistics for the last day
  • Total -- is the total statistics

Once per second the Current becomes 1S and a new empty Current appears instead. A history of the last 5 statistics for 1S is kept and used to recalculate the statistics for 5S. In turn, a history of the last 12 statistics for 5S is used to recalculate the statistics for 1M. And so on.

This process is called "slicing" (which is done once per second by default).

To change the aggregation periods and the slicing interval you can use the methods SetAggregationPeriods and SetSlicerInterval, respectively.
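
For illustration, a sketch of such a configuration (the period values below are just an example, not the library's defaults; AggregationPeriod.Interval is expressed in slicer intervals):

package main

import (
	"time"

	"github.com/trafficstars/metrics"
)

func main() {
	metrics.SetSlicerInterval(time.Second)
	metrics.SetAggregationPeriods([]metrics.AggregationPeriod{
		{Interval: 1},   // 1S
		{Interval: 5},   // 5S
		{Interval: 60},  // 1M
		{Interval: 300}, // 5M
	})
	metrics.Reset() // both setters affect only new metrics; Reset() "updates" the configuration of all metrics
}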

Note: this means a single aggregative metric exports every value (max, count, ...) for every aggregation period (Total, Last, Current, 1S, 5S, ...).

Aggregation types

If you have no time to read how every aggregation type works, just read the "Use case" part of each type.

Simple

"Simple" just calculates only min, max, avg and count. It's works quite simple and stupid, doesn't require extra CPU and/or RAM.

Use case

Any case where it's not required to get percentile values.

Flow

"Flow" calculates min, max, avg, count, per1, per10, per50, per90 and per99 ("per" is a shorthand for "percentile"). It doesn't store observed values (only summarized/aggregated ones)

Use case
  • It's required to get percentile values, but they could be inaccurate.
  • There's a lot of values per second.
  • There will be a lot of such metrics.
How the calculation of percentile values works

It just increases/decreases the value (let's call it "P") to reach the required ratio of [values lower than "P"] to [values higher than "P"].

Let's imagine ConsiderValue was called. We do not store previous values, so we:

  1. Pick a random number in [0..1). If it's less than the required percentile, we predict that the new value is lower than the current value "P" (and vice versa).
  2. Correct the current value "P" if the prediction made in the first step turned out to be wrong.

The function (that implements the above algorithm) is called guessPercentileValue (see common_aggregative_flow.go).

There's a constant iterationsRequiredPerSecond to tune the accuracy of the algorithm. The higher this constant is, the more accurate the algorithm is, but the more values (passed through ConsiderValue) per second are required to approach the real value. It's set to 20, so this kind of aggregative metric shouldn't be used unless the following condition is satisfied: VPS >> 20 (VPS means "values per second", ">>" means "much more than").
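
To make the idea more concrete, here is a toy sketch of this approach (it is not the library's actual guessPercentileValue; the fixed step is a simplification of how the real code adjusts the estimate):

package main

import (
	"fmt"
	"math/rand"
)

// guessPercentile nudges the estimate so that, over many observed values, the
// share of values below the estimate approaches the required percentile.
func guessPercentile(estimate, value, step, percentile float64) float64 {
	if rand.Float64() < percentile {
		// We predicted the new value to be below the estimate.
		if value >= estimate {
			estimate += step // the prediction was wrong: raise the estimate
		}
	} else {
		// We predicted the new value to be above the estimate.
		if value < estimate {
			estimate -= step // the prediction was wrong: lower the estimate
		}
	}
	return estimate
}

func main() {
	p99 := 0.0
	for i := 0; i < 100000; i++ {
		p99 = guessPercentile(p99, rand.Float64()*100, 0.5, 0.99)
	}
	fmt.Println("estimated 99th percentile of U(0,100):", p99) // drifts toward ~99
}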

[graph: flow aggregation, 400 events]

[graph: flow aggregation, 4000 events]

The more values are passed, the more inert the estimate is and the more accurate it becomes. So, again, the "Flow" method should be used only with a high VPS.

Attention! There's an unsolved problem of correctly merging percentile-related statistics: for example, to calculate percentile statistics for the "5 seconds" interval it's required to merge the statistics of 5 different seconds (each with its own percentile values), so the resulting percentile value is calculated as just the weighted average of the percentile values. That's correct only if the load is monotone; otherwise it will be inaccurate, but usually good enough.

Buffered

"Buffered" calculates min, max, avg, count and stores values samples to be able to calculate any percentile values at any time. This method more precise than the "Flow", but requires much more RAM. The size of the buffer with the sample values is regulated via method SetAggregativeBufferSize (the default value is "1000"); the more buffer size is the more accuracy of percentile values is, but more RAM is required.

The "Buffered" method is also much faster than the "Flow" method:

BenchmarkConsiderValueFlow-8            20000000               120 ns/op               0 B/op          0 allocs/op
BenchmarkConsiderValueBuffered-8        20000000                75.6 ns/op             0 B/op          0 allocs/op
BenchmarkConsiderValueSimple-8          30000000                54.3 ns/op             0 B/op          0 allocs/op
Use case
  • It's required to get precise percentile values.
  • It's required to use really fast metrics.
  • There won't be a lot of such metrics (otherwise it will utilize a lot of RAM).
Buffer handling

There are two buffer-related specifics:

  • The buffer is limited, so how do we handle the rest of the events (if there are more events per second than the buffer size)?
  • How do two buffers get merged into a new one of the same size (see "Slicing")?

Both problems are solved using the same initial idea: let's imagine we received the 1001st value (via ConsiderValue), while our buffer is only 1000 elements long. Then:

  • We just skip it with probability 1/1001.
  • If it's not skipped, we overwrite a random element of the buffer with it.

If we receive the 1002nd event, we skip it with probability 2/1002... And so on.

It can be proven that any event value has an equal probability of getting into the buffer. And 1000 elements are enough to calculate the value of percentile 99 (there will be 10 elements with a higher value).
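
A small self-contained sketch of this buffer-filling rule (it is essentially reservoir sampling; an illustration, not the library's actual code):

package main

import (
	"fmt"
	"math/rand"
)

// reservoir keeps a fixed-size, uniformly distributed sample of all considered values.
type reservoir struct {
	buf   []float64
	size  int
	count int // how many values were ever considered
}

func (r *reservoir) considerValue(v float64) {
	r.count++
	if len(r.buf) < r.size {
		r.buf = append(r.buf, v) // the buffer is not full yet: just store the value
		return
	}
	// Keep the value with probability size/count (i.e. skip it with probability
	// (count-size)/count), then overwrite a random slot of the buffer.
	if rand.Intn(r.count) < r.size {
		r.buf[rand.Intn(r.size)] = v
	}
}

func main() {
	r := &reservoir{size: 1000}
	for i := 0; i < 100000; i++ {
		r.considerValue(float64(i))
	}
	fmt.Println(len(r.buf), r.count) // 1000 100000
}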

[graph: buffered aggregation -- on this graph the percentile values are absolutely correct, because there are fewer than 1000 events]

[graph: buffered aggregation, 4000 events]

Func metrics

There are also metrics with the "Func" suffix:

  • GaugeFloat64Func.
  • GaugeInt64Func.

These metrics accept a function as an argument, and they call the function to update their value by themselves.

An example:

server := echo.New()
    
[...]
    
engineInstance := fasthttp.WithConfig(engine.Config{
     Address:      srv.Address,
})

metrics.GaugeInt64Func(
    "concurrent_incoming_connections",
    nil,
    func() int64 { return int64(engineInstance.GetOpenConnectionsCount()) },
).SetGCEnabled(false)

server.Run(engineInstance)

Performance

BenchmarkRegistry-8                                     20000000                75.1 ns/op             0 B/op          0 allocs/op
BenchmarkRegistryReal-8                                  5000000               299 ns/op               0 B/op          0 allocs/op
BenchmarkAddToRegistryReal-8                             5000000               351 ns/op               0 B/op          0 allocs/op
BenchmarkRegistryRealReal_lazy-8                         5000000               390 ns/op             352 B/op          3 allocs/op
BenchmarkRegistryRealReal_normal-8                       5000000               317 ns/op              16 B/op          1 allocs/op
BenchmarkRegistryRealReal_FastTags_withHiddenTag-8       5000000               254 ns/op               0 B/op          0 allocs/op
BenchmarkRegistryRealReal_FastTags-8                    10000000               233 ns/op               0 B/op          0 allocs/op

For comparison, mutex.Lock/mutex.Unlock takes:

BenchmarkMutexLockUnlock-8              30000000                57.7 ns/op

Also, you can bypass the metric lookup (retrieval by key and tags) entirely by keeping a reference to the metric, for example:

metric := metrics.GaugeInt64(`concurrent_requests`, nil)
metric.SetGCEnabled(false)
http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
	metric.Increment()
	[...]
	metric.Decrement()
})

The increments and decrements are done atomically and are safe to use concurrently:

BenchmarkIncrementDecrement-8           100000000               14.0 ns/op             0 B/op          0 allocs/op

There's also another approach to metric retrieval -- metric families: a family is retrieved beforehand (like the metric in the example above), and the specific metric is then looked up within the family by tags. It's a faster retrieval method, but less handy. We do not support this approach yet, but I hope we will. IIRC, this approach is used in the official prometheus client library for Go.

Tags

There are two implementations of tags:

  • Tags -- just a map[string]interface{}. It's just handy (syntax sugar).
  • FastTags -- faster tags. They prevent unnecessary memory allocations and usually work a little faster.

Examples:

// Tags
metrics.Count(`requests`, metrics.Tags{
	`method`: request.Method,
}).Increment()

(see BenchmarkRegistryRealReal_lazy)

// FastTags
tags := metrics.NewFastTags().
	Set(`method`, request.Method)
metrics.Count(`requests`, tags).Increment()
tags.Release()

(see BenchmarkRegistryRealReal_FastTags)

It's also possible to use memory reuse for Tags. It reduces memory allocations, but doesn't eliminate them, and takes away the syntax sugar:

tags := metrics.NewTags()
tags[`method`] = request.Method
metrics.Count(`requests`, tags).Increment()
tags.Release()

(see BenchmarkRegistryRealReal_normal)

So for a very high-load application I'd recommend using FastTags, while in the remaining cases you may just use the syntax-sugared Tags.

The case without tags at all corresponds to BenchmarkRegistry (the fastest one):

metrics.Count(`requests`, nil).Increment()

Garbage collection

In our use cases it turned out that we have a lot of short-term metrics (which appear for a few seconds/hours and then disappear), so if we keep all the metrics in RAM then our application reaches the RAM limit and dies. Therefore a "garbage collection" (GC) was implemented. The GC just checks which metrics haven't changed their values for a long time and removes them.

So every metric has a uselessCounter which may reach gcUselessLimit (currently 5). If the threshold is reached, the metric is Stop()-ped and the registry's GC will remove it from the internal storage.

The check whether the metric value has changed is done by an Iterator (see "Iterators"). The default interval is 1 minute (so a metric should be "useless" for at least 5 minutes to be removed).
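
A rough sketch of that check (an illustration only; the helper names gcState and stoppable below are not from the library):

package main

import "fmt"

// gcUselessLimit mirrors the threshold described above.
const gcUselessLimit = 5

type stoppable interface {
	GetFloat64() float64
	Stop()
}

// gcState remembers the last seen value and how many GC iterations in a row it stayed the same.
type gcState struct {
	prevValue      float64
	uselessCounter int
}

// check is supposed to be called once per GC iteration (once per minute by default).
func (s *gcState) check(m stoppable) {
	v := m.GetFloat64()
	if v == s.prevValue {
		s.uselessCounter++
	} else {
		s.uselessCounter = 0
		s.prevValue = v
	}
	if s.uselessCounter >= gcUselessLimit {
		m.Stop() // the registry's GC will then remove the metric from its internal storage
	}
}

type fakeMetric struct {
	value   float64
	stopped bool
}

func (f *fakeMetric) GetFloat64() float64 { return f.value }
func (f *fakeMetric) Stop()               { f.stopped = true }

func main() {
	m := &fakeMetric{value: 42}
	s := &gcState{prevValue: m.value}
	for i := 0; i < gcUselessLimit; i++ {
		s.check(m) // the value never changes, so the counter grows
	}
	fmt.Println("stopped:", m.stopped) // stopped: true
}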

To disable the GC for a metric you can call the method SetGCEnabled(false).

An example:

metric := metrics.GaugeInt64(`concurrent_requests`, nil)
metric.SetGCEnabled(false)

[...]
metric.Increment()
[...]
metric.Decrement()
[...]

Developer notes

The structure of a metric object

To deduplicate code, an approach similar to C++'s inheritance is used, implemented via Go's composition. Here's the scheme (see the "composition/inheritance" diagram):

  • registryItem makes it possible to register the metric in the registry.
  • common handles the routines common to all metric types, like GC or Sender.
  • commonAggregative handles the routines common to all aggregative metrics (like statistics slicing, see "Slicing").

Iterators

There are 3 different background routines for every metric:

  • GC (recheck if the metric hasn't changed for a long time and could be removed)
  • Sender (see "Sender")
  • Slicer (see "Slicing")

If a separate goroutine were run for every routine and every metric, the goroutines would start to consume a lot of CPU, so "iterators" were implemented to deduplicate the goroutines.

An iterator just runs every callback function from a slice at a specified time interval (see the sketch below).

Note: I was too lazy (and there actually was no need) to separate GC and Sender, so it's the same routine.
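
A minimal sketch of the idea (not the library's actual code): one goroutine with a ticker walks a slice of callbacks, instead of one goroutine per metric and per routine.

package main

import (
	"fmt"
	"sync"
	"time"
)

// iterator runs every registered callback once per interval from a single goroutine.
type iterator struct {
	mu        sync.Mutex
	callbacks []func()
}

func (it *iterator) add(cb func()) {
	it.mu.Lock()
	it.callbacks = append(it.callbacks, cb)
	it.mu.Unlock()
}

func (it *iterator) run(interval time.Duration) {
	go func() {
		ticker := time.NewTicker(interval)
		defer ticker.Stop()
		for range ticker.C {
			it.mu.Lock()
			cbs := append([]func(){}, it.callbacks...) // copy to avoid holding the lock during the calls
			it.mu.Unlock()
			for _, cb := range cbs {
				cb()
			}
		}
	}()
}

func main() {
	it := &iterator{}
	it.add(func() { fmt.Println("GC/Sender check") })
	it.add(func() { fmt.Println("slicer tick") })
	it.run(100 * time.Millisecond)
	time.Sleep(350 * time.Millisecond)
}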

Bugs

GoDoc doesn't show public methods of some embedded private structures, sorry for that :(

It may be related to https://github.com/golang/go/issues/6127, or maybe I did something wrong.

Documentation

Index

Constants

const (
	TypeCount = iota
	TypeGaugeInt64
	TypeGaugeInt64Func
	TypeGaugeFloat64
	TypeGaugeFloat64Func
	TypeGaugeAggregativeFlow
	TypeGaugeAggregativeBuffered
	TypeGaugeAggregativeSimple
	TypeTimingFlow
	TypeTimingBuffered
	TypeTimingSimple
)

Variables

var (
	// ErrAlreadyExists should never be returned: it's an internal error.
	// If you get this error then please let us know.
	ErrAlreadyExists = errors.New(`such metric is already registered`)
)

Functions

func GC

func GC()

func GetDefaultGCEnabled

func GetDefaultGCEnabled() bool

func GetDefaultIsRunned

func GetDefaultIsRunned() bool

func GetDefaultIterateInterval

func GetDefaultIterateInterval() time.Duration

func IsDisabled

func IsDisabled() bool

func IsHiddenTag

func IsHiddenTag(tagKey string, tagValue interface{}) bool

func MemoryReuseEnabled

func MemoryReuseEnabled() bool

MemoryReuseEnabled returns if memory reuse is enabled.

func Reset

func Reset()

func SetAggregationPeriods

func SetAggregationPeriods(newAggregationPeriods []AggregationPeriod)

SetAggregationPeriods affects only new metrics (it doesn't affect the already created ones). You may use the function `Reset()` to "update" the configuration of all metrics.

Every higher aggregation period should be a multiple of the lower one.

func SetAggregativeBufferSize

func SetAggregativeBufferSize(newBufferSize uint)

SetAggregativeBufferSize sets the size of the buffer used to store value samples. The larger this value is, the more precise the percentile values are, but the more RAM & CPU is consumed (see "Buffered" in README.md).

func SetDefaultGCEnabled

func SetDefaultGCEnabled(newValue bool)

func SetDefaultIsRan

func SetDefaultIsRan(newValue bool)

func SetDefaultPercentiles

func SetDefaultPercentiles(p []float64)

func SetDefaultTags

func SetDefaultTags(newDefaultAnyTags AnyTags)

func SetDisableFastTags

func SetDisableFastTags(newDisableFastTags bool)

SetDisableFastTags forces the use of Tags instead of FastTags. So if SetDisableFastTags(true) is set, NewFastTags() will return "Tags" instead of "FastTags".

This is supposed to be used only for debugging (like to check if there's a bug caused by FastTags).

func SetDisabled

func SetDisabled(newIsDisabled bool) bool

func SetHiddenTags

func SetHiddenTags(newRawHiddenTags HiddenTags)

func SetLimit

func SetLimit(newLimit uint)

func SetMemoryReuseEnabled

func SetMemoryReuseEnabled(isEnabled bool)

SetMemoryReuseEnabled defines if memory reuse will be enabled (default -- enabled).

func SetMetricsIterateIntervaler

func SetMetricsIterateIntervaler(newMetricsIterateIntervaler IterateIntervaler)

func SetSender

func SetSender(newMetricSender Sender)

SetSender sets a handler responsible for sending metric values to a metrics server (like StatsD).

func SetSlicerInterval

func SetSlicerInterval(newSlicerInterval time.Duration)

SetSlicerInterval affects only new metrics (it doesn't affect the already created ones). You may use the function `Reset()` to "update" the configuration of all metrics.

func TagValueToString

func TagValueToString(vI interface{}) string

Types

type AggregationPeriod

type AggregationPeriod struct {
	Interval uint64 // in slicerInterval-s
}

AggregationPeriod is used to define aggregation periods (see "Slicing" in "README.md")

func GetAggregationPeriods

func GetAggregationPeriods() (r []AggregationPeriod)

GetAggregationPeriods returns aggregations periods (see "Slicing" in README.md)

func GetBaseAggregationPeriod

func GetBaseAggregationPeriod() *AggregationPeriod

GetBaseAggregationPeriod returns AggregationPeriod equals to the slicer's interval (see "Slicing" in README.md)

func (*AggregationPeriod) String

func (period *AggregationPeriod) String() string

String returns a string representation of the aggregation period

It returns a short format (like "5s", "1h") if the amount of seconds can be represented as an exact number of days, hours or minutes, or if the amount of seconds is less than 60. Otherwise the format will be like `1h5m0s`.

type AggregativeStatistics

type AggregativeStatistics interface {
	// GetPercentile returns the value for a given percentile (0.0 .. 1.0).
	// It returns nil if the percentile could not be calculated (it could be in case of using "flow" [instead of
	// "buffered"] aggregative metrics)
	//
	// If you need to calculate multiple percentiles then use GetPercentiles() to get better performance
	GetPercentile(percentile float64) *float64

	// GetPercentiles returns values for given percentiles (0.0 .. 1.0).
	// A value is nil if the percentile could not be calculated.
	GetPercentiles(percentile []float64) []*float64

	// GetDefaultPercentiles returns default percentiles and its values.
	GetDefaultPercentiles() (percentiles []float64, values []float64)

	// Set forces all the values in the statistics to be equal to the passed value
	Set(staticValue float64)

	// ConsiderValue is analog of "Observe" of https://godoc.org/github.com/prometheus/client_golang/prometheus#Observer
	// It's used to merge the value to the statistics. For example if there were considered only values 1, 2 and 3 then
	// the average value will be 2.
	ConsiderValue(value float64)

	// Release is used for memory reuse (it's called when it's known that the statistics won't be used anymore).
	// This method is not supposed to be called from external code; it is designed for internal use only.
	Release()

	// MergeStatistics merges the statistics of the argument into the receiver.
	MergeStatistics(AggregativeStatistics)
}

type AggregativeValue

type AggregativeValue struct {
	sync.Mutex

	Count AtomicUint64
	Min   AtomicFloat64
	Avg   AtomicFloat64
	Max   AtomicFloat64
	Sum   AtomicFloat64

	AggregativeStatistics
}

AggregativeValue is a struct that contains all the values related to an aggregation period.

func (*AggregativeValue) Do

func (aggrV *AggregativeValue) Do(fn func(*AggregativeValue))

Do is like LockDo, but without Lock :)

func (*AggregativeValue) GetAvg

func (aggrV *AggregativeValue) GetAvg() float64

GetAvg just returns the average value

func (*AggregativeValue) LockDo

func (aggrV *AggregativeValue) LockDo(fn func(*AggregativeValue))

LockDo is just a wrapper around Lock()/Unlock(). It's quite handy to understand who caused a deadlock in stack traces.

func (*AggregativeValue) MergeData

func (r *AggregativeValue) MergeData(e *AggregativeValue)

MergeData merges/joins the statistics of the argument.

func (*AggregativeValue) Release

func (v *AggregativeValue) Release()

Release is the opposite of NewAggregativeValue: it saves the variable to a pool to prevent memory allocations in the future. It's not necessary to call this method when you have finished working with an AggregativeValue, but it's recommended (for better performance).

func (*AggregativeValue) String

func (v *AggregativeValue) String() string

String returns a JSON string representing values (min, max, count, ...) of an aggregative value

type AggregativeValues

type AggregativeValues struct {
	// contains filtered or unexported fields
}

AggregativeValues is a full collection of "AggregativeValue"-s (see "Slicing" in README.md)

func (*AggregativeValues) ByPeriod

func (vs *AggregativeValues) ByPeriod(idx int) *AggregativeValue

func (*AggregativeValues) Current

func (vs *AggregativeValues) Current() *AggregativeValue

func (*AggregativeValues) Last

func (vs *AggregativeValues) Last() *AggregativeValue

func (*AggregativeValues) Total

func (vs *AggregativeValues) Total() *AggregativeValue

type AnyTags

type AnyTags interface {
	// Get value of tag by key
	Get(key string) interface{}

	// Set tag by key and value (if tag by the key does not exist then add it otherwise overwrite the value)
	Set(key string, value interface{}) AnyTags

	// Each iterates over all tags and passes tag key/value to the function (the argument)
	// The function may return false if it's required to finish the loop prematurely
	Each(func(key string, value interface{}) bool)

	// ToFastTags returns the tags as "*FastTags"
	ToFastTags() *FastTags

	// ToMap gets the tags as a map, overwrites values by according keys using overwriteMaps and returns
	// the result
	ToMap(overwriteMaps ...map[string]interface{}) map[string]interface{}

	// Release puts the tags structure/slice back to the pool to be reused in the future
	Release()

	// WriteAsString writes tags in StatsD format through the WriteStringer passed as the argument
	WriteAsString(interface{ WriteString(string) (int, error) })

	// String returns the tags as a string in the StatsD format
	String() string

	// Len returns the amount/count of tags
	Len() int
}

AnyTags is an abstraction over "Tags" and "*FastTags"

func NewFastTags

func NewFastTags() AnyTags

NewFastTags returns an implementation of AnyTags with full memory reuse support (if SetDisableFastTags(true) is not set).

This implementation is supposed to be used if it's required to reduce a pressure on GC (see "GCCPUFraction", https://golang.org/pkg/runtime/#MemStats).

It could be required if there's a metric that is retrieved very often and it's required to reduce CPU utilization.

If SetDisableFastTags(true) is set then it returns the same as "NewTags" (without full memory reuse).

See "Tags" in README.md

type AtomicFloat64

type AtomicFloat64 uint64

AtomicFloat64 is an implementation of atomic float64 using uint64 atomic instructions and `math.Float64frombits()`/`math.Float64bits()`
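
A self-contained sketch of that technique (an illustration; the library's actual implementation may differ in details):

package main

import (
	"fmt"
	"math"
	"sync/atomic"
)

// atomicFloat64 stores the float64's bit pattern in a uint64 and uses uint64
// atomics plus a CAS loop, as described above.
type atomicFloat64 uint64

func (f *atomicFloat64) Get() float64 {
	return math.Float64frombits(atomic.LoadUint64((*uint64)(f)))
}

func (f *atomicFloat64) Add(a float64) float64 {
	for {
		old := atomic.LoadUint64((*uint64)(f))
		sum := math.Float64bits(math.Float64frombits(old) + a)
		if atomic.CompareAndSwapUint64((*uint64)(f), old, sum) {
			return math.Float64frombits(sum)
		}
	}
}

func main() {
	var v atomicFloat64
	v.Add(1.5)
	v.Add(2.25)
	fmt.Println(v.Get()) // 3.75
}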

func (*AtomicFloat64) Add

func (f *AtomicFloat64) Add(a float64) float64

Add adds the value to the current one (operator "plus")

func (*AtomicFloat64) AddFast

func (f *AtomicFloat64) AddFast(n float64) float64

AddFast is like Add but without atomicity (faster, but unsafe)

func (*AtomicFloat64) Get

func (f *AtomicFloat64) Get() float64

Get returns the current value

func (*AtomicFloat64) GetFast

func (f *AtomicFloat64) GetFast() float64

GetFast is like Get but without atomicity (faster, but unsafe)

func (*AtomicFloat64) Set

func (f *AtomicFloat64) Set(n float64)

Set sets a new value

func (*AtomicFloat64) SetFast

func (f *AtomicFloat64) SetFast(n float64)

SetFast is like Set but without atomicity (faster, but unsafe)

type AtomicFloat64Interface

type AtomicFloat64Interface interface {
	// Get returns the current value
	Get() float64

	// Set sets a new value
	Set(float64)

	// Add adds the value to the current one (operator "plus")
	Add(float64) float64

	// GetFast is like Get but without atomicity
	GetFast() float64

	// SetFast is like Set but without atomicity
	SetFast(float64)

	// AddFast is like Add but without atomicity
	AddFast(float64) float64
}

AtomicFloat64Interface is an interface of an atomic float64 implementation. It's an abstraction over (*AtomicFloat64) and (*AtomicFloat64Ptr).

type AtomicFloat64Ptr

type AtomicFloat64Ptr struct {
	Pointer *float64
}

AtomicFloat64Ptr is like AtomicFloat64 but stores the value in a pointer "*float64" (which in turn could be changed from the outside to point at some other variable).

func (*AtomicFloat64Ptr) Add

func (f *AtomicFloat64Ptr) Add(n float64) float64

Add adds the value to the current one (operator "plus")

func (*AtomicFloat64Ptr) AddFast

func (f *AtomicFloat64Ptr) AddFast(n float64) float64

AddFast is like Add but without atomicity (faster, but unsafe)

func (*AtomicFloat64Ptr) Get

func (f *AtomicFloat64Ptr) Get() float64

Get returns the current value

func (*AtomicFloat64Ptr) GetFast

func (f *AtomicFloat64Ptr) GetFast() float64

GetFast is like Get but without atomicity (faster, but unsafe)

func (*AtomicFloat64Ptr) Set

func (f *AtomicFloat64Ptr) Set(n float64)

Set sets a new value

func (*AtomicFloat64Ptr) SetFast

func (f *AtomicFloat64Ptr) SetFast(n float64)

SetFast is like Set but without atomicity (faster, but unsafe)

type AtomicUint64

type AtomicUint64 uint64

AtomicUint64 is just a handy wrapper for uint64 with atomic primitives.

func (*AtomicUint64) Add

func (v *AtomicUint64) Add(a uint64) uint64

Add adds the value to the current one (operator "plus")

func (*AtomicUint64) Get

func (v *AtomicUint64) Get() uint64

Get returns the current value

func (*AtomicUint64) Set

func (v *AtomicUint64) Set(n uint64)

Set sets a new value

type ExceptValues

type ExceptValues []interface{}

type FastTag

type FastTag struct {
	Key         string
	StringValue string
	// contains filtered or unexported fields
}

FastTag is an element of FastTags (see "FastTags")

func (*FastTag) GetValue

func (tag *FastTag) GetValue() interface{}

GetValue returns the value of the tag. It returns it as an int64 if the value could be represented as an integer, or as a string if it cannot be represented as an integer.

func (*FastTag) Release

func (tag *FastTag) Release()

Release puts the FastTag back into the pool. The pool is used for memory reuse (to avoid GC pressure and memory reallocation).

This method is supposed to be used for internal needs only.

func (*FastTag) Set

func (tag *FastTag) Set(key string, value interface{})

Set sets the key and the value.

The value will be stored as a string and, if possible, as an int64.

type FastTags

type FastTags struct {
	Slice []*FastTag
	// contains filtered or unexported fields
}

func GetDefaultTags

func GetDefaultTags() *FastTags

func (*FastTags) Each

func (tags *FastTags) Each(fn func(k string, v interface{}) bool)

Each calls the function "fn" for each tag. The key and the value of each tag are passed as the "k" and "v" arguments, accordingly.

func (*FastTags) Get

func (tags *FastTags) Get(key string) interface{}

Get returns the value of the tag with key "key".

If there's no such tag then nil will be returned.

func (*FastTags) IsSet

func (tags *FastTags) IsSet(key string) bool

IsSet returns true if there's a tag with key "key", otherwise -- false.

func (*FastTags) Len

func (tags *FastTags) Len() int

Len returns the amount/count of tags

func (*FastTags) Less

func (tags *FastTags) Less(i, j int) bool

Less returns if the Key of the tag by index "i" is less (strings comparison) than the Key of the tag by index "j".

func (*FastTags) Release

func (tags *FastTags) Release()

Release clears the tags and puts them back into the pool. It's required for memory reuse.

See "Tags" in README.md

func (*FastTags) Set

func (tags *FastTags) Set(key string, value interface{}) AnyTags

Set sets the value of the tag with key "key" to "value". If there's no such tag then creates it and sets the value.

func (*FastTags) Sort

func (tags *FastTags) Sort()

Sort sorts tags by keys (using Swap, Less and Len)

func (*FastTags) String

func (tags *FastTags) String() string

String returns tags as a string compatible with StatsD format of tags.

func (*FastTags) Swap

func (tags *FastTags) Swap(i, j int)

Swap just swaps tags by indexes "i" and "j"

func (*FastTags) ToFastTags

func (tags *FastTags) ToFastTags() *FastTags

ToFastTags does nothing and returns the same tags.

This method is required to implement interface "AnyTags".

func (*FastTags) ToMap

func (tags *FastTags) ToMap(fieldMaps ...map[string]interface{}) map[string]interface{}

ToMap returns the tags as a map of tag keys to tag values ("map[string]interface{}").

Any maps passed as an argument will overwrite values of the resulting map.

func (*FastTags) WriteAsString

func (tags *FastTags) WriteAsString(writeStringer interface{ WriteString(string) (int, error) })

WriteAsString writes tags in StatsD format through the WriteStringer (passed as the argument)

type HiddenTag

type HiddenTag struct {
	Key          string
	ExceptValues ExceptValues
}

type HiddenTags

type HiddenTags []HiddenTag

type IterateIntervaler

type IterateIntervaler interface {
	MetricsIterateInterval() time.Duration
}

type Metric

type Metric interface {
	Iterate()
	GetInterval() time.Duration
	Run(time.Duration)
	Stop()
	Send(Sender)
	GetKey() []byte
	GetType() Type
	GetName() string
	GetTags() *FastTags
	GetFloat64() float64
	IsRunning() bool
	Release()
	IsGCEnabled() bool
	SetGCEnabled(bool)
	GetTag(string) interface{}
	Registry() *Registry
	// contains filtered or unexported methods
}

func Get

func Get(metricType Type, key string, tags AnyTags) Metric

type MetricCount

type MetricCount struct {
	// contains filtered or unexported fields
}

MetricCount is the type of a "Count" metric.

Count metric is an analog of prometheus' "Counter", see: https://godoc.org/github.com/prometheus/client_golang/prometheus#Counter

func Count

func Count(key string, tags AnyTags) *MetricCount

Count returns a metric of type "MetricCount".

For the same key and tags it will return the same metric.

If there's no such metric then it will create it, register it in the registry and return it. If there's already such metric then it will just return the metric.

func (*MetricCount) Add

func (m *MetricCount) Add(delta int64) int64

Add adds (+) the value of "delta" to the internal value and returns the result

func (*MetricCount) Get

func (m *MetricCount) Get() int64

Get returns the current internal value

func (*MetricCount) GetFloat64

func (m *MetricCount) GetFloat64() float64

GetFloat64 returns the current internal value as float64 (the same as `float64(Get())`)

func (*MetricCount) GetType

func (m *MetricCount) GetType() Type

GetType always returns "TypeCount" (because of type "MetricCount")

func (*MetricCount) Increment

func (m *MetricCount) Increment() int64

Increment is an analog of Add(1). It just adds "1" to the internal value and returns the result.

func (*MetricCount) Release

func (m *MetricCount) Release()

func (*MetricCount) Send

func (m *MetricCount) Send(sender Sender)

Send initiates a sending of the internal value via the sender

func (*MetricCount) Set

func (m *MetricCount) Set(newValue int64)

Set overwrites the internal value by the value of the argument "newValue"

func (*MetricCount) SetValuePointer

func (m *MetricCount) SetValuePointer(newValuePtr *int64)

SetValuePointer sets another pointer to be used to store the internal value of the metric

type MetricGaugeAggregativeBuffered

type MetricGaugeAggregativeBuffered struct {
	// contains filtered or unexported fields
}

MetricGaugeAggregativeBuffered is an aggregative/summarizing metric (like "average", "percentile 99" and so on). It's an analog of prometheus' "Summary" (see https://prometheus.io/docs/concepts/metric_types/#summary).

MetricGaugeAggregativeBuffered uses the "Buffered" method to aggregate the statistics (see "Buffered" in README.md)

func GaugeAggregativeBuffered

func GaugeAggregativeBuffered(key string, tags AnyTags) *MetricGaugeAggregativeBuffered

GaugeAggregativeBuffered returns a metric of type "MetricGaugeAggregativeBuffered".

For the same key and tags it will return the same metric.

If there's no such metric then it will create it, register it in the registry and return it. If there's already such metric then it will just return the metric.

MetricGaugeAggregativeBuffered is an aggregative/summarizing metric (like "average", "percentile 99" and so on). It's an analog of prometheus' "Summary" (see https://prometheus.io/docs/concepts/metric_types/#summary).

MetricGaugeAggregativeBuffered uses the "Buffered" method to aggregate the statistics (see "Buffered" in README.md)

func (*MetricGaugeAggregativeBuffered) ConsiderValue

func (m *MetricGaugeAggregativeBuffered) ConsiderValue(v float64)

ConsiderValue adds a value to the statistics, it's an analog of prometheus' "Observe" (see https://godoc.org/github.com/prometheus/client_golang/prometheus#Summary)

func (*MetricGaugeAggregativeBuffered) GetType

func (m *MetricGaugeAggregativeBuffered) GetType() Type

GetType always returns TypeGaugeAggregativeBuffered (because of object type "MetricGaugeAggregativeBuffered")

func (*MetricGaugeAggregativeBuffered) NewAggregativeStatistics

func (m *MetricGaugeAggregativeBuffered) NewAggregativeStatistics() AggregativeStatistics

NewAggregativeStatistics returns a "Buffered" (see "Buffered" in README.md) implementation of AggregativeStatistics.

func (*MetricGaugeAggregativeBuffered) Release

func (m *MetricGaugeAggregativeBuffered) Release()

type MetricGaugeAggregativeFlow

type MetricGaugeAggregativeFlow struct {
	// contains filtered or unexported fields
}

MetricGaugeAggregativeFlow is an aggregative/summarizing metric (like "average", "percentile 99" and so on). It's an analog of prometheus' "Summary" (see https://prometheus.io/docs/concepts/metric_types/#summary).

MetricGaugeAggregativeFlow uses the "Flow" method to aggregate the statistics (see "Flow" in README.md)

func GaugeAggregativeFlow

func GaugeAggregativeFlow(key string, tags AnyTags) *MetricGaugeAggregativeFlow

GaugeAggregativeFlow returns a metric of type "MetricGaugeAggregativeFlow".

For the same key and tags it will return the same metric.

If there's no such metric then it will create it, register it in the registry and return it. If there's already such metric then it will just return the metric.

MetricGaugeAggregativeFlow is an aggregative/summarizing metric (like "average", "percentile 99" and so on). It's an analog of prometheus' "Summary" (see https://prometheus.io/docs/concepts/metric_types/#summary).

MetricGaugeAggregativeFlow uses the "Flow" method to aggregate the statistics (see "Flow" in README.md)

func (*MetricGaugeAggregativeFlow) ConsiderValue

func (m *MetricGaugeAggregativeFlow) ConsiderValue(v float64)

ConsiderValue adds a value to the statistics, it's an analog of prometheus' "Observe" (see https://godoc.org/github.com/prometheus/client_golang/prometheus#Summary)

func (*MetricGaugeAggregativeFlow) GetType

func (m *MetricGaugeAggregativeFlow) GetType() Type

GetType always returns TypeGaugeAggregativeFlow (because of object type "MetricGaugeAggregativeFlow")

func (*MetricGaugeAggregativeFlow) NewAggregativeStatistics

func (m *MetricGaugeAggregativeFlow) NewAggregativeStatistics() AggregativeStatistics

NewAggregativeStatistics returns a "Flow" (see "Flow" in README.md) implementation of AggregativeStatistics.

func (*MetricGaugeAggregativeFlow) Release

func (m *MetricGaugeAggregativeFlow) Release()

type MetricGaugeAggregativeSimple

type MetricGaugeAggregativeSimple struct {
	// contains filtered or unexported fields
}

MetricGaugeAggregativeSimple is an aggregative/summarizing metric (like "average", "percentile 99" and so on). It's an analog of prometheus' "Summary" (see https://prometheus.io/docs/concepts/metric_types/#summary).

MetricGaugeAggregativeSimple uses the "Simple" method to aggregate the statistics (see "Simple" in README.md)

func GaugeAggregativeSimple

func GaugeAggregativeSimple(key string, tags AnyTags) *MetricGaugeAggregativeSimple

GaugeAggregativeSimple returns a metric of type "MetricGaugeAggregativeSimple".

For the same key and tags it will return the same metric.

If there's no such metric then it will create it, register it in the registry and return it. If there's already such metric then it will just return the metric.

MetricGaugeAggregativeSimple is an aggregative/summarizing metric (like "average", "percentile 99" and so on). It's an analog of prometheus' "Summary" (see https://prometheus.io/docs/concepts/metric_types/#summary).

MetricGaugeAggregativeSimple uses the "Simple" method to aggregate the statistics (see "Simple" in README.md)

func (*MetricGaugeAggregativeSimple) ConsiderValue

func (m *MetricGaugeAggregativeSimple) ConsiderValue(v float64)

ConsiderValue adds a value to the statistics, it's an analog of prometheus' "Observe" (see https://godoc.org/github.com/prometheus/client_golang/prometheus#Summary)

func (*MetricGaugeAggregativeSimple) GetType

func (m *MetricGaugeAggregativeSimple) GetType() Type

GetType always returns TypeGaugeAggregativeSimple (because of object type "MetricGaugeAggregativeSimple")

func (*MetricGaugeAggregativeSimple) NewAggregativeStatistics

func (m *MetricGaugeAggregativeSimple) NewAggregativeStatistics() AggregativeStatistics

NewAggregativeStatistics returns nil

"Simple" doesn't calculate percentile values, so it doesn't have specific aggregative statistics, so "nil"

See "Simple" in README.md

func (*MetricGaugeAggregativeSimple) Release

func (m *MetricGaugeAggregativeSimple) Release()

type MetricGaugeFloat64

type MetricGaugeFloat64 struct {
	// contains filtered or unexported fields
}

MetricGaugeFloat64 is just a gauge metric which stores the value as float64. It's an analog of "Gauge" metric of prometheus, see: https://prometheus.io/docs/concepts/metric_types/#gauge

func GaugeFloat64

func GaugeFloat64(key string, tags AnyTags) *MetricGaugeFloat64

GaugeFloat64 returns a metric of type "MetricGaugeFloat64".

For the same key and tags it will return the same metric.

If there's no such metric then it will create it, register it in the registry and return it. If there's already such metric then it will just return the metric.

MetricGaugeFloat64 is just a gauge metric which stores the value as float64. It's an analog of "Gauge" metric of prometheus, see: https://prometheus.io/docs/concepts/metric_types/#gauge

func (*MetricGaugeFloat64) Add

func (m *MetricGaugeFloat64) Add(delta float64) float64

Add adds (+) the value of "delta" to the internal value and returns the result

func (*MetricGaugeFloat64) Get

func (m *MetricGaugeFloat64) Get() float64

Get returns the current internal value

func (*MetricGaugeFloat64) GetFloat64

func (m *MetricGaugeFloat64) GetFloat64() float64

GetFloat64 returns the current internal value

(the same as `Get` for float64 metrics)

func (*MetricGaugeFloat64) GetType

func (m *MetricGaugeFloat64) GetType() Type

GetType always returns TypeGaugeFloat64 (because of object type "MetricGaugeFloat64")

func (*MetricGaugeFloat64) Release

func (m *MetricGaugeFloat64) Release()

func (*MetricGaugeFloat64) Send

func (m *MetricGaugeFloat64) Send(sender Sender)

Send initiates a sending of the internal value via the sender

func (*MetricGaugeFloat64) Set

func (m *MetricGaugeFloat64) Set(newValue float64)

Set overwrites the internal value by the value of the argument "newValue"

func (*MetricGaugeFloat64) SetValuePointer

func (w *MetricGaugeFloat64) SetValuePointer(newValuePtr *float64)

SetValuePointer sets another pointer to be used to store the internal value of the metric

type MetricGaugeFloat64Func

type MetricGaugeFloat64Func struct {
	// contains filtered or unexported fields
}

MetricGaugeFloat64Func is a gauge metric which uses a float64 value returned by a function.

This metric is the same as MetricGaugeFloat64, but uses a function as a source of values.

func GaugeFloat64Func

func GaugeFloat64Func(key string, tags AnyTags, fn func() float64) *MetricGaugeFloat64Func

GaugeFloat64Func returns a metric of type "MetricGaugeFloat64Func".

MetricGaugeFloat64Func is a gauge metric which uses a float64 value returned by the function "fn".

This metric is the same as MetricGaugeFloat64, but uses the function "fn" as a source of values.

Usually if somebody uses this metric it's required to disable the GC: `metric.SetGCEnabled(false)`

func (*MetricGaugeFloat64Func) EqualsTo

func (m *MetricGaugeFloat64Func) EqualsTo(cmpI iterator) bool

EqualsTo checks if it's the same metric passed as the argument

func (*MetricGaugeFloat64Func) Get

func (m *MetricGaugeFloat64Func) Get() float64

func (*MetricGaugeFloat64Func) GetCommons

func (m *MetricGaugeFloat64Func) GetCommons() *common

GetCommons returns the *common of a metric (it is supposed to be used for internal routines only). The "*common" is a structure that is common to all types of metrics (with GC info, registry info and so on).

func (*MetricGaugeFloat64Func) GetFloat64

func (m *MetricGaugeFloat64Func) GetFloat64() float64

func (*MetricGaugeFloat64Func) GetInterval

func (m *MetricGaugeFloat64Func) GetInterval() time.Duration

GetInterval returns the iteration interval (between sending or GC checks).

func (*MetricGaugeFloat64Func) GetType

func (m *MetricGaugeFloat64Func) GetType() Type

func (*MetricGaugeFloat64Func) IsGCEnabled

func (m *MetricGaugeFloat64Func) IsGCEnabled() bool

IsGCEnabled returns whether the GC is enabled for this metric (see the method `SetGCEnabled`).

func (*MetricGaugeFloat64Func) IsRunning

func (m *MetricGaugeFloat64Func) IsRunning() bool

IsRunning returns if the metric is run()'ed and not Stop()'ed.

func (*MetricGaugeFloat64Func) Iterate

func (m *MetricGaugeFloat64Func) Iterate()

Iterate runs the routines supposed to be run once per the selected interval. These routines send the metric value via the sender (see `SetSender`) and run the GC check (to remove the metric if it hasn't been used for a long time).

func (*MetricGaugeFloat64Func) MarshalJSON

func (m *MetricGaugeFloat64Func) MarshalJSON() ([]byte, error)

MarshalJSON returns JSON representation of a metric for external monitoring systems

func (*MetricGaugeFloat64Func) Release

func (m *MetricGaugeFloat64Func) Release()

func (*MetricGaugeFloat64Func) Run

func (m *MetricGaugeFloat64Func) Run(interval time.Duration)

Run starts the metric. We did not check if it is safe to call this method from external code, so it's not recommended to use it, yet. A metric starts automatically after its creation, so usually there's no need to call this method.

func (*MetricGaugeFloat64Func) Send

func (m *MetricGaugeFloat64Func) Send(sender Sender)

func (*MetricGaugeFloat64Func) SetGCEnabled

func (m *MetricGaugeFloat64Func) SetGCEnabled(enabled bool)

SetGCEnabled sets whether this metric could be stopped and removed from the metrics registry if its value does not change for a long time.

func (*MetricGaugeFloat64Func) Stop

func (m *MetricGaugeFloat64Func) Stop()

Stop ends any activity on this metric, except the garbage collector, which will remove this metric from the metrics registry.

type MetricGaugeInt64

type MetricGaugeInt64 struct {
	// contains filtered or unexported fields
}

MetricGaugeInt64 is just a gauge metric which stores the value as int64. It's an analog of "Gauge" metric of prometheus, see: https://prometheus.io/docs/concepts/metric_types/#gauge

func GaugeInt64

func GaugeInt64(key string, tags AnyTags) *MetricGaugeInt64

GaugeInt64 returns a metric of type "MetricGaugeInt64".

For the same key and tags it will return the same metric.

If there's no such metric then it will create it, register it in the registry and return it. If there's already such metric then it will just return the metric.

MetricGaugeInt64 is just a gauge metric which stores the value as int64. It's an analog of "Gauge" metric of prometheus, see: https://prometheus.io/docs/concepts/metric_types/#gauge

func (*MetricGaugeInt64) Add

func (m *MetricGaugeInt64) Add(delta int64) int64

Add adds (+) the value of "delta" to the internal value and returns the result

func (*MetricGaugeInt64) Decrement

func (m *MetricGaugeInt64) Decrement() int64

Decrement is an analog of Add(-1). It just subtracts "1" from the internal value and returns the result.

func (*MetricGaugeInt64) Get

func (m *MetricGaugeInt64) Get() int64

Get returns the current internal value

func (*MetricGaugeInt64) GetFloat64

func (m *MetricGaugeInt64) GetFloat64() float64

GetFloat64 returns the current internal value as float64 (the same as `float64(Get())`)

func (*MetricGaugeInt64) GetType

func (m *MetricGaugeInt64) GetType() Type

GetType always returns TypeGaugeInt64 (because of object type "MetricGaugeInt64")

func (*MetricGaugeInt64) Increment

func (m *MetricGaugeInt64) Increment() int64

Increment is an analog of Add(1). It just adds "1" to the internal value and returns the result.

func (*MetricGaugeInt64) Release

func (m *MetricGaugeInt64) Release()

func (*MetricGaugeInt64) Send

func (m *MetricGaugeInt64) Send(sender Sender)

Send initiates a sending of the internal value via the sender

func (*MetricGaugeInt64) Set

func (m *MetricGaugeInt64) Set(newValue int64)

Set overwrites the internal value by the value of the argument "newValue"

func (*MetricGaugeInt64) SetValuePointer

func (m *MetricGaugeInt64) SetValuePointer(newValuePtr *int64)

SetValuePointer sets another pointer to be used to store the internal value of the metric

type MetricGaugeInt64Func

type MetricGaugeInt64Func struct {
	// contains filtered or unexported fields
}

func GaugeInt64Func

func GaugeInt64Func(key string, tags AnyTags, fn func() int64) *MetricGaugeInt64Func

func (*MetricGaugeInt64Func) EqualsTo

func (m *MetricGaugeInt64Func) EqualsTo(cmpI iterator) bool

EqualsTo checks if it's the same metric passed as the argument

func (*MetricGaugeInt64Func) Get

func (m *MetricGaugeInt64Func) Get() int64

func (*MetricGaugeInt64Func) GetCommons

func (m *MetricGaugeInt64Func) GetCommons() *common

GetCommons returns the *common of a metric (it is supposed to be used for internal routines only). The "*common" is a structure that is common to all types of metrics (with GC info, registry info and so on).

func (*MetricGaugeInt64Func) GetFloat64

func (m *MetricGaugeInt64Func) GetFloat64() float64

func (*MetricGaugeInt64Func) GetInterval

func (m *MetricGaugeInt64Func) GetInterval() time.Duration

GetInterval returns the iteration interval (between sending or GC checks).

func (*MetricGaugeInt64Func) GetType

func (m *MetricGaugeInt64Func) GetType() Type

func (*MetricGaugeInt64Func) IsGCEnabled

func (m *MetricGaugeInt64Func) IsGCEnabled() bool

IsGCEnabled returns whether the GC is enabled for this metric (see the method `SetGCEnabled`).

func (*MetricGaugeInt64Func) IsRunning

func (m *MetricGaugeInt64Func) IsRunning() bool

IsRunning returns if the metric is run()'ed and not Stop()'ed.

func (*MetricGaugeInt64Func) Iterate

func (m *MetricGaugeInt64Func) Iterate()

Iterate runs the routines supposed to be run once per the selected interval. These routines send the metric value via the sender (see `SetSender`) and run the GC check (to remove the metric if it hasn't been used for a long time).

func (*MetricGaugeInt64Func) MarshalJSON

func (m *MetricGaugeInt64Func) MarshalJSON() ([]byte, error)

MarshalJSON returns JSON representation of a metric for external monitoring systems

func (*MetricGaugeInt64Func) Release

func (m *MetricGaugeInt64Func) Release()

func (*MetricGaugeInt64Func) Run

func (m *MetricGaugeInt64Func) Run(interval time.Duration)

Run starts the metric. We did not check if it is safe to call this method from external code, so it's not recommended to use it, yet. A metric starts automatically after its creation, so usually there's no need to call this method.

func (*MetricGaugeInt64Func) Send

func (m *MetricGaugeInt64Func) Send(sender Sender)

func (*MetricGaugeInt64Func) SetGCEnabled

func (m *MetricGaugeInt64Func) SetGCEnabled(enabled bool)

SetGCEnabled sets whether this metric could be stopped and removed from the metrics registry if its value does not change for a long time.

func (*MetricGaugeInt64Func) Stop

func (m *MetricGaugeInt64Func) Stop()

Stop ends any activity on this metric, except the garbage collector, which will remove this metric from the metrics registry.

type MetricTimingBuffered

type MetricTimingBuffered struct {
	// contains filtered or unexported fields
}

func TimingBuffered

func TimingBuffered(key string, tags AnyTags) *MetricTimingBuffered

func (*MetricTimingBuffered) ConsiderValue

func (m *MetricTimingBuffered) ConsiderValue(v time.Duration)

func (*MetricTimingBuffered) GetType

func (m *MetricTimingBuffered) GetType() Type

func (*MetricTimingBuffered) NewAggregativeStatistics

func (m *MetricTimingBuffered) NewAggregativeStatistics() AggregativeStatistics

NewAggregativeStatistics returns a "Buffered" (see "Buffered" in README.md) implementation of AggregativeStatistics.

func (*MetricTimingBuffered) Release

func (m *MetricTimingBuffered) Release()

type MetricTimingFlow

type MetricTimingFlow struct {
	// contains filtered or unexported fields
}

func TimingFlow

func TimingFlow(key string, tags AnyTags) *MetricTimingFlow

func (*MetricTimingFlow) ConsiderValue

func (m *MetricTimingFlow) ConsiderValue(v time.Duration)

func (*MetricTimingFlow) GetType

func (m *MetricTimingFlow) GetType() Type

func (*MetricTimingFlow) NewAggregativeStatistics

func (m *MetricTimingFlow) NewAggregativeStatistics() AggregativeStatistics

NewAggregativeStatistics returns a "Flow" (see "Flow" in README.md) implementation of AggregativeStatistics.

func (*MetricTimingFlow) Release

func (m *MetricTimingFlow) Release()

type MetricTimingSimple

type MetricTimingSimple struct {
	// contains filtered or unexported fields
}

func TimingSimple

func TimingSimple(key string, tags AnyTags) *MetricTimingSimple

func (*MetricTimingSimple) ConsiderValue

func (m *MetricTimingSimple) ConsiderValue(v time.Duration)

func (*MetricTimingSimple) GetType

func (m *MetricTimingSimple) GetType() Type

func (*MetricTimingSimple) NewAggregativeStatistics

func (m *MetricTimingSimple) NewAggregativeStatistics() AggregativeStatistics

NewAggregativeStatistics returns nil

"Simple" doesn't calculate percentile values, so it doesn't have specific aggregative statistics, so "nil"

See "Simple" in README.md

func (*MetricTimingSimple) Release

func (m *MetricTimingSimple) Release()

type Metrics

type Metrics []Metric

func List

func List() *Metrics

func (*Metrics) Release

func (s *Metrics) Release()

func (Metrics) Sort

func (s Metrics) Sort()

type NonAtomicFloat64

type NonAtomicFloat64 float64

NonAtomicFloat64 is just an implementation of AtomicFloat64Interface without any atomicity.

Supposed to be used for debugging, only

func (*NonAtomicFloat64) Add

func (f *NonAtomicFloat64) Add(a float64) float64

Add adds the value to the current one (operator "plus")

func (*NonAtomicFloat64) AddFast

func (f *NonAtomicFloat64) AddFast(n float64) float64

AddFast is the same as Add

func (*NonAtomicFloat64) Get

func (f *NonAtomicFloat64) Get() float64

Get returns the current value

func (*NonAtomicFloat64) GetFast

func (f *NonAtomicFloat64) GetFast() float64

GetFast is the same as Get

func (*NonAtomicFloat64) Set

func (f *NonAtomicFloat64) Set(n float64)

Set sets a new value

func (*NonAtomicFloat64) SetFast

func (f *NonAtomicFloat64) SetFast(n float64)

SetFast is the same as Set

type Registry

type Registry struct {
	// contains filtered or unexported fields
}

func New

func New() *Registry

func (*Registry) Count

func (r *Registry) Count(key string, tags AnyTags) *MetricCount

Count returns a metric of type "MetricCount".

For the same key and tags it will return the same metric.

If there's no such metric then it will create it, register it in the registry and return it. If there's already such metric then it will just return the metric.

func (*Registry) GC

func (r *Registry) GC()

func (*Registry) GaugeAggregativeBuffered

func (r *Registry) GaugeAggregativeBuffered(key string, tags AnyTags) *MetricGaugeAggregativeBuffered

GaugeAggregativeBuffered returns a metric of type "MetricGaugeAggregativeBuffered".

For the same key and tags it will return the same metric.

If there's no such metric then it will create it, register it in the registry and return it. If there's already such metric then it will just return the metric.

MetricGaugeAggregativeBuffered is an aggregative/summarizing metric (like "average", "percentile 99" and so on). It's an analog of prometheus' "Summary" (see https://prometheus.io/docs/concepts/metric_types/#summary).

MetricGaugeAggregativeBuffered uses the "Buffered" method to aggregate the statistics (see "Buffered" in README.md)

func (*Registry) GaugeAggregativeFlow

func (r *Registry) GaugeAggregativeFlow(key string, tags AnyTags) *MetricGaugeAggregativeFlow

GaugeAggregativeFlow returns a metric of type "MetricGaugeAggregativeFlow".

For the same key and tags it will return the same metric.

If there's no such metric then it will create it, register it in the registry and return it. If there's already such metric then it will just return the metric.

MetricGaugeAggregativeFlow is an aggregative/summarizing metric (like "average", "percentile 99" and so on). It's an analog of prometheus' "Summary" (see https://prometheus.io/docs/concepts/metric_types/#summary).

MetricGaugeAggregativeFlow uses the "Flow" method to aggregate the statistics (see "Flow" in README.md)

func (*Registry) GaugeAggregativeSimple

func (r *Registry) GaugeAggregativeSimple(key string, tags AnyTags) *MetricGaugeAggregativeSimple

func (*Registry) GaugeFloat64

func (r *Registry) GaugeFloat64(key string, tags AnyTags) *MetricGaugeFloat64

GaugeFloat64 returns a metric of type "MetricGaugeFloat64".

For the same key and tags it will return the same metric.

If there's no such metric then it will create it, register it in the registry and return it. If there's already such metric then it will just return the metric.

MetricGaugeFloat64 is just a gauge metric which stores the value as float64. It's an analog of "Gauge" metric of prometheus, see: https://prometheus.io/docs/concepts/metric_types/#gauge

func (*Registry) GaugeFloat64Func

func (r *Registry) GaugeFloat64Func(key string, tags AnyTags, fn func() float64) *MetricGaugeFloat64Func

GaugeFloat64Func returns a metric of type "MetricGaugeFloat64Func".

MetricGaugeFloat64Func is a gauge metric which uses a float64 value returned by the function "fn".

This metric is the same as MetricGaugeFloat64, but uses the function "fn" as a source of values.

Usually if somebody uses this metric it's required to disable the GC: `metric.SetGCEnabled(false)`

func (*Registry) GaugeInt64

func (r *Registry) GaugeInt64(key string, tags AnyTags) *MetricGaugeInt64

GaugeInt64 returns a metric of type "MetricGaugeInt64".

For the same key and tags it will return the same metric.

If there's no such metric then it will create it, register it in the registry and return it. If there's already such metric then it will just return the metric.

MetricGaugeInt64 is just a gauge metric which stores the value as int64. It's an analog of "Gauge" metric of prometheus, see: https://prometheus.io/docs/concepts/metric_types/#gauge

func (*Registry) GaugeInt64Func

func (r *Registry) GaugeInt64Func(key string, tags AnyTags, fn func() int64) *MetricGaugeInt64Func

func (*Registry) Get

func (r *Registry) Get(metricType Type, key string, tags AnyTags) Metric

func (*Registry) GetDefaultGCEnabled

func (r *Registry) GetDefaultGCEnabled() bool

func (*Registry) GetDefaultIsRan

func (r *Registry) GetDefaultIsRan() bool

func (*Registry) GetDefaultIterateInterval

func (r *Registry) GetDefaultIterateInterval() time.Duration

func (*Registry) GetSender

func (r *Registry) GetSender() Sender

func (*Registry) IsDisabled

func (r *Registry) IsDisabled() bool

func (*Registry) IsHiddenTag

func (r *Registry) IsHiddenTag(tagKey string, tagValue interface{}) bool

func (*Registry) List

func (r *Registry) List() (result *Metrics)

func (*Registry) Register

func (r *Registry) Register(metric Metric, key string, inTags AnyTags) error

func (*Registry) Reset

func (r *Registry) Reset()

func (*Registry) Set

func (r *Registry) Set(metric Metric) error

func (*Registry) SetDefaultGCEnabled

func (r *Registry) SetDefaultGCEnabled(newGCEnabledValue bool)

func (*Registry) SetDefaultIsRan

func (r *Registry) SetDefaultIsRan(newIsRanValue bool)

func (*Registry) SetDefaultPercentiles

func (r *Registry) SetDefaultPercentiles(p []float64)

func (*Registry) SetDisabled

func (r *Registry) SetDisabled(newIsDisabled bool) bool

func (*Registry) SetHiddenTags

func (r *Registry) SetHiddenTags(newRawHiddenTags HiddenTags)

func (*Registry) SetSender

func (r *Registry) SetSender(newMetricSender Sender)

func (*Registry) TimingBuffered

func (r *Registry) TimingBuffered(key string, tags AnyTags) *MetricTimingBuffered

func (*Registry) TimingFlow

func (r *Registry) TimingFlow(key string, tags AnyTags) *MetricTimingFlow

func (*Registry) TimingSimple

func (r *Registry) TimingSimple(key string, tags AnyTags) *MetricTimingSimple

type Sender

type Sender interface {
	// SendInt64 is used to send signed integer values
	SendInt64(metric Metric, key string, value int64) error

	// SendUint64 is used to send unsigned integer values
	SendUint64(metric Metric, key string, value uint64) error

	// SendFloat64 is used to send float values
	SendFloat64(metric Metric, key string, value float64) error
}

Sender is used to periodically send metric values (for example, to StatsD). On high-load systems we recommend using prometheus and a status page with all the exported metrics instead of actively sending metrics somewhere.

func GetSender

func GetSender() Sender

GetSender returns the handler responsible for sending metric values to a metrics server (like StatsD).

type Spinlock

type Spinlock int32

func (*Spinlock) Lock

func (s *Spinlock) Lock()

func (*Spinlock) Unlock

func (s *Spinlock) Unlock()

type Tags

type Tags map[string]interface{}

func NewTags

func NewTags() Tags

func (Tags) Copy

func (tags Tags) Copy() Tags

func (Tags) Each

func (tags Tags) Each(fn func(k string, v interface{}) bool)

func (Tags) Get

func (tags Tags) Get(key string) interface{}

func (Tags) Keys

func (tags Tags) Keys() (result []string)

func (Tags) Len

func (tags Tags) Len() int

func (Tags) Release

func (tags Tags) Release()

func (Tags) Set

func (tags Tags) Set(key string, value interface{}) AnyTags

func (Tags) String

func (tags Tags) String() string

func (Tags) ToFastTags

func (tags Tags) ToFastTags() *FastTags

func (Tags) ToMap

func (tags Tags) ToMap(fieldMaps ...map[string]interface{}) map[string]interface{}

func (Tags) WriteAsString

func (tags Tags) WriteAsString(writeStringer interface{ WriteString(string) (int, error) })

type Type

type Type int

func (Type) String

func (t Type) String() string

Directories

Path Synopsis
internal
