memory

package
v1.0.0
Published: Jul 30, 2020 License: AGPL-3.0 Imports: 23 Imported by: 37

Documentation

Index

Constants

This section is empty.

Variables

var (
	Enabled bool

	TagSupport      bool
	TagQueryWorkers int // number of workers to spin up when evaluating tag expressions

	IndexRules  conf.IndexRules
	Partitioned bool

	MetaTagSupport = false
)

Functions

func CloneArchive added in v0.13.0

func CloneArchive(a *idx.Archive) idx.Archive

CloneArchive safely clones an archive. We use atomic operations to update fields, so we need to use atomic operations to read those fields when copying.

func ConfigProcess

func ConfigProcess()

func ConfigSetup

func ConfigSetup() *flag.FlagSet
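
A minimal wiring sketch. The import path is assumed (adjust to wherever this package lives in your module), and plain flag parsing is shown for illustration only; the application normally drives parsing through its own config handling:

package main

import (
	"log"
	"os"

	"github.com/grafana/metrictank/idx/memory" // import path assumed
)

func main() {
	// ConfigSetup registers this package's flags on a dedicated FlagSet.
	fs := memory.ConfigSetup()

	// Parse the flags before processing them.
	if err := fs.Parse(os.Args[1:]); err != nil {
		log.Fatal(err)
	}

	// ConfigProcess validates the parsed settings and fills in the
	// package-level variables (Enabled, TagSupport, IndexRules, ...).
	memory.ConfigProcess()
}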

Types

type FindCache added in v0.12.0

type FindCache struct {
	sync.RWMutex
	// contains filtered or unexported fields
}

FindCache is a caching layer for the in-memory index. The cache provides per org LRU caches of patterns and the resulting []*Nodes from searches on the index. Users should call `InvalidateFor(orgId, path)` when new entries are added to, or removed from the index to invalidate any cached patterns that match the path. `invalidateQueueSize` sets the maximum number of invalidations for a specific orgId that can be running at any time. If this number is exceeded then the cache for that orgId will be immediately purged and disabled for `backoffTime`. This mechanism protects the instance from excessive resource usage when a large number of new series are added at once.
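
A minimal sketch of the intended call pattern; the cache sizes, durations, and the search callback are illustrative assumptions, not part of this package:

// cachedFind is a hypothetical wrapper showing the intended call pattern;
// search stands in for an actual lookup against the in-memory index.
func cachedFind(fc *memory.FindCache, orgID uint32, pattern string,
	search func(uint32, string) []*memory.Node) []*memory.Node {
	if nodes, ok := fc.Get(orgID, pattern); ok {
		return nodes
	}
	nodes := search(orgID, pattern)
	fc.Add(orgID, pattern, nodes)
	return nodes
}

// Construction (sizes and durations are illustrative only), plus invalidation
// whenever a series is added to or removed from the index:
//
//	fc := memory.NewFindCache(1000, 100, 100, 5*time.Second, time.Minute)
//	fc.InvalidateFor(orgID, "collectd.host1.cpu.user")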

func NewFindCache added in v0.12.0

func NewFindCache(size, invalidateQueueSize, invalidateMaxSize int, invalidateMaxWait, backoffTime time.Duration) *FindCache

func (*FindCache) Add added in v0.12.0

func (c *FindCache) Add(orgId uint32, pattern string, nodes []*Node)

func (*FindCache) Get added in v0.12.0

func (c *FindCache) Get(orgId uint32, pattern string) ([]*Node, bool)

func (*FindCache) InvalidateFor added in v0.12.0

func (c *FindCache) InvalidateFor(orgId uint32, path string)

InvalidateFor removes entries from the cache for 'orgId' that match the provided path. If lots of InvalidateFor calls are made at once and we end up with `invalidateQueueSize` concurrent goroutines processing the invalidations, we purge the cache and disable it for `backoffTime`. Future InvalidateFor calls made during the backoff time will then return immediately.

func (*FindCache) Purge added in v0.12.0

func (c *FindCache) Purge(orgId uint32)

Purge clears the cache for the specified orgId

func (*FindCache) PurgeAll added in v0.12.0

func (c *FindCache) PurgeAll()

PurgeAll clears the caches for all orgIds

func (*FindCache) Shutdown added in v0.12.0

func (c *FindCache) Shutdown()

type IdSet

type IdSet map[schema.MKey]struct{} // set of ids

func (IdSet) String

func (ids IdSet) String() string

type MemoryIndex added in v0.12.0

type MemoryIndex interface {
	idx.MetricIndex
	idx.MetaRecordIdx
	LoadPartition(int32, []schema.MetricDefinition) int
	UpdateArchiveLastSave(schema.MKey, int32, uint32)

	PurgeFindCache()
	ForceInvalidationFindCache()
	// contains filtered or unexported methods
}

MemoryIndex is the interface implemented by both UnpartitionedMemoryIdx and PartitionedMemoryIdx. It is needed to support unit tests.

func New

func New() MemoryIndex
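
A lifecycle sketch; it assumes (this is not stated above) that New picks the partitioned or unpartitioned implementation based on the package's Partitioned setting after ConfigProcess has run, and that Init and Stop are reachable through the embedded idx.MetricIndex:

func startIndex() memory.MemoryIndex {
	// ConfigSetup/ConfigProcess are assumed to have run already.
	index := memory.New()
	if err := index.Init(); err != nil {
		log.Fatalf("index init: %v", err)
	}
	return index // call index.Stop() on shutdown
}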

type Node

type Node struct {
	Path     string // branch or NameWithTags for leaves
	Children []string
	Defs     []schema.MKey
}

func (*Node) HasChildren

func (n *Node) HasChildren() bool

func (*Node) Leaf

func (n *Node) Leaf() bool

func (*Node) String

func (n *Node) String() string

type PartitionedMemoryIdx added in v0.12.0

type PartitionedMemoryIdx struct {
	Partition map[int32]*UnpartitionedMemoryIdx
}

Implements the "MetricIndex" interface.

func NewPartitionedMemoryIdx added in v0.12.0

func NewPartitionedMemoryIdx() *PartitionedMemoryIdx

func (*PartitionedMemoryIdx) AddOrUpdate added in v0.12.0

func (p *PartitionedMemoryIdx) AddOrUpdate(mkey schema.MKey, data *schema.MetricData, partition int32) (idx.Archive, int32, bool)

AddOrUpdate makes sure a metric is known in the index, and should be called for every received metric.
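
A sketch of ingesting one data point. The schema.MetricData fields and the SetId and MKeyFromString helpers are assumptions about the schema package, which is not documented here; verify them there:

func ingest(index memory.MemoryIndex, partition int32) {
	md := &schema.MetricData{
		OrgId:    1,
		Name:     "collectd.host1.cpu.user",
		Interval: 10,
		Value:    42.5,
		Time:     time.Now().Unix(),
		Mtype:    "gauge",
	}
	md.SetId() // assumption: derives md.Id from the definition fields

	mkey, err := schema.MKeyFromString(md.Id) // assumption: parses the id into an MKey
	if err != nil {
		log.Fatal(err)
	}

	archive, oldPartition, ok := index.AddOrUpdate(mkey, md, partition)
	_, _, _ = archive, oldPartition, ok // see the implementation docs for the return values
}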

func (*PartitionedMemoryIdx) Delete added in v0.12.0

func (p *PartitionedMemoryIdx) Delete(orgId uint32, pattern string) ([]idx.Archive, error)

Delete deletes items from the index. If the pattern matches a branch node, then all leaf nodes on that branch are deleted. So if the pattern is "*", all items in the index are deleted. It returns a copy of all of the Archives deleted.

func (*PartitionedMemoryIdx) DeleteTagged added in v0.12.0

func (p *PartitionedMemoryIdx) DeleteTagged(orgId uint32, query tagquery.Query) ([]idx.Archive, error)

DeleteTagged deletes the specified series from the tag index and also the DefById index.

func (*PartitionedMemoryIdx) Find added in v0.12.0

func (p *PartitionedMemoryIdx) Find(orgId uint32, pattern string, from int64) ([]idx.Node, error)

Find searches the index for matching nodes.

  • orgId describes the org to search in (public data in orgIdPublic is automatically included)
  • pattern is handled like graphite does; see https://graphite.readthedocs.io/en/latest/render_api.html#paths-and-wildcards
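
For example, a sketch of a simple pattern search (org id and pattern are illustrative):

func listCPUNodes(p *memory.PartitionedMemoryIdx) {
	// from=0 disables the LastUpdate cutoff.
	nodes, err := p.Find(1, "collectd.host1.cpu.*", 0)
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes {
		fmt.Printf("%+v\n", n) // idx.Node values for matching branches and leaves
	}
}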

func (*PartitionedMemoryIdx) FindByTag added in v0.12.0

func (p *PartitionedMemoryIdx) FindByTag(orgId uint32, query tagquery.Query) []idx.Node

FindByTag takes a list of expressions in the format key<operator>value. The allowed operators are: =, !=, =~, !=~. It returns a slice of Node structs that match the given conditions; the conditions are logically AND-ed. If the from time is > 0 then the results will be filtered and only those where the LastUpdate time is >= from will be returned. The returned results are not deduplicated and in certain cases it is possible that duplicate entries will be returned.
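
A sketch of calling FindByTag; constructing the tagquery.Query is left to the tagquery package, which is not documented here:

func findTagged(p *memory.PartitionedMemoryIdx, orgID uint32, query tagquery.Query) {
	// query would typically be built from expressions such as "dc=east"
	// via the tagquery package's constructors.
	for _, n := range p.FindByTag(orgID, query) {
		fmt.Printf("%+v\n", n)
	}
}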

func (*PartitionedMemoryIdx) FindTagValues added in v0.12.0

func (p *PartitionedMemoryIdx) FindTagValues(orgId uint32, tag, prefix string, limit uint) []string

FindTagValues generates a list of possible values that could complete a given value prefix. A tag must be specified and only values of that tag are returned; limit caps the number of results. Additional conditions, in the format of graphite's tag queries, can be applied via FindTagValuesWithQuery.

func (*PartitionedMemoryIdx) FindTagValuesWithQuery added in v0.13.0

func (p *PartitionedMemoryIdx) FindTagValuesWithQuery(orgId uint32, tag, prefix string, query tagquery.Query, limit uint) []string

func (*PartitionedMemoryIdx) FindTags added in v0.12.0

func (p *PartitionedMemoryIdx) FindTags(orgId uint32, prefix string, limit uint) []string

FindTags returns tags matching the specified conditions:

  • prefix: prefix match
  • limit: the maximum number of results to return

The results will always be sorted alphabetically for consistency.

func (*PartitionedMemoryIdx) FindTagsWithQuery added in v0.13.0

func (p *PartitionedMemoryIdx) FindTagsWithQuery(orgId uint32, prefix string, query tagquery.Query, limit uint) []string

FindTagsWithQuery returns tags matching the specified conditions:

  • query: tagdb query to run on the index
  • limit: the maximum number of results to return

The results will always be sorted alphabetically for consistency.

func (*PartitionedMemoryIdx) FindTerms added in v1.0.0

func (p *PartitionedMemoryIdx) FindTerms(orgID uint32, tags []string, query tagquery.Query) (uint32, map[string]map[string]uint32)

func (*PartitionedMemoryIdx) ForceInvalidationFindCache added in v0.12.0

func (p *PartitionedMemoryIdx) ForceInvalidationFindCache()

ForceInvalidationFindCache forces a full invalidation cycle of the find cache

func (*PartitionedMemoryIdx) Get added in v0.12.0

Get returns the archive for the requested id.

func (*PartitionedMemoryIdx) GetPath added in v0.12.0

func (p *PartitionedMemoryIdx) GetPath(orgId uint32, path string) []idx.Archive

GetPath returns the archives under the given path.

func (*PartitionedMemoryIdx) Init added in v0.12.0

func (p *PartitionedMemoryIdx) Init() error

Init initializes the index at startup and blocks until the index is ready for use.

func (*PartitionedMemoryIdx) List added in v0.12.0

func (p *PartitionedMemoryIdx) List(orgId uint32) []idx.Archive

List returns all Archives for the passed OrgId and the public orgId

func (*PartitionedMemoryIdx) LoadPartition added in v0.12.0

func (p *PartitionedMemoryIdx) LoadPartition(partition int32, defs []schema.MetricDefinition) int

LoadPartition is used to rebuild the index from an existing set of MetricDefinitions.

func (*PartitionedMemoryIdx) MetaTagRecordList added in v0.12.0

func (p *PartitionedMemoryIdx) MetaTagRecordList(orgId uint32) []tagquery.MetaTagRecord

func (*PartitionedMemoryIdx) MetaTagRecordSwap added in v0.13.0

func (p *PartitionedMemoryIdx) MetaTagRecordSwap(orgId uint32, records []tagquery.MetaTagRecord) error

func (*PartitionedMemoryIdx) MetaTagRecordUpsert added in v0.12.0

func (p *PartitionedMemoryIdx) MetaTagRecordUpsert(orgId uint32, rawRecord tagquery.MetaTagRecord) error

func (*PartitionedMemoryIdx) Prune added in v0.12.0

func (p *PartitionedMemoryIdx) Prune(oldest time.Time) ([]idx.Archive, error)

Prune deletes all metrics that haven't been seen since the given timestamp. It returns all Archives deleted and any error encountered.
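
For example, a sketch that drops everything not seen within the last week (the cutoff is illustrative):

func pruneStale(p *memory.PartitionedMemoryIdx) {
	pruned, err := p.Prune(time.Now().Add(-7 * 24 * time.Hour))
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("pruned %d series", len(pruned))
}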

func (*PartitionedMemoryIdx) PurgeFindCache added in v0.12.0

func (p *PartitionedMemoryIdx) PurgeFindCache()

PurgeFindCache purges the findCaches for all orgIds across all partitions

func (*PartitionedMemoryIdx) Stop added in v0.12.0

func (p *PartitionedMemoryIdx) Stop()

Stop shuts down the index.

func (*PartitionedMemoryIdx) TagDetails added in v0.12.0

func (p *PartitionedMemoryIdx) TagDetails(orgId uint32, key string, filter *regexp.Regexp) map[string]uint64

TagDetails returns a list of all values associated with a given tag key in the given org. The occurrences of each value are counted, and the returned map is keyed by value with the occurrence count as the map value. If the filter parameter is not nil it is used as a regular expression to filter the values before counting them.
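
For example, a sketch counting series per value of a hypothetical "datacenter" tag, restricted to values starting with "us-":

func datacenterCounts(p *memory.PartitionedMemoryIdx, orgID uint32) {
	filter := regexp.MustCompile("^us-")
	for value, n := range p.TagDetails(orgID, "datacenter", filter) {
		fmt.Printf("%s: %d\n", value, n)
	}
}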

func (*PartitionedMemoryIdx) Tags added in v0.12.0

func (p *PartitionedMemoryIdx) Tags(orgId uint32, filter *regexp.Regexp) []string

Tags returns a list of all tag keys associated with the metrics of a given organization. The return values are filtered by the regex in the second parameter.

func (*PartitionedMemoryIdx) Update added in v0.12.0

func (p *PartitionedMemoryIdx) Update(point schema.MetricPoint, partition int32) (idx.Archive, int32, bool)

Update updates an existing archive, if found. It returns whether it was found, and - if so - the (updated) existing archive and its old partition

func (*PartitionedMemoryIdx) UpdateArchiveLastSave added in v0.13.0

func (p *PartitionedMemoryIdx) UpdateArchiveLastSave(id schema.MKey, partition int32, lastSave uint32)

UpdateArchiveLastSave updates the LastSave timestamp of the archive

type TagIndex

type TagIndex map[string]TagValues // key -> list of values

type TagQueryContext added in v0.13.0

type TagQueryContext struct {
	// contains filtered or unexported fields
}

TagQueryContext runs a set of pattern or string matches on tag keys and values against the index. It is executed via:

  • Run(), which returns a set of matching MetricIDs
  • RunGetTags(), which returns a list of tags of the matching metrics

func NewTagQueryContext added in v0.13.0

func NewTagQueryContext(query tagquery.Query) TagQueryContext

NewTagQueryContext takes a tag query and wraps it into all the context structs necessary to execute the query on the indexes

func (*TagQueryContext) Run added in v1.0.0

func (q *TagQueryContext) Run(index TagIndex, byId map[schema.MKey]*idx.Archive, mti *metaTagHierarchy, mtr *metaTagRecords, resCh chan schema.MKey)

Run executes this query on the given indexes and passes the results into the given result channel. It blocks until query execution is finished, but it does not close the result channel.

type TagValues added in v0.13.0

type TagValues map[string]IdSet // value -> set of ids

type TimeLimiter

type TimeLimiter struct {
	// contains filtered or unexported fields
}

TimeLimiter limits the rate of a set of serial operations. It does this by tracking how much time has been spent (updated via Add()), and comparing this to the window size and the limit, slowing down further operations as soon as an Add() call informs it that the per-window budget has been exceeded. Limitations:

  • the last operation is allowed to exceed the budget (but the next call will be delayed to compensate)
  • concurrency is not supported

For correctness, you should always follow up an Add() with a Wait()
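
A sketch of the Add()/Wait() pattern, allowing roughly 100ms of work per 1s window (budget values are illustrative):

func runLimited(ops []func()) {
	tl := memory.NewTimeLimiter(time.Second, 100*time.Millisecond, time.Now())
	for _, op := range ops {
		start := time.Now()
		op()                      // the serial operation being limited
		tl.Add(time.Since(start)) // record how much time was spent
		tl.Wait()                 // sleeps if the per-window budget is exceeded
	}
}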

func NewTimeLimiter

func NewTimeLimiter(window, limit time.Duration, now time.Time) *TimeLimiter

NewTimeLimiter creates a new TimeLimiter. limit must be <= window.

func (*TimeLimiter) Add

func (l *TimeLimiter) Add(d time.Duration)

Add increments the "time spent" counter by "d"

func (*TimeLimiter) Wait

func (l *TimeLimiter) Wait()

Wait returns when we are not rate limited

  • if we passed the window, we reset everything (this is only safe for callers that behave correctly, i.e. that wait the instructed time after each add)
  • if limit is not reached, no sleep is needed
  • if limit has been exceeded, sleep until the next period plus an extra multiple to compensate. This is perhaps best explained with an example: if the window is 1s and the limit 100ms, but we spent 250ms, then we effectively spent 2.5 windows (2.5s) worth of budget. Say we are 800ms into the 1s window; we should then sleep 2500-800 = 1700ms in order to maximize work while honoring the imposed limit.
  • if limit has been met exactly, sleep until next period (this is a special case of the above)

type Tree

type Tree struct {
	Items map[string]*Node // key is the full path of the node.
}
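
A sketch of walking a Tree and telling leaves from branches via the Node helpers:

func describeTree(t *memory.Tree) {
	for path, node := range t.Items {
		switch {
		case node.Leaf():
			fmt.Printf("leaf   %s (%d defs)\n", path, len(node.Defs))
		case node.HasChildren():
			fmt.Printf("branch %s (%d children)\n", path, len(node.Children))
		}
	}
}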

type UnpartitionedMemoryIdx added in v0.12.0

type UnpartitionedMemoryIdx struct {
	sync.RWMutex
	// contains filtered or unexported fields
}

func NewUnpartitionedMemoryIdx added in v0.12.0

func NewUnpartitionedMemoryIdx() *UnpartitionedMemoryIdx

func (*UnpartitionedMemoryIdx) AddOrUpdate added in v0.12.0

func (m *UnpartitionedMemoryIdx) AddOrUpdate(mkey schema.MKey, data *schema.MetricData, partition int32) (idx.Archive, int32, bool)

AddOrUpdate returns the corresponding Archive for the MetricData.

  • if it already exists: updates lastUpdate (based on .Time) and the partition
  • if it is new: adds a new MetricDefinition to the index

func (*UnpartitionedMemoryIdx) Delete added in v0.12.0

func (m *UnpartitionedMemoryIdx) Delete(orgId uint32, pattern string) ([]idx.Archive, error)

func (*UnpartitionedMemoryIdx) DeleteTagged added in v0.12.0

func (m *UnpartitionedMemoryIdx) DeleteTagged(orgId uint32, query tagquery.Query) ([]idx.Archive, error)

func (*UnpartitionedMemoryIdx) Find added in v0.12.0

func (m *UnpartitionedMemoryIdx) Find(orgId uint32, pattern string, from int64) ([]idx.Node, error)

func (*UnpartitionedMemoryIdx) FindByTag added in v0.12.0

func (m *UnpartitionedMemoryIdx) FindByTag(orgId uint32, query tagquery.Query) []idx.Node

func (*UnpartitionedMemoryIdx) FindTagValues added in v0.12.0

func (m *UnpartitionedMemoryIdx) FindTagValues(orgId uint32, tag, prefix string, limit uint) []string

FindTagValues returns tag values matching the specified conditions:

  • tag: tag key match
  • prefix: value prefix match
  • limit: the maximum number of results to return

The results will always be sorted alphabetically for consistency.
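
For example, a sketch of value auto-completion for a hypothetical "host" tag:

func completeHostValues(m *memory.UnpartitionedMemoryIdx, orgID uint32) {
	// Values of tag "host" that start with "web", capped at 25 results.
	for _, v := range m.FindTagValues(orgID, "host", "web", 25) {
		fmt.Println(v)
	}
}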

func (*UnpartitionedMemoryIdx) FindTagValuesWithQuery added in v0.13.0

func (m *UnpartitionedMemoryIdx) FindTagValuesWithQuery(orgId uint32, tag, prefix string, query tagquery.Query, limit uint) []string

func (*UnpartitionedMemoryIdx) FindTags added in v0.12.0

func (m *UnpartitionedMemoryIdx) FindTags(orgId uint32, prefix string, limit uint) []string

FindTags returns tags matching the specified conditions:

  • prefix: prefix match
  • limit: the maximum number of results to return

The results will always be sorted alphabetically for consistency.

func (*UnpartitionedMemoryIdx) FindTagsWithQuery added in v0.13.0

func (m *UnpartitionedMemoryIdx) FindTagsWithQuery(orgId uint32, prefix string, query tagquery.Query, limit uint) []string

FindTagsWithQuery returns tags matching the specified conditions:

  • query: tagdb query to run on the index
  • limit: the maximum number of results to return

The results will always be sorted alphabetically for consistency.

func (*UnpartitionedMemoryIdx) FindTerms added in v1.0.0

func (m *UnpartitionedMemoryIdx) FindTerms(orgID uint32, tags []string, query tagquery.Query) (uint32, map[string]map[string]uint32)

func (*UnpartitionedMemoryIdx) ForceInvalidationFindCache added in v0.12.0

func (m *UnpartitionedMemoryIdx) ForceInvalidationFindCache()

ForceInvalidationFindCache forces a full invalidation cycle of the find cache

func (*UnpartitionedMemoryIdx) Get added in v0.12.0

func (*UnpartitionedMemoryIdx) GetPath added in v0.12.0

func (m *UnpartitionedMemoryIdx) GetPath(orgId uint32, path string) []idx.Archive

GetPath returns the node under the given org and path. This is an alternative to Find for when you have a path, not a pattern, and want to look up in a specific org's tree only.

func (*UnpartitionedMemoryIdx) Init added in v0.12.0

func (m *UnpartitionedMemoryIdx) Init() error

func (*UnpartitionedMemoryIdx) List added in v0.12.0

func (m *UnpartitionedMemoryIdx) List(orgId uint32) []idx.Archive

func (*UnpartitionedMemoryIdx) Load added in v0.12.0

Load is used to rebuild the index from an existing set of MetricDefinitions.

func (*UnpartitionedMemoryIdx) LoadPartition added in v0.12.0

func (m *UnpartitionedMemoryIdx) LoadPartition(partition int32, defs []schema.MetricDefinition) int

LoadPartition is used to rebuild the index from an existing set of MetricDefinitions for a specific partition.

func (UnpartitionedMemoryIdx) MetaTagRecordList added in v0.12.0

func (m UnpartitionedMemoryIdx) MetaTagRecordList(orgId uint32) []tagquery.MetaTagRecord

func (UnpartitionedMemoryIdx) MetaTagRecordSwap added in v0.13.0

func (m UnpartitionedMemoryIdx) MetaTagRecordSwap(orgId uint32, newRecords []tagquery.MetaTagRecord) error

func (UnpartitionedMemoryIdx) MetaTagRecordUpsert added in v0.12.0

func (m UnpartitionedMemoryIdx) MetaTagRecordUpsert(orgId uint32, upsertRecord tagquery.MetaTagRecord) error

MetaTagRecordUpsert inserts or updates a meta record, depending on whether it already exists or is new. The identity of a record is determined by its queries: if the set of queries in the given record already exists in another record, the existing record is updated; otherwise a new one is created.

func (*UnpartitionedMemoryIdx) Prune added in v0.12.0

func (m *UnpartitionedMemoryIdx) Prune(now time.Time) ([]idx.Archive, error)

Prune prunes series from the index if they have become stale per their index-rule

func (*UnpartitionedMemoryIdx) PurgeFindCache added in v0.12.0

func (m *UnpartitionedMemoryIdx) PurgeFindCache()

PurgeFindCache purges the findCaches for all orgIds

func (*UnpartitionedMemoryIdx) Stop added in v0.12.0

func (m *UnpartitionedMemoryIdx) Stop()

func (*UnpartitionedMemoryIdx) TagDetails added in v0.12.0

func (m *UnpartitionedMemoryIdx) TagDetails(orgId uint32, key string, filter *regexp.Regexp) map[string]uint64

func (*UnpartitionedMemoryIdx) Tags added in v0.12.0

func (m *UnpartitionedMemoryIdx) Tags(orgId uint32, filter *regexp.Regexp) []string

Tags returns a list of all tag keys associated with the metrics of a given organization. The return values are filtered by the regex in the second parameter.

func (*UnpartitionedMemoryIdx) Update added in v0.12.0

func (m *UnpartitionedMemoryIdx) Update(point schema.MetricPoint, partition int32) (idx.Archive, int32, bool)

Update updates an existing archive, if found. It returns whether it was found, and - if so - the (updated) existing archive and its old partition

func (*UnpartitionedMemoryIdx) UpdateArchiveLastSave added in v0.13.0

func (m *UnpartitionedMemoryIdx) UpdateArchiveLastSave(id schema.MKey, partition int32, lastSave uint32)

UpdateArchiveLastSave updates the LastSave timestamp of the archive

type WriteQueue added in v0.13.0

type WriteQueue struct {
	sync.RWMutex
	// contains filtered or unexported fields
}

func NewWriteQueue added in v0.13.0

func NewWriteQueue(index *UnpartitionedMemoryIdx, maxDelay time.Duration, maxBuffered int) *WriteQueue

NewWriteQueue creates a new WriteQueue that will add archives to the passed UnpartitionedMemoryIdx in batches.
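
A usage sketch; the maxDelay and maxBuffered values are illustrative, and the exact flush behavior is defined by the implementation rather than documented here:

func queueWrites(uidx *memory.UnpartitionedMemoryIdx, archives []*idx.Archive) {
	wq := memory.NewWriteQueue(uidx, time.Second, 1000)
	for _, a := range archives {
		wq.Queue(a) // added to the index in batches
	}
	wq.Stop()
}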

func (*WriteQueue) Get added in v0.13.0

func (wq *WriteQueue) Get(id schema.MKey) (*idx.Archive, bool)

func (*WriteQueue) Queue added in v0.13.0

func (wq *WriteQueue) Queue(archive *idx.Archive)

func (*WriteQueue) Stop added in v0.13.0

func (wq *WriteQueue) Stop()
