tsdb

package
v0.10.1-0...-e076de5
Published: May 2, 2016 License: MIT Imports: 27 Imported by: 0

README

Line Protocol

The line protocol is a text-based format for writing points to InfluxDB. Each line defines a single point. Multiple lines must be separated by the newline character \n. The format of a line consists of three parts:

[key] [fields] [timestamp]

Each section is separated by spaces. The minimum required point consists of a measurement name and at least one field. Points without a specified timestamp will be written using the server's local timestamp. Timestamps are assumed to be in nanoseconds unless a precision value is passed in the query string.

Key

The key is the measurement name and any optional tags separated by commas. Measurement names, tag keys, and tag values must escape any spaces or commas using a backslash (\): a space becomes "\ " and a comma becomes "\,". All tag values are stored as strings and should not be surrounded in quotes.

Tags should be sorted by key before being sent for best performance. The sort should match that from the Go bytes.Compare function (http://golang.org/pkg/bytes/#Compare).

Examples
# measurement only
cpu

# measurement and tags
cpu,host=serverA,region=us-west

# measurement with commas
cpu\,01,host=serverA,region=us-west

# tag value with spaces
cpu,host=server\ A,region=us\ west
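
The escaping and key-sorting rules above can be applied in client code before serialization. A minimal Go sketch, assuming tags are held in a map; the escape replacer and buildKey helper below are illustrative and not part of this package:

package main

import (
	"fmt"
	"sort"
	"strings"
)

// escape backslash-escapes the spaces and commas described above.
var escape = strings.NewReplacer(",", `\,`, " ", `\ `)

// buildKey assembles "measurement,tag1=v1,tag2=v2" with tags sorted by key.
func buildKey(measurement string, tags map[string]string) string {
	keys := make([]string, 0, len(tags))
	for k := range tags {
		keys = append(keys, k)
	}
	sort.Strings(keys) // byte-wise lexicographic order, matching bytes.Compare

	var sb strings.Builder
	sb.WriteString(escape.Replace(measurement))
	for _, k := range keys {
		sb.WriteString("," + escape.Replace(k) + "=" + escape.Replace(tags[k]))
	}
	return sb.String()
}

func main() {
	fmt.Println(buildKey("cpu", map[string]string{"region": "us west", "host": "server A"}))
	// Output: cpu,host=server\ A,region=us\ west
}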

Fields

Fields are key-value metrics associated with the measurement. Every line must have at least one field. Multiple fields must be separated with commas and not spaces.

Field keys are always strings and follow the same syntactical rules as described above for tag keys and values. Field values can be one of four types. The first value written for a given field on a given measurement defines the type of that field for all series under that measurement.

  • integer - Numeric values that do not include a decimal and are followed by a trailing i when inserted (e.g. 1i, 345i, 2015i, -10i). Note that all values must have a trailing i. If they do not they will be written as floats.
  • float - Numeric values that are not followed by a trailing i. (e.g. 1, 1.0, -3.14, 6.0e+5, 10).
  • boolean - A value indicating true or false. Valid boolean strings are (t, T, true, TRUE, f, F, false, and FALSE).
  • string - A text value. All string values must be surrounded in double-quotes ". Any double-quote or backslash within the string must be escaped with a backslash, e.g. \", \\.
# integer value
cpu value=1i

cpu value=1.1i # will result in a parse error

# float value
cpu_load value=1

cpu_load value=1.0

cpu_load value=1.2

# boolean value
error fatal=true

# string value
event msg="logged out"

# multiple values
cpu load=10,alert=true,reason="value above maximum threshold"

Timestamp

The timestamp section is optional but should be specified if possible. The value is an integer representing nanoseconds since the epoch. If the timestamp is not provided the point will inherit the server's local timestamp.

Some write APIs allow passing a lower precision. If the API supports a lower precision, the timestamp may also be an integer epoch in microseconds, milliseconds, seconds, minutes or hours.

Full Example

A full example is shown below.

cpu,host=server01,region=uswest value=1 1434055562000000000
cpu,host=server02,region=uswest value=3 1434055562000010000

In this example the first line shows a measurement of "cpu", two tags "host" and "region", a value of 1.0, and a timestamp of 1434055562000000000. Following this is a second line, also a point in the measurement "cpu", but belonging to a different "host".

cpu,host=server\ 01,region=uswest value=1,msg="all systems nominal"
cpu,host=server\ 01,region=us\,west value_int=1i

In these examples the "host" tag value is server 01, with the space escaped. The field value associated with field key msg is double-quoted, as it is a string. The second example shows a region of us,west with the comma properly escaped. In the first example value is written as a floating-point number; in the second, value_int is written as an integer.

Distributed Queries

Documentation

Overview

Package tsdb implements a durable time series database.

Index

Constants

const (
	// DefaultEngine is the default engine for new shards
	DefaultEngine = "tsm1"

	// DefaultCacheMaxMemorySize is the maximum size a shard's cache can
	// reach before it starts rejecting writes.
	DefaultCacheMaxMemorySize = 500 * 1024 * 1024 // 500MB

	// DefaultCacheSnapshotMemorySize is the size at which the engine will
	// snapshot the cache and write it to a TSM file, freeing up memory
	DefaultCacheSnapshotMemorySize = 25 * 1024 * 1024 // 25MB

	// DefaultCacheSnapshotWriteColdDuration is the length of time at which
	// the engine will snapshot the cache and write it to a new TSM file if
	// the shard hasn't received writes or deletes
	DefaultCacheSnapshotWriteColdDuration = time.Duration(time.Hour)

	// DefaultCompactFullWriteColdDuration is the duration at which the engine
	// will compact all TSM files in a shard if it hasn't received a write or delete
	DefaultCompactFullWriteColdDuration = time.Duration(24 * time.Hour)

	// DefaultMaxPointsPerBlock is the maximum number of points in an encoded
	// block in a TSM file
	DefaultMaxPointsPerBlock = 1000
)

const EOF = int64(-1)

EOF represents a "not found" key returned by a Cursor.

Variables

var (
	// ErrFormatNotFound is returned when no format can be determined from a path.
	ErrFormatNotFound = errors.New("format not found")

	// ErrUnknownEngineFormat is returned when the engine format is
	// unknown. ErrUnknownEngineFormat is currently returned if a format
	// other than tsm1 is encountered.
	ErrUnknownEngineFormat = errors.New("unknown engine format")
)

var (
	// ErrFieldOverflow is returned when too many fields are created on a measurement.
	ErrFieldOverflow = errors.New("field overflow")

	// ErrFieldTypeConflict is returned when a new field already exists with a different type.
	ErrFieldTypeConflict = errors.New("field type conflict")

	// ErrFieldNotFound is returned when a field cannot be found.
	ErrFieldNotFound = errors.New("field not found")

	// ErrFieldUnmappedID is returned when the system is presented, during decode, with a field ID
	// there is no mapping for.
	ErrFieldUnmappedID = errors.New("field ID not mapped")

	// ErrEngineClosed is returned when a caller attempts indirectly to
	// access the shard's underlying engine.
	ErrEngineClosed = errors.New("engine is closed")
)

var (
	// ErrShardNotFound is returned when trying to get a non-existent shard.
	ErrShardNotFound = fmt.Errorf("shard not found")
	// ErrStoreClosed is returned when trying to use a closed Store.
	ErrStoreClosed = fmt.Errorf("store is closed")
)

Functions

func DecodeStorePath

func DecodeStorePath(shardOrWALPath string) (database, retentionPolicy string)

DecodeStorePath extracts the database and retention policy names from a given shard or WAL path.
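
As a sketch, assuming the on-disk layout <root>/<database>/<retention policy>/<shard id> that ShardRelativePath documents below, a hypothetical shard path decodes as follows:

package main

import (
	"fmt"

	"github.com/influxdata/influxdb/tsdb"
)

func main() {
	// Hypothetical path; only the trailing <database>/<retention policy>/<shard id>
	// elements matter for decoding.
	db, rp := tsdb.DecodeStorePath("/var/lib/influxdb/data/mydb/default/1")
	fmt.Println(db, rp) // expected: mydb default
}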

func DedupeEntries

func DedupeEntries(a [][]byte) [][]byte

DedupeEntries returns slices with unique keys (the first 8 bytes).

func IsNumeric

func IsNumeric(c *influxql.Call) bool

IsNumeric returns whether a given aggregate can only be run on numeric fields.

func IsRetryable

func IsRetryable(err error) bool

IsRetryable returns true if this error is temporary and could be retried

func MarshalTags

func MarshalTags(tags map[string]string) []byte

MarshalTags converts a tag set to bytes for use as a lookup key
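
A short usage sketch. The exact encoding is internal; the comparison below relies on the documented purpose of the result as a lookup key, i.e. the same tag set should always marshal to the same bytes:

package main

import (
	"bytes"
	"fmt"

	"github.com/influxdata/influxdb/tsdb"
)

func main() {
	a := tsdb.MarshalTags(map[string]string{"host": "server01", "region": "uswest"})
	b := tsdb.MarshalTags(map[string]string{"region": "uswest", "host": "server01"})

	// The same tag set should yield the same key bytes regardless of map iteration order.
	fmt.Println(bytes.Equal(a, b)) // true
}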

func MeasurementFromSeriesKey

func MeasurementFromSeriesKey(key string) string

MeasurementFromSeriesKey returns the name of the measurement from a key that contains a measurement name.

func NewFieldKeysIterator

func NewFieldKeysIterator(sh *Shard, opt influxql.IteratorOptions) (influxql.Iterator, error)

func NewSeriesIterator

func NewSeriesIterator(sh *Shard, opt influxql.IteratorOptions) (influxql.Iterator, error)

NewSeriesIterator returns a new instance of SeriesIterator.

func NewShardError

func NewShardError(id uint64, err error) error

NewShardError returns a new ShardError.

func NewTagKeysIterator

func NewTagKeysIterator(sh *Shard, opt influxql.IteratorOptions) (influxql.Iterator, error)

NewTagKeysIterator returns a new instance of TagKeysIterator.

func NewTagValuesIterator

func NewTagValuesIterator(sh *Shard, opt influxql.IteratorOptions) (influxql.Iterator, error)

NewTagValuesIterator returns a new instance of TagValuesIterator.

func RegisterEngine

func RegisterEngine(name string, fn NewEngineFunc)

RegisterEngine registers a storage engine initializer by name.

func RegisteredEngines

func RegisteredEngines() []string

RegisteredEngines returns the slice of currently registered engines.

Types

type ByteSlices

type ByteSlices [][]byte

ByteSlices wraps a list of byte-slices for sorting.

func (ByteSlices) Len

func (a ByteSlices) Len() int

func (ByteSlices) Less

func (a ByteSlices) Less(i, j int) bool

func (ByteSlices) Swap

func (a ByteSlices) Swap(i, j int)

type Config

type Config struct {
	Dir    string `toml:"dir"`
	Engine string `toml:"engine"`

	// General WAL configuration options
	WALDir            string `toml:"wal-dir"`
	WALLoggingEnabled bool   `toml:"wal-logging-enabled"`

	// Query logging
	QueryLogEnabled bool `toml:"query-log-enabled"`

	// Compaction options for tsm1 (descriptions above with defaults)
	CacheMaxMemorySize             uint64        `toml:"cache-max-memory-size"`
	CacheSnapshotMemorySize        uint64        `toml:"cache-snapshot-memory-size"`
	CacheSnapshotWriteColdDuration toml.Duration `toml:"cache-snapshot-write-cold-duration"`
	CompactFullWriteColdDuration   toml.Duration `toml:"compact-full-write-cold-duration"`
	MaxPointsPerBlock              int           `toml:"max-points-per-block"`

	DataLoggingEnabled bool `toml:"data-logging-enabled"`
}

Config holds the configuration for the tsdb package.

func NewConfig

func NewConfig() Config

NewConfig returns the default configuration for tsdb.

func (*Config) Validate

func (c *Config) Validate() error

Validate validates the configuration held by c.
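
A minimal configuration sketch, starting from the defaults and overriding a few fields before validating; the directory paths are placeholders:

package main

import (
	"log"

	"github.com/influxdata/influxdb/tsdb"
)

func main() {
	cfg := tsdb.NewConfig()                    // defaults, including Engine = "tsm1"
	cfg.Dir = "/var/lib/influxdb/data"         // placeholder data directory
	cfg.WALDir = "/var/lib/influxdb/wal"       // placeholder WAL directory
	cfg.CacheMaxMemorySize = 256 * 1024 * 1024 // override the 500MB default

	if err := cfg.Validate(); err != nil {
		log.Fatalf("invalid tsdb config: %v", err)
	}
}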

type Cursor

type Cursor interface {
	SeekTo(seek int64) (key int64, value interface{})
	Next() (key int64, value interface{})
	Ascending() bool
}

Cursor represents an iterator over a series.

func MultiCursor

func MultiCursor(cursors ...Cursor) Cursor

MultiCursor returns a single cursor that combines the results of all cursors in order.

If the same key is returned from multiple cursors then the first cursor specified will take precedence. A key will only be returned once from the returned cursor.
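
A sketch of how the Cursor contract and MultiCursor compose, using a toy in-memory cursor; the sliceCursor type is illustrative and not part of this package. It iterates in ascending order and returns EOF when exhausted:

package main

import (
	"fmt"

	"github.com/influxdata/influxdb/tsdb"
)

// sliceCursor is a toy ascending Cursor over parallel key/value slices.
type sliceCursor struct {
	keys   []int64
	values []interface{}
	i      int
}

// SeekTo returns the first entry at or after seek and positions the cursor past it.
func (c *sliceCursor) SeekTo(seek int64) (int64, interface{}) {
	for c.i = 0; c.i < len(c.keys); c.i++ {
		if c.keys[c.i] >= seek {
			k, v := c.keys[c.i], c.values[c.i]
			c.i++
			return k, v
		}
	}
	return tsdb.EOF, nil
}

// Next returns the next entry, or EOF when the cursor is exhausted.
func (c *sliceCursor) Next() (int64, interface{}) {
	if c.i >= len(c.keys) {
		return tsdb.EOF, nil
	}
	k, v := c.keys[c.i], c.values[c.i]
	c.i++
	return k, v
}

func (c *sliceCursor) Ascending() bool { return true }

func main() {
	a := &sliceCursor{keys: []int64{1, 3}, values: []interface{}{1.0, 3.0}}
	b := &sliceCursor{keys: []int64{2, 3}, values: []interface{}{2.0, 30.0}}

	// Keys are merged in order; the duplicate key 3 should come from cursor a,
	// the first cursor passed in, and be returned only once.
	cur := tsdb.MultiCursor(a, b)
	for k, v := cur.SeekTo(0); k != tsdb.EOF; k, v = cur.Next() {
		fmt.Println(k, v)
	}
}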

type DatabaseIndex

type DatabaseIndex struct {
	// contains filtered or unexported fields
}

DatabaseIndex is the in-memory index of a collection of measurements, time series, and their tags. Exported functions are goroutine-safe, while unexported functions assume the caller will use the appropriate locks.

func NewDatabaseIndex

func NewDatabaseIndex(name string) *DatabaseIndex

NewDatabaseIndex returns a new initialized DatabaseIndex.

func (*DatabaseIndex) AssignShard

func (d *DatabaseIndex) AssignShard(k string, shardID uint64)

AssignShard updates the index to indicate that series k exists in the given shardID.

func (*DatabaseIndex) CreateMeasurementIndexIfNotExists

func (d *DatabaseIndex) CreateMeasurementIndexIfNotExists(name string) *Measurement

CreateMeasurementIndexIfNotExists creates or retrieves an in memory index object for the measurement

func (*DatabaseIndex) CreateSeriesIndexIfNotExists

func (d *DatabaseIndex) CreateSeriesIndexIfNotExists(measurementName string, series *Series) *Series

CreateSeriesIndexIfNotExists adds the series for the given measurement to the index and sets its ID or returns the existing series object

func (*DatabaseIndex) DropMeasurement

func (d *DatabaseIndex) DropMeasurement(name string)

DropMeasurement removes the measurement and all of its underlying series from the database index

func (*DatabaseIndex) DropSeries

func (d *DatabaseIndex) DropSeries(keys []string)

DropSeries removes the series keys and their tags from the index

func (*DatabaseIndex) Measurement

func (d *DatabaseIndex) Measurement(name string) *Measurement

Measurement returns the measurement object from the index by the name

func (*DatabaseIndex) MeasurementSeriesCounts

func (d *DatabaseIndex) MeasurementSeriesCounts() (nMeasurements int, nSeries int)

MeasurementSeriesCounts returns the number of measurements and series currently indexed by the database. Useful for reporting and monitoring.

func (*DatabaseIndex) Measurements

func (d *DatabaseIndex) Measurements() Measurements

Measurements returns a list of all measurements.

func (*DatabaseIndex) MeasurementsByName

func (d *DatabaseIndex) MeasurementsByName(names []string) []*Measurement

MeasurementsByName returns a list of measurements.

func (*DatabaseIndex) MeasurementsByRegex

func (d *DatabaseIndex) MeasurementsByRegex(re *regexp.Regexp) Measurements

MeasurementsByRegex returns the measurements that match the regex.

func (*DatabaseIndex) RemoveShard

func (d *DatabaseIndex) RemoveShard(shardID uint64)

RemoveShard removes all references to shardID from any series or measurements in the index. If the shard was the only owner of data for the series, the series is removed from the index.

func (*DatabaseIndex) Series

func (d *DatabaseIndex) Series(key string) *Series

Series returns a series by key.

func (*DatabaseIndex) SeriesKeys

func (d *DatabaseIndex) SeriesKeys() []string

func (*DatabaseIndex) SeriesN

func (d *DatabaseIndex) SeriesN() int

SeriesN returns the number of series.

func (*DatabaseIndex) TagsForSeries

func (d *DatabaseIndex) TagsForSeries(key string) map[string]string

TagsForSeries returns the tag map for the passed in series

func (*DatabaseIndex) UnassignShard

func (d *DatabaseIndex) UnassignShard(k string, shardID uint64)

UnassignShard updates the index to indicate that series k does not exist in the given shardID.

type Engine

type Engine interface {
	Open() error
	Close() error

	SetLogOutput(io.Writer)
	LoadMetadataIndex(shard *Shard, index *DatabaseIndex) error

	CreateIterator(opt influxql.IteratorOptions) (influxql.Iterator, error)
	SeriesKeys(opt influxql.IteratorOptions) (influxql.SeriesList, error)
	WritePoints(points []models.Point) error
	ContainsSeries(keys []string) (map[string]bool, error)
	DeleteSeries(keys []string) error
	DeleteSeriesRange(keys []string, min, max int64) error
	DeleteMeasurement(name string, seriesKeys []string) error
	SeriesCount() (n int, err error)
	MeasurementFields(measurement string) *MeasurementFields

	// Format will return the format for the engine
	Format() EngineFormat

	io.WriterTo

	Backup(w io.Writer, basePath string, since time.Time) error
}

Engine represents a swappable storage engine for the shard.

func NewEngine

func NewEngine(path string, walPath string, options EngineOptions) (Engine, error)

NewEngine returns an instance of an engine based on its format. If the path does not exist then the DefaultFormat is used.

type EngineFormat

type EngineFormat int

EngineFormat represents the format for an engine.

const (
	// TSM1Format is the format used by the tsm1 engine.
	TSM1Format EngineFormat = 2
)

type EngineOptions

type EngineOptions struct {
	EngineVersion string

	Config Config
}

EngineOptions represents the options used to initialize the engine.

func NewEngineOptions

func NewEngineOptions() EngineOptions

NewEngineOptions returns the default options.

type Field

type Field struct {
	ID   uint8             `json:"id,omitempty"`
	Name string            `json:"name,omitempty"`
	Type influxql.DataType `json:"type,omitempty"`
}

Field represents a series field.

type FieldCodec

type FieldCodec struct {
	// contains filtered or unexported fields
}

FieldCodec provides encoding and decoding functionality for the fields of a given Measurement. It is a distinct type to avoid locking writes on this node while potentially long-running queries are executing.

It is not affected by changes to the Measurement object after codec creation. TODO: this shouldn't be exported. nothing outside the shard should know about field encodings.

However, this is here until tx.go and the engine get refactored into tsdb.

func NewFieldCodec

func NewFieldCodec(fields map[string]*Field) *FieldCodec

NewFieldCodec returns a FieldCodec for the given Measurement. Must be called with a RLock that protects the Measurement.

func (*FieldCodec) DecodeByID

func (f *FieldCodec) DecodeByID(targetID uint8, b []byte) (interface{}, error)

DecodeByID scans a byte slice for a field with the given ID, converts it to its expected type, and returns that value. TODO: shouldn't be exported. refactor engine

func (*FieldCodec) DecodeByName

func (f *FieldCodec) DecodeByName(name string, b []byte) (interface{}, error)

DecodeByName scans a byte slice for a field with the given name, converts it to its expected type, and returns that value.

func (*FieldCodec) DecodeFields

func (f *FieldCodec) DecodeFields(b []byte) (map[uint8]interface{}, error)

DecodeFields decodes a byte slice into a set of field ids and values.

func (*FieldCodec) DecodeFieldsWithNames

func (f *FieldCodec) DecodeFieldsWithNames(b []byte) (map[string]interface{}, error)

DecodeFieldsWithNames decodes a byte slice into a set of field names and values TODO: shouldn't be exported. refactor engine

func (*FieldCodec) EncodeFields

func (f *FieldCodec) EncodeFields(values map[string]interface{}) ([]byte, error)

EncodeFields converts a map of values with string keys to a byte slice of field IDs and values.

If a field exists in the codec, but its type is different, an error is returned. If a field is not present in the codec, the system panics.
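
A round-trip sketch with a hand-built field map; in normal use the codec comes from the shard via FieldCodec. The influxql.Float and influxql.String data types are assumed from the influxql package:

package main

import (
	"fmt"
	"log"

	"github.com/influxdata/influxdb/influxql"
	"github.com/influxdata/influxdb/tsdb"
)

func main() {
	// Hand-built field definitions; normally these are managed by MeasurementFields.
	fields := map[string]*tsdb.Field{
		"value": {ID: 1, Name: "value", Type: influxql.Float},
		"msg":   {ID: 2, Name: "msg", Type: influxql.String},
	}
	codec := tsdb.NewFieldCodec(fields)

	b, err := codec.EncodeFields(map[string]interface{}{
		"value": 1.5,
		"msg":   "all systems nominal",
	})
	if err != nil {
		log.Fatal(err)
	}

	decoded, err := codec.DecodeFieldsWithNames(b)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(decoded["value"], decoded["msg"]) // 1.5 all systems nominal
}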

func (*FieldCodec) FieldByName

func (f *FieldCodec) FieldByName(name string) *Field

FieldByName returns the field by its name. It returns nil if the field is not found.

func (*FieldCodec) FieldIDByName

func (f *FieldCodec) FieldIDByName(s string) (uint8, error)

FieldIDByName returns the ID of the field with the given name s. TODO: this shouldn't be exported. remove when tx.go and engine.go get refactored into tsdb

func (*FieldCodec) Fields

func (f *FieldCodec) Fields() []*Field

Fields returns an unsorted list of the codec's fields.

type FieldCreate

type FieldCreate struct {
	Measurement string
	Field       *Field
}

FieldCreate holds information for a field to create on a measurement

type FilterExprs

type FilterExprs map[uint64]influxql.Expr

FilterExprs represents a map of series IDs to filter expressions.

func (FilterExprs) DeleteBoolLiteralTrues

func (fe FilterExprs) DeleteBoolLiteralTrues()

DeleteBoolLiteralTrues deletes all elements whose filter expression is a boolean literal true.

func (FilterExprs) Len

func (fe FilterExprs) Len() int

Len returns the number of elements.

type FloatCursorIterator

type FloatCursorIterator struct {
	// contains filtered or unexported fields
}

FloatCursorIterator represents a wrapper for Cursor to produce an influxql.FloatIterator.

func NewFloatCursorIterator

func NewFloatCursorIterator(name string, tagMap map[string]string, cur Cursor, opt influxql.IteratorOptions) *FloatCursorIterator

NewFloatCursorIterator returns a new instance of FloatCursorIterator.

func (*FloatCursorIterator) Close

func (itr *FloatCursorIterator) Close() error

Close closes the iterator.

func (*FloatCursorIterator) Next

func (itr *FloatCursorIterator) Next() (*influxql.FloatPoint, error)

Next returns the next point from the cursor.

type Measurement

type Measurement struct {
	Name string `json:"name,omitempty"`
	// contains filtered or unexported fields
}

Measurement represents a collection of time series in a database. It also contains in-memory structures for indexing tags. Exported functions are goroutine-safe, while unexported functions assume the caller will use the appropriate locks.

func NewMeasurement

func NewMeasurement(name string) *Measurement

NewMeasurement allocates and initializes a new Measurement.

func (*Measurement) AddSeries

func (m *Measurement) AddSeries(s *Series) bool

AddSeries will add a series to the measurementIndex. Returns false if already present

func (*Measurement) DropSeries

func (m *Measurement) DropSeries(seriesID uint64)

DropSeries will remove a series from the measurementIndex.

func (*Measurement) FieldNames

func (m *Measurement) FieldNames() []string

FieldNames returns a list of the measurement's field names

func (*Measurement) HasField

func (m *Measurement) HasField(name string) bool

HasField returns true if the measurement has a field by the given name

func (*Measurement) HasSeries

func (m *Measurement) HasSeries() bool

HasSeries returns true if there is at least 1 series under this measurement

func (*Measurement) HasTagKey

func (m *Measurement) HasTagKey(k string) bool

HasTagKey returns true if at least one series in this measurement has written a value for the passed in tag key

func (*Measurement) SelectFields

func (m *Measurement) SelectFields(stmt *influxql.SelectStatement) []string

SelectFields returns a list of fields in the SELECT section of stmt.

func (*Measurement) SelectTags

func (m *Measurement) SelectTags(stmt *influxql.SelectStatement) []string

SelectTags returns a list of non-field tags in the SELECT section of stmt.

func (*Measurement) SeriesByID

func (m *Measurement) SeriesByID(id uint64) *Series

SeriesByID returns a series by identifier.

func (*Measurement) SeriesKeys

func (m *Measurement) SeriesKeys() []string

SeriesKeys returns the keys of every series in this measurement

func (*Measurement) SetFieldName

func (m *Measurement) SetFieldName(name string)

SetFieldName adds the field name to the measurement.

func (*Measurement) TagKeys

func (m *Measurement) TagKeys() []string

TagKeys returns a list of the measurement's tag names.

func (*Measurement) TagSets

func (m *Measurement) TagSets(dimensions []string, condition influxql.Expr) ([]*influxql.TagSet, error)

TagSets returns the unique tag sets that exist for the given tag keys. This is used to determine what composite series will be created by a group by. For example, "group by region" might return {"region":"uswest"} and {"region":"useast"}, while "group by region, service" might return {"region": "uswest", "service": "redis"}, {"region": "uswest", "service": "mysql"}, and so on. This will also populate the TagSet objects with the series IDs that match each tag set and any influx filter expression that goes with the series. TODO: this shouldn't be exported. However, until tx.go and the engine get refactored into tsdb, we need it.

func (*Measurement) TagValues

func (m *Measurement) TagValues(key string) []string

TagValues returns all the values for the given tag key

func (*Measurement) ValidateGroupBy

func (m *Measurement) ValidateGroupBy(stmt *influxql.SelectStatement) error

ValidateGroupBy ensures that the GROUP BY is not a field.

func (*Measurement) WhereFields

func (m *Measurement) WhereFields(stmt *influxql.SelectStatement) []string

WhereFields returns a list of non-"time" fields in the WHERE section of stmt.

type MeasurementFields

type MeasurementFields struct {
	Codec *FieldCodec
	// contains filtered or unexported fields
}

MeasurementFields holds the fields of a measurement and their codec.

func NewMeasurementFields

func NewMeasurementFields() *MeasurementFields

func (*MeasurementFields) CreateFieldIfNotExists

func (m *MeasurementFields) CreateFieldIfNotExists(name string, typ influxql.DataType, limitCount bool) error

CreateFieldIfNotExists creates a new field with an autoincrementing ID. Returns an error if 255 fields have already been created on the measurement or the field already exists with a different type.
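
A brief sketch of the type-conflict behavior described above; influxql.Float and influxql.Integer are assumed from the influxql package, and limitCount is passed as true to enforce the field limit:

package main

import (
	"fmt"

	"github.com/influxdata/influxdb/influxql"
	"github.com/influxdata/influxdb/tsdb"
)

func main() {
	mf := tsdb.NewMeasurementFields()

	// The first write defines the type of the field.
	if err := mf.CreateFieldIfNotExists("value", influxql.Float, true); err != nil {
		fmt.Println("unexpected:", err)
	}

	// Creating the same field again with a different type conflicts.
	err := mf.CreateFieldIfNotExists("value", influxql.Integer, true)
	fmt.Println(err == tsdb.ErrFieldTypeConflict) // true
}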

func (*MeasurementFields) Field

func (m *MeasurementFields) Field(name string) *Field

func (*MeasurementFields) MarshalBinary

func (m *MeasurementFields) MarshalBinary() ([]byte, error)

MarshalBinary encodes the object to a binary format.

func (*MeasurementFields) UnmarshalBinary

func (m *MeasurementFields) UnmarshalBinary(buf []byte) error

UnmarshalBinary decodes the object from a binary format.

type MeasurementIterator

type MeasurementIterator struct {
	// contains filtered or unexported fields
}

MeasurementIterator represents a string iterator that emits all measurement names in a shard.

func NewMeasurementIterator

func NewMeasurementIterator(sh *Shard, opt influxql.IteratorOptions) (*MeasurementIterator, error)

NewMeasurementIterator returns a new instance of MeasurementIterator.

func (*MeasurementIterator) Close

func (itr *MeasurementIterator) Close() error

Close closes the iterator.

func (*MeasurementIterator) Next

func (itr *MeasurementIterator) Next() (*influxql.FloatPoint, error)

Next emits the next measurement name.

func (*MeasurementIterator) Stats

Stats returns stats about the points processed.

type Measurements

type Measurements []*Measurement

Measurements represents a list of *Measurement.

func (Measurements) Len

func (a Measurements) Len() int

func (Measurements) Less

func (a Measurements) Less(i, j int) bool

func (Measurements) SelectFields

func (a Measurements) SelectFields(stmt *influxql.SelectStatement) []string

SelectFields returns a list of fields in the SELECT section of stmt.

func (Measurements) SelectTags

func (a Measurements) SelectTags(stmt *influxql.SelectStatement) []string

SelectTags returns a list of non-field tags in the SELECT section of stmt.

func (Measurements) Swap

func (a Measurements) Swap(i, j int)

func (Measurements) WhereFields

func (a Measurements) WhereFields(stmt *influxql.SelectStatement) []string

WhereFields returns a list of non-"time" fields in the WHERE section of stmt.

type NewEngineFunc

type NewEngineFunc func(path string, walPath string, options EngineOptions) Engine

NewEngineFunc creates a new engine.

type PointBatcher

type PointBatcher struct {
	// contains filtered or unexported fields
}

PointBatcher accepts Points and will emit a batch of those points when either a) the batch reaches a certain size, or b) a certain time passes.

func NewPointBatcher

func NewPointBatcher(sz int, bp int, d time.Duration) *PointBatcher

NewPointBatcher returns a new PointBatcher. sz is the batching size, bp is the maximum number of batches that may be pending. d is the time after which a batch will be emitted after the first point is received for the batch, regardless of its size.

func (*PointBatcher) Flush

func (b *PointBatcher) Flush()

Flush instructs the batcher to emit any pending points in a batch, regardless of batch size. If there are no pending points, no batch is emitted.

func (*PointBatcher) In

func (b *PointBatcher) In() chan<- models.Point

In returns the channel to which points should be written.

func (*PointBatcher) Out

func (b *PointBatcher) Out() <-chan []models.Point

Out returns the channel from which batches should be read.

func (*PointBatcher) Start

func (b *PointBatcher) Start()

Start starts the batching process. Points are fed in via the In channel and batches are read from the Out channel.

func (*PointBatcher) Stats

func (b *PointBatcher) Stats() *PointBatcherStats

Stats returns a PointBatcherStats object for the PointBatcher. While each statistic should be closely correlated with the others, this is not guaranteed.

func (*PointBatcher) Stop

func (b *PointBatcher) Stop()

Stop stops the batching process. Stop waits for the batching routine to stop before returning.
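
A usage sketch: points fed into In() are emitted on Out() once the batch size is reached or the timeout fires. The models.NewPoint call is an assumption about the models package; check that package for the exact constructor signature:

package main

import (
	"fmt"
	"time"

	"github.com/influxdata/influxdb/models"
	"github.com/influxdata/influxdb/tsdb"
)

func main() {
	// Batches of up to 2 points, at most 5 pending batches, 100ms flush timeout.
	batcher := tsdb.NewPointBatcher(2, 5, 100*time.Millisecond)
	batcher.Start()
	defer batcher.Stop()

	// Assumed constructor; the signature may differ between versions.
	p, err := models.NewPoint("cpu",
		map[string]string{"host": "server01"},
		map[string]interface{}{"value": 1.0},
		time.Now())
	if err != nil {
		panic(err)
	}

	batcher.In() <- p
	batcher.In() <- p // the second point reaches the size threshold of 2

	batch := <-batcher.Out()
	fmt.Println(len(batch)) // 2
}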

type PointBatcherStats

type PointBatcherStats struct {
	BatchTotal   uint64 // Total count of batches transmitted.
	PointTotal   uint64 // Total count of points processed.
	SizeTotal    uint64 // Number of batches that reached size threshold.
	TimeoutTotal uint64 // Number of timeouts that occurred.
}

PointBatcherStats are the statistics each batcher tracks.

type Series

type Series struct {
	Key  string
	Tags map[string]string
	// contains filtered or unexported fields
}

Series belongs to a Measurement and represents a unique time series in a database.

func NewSeries

func NewSeries(key string, tags map[string]string) *Series

NewSeries returns an initialized series struct

func (*Series) AssignShard

func (s *Series) AssignShard(shardID uint64)

func (*Series) Assigned

func (s *Series) Assigned(shardID uint64) bool

func (*Series) InitializeShards

func (s *Series) InitializeShards()

InitializeShards initializes the list of shards.

func (*Series) MarshalBinary

func (s *Series) MarshalBinary() ([]byte, error)

MarshalBinary encodes the object to a binary format.

func (*Series) ShardN

func (s *Series) ShardN() int

func (*Series) UnassignShard

func (s *Series) UnassignShard(shardID uint64)

func (*Series) UnmarshalBinary

func (s *Series) UnmarshalBinary(buf []byte) error

UnmarshalBinary decodes the object from a binary format.

type SeriesCreate

type SeriesCreate struct {
	Measurement string
	Series      *Series
}

SeriesCreate holds information for a series to create

type SeriesIDs

type SeriesIDs []uint64

SeriesIDs is a convenience type for sorting, checking equality, and doing union and intersection of collections of series ids.

func (SeriesIDs) Equals

func (a SeriesIDs) Equals(other SeriesIDs) bool

Equals assumes that both are sorted.

func (SeriesIDs) Intersect

func (a SeriesIDs) Intersect(other SeriesIDs) SeriesIDs

Intersect returns a new collection of series ids in sorted order that is the intersection of the two. The two collections must already be sorted.

func (SeriesIDs) Len

func (a SeriesIDs) Len() int

func (SeriesIDs) Less

func (a SeriesIDs) Less(i, j int) bool

func (SeriesIDs) Reject

func (a SeriesIDs) Reject(other SeriesIDs) SeriesIDs

Reject returns a new collection of series ids in sorted order with the passed in set removed from the original. This is useful for the NOT operator. The two collections must already be sorted.

func (SeriesIDs) Swap

func (a SeriesIDs) Swap(i, j int)

func (SeriesIDs) Union

func (a SeriesIDs) Union(other SeriesIDs) SeriesIDs

Union returns a new collection of series ids in sorted order that is the union of the two. The two collections must already be sorted.
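
A short sketch of the set operations; both operands must already be sorted:

package main

import (
	"fmt"

	"github.com/influxdata/influxdb/tsdb"
)

func main() {
	a := tsdb.SeriesIDs{1, 2, 3}
	b := tsdb.SeriesIDs{2, 3, 4}

	fmt.Println(a.Intersect(b)) // [2 3]
	fmt.Println(a.Union(b))     // [1 2 3 4]
	fmt.Println(a.Reject(b))    // [1]
	fmt.Println(a.Equals(b))    // false
}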

type Shard

type Shard struct {

	// The writer used by the logger.
	LogOutput io.Writer
	// contains filtered or unexported fields
}

Shard represents a self-contained time series database. An inverted index of the measurement and tag data is kept along with the raw time series data. Data can be split across many shards. The query engine in TSDB is responsible for combining the output of many shards into a single query result.

func NewShard

func NewShard(id uint64, index *DatabaseIndex, path string, walPath string, options EngineOptions) *Shard

NewShard returns a new initialized Shard. walPath doesn't apply to the b1 type index

func (*Shard) Close

func (s *Shard) Close() error

Close shuts down the shard's store.

func (*Shard) ContainsSeries

func (s *Shard) ContainsSeries(seriesKeys []string) (map[string]bool, error)

func (*Shard) CreateIterator

func (s *Shard) CreateIterator(opt influxql.IteratorOptions) (influxql.Iterator, error)

CreateIterator returns an iterator for the data in the shard.

func (*Shard) DeleteMeasurement

func (s *Shard) DeleteMeasurement(name string, seriesKeys []string) error

DeleteMeasurement deletes a measurement and all underlying series.

func (*Shard) DeleteSeries

func (s *Shard) DeleteSeries(seriesKeys []string) error

DeleteSeries deletes a list of series.

func (*Shard) DeleteSeriesRange

func (s *Shard) DeleteSeriesRange(seriesKeys []string, min, max int64) error

DeleteSeriesRange deletes all values for the given seriesKeys between min and max (inclusive).

func (*Shard) DiskSize

func (s *Shard) DiskSize() (int64, error)

DiskSize returns the size on disk of this shard

func (*Shard) ExpandSources

func (s *Shard) ExpandSources(sources influxql.Sources) (influxql.Sources, error)

ExpandSources expands regex sources and removes duplicates. NOTE: sources must be normalized (db and rp set) before calling this function.

func (*Shard) FieldCodec

func (s *Shard) FieldCodec(measurementName string) *FieldCodec

FieldCodec returns the field encoding for a measurement. TODO: this is temporarily exported to make tx.go work. When the query engine gets refactored into the tsdb package this should be removed. No one outside tsdb should know the underlying field encoding scheme.

func (*Shard) FieldDimensions

func (s *Shard) FieldDimensions(sources influxql.Sources) (fields, dimensions map[string]struct{}, err error)

FieldDimensions returns unique sets of fields and dimensions across a list of sources.

func (*Shard) Open

func (s *Shard) Open() error

Open initializes and opens the shard's store.

func (*Shard) Path

func (s *Shard) Path() string

Path returns the path set on the shard when it was created.

func (*Shard) SeriesCount

func (s *Shard) SeriesCount() (int, error)

SeriesCount returns the number of series buckets on the shard.

func (*Shard) SeriesKeys

func (s *Shard) SeriesKeys(opt influxql.IteratorOptions) (influxql.SeriesList, error)

SeriesKeys returns a list of series in the shard.

func (*Shard) SetLogOutput

func (s *Shard) SetLogOutput(w io.Writer)

SetLogOutput sets the writer to which log output will be written. It must not be called after the Open method has been called.

func (*Shard) WritePoints

func (s *Shard) WritePoints(points []models.Point) error

WritePoints will write the raw data points and any new metadata to the index in the shard

func (*Shard) WriteTo

func (s *Shard) WriteTo(w io.Writer) (int64, error)

WriteTo writes the shard's data to w.

type ShardError

type ShardError struct {
	Err error
	// contains filtered or unexported fields
}

A ShardError implements the error interface, and contains extra context about the shard that generated the error.

func (ShardError) Error

func (e ShardError) Error() string

type Shards

type Shards []*Shard

Shards represents a sortable list of shards.

func (Shards) Len

func (a Shards) Len() int

func (Shards) Less

func (a Shards) Less(i, j int) bool

func (Shards) Swap

func (a Shards) Swap(i, j int)

type Store

type Store struct {
	EngineOptions EngineOptions
	Logger        *log.Logger
	// contains filtered or unexported fields
}

Store manages shards and indexes for databases.

func NewStore

func NewStore(path string) *Store

NewStore returns a new store with the given path and a default configuration. The returned store must be initialized by calling Open before using it.
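
A lifecycle sketch tying the Store methods together. The data and WAL directories and the database name are placeholders, and the models.NewPoint call is an assumption about the models package:

package main

import (
	"log"
	"time"

	"github.com/influxdata/influxdb/models"
	"github.com/influxdata/influxdb/tsdb"
)

func main() {
	store := tsdb.NewStore("/tmp/influxdb/data")            // placeholder data directory
	store.EngineOptions.Config.WALDir = "/tmp/influxdb/wal" // placeholder WAL directory

	if err := store.Open(); err != nil {
		log.Fatal(err)
	}
	defer store.Close()

	if err := store.CreateShard("mydb", "default", 1); err != nil {
		log.Fatal(err)
	}

	// Assumed constructor; the signature may differ between versions.
	p, err := models.NewPoint("cpu",
		map[string]string{"host": "server01"},
		map[string]interface{}{"value": 1.0},
		time.Now())
	if err != nil {
		log.Fatal(err)
	}

	if err := store.WriteToShard(1, []models.Point{p}); err != nil {
		log.Fatal(err)
	}
}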

func (*Store) BackupShard

func (s *Store) BackupShard(id uint64, since time.Time, w io.Writer) error

BackupShard will get the shard and have the engine back up data written since the passed-in time to the writer.

func (*Store) Close

func (s *Store) Close() error

Close closes the store and all associated shards. After calling Close accessing shards through the Store will result in ErrStoreClosed being returned.

func (*Store) CreateShard

func (s *Store) CreateShard(database, retentionPolicy string, shardID uint64) error

CreateShard creates a shard with the given id and retention policy on a database.

func (*Store) DatabaseIndex

func (s *Store) DatabaseIndex(name string) *DatabaseIndex

DatabaseIndex returns the index for a database by its name.

func (*Store) DatabaseIndexN

func (s *Store) DatabaseIndexN() int

DatabaseIndexN returns the number of database indices in the store.

func (*Store) Databases

func (s *Store) Databases() []string

Databases returns all the databases in the indexes

func (*Store) DeleteDatabase

func (s *Store) DeleteDatabase(name string) error

DeleteDatabase will close all shards associated with a database and remove the directory and files from disk.

func (*Store) DeleteMeasurement

func (s *Store) DeleteMeasurement(database, name string) error

DeleteMeasurement removes a measurement and all associated series from a database.

func (*Store) DeleteRetentionPolicy

func (s *Store) DeleteRetentionPolicy(database, name string) error

DeleteRetentionPolicy will close all shards associated with the provided retention policy, remove the retention policy directories on both the DB and WAL, and remove all shard files from disk.

func (*Store) DeleteSeries

func (s *Store) DeleteSeries(database string, sources []influxql.Source, condition influxql.Expr) error

DeleteSeries loops through the local shards and deletes the series data and metadata for the passed in series keys

func (*Store) DeleteShard

func (s *Store) DeleteShard(shardID uint64) error

DeleteShard removes a shard from disk.

func (*Store) DiskSize

func (s *Store) DiskSize() (int64, error)

DiskSize returns the size of all the shard files in bytes. This size does not include the WAL size.

func (*Store) ExpandSources

func (s *Store) ExpandSources(sources influxql.Sources) (influxql.Sources, error)

ExpandSources expands sources against all local shards.

func (*Store) IteratorCreator

func (s *Store) IteratorCreator(shards []uint64) (influxql.IteratorCreator, error)

func (*Store) IteratorCreators

func (s *Store) IteratorCreators() influxql.IteratorCreators

IteratorCreators returns a set of all local shards as iterator creators.

func (*Store) Measurement

func (s *Store) Measurement(database, name string) *Measurement

Measurement returns a measurement by name from the given database.

func (*Store) Open

func (s *Store) Open() error

Open initializes the store, creating all necessary directories, loading all shards and indexes and initializing periodic maintenance of all shards.

func (*Store) Path

func (s *Store) Path() string

Path returns the store's root path.

func (*Store) SetLogOutput

func (s *Store) SetLogOutput(w io.Writer)

SetLogOutput sets the writer to which all logs are written. It must not be called after Open is called.

func (*Store) Shard

func (s *Store) Shard(id uint64) *Shard

Shard returns a shard by id.

func (*Store) ShardIDs

func (s *Store) ShardIDs() []uint64

ShardIDs returns a slice of all ShardIDs under management.

func (*Store) ShardIteratorCreator

func (s *Store) ShardIteratorCreator(id uint64) influxql.IteratorCreator

ShardIteratorCreator returns an iterator creator for a shard.

func (*Store) ShardN

func (s *Store) ShardN() int

ShardN returns the number of shards in the store.

func (*Store) ShardRelativePath

func (s *Store) ShardRelativePath(id uint64) (string, error)

ShardRelativePath will return the relative path to the shard. i.e. <database>/<retention>/<id>

func (*Store) Shards

func (s *Store) Shards(ids []uint64) []*Shard

Shards returns a list of shards by id.

func (*Store) WriteToShard

func (s *Store) WriteToShard(shardID uint64, points []models.Point) error

WriteToShard writes a list of points to a shard identified by its ID.

type TagFilter

type TagFilter struct {
	Op    influxql.Token
	Key   string
	Value string
	Regex *regexp.Regexp
}

TagFilter represents a tag filter when looking up other tags or measurements.

Directories

Path Synopsis
Package meta is a generated protocol buffer package.
