common

package
v0.0.0-...-b6489af
Published: Oct 21, 2018 License: MIT Imports: 16 Imported by: 0

Documentation

Index

Constants

const (
	DefaultDateTimeStart = "2018-01-01T00:00:00Z"
	DefaultDateTimeEnd   = "2018-01-02T00:00:00Z"
)
const DatasetSizeMarker = "dataset-size:"

Variables

var DatasetSizeMarkerRE = regexp.MustCompile(DatasetSizeMarker + `(\d+),(\d+)`)
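As a sketch of how a dataset-size line can be parsed with this regexp (a standalone program mirroring the constant and variable above, not the package's own code):

```go
package main

import (
	"fmt"
	"regexp"
)

// Mirror of the package's DatasetSizeMarker and DatasetSizeMarkerRE.
const DatasetSizeMarker = "dataset-size:"

var DatasetSizeMarkerRE = regexp.MustCompile(DatasetSizeMarker + `(\d+),(\d+)`)

func main() {
	// A size line in the form <marker><points>,<values>.
	line := "dataset-size:1000,8000"
	m := DatasetSizeMarkerRE.FindStringSubmatch(line)
	if m != nil {
		fmt.Println("points:", m[1], "values:", m[2]) // points: 1000 values: 8000
	}
}
```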

Functions

func NewSerializerInflux

func NewSerializerInflux() *serializerInflux

func RandChoice

func RandChoice(choices [][]byte) []byte

func Seed

func Seed(seed int64)

Seed uses the provided seed value to initialize the generator to a deterministic state.

Types

type ClampedRandomWalkDistribution

type ClampedRandomWalkDistribution struct {
	Step Distribution
	Min  float64
	Max  float64

	State float64 // optional
}

ClampedRandomWalkDistribution is a stateful random walk, with minimum and maximum bounds. Initialize it with a Min, Max, and an underlying distribution, which is used to compute the new step value.

func CWD

func CWD(step Distribution, min, max, state float64) *ClampedRandomWalkDistribution

func (*ClampedRandomWalkDistribution) Advance

func (d *ClampedRandomWalkDistribution) Advance()

Advance computes the next value of this distribution and stores it.

func (*ClampedRandomWalkDistribution) Get

Get returns the last computed value for this distribution.
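The clamping behavior can be sketched as follows. This is a self-contained mirror of the documented structs, not the package's actual implementation; the normal-distribution step and its parameters are assumptions for illustration:

```go
package main

import (
	"fmt"
	"math/rand"
)

// Distribution mirrors the package's interface.
type Distribution interface {
	Advance()
	Get() float64
}

// NormalDistribution mirrors the documented struct; value is the cached sample.
type NormalDistribution struct {
	Mean, StdDev, value float64
}

func (d *NormalDistribution) Advance()     { d.value = rand.NormFloat64()*d.StdDev + d.Mean }
func (d *NormalDistribution) Get() float64 { return d.value }

// ClampedRandomWalkDistribution mirrors the documented struct: each Advance
// adds a fresh step to State, then clamps the result into [Min, Max].
type ClampedRandomWalkDistribution struct {
	Step     Distribution
	Min, Max float64
	State    float64
}

func (d *ClampedRandomWalkDistribution) Advance() {
	d.Step.Advance()
	d.State += d.Step.Get()
	if d.State < d.Min {
		d.State = d.Min
	}
	if d.State > d.Max {
		d.State = d.Max
	}
}
func (d *ClampedRandomWalkDistribution) Get() float64 { return d.State }

func main() {
	d := &ClampedRandomWalkDistribution{
		Step:  &NormalDistribution{Mean: 0, StdDev: 5},
		Min:   0,
		Max:   10,
		State: 5,
	}
	for i := 0; i < 3; i++ {
		d.Advance()
		fmt.Println(d.Get()) // always within [0, 10]
	}
}
```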

type ConstantDistribution

type ConstantDistribution struct {
	State float64
}

func (*ConstantDistribution) Advance

func (d *ConstantDistribution) Advance()

func (*ConstantDistribution) Get

func (d *ConstantDistribution) Get() float64

type Distribution

type Distribution interface {
	Advance()
	Get() float64 // should be idempotent
}

Distribution provides an interface to model a statistical distribution.

type MonotonicRandomWalkDistribution

type MonotonicRandomWalkDistribution struct {
	Step  Distribution
	State float64
}

MonotonicRandomWalkDistribution is a stateful random walk that only increases. Initialize it with a State and an underlying distribution, which is used to compute the new step value. The sign of each value drawn from the underlying distribution is always made positive.

func (*MonotonicRandomWalkDistribution) Advance

func (d *MonotonicRandomWalkDistribution) Advance()

Advance computes the next value of this distribution and stores it.

func (*MonotonicRandomWalkDistribution) Get

type MonotonicUpDownRandomWalkDistribution

type MonotonicUpDownRandomWalkDistribution struct {
	Step  Distribution
	State float64
	Min   float64
	Max   float64
	// contains filtered or unexported fields
}

MonotonicUpDownRandomWalkDistribution is a stateful random walk that continually increases and decreases. Initialize it with State, Min, and Max, and an underlying distribution, which is used to compute the new step value.

func (*MonotonicUpDownRandomWalkDistribution) Advance

Advance computes the next value of this distribution and stores it.

func (*MonotonicUpDownRandomWalkDistribution) Get

type NormalDistribution

type NormalDistribution struct {
	Mean   float64
	StdDev float64
	// contains filtered or unexported fields
}

NormalDistribution models a normal distribution.

func ND

func ND(mean, stddev float64) *NormalDistribution

func (*NormalDistribution) Advance

func (d *NormalDistribution) Advance()

Advance advances this distribution. Since a normal distribution is stateless, this just overwrites the internal cache value.

func (*NormalDistribution) Get

func (d *NormalDistribution) Get() float64

Get returns the last computed value for this distribution.

type Point

type Point struct {
	MeasurementName []byte
	TagKeys         [][]byte
	TagValues       [][]byte
	FieldKeys       [][]byte
	FieldValues     []interface{}
	Timestamp       *time.Time
	// contains filtered or unexported fields
}

Point wraps a single data point. It stores database-agnostic data representing one point in time of one measurement.

Internally, Point uses byte slices instead of strings to try to minimize overhead.

func MakeUsablePoint

func MakeUsablePoint() *Point

MakeUsablePoint allocates a new Point ready for use by a Simulator.

func (*Point) AppendField

func (p *Point) AppendField(key []byte, value interface{})

func (*Point) AppendTag

func (p *Point) AppendTag(key, value []byte)

func (*Point) Reset

func (p *Point) Reset()

func (*Point) SetMeasurementName

func (p *Point) SetMeasurementName(s []byte)

func (*Point) SetTimestamp

func (p *Point) SetTimestamp(t *time.Time)
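The typical build-up of a Point can be sketched like this. It is a standalone mirror of the documented exported fields, assuming the Append* methods simply grow the parallel key/value slices; the package's actual methods may do more:

```go
package main

import (
	"fmt"
	"time"
)

// Point mirrors the documented struct's exported fields.
type Point struct {
	MeasurementName []byte
	TagKeys         [][]byte
	TagValues       [][]byte
	FieldKeys       [][]byte
	FieldValues     []interface{}
	Timestamp       *time.Time
}

// AppendTag mirrors the documented method: tags live in parallel slices.
func (p *Point) AppendTag(key, value []byte) {
	p.TagKeys = append(p.TagKeys, key)
	p.TagValues = append(p.TagValues, value)
}

// AppendField mirrors the documented method: fields live in parallel slices.
func (p *Point) AppendField(key []byte, value interface{}) {
	p.FieldKeys = append(p.FieldKeys, key)
	p.FieldValues = append(p.FieldValues, value)
}

func main() {
	ts := time.Unix(0, 1514764800000000000).UTC()
	p := &Point{MeasurementName: []byte("cpu"), Timestamp: &ts}
	p.AppendTag([]byte("hostname"), []byte("host_01"))
	p.AppendField([]byte("usage_user"), 99.5)
	fmt.Printf("%s,%s=%s %s=%v %d\n",
		p.MeasurementName, p.TagKeys[0], p.TagValues[0],
		p.FieldKeys[0], p.FieldValues[0], p.Timestamp.UnixNano())
}
```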

type RandomWalkDistribution

type RandomWalkDistribution struct {
	Step Distribution

	State float64 // optional
}

RandomWalkDistribution is a stateful random walk. Initialize it with an underlying distribution, which is used to compute the new step value.

func WD

func (*RandomWalkDistribution) Advance

func (d *RandomWalkDistribution) Advance()

Advance computes the next value of this distribution and stores it.

func (*RandomWalkDistribution) Get

Get returns the last computed value for this distribution.

type Serializer

type Serializer interface {
	SerializePoint(w io.Writer, p *Point) error
	SerializeSize(w io.Writer, points int64, values int64) error
	SerializeToCSV(w io.Writer, p *Point) error
}

type SerializerCassandra

type SerializerCassandra struct {
}

func NewSerializerCassandra

func NewSerializerCassandra() *SerializerCassandra

func (*SerializerCassandra) SerializePoint

func (m *SerializerCassandra) SerializePoint(w io.Writer, p *Point) (err error)

SerializeCassandra writes Point data to the given writer, conforming to the Cassandra query format.

This function writes output that looks like:

INSERT INTO <tablename> (series_id, ts_ns, value) VALUES (<series_id>, <timestamp_nanoseconds>, <field value>)

where series_id looks like:

<measurement>,<tagset>#<field name>#<time shard>

For example:

INSERT INTO all_series (series_id, timestamp_ns, value) VALUES ('cpu,hostname=host_01#user#2016-01-01', 12345, 42.1)\n
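Assembling the series_id string described above can be sketched with fmt.Sprintf (seriesID is a hypothetical helper for illustration; the package's actual code may differ):

```go
package main

import "fmt"

// seriesID assembles <measurement>,<tagset>#<field name>#<time shard>,
// following the series_id format documented for the Cassandra serializer.
func seriesID(measurement, tagset, field, timeShard string) string {
	return fmt.Sprintf("%s,%s#%s#%s", measurement, tagset, field, timeShard)
}

func main() {
	id := seriesID("cpu", "hostname=host_01", "user", "2016-01-01")
	fmt.Printf("INSERT INTO all_series (series_id, timestamp_ns, value) VALUES ('%s', %d, %v)\n",
		id, int64(12345), 42.1)
}
```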

func (*SerializerCassandra) SerializeSize

func (s *SerializerCassandra) SerializeSize(w io.Writer, points int64, values int64) error

func (*SerializerCassandra) SerializeToCSV

func (s *SerializerCassandra) SerializeToCSV(w io.Writer, p *Point) error

type SerializerElastic

type SerializerElastic struct {
}

func NewSerializerElastic

func NewSerializerElastic() *SerializerElastic

func (*SerializerElastic) SerializePoint

func (s *SerializerElastic) SerializePoint(w io.Writer, p *Point) error

SerializeESBulk writes Point data to the given writer, conforming to the ElasticSearch bulk load protocol.

This function writes output that looks like:

<action line>
<tags, fields, and timestamp>

For example:

{ "index" : { "_index" : "measurement_otqio", "_type" : "point" } }\n
{ "tag_launx": "btkuw", "tag_gaijk": "jiypr", "field_wokxf": 0.08463898963964356, "field_zqstf": -0.043641533500086316, "timestamp": 171300 }\n

TODO(rw): Speed up this function. The bulk of time is spent in strconv.

func (*SerializerElastic) SerializeSize

func (s *SerializerElastic) SerializeSize(w io.Writer, points int64, values int64) error

func (*SerializerElastic) SerializeToCSV

func (s *SerializerElastic) SerializeToCSV(w io.Writer, p *Point) error

type SerializerMongo

type SerializerMongo struct {
}

func NewSerializerMongo

func NewSerializerMongo() *SerializerMongo

func (*SerializerMongo) SerializePoint

func (s *SerializerMongo) SerializePoint(w io.Writer, p *Point) (err error)

SerializeMongo writes Point data to the given writer, conforming to the mongo_serialization FlatBuffers format.

func (*SerializerMongo) SerializeSize

func (s *SerializerMongo) SerializeSize(w io.Writer, points int64, values int64) error

func (*SerializerMongo) SerializeToCSV

func (s *SerializerMongo) SerializeToCSV(w io.Writer, p *Point) error

type SerializerOpenTSDB

type SerializerOpenTSDB struct {
}

func NewSerializerOpenTSDB

func NewSerializerOpenTSDB() *SerializerOpenTSDB

func (*SerializerOpenTSDB) SerializePoint

func (m *SerializerOpenTSDB) SerializePoint(w io.Writer, p *Point) (err error)

SerializeOpenTSDBBulk writes Point data to the given writer, conforming to the OpenTSDB bulk load protocol (the /api/put endpoint). Note that no line has a trailing comma. Downstream programs are responsible for creating batches for POSTing using a JSON array, and for adding any trailing commas (to conform to JSON). We use only millisecond-precision timestamps.

N.B. OpenTSDB only supports millisecond or second resolution timestamps. N.B. OpenTSDB millisecond timestamps must be 13 digits long. N.B. OpenTSDB only supports floating-point field values.

This function writes JSON lines that look like: { <metric>, <timestamp>, <value>, <tags> }

For example: { "metric": "cpu.usage_user", "timestamp": 1451606400000, "value": 99.5170917755353770, "tags": { "hostname": "host_01", "region": "ap-southeast-2", "datacenter": "ap-southeast-2a" } }

func (*SerializerOpenTSDB) SerializeSize

func (s *SerializerOpenTSDB) SerializeSize(w io.Writer, points int64, values int64) error

func (*SerializerOpenTSDB) SerializeToCSV

func (s *SerializerOpenTSDB) SerializeToCSV(w io.Writer, p *Point) error

type SerializerTSDB

type SerializerTSDB struct {
}

func NewSerializerTSDB

func NewSerializerTSDB() *SerializerTSDB

func (*SerializerTSDB) SerializePoint

func (s *SerializerTSDB) SerializePoint(w io.Writer, p *Point) error

Outputs TSDB data. We refer to a field as a metric, so the output is one row per field, where the 'MeasurementName' is just another label:

<measurement>_<field name>#labelKey=labelValue...#fieldValue#timestamp

For example:

cpu_usage_user#measurement=cpu,hostname=host_0,region=us-west-1,datacenter=us-west-1b,rack=1,os=Ubuntu16.10#84.123#1514764800000

func (*SerializerTSDB) SerializeSize

func (s *SerializerTSDB) SerializeSize(w io.Writer, points int64, values int64) error

func (*SerializerTSDB) SerializeToCSV

func (s *SerializerTSDB) SerializeToCSV(w io.Writer, p *Point) error

type SerializerTimescaleBin

type SerializerTimescaleBin struct {
}

func NewSerializerTimescaleBin

func NewSerializerTimescaleBin() *SerializerTimescaleBin

func (*SerializerTimescaleBin) SerializePoint

func (t *SerializerTimescaleBin) SerializePoint(w io.Writer, p *Point) (err error)

SerializeTimeScaleBin writes Point data to the given writer, using a binary GOB-encoded format.

func (*SerializerTimescaleBin) SerializeSize

func (s *SerializerTimescaleBin) SerializeSize(w io.Writer, points int64, values int64) error

func (*SerializerTimescaleBin) SerializeToCSV

func (s *SerializerTimescaleBin) SerializeToCSV(w io.Writer, p *Point) error

type SerializerTimescaleSql

type SerializerTimescaleSql struct {
}

func NewSerializerTimescaleSql

func NewSerializerTimescaleSql() *SerializerTimescaleSql

func (*SerializerTimescaleSql) SerializePoint

func (s *SerializerTimescaleSql) SerializePoint(w io.Writer, p *Point) (err error)

SerializeTimeScale writes Point data to the given writer, conforming to the TimeScale insert format.

This function writes output that looks like: INSERT INTO <tablename> (time, <tag_name list>, <field_name list>) VALUES (<timestamp in nanoseconds>, <tag values list>, <field values>)

func (*SerializerTimescaleSql) SerializeSize

func (s *SerializerTimescaleSql) SerializeSize(w io.Writer, points int64, values int64) error

func (*SerializerTimescaleSql) SerializeToCSV

func (s *SerializerTimescaleSql) SerializeToCSV(w io.Writer, p *Point) error

type SimulatedMeasurement

type SimulatedMeasurement interface {
	Tick(time.Duration)
	ToPoint(*Point) bool // returns true if the point is properly filled; false means the point should be skipped
}

SimulatedMeasurement simulates one measurement (e.g. Redis for DevOps).

type Simulator

type Simulator interface {
	Total() int64
	SeenPoints() int64
	SeenValues() int64
	Finished() bool
	Next(*Point)
}

Simulator simulates a use case.
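A consumer loop over a Simulator can be sketched as follows; countingSim is a toy implementation invented for illustration, and Point here is a stand-in for the package's Point type:

```go
package main

import "fmt"

// Point stands in for the package's Point type in this sketch.
type Point struct{ Value float64 }

// Simulator mirrors the documented interface.
type Simulator interface {
	Total() int64
	SeenPoints() int64
	SeenValues() int64
	Finished() bool
	Next(*Point)
}

// countingSim is a toy implementation that emits a fixed number of points.
type countingSim struct {
	total, seen int64
}

func (s *countingSim) Total() int64      { return s.total }
func (s *countingSim) SeenPoints() int64 { return s.seen }
func (s *countingSim) SeenValues() int64 { return s.seen } // one value per point here
func (s *countingSim) Finished() bool    { return s.seen >= s.total }
func (s *countingSim) Next(p *Point) {
	p.Value = float64(s.seen)
	s.seen++
}

func main() {
	var sim Simulator = &countingSim{total: 3}
	p := &Point{} // reused across iterations, as with MakeUsablePoint
	for !sim.Finished() {
		sim.Next(p)
		fmt.Println(p.Value)
	}
	fmt.Println("points:", sim.SeenPoints()) // points: 3
}
```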

type TwoStateDistribution

type TwoStateDistribution struct {
	Low   float64
	High  float64
	State float64
}

TwoStateDistribution randomly chooses its state from one of two values.

func TSD

func TSD(low float64, high float64, state float64) *TwoStateDistribution

func (*TwoStateDistribution) Advance

func (d *TwoStateDistribution) Advance()

func (*TwoStateDistribution) Get

func (d *TwoStateDistribution) Get() float64
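The two-state behavior can be sketched as below. This is a standalone mirror of the documented struct; flipping to Low or High with equal probability is an assumption, not necessarily the package's actual rule:

```go
package main

import (
	"fmt"
	"math/rand"
)

// TwoStateDistribution mirrors the documented struct. Advance flips State
// to Low or High with equal probability (an illustrative assumption).
type TwoStateDistribution struct {
	Low, High, State float64
}

func (d *TwoStateDistribution) Advance() {
	if rand.Intn(2) == 0 {
		d.State = d.Low
	} else {
		d.State = d.High
	}
}
func (d *TwoStateDistribution) Get() float64 { return d.State }

func main() {
	d := &TwoStateDistribution{Low: 0, High: 1}
	for i := 0; i < 5; i++ {
		d.Advance()
		fmt.Println(d.Get()) // always exactly Low or High
	}
}
```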

type UniformDistribution

type UniformDistribution struct {
	Low  float64
	High float64
	// contains filtered or unexported fields
}

UniformDistribution models a uniform distribution.

func UD

func UD(low, high float64) *UniformDistribution

func (*UniformDistribution) Advance

func (d *UniformDistribution) Advance()

Advance advances this distribution. Since a uniform distribution is stateless, this just overwrites the internal cache value.

func (*UniformDistribution) Get

func (d *UniformDistribution) Get() float64

Get returns the last computed value for this distribution.
