miel


Documentation

Overview

Package miel provides the v1 backwards compatible Mistral expression language. This package is not versioned in the canonical Go way, because MiEL shall remain backwards compatible and available for the entire lifetime of the module, if feasible. Future v2, v3, etc. versions may provide alternative domain-specific languages (DSLs) with different runtime characteristics or which can be compiled in a different way.

Index

Constants

const AlignGroupStart = true

AlignGroupStart represents the true literal to improve readability. For the Group* functions, the parameter determines whether all X values of each group are set to the natural start of the grouping (e.g. the first unix timestamp of the day at 00:00:00) after applying the drift.

const DefaultGrid = 600

DefaultGrid is 600 seconds.

const NoDrift = 0

NoDrift represents the 0 literal to improve readability. The drift is usually given in Seconds. The drift value is added to each timestamp, so that a drift of the points can be respected (e.g. due to start- or end-aggregated data points).

Variables

This section is empty.

Functions

func BucketNames

func BucketNames(ctx context.Context, bucketIDs []UUID) []string

BucketNames translates the given buckets identified by their identifiers, if possible. If no translation exists, the default name is returned. If no metadata is available, the string representation of the ID is returned.

func MatchLanguage

func MatchLanguage(ctx context.Context, languageTags ...string) string

MatchLanguage inspects the request (Accept-Language) and context and matches that against the given IETF BCP 47 language tags (like en, en-US, es-419 or az-Arab). If a tag cannot be parsed, the function panics. If none of the given tags match the requested language, the first tag is used as the default and returned. If no tags are given, the empty string is returned.

See also https://go.dev/blog/matchlang to get a summary about the topic and the currently used underlying implementation.
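
A minimal usage sketch. The short code sketches throughout this documentation are illustrations, not part of the API; they assume the standard library imports (context, net/http, time), that this package is imported as miel, and that the context was prepared via the With* functions documented below. The language tags here are arbitrary examples:

// pickLanguage returns the best matching translation language for the
// current request; "en" acts as the fallback if nothing matches.
func pickLanguage(ctx context.Context) string {
	return miel.MatchLanguage(ctx, "en", "de", "es-419")
}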

func MetricNames

func MetricNames(ctx context.Context, metricIDs []UUID) []string

MetricNames translates the given metrics identified by their identifiers, if possible. If no translation exists, the default name is returned. If no metadata is available, the string representation of the ID is returned.

func Request

func Request(ctx context.Context, v interface{})

Request parses the body from the given context into the given pointer. It panics for illegal arguments. Currently, application/json and application/xml are supported. The behavior of subsequent calls is undefined.

func Response

func Response(ctx context.Context, v interface{})

Response marshals the given value as JSON. If the first field has an xml tag, the response is treated as XML. It panics for illegal arguments. The behavior of subsequent calls is undefined.
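
A sketch of how Request and Response are typically paired inside an evaluator; the query type and its field names are hypothetical:

// seriesQuery is a hypothetical input model filled by Request from the
// request body (application/json or application/xml).
type seriesQuery struct {
	Buckets []miel.UUID `json:"buckets"`
	Metric  miel.UUID   `json:"metric"`
	Range   miel.Range  `json:"range"`
}

func evaluate(ctx context.Context) {
	var in seriesQuery
	miel.Request(ctx, &in) // panics on illegal arguments

	// ... query and transform the data here ...
	var series miel.FPoints

	// Marshalled as JSON, since the first field carries no xml tag.
	miel.Response(ctx, struct {
		Series miel.FPoints `json:"series"`
	}{Series: series})
}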

func Timezone

func Timezone(ctx context.Context) *time.Location

Timezone resolves the IANA timezone and location in the following order:

  • UTC
  • Takes the HTTP request parameter X-TZ, which may contain an IANA timezone

See also the TZ type, which can be used to transport and parse IANA timezone information. The purpose is that the request can set the time zone for calculations, especially for grouping by day, month or year, which depends on the customer's (tax) location. Intentionally, the server location is not the default, because its location is not related to the data being processed, especially when moving between cloud data centers.

Please keep in mind that offsets are not time zones.
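
A short sketch of how the resolved location feeds into the grouping helpers; it assumes the points are already sorted ascending by X:

// dailyAverages groups a raw series by calendar day in the caller's
// time zone and reduces every day to its average Y value.
func dailyAverages(ctx context.Context, pts miel.Points) miel.Points {
	loc := miel.Timezone(ctx) // UTC unless the request says otherwise
	return pts.
		GroupByDay(miel.NoDrift, miel.AlignGroupStart, loc).
		Reduce(miel.AvgY)
}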

func ViewportWidth

func ViewportWidth(ctx context.Context) int64

ViewportWidth resolves a hint for downsampling a data series suited for displaying within a chart. The default is 512 and can be overridden by setting the Viewport-Width HTTP header.

func WithDB

func WithDB(ctx context.Context, db DB) context.Context

WithDB returns a new Context annotated with the given DB.

func WithHttpRequest

func WithHttpRequest(ctx context.Context, r *http.Request) context.Context

WithHttpRequest returns a new Context annotated with the given request.

func WithHttpResponse

func WithHttpResponse(ctx context.Context, w http.ResponseWriter) context.Context

WithHttpResponse returns a new Context annotated with the given response writer.
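
A hypothetical wiring sketch showing how a hosting environment could prepare the context before running an evaluator; how the DB instance is obtained is environment specific and assumed here:

// withMiel annotates the request context with everything the MiEL
// helpers (Query, Request, Response, Timezone, ...) expect.
func withMiel(db miel.DB, eval miel.Evaluator) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		ctx := r.Context()
		ctx = miel.WithDB(ctx, db)
		ctx = miel.WithHttpRequest(ctx, r)
		ctx = miel.WithHttpResponse(ctx, w)
		eval(ctx)
	}
}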

Types

type AggregateFunc

type AggregateFunc int

AggregateFunc is an enum-like type to identify an aggregate function for the Group.Reduce or Group.ReduceTransposed functions.

const (
	// MinY returns the minimum Y value.
	MinY AggregateFunc = iota + 1

	// MaxY returns the maximum Y value.
	MaxY

	// AvgY sums all Y values up and performs a float64 division with rounding.
	AvgY

	// SumY returns the sum of all Y values.
	SumY

	// Count returns the amount of entries.
	Count
)

func (AggregateFunc) Valid

func (f AggregateFunc) Valid() bool

Valid determines if AggregateFunc defines a valid enum. See also MinY, MaxY, AvgY, SumY and Count.
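
A small sketch that guards a client-supplied aggregate before using it; the fallback to AvgY is an arbitrary choice for illustration:

// toAggregate converts an untrusted integer (e.g. from a request body)
// into an AggregateFunc, falling back to AvgY if it is out of range.
func toAggregate(raw int) miel.AggregateFunc {
	if f := miel.AggregateFunc(raw); f.Valid() {
		return f
	}
	return miel.AvgY
}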

type Bucket

type Bucket struct {
	ID           UUID                   `json:"id"`
	Name         string                 `json:"name"`
	Description  string                 `json:"description"`
	Timezone     string                 `json:"timezone"`
	Translations map[string]Translation `json:"translations"`
}

A Bucket represents an abstract namespace for a domain object containing unique (time) series data related to a specific metric. Usually, a Bucket represents a physical device generating data, like a wind turbine. Therefore, an attached time zone and an individual name and description usually make sense. However, it may also contain other virtual metrics, like calculated business data for a customer (the time zone may then be tax related).

func (Bucket) LanguageTags

func (b Bucket) LanguageTags() []string

LanguageTags returns an alphabetically sorted list of available translations. See also MatchLanguage.

func (Bucket) String

func (b Bucket) String() string

String returns the name.

func (Bucket) Translated

func (b Bucket) Translated() map[string]Translation

Translated is a getter for Translations.

type DB

type DB interface {
	// Bucket loads the metadata about the bucket, which usually represents a device that generates a bunch of
	// time series data. Returns false if no such bucket exists. Panics for any other failure.
	Bucket(id UUID) (Bucket, bool)

	// Metric loads the metric metadata, which describes a specific time series and is required to interpret
	// the meaning of x and y values. Returns false if no such metric exists. Panics for any other failure.
	Metric(id UUID) (Metric, bool)

	// ScaleOf returns the scale for the given metric ID or returns 1 if not found. A multiple of 10,
	// usually in the range of 1, 10, 100 or 1000.
	ScaleOf(metricID UUID) int64

	// FindRanges returns all metrics which have at least a single data point and therefore represent a kind of
	// coverage. If multiple buckets (devices) have the same metric, the overall min/max keys are determined
	// and returned. The returned ranges are sorted by metric id.
	FindRanges(bucketIDs []UUID) []DataRange

	// MinMax returns the minimum and maximum timestamp for the given metric within the denoted bucket (device).
	MinMax(bucketID, metricID UUID) DataRange

	// FindInRange loads those (time) series of the given buckets, identified by the metric id, which exist.
	FindInRange(bucketIDs []UUID, metricID UUID, r Interval) Group
}

DB describes the contract to the Mistral database and provides a bunch of query methods.

func Query

func Query(ctx context.Context) DB

Query unpacks the context specific DB.
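
A sketch of a typical query flow: unpack the DB from the context, optionally inspect the coverage, then load the series. The clamping step is only an example; the IDs and interval are assumed to come from the parsed request:

func loadSeries(ctx context.Context, buckets miel.UUIDs, metric miel.UUID, r miel.Interval) miel.Group {
	db := miel.Query(ctx)

	// Example: clamp the requested interval to the coverage of the
	// metric within the first bucket.
	if mm := db.MinMax(buckets.First(), metric); mm.Valid && mm.MinX > r.Min {
		r.Min = mm.MinX
	}

	return db.FindInRange(buckets, metric, r)
}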

type DataRange

type DataRange struct {
	ID    UUID
	MinX  int64
	MaxX  int64
	Valid bool
}

A DataRange defines a metric id and the range of min/max X data it provides. Usually in Seconds since Unix Epoch. The meaning of ID is undefined: it may be the zero UUID or refer to a bucket, a metric, or a bucket specific metric series. Inspect the documentation of the exact method which creates such a DataRange.

type Evaluator

type Evaluator func(ctx context.Context)

Evaluator specifies the func type for the Start callback which performs the actual query.

type FGroup

type FGroup []FPoints

FGroup is just a slice of FPoints elements.

func (FGroup) Join

func (p FGroup) Join(other FGroup) FGroup

Join appends the given other series to this series and returns the new slice.

type FPoint

type FPoint struct {
	// X is usually in milliseconds since Unix Epoch.
	X int64 `json:"x"`

	// Y usually already has the pre-multiplied scale removed and is ready to display.
	Y float64 `json:"y"`
}

FPoint is usually used for serialization into JSON, ready to be displayed by consumer agents, e.g. written in Java or JavaScript. Intentionally, this type does not provide any built-in operations, because it should be the last step of processing, if it cannot be avoided entirely. Floating point numbers should only be used for display purposes and not for calculations, to avoid rounding errors which become significant when calculating with billions of numbers.

type FPoints

type FPoints []FPoint

FPoints is just a slice of FPoint elements. See also FPoint.

type Group

type Group []Points

Group is a slice of (time) series points.

func (Group) First

func (p Group) First() Points

First returns the first series or panics.

func (Group) ForEach

func (p Group) ForEach(f func(pts Points) Points) Group

ForEach allows an in-line modification of each point series inside a Group. For example, one can SnapToGrid, then create a GroupByDay aggregation with a reduction into a series using the AvgY operator, as sketched below. A lot of operations can be applied in-place to reduce memory footprint and pressure. See also ForEachF.
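
A sketch of the pipeline mentioned above; the grid, drift and alignment values are simply the package defaults:

// dailyAvg normalizes each series onto the default grid and then
// reduces it to one averaged point per day.
func dailyAvg(g miel.Group, loc *time.Location) miel.Group {
	return g.ForEach(func(pts miel.Points) miel.Points {
		return pts.
			SnapToGrid(miel.DefaultGrid).
			GroupByDay(miel.NoDrift, miel.AlignGroupStart, loc).
			Reduce(miel.AvgY)
	})
}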

func (Group) ForEachF

func (p Group) ForEachF(f func(pts Points) FPoints) FGroup

ForEachF is like ForEach but allows a transformation into a floating point series, resulting in a floating point group of series. The transformation is guaranteed to perform additional heap allocations and should therefore only be used after downsampling.

func (Group) Reduce

func (p Group) Reduce(f AggregateFunc) Points

Reduce applies an AggregateFunc on the group and returns a single series again. Technically, it loops over each inner group on an "as is" basis. Example:

[
  [(1|2), (2|3), (4|5)],
  [(5|6), (7|8), (9|10)],
  [(11|12), (13|14), (15|16)]
]
=> f is called as follows:
  [(1|2), (2|3), (4|5)]
  [(5|6), (7|8), (9|10)]
  [(11|12), (13|14), (15|16)]

Note that the X value is always the first of each group. Also note that this is surprising for MaxY, because it returns the "wrong" X (the first, as defined).

func (Group) ReduceTransposed

func (p Group) ReduceTransposed(f AggregateFunc) Points

ReduceTransposed applies an AggregateFunc on the group and returns a single series again. It transposes the points of each group into a new artificial group by X and invokes f on it. It requires that each group is sorted ascending by X; the result on unsorted groups is undefined. Example:

[
  [(1|2), (2|3), (4|5)],
  [(1|6), (2|8), (8|10)],
  [(0|12), (2|14), (4|16)]
]
=> f is called as follows:
  [(0|12)]
  [(1|2), (1|6)]
  [(2|3), (2|8), (2|14)]
  [(4|5), (4|16)]
  [(8|10)]

type Interval

type Interval struct {
	Min, Max int64
}

Interval contains the min and max unix timestamps, which always have 'inclusive' semantics. Usually in Seconds since Unix Epoch. See also Range, TZ and Timezone.

type Intrinsics

type Intrinsics interface {
	// GroupByDay is documented at Points.GroupByDay.
	GroupByDay(p Points, drift int64, align bool, location *time.Location) Group
	// GroupByYear is documented at Points.GroupByYear.
	GroupByYear(p Points, drift int64, align bool, location *time.Location) Group
	// GroupByMonth is documented at Points.GroupByMonth.
	GroupByMonth(p Points, drift int64, align bool, location *time.Location) Group
	// Scale is documented at Points.Scale.
	Scale(p Points, x, y int64) Points
	// Limit is documented at Points.Limit.
	Limit(p Points, min, max int64) Points
	// SnapToGrid is documented at Points.SnapToGrid.
	SnapToGrid(p Points, divisor int64) Points
	// PointsReduce is documented at Points.Reduce.
	PointsReduce(p Points, f AggregateFunc) (int64, bool)
	// M4 is documented at Points.M4.
	M4(p Points, width int64) Points
	// GroupReduce is documented at Group.Reduce.
	GroupReduce(g Group, f AggregateFunc) Points
	// GroupReduceTransposed is documented at Group.ReduceTransposed.
	GroupReduceTransposed(g Group, f AggregateFunc) Points
}

Intrinsics defines the vtable for all required math primitives used by the MiEL v1 api.

var Math Intrinsics = mathStub{}

Math provides a polymorphic entry point (vtable) for a bunch of intrinsically optimized math implementations.

type Metric

type Metric struct {
	ID           UUID                   `json:"id"`
	Name         string                 `json:"name"`
	Description  string                 `json:"description"`
	Scale        int64                  `json:"scale"`
	Resolution   time.Duration          `json:"resolution"`
	Translations map[string]Translation `json:"translations"`
}

Metric describes a (time) series with a specific ID and the same semantics across Buckets (devices). For example, in the context of renewable energy a wind turbine has a bunch of metrics like production in kW, wind speed in km/h or wind direction in radians.

func (Metric) LanguageTags

func (m Metric) LanguageTags() []string

LanguageTags returns an alphabetically sorted list of available translations. See also TranslateName.

func (Metric) String

func (m Metric) String() string

String returns the name.

func (Metric) Translated

func (m Metric) Translated() map[string]Translation

Translated is a getter for Translations.

type Point

type Point struct {
	// X is usually in Seconds since Unix Epoch.
	X int64 `json:"x"`
	// Y is usually a pre-scaled decimal metric value. Use ScaleOf to post-multiply to get the displayable value (see also Unscale).
	Y int64 `json:"y"`
}

Point represents a packed and optimized data point which is usually part of a larger time series represented as Points.

type Points

type Points []Point

Points is just a slice of Point with a bunch of optimized helper methods for data analysis.

func (Points) Downscale

func (p Points) Downscale(width int64) Points

Downscale discards points which are insignificant when displaying in the given width. This uses the default downscale implementation, which may change between revisions to optimize the experience. Width should be the number of pixels on which a line chart is drawn. See also M4, which is currently used.

func (Points) First

func (p Points) First() (Point, bool)

First returns the first Point.

func (Points) GroupByDay

func (p Points) GroupByDay(drift int64, align bool, location *time.Location) Group

GroupByDay takes all points and interprets the Point.X value as a unix timestamp in seconds. The drift value is added to each timestamp, so that a drift of the points can be respected (e.g. due to start- or end-aggregated data points). If the align parameter is true, all X values of each group are set to the natural start of the grouping (the first unix timestamp of the day at 00:00:00) after applying the drift.

It expects that points are ordered ascending by X (== time). The result is undefined if the dataset is not sorted correctly. Location must not be nil.

func (Points) GroupByMonth

func (p Points) GroupByMonth(drift int64, align bool, location *time.Location) Group

GroupByMonth takes all points and interprets the Point.X value as a unix timestamp in seconds. The drift value is added to each timestamp, so that a drift of the points can be respected (e.g. due to start- or end-aggregated data points). If the align parameter is true, all X values of each group are set to the natural start of the grouping (the first unix timestamp of the month, the first day at 00:00:00) after applying the drift.

It expects that points are ordered ascending by X (== time). The result is undefined if the dataset is not sorted correctly. Location must not be nil.

func (Points) GroupByYear

func (p Points) GroupByYear(drift int64, align bool, location *time.Location) Group

GroupByYear takes all points and interprets the Point.X value as a unix timestamp in seconds. The drift value is added to each timestamp, so that a drift of the points can be respected (e.g. due to start- or end-aggregated data points). If the align parameter is true, all X values of each group are set to the natural start of the grouping (the first unix timestamp of the year, 1 January at 00:00:00) after applying the drift.

It expects that points are ordered ascending by X (== time). The result is undefined if the dataset is not sorted correctly. Location must not be nil.

func (Points) Last

func (p Points) Last() (Point, bool)

Last returns the last Point.

func (Points) Limit

func (p Points) Limit(min, max int64) Points

Limit mutates p so that it only contains points whose Y values are between min and max (inclusive).

func (Points) M4

func (p Points) M4(width int64) Points

M4 applies the downscaling algorithm for visualization by Uwe Jugel, Zbigniew Jerzak, Gregor Hackenbroich and Volker Markl. See http://www.vldb.org/pvldb/vol7/p797-jugel.pdf. It expects that the given points are already sorted.

The width determines how many buckets are created in the time interval defined by the series. Each bucket may have a variable amount of entries, which are sampled to at most 4 values: the first/last points and the min/max Y values. If these points overlap, they are only returned once, so at worst only one value per bucket is returned.

If the width is larger than the amount of available points, the original points are returned.

func (Points) Reduce

func (p Points) Reduce(f AggregateFunc) (int64, bool)

Reduce applies the given AggregateFunc and returns the result, or false if the value cannot be calculated. In general, points cannot be reduced if no values are available, e.g. calculating the average of an empty series would cause a division by zero.

func (Points) Scale

func (p Points) Scale(x, y int64) Points

Scale multiplies all points within the series with the given x,y scalars.

func (Points) SnapToGrid

func (p Points) SnapToGrid(divisor int64) Points

SnapToGrid divides by the divisor and multiplies back, causing the corresponding truncation. Example rasterization with a divisor of 600:

  • 300 => 0
  • 601 => 600
  • 1202 => 1200
  • 1700 => 1200

func (Points) Unscale

func (p Points) Unscale(yScale int64) FPoints

Unscale multiplies the X value by 1000 to get milliseconds and divides Y by yScale using floating point arithmetic. This should be the last step after Downscale and performs another allocation.
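
A sketch of the final display steps combining ViewportWidth, Downscale and Unscale; the response shape is illustrative:

// render prepares a single raw series for a chart and writes it out.
func render(ctx context.Context, metricID miel.UUID, pts miel.Points) {
	width := miel.ViewportWidth(ctx)           // default 512
	scale := miel.Query(ctx).ScaleOf(metricID) // e.g. 1, 10, 100 or 1000

	fpts := pts.
		Downscale(width). // keep only what is visible at this width
		Unscale(scale)    // X in milliseconds, Y as float64

	miel.Response(ctx, struct {
		Series miel.FPoints `json:"series"`
	}{Series: fpts})
}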

type ProcBuilder

type ProcBuilder interface {
	// Parameter defines a function callback to return the input and output/result parameters of this proc.
	// The concrete instances are used to provide example values to render.
	Parameter(func() (interface{}, interface{})) ProcBuilder

	// Start configures the given function to be executed for the evaluation.
	// Generally, a function must be thread safe to be invoked multiple times
	// concurrently.
	Start(Evaluator)
}

A ProcBuilder describes and configures a MiEL proc macro for later execution.

func Configure

func Configure() ProcBuilder

Configure creates a ProcBuilder instance which depends on the execution environment.
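
A hedged sketch of how a proc might be declared; whether this happens in an init function and the parameter shapes shown here are assumptions for illustration only:

func init() {
	miel.Configure().
		Parameter(func() (interface{}, interface{}) {
			// Example input and output values used for rendering.
			in := struct {
				Buckets []miel.UUID `json:"buckets"`
			}{}
			out := struct {
				Series miel.FPoints `json:"series"`
			}{}
			return in, out
		}).
		Start(func(ctx context.Context) {
			// Parse the request, query the DB and write the response here.
		})
}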

type Range

type Range string

Range is a string representation of a range. Square brackets ([ ]) indicate inclusive and parentheses (( )) indicate exclusive interval bounds. Format specification:

<[|(> <min>, <max> <]|)> @ <IANA time zone name>

Examples:

[2038-01-19 03:14:07,2038-01-19 03:14:07]@Europe/Berlin
(2038-01-19 03:14:07,2038-01-19 03:14:07)@Europe/Berlin

func (Range) Interval

func (r Range) Interval() (min, max int64, err error)

Interval parses and returns the min and max unix timestamps, which always have 'inclusive' semantics. Min and max are represented as unix timestamps in seconds.

func (Range) MustInterval

func (r Range) MustInterval() Interval

MustInterval returns the inclusive Interval representation of this Range. See also Interval.
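
A short sketch of turning the string form into the numeric interval; the error handling mirrors the two variants above:

// parseRange converts a Range such as
// "[2038-01-19 03:14:07,2038-01-19 03:14:07]@Europe/Berlin" into an Interval.
func parseRange(r miel.Range) (miel.Interval, error) {
	min, max, err := r.Interval()
	if err != nil {
		return miel.Interval{}, err
	}
	// Equivalent to r.MustInterval(), which panics instead of returning err.
	return miel.Interval{Min: min, Max: max}, nil
}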

func (*Range) UnmarshalJSON

func (r *Range) UnmarshalJSON(bytes []byte) error

UnmarshalJSON validates the range during unmarshalling.

type TZ

type TZ string

A TZ represents an unparsed IANA time zone and can be converted into a time.Location to perform calculations. Please note that this is not an offset. A time zone refers to a concrete place on earth and describes a very complex mapping to decide how to display a UTC date in a human-readable way. An offset may change seemingly at random according to daylight saving time and political decisions in that concrete place. So, technically, a time zone consists of an arbitrary amount of offsets and the rules for when to apply them.

func (TZ) MustParse

func (t TZ) MustParse() *time.Location

MustParse returns the actual Location or panics.

func (TZ) Parse

func (t TZ) Parse() (*time.Location, error)

Parse returns the parsed IANA time zone.

func (*TZ) UnmarshalJSON

func (t *TZ) UnmarshalJSON(bytes []byte) error

UnmarshalJSON implements JSON unmarshalling and validates the input.

type Times

type Times struct {
	// contains filtered or unexported fields
}

Times provides access to a variety of UTC and Interval operations based on a Timezone.

func Time

func Time(ctx context.Context) Times

Time returns a helper instance located in the time zone resolved by Timezone. If ctx is nil, a UTC zoned Times is returned.

func (Times) DayOf

func (t Times) DayOf(offset int) Interval

DayOf returns the time zone interpreted UTC interval for the day with the given offset, where 0 means today and -1 means yesterday.

func (Times) Now

func (t Times) Now() time.Time

Now returns the current time instant interpreted in the given time zone.

func (Times) ThisYear

func (t Times) ThisYear() Interval

ThisYear returns the UTC Interval in seconds from January 1st to December 31st of the current year, based on the current Location.

func (Times) Today

func (t Times) Today() Interval

Today returns the first and last UTC values of the current day in the given time zone.

func (Times) Year

func (t Times) Year(year int) Interval

Year returns the first and last UTC values of the given year in the given time zone.
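
A small sketch of deriving common intervals in the caller's time zone; which interval a proc actually needs is use-case specific:

// commonIntervals shows the typical lookups: the current day, the day
// before and the current year, all located by Timezone(ctx).
func commonIntervals(ctx context.Context) (today, yesterday, year miel.Interval) {
	t := miel.Time(ctx)
	return t.Today(), t.DayOf(-1), t.ThisYear()
}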

type Translation

type Translation struct {
	Name        string `json:"name"`
	Description string `json:"description"`
}

A Translation model helps to translate a Name and Description tuple into a specific language. See also MatchLanguage.

type UUID

type UUID [16]byte

UUID represents the 16 bytes of a UUID.

func NewUUID

func NewUUID() UUID

NewUUID creates a new secure type 4 UUID or panics.

func ParseUUID

func ParseUUID(text string) (UUID, error)

ParseUUID can only parse UUIDs like 12da0b4c-8f1e-4897-842f-3487849dfba6. However, intentionally, any hex combination can be parsed, even if it does not represent a real UUID. This allows the intended misuse of the full 16 bytes with arbitrary content.
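
A minimal sketch for parsing identifiers from request input; how errors are reported is up to the caller:

// parseIDs converts textual identifiers into UUIDs and stops at the
// first malformed value.
func parseIDs(texts []string) (miel.UUIDs, error) {
	ids := make(miel.UUIDs, 0, len(texts))
	for _, s := range texts {
		id, err := miel.ParseUUID(s)
		if err != nil {
			return nil, err
		}
		ids = append(ids, id)
	}
	return ids, nil
}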

func (UUID) MarshalText

func (u UUID) MarshalText() ([]byte, error)

MarshalText renders the UUID properly into JSON.

func (UUID) String

func (u UUID) String() string

String returns the typical UUID representation.

func (*UUID) UnmarshalText

func (u *UUID) UnmarshalText(data []byte) error

UnmarshalText implements encoding.TextUnmarshaler.

type UUIDs

type UUIDs []UUID

UUIDs is just a slice of UUID with some helper methods attached to write more readable implementations.

func (UUIDs) First

func (s UUIDs) First() UUID

First returns the first UUID or panics.

func (UUIDs) Join

func (s UUIDs) Join(other UUIDs) UUIDs

Join concatenates the other UUIDs to this slice of UUIDs and returns the new slice.
