cwopencensusexporter

package module
v0.0.0-...-d17a884
Published: Aug 2, 2019 License: Apache-2.0 Imports: 12 Imported by: 0

README

cwopencensusexporter

cwopencensusexporter implements the OpenCensus Exporter API and allows you to send OpenCensus metrics to CloudWatch.

Example

	// Create a CloudWatch client and a default Exporter/sender pair
	awsClient := cloudwatch.New(session.Must(session.NewSession()))
	exporter, sender := cwopencensusexporter.DefaultExporter(awsClient)
	// The sender batches datum in the background; stop it when you are done
	go sender.Run()
	defer sender.Shutdown(context.Background())
	// Read metrics on an interval and feed them to the exporter
	ir, err := metricexport.NewIntervalReader(&metricexport.Reader{}, exporter)
	if err != nil {
		panic(err)
	}
	if err := ir.Start(); err != nil {
		panic(err)
	}
	defer ir.Stop()

Limitations

The OpenCensus API does not fully allow first-class CloudWatch support. A few limitations cause this.

  1. CloudWatch expects to send aggregations of Max and Min across a time window. For example, the PutMetricData API for CloudWatch, when working on aggregations, wants you to specify the maximum and minimum value seen in a 60-second time window. This is not possible with the input data of metricdata.Distribution. While I can try to estimate a minimum or maximum value given the buckets (see the sketch after this list), this isn't the true min or max that could otherwise be aggregated easily as points are ingested.
  2. CloudWatch wants values aggregated inside a time window. For example, with 60-second aggregations, a point seen at 59.9 seconds should be in an aggregation for time window 0, while a point seen at 60.1 seconds should be in an aggregation for time window 1. The API for metricexport.Exporter does not split aggregations across time windows. The best I can do is try to align calls to Exporter.ExportMetrics to a time boundary (call at exactly 60 seconds, then at exactly 120 seconds, and so on). This is unlikely to handle corner cases. The highest-fidelity approach would be to aggregate values into buckets as they are recorded with stats.Record.
  3. Some types, like metricdata.Summary, do not translate to CloudWatch metrics, and I am unable to provide a good user experience for data of type metricdata.Summary submitted to cwopencensusexporter.Exporter.ExportMetrics. An ideal experience for CloudWatch users would be to never let data get put into a metricdata.Summary and instead bucket it at the level of stats.Record.
  4. The layers of abstraction inside OpenCensus create unnecessary memory allocations that could be avoided with a system designed for CloudWatch's aggregation API.
  5. OpenCensus buckets include the concept of [...inf] as the last range in their buckets. There is no way to represent this range for CloudWatch. Ideally, OpenCensus would track the maximum (and minimum) values inside a time window so that I could use the min (or max) in the range of my buckets, rather than just assume infinity.
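
As a concrete illustration of limitations 1 and 5, the sketch below shows why Minimum and Maximum can only be guessed from a metricdata.Distribution: the lowest and highest non-empty buckets give a bound on one side, while the unbounded first and last buckets force a fallback (here, the mean). estimateStatisticSet is a hypothetical helper, not part of this package's API, and the exporter's actual heuristic may differ.

	// Hypothetical helper, not part of this package: shows why CloudWatch's
	// Minimum/Maximum can only be estimated from an OpenCensus distribution.
	package main

	import (
		"fmt"

		"github.com/aws/aws-sdk-go/aws"
		"github.com/aws/aws-sdk-go/service/cloudwatch"
		"go.opencensus.io/metric/metricdata"
	)

	func estimateStatisticSet(d *metricdata.Distribution) *cloudwatch.StatisticSet {
		if d.Count == 0 || d.BucketOptions == nil {
			return nil
		}
		bounds := d.BucketOptions.Bounds
		// The first bucket has no lower bound and the last has no upper bound,
		// so fall back to the mean when the bound we need is missing.
		mean := d.Sum / float64(d.Count)
		minEst, maxEst := mean, mean
		for i, b := range d.Buckets {
			if b.Count == 0 {
				continue
			}
			if i > 0 {
				minEst = bounds[i-1] // lower bound of the lowest non-empty bucket
			}
			break
		}
		for i := len(d.Buckets) - 1; i >= 0; i-- {
			if d.Buckets[i].Count == 0 {
				continue
			}
			if i < len(bounds) {
				maxEst = bounds[i] // upper bound of the highest non-empty bucket
			}
			break
		}
		return &cloudwatch.StatisticSet{
			SampleCount: aws.Float64(float64(d.Count)),
			Sum:         aws.Float64(d.Sum),
			Minimum:     aws.Float64(minEst),
			Maximum:     aws.Float64(maxEst),
		}
	}

	func main() {
		d := &metricdata.Distribution{
			Count:         3,
			Sum:           11,
			BucketOptions: &metricdata.BucketOptions{Bounds: []float64{1, 5}},
			// Buckets cover (-inf,1), [1,5), [5,+inf): two points in [1,5), one in the overflow bucket.
			Buckets: []metricdata.Bucket{{Count: 0}, {Count: 2}, {Count: 1}},
		}
		// The Maximum falls back to the mean because the overflow bucket has no
		// upper bound, so it underestimates the true maximum.
		fmt.Println(estimateStatisticSet(d))
	}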

Contributing

Make sure your tests pass the CI/CD pipeline, which includes running `make fix lint test` locally.

Documentation

Constants

This section is empty.

Variables

This section is empty.

Functions

func DefaultExporter

func DefaultExporter(client CloudWatchClient) (*Exporter, *BatchMetricDatumSender)

DefaultExporter returns a reasonable Exporter that you can attach to a Reader. The Exporter will send metrics to the returned BatchMetricDatumSender. You should call `go sender.Run()` on the returned sender and pass the Exporter to a Reader.

Types

type BatchMetricDatumSender

type BatchMetricDatumSender struct {
	// CloudWatchClient is anything that can send datum to CloudWatch.  It should probably be a cwpagedmetricput.Pager
	// so it can take care of batching large requests for you
	CloudWatchClient CloudWatchClient
	// BatchDelay is how long to wait, after receiving a datum, for a batch to fill up before sending
	BatchDelay time.Duration
	// BatchSize is the maximum number of Datum to send to a single call to CloudWatchClient
	BatchSize int
	// Namespace is the cloudwatch namespace attached to the datum
	Namespace string
	// OnFailedSend is called on any failure to send datum to CloudWatchClient
	OnFailedSend func(datum []*cloudwatch.MetricDatum, err error)
	// contains filtered or unexported fields
}

BatchMetricDatumSender collects datum on a channel and sends them to CloudWatch in batches.
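
The exported fields can be tuned on the sender returned by DefaultExporter before it is started. A minimal sketch, assuming the defaults chosen by DefaultExporter are safe to override before Run is called (the specific values here are arbitrary):

awsClient := cloudwatch.New(session.Must(session.NewSession()))
exporter, sender := cwopencensusexporter.DefaultExporter(awsClient)
// Override the defaults before the sender starts draining its channel.
sender.Namespace = "custom/namespace"
sender.BatchSize = 20
sender.BatchDelay = 3 * time.Second
sender.OnFailedSend = func(datum []*cloudwatch.MetricDatum, err error) {
	log.Printf("dropped %d datum: %v", len(datum), err)
}
go func() {
	if err := sender.Run(); err != nil {
		log.Print(err)
	}
}()
// Wire exporter into a metricexport reader as in the Example below.
_ = exporter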

func (*BatchMetricDatumSender) Run

func (b *BatchMetricDatumSender) Run() error

Run executes the batch datum sender. You should probably execute this inside a goroutine. It blocks until Shutdown is called.

func (*BatchMetricDatumSender) SendMetricDatum

func (b *BatchMetricDatumSender) SendMetricDatum(md *cloudwatch.MetricDatum) error

SendMetricDatum queues a datum for sending to CloudWatch.
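
A datum can also be handed to the sender directly, outside of an Exporter. Continuing from the sender in the sketch above (the metric name and unit are arbitrary):

if err := sender.SendMetricDatum(&cloudwatch.MetricDatum{
	MetricName: aws.String("manual.example"),
	Unit:       aws.String(cloudwatch.StandardUnitCount),
	Value:      aws.Float64(1),
}); err != nil {
	log.Print(err)
}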

func (*BatchMetricDatumSender) Shutdown

func (b *BatchMetricDatumSender) Shutdown(ctx context.Context) error

Shutdown stops the sender once it has been started. It blocks until either Run finishes or ctx dies.
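
Since Shutdown blocks, it is common to bound it with a deadline. A minimal sketch, reusing the sender from above:

ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
if err := sender.Shutdown(ctx); err != nil {
	log.Printf("sender did not drain before the deadline: %v", err)
}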

type CloudWatchClient

type CloudWatchClient interface {
	// PutMetricDataWithContext should match the contract of cloudwatch.CloudWatch.PutMetricDataWithContext
	PutMetricDataWithContext(aws.Context, *cloudwatch.PutMetricDataInput, ...request.Option) (*cloudwatch.PutMetricDataOutput, error)
}

CloudWatchClient is anything that can receive CloudWatch metrics as documented by CloudWatch's public API constraints.
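
Because CloudWatchClient is a single-method interface, it is easy to fake in tests. A minimal sketch of a stub client; countingClient is a hypothetical name, not part of this package:

// countingClient implements CloudWatchClient but records datum instead of calling AWS.
type countingClient struct {
	datumSeen int
}

func (c *countingClient) PutMetricDataWithContext(_ aws.Context, in *cloudwatch.PutMetricDataInput, _ ...request.Option) (*cloudwatch.PutMetricDataOutput, error) {
	c.datumSeen += len(in.MetricData)
	return &cloudwatch.PutMetricDataOutput{}, nil
}

A *countingClient can then be passed to DefaultExporter in place of a real cloudwatch.CloudWatch client; the aws, request, and cloudwatch packages come from github.com/aws/aws-sdk-go.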

type Exporter

type Exporter struct {
	// Sender is anything that can take cloudwatch.MetricDatum and send them to cloudwatch
	Sender MetricDatumSender
	// OnFailedSend is called with any metric datum that the sender fails to send
	OnFailedSend func(md *cloudwatch.MetricDatum, err error)
	// contains filtered or unexported fields
}

Exporter implements the metricexport.Exporter interface, turning a list of metricdata.Metric into cloudwatch.MetricDatum. Those datum are fed into Sender on each call to ExportMetrics.

func (*Exporter) ExportMetrics

func (e *Exporter) ExportMetrics(ctx context.Context, metrics []*metricdata.Metric) error

ExportMetrics converts each metric into the appropriate *cloudwatch.MetricDatum and sends it to Sender. It ignores any metrics that Exporter cannot currently export and calls OnFailedSend on any failed sends.

Example
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudwatch"
	"github.com/cep21/cwopencensusexporter"
	"go.opencensus.io/metric/metricexport"
)

func main() {
	awsClient := cloudwatch.New(session.Must(session.NewSession()))
	exporter, sender := cwopencensusexporter.DefaultExporter(awsClient)
	go func() {
		if err := sender.Run(); err != nil {
			log.Print(err)
		}
	}()
	defer func() {
		if err := sender.Shutdown(context.Background()); err != nil {
			log.Print(err)
		}
	}()
	ir, err := metricexport.NewIntervalReader(&metricexport.Reader{}, exporter)
	if err != nil {
		panic(err)
	}
	if err := ir.Start(); err != nil {
		panic(err)
	}
	defer ir.Stop()
}
Output:

type MetricDatumSender

type MetricDatumSender interface {
	// SendMetricDatum should not block.  It should queue the datum for sending, or just send it.
	// It should not modify the input datum, but can assume the input datum is immutable.
	// Return an error if unable to send this datum correctly.
	SendMetricDatum(md *cloudwatch.MetricDatum) error
}

MetricDatumSender is anything that can send datum somewhere.
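
Any implementation can be assigned to Exporter.Sender in place of a BatchMetricDatumSender. A minimal sketch of a sender that only logs; logSender is a hypothetical name, and swapping it in before metrics are exported is assumed to be safe:

// logSender prints datum instead of queueing them for CloudWatch.
type logSender struct{}

func (logSender) SendMetricDatum(md *cloudwatch.MetricDatum) error {
	log.Println(md.String()) // never modifies md
	return nil
}

For example, after exporter, _ := cwopencensusexporter.DefaultExporter(awsClient), setting exporter.Sender = logSender{} routes every exported datum to the log.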
