cloudwatch

Published: Feb 9, 2016 License: MIT Imports: 15 Imported by: 0

README

logspout-cloudwatch

A logspout adapter that pushes logs to the AWS Cloudwatch Logs service.


Overview

This software is a plugin for logspout, which is a container-based application that collects Docker logs from the other containers run on a given Docker host. This plugin then sends those log messages on to Amazon's Cloudwatch Logs web service.


Features

  • Runs as a single Docker container with access to the docker socket file -- no setup is required within the logged containers. Auto-detects Region when run in EC2, and uses IAM Role credentials when available.

  • By default, names each Cloudwatch Log Stream after its container name, and groups the streams by host name. But also...

  • Provides flexible, dynamic control of stream and group names, based on templates. Can assign names based on container labels or environment variables. Defines host-wide defaults while allowing per-container overrides.

  • Batches messages by stream, but periodically flushes all batches to AWS, based on a configurable timeout.


Installation

The software runs in a container, so just run docker pull mdsol/logspout.


Workstation Usage / Outside EC2

First, make sure you're not running any containers that might be logging sensitive information -- that is, logs that you don't want showing up in your Cloudwatch console.

  1. To test the plugin, run a container that just prints out the date every few seconds:

     docker run --name=echo3 -d ubuntu bash -c \
       'while true; do echo "Hi, the date is $(date)" ; sleep 3 ; done'
    

    Notice that the container is run without the -t option. Logspout will not process output from containers with a TTY attached.

  2. Now run the logspout container with a route URL of cloudwatch://us-east-1?DEBUG=1 (substitute your desired AWS region). The plugin needs AWS credentials to push data to the service, so if your credentials are set up in the usual way (at ~/.aws/credentials), you can run:

     docker run -h $(hostname) -v ~/.aws/credentials:/root/.aws/credentials \
       --volume=/var/run/docker.sock:/tmp/docker.sock --name=logspout \
       --rm -it mdsol/logspout 'cloudwatch://us-east-1?DEBUG=1&NOEC2'
    

    Notice the -h $(hostname -f) parameter; you probably want the logging container name to share hostnames with the Docker host, because the default behavior is to group logs by hostname. The DEBUG=1 route option allows you to make sure each batch of logs gets submitted to AWS without errors. The NOEC2 option tells the plugin not to look for the EC2 Metadata service.

  3. Navigate to the Cloudwatch console, click on Logs, then look for a Log Group named after your workstation's hostname. You should see a Log Stream within it named echo3, which should receive your container's output messages every few seconds (batches are flushed every 4 seconds by default).


Production Usage / Inside EC2

  1. Logspout needs the following policy permissions to create and write log streams and groups. Make sure your EC2 instance has a Role that includes the following:

     "Statement": [{
       "Action": [
         "logs:CreateLogGroup",
         "logs:CreateLogStream",
         "logs:DescribeLogGroups",
         "logs:DescribeLogStreams",
         "logs:PutLogEvents"
       ],
       "Effect": "Allow",
       "Resource": "*"
     }]
    
  2. Now run the logspout container with a route URL of cloudwatch://auto. The AWS Region and the IAM Role credentials will be read from the EC2 Metadata Service.

     docker run -h $(hostname) -dt --name=logspout \
       --volume=/var/run/docker.sock:/tmp/docker.sock \
       mdsol/logspout 'cloudwatch://auto'
    

    The -d and -t flags are optional, depending on whether you want to background the process, or run it under some supervisory daemon. But if you do omit the -t flag, you can use the environment variable LOGSPOUT=ignore to prevent Logspout from attempting to post its own output to AWS.


Customizing the Group and Stream Names

The first time a message is received from a given container, its Log Group and Log Stream names are computed. When planning how to group your logs, make sure the combination of these two will be unique, because if more than one container tries to write to a given stream simultaneously, errors will occur.

By default, each Log Stream is named after its associated container, and each stream's Log Group is the hostname of the container running Logspout. These two values can be overridden by setting the environment variables LOGSPOUT_GROUP and LOGSPOUT_STREAM on the Logspout container, or on any individual log-producing container (container-specific values take precedence). In this way, precomputed values can be set for each container.

Furthermore, when the Log Group and Log Stream names are computed, these environment-based values are passed through Go's standard template engine, and provided with the following render context:

type RenderContext struct {
  Host       string            // container host name
  Env        map[string]string // container ENV
  Labels     map[string]string // container Labels
  Name       string            // container Name
  ID         string            // container ID
  LoggerHost string            // hostname of logging container (os.Hostname)
  InstanceID string            // EC2 Instance ID
  Region     string            // EC2 region
}

So you may use Go's {{ }} template syntax to build complex Log Group and Log Stream names from container Labels, or from other Env vars. Here are some examples:

# Prefix the default stream name with the EC2 Instance ID:
LOGSPOUT_STREAM={{.InstanceID}}-{{.Name}}

# Group streams by application and workflow stage (dev, prod, etc.),
# where these values are set as container environment vars:
LOGSPOUT_GROUP={{.Env.APP_NAME}}-{{.Env.STAGE_NAME}}

# Or use container Labels to do the same thing:
LOGSPOUT_GROUP={{.Labels.APP_NAME}}-{{.Labels.STAGE_NAME}}

# If the labels contain the period (.) character, you can do this:
LOGSPOUT_GROUP={{.Lbl "com.mycompany.loggroup"}}
LOGSPOUT_STREAM={{.Lbl "com.mycompany.logstream"}}

Complex settings like this are most easily applied to containers by putting them into a separate "environment file", and passing its path to docker at runtime: docker run --env-file /path/to/file [...]
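The name-rendering behavior described above can be sketched with Go's standard text/template package. This is an illustrative stand-in, not the plugin's actual code: the RenderContext fields and the Lbl helper mirror the types documented below, but the render function and all sample values are hypothetical.

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// RenderContext mirrors the documented render context; only the
// fields used in the examples are populated here.
type RenderContext struct {
	Host       string
	Env        map[string]string
	Labels     map[string]string
	Name       string
	ID         string
	LoggerHost string
	InstanceID string
	Region     string
}

// Lbl looks up a label whose key contains periods, which the plain
// {{.Labels.key}} syntax cannot express.
func (r *RenderContext) Lbl(key string) (string, error) {
	value, ok := r.Labels[key]
	if !ok {
		return "", fmt.Errorf("label %q not found", key)
	}
	return value, nil
}

// render expands a LOGSPOUT_GROUP / LOGSPOUT_STREAM template string
// against the context, as the adapter does for each new container.
func render(tmpl string, ctx *RenderContext) (string, error) {
	t, err := template.New("name").Parse(tmpl)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, ctx); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	ctx := &RenderContext{
		Name:       "echo3",
		InstanceID: "i-0abc1234",
		Env:        map[string]string{"APP_NAME": "myapp", "STAGE_NAME": "dev"},
		Labels:     map[string]string{"com.mycompany.loggroup": "platform"},
	}
	stream, _ := render("{{.InstanceID}}-{{.Name}}", ctx)
	group, _ := render(`{{.Lbl "com.mycompany.loggroup"}}`, ctx)
	fmt.Println(stream) // i-0abc1234-echo3
	fmt.Println(group)  // platform
}
```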


Further Configuration

  • Adding the route option NOEC2, as in cloudwatch://[region]?NOEC2 causes the adapter to skip its usual check for the EC2 Metadata service, for faster startup time when running outside EC2.

  • Adding the route option DELAY=8, as in cloudwatch://[region]?DELAY=8 causes the adapter to push all logs to AWS every 8 seconds instead of the default of 4 seconds. If you run this adapter at scale, you may need to tune this value to avoid overloading your request rate limit on the Cloudwatch Logs API.
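The route options above are ordinary URL query parameters. A minimal sketch of how such a route URL could be parsed with Go's net/url package (the parseRoute function is hypothetical, not the adapter's actual parsing code; the defaults mirror the documented behavior):

```go
package main

import (
	"fmt"
	"net/url"
	"strconv"
)

// parseRoute extracts the region and the DELAY/NOEC2 options from a
// cloudwatch:// route URL.
func parseRoute(raw string) (region string, delay int, noEC2 bool, err error) {
	u, err := url.Parse(raw)
	if err != nil {
		return "", 0, false, err
	}
	region = u.Host // e.g. "us-east-1", or "auto" inside EC2
	delay = 4       // documented default flush interval, in seconds
	q := u.Query()
	if d := q.Get("DELAY"); d != "" {
		if delay, err = strconv.Atoi(d); err != nil {
			return "", 0, false, err
		}
	}
	_, noEC2 = q["NOEC2"] // valueless option: presence alone enables it
	return region, delay, noEC2, nil
}

func main() {
	region, delay, noEC2, _ := parseRoute("cloudwatch://us-east-1?DELAY=8&NOEC2")
	fmt.Println(region, delay, noEC2) // us-east-1 8 true
}
```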


Contribution / Development

This software was created by Benton Roberts (broberts@mdsol.com).

By default, the Docker image builds from the Go source on GitHub, not from local disk, as per the instructions for Logspout custom builds.

Documentation

Index

Constants

const DEFAULT_DELAY = 4 //seconds

const MAX_BATCH_COUNT = 10000 // messages

Rules for creating Cloudwatch Log batches, from https://goo.gl/TrIN8c

const MAX_BATCH_SIZE = 1048576 // bytes

const MSG_OVERHEAD = 26 // bytes
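Per the batching rules these constants encode, each message contributes its byte length plus a fixed 26-byte overhead, and a batch may hold at most 10,000 messages or 1 MiB. A sketch of that arithmetic (the msgSize and fits helpers are illustrative, not the batcher's actual methods):

```go
package main

import "fmt"

const (
	MAX_BATCH_COUNT = 10000   // max messages per upload
	MAX_BATCH_SIZE  = 1048576 // max bytes per upload
	MSG_OVERHEAD    = 26      // fixed per-message overhead, in bytes
)

// msgSize is the number of bytes one message contributes to a batch.
func msgSize(msg string) int64 {
	return int64(len(msg) + MSG_OVERHEAD)
}

// fits reports whether another message can be appended to a batch
// that currently holds count messages totalling size bytes.
func fits(count int, size int64, msg string) bool {
	return count < MAX_BATCH_COUNT && size+msgSize(msg) <= MAX_BATCH_SIZE
}

func main() {
	fmt.Println(msgSize("hello"))       // 31
	fmt.Println(fits(9999, 1000, "hi")) // true: under both limits
	fmt.Println(fits(10000, 0, "hi"))   // false: message count exhausted
}
```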

Variables

This section is empty.

Functions

func NewCloudwatchAdapter

func NewCloudwatchAdapter(route *router.Route) (router.LogAdapter, error)

NewCloudwatchAdapter creates a CloudwatchAdapter for the current region.

Types

type CloudwatchAdapter

type CloudwatchAdapter struct {
	Route       *router.Route
	OsHost      string
	Ec2Region   string
	Ec2Instance string
	// contains filtered or unexported fields
}

CloudwatchAdapter is an adapter that streams JSON to AWS CloudwatchLogs. It mostly just checks ENV vars and other container info to determine the LogGroup and LogStream for each message, then sends each message on to a CloudwatchBatcher, which batches messages for upload to AWS.

func (*CloudwatchAdapter) Stream

func (a *CloudwatchAdapter) Stream(logstream chan *router.Message)

Stream implements the router.LogAdapter interface.

type CloudwatchBatch

type CloudwatchBatch struct {
	Msgs []CloudwatchMessage
	Size int64
}

func NewCloudwatchBatch

func NewCloudwatchBatch() *CloudwatchBatch

func (*CloudwatchBatch) Append

func (b *CloudwatchBatch) Append(msg CloudwatchMessage)

type CloudwatchBatcher

type CloudwatchBatcher struct {
	Input chan CloudwatchMessage
	// contains filtered or unexported fields
}

CloudwatchBatcher receives Cloudwatch messages on its input channel, stores them in CloudwatchBatches until enough data is ready to send, then sends each CloudwatchBatch on its output channel.

func NewCloudwatchBatcher

func NewCloudwatchBatcher(adapter *CloudwatchAdapter) *CloudwatchBatcher

NewCloudwatchBatcher is the constructor for CloudwatchBatcher; it requires the adapter.

func (*CloudwatchBatcher) RunTimer

func (b *CloudwatchBatcher) RunTimer()

func (*CloudwatchBatcher) Start

func (b *CloudwatchBatcher) Start()

Main loop for the Batcher - sorts each message into a batch, but submits the batch first and replaces it if the message is too big.

type CloudwatchMessage

type CloudwatchMessage struct {
	Message   string    `json:"message"`
	Group     string    `json:"group"`
	Stream    string    `json:"stream"`
	Time      time.Time `json:"time"`
	Container string    `json:"container"`
}

CloudwatchMessage is a simple JSON input to Cloudwatch.

type CloudwatchUploader

type CloudwatchUploader struct {
	Input chan CloudwatchBatch
	// contains filtered or unexported fields
}

CloudwatchUploader receives CloudwatchBatches on its input channel, and sends them on to the AWS Cloudwatch Logs endpoint.

func NewCloudwatchUploader

func NewCloudwatchUploader(adapter *CloudwatchAdapter) *CloudwatchUploader

func (*CloudwatchUploader) Start

func (u *CloudwatchUploader) Start()

Main loop for the Uploader - POSTs each batch to AWS Cloudwatch Logs, while keeping track of the unique sequence token for each log stream.

type EC2Info

type EC2Info struct {
	InstanceID string
	Region     string
}

func NewEC2Info

func NewEC2Info(route *router.Route) (EC2Info, error)

type RenderContext

type RenderContext struct {
	Host       string            // container host name
	Env        map[string]string // container ENV
	Labels     map[string]string // container Labels
	Name       string            // container Name
	ID         string            // container ID
	LoggerHost string            // hostname of logging container (os.Hostname)
	InstanceID string            // EC2 Instance ID
	Region     string            // EC2 region
}

func (*RenderContext) Lbl

func (r *RenderContext) Lbl(key string) (string, error)

Lbl renders a label value based on the given key.

