daybook

package module
v0.0.0-...-08c5fce
Published: Oct 30, 2014 License: Apache-2.0 Imports: 13 Imported by: 0

README

Daybook

Opinionated service deployment management.

What the heck does it do?

Daybook provides the basis for a pull-based deployment model that doesn't break the mindshare bank. Simple tools, simple primitives.

The premise is, wait for it, simple: you put all of your service assets -- these are your tarballs of service code, JARs, data files, whatever -- in a dedicated bucket on S3. Next, you set up a datacenter-wide key/value store (Consul) to store the mappings of what services should be on what hosts. These mappings are flexible, and can include wildcards. Specificity wins. You structure the name of your service assets in a way that the name of a service, or services, in your mapping data can be used to find the assets in your bucket. Voila, now you're ready to pull down some assets to different hosts without adding too much entropy to your infrastructure, or the universe.

How do I use it?

Check out the quick start guide for hitting the ground running.

Tell me more about the specifics

daybook-pull makes some assumptions. Firstly, your assets are structured in a particular fashion. This is roughly enforced by daybook-push, if you use it, but since it's just S3, there's no reason you can't upload assets on your own.

Your assets need to meet the following requirements:

  • it must be a tarball (a gzip-compressed tar archive with the ".tar.gz" extension; this is hard-coded)
  • the filename must be of the form: service_name-version.tar.gz
  • the service name may contain any alphanumeric characters, plus underscores; hyphens aren't allowed because the hyphen is the name/version delimiter
  • the version may likewise contain alphanumeric characters and underscores
  • the extension, as mentioned above, must be ".tar.gz"
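
The rules above can be captured in a single regular expression. This is a sketch, not daybook's own code; `parseAsset` is a hypothetical helper:

```go
package main

import (
	"fmt"
	"regexp"
)

// assetPattern mirrors the naming rules: alphanumeric/underscore service
// name, a single hyphen delimiter, alphanumeric/underscore version, and
// the hard-coded ".tar.gz" extension.
var assetPattern = regexp.MustCompile(`^([A-Za-z0-9_]+)-([A-Za-z0-9_]+)\.tar\.gz$`)

// parseAsset returns the service name and version, or ok=false if the
// filename doesn't conform.
func parseAsset(filename string) (name, version string, ok bool) {
	m := assetPattern.FindStringSubmatch(filename)
	if m == nil {
		return "", "", false
	}
	return m[1], m[2], true
}

func main() {
	name, version, ok := parseAsset("test_service-123.tar.gz")
	fmt.Println(name, version, ok) // test_service 123 true
}
```

Because neither the name nor the version may contain a hyphen, the split is unambiguous even though both sides are free-form.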

Secondly, it assumes you want all versions of a given service on disk. Service discovery, and thus which version to use, is an entirely separate problem that Daybook doesn't try to solve. Disk space is cheap, and it's simpler to assume we want a synchronized universe of service code than to pick and choose. There may or may not be a future improvement to limit scope or try to detect what we already have... but it's not on the immediate horizon. Sorry, people with a million JAR files that are 90 MB apiece.

When you do a pull, daybook-pull will load the configuration, connect to Consul and look for patterns that match the hostname it has. The hostname is either what's in the configuration file, or if that's empty, what it is able to get from the OS. If it can't do that, it will whine and exit.
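
The hostname fallback can be sketched in a few lines (the `configured` parameter stands in for whatever the config file holds; the field name is hypothetical):

```go
package main

import (
	"fmt"
	"os"
)

// resolveHostname mirrors the fallback described above: use the
// configured hostname if set, otherwise ask the OS. If the OS lookup
// fails too, the error propagates and the caller can whine and exit.
func resolveHostname(configured string) (string, error) {
	if configured != "" {
		return configured, nil
	}
	return os.Hostname()
}

func main() {
	h, err := resolveHostname("web-prod-001")
	fmt.Println(h, err) // web-prod-001 <nil>
}
```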

For all the patterns it finds that could match the hostname given, it figures out which one is most specific. The patterns can use asterisks for a wildcard. They aren't full regular expressions, because trying to determine specificity from a regular expression is non-trivial. Wildcards work 99% of the time, and the longest matching pattern is implicitly the most specific pattern. Simple.
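
The longest-match rule is easy to sketch. This isn't daybook's implementation; it just illustrates the selection logic, using Go's `path.Match` for the asterisk wildcard (hostnames contain no slashes, so its semantics fit):

```go
package main

import (
	"fmt"
	"path"
)

// mostSpecific returns the longest pattern that matches host; the
// longest match is implicitly the most specific. Returns "" when
// nothing matches.
func mostSpecific(patterns []string, host string) string {
	best := ""
	for _, p := range patterns {
		if ok, _ := path.Match(p, host); ok && len(p) > len(best) {
			best = p
		}
	}
	return best
}

func main() {
	patterns := []string{"web-prod-*", "web-prod-oneoff-*", "web-prod-oneoff-001"}
	fmt.Println(mostSpecific(patterns, "web-prod-oneoff-001"))
	// web-prod-oneoff-001
	fmt.Println(mostSpecific(patterns, "web-prod-oneoff-002"))
	// web-prod-oneoff-*
}
```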

In Consul, using its key/value store, these patterns look something like this:

/v1/kv/daybook/hosts/web-prod-* -> service_1,service_2,service_3
/v1/kv/daybook/hosts/web-prod-oneoff-* -> service_1,service_4

So, you can see that for a hostname of "web-prod-oneoff-001", both of those keys would match. Due to its length, "web-prod-oneoff-*" would be the most specific match, and so we'd try to download all of the service assets for "service_1" and "service_4". Conversely, if you had mappings that looked like this:

/v1/kv/daybook/hosts/web-prod-* -> service_1,service_2,service_3
/v1/kv/daybook/hosts/web-prod-oneoff-* -> service_1,service_4
/v1/kv/daybook/hosts/web-prod-oneoff-001 -> service_5,service_6

Again, all of these patterns would match the same hostname, but now we have an exact match, which is implicitly the longest, and so we'd try to download all of the service assets for "service_5" and "service_6". Also, as you can see, the services are specified as a comma-delimited list, which keeps things... you got it, simple.
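
Parsing the value really is that simple. A minimal sketch (daybook itself may not trim whitespace; that part is an assumption):

```go
package main

import (
	"fmt"
	"strings"
)

// parseServices splits a mapping value like "service_5,service_6" into
// individual service names, dropping empty entries.
func parseServices(value string) []string {
	var out []string
	for _, s := range strings.Split(value, ",") {
		if s = strings.TrimSpace(s); s != "" {
			out = append(out, s)
		}
	}
	return out
}

func main() {
	fmt.Println(parseServices("service_5,service_6")) // [service_5 service_6]
}
```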

For each service, we query the S3 bucket for all objects that have a prefix of the service name. For each object we get back, we make sure it conforms -- ".tar.gz" extension, matches the service_name-version naming scheme, etc. If it looks good, we decompress and extract it on the fly. Based on the configuration, we use the specified installation directory, and within that, make two subdirectories: one for the service, and another under that for the version. So, for a service asset called

test_service-123.tar.gz

it would be extracted into

<install directory>/test_service/123/

Thus, you'll want your archiving process to reflect that and not bother with encapsulating all of the assets within a single folder before tarballing.
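The path layout is just a join of install directory, service name, and version. A sketch of the convention, not daybook's own function:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// installPath shows where an asset's contents land:
// <install directory>/<service>/<version>/
func installPath(installDir, service, version string) string {
	return filepath.Join(installDir, service, version)
}

func main() {
	fmt.Println(installPath("/opt/services", "test_service", "123"))
	// /opt/services/test_service/123
}
```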

Documentation

Index

Constants

This section is empty.

Variables

var BASE_PREFIX string = "daybook/hosts/"

Functions

This section is empty.

Types

type Archive

type Archive interface {
	Extract(rootDir string) error
}

type ConsulRegister

type ConsulRegister struct {
	// contains filtered or unexported fields
}

func NewConsulRegister

func NewConsulRegister() (*ConsulRegister, error)

func (*ConsulRegister) AddServices

func (r *ConsulRegister) AddServices(pattern string, services []string) error

func (*ConsulRegister) GetServices

func (r *ConsulRegister) GetServices(host string) ([]*Service, error)

func (*ConsulRegister) ListServices

func (r *ConsulRegister) ListServices(pattern string) ([]string, error)

func (*ConsulRegister) RemoveServices

func (r *ConsulRegister) RemoveServices(pattern string, services []string) error

type Register

type Register interface {
	GetServices(host string) ([]*Service, error)
	AddServices(pattern string, services []string) error
	RemoveServices(pattern string, services []string) error
	ListServices(pattern string) ([]string, error)
}

type S3Store

type S3Store struct {
	// contains filtered or unexported fields
}

func NewS3Store

func NewS3Store(auth aws.Auth, region aws.Region, bucket string) *S3Store

func (*S3Store) Get

func (s *S3Store) Get(service *Service) (Archive, error)

func (*S3Store) GetAll

func (s *S3Store) GetAll(serviceName string) ([]*Service, error)

func (*S3Store) Put

func (s *S3Store) Put(service *Service, filePath string) error

type Service

type Service struct {
	Name    string
	Version string
}

type SpecifierList

type SpecifierList []*consulapi.KVPair

func (SpecifierList) Len

func (sl SpecifierList) Len() int

func (SpecifierList) Less

func (sl SpecifierList) Less(i, j int) bool

func (SpecifierList) Swap

func (sl SpecifierList) Swap(i, j int)

type Store

type Store interface {
	GetAll(serviceName string) ([]*Service, error)
	Get(service *Service) (Archive, error)
	Put(service *Service, filePath string) error
}

type TarGzArchive

type TarGzArchive struct {
	// contains filtered or unexported fields
}

func NewTarGzArchive

func NewTarGzArchive(compressed io.ReadCloser) *TarGzArchive

func (*TarGzArchive) Extract

func (a *TarGzArchive) Extract(rootDir string) error
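
Putting the documented API together, a pull could be sketched like this. The import paths, AWS region, and bucket name are assumptions for illustration (the package predates Go modules and was built against a goamz-style AWS library), and it needs a live Consul agent plus S3 credentials to actually run:

```go
package main

import (
	"log"
	"os"

	// Hypothetical import paths; adjust to the actual module path and
	// the AWS library this package was built against.
	"github.com/example/daybook"
	"github.com/mitchellh/goamz/aws"
)

func main() {
	reg, err := daybook.NewConsulRegister()
	if err != nil {
		log.Fatal(err)
	}

	host, err := os.Hostname()
	if err != nil {
		log.Fatal(err)
	}

	// Resolve which services this host should carry.
	services, err := reg.GetServices(host)
	if err != nil {
		log.Fatal(err)
	}

	auth, err := aws.EnvAuth()
	if err != nil {
		log.Fatal(err)
	}
	store := daybook.NewS3Store(auth, aws.USEast, "my-asset-bucket")

	// Daybook wants every version of each service on disk, so list all
	// matching assets per service and extract each one.
	for _, svc := range services {
		versions, err := store.GetAll(svc.Name)
		if err != nil {
			log.Fatal(err)
		}
		for _, v := range versions {
			archive, err := store.Get(v)
			if err != nil {
				log.Fatal(err)
			}
			if err := archive.Extract("/opt/services"); err != nil {
				log.Fatal(err)
			}
		}
	}
}
```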
