recache

package module
v4.0.2
Published: Feb 9, 2020 License: MIT Imports: 11 Imported by: 0

README


recache

recursive compressed caching library and proxy server

recache is a library (a standalone server implementation is pending) for easily constructing caches from reusable components that are stored in compressed buffers and streamed efficiently to any consuming client.

This builds on the fact that any compliant GZIP decoder decompresses a concatenation of individually compressed GZIP member buffers to the equivalent of the concatenation of the source buffers. This allows recache to generate a tree of components on request that can be written sequentially to a consumer, such as an HTTP response or a buffer builder, with zero extra buffer copies or allocations.
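
To illustrate this property with only the standard library (a minimal sketch, independent of recache):

package main

import (
	"bytes"
	"compress/gzip"
	"fmt"
	"io"
)

// gz compresses b into a standalone GZIP member.
func gz(b []byte) []byte {
	var buf bytes.Buffer
	w := gzip.NewWriter(&buf)
	w.Write(b)
	w.Close()
	return buf.Bytes()
}

func main() {
	// Concatenation of two independently compressed members
	joined := append(gz([]byte("hello, ")), gz([]byte("world"))...)

	// A compliant decoder reads the concatenation as a single stream
	r, err := gzip.NewReader(bytes.NewReader(joined))
	if err != nil {
		panic(err)
	}
	out, err := io.ReadAll(r)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s\n", out) // hello, world
}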

The recursive aspect of recache denotes the ability of each cache entry to contain references to other entries in the same or a different cache. This makes it easy to save on cache entry storage space and to propagate eviction: dependent cache entries are evicted when their parent entry is evicted.

Unlike more traditional caches, which provide a more or less CRUD-like interface, recache abstracts cache hits and misses away from the client. The client instead provides a lookup key and a lambda for generating a new cache entry. On a cache miss (of the targeted entry or any entry it references) recache calls the provided lambda, compresses the result, generates hashes and ETags for versioning and registers any entries recursively looked up from the same or other cache instances.
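
A minimal usage sketch (the import path is assumed from the module version; the key and limit values are illustrative):

package main

import (
	"fmt"
	"io"
	"os"
	"time"

	"github.com/bakape/recache/v4" // import path assumed from the module version
)

func main() {
	cache := recache.NewCache(recache.CacheOptions{
		MemoryLimit: 1 << 20,     // evict past 1 MiB of stored data
		LRULimit:    time.Minute, // evict records unused for a minute
	})

	// The Getter lambda runs only on cache misses; hits reuse the
	// stored, already compressed record.
	frontend := cache.NewFrontend(recache.FrontendOptions{
		Get: func(k recache.Key, rw *recache.RecordWriter) error {
			_, err := fmt.Fprintf(rw, "generated for key %v", k)
			return err
		},
	})

	s, err := frontend.Get("page-1") // miss: generates and caches
	if err != nil {
		panic(err)
	}
	r := s.Unzip() // read the record back uncompressed
	defer r.Close()
	if _, err = io.Copy(os.Stdout, r); err != nil {
		panic(err)
	}
}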

On each client request a component tree is generated for that specific request. This allows recache to report any errors that occurred during generation before a single byte is written to the client, enabling simple error propagation. Once a component tree has been generated it is immutable and safe to stream to the client, even if a component is evicted concurrently during streaming.

recache guarantees work deduplication between concurrent requests for a missing entry. In such a case the first client proceeds to generate the cache entry data while any subsequent clients block until generation has completed. Once generation has completed, the entry is immutable and subsequent clients simply consume it after a cheap atomic flag check.
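
A sketch of that guarantee (import path assumed; the deliberately slow Getter is purely illustrative):

package main

import (
	"fmt"
	"sync"
	"sync/atomic"
	"time"

	"github.com/bakape/recache/v4" // import path assumed from the module version
)

func main() {
	var calls int64
	f := recache.NewCache(recache.CacheOptions{}).
		NewFrontend(recache.FrontendOptions{
			Get: func(k recache.Key, rw *recache.RecordWriter) error {
				atomic.AddInt64(&calls, 1)
				time.Sleep(50 * time.Millisecond) // simulate expensive generation
				_, err := rw.Write([]byte("expensive result"))
				return err
			},
		})

	var wg sync.WaitGroup
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			// All goroutines request the same missing key; the first
			// generates, the rest block until generation completes.
			if _, err := f.Get("same-key"); err != nil {
				panic(err)
			}
		}()
	}
	wg.Wait()
	fmt.Println("getter calls:", atomic.LoadInt64(&calls)) // expected: 1
}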

A single cache can contain multiple frontends. A frontend's stored data is subject to the same memory and LRU limits as its parent cache; however, each frontend has its own private key space and possibly a different entry generation lambda.

recache provides configurable per-cache-instance limits on maximum used memory and on the last use time of an entry. On overflow the least recently used entries are evicted from the cache until the limits are satisfied again. recache also provides methods for evicting entries by key or by matcher function, and for clearing a cache or frontend by evicting all of its entries, as sketched below.
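
A fragment illustrating explicit eviction, reusing frontend from the sketch above (the "page-" key prefix is illustrative; assumes the strings and time packages are imported):

// Evict a single record immediately (t = 0)
frontend.Evict(0, "page-1")

// Evict all of this frontend's records 5 seconds from now,
// if they are still cached by then
frontend.EvictAll(5 * time.Second)

// Evict records selected by a matcher function
if err := frontend.EvictByFunc(0, func(k recache.Key) (bool, error) {
	s, ok := k.(string)
	return ok && strings.HasPrefix(s, "page-"), nil
}); err != nil {
	// handle the error returned by EvictByFunc
}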

recache performs no actions without calls into the library. This means recache has zero passive runtime cost, excluding the cost to the Go runtime of managing the memory used by recache.

TODO: benchmarks

Documentation

Index

Constants

This section is empty.

Variables

var (
	// Indicates no components have been written and no error has been returned
	// in a call to Getter. This is not allowed.
	ErrEmptyRecord = errors.New("empty record created")
)

Functions

This section is empty.

Types

type Cache

type Cache struct {
	// contains filtered or unexported fields
}

Unified storage for cached records with specific eviction parameters

func NewCache

func NewCache(opts CacheOptions) (c *Cache)

Create a new cache with the specified memory and LRU eviction limits. After either of these is exceeded, the least recently used cache records are evicted until the limits are satisfied again. Note that for optimisation purposes this eviction is eventual, not immediate.

Pass in zero values to ignore either or both eviction limits.

func (*Cache) EvictAll

func (c *Cache) EvictAll(t time.Duration)

Evict all records from the cache after t amount of time, if the matched records are still in the cache by then.

If t = 0, any matched record(s) are evicted immediately.

t can be used to decrease record turnover on often evicted records, thereby decreasing fresh data fetches and improving performance.

Subsequent scheduled eviction calls on a matching record have no effect, if their t value is greater than the time left from a previously scheduled eviction on that record.

A scheduled eviction with a smaller t than the time currently left on the record replaces the existing timer.
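
For example (a sketch of the scheduling rules above, where c is a *Cache):

c.EvictAll(10 * time.Second) // all current records scheduled to evict in 10s
c.EvictAll(time.Minute)      // no effect: 60s exceeds the time already left
c.EvictAll(time.Second)      // replaces the timers: 1s is less than the time left
c.EvictAll(0)                // t = 0: evict immediately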

func (*Cache) NewFrontend

func (c *Cache) NewFrontend(opts FrontendOptions) *Frontend

Create a new Frontend for accessing the cache. A Frontend must only be created using this method.

type CacheOptions

type CacheOptions struct {
	// Maximum amount of memory the cache can consume without forcing eviction
	MemoryLimit uint

	// Maximum last use time of record without forcing eviction
	LRULimit time.Duration
}

Options for new cache creation

type Frontend

type Frontend struct {
	// contains filtered or unexported fields
}

A frontend for accessing the cache contents

func (*Frontend) Evict

func (f *Frontend) Evict(t time.Duration, k Key)

Evict a record by key after t amount of time, if the matched record is still in the cache by then.

If t = 0, any matched record(s) are evicted immediately.

t can be used to decrease record turnover on often evicted records, thereby decreasing fresh data fetches and improving performance.

Subsequent scheduled eviction calls on a matching record have no effect, if their t value is greater than the time left from a previously scheduled eviction on that record.

A scheduled eviction with a smaller t than the time currently left on the record replaces the existing timer.

func (*Frontend) EvictAll

func (f *Frontend) EvictAll(t time.Duration)

Evict all records from the frontend after t amount of time, if the matched records are still in the cache by then.

If t = 0, any matched record(s) are evicted immediately.

t can be used to decrease record turnover on often evicted records, thereby decreasing fresh data fetches and improving performance.

Subsequent scheduled eviction calls on a matching record have no effect, if their t value is greater than the time left from a previously scheduled eviction on that record.

A scheduled eviction with a smaller t than the time currently left on the record replaces the existing timer.

func (*Frontend) EvictByFunc

func (f *Frontend) EvictByFunc(t time.Duration, fn func(Key) (bool, error)) error

Evict records from the frontend using matcher function fn after t amount of time, if the matched records are still in the cache by then.

If t = 0, any matched record(s) are evicted immediately.

t can be used to decrease record turnover on often evicted records, thereby decreasing fresh data fetches and improving performance.

Subsequent scheduled eviction calls on a matching record have no effect, if their t value is greater than the time left from a previously scheduled eviction on that record.

A scheduled eviction with a smaller t than the time currently left on the record replaces the existing timer.

func (*Frontend) Get

func (f *Frontend) Get(k Key) (s Streamer, err error)

Retrieve or generate data by key and return a consumable result Streamer

func (*Frontend) WriteHTTP

func (f *Frontend) WriteHTTP(k Key, w http.ResponseWriter, r *http.Request) (n int64, err error)

Retrieve or generate data by key and write it to w. Writes an ETag to w and responds with 304 on ETag match without writing any data. Sets the "Content-Encoding" header to "gzip".
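
A sketch of an HTTP handler built on WriteHTTP (import path, route and JSON payload are illustrative assumptions):

package main

import (
	"fmt"
	"net/http"

	"github.com/bakape/recache/v4" // import path assumed from the module version
)

func main() {
	frontend := recache.NewCache(recache.CacheOptions{}).
		NewFrontend(recache.FrontendOptions{
			Get: func(k recache.Key, rw *recache.RecordWriter) error {
				_, err := fmt.Fprintf(rw, `{"path":%q}`, k)
				return err
			},
		})

	// Generation errors are returned before any data is written to the
	// client, so responding with a plain 500 here is safe.
	http.HandleFunc("/articles/", func(w http.ResponseWriter, r *http.Request) {
		if _, err := frontend.WriteHTTP(r.URL.Path, w, r); err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
		}
	})
	http.ListenAndServe(":8080", nil)
}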

type FrontendOptions

type FrontendOptions struct {
	// Will be used for generating fresh cache records for the given key by
	// the cache engine. These records will be stored by the cache engine and
	// must not be modified after Get() returns. Get() must be thread-safe.
	Get Getter

	// Level of compression to use for storing records.
	// Defaults to gzip.DefaultCompression.
	Level *int
}

Options for creating a new cache frontend

type Getter

type Getter func(Key, *RecordWriter) error

Generates fresh cache records for the given key by writing to RecordWriter. Getter must be thread-safe.

type Key

type Key interface{}

Value used to store entries in the cache. Must be a type suitable for use as a key in a Go map.

type RecordWriter

type RecordWriter struct {
	// contains filtered or unexported fields
}

Provides utility methods for building record buffers and recursive record trees

func (*RecordWriter) Bind

func (rw *RecordWriter) Bind(f *Frontend, k Key) (Streamer, error)

Bind to a record from the passed frontend by key and return a consumable stream of the retrieved record. The record generated by rw will automatically be evicted from its parent cache when the included record is evicted.

func (*RecordWriter) BindJSON

func (rw *RecordWriter) BindJSON(
	f *Frontend,
	k Key,
	dst interface{},
) (err error)

Bind to a record from the passed frontend by key and decode it as JSON into dst. The record generated by rw will automatically be evicted from its parent cache when the included record is evicted.

func (*RecordWriter) Include

func (rw *RecordWriter) Include(f *Frontend, k Key) (err error)

Include data from the passed frontend by key and bind it to rw. The record generated by rw will automatically be evicted from its parent cache when the included record is evicted.
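
A sketch of recursive composition via Include (import path assumed; the header/pages split is illustrative):

package main

import (
	"fmt"
	"io"
	"os"

	"github.com/bakape/recache/v4" // import path assumed from the module version
)

func main() {
	cache := recache.NewCache(recache.CacheOptions{})

	// Frontend holding a shared header record
	header := cache.NewFrontend(recache.FrontendOptions{
		Get: func(k recache.Key, rw *recache.RecordWriter) error {
			_, err := rw.Write([]byte("<header>site</header>"))
			return err
		},
	})

	// Frontend whose records reference the header record instead of
	// copying it. Evicting the header evicts every page including it.
	pages := cache.NewFrontend(recache.FrontendOptions{
		Get: func(k recache.Key, rw *recache.RecordWriter) error {
			if err := rw.Include(header, "header"); err != nil {
				return err
			}
			_, err := fmt.Fprintf(rw, "<main>page %v</main>", k)
			return err
		},
	})

	s, err := pages.Get(1)
	if err != nil {
		panic(err)
	}
	r := s.Unzip()
	defer r.Close()
	// Prints the concatenation: <header>site</header><main>page 1</main>
	if _, err := io.Copy(os.Stdout, r); err != nil {
		panic(err)
	}
}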

func (*RecordWriter) ReadFrom

func (rw *RecordWriter) ReadFrom(r io.Reader) (n int64, err error)

Read non-gzipped data from r and write it to the record for storage

func (*RecordWriter) Write

func (rw *RecordWriter) Write(p []byte) (n int, err error)

Write non-gzipped data to the record for storage

type Streamer

type Streamer interface {
	// Can be called safely from multiple goroutines
	io.WriterTo

	// Create a new io.Reader for this stream.
	// Multiple instances of such an io.Reader can exist and be read
	// concurrently.
	NewReader() io.Reader

	// Convenience method for efficiently decoding stream contents as JSON into
	// the destination variable.
	//
	// dst: pointer to destination variable
	DecodeJSON(dst interface{}) error

	// Create a new io.ReadCloser for the unzipped content of this stream.
	//
	// It is the caller's responsibility to call Close on the io.ReadCloser
	// when finished reading.
	Unzip() io.ReadCloser

	// Return SHA1 hash of the content
	SHA1() [sha1.Size]byte

	// Return strong etag of content
	ETag() string
}

Readable stream that implements io.WriterTo and supports conversion to the io.Reader interface
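
A sketch of consuming a Streamer, assuming a Getter that writes JSON (import path and payload are illustrative):

package main

import (
	"fmt"
	"io"

	"github.com/bakape/recache/v4" // import path assumed from the module version
)

func main() {
	f := recache.NewCache(recache.CacheOptions{}).
		NewFrontend(recache.FrontendOptions{
			Get: func(k recache.Key, rw *recache.RecordWriter) error {
				_, err := fmt.Fprintf(rw, `{"id":%v,"name":"example"}`, k)
				return err
			},
		})

	s, err := f.Get(1)
	if err != nil {
		panic(err)
	}

	// Decode the cached record directly as JSON
	var user struct {
		ID   int    `json:"id"`
		Name string `json:"name"`
	}
	if err = s.DecodeJSON(&user); err != nil {
		panic(err)
	}

	// Or read the uncompressed bytes through an io.ReadCloser
	r := s.Unzip()
	defer r.Close()
	body, err := io.ReadAll(r)
	if err != nil {
		panic(err)
	}

	fmt.Println(user.Name, len(body), s.ETag())
}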

Directories

Path Synopsis
cmd
