lcw

package module
v1.1.0
Published: Jan 11, 2024 License: MIT Imports: 13 Imported by: 3

README

Loading Cache Wrapper

The library adds a thin layer on top of an LRU cache and an internal implementation of an expirable (TTL-based) cache.

Cache name      Constructor            Defaults           Description
LruCache        lcw.NewLruCache        keys=1000          LRU cache with limits
ExpirableCache  lcw.NewExpirableCache  keys=1000, ttl=5m  TTL cache with limits
RedisCache      lcw.NewRedisCache      ttl=5m             Redis cache with limits
Nop             lcw.NewNopCache                           Do-nothing cache

Main features:

  • LoadingCache (guava style)
  • Limit maximum cache size (in bytes)
  • Limit maximum key size
  • Limit maximum size of a value
  • Limit number of keys
  • TTL support (ExpirableCache and RedisCache)
  • Callback on eviction event (not supported in RedisCache)
  • Functional style invalidation
  • Functional options
  • Sane defaults

Install and update

go get -u github.com/go-pkgz/lcw

Usage

package main

import (
	"fmt"

	"github.com/go-pkgz/lcw"
)

// getDataFromSomeSource is a placeholder for the real data loader
func getDataFromSomeSource(key string) (string, error) {
	return "data for " + key, nil
}

func main() {
	cache, err := lcw.NewLruCache(lcw.MaxKeys(500), lcw.MaxCacheSize(65536), lcw.MaxValSize(200), lcw.MaxKeySize(32))
	if err != nil {
		panic("failed to create cache")
	}
	defer cache.Close()

	val, err := cache.Get("key123", func() (interface{}, error) {
		res, err := getDataFromSomeSource("key123") // returns string
		return res, err
	})
	if err != nil {
		panic("failed to get data")
	}

	s := val.(string) // cached value
	fmt.Println(s)
}
Cache with URI

A cache can also be created from a URI; a minimal sketch follows the list:

  • mem://lru?max_key_size=10&max_val_size=1024&max_keys=50&max_cache_size=64000 - creates an LRU cache with the given limits
  • mem://expirable?ttl=30s&max_key_size=10&max_val_size=1024&max_keys=50&max_cache_size=64000 - creates an expirable (TTL) cache
  • redis://10.0.0.1:1234?db=16&password=qwerty&network=tcp4&dial_timeout=1s&read_timeout=5s&write_timeout=3s - creates a Redis cache
  • nop:// - creates a Nop cache
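
For example, an expirable cache can be built from such a URI with lcw.New. A minimal sketch (the URI parameters mirror the list above):

package main

import (
	"fmt"

	"github.com/go-pkgz/lcw"
)

func main() {
	// expirable in-memory cache: 30s TTL, up to 50 keys
	cache, err := lcw.New("mem://expirable?ttl=30s&max_keys=50")
	if err != nil {
		panic("failed to create cache")
	}
	defer cache.Close()

	val, err := cache.Get("greeting", func() (interface{}, error) {
		return "hello", nil // loader runs only on cache miss
	})
	if err != nil {
		panic("failed to get data")
	}
	fmt.Println(val.(string))
}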

Scoped cache

Scache wraps any LoadingCache implementation and adds a number of special features (see the sketch after the list):

  1. The key is not a plain string but a composed type made from a partition, a key id, and a list of scopes (tags).
  2. The value type is limited to []byte.
  3. A Flush method is added for scoped/tagged invalidation of multiple records in a given partition.
  4. The interface is simplified to Get, Stat, Flush, and Close only.
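
A minimal sketch of the scoped cache on top of an expirable backend (the partition, id, and scope names are arbitrary):

package main

import (
	"fmt"

	"github.com/go-pkgz/lcw"
)

func main() {
	backend, err := lcw.NewExpirableCache(lcw.MaxKeys(100))
	if err != nil {
		panic("failed to create backend cache")
	}
	sc := lcw.NewScache(backend)
	defer sc.Close()

	// composed key: partition "sys1", id "post-42", scope "last_posts"
	key := lcw.NewKey("sys1").ID("post-42").Scopes("last_posts")
	data, err := sc.Get(key, func() ([]byte, error) {
		return []byte("rendered post"), nil // loader runs on miss; values are []byte
	})
	if err != nil {
		panic("failed to get data")
	}
	fmt.Println(string(data))

	// invalidate everything in partition "sys1" tagged with the "last_posts" scope
	sc.Flush(lcw.Flusher("sys1").Scopes("last_posts"))
}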

Details

  • In all cache types other than Redis (i.e. LRU and Expirable at the moment), values are stored as-is, which means mutable values can be changed outside of the cache. ExampleLoadingCache_Mutability illustrates that.
  • All byte-size limits (MaxCacheSize and MaxValSize) only work for values implementing the lcw.Sizer interface; see the sketch after this list.
  • Negative limits (in the Max* options) are rejected.
  • The implementation started as a part of remark42, later moved to the go-pkgz/rest library, and was finally generalized to become lcw.
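
A minimal sketch of a value type implementing lcw.Sizer so the byte-size limits apply (the type itself is illustrative):

// sizedValue reports its own size, making it subject to MaxValSize and MaxCacheSize checks
type sizedValue struct {
	body []byte
}

// Size implements lcw.Sizer
func (v sizedValue) Size() int { return len(v.body) }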

Documentation

Overview

Package lcw adds a thin layer on top of LRU and expirable caches, providing more limits and a common interface. The primary method to get (and set) data to/from the cache is LoadingCache.Get, which returns the stored data for a given key or calls the provided func to retrieve and store it, similar to a Guava loading cache. Limits allow setting maximums for key size, number of keys, value size, and total size of values in the cache. CacheStat gives general stats on cache performance. Four flavors of cache are provided: Nop (do-nothing cache), ExpirableCache (TTL-based), LruCache, and RedisCache.

Example (LoadingCacheMutability)

ExampleLoadingCacheMutability illustrates changing a mutable stored item outside of the cache; this works only for non-Redis caches.

c, err := NewExpirableCache(MaxKeys(10), TTL(time.Minute*30)) // make expirable cache (30m TTL) with up to 10 keys
if err != nil {
	panic("can' make cache")
}
defer c.Close()

mutableSlice := []string{"key1", "key2"}

// put mutableSlice in "mutableSlice" cache key
_, _ = c.Get("mutableSlice", func() (interface{}, error) {
	return mutableSlice, nil
})

// get from cache, func won't run because mutableSlice is cached
// value is original now
v, _ := c.Get("mutableSlice", func() (interface{}, error) {
	return nil, nil
})
fmt.Printf("got %v slice from cache\n", v)

mutableSlice[0] = "another_key_1"
mutableSlice[1] = "another_key_2"

// get from cache, func won't run because mutableSlice is cached
// value is changed inside the cache now because mutableSlice is stored as-is, in a mutable state
v, _ = c.Get("mutableSlice", func() (interface{}, error) {
	return nil, nil
})
fmt.Printf("got %v slice from cache after it's change outside of cache\n", v)
Output:

got [key1 key2] slice from cache
got [another_key_1 another_key_2] slice from cache after it's change outside of cache

Constants

const RedisValueSizeLimit = 512 * 1024 * 1024

RedisValueSizeLimit is the maximum allowed value size in Redis

Variables

This section is empty.

Functions

This section is empty.

Types

type CacheStat added in v0.2.0

type CacheStat struct {
	Hits   int64
	Misses int64
	Keys   int
	Size   int64
	Errors int64
}

CacheStat represents cache stats values

func (CacheStat) String added in v0.2.0

func (s CacheStat) String() string

String formats cache stats

type ExpirableCache added in v0.2.0

type ExpirableCache struct {
	CacheStat
	// contains filtered or unexported fields
}

ExpirableCache implements LoadingCache with TTL.

func NewExpirableCache added in v0.2.0

func NewExpirableCache(opts ...Option) (*ExpirableCache, error)

NewExpirableCache makes expirable LoadingCache implementation, 1000 max keys by default and 5m TTL
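
A minimal sketch combining the defaults with a custom TTL and an eviction callback, in the same style as the examples above (the values are arbitrary):

// expirable cache: up to 1000 keys, 10m TTL, log evicted entries
cache, err := NewExpirableCache(
	MaxKeys(1000),
	TTL(10*time.Minute),
	OnEvicted(func(key string, value interface{}) {
		log.Printf("evicted %s", key) // runs when an entry is evicted, e.g. after expiration or invalidation
	}),
)
if err != nil {
	log.Fatalf("can't make expirable cache, %v", err)
}
defer cache.Close()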

func (*ExpirableCache) Close added in v0.6.0

func (c *ExpirableCache) Close() error

Close kills cleanup goroutine

func (*ExpirableCache) Delete added in v0.3.0

func (c *ExpirableCache) Delete(key string)

Delete cache item by key

func (*ExpirableCache) Get added in v0.2.0

func (c *ExpirableCache) Get(key string, fn func() (interface{}, error)) (data interface{}, err error)

Get gets value by key or load with fn if not found in cache

func (*ExpirableCache) Invalidate added in v0.2.0

func (c *ExpirableCache) Invalidate(fn func(key string) bool)

Invalidate removes keys with passed predicate fn, i.e. fn(key) should be true to get evicted
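
For example, a whole class of keys can be evicted with a predicate. A sketch, assuming c is an *ExpirableCache created earlier and strings is imported:

// drop every cached entry whose key starts with "user-"
c.Invalidate(func(key string) bool {
	return strings.HasPrefix(key, "user-")
})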

func (*ExpirableCache) Keys added in v0.5.0

func (c *ExpirableCache) Keys() (res []string)

Keys returns cache keys

func (*ExpirableCache) Peek added in v0.2.0

func (c *ExpirableCache) Peek(key string) (interface{}, bool)

Peek returns the key value (or undefined if not found) without updating the "recently used"-ness of the key.

func (*ExpirableCache) Purge added in v0.2.0

func (c *ExpirableCache) Purge()

Purge clears the cache completely.

func (*ExpirableCache) Stat added in v0.2.0

func (c *ExpirableCache) Stat() CacheStat

Stat returns cache statistics

type FlusherRequest added in v0.5.0

type FlusherRequest struct {
	// contains filtered or unexported fields
}

FlusherRequest used as input for cache.Flush

func Flusher added in v0.5.0

func Flusher(partition string) FlusherRequest

Flusher makes new FlusherRequest with empty scopes

func (FlusherRequest) Scopes added in v0.5.0

func (f FlusherRequest) Scopes(scopes ...string) FlusherRequest

Scopes adds scopes to FlusherRequest
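
For example, flushing a partition by scope might look like this (a sketch; scache is assumed to be a *Scache created earlier, and the names are arbitrary):

// drop all entries in partition "sys1" tagged with either scope
req := Flusher("sys1").Scopes("last_posts", "comments")
scache.Flush(req)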

type Key added in v0.5.0

type Key struct {
	// contains filtered or unexported fields
}

Key for the scoped cache. Created for a given partition (can be empty) and set with ID and Scopes. Example: k := NewKey("sys1").ID(postID).Scopes("last_posts", customer_id)

func NewKey added in v0.5.0

func NewKey(partition ...string) Key

NewKey makes base key for given partition. Partition can be omitted.

func (Key) ID added in v0.5.0

func (k Key) ID(id string) Key

ID sets key id

func (Key) Scopes added in v0.5.0

func (k Key) Scopes(scopes ...string) Key

Scopes of the key

func (Key) String added in v0.5.0

func (k Key) String() string

String makes the full string key from the partition, id, and scopes; the key string is made as <partition>@@<id>@@<scope1>$$<scope2>....
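
For example (a sketch; the names are arbitrary, and the commented output follows the format above):

k := NewKey("sys1").ID("post-42").Scopes("comments", "last_posts")
fmt.Println(k.String()) // sys1@@post-42@@comments$$last_posts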

type LoadingCache

type LoadingCache interface {
	Get(key string, fn func() (interface{}, error)) (val interface{}, err error) // load or get from cache
	Peek(key string) (interface{}, bool)                                         // get from cache by key
	Invalidate(fn func(key string) bool)                                         // invalidate items for func(key) == true
	Delete(key string)                                                           // delete by key
	Purge()                                                                      // clear cache
	Stat() CacheStat                                                             // cache stats
	Keys() []string                                                              // list of all keys
	Close() error                                                                // close open connections
}

LoadingCache defines a guava-like cache with a Get method returning the cached value or retrieving it if not in the cache

func New added in v0.4.0

func New(uri string) (LoadingCache, error)

New parses the uri and makes any of the supported caches. Supported URIs:

  • redis://<ip>:<port>?db=123&max_keys=10
  • mem://lru?max_keys=10&max_cache_size=1024
  • mem://expirable?ttl=30s&max_val_size=100
  • nop://

type LruCache added in v0.2.0

type LruCache struct {
	CacheStat
	// contains filtered or unexported fields
}

LruCache wraps lru.LruCache with loading cache Get and size limits

Example

This example illustrates the use of the LRU loading cache

// set up test server for single response
var hitCount int
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
	if r.URL.String() == "/post/42" && hitCount == 0 {
		_, _ = w.Write([]byte("<html><body>test response</body></html>"))
		return
	}
	w.WriteHeader(404)
}))

// load page function
loadURL := func(url string) (string, error) {
	resp, err := http.Get(url) // nolint
	if err != nil {
		return "", err
	}
	b, err := io.ReadAll(resp.Body)
	_ = resp.Body.Close()
	if err != nil {
		return "", err
	}
	return string(b), nil
}

// fixed size LRU cache, 100 items, up to 10k in total size
cache, err := NewLruCache(MaxKeys(100), MaxCacheSize(10*1024))
if err != nil {
	log.Printf("can't make lru cache, %v", err)
}

// url not in cache, load data
url := ts.URL + "/post/42"
val, err := cache.Get(url, func() (val interface{}, err error) {
	return loadURL(url)
})
if err != nil {
	log.Fatalf("can't load url %s, %v", url, err)
}
fmt.Println(val.(string))

// url not in cache, load data
val, err = cache.Get(url, func() (val interface{}, err error) {
	return loadURL(url)
})
if err != nil {
	log.Fatalf("can't load url %s, %v", url, err)
}
fmt.Println(val.(string))

// url cached, skip load and get from the cache
val, err = cache.Get(url, func() (val interface{}, err error) {
	return loadURL(url)
})
if err != nil {
	log.Fatalf("can't load url %s, %v", url, err)
}
fmt.Println(val.(string))

// get cache stats
stats := cache.Stat()
fmt.Printf("%+v\n", stats)

// close test HTTP server after all log.Fatalf are passed
ts.Close()
Output:

<html><body>test response</body></html>
<html><body>test response</body></html>
<html><body>test response</body></html>
{hits:2, misses:1, ratio:0.67, keys:1, size:0, errors:0}

func NewLruCache added in v0.2.0

func NewLruCache(opts ...Option) (*LruCache, error)

NewLruCache makes LRU LoadingCache implementation, 1000 max keys by default

func (*LruCache) Close added in v0.6.0

func (c *LruCache) Close() error

Close does nothing for this type of cache

func (*LruCache) Delete added in v0.3.0

func (c *LruCache) Delete(key string)

Delete cache item by key

func (*LruCache) Get added in v0.2.0

func (c *LruCache) Get(key string, fn func() (interface{}, error)) (data interface{}, err error)

Get gets value by key or load with fn if not found in cache

func (*LruCache) Invalidate added in v0.2.0

func (c *LruCache) Invalidate(fn func(key string) bool)

Invalidate removes keys with passed predicate fn, i.e. fn(key) should be true to get evicted

func (*LruCache) Keys added in v0.5.0

func (c *LruCache) Keys() (res []string)

Keys returns cache keys

func (*LruCache) Peek added in v0.2.0

func (c *LruCache) Peek(key string) (interface{}, bool)

Peek returns the key value (or undefined if not found) without updating the "recently used"-ness of the key.

func (*LruCache) Purge added in v0.2.0

func (c *LruCache) Purge()

Purge clears the cache completely.

func (*LruCache) Stat added in v0.2.0

func (c *LruCache) Stat() CacheStat

Stat returns cache statistics

type Nop

type Nop struct{}

Nop is do-nothing implementation of LoadingCache

func NewNopCache added in v0.2.0

func NewNopCache() *Nop

NewNopCache makes new do-nothing cache
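
Since *Nop satisfies LoadingCache, it can be dropped in (e.g. in tests) to disable caching entirely. A minimal sketch:

var cache LoadingCache = NewNopCache()
v, _ := cache.Get("any-key", func() (interface{}, error) {
	return "computed every time", nil // fn always runs, nothing is stored
})
fmt.Println(v.(string))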

func (*Nop) Close added in v0.6.0

func (n *Nop) Close() error

Close does nothing for nop cache

func (*Nop) Delete added in v0.3.0

func (n *Nop) Delete(string)

Delete does nothing for nop cache

func (*Nop) Get

func (n *Nop) Get(_ string, fn func() (interface{}, error)) (interface{}, error)

Get calls fn without any caching

func (*Nop) Invalidate

func (n *Nop) Invalidate(func(key string) bool)

Invalidate does nothing for nop cache

func (*Nop) Keys added in v0.5.0

func (n *Nop) Keys() []string

Keys does nothing for nop cache

func (*Nop) Peek

func (n *Nop) Peek(string) (interface{}, bool)

Peek does nothing and always returns false

func (*Nop) Purge

func (n *Nop) Purge()

Purge does nothing for nop cache

func (*Nop) Stat added in v0.2.0

func (n *Nop) Stat() CacheStat

Stat always returns zeroed stats for the nop cache

type Option

type Option func(o *options) error

Option func type

func EventBus added in v0.7.0

func EventBus(pubSub eventbus.PubSub) Option

EventBus sets PubSub for distributed cache invalidation

func MaxCacheSize

func MaxCacheSize(max int64) Option

MaxCacheSize functional option defines the total size of cached data. By default it is 0, which means unlimited.

func MaxKeySize

func MaxKeySize(max int) Option

MaxKeySize functional option defines the largest key size allowed to be used in the cache. By default it is 0, which means unlimited.

func MaxKeys

func MaxKeys(max int) Option

MaxKeys functional option defines how many keys to keep. By default it is 0, which means unlimited.

func MaxValSize

func MaxValSize(max int) Option

MaxValSize functional option defines the largest value size allowed to be cached. By default it is 0, which means unlimited.

func OnEvicted added in v0.3.0

func OnEvicted(fn func(key string, value interface{})) Option

OnEvicted sets callback on invalidation event

func TTL added in v0.2.0

func TTL(ttl time.Duration) Option

TTL functional option defines the expiration duration. Works for ExpirableCache only

type RedisCache added in v0.4.0

type RedisCache struct {
	CacheStat
	// contains filtered or unexported fields
}

RedisCache implements LoadingCache for Redis.

func NewRedisCache added in v0.4.0

func NewRedisCache(backend *redis.Client, opts ...Option) (*RedisCache, error)

NewRedisCache makes Redis LoadingCache implementation.
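
A minimal sketch of wiring a go-redis client into NewRedisCache, in the same style as the examples above; the go-redis import path and the address are assumptions and must match the client version your lcw release depends on:

// client comes from the go-redis package matching your lcw release (assumption)
client := redis.NewClient(&redis.Options{Addr: "127.0.0.1:6379"})
cache, err := NewRedisCache(client, TTL(5*time.Minute), MaxKeys(1000))
if err != nil {
	log.Fatalf("can't make redis cache, %v", err)
}
defer cache.Close()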

func (*RedisCache) Close added in v0.6.0

func (c *RedisCache) Close() error

Close closes underlying connections

func (*RedisCache) Delete added in v0.4.0

func (c *RedisCache) Delete(key string)

Delete cache item by key

func (*RedisCache) Get added in v0.4.0

func (c *RedisCache) Get(key string, fn func() (interface{}, error)) (data interface{}, err error)

Get gets value by key or load with fn if not found in cache

func (*RedisCache) Invalidate added in v0.4.0

func (c *RedisCache) Invalidate(fn func(key string) bool)

Invalidate removes keys with passed predicate fn, i.e. fn(key) should be true to get evicted

func (*RedisCache) Keys added in v0.5.0

func (c *RedisCache) Keys() (res []string)

Keys gets all keys for the cache

func (*RedisCache) Peek added in v0.4.0

func (c *RedisCache) Peek(key string) (interface{}, bool)

Peek returns the key value (or undefined if not found) without updating the "recently used"-ness of the key.

func (*RedisCache) Purge added in v0.4.0

func (c *RedisCache) Purge()

Purge clears the cache completely.

func (*RedisCache) Stat added in v0.4.0

func (c *RedisCache) Stat() CacheStat

Stat returns cache statistics

type Scache added in v0.5.0

type Scache struct {
	// contains filtered or unexported fields
}

Scache wraps LoadingCache with partitions (sub-systems) and scopes. Simplified interface with just 4 funcs - Get, Flush, Stat and Close

Example

This example illustrates the use of the scoped cache (Scache) on top of an LRU loading cache

// set up test server for single response
var hitCount int
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
	if r.URL.String() == "/post/42" && hitCount == 0 {
		_, _ = w.Write([]byte("<html><body>test response</body></html>"))
		return
	}
	w.WriteHeader(404)
}))

// load page function
loadURL := func(url string) ([]byte, error) {
	resp, err := http.Get(url) // nolint
	if err != nil {
		return nil, err
	}
	b, err := io.ReadAll(resp.Body)
	_ = resp.Body.Close()
	if err != nil {
		return nil, err
	}
	return b, nil
}

// fixed size LRU cache, 100 items, up to 10k in total size
backend, err := NewLruCache(MaxKeys(100), MaxCacheSize(10*1024))
if err != nil {
	log.Fatalf("can't make lru cache, %v", err)
}

cache := NewScache(backend)

// url not in cache, load data
url := ts.URL + "/post/42"
key := NewKey().ID(url).Scopes("test")
val, err := cache.Get(key, func() (val []byte, err error) {
	return loadURL(url)
})
if err != nil {
	log.Fatalf("can't load url %s, %v", url, err)
}
fmt.Println(string(val))

// url not in cache, load data
key = NewKey().ID(url).Scopes("test")
val, err = cache.Get(key, func() (val []byte, err error) {
	return loadURL(url)
})
if err != nil {
	log.Fatalf("can't load url %s, %v", url, err)
}
fmt.Println(string(val))

// url cached, skip load and get from the cache
key = NewKey().ID(url).Scopes("test")
val, err = cache.Get(key, func() (val []byte, err error) {
	return loadURL(url)
})
if err != nil {
	log.Fatalf("can't load url %s, %v", url, err)
}
fmt.Println(string(val))

// get cache stats
stats := cache.Stat()
fmt.Printf("%+v\n", stats)

// close cache and test HTTP server after all log.Fatalf are passed
ts.Close()
err = cache.Close()
if err != nil {
	log.Fatalf("can't close cache %v", err)
}
Output:

<html><body>test response</body></html>
<html><body>test response</body></html>
<html><body>test response</body></html>
{hits:2, misses:1, ratio:0.67, keys:1, size:0, errors:0}

func NewScache added in v0.5.0

func NewScache(lc LoadingCache) *Scache

NewScache creates Scache on top of LoadingCache

func (*Scache) Close added in v0.6.1

func (m *Scache) Close() error

Close calls Close function of the underlying cache

func (*Scache) Flush added in v0.5.0

func (m *Scache) Flush(req FlusherRequest)

Flush clears cache and calls postFlushFn async

func (*Scache) Get added in v0.5.0

func (m *Scache) Get(key Key, fn func() ([]byte, error)) (data []byte, err error)

Get retrieves a key from underlying backend

func (*Scache) Stat added in v0.5.0

func (m *Scache) Stat() CacheStat

Stat delegates the call to the underlying cache backend

type Sizer

type Sizer interface {
	Size() int
}

Sizer allows performing size-based restrictions, optional. If not defined, both maxValueSize and maxCacheSize checks will be ignored

Directories

Path            Synopsis
eventbus        Package eventbus provides PubSub interface used for distributed cache invalidation, as well as NopPubSub and RedisPubSub implementations.
internal
  cache         Package cache implements LoadingCache.
