bicache

Published: Jul 3, 2023 License: MIT Imports: 12 Imported by: 0


Bicache is a sharded hybrid MFU/MRU, TTL optional, general purpose cache for Go. It combines exact LRU ordering and MFU accounting per shard (read: accurate but possibly expensive).

Bicache's two tiers of cache are individually size configurable (in key count). A global lookup table is used to limit read ops to a max of one cache miss, even with two tiers of sharded cache levels. Bicache handles MRU to MFU promotions and overflow evictions at write time or on automatic interval as a background task.

Bicache is built for highly concurrent, read-heavy workloads.

API

See GoDoc for additional reference.

See code Example section at bottom.

Set(string, interface{}) bool
ok := c.Set("key", "value")

Sets key to value (updating the value if the key already exists). Set can be used to update an existing TTL'd key without affecting the TTL. The returned bool signals whether the set was successful; false is returned when Bicache is configured with NoOverflow enabled and the cache is full.

SetTTL(string, interface{}, int32) bool
ok := c.SetTTL("key", "value", 3600)

Sets key to value (updating the value if the key already exists) with a TTL expiration (in seconds). SetTTL can be used to add a TTL to an existing non-TTL'd key or to update an existing TTL. The returned bool signals whether the set was successful; false is returned when Bicache is configured with NoOverflow enabled and the cache is full.

Get(string) interface{}
value := c.Get("key")

Returns value for key. Increments the key score by 1. Get returns nil if the key doesn't exist.

Del(string)
c.Del("key")

Removes key from the cache.

List(int) ListResults
c.List(10)

Returns a *bicache.ListResults that includes the top n keys by score, formatted as key:state:score (state: 0 = MRU cache, 1 = MFU cache).

type ListResults []*KeyInfo

type KeyInfo struct {
    Key   string
    State uint8
    Score uint64
}
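As a sketch of how List output might be consumed, the snippet below copies the KeyInfo and ListResults types from the documentation and renders fabricated entries in the key:state:score format (in real use the entries would come from c.List(n); the helper formatResults is hypothetical, not part of the API):

```go
package main

import (
	"fmt"
	"sort"
)

// KeyInfo and ListResults mirror the types returned by List,
// copied from the bicache documentation.
type KeyInfo struct {
	Key   string
	State uint8
	Score uint64
}

type ListResults []*KeyInfo

// formatResults renders entries as key:state:score strings,
// highest score first, matching the List output format above.
// Hypothetical helper for illustration only.
func formatResults(lr ListResults) []string {
	sort.Slice(lr, func(i, j int) bool { return lr[i].Score > lr[j].Score })
	out := make([]string, 0, len(lr))
	for _, ki := range lr {
		out = append(out, fmt.Sprintf("%s:%d:%d", ki.Key, ki.State, ki.Score))
	}
	return out
}

func main() {
	// Fabricated entries for illustration; in real use these
	// would come from c.List(10).
	lr := ListResults{
		{Key: "name", State: 0, Score: 3},
		{Key: "1", State: 1, Score: 7},
	}
	fmt.Println(formatResults(lr)) // [1:1:7 name:0:3]
}
```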
FlushMRU() error, FlushMFU() error, FlushAll() error
err := c.FlushMRU()
err := c.FlushMFU()
err := c.FlushAll()

Flush commands flush all keys from the respective cache. FlushAll is faster than combining FlushMRU and FlushMFU.

Pause() error, Resume() error
c.Pause()
c.Resume()

Pause and Resume allow auto evictions to be suspended and resumed, respectively. If eviction logging is enabled and evictions are paused, bicache will log accordingly.

Close()
c.Close()

Close should be called when a *Bicache is done being used, before removing any references to it, to ensure any background tasks have returned and that it can be cleanly garbage collected.

Stats() *Stats
stats := c.Stats()

Returns a *bicache.Stats.

type Stats struct {
    MFUSize   uint   // Number of acive MFU keys.
    MRUSize   uint   // Number of active MRU keys.
    MFUUsedP  uint   // MFU used in percent.
    MRUUsedP  uint   // MRU used in percent.
    Hits      uint64 // Cache hits.
    Misses    uint64 // Cache misses.
    Evictions uint64 // Cache evictions.
    Overflows uint64 // Failed sets on full caches.
}

Stats structs can be formatted as a JSON string:

j, _ := json.Marshal(stats)
fmt.Println(string(j))
{"MFUSize":0,"MRUSize":3,"MFUUsedP":0,"MRUUsedP":4,"Hits":3,"Misses":0,"Evictions":0,"Overflows":0}

Design

In a pure MRU cache, both fetching and setting a key moves it to the front of the list. When the list is full, keys are evicted from the tail as space for new keys is needed. Bicache isolates MRU thrashing by promoting the most frequently used keys to an MFU cache when the MRU cache is full. At MRU eviction time, Bicache gathers the highest-scoring MRU keys and promotes only those whose scores exceed keys in the MFU. Any remaining keys that must be evicted are handled by demoting MFU keys to the MRU, followed by MRU tail eviction.
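The promotion decision described above can be sketched as follows. This is a toy illustration of the score comparison, not bicache's internal code; the promote helper and its signature are assumptions:

```go
package main

import "fmt"

// promote returns which MRU candidate scores exceed the lowest
// score currently held in the MFU - per the design above, only
// these keys are promoted. Toy sketch, not bicache's internals.
func promote(mruTop []uint64, mfuMin uint64) []uint64 {
	var promoted []uint64
	for _, s := range mruTop {
		if s > mfuMin {
			promoted = append(promoted, s)
		}
	}
	return promoted
}

func main() {
	// Highest-scored MRU keys vs. an MFU whose coldest key
	// scores 5: only scores 9 and 6 qualify for promotion.
	fmt.Println(promote([]uint64{9, 6, 4}, 5)) // [9 6]
}
```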


New keys are always set to the head of the MRU list; MFU keys are only ever populated through promotion from the MRU list.

Internally, bicache shards the two cache tiers into many sub-caches (sized through configuration in powers of 2). This is done for two primary reasons: 1) to reduce lock contention in highly concurrent workloads, and 2) to minimize the maximum runtime of expensive maintenance tasks (e.g. many MRU to MFU promotions followed by many MRU evictions). Otherwise, shards are invisible from the perspective of the API.

(Diagram: the color key denotes shard lock exclusivity; blue represents a read lock, orange a full RW lock.)

Get, Set and Delete requests are routed to the appropriate cache shard by hashing the key name. Bicache's internal accounting, cache promotion, evictions and usage stats are all isolated per shard. Promotions and evictions are handled sequentially across shards in a dedicated background task at the configured AutoEvict interval (promotion/eviction timings are emitted if configured; these metrics represent the most performance-influencing aspect of bicache). When calling the Stats() method on bicache, shard statistics (hits, misses, usage) are aggregated and returned.
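Hash routing over a power-of-2 shard count can be sketched with a hash plus a bitmask. This is illustrative only; bicache's actual hash function may differ, and shardIndex is a hypothetical helper:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// shardIndex routes a key to one of n shards (n must be a power
// of 2) by hashing the key name and masking off the low bits.
// Illustrative sketch, not necessarily bicache's routing code.
func shardIndex(key string, n uint32) uint32 {
	h := fnv.New32a()
	h.Write([]byte(key))
	// For power-of-2 n, masking with n-1 is a cheap modulo.
	return h.Sum32() & (n - 1)
}

func main() {
	for _, k := range []string{"name", "1", "myKey"} {
		fmt.Printf("%q -> shard %d of 16\n", k, shardIndex(k, 16))
	}
}
```

The mask trick is one reason shard counts are constrained to powers of 2: routing stays a single AND instruction rather than a modulo.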

Installation

Tested with Go 1.7+.

  • go get -u github.com/jamiealquiza/bicache
  • Import package (or go install github.com/jamiealquiza/bicache/... for examples)

Configuration

Shard counts

Shards must be sized in powers of 2. Shards are relatively inexpensive to manage but should not be arbitrarily high. Shard sizing should be relative to the desired cache sizes and workload; larger key spaces and greater write concurrency/rates benefit from more shards. Typical sizes might be 8 shards for simple testing and 1024 shards for production workloads that experience tens of thousands (or more) of cache lookups a second.

Cache sizes

Bicache can be configured with arbitrary sizes for each cache, allowing a ratio of MFU to MRU for different usage patterns. While the example shows very low cache sizes, this is purely to demonstrate functionality when the MRU is overflowed. A real world configuration might be a 10,000 key MFU and 30,000 key MRU capacity.

The Config.NoOverflow setting specifies whether or not Set and SetTTL methods are allowed to add additional keys when the cache is full. If NoOverflow is enabled, a set will return false if the cache is full. Allowing overflow lets caches run over 100% utilization until a promotion/eviction cycle is performed to evict overflow keys. NoOverflow may be interesting for strict cache size controls with extremely high set volumes, where the caches could reach several times their capacity between eviction cycles.

The MFU can also be set to 0, causing Bicache to behave like a typical MRU/LRU cache.

Also take note that the actual cache capacity may vary slightly from what's configured once the shard count setting is incorporated. MFU and MRU sizes are divided over the number of configured shards, rounded up for even distribution. For example, setting the MRU capacity to 9 and the shard count to 6 would result in an actual MRU capacity of 12 (a minimum of 2 MRU keys per shard to deliver the requested 9). In practice, this would go mostly unnoticed as most typical shard counts will be upwards of 1024 and cache sizes in the tens of thousands.
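The rounding above is just a ceiling division applied per shard. A minimal sketch (actualCapacity is a hypothetical helper, not part of the bicache API):

```go
package main

import "fmt"

// actualCapacity shows how a configured cache size is divided
// over shards and rounded up for even distribution, per the
// example above. Hypothetical helper for illustration only.
func actualCapacity(configured, shards uint) uint {
	perShard := (configured + shards - 1) / shards // ceiling division
	return perShard * shards
}

func main() {
	// An MRU capacity of 9 over 6 shards rounds up to 2 keys
	// per shard, for an actual capacity of 12.
	fmt.Println(actualCapacity(9, 6)) // 12
}
```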

Auto Eviction

TTL expirations, MRU to MFU promotions, and MRU overflow evictions only occur automatically if the AutoEvict configuration parameter is set to a non-zero value, in which case a background task performs them at that interval. If unset or explicitly configured to 0, TTL expirations never run, and MRU promotions and evictions are performed at each Set operation.

The Bicache EvictLog configuration specifies whether or not eviction timing logs are emitted:

2017/02/22 11:01:47 [PromoteEvict] cumulative: 61.023µs | min: 52ns | max: 434ns

This reports the total time spent on the previous eviction cycle across all shards, along with the min and max time experienced for any individual shard.

Example

test.go:

package main

import (
        "encoding/json"
        "fmt"
        "time"

        "github.com/jamiealquiza/bicache"
)

func main() {
        c, _ := bicache.New(&bicache.Config{
                MFUSize:    24,    // MFU capacity in keys
                MRUSize:    64,    // MRU capacity in keys
                ShardCount: 16,    // Shard count. Defaults to 512 if unset.
                AutoEvict:  30000, // Run TTL evictions + MRU->MFU promotions / evictions automatically every 30s.
                EvictLog:   true,  // Emit eviction timing logs.
                NoOverflow: true,  // Disallow Set ops when the MRU cache is full.
        })

        // Keys must be strings and values
        // can be essentially anything (value is an interface{}).
        // Key and value types can be mixed
        // in a single cache object.
        c.Set("name", "john")
        c.Set("1", 5535)
        c.Set("myKey", []byte("my value"))

        time.Sleep(time.Second)

        fmt.Println(c.Get("name"))
        fmt.Println(c.Get("1"))
        fmt.Println(c.Get("myKey"))

        stats := c.Stats()
        j, _ := json.Marshal(stats)
        fmt.Printf("\n%s\n", string(j))
}

Output:

% go run test.go
john
5535
[109 121 32 118 97 108 117 101]

{"MFUSize":0,"MRUSize":3,"MFUUsedP":0,"MRUUsedP":4,"Hits":3,"Misses":0,"Evictions":0,"Overflows":0}

Documentation

Overview

Package bicache implements a two-tier MFU/MRU cache with sharded cache units.


Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type Bicache

type Bicache struct {
	ShardCount uint32
	Size       int
	// contains filtered or unexported fields
}

Bicache implements a two-tier MFU/MRU cache with sharded cache units.

func New

func New(c *Config) (*Bicache, error)

New takes a *Config and returns an initialized *Bicache.

func (*Bicache) Close

func (b *Bicache) Close()

Close stops background tasks and releases any resources. This should be called before removing a reference to a *Bicache if it's desired to be garbage collected cleanly.

func (*Bicache) Del

func (b *Bicache) Del(k string)

Del deletes a key.

func (*Bicache) FlushAll

func (b *Bicache) FlushAll() error

FlushAll flushes all cache entries. FlushAll is much faster than combining both a FlushMRU and FlushMFU call.

func (*Bicache) FlushMFU

func (b *Bicache) FlushMFU() error

FlushMFU flushes all MFU entries.

func (*Bicache) FlushMRU

func (b *Bicache) FlushMRU() error

FlushMRU flushes all MRU entries.

func (*Bicache) Get

func (b *Bicache) Get(k string) interface{}

Get takes a key and returns the value. Every get on a key increases the key score.

func (*Bicache) List

func (b *Bicache) List(n int) ListResults

List returns all key names, states, and scores sorted in descending order by score. Returns the top n results.

func (*Bicache) Pause

func (b *Bicache) Pause() error

Pause suspends normal and TTL evictions. If eviction logging is enabled, bicache will log that evictions are paused at each interval if paused.

func (*Bicache) Resume

func (b *Bicache) Resume() error

Resume resumes normal and TTL evictions.

func (*Bicache) Set

func (b *Bicache) Set(k string, v interface{}) bool

Set takes a key and value and creates an entry in the MRU cache. If the key already exists, the value is updated.

func (*Bicache) SetTTL

func (b *Bicache) SetTTL(k string, v interface{}, t int32) bool

SetTTL is the same as Set but accepts a parameter t to specify a TTL in seconds.

func (*Bicache) Stats

func (b *Bicache) Stats() *Stats

Stats returns a *Stats with Bicache statistics data.

type Config

type Config struct {
	MFUSize    uint
	MRUSize    uint
	AutoEvict  uint
	EvictLog   bool
	ShardCount int
	NoOverflow bool
	Context    context.Context
}

Config holds a Bicache configuration. The MFU and MRU cache sizes are set in number of keys. The AutoEvict setting specifies an interval in milliseconds that a background goroutine will handle MRU->MFU promotion and MFU/MRU evictions. Setting this to 0 defers the operation until each Set is called on the bicache.

type KeyInfo

type KeyInfo struct {
	Key   string
	State uint8
	Score uint64
}

KeyInfo holds a key name, state (0: MRU, 1: MFU) and cache score.

type ListResults

type ListResults []*KeyInfo

ListResults is a container that holds results from a List method (as KeyInfo), allowing sorting of available key names by score.

func (ListResults) Len

func (lr ListResults) Len() int

func (ListResults) Less

func (lr ListResults) Less(i, j int) bool

func (ListResults) Swap

func (lr ListResults) Swap(i, j int)

type Shard

type Shard struct {
	sync.RWMutex
	// contains filtered or unexported fields
}

Shard implements a cache unit with isolated MFU/MRU caches.

type Stats

type Stats struct {
	MFUSize    uint   // Number of active MFU keys.
	MRUSize    uint   // Number of active MRU keys.
	MFUUsedP   uint   // MFU used in percent.
	MRUUsedP   uint   // MRU used in percent.
	MFUMaxSize uint   // Maximum number of MFU keys.
	MRUMaxSize uint   // Maximum number of MRU keys.
	Hits       uint64 // Cache hits.
	Misses     uint64 // Cache misses.
	Evictions  uint64 // Cache evictions.
	Overflows  uint64 // Failed sets on full caches.
}

Stats holds Bicache statistics data.

Directories

Path Synopsis
examples
sll
examples/sll-example
The MIT License (MIT)
