cache

Published: Dec 25, 2023 License: MIT Imports: 23 Imported by: 2

README

High-performance, resilient in-memory cache for Go

This library defines cache interfaces and provides in-memory implementations.


Why?

There are a few libraries that already provide in-memory caching, so why another one?

This library addresses practical issues that usually fall outside plain key-value storage concerns. It improves performance and resiliency by handling cache misses gracefully, and it allows comprehensive observability with fine control of caching behavior.

Please check this blog post for more details.

Failover Cache

Failover is a cache frontend to manage cache updates in a non-conflicting and performant way.

An instance can be created with NewFailover and functional options.

The main API is the Get function, which takes a key and a builder function. If the value is available in cache, it is served from cache and the builder function is not invoked. Otherwise, the builder function is invoked and the result is stored in cache.

// Get value from cache or the function.
v, err := f.Get(ctx, []byte("my-key"), func(ctx context.Context) (interface{}, error) {
    // Build value or return error on failure.

    return "<value>", nil
})

Or, starting with Go 1.18, you can use the generic API.

f := cache.NewFailoverOf[Dog](func(cfg *cache.FailoverConfigOf[Dog]) {
    // Using last 30 seconds of 5m TTL for background update.
    cfg.MaxStaleness = 30 * time.Second
    cfg.BackendConfig.TimeToLive = 5*time.Minute - cfg.MaxStaleness
})

// Get value from cache or the function.
v, err := f.Get(ctx, []byte("my-key"), func(ctx context.Context) (Dog, error) {
    // Build value or return error on failure.

    return Dog{Name: "Snoopy"}, nil
})

Additionally, a few other aspects of behavior help to optimize performance; a configuration sketch follows the list.

  • The builder function is locked per key, so if a key needs a fresh value, the builder function is only called once. All other Get calls for the same key are blocked until the value is available. This helps to avoid the cache stampede problem when a popular value is missing or expired.
  • If an expired (stale) value is available, it is refreshed with a short TTL (configured as UpdateTTL) before the builder function is invoked. This immediately unblocks readers with a stale value and improves tail latency.
  • If the value has expired longer than MaxStaleness ago, the stale value is not served and readers are blocked until the builder function returns.
  • By default, if a stale value is served, it is served to all readers, including the first reader who triggered the builder function. The builder function runs in the background so that reader latency is not affected. This behavior can be changed with the SyncUpdate option, so that the first reader who invokes the builder function is blocked until the result is ready instead of receiving the stale value immediately.
  • If the builder function fails, the error value is also cached, and all subsequent calls for the key fail immediately with the same error for the next 20 seconds (configurable with FailedUpdateTTL). This helps to avoid hammering the builder function when there is a persistent problem. For example, if you have 100 hits per second for a key that is updated from a database and the database is temporarily down, error caching prevents the unexpected excessive load that usually hides behind the value cache.
  • If the builder function fails and a stale value is available, the stale value is served regardless of MaxStaleness. This reduces the impact of temporary outages in the builder function. This behavior can be disabled with the FailHard option, so that the error is served instead of an overly stale value.
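
The sketch below shows how these behaviors map to FailoverConfig options. It is an illustration only; ctx and buildValue (a func(ctx context.Context) (interface{}, error)) are assumed to be defined elsewhere, and the values are not recommendations.

f := cache.NewFailover(func(cfg *cache.FailoverConfig) {
    cfg.MaxStaleness = time.Minute         // Do not serve values expired for longer than 1m.
    cfg.FailedUpdateTTL = 10 * time.Second // Cache builder errors for 10s, -1 disables errors cache.
    cfg.SyncUpdate = true                  // Block the triggering reader instead of serving a stale value.
    cfg.FailHard = true                    // Serve the error instead of an overly stale value.
})

// Get value from cache or the function.
v, err := f.Get(ctx, []byte("my-key"), buildValue)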

Failover cache uses a ReadWriter backend as storage. By default, a ShardedMap is created using BackendConfig.

It is recommended to use separate caches for different entities; this helps observability of the sizes and activity of particular entities. Cache Name can be configured to reflect the purpose. Additionally, a Logger and a Stats tracker can be provided to collect operating information.

If ObserveMutability is enabled, Failover will also emit stats of how often the rebuilt value differed from the previous one. This may help to understand data volatility and come up with a better TTL value. The check is done with reflect.DeepEqual and may affect performance.
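
For illustration, a sketch of wiring observability into a Failover instance; logError, statsAdd and statsSet are placeholders for your own logging and metrics functions.

f := cache.NewFailover(func(cfg *cache.FailoverConfig) {
    cfg.Name = "dogs"
    cfg.Logger = cache.NewLogger(logError, nil, nil, nil)
    cfg.Stats = cache.NewStatsTracker(statsAdd, statsSet)
    cfg.ObserveMutability = true
})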

Sharded Map

ShardedMap implements ReadWriter and a few other behaviors with in-memory storage sharded by key. It offers good performance for concurrent usage. Values can expire.

An instance can be created with NewShardedMap and functional options.

Generic API is also available with NewShardedMapOf.

It is recommended to use separate caches for different entities; this helps observability of the sizes and activity of particular entities. Cache Name can be configured to reflect the purpose. Additionally, a Logger and a Stats tracker can be provided to collect operating information.

Expiration is configurable with TimeToLive and defaults to 5 minutes. It can be changed for a particular key via context with cache.WithTTL.
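
For example, a particular entry can be written with its own TTL (a sketch, assuming c is a ShardedMap and ctx is available):

// Override the default TTL for this write only.
_ = c.Write(cache.WithTTL(ctx, 10*time.Minute, false), []byte("hot-key"), "value")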

The actual TTL applied to a particular key is randomly altered within ±5% boundaries (configurable with ExpirationJitter). This helps against synchronized cache expiration (and the excessive load of refreshing many values at the same time) when many cache entries were created within a small timeframe (for example, early after application startup). Expiration jitter diffuses such synchronization for smoother load distribution.

Expired items are not deleted immediately to reduce the churn rate and to provide stale data for Failover cache.

All items are checked in background once an hour (configurable with DeleteExpiredJobInterval) and items that have expired more than 24h ago (configurable with DeleteExpiredAfter) are removed.

Additionally, there are HeapInUseSoftLimit and CountSoftLimit to trigger eviction of 10% of entries (configurable with EvictFraction) if the count of items or the application heap in use exceeds the limit. The limit check and optional eviction are triggered right after the expired items check (in the same background job).

EvictionStrategy defines which entries are evicted; EvictMostExpired is used by default. It selects entries with the longest expiration overdue or those that are soonest to expire.

Alternatively, EvictLeastRecentlyUsed (LRU) and EvictLeastFrequentlyUsed (LFU) can be used at the cost of a minor performance impact (for updating counters on each cache serve).

Keep in mind that eviction happens in response to soft limits that are checked periodically, so the dataset may stay above the eviction threshold, especially if EvictFraction and DeleteExpiredJobInterval are too low for the speed of growth.
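
A sketch of tuning soft limits and eviction via functional options (illustrative values, not recommendations):

c := cache.NewShardedMap(func(cfg *cache.Config) {
    cfg.CountSoftLimit = 1_000_000                      // Evict when the number of items exceeds 1M...
    cfg.HeapInUseSoftLimit = 500 * 1024 * 1024          // ...or when heap in use exceeds 500MB.
    cfg.EvictFraction = 0.2                             // Evict 20% of entries per cycle.
    cfg.EvictionStrategy = cache.EvictLeastRecentlyUsed // LRU instead of the default EvictMostExpired.
    cfg.DeleteExpiredJobInterval = 10 * time.Minute     // Check limits more often than the default 1h.
})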

Batch Operations

ShardedMap has an ExpireAll function to mark all entries as expired, so that they are updated on the next read while remaining available as stale values in the meantime. This function does not affect memory usage.

In contrast, DeleteAll removes all entries and frees the memory; stale values are not available after this operation.

Deleting or expiring all items in multiple caches can be done with the help of cache.Invalidator. A deletion/expiration function can be appended to Invalidator.Callbacks, and it will be triggered on Invalidator.Invalidate. This may be useful as a debugging/firefighting tool.
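
A sketch of a shared Invalidator that expires two caches at once; cache1 and cache2 are assumed to be existing ShardedMap instances and ctx to be available.

inv := cache.Invalidator{SkipInterval: time.Minute} // Flood protection: at most one invalidation per minute.
inv.Callbacks = append(inv.Callbacks, cache1.ExpireAll, cache2.ExpireAll)

// Later, e.g. from a firefighting HTTP handler; ErrAlreadyInvalidated is
// returned if invalidation already happened within SkipInterval.
err := inv.Invalidate(ctx)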

Deletion of multiple related (labeled) items can be done with InvalidationIndex.

Len returns the number of currently available entries (including expired).

Walk iterates over all entries and invokes a callback for each entry; iteration stops if the callback fails.
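
For example (a sketch, assuming c is a ShardedMap), entries can be inspected like this:

n, err := c.Walk(func(e cache.Entry) error {
    fmt.Printf("%s expires at %s\n", e.Key(), e.ExpireAt())

    return nil // Returning an error stops the iteration.
})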

Cached entries can be dumped as a binary stream with Dump and restored from a binary stream with Restore; this may enable cache transfer between instances of an application to avoid a cold state after startup. Binary serialization is done with encoding/gob; cached types that are to be dumped/restored have to be registered with cache.GobRegister.
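
A brief sketch of transferring entries between two instances; MyEntity is a placeholder for your cached type, and c1, c2 for two ShardedMap instances.

cache.GobRegister(MyEntity{})

buf := bytes.NewBuffer(nil)
_, _ = c1.Dump(buf)    // Serialize entries of c1.
_, _ = c2.Restore(buf) // Load them into c2.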

Dumping and walking the cache are non-blocking operations and are safe to use together with regular reads/writes; the performance impact is expected to be negligible.

HTTPTransfer is a helper to transfer caches over HTTP. With multiple cache instances registered, it provides an Export HTTP handler that can be plugged into an HTTP server and serve data to the Import function of another application instance.

HTTPTransfer.Import fails if cached types differ from those of the exporting application instance, for example because of different application versions. The check is based on cache.GobTypesHash, which is calculated from cached structures during cache.GobRegister.

Sync Map

SyncMap implements ReadWriter and a few other behaviors with in-memory storage backed by the standard sync.Map. It implements the same behaviors as ShardedMap and can be used as a replacement. There is a slight difference in latency, and ShardedMap usually tends to consume less memory.
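
A sketch of using SyncMap behind the ReadWriter interface (ctx assumed to be available):

var b cache.ReadWriter = cache.NewSyncMap() // Could be cache.NewShardedMap() as well.

_ = b.Write(ctx, []byte("my-key"), "my-value")
v, err := b.Read(ctx, []byte("my-key"))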

Context

Context is propagated from the parent goroutine to Failover and further to the backend ReadWriter and the builder function. In addition to its usual responsibilities (cancellation, tracing, etc.), the context can carry cache options.

  • cache.WithTTL and cache.TTL to set and get time to live for a particular operation.
  • cache.WithSkipRead and cache.SkipRead to set and get the skip-reading flag. If the flag is set, the Read function should return ErrNotFound, therefore bypassing the cache. The Write operation is not affected by this flag, so SkipRead can be used to force a cache refresh.

A handy use case for cache.WithSkipRead is implementing a debug mode that processes requests without cache. Such a debug mode can be implemented with HTTP (or other transport) middleware that instruments the context under certain conditions, for example if a special header is found in the request.
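
A sketch of such middleware; the X-No-Cache header name is an arbitrary choice for illustration.

func noCacheMiddleware(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        if r.Header.Get("X-No-Cache") != "" {
            // Bypass cache reads for this request; writes still happen.
            r = r.WithContext(cache.WithSkipRead(r.Context()))
        }

        next.ServeHTTP(w, r)
    })
}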

Detached Context

When the builder function is invoked in the background, the context is detached into a new one; the derived context is not cancelled/failed/closed when the parent context is.

For example, the original context was created for an incoming HTTP request and was closed once the response was written, while the cache update triggered in that context is still being processed in the background. If the original context were used, the background processing would be cancelled once the parent HTTP request is fulfilled, leading to an update failure.

A detached context lets the background job continue even after the original context was legitimately closed.

Performance

Failover cache adds some overhead, but overall performance is still good (especially for IO-bound applications).

Please check detailed benchmarks.

Documentation

Overview

Package cache defines cache interfaces and provides in-memory implementations.


Constants

const (
	// DefaultTTL indicates default value (replaced by config.TimeToLive) for entry expiration time.
	DefaultTTL = time.Duration(0)

	// UnlimitedTTL indicates unlimited TTL for config TimeToLive.
	UnlimitedTTL = time.Duration(-1)
)

const (
	// ErrExpired indicates expired cache entry,
	// may implement ErrWithExpiredItem to enable stale value serving.
	ErrExpired = SentinelError("expired cache item")

	// ErrNotFound indicates missing cache entry.
	ErrNotFound = SentinelError("missing cache item")

	// ErrNothingToInvalidate indicates no caches were added to Invalidator.
	ErrNothingToInvalidate = SentinelError("nothing to invalidate")

	// ErrAlreadyInvalidated indicates recent invalidation.
	ErrAlreadyInvalidated = SentinelError("already invalidated")

	// ErrUnexpectedType is thrown on failed type assertion.
	ErrUnexpectedType = SentinelError("unexpected type")
)

const (
	// MetricMiss is a name of a metric to count cache miss events.
	MetricMiss = "cache_miss"
	// MetricExpired is a name of a metric to count expired cache read events.
	MetricExpired = "cache_expired"
	// MetricHit is a name of a metric to count valid cache read events.
	MetricHit = "cache_hit"
	// MetricWrite is a name of a metric to count cache write events.
	MetricWrite = "cache_write"
	// MetricDelete is a name of a metric to count cache delete events.
	MetricDelete = "cache_delete"
	// MetricItems is a name of a gauge to count number of items in cache.
	MetricItems = "cache_items"

	// MetricRefreshed is a name of a metric to count stale refresh events.
	MetricRefreshed = "cache_refreshed"
	// MetricBuild is a name of a metric to count value building events.
	MetricBuild = "cache_build"
	// MetricFailed is a name of a metric to count number of failed value builds.
	MetricFailed = "cache_failed"

	// MetricChanged is a name of a metric to count number of cache builds that changed cached value.
	MetricChanged = "cache_changed"

	// MetricEvict is a name of metric to count evictions.
	MetricEvict = "cache_evict"

	// MetricEvictionElapsedSeconds is a name of metric to count eviction job time.
	MetricEvictionElapsedSeconds = "cache_eviction_elapsed_seconds"
)

Variables

This section is empty.

Functions

func GobRegister

func GobRegister(values ...interface{})

GobRegister enables cached type transferring.

Example (Transfer_cache)
package main

import (
	"bytes"
	"context"
	"fmt"

	"github.com/bool64/cache"
)

type SomeEntity struct {
	Parent           *SomeEntity
	SomeField        string
	SomeSlice        []int
	SomeRecursiveMap map[string]SomeEntity
	unexported       string
}

func main() {
	// Registering cached type to gob.
	cache.GobRegister(SomeEntity{})

	c1 := cache.NewShardedMap()
	c2 := cache.NewShardedMap()
	ctx := context.Background()

	_ = c1.Write(ctx, []byte("key1"), SomeEntity{
		SomeField:  "foo",
		SomeSlice:  []int{1, 2, 3},
		unexported: "will be lost in transfer",
	})

	w := bytes.NewBuffer(nil)

	// Transferring data from c1 to c2.
	_, _ = c1.Dump(w)
	_, _ = c2.Restore(w)

	v, _ := c2.Read(ctx, []byte("key1"))

	fmt.Println(v.(SomeEntity).SomeField)

}
Output:

foo

func GobTypesHash

func GobTypesHash() uint64

GobTypesHash returns a fingerprint of a group of types to transfer.

func GobTypesHashReset

func GobTypesHashReset()

GobTypesHashReset resets types hash to zero value.

func SkipRead

func SkipRead(ctx context.Context) bool

SkipRead returns true if cache read is ignored in context.

func TTL

func TTL(ctx context.Context) time.Duration

TTL retrieves cache time to live from context; the zero value is returned by default.

func WithSkipRead

func WithSkipRead(ctx context.Context) context.Context

WithSkipRead returns context with cache read ignored.

With such a context, cache.Reader should always return ErrNotFound, discarding the cached value.

func WithTTL

func WithTTL(ctx context.Context, ttl time.Duration, updateExisting bool) context.Context

WithTTL adds cache time to live information to context.

If there is already a ttl in the context and updateExisting is true, the ttl value in the original context will be updated.

Updating the existing ttl can be useful if ttl information only becomes available internally during the cache value build; in such a case the value builder can communicate the ttl to the external cache backend. For example, the cache ttl can be derived from HTTP response cache headers of the value source.

When the existing ttl is updated, the minimal non-zero value is kept. This is necessary to unite the ttl requirements of multiple parties.
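
A small sketch of these semantics (illustrative, not an official example):

package main

import (
	"context"
	"fmt"
	"time"

	"github.com/bool64/cache"
)

func main() {
	// Set a 5m ttl in context.
	ctx := cache.WithTTL(context.Background(), 5*time.Minute, false)

	// Update it with a shorter ttl; the minimal non-zero value is kept.
	ctx = cache.WithTTL(ctx, time.Minute, true)

	fmt.Println(cache.TTL(ctx)) // Expected to print 1m0s.
}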

Types

type BinaryEncoding added in v0.2.6

type BinaryEncoding interface {
	Encode(ctx context.Context, value interface{}) ([]byte, error)
	Decode(ctx context.Context, buf []byte) (interface{}, error)
}

BinaryEncoding defines binary transmission protocol.

type BinaryUnmarshaler added in v0.2.6

type BinaryUnmarshaler func(data []byte) (encoding.BinaryMarshaler, error)

BinaryUnmarshaler implements BinaryEncoding.

func (BinaryUnmarshaler) Decode added in v0.2.6

func (f BinaryUnmarshaler) Decode(_ context.Context, buf []byte) (interface{}, error)

Decode decodes value from bytes.

func (BinaryUnmarshaler) Encode added in v0.2.6

func (f BinaryUnmarshaler) Encode(_ context.Context, value interface{}) ([]byte, error)

Encode encodes value to bytes.

type Config

type Config struct {
	// Logger is an instance of contextualized logger, can be nil.
	Logger Logger

	// Stats is a metrics collector, can be nil.
	Stats StatsTracker

	// Name is cache instance name, used in stats and logging.
	Name string

	// ItemsCountReportInterval is items count metric report interval, default 1m.
	ItemsCountReportInterval time.Duration

	// TimeToLive is delay before entry expiration, default 5m.
	// Use UnlimitedTTL value to set up unlimited TTL.
	TimeToLive time.Duration

	// DeleteExpiredAfter is delay before expired entry is deleted from cache, default 24h.
	DeleteExpiredAfter time.Duration

	// DeleteExpiredJobInterval is delay between two consecutive cleanups, default 1h.
	DeleteExpiredJobInterval time.Duration

	// ExpirationJitter is a fraction of TTL to randomize, default 0.1.
	// Use -1 to disable.
	// If enabled, entry TTL will be randomly altered in bounds of ±(ExpirationJitter * TTL / 2).
	ExpirationJitter float64

	// HeapInUseSoftLimit sets heap in use (runtime.MemStats).HeapInuse threshold when eviction will be triggered.
	HeapInUseSoftLimit uint64

	// SysMemSoftLimit sets system memory (runtime.MemStats).Sys threshold when eviction will be triggered.
	SysMemSoftLimit uint64

	// CountSoftLimit sets count threshold when eviction will be triggered.
	// As opposed to memory soft limits, when count limit is exceeded, eviction will remove items to achieve
	// the level of CountSoftLimit*(1-EvictFraction), which may be more items than EvictFraction defines.
	CountSoftLimit uint64

	// EvictionNeeded is a user-defined function to decide whether eviction is necessary.
	// If true is returned, eviction cycle will happen.
	EvictionNeeded func() bool

	// EvictFraction is a fraction (0, 1] of total count of items to be evicted when resource is overused,
	// default 0.1 (10% of items).
	EvictFraction float64

	// EvictionStrategy is EvictMostExpired by default.
	EvictionStrategy EvictionStrategy
}

Config controls cache instance.

func (Config) Use

func (c Config) Use(cfg *Config)

Use is a functional option to apply configuration.

type Deleter

type Deleter interface {
	// Delete removes a cache entry with a given key
	// and returns ErrNotFound for non-existent keys.
	Delete(ctx context.Context, key []byte) error
}

Deleter deletes from cache.

type Dumper

type Dumper interface {
	Dump(w io.Writer) (int, error)
}

Dumper dumps cache entries in binary format.

type Entry

type Entry interface {
	Key() []byte
	Value() interface{}
	ExpireAt() time.Time
}

Entry is cache entry with key and value.

type EntryOf added in v0.2.3

type EntryOf[V any] interface {
	Key() []byte
	Value() V
	ExpireAt() time.Time
}

EntryOf is cache entry with key and value.

type ErrWithExpiredItem

type ErrWithExpiredItem interface {
	error
	Value() interface{}
	ExpiredAt() time.Time
}

ErrWithExpiredItem defines an expiration error with entry details.

type ErrWithExpiredItemOf added in v0.2.3

type ErrWithExpiredItemOf[V any] interface {
	error
	Value() V
	ExpiredAt() time.Time
}

ErrWithExpiredItemOf defines an expiration error with entry details.

type EvictionStrategy added in v0.4.0

type EvictionStrategy uint8

EvictionStrategy defines eviction behavior when soft limit is met during cleanup job.

const (
	// EvictMostExpired removes entries with the oldest expiration time.
	// Both expired and non-expired entries may be affected.
	// Default eviction strategy, most performant as it does not maintain counters on each serve.
	EvictMostExpired EvictionStrategy = iota

	// EvictLeastRecentlyUsed removes entries that were not served recently.
	// It has a minor performance impact due to update of timestamp on every serve.
	EvictLeastRecentlyUsed

	// EvictLeastFrequentlyUsed removes entries that were in low demand.
	// It has a minor performance impact due to update of timestamp on every serve.
	EvictLeastFrequentlyUsed
)

type Failover

type Failover struct {
	// Errors caches errors of failed updates.
	Errors *ShardedMap
	// contains filtered or unexported fields
}

Failover is a cache frontend to manage cache updates in a non-conflicting and performant way.

Please use NewFailover to create instance.

func NewFailover

func NewFailover(options ...func(cfg *FailoverConfig)) *Failover

NewFailover creates a Failover cache instance.

Build is locked per key to avoid concurrent updates; once built, the new value is served. A stale value is served while the update is in progress (for up to FailoverConfig.UpdateTTL).

func (Failover) Error added in v0.2.4

func (lt Failover) Error(ctx context.Context, msg string, keysAndValues ...interface{})

func (*Failover) Get

func (f *Failover) Get(
	ctx context.Context,
	key []byte,
	buildFunc func(ctx context.Context) (interface{}, error),
) (interface{}, error)

Get returns value from cache or from build function.

Example
package main

import (
	"context"
	"log"

	"github.com/bool64/cache"
)

func main() {
	ctx := context.TODO()
	f := cache.NewFailover()

	// Get value from cache or the function.
	v, err := f.Get(ctx, []byte("my-key"), func(ctx context.Context) (interface{}, error) {
		// Build value or return error on failure.

		return "<value>", nil
	})
	if err != nil {
		log.Fatal(err)
	}

	// Assert the type and use value.
	_ = v.(string)
}
Output:

Example (Caching_wrapper)
package main

import (
	"context"
	"errors"
	"fmt"
	"strconv"
	"time"

	"github.com/bool64/cache"
)

type Value struct {
	Sequence int
	ID       int
	Country  string
}

type Service interface {
	GetByID(ctx context.Context, country string, id int) (Value, error)
}

// Real is an instance of real service that does some slow/expensive processing.
type Real struct {
	Calls int
}

func (s *Real) GetByID(_ context.Context, country string, id int) (Value, error) {
	s.Calls++

	if id == 0 {
		return Value{}, errors.New("invalid id")
	}

	return Value{
		Country:  country,
		ID:       id,
		Sequence: s.Calls,
	}, nil
}

// Cached is a service wrapper to serve already processed results from cache.
type Cached struct {
	upstream Service
	storage  *cache.Failover
}

func (s Cached) GetByID(ctx context.Context, country string, id int) (Value, error) {
	// Prepare string cache key and call stale cache.
	// Build function will be called if there is a cache miss.
	cacheKey := country + ":" + strconv.Itoa(id)

	value, err := s.storage.Get(ctx, []byte(cacheKey), func(ctx context.Context) (interface{}, error) {
		return s.upstream.GetByID(ctx, country, id)
	})
	if err != nil {
		return Value{}, err
	}

	// Type assert and return result.
	return value.(Value), nil
}

func main() {
	var service Service

	ctx := context.Background()

	service = Cached{
		upstream: &Real{},
		storage: cache.NewFailover(func(cfg *cache.FailoverConfig) {
			cfg.BackendConfig.TimeToLive = time.Minute
		}),
	}

	fmt.Println(service.GetByID(ctx, "DE", 123))
	fmt.Println(service.GetByID(ctx, "US", 0)) // Error increased sequence, but was cached with short-ttl.
	fmt.Println(service.GetByID(ctx, "US", 0)) // This call did not hit backend.
	fmt.Println(service.GetByID(ctx, "US", 456))
	fmt.Println(service.GetByID(ctx, "DE", 123))
	fmt.Println(service.GetByID(ctx, "US", 456))
	fmt.Println(service.GetByID(ctx, "FR", 789))

}
Output:

{1 123 DE} <nil>
{0 0 } invalid id
{0 0 } invalid id
{3 456 US} <nil>
{1 123 DE} <nil>
{3 456 US} <nil>
{4 789 FR} <nil>

type FailoverConfig

type FailoverConfig struct {
	// Name is added to logs and stats.
	Name string

	// Backend is a cache instance, ShardedMap created by default.
	Backend ReadWriter

	// BackendConfig is a configuration for ShardedMap cache instance if Backend is not provided.
	BackendConfig Config

	// FailedUpdateTTL is ttl of failed build cache, default 20s, -1 disables errors cache.
	// Errors cache prevents application from pressuring a failing data source.
	FailedUpdateTTL time.Duration

	// UpdateTTL is a time interval to retry update, default 1 minute.
	// When stale value is being updated to a new one, current (stale) cache entry
	// is refreshed in cache with this TTL. This unblocks other consumers with stale value,
	// instead of blocking them to wait for a new one.
	UpdateTTL time.Duration

	// SyncUpdate disables update in background, default is background update with stale value served.
	SyncUpdate bool

	// SyncRead enables backend reading in the critical section to ensure cache miss
	// will not trigger multiple updates sequentially.
	//
	// Probability of such issue is low, there is performance penalty for enabling this option.
	SyncRead bool

	// MaxStaleness is the duration for which a value can be served after expiration.
	// If the value has expired longer than this duration, it won't be served unless the value update fails.
	MaxStaleness time.Duration

	// FailHard disables serving of stale value in case of update failure.
	FailHard bool

	// Logger collects messages with context.
	Logger Logger

	// Stats tracks stats.
	Stats StatsTracker

	// ObserveMutability enables deep equal check with metric collection on cache update.
	ObserveMutability bool
}

FailoverConfig is optional configuration for NewFailover.

func (FailoverConfig) Use

func (fc FailoverConfig) Use(cfg *FailoverConfig)

Use is a functional option for NewFailover to apply configuration.

type FailoverConfigOf added in v0.2.3

type FailoverConfigOf[V any] struct {
	// Name is added to logs and stats.
	Name string

	// Backend is a cache instance, ShardedMap created by default.
	Backend ReadWriterOf[V]

	// BackendConfig is a configuration for ShardedMap cache instance if Backend is not provided.
	BackendConfig Config

	// FailedUpdateTTL is ttl of failed build cache, default 20s, -1 disables errors cache.
	FailedUpdateTTL time.Duration

	// UpdateTTL is a time interval to retry update, default 1 minute.
	UpdateTTL time.Duration

	// SyncUpdate disables update in background, default is background update with stale value served.
	SyncUpdate bool

	// SyncRead enables backend reading in the critical section to ensure cache miss
	// will not trigger multiple updates sequentially.
	//
	// Probability of such issue is low, there is performance penalty for enabling this option.
	SyncRead bool

	// MaxStaleness is the duration for which a value can be served after expiration.
	// If the value has expired longer than this duration, it won't be served unless the value update fails.
	MaxStaleness time.Duration

	// FailHard disables serving of stale value in case of update failure.
	FailHard bool

	// Logger collects messages with context.
	Logger Logger

	// Stats tracks stats.
	Stats StatsTracker

	// ObserveMutability enables deep equal check with metric collection on cache update.
	ObserveMutability bool
}

FailoverConfigOf is optional configuration for NewFailoverOf.

func (FailoverConfigOf[V]) Use added in v0.2.3

func (fc FailoverConfigOf[V]) Use(cfg *FailoverConfigOf[V])

Use is a functional option for NewFailover to apply configuration.

type FailoverOf added in v0.2.3

type FailoverOf[V any] struct {
	// Errors caches errors of failed updates.
	Errors *ShardedMapOf[error]
	// contains filtered or unexported fields
}

FailoverOf is a cache frontend to manage cache updates in a non-conflicting and performant way.

Please use NewFailoverOf to create instance.

func NewFailoverOf added in v0.2.3

func NewFailoverOf[V any](options ...func(cfg *FailoverConfigOf[V])) *FailoverOf[V]

NewFailoverOf creates a FailoverOf cache instance.

Build is locked per key to avoid concurrent updates; once built, the new value is served. A stale value is served while the update is in progress (for up to FailoverConfigOf.UpdateTTL).

func (FailoverOf) Error added in v0.2.4

func (lt FailoverOf) Error(ctx context.Context, msg string, keysAndValues ...interface{})

func (*FailoverOf[V]) Get added in v0.2.3

func (f *FailoverOf[V]) Get(
	ctx context.Context,
	key []byte,
	buildFunc func(ctx context.Context) (V, error),
) (V, error)

Get returns value from cache or from build function.

Example
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/bool64/cache"
)

func main() {
	// Dog is your cached type.
	type Dog struct {
		Name string
	}

	ctx := context.TODO()
	f := cache.NewFailoverOf[Dog]()

	// Get value from cache or the function.
	v, err := f.Get(ctx, []byte("my-key"), func(ctx context.Context) (Dog, error) {
		// Build value or return error on failure.

		return Dog{Name: "Snoopy"}, nil
	})
	if err != nil {
		log.Fatal(err)
	}

	// Use value.
	fmt.Printf("%s", v.Name)

}
Output:

Snoopy
Example (Caching_wrapper)
//go:build go1.18
// +build go1.18

package main

import (
	"context"
	"fmt"
	"strconv"
	"time"

	"github.com/bool64/cache"
)

// CachedGeneric is a service wrapper to serve already processed results from cache.
type CachedGeneric struct {
	upstream Service
	storage  *cache.FailoverOf[Value]
}

func (s CachedGeneric) GetByID(ctx context.Context, country string, id int) (Value, error) {
	// Prepare string cache key and call stale cache.
	// Build function will be called if there is a cache miss.
	cacheKey := country + ":" + strconv.Itoa(id)

	return s.storage.Get(ctx, []byte(cacheKey), func(ctx context.Context) (Value, error) {
		return s.upstream.GetByID(ctx, country, id)
	})
}

func main() {
	var service Service

	ctx := context.Background()

	service = CachedGeneric{
		upstream: &Real{},
		storage: cache.NewFailoverOf[Value](func(cfg *cache.FailoverConfigOf[Value]) {
			cfg.BackendConfig.TimeToLive = time.Minute
		}),
	}

	fmt.Println(service.GetByID(ctx, "DE", 123))
	fmt.Println(service.GetByID(ctx, "US", 0)) // Error increased sequence, but was cached with short-ttl.
	fmt.Println(service.GetByID(ctx, "US", 0)) // This call did not hit backend.
	fmt.Println(service.GetByID(ctx, "US", 456))
	fmt.Println(service.GetByID(ctx, "DE", 123))
	fmt.Println(service.GetByID(ctx, "US", 456))
	fmt.Println(service.GetByID(ctx, "FR", 789))

}
Output:

{1 123 DE} <nil>
{0 0 } invalid id
{0 0 } invalid id
{3 456 US} <nil>
{1 123 DE} <nil>
{3 456 US} <nil>
{4 789 FR} <nil>

type HTTPTransfer added in v0.2.0

type HTTPTransfer struct {
	Logger    Logger
	Transport http.RoundTripper
	// contains filtered or unexported fields
}

HTTPTransfer exports and imports cache entries via http.

func (*HTTPTransfer) AddCache added in v0.2.0

func (t *HTTPTransfer) AddCache(name string, c WalkDumpRestorer)

AddCache registers cache into exporter.

func (*HTTPTransfer) CachesCount added in v0.2.5

func (t *HTTPTransfer) CachesCount() int

CachesCount returns how many caches were added.

func (*HTTPTransfer) Export added in v0.2.0

func (t *HTTPTransfer) Export() http.Handler

Export creates http handler to export cache entries in encoding/gob format.

Example
package main

import (
	"context"
	"fmt"
	"net/http"
	"net/http/httptest"

	"github.com/bool64/cache"
)

func main() {
	ctx := context.Background()
	cacheExporter := cache.HTTPTransfer{}

	mux := http.NewServeMux()
	mux.Handle("/debug/transfer-cache", cacheExporter.Export())
	srv := httptest.NewServer(mux)

	defer srv.Close()

	// Cached entities must have exported fields to be transferable with reflection-based "encoding/gob".
	type SomeCachedEntity struct {
		Value string
	}

	// Cached entity types must be registered to gob, this can be done in init functions of cache facades.
	cache.GobRegister(SomeCachedEntity{})

	// Exported cache(s).
	someEntityCache := cache.NewShardedMap()
	_ = someEntityCache.Write(ctx, []byte("key1"), SomeCachedEntity{Value: "foo"})

	// Registry of caches.
	cacheExporter.AddCache("some-entity", someEntityCache)

	// Importing cache(s).
	someEntityCacheOfNewInstance := cache.NewShardedMap()

	// Caches registry.
	cacheImporter := cache.HTTPTransfer{}
	cacheImporter.AddCache("some-entity", someEntityCacheOfNewInstance)

	_ = cacheImporter.Import(ctx, srv.URL+"/debug/transfer-cache")

	val, _ := someEntityCacheOfNewInstance.Read(ctx, []byte("key1"))

	fmt.Println(val.(SomeCachedEntity).Value)
}
Output:

foo
Example (Generic)
package main

import (
	"context"
	"fmt"
	"net/http"
	"net/http/httptest"

	"github.com/bool64/cache"
)

func main() {
	ctx := context.Background()
	cacheExporter := cache.HTTPTransfer{}

	mux := http.NewServeMux()
	mux.Handle("/debug/transfer-cache", cacheExporter.Export())
	srv := httptest.NewServer(mux)

	defer srv.Close()

	// Cached entities must have exported fields to be transferable with reflection-based "encoding/gob".
	type GenericCachedEntity struct {
		Value string
	}

	// With generic cache it is not strictly necessary to register in gob, the transfer will still work.
	// However, registering to gob also calculates structural hash to avoid cache transfer when the
	// structures have changed.
	cache.GobRegister(GenericCachedEntity{})

	// Exported cache(s).
	someEntityCache := cache.NewShardedMapOf[GenericCachedEntity]()
	_ = someEntityCache.Write(ctx, []byte("key1"), GenericCachedEntity{Value: "foo"})

	// Registry of caches.
	cacheExporter.AddCache("some-entity", someEntityCache.WalkDumpRestorer())

	// Importing cache(s).
	someEntityCacheOfNewInstance := cache.NewShardedMapOf[GenericCachedEntity]()

	// Caches registry.
	cacheImporter := cache.HTTPTransfer{}
	cacheImporter.AddCache("some-entity", someEntityCacheOfNewInstance.WalkDumpRestorer())

	_ = cacheImporter.Import(ctx, srv.URL+"/debug/transfer-cache")

	val, _ := someEntityCacheOfNewInstance.Read(ctx, []byte("key1"))

	fmt.Println(val.Value)
}
Output:

foo

func (*HTTPTransfer) ExportJSONL added in v0.2.0

func (t *HTTPTransfer) ExportJSONL() http.Handler

ExportJSONL creates http handler to export cache entries as JSON lines.

func (*HTTPTransfer) Import added in v0.2.0

func (t *HTTPTransfer) Import(ctx context.Context, exportURL string) error

Import loads cache entries exported at exportURL.

type InvalidationIndex added in v0.4.1

type InvalidationIndex struct {
	// contains filtered or unexported fields
}

InvalidationIndex keeps index of keys labeled for future invalidation.

func NewInvalidationIndex added in v0.4.1

func NewInvalidationIndex(deleters ...Deleter) *InvalidationIndex

NewInvalidationIndex creates new instance of label-based invalidator.

Example
package main

import (
	"context"
	"fmt"

	"github.com/bool64/cache"
)

func main() {
	// Invalidation index maintains lists of keys in multiple cache instances with shared labels.
	// For example, when you were building cache value, you used resources that can change in the future.
	// You can add a resource label to the new cache key (separate label for each resource), so that later,
	// when resource is changed, invalidation index can be asked to drop cached entries associated
	// with respective label.
	i := cache.NewInvalidationIndex()

	cache1 := cache.NewShardedMap()
	cache2 := cache.NewShardedMap()

	// Each cache instance that is subject to invalidation needs to be added with a unique name.
	i.AddCache("one", cache1)
	i.AddCache("two", cache2)

	ctx := context.Background()
	_ = cache1.Write(ctx, []byte("keyA"), "A1")
	_ = cache1.Write(ctx, []byte("keyB"), "B1")

	_ = cache2.Write(ctx, []byte("keyA"), "A2")
	_ = cache2.Write(ctx, []byte("keyB"), "B2")

	// Labeling keyA in both caches.
	i.AddLabels("one", []byte("keyA"), "A")
	i.AddLabels("two", []byte("keyA"), "A")

	// Labeling keyA only in one cache.
	i.AddLabels("one", []byte("keyB"), "B")

	// Invalidation will delete keyA in both cache one and two.
	n, _ := i.InvalidateByLabels(ctx, "A")
	fmt.Println("Keys deleted for A:", n)

	// Invalidation will not affect keyB in cache two, but will delete in cache one.
	n, _ = i.InvalidateByLabels(ctx, "B")
	fmt.Println("Keys deleted for B:", n)

}
Output:

Keys deleted for A: 2
Keys deleted for B: 1

func (*InvalidationIndex) AddCache added in v0.4.7

func (i *InvalidationIndex) AddCache(name string, deleter Deleter)

AddCache adds a named instance of cache with deletable entries.

func (*InvalidationIndex) AddInvalidationLabels added in v0.4.1

func (i *InvalidationIndex) AddInvalidationLabels(key []byte, labels ...string)

AddInvalidationLabels registers invalidation labels to a cache key in default cache.

func (*InvalidationIndex) AddLabels added in v0.4.7

func (i *InvalidationIndex) AddLabels(cacheName string, key []byte, labels ...string)

AddLabels registers invalidation labels to a cache key.

func (*InvalidationIndex) InvalidateByLabels added in v0.4.1

func (i *InvalidationIndex) InvalidateByLabels(ctx context.Context, labels ...string) (int, error)

InvalidateByLabels deletes keys from cache that have any of the provided labels and returns the number of deleted entries. If deletion fails, the function puts unprocessed keys back in the index and returns.

Example
package main

import (
	"context"
	"fmt"

	"github.com/bool64/cache"
)

func main() {
	c := cache.NewShardedMap()
	ctx := context.TODO()

	// Any cache key can be accompanied by invalidation labels.
	_ = c.Write(ctx, []byte("my-foo"), "foo")
	c.AddInvalidationLabels([]byte("my-foo"), "my", "f**")

	_ = c.Write(ctx, []byte("my-bar"), "bar")
	c.AddInvalidationLabels([]byte("my-bar"), "my", "b**")

	_ = c.Write(ctx, []byte("my-baz"), "baz")
	c.AddInvalidationLabels([]byte("my-baz"), "my", "b**")

	n, _ := c.InvalidateByLabels(ctx, "b**")

	fmt.Println("deleted items for 'b**':", n)

	_, err := c.Read(ctx, []byte("my-foo"))
	fmt.Println("my-foo err:", err)

	_, err = c.Read(ctx, []byte("my-bar"))
	fmt.Println("my-bar err:", err)

	_, err = c.Read(ctx, []byte("my-baz"))
	fmt.Println("my-baz err:", err)

	n, _ = c.InvalidateByLabels(ctx, "my", "f**")

	fmt.Println("deleted items for 'my':", n)

	_, err = c.Read(ctx, []byte("my-foo"))
	fmt.Println("my-foo err:", err)

}
Output:

deleted items for 'b**': 2
my-foo err: <nil>
my-bar err: missing cache item
my-baz err: missing cache item
deleted items for 'my': 1
my-foo err: missing cache item

type Invalidator

type Invalidator struct {
	sync.Mutex

	// SkipInterval defines minimal duration between two cache invalidations (flood protection).
	SkipInterval time.Duration

	// Callbacks contains a list of functions to call on invalidate.
	Callbacks []func(ctx context.Context)
	// contains filtered or unexported fields
}

Invalidator is a registry of cache expiration triggers.

func (*Invalidator) Invalidate

func (i *Invalidator) Invalidate(ctx context.Context) error

Invalidate triggers cache expiration.

type Key added in v0.2.6

type Key []byte

Key is a key of a cached entry.

func (Key) MarshalText added in v0.2.6

func (ks Key) MarshalText() ([]byte, error)

MarshalText renders bytes as text.

type Logger added in v0.2.4

type Logger interface {
	// Error logs a message.
	Error(ctx context.Context, msg string, keysAndValues ...interface{})
}

Logger collects contextual information.

This interface matches github.com/bool64/ctxd.Logger.

func NewLogger added in v0.2.4

func NewLogger(
	logError,
	logWarn,
	logImportant,
	logDebug func(ctx context.Context, msg string, keysAndValues ...interface{}),
) Logger

NewLogger creates a logger instance from logging functions.

Any logging function can be nil.

type NoOp

type NoOp struct{}

NoOp is a ReadWriter stub.

func (NoOp) Delete

func (NoOp) Delete(_ context.Context, _ []byte) error

Delete always reports a missing item.

func (NoOp) Read

func (NoOp) Read(_ context.Context, _ []byte) (interface{}, error)

Read always reports a missing item.

func (NoOp) Write

func (NoOp) Write(_ context.Context, _ []byte, _ interface{}) error

Write does nothing.

type ReadWriter

type ReadWriter interface {
	Reader
	Writer
}

ReadWriter reads from and writes to cache.

type ReadWriterOf added in v0.2.3

type ReadWriterOf[V any] interface {
	ReaderOf[V]
	WriterOf[V]
}

ReadWriterOf reads from and writes to cache.

type Reader

type Reader interface {
	// Read returns cached value or error.
	Read(ctx context.Context, key []byte) (interface{}, error)
}

Reader reads from cache.

type ReaderOf added in v0.2.3

type ReaderOf[V any] interface {
	// Read returns cached value or error.
	Read(ctx context.Context, key []byte) (V, error)
}

ReaderOf reads from cache.

type Restorer

type Restorer interface {
	Restore(r io.Reader) (int, error)
}

Restorer restores cache entries from binary dump.

type SentinelError

type SentinelError string

SentinelError is an error.

func (SentinelError) Error

func (e SentinelError) Error() string

Error implements error.

type ShardedMap

type ShardedMap struct {
	// contains filtered or unexported fields
}

ShardedMap is an in-memory cache backend. Please use NewShardedMap to create it.

func NewShardedMap

func NewShardedMap(options ...func(cfg *Config)) *ShardedMap

NewShardedMap creates an instance of in-memory cache with optional configuration.

Example
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/bool64/cache"
)

func main() {
	// Create cache instance.
	c := cache.NewShardedMap(cache.Config{
		Name:       "dogs",
		TimeToLive: 13 * time.Minute,
		// Logging errors with standard logger, non-error messages are ignored.
		Logger: cache.NewLogger(func(ctx context.Context, msg string, keysAndValues ...interface{}) {
			log.Printf("cache failed: %s %v", msg, keysAndValues)
		}, nil, nil, nil),

		// Tweak these parameters to reduce/stabilize memory consumption at cost of cache hit rate.
		// If cache cardinality and size are reasonable, default values should be fine.
		DeleteExpiredAfter:       time.Hour,
		DeleteExpiredJobInterval: 10 * time.Minute,
		HeapInUseSoftLimit:       200 * 1024 * 1024, // 200MB soft limit for process heap in use.
		EvictFraction:            0.2,               // Drop 20% of mostly expired items (including non-expired) on heap overuse.
	}.Use)

	// Use context if available, it may hold TTL and SkipRead information.
	ctx := context.TODO()

	// Write value to cache.
	_ = c.Write(
		cache.WithTTL(ctx, time.Minute, true), // Change default TTL with context if necessary.
		[]byte("my-key"),
		[]int{1, 2, 3},
	)

	// Read value from cache.
	val, _ := c.Read(ctx, []byte("my-key"))
	fmt.Printf("%v", val)

	// Delete value from cache.
	_ = c.Delete(ctx, []byte("my-key"))

}
Output:

[1 2 3]

func (ShardedMap) Delete

func (c ShardedMap) Delete(ctx context.Context, key []byte) error

Delete removes value by the key.

It fails with ErrNotFound if key does not exist.

func (ShardedMap) DeleteAll

func (c ShardedMap) DeleteAll(ctx context.Context)

DeleteAll erases all entries.

func (*ShardedMap) Dump

func (c *ShardedMap) Dump(w io.Writer) (int, error)

Dump saves cached entries and returns a number of processed entries.

Dump uses encoding/gob to serialize cache entries, therefore it is necessary to register cached types in advance with cache.GobRegister.

func (ShardedMap) ExpireAll

func (c ShardedMap) ExpireAll(ctx context.Context)

ExpireAll marks all entries as expired, they can still serve stale cache.

func (ShardedMap) Len

func (c ShardedMap) Len() int

Len returns number of elements in cache.

func (ShardedMap) Load added in v0.2.2

func (c ShardedMap) Load(key []byte) (interface{}, bool)

Load returns the value stored in the map for a key, or nil if no value is present. The ok result indicates whether value was found in the map.

func (ShardedMap) Read

func (c ShardedMap) Read(ctx context.Context, key []byte) (interface{}, error)

Read gets value.

func (*ShardedMap) Restore

func (c *ShardedMap) Restore(r io.Reader) (int, error)

Restore loads cached entries and returns number of processed entries.

Restore uses encoding/gob to unserialize cache entries, therefore it is necessary to register cached types in advance with cache.GobRegister.

func (ShardedMap) Store added in v0.2.2

func (c ShardedMap) Store(key []byte, val interface{})

Store sets the value for a key.

func (ShardedMap) Walk

func (c ShardedMap) Walk(walkFn func(e Entry) error) (int, error)

Walk walks cached entries.

func (ShardedMap) Write

func (c ShardedMap) Write(ctx context.Context, k []byte, v interface{}) error

Write sets value by the key.

type ShardedMapOf added in v0.2.3

type ShardedMapOf[V any] struct {
	// contains filtered or unexported fields
}

ShardedMapOf is an in-memory cache backend. Please use NewShardedMapOf to create it.

func NewShardedMapOf added in v0.2.3

func NewShardedMapOf[V any](options ...func(cfg *Config)) *ShardedMapOf[V]

NewShardedMapOf creates an instance of in-memory cache with optional configuration.

Example
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/bool64/cache"
)

func main() {
	// Dog is your cached type.
	type Dog struct {
		Name string
	}

	// Create cache instance.
	c := cache.NewShardedMapOf[Dog](cache.Config{
		Name:       "dogs",
		TimeToLive: 13 * time.Minute,
		// Logging errors with standard logger, non-error messages are ignored.
		Logger: cache.NewLogger(func(ctx context.Context, msg string, keysAndValues ...interface{}) {
			log.Printf("cache failed: %s %v", msg, keysAndValues)
		}, nil, nil, nil),

		// Tweak these parameters to reduce/stabilize memory consumption at cost of cache hit rate.
		// If cache cardinality and size are reasonable, default values should be fine.
		DeleteExpiredAfter:       time.Hour,
		DeleteExpiredJobInterval: 10 * time.Minute,
		HeapInUseSoftLimit:       200 * 1024 * 1024, // 200MB soft limit for process heap in use.
		EvictFraction:            0.2,               // Drop 20% of mostly expired items (including non-expired) on heap overuse.
	}.Use)

	// Use context if available, it may hold TTL and SkipRead information.
	ctx := context.TODO()

	// Write value to cache.
	_ = c.Write(
		cache.WithTTL(ctx, time.Minute, true), // Change default TTL with context if necessary.
		[]byte("my-key"),
		Dog{Name: "Snoopy"},
	)

	// Read value from cache.
	val, _ := c.Read(ctx, []byte("my-key"))
	fmt.Printf("%s", val.Name)

	// Delete value from cache.
	_ = c.Delete(ctx, []byte("my-key"))

}
Output:

Snoopy

func (ShardedMapOf) Delete added in v0.2.3

func (c ShardedMapOf) Delete(ctx context.Context, key []byte) error

Delete removes value by the key.

It fails with ErrNotFound if key does not exist.

func (ShardedMapOf) DeleteAll added in v0.2.3

func (c ShardedMapOf) DeleteAll(ctx context.Context)

DeleteAll erases all entries.

func (*ShardedMapOf[V]) Dump added in v0.2.3

func (c *ShardedMapOf[V]) Dump(w io.Writer) (int, error)

Dump saves cached entries and returns a number of processed entries.

Dump uses encoding/gob to serialize cache entries, therefore it is necessary to register cached types in advance with cache.GobRegister.

func (ShardedMapOf) ExpireAll added in v0.2.3

func (c ShardedMapOf) ExpireAll(ctx context.Context)

ExpireAll marks all entries as expired, they can still serve stale cache.

func (ShardedMapOf) Len added in v0.2.3

func (c ShardedMapOf) Len() int

Len returns number of elements in cache.

func (ShardedMapOf) Load added in v0.2.3

func (c ShardedMapOf) Load(key []byte) (val V, loaded bool)

Load returns the value stored in the map for a key, or the zero value if no value is present. The loaded result indicates whether the value was found in the map.

func (ShardedMapOf) Read added in v0.2.3

func (c ShardedMapOf) Read(ctx context.Context, key []byte) (val V, _ error)

Read gets value.

func (*ShardedMapOf[V]) Restore added in v0.2.3

func (c *ShardedMapOf[V]) Restore(r io.Reader) (int, error)

Restore loads cached entries and returns number of processed entries.

Restore uses encoding/gob to unserialize cache entries, therefore it is necessary to register cached types in advance with cache.GobRegister.

func (ShardedMapOf) Store added in v0.2.3

func (c ShardedMapOf) Store(key []byte, val V)

Store sets the value for a key.

func (ShardedMapOf) Walk added in v0.2.3

func (c ShardedMapOf) Walk(walkFn func(e EntryOf[V]) error) (int, error)

Walk walks cached entries.

func (*ShardedMapOf[V]) WalkDumpRestorer added in v0.2.5

func (c *ShardedMapOf[V]) WalkDumpRestorer() WalkDumpRestorer

WalkDumpRestorer is an adapter of a non-generic cache transfer interface.

func (ShardedMapOf) Write added in v0.2.3

func (c ShardedMapOf) Write(ctx context.Context, k []byte, v V) error

Write sets value by the key.

type StatsTracker added in v0.2.4

type StatsTracker interface {
	// Add collects additional or observable value.
	Add(ctx context.Context, name string, increment float64, labelsAndValues ...string)

	// Set collects absolute value, e.g. number of cache entries at the moment.
	Set(ctx context.Context, name string, absolute float64, labelsAndValues ...string)
}

StatsTracker collects incremental and absolute (gauge) metrics.

This interface matches github.com/bool64/stats.Tracker.

func NewStatsTracker added in v0.2.4

func NewStatsTracker(
	add,
	set func(ctx context.Context, name string, val float64, labelsAndValues ...string),
) StatsTracker

NewStatsTracker creates a stats tracker instance from tracking functions.

type SyncMap added in v0.2.1

type SyncMap struct {
	// contains filtered or unexported fields
}

SyncMap is an in-memory cache backend. Please use NewSyncMap to create it.

func NewSyncMap added in v0.2.1

func NewSyncMap(options ...func(cfg *Config)) *SyncMap

NewSyncMap creates an instance of in-memory cache with optional configuration.

func (SyncMap) Delete added in v0.2.1

func (c SyncMap) Delete(ctx context.Context, key []byte) error

Delete removes value by the key.

func (SyncMap) DeleteAll added in v0.2.1

func (c SyncMap) DeleteAll(ctx context.Context)

DeleteAll erases all entries.

func (*SyncMap) Dump added in v0.2.1

func (c *SyncMap) Dump(w io.Writer) (int, error)

Dump saves cached entries and returns a number of processed entries.

Dump uses encoding/gob to serialize cache entries, therefore it is necessary to register cached types in advance with GobRegister.

func (SyncMap) ExpireAll added in v0.2.1

func (c SyncMap) ExpireAll(ctx context.Context)

ExpireAll marks all entries as expired, they can still serve stale values.

func (SyncMap) Len added in v0.2.1

func (c SyncMap) Len() int

Len returns number of elements including expired.

func (SyncMap) Read added in v0.2.1

func (c SyncMap) Read(ctx context.Context, key []byte) (interface{}, error)

Read gets value.

func (*SyncMap) Restore added in v0.2.1

func (c *SyncMap) Restore(r io.Reader) (int, error)

Restore loads cached entries and returns number of processed entries.

Restore uses encoding/gob to unserialize cache entries, therefore it is necessary to register cached types in advance with GobRegister.

func (SyncMap) Walk added in v0.2.1

func (c SyncMap) Walk(walkFn func(e Entry) error) (int, error)

Walk walks cached entries.

func (SyncMap) Write added in v0.2.1

func (c SyncMap) Write(ctx context.Context, k []byte, v interface{}) error

Write sets value by the key.

type Trait added in v0.2.6

type Trait struct {
	Closed chan struct{}

	DeleteExpired func(before time.Time)
	Len           func() int
	Evict         func(fraction float64) int

	Config Config
	Stat   StatsTracker
	Log    logTrait
	// contains filtered or unexported fields
}

Trait is a shared trait, useful to implement ReadWriter.

func NewTrait added in v0.2.6

func NewTrait(config Config, options ...func(t *Trait)) *Trait

NewTrait instantiates new Trait.

func (*Trait) NotifyDeleted added in v0.2.6

func (c *Trait) NotifyDeleted(ctx context.Context, key []byte)

NotifyDeleted collects logs and metrics.

func (*Trait) NotifyDeletedAll added in v0.2.6

func (c *Trait) NotifyDeletedAll(ctx context.Context, start time.Time, cnt int)

NotifyDeletedAll collects logs and metrics.

func (*Trait) NotifyExpiredAll added in v0.2.6

func (c *Trait) NotifyExpiredAll(ctx context.Context, start time.Time, cnt int)

NotifyExpiredAll collects logs and metrics.

func (*Trait) NotifyWritten added in v0.2.6

func (c *Trait) NotifyWritten(ctx context.Context, key []byte, value interface{}, ttl time.Duration)

NotifyWritten collects logs and metrics.

func (*Trait) PrepareRead added in v0.2.6

func (c *Trait) PrepareRead(ctx context.Context, cacheEntry *TraitEntry, found bool) (interface{}, error)

PrepareRead handles cached entry.

func (*Trait) TTL added in v0.2.6

func (c *Trait) TTL(ctx context.Context) time.Duration

TTL calculates time to live for a new entry.

type TraitEntry added in v0.2.6

type TraitEntry struct {
	K Key         `json:"key" description:"Key."`
	V interface{} `json:"val" description:"Value."`
	E int64       `json:"exp" description:"Expiration timestamp (ns)."`
	C int64       `json:"-" description:"Usage count or last serve timestamp (ns)."`
}

TraitEntry is a cache entry.

func (TraitEntry) ExpireAt added in v0.2.6

func (e TraitEntry) ExpireAt() time.Time

ExpireAt returns entry expiration time.

func (TraitEntry) Key added in v0.2.6

func (e TraitEntry) Key() []byte

Key returns entry key.

func (TraitEntry) Value added in v0.2.6

func (e TraitEntry) Value() interface{}

Value returns entry value.

type TraitEntryOf added in v0.2.6

type TraitEntryOf[V any] struct {
	K Key   `json:"key" description:"Cache entry key."`
	V V     `json:"val" description:"Cache entry value."`
	E int64 `json:"exp" description:"Expiration timestamp, ns."`
	C int64 `json:"-" description:"Usage count or last serve timestamp (ns)."`
}

TraitEntryOf is a cache entry.

func (TraitEntryOf[V]) ExpireAt added in v0.2.6

func (e TraitEntryOf[V]) ExpireAt() time.Time

ExpireAt returns entry expiration time.

func (TraitEntryOf[V]) Key added in v0.2.6

func (e TraitEntryOf[V]) Key() []byte

Key returns entry key.

func (TraitEntryOf[V]) Value added in v0.2.6

func (e TraitEntryOf[V]) Value() V

Value returns entry value.

type TraitOf added in v0.2.6

type TraitOf[V any] struct {
	Trait
}

TraitOf is a parametrized shared trait, useful to implement ReadWriterOf.

func NewTraitOf added in v0.2.6

func NewTraitOf[V any](config Config, options ...func(t *Trait)) *TraitOf[V]

NewTraitOf instantiates new TraitOf.

func (*TraitOf[V]) NotifyWritten added in v0.2.6

func (c *TraitOf[V]) NotifyWritten(ctx context.Context, key []byte, value V, ttl time.Duration)

NotifyWritten collects logs and metrics.

func (*TraitOf[V]) PrepareRead added in v0.2.6

func (c *TraitOf[V]) PrepareRead(ctx context.Context, cacheEntry *TraitEntryOf[V], found bool) (v V, err error)

PrepareRead handles cached entry.

type WalkDumpRestorer added in v0.2.0

type WalkDumpRestorer interface {
	Dumper
	Walker
	Restorer
}

WalkDumpRestorer walks, dumps and restores cache.

type Walker

type Walker interface {
	Walk(cb func(entry Entry) error) (int, error)
}

Walker calls function for every entry in cache and fails on first error returned by that function.

Count of processed entries is returned.

type WalkerOf added in v0.2.3

type WalkerOf[V any] interface {
	Walk(cb func(entry EntryOf[V]) error) (int, error)
}

WalkerOf calls function for every entry in cache and fails on first error returned by that function.

Count of processed entries is returned.

type Writer

type Writer interface {
	// Write stores value in cache with a given key.
	Write(ctx context.Context, key []byte, value interface{}) error
}

Writer writes to cache.

type WriterOf added in v0.2.3

type WriterOf[V any] interface {
	// Write stores value in cache with a given key.
	Write(ctx context.Context, key []byte, value V) error
}

WriterOf writes to cache.

Directories

Path Synopsis
Package bench implements concurrent benchmark for cache backends.
