
oncecache: on-demand in-memory object cache


oncecache is a strongly-typed, concurrency-safe, context-aware, dependency-free, in-memory, on-demand Golang object cache, focused on write-once, read-often ergonomics.

The package also provides an event mechanism useful for logging, metrics, or propagating cache entries between overlapping composite caches.
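
A minimal usage sketch (the import path and the fetch func are illustrative assumptions):

package main

import (
	"context"
	"fmt"

	"github.com/neilotoole/oncecache" // assumed import path
)

func main() {
	ctx := context.Background()

	// fetch is a hypothetical fetch func. It is invoked at most once per
	// key, on the first Cache.Get for that key.
	fetch := func(ctx context.Context, key int) (string, error) {
		fmt.Println("fetching", key)
		return fmt.Sprintf("value-%d", key), nil
	}

	c := oncecache.New[int, string](fetch)

	v, err := c.Get(ctx, 7) // miss: invokes fetch and caches the result
	fmt.Println(v, err)

	v, err = c.Get(ctx, 7) // hit: returns the cached value; fetch is not invoked
	fmt.Println(v, err)
}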

Documentation


Constants

This section is empty.

Variables

var LogConfig = struct {
	Msg       string
	AttrEvent string
	AttrCache string
	AttrOp    string
	AttrKey   string
	AttrVal   string
	AttrErr   string
}{
	Msg:       "Cache event",
	AttrEvent: "ev",
	AttrCache: "cache",
	AttrOp:    "op",
	AttrKey:   "k",
	AttrVal:   "v",
	AttrErr:   "err",
}

LogConfig is used by oncecache.Log to configure log output. See also: oncecache.Event.LogValue, oncecache.Entry.LogValue.
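
For example, a sketch of tweaking the output before constructing a cache that uses oncecache.Log (the values below are illustrative, not the defaults):

// Use longer attribute keys than the defaults ("k", "v").
oncecache.LogConfig.Msg = "cache event"
oncecache.LogConfig.AttrKey = "key"
oncecache.LogConfig.AttrVal = "val"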

Functions

func NewContext

func NewContext[K comparable, V any](ctx context.Context, c *Cache[K, V]) context.Context

NewContext returns ctx decorated with Cache c. If ctx is nil, a new context is created.

Types

type Cache

type Cache[K comparable, V any] struct {
	// contains filtered or unexported fields
}

Cache is a concurrency-safe, in-memory, on-demand cache that ensures that a given cache entry is populated only once, either implicitly via Cache.Get and the fetch func, or externally via Cache.MaybeSet.

However, a cache entry can be explicitly cleared via Cache.Delete or Cache.Clear, allowing the entry to be populated afresh.

A cache entry consists not only of the key and value, but also any error associated with filling the entry value via the fetch func or via Cache.MaybeSet. Thus, a cache entry is a triple: (key, value, error). An entry with a non-nil error is still a valid cache entry. A call to Cache.Get for an existing errorful cache entry does not invoke the fetch func again. Cache entry population occurs only once (hence "oncecache"), unless the entry is explicitly evicted via Cache.Delete or Cache.Clear.

The zero value is not usable; instead invoke New.
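
To illustrate the (key, value, error) triple, a sketch (ctx and the failing fetch func are assumptions) showing that an errorful entry is cached until evicted:

fetch := func(ctx context.Context, key int) (string, error) {
	if key < 0 {
		return "", fmt.Errorf("no value for negative key %d", key)
	}
	return strconv.Itoa(key), nil
}

c := oncecache.New[int, string](fetch)

_, err := c.Get(ctx, -1) // miss: fetch runs and returns an error
_, err = c.Get(ctx, -1)  // hit: the errorful entry is returned; fetch is not retried
fmt.Println(err)

c.Delete(ctx, -1)       // evict the errorful entry
_, err = c.Get(ctx, -1) // miss: fetch runs afresh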

func FromContext

func FromContext[K comparable, V any](ctx context.Context) *Cache[K, V]

FromContext returns the Cache value stored in ctx, if any, or nil. All cache callbacks receive a context that has been decorated with the Cache instance.
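
As a sketch, a helper deep in a call chain (lookupName is hypothetical) can retrieve the cache from ctx instead of receiving it as a parameter:

// lookupName retrieves the request-scoped cache from ctx rather than
// receiving it as a parameter.
func lookupName(ctx context.Context, userID int) (string, error) {
	c := oncecache.FromContext[int, string](ctx)
	if c == nil {
		return "", errors.New("no cache in context")
	}
	return c.Get(ctx, userID)
}

// Elsewhere, decorate the context once via NewContext:
//
//	ctx = oncecache.NewContext(ctx, c)
//	name, err := lookupName(ctx, 42)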

func New

func New[K comparable, V any](fetch FetchFunc[K, V], opts ...Opt) *Cache[K, V]

New returns a new Cache instance. The fetch func is invoked, on demand, by Cache.Get to obtain an entry value for a given key, or the entry may be externally set via Cache.MaybeSet. Either way, the entry is populated only once, unless it is explicitly cleared via Cache.Delete or Cache.Clear, at which point it may be populated afresh.

Arg opts is a set of options that can be used to configure the cache. For example, see Name to set the cache name, or the OnFill or OnEvict options for event callbacks. Any nil Opt in opts is ignored.

func (*Cache[K, V]) Clear

func (c *Cache[K, V]) Clear(ctx context.Context)

Clear clears the cache entries, invoking any OnEvict callbacks on each cache entry. The entry callback order is not specified. The cache is locked until Clear (including any callbacks) returns.

func (*Cache[K, V]) Close

func (c *Cache[K, V]) Close() error

Close closes the cache, releasing any resources. Close is idempotent and always returns nil. Callbacks are not invoked. The cache is not usable after Close is invoked; calls to other Cache methods may panic.

func (*Cache[K, V]) Delete

func (c *Cache[K, V]) Delete(ctx context.Context, key K)

Delete deletes the entry for the given key, invoking any OnEvict callbacks. The cache is locked until Delete (including any callbacks) returns.

func (*Cache[K, V]) Get

func (c *Cache[K, V]) Get(ctx context.Context, key K) (V, error)

Get gets the value (and fill error) for the given key. If there's no entry for the key, the fetch func is invoked, setting the entry value and error. If the entry is already populated, the value and error are returned without invoking the fetch func. Any OnHit, OnMiss, and OnFill callbacks are invoked, and OpHit, OpMiss, and OpFill events are emitted, as appropriate.

func (*Cache[K, V]) GobDecode

func (c *Cache[K, V]) GobDecode(p []byte) error

GobDecode implements gob.GobDecoder. Only the cache name and entries are decoded. The fetch func and callbacks are not decoded. Any pre-existing entries in c are cleared prior to decoding.

func (*Cache[K, V]) GobEncode

func (c *Cache[K, V]) GobEncode() ([]byte, error)

GobEncode implements gob.GobEncoder. Only the cache name and entries are encoded. The fetch func and callbacks are not encoded.
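
A sketch of a gob round trip, assuming the cache c and fetch func from the earlier sketches; because the fetch func and callbacks are not encoded, the destination cache supplies its own:

var buf bytes.Buffer
if err := gob.NewEncoder(&buf).Encode(c); err != nil {
	log.Fatal(err)
}

// The destination cache is constructed with its own fetch func and options.
c2 := oncecache.New[int, string](fetch)
if err := gob.NewDecoder(&buf).Decode(c2); err != nil {
	log.Fatal(err)
}
fmt.Println(c2.Len(), "entries restored")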

func (*Cache[K, V]) Has

func (c *Cache[K, V]) Has(key K) bool

Has returns true if Cache c has an entry for key.

func (*Cache[K, V]) Keys

func (c *Cache[K, V]) Keys() []K

Keys returns the cache keys. The keys will be in an indeterminate order.

func (*Cache[K, V]) Len

func (c *Cache[K, V]) Len() int

Len returns the number of entries in the cache.

func (*Cache[K, V]) LogValue

func (c *Cache[K, V]) LogValue() slog.Value

LogValue implements slog.LogValuer.

func (*Cache[K, V]) MaybeSet

func (c *Cache[K, V]) MaybeSet(ctx context.Context, key K, val V, err error) (ok bool)

MaybeSet sets the value and fill error for the given key if the entry is not already populated, returning true if the value was set. This allows an external process to prime the cache.

Note that the entry might instead be filled implicitly via Cache.Get, when it invokes the fetch func. If there's already a cache entry for key, MaybeSet is a no-op: the value is not updated. If the call does populate the entry, any OnFill callbacks - as provided to New - are invoked, and MaybeSet returns true.
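
A sketch of priming the cache from values already in hand (warmValues is a hypothetical map; ctx and c are as in the earlier sketches):

for key, val := range warmValues {
	// MaybeSet returns false if an entry for key already exists,
	// e.g. already filled by a concurrent Get; the existing entry wins.
	_ = c.MaybeSet(ctx, key, val, nil)
}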

func (*Cache[K, V]) Name

func (c *Cache[K, V]) Name() string

Name returns the cache's name, useful for logging. Specify the cache name by passing oncecache.Name to New; otherwise a random name is used.

func (*Cache[K, V]) String

func (c *Cache[K, V]) String() string

String returns a debug-friendly string representation of the cache.

type Entry

type Entry[K comparable, V any] struct {
	Cache *Cache[K, V]
	Key   K
	Val   V
	Err   error
}

Entry is the external representation of a cache entry. It is not part of the cache's internal state; it can be modified by the user if desired.

func (Entry[K, V]) LogValue

func (e Entry[K, V]) LogValue() slog.Value

LogValue implements slog.LogValuer, logging Val if it implements slog.LogValuer or is a primitive type such as int or bool (but not string), logging Err if non-nil, and always logging Key and Cache.Name.

func (Entry[K, V]) String

func (e Entry[K, V]) String() string

String returns a string representation of the entry. The entry's Val field is not incorporated. For logging, note Entry.LogValue.

type Event

type Event[K comparable, V any] struct {
	Entry[K, V]
	Op Op
}

Event is a cache event.

func (Event[K, V]) LogValue

func (e Event[K, V]) LogValue() slog.Value

LogValue implements slog.LogValuer, logging according to Entry.LogValue, but also logging Event.Op.

func (Event[K, V]) String

func (e Event[K, V]) String() string

String returns a string representation of the event. The event's Val field is not incorporated. For logging, note Event.LogValue.

type FetchFunc

type FetchFunc[K comparable, V any] func(ctx context.Context, key K) (val V, err error)

FetchFunc is called by Cache.Get to fill an unpopulated cache entry. If needed, the source Cache can be retrieved from ctx via FromContext.

type Name

type Name string

Name is an Opt for New that sets the cache's name. The name is accessible via Cache.Name.

c := oncecache.New[int, string](fetch, oncecache.Name("foobar"))

The name is used by Cache.String and Cache.LogValue. If Name is not specified, a random name such as "cache-38a2b7d4" is generated.

type Op

type Op uint8

Op is an enumeration of cache operations, as seen in Event.Op.

const (
	// OpHit indicates a cache hit: a cache entry already exists for the key. Note
	// that the cache entry may contain a non-nil error, and the entry value may
	// be the zero value. An errorful cache entry is a valid hit.
	OpHit Op = 1

	// OpMiss indicates a cache miss. It is always immediately followed by an
	// [OpFill].
	OpMiss Op = 2

	// OpFill indicates that a cache entry has been populated. Typically it is
	// immediately preceded by [OpMiss], but will occur standalone when
	// [Cache.MaybeSet] is invoked. Note that if the entry fill results in an error,
	// the entry is still considered valid, and [OpFill] is still emitted.
	OpFill Op = 3

	// OpEvict indicates a cache entry has been removed.
	OpEvict Op = 4
)

func (Op) IsZero

func (o Op) IsZero() bool

IsZero returns true if o is the zero value, which is an invalid Op.

func (Op) String

func (o Op) String() string

String returns the op name.

type Opt

type Opt interface {
	// contains filtered or unexported methods
}

Opt is an option for New.

func Log

func Log(log *slog.Logger, lvl slog.Leveler, ops ...Op) Opt

Log is an Opt for oncecache.New that logs each Event to log, where Event.Op is in ops. If ops is empty, all events are logged. If log is nil, Log is a no-op. If lvl is nil, slog.LevelInfo is used. Example usage:

c := oncecache.New[int, int](
	calcFibonacci,
	oncecache.Name("fibs"),
	oncecache.Log(log, slog.LevelInfo, oncecache.OpFill, oncecache.OpEvict),
	oncecache.Log(log, slog.LevelDebug, oncecache.OpHit, oncecache.OpMiss),
)

Log is intended to handle basic logging needs. Some configuration is possible via oncecache.LogConfig. If you require more control, you can roll your own logging mechanism using an OnEvent channel or the On* callbacks. If doing so, note that Event, Entry and Cache each implement slog.LogValuer.

func OnEvent

func OnEvent[K comparable, V any](ch chan<- Event[K, V], block bool, ops ...Op) Opt

OnEvent is an Opt argument to New that configures the cache to emit events on the given chan. If ops is empty, all events are emitted; otherwise, only events for the given ops are emitted.

If block is true, the Cache method that triggered the event blocks when sending to a full ch. If block is false, the event is dropped if ch is full.

You can use an unbuffered channel and block=true to stop the event consumer from falling too far behind the cache state. Alternatively, the synchronous OnHit, OnMiss, OnFill, and OnEvict callbacks can be used, at the cost of increased lock contention and lower throughput.

For basic logging, consider oncecache.Log.
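
A sketch of wiring up an event consumer (the channel size and consumer body are illustrative; fetch and ctx are as in the earlier sketches):

events := make(chan oncecache.Event[int, string], 100)

// Consume events asynchronously, e.g. for metrics or logging.
go func() {
	for ev := range events {
		switch ev.Op {
		case oncecache.OpFill:
			// e.g. record that ev.Key was filled with ev.Val / ev.Err.
		case oncecache.OpEvict:
			// e.g. record that ev.Key was removed.
		}
	}
}()

c := oncecache.New[int, string](fetch,
	// block=false: if events is full, drop the event rather than blocking
	// the cache operation that triggered it.
	oncecache.OnEvent(events, false, oncecache.OpFill, oncecache.OpEvict),
)

_, _ = c.Get(ctx, 1) // miss + fill: an OpFill event is sent to events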

func OnEvict

func OnEvict[K comparable, V any](fn func(ctx context.Context, key K, val V, err error)) Opt

OnEvict returns a callback Opt for New that is invoked when a cache entry is evicted via Cache.Delete or Cache.Clear.

Note that OnEvict callbacks are synchronous; the triggering call to Cache.Delete or Cache.Clear blocks until every OnEvict returns. Consider using OnEvent for long-running callbacks.

func OnFill

func OnFill[K comparable, V any](fn func(ctx context.Context, key K, val V, err error)) Opt

OnFill returns a callback Opt for New that is invoked when a cache entry is populated, whether on-demand via Cache.Get and FetchFunc, or externally via Cache.MaybeSet.

Note that OnFill callbacks are synchronous; the triggering call to Cache.MaybeSet or Cache.Get blocks until every OnFill returns. Consider using OnEvent for long-running callbacks.

While OnFill can be used for logging, metrics, etc., most common tasks are better accomplished via OnEvent.
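
For instance, a sketch of propagating newly filled entries to an overlapping cache (downstream is a hypothetical second Cache[int, string]):

c := oncecache.New[int, string](fetch,
	oncecache.OnFill(func(ctx context.Context, key int, val string, err error) {
		// Copy the newly filled entry into the downstream cache.
		// MaybeSet won't overwrite an entry downstream already has.
		downstream.MaybeSet(ctx, key, val, err)
	}),
)

_, _ = c.Get(ctx, 7) // fills c and, via the callback, downstream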

func OnHit

func OnHit[K comparable, V any](fn func(ctx context.Context, key K, val V, err error)) Opt

OnHit returns a callback Opt for New that is invoked when Cache.Get results in a cache hit.

Note that OnHit callbacks are synchronous; the triggering call to Cache.Get blocks until every OnHit returns. Consider using the asynchronous OnEvent for long-running callbacks.

func OnMiss

func OnMiss[K comparable, V any](fn func(ctx context.Context, key K)) Opt

OnMiss returns a callback Opt for New that is invoked when Cache.Get results in a cache miss.

Note that OnMiss callbacks are synchronous; the triggering call to Cache.Get blocks until every OnMiss returns. Consider using the asynchronous OnEvent for long-running callbacks.

FIXME: Starting to think OnMiss should just use the standard callback signature.
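
A sketch of simple hit/miss counters combining OnHit and OnMiss (the counters are illustrative; for heavier work prefer OnEvent):

var hits, misses atomic.Int64

c := oncecache.New[int, string](fetch,
	oncecache.OnHit(func(ctx context.Context, key int, val string, err error) {
		hits.Add(1)
	}),
	oncecache.OnMiss(func(ctx context.Context, key int) {
		misses.Add(1)
	}),
)

_, _ = c.Get(ctx, 1) // miss
_, _ = c.Get(ctx, 1) // hit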

Directories

examples
