theine

package module
v0.3.2
Published: Dec 7, 2023 License: MIT Imports: 7 Imported by: 14

README

Theine

High performance in-memory & hybrid cache inspired by Caffeine.

Requirements

Go 1.19+

Installation

go get github.com/Yiling-J/theine-go

API

Builder API

Theine provides two types of clients: a simple cache and a loading cache. Both are initialized from a builder. The difference is that the loading cache's Get method computes the value with a loader function on a miss, while the simple cache's Get only returns false and does nothing.

The loading cache uses singleflight to prevent concurrent loading of the same key (thundering herd).

simple cache:

import "github.com/Yiling-J/theine-go"

// key type string, value type string, max size 1000
// max size is the only required configuration to build a client
client, err := theine.NewBuilder[string, string](1000).Build()
if err != nil {
	panic(err)
}

// the builder also provides several optional configurations
// you can chain them together and call Build once
// client, err := theine.NewBuilder[string, string](1000).Cost(...).Doorkeeper(...).Build()

// or create builder first
builder := theine.NewBuilder[string, string](1000)

// dynamic cost function based on value
// passing cost 0 to Set will call this function to evaluate the cost at runtime
builder.Cost(func(v string) int64 {
	return int64(len(v))
})

// doorkeeper
// the doorkeeper will drop a Set if the key is not in the bloom filter yet
// this can improve write performance, but may lower the hit ratio
builder.Doorkeeper(true)

// removal listener, this function will be called when an entry is removed
// RemoveReason can be REMOVED/EVICTED/EXPIRED
// REMOVED: removed by API
// EVICTED: evicted by the Window-TinyLFU policy
// EXPIRED: expired by the timing wheel
builder.RemovalListener(func(key string, value string, reason theine.RemoveReason) {})
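
For example, a listener that distinguishes the three reasons might look like this (a minimal sketch; what each branch does is up to you):

builder.RemovalListener(func(key string, value string, reason theine.RemoveReason) {
	switch reason {
	case theine.REMOVED:
		// removed explicitly through the Delete API
	case theine.EVICTED:
		// evicted by the Window-TinyLFU policy
	case theine.EXPIRED:
		// expired via the timing wheel
	}
})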

loading cache:

import "github.com/Yiling-J/theine-go"

// loader function: func(ctx context.Context, key K) (theine.Loaded[V], error)
// the Loaded struct includes the cache value, cost and TTL, which are required by the Set method
client, err := theine.NewBuilder[string, string](1000).Loading(
	func(ctx context.Context, key string) (theine.Loaded[string], error) {
		return theine.Loaded[string]{Value: key, Cost: 1, TTL: 0}, nil
	},
).Build()
if err != nil {
	panic(err)
}

Other builder options (cost, doorkeeper, removal listener) are the same as for the simple cache.

Client API

// set, key foo, value bar, cost 1
// success will be false if cost > max size
success := client.Set("foo", "bar", 1)
// cost 0 means using dynamic cost function
// success := client.Set("foo", "bar", 0)

// set with ttl
success = client.SetWithTTL("foo", "bar", 1, 1*time.Second)

// get(simple cache version)
value, ok := client.Get("foo")

// get(loading cache version)
value, err := client.Get(ctx, "foo")

// remove
client.Delete("foo")

// iterate key/value in cache and apply custom function
// if function returns false, range stops the iteration
client.Range(func(key, value string) bool {
	return true
})

// close the client: set the shard hashmaps to nil and stop all goroutines
client.Close()

Hybrid Cache

The HybridCache feature enables Theine to extend the DRAM cache to NVM. With HybridCache, Theine can seamlessly move items stored in the cache between DRAM and NVM as they are accessed. Using HybridCache, you can shrink the DRAM footprint of the cache and replace it with NVM such as Flash. This also lets you achieve large cache capacities at the same or relatively lower power and dollar cost.

Design

Hybrid Cache is inspired by CacheLib's HybridCache. See the introduction and architecture in CacheLib's guide.

When you use HybridCache, items allocated in the cache can live on NVM or DRAM based on how they are accessed. Irrespective of where they live, they are always brought into DRAM when you access them.

Items start their lifetime in DRAM. As an item becomes cold, it gets evicted from DRAM when the cache is full, and Theine spills it to a cache on the NVM device. Upon subsequent access through Get(), if the item is not in DRAM, Theine looks it up in the HybridCache and, if found, moves it back to DRAM. When the HybridCache fills up, subsequent insertions from DRAM throw away colder items from the HybridCache.

Like CacheLib, Theine's hybrid cache has both BigHash and Block Cache engines. It's highly recommended to read the CacheLib architecture design before using hybrid cache. Here is a short introduction to these two engines (copied from CacheLib):

  • BigHash is effectively a giant fixed-bucket hash map on the device. To read or write, the entire bucket is read (and, in case of a write, updated and written back). A Bloom filter is used to reduce the number of IOs. When a bucket is full, items are evicted in FIFO manner. You don't pay any RAM price here (except the Bloom filter, which is 2GB for 1TB BigHash, tunable).
  • Block Cache, on the other hand, divides the device into equally sized regions (16MB, tunable) and fills a region with items of the same size class, or, in log-mode, fills regions sequentially with items of different sizes. Sometimes we call log-mode “stack alloc”. BC stores a compact index in memory: key hash to offset. We do not store the full key in memory, and if a collision happens (super rare), the old item will look like it was evicted. In your calculations, use 12 bytes of overhead per item to estimate RAM usage. For example, if your average item size is 4KB and cache size is 500GB, you'll need around 1.4GB of memory.
Using Hybrid Cache

To use HybridCache, you need to create an NVM cache with NvmBuilder. NewNvmBuilder requires two parameters: the cache file name and the cache size in bytes. Theine will use direct I/O to read/write the file.

nvm, err := theine.NewNvmBuilder[int, int]("cache", 150<<20).[settings...].Build()

Then enable hybrid mode in your Theine builder.

client, err := theine.NewBuilder[int, int](100).Hybrid(nvm).Build()
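
Putting it together, here is a minimal end-to-end sketch using default NVM settings; the file name "cache.data" and sizes are just examples, and note that the hybrid Get additionally returns an error:

import "github.com/Yiling-J/theine-go"

// NVM cache backed by a local file, 150 MB, default settings
nvm, err := theine.NewNvmBuilder[int, int]("cache.data", 150<<20).Build()
if err != nil {
	panic(err)
}

// DRAM cache with max size 100, extended by the NVM cache
client, err := theine.NewBuilder[int, int](100).Hybrid(nvm).Build()
if err != nil {
	panic(err)
}
defer client.Close()

// Set/SetWithTTL work the same as the in-memory client
client.Set(1, 100, 1)

// hybrid Get returns (value, ok, error)
value, ok, err := client.Get(1)
if err != nil {
	panic(err)
}
_, _ = value, ok
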
NVM Builder Settings

All settings are optional, unless marked as "Required".

  • [Common] BlockSize default 4096

    Device block size in bytes (minimum IO granularity).

  • [Common] KeySerializer default JsonSerializer

    KeySerializer is used to marshal/unmarshal between your key type and bytes (a sketch of a custom serializer follows this settings list).

    type Serializer[T any] interface {
        Marshal(v T) ([]byte, error)
        Unmarshal(raw []byte, v *T) error
    }
    
  • [Common] ValueSerializer default JsonSerializer

    ValueSerializer is used to marshal/unmarshal between your value type and bytes. Same interface as KeySerializer.

  • [Common] ErrorHandler default do nothing

    Theine evicts entries to NVM asynchronously, so errors are handled by this error handler.

  • [BlockCache] RegionSize default 16 << 20 (16 MB)

    Region size in bytes.

  • [BlockCache] CleanRegionSize default 3

    The number of regions reserved for future writes. Set this to roughly match your per-second write rate; this ensures writes won't have to retry while waiting for a region reclamation to finish.

  • [BigHash] BucketSize default 4 << 10 (4 KB)

    Bucket size in bytes.

  • [BigHash] BigHashPct default 10

    Percentage of space to reserve for BigHash. Set the percentage > 0 to enable BigHash; the remaining space is used by BlockCache. The value has to be in the range [0, 100]. Setting it to 100 disables BlockCache.

  • [BigHash] BigHashMaxItemSize default (bucketSize - 80)

    Maximum size of a small item to be stored in BigHash. Must be less than (bucket size - 80).

  • [BigHash] BucketBfSize default 8 bytes

    Bloom filter size, bytes per bucket.
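
As referenced in the KeySerializer setting above, here is a minimal sketch of a custom serializer based on encoding/gob. The GobSerializer name is illustrative, not part of Theine; it is assumed to satisfy the Serializer interface shown above and can be passed to KeySerializer or ValueSerializer as long as the type is gob-encodable:

import (
	"bytes"
	"encoding/gob"
)

type GobSerializer[T any] struct{}

// Marshal encodes the value into bytes using gob.
func (s *GobSerializer[T]) Marshal(v T) ([]byte, error) {
	var buf bytes.Buffer
	if err := gob.NewEncoder(&buf).Encode(v); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

// Unmarshal decodes the bytes back into the value.
func (s *GobSerializer[T]) Unmarshal(raw []byte, v *T) error {
	return gob.NewDecoder(bytes.NewReader(raw)).Decode(v)
}

// usage (illustrative):
// nvm, err := theine.NewNvmBuilder[string, string]("cache.data", 150<<20).
// 	ValueSerializer(&GobSerializer[string]{}).
// 	Build()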

Hybrid Mode Settings

After you call Hybrid(...) on a cache builder, Theine converts the current builder to a hybrid builder. The hybrid builder has several settings.

  • Workers default 2

    Theine evicts entries in a separate policy goroutine, but inserting into NVM can be done in parallel. To make this work, Theine sends evicted entries to workers, and each worker syncs data to the NVM cache. This setting controls how many workers are used to sync data (see the sketch after this list).

  • AdmProbability default 1

    This is an admission policy for endurance and performance reasons. When entries are evicted from the DRAM cache, this policy controls what percentage of them is inserted into NVM. A value of 1 means that all entries evicted from DRAM will be inserted into NVM. Values should be in the range [0, 1].
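
As referenced in the Workers setting, here is a sketch that combines a few NVM builder settings with the hybrid mode settings; all file names, sizes, and values are illustrative, not recommendations:

nvm, err := theine.NewNvmBuilder[string, string]("cache.data", 1<<30).
	BlockSize(4096).       // device block size
	RegionSize(16 << 20).  // BlockCache region size
	BigHashPct(10).        // reserve 10% of the file for BigHash
	Build()
if err != nil {
	panic(err)
}

client, err := theine.NewBuilder[string, string](10000).
	Hybrid(nvm).
	Workers(4).            // 4 goroutines syncing evicted entries to NVM
	AdmProbability(0.8).   // admit 80% of DRAM evictions into NVM
	Build()
if err != nil {
	panic(err)
}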

Limitations
  • Cache Persistence is not currently supported, but it may be added in the future. You can still use the Persistence API in a hybrid-enabled cache, but only the DRAM part of the cache will be saved or loaded.
  • The removal listener will only receive REMOVED events, which are generated when an entry is explicitly removed by calling the Delete API.
  • No Range/Len API.

Cache Persistence

Theine supports persisting the cache into an io.Writer and restoring it from an io.Reader. Gob is used to encode/decode data, so make sure your key/value types can be encoded correctly by gob before using this feature.

API
func (c *Cache[K, V]) SaveCache(version uint64, writer io.Writer) error
func (c *Cache[K, V]) LoadCache(version uint64, reader io.Reader) error

Important: please call LoadCache immediately after the client is created, or existing entries' TTLs might be affected.

Example:
// save
f, err := os.Create("test")
if err != nil {
	panic(err)
}
if err := client.SaveCache(0, f); err != nil {
	panic(err)
}
f.Close()

// load
f, err = os.Open("test")
if err != nil {
	panic(err)
}
newClient, err := theine.NewBuilder[int, int](100).Build()
if err != nil {
	panic(err)
}
// load immediately after the client is created
if err := newClient.LoadCache(0, f); err != nil {
	panic(err)
}
f.Close()

The version number must be the same when saving and loading, or LoadCache will return a theine.VersionMismatch error. You can change the version number when you want to ignore a previously persisted cache.

err := newClient.LoadCache(1, f)
// VersionMismatch is a global variable
if err == theine.VersionMismatch {
	// ignore and skip loading
} else if err != nil {
	// handle other errors
}
Details

When persisting the cache, Theine roughly does the following:

  • Store the version number.
  • Store the clock (used for TTL).
  • Store the frequency sketch.
  • Store entries in the protected LRU one by one, from most to least recently used.
  • Store entries in the probation LRU one by one, from most to least recently used.
  • Loop over shards and store entries in each shard's deque one by one.

When loading the cache, Theine roughly does the following:

  • Load the version number and compare it to the current version number.
  • Load the clock.
  • Load the frequency sketch.
  • Load the protected LRU and insert entries back into the new protected LRU and shards/timing wheel; expired entries are ignored. Because the cache capacity may have changed, this step stops once the maximum protected LRU size is reached.
  • Load the probation LRU and insert entries back into the new probation LRU and shards/timing wheel; expired entries are ignored. Because the cache capacity may have changed, this step stops once the maximum probation LRU size is reached.
  • Load deque entries and insert them back into shards; expired entries are ignored.

Theine saves a checksum when persisting the cache and verifies the checksum first when loading.

Benchmarks

Source: https://github.com/Yiling-J/go-cache-benchmark-plus

This repo includes reproducible throughput and hit-ratio benchmark code; you can also test your own cache package with it.

throughput
goos: darwin
goarch: amd64
pkg: github.com/Yiling-J/go-cache-benchmark-plus
cpu: Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz
BenchmarkGetParallel/theine-12          40604346                28.72 ns/op            0 B/op          0 allocs/op
BenchmarkGetParallel/ristretto-12       60166238                23.50 ns/op           17 B/op          1 allocs/op
BenchmarkSetParallel/theine-12          16067138                67.55 ns/op            0 B/op          0 allocs/op
BenchmarkSetParallel/ristretto-12       12830085                79.30 ns/op          116 B/op          3 allocs/op
BenchmarkZipfParallel/theine-12         15908767                70.07 ns/op            0 B/op          0 allocs/op
BenchmarkZipfParallel/ristretto-12      17200935                80.05 ns/op          100 B/op          3 allocs/op
hit ratios

ristretto v0.1.1: https://github.com/dgraph-io/ristretto

According to the Ristretto README, the hit ratio should be higher, but I can't reproduce their benchmark results, so I opened an issue: https://github.com/dgraph-io/ristretto/issues/336

golang-lru v2.0.2: https://github.com/hashicorp/golang-lru

zipf

(hit ratio chart)

search

This trace is described as "disk read accesses initiated by a large commercial search engine in response to various web search requests."

(hit ratio chart)

database

This trace is described as "a database server running at a commercial site running an ERP application on top of a commercial database."

(hit ratio chart)

Scarabresearch database trace

Scarabresearch 1 hour database trace from this issue.

(hit ratio chart)

Meta anonymized trace

Meta shared anonymized trace captured from large scale production cache services, from cachelib.

(hit ratio chart)

Tips

  • If your key size is very large, you may consider using a struct with 2 hashes instead:
type hashKey struct {
	key uint64
	conflict uint64
}

This is how Ristretto handles keys. Keep in mind that even though the collision rate is very low, collisions are still possible.
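
A minimal sketch of building such a key from a large string using two different standard-library FNV hashes; the newHashKey helper is only an illustration, not part of Theine's API:

import "hash/fnv"

// newHashKey derives the hashKey struct above from a large raw key,
// using FNV-1a for the primary hash and FNV-1 for the conflict hash.
func newHashKey(raw string) hashKey {
	h1 := fnv.New64a()
	h1.Write([]byte(raw))
	h2 := fnv.New64()
	h2.Write([]byte(raw))
	return hashKey{key: h1.Sum64(), conflict: h2.Sum64()}
}

// hashKey is comparable, so it can be used directly as the cache key type:
// client, err := theine.NewBuilder[hashKey, string](1000).Build()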

Support

Open an issue, ask a question in discussions, or join the Discord channel: https://discord.gg/StrgfPaQqE

Documentation

Index

Constants

const (
	REMOVED = internal.REMOVED
	EVICTED = internal.EVICTED
	EXPIRED = internal.EXPIRED
)
const (
	ZERO_TTL = 0 * time.Second
)

Variables

var VersionMismatch = internal.VersionMismatch

Functions

This section is empty.

Types

type Builder added in v0.2.0

type Builder[K comparable, V any] struct {
	// contains filtered or unexported fields
}

func NewBuilder added in v0.2.0

func NewBuilder[K comparable, V any](maxsize int64) *Builder[K, V]

func (*Builder[K, V]) Build added in v0.2.0

func (b *Builder[K, V]) Build() (*Cache[K, V], error)

Build builds a cache client from builder.

func (*Builder[K, V]) BuildWithLoader added in v0.2.0

func (b *Builder[K, V]) BuildWithLoader(loader func(ctx context.Context, key K) (Loaded[V], error)) (*LoadingCache[K, V], error)

BuildWithLoader builds a loading cache client from builder with custom loader function.

func (*Builder[K, V]) Cost added in v0.2.0

func (b *Builder[K, V]) Cost(cost func(v V) int64) *Builder[K, V]

Cost adds a dynamic cost function to the builder. The default cost function always returns 1.

func (*Builder[K, V]) Doorkeeper added in v0.2.0

func (b *Builder[K, V]) Doorkeeper(enabled bool) *Builder[K, V]

Doorkeeper enables the doorkeeper. The doorkeeper will drop a Set if the key is not in the bloom filter yet.

func (*Builder[K, V]) Hybrid added in v0.3.0

func (b *Builder[K, V]) Hybrid(cache internal.SecondaryCache[K, V]) *HybridBuilder[K, V]

Add secondary cache and switch to HybridBuilder.

func (*Builder[K, V]) Loading added in v0.3.0

func (b *Builder[K, V]) Loading(
	loader func(ctx context.Context, key K) (Loaded[V], error),
) *LoadingBuilder[K, V]

Add loading function and switch to LoadingBuilder.

func (*Builder[K, V]) RemovalListener added in v0.2.0

func (b *Builder[K, V]) RemovalListener(listener func(key K, value V, reason RemoveReason)) *Builder[K, V]

RemovalListener adds a removal callback function to the builder. This function is called when an entry in the cache is evicted, expired, or deleted.

func (*Builder[K, V]) StringKey added in v0.3.2

func (b *Builder[K, V]) StringKey(fn func(k K) string) *Builder[K, V]

StringKey adds a custom key -> string function; the string will be used in shard hashing.

type Cache

type Cache[K comparable, V any] struct {
	// contains filtered or unexported fields
}

func (*Cache[K, V]) Close

func (c *Cache[K, V]) Close()

Close closes all goroutines created by cache.

func (*Cache[K, V]) Delete

func (c *Cache[K, V]) Delete(key K)

Delete deletes key from cache.

func (*Cache[K, V]) Get

func (c *Cache[K, V]) Get(key K) (V, bool)

Get gets value by key.

func (*Cache[K, V]) Len

func (c *Cache[K, V]) Len() int

Len returns number of entries in cache.

func (*Cache[K, V]) LoadCache added in v0.2.6

func (c *Cache[K, V]) LoadCache(version uint64, reader io.Reader) error

LoadCache loads cache data from a reader.

func (*Cache[K, V]) Range added in v0.2.4

func (c *Cache[K, V]) Range(f func(key K, value V) bool)

Range calls f sequentially for each key and value present in the cache. If f returns false, range stops the iteration.

func (*Cache[K, V]) SaveCache added in v0.2.6

func (c *Cache[K, V]) SaveCache(version uint64, writer io.Writer) error

SaveCache saves cache data to a writer.

func (*Cache[K, V]) Set

func (c *Cache[K, V]) Set(key K, value V, cost int64) bool

Set inserts or updates an entry in the cache. Returns false when cost > max size.

func (*Cache[K, V]) SetWithTTL

func (c *Cache[K, V]) SetWithTTL(key K, value V, cost int64, ttl time.Duration) bool

SetWithTTL inserts or updates an entry in the cache with the given TTL. Returns false when cost > max size.

type DataBlock added in v0.3.0

type DataBlock = internal.DataBlock[any]

type HybridBuilder added in v0.3.0

type HybridBuilder[K comparable, V any] struct {
	// contains filtered or unexported fields
}

func (*HybridBuilder[K, V]) AdmProbability added in v0.3.0

func (b *HybridBuilder[K, V]) AdmProbability(p float32) *HybridBuilder[K, V]

Set acceptance probability. The value has to be in the range of [0, 1].

func (*HybridBuilder[K, V]) Build added in v0.3.0

func (b *HybridBuilder[K, V]) Build() (*HybridCache[K, V], error)

Build builds a cache client from builder.

func (*HybridBuilder[K, V]) Loading added in v0.3.0

func (b *HybridBuilder[K, V]) Loading(
	loader func(ctx context.Context, key K) (Loaded[V], error),
) *HybridLoadingBuilder[K, V]

Add loading function and switch to HybridLoadingBuilder.

func (*HybridBuilder[K, V]) Workers added in v0.3.0

func (b *HybridBuilder[K, V]) Workers(w int) *HybridBuilder[K, V]

Set the number of secondary cache workers. Workers send evicted entries to the secondary cache.

type HybridCache added in v0.3.0

type HybridCache[K comparable, V any] struct {
	// contains filtered or unexported fields
}

func (*HybridCache[K, V]) Close added in v0.3.0

func (c *HybridCache[K, V]) Close()

Close closes all goroutines created by cache.

func (*HybridCache[K, V]) Delete added in v0.3.0

func (c *HybridCache[K, V]) Delete(key K) error

Delete deletes key from cache.

func (*HybridCache[K, V]) Get added in v0.3.0

func (c *HybridCache[K, V]) Get(key K) (V, bool, error)

Get gets value by key.

func (*HybridCache[K, V]) LoadCache added in v0.3.1

func (c *HybridCache[K, V]) LoadCache(version uint64, reader io.Reader) error

LoadCache loads cache data from a reader.

func (*HybridCache[K, V]) SaveCache added in v0.3.1

func (c *HybridCache[K, V]) SaveCache(version uint64, writer io.Writer) error

SaveCache saves cache data to a writer.

func (*HybridCache[K, V]) Set added in v0.3.0

func (c *HybridCache[K, V]) Set(key K, value V, cost int64) bool

Set inserts or updates an entry in the cache. Returns false when cost > max size.

func (*HybridCache[K, V]) SetWithTTL added in v0.3.0

func (c *HybridCache[K, V]) SetWithTTL(key K, value V, cost int64, ttl time.Duration) bool

SetWithTTL inserts or updates an entry in the cache with the given TTL. Returns false when cost > max size.

type HybridLoadingBuilder added in v0.3.0

type HybridLoadingBuilder[K comparable, V any] struct {
	// contains filtered or unexported fields
}

func (*HybridLoadingBuilder[K, V]) Build added in v0.3.0

func (b *HybridLoadingBuilder[K, V]) Build() (*HybridLoadingCache[K, V], error)

Build builds a cache client from builder.

type HybridLoadingCache added in v0.3.0

type HybridLoadingCache[K comparable, V any] struct {
	// contains filtered or unexported fields
}

func (*HybridLoadingCache[K, V]) Close added in v0.3.0

func (c *HybridLoadingCache[K, V]) Close()

Close closes all goroutines created by cache.

func (*HybridLoadingCache[K, V]) Delete added in v0.3.0

func (c *HybridLoadingCache[K, V]) Delete(key K) error

Delete deletes key from cache.

func (*HybridLoadingCache[K, V]) Get added in v0.3.0

func (c *HybridLoadingCache[K, V]) Get(ctx context.Context, key K) (V, error)

Get gets value by key.

func (*HybridLoadingCache[K, V]) LoadCache added in v0.3.0

func (c *HybridLoadingCache[K, V]) LoadCache(version uint64, reader io.Reader) error

LoadCache loads cache data from a reader.

func (*HybridLoadingCache[K, V]) SaveCache added in v0.3.0

func (c *HybridLoadingCache[K, V]) SaveCache(version uint64, writer io.Writer) error

SaveCache saves cache data to a writer.

func (*HybridLoadingCache[K, V]) Set added in v0.3.0

func (c *HybridLoadingCache[K, V]) Set(key K, value V, cost int64) bool

Set inserts or updates an entry in the cache. Returns false when cost > max size.

func (*HybridLoadingCache[K, V]) SetWithTTL added in v0.3.0

func (c *HybridLoadingCache[K, V]) SetWithTTL(key K, value V, cost int64, ttl time.Duration) bool

SetWithTTL inserts or updates an entry in the cache with the given TTL. Returns false when cost > max size.

type JsonSerializer added in v0.3.0

type JsonSerializer[T any] struct{}

func (*JsonSerializer[T]) Marshal added in v0.3.0

func (s *JsonSerializer[T]) Marshal(v T) ([]byte, error)

func (*JsonSerializer[T]) Unmarshal added in v0.3.0

func (s *JsonSerializer[T]) Unmarshal(raw []byte, v *T) error

type Loaded added in v0.2.0

type Loaded[V any] struct {
	Value V
	Cost  int64
	TTL   time.Duration
}

type LoadingBuilder added in v0.3.0

type LoadingBuilder[K comparable, V any] struct {
	// contains filtered or unexported fields
}

func (*LoadingBuilder[K, V]) Build added in v0.3.0

func (b *LoadingBuilder[K, V]) Build() (*LoadingCache[K, V], error)

Build builds a cache client from builder.

func (*LoadingBuilder[K, V]) Hybrid added in v0.3.0

func (b *LoadingBuilder[K, V]) Hybrid(cache internal.SecondaryCache[K, V]) *HybridLoadingBuilder[K, V]

Add secondary cache and switch to HybridLoadingBuilder.

type LoadingCache added in v0.2.0

type LoadingCache[K comparable, V any] struct {
	// contains filtered or unexported fields
}

func (*LoadingCache[K, V]) Close added in v0.2.0

func (c *LoadingCache[K, V]) Close()

Close closes all goroutines created by cache.

func (*LoadingCache[K, V]) Delete added in v0.2.0

func (c *LoadingCache[K, V]) Delete(key K)

Delete deletes key from cache.

func (*LoadingCache[K, V]) Get added in v0.2.0

func (c *LoadingCache[K, V]) Get(ctx context.Context, key K) (V, error)

Get gets value by key.

func (*LoadingCache[K, V]) Len added in v0.2.0

func (c *LoadingCache[K, V]) Len() int

Len returns number of entries in cache.

func (*LoadingCache[K, V]) LoadCache added in v0.2.6

func (c *LoadingCache[K, V]) LoadCache(version uint64, reader io.Reader) error

LoadCache loads cache data from a reader.

func (*LoadingCache[K, V]) Range added in v0.2.4

func (c *LoadingCache[K, V]) Range(f func(key K, value V) bool)

Range calls f sequentially for each key and value present in the cache. If f returns false, range stops the iteration.

func (*LoadingCache[K, V]) SaveCache added in v0.2.6

func (c *LoadingCache[K, V]) SaveCache(version uint64, writer io.Writer) error

SaveCache saves cache data to a writer.

func (*LoadingCache[K, V]) Set added in v0.2.0

func (c *LoadingCache[K, V]) Set(key K, value V, cost int64) bool

Set inserts or updates an entry in the cache. Returns false when cost > max size.

func (*LoadingCache[K, V]) SetWithTTL added in v0.2.0

func (c *LoadingCache[K, V]) SetWithTTL(key K, value V, cost int64, ttl time.Duration) bool

SetWithTTL inserts or updates an entry in the cache with the given TTL. Returns false when cost > max size.

type NvmBuilder added in v0.3.0

type NvmBuilder[K comparable, V any] struct {
	// contains filtered or unexported fields
}

func NewNvmBuilder added in v0.3.0

func NewNvmBuilder[K comparable, V any](file string, cacheSize int) *NvmBuilder[K, V]

func (*NvmBuilder[K, V]) BigHashMaxItemSize added in v0.3.0

func (b *NvmBuilder[K, V]) BigHashMaxItemSize(size int) *NvmBuilder[K, V]

Maximum size of a small item to be stored in BigHash. Must be less than the bucket size.

func (*NvmBuilder[K, V]) BigHashPct added in v0.3.0

func (b *NvmBuilder[K, V]) BigHashPct(pct int) *NvmBuilder[K, V]

Percentage of space to reserve for BigHash. Set the percentage > 0 to enable BigHash. Set percentage to 100 to disable block cache.

func (*NvmBuilder[K, V]) BlockSize added in v0.3.0

func (b *NvmBuilder[K, V]) BlockSize(size int) *NvmBuilder[K, V]

Device block size in bytes (minimum IO granularity).

func (*NvmBuilder[K, V]) BucketBfSize added in v0.3.1

func (b *NvmBuilder[K, V]) BucketBfSize(size int) *NvmBuilder[K, V]

func (*NvmBuilder[K, V]) BucketSize added in v0.3.0

func (b *NvmBuilder[K, V]) BucketSize(size int) *NvmBuilder[K, V]

Big hash bucket size in bytes.

func (*NvmBuilder[K, V]) Build added in v0.3.0

func (b *NvmBuilder[K, V]) Build() (*nvm.NvmStore[K, V], error)

Build cache.

func (*NvmBuilder[K, V]) CleanRegionSize added in v0.3.0

func (b *NvmBuilder[K, V]) CleanRegionSize(size int) *NvmBuilder[K, V]

Block cache clean region size.

func (*NvmBuilder[K, V]) ErrorHandler added in v0.3.0

func (b *NvmBuilder[K, V]) ErrorHandler(fn func(err error)) *NvmBuilder[K, V]

Nvm cache error handler.

func (*NvmBuilder[K, V]) KeySerializer added in v0.3.0

func (b *NvmBuilder[K, V]) KeySerializer(s Serializer[K]) *NvmBuilder[K, V]

Nvm cache key serializer.

func (*NvmBuilder[K, V]) RegionSize added in v0.3.0

func (b *NvmBuilder[K, V]) RegionSize(size int) *NvmBuilder[K, V]

Block cache Region size in bytes.

func (*NvmBuilder[K, V]) ValueSerializer added in v0.3.0

func (b *NvmBuilder[K, V]) ValueSerializer(s Serializer[V]) *NvmBuilder[K, V]

Nvm cache value serializer.

type RemoveReason added in v0.1.2

type RemoveReason = internal.RemoveReason

type Serializer added in v0.3.0

type Serializer[T any] interface {
	internal.Serializer[T]
}

Directories

Path Synopsis
benchmarks module
Package mpsc provides an efficient implementation of a multi-producer, single-consumer lock-free queue.
bf
nvm
nvm/directio
This is a library for the Go language to enable use of Direct IO under all supported OSes of Go.
