ccache

package module
v1.1.0
Published: Mar 17, 2021 License: MIT Imports: 6 Imported by: 3

README

CCache

This fork of the original CCache, a concurrent LRU cache written in Go, exists only to support the build requirements of the LaunchDarkly Go SDK. Changes in this fork should not be submitted upstream.

Specifically, the issue is that the LaunchDarkly Go SDK (in major versions up to and including v5) explicitly supports use by applications that use non-module-compatible package managers such as dep instead of Go modules. This means that there cannot be any dependencies that are modules with a major version greater than 1, because they would have a /vN major version suffix in their import paths, which dep and similar tools do not understand. This fork simply removes the /v2 path suffix from ccache and resets the major version to 1.
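For illustration, a minimal sketch of what the path change means for importers (the fork's module path is assumed here to be github.com/launchdarkly/ccache; go.mod is authoritative):

package example

import (
	// Under Go modules, the upstream v2 module would have to be imported as
	// "github.com/karlseguin/ccache/v2"; the /v2 suffix is what dep and
	// similar tools cannot resolve. This fork drops the suffix:
	"github.com/launchdarkly/ccache" // assumed module path; see go.mod
)

// Reference something from the package so the import compiles.
var _ = ccache.Configure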

The versioning of this package starts at v1.1.0 to avoid confusion with the versioning of the original repository, which had 1.0.0 and 1.0.1 releases.

For all other information about this package, see the original repository.

Documentation

Overview

An LRU cache aimed at high concurrency

Index

Constants

This section is empty.

Variables

var NilTracked = new(nilItem)

Functions

This section is empty.

Types

type Cache

type Cache struct {
	*Configuration
	// contains filtered or unexported fields
}

func New

func New(config *Configuration) *Cache

Creates a new cache with the specified configuration. See ccache.Configure() for creating a configuration.
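
A minimal sketch of creating a cache and performing a basic Set/Get (assuming this fork is imported as github.com/launchdarkly/ccache):

package main

import (
	"fmt"
	"time"

	"github.com/launchdarkly/ccache" // assumed import path for this fork
)

func main() {
	// At most 1000 items, pruning 100 at a time when the cache is full.
	cache := ccache.New(ccache.Configure().MaxSize(1000).ItemsToPrune(100))
	defer cache.Stop() // stop the background worker when done with the cache

	cache.Set("user:4", "Leto Atreides", 10*time.Minute)

	if item := cache.Get("user:4"); item != nil {
		fmt.Println(item.Value()) // Leto Atreides
	}
}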

func (*Cache) Clear

func (c *Cache) Clear()

Clears the cache

func (*Cache) Delete

func (c *Cache) Delete(key string) bool

Remove the item from the cache, return true if the item was present, false otherwise.

func (*Cache) DeleteFunc added in v1.1.0

func (c *Cache) DeleteFunc(matches func(key string, item *Item) bool) int

Deletes all items for which the matches func evaluates to true.

func (*Cache) DeletePrefix added in v1.1.0

func (c *Cache) DeletePrefix(prefix string) int

func (*Cache) Fetch

func (c *Cache) Fetch(key string, duration time.Duration, fetch func() (interface{}, error)) (*Item, error)

Attempts to get the value from the cache and calls fetch on a miss (missing or stale item). If fetch returns an error, no value is cached and the error is returned to the caller.
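
A sketch of the miss-handling pattern Fetch enables: the callback runs only on a miss, and its error is passed straight back without caching anything (loadUser is a hypothetical loader standing in for a database call; import path assumed as above):

package main

import (
	"fmt"
	"time"

	"github.com/launchdarkly/ccache" // assumed import path for this fork
)

// loadUser is a hypothetical loader standing in for real work.
func loadUser(id string) (interface{}, error) {
	return "user-" + id, nil
}

func main() {
	cache := ccache.New(ccache.Configure())
	defer cache.Stop()

	// The callback runs only when "user:4" is missing or stale; if it
	// returns an error, nothing is cached and the error is returned here.
	item, err := cache.Fetch("user:4", 10*time.Minute, func() (interface{}, error) {
		return loadUser("4")
	})
	if err != nil {
		fmt.Println("load failed:", err)
		return
	}
	fmt.Println(item.Value())
}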

func (*Cache) ForEachFunc added in v1.1.0

func (c *Cache) ForEachFunc(matches func(key string, item *Item) bool)

func (*Cache) Get

func (c *Cache) Get(key string) *Item

Get an item from the cache. Returns nil if the item wasn't found. This can return an expired item. Use item.Expired() to see if the item is expired and item.TTL() to see how long until the item expires (which will be negative for an already expired item).
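
A sketch of handling the three possible outcomes of Get, including an already-expired item (same assumed import path as above):

package main

import (
	"fmt"
	"time"

	"github.com/launchdarkly/ccache" // assumed import path for this fork
)

func main() {
	cache := ccache.New(ccache.Configure())
	defer cache.Stop()

	cache.Set("user:4", "Leto Atreides", time.Nanosecond)
	time.Sleep(time.Millisecond) // let the entry expire

	item := cache.Get("user:4")
	switch {
	case item == nil:
		fmt.Println("not in cache")
	case item.Expired():
		// TTL() is negative for an already-expired item.
		fmt.Printf("expired %s ago\n", -item.TTL())
	default:
		fmt.Println(item.Value(), "expires in", item.TTL())
	}
}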

func (*Cache) GetDropped added in v1.1.0

func (c *Cache) GetDropped() int

Gets the number of items removed from the cache due to memory pressure since the last time GetDropped was called

func (*Cache) ItemCount added in v1.1.0

func (c *Cache) ItemCount() int

func (*Cache) Replace

func (c *Cache) Replace(key string, value interface{}) bool

Replace the value if it exists, does not set if it doesn't. Returns true if the item existed and was replaced, false otherwise. Replace does not reset the item's TTL.

func (*Cache) Set

func (c *Cache) Set(key string, value interface{}, duration time.Duration)

Set the value in the cache for the specified duration

func (*Cache) SetMaxSize added in v1.1.0

func (c *Cache) SetMaxSize(size int64)

Sets a new max size. That can result in a GC being run if the new maximum size is smaller than the cached size

func (*Cache) Stop added in v1.1.0

func (c *Cache) Stop()

Stops the background worker. Operations performed on the cache after Stop is called are likely to panic

func (*Cache) TrackingGet

func (c *Cache) TrackingGet(key string) TrackedItem

Used when the cache was created with the Track() configuration option. Avoid otherwise

func (*Cache) TrackingSet added in v1.1.0

func (c *Cache) TrackingSet(key string, value interface{}, duration time.Duration) TrackedItem

Used when the cache was created with the Track() configuration option. Sets the item, and returns a tracked reference to it.

type Configuration

type Configuration struct {
	// contains filtered or unexported fields
}

func Configure

func Configure() *Configuration

Creates a configuration object with sensible defaults. Use this as the start of the fluent configuration, e.g. ccache.New(ccache.Configure().MaxSize(10000))
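
A sketch of a fuller fluent chain combining several of the options documented below (the values are illustrative, not recommendations; import path assumed as above):

package main

import (
	"github.com/launchdarkly/ccache" // assumed import path for this fork
)

func main() {
	config := ccache.Configure().
		MaxSize(5000).       // evict once more than 5000 units are cached
		ItemsToPrune(100).   // evict 100 items at a time under memory pressure
		Buckets(32).         // must be a power of 2
		GetsPerPromote(3).   // only promote an item after every 3rd Get
		PromoteBuffer(1024). // queue size for pending promotions
		DeleteBuffer(1024)   // queue size for pending deletes

	cache := ccache.New(config)
	defer cache.Stop()
}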

func (*Configuration) Buckets

func (c *Configuration) Buckets(count uint32) *Configuration

Keys are hashed modulo the bucket count to provide greater concurrency (every set requires a write lock on the bucket). Must be a power of 2 (1, 2, 4, 8, 16, ...) [16]

func (*Configuration) DeleteBuffer

func (c *Configuration) DeleteBuffer(size uint32) *Configuration

The size of the queue for items which should be deleted. If the queue fills up, calls to Delete() will block

func (*Configuration) GetsPerPromote

func (c *Configuration) GetsPerPromote(count int32) *Configuration

Given a large cache with a high read/write ratio, it's usually unnecessary to promote an item on every Get. GetsPerPromote specifies the number of Gets a key must have before being promoted [3]

func (*Configuration) ItemsToPrune

func (c *Configuration) ItemsToPrune(count uint32) *Configuration

The number of items to prune when memory is low [500]

func (*Configuration) MaxSize

func (c *Configuration) MaxSize(max int64) *Configuration

The max size for the cache [5000]

func (*Configuration) OnDelete added in v1.1.0

func (c *Configuration) OnDelete(callback func(item *Item)) *Configuration

OnDelete allows setting a callback function to react to item deletion. This is typically used to clean up resources, such as calling Close() on cached objects that require some kind of tear-down.
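
A sketch of using OnDelete to release resources held by deleted or evicted values. The callback receives the *Item, so the value must be type-asserted back; connection is a hypothetical resource type, and the import path is assumed as above:

package main

import (
	"fmt"
	"time"

	"github.com/launchdarkly/ccache" // assumed import path for this fork
)

// connection is a hypothetical resource that needs explicit tear-down.
type connection struct{ addr string }

func (c *connection) Close() { fmt.Println("closing", c.addr) }

func main() {
	cache := ccache.New(ccache.Configure().OnDelete(func(item *ccache.Item) {
		// Invoked when an item is deleted or evicted; release its resource.
		if conn, ok := item.Value().(*connection); ok {
			conn.Close()
		}
	}))
	defer cache.Stop()

	cache.Set("conn:db", &connection{addr: "db:5432"}, time.Minute)
	cache.Delete("conn:db") // the OnDelete callback will run for this item
}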

func (*Configuration) PromoteBuffer

func (c *Configuration) PromoteBuffer(size uint32) *Configuration

The size of the queue for items which should be promoted. If the queue fills up, promotions are skipped [1024]

func (*Configuration) Track

func (c *Configuration) Track() *Configuration

When tracking is turned on and items are retrieved with the cache's TrackingGet, the cache won't evict items that you haven't called Release() on. It's a simple reference counter.
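
A sketch of the tracking pattern: enable Track() on the configuration, retrieve items with TrackingGet, and call Release() when done so the item becomes evictable again. In the upstream ccache, TrackingGet returns the NilTracked sentinel on a miss, which is what the comparison below assumes (import path assumed as above):

package main

import (
	"fmt"
	"time"

	"github.com/launchdarkly/ccache" // assumed import path for this fork
)

func main() {
	cache := ccache.New(ccache.Configure().Track())
	defer cache.Stop()

	cache.TrackingSet("session:9", "data", time.Minute)

	item := cache.TrackingGet("session:9")
	if item == ccache.NilTracked {
		// Assumed upstream behavior: NilTracked signals a cache miss.
		fmt.Println("miss")
		return
	}
	defer item.Release() // allow the item to be evicted again

	fmt.Println(item.Value())
}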

type Item

type Item struct {
	// contains filtered or unexported fields
}

func (*Item) Expired

func (i *Item) Expired() bool

func (*Item) Expires

func (i *Item) Expires() time.Time

func (*Item) Extend

func (i *Item) Extend(duration time.Duration)

func (*Item) Release

func (i *Item) Release()

func (*Item) TTL

func (i *Item) TTL() time.Duration

func (*Item) Value

func (i *Item) Value() interface{}

type LayeredCache

type LayeredCache struct {
	*Configuration
	// contains filtered or unexported fields
}

func Layered

func Layered(config *Configuration) *LayeredCache

See ccache.Configure() for creating a configuration
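
A sketch of the layered cache, using a primary key to group entries (for example, multiple cached representations of one resource) so they can be dropped together with DeleteAll (import path assumed as above):

package main

import (
	"fmt"
	"time"

	"github.com/launchdarkly/ccache" // assumed import path for this fork
)

func main() {
	cache := ccache.Layered(ccache.Configure().MaxSize(1000))
	defer cache.Stop()

	// Two representations of the same resource share a primary key.
	cache.Set("/pages/about", "html", "<html>...</html>", time.Minute)
	cache.Set("/pages/about", "json", `{"title":"about"}`, time.Minute)

	if item := cache.Get("/pages/about", "json"); item != nil {
		fmt.Println(item.Value())
	}

	// Invalidate every representation of the resource at once.
	cache.DeleteAll("/pages/about")
}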

func (*LayeredCache) Clear

func (c *LayeredCache) Clear()

Clears the cache

func (*LayeredCache) Delete

func (c *LayeredCache) Delete(primary, secondary string) bool

Remove the item from the cache, return true if the item was present, false otherwise.

func (*LayeredCache) DeleteAll

func (c *LayeredCache) DeleteAll(primary string) bool

Deletes all items that share the same primary key

func (*LayeredCache) DeleteFunc added in v1.1.0

func (c *LayeredCache) DeleteFunc(primary string, matches func(key string, item *Item) bool) int

Deletes all items that share the same primary key and where the matches func evaluates to true.

func (*LayeredCache) DeletePrefix added in v1.1.0

func (c *LayeredCache) DeletePrefix(primary, prefix string) int

Deletes all items that share the same primary key and prefix.

func (*LayeredCache) Fetch

func (c *LayeredCache) Fetch(primary, secondary string, duration time.Duration, fetch func() (interface{}, error)) (*Item, error)

Attempts to get the value from the cache and calls fetch on a miss. If fetch returns an error, no value is cached and the error is returned to the caller.

func (*LayeredCache) ForEachFunc added in v1.1.0

func (c *LayeredCache) ForEachFunc(primary string, matches func(key string, item *Item) bool)

func (*LayeredCache) Get

func (c *LayeredCache) Get(primary, secondary string) *Item

Get an item from the cache. Returns nil if the item wasn't found. This can return an expired item. Use item.Expired() to see if the item is expired and item.TTL() to see how long until the item expires (which will be negative for an already expired item).

func (*LayeredCache) GetDropped added in v1.1.0

func (c *LayeredCache) GetDropped() int

Gets the number of items removed from the cache due to memory pressure since the last time GetDropped was called

func (*LayeredCache) GetOrCreateSecondaryCache added in v1.1.0

func (c *LayeredCache) GetOrCreateSecondaryCache(primary string) *SecondaryCache

Get the secondary cache for a given primary key. This operation will never return nil. In the case where the primary key does not exist, a new, underlying, empty bucket will be created and returned.
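
A sketch of working on one primary key's bucket directly; since GetOrCreateSecondaryCache never returns nil, no nil check is needed. renderPage is a hypothetical helper, and the import path is assumed as above:

package main

import (
	"time"

	"github.com/launchdarkly/ccache" // assumed import path for this fork
)

// renderPage is a hypothetical renderer standing in for real work.
func renderPage(page string) string { return "<html>" + page + "</html>" }

// warmPage populates the "html" entry for a page if it is not already cached.
func warmPage(cache *ccache.LayeredCache, page string) {
	sc := cache.GetOrCreateSecondaryCache(page) // never nil; created empty if absent
	if sc.Get("html") == nil {
		sc.Set("html", renderPage(page), time.Minute)
	}
}

func main() {
	cache := ccache.Layered(ccache.Configure())
	defer cache.Stop()
	warmPage(cache, "/pages/about")
}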

func (*LayeredCache) ItemCount added in v1.1.0

func (c *LayeredCache) ItemCount() int

func (*LayeredCache) Replace

func (c *LayeredCache) Replace(primary, secondary string, value interface{}) bool

Replace the value if it exists, does not set if it doesn't. Returns true if the item existed and was replaced, false otherwise. Replace does not reset the item's TTL, nor does it alter its position in the LRU.

func (*LayeredCache) Set

func (c *LayeredCache) Set(primary, secondary string, value interface{}, duration time.Duration)

Set the value in the cache for the specified duration

func (*LayeredCache) SetMaxSize added in v1.1.0

func (c *LayeredCache) SetMaxSize(size int64)

Sets a new max size. That can result in a GC being run if the new maximum size is smaller than the cached size

func (*LayeredCache) Stop added in v1.1.0

func (c *LayeredCache) Stop()

func (*LayeredCache) TrackingGet

func (c *LayeredCache) TrackingGet(primary, secondary string) TrackedItem

Used when the cache was created with the Track() configuration option. Avoid otherwise

func (*LayeredCache) TrackingSet added in v1.1.0

func (c *LayeredCache) TrackingSet(primary, secondary string, value interface{}, duration time.Duration) TrackedItem

Used when the cache was created with the Track() configuration option. Sets the item for the specified duration and returns a tracked reference to it.

type SecondaryCache added in v1.1.0

type SecondaryCache struct {
	// contains filtered or unexported fields
}

func (*SecondaryCache) Delete added in v1.1.0

func (s *SecondaryCache) Delete(secondary string) bool

Delete a secondary key. The semantics are the same as for LayeredCache.Delete

func (*SecondaryCache) Fetch added in v1.1.0

func (s *SecondaryCache) Fetch(secondary string, duration time.Duration, fetch func() (interface{}, error)) (*Item, error)

Fetch or set a secondary key. The semantics are the same as for LayeredCache.Fetch

func (*SecondaryCache) Get added in v1.1.0

func (s *SecondaryCache) Get(secondary string) *Item

Get the secondary key. The semantics are the same as for LayeredCache.Get

func (*SecondaryCache) Replace added in v1.1.0

func (s *SecondaryCache) Replace(secondary string, value interface{}) bool

Replace a secondary key. The semantics are the same as for LayeredCache.Replace

func (*SecondaryCache) Set added in v1.1.0

func (s *SecondaryCache) Set(secondary string, value interface{}, duration time.Duration) *Item

Set the secondary key to a value. The semantics are the same as for LayeredCache.Set

func (*SecondaryCache) TrackingGet added in v1.1.0

func (c *SecondaryCache) TrackingGet(secondary string) TrackedItem

Track a secondary key. The semantics are the same as for LayeredCache.TrackingGet

type Sized

type Sized interface {
	Size() int64
}
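
In the upstream ccache, a cached value that implements Sized is charged its reported size toward MaxSize instead of counting as a single unit. A sketch under that assumption, weighting entries by byte length (import path assumed as above):

package main

import (
	"time"

	"github.com/launchdarkly/ccache" // assumed import path for this fork
)

// blob implements Sized, so it is charged by byte length rather than
// counting as one unit toward MaxSize (assumed upstream behavior).
type blob []byte

func (b blob) Size() int64 { return int64(len(b)) }

func main() {
	// Under the assumption above, MaxSize(1 << 20) roughly bounds total
	// cached bytes rather than the number of items.
	cache := ccache.New(ccache.Configure().MaxSize(1 << 20))
	defer cache.Stop()

	cache.Set("avatar:4", blob(make([]byte, 64*1024)), time.Hour)
}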

type TrackedItem

type TrackedItem interface {
	Value() interface{}
	Release()
	Expired() bool
	TTL() time.Duration
	Expires() time.Time
	Extend(duration time.Duration)
}
