ccache

package module
v0.0.5

Published: Feb 3, 2021 License: MIT Imports: 6 Imported by: 0

README

Notice

This package is a modified fork of github.com/karlseguin/ccache v2.0.7.

CCache

CCache is an LRU Cache, written in Go, focused on supporting high concurrency.

Lock contention on the list is reduced by:

  • Introducing a window which limits the frequency that an item can get promoted
  • Using a buffered channel to queue promotions for a single worker
  • Garbage collecting within the same thread as the worker

Unless otherwise stated, all methods are thread-safe.

Setup

First, download the project:

    go get github.com/karlseguin/ccache/v2

Configuration

Next, import and create a Cache instance:

import (
  "github.com/karlseguin/ccache/v2"
)

var cache = ccache.New(ccache.Configure())

Configure exposes a chainable API:

var cache = ccache.New(ccache.Configure().MaxSize(1000).ItemsToPrune(100))

The most likely configuration options to tweak are:

  • MaxSize(int) - the maximum size of the cache (default: 5000)
  • GetsPerPromote(int) - the number of times an item is fetched before we promote it. For large caches with long TTLs, it normally isn't necessary to promote an item on every fetch (default: 3)
  • ItemsToPrune(int) - the number of items to prune when we hit MaxSize. Freeing up more than 1 slot at a time improves performance (default: 500)

Configurations that change the internals of the cache, which aren't as likely to need tweaking:

  • Buckets - ccache shards its internal map to provide a greater amount of concurrency. Must be a power of 2 (default: 16).
  • PromoteBuffer(int) - the size of the buffer to use to queue promotions (default: 1024)
  • DeleteBuffer(int) - the size of the buffer to use to queue deletions (default: 1024)

Usage

Once the cache is set up, you can Get, Set and Delete items from it. A Get returns an *Item:

Get
item := cache.Get("user:4")
if item == nil {
  //handle
} else {
  user := item.Value().(*User)
}

The returned *Item exposes a number of methods:

  • Value() interface{} - the value cached
  • Expired() bool - whether the item is expired or not
  • TTL() time.Duration - the duration before the item expires (will be a negative value for expired items)
  • Expires() time.Time - the time the item will expire

By returning expired items, CCache lets you decide if you want to serve stale content or not. For example, you might decide to serve up slightly stale content (< 30 seconds old) while re-fetching newer data in the background. You might also decide to serve up infinitely stale content if you're unable to get new data from your source.

Set

Set expects the key, value and ttl:

cache.Set("user:4", user, time.Minute * 10)
Fetch

There's also a Fetch which mixes a Get and a Set:

item, err := cache.Fetch("user:4", time.Minute * 10, func() (interface{}, error) {
  //code to fetch the data in case of a miss
  //should return the data to cache and the error, if any
})
Delete

Delete expects the key to delete. It's ok to call Delete on a non-existent key:

cache.Delete("user:4")
DeletePrefix

DeletePrefix deletes all keys matching the provided prefix. Returns the number of keys removed.

DeleteFunc

DeleteFunc deletes all items for which the provided matches func evaluates to true. Returns the number of keys removed.

Clear

Clear clears the cache. If the cache's gc is running, Clear waits for it to finish.

Extend

The life of an item can be changed via the Extend method. This will change the expiry of the item by the specified duration relative to the current time.

Replace

The value of an item can be updated to a new value without renewing the item's TTL or its position in the LRU:

cache.Replace("user:4", user)

Replace returns true if the item existed (and thus was replaced). In the case where the key was not in the cache, the value is not inserted and false is returned.

GetDropped

You can get the number of keys evicted due to memory pressure by calling GetDropped:

dropped := cache.GetDropped()

The counter is reset on every call. If the cache's gc is running, GetDropped waits for it to finish; it's meant to be called asynchronously for statistics/monitoring purposes.

Stop

The cache's background worker can be stopped by calling Stop. Once Stop is called the cache should not be used (calls are likely to panic). Stop must be called in order to allow the garbage collector to reap the cache.

Tracking

CCache supports a special tracking mode which is meant to be used in conjunction with other pieces of your code that maintain a long-lived reference to data.

When you configure your cache with Track():

cache = ccache.New(ccache.Configure().Track())

The items retrieved via TrackingGet will not be eligible for purge until Release is called on them:

item := cache.TrackingGet("user:4")
user := item.Value()   //will be nil if "user:4" didn't exist in the cache
item.Release()  //can be called even if item.Value() returned nil

In practice, Release wouldn't be called until later, at some other place in your code. TrackingSet can be used to set a value to be tracked.

There are a couple of reasons to use the tracking mode if other parts of your code also hold references to objects. First, if you're already going to hold a reference to these objects, there's really no reason not to have them in the cache - the memory is used up anyway.

More importantly, it helps ensure that your code returns consistent data. Without tracking, "user:4" might be purged, and a subsequent Fetch would reload the data. This can result in different versions of "user:4" being returned by different parts of your system.

LayeredCache

CCache's LayeredCache stores and retrieves values by both a primary and secondary key. Deletion can happen against either the primary and secondary key together, or against the primary key alone (removing all values that share the same primary key).

LayeredCache is useful for HTTP caching, when you want to purge all variations of a request.

LayeredCache takes the same configuration object as the main cache and supports the same optional tracking capabilities, but exposes a slightly different API:

cache := ccache.Layered(ccache.Configure())

cache.Set("/users/goku", "type:json", "{value_to_cache}", time.Minute * 5)
cache.Set("/users/goku", "type:xml", "<value_to_cache>", time.Minute * 5)

json := cache.Get("/users/goku", "type:json")
xml := cache.Get("/users/goku", "type:xml")

cache.Delete("/users/goku", "type:json")
cache.Delete("/users/goku", "type:xml")
// OR
cache.DeleteAll("/users/goku")

SecondaryCache

In some cases, when using a LayeredCache, it may be desirable to always be acting on the secondary portion of the cache entry. This could be the case where the primary key is used as a key elsewhere in your code. The SecondaryCache is retrieved with:

cache := ccache.Layered(ccache.Configure())
sCache := cache.GetOrCreateSecondaryCache("/users/goku")
sCache.Set("type:json", "{value_to_cache}", time.Minute * 5)

The semantics for interacting with the SecondaryCache are exactly the same as for a regular Cache. However, one difference is that Get will not return nil, but will return an empty 'cache' for a non-existent primary key.

Size

By default, items added to a cache have a size of 1. This means that if you configure MaxSize(10000), you'll be able to store 10000 items in the cache.

However, if the values you set into the cache implement a Size() int64 method, that size will be used. Note that ccache has an overhead of ~350 bytes per entry, which isn't taken into account. In other words, given a full cache with MaxSize(4096000) and items that return a Size() int64 of 2048, we can expect to find 2000 items (4096000/2048) taking a total space of 4796000 bytes.

Want Something Simpler?

For a simpler cache, check out rcache

Documentation

Overview

An LRU cache aimed at high concurrency.

Index

Constants

This section is empty.

Variables

var NilTracked = new(nilItem)

Functions

func BitHighestInt64

func BitHighestInt64(n int64) (pos int)

func NormalTo2N

func NormalTo2N(n int64) int64

Types

type Cache

type Cache struct {
	*Configuration
	// contains filtered or unexported fields
}

func New

func New(config *Configuration) *Cache

Create a new cache with the specified configuration. See ccache.Configure() for creating a configuration

func (*Cache) Clear

func (c *Cache) Clear()

Clears the cache

func (*Cache) Delete

func (c *Cache) Delete(key string) bool

Remove the item from the cache, return true if the item was present, false otherwise.

func (*Cache) DeleteFunc

func (c *Cache) DeleteFunc(matches func(key string, item *Item) bool) int

Deletes all items that the matches func evaluates to true.

func (*Cache) DeletePrefix

func (c *Cache) DeletePrefix(prefix string) int

func (*Cache) Fetch

func (c *Cache) Fetch(key string, duration time.Duration, fetch func() (interface{}, error)) (*Item, error)

Attempts to get the value from the cache and calls fetch on a miss (missing or stale item). If fetch returns an error, no value is cached and the error is returned back to the caller.

func (*Cache) FetchNow

func (c *Cache) FetchNow(key string, now time.Time, duration time.Duration, fetch func() (interface{}, error)) (*Item, error)

func (*Cache) Get

func (c *Cache) Get(key string) *Item

Get an item from the cache. Returns nil if the item wasn't found. This can return an expired item. Use item.Expired() to see if the item is expired and item.TTL() to see how long until the item expires (which will be negative for an already expired item).

func (*Cache) GetDropped

func (c *Cache) GetDropped() int

Gets the number of items removed from the cache due to memory pressure since the last time GetDropped was called

func (*Cache) GetIncrVal

func (c *Cache) GetIncrVal(key string) (r int64)

func (*Cache) GetItem

func (c *Cache) GetItem(key string) *Item

func (*Cache) GetItemWithNow

func (c *Cache) GetItemWithNow(key string, now time.Time) (*Item, bool)

Returns the item and whether it is expired

func (*Cache) GetWithNow

func (c *Cache) GetWithNow(key string, now time.Time) *Item

Returns the item, using the provided now to evaluate expiry

func (*Cache) GetWithNowNoPromote

func (c *Cache) GetWithNowNoPromote(key string, now time.Time) (*Item, bool)

Returns the item and whether it is expired

func (*Cache) Incr

func (c *Cache) Incr(key string, n int64, duration time.Duration) int64

Increments the value by n without renewing the TTL

func (*Cache) IncrNow

func (c *Cache) IncrNow(key string, n int64, now time.Time, duration time.Duration) int64

func (*Cache) IncrNowPromote

func (c *Cache) IncrNowPromote(key string, n int64, now time.Time, duration time.Duration) int64

func (*Cache) IncrPromote

func (c *Cache) IncrPromote(key string, n int64, duration time.Duration) int64

Increments the value by n and then renews the TTL

func (*Cache) ItemCount

func (c *Cache) ItemCount() int

func (*Cache) Promote

func (c *Cache) Promote(item *Item)

func (*Cache) Replace

func (c *Cache) Replace(key string, value interface{}) bool

Replace the value if it exists, does not set if it doesn't. Returns true if the item existed and was replaced, false otherwise. Replace does not reset the item's TTL

func (*Cache) Set

func (c *Cache) Set(key string, value interface{}, duration time.Duration)

Set the value in the cache for the specified duration

func (*Cache) SetMaxSize

func (c *Cache) SetMaxSize(size int64)

Sets a new max size. That can result in a GC being run if the new maximum size is smaller than the cached size

func (*Cache) SetWithDeadline

func (c *Cache) SetWithDeadline(key string, value interface{}, deadline time.Time)

func (*Cache) Stop

func (c *Cache) Stop()

Stops the background worker. Operations performed on the cache after Stop is called are likely to panic

func (*Cache) TrackingGet

func (c *Cache) TrackingGet(key string) TrackedItem

Used when the cache was created with the Track() configuration option. Avoid otherwise

func (*Cache) TrackingSet

func (c *Cache) TrackingSet(key string, value interface{}, duration time.Duration) TrackedItem

Used when the cache was created with the Track() configuration option. Sets the item, and returns a tracked reference to it.

type CacheInt64

type CacheInt64 struct {
	*ConfigurationInt64
	// contains filtered or unexported fields
}

The cache has a generic 'control' channel that is used to send messages to the worker. These are the messages that can be sent to it

func NewCacheInt64

func NewCacheInt64(config *ConfigurationInt64) *CacheInt64

Create a new cache with the specified configuration. See ccache.ConfigureInt64() for creating a configuration

func (*CacheInt64) Clear

func (c *CacheInt64) Clear()

Clears the cache

func (*CacheInt64) Delete

func (c *CacheInt64) Delete(key int64) bool

Remove the item from the cache, return true if the item was present, false otherwise.

func (*CacheInt64) DeleteFunc

func (c *CacheInt64) DeleteFunc(matches func(key int64, item *ItemInt64) bool) int

Deletes all items that the matches func evaluates to true.

func (*CacheInt64) DeletePrefix

func (c *CacheInt64) DeletePrefix(prefix int64) int

func (*CacheInt64) Fetch

func (c *CacheInt64) Fetch(key int64, duration time.Duration, fetch func() (interface{}, error)) (*ItemInt64, error)

Attempts to get the value from the cache and calls fetch on a miss (missing or stale item). If fetch returns an error, no value is cached and the error is returned back to the caller.

func (*CacheInt64) FetchNow

func (c *CacheInt64) FetchNow(key int64, now time.Time, duration time.Duration, fetch func() (interface{}, error)) (*ItemInt64, error)

func (*CacheInt64) Get

func (c *CacheInt64) Get(key int64) *ItemInt64

Get an item from the cache. Returns nil if the item wasn't found. This can return an expired item. Use item.Expired() to see if the item is expired and item.TTL() to see how long until the item expires (which will be negative for an already expired item).

func (*CacheInt64) GetDropped

func (c *CacheInt64) GetDropped() int

Gets the number of items removed from the cache due to memory pressure since the last time GetDropped was called

func (*CacheInt64) GetIncrVal

func (c *CacheInt64) GetIncrVal(key int64) (r int64)

func (*CacheInt64) GetItem

func (c *CacheInt64) GetItem(key int64) *ItemInt64

func (*CacheInt64) GetItemWithNow

func (c *CacheInt64) GetItemWithNow(key int64, now time.Time) (*ItemInt64, bool)

func (*CacheInt64) GetWithNow

func (c *CacheInt64) GetWithNow(key int64, now time.Time) *ItemInt64

func (*CacheInt64) GetWithNowNoPromote

func (c *CacheInt64) GetWithNowNoPromote(key int64, now time.Time) (*ItemInt64, bool)

Returns the item and whether it is expired

func (*CacheInt64) Incr

func (c *CacheInt64) Incr(key int64, n int64, duration time.Duration) int64

Increments the value by n without renewing the TTL

func (*CacheInt64) IncrNow

func (c *CacheInt64) IncrNow(key int64, n int64, now time.Time, duration time.Duration) int64

func (*CacheInt64) IncrNowPromote

func (c *CacheInt64) IncrNowPromote(key int64, n int64, now time.Time, duration time.Duration) int64

func (*CacheInt64) IncrPromote

func (c *CacheInt64) IncrPromote(key int64, n int64, duration time.Duration) int64

Increments the value by n and then renews the TTL

func (*CacheInt64) ItemCount

func (c *CacheInt64) ItemCount() int

func (*CacheInt64) Promote

func (c *CacheInt64) Promote(item *ItemInt64)

func (*CacheInt64) Replace

func (c *CacheInt64) Replace(key int64, value interface{}) bool

Replace the value if it exists, does not set if it doesn't. Returns true if the item existed and was replaced, false otherwise. Replace does not reset the item's TTL

func (*CacheInt64) Set

func (c *CacheInt64) Set(key int64, value interface{}, duration time.Duration)

Set the value in the cache for the specified duration

func (*CacheInt64) SetMaxSize

func (c *CacheInt64) SetMaxSize(size int64)

Sets a new max size. That can result in a GC being run if the new maximum size is smaller than the cached size

func (*CacheInt64) SetWithDeadline

func (c *CacheInt64) SetWithDeadline(key int64, value interface{}, deadline time.Time)

func (*CacheInt64) Stop

func (c *CacheInt64) Stop()

Stops the background worker. Operations performed on the cache after Stop is called are likely to panic

func (*CacheInt64) TrackingGet

func (c *CacheInt64) TrackingGet(key int64) TrackedItem

Used when the cache was created with the Track() configuration option. Avoid otherwise

func (*CacheInt64) TrackingSet

func (c *CacheInt64) TrackingSet(key int64, value interface{}, duration time.Duration) TrackedItem

Used when the cache was created with the Track() configuration option. Sets the item, and returns a tracked reference to it.

type Configuration

type Configuration struct {
	// contains filtered or unexported fields
}

func Configure

func Configure() *Configuration

Creates a configuration object with sensible defaults. Use this as the start of the fluent configuration, e.g.: ccache.New(ccache.Configure().MaxSize(10000))

func (*Configuration) Buckets

func (c *Configuration) Buckets(count uint32) *Configuration

Keys are hashed modulo the bucket count to provide greater concurrency (every set requires a write lock on the bucket). Must be a power of 2 (1, 2, 4, 8, 16, ...) [16]

func (*Configuration) DeleteBuffer

func (c *Configuration) DeleteBuffer(size uint32) *Configuration

The size of the queue for items which should be deleted. If the queue fills up, calls to Delete() will block

func (*Configuration) GetsPerPromote

func (c *Configuration) GetsPerPromote(count int32) *Configuration

Given a large cache with a high read / write ratio, it's usually unnecessary to promote an item on every Get. GetsPerPromote specifies the number of Gets a key must have before being promoted [3]

func (*Configuration) ItemsToPrune

func (c *Configuration) ItemsToPrune(count uint32) *Configuration

The number of items to prune when memory is low [500]

func (*Configuration) MaxSize

func (c *Configuration) MaxSize(max int64) *Configuration

The max size for the cache [5000]

func (*Configuration) OnDelete

func (c *Configuration) OnDelete(callback func(item *Item)) *Configuration

OnDelete allows setting a callback function to react to item deletion. This typically allows cleanup of resources, such as calling Close() on cached objects that require some kind of teardown.

func (*Configuration) PromoteBuffer

func (c *Configuration) PromoteBuffer(size uint32) *Configuration

The size of the queue for items which should be promoted. If the queue fills up, promotions are skipped [1024]

func (*Configuration) Track

func (c *Configuration) Track() *Configuration

By turning tracking on and using the cache's TrackingGet, the cache won't evict items which you haven't called Release() on. It's a simple reference counter.

type ConfigurationInt64

type ConfigurationInt64 struct {
	// contains filtered or unexported fields
}

func ConfigureInt64

func ConfigureInt64() *ConfigurationInt64

Creates a configuration object with sensible defaults. Use this as the start of the fluent configuration, e.g.: ccache.NewCacheInt64(ccache.ConfigureInt64().MaxSize(10000))

func (*ConfigurationInt64) Buckets

func (c *ConfigurationInt64) Buckets(count uint32) *ConfigurationInt64

Keys are hashed modulo the bucket count to provide greater concurrency (every set requires a write lock on the bucket). Must be a power of 2 (1, 2, 4, 8, 16, ...) [16]

func (*ConfigurationInt64) DeleteBuffer

func (c *ConfigurationInt64) DeleteBuffer(size uint32) *ConfigurationInt64

The size of the queue for items which should be deleted. If the queue fills up, calls to Delete() will block

func (*ConfigurationInt64) GetsPerPromote

func (c *ConfigurationInt64) GetsPerPromote(count int32) *ConfigurationInt64

Given a large cache with a high read / write ratio, it's usually unnecessary to promote an item on every Get. GetsPerPromote specifies the number of Gets a key must have before being promoted [3]

func (*ConfigurationInt64) ItemsToPrune

func (c *ConfigurationInt64) ItemsToPrune(count uint32) *ConfigurationInt64

The number of items to prune when memory is low [500]

func (*ConfigurationInt64) MaxSize

func (c *ConfigurationInt64) MaxSize(max int64) *ConfigurationInt64

The max size for the cache [5000]

func (*ConfigurationInt64) OnDelete

func (c *ConfigurationInt64) OnDelete(callback func(item *ItemInt64)) *ConfigurationInt64

OnDelete allows setting a callback function to react to item deletion. This typically allows cleanup of resources, such as calling Close() on cached objects that require some kind of teardown.

func (*ConfigurationInt64) PromoteBuffer

func (c *ConfigurationInt64) PromoteBuffer(size uint32) *ConfigurationInt64

The size of the queue for items which should be promoted. If the queue fills up, promotions are skipped [1024]

func (*ConfigurationInt64) Track

func (c *ConfigurationInt64) Track() *ConfigurationInt64

By turning tracking on and using the cache's TrackingGet, the cache won't evict items which you haven't called Release() on. It's a simple reference counter.

type Item

type Item struct {
	// contains filtered or unexported fields
}

func (*Item) Expired

func (i *Item) Expired() bool

func (*Item) Expires

func (i *Item) Expires() time.Time

func (*Item) Extend

func (i *Item) Extend(duration time.Duration)

func (*Item) IsExpired

func (i *Item) IsExpired(now time.Time) bool

func (*Item) Key

func (i *Item) Key() string

func (*Item) Release

func (i *Item) Release()

func (*Item) TTL

func (i *Item) TTL() time.Duration

func (*Item) Value

func (i *Item) Value() interface{}

type ItemInt64

type ItemInt64 struct {
	// contains filtered or unexported fields
}

func (*ItemInt64) Expired

func (i *ItemInt64) Expired() bool

func (*ItemInt64) Expires

func (i *ItemInt64) Expires() time.Time

func (*ItemInt64) Extend

func (i *ItemInt64) Extend(duration time.Duration)

func (*ItemInt64) IsExpired

func (i *ItemInt64) IsExpired(now time.Time) bool

func (*ItemInt64) Key

func (i *ItemInt64) Key() int64

func (*ItemInt64) Release

func (i *ItemInt64) Release()

func (*ItemInt64) TTL

func (i *ItemInt64) TTL() time.Duration

func (*ItemInt64) Value

func (i *ItemInt64) Value() interface{}

type LayeredCache

type LayeredCache struct {
	*Configuration
	// contains filtered or unexported fields
}

func Layered

func Layered(config *Configuration) *LayeredCache

See ccache.Configure() for creating a configuration

func (*LayeredCache) Clear

func (c *LayeredCache) Clear()

Clears the cache

func (*LayeredCache) Delete

func (c *LayeredCache) Delete(primary, secondary string) bool

Remove the item from the cache, return true if the item was present, false otherwise.

func (*LayeredCache) DeleteAll

func (c *LayeredCache) DeleteAll(primary string) bool

Deletes all items that share the same primary key

func (*LayeredCache) DeleteFunc

func (c *LayeredCache) DeleteFunc(primary string, matches func(key string, item *Item) bool) int

Deletes all items that share the same primary key and where the matches func evaluates to true.

func (*LayeredCache) DeletePrefix

func (c *LayeredCache) DeletePrefix(primary, prefix string) int

Deletes all items that share the same primary key and prefix.

func (*LayeredCache) Fetch

func (c *LayeredCache) Fetch(primary, secondary string, duration time.Duration, fetch func() (interface{}, error)) (*Item, error)

Attempts to get the value from the cache and calls fetch on a miss. If fetch returns an error, no value is cached and the error is returned back to the caller.

func (*LayeredCache) Get

func (c *LayeredCache) Get(primary, secondary string) *Item

Get an item from the cache. Returns nil if the item wasn't found. This can return an expired item. Use item.Expired() to see if the item is expired and item.TTL() to see how long until the item expires (which will be negative for an already expired item).

func (*LayeredCache) GetDropped

func (c *LayeredCache) GetDropped() int

Gets the number of items removed from the cache due to memory pressure since the last time GetDropped was called

func (*LayeredCache) GetOrCreateSecondaryCache

func (c *LayeredCache) GetOrCreateSecondaryCache(primary string) *SecondaryCache

Get the secondary cache for a given primary key. This operation will never return nil. In the case where the primary key does not exist, a new, underlying, empty bucket will be created and returned.

func (*LayeredCache) ItemCount

func (c *LayeredCache) ItemCount() int

func (*LayeredCache) Replace

func (c *LayeredCache) Replace(primary, secondary string, value interface{}) bool

Replace the value if it exists, does not set if it doesn't. Returns true if the item existed and was replaced, false otherwise. Replace does not reset the item's TTL nor does it alter its position in the LRU

func (*LayeredCache) Set

func (c *LayeredCache) Set(primary, secondary string, value interface{}, duration time.Duration)

Set the value in the cache for the specified duration

func (*LayeredCache) SetMaxSize

func (c *LayeredCache) SetMaxSize(size int64)

Sets a new max size. That can result in a GC being run if the new maximum size is smaller than the cached size

func (*LayeredCache) Stop

func (c *LayeredCache) Stop()

func (*LayeredCache) TrackingGet

func (c *LayeredCache) TrackingGet(primary, secondary string) TrackedItem

Used when the cache was created with the Track() configuration option. Avoid otherwise

func (*LayeredCache) TrackingSet

func (c *LayeredCache) TrackingSet(primary, secondary string, value interface{}, duration time.Duration) TrackedItem

Used when the cache was created with the Track() configuration option. Sets the item, and returns a tracked reference to it.

type SecondaryCache

type SecondaryCache struct {
	// contains filtered or unexported fields
}

func (*SecondaryCache) Delete

func (s *SecondaryCache) Delete(secondary string) bool

Delete a secondary key. The semantics are the same as for LayeredCache.Delete

func (*SecondaryCache) Fetch

func (s *SecondaryCache) Fetch(secondary string, duration time.Duration, fetch func() (interface{}, error)) (*Item, error)

Fetch or set a secondary key. The semantics are the same as for LayeredCache.Fetch

func (*SecondaryCache) Get

func (s *SecondaryCache) Get(secondary string) *Item

Get the secondary key. The semantics are the same as for LayeredCache.Get

func (*SecondaryCache) Replace

func (s *SecondaryCache) Replace(secondary string, value interface{}) bool

Replace a secondary key. The semantics are the same as for LayeredCache.Replace

func (*SecondaryCache) Set

func (s *SecondaryCache) Set(secondary string, value interface{}, duration time.Duration) *Item

Set the secondary key to a value. The semantics are the same as for LayeredCache.Set

func (*SecondaryCache) TrackingGet

func (c *SecondaryCache) TrackingGet(secondary string) TrackedItem

Track a secondary key. The semantics are the same as for LayeredCache.TrackingGet

type Sized

type Sized interface {
	Size() int64
}

type TrackedItem

type TrackedItem interface {
	Value() interface{}
	Release()
	Expired() bool
	TTL() time.Duration
	Expires() time.Time
	Extend(duration time.Duration)
}
