Documentation ¶
Index ¶
Examples ¶
Constants ¶
const (
	// Default2QRecentRatio is the ratio of the 2Q cache dedicated
	// to recently added entries that have only been accessed once.
	Default2QRecentRatio = 0.25

	// Default2QGhostEntries is the default ratio of ghost
	// entries kept to track entries recently evicted.
	Default2QGhostEntries = 0.50
)
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type Evicted ¶ added in v0.1.1
type Evicted[K comparable, V any] struct {
	Key   K
	Value V
}
Evicted holds a key/value pair that was evicted from the cache.
type TwoQueue ¶
type TwoQueue[K comparable, V any] struct {
	// contains filtered or unexported fields
}
TwoQueue is a thread-safe fixed-size 2Q cache. 2Q is an enhancement over the standard LRU cache in that it tracks frequently and recently used entries separately. This prevents a burst of accesses to new entries from evicting frequently used ones. The extra tracking makes it computationally about 2x the cost of a standard LRU cache and adds some metadata overhead.
Example ¶
package main

import (
	"fmt"

	twoqueue "github.com/floatdrop/2q"
)

func main() {
	cache := twoqueue.New[string, int](256)

	cache.Set("Hello", 5)

	if e := cache.Get("Hello"); e != nil {
		fmt.Println(*e)
	}
}
Output: 5
func New ¶
func New[K comparable, V any](size int) *TwoQueue[K, V]
New creates a 2Q cache with predefined size splits: 25% of size goes to Kin, 50% to Kout, and the rest to the Am size.
func NewParams ¶ added in v0.1.1
NewParams creates a 2Q cache with the specified capacities:
- Kin defines the A1in FIFO size for key/value pairs
- Kout defines the A1out FIFO size for keys
- size defines the frequent LRU size for key/value pairs
It is recommended to dedicate 25% of available memory to Kin. Kout should correspond to 50% of the cache size, but since it stores only keys, its memory cost is small. size should consume the rest of the value memory. You can refer to the original paper (http://www.vldb.org/conf/1994/P439.PDF) for computing sizes.
For example, if you can store around 10000 items in cache:

- Kin should hold around 2500 items.
- Kout should hold 5000 items (keys only).
- And size should take the remaining 7500 items.
The cache preallocates size count of internal structures up front to avoid allocations during operation.
func (*TwoQueue[K, V]) Get ¶
func (L *TwoQueue[K, V]) Get(key K) *V
Get probes the frequent and recent cached items and returns a pointer to the value, or nil if the key was not found.
func (*TwoQueue[K, V]) Peek ¶ added in v0.1.1
func (L *TwoQueue[K, V]) Peek(key K) *V
Peek returns the value for key (if the key is in the cache) without modifying its recency.