memocache

package
v0.0.0-...-d7cf2f0
Published: Dec 15, 2019 License: MIT Imports: 3 Imported by: 0

Documentation

Overview

Package memocache provides an in-memory cache that is safe for concurrent use by multiple goroutines. It is meant for values that are expensive to compute or slow to fetch.

This package defines a few interfaces. MapInterface describes a map that is safe for concurrent use; sync.Map is a good example implementation, and this package provides LRUMap as another. CacheInterface provides the LoadOrCall function, which avoids duplicate computation or RPC calls for the same key. Implementations of CacheInterface often use a MapInterface as their backing store.
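
As an illustration, a minimal MapInterface implementation could be a plain map guarded by a mutex (a sketch for exposition only; mutexMap is not part of this package, and it assumes import "sync"):

type mutexMap struct {
	mu sync.Mutex
	m  map[interface{}]interface{}
}

func (mm *mutexMap) LoadOrStore(key, value interface{}) (actual interface{}, loaded bool) {
	mm.mu.Lock()
	defer mm.mu.Unlock()
	if mm.m == nil {
		mm.m = make(map[interface{}]interface{})
	}
	if v, ok := mm.m[key]; ok {
		return v, true // existing value loaded
	}
	mm.m[key] = value
	return value, false // new value stored
}

func (mm *mutexMap) Delete(key interface{}) {
	mm.mu.Lock()
	defer mm.mu.Unlock()
	delete(mm.m, key)
}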

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type Cache

type Cache struct {
	// contains filtered or unexported fields
}

Cache is a kind of key-value cache map, but it is safe for concurrent use by multiple goroutines. It can avoid duplicate function calls associated with the same key. On a cache miss, the given function is used to compute or fetch the value for the key. Subsequent calls with the same key wait until the function returns, but calls with a different key are not blocked. A Cache should not be copied after first use.

Example
m := NewCache(&sync.Map{})

fmt.Println(m.LoadOrCall(1, func() interface{} { return "one" }))
fmt.Println(m.LoadOrCall("two", func() interface{} { return 2 }))
fmt.Println(m.LoadOrCall(1, func() interface{} { return "not one" }))
fmt.Println(m.LoadOrCall("two", func() interface{} { return 20 }))
m.Delete(1)
fmt.Println(m.LoadOrCall(1, func() interface{} { return "maybe not one" }))
fmt.Println(m.LoadOrCall("two", func() interface{} { return 200 }))
Output:

one
2
one
2
maybe not one
2
Example (CallsOncePerKey)
m := NewCache(&sync.Map{})

keys := []string{
	"abc",
	"ab",
	"abc",
	"88",
	"abc",
	"abc",
	"88",
	"abc",
	"abc",
	"abc",
}
res := make([]string, len(keys))

var numCalls int32

par.For(len(keys), func(i int) {
	key := keys[i]
	res[i] = m.LoadOrCall(key, func() interface{} {
		atomic.AddInt32(&numCalls, 1)
		return strings.ToUpper(key)
	}).(string)
})

fmt.Printf("Number of calls: %d\n", numCalls)

for i := 0; i < len(keys); i++ {
	fmt.Printf("%q => %q\n", keys[i], res[i])
}
Output:

Number of calls: 3
"abc" => "ABC"
"ab" => "AB"
"abc" => "ABC"
"88" => "88"
"abc" => "ABC"
"abc" => "ABC"
"88" => "88"
"abc" => "ABC"
"abc" => "ABC"
"abc" => "ABC"
Example (DifferentKeysNotBlocked)
// This example shows that different keys are not blocked. Keys "b" and
// "c" start after the function for key "a" has begun running. But key
// "a" waits until key "b" finishes. If key "b" were blocked while the
// value of "a" is being evaluated, there would be a deadlock, which
// doesn't happen in this example. The order in the output is always the
// same.
m := NewCache(&sync.Map{})

started := map[string]chan struct{}{
	"":  make(chan struct{}),
	"a": make(chan struct{}),
	"b": make(chan struct{}),
	"c": make(chan struct{}),
}
done := []chan struct{}{make(chan struct{}), make(chan struct{}), make(chan struct{}), make(chan struct{})}
ops := []struct {
	key        string
	startAfter chan struct{}
	waitUntil  chan struct{}
}{
	{key: "a", startAfter: started[""], waitUntil: done[2]},
	{key: "a", startAfter: started[""], waitUntil: done[2]},
	{key: "b", startAfter: started["a"], waitUntil: done[3]},
	{key: "c", startAfter: started["b"], waitUntil: started[""]},
}

par.Do(
	func() {
		par.For(len(ops), func(i int) {
			op := ops[i]
			<-op.startAfter
			m.LoadOrCall(op.key, func() interface{} {
				close(started[op.key])
				<-op.waitUntil
				fmt.Printf("key %q was looked up\n", op.key)
				return op.key
			})
			close(done[i])
		})
	},
	func() {
		close(started[""])
	},
)
Output:

key "c" was looked up
key "b" was looked up
key "a" was looked up

func NewCache

func NewCache(m MapInterface) *Cache

NewCache returns a new cache backed by the given m, which should be safe for concurrent use by multiple goroutines.
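
For example (userID, fetchUser, and User here are hypothetical stand-ins for a key and an expensive lookup):

cache := NewCache(&sync.Map{})
user := cache.LoadOrCall(userID, func() interface{} {
	return fetchUser(userID) // slow RPC; runs at most once per key
}).(User)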

func (*Cache) Delete

func (c *Cache) Delete(key interface{})

Delete deletes the cached value for the key. LoadOrCall() calls already in progress with the same key are not affected by the Delete. Later LoadOrCall() calls with the same key will have to call getValue again, since the cache entry for the key has been cleared. The key should be hashable.

func (*Cache) LoadOrCall

func (c *Cache) LoadOrCall(key interface{}, getValue func() interface{}) interface{}

LoadOrCall returns the cached value associated with the given key, or calls getValue to compute or fetch the value for the key. The function getValue is called only once for a given key: even if different getValue functions are supplied for the same key, only one is called. The key should be hashable.

type CacheInterface

type CacheInterface interface {
	LoadOrCall(key interface{}, getValue func() interface{}) interface{}
	Delete(key interface{})
}

CacheInterface describes a cache that is safe for concurrent use by multiple goroutines.
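
Both *Cache and *RRCache provide LoadOrCall and Delete, so compile-time checks like the following should hold (a sketch, not part of the package):

var _ CacheInterface = (*Cache)(nil)
var _ CacheInterface = (*RRCache)(nil)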

type LRUMap

type LRUMap struct {
	// contains filtered or unexported fields
}

LRUMap implements a least-recently-used map with manual deletion. LRUMap maintains a linked list and therefore carries some extra memory overhead.

func NewLRUMap

func NewLRUMap(l *list.List, maxSize int) *LRUMap

NewLRUMap returns a new LRUMap that uses the given list l to track recency and evicts entries once maxSize is exceeded.
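
For example (NewCache accepts the result because *LRUMap satisfies MapInterface):

l := list.New()
lru := NewLRUMap(l, 100) // at most 100 entries before eviction
cache := NewCache(lru)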

func (*LRUMap) Delete

func (l *LRUMap) Delete(key interface{})

Delete deletes the value for a key.

func (*LRUMap) LoadOrStore

func (l *LRUMap) LoadOrStore(key, value interface{}) (actual interface{}, loaded bool)

LoadOrStore returns the existing value for the key if present. Otherwise, it stores and returns the given value. The loaded result is true if the value was loaded, false if stored. If storing pushes the cache size over maxSize, the least recently used entries are evicted from the map.
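
A sketch of the load-or-store semantics:

m := NewLRUMap(list.New(), 2)
v, loaded := m.LoadOrStore("k", 1)
fmt.Println(v, loaded) // 1 false: the value was stored
v, loaded = m.LoadOrStore("k", 2)
fmt.Println(v, loaded) // 1 true: the existing value was kept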

type Map deprecated

type Map struct {
	// contains filtered or unexported fields
}

Map is a kind of key-value cache map, but it is safe for concurrent use by multiple goroutines. It can avoid duplicate function calls associated with the same key. On a cache miss, the given function is used to compute or fetch the value for the key. Subsequent calls with the same key wait until the function returns, but calls with a different key are not blocked. A Map should not be copied after first use.

Deprecated: Use NewCache(&sync.Map{}).

func (*Map) Delete

func (m *Map) Delete(key interface{})

Delete deletes the cached value for the key. LoadOrCall() calls already in progress with the same key are not affected by the Delete. Later LoadOrCall() calls with the same key will have to call getValue again, since the cache entry for the key has been cleared. The key should be hashable.

func (*Map) LoadOrCall

func (m *Map) LoadOrCall(key interface{}, getValue func() interface{}) interface{}

LoadOrCall returns the cached value associated with the given key, or calls getValue to compute or fetch the value for the key. The function getValue is called only once for a given key: even if different getValue functions are supplied for the same key, only one is called. The key should be hashable.

type MapInterface

type MapInterface interface {
	LoadOrStore(key, value interface{}) (actual interface{}, loaded bool)
	Delete(key interface{})
}

MapInterface describes a map that is safe for concurrent use by multiple goroutines. For example, *sync.Map implements MapInterface.
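
Compile-time checks like the following should hold (a sketch, not part of the package):

var _ MapInterface = (*sync.Map)(nil)
var _ MapInterface = (*LRUMap)(nil)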

type MultiLevelMap

type MultiLevelMap struct {
	// contains filtered or unexported fields
}

MultiLevelMap is an extension of Map that manages a tree-like structure, in which a whole subtree can be pruned. There must be no conflict between a subtree and a leaf node: for example, if the path ("a", "b", "c") has a value, the path ("a", "b") cannot have a value. Each element of a path should be hashable. MultiLevelMap should not be copied after first use. MultiLevelMap uses single-level caches implementing CacheInterface (such as a Cache backed by *sync.Map) as its backend. For caches with replacement policies, the cache maps at different levels may need to share some stats, like the current size of the cache.

Example
var m MultiLevelMap

names := []string{"John", "Mary", "Linda", "Oscar"}
gender := []string{"m", "f", "f", "m"}
lookup := func(id int, category string) string {
	return m.LoadOrCall(func() interface{} {
		return names[id]
	}, category, id).(string)
}
lookupAll := func(header string) {
	fmt.Println(header)

	for i := 0; i < len(names); i++ {
		fmt.Println(lookup(i, gender[i]))
	}
}

lookupAll("== First Calls ==")

fmt.Println("Now we change all names upper case in the backend")

for i := 0; i < len(names); i++ {
	names[i] = strings.ToUpper(names[i])
}

lookupAll("== Call again (All cache hit/Nothing should change) ==")

fmt.Println("Prune males")
m.Prune("m")

lookupAll("== Call again (All males should change) ==")

fmt.Println("Prune Linda only")
m.Prune("f", 2)

lookupAll("== Call again ==")
Output:

== First Calls ==
John
Mary
Linda
Oscar
Now we change all names upper case in the backend
== Call again (All cache hit/Nothing should change) ==
John
Mary
Linda
Oscar
Prune males
== Call again (All males should change) ==
JOHN
Mary
Linda
OSCAR
Prune Linda only
== Call again ==
JOHN
Mary
LINDA
OSCAR
Example (DifferentKeysNotBlocked)
// This example shows that different keys are not blocked even if they
// share the same prefix path. Keys "b" and "c" start after the function
// for key "a" has begun running. But key "a" waits until key "b"
// finishes. If key "b" were blocked while the value of "a" is being
// evaluated, there would be a deadlock, which doesn't happen in this
// example. The order in the output is always the same.
var m MultiLevelMap

started := map[string]chan struct{}{
	"":  make(chan struct{}),
	"a": make(chan struct{}),
	"b": make(chan struct{}),
	"c": make(chan struct{}),
}
done := []chan struct{}{make(chan struct{}), make(chan struct{}), make(chan struct{}), make(chan struct{})}
ops := []struct {
	key        string
	startAfter chan struct{}
	waitUntil  chan struct{}
}{
	{key: "a", startAfter: started[""], waitUntil: done[2]},
	{key: "a", startAfter: started[""], waitUntil: done[2]},
	{key: "b", startAfter: started["a"], waitUntil: done[3]},
	{key: "c", startAfter: started["b"], waitUntil: started[""]},
}

par.Do(
	func() {
		par.For(len(ops), func(i int) {
			op := ops[i]
			<-op.startAfter
			m.LoadOrCall(func() interface{} {
				close(started[op.key])
				<-op.waitUntil
				fmt.Printf("key %q was looked up\n", op.key)
				return op.key
			}, "common", "path", "and", op.key)
			close(done[i])
		})
	},
	func() {
		close(started[""])
	},
)
Output:

key "c" was looked up
key "b" was looked up
key "a" was looked up
Example (WithLRUCache)
sharedList := list.New()
m := NewMultiLevelMap(func() CacheInterface {
	// The example needs 6 cache slots in total (2 categories + 4 names),
	// so a max size of 5 forces evictions.
	return NewCache(NewLRUMap(sharedList, 5))
})

names := []string{"John", "Mary", "Linda", "Oscar"}
gender := []string{"m", "f", "f", "m"}
lookup := func(id int, category string) string {
	return m.LoadOrCall(func() interface{} {
		return names[id]
	}, category, id).(string)
}
lookupAll := func(header string) {
	fmt.Println(header)

	for i := 0; i < len(names); i++ {
		fmt.Println(lookup(i, gender[i]))
	}
}

lookupAll("== First Calls ==")

fmt.Println("Now we change all names upper case in the backend")

for i := 0; i < len(names); i++ {
	names[i] = strings.ToUpper(names[i])
}

lookupAll("== Call Again ==")
Output:

== First Calls ==
John
Mary
Linda
Oscar
Now we change all names upper case in the backend
== Call Again ==
JOHN
MARY
LINDA
OSCAR
Example (WithRRCache)
var currentSize int32
m := NewMultiLevelMap(func() CacheInterface {
	// The example needs 6 cache slots in total (2 categories + 4 names),
	// so a max size of 4 forces evictions.
	return NewRRCache(&currentSize, 4, 2, rand.Intn)
})

names := []string{"John", "Mary", "Linda", "Oscar"}
gender := []string{"m", "f", "f", "m"}
lookup := func(id int, category string) string {
	return m.LoadOrCall(func() interface{} {
		return names[id]
	}, category, id).(string)
}
lookupAll := func(header string) {
	fmt.Println(header)

	for i := 0; i < len(names); i++ {
		fmt.Println(lookup(i, gender[i]))
	}
}

lookupAll("== First Calls ==")

fmt.Println("Now we change all names upper case in the backend")

for i := 0; i < len(names); i++ {
	names[i] = strings.ToUpper(names[i])
}

lookupAll("== Call Again ==")
// Example Output:
// == First Calls ==
// John
// Mary
// Linda
// Oscar
// Now we change all names upper case in the backend
// == Call Again ==
// JOHN
// MARY
// LINDA
// Oscar
Output:

func NewMultiLevelMap

func NewMultiLevelMap(newMap func() CacheInterface) *MultiLevelMap

NewMultiLevelMap returns a new MultiLevelMap with the given newMap factory. For an LRU cache, you may call:

const maxSize = 10000
sharedList := list.New()
m := NewMultiLevelMap(func() memocache.CacheInterface {
	return NewCache(NewLRUMap(sharedList, maxSize))
})

For a random replacement cache, you may call:

const maxSize = 10000
var currentSize int32
m := NewMultiLevelMap(func() memocache.CacheInterface {
	return NewRRCache(&currentSize, maxSize, maxSize/2, rand.Intn)
})

func (*MultiLevelMap) LoadOrCall

func (m *MultiLevelMap) LoadOrCall(getValue func() interface{}, path ...interface{}) interface{}

LoadOrCall loads the value at path. If the value doesn't exist, it calls getValue only once. All concurrent calls to the same path block until the value is available. Calls to other paths are not blocked. Each path element should be hashable.
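
For instance, given a MultiLevelMap m (computeStats, region, and day are hypothetical):

stats := m.LoadOrCall(func() interface{} {
	return computeStats(region, day) // expensive computation, called once per path
}, "stats", region, day)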

func (*MultiLevelMap) Prune

func (m *MultiLevelMap) Prune(path ...interface{})

Prune removes the subtree at the path. It may or may not affect other LoadOrCall calls made concurrently, but subsequent LoadOrCall calls in the same goroutine see the Prune, so newly updated values will be fetched and cached again.
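
For instance, given a MultiLevelMap m (the "users" path is hypothetical):

m.Prune("users", 42) // invalidate the single entry at path ("users", 42)
m.Prune("users")     // invalidate the whole "users" subtree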

type RRCache

type RRCache struct {
	// contains filtered or unexported fields
}

RRCache implements a random replacement cache. When it goes over the max size, it removes randomly chosen cached items (down to a configured target, such as half). RRCache has a smaller memory overhead than LRUMap.

Example
var currentSize int32
m := NewRRCache(&currentSize, 6, 3, rand.Intn)
names := []string{
	"John", "Mary", "Linda", "Oscar", "Yang",
	"Yoshi", "Carlos", "Samantha"}
lookup := func(id int) string {
	return m.LoadOrCall(id, func() interface{} {
		fmt.Printf("%d %v called\n", id, names[id])
		return names[id]
	}).(string)
}
lookupAll := func(header string) {
	fmt.Println(header)

	for i := 0; i < len(names); i++ {
		fmt.Println(lookup(i))
	}
}
lookupAll("== First Calls ==")
lookupAll("== Call again ==")
// Example Output:
// == First Calls ==
// 0 John called
// John
// 1 Mary called
// Mary
// 2 Linda called
// Linda
// 3 Oscar called
// Oscar
// 4 Yang called
// Yang
// 5 Yoshi called
// Yoshi
// 6 Carlos called
// Carlos
// 7 Samantha called
// Samantha
// == Call again ==
// John
// Mary
// 2 Linda called
// Linda
// Oscar
// 4 Yang called
// Yang
// 5 Yoshi called
// Yoshi
// 6 Carlos called
// Carlos
// Samantha
Output:

func NewRRCache

func NewRRCache(currentSize *int32, maxSize, targetNum int32, intn func(n int) int) *RRCache

NewRRCache creates a new random replacement cache. When maxSize is reached, items are evicted until approximately targetNum items (for example, half of maxSize) remain. The evicted items may not be truly random. The currentSize pointer is used to share the item counter across the levels of a multi-level map. Pass rand.Intn as intn, or any random number generator that is safe for concurrent use.
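
For example, two caches sharing one size counter so that eviction is driven by their combined size (a sketch):

var size int32
c1 := NewRRCache(&size, 1000, 500, rand.Intn)
c2 := NewRRCache(&size, 1000, 500, rand.Intn)
// c1 and c2 together hold at most about 1000 items before eviction starts.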

func (*RRCache) Delete

func (r *RRCache) Delete(key interface{})

Delete deletes the cached value for the key. LoadOrCall() calls already in progress with the same key are not affected by the Delete. Later LoadOrCall() calls with the same key will have to call getValue again, since the cache entry for the key has been cleared. The key should be hashable.

func (*RRCache) LoadOrCall

func (r *RRCache) LoadOrCall(key interface{}, getValue func() interface{}) interface{}

LoadOrCall loads the value for the key. If the value doesn't exist, it calls getValue only once. All concurrent calls with the same key block until the value is available. Calls with other keys are not blocked. The key should be hashable. If the number of items exceeds maxSize, random items are evicted.

type Value

type Value struct {
	// contains filtered or unexported fields
}

Value holds a single value that is initialized by calling the given function exactly once. Value should not be copied after first use.

func (*Value) LoadOrCall

func (e *Value) LoadOrCall(getValue func() interface{}) interface{}

LoadOrCall returns the value. If the value isn't ready, it calls getValue to initialize it; getValue runs at most once.
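
For example (loadConfig and Config are hypothetical):

var v Value
cfg := v.LoadOrCall(func() interface{} {
	return loadConfig() // expensive initialization; runs at most once
}).(*Config)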
