cache

package module
v0.0.0-...-e8a81b0
Published: Dec 22, 2016 License: MIT Imports: 7 Imported by: 135

README

Cache

Cache provides a set of cache backends for common use cases

Install and Usage

Install the package with:

go get github.com/koding/cache

Import it with:

import "github.com/koding/cache"

Example


// create a cache with a 2-second TTL
c := cache.NewMemoryWithTTL(2 * time.Second)
// start garbage collection for expired keys
c.StartGC(time.Millisecond * 10)
// set item
err := c.Set("test_key", "test_data")
// get item
data, err := c.Get("test_key")

Supported caching algorithms:

  • MemoryNoTS : provides a non-thread safe in-memory caching system
  • Memory : provides a thread safe in-memory caching system, built on top of MemoryNoTS cache
  • LRUNoTS : provides a non-thread safe, fixed size in-memory caching system, built on top of MemoryNoTS cache
  • LRU : provides a thread safe, fixed size in-memory caching system, built on top of LRUNoTS cache
  • MemoryTTL : provides a thread safe, expiring in-memory caching system, built on top of MemoryNoTS cache
  • ShardedNoTS : provides a non-thread safe sharded cache system, built on top of a cache interface
  • ShardedTTL : provides a thread safe, expiring in-memory sharded cache system, built on top of ShardedNoTS over MemoryNoTS
  • LFUNoTS : provides a non-thread safe, fixed size in-memory caching system, built on top of MemoryNoTS cache
  • LFU : provides a thread safe, fixed size in-memory caching system, built on top of LFUNoTS cache
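
All of these backends expose the same Get/Set/Delete surface. The snippet below is a minimal sketch (not part of the original README), assuming the package is imported as cache:

c := cache.NewMemory()
// set and read back a value
err := c.Set("answer", 42)
v, err := c.Get("answer") // v.(int) == 42
// delete it again; a later Get returns cache.ErrNotFound
err = c.Delete("answer")
_, err = c.Get("answer") // err == cache.ErrNotFound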

Documentation

Overview

Package cache provides basic caching mechanisms for Go(lang) projects.

Currently supported caching algorithms:

MemoryNoTS  : provides a non-thread safe in-memory caching system
Memory      : provides a thread safe in-memory caching system, built on top of MemoryNoTS cache
LRUNoTS     : provides a non-thread safe, fixed size in-memory caching system, built on top of MemoryNoTS cache
LRU         : provides a thread safe, fixed size in-memory caching system, built on top of LRUNoTS cache
MemoryTTL   : provides a thread safe, expiring in-memory caching system, built on top of MemoryNoTS cache
ShardedNoTS : provides a non-thread safe sharded cache system, built on top of a cache interface
ShardedTTL  : provides a thread safe, expiring in-memory sharded cache system, built on top of ShardedNoTS over MemoryNoTS
LFUNoTS     : provides a non-thread safe, fixed size in-memory caching system, built on top of MemoryNoTS cache
LFU         : provides a thread safe, fixed size in-memory caching system, built on top of LFUNoTS cache

Index

Constants

This section is empty.

Variables

View Source
var (
	// ErrNotFound is returned when the requested item is not found in the cache
	ErrNotFound = errors.New("not found")
)

Functions

This section is empty.

Types

type Cache

type Cache interface {
	// Get returns a single item from the backend. If the requested item is not
	// found, it returns ErrNotFound.
	Get(key string) (interface{}, error)

	// Set sets a single item to the backend
	Set(key string, value interface{}) error

	// Delete deletes a single item from the backend
	Delete(key string) error
}

Cache is the contract for all of the cache backends that are supported by this package
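
Code can be written against this contract and stay agnostic about which backend it runs on. In the sketch below, warmUp is a hypothetical helper (not part of this package) used only to illustrate the idea:

// warmUp pre-populates any backend that satisfies cache.Cache.
func warmUp(c cache.Cache, seed map[string]interface{}) error {
	for k, v := range seed {
		if err := c.Set(k, v); err != nil {
			return err
		}
	}
	return nil
}

// the same call works for any backend:
//   warmUp(cache.NewMemory(), seed)
//   warmUp(cache.NewLRU(128), seed)
//   warmUp(cache.NewLFU(128), seed)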

func NewLFU

func NewLFU(size int) Cache

NewLFU creates a thread-safe LFU cache

func NewLFUNoTS

func NewLFUNoTS(size int) Cache

NewLFUNoTS creates a new LFU cache struct for further cache operations. Size limits the maximum number of items the cache can hold.

func NewLRU

func NewLRU(size int) Cache

NewLRU creates a thread-safe LRU cache

func NewLRUNoTS

func NewLRUNoTS(size int) Cache

NewLRUNoTS creates a new LRU cache struct for further cache operations. Size limits the maximum number of items the cache can hold.

func NewMemNoTSCache

func NewMemNoTSCache() Cache

NewMemNoTSCache is a helper method to return a Cache interface, so callers don't have to typecast
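
A short sketch of why this helps (assuming the package is imported as cache): NewMemoryNoTS returns the concrete *MemoryNoTS type, while NewMemNoTSCache returns the Cache interface, which fits anywhere a func() Cache constructor is expected:

m := cache.NewMemoryNoTS()   // concrete *MemoryNoTS
c := cache.NewMemNoTSCache() // the same backend, typed as the Cache interface

// NewMemNoTSCache matches constructor parameters such as NewShardedNoTS's func() Cache
s := cache.NewShardedNoTS(cache.NewMemNoTSCache)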

func NewMemory

func NewMemory() Cache

NewMemory creates an in-memory cache system which always returns the correct value for a cache hit.

type Document

type Document struct {
	Key      string      `bson:"_id" json:"_id"`
	Value    interface{} `bson:"value" json:"value"`
	ExpireAt time.Time   `bson:"expireAt" json:"expireAt"`
}

Document holds the key-value pair for mongo cache

type LFU

type LFU struct {
	// Mutex is used for handling the concurrent
	// read/write requests for cache
	sync.Mutex
	// contains filtered or unexported fields
}

LFU holds the least frequently used cache values

func (*LFU) Delete

func (l *LFU) Delete(key string) error

Delete deletes the given key-value pair from the cache. This function does not return an error if the item is not in the cache.

func (*LFU) Get

func (l *LFU) Get(key string) (interface{}, error)

Get returns the value of a given key if it exists; each Get counts as a usage of the item.

func (*LFU) Set

func (l *LFU) Set(key string, val interface{}) error

Set sets or overrides the given key with the given value; every Set also counts as a usage of the item. When the cache is full, the least frequently used items are evicted from the linked list.
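
A minimal sketch of that eviction behaviour (the exact eviction order for ties is up to the implementation; this example avoids ties):

l := cache.NewLFU(2)
err := l.Set("a", 1) // usage("a") = 1
err = l.Set("a", 2)  // setting the same key again counts as a usage: usage("a") = 2
err = l.Set("b", 1)  // usage("b") = 1
err = l.Set("c", 1)  // cache is full, so "b", the least frequently used key, is evicted
_, err = l.Get("b")  // err == cache.ErrNotFound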

type LFUNoTS

type LFUNoTS struct {
	// contains filtered or unexported fields
}

LFUNoTS holds the cache struct

func (*LFUNoTS) Delete

func (l *LFUNoTS) Delete(key string) error

Delete deletes the key and its dependencies

func (*LFUNoTS) Get

func (l *LFUNoTS) Get(key string) (interface{}, error)

Get gets the value of a cache item, then increments the item's usage count.

func (*LFUNoTS) Set

func (l *LFUNoTS) Set(key string, value interface{}) error

Set sets a new key-value pair. Set also increments the key's usage count.

e.g. cache.Set("test_key", "2"); cache.Set("test_key", "1"). If you set a value for the same key again, its usage count is incremented, so the usage count of "test_key" is 2 in this example.

type LRU

type LRU struct {
	// Mutex is used for handling the concurrent
	// read/write requests for cache
	sync.Mutex
	// contains filtered or unexported fields
}

LRU discards the least recently used items first. This algorithm requires keeping track of what was used, and when.

func (*LRU) Delete

func (l *LRU) Delete(key string) error

Delete deletes the given key-value pair from the cache. This function does not return an error if the item is not in the cache.

func (*LRU) Get

func (l *LRU) Get(key string) (interface{}, error)

Get returns the value of a given key if it exists. Every accessed item is moved to the head of the linked list, which keeps track of the least recently used item.

func (*LRU) Set

func (l *LRU) Set(key string, val interface{}) error

Set sets or overrides the given key with the given value. Every set item is moved or prepended to the head of the linked list, which keeps track of the least recently used item. When the cache is full, the last item of the linked list is evicted from the cache.
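
A minimal sketch of that recency behaviour, assuming the package is imported as cache:

l := cache.NewLRU(2)
err := l.Set("a", 1)
err = l.Set("b", 2)
_, err = l.Get("a") // "a" becomes the most recently used key
err = l.Set("c", 3) // cache is full, so "b", the least recently used key, is evicted
_, err = l.Get("b") // err == cache.ErrNotFound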

type LRUNoTS

type LRUNoTS struct {
	// contains filtered or unexported fields
}

LRUNoTS discards the least recently used items first. This algorithm requires keeping track of what was used, and when.

func (*LRUNoTS) Delete

func (l *LRUNoTS) Delete(key string) error

Delete deletes the given key-value pair from the cache. This function does not return an error if the item is not in the cache.

func (*LRUNoTS) Get

func (l *LRUNoTS) Get(key string) (interface{}, error)

Get returns the value of a given key if it exists. Every accessed item is moved to the head of the linked list, which keeps track of the least recently used item.

func (*LRUNoTS) Set

func (l *LRUNoTS) Set(key string, val interface{}) error

Set sets or overrides the given key with the given value. Every set item is moved or prepended to the head of the linked list, which keeps track of the least recently used item. When the cache is full, the last item of the linked list is evicted from the cache.

type Memory

type Memory struct {
	// Mutex is used for handling the concurrent
	// read/write requests for cache
	sync.Mutex
	// contains filtered or unexported fields
}

Memory provides an in-memory caching mechanism

func (*Memory) Delete

func (r *Memory) Delete(key string) error

Delete deletes the given key-value pair from the cache. This function does not return an error if the item is not in the cache.

func (*Memory) Get

func (r *Memory) Get(key string) (interface{}, error)

Get returns the value of a given key if it exists

func (*Memory) Set

func (r *Memory) Set(key string, value interface{}) error

Set sets a value in the cache or overrides the existing one with the given value.

type MemoryNoTS

type MemoryNoTS struct {
	// contains filtered or unexported fields
}

MemoryNoTS provides a non-thread safe caching mechanism

func NewMemoryNoTS

func NewMemoryNoTS() *MemoryNoTS

NewMemoryNoTS creates a MemoryNoTS struct

func (*MemoryNoTS) Delete

func (r *MemoryNoTS) Delete(key string) error

Delete deletes a given key. It does not return an error if the item is not in the system.

func (*MemoryNoTS) Get

func (r *MemoryNoTS) Get(key string) (interface{}, error)

Get returns the value of a given key if it exists and is valid for the time being.

func (*MemoryNoTS) Set

func (r *MemoryNoTS) Set(key string, value interface{}) error

Set will persist a value to the cache or override the existing one with the new one.

type MemoryTTL

type MemoryTTL struct {
	// Mutex is used for handling the concurrent
	// read/write requests for cache
	sync.RWMutex
	// contains filtered or unexported fields
}

MemoryTTL holds the required variables to compose an in-memory cache system which also provides an expiring-key mechanism.

func NewMemoryWithTTL

func NewMemoryWithTTL(ttl time.Duration) *MemoryTTL

NewMemoryWithTTL creates an in-memory cache system which always returns the correct values for a cache hit and never leaks memory. ttl is used for the expiration of a key from the cache.

func (*MemoryTTL) Delete

func (r *MemoryTTL) Delete(key string) error

Delete deletes a given key if it exists.

func (*MemoryTTL) Get

func (r *MemoryTTL) Get(key string) (interface{}, error)

Get returns the value of a given key if it exists and has not expired.

func (*MemoryTTL) Set

func (r *MemoryTTL) Set(key string, value interface{}) error

Set will persist a value to the cache or override the existing one with the new one.

func (*MemoryTTL) StartGC

func (r *MemoryTTL) StartGC(gcInterval time.Duration)

StartGC starts the garbage collection process in a goroutine.

func (*MemoryTTL) StopGC

func (r *MemoryTTL) StopGC()

StopGC stops the sweeping goroutine.
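
A minimal sketch tying Set, Get, StartGC and StopGC together (the 100-millisecond TTL and 10-millisecond sweep interval are illustrative values, and the time package is assumed to be imported):

c := cache.NewMemoryWithTTL(100 * time.Millisecond)
c.StartGC(10 * time.Millisecond)
defer c.StopGC()

err := c.Set("token", "abc")
token, err := c.Get("token") // hit while the key is still fresh

time.Sleep(200 * time.Millisecond)
_, err = c.Get("token") // err == cache.ErrNotFound once the TTL has passed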

type MongoCache

type MongoCache struct {

	// CollectionName specifies the optional collection name for mongoDB.
	// If CollectionName is not set, a default value will be used.
	CollectionName string

	// TTL is the duration after which a cache key expires
	TTL time.Duration

	// GCInterval specifies the time duration for garbage collector time interval
	GCInterval time.Duration

	// GCStart, when set, starts the garbage collector, which deletes the
	// expired keys from mongo at the given time interval
	GCStart bool

	// Mutex is used for handling the concurrent
	// read/write requests for cache
	sync.RWMutex
	// contains filtered or unexported fields
}

MongoCache holds the cache values that will be stored in mongoDB

func NewMongoCacheWithTTL

func NewMongoCacheWithTTL(session *mgo.Session, configs ...Option) *MongoCache

NewMongoCacheWithTTL creates a caching layer backed by mongo. TTLs are managed either by a background cleaner or by removing the document on the Get operation. Mongo TTL indexes are not utilized since there can be multiple systems using the same collection with different TTL values.

The responsibility of stopping the GC process belongs to the user.

Session is not closed while stopping the GC.

Options are passed as self-referential functions, which saves you from passing nil values as parameters. E.g., to configure with defaults, just call: NewMongoCacheWithTTL(session)

configure the ttl duration with:

NewMongoCacheWithTTL(session, func(m *MongoCache) {
		m.TTL = 2 * time.Minute
})

or NewMongoCacheWithTTL(session, SetTTL(time.Minute * 2))

configure the collection name with:

NewMongoCacheWithTTL(session, func(m *MongoCache) {
		m.CollectionName = "MongoCacheCollectionName"
})
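
Putting the options together, caller-side usage might look like the sketch below (assuming the package is imported as cache alongside gopkg.in/mgo.v2; the connection URL and collection name are illustrative):

session, err := mgo.Dial("localhost")
if err != nil {
	// handle the connection error
}

mc := cache.NewMongoCacheWithTTL(session,
	cache.SetCollectionName("cacheCollection"),
	cache.SetTTL(2*time.Minute),
	cache.SetGCInterval(time.Minute),
	cache.StartGC(),
)
defer mc.StopGC() // stopping the GC is the caller's responsibility

err = mc.Set("greeting", "hello")
greeting, err := mc.Get("greeting") // greeting == "hello" until the TTL passes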

func (*MongoCache) Delete

func (m *MongoCache) Delete(key string) error

Delete deletes a given key if it exists.

func (*MongoCache) EnsureIndex

func (m *MongoCache) EnsureIndex() error

EnsureIndex ensures the index on the expireAt key.

func (*MongoCache) Get

func (m *MongoCache) Get(key string) (interface{}, error)

Get returns a value of a given key if it exists

func (*MongoCache) Set

func (m *MongoCache) Set(key string, value interface{}) error

Set will persist a value to the cache or override the existing one with the new one.

func (*MongoCache) SetEx

func (m *MongoCache) SetEx(key string, duration time.Duration, value interface{}) error

SetEx will persist a value to the cache, or override the existing one with the new one, using the given TTL duration.
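
For example, continuing with the mc value from the earlier sketch, a key written with SetEx carries its own TTL instead of the cache-wide default:

// expires after 30 minutes regardless of mc's default TTL
err := mc.SetEx("session-key", 30*time.Minute, "payload")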

func (*MongoCache) StartGC

func (m *MongoCache) StartGC(gcInterval time.Duration)

StartGC starts the garbage collector with the given time interval. The expired data is checked and deleted at that interval.

func (*MongoCache) StopGC

func (m *MongoCache) StopGC()

StopGC stops sweeping goroutine.

type Option

type Option func(*MongoCache)

Option sets the options specified.

func MustEnsureIndexExpireAt

func MustEnsureIndexExpireAt() Option

MustEnsureIndexExpireAt ensures the expireAt index. Usage: NewMongoCacheWithTTL(mongoSession, MustEnsureIndexExpireAt())

func SetCollectionName

func SetCollectionName(collName string) Option

SetCollectionName sets the collection name for mongoDB in the MongoCache struct as an option. Usage: NewMongoCacheWithTTL(mongoSession, SetCollectionName("mongoCollName"))

func SetGCInterval

func SetGCInterval(duration time.Duration) Option

SetGCInterval sets the garbage collector interval in the MongoCache struct as an option. Usage: NewMongoCacheWithTTL(mongoSession, SetGCInterval(time.Minute))

func SetTTL

func SetTTL(duration time.Duration) Option

SetTTL sets the TTL duration in MongoCache as an option. Usage: NewMongoCacheWithTTL(mongoSession, SetTTL(time.Minute))

func StartGC

func StartGC() Option

StartGC enables the garbage collector in the MongoCache struct. Usage: NewMongoCacheWithTTL(mongoSession, StartGC())

type ShardedCache

type ShardedCache interface {
	// Get returns a single item from the backend. If the requested item is not
	// found, it returns ErrNotFound.
	Get(shardID, key string) (interface{}, error)

	// Set sets a single item to the backend
	Set(shardID, key string, value interface{}) error

	// Delete deletes a single item from the backend
	Delete(shardID, key string) error

	// DeleteShard deletes all items in the given shard
	DeleteShard(shardID string) error
}

ShardedCache is the contract for all of the sharded cache backends that are supported by this package

type ShardedNoTS

type ShardedNoTS struct {
	// contains filtered or unexported fields
}

ShardedNoTS is a sharded cache; the concept behind this storage is that each cache entry is associated with a tenantID, which enables fast purging of the entries for just that tenantID.

func NewShardedNoTS

func NewShardedNoTS(c func() Cache) *ShardedNoTS

NewShardedNoTS initializes a ShardedNoTS struct.
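
A minimal sketch of the tenant-purging idea, using NewMemNoTSCache as the per-shard constructor:

s := cache.NewShardedNoTS(cache.NewMemNoTSCache)

err := s.Set("tenant-a", "k1", "v1")
err = s.Set("tenant-b", "k1", "v1")

err = s.DeleteShard("tenant-a") // purges every key belonging to tenant-a in one call

_, err = s.Get("tenant-a", "k1") // err == cache.ErrNotFound
v, err := s.Get("tenant-b", "k1") // other tenants are untouched; v == "v1"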

func (*ShardedNoTS) Delete

func (l *ShardedNoTS) Delete(tenantID, key string) error

Delete deletes a given key

func (*ShardedNoTS) DeleteShard

func (l *ShardedNoTS) DeleteShard(tenantID string) error

DeleteShard deletes the given shard's keys from the cache and itemCount maps.

func (*ShardedNoTS) Get

func (l *ShardedNoTS) Get(tenantID, key string) (interface{}, error)

Get returns the value of a given key if it exists and is valid for the time being.

func (*ShardedNoTS) Set

func (l *ShardedNoTS) Set(tenantID, key string, val interface{}) error

Set will persist a value to the cache or override the existing one with the new one.

type ShardedTTL

type ShardedTTL struct {
	// Mutex is used for handling the concurrent
	// read/write requests for cache
	sync.Mutex
	// contains filtered or unexported fields
}

ShardedTTL holds the required variables to compose an in-memory sharded cache system which also provides an expiring-key mechanism.

func NewShardedCacheWithTTL

func NewShardedCacheWithTTL(ttl time.Duration, f func() Cache) *ShardedTTL

NewShardedCacheWithTTL creates a sharded cache system with TTL, based on the specified Cache constructor, which always returns the correct values for a cache hit and never leaks memory. ttl is used for the expiration of a key from the cache.

func NewShardedWithTTL

func NewShardedWithTTL(ttl time.Duration) *ShardedTTL

NewShardedWithTTL creates an in-memory sharded cache system. ttl is used for the expiration of a key from the cache.

func (*ShardedTTL) Delete

func (r *ShardedTTL) Delete(tenantID, key string) error

Delete deletes a given key if it exists.

func (*ShardedTTL) DeleteShard

func (r *ShardedTTL) DeleteShard(tenantID string) error

DeleteShard deletes all keys for the given tenantID.

func (*ShardedTTL) Get

func (r *ShardedTTL) Get(tenantID, key string) (interface{}, error)

Get returns the value of a given key if it exists and has not expired.

func (*ShardedTTL) Set

func (r *ShardedTTL) Set(tenantID, key string, value interface{}) error

Set will persist a value to the cache or override the existing one with the new one.

func (*ShardedTTL) StartGC

func (r *ShardedTTL) StartGC(gcInterval time.Duration)

StartGC starts the garbage collection process in a goroutine.
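
A closing sketch of the sharded TTL cache (the one-minute TTL and ten-second sweep interval are illustrative):

st := cache.NewShardedWithTTL(time.Minute)
st.StartGC(10 * time.Second)

err := st.Set("tenant-a", "k1", "v1")
v, err := st.Get("tenant-a", "k1") // hit until the one-minute TTL passes; v == "v1"

err = st.DeleteShard("tenant-a") // or drop all of tenant-a's keys at once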
