Published: Feb 2, 2024 License: MIT Imports: 8 Imported by: 0


SwiftCache

SwiftCache is a streamlined, in-memory cache library for Go, inspired by the segmented caching concepts of freecache and bigcache. Optimized for single-machine use, it comprises fewer than 600 lines of code, comments included, offering a robust yet concise caching solution. Influenced by go-cache, SwiftCache features a user-friendly interface for effortless integration.

A key feature is its thread-safe map[string]interface{} structure with support for entry expiration, eliminating the need for content serialization or network transmission. This allows for versatile object storage within the cache.

SwiftCache stands out with both Least Recently Used (LRU) and First In First Out (FIFO) eviction policies, giving developers control over data eviction according to their application's requirements. LRU prioritizes evicting least-accessed items, while FIFO removes items based on their addition order, catering to varied caching strategies.

Performance

SwiftCache has undergone extensive benchmarking against traditional Go maps and the widely-used go-cache library, showcasing its superior performance in scenarios characterized by high concurrency and large data volumes. The following benchmarks highlight SwiftCache's efficiency, especially when managing millions of items and performing concurrent operations.

The benchmark source code can be found in the repository (cache_test.go).

goos: windows
goarch: amd64
pkg: github.com/simp-lee/swiftcache
cpu: 13th Gen Intel(R) Core(TM) i5-13500H
BenchmarkGetManyConcurrent-16           27213105                41.54 ns/op
--- BENCH: BenchmarkGetManyConcurrent-16
cache_test.go:932: Total items: 1000000, Each: 0, Hit Rate: 0.00%
cache_test.go:932: Total items: 1000000, Each: 6, Hit Rate: 96.00%
cache_test.go:932: Total items: 1000000, Each: 625, Hit Rate: 100.00%
cache_test.go:932: Total items: 1000000, Each: 62500, Hit Rate: 100.00%
cache_test.go:932: Total items: 1000000, Each: 1700819, Hit Rate: 100.00%
...

The following summary (iteration counts and per-operation latencies as reported by go test -bench) showcases SwiftCache's performance in different scenarios:

+--------------------------+--------------------+----------------+-----------------+----------+
| Benchmark Scenario       | Cache Solution     | Iterations     | Latency (ns/op) | Hit Rate |
+--------------------------+--------------------+----------------+-----------------+----------+
| GetManyConcurrent        | SwiftCache FIFO    | 27,213,105     | 41.54           | 100.00%  |
|                          | SwiftCache LRU     | 32,707,440     | 50.37           | 100.00%  |
|                          | go-cache           | 18,846,588     | 66.29           | 100.00%  |
|                          | Map                | 21,343,122     | 54.70           | 100.00%  |
+--------------------------+--------------------+----------------+-----------------+----------+
| SetManyConcurrent        | SwiftCache FIFO    | 15,027,386     | 88.20           | -        |
|                          | SwiftCache LRU     | 14,252,052     | 85.15           | -        |
|                          | go-cache           | 3,881,776      | 357.3           | -        |
|                          | Map                | 5,064,476      | 283.7           | -        |
+--------------------------+--------------------+----------------+-----------------+----------+
| SetAndGetManyConcurrent  | SwiftCache FIFO    | 10,798,873     | 95.76           | 100.00%  |
|                          | SwiftCache LRU     | 11,510,503     | 103.2           | 100.00%  |
|                          | go-cache           | 3,054,723      | 412.2           | 100.00%  |
|                          | Map                | 3,459,856      | 354.6           | 100.00%  |
+--------------------------+--------------------+----------------+-----------------+----------+

GetManyConcurrent Test: SwiftCache demonstrates exceptional retrieval speeds, with FIFO mode slightly outperforming LRU due to its simpler eviction logic. Both modes outperform the standard Go map and go-cache, and achieve a 100% hit rate once the cache is fully populated.

SetManyConcurrent Test: In write-heavy scenarios, SwiftCache maintains high performance, with the two eviction modes performing within a few nanoseconds of each other. Their efficiency contrasts starkly with the much higher latencies observed for go-cache and standard Go maps.

SetAndGetManyConcurrent Test: When testing combined set and get operations, SwiftCache's efficiency shines, offering the lowest operation times among the tested solutions and maintaining a 100% hit rate.

Usage

Installation
go get github.com/simp-lee/swiftcache
Simple initialization

A quick start guide to using SwiftCache:

package main

import (
    "fmt"
    "github.com/simp-lee/swiftcache"
)

func main() {
    // Initialize the cache with default settings
    cache, _ := swiftcache.NewCache()

    // Set a value with default expiration
    cache.Set("myKey", "myValue", swiftcache.DefaultExpiration)

    // Retrieve and check if the value exists
    value, found := cache.Get("myKey")
    if found {
        fmt.Println("Found myKey: ", value)
    }
}

Custom initialization

To customize SwiftCache's behavior, such as segment count or eviction policy:

package main

import (
    "fmt"
    "github.com/simp-lee/swiftcache"
    "hash/fnv"
    "time"
)

func main() {
    cacheConfig := swiftcache.CacheConfig{
        SegmentCount:      1024,            // Customize the number of segments
        MaxCacheSize:      5000,            // Set the maximum cache size
        DefaultExpiration: 5 * time.Minute, // Default expiration time
        EvictionPolicy:    "LRU",           // Set the eviction policy
        HashFunc:          fnv.New32,       // Use FNV-1 32-bit hash function
    }

    // Initialize the cache with custom settings
    cache, err := swiftcache.NewCache(cacheConfig)

    if err != nil {
        fmt.Printf("Cache initialization failed: %v\n", err)
        return
    }
    // Use the cache with the configured settings
    // ...
}

How it works

Segmented Storage Mechanism

SwiftCache utilizes a segmented storage mechanism to distribute cache data across multiple segments or "shards". This approach significantly reduces lock contention, a common bottleneck in traditional, single-lock cache systems. By partitioning the cache space, SwiftCache allows concurrent access to different segments, enhancing throughput and scalability. Each segment operates independently with its own lock, ensuring that operations on different segments do not block each other.
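The segmented design can be sketched as follows. This is an illustrative simplification under assumed names (shardedCache, segmentFor, and the segment layout are invented here), not SwiftCache's actual internals:

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sync"
)

// segment holds its own lock and map, so operations on
// different segments never contend with each other.
type segment struct {
	mu    sync.RWMutex
	items map[string]interface{}
}

// shardedCache distributes keys across independent segments.
type shardedCache struct {
	segments []*segment
}

func newShardedCache(n int) *shardedCache {
	c := &shardedCache{segments: make([]*segment, n)}
	for i := range c.segments {
		c.segments[i] = &segment{items: make(map[string]interface{})}
	}
	return c
}

// segmentFor hashes the key and picks the segment responsible for it.
func (c *shardedCache) segmentFor(key string) *segment {
	h := fnv.New32()
	h.Write([]byte(key))
	return c.segments[int(h.Sum32())%len(c.segments)]
}

func (c *shardedCache) Set(key string, value interface{}) {
	s := c.segmentFor(key)
	s.mu.Lock()
	defer s.mu.Unlock()
	s.items[key] = value
}

func (c *shardedCache) Get(key string) (interface{}, bool) {
	s := c.segmentFor(key)
	s.mu.RLock()
	defer s.mu.RUnlock()
	v, ok := s.items[key]
	return v, ok
}

func main() {
	c := newShardedCache(8)
	c.Set("a", 1)
	v, ok := c.Get("a")
	fmt.Println(v, ok)
}
```

Because each key maps to exactly one segment, two goroutines touching different segments never wait on the same lock.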

Superior Performance

The performance benefits of SwiftCache over standard Go maps and the go-cache library stem from this segmented architecture and the tailored locking strategy. Go maps, while fast for single-threaded operations, can suffer from contention in concurrent scenarios. SwiftCache's design minimizes this contention, providing faster access and modification times in multi-threaded environments.

LRU and FIFO Eviction Policies

SwiftCache implements two eviction policies: Least Recently Used (LRU) and First In First Out (FIFO).

LRU: Items accessed least recently are evicted first. This policy is ideal for retaining frequently accessed data in the cache. Implementing LRU requires updating access order on each retrieval, which slightly affects performance due to the necessary lock operations for order maintenance.

FIFO: Items are evicted in the order they were added, without considering access patterns. FIFO simplifies the eviction process as it does not require updating order on access, leading to potentially higher performance for scenarios where access order is not critical.

The choice between FIFO and LRU affects the locking mechanism used. LRU's need for order updates requires more sophisticated locking strategies, whereas FIFO can often operate with simpler, more performance-optimized locking due to its straightforward eviction approach.
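To illustrate the trade-off, here is a minimal LRU sketch built on the standard container/list package (not SwiftCache's code); the MoveToFront call inside Get is the per-read bookkeeping that a FIFO cache simply omits:

```go
package main

import (
	"container/list"
	"fmt"
)

type entry struct {
	key   string
	value interface{}
}

// lruCache is a minimal, unsynchronized LRU sketch for illustration.
type lruCache struct {
	capacity int
	order    *list.List               // front = most recently used
	entries  map[string]*list.Element // key -> element in order
}

func newLRU(capacity int) *lruCache {
	return &lruCache{
		capacity: capacity,
		order:    list.New(),
		entries:  make(map[string]*list.Element),
	}
}

func (c *lruCache) Get(key string) (interface{}, bool) {
	el, ok := c.entries[key]
	if !ok {
		return nil, false
	}
	// LRU bookkeeping: every read promotes the entry.
	// A FIFO cache skips this step, which is why FIFO reads are cheaper.
	c.order.MoveToFront(el)
	return el.Value.(*entry).value, true
}

func (c *lruCache) Set(key string, value interface{}) {
	if el, ok := c.entries[key]; ok {
		el.Value.(*entry).value = value
		c.order.MoveToFront(el)
		return
	}
	if c.order.Len() >= c.capacity {
		// Evict the least recently used item from the back.
		oldest := c.order.Back()
		c.order.Remove(oldest)
		delete(c.entries, oldest.Value.(*entry).key)
	}
	c.entries[key] = c.order.PushFront(&entry{key, value})
}

func main() {
	c := newLRU(2)
	c.Set("a", 1)
	c.Set("b", 2)
	c.Get("a")    // touch "a", so "b" becomes least recently used
	c.Set("c", 3) // evicts "b"
	_, ok := c.Get("b")
	fmt.Println(ok) // false
}
```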

Lazy Eviction and Expiration

Data eviction and expiration in SwiftCache are handled lazily. Instead of performing periodic sweeps to clean expired or evictable items, these operations are triggered during access attempts. This strategy ensures that the overhead of cleaning is spread out over time, preventing spikes in processing time that can occur with batch eviction or expiration processes. Lazy eviction contributes to a smoother performance profile, effectively "smoothing out" potential performance peaks.

Bitwise Operations for Segment Allocation

SwiftCache optimizes key allocation to segments using bitwise AND operations, leveraging the requirement for the number of segments to be powers of two (e.g., 2, 4, 8, 16, ... 512, 1024). This design allows SwiftCache to replace conventional modulo arithmetic with a faster bitwise operation. Bitwise AND is used because, for any number of segments that is a power of two, calculating hash & (numberOfSegments - 1) effectively performs a modulo operation but with improved performance. Although the gain from this optimization is relatively minor, it underscores SwiftCache's dedication to maximizing efficiency across all aspects of its architecture. This approach not only simplifies the internal logic for segment allocation but also contributes to the overall performance advantages of SwiftCache in high-concurrency scenarios.

In summary, SwiftCache's design principles—segmented storage, tailored eviction policies, lazy data management, and optimized segment allocation—work in concert to provide a high-performance caching solution suitable for a wide range of applications.

API

Initialization

NewCache(options ...CacheConfig) (*Cache, error): Creates a new cache instance with optional configuration. The CacheConfig struct allows customization of segments, maximum cache size, default expiration, hash function, and eviction policy.

Basic Operations

Set(key string, value interface{}, ttl time.Duration): Adds a new item to the cache or updates an existing item's value and expiration time. ttl can be set to NoExpiration for items that should not expire.

Get(key string) (interface{}, bool): Retrieves an item from the cache. Returns the item and a boolean indicating whether the key was found.

Delete(key string): Removes an item from the cache by its key.

Flush(): Clears all items from the cache.

Advanced Features

GetWithExpiration(key string) (interface{}, time.Time, bool): Similar to Get, but also returns the expiration time of the item if it exists.

ItemCount() int: Returns the total number of items currently in the cache, including those that may have expired but have not yet been cleaned up.

Items() map[string]interface{}: Returns a copy of all unexpired items in the cache as a map.

Item(key string) (*Item, bool): Retrieves an item along with its metadata from the cache. It returns a pointer to the Item and a boolean indicating whether the key was found. The Item struct includes the value, expiration time, and other internal details, making this method useful when you need more than the value itself.

cache.Set("userId", 12345, 5*time.Minute)

item, found := cache.Item("userId")
if found {
    fmt.Printf("User ID: %v, Expires at: %v\n", item.Value, time.Unix(0, item.Expiration))
}

Increment(key string, n int64) error: Increments the value of a numerical item by n. Returns an error if the key does not exist, the item has expired, or the item's value is not a number.

Decrement(key string, n int64) error: Decrements the value of a numerical item by n. Similar error conditions apply as with Increment.

cache.Set("counter", 10, swiftcache.DefaultExpiration)

err := cache.Increment("counter", 5)
if err == nil {
    counter, _ := cache.Get("counter")
    fmt.Printf("Counter after increment: %v\n", counter)
}

err = cache.Decrement("counter", 3)
if err == nil {
    counter, _ := cache.Get("counter")
    fmt.Printf("Counter after decrement: %v\n", counter)
}

Eviction and Expiration

OnEvicted(f func(string, interface{})): Sets a callback function that is called whenever an item is evicted from the cache. This can be due to expiration or when an item is manually deleted.

Acknowledgments

Special thanks to the creators of freecache and bigcache for their innovative caching mechanisms in Go, which significantly inspired the design of SwiftCache.

A heartfelt acknowledgment to go-cache and its contributors. The design of SwiftCache's API and a substantial part of our testing methodologies are influenced by the simplicity and effectiveness of go-cache. Its approach to caching solutions has been crucial in guiding the development of a user-friendly interface and robust functionality in SwiftCache.

We are thankful to the open-source community for the collaboration and knowledge-sharing that make projects like SwiftCache possible. It is through these community efforts that we can continue to build efficient and effective software solutions.

Documentation

Index

Constants

const (
	DefaultExpiration     time.Duration = 0     // Default expiration time
	NoExpiration          time.Duration = -1    // For use with functions that take an expiration time.
	DefaultSegmentCount                 = 512   // Default number of segments to reduce lock contention
	MaxCacheSize                        = 1000  // Default maximum size for each cache segment
	DefaultEvictionPolicy               = "LRU" // Default eviction policy: "LRU".
)

Variables

This section is empty.

Functions

This section is empty.

Types

type Cache

type Cache struct {
	// contains filtered or unexported fields
}

Cache is a structure holding multiple segments

func NewCache

func NewCache(options ...CacheConfig) (*Cache, error)

NewCache creates a new cache instance

func (*Cache) Decrement

func (c *Cache) Decrement(k string, n int64) error

Decrement decreases the value of an item by n.

func (*Cache) Delete

func (c *Cache) Delete(key string)

Delete removes a key from the cache (public interface)

func (*Cache) Flush

func (c *Cache) Flush()

Flush clears all cached items from the cache.

func (*Cache) Get

func (c *Cache) Get(key string) (interface{}, bool)

Get retrieves a value for a key from the cache (public interface)

func (*Cache) GetWithExpiration

func (c *Cache) GetWithExpiration(key string) (interface{}, time.Time, bool)

GetWithExpiration returns an item and its expiration time from the cache.

func (*Cache) Increment

func (c *Cache) Increment(k string, n int64) error

Increment increases the value of an item by n.

func (*Cache) Item

func (c *Cache) Item(key string) (*Item, bool)

Item retrieves an item from the cache, along with its existence. It returns a pointer to the Item and a boolean indicating whether the item was found.

func (*Cache) ItemCount

func (c *Cache) ItemCount() int

ItemCount returns the number of items in the cache.

func (*Cache) Items

func (c *Cache) Items() map[string]interface{}

Items copies all unexpired items in the cache into a new map and returns it.

func (*Cache) OnEvicted

func (c *Cache) OnEvicted(f func(string, interface{}))

OnEvicted sets an (optional) function that is called with the key and value when an item is evicted from the cache. (Including when it is deleted manually, but not when it is overwritten.) Set to nil to disable.

func (*Cache) Set

func (c *Cache) Set(key string, value interface{}, ttl time.Duration)

Set sets a key-value pair in the cache (public interface)

type CacheConfig

type CacheConfig struct {
	SegmentCount      int                // Number of segments to reduce lock contention
	MaxCacheSize      int                // Maximum size for each cache segment
	DefaultExpiration time.Duration      // Default expiration time for cache items
	HashFunc          func() hash.Hash32 // Hash function to distribute keys across segments.
	EvictionPolicy    string             // Eviction policy: "LRU" or "FIFO".
}

CacheConfig is used to configure a cache instance.

type Item

type Item struct {
	Value      interface{} // Value of the cache item
	Expiration int64       // Expiration time in nanoseconds
	// contains filtered or unexported fields
}

Item defines an item in the cache

func (*Item) Expired

func (item *Item) Expired() bool

Expired checks if the cache item is expired

type Segment

type Segment struct {
	// contains filtered or unexported fields
}

Segment represents a segment of the cache
