Published: May 30, 2019 License: BSD-2-Clause Imports: 4 Imported by: 1

README

ratelimiter

Package ratelimiter implements an LRU-based rate limiter for near-cache type workloads. Its aim is to marry LRU-based local caching with Redis's INCR command for incrementing counters. There are times during data processing where you'd like to be able to say things like "If I see this thing n times in an hour, I don't care about it anymore", or the classic API rate limiting example. Once you hit a certain level of throughput you'd also like to reduce your network hops to a remote service. This package aims to support both kinds of workloads. To get the best and most accurate results it's assumed that your keys are partitioned so that the same calls go to the same box, since the data used by ratelimiter is local to the machine it's on. It's also approximate rate limiting rather than tracking time windows or exact time boundaries.

The logic is simple: we keep incrementing your counter. Once you pass the maximum count we allow, we check the time of your key's last update and compare it against the rate period you passed in. If more than that period has elapsed, we reset your count and update the time, giving you a fresh set of credits.

The underlying structure is based on groupcache's LRU library.

Installation

go get github.com/CrowdStrike/ratelimiter

Features

  • Ability to set a rate limiting time period
  • Ability to disable time periods, meaning once you're rate limited, you're done
  • You can set a max size so your memory footprint remains constant; the most-used keys stay hot in cache

Usage

Example of correct usage where you want to allow only 100 requests per hour for a given key. Note: if you want to disable lifting the rate period, set ratePeriod := 0. That effectively says that for the lifetime of the process, once you hit the rate limit you're done.


import (
	"fmt"
	"time"

	"github.com/CrowdStrike/ratelimiter"
)

maxCapacity := 1000 // maximum number of cache entries before eviction
ratePeriod := 1 * time.Hour
rl, err := ratelimiter.New(maxCapacity, ratePeriod)
if err != nil {
	fmt.Printf("Unable to create cache: %v\n", err)
}

userKey := "user123"
maxCount := 100 // the maximum number of items I want from this user in one hour
cnt, underRateLimit := rl.Incr(userKey, maxCount)
if underRateLimit {
	// allow further access
	...
} else {
	fmt.Printf("User [%s] is over rate limit, denying for now, current count [%d]\n", userKey, cnt)
}

Performance

Approximately 3.2MM Incr operations per second on a standard 2014 MacBook Pro:

BenchmarkIncrWithPeriod       5000000    308 ns/op
BenchmarkIncrWithoutPeriod    5000000    253 ns/op
BenchmarkGet                 10000000    174 ns/op

3.2MM ops/second is more than enough for our needs, but should someone need more, we've found that more selective locking and use of the sync/atomic package can yield roughly a 50% speedup at a minor cost in readability.

Documentation

Overview

Package ratelimiter implements an LRU cache that uses incr to determine rate limit policy violations

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type Cache

type Cache struct {

	// MaxEntries is the maximum number of cache entries before
	// an item is evicted. Zero means no limit.
	MaxEntries int

	// OnEvicted optionally specifies a callback function to be
	// executed when an entry is purged from the cache.
	OnEvicted func(key interface{}, value interface{})
	// contains filtered or unexported fields
}

Cache is an LRU cache. It is safe for concurrent access because it locks when mutations are made; even with locks it's able to do 3.2MM ops per second on a standard laptop.
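The locking pattern described can be sketched as a single mutex guarding every mutation (the type and field names here are illustrative, not the package's unexported fields):

```go
package main

import (
	"fmt"
	"sync"
)

// lockedCounts guards a map of counters with one mutex, the same
// simple pattern that keeps Cache safe for concurrent access.
type lockedCounts struct {
	mu     sync.Mutex
	counts map[string]uint64
}

func (l *lockedCounts) incr(key string) uint64 {
	l.mu.Lock()
	defer l.mu.Unlock()
	l.counts[key]++
	return l.counts[key]
}

func (l *lockedCounts) get(key string) uint64 {
	l.mu.Lock()
	defer l.mu.Unlock()
	return l.counts[key]
}

func main() {
	lc := &lockedCounts{counts: make(map[string]uint64)}
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 500; j++ {
				lc.incr("user123") // safe from multiple goroutines
			}
		}()
	}
	wg.Wait()
	fmt.Println(lc.get("user123")) // 2000
}
```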

func New

func New(maxEntries int, ratePeriod time.Duration) (*Cache, error)

New creates a new Cache. ratePeriod is the length of the time window, ending now, over which the rate limit applies

func (*Cache) Get

func (c *Cache) Get(key interface{}) (value uint64, ok bool)

Get looks up a key's value from the cache.

func (*Cache) Incr

func (c *Cache) Incr(key interface{}, maxValue int) (uint64, bool)

Incr allows you to increment a key. If the key is over the rate limit maxValue and less than the grace period has elapsed, it returns false for the underRateLimit boolean

func (*Cache) Len

func (c *Cache) Len() int

Len returns the number of items in the cache.

func (*Cache) Remove

func (c *Cache) Remove(key interface{})

Remove removes the provided key from the cache.
