Boom Filters

Boom Filters are probabilistic data structures for processing continuous, unbounded streams. This includes Stable Bloom Filters, Scalable Bloom Filters, Counting Bloom Filters, Inverse Bloom Filters, Cuckoo Filters, several variants of traditional Bloom filters, HyperLogLog, Count-Min Sketch, and MinHash.

Classic Bloom filters generally require a priori knowledge of the data set in order to allocate an appropriately sized bit array. This works well for offline processing, but online processing typically involves unbounded data streams. With enough data, a traditional Bloom filter "fills up", after which it has a false-positive probability of 1.

Boom Filters are useful for situations where the size of the data set isn't known ahead of time. For example, a Stable Bloom Filter can be used to deduplicate events from an unbounded event stream with a specified upper bound on false positives and minimal false negatives. Alternatively, an Inverse Bloom Filter is ideal for deduplicating a stream where duplicate events are relatively close together. This results in no false positives and, depending on how close together duplicates are, a small probability of false negatives. Scalable Bloom Filters place a tight upper bound on false positives while avoiding false negatives but require allocating memory proportional to the size of the data set. Counting Bloom Filters and Cuckoo Filters are useful for cases which require adding and removing elements to and from a set.

For large or unbounded data sets, calculating the exact cardinality is impractical. HyperLogLog uses a fraction of the memory while providing an accurate approximation. Similarly, Count-Min Sketch provides an efficient way to estimate event frequency for data streams, while Top-K tracks the top-k most frequent elements.

MinHash is a probabilistic algorithm to approximate the similarity between two sets. This can be used to cluster or compare documents by splitting the corpus into a bag of words.

Installation

$ go get github.com/tylertreat/BoomFilters

Stable Bloom Filter

This is an implementation of Stable Bloom Filters as described by Deng and Rafiei in Approximately Detecting Duplicates for Streaming Data using Stable Bloom Filters.

A Stable Bloom Filter (SBF) continuously evicts stale information so that it has room for more recent elements. Like traditional Bloom filters, an SBF has a non-zero probability of false positives, which is controlled by several parameters. Unlike the classic Bloom filter, an SBF has a tight upper bound on the rate of false positives while introducing a non-zero rate of false negatives. The false-positive rate of a classic Bloom filter eventually reaches 1, after which all queries result in a false positive. The stable-point property of an SBF means the false-positive rate asymptotically approaches a configurable fixed constant. A classic Bloom filter is actually a special case of an SBF where the eviction rate is zero and the cell size is one, so this package supports classic Bloom filters as well (in addition to bitset-based variants).

Stable Bloom Filters are useful for cases where the size of the data set isn't known a priori and memory is bounded. For example, an SBF can be used to deduplicate events from an unbounded event stream with a specified upper bound on false positives and minimal false negatives.

Usage
package main

import (
    "fmt"
    "github.com/tylertreat/BoomFilters"
)

func main() {
    sbf := boom.NewDefaultStableBloomFilter(10000, 0.01)
    fmt.Println("stable point", sbf.StablePoint())
    
    sbf.Add([]byte(`a`))
    if sbf.Test([]byte(`a`)) {
        fmt.Println("contains a")
    }
    
    if !sbf.TestAndAdd([]byte(`b`)) {
        fmt.Println("doesn't contain b")
    }
    
    if sbf.Test([]byte(`b`)) {
        fmt.Println("now it contains b!")
    }
    
    // Restore to initial state.
    sbf.Reset()
}

Scalable Bloom Filter

This is an implementation of a Scalable Bloom Filter as described by Almeida, Baquero, Preguica, and Hutchison in Scalable Bloom Filters.

A Scalable Bloom Filter (SBF) dynamically adapts to the size of the data set while enforcing a tight upper bound on the rate of false positives and a false-negative probability of zero. This works by adding Bloom filters with geometrically decreasing false-positive rates as filters become full. A tightening ratio, r, controls the filter growth. Because each successive filter's false-positive rate shrinks by a factor of r, the compounded probability over the whole series is bounded by the geometric sum P0 * (1 + r + r^2 + ...) = P0 / (1 - r), which converges to a finite target value even for an infinite series.

Scalable Bloom Filters are useful for cases where the size of the data set isn't known a priori and memory constraints aren't of particular concern. For situations where memory is bounded, consider using Inverse or Stable Bloom Filters.

The core parts of this implementation were originally written by Jian Zhen as discussed in Benchmarking Bloom Filters and Hash Functions in Go.

Usage
package main

import (
    "fmt"
    "github.com/tylertreat/BoomFilters"
)

func main() {
    sbf := boom.NewDefaultScalableBloomFilter(0.01)
    
    sbf.Add([]byte(`a`))
    if sbf.Test([]byte(`a`)) {
        fmt.Println("contains a")
    }
    
    if !sbf.TestAndAdd([]byte(`b`)) {
        fmt.Println("doesn't contain b")
    }
    
    if sbf.Test([]byte(`b`)) {
        fmt.Println("now it contains b!")
    }
    
    // Restore to initial state.
    sbf.Reset()
}

Inverse Bloom Filter

An Inverse Bloom Filter, or "the opposite of a Bloom filter", is a concurrent, probabilistic data structure used to test whether an item has been observed or not. This implementation, originally described and written by Jeff Hodges, replaces the use of MD5 hashing with a non-cryptographic FNV-1 function.

The Inverse Bloom Filter may report a false negative but can never report a false positive. That is, it may report that an item has not been seen when it actually has, but it will never report an item as seen which it hasn't come across. This behaves in a similar manner to a fixed-size hashmap which does not handle conflicts.

This structure is particularly well-suited to streams in which duplicates are relatively close together. It uses a CAS-style approach, which makes it thread-safe.

Usage
package main

import (
    "fmt"
    "github.com/tylertreat/BoomFilters"
)

func main() {
    ibf := boom.NewInverseBloomFilter(10000)
    
    ibf.Add([]byte(`a`))
    if ibf.Test([]byte(`a`)) {
        fmt.Println("contains a")
    }
    
    if !ibf.TestAndAdd([]byte(`b`)) {
        fmt.Println("doesn't contain b")
    }
    
    if ibf.Test([]byte(`b`)) {
        fmt.Println("now it contains b!")
    }
}

Counting Bloom Filter

This is an implementation of a Counting Bloom Filter as described by Fan, Cao, Almeida, and Broder in Summary Cache: A Scalable Wide-Area Web Cache Sharing Protocol.

A Counting Bloom Filter (CBF) provides a way to remove elements by using an array of n-bit buckets. When an element is added, the respective buckets are incremented. To remove an element, the respective buckets are decremented. A query checks that each of the respective buckets are non-zero. Because CBFs allow elements to be removed, they introduce a non-zero probability of false negatives in addition to the possibility of false positives.

Counting Bloom Filters are useful for cases where elements are both added and removed from the data set. Since they use n-bit buckets, CBFs use roughly n-times more memory than traditional Bloom filters.

See Deletable Bloom Filter for an alternative which avoids false negatives.

Usage
package main

import (
    "fmt"
    "github.com/tylertreat/BoomFilters"
)

func main() {
    bf := boom.NewDefaultCountingBloomFilter(1000, 0.01)
    
    bf.Add([]byte(`a`))
    if bf.Test([]byte(`a`)) {
        fmt.Println("contains a")
    }
    
    if !bf.TestAndAdd([]byte(`b`)) {
        fmt.Println("doesn't contain b")
    }
    
    if bf.TestAndRemove([]byte(`b`)) {
        fmt.Println("removed b")
    }
    
    // Restore to initial state.
    bf.Reset()
}

Cuckoo Filter

This is an implementation of a Cuckoo Filter as described by Fan, Andersen, Kaminsky, and Mitzenmacher in Cuckoo Filter: Practically Better Than Bloom. The Cuckoo Filter is similar to the Counting Bloom Filter in that it supports adding and removing elements, but it does so in a way that doesn't significantly degrade space and performance.

It works by using a cuckoo hashing scheme for inserting items. Instead of storing the elements themselves, it stores their fingerprints which also allows for item removal without false negatives (if you don't attempt to remove an item not contained in the filter).

For applications that store many items and target moderately low false-positive rates, cuckoo filters have lower space overhead than space-optimized Bloom filters.

Usage
package main

import (
    "fmt"
    "github.com/tylertreat/BoomFilters"
)

func main() {
    cf := boom.NewCuckooFilter(1000, 0.01)
    
    cf.Add([]byte(`a`))
    if cf.Test([]byte(`a`)) {
        fmt.Println("contains a")
    }
    
    if contains, _ := cf.TestAndAdd([]byte(`b`)); !contains {
        fmt.Println("doesn't contain b")
    }
    
    if cf.TestAndRemove([]byte(`b`)) {
        fmt.Println("removed b")
    }
    
    // Restore to initial state.
    cf.Reset()
}

Classic Bloom Filter

A classic Bloom filter is a special case of a Stable Bloom Filter whose eviction rate is zero and cell size is one. We call this special case an Unstable Bloom Filter. Because cells require more memory overhead, this package also provides two bitset-based Bloom filter variations. The first variation is the traditional implementation consisting of a single bit array. The second implementation is a partitioned approach which uniformly distributes the probability of false positives across all elements.

Bloom filters have a limited capacity, depending on the configured size. Once all bits are set, the probability of a false positive is 1. However, traditional Bloom filters cannot return a false negative.

A Bloom filter is ideal for cases where the data set is known a priori because the false-positive rate can be configured by the size and number of hash functions.

Usage
package main

import (
    "fmt"
    "github.com/tylertreat/BoomFilters"
)

func main() {
    // We could also use boom.NewUnstableBloomFilter or boom.NewPartitionedBloomFilter.
    bf := boom.NewBloomFilter(1000, 0.01)
    
    bf.Add([]byte(`a`))
    if bf.Test([]byte(`a`)) {
        fmt.Println("contains a")
    }
    
    if !bf.TestAndAdd([]byte(`b`)) {
        fmt.Println("doesn't contain b")
    }
    
    if bf.Test([]byte(`b`)) {
        fmt.Println("now it contains b!")
    }
    
    // Restore to initial state.
    bf.Reset()
}

Count-Min Sketch

This is an implementation of a Count-Min Sketch as described by Cormode and Muthukrishnan in An Improved Data Stream Summary: The Count-Min Sketch and its Applications.

A Count-Min Sketch (CMS) is a probabilistic data structure which approximates the frequency of events in a data stream. Unlike a hash map, a CMS uses sub-linear space at the expense of a configurable error factor. Similar to Counting Bloom filters, items are hashed to a series of buckets, which increment a counter. The frequency of an item is estimated by taking the minimum of each of the item's respective counter values.

Count-Min Sketches are useful for counting the frequency of events in massive data sets or unbounded streams online. In these situations, storing the entire data set or allocating counters for every event in memory is impractical. It may be possible for offline processing, but real-time processing requires fast, space-efficient solutions like the CMS. For approximating set cardinality, refer to the HyperLogLog.

Usage
package main

import (
    "bytes"
    "fmt"
    "github.com/tylertreat/BoomFilters"
)

func main() {
    cms := boom.NewCountMinSketch(0.001, 0.99)
    
    cms.Add([]byte(`alice`)).Add([]byte(`bob`)).Add([]byte(`bob`)).Add([]byte(`frank`))
    fmt.Println("frequency of alice", cms.Count([]byte(`alice`)))
    fmt.Println("frequency of bob", cms.Count([]byte(`bob`)))
    fmt.Println("frequency of frank", cms.Count([]byte(`frank`)))
    

    // Serialization example
    buf := new(bytes.Buffer)
    n, err := cms.WriteDataTo(buf)
    if err != nil {
       fmt.Println(err, n)
    }

    // Restore to initial state.
    cms.Reset()

    newCMS := boom.NewCountMinSketch(0.001, 0.99)
    n, err = newCMS.ReadDataFrom(buf)
    if err != nil {
       fmt.Println(err, n)
    }

    fmt.Println("frequency of frank", newCMS.Count([]byte(`frank`)))

   
}

Top-K

Top-K uses a Count-Min Sketch and min-heap to track the top-k most frequent elements in a stream.

Usage
package main

import (
    "fmt"
    "github.com/tylertreat/BoomFilters"
)

func main() {
	topk := boom.NewTopK(0.001, 0.99, 5)

	topk.Add([]byte(`bob`)).Add([]byte(`bob`)).Add([]byte(`bob`))
	topk.Add([]byte(`tyler`)).Add([]byte(`tyler`)).Add([]byte(`tyler`)).Add([]byte(`tyler`))
	topk.Add([]byte(`fred`))
	topk.Add([]byte(`alice`)).Add([]byte(`alice`)).Add([]byte(`alice`)).Add([]byte(`alice`))
	topk.Add([]byte(`james`))
	topk.Add([]byte(`fred`))
	topk.Add([]byte(`sara`)).Add([]byte(`sara`))
	topk.Add([]byte(`bill`))

	for i, element := range topk.Elements() {
		fmt.Println(i, string(element.Data), element.Freq)
	}
	
	// Restore to initial state.
	topk.Reset()
}

HyperLogLog

This is an implementation of HyperLogLog as described by Flajolet, Fusy, Gandouet, and Meunier in HyperLogLog: the analysis of a near-optimal cardinality estimation algorithm.

HyperLogLog is a probabilistic algorithm which approximates the number of distinct elements in a multiset. It works by hashing values and calculating the maximum number of leading zeros in the binary representation of each hash. If the maximum number of leading zeros observed is n, the estimated number of distinct elements in the set is 2^n. To minimize variance, the multiset is split into a configurable number of registers, the maximum number of leading zeros is tracked within each register, and a harmonic mean is used to combine the per-register estimates.

For large or unbounded data sets, calculating the exact cardinality is impractical. HyperLogLog uses a fraction of the memory while providing an accurate approximation.

This implementation was originally written by Eric Lesh. Some small changes and additions have been made, including a way to construct a HyperLogLog optimized for a particular relative accuracy and adding FNV hashing. For counting element frequency, refer to the Count-Min Sketch.

Usage
package main

import (
    "bytes"
    "fmt"
    "github.com/tylertreat/BoomFilters"
)

func main() {
    hll, err := boom.NewDefaultHyperLogLog(0.1)
    if err != nil {
        panic(err)
    }
    
    hll.Add([]byte(`alice`)).Add([]byte(`bob`)).Add([]byte(`bob`)).Add([]byte(`frank`))
    fmt.Println("count", hll.Count())

    // Serialization example
    buf := new(bytes.Buffer)
    _, err = hll.WriteDataTo(buf)
    if err != nil {
       fmt.Println(err)
    }
    
    // Restore to initial state.
    hll.Reset()

    newHll, err := boom.NewDefaultHyperLogLog(0.1)
    if err != nil {
       fmt.Println(err)
    }

    _, err = newHll.ReadDataFrom(buf)
    if err != nil {
       fmt.Println(err)
    }
    fmt.Println("count", newHll.Count())

}

MinHash

This is a variation of the technique for estimating similarity between two sets as presented by Broder in On the resemblance and containment of documents.

MinHash is a probabilistic algorithm which can be used to cluster or compare documents by splitting the corpus into a bag of words. MinHash returns the approximated similarity ratio of the two bags. The similarity is less accurate for very small bags of words.

Usage
package main

import (
    "fmt"
    "github.com/tylertreat/BoomFilters"
)

func main() {
    bag1 := []string{"bill", "alice", "frank", "bob", "sara", "tyler", "james"}
    bag2 := []string{"bill", "alice", "frank", "bob", "sara"}

    fmt.Println("similarity", boom.MinHash(bag1, bag2))
}

Documentation

Overview

Package boom implements probabilistic data structures for processing continuous, unbounded data streams. This includes Stable Bloom Filters, Scalable Bloom Filters, Counting Bloom Filters, Inverse Bloom Filters, several variants of traditional Bloom filters, HyperLogLog, Count-Min Sketch, and MinHash.

Classic Bloom filters generally require a priori knowledge of the data set in order to allocate an appropriately sized bit array. This works well for offline processing, but online processing typically involves unbounded data streams. With enough data, a traditional Bloom filter "fills up", after which it has a false-positive probability of 1.

Boom Filters are useful for situations where the size of the data set isn't known ahead of time. For example, a Stable Bloom Filter can be used to deduplicate events from an unbounded event stream with a specified upper bound on false positives and minimal false negatives. Alternatively, an Inverse Bloom Filter is ideal for deduplicating a stream where duplicate events are relatively close together. This results in no false positives and, depending on how close together duplicates are, a small probability of false negatives. Scalable Bloom Filters place a tight upper bound on false positives while avoiding false negatives but require allocating memory proportional to the size of the data set. Counting Bloom Filters and Cuckoo Filters are useful for cases which require adding and removing elements to and from a set.

For large or unbounded data sets, calculating the exact cardinality is impractical. HyperLogLog uses a fraction of the memory while providing an accurate approximation. Similarly, Count-Min Sketch provides an efficient way to estimate event frequency for data streams. TopK tracks the top-k most frequent elements.

MinHash is a probabilistic algorithm to approximate the similarity between two sets. This can be used to cluster or compare documents by splitting the corpus into a bag of words.

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func MinHash

func MinHash(bag1, bag2 []string) float32

MinHash is a variation of the technique for estimating similarity between two sets as presented by Broder in On the resemblance and containment of documents:

http://gatekeeper.dec.com/ftp/pub/dec/SRC/publications/broder/positano-final-wpnums.pdf

This can be used to cluster or compare documents by splitting the corpus into a bag of words. MinHash returns the approximated similarity ratio of the two bags. The similarity is less accurate for very small bags of words.

func OptimalK

func OptimalK(fpRate float64) uint

OptimalK calculates the optimal number of hash functions to use for a Bloom filter based on the desired rate of false positives.

func OptimalM

func OptimalM(n uint, fpRate float64) uint

OptimalM calculates the optimal Bloom filter size, m, based on the number of items and the desired rate of false positives.
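
A minimal sketch of sizing a filter by hand with these helpers. The printed values follow the standard Bloom filter relations m = -n ln(p) / (ln 2)^2 and k ≈ -log2(p):

package main

import (
    "fmt"
    "github.com/tylertreat/BoomFilters"
)

func main() {
    // Size a bit array for one million items at a 1% false-positive rate.
    m := boom.OptimalM(1000000, 0.01)
    k := boom.OptimalK(0.01)

    // Roughly 9.6 million bits and 7 hash functions.
    fmt.Println("bits:", m, "hash functions:", k)
}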

Types

type BloomFilter

type BloomFilter struct {
	// contains filtered or unexported fields
}

BloomFilter implements a classic Bloom filter. A Bloom filter has a non-zero probability of false positives and a zero probability of false negatives.

func NewBloomFilter

func NewBloomFilter(n uint, fpRate float64) *BloomFilter

NewBloomFilter creates a new Bloom filter optimized to store n items with a specified target false-positive rate.

func (*BloomFilter) Add

func (b *BloomFilter) Add(data []byte) Filter

Add will add the data to the Bloom filter. It returns the filter to allow for chaining.

func (*BloomFilter) Capacity

func (b *BloomFilter) Capacity() uint

Capacity returns the Bloom filter capacity, m.

func (*BloomFilter) Count

func (b *BloomFilter) Count() uint

Count returns the number of items added to the filter.

func (*BloomFilter) EstimatedFillRatio

func (b *BloomFilter) EstimatedFillRatio() float64

EstimatedFillRatio returns the current estimated ratio of set bits.

func (*BloomFilter) FillRatio

func (b *BloomFilter) FillRatio() float64

FillRatio returns the ratio of set bits.

func (*BloomFilter) GobDecode

func (b *BloomFilter) GobDecode(data []byte) error

GobDecode implements gob.GobDecoder interface.

func (*BloomFilter) GobEncode

func (b *BloomFilter) GobEncode() ([]byte, error)

GobEncode implements gob.GobEncoder interface.

func (*BloomFilter) K

func (b *BloomFilter) K() uint

K returns the number of hash functions.

func (*BloomFilter) ReadFrom

func (b *BloomFilter) ReadFrom(stream io.Reader) (int64, error)

ReadFrom reads a binary representation of BloomFilter (such as might have been written by WriteTo()) from an i/o stream. It returns the number of bytes read.

func (*BloomFilter) Reset

func (b *BloomFilter) Reset() *BloomFilter

Reset restores the Bloom filter to its original state. It returns the filter to allow for chaining.

func (*BloomFilter) SetHash

func (b *BloomFilter) SetHash(h hash.Hash64)

SetHash sets the hashing function used in the filter. For the effect on false positive rates see: https://github.com/tylertreat/BoomFilters/pull/1
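
For example, swapping in the standard library's 64-bit FNV-1a hash, which satisfies hash.Hash64 (a minimal sketch):

package main

import (
    "hash/fnv"
    "github.com/tylertreat/BoomFilters"
)

func main() {
    bf := boom.NewBloomFilter(1000, 0.01)
    // fnv.New64a() returns a hash.Hash64, matching SetHash's signature.
    bf.SetHash(fnv.New64a())
    bf.Add([]byte(`a`))
}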

func (*BloomFilter) Test

func (b *BloomFilter) Test(data []byte) bool

Test will test for membership of the data and returns true if it is a member, false if not. This is a probabilistic test, meaning there is a non-zero probability of false positives but a zero probability of false negatives.

func (*BloomFilter) TestAndAdd

func (b *BloomFilter) TestAndAdd(data []byte) bool

TestAndAdd is equivalent to calling Test followed by Add. It returns true if the data is a member, false if not.

func (*BloomFilter) WriteTo

func (b *BloomFilter) WriteTo(stream io.Writer) (int64, error)

WriteTo writes a binary representation of the BloomFilter to an i/o stream. It returns the number of bytes written.
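
A minimal round trip using WriteTo and ReadFrom, with an in-memory buffer standing in for a file or network stream:

package main

import (
    "bytes"
    "fmt"
    "github.com/tylertreat/BoomFilters"
)

func main() {
    bf := boom.NewBloomFilter(1000, 0.01)
    bf.Add([]byte(`a`))

    // Serialize the filter to a buffer.
    buf := new(bytes.Buffer)
    if _, err := bf.WriteTo(buf); err != nil {
        panic(err)
    }

    // Deserialize into a fresh filter.
    restored := boom.NewBloomFilter(1000, 0.01)
    if _, err := restored.ReadFrom(buf); err != nil {
        panic(err)
    }
    fmt.Println("contains a:", restored.Test([]byte(`a`)))
}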

type Buckets

type Buckets struct {
	// contains filtered or unexported fields
}

Buckets is a fast, space-efficient array of buckets where each bucket can store up to a configured maximum value.

func NewBuckets

func NewBuckets(count uint, bucketSize uint8) *Buckets

NewBuckets creates a new Buckets with the provided number of buckets where each bucket is the specified number of bits.

func (*Buckets) Count

func (b *Buckets) Count() uint

Count returns the number of buckets.

func (*Buckets) Get

func (b *Buckets) Get(bucket uint) uint32

Get returns the value in the specified bucket.

func (*Buckets) GobDecode

func (b *Buckets) GobDecode(data []byte) error

GobDecode implements gob.GobDecoder interface.

func (*Buckets) GobEncode

func (b *Buckets) GobEncode() ([]byte, error)

GobEncode implements gob.GobEncoder interface.

func (*Buckets) Increment

func (b *Buckets) Increment(bucket uint, delta int32) *Buckets

Increment will increment the value in the specified bucket by the provided delta. A bucket can be decremented by providing a negative delta. The value is clamped to zero and the maximum bucket value. Returns itself to allow for chaining.

func (*Buckets) MaxBucketValue

func (b *Buckets) MaxBucketValue() uint8

MaxBucketValue returns the maximum value that can be stored in a bucket.

func (*Buckets) ReadFrom

func (b *Buckets) ReadFrom(stream io.Reader) (int64, error)

ReadFrom reads a binary representation of Buckets (such as might have been written by WriteTo()) from an i/o stream. It returns the number of bytes read.

func (*Buckets) Reset

func (b *Buckets) Reset() *Buckets

Reset restores the Buckets to the original state. Returns itself to allow for chaining.

func (*Buckets) Set

func (b *Buckets) Set(bucket uint, value uint8) *Buckets

Set will set the bucket value. The value is clamped to zero and the maximum bucket value. Returns itself to allow for chaining.

func (*Buckets) WriteTo

func (b *Buckets) WriteTo(stream io.Writer) (int64, error)

WriteTo writes a binary representation of Buckets to an i/o stream. It returns the number of bytes written.
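
A short sketch of the clamping behavior with 4-bit buckets:

package main

import (
    "fmt"
    "github.com/tylertreat/BoomFilters"
)

func main() {
    // 100 buckets of 4 bits each; values are clamped to [0, 15].
    b := boom.NewBuckets(100, 4)
    b.Set(17, 9)
    b.Increment(17, 3)   // 9 + 3 = 12
    b.Increment(17, -20) // clamped to 0
    fmt.Println(b.Get(17), b.MaxBucketValue()) // prints 0 15
}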

type CountMinSketch

type CountMinSketch struct {
	// contains filtered or unexported fields
}

CountMinSketch implements a Count-Min Sketch as described by Cormode and Muthukrishnan in An Improved Data Stream Summary: The Count-Min Sketch and its Applications:

http://dimacs.rutgers.edu/~graham/pubs/papers/cm-full.pdf

A Count-Min Sketch (CMS) is a probabilistic data structure which approximates the frequency of events in a data stream. Unlike a hash map, a CMS uses sub-linear space at the expense of a configurable error factor. Similar to Counting Bloom filters, items are hashed to a series of buckets, which increment a counter. The frequency of an item is estimated by taking the minimum of each of the item's respective counter values.

Count-Min Sketches are useful for counting the frequency of events in massive data sets or unbounded streams online. In these situations, storing the entire data set or allocating counters for every event in memory is impractical. It may be possible for offline processing, but real-time processing requires fast, space-efficient solutions like the CMS. For approximating set cardinality, refer to the HyperLogLog.

func NewCountMinSketch

func NewCountMinSketch(epsilon, delta float64) *CountMinSketch

NewCountMinSketch creates a new Count-Min Sketch whose relative accuracy is within a factor of epsilon with probability delta. Both of these parameters affect the space and time complexity.

func (*CountMinSketch) Add

func (c *CountMinSketch) Add(data []byte) *CountMinSketch

Add will add the data to the set. Returns the CountMinSketch to allow for chaining.

func (*CountMinSketch) Count

func (c *CountMinSketch) Count(data []byte) uint64

Count returns the approximate count for the specified item, correct within epsilon * total count with a probability of delta.

func (*CountMinSketch) Delta

func (c *CountMinSketch) Delta() float64

Delta returns the relative-accuracy probability, delta.

func (*CountMinSketch) Epsilon

func (c *CountMinSketch) Epsilon() float64

Epsilon returns the relative-accuracy factor, epsilon.

func (*CountMinSketch) Merge

func (c *CountMinSketch) Merge(other *CountMinSketch) error

Merge combines this CountMinSketch with another. Returns an error if the matrix width and depth are not equal.
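
For example, merging per-worker sketches built with identical parameters (a minimal sketch):

package main

import (
    "fmt"
    "github.com/tylertreat/BoomFilters"
)

func main() {
    // Two sketches with the same epsilon and delta, e.g. one per worker.
    a := boom.NewCountMinSketch(0.001, 0.99)
    b := boom.NewCountMinSketch(0.001, 0.99)
    a.Add([]byte(`bob`)).Add([]byte(`bob`))
    b.Add([]byte(`bob`))

    // Merge fails if the matrix width and depth differ.
    if err := a.Merge(b); err != nil {
        panic(err)
    }
    fmt.Println("frequency of bob", a.Count([]byte(`bob`))) // approximately 3
}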

func (*CountMinSketch) ReadDataFrom

func (c *CountMinSketch) ReadDataFrom(stream io.Reader) (int, error)

ReadDataFrom reads a binary representation of the CMS data written by WriteDataTo() from an i/o stream. It returns the number of bytes read and any error encountered. If the serialized CMS configuration differs from this sketch's, an error describing the expected parameters is returned.

func (*CountMinSketch) Reset

func (c *CountMinSketch) Reset() *CountMinSketch

Reset restores the CountMinSketch to its original state. It returns itself to allow for chaining.

func (*CountMinSketch) SetHash

func (c *CountMinSketch) SetHash(h hash.Hash64)

SetHash sets the hashing function used.

func (*CountMinSketch) TestAndRemove

func (c *CountMinSketch) TestAndRemove(data []byte, n uint64) bool

TestAndRemove attempts to remove n counts of data from the CMS. If n is greater than the data count, TestAndRemove is a no-op and returns false. Otherwise, it decrements the count by n and returns true.

func (*CountMinSketch) TestAndRemoveAll

func (c *CountMinSketch) TestAndRemoveAll(data []byte) bool

TestAndRemoveAll counts data frequency, performs TestAndRemove(data, count), and returns true if count is positive. If count is 0, TestAndRemoveAll is a no-op and returns false.

func (*CountMinSketch) TotalCount

func (c *CountMinSketch) TotalCount() uint64

TotalCount returns the number of items added to the sketch.

func (*CountMinSketch) WriteDataTo

func (c *CountMinSketch) WriteDataTo(stream io.Writer) (int, error)

WriteDataTo writes a binary representation of the CMS data to an i/o stream. It returns the number of bytes written and any error encountered.

type CountingBloomFilter

type CountingBloomFilter struct {
	// contains filtered or unexported fields
}

CountingBloomFilter implements a Counting Bloom Filter as described by Fan, Cao, Almeida, and Broder in Summary Cache: A Scalable Wide-Area Web Cache Sharing Protocol:

http://pages.cs.wisc.edu/~jussara/papers/00ton.pdf

A Counting Bloom Filter (CBF) provides a way to remove elements by using an array of n-bit buckets. When an element is added, the respective buckets are incremented. To remove an element, the respective buckets are decremented. A query checks that each of the respective buckets are non-zero. Because CBFs allow elements to be removed, they introduce a non-zero probability of false negatives in addition to the possibility of false positives.

Counting Bloom Filters are useful for cases where elements are both added and removed from the data set. Since they use n-bit buckets, CBFs use roughly n-times more memory than traditional Bloom filters.

func NewCountingBloomFilter

func NewCountingBloomFilter(n uint, b uint8, fpRate float64) *CountingBloomFilter

NewCountingBloomFilter creates a new Counting Bloom Filter optimized to store n items with a specified target false-positive rate and bucket size. If you don't know how many bits to use for buckets, use NewDefaultCountingBloomFilter for a sensible default.

func NewDefaultCountingBloomFilter

func NewDefaultCountingBloomFilter(n uint, fpRate float64) *CountingBloomFilter

NewDefaultCountingBloomFilter creates a new Counting Bloom Filter optimized to store n items with a specified target false-positive rate. Buckets are allocated four bits.

func (*CountingBloomFilter) Add

func (c *CountingBloomFilter) Add(data []byte) Filter

Add will add the data to the Bloom filter. It returns the filter to allow for chaining.

func (*CountingBloomFilter) Capacity

func (c *CountingBloomFilter) Capacity() uint

Capacity returns the Bloom filter capacity, m.

func (*CountingBloomFilter) Count

func (c *CountingBloomFilter) Count() uint

Count returns the number of items in the filter.

func (*CountingBloomFilter) K

func (c *CountingBloomFilter) K() uint

K returns the number of hash functions.

func (*CountingBloomFilter) Reset

func (c *CountingBloomFilter) Reset() *CountingBloomFilter

Reset restores the Bloom filter to its original state. It returns the filter to allow for chaining.

func (*CountingBloomFilter) SetHash

func (c *CountingBloomFilter) SetHash(h hash.Hash64)

SetHash sets the hashing function used in the filter. For the effect on false positive rates see: https://github.com/tylertreat/BoomFilters/pull/1

func (*CountingBloomFilter) Test

func (c *CountingBloomFilter) Test(data []byte) bool

Test will test for membership of the data and returns true if it is a member, false if not. This is a probabilistic test, meaning there is a non-zero probability of false positives and false negatives.

func (*CountingBloomFilter) TestAndAdd

func (c *CountingBloomFilter) TestAndAdd(data []byte) bool

TestAndAdd is equivalent to calling Test followed by Add. It returns true if the data is a member, false if not.

func (*CountingBloomFilter) TestAndRemove

func (c *CountingBloomFilter) TestAndRemove(data []byte) bool

TestAndRemove will test for membership of the data and remove it from the filter if it exists. Returns true if the data was a member, false if not.

type CuckooFilter

type CuckooFilter struct {
	// contains filtered or unexported fields
}

CuckooFilter implements a Cuckoo Bloom filter as described by Fan, Andersen, Kaminsky, and Mitzenmacher in Cuckoo Filter: Practically Better Than Bloom:

http://www.pdl.cmu.edu/PDL-FTP/FS/cuckoo-conext2014.pdf

A Cuckoo Filter is a Bloom filter variation which provides support for removing elements without significantly degrading space and performance. It works by using a cuckoo hashing scheme for inserting items. Instead of storing the elements themselves, it stores their fingerprints which also allows for item removal without false negatives (if you don't attempt to remove an item not contained in the filter).

For applications that store many items and target moderately low false-positive rates, cuckoo filters have lower space overhead than space-optimized Bloom filters.

func NewCuckooFilter

func NewCuckooFilter(n uint, fpRate float64) *CuckooFilter

NewCuckooFilter creates a new Cuckoo Bloom filter optimized to store n items with a specified target false-positive rate.

func (*CuckooFilter) Add

func (c *CuckooFilter) Add(data []byte) error

Add will add the data to the Cuckoo Filter. It returns an error if the filter is full. If the filter is full, an item is removed to make room for the new item. This introduces a possibility for false negatives. To avoid this, use Count and Capacity to check if the filter is full before adding an item.
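
A minimal sketch of the suggested guard:

package main

import (
    "fmt"
    "github.com/tylertreat/BoomFilters"
)

func main() {
    cf := boom.NewCuckooFilter(1000, 0.01)

    // Checking fullness first avoids evictions and the false
    // negatives they can introduce.
    if cf.Count() < cf.Capacity() {
        if err := cf.Add([]byte(`a`)); err != nil {
            fmt.Println("add failed:", err)
        }
    } else {
        fmt.Println("filter is full")
    }
}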

func (*CuckooFilter) Buckets

func (c *CuckooFilter) Buckets() uint

Buckets returns the number of buckets.

func (*CuckooFilter) Capacity

func (c *CuckooFilter) Capacity() uint

Capacity returns the number of items the filter can store.

func (*CuckooFilter) Count

func (c *CuckooFilter) Count() uint

Count returns the number of items in the filter.

func (*CuckooFilter) Reset

func (c *CuckooFilter) Reset() *CuckooFilter

Reset restores the Bloom filter to its original state. It returns the filter to allow for chaining.

func (*CuckooFilter) SetHash

func (c *CuckooFilter) SetHash(h hash.Hash32)

SetHash sets the hashing function used in the filter. For the effect on false positive rates see: https://github.com/tylertreat/BoomFilters/pull/1

func (*CuckooFilter) Test

func (c *CuckooFilter) Test(data []byte) bool

Test will test for membership of the data and returns true if it is a member, false if not. This is a probabilistic test, meaning there is a non-zero probability of false positives.

func (*CuckooFilter) TestAndAdd

func (c *CuckooFilter) TestAndAdd(data []byte) (bool, error)

TestAndAdd is equivalent to calling Test followed by Add. It returns true if the data is a member, false if not. An error is returned if the filter is full. If the filter is full, an item is removed to make room for the new item. This introduces a possibility for false negatives. To avoid this, use Count and Capacity to check if the filter is full before adding an item.

func (*CuckooFilter) TestAndRemove

func (c *CuckooFilter) TestAndRemove(data []byte) bool

TestAndRemove will test for membership of the data and remove it from the filter if it exists. Returns true if the data was a member, false if not.

type DeletableBloomFilter

type DeletableBloomFilter struct {
	// contains filtered or unexported fields
}

DeletableBloomFilter implements a Deletable Bloom Filter as described by Rothenberg, Macapuna, Verdi, Magalhaes in The Deletable Bloom filter - A new member of the Bloom family:

http://arxiv.org/pdf/1005.0352.pdf

A Deletable Bloom Filter compactly stores information on collisions when inserting elements. This information is used to determine if elements are deletable. This design enables false-negative-free deletions at a fraction of the cost in memory consumption.

Deletable Bloom Filters are useful for cases which require removing elements but cannot allow false negatives. This means they can be safely swapped in place of traditional Bloom filters.

func NewDeletableBloomFilter

func NewDeletableBloomFilter(n, r uint, fpRate float64) *DeletableBloomFilter

NewDeletableBloomFilter creates a new DeletableBloomFilter optimized to store n items with a specified target false-positive rate. The r value determines the number of bits to use to store collision information. This controls the deletability of an element. Refer to the paper for selecting an optimal value.

func (*DeletableBloomFilter) Add

func (d *DeletableBloomFilter) Add(data []byte) Filter

Add will add the data to the Bloom filter. It returns the filter to allow for chaining.

func (*DeletableBloomFilter) Capacity

func (d *DeletableBloomFilter) Capacity() uint

Capacity returns the Bloom filter capacity, m.

func (*DeletableBloomFilter) Count

func (d *DeletableBloomFilter) Count() uint

Count returns the number of items added to the filter.

func (*DeletableBloomFilter) K

func (d *DeletableBloomFilter) K() uint

K returns the number of hash functions.

func (*DeletableBloomFilter) Reset

func (d *DeletableBloomFilter) Reset() *DeletableBloomFilter

Reset restores the Bloom filter to its original state. It returns the filter to allow for chaining.

func (*DeletableBloomFilter) SetHash

func (d *DeletableBloomFilter) SetHash(h hash.Hash64)

SetHash sets the hashing function used in the filter. For the effect on false positive rates see: https://github.com/tylertreat/BoomFilters/pull/1

func (*DeletableBloomFilter) Test

func (d *DeletableBloomFilter) Test(data []byte) bool

Test will test for membership of the data and returns true if it is a member, false if not. This is a probabilistic test, meaning there is a non-zero probability of false positives but a zero probability of false negatives.

func (*DeletableBloomFilter) TestAndAdd

func (d *DeletableBloomFilter) TestAndAdd(data []byte) bool

TestAndAdd is equivalent to calling Test followed by Add. It returns true if the data is a member, false if not.

func (*DeletableBloomFilter) TestAndRemove

func (d *DeletableBloomFilter) TestAndRemove(data []byte) bool

TestAndRemove will test for membership of the data and remove it from the filter if it exists. Returns true if the data was a member, false if not.
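
A short usage sketch (the r value of 100 is illustrative; refer to the paper for tuning):

package main

import (
    "fmt"
    "github.com/tylertreat/BoomFilters"
)

func main() {
    // n = 1000 expected items, r = 100 collision-tracking regions.
    dbf := boom.NewDeletableBloomFilter(1000, 100, 0.01)

    dbf.Add([]byte(`a`))
    if dbf.TestAndRemove([]byte(`a`)) {
        fmt.Println("removed a")
    }
    // Unlike a Counting Bloom Filter, removals never introduce false
    // negatives; an element that collided may simply not be deletable.
    fmt.Println("contains a:", dbf.Test([]byte(`a`)))
}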

type Element

type Element struct {
	Data []byte
	Freq uint64
}

Element represents an item of data and its frequency.

type Filter

type Filter interface {
	// Test will test for membership of the data and returns true if it is a
	// member, false if not.
	Test([]byte) bool

	// Add will add the data to the Bloom filter. It returns the filter to
	// allow for chaining.
	Add([]byte) Filter

	// TestAndAdd is equivalent to calling Test followed by Add. It returns
	// true if the data is a member, false if not.
	TestAndAdd([]byte) bool
}

Filter is a probabilistic data structure which is used to test the membership of an element in a set.
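
Because every filter in this package satisfies this interface, stream-processing code can stay agnostic to the backing structure. A minimal sketch, where dedupe is a hypothetical helper:

package main

import (
    "fmt"
    "github.com/tylertreat/BoomFilters"
)

// dedupe returns the items not already seen by the filter. It works with
// any Filter implementation: Stable, Scalable, Inverse, Counting, etc.
func dedupe(f boom.Filter, items [][]byte) [][]byte {
    var fresh [][]byte
    for _, item := range items {
        if !f.TestAndAdd(item) {
            fresh = append(fresh, item)
        }
    }
    return fresh
}

func main() {
    items := [][]byte{[]byte(`a`), []byte(`b`), []byte(`a`)}
    fmt.Println(len(dedupe(boom.NewInverseBloomFilter(10000), items))) // 2
}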

type HyperLogLog

type HyperLogLog struct {
	// contains filtered or unexported fields
}

HyperLogLog implements the HyperLogLog cardinality estimation algorithm as described by Flajolet, Fusy, Gandouet, and Meunier in HyperLogLog: the analysis of a near-optimal cardinality estimation algorithm:

http://algo.inria.fr/flajolet/Publications/FlFuGaMe07.pdf

HyperLogLog is a probabilistic algorithm which approximates the number of distinct elements in a multiset. It works by hashing values and calculating the maximum number of leading zeros in the binary representation of each hash. If the maximum number of leading zeros observed is n, the estimated number of distinct elements in the set is 2^n. To minimize variance, the multiset is split into a configurable number of registers, the maximum number of leading zeros is tracked within each register, and a harmonic mean is used to combine the per-register estimates.

For large or unbounded data sets, calculating the exact cardinality is impractical. HyperLogLog uses a fraction of the memory while providing an accurate approximation. For counting element frequency, refer to the Count-Min Sketch.

func NewDefaultHyperLogLog

func NewDefaultHyperLogLog(e float64) (*HyperLogLog, error)

NewDefaultHyperLogLog creates a new HyperLogLog optimized for the specified standard error. Returns an error if the number of registers can't be calculated for the provided accuracy.

func NewHyperLogLog

func NewHyperLogLog(m uint) (*HyperLogLog, error)

NewHyperLogLog creates a new HyperLogLog with m registers. Returns an error if m isn't a power of two.

func (*HyperLogLog) Add

func (h *HyperLogLog) Add(data []byte) *HyperLogLog

Add will add the data to the set. Returns the HyperLogLog to allow for chaining.

func (*HyperLogLog) Count

func (h *HyperLogLog) Count() uint64

Count returns the approximated cardinality of the set.

func (*HyperLogLog) Merge

func (h *HyperLogLog) Merge(other *HyperLogLog) error

Merge combines this HyperLogLog with another. Returns an error if the number of registers in the two HyperLogLogs are not equal.
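
For example, combining per-shard estimators built with the same accuracy (a minimal sketch):

package main

import (
    "fmt"
    "github.com/tylertreat/BoomFilters"
)

func main() {
    // Two estimators with the same register count, e.g. one per shard.
    a, err := boom.NewDefaultHyperLogLog(0.1)
    if err != nil {
        panic(err)
    }
    b, err := boom.NewDefaultHyperLogLog(0.1)
    if err != nil {
        panic(err)
    }
    a.Add([]byte(`alice`)).Add([]byte(`bob`))
    b.Add([]byte(`bob`)).Add([]byte(`frank`))

    // The merged estimate approximates the cardinality of the union.
    if err := a.Merge(b); err != nil {
        panic(err)
    }
    fmt.Println("union count", a.Count()) // approximately 3
}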

func (*HyperLogLog) ReadDataFrom

func (h *HyperLogLog) ReadDataFrom(stream io.Reader) (int, error)

ReadDataFrom reads a binary representation of the HyperLogLog data written by WriteDataTo() from an i/o stream. It returns the number of bytes read and any error encountered. If the serialized HyperLogLog configuration differs from this one's, an error describing the expected parameters is returned.

func (*HyperLogLog) Reset

func (h *HyperLogLog) Reset() *HyperLogLog

Reset restores the HyperLogLog to its original state. It returns itself to allow for chaining.

func (*HyperLogLog) SetHash

func (h *HyperLogLog) SetHash(ha hash.Hash32)

SetHash sets the hashing function used.

func (*HyperLogLog) WriteDataTo

func (h *HyperLogLog) WriteDataTo(stream io.Writer) (n int, err error)

WriteDataTo writes a binary representation of the HyperLogLog data to an i/o stream. It returns the number of bytes written and any error encountered.

type InverseBloomFilter

type InverseBloomFilter struct {
	// contains filtered or unexported fields
}

InverseBloomFilter is a concurrent "inverse" Bloom filter, which is effectively the opposite of a classic Bloom filter. This was originally described and written by Jeff Hodges:

http://www.somethingsimilar.com/2012/05/21/the-opposite-of-a-bloom-filter/

The InverseBloomFilter may report a false negative but can never report a false positive. That is, it may report that an item has not been seen when it actually has, but it will never report an item as seen which it hasn't come across. This behaves in a similar manner to a fixed-size hashmap which does not handle conflicts.

An example use case is deduplicating events while processing a stream of data. Ideally, duplicate events are relatively close together.

func NewInverseBloomFilter

func NewInverseBloomFilter(capacity uint) *InverseBloomFilter

NewInverseBloomFilter creates and returns a new InverseBloomFilter with the specified capacity.

func (*InverseBloomFilter) Add

func (i *InverseBloomFilter) Add(data []byte) Filter

Add will add the data to the filter. It returns the filter to allow for chaining.

func (*InverseBloomFilter) Capacity

func (i *InverseBloomFilter) Capacity() uint

Capacity returns the filter capacity.

func (*InverseBloomFilter) GobDecode

func (i *InverseBloomFilter) GobDecode(data []byte) error

GobDecode implements gob.GobDecoder interface.

func (*InverseBloomFilter) GobEncode

func (i *InverseBloomFilter) GobEncode() ([]byte, error)

GobEncode implements gob.GobEncoder interface.

func (*InverseBloomFilter) ImportElementsFrom

func (i *InverseBloomFilter) ImportElementsFrom(stream io.Reader) (int, error)

ImportElementsFrom reads a binary representation of InverseBloomFilter (such as might have been written by WriteTo()) from an i/o stream into a new bloom filter using the Add() method (skipping empty elements, if any). It returns the number of elements decoded from disk.

func (*InverseBloomFilter) ReadFrom

func (i *InverseBloomFilter) ReadFrom(stream io.Reader) (int64, error)

ReadFrom reads a binary representation of InverseBloomFilter (such as might have been written by WriteTo()) from an i/o stream. ReadFrom replaces the array of its filter with the one read from disk. It returns the number of bytes read.

func (*InverseBloomFilter) SetHashFactory

func (i *InverseBloomFilter) SetHashFactory(h func() hash.Hash32)

SetHashFactory sets the hashing function factory used in the filter.
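
For example, the standard library's fnv.New32a already has the required func() hash.Hash32 shape (a minimal sketch):

package main

import (
    "hash/fnv"
    "github.com/tylertreat/BoomFilters"
)

func main() {
    ibf := boom.NewInverseBloomFilter(10000)
    // Pass the constructor itself, not a hash instance.
    ibf.SetHashFactory(fnv.New32a)
    ibf.Add([]byte(`a`))
}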

func (*InverseBloomFilter) Test

func (i *InverseBloomFilter) Test(data []byte) bool

Test will test for membership of the data and returns true if it is a member, false if not. This is a probabilistic test, meaning there is a non-zero probability of false negatives but a zero probability of false positives. That is, it may return false even though the data was added, but it will never return true for data that hasn't been added.

func (*InverseBloomFilter) TestAndAdd

func (i *InverseBloomFilter) TestAndAdd(data []byte) bool

TestAndAdd is equivalent to calling Test followed by Add atomically. It returns true if the data is a member, false if not.

func (*InverseBloomFilter) WriteTo

func (i *InverseBloomFilter) WriteTo(stream io.Writer) (int64, error)

WriteTo writes a binary representation of the InverseBloomFilter to an i/o stream. It returns the number of bytes written.

type PartitionedBloomFilter

type PartitionedBloomFilter struct {
	// contains filtered or unexported fields
}

PartitionedBloomFilter implements a variation of a classic Bloom filter as described by Almeida, Baquero, Preguica, and Hutchison in Scalable Bloom Filters:

http://gsd.di.uminho.pt/members/cbm/ps/dbloom.pdf

This filter works by partitioning the M-sized bit array into k slices of size m = M/k bits. Each hash function produces an index over m for its respective slice. Thus, each element is described by exactly k bits, meaning the distribution of false positives is uniform across all elements.
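
A short usage sketch; the partitioned variant exposes the same Filter API as the classic one:

package main

import (
    "fmt"
    "github.com/tylertreat/BoomFilters"
)

func main() {
    pbf := boom.NewPartitionedBloomFilter(1000, 0.01)
    pbf.Add([]byte(`a`))
    fmt.Println("contains a:", pbf.Test([]byte(`a`)))

    // Capacity reports M, the total bits across all k partitions.
    fmt.Println("m:", pbf.Capacity(), "k:", pbf.K())
}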

func NewPartitionedBloomFilter

func NewPartitionedBloomFilter(n uint, fpRate float64) *PartitionedBloomFilter

NewPartitionedBloomFilter creates a new partitioned Bloom filter optimized to store n items with a specified target false-positive rate.

func (*PartitionedBloomFilter) Add

func (p *PartitionedBloomFilter) Add(data []byte) Filter

Add will add the data to the Bloom filter. It returns the filter to allow for chaining.

func (*PartitionedBloomFilter) Capacity

func (p *PartitionedBloomFilter) Capacity() uint

Capacity returns the Bloom filter capacity, m.

func (*PartitionedBloomFilter) Count

func (p *PartitionedBloomFilter) Count() uint

Count returns the number of items added to the filter.

func (*PartitionedBloomFilter) EstimatedFillRatio

func (p *PartitionedBloomFilter) EstimatedFillRatio() float64

EstimatedFillRatio returns the current estimated ratio of set bits.

func (*PartitionedBloomFilter) FillRatio

func (p *PartitionedBloomFilter) FillRatio() float64

FillRatio returns the average ratio of set bits across all partitions.

func (*PartitionedBloomFilter) GobDecode

func (p *PartitionedBloomFilter) GobDecode(data []byte) error

GobDecode implements gob.GobDecoder interface.

func (*PartitionedBloomFilter) GobEncode

func (p *PartitionedBloomFilter) GobEncode() ([]byte, error)

GobEncode implements gob.GobEncoder interface.

func (*PartitionedBloomFilter) K

func (p *PartitionedBloomFilter) K() uint

K returns the number of hash functions.

func (*PartitionedBloomFilter) ReadFrom

func (p *PartitionedBloomFilter) ReadFrom(stream io.Reader) (int64, error)

ReadFrom reads a binary representation of PartitionedBloomFilter (such as might have been written by WriteTo()) from an i/o stream. It returns the number of bytes read.

func (*PartitionedBloomFilter) Reset

func (p *PartitionedBloomFilter) Reset() *PartitionedBloomFilter

Reset restores the Bloom filter to its original state. It returns the filter to allow for chaining.

func (*PartitionedBloomFilter) SetHash

func (p *PartitionedBloomFilter) SetHash(h hash.Hash64)

SetHash sets the hashing function used in the filter. For the effect on false positive rates see: https://github.com/tylertreat/BoomFilters/pull/1

func (*PartitionedBloomFilter) Test

func (p *PartitionedBloomFilter) Test(data []byte) bool

Test will test for membership of the data and returns true if it is a member, false if not. This is a probabilistic test, meaning there is a non-zero probability of false positives but a zero probability of false negatives. Due to the way the filter is partitioned, the probability of false positives is uniformly distributed across all elements.

func (*PartitionedBloomFilter) TestAndAdd

func (p *PartitionedBloomFilter) TestAndAdd(data []byte) bool

TestAndAdd is equivalent to calling Test followed by Add. It returns true if the data is a member, false if not.

func (*PartitionedBloomFilter) WriteTo

func (p *PartitionedBloomFilter) WriteTo(stream io.Writer) (int64, error)

WriteTo writes a binary representation of the PartitionedBloomFilter to an i/o stream. It returns the number of bytes written.

type ScalableBloomFilter

type ScalableBloomFilter struct {
	// contains filtered or unexported fields
}

ScalableBloomFilter implements a Scalable Bloom Filter as described by Almeida, Baquero, Preguica, and Hutchison in Scalable Bloom Filters:

http://gsd.di.uminho.pt/members/cbm/ps/dbloom.pdf

A Scalable Bloom Filter dynamically adapts to the number of elements in the data set while enforcing a tight upper bound on the false-positive rate. This works by adding Bloom filters with geometrically decreasing false-positive rates as filters become full. The tightening ratio, r, controls the filter growth. The compounded probability over the whole series converges to a target value, even accounting for an infinite series.

Scalable Bloom Filters are useful for cases where the size of the data set isn't known a priori and memory constraints aren't of particular concern. For situations where memory is bounded, consider using Inverse or Stable Bloom Filters.

func NewDefaultScalableBloomFilter

func NewDefaultScalableBloomFilter(fpRate float64) *ScalableBloomFilter

NewDefaultScalableBloomFilter creates a new Scalable Bloom Filter with the specified target false-positive rate and an optimal tightening ratio.

func NewScalableBloomFilter

func NewScalableBloomFilter(hint uint, fpRate, r float64) *ScalableBloomFilter

NewScalableBloomFilter creates a new Scalable Bloom Filter with the specified target false-positive rate and tightening ratio. Use NewDefaultScalableBloomFilter if you don't want to calculate these parameters.
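
A minimal sketch with explicit parameters (the tightening ratio of 0.8 is an illustrative choice; the paper suggests values in the 0.8 to 0.9 range):

package main

import (
    "fmt"
    "github.com/tylertreat/BoomFilters"
)

func main() {
    // hint = 10000 expected initial items, 1% target false-positive
    // rate, 0.8 tightening ratio.
    sbf := boom.NewScalableBloomFilter(10000, 0.01, 0.8)
    sbf.Add([]byte(`a`))
    fmt.Println("contains a:", sbf.Test([]byte(`a`)))
}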

func (*ScalableBloomFilter) Add

func (s *ScalableBloomFilter) Add(data []byte) Filter

Add will add the data to the Bloom filter. It returns the filter to allow for chaining.

func (*ScalableBloomFilter) Capacity

func (s *ScalableBloomFilter) Capacity() uint

Capacity returns the current Scalable Bloom Filter capacity, which is the sum of the capacities for the contained series of Bloom filters.

func (*ScalableBloomFilter) Count

func (s *ScalableBloomFilter) Count() uint

Count returns the total number of items added, across every Bloom filter.

func (*ScalableBloomFilter) FillRatio

func (s *ScalableBloomFilter) FillRatio() float64

FillRatio returns the average ratio of set bits across every filter.

func (*ScalableBloomFilter) GobDecode

func (s *ScalableBloomFilter) GobDecode(data []byte) error

GobDecode implements gob.GobDecoder interface.

func (*ScalableBloomFilter) GobEncode

func (s *ScalableBloomFilter) GobEncode() ([]byte, error)

GobEncode implements gob.GobEncoder interface.

func (*ScalableBloomFilter) K

func (s *ScalableBloomFilter) K() uint

K returns the number of hash functions used in each Bloom filter.

func (*ScalableBloomFilter) ReadFrom

func (s *ScalableBloomFilter) ReadFrom(stream io.Reader) (int64, error)

ReadFrom reads a binary representation of ScalableBloomFilter (such as might have been written by WriteTo()) from an i/o stream. It returns the number of bytes read.

func (*ScalableBloomFilter) Reset

func (s *ScalableBloomFilter) Reset() *ScalableBloomFilter

Reset restores the Bloom filter to its original state. It returns the filter to allow for chaining.

func (*ScalableBloomFilter) SetHash

func (s *ScalableBloomFilter) SetHash(h hash.Hash64)

SetHash sets the hashing function used in the filter. For the effect on false positive rates see: https://github.com/tylertreat/BoomFilters/pull/1

func (*ScalableBloomFilter) Test

func (s *ScalableBloomFilter) Test(data []byte) bool

Test will test for membership of the data and returns true if it is a member, false if not. This is a probabilistic test, meaning there is a non-zero probability of false positives but a zero probability of false negatives.

func (*ScalableBloomFilter) TestAndAdd

func (s *ScalableBloomFilter) TestAndAdd(data []byte) bool

TestAndAdd is equivalent to calling Test followed by Add. It returns true if the data is a member, false if not.

func (*ScalableBloomFilter) WriteTo

func (s *ScalableBloomFilter) WriteTo(stream io.Writer) (int64, error)

WriteTo writes a binary representation of the ScalableBloomFilter to an i/o stream. It returns the number of bytes written.

type StableBloomFilter

type StableBloomFilter struct {
	// contains filtered or unexported fields
}

StableBloomFilter implements a Stable Bloom Filter as described by Deng and Rafiei in Approximately Detecting Duplicates for Streaming Data using Stable Bloom Filters:

http://webdocs.cs.ualberta.ca/~drafiei/papers/DupDet06Sigmod.pdf

A Stable Bloom Filter (SBF) continuously evicts stale information so that it has room for more recent elements. Like traditional Bloom filters, an SBF has a non-zero probability of false positives, which is controlled by several parameters. Unlike the classic Bloom filter, an SBF has a tight upper bound on the rate of false positives while introducing a non-zero rate of false negatives. The false-positive rate of a classic Bloom filter eventually reaches 1, after which all queries result in a false positive. The stable-point property of an SBF means the false-positive rate asymptotically approaches a configurable fixed constant. A classic Bloom filter is actually a special case of SBF where the eviction rate is zero, so this package provides support for them as well.

Stable Bloom Filters are useful for cases where the size of the data set isn't known a priori, which is a requirement for traditional Bloom filters, and memory is bounded. For example, an SBF can be used to deduplicate events from an unbounded event stream with a specified upper bound on false positives and minimal false negatives.

func NewDefaultStableBloomFilter

func NewDefaultStableBloomFilter(m uint, fpRate float64) *StableBloomFilter

NewDefaultStableBloomFilter creates a new Stable Bloom Filter with m 1-bit cells and which is optimized for cases where there is no prior knowledge of the input data stream while maintaining an upper bound using the provided rate of false positives.

func NewStableBloomFilter

func NewStableBloomFilter(m uint, d uint8, fpRate float64) *StableBloomFilter

NewStableBloomFilter creates a new Stable Bloom Filter with m cells and d bits allocated per cell optimized for the target false-positive rate. Use NewDefaultStableBloomFilter if you don't want to calculate d.

func NewUnstableBloomFilter

func NewUnstableBloomFilter(m uint, fpRate float64) *StableBloomFilter

NewUnstableBloomFilter creates a new special case of Stable Bloom Filter which is a traditional Bloom filter with m bits and an optimal number of hash functions for the target false-positive rate. Unlike the stable variant, data is not evicted and a cell contains a maximum of 1 hash value.

func (*StableBloomFilter) Add

func (s *StableBloomFilter) Add(data []byte) Filter

Add will add the data to the Stable Bloom Filter. It returns the filter to allow for chaining.

func (*StableBloomFilter) Cells

func (s *StableBloomFilter) Cells() uint

Cells returns the number of cells in the Stable Bloom Filter.

func (*StableBloomFilter) FalsePositiveRate

func (s *StableBloomFilter) FalsePositiveRate() float64

FalsePositiveRate returns the upper bound on false positives when the filter has become stable.

func (*StableBloomFilter) GobDecode

func (s *StableBloomFilter) GobDecode(data []byte) error

GobDecode implements gob.GobDecoder interface.

func (*StableBloomFilter) GobEncode

func (s *StableBloomFilter) GobEncode() ([]byte, error)

GobEncode implements gob.GobEncoder interface.

func (*StableBloomFilter) K

func (s *StableBloomFilter) K() uint

K returns the number of hash functions.

func (*StableBloomFilter) P

func (s *StableBloomFilter) P() uint

P returns the number of cells decremented on every add.

func (*StableBloomFilter) ReadFrom

func (s *StableBloomFilter) ReadFrom(stream io.Reader) (int64, error)

ReadFrom reads a binary representation of StableBloomFilter (such as might have been written by WriteTo()) from an i/o stream. It returns the number of bytes read.

func (*StableBloomFilter) Reset

func (s *StableBloomFilter) Reset() *StableBloomFilter

Reset restores the Stable Bloom Filter to its original state. It returns the filter to allow for chaining.

func (*StableBloomFilter) SetHash

func (s *StableBloomFilter) SetHash(h hash.Hash64)

SetHash sets the hashing function used in the filter. For the effect on false positive rates see: https://github.com/tylertreat/BoomFilters/pull/1

func (*StableBloomFilter) StablePoint

func (s *StableBloomFilter) StablePoint() float64

StablePoint returns the limit of the expected fraction of zeros in the Stable Bloom Filter when the number of iterations goes to infinity. When this limit is reached, the Stable Bloom Filter is considered stable.

func (*StableBloomFilter) Test

func (s *StableBloomFilter) Test(data []byte) bool

Test will test for membership of the data and returns true if it is a member, false if not. This is a probabilistic test, meaning there is a non-zero probability of false positives and false negatives.

func (*StableBloomFilter) TestAndAdd

func (s *StableBloomFilter) TestAndAdd(data []byte) bool

TestAndAdd is equivalent to calling Test followed by Add. It returns true if the data is a member, false if not.

func (*StableBloomFilter) WriteTo

func (s *StableBloomFilter) WriteTo(stream io.Writer) (int64, error)

WriteTo writes a binary representation of the StableBloomFilter to an i/o stream. It returns the number of bytes written.

type TopK

type TopK struct {
	// contains filtered or unexported fields
}

TopK uses a Count-Min Sketch to calculate the top-K frequent elements in a stream.

func NewTopK

func NewTopK(epsilon, delta float64, k uint) *TopK

NewTopK creates a new TopK backed by a Count-Min sketch whose relative accuracy is within a factor of epsilon with probability delta. It tracks the k-most frequent elements.

func (*TopK) Add

func (t *TopK) Add(data []byte) *TopK

Add will add the data to the Count-Min Sketch and update the top-k heap if applicable. Returns the TopK to allow for chaining.

func (*TopK) Elements

func (t *TopK) Elements() []*Element

Elements returns the top-k elements from lowest to highest frequency.

func (*TopK) Reset

func (t *TopK) Reset() *TopK

Reset restores the TopK to its original state. It returns itself to allow for chaining.
