package z

v0.0.4
Published: Oct 20, 2020 License: Apache-2.0, MIT Imports: 22 Imported by: 0

README

bbloom: a bitset Bloom filter for go/golang

===

The package implements a fast Bloom filter with a real 'bitset' and JSONMarshal/JSONUnmarshal methods to store and reload the filter.

NOTE: the package uses unsafe.Pointer to set and read the bits from the bitset. If you're uncomfortable with using the unsafe package, please consider using my bloom filter package at github.com/AndreasBriese/bloom

===

changelog 11/2015: new thread-safe methods AddTS(), HasTS(), AddIfNotHasTS() following a suggestion from Srdjan Marinovic (github @a-little-srdjan), who used this to code a bloomfilter cache.

This bloom filter was developed to strengthen a website-log database and was tested and optimized for this log-entry mask: "2014/%02i/%02i %02i:%02i:%02i /info.html". Nonetheless bbloom should work with any other form of entries.

The hash function is a modified Berkeley DB sdbm hash (optimized for smaller strings); see sdbm at http://www.cse.yorku.ca/~oz/hash.html

sipHash (SipHash-2-4, a fast short-input PRF created by Jean-Philippe Aumasson and Daniel J. Bernstein) was found to be about as fast. sipHash has been ported to Go by Dmitry Chestnykh (github.com/dchest/siphash).

Minimum hashset size is: 512 ([4]uint64; will be set automatically).

install

go get github.com/AndreasBriese/bbloom

test

  • change to folder ../bbloom
  • create wordlist in file "words.txt" (you might use python permut.py)
  • run 'go test -bench=.' within the folder
go test -bench=.

If you've installed the GOCONVEY TDD-framework http://goconvey.co/ you can run the tests automatically.

The tests now use Go's testing framework (keep in mind that the reported op timings refer to 65536 operations of Add, Has, and AddIfNotHas respectively).

usage

After installation, add

import (
	...
	"github.com/AndreasBriese/bbloom"
	...
)

to your imports. Then, in your program, use

// create a bloom filter for 65536 items and a 1% false-positive ratio
bf := bbloom.New(float64(1<<16), float64(0.01))

// or 
// create a bloom filter with a 650000-entry bitset for 65536 items and 7 locs per hash explicitly
// bf = bbloom.New(float64(650000), float64(7))
// or
bf = bbloom.New(650000.0, 7.0)

// add one item
bf.Add([]byte("butter"))

// Number of elements added is exposed now 
// Note: ElemNum will not be included in JSON export (for compatibility with older versions)
nOfElementsInFilter := bf.ElemNum

// check if item is in the filter
isIn := bf.Has([]byte("butter"))    // should be true
isNotIn := bf.Has([]byte("Butter")) // should be false

// 'add only if item is new' to the bloomfilter
added := bf.AddIfNotHas([]byte("butter"))    // should be false because 'butter' is already in the set
added = bf.AddIfNotHas([]byte("buTTer"))    // should be true because 'buTTer' is new

// thread safe versions for concurrent use: AddTS, HasTS, AddIfNotHasTS
// add one item
bf.AddTS([]byte("peanutbutter"))
// check if item is in the filter
isIn = bf.HasTS([]byte("peanutbutter"))    // should be true
isNotIn = bf.HasTS([]byte("peanutButter")) // should be false
// 'add only if item is new' to the bloomfilter
added = bf.AddIfNotHasTS([]byte("butter"))    // should be false because 'peanutbutter' is already in the set
added = bf.AddIfNotHasTS([]byte("peanutbuTTer"))    // should be true because 'penutbuTTer' is new

// convert to JSON ([]byte) 
Json := bf.JSONMarshal()

// the bloomfilter's Mutex is exposed for external un-/locking
// i.e. mutex lock while doing JSON conversion
bf.Mtx.Lock()
Json = bf.JSONMarshal()
bf.Mtx.Unlock()

// restore a bloom filter from storage 
bfNew := bbloom.JSONUnmarshal(Json)

isInNew := bfNew.Has([]byte("butter"))    // should be true
isNotInNew := bfNew.Has([]byte("Butter")) // should be false

to work with the bloom filter.

why 'fast'?

It's about 3 times faster than William Fitzgerald's bitset bloom filter (https://github.com/willf/bloom), and about as fast as my []bool set variant for Bloom filters (see https://github.com/AndreasBriese/bloom) while having an 8 times smaller memory footprint:

Bloom filter (filter size 524288, 7 hashlocs)
github.com/AndreasBriese/bbloom 'Add' 65536 items (10 repetitions): 6595800 ns (100 ns/op)
github.com/AndreasBriese/bbloom 'Has' 65536 items (10 repetitions): 5986600 ns (91 ns/op)
github.com/AndreasBriese/bloom 'Add' 65536 items (10 repetitions): 6304684 ns (96 ns/op)
github.com/AndreasBriese/bloom 'Has' 65536 items (10 repetitions): 6568663 ns (100 ns/op)

github.com/willf/bloom 'Add' 65536 items (10 repetitions): 24367224 ns (371 ns/op)
github.com/willf/bloom 'Test' 65536 items (10 repetitions): 21881142 ns (333 ns/op)
github.com/dataence/bloom/standard 'Add' 65536 items (10 repetitions): 23041644 ns (351 ns/op)
github.com/dataence/bloom/standard 'Check' 65536 items (10 repetitions): 19153133 ns (292 ns/op)
github.com/cabello/bloom 'Add' 65536 items (10 repetitions): 131921507 ns (2012 ns/op)
github.com/cabello/bloom 'Contains' 65536 items (10 repetitions): 131108962 ns (2000 ns/op)

(on MBPro15 OSX10.8.5 i7 4Core 2.4Ghz)

With 32-bit bloom filters (bloom32) using the modified sdbm, bloom32 does the hashing with only 2 bit shifts, one xor and one subtraction per byte. sdbm is about as fast as fnv64a but gives fewer collisions with the dataset (see mask above). bloom.New(float64(10 * 1<<16), float64(7)) populated with 1<<16 random items from the dataset (see above) and tested against the rest results in less than 0.05% collisions.
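
For reference, here is a minimal Go sketch of the classic sdbm hash; this is not the exact modified variant used by bbloom (which changes the per-byte operations for short keys), just the well-known base form:

// Classic sdbm hash, shown for reference only; bbloom uses a modified
// variant tuned for short keys, so its per-byte operations differ.
func sdbm(data []byte) uint64 {
	var hash uint64
	for _, c := range data {
		// equivalent to hash*65599 + c
		hash = uint64(c) + (hash << 6) + (hash << 16) - hash
	}
	return hash
}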

Documentation

Index

Constants

const (
	// MaxArrayLen is a safe maximum length for slices on this architecture.
	MaxArrayLen = 1<<50 - 1
)

Variables

var NewFile = errors.New("Create a new file")

Functions

func CPUTicks

func CPUTicks() int64

CPUTicks is a faster alternative to NanoTime to measure time duration.

func Calloc

func Calloc(n int64) []byte

Calloc allocates a slice of size n.

func CallocNoRef

func CallocNoRef(n int) []byte

CallocNoRef will not give you memory back without jemalloc.

func FastRand

func FastRand() uint32

FastRand is a fast thread local random function.

func Free

func Free(b []byte)

Free does not do anything in this mode.
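
A minimal sketch of pairing Calloc with Free (the import path github.com/dgraph-io/ristretto/z is assumed; with jemalloc builds the memory must be returned via Free, while in this mode Free is a no-op):

package main

import (
	"fmt"

	"github.com/dgraph-io/ristretto/z"
)

func main() {
	buf := z.Calloc(1 << 10) // zeroed 1 KiB slice
	defer z.Free(buf)        // required with jemalloc builds; a no-op in this mode

	copy(buf, []byte("hello"))
	fmt.Println(len(buf), z.NumAllocBytes())
}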

func HistogramBounds

func HistogramBounds(minExponent, maxExponent uint32) []float64

HistogramBounds creates bounds for a histogram. The bounds are powers of two of the form [2^minExponent, ..., 2^maxExponent].

func KeyToHash

func KeyToHash(key interface{}) (uint64, uint64)

TODO: Figure out a way to re-use memhash for the second uint64 hash. We already know that appending bytes isn't reliable for generating a second hash (see Ristretto PR #88). We also know that while the Go runtime has a runtime memhash128 function, it's not possible to use it to generate [2]uint64 or anything resembling a 128-bit hash, even though that's exactly what we need in this situation.
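
A minimal usage sketch (the import path github.com/dgraph-io/ristretto/z is assumed, and the key is illustrative):

package main

import (
	"fmt"

	"github.com/dgraph-io/ristretto/z"
)

func main() {
	// KeyToHash yields two 64-bit hashes for a key; callers typically use
	// the first as the primary hash and the second as a conflict check.
	h, conflict := z.KeyToHash([]byte("user:42"))
	fmt.Println(h, conflict)
}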

func Madvise

func Madvise(b []byte, readahead bool) error

Madvise uses the madvise system call to give advice about the use of memory when using a slice that is memory-mapped to a file. Set the readahead flag to false if page references are expected in random order.

func MemHash

func MemHash(data []byte) uint64

MemHash is the hash function used by Go maps; it utilizes available hardware instructions (behaves as aeshash if AES instructions are available). NOTE: The hash seed changes for every process, so this cannot be used as a persistent hash.

func MemHashString

func MemHashString(str string) uint64

MemHashString is the hash function used by go map, it utilizes available hardware instructions (behaves as aeshash if aes instruction is available). NOTE: The hash seed changes for every process. So, this cannot be used as a persistent hash.

func Mmap

func Mmap(fd *os.File, writable bool, size int64) ([]byte, error)

Mmap uses the mmap system call to memory-map a file. If writable is true, memory protection of the pages is set so that they may be written to as well.

func Msync

func Msync(b []byte) error

Msync would call sync on the mmapped data.

func Munmap

func Munmap(b []byte) error

Munmap unmaps a previously mapped slice.
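
A minimal sketch combining Mmap, Msync, and Munmap (the file name and size are illustrative; the import path github.com/dgraph-io/ristretto/z is assumed):

package main

import (
	"log"
	"os"

	"github.com/dgraph-io/ristretto/z"
)

func main() {
	fd, err := os.OpenFile("data.bin", os.O_RDWR|os.O_CREATE, 0o644)
	if err != nil {
		log.Fatal(err)
	}
	defer fd.Close()

	const size = 1 << 20 // the file must be large enough to back the mapping
	if err := fd.Truncate(size); err != nil {
		log.Fatal(err)
	}

	buf, err := z.Mmap(fd, true, size) // writable memory-mapped view
	if err != nil {
		log.Fatal(err)
	}
	copy(buf, []byte("hello"))

	if err := z.Msync(buf); err != nil { // flush dirty pages to disk
		log.Fatal(err)
	}
	if err := z.Munmap(buf); err != nil { // release the mapping
		log.Fatal(err)
	}
}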

func NanoTime

func NanoTime() int64

NanoTime returns the current time in nanoseconds from a monotonic clock.

func NumAllocBytes

func NumAllocBytes() int64

NumAllocBytes returns the number of bytes allocated using calls to z.Calloc. The allocations could be happening via either Go or jemalloc, depending upon the build flags.

func PrintAllocators

func PrintAllocators()

func PrintLeaks

func PrintLeaks()

func ReadMemStats

func ReadMemStats(_ *MemStats)

ReadMemStats doesn't do anything since all the memory is being managed by the Go runtime.

func StatsPrint

func StatsPrint()

func SyncDir

func SyncDir(dir string) error

func ZeroOut

func ZeroOut(dst []byte, start, end int)

ZeroOut zeroes out all the bytes in the range [start, end).

Types

type Allocator

type Allocator struct {
	Ref uint64
	Tag string
	// contains filtered or unexported fields
}

Allocator amortizes the cost of small allocations by allocating memory in bigger chunks. Internally it uses z.Calloc to allocate memory. Once allocated, the memory is not moved, so it is safe to unsafe-cast the allocated bytes to Go struct pointers.

func AllocatorFrom

func AllocatorFrom(ref uint64) *Allocator

AllocatorFrom would return the allocator corresponding to the ref.

func NewAllocator

func NewAllocator(sz int) *Allocator

NewAllocator creates an allocator starting with the given size.

func (*Allocator) Allocate

func (a *Allocator) Allocate(sz int) []byte

Allocate would allocate a byte slice of length sz. It is safe to use this memory to unsafe cast to Go structs.

func (*Allocator) AllocateAligned

func (a *Allocator) AllocateAligned(sz int) []byte

func (*Allocator) Allocated

func (a *Allocator) Allocated() uint64

func (*Allocator) Copy

func (a *Allocator) Copy(buf []byte) []byte

func (*Allocator) MaxAlloc

func (a *Allocator) MaxAlloc() int

func (*Allocator) Release

func (a *Allocator) Release()

Release would release the memory back. Remember to make this call to avoid memory leaks.

func (*Allocator) Size

func (a *Allocator) Size() uint64

Size returns the size of the allocations so far.
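
A short sketch of the Allocator lifecycle (the import path github.com/dgraph-io/ristretto/z is assumed; sizes are illustrative):

package main

import (
	"fmt"

	"github.com/dgraph-io/ristretto/z"
)

func main() {
	// Start with a 1 KiB chunk; the allocator grows by adding bigger
	// chunks as needed and never moves memory it has handed out.
	a := z.NewAllocator(1 << 10)
	defer a.Release() // returns all chunks; forgetting this leaks memory

	b := a.Allocate(16) // a 16-byte slice carved out of the current chunk
	copy(b, "hello allocator!")

	fmt.Println(a.Allocated(), a.Size())
}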

type Bloom

type Bloom struct {
	ElemNum uint64
	// contains filtered or unexported fields
}

Bloom filter

func JSONUnmarshal

func JSONUnmarshal(dbData []byte) (*Bloom, error)

JSONUnmarshal takes a JSON object (type bloomJSONImExport) as []byte and returns a bloom32 / bloom64 object.

func NewBloomFilter

func NewBloomFilter(params ...float64) (bloomfilter *Bloom)

NewBloomFilter returns a new bloomfilter.

func (*Bloom) Add

func (bl *Bloom) Add(hash uint64)

Add adds hash of a key to the bloomfilter.

func (*Bloom) AddIfNotHas

func (bl *Bloom) AddIfNotHas(hash uint64) bool

AddIfNotHas only Adds hash, if it's not present in the bloomfilter. Returns true if hash was added. Returns false if hash was already registered in the bloomfilter.

func (*Bloom) Clear

func (bl *Bloom) Clear()

Clear resets the Bloom filter.

func (Bloom) Has

func (bl Bloom) Has(hash uint64) bool

Has checks if bit(s) for entry hash is/are set, returns true if the hash was added to the Bloom Filter.

func (*Bloom) IsSet

func (bl *Bloom) IsSet(idx uint64) bool

IsSet checks if bit[idx] of bitset is set, returns true/false.

func (Bloom) JSONMarshal

func (bl Bloom) JSONMarshal() []byte

JSONMarshal returns JSON-object (type bloomJSONImExport) as []byte.

func (*Bloom) Set

func (bl *Bloom) Set(idx uint64)

Set sets the bit[idx] of bitset.

func (*Bloom) Size

func (bl *Bloom) Size(sz uint64)

Size makes the Bloom filter with a bitset of size sz.

func (*Bloom) TotalSize

func (bl *Bloom) TotalSize() int

TotalSize returns the total size of the bloom filter.
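
A short sketch of this package's hash-based Bloom API; note that, unlike the bbloom README example above, Add and Has take a uint64 hash, so keys are hashed first (MemHash is used here; the import path github.com/dgraph-io/ristretto/z is assumed):

package main

import (
	"fmt"

	"github.com/dgraph-io/ristretto/z"
)

func main() {
	// Sized for ~65536 entries with a 1% false-positive target.
	bf := z.NewBloomFilter(float64(1<<16), 0.01)

	h := z.MemHash([]byte("butter")) // per-process seed; fine for an in-memory filter

	bf.Add(h)
	fmt.Println(bf.Has(h))                                   // true
	fmt.Println(bf.AddIfNotHas(h))                           // false: already present
	fmt.Println(bf.AddIfNotHas(z.MemHash([]byte("buTTer")))) // true: new entry
}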

type Buffer

type Buffer struct {
	// contains filtered or unexported fields
}

Buffer is equivalent of bytes.Buffer without the ability to read. It is NOT thread-safe.

In UseCalloc mode, z.Calloc is used to allocate memory, which depending upon how the code is compiled could use jemalloc for allocations.

In UseMmap mode, Buffer uses file mmap to allocate memory. This allows us to store big data structures without using physical memory.

MaxSize can be set to limit the memory usage.

func NewBuffer

func NewBuffer(sz int64) *Buffer

NewBuffer is a helper utility, which creates a virtually unlimited Buffer in UseCalloc mode.

func NewBufferWith

func NewBufferWith(sz, maxSz int64, bufType BufferType) (*Buffer, error)

NewBufferWith would allocate a buffer of size sz upfront, with the total size of the buffer not exceeding maxSz. Both sz and maxSz can be set to zero, in which case reasonable defaults would be used. Buffer can't be used without initialization via NewBuffer.

func (*Buffer) Allocate

func (b *Buffer) Allocate(n int64) []byte

Allocate is a way to get a slice of size n back from the buffer. This slice can be directly written to. Warning: Allocate is not thread-safe. The byte slice returned MUST be used before further calls to Buffer.

func (*Buffer) AllocateOffset

func (b *Buffer) AllocateOffset(n int64) int64

AllocateOffset works the same way as Allocate, but instead of returning a byte slice, it returns the offset of the allocation.

func (*Buffer) AutoMmapAfter

func (b *Buffer) AutoMmapAfter(size int64)

func (*Buffer) Bytes

func (b *Buffer) Bytes() []byte

Bytes would return all the written bytes as a slice.

func (*Buffer) Data

func (b *Buffer) Data(offset int64) []byte

func (*Buffer) Grow

func (b *Buffer) Grow(n int64)

Grow would grow the buffer to have at least n more bytes. In case the buffer is at capacity, it would reallocate twice the size of current capacity + n, to ensure n bytes can be written to the buffer without further allocation. In UseMmap mode, this might result in underlying file expansion.

func (*Buffer) IsEmpty

func (b *Buffer) IsEmpty() bool

func (*Buffer) Len

func (b *Buffer) Len() int64

Len would return the number of bytes written to the buffer so far.

func (*Buffer) Release

func (b *Buffer) Release() error

Release would free up the memory allocated by the buffer. Once the usage of buffer is done, it is important to call Release, otherwise a memory leak can happen.

func (*Buffer) Reset

func (b *Buffer) Reset()

Reset would reset the buffer to be reused.

func (*Buffer) Slice

func (b *Buffer) Slice(offset int64) ([]byte, int64)

Slice would return the slice written at offset.

func (*Buffer) SliceAllocate

func (b *Buffer) SliceAllocate(sz int64) []byte

SliceAllocate would encode the size provided into the buffer, followed by a call to Allocate, hence returning the slice of size sz. This can be used to allocate a lot of small buffers into this big buffer. Note that SliceAllocate should NOT be mixed with normal calls to Write.

func (*Buffer) SliceIterate

func (b *Buffer) SliceIterate(f func(slice []byte) error) error

func (*Buffer) SliceOffsets

func (b *Buffer) SliceOffsets() []int64

SliceOffsets is an expensive function. Use sparingly.

func (*Buffer) SortSlice

func (b *Buffer) SortSlice(less func(left, right []byte) bool)

SortSlice is like SortSliceBetween but sorting over the entire buffer.

func (*Buffer) SortSliceBetween

func (b *Buffer) SortSliceBetween(start, end int64, less LessFunc)

func (*Buffer) Write

func (b *Buffer) Write(p []byte) (n int64, err error)

Write would write p bytes to the buffer.

func (*Buffer) WriteSlice

func (b *Buffer) WriteSlice(slice []byte)
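
A short sketch of the SliceAllocate / SliceIterate pattern (the import path github.com/dgraph-io/ristretto/z is assumed; sizes are illustrative):

package main

import (
	"fmt"
	"log"

	"github.com/dgraph-io/ristretto/z"
)

func main() {
	b := z.NewBuffer(1 << 10) // starts in UseCalloc mode
	defer b.Release()         // frees the underlying Calloc'd (or mmap'd) memory

	// SliceAllocate length-prefixes each slice, so SliceIterate can walk them.
	copy(b.SliceAllocate(5), "alpha")
	copy(b.SliceAllocate(4), "beta")

	if err := b.SliceIterate(func(s []byte) error {
		fmt.Printf("%s\n", s)
		return nil
	}); err != nil {
		log.Fatal(err)
	}
}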

type BufferType

type BufferType int
const (
	UseCalloc BufferType = iota
	UseMmap
	UseInvalid
)

func (BufferType) String

func (t BufferType) String() string

type Closer

type Closer struct {
	// contains filtered or unexported fields
}

Closer holds the two things we need to close a goroutine and wait for it to finish: a chan to tell the goroutine to shut down, and a WaitGroup with which to wait for it to finish shutting down.

func NewCloser

func NewCloser(initial int) *Closer

NewCloser constructs a new Closer, with an initial count on the WaitGroup.

func (*Closer) AddRunning

func (lc *Closer) AddRunning(delta int)

AddRunning Add()'s delta to the WaitGroup.

func (*Closer) Ctx

func (lc *Closer) Ctx() context.Context

Ctx can be used to get a context, which would automatically get cancelled when Signal is called.

func (*Closer) Done

func (lc *Closer) Done()

Done calls Done() on the WaitGroup.

func (*Closer) HasBeenClosed

func (lc *Closer) HasBeenClosed() <-chan struct{}

HasBeenClosed gets signaled when Signal() is called.

func (*Closer) Signal

func (lc *Closer) Signal()

Signal signals the HasBeenClosed signal.

func (*Closer) SignalAndWait

func (lc *Closer) SignalAndWait()

SignalAndWait calls Signal(), then Wait().

func (*Closer) Wait

func (lc *Closer) Wait()

Wait waits on the WaitGroup. (It waits for NewCloser's initial value, AddRunning, and Done calls to balance out.)
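
A minimal sketch of the Closer pattern for shutting down a worker goroutine (the import path github.com/dgraph-io/ristretto/z is assumed):

package main

import (
	"fmt"
	"time"

	"github.com/dgraph-io/ristretto/z"
)

func main() {
	c := z.NewCloser(1) // one goroutine to wait for

	go func() {
		defer c.Done()
		ticker := time.NewTicker(100 * time.Millisecond)
		defer ticker.Stop()
		for {
			select {
			case <-c.HasBeenClosed(): // Signal() was called
				fmt.Println("worker shutting down")
				return
			case <-ticker.C:
				// periodic work would go here
			}
		}
	}()

	time.Sleep(300 * time.Millisecond)
	c.SignalAndWait() // ask the worker to stop, then wait for Done()
}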

type HistogramData

type HistogramData struct {
	Bounds         []float64
	Count          int64
	CountPerBucket []int64
	Min            int64
	Max            int64
	Sum            int64
}

HistogramData stores the information needed to represent the sizes of the keys and values as a histogram.

func NewHistogramData

func NewHistogramData(bounds []float64) *HistogramData

NewHistogramData returns a new instance of HistogramData with properly initialized fields.

func (*HistogramData) Copy

func (histogram *HistogramData) Copy() *HistogramData

func (*HistogramData) Mean

func (histogram *HistogramData) Mean() float64

Mean returns the mean value for the histogram.

func (*HistogramData) String

func (histogram *HistogramData) String() string

String converts the histogram data into human-readable string.

func (*HistogramData) Update

func (histogram *HistogramData) Update(value int64)

Update changes the Min and Max fields if value is less than or greater than the current values.
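
A small sketch combining HistogramBounds, Update, and Mean (the import path github.com/dgraph-io/ristretto/z is assumed; values are illustrative):

package main

import (
	"fmt"

	"github.com/dgraph-io/ristretto/z"
)

func main() {
	// Bucket boundaries at powers of two from 2^5 to 2^10 (32 .. 1024).
	h := z.NewHistogramData(z.HistogramBounds(5, 10))

	for _, sz := range []int64{40, 100, 900, 3000} {
		h.Update(sz) // records the value, adjusting Count, Sum, Min and Max
	}

	fmt.Println(h.Mean())
	fmt.Println(h.String())
}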

type LessFunc

type LessFunc func(a, b []byte) bool

type MemStats

type MemStats struct {
	// Total number of bytes allocated by the application.
	// http://jemalloc.net/jemalloc.3.html#stats.allocated
	Allocated uint64
	// Total number of bytes in active pages allocated by the application. This
	// is a multiple of the page size, and greater than or equal to
	// Allocated.
	// http://jemalloc.net/jemalloc.3.html#stats.active
	Active uint64
	// Maximum number of bytes in physically resident data pages mapped by the
	// allocator, comprising all pages dedicated to allocator metadata, pages
	// backing active allocations, and unused dirty pages. This is a maximum
	// rather than precise because pages may not actually be physically
	// resident if they correspond to demand-zeroed virtual memory that has not
	// yet been touched. This is a multiple of the page size, and is larger
	// than stats.active.
	// http://jemalloc.net/jemalloc.3.html#stats.resident
	Resident uint64
	// Total number of bytes in virtual memory mappings that were retained
	// rather than being returned to the operating system via e.g. munmap(2) or
	// similar. Retained virtual memory is typically untouched, decommitted, or
	// purged, so it has no strongly associated physical memory (see extent
	// hooks http://jemalloc.net/jemalloc.3.html#arena.i.extent_hooks for
	// details). Retained memory is excluded from mapped memory statistics,
	// e.g. stats.mapped (http://jemalloc.net/jemalloc.3.html#stats.mapped).
	// http://jemalloc.net/jemalloc.3.html#stats.retained
	Retained uint64
}

MemStats is used to fetch JE Malloc Stats. The stats are fetched from the mallctl namespace http://jemalloc.net/jemalloc.3.html#mallctl_namespace.

type MmapFile

type MmapFile struct {
	Data []byte
	Fd   *os.File
}

MmapFile represents an mmapd file and includes both the buffer to the data and the file descriptor.

func OpenMmapFile

func OpenMmapFile(filename string, flag int, maxSz int) (*MmapFile, error)

OpenMmapFile opens an existing file or creates a new file. If the file is created, it would truncate the file to maxSz. In both cases, it would mmap the file to maxSz and return it. In case the file is created, z.NewFile is returned.

func OpenMmapFileUsing

func OpenMmapFileUsing(fd *os.File, maxSz int, writable bool) (*MmapFile, error)

func (*MmapFile) AllocateSlice

func (m *MmapFile) AllocateSlice(sz, offset int) ([]byte, int)

AllocateSlice allocates a slice of the given size at the given offset.

func (*MmapFile) Bytes

func (m *MmapFile) Bytes(off, sz int) ([]byte, error)

Bytes returns data starting from offset off of size sz. If there's not enough data, it would return nil slice and io.EOF.

func (*MmapFile) Close

func (m *MmapFile) Close(maxSz int64) error

Close would close the file. It would also truncate the file if maxSz >= 0.

func (*MmapFile) Delete

func (m *MmapFile) Delete() error

func (*MmapFile) NewReader

func (m *MmapFile) NewReader(offset int) io.Reader

func (*MmapFile) Slice

func (m *MmapFile) Slice(offset int) []byte

Slice returns the slice at the given offset.

func (*MmapFile) Sync

func (m *MmapFile) Sync() error

func (*MmapFile) Truncate

func (m *MmapFile) Truncate(maxSz int64) error

Truncate would truncate the mmapped file to the given size. On Linux and others, we could directly just truncate the underlying file, but in Windows, we can't do that. So, unmap first, then truncate, then re-map.
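
A short sketch of the MmapFile lifecycle (the file name and size are illustrative; the import path github.com/dgraph-io/ristretto/z is assumed, and the z.NewFile check reflects the behaviour described above for newly created files):

package main

import (
	"errors"
	"log"
	"os"

	"github.com/dgraph-io/ristretto/z"
)

func main() {
	// Open (or create) a file and mmap it to 1 MiB.
	mf, err := z.OpenMmapFile("data.mmap", os.O_RDWR|os.O_CREATE, 1<<20)
	if err != nil && !errors.Is(err, z.NewFile) {
		log.Fatal(err)
	}

	copy(mf.Data, "hello mmap")       // write through the mapping
	if err := mf.Sync(); err != nil { // flush to disk
		log.Fatal(err)
	}
	if err := mf.Close(-1); err != nil { // a negative maxSz skips truncation
		log.Fatal(err)
	}
}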
