README

gorocksdb

gorocksdb is a Go wrapper for RocksDB.

The API has been godoc'ed and is available on the web.

Building

Currently the library is only compatible with the following RocksDB repository: https://github.com/tecbot/rocksdb

CGO_CFLAGS="-I/path/to/rocksdb/include" CGO_LDFLAGS="-L/path/to/rocksdb" go get github.com/tecbot/gorocksdb

Documentation

Overview

Package gorocksdb provides the ability to create and access RocksDB databases.

gorocksdb.OpenDb opens and creates databases.

opts := gorocksdb.NewDefaultOptions()
opts.SetBlockCache(gorocksdb.NewLRUCache(3<<30))
opts.SetCreateIfMissing(true)
db, err := gorocksdb.OpenDb(opts, "/path/to/db")

The DB struct returned by OpenDb provides DB.Get, DB.Put, DB.Merge and DB.Delete to modify and query the database.

ro := gorocksdb.NewDefaultReadOptions()
wo := gorocksdb.NewDefaultWriteOptions()
// if ro and wo are not used again, be sure to Destroy them.
err = db.Put(wo, []byte("foo"), []byte("bar"))
...
value, err := db.Get(ro, []byte("foo"))
defer value.Free()
...
err = db.Delete(wo, []byte("foo"))

For bulk reads, use an Iterator. If you want to avoid disturbing your live traffic while doing the bulk read, be sure to call SetFillCache(false) on the ReadOptions you use when creating the Iterator.

ro := gorocksdb.NewDefaultReadOptions()
ro.SetFillCache(false)
it := db.NewIterator(ro)
defer it.Close()
it.Seek([]byte("foo"))
for ; it.Valid(); it.Next() {
	key := it.Key()
	value := it.Value()
	fmt.Printf("Key: %v Value: %v\n", key.Data(), value.Data())
	key.Free()
	value.Free()
}
if err := it.Err(); err != nil {
	...
}

Batched, atomic writes can be performed with a WriteBatch and DB.Write.

wb := gorocksdb.NewWriteBatch()
// defer wb.Destroy or use wb.Clear and reuse.
wb.Delete([]byte("foo"))
wb.Put([]byte("foo"), []byte("bar"))
wb.Put([]byte("bar"), []byte("foo"))
err := db.Write(wo, wb)

If your working dataset does not fit in memory, you'll want to add a bloom filter to your database. NewBloomFilter and Options.SetFilterPolicy are what you want. The argument to NewBloomFilter is the number of bits in the filter to use per key in your database.

filter := gorocksdb.NewBloomFilter(10)
opts.SetFilterPolicy(filter)
db, err := gorocksdb.OpenDb(opts, "/path/to/db")

If you're using a custom comparator in your code, be aware you may have to make your own filter policy object.

This documentation is not a complete discussion of RocksDB. Please read the RocksDB documentation <http://rocksdb.org/> for information on its operation. You'll find lots of goodies there.

Index

Constants

View Source
const (
	NoCompression     = CompressionType(0)
	SnappyCompression = CompressionType(1)
	ZlibCompression   = CompressionType(2)
	BZip2Compression  = CompressionType(3)
)
View Source
const (
	LevelCompactionStyle     = CompactionStyle(0)
	UniversalCompactionStyle = CompactionStyle(1)
)
View Source
const (
	// data in memtable, block cache, OS cache or storage
	ReadAllTier = ReadTier(0)
	// data in memtable or block cache
	BlockCacheTier = ReadTier(1)
)
View Source
const (
	CompactionStopStyleSimilarSize = UniversalCompactionStopStyle(0)
	CompactionStopStyleTotalSize   = UniversalCompactionStopStyle(1)
)

Variables

This section is empty.

Functions

func BoolToChar

func BoolToChar(b bool) C.uchar

BoolToChar converts a bool value to C.uchar

func Btoi

func Btoi(b bool) int

Btoi converts a bool value to int

func ByteToChar

func ByteToChar(b []byte) *C.char

ByteToChar returns *C.char from byte slice

func CharToBool

func CharToBool(c C.uchar) bool

CharToBool converts a C.uchar value to bool

func CharToByte

func CharToByte(data *C.char, len C.size_t) []byte

func DestroyDb

func DestroyDb(name string, opts *Options) error

DestroyDb removes a database entirely, removing everything from the filesystem.

func RepairDb

func RepairDb(name string, opts *Options) error

RepairDb repairs a database.

func StringToChar

func StringToChar(s string) *C.char

StringToChar returns *C.char from string

Types

type Cache

type Cache struct {
	// contains filtered or unexported fields
}

Cache is a cache used to keep data read from storage in memory.

func NewLRUCache

func NewLRUCache(capacity int) *Cache

NewLRUCache creates a new LRU Cache object with the capacity given.

func NewNativeCache

func NewNativeCache(c *C.rocksdb_cache_t) *Cache

NewNativeCache creates a Cache object.

func (*Cache) Destroy

func (self *Cache) Destroy()

Destroy deallocates the Cache object.

type CompactionStyle

type CompactionStyle uint

type Comparator

type Comparator struct {
	// contains filtered or unexported fields
}

A Comparator object provides a total order across slices that are used as keys in an sstable or a database.

func NewComparator

func NewComparator(handler ComparatorHandler) *Comparator

NewComparator creates a new comparator for the given handler.

func NewNativeComparator

func NewNativeComparator(c *C.rocksdb_comparator_t) *Comparator

NewNativeComparator allocates a Comparator object.

func (*Comparator) Destroy

func (self *Comparator) Destroy()

Destroy deallocates the Comparator object.

type ComparatorHandler

type ComparatorHandler interface {
	// Three-way comparison. Returns value:
	//   < 0 iff "a" < "b",
	//   == 0 iff "a" == "b",
	//   > 0 iff "a" > "b"
	Compare(a []byte, b []byte) int

	// The name of the comparator.
	Name() string
}
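
For illustration, a handler that orders keys in reverse byte-wise order might be sketched as follows (the type name and the use of the standard bytes package are assumptions, not part of gorocksdb):

// import "bytes"

type reverseComparator struct{}

func (c *reverseComparator) Compare(a, b []byte) int {
	// Invert the usual lexicographic byte-wise ordering.
	return -bytes.Compare(a, b)
}

func (c *reverseComparator) Name() string {
	return "example.ReverseComparator"
}

// Wire it into the options used to open the database:
//   opts.SetComparator(gorocksdb.NewComparator(&reverseComparator{}))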

type CompressionOptions

type CompressionOptions struct {
	WindowBits int
	Level      int
	Strategy   int
}

Compression options for different compression algorithms like Zlib.

func NewCompressionOptions

func NewCompressionOptions(windowBits, level, strategy int) *CompressionOptions

NewCompressionOptions creates a CompressionOptions object.

func NewDefaultCompressionOptions

func NewDefaultCompressionOptions() *CompressionOptions

NewDefaultCompressionOptions creates a default CompressionOptions object.

type CompressionType

type CompressionType uint

DB contents are stored in a set of blocks, each of which holds a sequence of key,value pairs. Each block may be compressed before being stored in a file. The following enum describes which compression method (if any) is used to compress a block.

type DB

type DB struct {
	// contains filtered or unexported fields
}

DB is a reusable handle to a RocksDB database on disk, created by Open.

func OpenDb

func OpenDb(opts *Options, name string) (*DB, error)

OpenDb opens a database with the specified options.

func (*DB) Close

func (self *DB) Close()

Close closes the database.

func (*DB) CompactRange

func (self *DB) CompactRange(r Range)

CompactRange runs a manual compaction on the Range of keys given. This is not likely to be needed for typical usage.

func (*DB) Delete

func (self *DB) Delete(opts *WriteOptions, key []byte) error

Delete removes the data associated with the key from the database.

func (*DB) DisableFileDeletions

func (self *DB) DisableFileDeletions() error

DisableFileDeletions disables file deletions and should be used when backing up the database.

func (*DB) EnableFileDeletions

func (self *DB) EnableFileDeletions(force bool) error

EnableFileDeletions enables file deletions for the database.

func (*DB) Flush

func (self *DB) Flush(opts *FlushOptions) error

Flush triggers a manual flush for the database.

func (*DB) Get

func (self *DB) Get(opts *ReadOptions, key []byte) (*Slice, error)

Get returns the data associated with the key from the database.

func (*DB) GetApproximateSizes

func (self *DB) GetApproximateSizes(ranges []Range) []uint64

GetApproximateSizes returns the approximate number of bytes of file system space used by one or more key ranges.

The keys counted will begin at Range.Start and end on the key before Range.Limit.
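
For example, a sketch of estimating the on-disk size of a single key range (the key bounds are illustrative):

ranges := []gorocksdb.Range{
	{Start: []byte("a"), Limit: []byte("n")},
}
sizes := db.GetApproximateSizes(ranges)
fmt.Printf("approximate bytes in [a, n): %d\n", sizes[0])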

func (*DB) GetProperty

func (self *DB) GetProperty(propName string) string

GetProperty returns the value of a database property.

func (*DB) Merge

func (self *DB) Merge(opts *WriteOptions, key []byte, value []byte) error

Merge merges the given data with the existing data for the key in the database.

func (*DB) Name

func (self *DB) Name() string

Name returns the name of the database.

func (*DB) NewIterator

func (self *DB) NewIterator(opts *ReadOptions) *Iterator

NewIterator returns an Iterator over the database that uses the ReadOptions given.

func (*DB) NewSnapshot

func (self *DB) NewSnapshot() *Snapshot

NewSnapshot creates a new snapshot of the database.

func (*DB) Put

func (self *DB) Put(opts *WriteOptions, key, value []byte) error

Put writes data associated with a key to the database.

func (*DB) Write

func (self *DB) Write(opts *WriteOptions, batch *WriteBatch) error

Write writes a WriteBatch to the database.

type Env

type Env struct {
	// contains filtered or unexported fields
}

Env is a system call environment used by a database.

func NewDefaultEnv

func NewDefaultEnv() *Env

NewDefaultEnv creates a default environment.

func NewNativeEnv

func NewNativeEnv(c *C.rocksdb_env_t) *Env

NewNativeEnv creates an Env object.

func (*Env) Destroy

func (self *Env) Destroy()

Destroy deallocates the Env object.

func (*Env) SetBackgroundThreads

func (self *Env) SetBackgroundThreads(n int)

SetBackgroundThreads sets the number of background worker threads of a specific thread pool for this environment. 'LOW' is the default pool. Default: 1

func (*Env) SetHighPriorityBackgroundThreads

func (self *Env) SetHighPriorityBackgroundThreads(n int)

SetHighPriorityBackgroundThreads sets the size of the high priority thread pool that can be used to prevent compactions from stalling memtable flushes.
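
A sketch of wiring a dedicated Env into the database options so that memtable flushes run on the high priority pool (the thread counts are illustrative):

env := gorocksdb.NewDefaultEnv()
env.SetBackgroundThreads(4)             // LOW pool, used by compactions
env.SetHighPriorityBackgroundThreads(2) // HIGH pool, used by memtable flushes

opts := gorocksdb.NewDefaultOptions()
opts.SetEnv(env)
opts.SetMaxBackgroundCompactions(4)
opts.SetMaxBackgroundFlushes(2)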

type FilterPolicy

type FilterPolicy struct {
	// contains filtered or unexported fields
}

FilterPolicy is a factory type that allows the RocksDB database to create a filter, such as a bloom filter, which will be used to reduce reads.

func NewBloomFilter

func NewBloomFilter(bitsPerKey int) *FilterPolicy

Return a new filter policy that uses a bloom filter with approximately the specified number of bits per key. A good value for bits_per_key is 10, which yields a filter with ~1% false positive rate.

Note: if you are using a custom comparator that ignores some parts of the keys being compared, you must not use NewBloomFilter() and must provide your own FilterPolicy that also ignores the corresponding parts of the keys. For example, if the comparator ignores trailing spaces, it would be incorrect to use a FilterPolicy (like NewBloomFilter) that does not ignore trailing spaces in keys.

func NewFilterPolicy

func NewFilterPolicy(handler FilterPolicyHandler) *FilterPolicy

NewFilterPolicy creates a new filter policy for the given handler.

func NewNativeFilterPolicy

func NewNativeFilterPolicy(c *C.rocksdb_filterpolicy_t) *FilterPolicy

NewNativeFilterPolicy creates a filter policy object.

func (*FilterPolicy) Destroy

func (self *FilterPolicy) Destroy()

Destroy deallocates the FilterPolicy object.

type FilterPolicyHandler

type FilterPolicyHandler interface {
	// keys contains a list of keys (potentially with duplicates)
	// that are ordered according to the user supplied comparator.
	CreateFilter(keys [][]byte) []byte

	// "filter" contains the data appended by a preceding call to
	// CreateFilter(). This method must return true if
	// the key was in the list of keys passed to CreateFilter().
	// This method may return true or false if the key was not on the
	// list, but it should aim to return false with a high probability.
	KeyMayMatch(key []byte, filter []byte) bool

	// Return the name of this policy.
	Name() string
}

type FlushOptions

type FlushOptions struct {
	// contains filtered or unexported fields
}

FlushOptions represent all of the available options when manually flushing the database.

func NewDefaultFlushOptions

func NewDefaultFlushOptions() *FlushOptions

NewDefaultFlushOptions creates a default FlushOptions object.

func NewNativeFlushOptions

func NewNativeFlushOptions(c *C.rocksdb_flushoptions_t) *FlushOptions

NewNativeFlushOptions creates a FlushOptions object.

func (*FlushOptions) Destroy

func (self *FlushOptions) Destroy()

Destroy deallocates the FlushOptions object.

func (*FlushOptions) SetWait

func (self *FlushOptions) SetWait(value bool)

If true, the flush call will wait until the flush is done. Default: true
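
A minimal sketch of a blocking manual flush:

fo := gorocksdb.NewDefaultFlushOptions()
fo.SetWait(true) // return from Flush only after the memtable has been persisted
err := db.Flush(fo)
fo.Destroy()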

type Iterator

type Iterator struct {
	// contains filtered or unexported fields
}

The iterator provides a way to seek to specific keys and iterate through the keyspace from that point, as well as access the values of those keys.

For example:

	it := db.NewIterator(readOpts)
	defer it.Close()

	it.Seek([]byte("foo"))
	for ; it.Valid(); it.Next() {
		fmt.Printf("Key: %v Value: %v\n", it.Key().Data(), it.Value().Data())
	}

	if err := it.Err(); err != nil {
		return err
	}

func NewNativeIterator

func NewNativeIterator(c *C.rocksdb_iterator_t) *Iterator

NewNativeIterator creates an Iterator object.

func (*Iterator) Close

func (self *Iterator) Close()

Close closes the iterator.

func (*Iterator) Err

func (self *Iterator) Err() error

Err returns nil if no errors happened during iteration, or the actual error otherwise.

func (*Iterator) Key

func (self *Iterator) Key() *Slice

Key returns the key the iterator currently holds.

func (*Iterator) Next

func (self *Iterator) Next()

Next moves the iterator to the next sequential key in the database.

func (*Iterator) Prev

func (self *Iterator) Prev()

Prev moves the iterator to the previous sequential key in the database.

func (*Iterator) Seek

func (self *Iterator) Seek(key []byte)

Seek moves the iterator to the position greater than or equal to the key.

func (*Iterator) SeekToFirst

func (self *Iterator) SeekToFirst()

SeekToFirst moves the iterator to the first key in the database.

func (*Iterator) SeekToLast

func (self *Iterator) SeekToLast()

SeekToLast moves the iterator to the last key in the database.

func (*Iterator) Valid

func (self *Iterator) Valid() bool

Valid returns false only when an Iterator has iterated past either the first or the last key in the database.

func (*Iterator) Value

func (self *Iterator) Value() *Slice

Value returns the value in the database the iterator currently holds.

type MergeOperator

type MergeOperator struct {
	// contains filtered or unexported fields
}

The Merge Operator

Essentially, a MergeOperator specifies the SEMANTICS of a merge, which only the client knows. It could be numeric addition, list append, string concatenation, editing a data structure, ..., anything. The library, on the other hand, is concerned with exercising this interface at the right time (during get, iteration, compaction, ...).

Please read the RocksDB documentation <http://rocksdb.org/> for more details and example implementations.

func NewMergeOperator

func NewMergeOperator(handler MergeOperatorHandler) *MergeOperator

NewMergeOperator creates a new merge operator for the given handler.

func NewNativeMergeOperator

func NewNativeMergeOperator(c *C.rocksdb_mergeoperator_t) *MergeOperator

NewNativeMergeOperator allocates a MergeOperator object.

func (*MergeOperator) Destroy

func (self *MergeOperator) Destroy()

Destroy deallocates the MergeOperator object.

type MergeOperatorHandler

type MergeOperatorHandler interface {
	// Gives the client a way to express the read -> modify -> write semantics
	// key:           The key that's associated with this merge operation.
	//                Client could multiplex the merge operator based on it
	//                if the key space is partitioned and different subspaces
	//                refer to different types of data which have different
	//                merge operation semantics.
	// existingValue: null indicates that the key does not exist before this op.
	// operands:      the sequence of merge operations to apply, front() first.
	//
	// Return true on success.
	//
	// All values passed in will be client-specific values. So if this method
	// returns false, it is because client specified bad data or there was
	// internal corruption. This will be treated as an error by the library.
	FullMerge(key, existingValue []byte, operands [][]byte) ([]byte, bool)

	// This function performs merge(left_op, right_op)
	// when both the operands are themselves merge operation types
	// that you would have passed to a db.Merge() call in the same order
	// (i.e.: db.Merge(key,left_op), followed by db.Merge(key,right_op)).
	//
	// PartialMerge should combine them into a single merge operation.
	// The return value should be constructed such that a call to
	// db.Merge(key, new_value) would yield the same result as a call
	// to db.Merge(key, left_op) followed by db.Merge(key, right_op).
	//
	// If it is impossible or infeasible to combine the two operations, return false.
	// The library will internally keep track of the operations, and apply them in the
	// correct order once a base-value (a Put/Delete/End-of-Database) is seen.
	PartialMerge(key, leftOperand, rightOperand []byte) ([]byte, bool)

	// The name of the MergeOperator.
	Name() string
}
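
As an illustration, a handler implementing a simple uint64 counter (values and operands encoded as 8-byte big-endian integers) could be sketched as below; the type name and encoding are assumptions, not part of the package:

// import "encoding/binary"

type counterMerger struct{}

func (m *counterMerger) FullMerge(key, existingValue []byte, operands [][]byte) ([]byte, bool) {
	var total uint64
	if existingValue != nil {
		total = binary.BigEndian.Uint64(existingValue)
	}
	for _, op := range operands {
		total += binary.BigEndian.Uint64(op)
	}
	out := make([]byte, 8)
	binary.BigEndian.PutUint64(out, total)
	return out, true
}

func (m *counterMerger) PartialMerge(key, leftOperand, rightOperand []byte) ([]byte, bool) {
	sum := binary.BigEndian.Uint64(leftOperand) + binary.BigEndian.Uint64(rightOperand)
	out := make([]byte, 8)
	binary.BigEndian.PutUint64(out, sum)
	return out, true
}

func (m *counterMerger) Name() string {
	return "example.CounterMerger"
}

// Register it on the options used to open the database:
//   opts.SetMergeOperator(gorocksdb.NewMergeOperator(&counterMerger{}))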

type Options

type Options struct {
	// contains filtered or unexported fields
}

Options represent all of the available options when opening a database with Open.

func NewDefaultOptions

func NewDefaultOptions() *Options

NewDefaultOptions creates the default Options.

func NewNativeOptions

func NewNativeOptions(c *C.rocksdb_options_t) *Options

NewNativeOptions creates an Options object.

func (*Options) Destroy

func (self *Options) Destroy()

Destroy deallocates the Options object.

func (*Options) EnableStatistics

func (self *Options) EnableStatistics()

If called, metrics about database operations will be collected. Default: false

func (*Options) PrepareForBulkLoad

func (self *Options) PrepareForBulkLoad()

Set appropriate parameters for bulk loading.

All data will be in level 0 without any automatic compaction. It's recommended to manually call CompactRange(NULL, NULL) before reading from the database, because otherwise the read can be very slow.
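
A sketch of that workflow; the full-range compaction is expressed with a Range whose Start and Limit are both nil:

opts := gorocksdb.NewDefaultOptions()
opts.SetCreateIfMissing(true)
opts.PrepareForBulkLoad()
db, err := gorocksdb.OpenDb(opts, "/path/to/db")
...
// bulk-load all the data, then compact the whole key space before reading
db.CompactRange(gorocksdb.Range{Start: nil, Limit: nil})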

func (*Options) SetAdviseRandomOnOpen

func (self *Options) SetAdviseRandomOnOpen(value bool)

If set to true, hints to the underlying file system that the file access pattern is random when an sst file is opened. Default: true

func (*Options) SetAllowMmapReads

func (self *Options) SetAllowMmapReads(value bool)

Allow the OS to mmap file for reading sst tables. Default: false

func (*Options) SetAllowMmapWrites

func (self *Options) SetAllowMmapWrites(value bool)

Allow the OS to mmap file for writing. Default: true

func (*Options) SetAllowOsBuffer

func (self *Options) SetAllowOsBuffer(value bool)

Data being read from file storage may be buffered in the OS. Default: true

func (*Options) SetArenaBlockSize

func (self *Options) SetArenaBlockSize(value int)

Size of one block in arena memory allocation. If <= 0, a proper value is automatically calculated (usually 1/10 of write_buffer_size). Default: 0

func (*Options) SetBlockCache

func (self *Options) SetBlockCache(value *Cache)

Control over blocks (user data is stored in a set of blocks, and a block is the unit of reading from disk).

If set, use the specified cache for blocks. If nil, rocksdb will automatically create and use an 8MB internal cache. Default: nil

func (*Options) SetBlockCacheCompressed

func (self *Options) SetBlockCacheCompressed(value *Cache)

If set, use the specified cache for compressed blocks. If nil, rocksdb will not use a compressed block cache. Default: nil

func (*Options) SetBlockRestartInterval

func (self *Options) SetBlockRestartInterval(value int)

Number of keys between restart points for delta encoding of keys. This parameter can be changed dynamically. Most clients should leave this parameter alone. Default: 16

func (*Options) SetBlockSize

func (self *Options) SetBlockSize(value int)

Approximate size of user data packed per block. Note that the block size specified here corresponds to uncompressed data. The actual size of the unit read from disk may be smaller if compression is enabled. This parameter can be changed dynamically. Default: 4K

func (*Options) SetBlockSizeDeviation

func (self *Options) SetBlockSizeDeviation(value int)

This is used to close a block before it reaches the configured 'block_size'. If the percentage of free space in the current block is less than this specified number and adding a new record to the block will exceed the configured block size, then this block will be closed and the new record will be written to the next block. Default: 10

func (*Options) SetBytesPerSync

func (self *Options) SetBytesPerSync(value uint64)

Allows OS to incrementally sync files to disk while they are being written, asynchronously, in the background. Issue one request for every bytes_per_sync written. Default: 0 (disabled)

func (*Options) SetCompactionStyle

func (self *Options) SetCompactionStyle(value CompactionStyle)

The compaction style. Default: LevelCompactionStyle

func (*Options) SetComparator

func (self *Options) SetComparator(value *Comparator)

Comparator used to define the order of keys in the table. Default: a comparator that uses lexicographic byte-wise ordering

func (*Options) SetCompression

func (self *Options) SetCompression(value CompressionType)

Compress blocks using the specified compression algorithm. This parameter can be changed dynamically.

Default: SnappyCompression, which gives lightweight but fast compression.

func (*Options) SetCompressionOptions

func (self *Options) SetCompressionOptions(value *CompressionOptions)

Sets different options for compression algorithms. Default: nil

func (*Options) SetCreateIfMissing

func (self *Options) SetCreateIfMissing(value bool)

If true, the database will be created if it is missing. Default: false

func (*Options) SetDbLogDir

func (self *Options) SetDbLogDir(value string)

This specifies the absolute info LOG dir. If it is empty, the log files will be in the same dir as data. If it is non-empty, the log files will be in the specified dir, and the db data dir's absolute path will be used as the log file name's prefix. Default: empty

func (*Options) SetDbStatsLogInterval

func (self *Options) SetDbStatsLogInterval(value int)

This number controls how often a new scribe log about db deploy stats is written out. -1 indicates no logging at all. Default: 1800 (half an hour)

func (*Options) SetDeleteObsoleteFilesPeriodMicros

func (self *Options) SetDeleteObsoleteFilesPeriodMicros(value uint64)

The periodicity when obsolete files get deleted. Files that go out of scope by the compaction process will still get automatically deleted on every compaction, regardless of this setting. Default: 6 hours

func (*Options) SetDisableAutoCompactions

func (self *Options) SetDisableAutoCompactions(value bool)

Disable automatic compactions. Manual compactions can still be issued on this database. Default: false

func (*Options) SetDisableDataSync

func (self *Options) SetDisableDataSync(value bool)

If true, then the contents of data files are not synced to stable storage. Their contents remain in the OS buffers till the OS decides to flush them. This option is good for bulk-loading of data. Once the bulk-loading is complete, please issue a sync to the OS to flush all dirty buffers to stable storage. Default: false

func (*Options) SetDisableSeekCompaction

func (self *Options) SetDisableSeekCompaction(value bool)

Disable compaction triggered by seek. With bloom filter and fast storage, a miss on one level is very cheap if the file handle is cached in table cache (which is true if max_open_files is large). Default: false

func (*Options) SetEnv

func (self *Options) SetEnv(value *Env)

Use the specified object to interact with the environment, e.g. to read/write files, schedule background work, etc. Default: DefaultEnv

func (*Options) SetErrorIfExists

func (self *Options) SetErrorIfExists(value bool)

If true, an error is raised if the database already exists. Default: false

func (*Options) SetExpandedCompactionFactor

func (self *Options) SetExpandedCompactionFactor(value int)

Maximum number of bytes in all compacted files. We avoid expanding the lower level file set of a compaction if it would make the total compaction cover more than (expanded_compaction_factor * targetFileSizeLevel()) many bytes. Default: 25

func (*Options) SetFilterDeletes

func (self *Options) SetFilterDeletes(value bool)

Use KeyMayExist API to filter deletes when this is true. If KeyMayExist returns false, i.e. the key definitely does not exist, then the delete is a noop. KeyMayExist only incurs in-memory look up. This optimization avoids writing the delete to storage when appropriate. Default: false

func (*Options) SetFilterPolicy

func (self *Options) SetFilterPolicy(value *FilterPolicy)

If set, use the specified filter policy to reduce disk reads. Many applications will benefit from passing the result of NewBloomFilter() here. Default: nil

func (*Options) SetHardRateLimit

func (self *Options) SetHardRateLimit(value float64)

Puts are delayed 1ms at a time when any level has a compaction score that exceeds hard_rate_limit. This is ignored when <= 1.0. Default: 0.0 (disabled)

func (*Options) SetIsFdCloseOnExec

func (self *Options) SetIsFdCloseOnExec(value bool)

Disable child processes inheriting open files. Default: true

func (*Options) SetKeepLogFileNum

func (self *Options) SetKeepLogFileNum(value int)

Maximum number of info log files to be kept. Default: 1000

func (*Options) SetLevel0FileNumCompactionTrigger

func (self *Options) SetLevel0FileNumCompactionTrigger(value int)

Number of files to trigger level-0 compaction. A value <0 means that level-0 compaction will not be triggered by number of files at all. Default: 4

func (*Options) SetLevel0SlowdownWritesTrigger

func (self *Options) SetLevel0SlowdownWritesTrigger(value int)

Soft limit on number of level-0 files. We start slowing down writes at this point. A value <0 means that no writing slow down will be triggered by number of files in level-0. Default: 8

func (*Options) SetLevel0StopWritesTrigger

func (self *Options) SetLevel0StopWritesTrigger(value int)

Maximum number of level-0 files. We stop writes at this point. Default: 12

func (*Options) SetLogFileTimeToRoll

func (self *Options) SetLogFileTimeToRoll(value int)

Time for the info log file to roll (in seconds). If specified with non-zero value, log file will be rolled if it has been active longer than `log_file_time_to_roll`. Default: 0 (disabled)

func (*Options) SetManifestPreallocationSize

func (self *Options) SetManifestPreallocationSize(value int)

Number of bytes to preallocate (via fallocate) the manifest files. Default is 4mb, which is reasonable to reduce random IO as well as prevent overallocation for mounts that preallocate large amounts of data (such as xfs's allocsize option). Default: 4mb

func (*Options) SetMaxBackgroundCompactions

func (self *Options) SetMaxBackgroundCompactions(value int)

Maximum number of concurrent background compaction jobs, submitted to the default LOW priority thread pool. Default: 1

func (*Options) SetMaxBackgroundFlushes

func (self *Options) SetMaxBackgroundFlushes(value int)

Maximum number of concurrent background memtable flush jobs, submitted to the HIGH priority thread pool. By default, all background jobs (major compaction and memtable flush) go to the LOW priority pool. If this option is set to a positive number, memtable flush jobs will be submitted to the HIGH priority pool. It is important when the same Env is shared by multiple db instances. Without a separate pool, long running major compaction jobs could potentially block memtable flush jobs of other db instances, leading to unnecessary Put stalls. Default: 0

func (*Options) SetMaxBytesForLevelBase

func (self *Options) SetMaxBytesForLevelBase(value uint64)

Controls the maximum total data size for a level; max_bytes_for_level_base is the max total for level-1. The maximum number of bytes for level L can be calculated as (max_bytes_for_level_base) * (max_bytes_for_level_multiplier ^ (L-1))

For example, if max_bytes_for_level_base is 20MB, and if max_bytes_for_level_multiplier is 10, total data size for level-1 will be 20MB, total file size for level-2 will be 200MB, and total file size for level-3 will be 2GB. Default: 10MB

func (*Options) SetMaxBytesForLevelMultiplier

func (self *Options) SetMaxBytesForLevelMultiplier(value int)

Max Bytes for level multiplier. Default: 10

func (*Options) SetMaxGrandparentOverlapFactor

func (self *Options) SetMaxGrandparentOverlapFactor(value int)

Control maximum bytes of overlaps in grandparent (i.e., level+2) before we stop building a single file in a level->level+1 compaction. Default: 10

func (*Options) SetMaxLogFileSize

func (self *Options) SetMaxLogFileSize(value int)

Specify the maximal size of the info log file. If the log file is larger than `max_log_file_size`, a new info log file will be created. If max_log_file_size == 0, all logs will be written to one log file. Default: 0

func (*Options) SetMaxManifestFileSize

func (self *Options) SetMaxManifestFileSize(value uint64)

The manifest file is rolled over on reaching this limit. The older manifest file will be deleted. Default: MAX_INT, so that roll-over does not take place.

func (*Options) SetMaxMemCompactionLevel

func (self *Options) SetMaxMemCompactionLevel(value int)

Maximum level to which a new compacted memtable is pushed if it does not create overlap. We try to push to level 2 to avoid the relatively expensive level 0=>1 compactions and to avoid some expensive manifest file operations. We do not push all the way to the largest level since that can generate a lot of wasted disk space if the same key space is being repeatedly overwritten. Default: 2

func (*Options) SetMaxOpenFiles

func (self *Options) SetMaxOpenFiles(value int)

Number of open files that can be used by the DB. You may need to increase this if your database has a large working set (budget one open file per 2MB of working set). Default: 1000

func (*Options) SetMaxSequentialSkipInIterations

func (self *Options) SetMaxSequentialSkipInIterations(value uint64)

An iteration->Next() sequentially skips over keys with the same user-key unless this option is set. This number specifies the number of keys (with the same userkey) that will be sequentially skipped before a reseek is issued. Default: 8

func (*Options) SetMaxSuccessiveMerges

func (self *Options) SetMaxSuccessiveMerges(value int)

Maximum number of successive merge operations on a key in the memtable.

When a merge operation is added to the memtable and the maximum number of successive merges is reached, the value of the key will be calculated and inserted into the memtable instead of the merge operation. This will ensure that there are never more than max_successive_merges merge operations in the memtable.

Default: 0 (disabled)

func (*Options) SetMaxWriteBufferNumber

func (self *Options) SetMaxWriteBufferNumber(value int)

The maximum number of write buffers that are built up in memory. The default is 2, so that when 1 write buffer is being flushed to storage, new writes can continue to the other write buffer. Default: 2

func (*Options) SetMemtablePrefixBloomBits

func (self *Options) SetMemtablePrefixBloomBits(value uint32)

If prefix_extractor is set and bloom_bits is not 0, create prefix bloom for memtable. Default: 0

func (*Options) SetMemtablePrefixBloomProbes

func (self *Options) SetMemtablePrefixBloomProbes(value uint32)

Number of hash probes per key. Default: 6

func (*Options) SetMemtableVectorRep

func (self *Options) SetMemtableVectorRep()

SetMemtableVectorRep causes the memtable to use a vector-backed representation. This is useful for workloads where iteration is very rare and writes are generally not issued after reads begin.

func (*Options) SetMergeOperator

func (self *Options) SetMergeOperator(value *MergeOperator)

The merge operator will be called if Merge operations are used. Default: nil

func (*Options) SetMinWriteBufferNumberToMerge

func (self *Options) SetMinWriteBufferNumberToMerge(value int)

The minimum number of write buffers that will be merged together before writing to storage. If set to 1, then all write buffers are flushed to L0 as individual files and this increases read amplification because a get request has to check all of these files. Also, an in-memory merge may result in writing less data to storage if there are duplicate records in each of these individual write buffers. Default: 1

func (*Options) SetNoBlockCache

func (self *Options) SetNoBlockCache(value bool)

Disable block cache. If this is set to true, then no block cache should be used. Default: false

func (*Options) SetNumLevels

func (self *Options) SetNumLevels(value int)

Number of levels for this database. Default: 7

func (*Options) SetParanoidChecks

func (self *Options) SetParanoidChecks(value bool)

If true, the implementation will do aggressive checking of the data it is processing and will stop early if it detects any errors. This may have unforeseen ramifications: for example, a corruption of one DB entry may cause a large number of entries to become unreadable or for the entire DB to become unopenable. If any of the writes to the database fails (Put, Delete, Merge, Write), the database will switch to read-only mode and fail all other Write operations. Default: false

func (*Options) SetPrefixExtractor

func (self *Options) SetPrefixExtractor(value *SliceTransform)

If set, use the specified function to determine the prefixes for keys. These prefixes will be placed in the filter. Depending on the workload, this can reduce the read-IOP cost for scans when a prefix is passed via ReadOptions to db.NewIterator(). Default: nil
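
A sketch combining a fixed-length prefix extractor with a prefix-aware iterator (the 3-byte prefix length and keys are illustrative):

opts := gorocksdb.NewDefaultOptions()
opts.SetPrefixExtractor(gorocksdb.NewFixedPrefixTransform(3))
db, err := gorocksdb.OpenDb(opts, "/path/to/db")

ro := gorocksdb.NewDefaultReadOptions()
ro.SetPrefixSeek(true)
it := db.NewIterator(ro)
defer it.Close()
for it.Seek([]byte("foo")); it.Valid(); it.Next() {
	// only keys sharing the 3-byte prefix "foo" are expected here
}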

func (*Options) SetPurgeRedundantKvsWhileFlush

func (self *Options) SetPurgeRedundantKvsWhileFlush(value bool)

Purge duplicate/deleted keys when a memtable is flushed to storage. Default: true

func (*Options) SetRateLimitDelayMaxMilliseconds

func (self *Options) SetRateLimitDelayMaxMilliseconds(value uint)

Max time a put will be stalled when hard_rate_limit is enforced. If 0, then there is no limit. Default: 1000

func (*Options) SetSkipLogErrorOnRecovery

func (self *Options) SetSkipLogErrorOnRecovery(value bool)

Skip log corruption errors on recovery (if the client is ok with losing the most recent changes). Default: false

func (*Options) SetSoftRateLimit

func (self *Options) SetSoftRateLimit(value float64)

Puts are delayed 0-1 ms when any level has a compaction score that exceeds soft_rate_limit. This is ignored when == 0.0. CONSTRAINT: soft_rate_limit <= hard_rate_limit. If this constraint does not hold, RocksDB will set soft_rate_limit = hard_rate_limit Default: 0.0 (disabled)

func (*Options) SetSourceCompactionFactor

func (self *Options) SetSourceCompactionFactor(value int)

Maximum number of bytes in all source files to be compacted in a single compaction run. We avoid picking too many files in the source level so that the total source bytes of the compaction do not exceed (source_compaction_factor * targetFileSizeLevel()) many bytes. Default: 1

func (*Options) SetStatsDumpPeriodSec

func (self *Options) SetStatsDumpPeriodSec(value uint)

If not zero, dump stats to LOG every stats_dump_period_sec seconds. Default: 3600 (1 hour)

func (*Options) SetTableCacheNumshardbits

func (self *Options) SetTableCacheNumshardbits(value int)

Number of shards used for table cache. Default: 4

func (*Options) SetTableCacheRemoveScanCountLimit

func (self *Options) SetTableCacheRemoveScanCountLimit(value int)

During data eviction of the table's LRU cache, it would be inefficient to strictly follow LRU because this piece of memory will not really be released unless its refcount falls to zero. Instead, make two passes: the first pass will release items with refcount = 1, and if not enough space is released after scanning the number of elements specified by this parameter, we will remove items in LRU order. Default: 16

func (*Options) SetTargetFileSizeBase

func (self *Options) SetTargetFileSizeBase(value uint64)

Target file size for compaction; target_file_size_base is the per-file size for level-1. The target file size for level L can be calculated by target_file_size_base * (target_file_size_multiplier ^ (L-1))

For example, if target_file_size_base is 2MB and target_file_size_multiplier is 10, then each file on level-1 will be 2MB, and each file on level 2 will be 20MB, and each file on level-3 will be 200MB. Default: 2MB

func (*Options) SetTargetFileSizeMultiplier

func (self *Options) SetTargetFileSizeMultiplier(value int)

Target file size multiplier for compaction. Default: 1

func (*Options) SetUniversalCompactionOptions

func (self *Options) SetUniversalCompactionOptions(value *UniversalCompactionOptions)

The options needed to support Universal Style compactions. Default: nil

func (*Options) SetUseAdaptiveMutex

func (self *Options) SetUseAdaptiveMutex(value bool)

Use adaptive mutex, which spins in the user space before resorting to kernel. This could reduce context switch when the mutex is not heavily contended. However, if the mutex is hot, we could end up wasting spin time. Default: false

func (*Options) SetUseFsync

func (self *Options) SetUseFsync(value bool)

If true, then every store to stable storage will issue an fsync. If false, then every store to stable storage will issue an fdatasync. This parameter should be set to true while storing data to a filesystem like ext3 that can lose files after a reboot. Default: false

func (*Options) SetWALTtlSeconds

func (self *Options) SetWALTtlSeconds(value uint64)

The following two options affect how archived logs will be deleted.

  1. If both set to 0, logs will be deleted asap and will not get into the archive.
  2. If wal_ttl_seconds is 0 and wal_size_limit_mb is not 0, WAL files will be checked every 10 min and if the total size is greater than wal_size_limit_mb, they will be deleted starting with the earliest until size_limit is met. All empty files will be deleted.
  3. If wal_ttl_seconds is not 0 and wal_size_limit_mb is 0, then WAL files will be checked every wal_ttl_seconds / 2 and those that are older than wal_ttl_seconds will be deleted.
  4. If both are not 0, WAL files will be checked every 10 min and both checks will be performed with ttl being first.

Default: 0

func (*Options) SetWalDir

func (self *Options) SetWalDir(value string)

This specifies the absolute dir path for write-ahead logs (WAL). If it is empty, the log files will be in the same dir as data. If it is non-empty, the log files will be in the specified dir. When destroying the db, all log files and the dir itself are deleted. Default: empty

func (*Options) SetWalSizeLimitMb

func (self *Options) SetWalSizeLimitMb(value uint64)

If the total size of WAL files is greater than wal_size_limit_mb, they will be deleted starting with the earliest until size_limit is met. Default: 0

func (*Options) SetWholeKeyFiltering

func (self *Options) SetWholeKeyFiltering(value bool)

If true, place whole keys in the filter (not just prefixes). This must generally be true for gets to be efficient. Default: true

func (*Options) SetWriteBufferSize

func (self *Options) SetWriteBufferSize(value int)

Amount of data to build up in memory (backed by an unsorted log on disk) before converting to a sorted on-disk file.

Larger values increase performance, especially during bulk loads. Up to max_write_buffer_number write buffers may be held in memory at the same time, so you may wish to adjust this parameter to control memory usage. Also, a larger write buffer will result in a longer recovery time the next time the database is opened. Default: 4MB

type Range

type Range struct {
	Start []byte
	Limit []byte
}

Range is a range of keys in the database. GetApproximateSizes calls with it begin at the key Start and end right before the key Limit.

type ReadOptions

type ReadOptions struct {
	// contains filtered or unexported fields
}

ReadOptions represent all of the available options when reading from a database.

func NewDefaultReadOptions

func NewDefaultReadOptions() *ReadOptions

NewDefaultReadOptions creates a default ReadOptions object.

func NewNativeReadOptions

func NewNativeReadOptions(c *C.rocksdb_readoptions_t) *ReadOptions

NewNativeReadOptions creates a ReadOptions object.

func (*ReadOptions) Destroy

func (self *ReadOptions) Destroy()

Destroy deallocates the ReadOptions object.

func (*ReadOptions) SetFillCache

func (self *ReadOptions) SetFillCache(value bool)

Should the "data block"/"index block"/"filter block" read for this iteration be cached in memory? Callers may wish to set this field to false for bulk scans. Default: true

func (*ReadOptions) SetPrefix

func (self *ReadOptions) SetPrefix(prefix []byte)

If prefix is set, and ReadOptions is being passed to db.NewIterator, only return results when the key begins with this prefix. Default: nil

func (*ReadOptions) SetPrefixSeek

func (self *ReadOptions) SetPrefixSeek(value bool)

If this option is set to true and memtable implementation allows, Seek might only return keys with the same prefix as the seek-key. Default: false

func (*ReadOptions) SetReadTier

func (self *ReadOptions) SetReadTier(value ReadTier)

Specify if this read request should process data that ALREADY resides on a particular cache. If the required data is not found at the specified cache, then Status::Incomplete is returned. Default: ReadAllTier

func (*ReadOptions) SetSnapshot

func (self *ReadOptions) SetSnapshot(snap *Snapshot)

If snapshot is set, read as of the supplied snapshot which must belong to the DB that is being read and which must not have been released. Default: nil

func (*ReadOptions) SetTailing

func (self *ReadOptions) SetTailing(value bool)

Specify to create a tailing iterator -- a special iterator that has a view of the complete database (i.e. it can also be used to read newly added data) and is optimized for sequential reads. It will return records that were inserted into the database after the creation of the iterator. Default: false
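
A minimal sketch of reading with a tailing iterator:

ro := gorocksdb.NewDefaultReadOptions()
ro.SetTailing(true)
it := db.NewIterator(ro)
defer it.Close()
for it.SeekToFirst(); it.Valid(); it.Next() {
	// records inserted after the iterator was created may also show up here
}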

func (*ReadOptions) SetVerifyChecksums

func (self *ReadOptions) SetVerifyChecksums(value bool)

If true, all data read from underlying storage will be verified against corresponding checksums. Default: false

type ReadTier

type ReadTier uint

An application can issue a read request (via Get/Iterators) and specify if that read should process data that ALREADY resides on a specified cache level. For example, if an application specifies BlockCacheTier then the Get call will process data that is already processed in the memtable or the block cache. It will not page in data from the OS cache or data that resides in storage.
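
For example, a sketch of restricting a read to data already in memory (assuming an incomplete read surfaces as an error from Get):

ro := gorocksdb.NewDefaultReadOptions()
ro.SetReadTier(gorocksdb.BlockCacheTier)
value, err := db.Get(ro, []byte("foo"))
if err != nil {
	// the key may still exist, but answering would have required a storage read
}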

type Slice

type Slice struct {
	// contains filtered or unexported fields
}

Slice is used as a wrapper for non-copy values

func NewSlice

func NewSlice(data *C.char, size C.size_t) *Slice

func (*Slice) Data

func (self *Slice) Data() []byte

func (*Slice) Free

func (self *Slice) Free()

func (*Slice) Size

func (self *Slice) Size() int

type SliceTransform

type SliceTransform struct {
	// contains filtered or unexported fields
}

A SliceTransform can be used as a prefix extractor.

func NewFixedPrefixTransform

func NewFixedPrefixTransform(prefixLen int) *SliceTransform

NewFixedPrefixTransform creates a new fixed prefix transform.

func NewNativeSliceTransform

func NewNativeSliceTransform(c *C.rocksdb_slicetransform_t) *SliceTransform

NewNativeSliceTransform allocates a SliceTransform object.

func NewSliceTransform

func NewSliceTransform(handler SliceTransformHandler) *SliceTransform

NewSliceTransform creates a new slice transform for the given handler.

func (*SliceTransform) Destroy

func (self *SliceTransform) Destroy()

Destroy deallocates the SliceTransform object.

type SliceTransformHandler

type SliceTransformHandler interface {
	// Transform a src in domain to a dst in the range.
	Transform(src []byte) []byte

	// Determine whether this is a valid src upon the function applies.
	InDomain(src []byte) bool

	// Determine whether dst=Transform(src) for some src.
	InRange(src []byte) bool

	// Return the name of this transformation.
	Name() string
}

type Snapshot

type Snapshot struct {
	// contains filtered or unexported fields
}

Snapshot provides a consistent view of read operations in a DB.

func NewNativeSnapshot

func NewNativeSnapshot(c *C.rocksdb_snapshot_t, cDb *C.rocksdb_t) *Snapshot

NewNativeSnapshot creates a Snapshot object.

func (*Snapshot) Release

func (self *Snapshot) Release()

Release removes the snapshot from the database's list of snapshots.
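
A sketch of reading through a consistent snapshot:

snap := db.NewSnapshot()
ro := gorocksdb.NewDefaultReadOptions()
ro.SetSnapshot(snap)
value, err := db.Get(ro, []byte("foo")) // sees the database as of NewSnapshot
...
value.Free()
ro.Destroy()
snap.Release()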

type UniversalCompactionOptions

type UniversalCompactionOptions struct {
	// contains filtered or unexported fields
}

UniversalCompactionOptions represent all of the available options for universal compaction.

func NewDefaultUniversalCompactionOptions

func NewDefaultUniversalCompactionOptions() *UniversalCompactionOptions

NewDefaultUniversalCompactionOptions creates a default UniversalCompactionOptions object.

func NewNativeUniversalCompactionOptions

func NewNativeUniversalCompactionOptions(c *C.rocksdb_universal_compaction_options_t) *UniversalCompactionOptions

NewNativeUniversalCompactionOptions creates a UniversalCompactionOptions object.

func (*UniversalCompactionOptions) Destroy

func (self *UniversalCompactionOptions) Destroy()

Destroy deallocates the UniversalCompactionOptions object.

func (*UniversalCompactionOptions) SetCompressionSizePercent

func (self *UniversalCompactionOptions) SetCompressionSizePercent(value int)

If this option is set to -1, all the output files will follow the compression type specified.

If this option is not negative, we will try to make sure the compressed size is just above this value. In normal cases, at least this percentage of data will be compressed. When we are compacting to a new file, here is the criterion for whether it needs to be compressed: assume the following list of files, sorted by generation time:

A1...An B1...Bm C1...Ct

where A1 is the newest and Ct is the oldest, and we are going to compact B1...Bm. We calculate the total size of all the files as total_size, as well as the total size of C1...Ct as total_C. The compaction output file will be compressed iff

total_C / total_size < this percentage

Default: -1

func (*UniversalCompactionOptions) SetMaxMergeWidth

func (self *UniversalCompactionOptions) SetMaxMergeWidth(value uint)

The maximum number of files in a single compaction run. Default: UINT_MAX

func (*UniversalCompactionOptions) SetMaxSizeAmplificationPercent

func (self *UniversalCompactionOptions) SetMaxSizeAmplificationPercent(value uint)

The size amplification is defined as the amount (in percentage) of additional storage needed to store a single byte of data in the database. For example, a size amplification of 2% means that a database that contains 100 bytes of user-data may occupy up to 102 bytes of physical storage. By this definition, a fully compacted database has a size amplification of 0%. RocksDB uses the following heuristic to calculate size amplification: it assumes that all files excluding the earliest file contribute to the size amplification. Default: 200, which means that a 100 byte database could require up to 300 bytes of storage.

func (*UniversalCompactionOptions) SetMinMergeWidth

func (self *UniversalCompactionOptions) SetMinMergeWidth(value uint)

The minimum number of files in a single compaction run. Default: 2

func (*UniversalCompactionOptions) SetSizeRatio

func (self *UniversalCompactionOptions) SetSizeRatio(value uint)

Percentage flexibility while comparing file size. If the candidate file(s) size is 1% smaller than the next file's size, then include the next file into this candidate set. Default: 1

func (*UniversalCompactionOptions) SetStopStyle

func (self *UniversalCompactionOptions) SetStopStyle(value UniversalCompactionStopStyle)

The algorithm used to stop picking files into a single compaction run. Default: CompactionStopStyleTotalSize

type UniversalCompactionStopStyle

type UniversalCompactionStopStyle uint

Algorithm used to make a compaction request stop picking new files into a single compaction run.

type WriteBatch

type WriteBatch struct {
	// contains filtered or unexported fields
}

WriteBatch is a batching of Puts, Merges and Deletes. TODO: WriteBatchIterator

func NewNativeWriteBatch

func NewNativeWriteBatch(c *C.rocksdb_writebatch_t) *WriteBatch

NewNativeWriteBatch creates a WriteBatch object.

func NewWriteBatch

func NewWriteBatch() *WriteBatch

NewWriteBatch creates a WriteBatch object.

func (*WriteBatch) Clear

func (self *WriteBatch) Clear()

Clear removes all the enqueued Puts and Deletes.

func (*WriteBatch) Delete

func (self *WriteBatch) Delete(key []byte)

Delete queues a deletion of the data at key.

func (*WriteBatch) Destroy

func (self *WriteBatch) Destroy()

Destroy deallocates the WriteBatch object.

func (*WriteBatch) Merge

func (self *WriteBatch) Merge(key, value []byte)

Merge queues a merge of "value" with the existing value of "key".

func (*WriteBatch) Put

func (self *WriteBatch) Put(key, value []byte)

Put queues a key-value pair.

type WriteOptions

type WriteOptions struct {
	// contains filtered or unexported fields
}

WriteOptions represent all of the available options when writing to a database.

func NewDefaultWriteOptions

func NewDefaultWriteOptions() *WriteOptions

NewDefaultWriteOptions creates a default WriteOptions object.

func NewNativeWriteOptions

func NewNativeWriteOptions(c *C.rocksdb_writeoptions_t) *WriteOptions

NewNativeWriteOptions creates a WriteOptions object.

func (*WriteOptions) Destroy

func (self *WriteOptions) Destroy()

Destroy deallocates the WriteOptions object.

func (*WriteOptions) DisableWAL

func (self *WriteOptions) DisableWAL(value bool)

If true, writes will not first go to the write ahead log, and the write may get lost after a crash. Default: false

func (*WriteOptions) SetSync

func (self *WriteOptions) SetSync(value bool)

If true, the write will be flushed from the operating system buffer cache before the write is considered complete. If this flag is true, writes will be slower. Default: false
