golbat

package module
v0.0.0-...-5a55052
Published: Jun 12, 2021 License: Apache-2.0 Imports: 26 Imported by: 0

README

Golbat

Introduction

Golbat is an embeddable, persistent and fast key-value database similar to LevelDB, written in pure Go and optimized for SSDs with the WiscKey design.

Getting Started

Installing

To start using Golbat, install Go 1.12 or above; Go modules are also required. Run the following command to retrieve the library.

 go get github.com/neotse/golbat

Note: Golbat does not use CGO directly, but it relies on https://github.com/DataDog/zstd for compression, which requires gcc/cgo. If you wish to use Golbat without gcc/cgo, you can run:

CGO_ENABLED=0 go get github.com/neotse/golbat

which will download Golbat without support for the ZSTD compression algorithm.

Installing Command Line Tool

Download and extract the latest release from https://github.com/NeoTse/golbat/releases and then run the following commands:

cd golbat-<version>/cmd
go install

This will install the command line utility into your $GOBIN path.

Opening A Database

The following example shows how to open a database:

import "github.com/neotse/golbat"

dir := "/tmp/golbat"
option := golbat.DefaultOptions(dir)
db, err := golbat.Open(option)
if err != nil {
    return err
}

If there is no database in the dir, golbat will create a new one. If a database already exists there and is not already open in another process or goroutine, golbat will open it. Otherwise, an error will be returned.

Closing A Database

The following example shows how to close a database:

err := golbat.Close(db)
if err != nil {
    ...
}

or

err := db.Close()
if err != nil {
    ...
}

Calling golbat.Close() or db.Close() multiple times still closes the DB only once.
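
In practice, the close is usually deferred right after a successful open. A minimal sketch (since Close is idempotent, the deferred call is safe even if the DB was already closed explicitly elsewhere):

db, err := golbat.Open(golbat.DefaultOptions("/tmp/golbat"))
if err != nil {
    return err
}
// Safe even if Close is called again later: the DB is closed only once.
defer db.Close()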

Reads And Writes

The database provides Put, Delete, and Get methods to modify/query the database. For example, the following code moves the value stored under key1 to key2.

value, err := db.Get(&golbat.DefaultReadOptions, key1)
if err != nil {
    return err
}

err = db.Put(&golbat.DefaultWriteOptions, key2, value)
if err != nil {
    return err
}

err = db.Delete(&golbat.DefaultWriteOptions, key1)
if err != nil {
    return err
}

Atomic Updates

Note that if the process dies after the Put of key2 but before the Delete of key1, the same value may be left stored under multiple keys. Such problems can be avoided by using the WriteBatch type to apply a set of updates atomically:

value, err := db.Get(&golbat.DefaultReadOptions, key1)
if err != nil {
    return err
}

batch := golbat.NewWriteBatch(db)
batch.Delete(key1)
batch.Put(key2, value)
err = db.Write(&golbat.DefaultWriteOptions, batch)
if err != nil {
    return err
}

Apart from its atomicity benefits, WriteBatch may also be used to speed up bulk updates by placing lots of individual mutations into the same batch.

Synchronous Writes

By default, each write to golbat is asynchronous: it returns after pushing the write from the process into the operating system. The transfer from operating system memory to the underlying persistent storage happens asynchronously. The Sync flag can be turned on for a particular write to make the write operation not return until the data being written has been pushed all the way to persistent storage. (On POSIX systems, this is implemented by calling either fsync(...), fdatasync(...), or msync(..., MS_SYNC) before the write operation returns.)

var wopt golbat.WriteOptions
wopt.Sync = true
err := db.Put(&wopt, key1, value1)

WriteBatch provides an alternative to asynchronous writes. Multiple updates may be placed in the same WriteBatch and applied together using a synchronous write (i.e., WriteOptions.Sync is set to true). The extra cost of the synchronous write will be amortized across all of the writes in the batch.
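
For example, a minimal sketch (key1 through key3 and the values are placeholders) that flushes several mutations with a single synchronous write:

wopt := golbat.WriteOptions{Sync: true}

batch := golbat.NewWriteBatch(db)
batch.Put(key1, value1)
batch.Put(key2, value2)
batch.Delete(key3)

// One synchronous write covers all three mutations.
err := db.Write(&wopt, batch)
if err != nil {
    return err
}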

Concurrency

A database may only be opened by one process at a time. The golbat implementation acquires a lock from the operating system to prevent misuse. Within a single process, the same golbat.DB object may be safely shared by multiple concurrent goroutines; i.e., different goroutines may write into or fetch iterators or call Get on the same database without any external synchronization. However, other objects (like Iterator and WriteBatch) may require external synchronization. If two goroutines share such an object, they must protect access to it using their own locking protocol.
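
As a sketch of what is safe without extra locking (assuming the fmt and sync packages are imported; keys and values are illustrative):

var wg sync.WaitGroup
for i := 0; i < 4; i++ {
    wg.Add(1)
    go func(i int) {
        defer wg.Done()
        key := []byte(fmt.Sprintf("key-%d", i))
        // Concurrent Put and Get on the shared DB need no external locking.
        if err := db.Put(&golbat.DefaultWriteOptions, key, []byte("value")); err != nil {
            return
        }
        _, _ = db.Get(&golbat.DefaultReadOptions, key)
    }(i)
}
wg.Wait()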

Iteration

The following example shows how to print all valid key, value pairs in a database.

iter, err := db.NewIterator(&golbat.DefaultReadOptions)
if err != nil {
    return err
}
defer iter.Close()

for iter.SeekToFirst(); iter.Valid(); iter.Next() {
    fmt.Printf("key: %s, Value: %s\n", string(iter.Key()), string(iter.Value().Value))
}

Note: once an iterator is no longer in use, its Close method must be called.

Sometimes we want to get all the key, value pairs, regardless of their version or whether they have been deleted.

var ropt golbat.ReadOptions
ropt.AllVersion = true
iter, err := db.NewIterator(&ropt)
if err != nil {
    return err
}
defer iter.Close()

for iter.SeekToFirst(); iter.Valid(); iter.Next() {
    fmt.Printf("key: %s, Value: %v\n", string(iter.Key()), iter.Value())
}

If you don't want the deleted key, value pairs, you should filter them out yourself. iter.Value() returns an EValue that contains the status (Value/Delete) of the value.
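
For example, a sketch that skips deleted entries by checking the Delete bit in EValue.Meta (assuming the exported Delete constant marks tombstones):

var ropt golbat.ReadOptions
ropt.AllVersion = true
iter, err := db.NewIterator(&ropt)
if err != nil {
    return err
}
defer iter.Close()

for iter.SeekToFirst(); iter.Valid(); iter.Next() {
    ev := iter.Value()
    if ev.Meta&golbat.Delete != 0 {
        continue // tombstone: skip deleted versions
    }
    fmt.Printf("key: %s, Value: %s\n", string(iter.Key()), string(ev.Value))
}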

Snapshots

Snapshots provide consistent read-only views over the entire state of the key-value store. ReadOptions.Snapshot may be non-nil to indicate that a read should operate on a particular version of the DB state. If ReadOptions.Snapshot is nil, the read will operate on an implicit snapshot of the current state.

Snapshots are created by the GetSnapshot() method:

var ropt golbat.ReadOptions
ropt.Snapshot = db.GetSnapshot()
defer db.ReleaseSnapshot(ropt.Snapshot)

... apply some updates to db ...
iter, err := db.NewIterator(&ropt)
if err != nil {
    return err
}
defer iter.Close()
... read using iter to view the state when the snapshot was created ...

Note: when a snapshot is no longer needed, it should be released using the ReleaseSnapshot method.

Value Log GC

The value log stores the key, value pairs whose value size is greater than Options.ValueThreshold, so some key, value pairs are stored not only in the LSM tree but also in the value log. When a key is deleted from the database (i.e., the LSM tree), any copy of it in the value log must also be deleted at some point in the future. This can be done by calling the RunValueLogGC method.

discardRatio := 0.5
err := golbat.RunValueLogGC(db, discardRatio)
if err != nil {
    return err
}

discardRatio is the fraction of key, value pairs in a value log file that have been deleted in the LSM tree. We recommend setting discardRatio to 0.5, indicating that a file is rewritten if half its space can be discarded. This results in a lifetime value log write amplification of 2 (1 from the original write + 0.5 rewrite + 0.25 + 0.125 + ... = 2). Setting it to a higher value results in fewer space reclaims, while setting it to a lower value results in more space reclaims at the cost of increased activity on the LSM tree. discardRatio must be in the range (0.0, 1.0), both endpoints excluded; otherwise an error is returned.
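
To see where the factor of 2 comes from: a file is rewritten once the discardRatio fraction r of it is dead, so each rewrite copies only the surviving fraction 1 - r, and the total bytes written per original byte form a geometric series:

$$1 + (1-r) + (1-r)^2 + \cdots = \frac{1}{r}, \qquad \frac{1}{0.5} = 2$$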

Performance

Performance can be tuned by changing the default values of the fields defined in golbat.Options.

ValueThreshold

The ValueThreshold is the most important setting. Setting a higher ValueThreshold keeps more values collocated with the LSM tree, with the value log acting largely as a write-ahead log only. This reduces the disk usage of the value log and makes Golbat behave more like a typical LSM tree. When there are many small values (<= 1KB) that are read frequently, a higher ValueThreshold (bigger than most of the values) gives good performance. Conversely, for write-heavy workloads or mostly large values (>= 64KB), a lower ValueThreshold works better. The default ValueThreshold in golbat.DefaultOptions is 1MB.
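
For instance, a sketch lowering the threshold for a write-heavy workload with large values (the 64KB figure is purely illustrative):

option := golbat.DefaultOptions("/tmp/golbat")
// Values larger than 64KB go to the value log; smaller ones stay in the LSM tree.
option.ValueThreshold = 64 << 10

db, err := golbat.Open(option)
if err != nil {
    return err
}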

Block size

Golbat groups adjacent keys together into the same block and such a block is the unit of transfer to and from persistent storage. The default block size is approximately 4096 uncompressed bytes. Applications that mostly do bulk scans over the contents of the database may wish to increase this size. Applications that do a lot of point reads of small values may wish to switch to a smaller block size if performance measurements indicate an improvement. There isn't much benefit in using blocks smaller than one kilobyte, or larger than a few megabytes. Also note that compression will be more effective with larger block sizes.

Compression

Each block is individually compressed before being written to persistent storage. Compression is on by default (snappy). ZSTD compression is recommended, but it needs CGO.
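
A sketch combining the block size and compression knobs (the specific values are illustrative, and ZSTD assumes a cgo-enabled build):

option := golbat.DefaultOptions("/tmp/golbat")
// Larger blocks favor bulk scans and compress more effectively.
option.BlockSize = 16 << 10
// ZSTD usually compresses better than the snappy default, but needs CGO.
option.CompressionType = golbat.ZSTDCompression

db, err := golbat.Open(option)
if err != nil {
    return err
}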

Checksums

Golbat associates checksums with all data it stores in the file system. There are two separate controls provided over how aggressively these checksums are verified:

ReadOptions.VerifyCheckSum may be set to true to force checksum verification of all data that is read from the file system on behalf of a particular read (including the value log). By default, no such verification is done.

Options.VerifyTableChecksum may be set to true before opening a database to make the database implementation raise an error as soon as it detects an internal corruption. Depending on which portion of the database has been corrupted, the error may be raised when the database is opened, or later by another database operation. By default, VerifyTableChecksum is off so that the database can be used even if parts of its persistent storage have been corrupted.
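
A minimal sketch of both controls (key is a placeholder):

option := golbat.DefaultOptions("/tmp/golbat")
// Fail fast on corrupted tables when opening the database.
option.VerifyTableChecksum = true
db, err := golbat.Open(option)
if err != nil {
    return err
}

// Verify checksums for the data read by this particular read,
// including the value log.
var ropt golbat.ReadOptions
ropt.VerifyCheckSum = true
value, err := db.Get(&ropt, key)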

Documentation

Constants

const (
	Value byte = 1 << iota
	Delete
	ValPtr
)

const (
	ManifestFilename = "MANIFEST"
)

const MemFileExt string = ".mem"

const ValueFileExt = ".vlog"

Variables

var (
	ErrTruncating    = errors.New("do truncate")
	ErrStopIteration = errors.New("stop iteration")
	ErrBadWAL        = errors.New(
		"WAL log is broken, need to be truncated that might cause data loss")
	ErrChecksumMismatch     = errors.New("checksum mismatch")
	ErrCompressionType      = errors.New("unsupported compression type")
	ErrBadMagic             = errors.New("manifest has bad magic")
	ErrMFUnsupportedVersion = errors.New("manifest has unsupported version")
	ErrFillTable            = errors.New("unable to fill tables")
	ErrNoRewrite            = errors.New("value log GC attempt didn't result in any cleanup")
	ErrRejected             = errors.New("value log GC request rejected")
	ErrValueLogSize         = errors.New("invalid ValueLogFileSize, must be in range [1MB, 2GB)")
	ErrNoRoom               = errors.New("no room for write")
	ErrDBClosed             = errors.New("DB Closed")
	ErrBatchTooBig          = errors.New("batch is too big to fit into one batch write")
	ErrKeyNotFound          = errors.New("key not found")
	ErrEmptyKey             = errors.New("key cannot be empty")
	ErrEmptyBatch           = errors.New("batch cannot be empty")
	ErrBlockedWrites        = errors.New("writes are blocked, possibly due to DropAll or Close")
)

var (
	DefaultReadOptions  = ReadOptions{VerifyCheckSum: true, FillCache: false, Snapshot: nil, AllVersion: false}
	DefaultWriteOptions = WriteOptions{Sync: false}
)

var NumMemBlocks int32

Functions

func AssertTrue

func AssertTrue(b bool)

func AssertTruef

func AssertTruef(b bool, format string, args ...interface{})

func Check

func Check(err error)

func Check2

func Check2(_ interface{}, err error)

func Close

func Close(db DB) error

Close closes a DB. Calling Close(db) multiple times would still only close the DB once.

func CompareKeys

func CompareKeys(a, b []byte) int

CompareKeys first compares keys in increasing order, then compares sequence numbers in decreasing order.

func GetFileID

func GetFileID(fname string) (uint64, bool)

func GetFileName

func GetFileName(id uint64) string

func KeyWithVersion

func KeyWithVersion(key []byte, version uint64) []byte

func NewBlockIterator

func NewBlockIterator(opt *Options, b *memBlock, tid, bid int) *blockIterator

func NewDiscard

func NewDiscard(option Options) (*discard, error)

func NewLevel

func NewLevel(opts *Options, id int) *level

func NewLevels

func NewLevels(opts *Options, mf *Manifest, vlog *valueLog,
	mff *manifestFile, snapshots *snapshotList) (*levels, error)

func NewMemTable

func NewMemTable(id int, option Options) (*memTable, error)

func NewTableFileName

func NewTableFileName(id uint64, dir string) string

func OpenMemTable

func OpenMemTable(fid, flags int, option Options) (*memTable, error)

func OpenValueLog

func OpenValueLog(option Options) (*valueLog, error)

func ParseKey

func ParseKey(key []byte) []byte

func ParseVersion

func ParseVersion(key []byte) uint64

func RunValueLogGC

func RunValueLogGC(db DB, discardRatio float64) error

RunValueLogGC triggers a value log garbage collection.

It picks value log files to perform GC based on statistics that are collected during compactions. If no such statistics are available, then log files are picked in random order. The process stops as soon as the first log file is encountered which does not result in garbage collection.

When a log file is picked, it is first sampled. If the sample shows that we can discard at least discardRatio space of that file, it would be rewritten.

If a call to RunValueLogGC results in no rewrites, ErrNoRewrite is returned.

We recommend setting discardRatio to 0.5, indicating that a file is rewritten if half its space can be discarded. This results in a lifetime value log write amplification of 2 (1 from the original write + 0.5 rewrite + 0.25 + 0.125 + ... = 2). Setting it to a higher value results in fewer space reclaims, while setting it to a lower value results in more space reclaims at the cost of increased activity on the LSM tree. discardRatio must be in the range (0.0, 1.0), both endpoints excluded; otherwise an error is returned.

Only one GC is allowed at a time. If another value log GC is running, or the DB has been closed, this returns ErrRejected.

Note: every time GC runs, it produces a spike of activity on the LSM tree.
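
A common usage pattern is to run GC periodically and keep collecting until a pass rewrites nothing (a sketch, assuming the time package is imported; the interval is illustrative):

ticker := time.NewTicker(10 * time.Minute)
defer ticker.Stop()
for range ticker.C {
    for {
        // Stop this round on ErrNoRewrite (nothing left to reclaim)
        // or ErrRejected (another GC is running, or the DB is closed).
        if err := golbat.RunValueLogGC(db, 0.5); err != nil {
            break
        }
    }
}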

func SameKey

func SameKey(key1, key2 []byte) bool
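
A sketch of how the key-encoding helpers relate (assuming ParseKey and ParseVersion invert KeyWithVersion, and SameKey compares user keys while ignoring versions; fmt is imported):

internal := golbat.KeyWithVersion([]byte("user-42"), 7)

fmt.Println(string(golbat.ParseKey(internal))) // user-42, assuming round-trip
fmt.Println(golbat.ParseVersion(internal))     // 7, assuming round-trip
fmt.Println(golbat.SameKey(internal, golbat.KeyWithVersion([]byte("user-42"), 9))) // true: versions ignored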

func Wrap

func Wrap(err error, msg string) error

func Wrapf

func Wrapf(err error, format string, args ...interface{}) error

Types

type Comparator

type Comparator = func([]byte, []byte) int

type CompressionType

type CompressionType uint32

DB contents are stored in a set of blocks, each of which holds a sequence of key,value pairs. Each block may be compressed before being stored in a file. The following enum describes which compression method (if any) is used to compress a block.

const (
	// WARNING: DON'T change the existing entries!
	NoCompression CompressionType = iota
	SnappyCompression
	ZSTDCompression
)

type DB

type DB interface {
	Put(options *WriteOptions, key, value []byte) error
	Delete(options *WriteOptions, key []byte) error
	Write(options *WriteOptions, batch *WriteBatch) error
	Get(options *ReadOptions, key []byte) (value []byte, err error)
	NewIterator(options *ReadOptions) (iterator Iterator, err error)
	GetSnapshot() *Snapshot
	ReleaseSnapshot(snapshot *Snapshot)
	GetExtend(options *ReadOptions, key []byte) (value *EValue, err error)
	GetOption() Options
	Close() error
}

The DB interface is useful for testing.

func Open

func Open(options Options) (DB, error)

type DBImpl

type DBImpl struct {
	sync.RWMutex // Guards list of inmemory tables, not individual reads and writes.
	// contains filtered or unexported fields
}

func (*DBImpl) Close

func (db *DBImpl) Close() error

Close closes a DB. It's crucial to call it to ensure all the pending updates make their way to disk. Calling DB.Close() multiple times would still only close the DB once.

func (*DBImpl) Delete

func (db *DBImpl) Delete(options *WriteOptions, key []byte) error

Delete deletes the value of the key from the db with the given options.

func (*DBImpl) DropAll

func (db *DBImpl) DropAll() error

DropAll would drop all the data stored in Golbat. It does this in the following way:

- Stop accepting new writes.
- Pause memtable flushes and compactions.
- Pick all tables from all levels, create a changeset to delete all these tables, and apply it to the manifest.
- Pick all log files from the value log and delete all of them. Restart value log files from zero.
- Resume memtable flushes and compactions.

NOTE: DropAll is resilient to concurrent writes, but not to reads. It is up to the user to not do any reads while DropAll is going on, otherwise they may result in panics. Ideally, both reads and writes are paused before running DropAll, and resumed after it is finished.

func (*DBImpl) Get

func (db *DBImpl) Get(options *ReadOptions, key []byte) (value []byte, err error)

Get reads the newest value of the key if no snapshot is set in options; otherwise it reads the version in the snapshot (or the version below it if there is no such version).

func (*DBImpl) GetExtend

func (db *DBImpl) GetExtend(options *ReadOptions, key []byte) (value *EValue, err error)

GetExtend reads the newest value (with meta) of the key if no snapshot is set in options; otherwise it reads the version in the snapshot (or the version below it if there is no such version).

func (*DBImpl) GetOption

func (db *DBImpl) GetOption() Options

GetOption returns the options used by the db.

func (*DBImpl) GetSampleKeys

func (db *DBImpl) GetSampleKeys(sampleSize, numGoroutines int) ([][]byte, error)

GetSampleKeys returns sample keys from the db. The number of keys equals sampleSize if the db has enough keys; otherwise all keys are returned. numGoroutines goroutines are used to fetch the keys.

func (*DBImpl) GetSnapshot

func (db *DBImpl) GetSnapshot() *Snapshot

GetSnapshot returns a snapshot at the current max version.

func (*DBImpl) GetTables

func (db *DBImpl) GetTables() []TableMeta

GetTables returns the metadata of the tables in the db.

func (*DBImpl) IsClosed

func (db *DBImpl) IsClosed() bool

func (*DBImpl) KeySplits

func (db *DBImpl) KeySplits(prefix []byte) []string

KeySplits can be used to get rough key ranges to divide up iteration over the DB.

func (*DBImpl) LevelsToString

func (db *DBImpl) LevelsToString() string

func (*DBImpl) NewIterator

func (db *DBImpl) NewIterator(options *ReadOptions) (iterator Iterator, err error)

NewIterator returns an iterator for the db. Iterators have the nuance of being a snapshot of the writes at the time the iterator was created: if writes are performed after an iterator is created, that iterator will not see them. Only writes performed before the iterator was created can be viewed. CAUTION: when done with iteration, an iterator should be closed.

func (*DBImpl) Put

func (db *DBImpl) Put(options *WriteOptions, key, value []byte) error

Put writes the key and value into the db with the given options.

func (*DBImpl) ReleaseSnapshot

func (db *DBImpl) ReleaseSnapshot(snapshot *Snapshot)

ReleaseSnapshot deletes the snapshot from the db. CAUTION: when a snapshot will not be used again, ReleaseSnapshot should be called.

func (*DBImpl) Write

func (db *DBImpl) Write(options *WriteOptions, batch *WriteBatch) error

Write writes the records in the batch into the db with the given write options. ATTENTION: the write is synchronous; if there is no room in the memtable, it will block.

type DBIterator

type DBIterator struct {
	Err error
	// contains filtered or unexported fields
}

func NewDBIterator

func NewDBIterator(db *DBImpl, option *ReadOptions, iters Iterator, version uint64) *DBIterator

func (*DBIterator) Close

func (it *DBIterator) Close() error

Close closes this iterator if it is not already closed; otherwise it does nothing.

func (*DBIterator) GetItem

func (it *DBIterator) GetItem() *Item

func (*DBIterator) Key

func (it *DBIterator) Key() []byte

Key returns the current key from the db with this iterator.

func (*DBIterator) Next

func (it *DBIterator) Next()

Next moves the iterator forward until it finds a valid record; this may leave the iterator invalid if no valid record remains.

func (*DBIterator) Prev

func (it *DBIterator) Prev()

Prev moves the iterator backward until it finds a valid record; this may leave the iterator invalid if no valid record remains.

func (*DBIterator) Seek

func (it *DBIterator) Seek(key []byte)

Seek finds the first valid record whose key matches the input key; this may leave the iterator invalid if no such key exists.

func (*DBIterator) SeekToFirst

func (it *DBIterator) SeekToFirst()

SeekToFirst moves the iterator to the beginning of the db, then finds the first valid record; this may leave the iterator invalid if there is no valid record.

func (*DBIterator) SeekToLast

func (it *DBIterator) SeekToLast()

SeekToLast moves the iterator to the end of the db, then finds the first valid record from there; this may leave the iterator invalid if there is no valid record.

func (*DBIterator) Valid

func (it *DBIterator) Valid() bool

Valid returns true if this iterator can be used, otherwise false.

func (*DBIterator) Value

func (it *DBIterator) Value() EValue

Value returns the current value from the db with this iterator.

type EValue

type EValue struct {
	Meta  byte
	Value []byte
	// contains filtered or unexported fields
}

func (*EValue) Decode

func (ev *EValue) Decode(b []byte)

func (*EValue) Encode

func (ev *EValue) Encode() []byte

func (*EValue) EncodeTo

func (ev *EValue) EncodeTo(buf *bytes.Buffer)

func (*EValue) EncodedSize

func (ev *EValue) EncodedSize() uint32

type Item

type Item struct {
	// contains filtered or unexported fields
}

func (*Item) Deleted

func (item *Item) Deleted() bool

func (*Item) InValueLog

func (item *Item) InValueLog() bool

func (*Item) Key

func (item *Item) Key() []byte

func (*Item) KeyCopy

func (item *Item) KeyCopy(dst []byte) []byte

func (*Item) KeySize

func (item *Item) KeySize() int64

func (*Item) Value

func (item *Item) Value() []byte

func (*Item) ValueCopy

func (item *Item) ValueCopy(dst []byte) []byte

func (*Item) ValueSize

func (item *Item) ValueSize() int64

func (*Item) Version

func (item *Item) Version() uint64

type Iterator

type Iterator interface {
	Seek(key []byte)
	SeekToFirst()
	SeekToLast()
	Next()
	Prev()
	Key() []byte
	Value() EValue
	Valid() bool
	Close() error
}

func NewTablesMergeIterator

func NewTablesMergeIterator(opt *Options, iters []Iterator, reverse bool) Iterator

NewTablesMergeIterator creates a merge iterator.

type LevelMeta

type LevelMeta struct {
	IsBaseLevel bool
	Id          int
	NumTables   int
	Size        int64
	TargetSize  int64
	MaxFileSize int64
	Score       float64
	Adjusted    float64
}

LevelMeta contains the information about a level.

type Manifest

type Manifest struct {
	Levels []levelsManifest
	Tables map[uint64]tableManifest
	// contains filtered or unexported fields
}

func NewManifest

func NewManifest() Manifest

func OpenManifestFile

func OpenManifestFile(dir string) (*manifestFile, Manifest, error)

OpenManifestFile opens the manifest file if it exists, or creates a new one if not.

func ReplyManifestFile

func ReplyManifestFile(mf *os.File) (Manifest, int64, error)

type Option

type Option func(*Options)

type Options

type Options struct {
	Dir             string
	CompressionType CompressionType

	// Logger
	Logger internel.Logger

	NumMemtables int
	MemTableSize int

	ValueLogDir        string
	ValueLogFileSize   int
	ValueLogMaxEntries int
	ValueThreshold     int

	BlockSize int

	BloomFalsePositive   float64
	ZSTDCompressionLevel int

	MaxLevels               int
	NumLevelZeroTables      int
	NumLevelZeroTablesStall int
	// see https://github.com/facebook/rocksdb/blob/v3.11/include/rocksdb/options.h#L366-L423
	BaseTableSize       int64
	BaseLevelSize       int64
	LevelSizeMultiplier int
	TableSizeMultiplier int
	NumCompactors       int

	VerifyTableChecksum bool
	// contains filtered or unexported fields
}

Options are params for creating a DB object. Each option X can be set with the WithX method.

func DefaultOptions

func DefaultOptions(dir string) Options

type ReadOptions

type ReadOptions struct {
	VerifyCheckSum bool
	FillCache      bool // currently not supported
	Snapshot       *Snapshot
	AllVersion     bool // return all versions of the key, whether or not deleted
}

type Snapshot

type Snapshot struct {
	// contains filtered or unexported fields
}

func (*Snapshot) Version

func (s *Snapshot) Version() uint64

type Table

type Table struct {
	sync.Mutex
	*internel.MmapFile

	Checksum  []byte
	CreatedAt time.Time

	IsInmemory bool // Set to true if the table is on level 0 and opened in memory.
	// contains filtered or unexported fields
}

func CreateTable

func CreateTable(fname string, builder *TableBuilder) (*Table, error)

func OpenInMemoryTable

func OpenInMemoryTable(data []byte, id uint64, opt *Options) (*Table, error)

func OpenTable

func OpenTable(mf *internel.MmapFile, opts Options) (*Table, error)

func (*Table) Biggest

func (t *Table) Biggest() []byte

Biggest is its biggest key, or nil if there is none.

func (*Table) BlockCount

func (t *Table) BlockCount() int

BlockCount returns the number of blocks in a table.

func (*Table) BloomFilterSize

func (t *Table) BloomFilterSize() int

BloomFilterSize returns the size of the bloom filter in bytes stored in memory.

func (*Table) CompressionType

func (t *Table) CompressionType() CompressionType

CompressionType returns the compression algorithm used for block compression.

func (*Table) DecrRef

func (t *Table) DecrRef() error

DecrRef decrements the refcount and possibly deletes the table

func (*Table) DoesNotHave

func (t *Table) DoesNotHave(hash uint32) bool

DoesNotHave returns true if and only if the table does not have the key hash. It does a bloom filter lookup.

func (*Table) Filename

func (t *Table) Filename() string

Filename is NOT the file name. Just kidding, it is.

func (*Table) ID

func (t *Table) ID() uint64

ID is the table's ID number (used to make the file name).

func (*Table) IncrRef

func (t *Table) IncrRef()

IncrRef increments the refcount (having to do with whether the file should be deleted)

func (*Table) IndexSize

func (t *Table) IndexSize() int

IndexSize is the size of table index in bytes.

func (*Table) KeyCount

func (t *Table) KeyCount() uint32

KeyCount is the total number of keys in this table.

func (*Table) KeySplits

func (t *Table) KeySplits(n int, prefix []byte) []string

KeySplits will split the table into at most n parts, considering only keys whose base key matches the prefix.

func (*Table) MaxVersion

func (t *Table) MaxVersion() uint64

MaxVersion returns the maximum version across all keys stored in this table.

func (*Table) NewIterator

func (t *Table) NewIterator(reverse bool) *TableIterator

func (*Table) OnDiskSize

func (t *Table) OnDiskSize() uint32

OnDiskSize returns the total size of key-values stored in this table (including the disk space occupied on the value log).

func (*Table) Size

func (t *Table) Size() int64

Size is its file size in bytes

func (*Table) Smallest

func (t *Table) Smallest() []byte

Smallest is its smallest key, or nil if there is none.

func (*Table) UncompressedSize

func (t *Table) UncompressedSize() uint32

UncompressedSize is the size of the uncompressed data stored in this file.

func (*Table) VerifyCheckSum

func (t *Table) VerifyCheckSum() error

VerifyCheckSum verifies that all blocks in the table are valid using their checksums.

type TableBuilder

type TableBuilder struct {
	// contains filtered or unexported fields
}

The table structure looks like:

+---------+------------+-----------+---------------+
| Block 1 | Block 2    | Block 3   | Block 4       |
+---------+------------+-----------+---------------+
| Block 5 | Block 6    | Block ... | Block N       |
+---------+------------+-----------+---------------+
| Index   | Index Size | Checksum  | Checksum Size |
+---------+------------+-----------+---------------+

func NewTableBuilder

func NewTableBuilder(opts Options) *TableBuilder

func (*TableBuilder) Add

func (b *TableBuilder) Add(key []byte, value EValue, valueLen uint32)

func (*TableBuilder) Done

func (b *TableBuilder) Done() fileBlocks

func (*TableBuilder) Empty

func (b *TableBuilder) Empty() bool

func (*TableBuilder) Finish

func (b *TableBuilder) Finish() []byte

func (*TableBuilder) ReachedCapacity

func (b *TableBuilder) ReachedCapacity() bool

type TableIterator

type TableIterator struct {
	// contains filtered or unexported fields
}

TableIterator is used to iterate over all the entries in the table that the iterator belongs to.

func (*TableIterator) Close

func (iter *TableIterator) Close() error

Close closes the iterator (and it must be called).

func (*TableIterator) DisableChecksum

func (iter *TableIterator) DisableChecksum()

DisableChecksum will disable checksum verification when the iterator accesses the file.

func (*TableIterator) Key

func (iter *TableIterator) Key() []byte

Returns the key with version.

func (*TableIterator) Next

func (iter *TableIterator) Next()

Next gets the next entry in the current block (or the prev entry if reverse is enabled).

func (*TableIterator) Prev

func (iter *TableIterator) Prev()

Prev gets the prev entry in the current block (or the next entry if reverse is enabled).

func (*TableIterator) Seek

func (iter *TableIterator) Seek(key []byte)

Seek returns the first entry that is >= the input key, searching from the start (or <= the input key if reverse is enabled).

func (*TableIterator) SeekToFirst

func (iter *TableIterator) SeekToFirst()

SeekToFirst gets the first entry (with the smallest key) in the table (or the last entry if reverse is enabled).

func (*TableIterator) SeekToLast

func (iter *TableIterator) SeekToLast()

SeekToLast gets the last entry (with the biggest key) in the table (or the first entry if reverse is enabled).

func (*TableIterator) Valid

func (iter *TableIterator) Valid() bool

func (*TableIterator) Value

func (iter *TableIterator) Value() EValue

Value returns the value with meta.

func (*TableIterator) ValueCopy

func (iter *TableIterator) ValueCopy() (ret EValue)

ValueCopy returns a copy of the value with meta.

type TableMeta

type TableMeta struct {
	Id               uint64
	Level            int
	Smallest         []byte
	Biggest          []byte
	KeyCount         uint32
	OnDiskSize       uint32
	UncompressedSize uint32
	IndexSize        uint32
	BloomFilterSize  uint32
	MaxVersion       uint64
}

TableMeta contains the information about a table.

type TablesIterator

type TablesIterator struct {
	// contains filtered or unexported fields
}

TablesIterator is used to iterate over all the entries in the tables that the iterator holds.

func NewTablesIterator

func NewTablesIterator(tables []*Table, reverse bool) *TablesIterator

func (*TablesIterator) Close

func (iter *TablesIterator) Close() error

func (*TablesIterator) Key

func (iter *TablesIterator) Key() []byte

func (*TablesIterator) Next

func (iter *TablesIterator) Next()

Next gets the next entry in the tables (or the prev entry if reverse is enabled).

func (*TablesIterator) Prev

func (iter *TablesIterator) Prev()

Prev gets the prev entry in the tables (or the next entry if reverse is enabled).

func (*TablesIterator) Seek

func (iter *TablesIterator) Seek(key []byte)

Seek finds the first entry with key >= the input key (or <= the input key if reverse is enabled).

func (*TablesIterator) SeekToFirst

func (iter *TablesIterator) SeekToFirst()

SeekToFirst gets the first entry (with the smallest key) in the tables (or the last entry if reverse is enabled).

func (*TablesIterator) SeekToLast

func (iter *TablesIterator) SeekToLast()

SeekToLast gets the last entry (with the biggest key) in the tables (or the first entry if reverse is enabled).

func (*TablesIterator) Valid

func (iter *TablesIterator) Valid() bool

func (*TablesIterator) Value

func (iter *TablesIterator) Value() EValue

type TablesMergeIterator

type TablesMergeIterator struct {
	// contains filtered or unexported fields
}

TablesMergeIterator merges different tables through TableIterator or TablesIterator iterators.

func (*TablesMergeIterator) Close

func (m *TablesMergeIterator) Close() error

Close closes all the iterators.

func (*TablesMergeIterator) Key

func (m *TablesMergeIterator) Key() []byte

Key returns the key associated with the current iterator.

func (*TablesMergeIterator) Next

func (m *TablesMergeIterator) Next()

Next returns the next entry (or the prev entry if reverse is enabled), skipping entries whose key is the same as the current key.

func (*TablesMergeIterator) Prev

func (m *TablesMergeIterator) Prev()

Prev returns the prev entry (or the next entry if reverse is enabled), skipping entries whose key is the same as the current key.

func (*TablesMergeIterator) Seek

func (m *TablesMergeIterator) Seek(key []byte)

Seek gets the entry with key >= the given key (or key <= the given key if reverse is enabled).

func (*TablesMergeIterator) SeekToFirst

func (m *TablesMergeIterator) SeekToFirst()

SeekToFirst gets the first entry (or the last entry if reverse is enabled).

func (*TablesMergeIterator) SeekToLast

func (m *TablesMergeIterator) SeekToLast()

SeekToLast gets the last entry (or the first entry if reverse is enabled).

func (*TablesMergeIterator) Valid

func (m *TablesMergeIterator) Valid() bool

Valid returns whether the TablesMergeIterator is at a valid entry.

func (*TablesMergeIterator) Value

func (m *TablesMergeIterator) Value() EValue

Value returns the value associated with the iterator.

type WriteBatch

type WriteBatch struct {
	// contains filtered or unexported fields
}

func NewWriteBatch

func NewWriteBatch(db DB) *WriteBatch

func (*WriteBatch) Append

func (b *WriteBatch) Append(other *WriteBatch)

func (*WriteBatch) ApproximateSize

func (b *WriteBatch) ApproximateSize() uint64

func (*WriteBatch) Clear

func (b *WriteBatch) Clear()

func (*WriteBatch) Count

func (b *WriteBatch) Count() int

func (*WriteBatch) Delete

func (b *WriteBatch) Delete(key []byte)

func (*WriteBatch) Empty

func (b *WriteBatch) Empty() bool

func (*WriteBatch) FullWith

func (b *WriteBatch) FullWith(key, value []byte) bool

func (*WriteBatch) Put

func (b *WriteBatch) Put(key, value []byte)

func (*WriteBatch) Validate

func (b *WriteBatch) Validate() error

type WriteOptions

type WriteOptions struct {
	Sync bool
}
