CouloyDB


README

CouloyDB & Kuloy

English | 简体中文

CouloyDB aims to strike a balance between performance and storage cost, serving as an alternative to in-memory KV stores like Redis in some scenarios.


🌟 What is CouloyDB & Kuloy?

CouloyDB is a fast KV store engine based on the bitcask model.

Kuloy is a KV storage service based on CouloyDB. It is compatible with the Redis protocol and supports consistent hash clustering and dynamic scaling.

In a nutshell, CouloyDB is a code library that acts as an embedded storage engine like LevelDB, while Kuloy is a runnable program like Redis.

🚀 How to use CouloyDB & Kuloy?

⚠️ Notice: CouloyDB & Kuloy have not been officially released and do not yet guarantee fully reliable compatibility!

🏁 Fast start: CouloyDB

Import the library:

go get github.com/Kirov7/CouloyDB
Basic usage example

Currently, the basic API of CouloyDB only supports key-value pairs of simple byte-array type, and these APIs can only guarantee the atomicity of a single operation. If you only want to store some simple data, you can use them, but they may be deprecated soon, so we recommend performing all operations inside a transaction. They should also not be used concurrently with the transaction APIs, as that would break the ACID properties of transactions.

func TestCouloyDB(t *testing.T) {
	conf := couloy.DefaultOptions()
	db, err := couloy.NewCouloyDB(conf)
	if err != nil {
		log.Fatal(err)
	}

	key := []byte("first key")
	value := []byte("first value")

	// Be careful: you can't use a single non-printable ASCII character (0x00 ~ 0x1F and 0x7F) as your key,
	// because those characters are reserved by CouloyDB's preset key tagging system.
	// This may change in the next major release.
	err = db.Put(key, value)
	if err != nil {
		log.Fatal(err)
	}

	v, err := db.Get(key)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(v)

	err = db.Del(key)
	if err != nil {
		log.Fatal(err)
	}

	keys := db.ListKeys()
	for _, k := range keys {
		fmt.Println(k)
	}

	target := []byte("target")
	err = db.Fold(func(key []byte, value []byte) bool {
		if bytes.Equal(value, target) {
			fmt.Println("Include target")
			return false
		}
		return true
	})
	if err != nil {
		log.Fatal(err)
	}
}
Transaction usage example

Transactions should be used if you want operations to be safe and if you want to store data in other data structures. Currently, transactions support the read-committed and serializable isolation levels. Since the bitcask model requires a full in-memory index, there are no plans to implement MVCC to support snapshot isolation or serializable snapshot isolation.

func TestTxn(t *testing.T) {
	db, err := NewCouloyDB(DefaultOptions())
	if err != nil {
		log.Fatal(err)
	}

	// the first parameter passed in is used to determine whether to automatically retry 
	// when this transaction conflicts
	err = db.RWTransaction(false, func(txn *Txn) error {
		return txn.Set(bytex.GetTestKey(0), bytex.GetTestKey(0))
	})

	if err != nil {
		log.Fatal(err)
	}

	// the first parameter passed in is used to determine whether this transaction is read-only
	err = db.SerialTransaction(true, func(txn *Txn) error {
		value, err := txn.Get(bytex.GetTestKey(0))
		if err != nil {
			return err
		}
		if !bytes.Equal(value, bytex.GetTestKey(0)) {
			return errors.New("unexpected value")
		}
		return nil
	})

	if err != nil {
		log.Fatal(err)
	}
}

You can safely manipulate the database by calling methods of Txn; a short sketch using the hash and list operations is shown after the list below.

Currently supported data structures and operations:
  • String:
    • GET
    • SET
    • DEL
    • SETNX
    • GETSET
    • STRLEN
    • INCR
    • INCRBY
    • DECR
    • DECRBY
    • EXIST
    • APPEND
    • MGET
    • MSET
  • Hash:
    • HSET
    • HGET
    • HDEL
    • HEXIST
    • HGETALL
    • HMSET
    • HMGET
    • HLEN
    • HVALUES
    • HKEYS
    • HSTRLEN
  • List:
    • LPUSH
    • RPUSH
    • LPOP
    • RPOP
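
For illustration, here is a minimal sketch (not from the original examples; it assumes the same in-package test setup as the transaction example above) that uses the hash and list operations inside a single read-write transaction:

func TestTxnStructures(t *testing.T) {
	db, err := NewCouloyDB(DefaultOptions())
	if err != nil {
		log.Fatal(err)
	}

	err = db.RWTransaction(false, func(txn *Txn) error {
		// Hash: set a field, then read it back.
		if err := txn.HSet([]byte("user:1"), []byte("name"), []byte("alice")); err != nil {
			return err
		}
		name, err := txn.HGet([]byte("user:1"), []byte("name"))
		if err != nil {
			return err
		}
		fmt.Println(string(name))

		// List: push two elements on the left, then pop one from the right.
		if err := txn.LPush([]byte("queue"), [][]byte{[]byte("a"), []byte("b")}); err != nil {
			return err
		}
		tail, err := txn.RPop([]byte("queue"))
		if err != nil {
			return err
		}
		fmt.Println(string(tail))
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
}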

In the future, we will support more data structures and operations.

🏁 Fast start: Kuloy

⚠️ Notice: We are refactoring Kuloy so that it directly calls the native interface of the storage engine layer (CouloyDB) to implement commands. After the refactoring is complete, we will open a new repository to maintain Kuloy. For now, it supports only a limited feature set.

You can download the executable file directly or compile it from source:

# Compile through source code
git clone https://github.com/Kirov7/CouloyDB

cd ./CouloyDB/cmd

go mod tidy

go build run.go

mv run kuloy
# Next, you can see the executable file named kuloy in the cmd directory

Then you can deploy quickly through configuration files or command-line arguments.

You can specify all configuration items through the configuration file or command-line parameters. If there is any conflict, the configuration file prevails. You need to modify the configuration file and do the same on each node, and make sure port 7946 is available (it is used to synchronize cluster state).

config.yaml:

cluster:
  # all the nodes in the cluster (including this one)
  peers:
    - 192.168.1.151:9736
    - 192.168.1.152:9736
    - 192.168.1.153:9736
  # index of this node in the cluster peers list
  self: 0
standalone:
  # address of this instance in standalone deployment
  addr: "127.0.0.1:9736"
engine:
  # directory path where data files are stored
  dirPath: "/tmp/kuloy-test"
  # maximum size per data file (unit: byte)
  dataFileSize: 268435456
  # type of memory index (hashmap/btree/art)
  indexType: "btree"
  # whether to enable synchronous writes
  syncWrites: false
  # period between data compactions (unit: second)
  mergeInterval: 28800

You can find the configuration file template in cmd/config.

🎯 Deploy a standalone Kuloy service
./kuloy standalone -c ./config/config.yaml
🎯 Deploy a consistent hash cluster Kuloy service
./kuloy cluster -c ./config/config.yaml
🎯 View Help Options

You can run the following command to view the functions of all configuration items:

./kuloy --help
🎯 Accessing Kuloy service

The Kuloy service currently supports some String-type operations from Redis, as well as some general operations. More data structures will be supported after our refactoring is complete.

You can use Kuloy as you would normally use Redis (only for currently supported operations, of course).

Here is a demonstration using the go-redis client:

go get github.com/go-redis/redis/v8
func TestKuloy(t *testing.T) {
	// Create a Redis client
	client := redis.NewClient(&redis.Options{
		Addr:     "127.0.0.1:9736",
		DB:       0,  // Kuloy supports db selection
	})

	// Test set operation
	err := client.Set(context.Background(), "mykey", "hello world", 0).Err()
	if err != nil {
		log.Fatal(err)
	}

	// Test get operation
	val, err := client.Get(context.Background(), "mykey").Result()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("mykey ->", val)
}
📜 Currently supported commands
  • Key
    • DEL
    • EXISTS
    • KEYS
    • FLUSHDB
    • TYPE
    • RENAME
    • RENAMENX
  • String
    • GET
    • SET
    • SETNX
    • GETSET
    • STRLEN
  • Connection
    • PING
    • SELECT

🔮 What will we do next?

  • Implement batch write and basic transaction functions [ CouloyDB now supports the read-committed and serializable transaction isolation levels ].

  • Optimize the hintfile storage structure so that the memtable can be built faster (may use gob).

  • Add a flatbuffers-based build option to support faster reads.

  • Use mmap to read data files on disk [ however, the official mmap library is not optimized enough and needs further optimization ].

  • Embed a Lua script interpreter to support operations with complex logic [ currently implemented in CouloyDB; Kuloy does not support it yet ].

  • Extend Redis protocol support to act as a KV storage server over the network [ the basic implementation is complete in Kuloy ].

  • Extend the storage engine layer to build data structures with the same interface as Redis:

    • String
    • Hash
    • List
    • Set
    • ZSet
    • Bitmap
  • Extend an easy-to-use distributed solution (may support both the gossip and raft protocols for different usage scenarios) [ gossip is already supported ].

  • Extend consistent hash clusters with backup nodes for each node.

  • Add the necessary rehash functionality.

Contact us

If you have any questions and want to contact us, you can send an email to: crazyfay@qq.com.

Or join the Tencent WeChat group below; we will do our best to help with any problems.

(WeChat group QR code: WeChat.png)

Documentation


Constants

const (
	PutEvent eventType = iota
	DelEvent
)
const MaxDuration time.Duration = 1<<63 - 1

Variables

This section is empty.

Functions

func BuildScript

func BuildScript(raw ...string) luaScript

Types

type Cmd

type Cmd struct {
	Value interface{}
}

func (*Cmd) AsArray

func (r *Cmd) AsArray() ([]interface{}, error)

func (*Cmd) AsBool

func (r *Cmd) AsBool() (bool, error)

func (*Cmd) AsInt

func (r *Cmd) AsInt() (int, error)

func (*Cmd) AsString

func (r *Cmd) AsString() (string, error)

type DB

type DB struct {
	L *lua.LState
	// contains filtered or unexported fields
}

func NewCouloyDB

func NewCouloyDB(opt Options) (*DB, error)

func (*DB) Clear

func (db *DB) Clear() error

func (*DB) Close

func (db *DB) Close() error

func (*DB) Del

func (db *DB) Del(key []byte) error

func (*DB) Eval

func (db *DB) Eval(script luaScript) (*Cmd, error)

func (*DB) Fold

func (db *DB) Fold(fn func(key []byte, value []byte) bool) error

Fold gets all the keys and executes the function passed in by the user. The traversal terminates when the function returns false.

func (*DB) Get

func (db *DB) Get(key []byte) ([]byte, error)

func (*DB) GetTxId

func (db *DB) GetTxId() int64

func (*DB) IsExist

func (db *DB) IsExist(key []byte) (bool, error)

func (*DB) ListKeys

func (db *DB) ListKeys() [][]byte

ListKeys gets all the keys and returns them.

func (*DB) Merge

func (db *DB) Merge() error

func (*DB) NewIterator

func (db *DB) NewIterator(options IteratorOptions) *Iterator

func (*DB) NewWriteBatch

func (db *DB) NewWriteBatch(opts WriteBatchOptions) *WriteBatch

func (*DB) Notify

func (db *DB) Notify(key string, value []byte, entryType eventType)

func (*DB) Persist

func (db *DB) Persist(key []byte)

func (*DB) Put

func (db *DB) Put(key, value []byte) error

func (*DB) PutWithExpiration

func (db *DB) PutWithExpiration(key, value []byte, duration time.Duration) error
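
A minimal usage sketch (illustrative key, value, and duration; not from the docs):

// Put a key that automatically expires after 10 seconds.
err := db.PutWithExpiration([]byte("session:42"), []byte("token"), 10*time.Second)
if err != nil {
	log.Fatal(err)
}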

func (*DB) RWTransaction

func (db *DB) RWTransaction(retryOnConflict bool, fn func(txn *Txn) error) error

RWTransaction runs a read/write transaction. If retryOnConflict is true, the transaction will automatically retry until it commits successfully. fn is the transaction body you want to perform.

func (*DB) SerialTransaction

func (db *DB) SerialTransaction(readOnly bool, fn func(txn *Txn) error) error

SerialTransaction runs a serializable transaction. For now, the commit of a serializable transaction is unlikely to conflict, so no retry is required.

func (*DB) Size

func (db *DB) Size() int

func (*DB) Sync

func (db *DB) Sync() error

func (*DB) UnWatch

func (db *DB) UnWatch(watcher *Watcher)

func (*DB) Watch

func (db *DB) Watch(ctx context.Context, key string) <-chan *watchEvent
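
A minimal usage sketch (illustrative; it assumes the returned channel is closed once the context is cancelled):

// Watch "mykey" for changes until the context times out.
ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
defer cancel()
for event := range db.Watch(ctx, "mykey") {
	fmt.Printf("%+v\n", event) // watchEvent is unexported, so just print it
}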

type IsolationLevel

type IsolationLevel uint8
const (
	ReadCommitted IsolationLevel = iota
	Serializable
)

type Iterator

type Iterator struct {
	IndexIterator meta.Iterator
	// contains filtered or unexported fields
}

func (*Iterator) Close

func (it *Iterator) Close()

func (*Iterator) Key

func (it *Iterator) Key() []byte

func (*Iterator) Next

func (it *Iterator) Next()

func (*Iterator) Rewind

func (it *Iterator) Rewind()

func (*Iterator) Seek

func (it *Iterator) Seek(key []byte)

func (*Iterator) Valid

func (it *Iterator) Valid() bool

func (*Iterator) Value

func (it *Iterator) Value() ([]byte, error)

type IteratorOptions

type IteratorOptions struct {
	Prefix  []byte
	Reverse bool
}
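
A minimal usage sketch of the iterator (the prefix is illustrative):

it := db.NewIterator(IteratorOptions{Prefix: []byte("user:"), Reverse: false})
defer it.Close()
for it.Rewind(); it.Valid(); it.Next() {
	value, err := it.Value()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(it.Key()), string(value))
}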

type LuaScriptBuilder

type LuaScriptBuilder struct {
	// contains filtered or unexported fields
}

func NewLuaScriptBuilder

func NewLuaScriptBuilder() *LuaScriptBuilder

func (*LuaScriptBuilder) Build

func (b *LuaScriptBuilder) Build() luaScript

func (*LuaScriptBuilder) DeclareArray

func (b *LuaScriptBuilder) DeclareArray(name string, values []string) *LuaScriptBuilder

func (*LuaScriptBuilder) Del

func (*LuaScriptBuilder) Else

func (b *LuaScriptBuilder) Else(fn func(builder *LuaScriptBuilder)) *LuaScriptBuilder

func (*LuaScriptBuilder) ElseIf

func (b *LuaScriptBuilder) ElseIf(condition string, fn func(builder *LuaScriptBuilder)) *LuaScriptBuilder

func (*LuaScriptBuilder) For

func (b *LuaScriptBuilder) For(init, condition, step string, fn func(builder *LuaScriptBuilder)) *LuaScriptBuilder

func (*LuaScriptBuilder) GetArrayLength

func (b *LuaScriptBuilder) GetArrayLength(name string) string

func (*LuaScriptBuilder) GetValueFromArray

func (b *LuaScriptBuilder) GetValueFromArray(name, index string) string

func (*LuaScriptBuilder) If

func (b *LuaScriptBuilder) If(condition string, fn func(builder *LuaScriptBuilder)) *LuaScriptBuilder

func (*LuaScriptBuilder) Put

func (b *LuaScriptBuilder) Put(key, value string) *LuaScriptBuilder

func (*LuaScriptBuilder) Raw

func (b *LuaScriptBuilder) Raw(raw ...string) *LuaScriptBuilder

func (*LuaScriptBuilder) RawCode

func (b *LuaScriptBuilder) RawCode(code string) *LuaScriptBuilder

func (*LuaScriptBuilder) Set

func (b *LuaScriptBuilder) Set(key, value string) *LuaScriptBuilder

func (*LuaScriptBuilder) SetValueInArray

func (b *LuaScriptBuilder) SetValueInArray(name, index, value string) *LuaScriptBuilder
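
A minimal usage sketch of the script builder (illustrative; it assumes EnableLuaInterpreter is turned on in Options, and that Set takes the key and value as plain strings):

opts := DefaultOptions()
opts.EnableLuaInterpreter = true
db, err := NewCouloyDB(opts)
if err != nil {
	log.Fatal(err)
}

// Build a script that stores one key-value pair, then evaluate it.
script := NewLuaScriptBuilder().
	Set("greeting", "hello").
	Build()
if _, err := db.Eval(script); err != nil {
	log.Fatal(err)
}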

type Options

type Options struct {
	DirPath              string
	DataFileSize         int64
	IndexType            meta.MemTableType
	SyncWrites           bool
	BytesPerSync         uint64
	MergeInterval        int64
	EnableLuaInterpreter bool
	SerializableLua      bool
}

func DefaultOptions

func DefaultOptions() Options

func (*Options) SetDataFileSizeByte

func (o *Options) SetDataFileSizeByte(size int64) *Options

func (*Options) SetDataFileSizeGB

func (o *Options) SetDataFileSizeGB(size int64) *Options

func (*Options) SetDataFileSizeKB

func (o *Options) SetDataFileSizeKB(size int64) *Options

func (*Options) SetDataFileSizeMB

func (o *Options) SetDataFileSizeMB(size int64) *Options

func (*Options) SetDirPath

func (o *Options) SetDirPath(path string) *Options

func (*Options) SetIndexType

func (o *Options) SetIndexType(typ meta.MemTableType) *Options

func (*Options) SetSyncWrites

func (o *Options) SetSyncWrites(sync bool) *Options
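
A minimal usage sketch of the chained setters (the path and size are illustrative):

opts := DefaultOptions()
opts.SetDirPath("/tmp/couloy-demo").
	SetDataFileSizeMB(256).
	SetSyncWrites(false)
db, err := NewCouloyDB(opts)
if err != nil {
	log.Fatal(err)
}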

type Txn

type Txn struct {
	// contains filtered or unexported fields
}

func (*Txn) Append

func (txn *Txn) Append(key []byte, value []byte) error

func (*Txn) Decr

func (txn *Txn) Decr(key []byte) (int, error)

func (*Txn) DecrBy

func (txn *Txn) DecrBy(key []byte, delta int) (int, error)

func (*Txn) Del

func (txn *Txn) Del(key []byte) error

Del deletes data from the db, but instead of applying the deletion to the memtable immediately, it writes it to pendingWrites first.

func (*Txn) Exist

func (txn *Txn) Exist(key []byte) bool

func (*Txn) Get

func (txn *Txn) Get(key []byte) ([]byte, error)

Get looks up the key in pendingWrites first, and falls back to the db if it is not found there.

func (*Txn) GetSet

func (txn *Txn) GetSet(key, value []byte) ([]byte, error)

func (*Txn) HDel

func (txn *Txn) HDel(key, field []byte) error

func (*Txn) HExist

func (txn *Txn) HExist(key, field []byte) bool

func (*Txn) HGet

func (txn *Txn) HGet(key, field []byte) ([]byte, error)

func (*Txn) HGetAll

func (txn *Txn) HGetAll(key []byte) ([][]byte, [][]byte, error)

func (*Txn) HKeys

func (txn *Txn) HKeys(key []byte) ([][]byte, error)

func (*Txn) HLen

func (txn *Txn) HLen(key []byte) (int64, error)

func (*Txn) HMGet

func (txn *Txn) HMGet(key []byte, fields [][]byte) ([][]byte, error)

func (*Txn) HMSet

func (txn *Txn) HMSet(key []byte, args [][]byte) error

func (*Txn) HSet

func (txn *Txn) HSet(key, field, value []byte) error

func (*Txn) HStrLen

func (txn *Txn) HStrLen(key, field []byte) (int64, error)

func (*Txn) HValues

func (txn *Txn) HValues(key []byte) ([][]byte, error)

func (*Txn) Incr

func (txn *Txn) Incr(key []byte) (int, error)

func (*Txn) IncrBy

func (txn *Txn) IncrBy(key []byte, delta int) (int, error)

func (*Txn) LIndex

func (txn *Txn) LIndex(key []byte, index int) ([]byte, error)

func (*Txn) LLen

func (txn *Txn) LLen(key []byte) (int, error)

func (*Txn) LPop

func (txn *Txn) LPop(key []byte) ([]byte, error)

func (*Txn) LPush

func (txn *Txn) LPush(key []byte, values [][]byte) error

func (*Txn) LRange

func (txn *Txn) LRange(key []byte, start, stop int) ([][]byte, error)

func (*Txn) LRem

func (txn *Txn) LRem(key []byte, index int) error

func (*Txn) LSet

func (txn *Txn) LSet(key []byte, index int, value []byte) error

func (*Txn) LTrim

func (txn *Txn) LTrim(key []byte, start, stop int) error

func (*Txn) MGet

func (txn *Txn) MGet(keys [][]byte) ([][]byte, error)

func (*Txn) MSet

func (txn *Txn) MSet(args [][]byte) error

func (*Txn) RPop

func (txn *Txn) RPop(key []byte) ([]byte, error)

func (*Txn) RPush

func (txn *Txn) RPush(key []byte, values [][]byte) error

func (*Txn) SAdd

func (txn *Txn) SAdd(key []byte, members ...[]byte) error

func (*Txn) SCard

func (txn *Txn) SCard(key []byte) (int64, error)

func (*Txn) SMembers

func (txn *Txn) SMembers(key []byte) ([][]byte, error)

func (*Txn) SRem

func (txn *Txn) SRem(key []byte, members ...[]byte) error

func (*Txn) Set

func (txn *Txn) Set(key []byte, value []byte) error

Set writes data to the db, but instead of writing it to the memtable immediately, it writes it to pendingWrites first.

func (*Txn) SetEX

func (txn *Txn) SetEX(key, value []byte) error

func (*Txn) SetNX

func (txn *Txn) SetNX(key, value []byte) error

func (*Txn) StrLen

func (txn *Txn) StrLen(key []byte) (int, error)

type Watcher

type Watcher struct {
	// contains filtered or unexported fields
}

type WriteBatch

type WriteBatch struct {
	// contains filtered or unexported fields
}

WriteBatch is an atomic write batch.

func (*WriteBatch) Commit

func (wb *WriteBatch) Commit() error

func (*WriteBatch) Del

func (wb *WriteBatch) Del(key []byte) error

func (*WriteBatch) Put

func (wb *WriteBatch) Put(key []byte, value []byte) error

type WriteBatchOptions

type WriteBatchOptions struct {
	MaxBatchNum uint32
	SyncWrites  bool
}

func DefaultBatchOptions

func DefaultBatchOptions() WriteBatchOptions
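
A minimal usage sketch of a write batch (illustrative keys and values):

wb := db.NewWriteBatch(DefaultBatchOptions())
if err := wb.Put([]byte("k1"), []byte("v1")); err != nil {
	log.Fatal(err)
}
if err := wb.Del([]byte("k2")); err != nil {
	log.Fatal(err)
}
// Commit applies the whole batch atomically.
if err := wb.Commit(); err != nil {
	log.Fatal(err)
}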
