qdb

package module
v3.14.1
Published: Dec 4, 2023 License: BSD-3-Clause Imports: 14 Imported by: 3

README

Quasardb Go API


Note: The Go API works on Windows x64 and Unix-like operating systems. 32-bit Windows is not currently supported.

Warning: Early stage development version. Use at your own risk.

Warning: Go 1.9.0 is not supported. If you encounter a compilation problem with time.go, please upgrade your Go toolchain.

Go API for quasardb.

Requirements
  1. Go compiler and tools
  2. quasardb daemon
  3. quasardb C API version corresponding to the OS you use
  4. The version of the quasardb C API must match the current git branch; the master branch corresponds to the nightly C API
Build instructions:
  1. go get -d github.com/bureau14/qdb-api-go
  2. Extract the downloaded C API into $GOPATH/src/github.com/bureau14/qdb-api-go/qdb
Test instructions:
  1. export QDB_SERVER_PATH=/path/to/qdb-server/bin # a path to a dir containing qdbd, qdb_cluster_keygen and qdb_user_add executables
  2. cd $GOPATH/src/github.com/bureau14/qdb-api-go
  3. go test
Coverage instructions:
  1. export QDB_SERVER_PATH=/path/to/qdb-server/bin # a path to a dir containing qdbd, qdb_cluster_keygen and qdb_user_add executables
  2. cd $GOPATH/src/github.com/bureau14/qdb-api-go
  3. go test -coverprofile=coverage.out
  4. go tool cover -html=coverage.out # if you want to see coverage detail in a browser
Usage (OS X)
  1. mkdir $GOPATH/src/<PROJECT>
  2. Extract the downloaded C API into $GOPATH/src/<PROJECT>/qdb
  3. Ensure quasardb C library is in the DYLD_LIBRARY_PATH: export DYLD_LIBRARY_PATH=$DYLD_LIBRARY_PATH:$GOPATH/src/<PROJECT>/qdb/lib
Troubleshooting

If you encounter any problems, please create an issue in the bug tracker.

Getting started

Simple test

Assuming a non-secured database (see the "Setup a secured connection" section for secured databases):

import qdb "github.com/bureau14/qdb-api-go"
import "time"

func main() {
    handle, err := qdb.SetupHandle("qdb://127.0.0.1:2836", time.Duration(120) * time.Second)
    if err != nil {
        // do something with error
    }
    defer handle.Close()

    blob := handle.Blob("alias")

    content := []byte("content")
    err = blob.Put(content, qdb.NeverExpires())
    if err != nil {
        // do something with error
    }

    contentUpdate := []byte("updated content")
    err = blob.Update(contentUpdate, qdb.NeverExpires())
    if err != nil {
        // do something with error
    }

    blob.Remove()
}

The following samples presume you import the package as in the previous example. Error checking is omitted for brevity.

Setup a non-secured connection

    handle, err := qdb.SetupHandle("qdb://127.0.0.1:2836", time.Duration(120) * time.Second)

    // alternatively:
    handle := qdb.MustSetupHandle("qdb://127.0.0.1:2836", time.Duration(120) * time.Second)

Setup a secured connection

    handle, err := qdb.SetupSecuredHandle("qdb://127.0.0.1:2836", "/path/to/cluster_public.key", "/path/to/user_private.key", time.Duration(120) * time.Second, qdb.EncryptNone)

    // alternatively:
    handle := qdb.MustSetupSecuredHandle("qdb://127.0.0.1:2836", "/path/to/cluster_public.key", "/path/to/user_private.key", time.Duration(120) * time.Second, qdb.EncryptNone)

Setup a handle manually

This could prove useful if you need to manage the flow of creation of your handle.

    handle, err := qdb.NewHandle()

    // Set timeout
    err = handle.SetTimeout(time.Duration(120) * time.Second)

    // Set encryption if enabled server side
    err = handle.SetEncryption(qdb.EncryptAES)

    // add security if enabled server side
    clusterKey, err := qdb.ClusterKeyFromFile("/path/to/cluster_public.key")
    err = handle.AddClusterPublicKey(clusterKey)
    user, secret, err := qdb.UserCredentialFromFile("/path/to/user_private.key")
    err = handle.AddUserCredentials(user, secret)

    // connect
    err = handle.Connect("qdb://127.0.0.1:2836")

Documentation

Overview

Package qdb provides an API to a quasardb server.

Index

Examples

Constants

This section is empty.

Variables

This section is empty.

Functions

func ClusterKeyFromFile

func ClusterKeyFromFile(clusterPublicKeyFile string) (string, error)

ClusterKeyFromFile : retrieve cluster key from a file

func CountUndefined

func CountUndefined() uint64

CountUndefined : return a uint64 value corresponding to quasardb undefined count value

func Int64Undefined

func Int64Undefined() int64

Int64Undefined : return an int64 value corresponding to quasardb undefined int64 value

func MaxTimespec

func MaxTimespec() time.Time

MaxTimespec : return a time value corresponding to quasardb maximum timespec value

func MinTimespec

func MinTimespec() time.Time

MinTimespec : return a time value corresponding to quasardb minimum timespec value

func NeverExpires

func NeverExpires() time.Time

NeverExpires : return a time value corresponding to quasardb never expires value

func PreserveExpiration

func PreserveExpiration() time.Time

PreserveExpiration : return a time value corresponding to quasardb preserve expiration value

func SetLogFile

func SetLogFile(filePath string)

SetLogFile sets the log file to use

func UserCredentialFromFile

func UserCredentialFromFile(userCredentialFile string) (string, string, error)

UserCredentialFromFile : retrieve user credentials from a file

Types

type BlobEntry

type BlobEntry struct {
	Entry
}

BlobEntry : blob data type

Example
SetLogFile(ExamplesLogFilePath)
h := MustSetupHandle(insecureURI, 120*time.Second)
defer h.Close()

alias := "BlobAlias"
blob := h.Blob(alias)
defer blob.Remove()

content := []byte("content")
blob.Put(content, NeverExpires())

obtainedContent, _ := blob.Get()
fmt.Println("Get content:", string(obtainedContent))

updateContent := []byte("updated content")
blob.Update(updateContent, PreserveExpiration())

obtainedContent, _ = blob.Get()
fmt.Println("Get updated content:", string(obtainedContent))

newContent := []byte("new content")
previousContent, _ := blob.GetAndUpdate(newContent, PreserveExpiration())
fmt.Println("Previous content:", string(previousContent))

obtainedContent, _ = blob.Get()
fmt.Println("Get new content:", string(obtainedContent))
Output:

Get content: content
Get updated content: updated content
Previous content: updated content
Get new content: new content

func (*BlobEntry) CompareAndSwap

func (entry *BlobEntry) CompareAndSwap(newValue []byte, newComparand []byte, expiry time.Time) ([]byte, error)

CompareAndSwap : Atomically compares the entry with the comparand and updates it to newValue if, and only if, they match.

The function returns the original value of the entry in case of a mismatch. When it matches, no content is returned.
The entry must already exist.
The update will occur if and only if the content of the entry matches, bit for bit, the content of the comparand buffer.

func (BlobEntry) Get

func (entry BlobEntry) Get() ([]byte, error)

Get : Retrieve an entry's content

If the entry does not exist, the function will fail and return 'alias not found' error.

func (BlobEntry) GetAndRemove

func (entry BlobEntry) GetAndRemove() ([]byte, error)

GetAndRemove : Atomically gets an entry from the quasardb server and removes it.

If the entry does not exist, the function will fail and return 'alias not found' error.

func (*BlobEntry) GetAndUpdate

func (entry *BlobEntry) GetAndUpdate(newContent []byte, expiry time.Time) ([]byte, error)

GetAndUpdate : Atomically gets and updates (in this order) the entry on the quasardb server.

The entry must already exist.

func (BlobEntry) GetNoAlloc

func (entry BlobEntry) GetNoAlloc(content []byte) (int, error)

GetNoAlloc : Retrieve an entry's content into an already allocated buffer

If the entry does not exist, the function will fail and return an 'alias not found' error.
If the buffer is not large enough to hold the data, the function will fail and return a 'buffer is too small' error; the entry size is nevertheless returned so that the caller may resize its buffer and try again.

func (BlobEntry) Put

func (entry BlobEntry) Put(content []byte, expiry time.Time) error

Put : Creates a new entry and sets its content to the provided blob.

If the entry already exists the function will fail and will return 'alias already exists' error.
You can specify an expiry or use NeverExpires if you don’t want the entry to expire.

func (BlobEntry) RemoveIf

func (entry BlobEntry) RemoveIf(comparand []byte) error

RemoveIf : Atomically removes the entry on the server if the content matches.

The entry must already exist.
Removal will occur if and only if the content of the entry matches bit for bit the content of the comparand buffer.

func (*BlobEntry) Update

func (entry *BlobEntry) Update(newContent []byte, expiry time.Time) error

Update : Creates or updates an entry and sets its content to the provided blob.

If the entry already exists, the function will modify the entry.
You can specify an expiry or use NeverExpires if you don’t want the entry to expire.

type Cluster

type Cluster struct {
	HandleType
}

Cluster : An object permitting calls to a cluster

func (Cluster) Endpoints

func (c Cluster) Endpoints() ([]Endpoint, error)

Endpoints : Retrieve all endpoints accessible to this handle.

func (Cluster) PurgeAll

func (c Cluster) PurgeAll() error

PurgeAll : Removes irremediably all data from all the nodes of the cluster.

This function is useful when quasardb is used as a cache and is not the golden source.
This call is not atomic: if the command cannot be dispatched on the whole cluster, it will be dispatched on as many nodes as possible and the function will return with a qdb_e_ok code.
By default cluster does not allow this operation and the function returns a qdb_e_operation_disabled error.

func (Cluster) PurgeCache

func (c Cluster) PurgeCache() error

PurgeCache : Removes all cached data from all the nodes of the cluster.

This function is disabled on a transient cluster.
Prefer purge_all in this case.

This call is not atomic: if the command cannot be dispatched on the whole cluster, it will be dispatched on as many nodes as possible and the function will return with a qdb_e_ok code.

func (Cluster) TrimAll

func (c Cluster) TrimAll() error

TrimAll : Trims all data on all the nodes of the cluster.

Quasardb uses Multi-Version Concurrency Control (MVCC) as a foundation of its transaction engine. It will automatically clean up old versions as entries are accessed.
This call is not atomic: if the command cannot be dispatched on the whole cluster, it will be dispatched on as many nodes as possible and the function will return with a qdb_e_ok code.
Entries that are not accessed may not be cleaned up, resulting in increasing disk usage.

This function will request each node to trim all entries, release unused memory and compact files on disk.
Because this operation is I/O and CPU intensive it is not recommended to run it when the cluster is heavily used.

func (Cluster) WaitForStabilization

func (c Cluster) WaitForStabilization(timeout time.Duration) error

WaitForStabilization : Wait for all nodes of the cluster to be stabilized.

Takes a timeout value as a time.Duration.

type Compression

type Compression C.qdb_compression_t

Compression : compression parameter

const (
	CompNone Compression = C.qdb_comp_none
	CompFast Compression = C.qdb_comp_fast
	CompBest Compression = C.qdb_comp_best
)

Compression values:

CompNone : No compression.
CompFast : Maximum compression speed, potentially minimum compression ratio. This is currently the default.
CompBest : Maximum compression ratio, potentially minimum compression speed. This is currently not implemented.

type DirectBlobEntry

type DirectBlobEntry struct {
	DirectEntry
}

DirectBlobEntry is an Entry for a blob data type

func (DirectBlobEntry) Get

func (e DirectBlobEntry) Get() ([]byte, error)

Get returns an entry's contents

func (DirectBlobEntry) Put

func (e DirectBlobEntry) Put(content []byte, expiry time.Time) error

Put creates a new entry and sets its content to the provided blob. This will return an error if the entry alias already exists. You can specify an expiry or use NeverExpires if you don’t want the entry to expire.

func (*DirectBlobEntry) Update

func (e *DirectBlobEntry) Update(newContent []byte, expiry time.Time) error

Update creates or updates an entry and sets its content to the provided blob. If the entry already exists, the function will modify the entry. You can specify an expiry or use NeverExpires if you don’t want the entry to expire.

type DirectEntry

type DirectEntry struct {
	DirectHandleType
	// contains filtered or unexported fields
}

DirectEntry is a base type for composition, similar to a regular Entry

func (DirectEntry) Alias

func (e DirectEntry) Alias() string

Alias returns an alias name

func (DirectEntry) Remove

func (e DirectEntry) Remove() error

Remove an entry from the local node's storage, regardless of its type.

This function bypasses the clustering mechanism and accesses the node local storage. Entries in the local node storage are not accessible via the regular API and vice versa.

The call is ACID, regardless of the type of the entry and a transaction will be created if need be.

type DirectHandleType

type DirectHandleType struct {
	// contains filtered or unexported fields
}

DirectHandleType is an opaque handle needed for maintaining a direct connection to a node.

func (DirectHandleType) Blob

func (h DirectHandleType) Blob(alias string) DirectBlobEntry

Blob creates a direct blob entry object

func (DirectHandleType) Close

func (h DirectHandleType) Close() error

Close releases a direct connection previously opened with DirectConnect

func (DirectHandleType) Integer

func (h DirectHandleType) Integer(alias string) DirectIntegerEntry

Integer creates a direct integer entry object

func (DirectHandleType) PrefixGet

func (h DirectHandleType) PrefixGet(prefix string, limit int) ([]string, error)

PrefixGet : Retrieves the list of all entries matching the provided prefix.

A prefix-based search will enable you to find all entries matching a provided prefix.
This function returns the list of aliases. It’s up to the user to query the content associated with every entry, if needed.

func (DirectHandleType) Release

func (h DirectHandleType) Release(buffer unsafe.Pointer)

Release frees API allocated buffers

type DirectIntegerEntry

type DirectIntegerEntry struct {
	DirectEntry
}

DirectIntegerEntry is an Entry for an int64 data type

func (DirectIntegerEntry) Add

func (e DirectIntegerEntry) Add(added int64) (int64, error)

Add : Atomically increases or decreases a signed 64-bit integer.

The specified entry will be atomically increased (or decreased) according to the given addend value:
	To increase the value, specify a positive addend
	To decrease the value, specify a negative addend

The function returns the result of the operation.
The entry must already exist.

func (DirectIntegerEntry) Get

func (e DirectIntegerEntry) Get() (int64, error)

Get returns the value of a signed 64-bit integer

func (DirectIntegerEntry) Put

func (e DirectIntegerEntry) Put(content int64, expiry time.Time) error

Put creates a new signed 64-bit integer.

Atomically creates an entry of the given alias and sets it to a cross-platform signed 64-bit integer.
If the entry already exists, the function returns an error.

You can specify an expiry time or use NeverExpires if you don’t want the entry to expire.
If you want to create or update an entry use Update.

The value will be correctly translated independently of the endianness of the client’s platform.

func (DirectIntegerEntry) Update

func (e DirectIntegerEntry) Update(newContent int64, expiry time.Time) error

Update creates or updates a signed 64-bit integer.

Atomically updates an entry of the given alias to the provided value.
If the entry doesn’t exist, it will be created.

You can specify an expiry time or use NeverExpires if you don’t want the entry to expire.

type Encryption

type Encryption C.qdb_encryption_t

Encryption : encryption option

const (
	EncryptNone Encryption = C.qdb_crypt_none
	EncryptAES  Encryption = C.qdb_crypt_aes_gcm_256
)

Encryption values:

EncryptNone : No encryption.
EncryptAES : Uses aes gcm 256 encryption.

type Endpoint

type Endpoint struct {
	Address string
	Port    int64
}

Endpoint : A structure representing a qdb url endpoint

func (Endpoint) URI

func (t Endpoint) URI() string

URI : Returns a formatted URI of the endpoint

type Entry

type Entry struct {
	HandleType
	// contains filtered or unexported fields
}

Entry : a base type for composition; it cannot be constructed directly

func (Entry) Alias

func (e Entry) Alias() string

Alias : Return an alias string of the object

Example
SetLogFile(ExamplesLogFilePath)
h := MustSetupHandle(insecureURI, 120*time.Second)
defer h.Close()

blob1 := h.Blob("BLOB_1")
blob1.Put([]byte("blob 1 content"), NeverExpires())
defer blob1.Remove()
blob2 := h.Blob("BLOB_2")
blob2.Put([]byte("blob 2 content"), NeverExpires())
defer blob2.Remove()

fmt.Println("Alias blob 1:", blob1.Alias())
fmt.Println("Alias blob 2:", blob2.Alias())

tags1 := []string{"tag blob 1", "tag both blob"}
blob1.AttachTags(tags1)
defer blob1.DetachTags(tags1)
tags2 := []string{"tag blob 2", "tag both blob"}
blob2.AttachTags(tags2)
defer blob2.DetachTags(tags2)

resultTagBlob1, _ := blob1.GetTagged("tag blob 1")
fmt.Println("Tagged with 'tag blob 1':", resultTagBlob1)
resultTagBlob2, _ := blob1.GetTagged("tag blob 2")
fmt.Println("Tagged with 'tag blob 2':", resultTagBlob2)
resultTagBoth, _ := blob1.GetTagged("tag both blob")
fmt.Println("Tagged with 'tag both blob':", resultTagBoth)
Output:

Alias blob 1: BLOB_1
Alias blob 2: BLOB_2
Tagged with 'tag blob 1': [BLOB_1]
Tagged with 'tag blob 2': [BLOB_2]
Tagged with 'tag both blob': [BLOB_1 BLOB_2]

func (Entry) AttachTag

func (e Entry) AttachTag(tag string) error

AttachTag : Adds a tag entry.

Tagging an entry enables you to search for entries based on their tags. Tags scale across nodes.
The entry must exist.
The tag may or may not exist.

func (Entry) AttachTags

func (e Entry) AttachTags(tags []string) error

AttachTags : Adds a collection of tags to a single entry.

Tagging an entry enables you to search for entries based on their tags. Tags scale across nodes.
The function will ignore existing tags.
The entry must exist.
The tag may or may not exist.

func (Entry) DetachTag

func (e Entry) DetachTag(tag string) error

DetachTag : Removes a tag from an entry.

Tagging an entry enables you to search for entries based on their tags. Tags scale across nodes.
The entry must exist.
The tag must exist.

func (Entry) DetachTags

func (e Entry) DetachTags(tags []string) error

DetachTags : Removes a collection of tags from a single entry.

Tagging an entry enables you to search for entries based on their tags. Tags scale across nodes.
The entry must exist.
The tags must exist.

func (Entry) ExpiresAt

func (e Entry) ExpiresAt(expiry time.Time) error

ExpiresAt : Sets the absolute expiration time of an entry.

Blobs and integers can have an expiration time and will be automatically removed by the cluster when they expire.

The absolute expiration time is expressed relative to the Unix epoch, that is, in milliseconds since 1 January 1970, 00:00:00 UTC.
To use a relative expiration time (that is, expiration relative to the time of the call), use ExpiresFromNow.

To remove the expiration time of an entry, specify the value NeverExpires as the expiry parameter.
Values in the past are refused, but the cluster will have a certain tolerance to account for clock skews.

func (Entry) ExpiresFromNow

func (e Entry) ExpiresFromNow(expiry time.Duration) error

ExpiresFromNow : Sets the expiration time of an entry, relative to the current time of the client.

Blobs and integers can have an expiration time and will automatically be removed by the cluster when they expire.

The expiration is relative to the current time of the machine.
To remove the expiration time of an entry or to use an absolute expiration time use ExpiresAt.

func (Entry) GetLocation

func (e Entry) GetLocation() (NodeLocation, error)

GetLocation : Returns the primary node of an entry.

The exact location of an entry should be assumed random and users should not bother about its location as the API will transparently locate the best node for the requested operation.
This function is intended for higher level APIs that need to optimize transfers and potentially push computation close to the data.

func (Entry) GetMetadata

func (e Entry) GetMetadata() (Metadata, error)

GetMetadata : Gets the meta-information about an entry, if it exists.

func (Entry) GetTagged

func (e Entry) GetTagged(tag string) ([]string, error)

GetTagged : Retrieves all entries that have the specified tag.

Tagging an entry enables you to search for entries based on their tags. Tags scale across nodes.
The tag must exist.
The complexity of this function is constant.

func (Entry) GetTags

func (e Entry) GetTags() ([]string, error)

GetTags : Retrieves all the tags of an entry.

Tagging an entry enables you to search for entries based on their tags. Tags scale across nodes.
The entry must exist.

func (Entry) HasTag

func (e Entry) HasTag(tag string) error

HasTag : Tests if an entry has the requested tag.

Tagging an entry enables you to search for entries based on their tags. Tags scale across nodes.
The entry must exist.

func (Entry) Remove

func (e Entry) Remove() error

Remove : Removes an entry from the cluster, regardless of its type.

This call will remove the entry, whether it is a blob, integer, deque, or stream.
It will properly untag the entry.
If the entry spans multiple entries or nodes (deques and streams), all blocks will be properly removed.

The call is ACID, regardless of the type of the entry, and a transaction will be created if need be.

type EntryType

type EntryType C.qdb_entry_type_t

EntryType : An enumeration representing possible entries type.

const (
	EntryUninitialized EntryType = C.qdb_entry_uninitialized
	EntryBlob          EntryType = C.qdb_entry_blob
	EntryInteger       EntryType = C.qdb_entry_integer
	EntryHSet          EntryType = C.qdb_entry_hset
	EntryTag           EntryType = C.qdb_entry_tag
	EntryDeque         EntryType = C.qdb_entry_deque
	EntryStream        EntryType = C.qdb_entry_stream
	EntryTS            EntryType = C.qdb_entry_ts
)

EntryType Values

EntryUninitialized : Uninitialized value.
EntryBlob : A binary large object (blob).
EntryInteger : A signed 64-bit integer.
EntryHSet : A distributed hash set.
EntryTag : A tag.
EntryDeque : A distributed double-entry queue (deque).
EntryTS : A distributed time series.
EntryStream : A distributed binary stream.

type ErrorType

type ErrorType C.qdb_error_t

ErrorType wraps the C API's qdb_error_t

const (
	Success                      ErrorType = C.qdb_e_ok
	Created                      ErrorType = C.qdb_e_ok_created
	ErrUninitialized             ErrorType = C.qdb_e_uninitialized
	ErrAliasNotFound             ErrorType = C.qdb_e_alias_not_found
	ErrAliasAlreadyExists        ErrorType = C.qdb_e_alias_already_exists
	ErrOutOfBounds               ErrorType = C.qdb_e_out_of_bounds
	ErrSkipped                   ErrorType = C.qdb_e_skipped
	ErrIncompatibleType          ErrorType = C.qdb_e_incompatible_type
	ErrContainerEmpty            ErrorType = C.qdb_e_container_empty
	ErrContainerFull             ErrorType = C.qdb_e_container_full
	ErrElementNotFound           ErrorType = C.qdb_e_element_not_found
	ErrElementAlreadyExists      ErrorType = C.qdb_e_element_already_exists
	ErrOverflow                  ErrorType = C.qdb_e_overflow
	ErrUnderflow                 ErrorType = C.qdb_e_underflow
	ErrTagAlreadySet             ErrorType = C.qdb_e_tag_already_set
	ErrTagNotSet                 ErrorType = C.qdb_e_tag_not_set
	ErrTimeout                   ErrorType = C.qdb_e_timeout
	ErrConnectionRefused         ErrorType = C.qdb_e_connection_refused
	ErrConnectionReset           ErrorType = C.qdb_e_connection_reset
	ErrUnstableCluster           ErrorType = C.qdb_e_unstable_cluster
	ErrTryAgain                  ErrorType = C.qdb_e_try_again
	ErrConflict                  ErrorType = C.qdb_e_conflict
	ErrNotConnected              ErrorType = C.qdb_e_not_connected
	ErrResourceLocked            ErrorType = C.qdb_e_resource_locked
	ErrSystemRemote              ErrorType = C.qdb_e_system_remote
	ErrSystemLocal               ErrorType = C.qdb_e_system_local
	ErrInternalRemote            ErrorType = C.qdb_e_internal_remote
	ErrInternalLocal             ErrorType = C.qdb_e_internal_local
	ErrNoMemoryRemote            ErrorType = C.qdb_e_no_memory_remote
	ErrNoMemoryLocal             ErrorType = C.qdb_e_no_memory_local
	ErrInvalidProtocol           ErrorType = C.qdb_e_invalid_protocol
	ErrHostNotFound              ErrorType = C.qdb_e_host_not_found
	ErrBufferTooSmall            ErrorType = C.qdb_e_buffer_too_small
	ErrNotImplemented            ErrorType = C.qdb_e_not_implemented
	ErrInvalidVersion            ErrorType = C.qdb_e_invalid_version
	ErrInvalidArgument           ErrorType = C.qdb_e_invalid_argument
	ErrInvalidHandle             ErrorType = C.qdb_e_invalid_handle
	ErrReservedAlias             ErrorType = C.qdb_e_reserved_alias
	ErrUnmatchedContent          ErrorType = C.qdb_e_unmatched_content
	ErrInvalidIterator           ErrorType = C.qdb_e_invalid_iterator
	ErrEntryTooLarge             ErrorType = C.qdb_e_entry_too_large
	ErrTransactionPartialFailure ErrorType = C.qdb_e_transaction_partial_failure
	ErrOperationDisabled         ErrorType = C.qdb_e_operation_disabled
	ErrOperationNotPermitted     ErrorType = C.qdb_e_operation_not_permitted
	ErrIteratorEnd               ErrorType = C.qdb_e_iterator_end
	ErrInvalidReply              ErrorType = C.qdb_e_invalid_reply
	ErrNoSpaceLeft               ErrorType = C.qdb_e_no_space_left
	ErrQuotaExceeded             ErrorType = C.qdb_e_quota_exceeded
	ErrAliasTooLong              ErrorType = C.qdb_e_alias_too_long
	ErrClockSkew                 ErrorType = C.qdb_e_clock_skew
	ErrAccessDenied              ErrorType = C.qdb_e_access_denied
	ErrLoginFailed               ErrorType = C.qdb_e_login_failed
	ErrColumnNotFound            ErrorType = C.qdb_e_column_not_found
	ErrQueryTooComplex           ErrorType = C.qdb_e_query_too_complex
	ErrInvalidCryptoKey          ErrorType = C.qdb_e_invalid_crypto_key
	ErrInvalidQuery              ErrorType = C.qdb_e_invalid_query
	ErrInvalidRegex              ErrorType = C.qdb_e_invalid_regex
	ErrUnknownUser               ErrorType = C.qdb_e_unknown_user
	ErrInterrupted               ErrorType = C.qdb_e_interrupted
	ErrNetworkInbufTooSmall      ErrorType = C.qdb_e_network_inbuf_too_small
	ErrNetworkError              ErrorType = C.qdb_e_network_error
	ErrDataCorruption            ErrorType = C.qdb_e_data_corruption
)

Success : Success.
Created : Success. A new entry has been created.
ErrUninitialized : Uninitialized error.
ErrAliasNotFound : Entry alias/key was not found.
ErrAliasAlreadyExists : Entry alias/key already exists.
ErrOutOfBounds : Index out of bounds.
ErrSkipped : Skipped operation. Used in batches and transactions.
ErrIncompatibleType : Entry or column is incompatible with the operation.
ErrContainerEmpty : Container is empty.
ErrContainerFull : Container is full.
ErrElementNotFound : Element was not found.
ErrElementAlreadyExists : Element already exists.
ErrOverflow : Arithmetic operation overflows.
ErrUnderflow : Arithmetic operation underflows.
ErrTagAlreadySet : Tag is already set.
ErrTagNotSet : Tag is not set.
ErrTimeout : Operation timed out.
ErrConnectionRefused : Connection was refused.
ErrConnectionReset : Connection was reset.
ErrUnstableCluster : Cluster is unstable.
ErrTryAgain : Please retry.
ErrConflict : There is another ongoing conflicting operation.
ErrNotConnected : Handle is not connected.
ErrResourceLocked : Resource is locked.
ErrSystemRemote : System error on remote node (server-side). Please check errno or GetLastError() for actual error.
ErrSystemLocal : System error on local system (client-side). Please check errno or GetLastError() for actual error.
ErrInternalRemote : Internal error on remote node (server-side).
ErrInternalLocal : Internal error on local system (client-side).
ErrNoMemoryRemote : No memory on remote node (server-side).
ErrNoMemoryLocal : No memory on local system (client-side).
ErrInvalidProtocol : Protocol is invalid.
ErrHostNotFound : Host was not found.
ErrBufferTooSmall : Buffer is too small.
ErrNotImplemented : Operation is not implemented.
ErrInvalidVersion : Version is invalid.
ErrInvalidArgument : Argument is invalid.
ErrInvalidHandle : Handle is invalid.
ErrReservedAlias : Alias/key is reserved.
ErrUnmatchedContent : Content did not match.
ErrInvalidIterator : Iterator is invalid.
ErrEntryTooLarge : Entry is too large.
ErrTransactionPartialFailure : Transaction failed partially.
ErrOperationDisabled : Operation has not been enabled in cluster configuration.
ErrOperationNotPermitted : Operation is not permitted.
ErrIteratorEnd : Iterator reached the end.
ErrInvalidReply : Cluster sent an invalid reply.
ErrNoSpaceLeft : No more space on disk.
ErrQuotaExceeded : Disk space quota has been reached.
ErrAliasTooLong : Alias is too long.
ErrClockSkew : Cluster nodes have important clock differences.
ErrAccessDenied : Access is denied.
ErrLoginFailed : Login failed.
ErrColumnNotFound : Column was not found.
ErrQueryTooComplex : Find is too complex.
ErrInvalidCryptoKey : Security key is invalid.
ErrInvalidQuery : Query is invalid.
ErrInvalidRegex : Regular expression is invalid.
ErrUnknownUser : Unknown user.
ErrInterrupted : Operation has been interrupted.
ErrNetworkInbufTooSmall : Network input buffer is too small to complete the operation.
ErrNetworkError : Network error.
ErrDataCorruption : Data corruption has been detected.

func (ErrorType) Error

func (e ErrorType) Error() string

type Find

type Find struct {
	HandleType
	// contains filtered or unexported fields
}

Find : a builder type used to execute a query.

Retrieves all entries’ aliases that match the specified query. For the complete grammar, please refer to the documentation.
Queries are transactional.
The complexity of this function depends on the complexity of the query.

func (Find) Execute

func (q Find) Execute() ([]string, error)

Execute : Execute the current query

func (Find) ExecuteString

func (q Find) ExecuteString(query string) ([]string, error)

ExecuteString : Execute a string query immediately

func (*Find) NotTag

func (q *Find) NotTag(t string) *Find

NotTag : Adds a tag to exclude from the current query results

func (*Find) Tag

func (q *Find) Tag(t string) *Find

Tag : Adds a tag to include into the current query results

func (*Find) Type

func (q *Find) Type(t string) *Find

Type : Restrict the query results to a particular type

type HandleType

type HandleType struct {
	// contains filtered or unexported fields
}

HandleType : An opaque handle to internal API-allocated structures needed for maintaining connection to a cluster.

Example
var h HandleType
h.Open(ProtocolTCP)
Output:

func MustSetupHandle

func MustSetupHandle(clusterURI string, timeout time.Duration) HandleType

MustSetupHandle : Setup a handle, panic on error

The handle is already opened with tcp protocol
The handle is already connected with the clusterURI string


func MustSetupSecuredHandle

func MustSetupSecuredHandle(clusterURI, clusterPublicKeyFile, userCredentialFile string, timeout time.Duration, encryption Encryption) HandleType

MustSetupSecuredHandle : Set up a secured handle, panicking on error

The handle is already opened with the TCP protocol
The handle is already secured with the cluster public key and the user credential files provided
(Note: the filenames are needed, not the contents of the files)
The handle is already connected to the cluster at clusterURI

func NewHandle

func NewHandle() (HandleType, error)

NewHandle : Create a new handle, return error if needed

The handle is already opened (not connected) with tcp protocol

func SetupHandle

func SetupHandle(clusterURI string, timeout time.Duration) (HandleType, error)

SetupHandle : Set up a handle, returning an error if needed

The handle is already opened with the TCP protocol
The handle is already connected to the cluster at clusterURI

func SetupSecuredHandle

func SetupSecuredHandle(clusterURI, clusterPublicKeyFile, userCredentialFile string, timeout time.Duration, encryption Encryption) (HandleType, error)

SetupSecuredHandle : Set up a secured handle, returning an error if needed

The handle is already opened with the TCP protocol
The handle is already secured with the cluster public key and the user credential files provided
(Note: the filenames are needed, not the contents of the files)
The handle is already connected to the cluster at clusterURI

func (HandleType) APIBuild

func (h HandleType) APIBuild() string

APIBuild : Returns a string describing the exact API build.

func (HandleType) APIVersion

func (h HandleType) APIVersion() string

APIVersion : Returns a string describing the API version.

func (HandleType) AddClusterPublicKey

func (h HandleType) AddClusterPublicKey(secret string) error

AddClusterPublicKey : add the cluster public key from a cluster config file.

func (HandleType) AddUserCredentials

func (h HandleType) AddUserCredentials(name, secret string) error

AddUserCredentials : add user credentials from a user name and secret.

func (HandleType) Blob

func (h HandleType) Blob(alias string) BlobEntry

Blob : Create a blob entry object

func (HandleType) Close

func (h HandleType) Close() error

Close : Closes a previously opened handle.

This terminates all connections and releases all internal buffers,
including buffers which may have been allocated as a result of batch operations or get operations.

func (HandleType) Cluster

func (h HandleType) Cluster() *Cluster

Cluster : Create a cluster object to execute commands on a cluster

func (HandleType) Connect

func (h HandleType) Connect(clusterURI string) error

Connect : connect a previously opened handle

Binds the client instance to a quasardb cluster and connects to at least one node within it.
Quasardb URIs take the form qdb://<address>:<port>, where <address> is an IPv4 address, an IPv6 address (surrounded with square brackets), or a domain name. It is recommended to specify multiple addresses in case the designated node is unavailable.

URI examples:
	qdb://myserver.org:2836 - Connects to myserver.org on the port 2836
	qdb://127.0.0.1:2836 - Connects to the local IPv4 loopback on the port 2836
	qdb://myserver1.org:2836,myserver2.org:2836 - Connects to myserver1.org or myserver2.org on the port 2836
	qdb://[::1]:2836 - Connects to the local IPv6 loopback on the port 2836

func (HandleType) DirectConnect

func (h HandleType) DirectConnect(nodeURI string) (DirectHandleType, error)

DirectConnect opens a connection to a node for use with the direct API

The returned direct handle must be freed with Close(). Releasing the handle has no impact on non-direct connections or other direct handles.

func (HandleType) Find

func (h HandleType) Find() *Find

Find : Create a query object to execute

func (HandleType) GetClientMaxInBufSize

func (h HandleType) GetClientMaxInBufSize() (uint, error)

GetClientMaxInBufSize : Gets the maximum incoming buffer size for all network operations of the client.

func (HandleType) GetClientMaxParallelism added in v3.13.2

func (h HandleType) GetClientMaxParallelism() (uint, error)

GetClientMaxParallelism : Gets the maximum parallelism option of the client.

func (HandleType) GetClusterMaxInBufSize

func (h HandleType) GetClusterMaxInBufSize() (uint, error)

GetClusterMaxInBufSize : Gets the maximum incoming buffer size for all network operations of the cluster.

func (HandleType) GetLastError

func (h HandleType) GetLastError() (string, error)

GetLastError : Retrieves the last error message recorded by the handle

func (HandleType) GetTagged

func (h HandleType) GetTagged(tag string) ([]string, error)

GetTagged : Retrieves all entries that have the specified tag.

Tagging an entry enables you to search for entries based on their tags. Tags scale across nodes.
The tag must exist.
The complexity of this function is constant.

func (HandleType) GetTags

func (h HandleType) GetTags(entryAlias string) ([]string, error)

GetTags : Retrieves all the tags of an entry.

Tagging an entry enables you to search for entries based on their tags. Tags scale across nodes.
The entry must exist.

func (HandleType) Integer

func (h HandleType) Integer(alias string) IntegerEntry

Integer : Create an integer entry object

func (HandleType) Node

func (h HandleType) Node(uri string) *Node

Node : Create a node object

func (HandleType) NodeStatistics deprecated

func (h HandleType) NodeStatistics(nodeID string) (Statistics, error)

NodeStatistics : Retrieve statistics for a specific node

Deprecated: Statistics will be fetched directly from the node using the new direct API

func (HandleType) Open

func (h HandleType) Open(protocol Protocol) error

Open : Creates a handle.

No connection will be established.
Not needed if you created your handle with NewHandle.

func (HandleType) PrefixCount

func (h HandleType) PrefixCount(prefix string) (uint64, error)

PrefixCount : Retrieves the count of all entries matching the provided prefix.

A prefix-based count counts all entries matching a provided prefix.

func (HandleType) PrefixGet

func (h HandleType) PrefixGet(prefix string, limit int) ([]string, error)

PrefixGet : Retrieves the list of all entries matching the provided prefix.

A prefix-based search will enable you to find all entries matching a provided prefix.
This function returns the list of aliases. It’s up to the user to query the content associated with every entry, if needed.

func (HandleType) Query

func (h HandleType) Query(query string) *Query

Query : Create a query object to execute

func (HandleType) Release

func (h HandleType) Release(buffer unsafe.Pointer)

Release : Releases an API-allocated buffer.

Failure to properly call this function may result in excessive memory usage.
Most operations that return content (e.g. batch operations, qdb_blob_get, qdb_blob_get_and_update, qdb_blob_compare_and_swap...)
allocate a buffer for the content and do not release it until you either call this function or close the handle.

The function can release any kind of buffer allocated by a quasardb API call, whether it's a single buffer, an array, or an array of buffers.

func (HandleType) SetClientMaxInBufSize

func (h HandleType) SetClientMaxInBufSize(bufSize uint) error

SetClientMaxInBufSize : Sets the maximum incoming buffer size for all network operations of the client.

Only modify this setting if you expect to receive very large answers from the server.

func (HandleType) SetClientMaxParallelism added in v3.13.2

func (h HandleType) SetClientMaxParallelism(threadCount uint) error

SetClientMaxParallelism : Sets the maximum parallelism option of the client.

func (HandleType) SetCompression

func (h HandleType) SetCompression(compressionLevel Compression) error

SetCompression : Set the compression level for all future messages emitted by the specified handle.

Regardless of this parameter, the API will be able to read whatever compression the server uses.

func (HandleType) SetEncryption

func (h HandleType) SetEncryption(encryption Encryption) error

SetEncryption : Sets the encryption method used by the handle.

func (HandleType) SetMaxCardinality

func (h HandleType) SetMaxCardinality(maxCardinality uint) error

SetMaxCardinality : Sets the maximum allowed cardinality of a quasardb query.

The default value is 10,007. The minimum allowed value is 100.

func (HandleType) SetTimeout

func (h HandleType) SetTimeout(timeout time.Duration) error

SetTimeout : Sets the timeout of all network operations.

The lower the timeout, the higher the risk of having timeout errors.
Keep in mind that the server-side timeout might be shorter.

func (HandleType) Statistics

func (h HandleType) Statistics() (map[string]Statistics, error)

Statistics : Retrieve statistics for all nodes

func (HandleType) Timeseries

func (h HandleType) Timeseries(alias string) TimeseriesEntry

Timeseries : Create a timeseries entry object

func (HandleType) TsBatch

func (h HandleType) TsBatch(cols ...TsBatchColumnInfo) (*TsBatch, error)

TsBatch : create a batch object for the specified columns

type IntegerEntry

type IntegerEntry struct {
	Entry
}

IntegerEntry : int data type

Example
SetLogFile(ExamplesLogFilePath)
h := MustSetupHandle(insecureURI, 120*time.Second)
defer h.Close()

alias := "IntAlias"
integer := h.Integer(alias)

integer.Put(int64(3), NeverExpires())
defer integer.Remove()

obtainedContent, _ := integer.Get()
fmt.Println("Get content:", obtainedContent)

newContent := int64(87)
integer.Update(newContent, NeverExpires())

obtainedContent, _ = integer.Get()
fmt.Println("Get updated content:", obtainedContent)

integer.Add(3)

obtainedContent, _ = integer.Get()
fmt.Println("Get added content:", obtainedContent)
Output:

Get content: 3
Get updated content: 87
Get added content: 90

func (IntegerEntry) Add

func (entry IntegerEntry) Add(added int64) (int64, error)

Add : Atomically increases or decreases a signed 64-bit integer.

The specified entry will be atomically increased (or decreased) according to the given addend value:
	To increase the value, specify a positive addend
	To decrease the value, specify a negative addend

The function returns the result of the operation.
The entry must already exist.

func (IntegerEntry) Get

func (entry IntegerEntry) Get() (int64, error)

Get : Atomically retrieves the value of a signed 64-bit integer.

Atomically retrieves the value of an existing 64-bit integer.

func (IntegerEntry) Put

func (entry IntegerEntry) Put(content int64, expiry time.Time) error

Put : Creates a new signed 64-bit integer.

Atomically creates an entry of the given alias and sets it to a cross-platform signed 64-bit integer.
If the entry already exists, the function returns an error.

You can specify an expiry time or use NeverExpires if you don’t want the entry to expire.
If you want to create or update an entry use Update.

The value will be correctly translated independently of the endianness of the client’s platform.

func (*IntegerEntry) Update

func (entry *IntegerEntry) Update(newContent int64, expiry time.Time) error

Update : Creates or updates a signed 64-bit integer.

Atomically updates an entry of the given alias to the provided value.
If the entry doesn’t exist, it will be created.

You can specify an expiry time or use NeverExpires if you don’t want the entry to expire.

type Metadata

type Metadata struct {
	Ref              RefID
	Type             EntryType
	Size             uint64
	ModificationTime time.Time
	ExpiryTime       time.Time
}

Metadata : A structure representing the metadata of an entry in the database.

type Node

type Node struct {
	HandleType
	// contains filtered or unexported fields
}

Node : a structure giving access to various pieces of information or actions on a node

Example
SetLogFile(ExamplesLogFilePath)
h := MustSetupHandle(insecureURI, 120*time.Second)
defer h.Close()

node := h.Node(insecureURI)

status, _ := node.Status()
fmt.Println("Status - Network.ListeningEndpoint:", status.Network.ListeningEndpoint)

config_bytes, _ := node.Config()
config, _ := gabs.ParseJSON(config_bytes)
fmt.Println("Config - Listen On:", config.Path("local.network.listen_on").Data().(string))

topology, _ := node.Topology()
fmt.Println("Topology - Successor is same as predecessor:", topology.Successor.Endpoint == topology.Predecessor.Endpoint)
Output:

Status - Network.ListeningEndpoint: 127.0.0.1:2836
Config - Listen On: 127.0.0.1:2836
Topology - Successor is same as predecessor: true

func (Node) Config

func (n Node) Config() ([]byte, error)

Config :

Returns the configuration as a byte array containing a JSON object; use a method of your choice to unmarshal it.
An example is available using the gabs library

The configuration is a JSON object, as described in the documentation.

func (Node) RawConfig

func (n Node) RawConfig() ([]byte, error)

RawConfig :

Returns the configuration of a node.

The configuration is a JSON object as a byte array, as described in the documentation.

func (Node) RawStatus

func (n Node) RawStatus() ([]byte, error)

RawStatus :

Returns the status of a node.

The status is a JSON object as a byte array and contains current information of the node state, as described in the documentation.

func (Node) RawTopology

func (n Node) RawTopology() ([]byte, error)

RawTopology :

Returns the topology of a node.

The topology is a JSON object as a byte array containing the node address, and the addresses of its successor and predecessor.

func (Node) Status

func (n Node) Status() (NodeStatus, error)

Status :

Returns the status of a node.

The status is a JSON object and contains current information of the node state, as described in the documentation.

func (Node) Topology

func (n Node) Topology() (NodeTopology, error)

Topology :

Returns the topology of a node.

The topology is a JSON object containing the node address, and the addresses of its successor and predecessor.

type NodeLocation

type NodeLocation struct {
	Address string
	Port    int16
}

NodeLocation : A structure representing the address of a quasardb node.

type NodeStatus

type NodeStatus struct {
	Memory struct {
		VM struct {
			Used  int64 `json:"used"`
			Total int64 `json:"total"`
		} `json:"vm"`
		Physmem struct {
			Used  int64 `json:"used"`
			Total int64 `json:"total"`
		} `json:"physmem"`
	} `json:"memory"`
	CPUTimes struct {
		Idle   int64 `json:"idle"`
		System int64 `json:"system"`
		User   int64 `json:"user"`
	} `json:"cpu_times"`
	DiskUsage struct {
		Free  int64 `json:"free"`
		Total int64 `json:"total"`
	} `json:"disk_usage"`
	Network struct {
		ListeningEndpoint string `json:"listening_endpoint"`
		Partitions        struct {
			Count             int `json:"count"`
			MaxSessions       int `json:"max_sessions"`
			AvailableSessions int `json:"available_sessions"`
		} `json:"partitions"`
	} `json:"network"`
	NodeID              string    `json:"node_id"`
	OperatingSystem     string    `json:"operating_system"`
	HardwareConcurrency int       `json:"hardware_concurrency"`
	Timestamp           time.Time `json:"timestamp"`
	Startup             time.Time `json:"startup"`
	EngineVersion       string    `json:"engine_version"`
	EngineBuildDate     time.Time `json:"engine_build_date"`
	Entries             struct {
		Resident struct {
			Count int `json:"count"`
			Size  int `json:"size"`
		} `json:"resident"`
		Persisted struct {
			Count int `json:"count"`
			Size  int `json:"size"`
		} `json:"persisted"`
	} `json:"entries"`
	Operations struct {
		Get struct {
			Count     int `json:"count"`
			Successes int `json:"successes"`
			Failures  int `json:"failures"`
			Pageins   int `json:"pageins"`
			Evictions int `json:"evictions"`
			InBytes   int `json:"in_bytes"`
			OutBytes  int `json:"out_bytes"`
		} `json:"get"`
		GetAndRemove struct {
			Count     int `json:"count"`
			Successes int `json:"successes"`
			Failures  int `json:"failures"`
			Pageins   int `json:"pageins"`
			Evictions int `json:"evictions"`
			InBytes   int `json:"in_bytes"`
			OutBytes  int `json:"out_bytes"`
		} `json:"get_and_remove"`
		Put struct {
			Count     int `json:"count"`
			Successes int `json:"successes"`
			Failures  int `json:"failures"`
			Pageins   int `json:"pageins"`
			Evictions int `json:"evictions"`
			InBytes   int `json:"in_bytes"`
			OutBytes  int `json:"out_bytes"`
		} `json:"put"`
		Update struct {
			Count     int `json:"count"`
			Successes int `json:"successes"`
			Failures  int `json:"failures"`
			Pageins   int `json:"pageins"`
			Evictions int `json:"evictions"`
			InBytes   int `json:"in_bytes"`
			OutBytes  int `json:"out_bytes"`
		} `json:"update"`
		GetAndUpdate struct {
			Count     int `json:"count"`
			Successes int `json:"successes"`
			Failures  int `json:"failures"`
			Pageins   int `json:"pageins"`
			Evictions int `json:"evictions"`
			InBytes   int `json:"in_bytes"`
			OutBytes  int `json:"out_bytes"`
		} `json:"get_and_update"`
		CompareAndSwap struct {
			Count     int `json:"count"`
			Successes int `json:"successes"`
			Failures  int `json:"failures"`
			Pageins   int `json:"pageins"`
			Evictions int `json:"evictions"`
			InBytes   int `json:"in_bytes"`
			OutBytes  int `json:"out_bytes"`
		} `json:"compare_and_swap"`
		Remove struct {
			Count     int `json:"count"`
			Successes int `json:"successes"`
			Failures  int `json:"failures"`
			Pageins   int `json:"pageins"`
			Evictions int `json:"evictions"`
			InBytes   int `json:"in_bytes"`
			OutBytes  int `json:"out_bytes"`
		} `json:"remove"`
		RemoveIf struct {
			Count     int `json:"count"`
			Successes int `json:"successes"`
			Failures  int `json:"failures"`
			Pageins   int `json:"pageins"`
			Evictions int `json:"evictions"`
			InBytes   int `json:"in_bytes"`
			OutBytes  int `json:"out_bytes"`
		} `json:"remove_if"`
		PurgeAll struct {
			Count     int `json:"count"`
			Successes int `json:"successes"`
			Failures  int `json:"failures"`
			Pageins   int `json:"pageins"`
			Evictions int `json:"evictions"`
			InBytes   int `json:"in_bytes"`
			OutBytes  int `json:"out_bytes"`
		} `json:"purge_all"`
	} `json:"operations"`
	Overall struct {
		Count     int `json:"count"`
		Successes int `json:"successes"`
		Failures  int `json:"failures"`
		Pageins   int `json:"pageins"`
		Evictions int `json:"evictions"`
		InBytes   int `json:"in_bytes"`
		OutBytes  int `json:"out_bytes"`
	} `json:"overall"`
}

NodeStatus : a JSON representation of the status of a node

type NodeTopology

type NodeTopology struct {
	Predecessor struct {
		Reference string `json:"reference"`
		Endpoint  string `json:"endpoint"`
	} `json:"predecessor"`
	Center struct {
		Reference string `json:"reference"`
		Endpoint  string `json:"endpoint"`
	} `json:"center"`
	Successor struct {
		Reference string `json:"reference"`
		Endpoint  string `json:"endpoint"`
	} `json:"successor"`
}

type Protocol

type Protocol C.qdb_protocol_t

Protocol : A network protocol.

const (
	ProtocolTCP Protocol = C.qdb_p_tcp
)

Protocol values:

ProtocolTCP : Uses TCP/IP to communicate with the cluster. This is currently the only supported network protocol.

type Query

type Query struct {
	HandleType
	// contains filtered or unexported fields
}

Query : query object

Example
SetLogFile(ExamplesLogFilePath)
h := MustSetupHandle(insecureURI, 120*time.Second)
defer h.Close()

var aliases []string
aliases = append(aliases, generateAlias(16))
aliases = append(aliases, generateAlias(16))

blob := h.Blob("alias_blob")
blob.Put([]byte("asd"), NeverExpires())
defer blob.Remove()
blob.AttachTag("all")
blob.AttachTag("first")

integer := h.Integer("alias_integer")
integer.Put(32, NeverExpires())
defer integer.Remove()
integer.AttachTag("all")
integer.AttachTag("second")

var obtainedAliases []string
obtainedAliases, _ = h.Find().Tag("all").Execute()
fmt.Println("Get all aliases:", obtainedAliases)

obtainedAliases, _ = h.Find().Tag("all").NotTag("second").Execute()
fmt.Println("Get only first alias:", obtainedAliases)

obtainedAliases, _ = h.Find().Tag("all").Type("int").Execute()
fmt.Println("Get only integer alias:", obtainedAliases)

obtainedAliases, _ = h.Find().Tag("unexisting_alias").Execute()
fmt.Println("Get no aliases:", obtainedAliases)

_, err := h.Find().NotTag("second").Execute()
fmt.Println("Error:", err)

_, err = h.Find().Type("int").Execute()
fmt.Println("Error:", err)
Output:

Get all aliases: [alias_blob alias_integer]
Get only first alias: [alias_blob]
Get only integer alias: [alias_integer]
Get no aliases: []
Error: query should have at least one valid tag
Error: query should have at least one valid tag

func (Query) Execute

func (q Query) Execute() (*QueryResult, error)

Execute : execute a query

type QueryPoint

type QueryPoint C.qdb_point_result_t

QueryPoint : a variadic structure holding the result type as well as the result value

func (*QueryPoint) Get

func (r *QueryPoint) Get() QueryPointResult

Get : retrieve the raw interface

func (*QueryPoint) GetBlob

func (r *QueryPoint) GetBlob() ([]byte, error)

GetBlob : retrieve a blob from the interface

func (*QueryPoint) GetCount

func (r *QueryPoint) GetCount() (int64, error)

GetCount : retrieve the count from the interface

func (*QueryPoint) GetDouble

func (r *QueryPoint) GetDouble() (float64, error)

GetDouble : retrieve a double from the interface

func (*QueryPoint) GetInt64

func (r *QueryPoint) GetInt64() (int64, error)

GetInt64 : retrieve an int64 from the interface

func (*QueryPoint) GetString

func (r *QueryPoint) GetString() (string, error)

GetString : retrieve a string from the interface

func (*QueryPoint) GetTimestamp

func (r *QueryPoint) GetTimestamp() (time.Time, error)

GetTimestamp : retrieve a timestamp from the interface

type QueryPointResult

type QueryPointResult struct {
	// contains filtered or unexported fields
}

QueryPointResult : a query result point

func (QueryPointResult) Type

func (r QueryPointResult) Type() QueryResultValueType

Type : gives the type of the query point result

func (QueryPointResult) Value

func (r QueryPointResult) Value() interface{}

Value : gives the interface{} value of the query point result

type QueryResult

type QueryResult struct {
	// contains filtered or unexported fields
}

QueryResult : a query result

func (QueryResult) Columns

func (r QueryResult) Columns(row *QueryPoint) QueryRow

Columns : create columns from a row

func (QueryResult) ColumnsCount

func (r QueryResult) ColumnsCount() int64

ColumnsCount : get the number of columns of each row

func (QueryResult) ColumnsNames

func (r QueryResult) ColumnsNames() []string

ColumnsNames : get the column names of each row

func (QueryResult) ErrorMessage

func (r QueryResult) ErrorMessage() string

ErrorMessage : the error message in case of failure

func (QueryResult) RowCount

func (r QueryResult) RowCount() int64

RowCount : the number of returned rows

func (QueryResult) Rows

func (r QueryResult) Rows() QueryRows

Rows : get rows of a query table result

func (QueryResult) ScannedPoints

func (r QueryResult) ScannedPoints() int64

ScannedPoints : number of points scanned

The actual number of scanned points may be greater

type QueryResultValueType

type QueryResultValueType int64

QueryResultValueType : an enum of possible query point result types

QueryResultNone : query result value none
QueryResultDouble : query result value double
QueryResultBlob : query result value blob
QueryResultInt64 : query result value int64
QueryResultString : query result value string
QueryResultSymbol : query result value symbol
QueryResultTimestamp : query result value timestamp
QueryResultCount : query result value count

type QueryRow

type QueryRow []QueryPoint

QueryRow : query result table row

type QueryRows

type QueryRows []*QueryPoint

QueryRows : query result table rows

type RefID

type RefID C.qdb_id_t

RefID : Unique identifier

type Statistics

type Statistics struct {
	CPU struct {
		Idle   int64 `json:"idle"`
		System int64 `json:"system"`
		User   int64 `json:"user"`
	} `json:"cpu"`
	Disk struct {
		BytesFree  int64  `json:"bytes_free"`
		BytesTotal int64  `json:"bytes_total"`
		Path       string `json:"path"`
	} `json:"disk"`
	EngineBuildDate     string `json:"engine_build_date"`
	EngineVersion       string `json:"engine_version"`
	HardwareConcurrency int64  `json:"hardware_concurrency"`
	Memory              struct {
		BytesResident int64 `json:"bytes_resident_size"`
		ResidentCount int64 `json:"resident_count"`
		Physmem       struct {
			Used  int64 `json:"bytes_used"`
			Total int64 `json:"bytes_total"`
		} `json:"physmem"`
		VM struct {
			Used  int64 `json:"bytes_used"`
			Total int64 `json:"bytes_total"`
		} `json:"vm"`
	} `json:"memory"`
	Network struct {
		CurrentUsersCount int64 `json:"current_users_count"`
		Sessions          struct {
			AvailableCount   int64 `json:"available_count"`
			UnavailableCount int64 `json:"unavailable_count"`
			MaxCount         int64 `json:"max_count"`
		} `json:"sessions"`
	} `json:"network"`
	PartitionsCount int64  `json:"partitions_count"`
	NodeID          string `json:"node_id"`
	OperatingSystem string `json:"operating_system"`
	Persistence     struct {
		BytesCapacity int64 `json:"bytes_capacity"`
		BytesRead     int64 `json:"bytes_read"`
		BytesUtilized int64 `json:"bytes_utilized"`
		BytesWritten  int64 `json:"bytes_written"`
		EntriesCount  int64 `json:"entries_count"`
	} `json:"persistence"`
	Requests struct {
		BytesOut       int64 `json:"bytes_out"`
		SuccessesCount int64 `json:"successes_count"`
		TotalCount     int64 `json:"total_count"`
	} `json:"requests"`
	Startup int64 `json:"startup"`
}

Statistics : a JSON-adaptable structure with node information

type TimeseriesEntry

type TimeseriesEntry struct {
	Entry
}

TimeseriesEntry : timeseries entry data type

Example
SetLogFile(ExamplesLogFilePath)
h := MustSetupHandle(insecureURI, 120*time.Second)
defer h.Close()

timeseries := h.Timeseries("alias")

fmt.Println("timeseries:", timeseries.Alias())
Output:

timeseries: alias

func (TimeseriesEntry) BlobColumn

func (entry TimeseriesEntry) BlobColumn(columnName string) TsBlobColumn

BlobColumn : create a column object

Example
SetLogFile(ExamplesLogFilePath)
h, timeseries := MustCreateTimeseriesWithData("ExampleTimeseriesEntry_BlobColumn")
defer h.Close()

column := timeseries.BlobColumn("series_column_blob")
fmt.Println("column:", column.Name())
Output:

column: series_column_blob

func (TimeseriesEntry) Bulk

func (entry TimeseriesEntry) Bulk(cols ...TsColumnInfo) (*TsBulk, error)

Bulk : create a bulk object for the specified columns

If no columns are specified, the server-side registered columns are used
Example
SetLogFile(ExamplesLogFilePath)
h, timeseries := MustCreateTimeseriesWithColumns("ExampleTimeseriesEntry_Bulk")
defer h.Close()

bulk, err := timeseries.Bulk(NewTsColumnInfo("series_column_blob", TsColumnBlob), NewTsColumnInfo("series_column_double", TsColumnDouble))
if err != nil {
	return // handle error
}
// Don't forget to release
defer bulk.Release()
fmt.Println("RowCount:", bulk.RowCount())
Output:

RowCount: 0

func (TimeseriesEntry) Columns

func (entry TimeseriesEntry) Columns() ([]TsBlobColumn, []TsDoubleColumn, []TsInt64Column, []TsStringColumn, []TsTimestampColumn, error)

Columns : return the current columns

Example
SetLogFile(ExamplesLogFilePath)
h, timeseries := MustCreateTimeseriesWithColumns("ExampleTimeseriesEntry_Columns")
defer h.Close()

blobColumns, doubleColumns, int64Columns, stringColumns, timestampColumns, err := timeseries.Columns()
if err != nil {
	// handle error
}
for _, col := range blobColumns {
	fmt.Println("column:", col.Name())
	// do something like Insert, GetRanges with a blob column
}
for _, col := range doubleColumns {
	fmt.Println("column:", col.Name())
	// do something like Insert, GetRanges with a double column
}
for _, col := range int64Columns {
	fmt.Println("column:", col.Name())
	// do something like Insert, GetRanges with a int64 column
}
for _, col := range stringColumns {
	fmt.Println("column:", col.Name())
	// do something like Insert, GetRanges with a string column
}
for _, col := range timestampColumns {
	fmt.Println("column:", col.Name())
	// do something like Insert, GetRanges with a timestamp column
}
Output:

column: series_column_blob
column: series_column_double
column: series_column_int64
column: series_column_string
column: series_column_symbol
column: series_column_timestamp

func (TimeseriesEntry) ColumnsInfo

func (entry TimeseriesEntry) ColumnsInfo() ([]TsColumnInfo, error)

ColumnsInfo : return the current columns information

Example
SetLogFile(ExamplesLogFilePath)
h, timeseries := MustCreateTimeseriesWithColumns("ExampleTimeseriesEntry_ColumnsInfo")
defer h.Close()

columns, err := timeseries.ColumnsInfo()
if err != nil {
	// handle error
}
for _, col := range columns {
	fmt.Println("column:", col.Name())
}
Output:

column: series_column_blob
column: series_column_double
column: series_column_int64
column: series_column_string
column: series_column_timestamp
column: series_column_symbol

func (TimeseriesEntry) Create

func (entry TimeseriesEntry) Create(shardSize time.Duration, cols ...TsColumnInfo) error

Create : create a new timeseries

The first parameter is the shard size, i.e. the time span covered by each shard
Ex: shardSize := 24 * time.Hour
Example
SetLogFile(ExamplesLogFilePath)
h, timeseries := MustCreateTimeseries("ExampleTimeseriesEntry_Create")
defer h.Close()

// duration, columns...
timeseries.Create(24*time.Hour, NewTsColumnInfo("series_column_blob", TsColumnBlob), NewTsColumnInfo("series_column_double", TsColumnDouble))
Output:

func (TimeseriesEntry) DoubleColumn

func (entry TimeseriesEntry) DoubleColumn(columnName string) TsDoubleColumn

DoubleColumn : create a column object

Example
SetLogFile(ExamplesLogFilePath)
h, timeseries := MustCreateTimeseriesWithColumns("ExampleTimeseriesEntry_DoubleColumn")
defer h.Close()

column := timeseries.DoubleColumn("series_column_double")
fmt.Println("column:", column.Name())
Output:

column: series_column_double

func (TimeseriesEntry) InsertColumns

func (entry TimeseriesEntry) InsertColumns(cols ...TsColumnInfo) error

InsertColumns : insert columns into an existing timeseries

Example
SetLogFile(ExamplesLogFilePath)
h, timeseries := MustCreateTimeseriesWithColumns("ExampleTimeseriesEntry_InsertColumns")
defer h.Close()

err := timeseries.InsertColumns(NewTsColumnInfo("series_column_blob_2", TsColumnBlob), NewTsColumnInfo("series_column_double_2", TsColumnDouble))
if err != nil {
	// handle error
}
columns, err := timeseries.ColumnsInfo()
if err != nil {
	// handle error
}
for _, col := range columns {
	fmt.Println("column:", col.Name())
}
Output:

column: series_column_blob
column: series_column_double
column: series_column_int64
column: series_column_string
column: series_column_timestamp
column: series_column_symbol
column: series_column_blob_2
column: series_column_double_2

func (TimeseriesEntry) Int64Column

func (entry TimeseriesEntry) Int64Column(columnName string) TsInt64Column

Int64Column : create a column object

Example
SetLogFile(ExamplesLogFilePath)
h, timeseries := MustCreateTimeseriesWithColumns("ExampleTimeseriesEntry_Int64Column")
defer h.Close()

column := timeseries.Int64Column("series_column_int64")
fmt.Println("column:", column.Name())
Output:

column: series_column_int64

func (TimeseriesEntry) StringColumn

func (entry TimeseriesEntry) StringColumn(columnName string) TsStringColumn

StringColumn : create a column object

func (TimeseriesEntry) SymbolColumn added in v3.13.0

func (entry TimeseriesEntry) SymbolColumn(columnName string, symtableName string) TsStringColumn

SymbolColumn : create a symbol column object backed by the given symbol table

func (TimeseriesEntry) TimestampColumn

func (entry TimeseriesEntry) TimestampColumn(columnName string) TsTimestampColumn

TimestampColumn : create a column object

Example
SetLogFile(ExamplesLogFilePath)
h, timeseries := MustCreateTimeseriesWithColumns("ExampleTimeseriesEntry_TimestampColumn")
defer h.Close()

column := timeseries.TimestampColumn("series_column_timestamp")
fmt.Println("column:", column.Name())
Output:

column: series_column_timestamp

type TsAggregationType

type TsAggregationType C.qdb_ts_aggregation_type_t

TsAggregationType typedef of C.qdb_ts_aggregation_type

type TsBatch

type TsBatch struct {
	// contains filtered or unexported fields
}

TsBatch : A structure that allows appending data to a timeseries

func (*TsBatch) ExtraColumns

func (t *TsBatch) ExtraColumns(cols ...TsBatchColumnInfo) error

ExtraColumns : Appends columns to the current batch table

func (*TsBatch) Push

func (t *TsBatch) Push() error

Push : Push the inserted data

func (*TsBatch) PushFast

func (t *TsBatch) PushFast() error

PushFast : Fast, in-place batch push that is efficient when doing lots of small, incremental pushes.

func (*TsBatch) Release

func (t *TsBatch) Release()

Release : release the memory of the batch table

func (*TsBatch) RowSetBlob

func (t *TsBatch) RowSetBlob(index int64, content []byte) error

RowSetBlob : Set blob at specified index in current row

func (*TsBatch) RowSetBlobNoCopy

func (t *TsBatch) RowSetBlobNoCopy(index int64, content []byte) error

RowSetBlobNoCopy : Set blob at specified index in current row without copying it

func (*TsBatch) RowSetDouble

func (t *TsBatch) RowSetDouble(index int64, value float64) error

RowSetDouble : Set double at specified index in current row

func (*TsBatch) RowSetInt64

func (t *TsBatch) RowSetInt64(index, value int64) error

RowSetInt64 : Set int64 at specified index in current row

func (*TsBatch) RowSetString

func (t *TsBatch) RowSetString(index int64, content string) error

RowSetString : Set string at specified index in current row

func (*TsBatch) RowSetStringNoCopy

func (t *TsBatch) RowSetStringNoCopy(index int64, content string) error

RowSetStringNoCopy : Set string at specified index in current row without copying it

func (*TsBatch) RowSetTimestamp

func (t *TsBatch) RowSetTimestamp(index int64, value time.Time) error

RowSetTimestamp : Set timestamp at specified index in current row

func (*TsBatch) StartRow

func (t *TsBatch) StartRow(timestamp time.Time) error

StartRow : Start a new row
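The batch methods above compose into a simple append loop: describe the columns, start a row at a timestamp, set each column by index, then push. A minimal sketch, assuming an open handle `h`, an existing timeseries `"ts1"` with the blob and double columns used throughout these examples, and the handle's `TsBatch` constructor from the wider API:

```go
// Describe which columns the batch will write to, with a size hint.
columns := []TsBatchColumnInfo{
	NewTsBatchColumnInfo("ts1", "series_column_blob", 10),
	NewTsBatchColumnInfo("ts1", "series_column_double", 10),
}
batch, err := h.TsBatch(columns...)
if err != nil {
	// handle error
}
defer batch.Release()

// Fill one row: start it at a timestamp, then set each column by index.
if err := batch.StartRow(time.Now()); err != nil {
	// handle error
}
batch.RowSetBlob(0, []byte("content"))
batch.RowSetDouble(1, 3.2)

// Send the accumulated rows to the cluster.
if err := batch.Push(); err != nil {
	// handle error
}
```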

type TsBatchColumnInfo

type TsBatchColumnInfo struct {
	Timeseries       string
	Column           string
	ElementCountHint int64
}

TsBatchColumnInfo : Represents one column in a timeseries. Preallocate the underlying structure with the ElementCountHint

func NewTsBatchColumnInfo

func NewTsBatchColumnInfo(timeseries string, column string, hint int64) TsBatchColumnInfo

NewTsBatchColumnInfo : Creates a new TsBatchColumnInfo

type TsBlobAggregation

type TsBlobAggregation struct {
	// contains filtered or unexported fields
}

TsBlobAggregation : Aggregation of blob type

func NewBlobAggregation

func NewBlobAggregation(kind TsAggregationType, rng TsRange) *TsBlobAggregation

NewBlobAggregation : Create new timeseries blob aggregation

func (TsBlobAggregation) Count

func (t TsBlobAggregation) Count() int64

Count : returns the number of points aggregated into the result

func (TsBlobAggregation) Range

func (t TsBlobAggregation) Range() TsRange

Range : returns the range of the aggregation

func (TsBlobAggregation) Result

func (t TsBlobAggregation) Result() TsBlobPoint

Result : result of the aggregation

func (TsBlobAggregation) Type

func (t TsBlobAggregation) Type() TsAggregationType

Type : returns the type of the aggregation

type TsBlobColumn

type TsBlobColumn struct {
	// contains filtered or unexported fields
}

TsBlobColumn : a time series blob column

func (TsBlobColumn) Aggregate

func (column TsBlobColumn) Aggregate(aggs ...*TsBlobAggregation) ([]TsBlobAggregation, error)

Aggregate : Aggregate a sub-part of the time series.

It is an error to call this function on a non-existing time series.
Example
SetLogFile(ExamplesLogFilePath)
h, timeseries := MustCreateTimeseriesWithData("ExampleTsBlobColumn_Aggregate")
defer h.Close()

column := timeseries.BlobColumn("series_column_blob")

r := NewRange(time.Unix(0, 0), time.Unix(40, 5))
aggFirst := NewBlobAggregation(AggFirst, r)
results, err := column.Aggregate(aggFirst)
if err != nil {
	// handle error
}
fmt.Println("first:", string(results[0].Result().Content()))
Output:

first: content_0

func (TsBlobColumn) EraseRanges

func (column TsBlobColumn) EraseRanges(rgs ...TsRange) (uint64, error)

EraseRanges : erase all points in the specified ranges

Example
SetLogFile(ExamplesLogFilePath)
h, timeseries := MustCreateTimeseriesWithData("ExampleTsBlobColumn_EraseRanges")
defer h.Close()

column := timeseries.BlobColumn("series_column_blob")

r := NewRange(time.Unix(0, 0), time.Unix(40, 5))
numberOfErasedValues, err := column.EraseRanges(r)
if err != nil {
	// handle error
}
fmt.Println("Number of erased values:", numberOfErasedValues)
Output:

Number of erased values: 4

func (TsBlobColumn) GetRanges

func (column TsBlobColumn) GetRanges(rgs ...TsRange) ([]TsBlobPoint, error)

GetRanges : Retrieves blobs in the specified range of the time series column.

It is an error to call this function on a non-existing time series.
Example
SetLogFile(ExamplesLogFilePath)
h, timeseries := MustCreateTimeseriesWithData("ExampleTsBlobColumn_GetRanges")
defer h.Close()

column := timeseries.BlobColumn("series_column_blob")

r := NewRange(time.Unix(0, 0), time.Unix(40, 5))
blobPoints, err := column.GetRanges(r)
if err != nil {
	// handle error
}
for _, point := range blobPoints {
	fmt.Println("timestamp:", point.Timestamp().UTC(), "- value:", string(point.Content()))
}
Output:

timestamp: 1970-01-01 00:00:10 +0000 UTC - value: content_0
timestamp: 1970-01-01 00:00:20 +0000 UTC - value: content_1
timestamp: 1970-01-01 00:00:30 +0000 UTC - value: content_2
timestamp: 1970-01-01 00:00:40 +0000 UTC - value: content_3

func (TsBlobColumn) Insert

func (column TsBlobColumn) Insert(points ...TsBlobPoint) error

Insert blob points into a timeseries

Example
SetLogFile(ExamplesLogFilePath)
h, timeseries := MustCreateTimeseriesWithColumns("ExampleTsBlobColumn_Insert")
defer h.Close()

column := timeseries.BlobColumn("series_column_blob")

// Insert only one point:
column.Insert(NewTsBlobPoint(time.Now(), []byte("content")))

// Insert multiple points
blobPoints := make([]TsBlobPoint, 2)
blobPoints[0] = NewTsBlobPoint(time.Now(), []byte("content"))
blobPoints[1] = NewTsBlobPoint(time.Now(), []byte("content_2"))

err := column.Insert(blobPoints...)
if err != nil {
	// handle error
}
Output:

type TsBlobPoint

type TsBlobPoint struct {
	// contains filtered or unexported fields
}

TsBlobPoint : timestamped blob data point

func NewTsBlobPoint

func NewTsBlobPoint(timestamp time.Time, value []byte) TsBlobPoint

NewTsBlobPoint : Create new timeseries blob point

func (TsBlobPoint) Content

func (t TsBlobPoint) Content() []byte

Content : return data point content

func (TsBlobPoint) Timestamp

func (t TsBlobPoint) Timestamp() time.Time

Timestamp : return data point timestamp

type TsBulk

type TsBulk struct {
	// contains filtered or unexported fields
}

TsBulk : A structure that allows appending data to a timeseries

func (*TsBulk) GetBlob

func (t *TsBulk) GetBlob() ([]byte, error)

GetBlob : gets a blob in row

func (*TsBulk) GetDouble

func (t *TsBulk) GetDouble() (float64, error)

GetDouble : gets a double in row

func (*TsBulk) GetInt64

func (t *TsBulk) GetInt64() (int64, error)

GetInt64 : gets an int64 in row

func (*TsBulk) GetRanges

func (t *TsBulk) GetRanges(rgs ...TsRange) error

GetRanges : create a range bulk query

func (*TsBulk) GetString

func (t *TsBulk) GetString() (string, error)

GetString : gets a string in row

func (*TsBulk) GetTimestamp

func (t *TsBulk) GetTimestamp() (time.Time, error)

GetTimestamp : gets a timestamp in row

func (*TsBulk) Ignore

func (t *TsBulk) Ignore() *TsBulk

Ignore : ignores this column in a row transaction

func (*TsBulk) NextRow

func (t *TsBulk) NextRow() (time.Time, error)

NextRow : advance to the next row, or the first one if not already used

func (*TsBulk) Release

func (t *TsBulk) Release()

Release : release the memory of the local table

func (*TsBulk) Row

func (t *TsBulk) Row(timestamp time.Time) *TsBulk

Row : initialize a row append

func (TsBulk) RowCount

func (t TsBulk) RowCount() int

RowCount : returns the number of rows to be appended
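The TsBulk getters above are meant to be used row by row after a range query: select the rows, advance with NextRow, then read each column in order. A minimal sketch, assuming a populated `timeseries` as in the other examples and the entry's `Bulk` constructor from the wider API:

```go
// Open a local table over the columns we want to read.
bulk, err := timeseries.Bulk(
	NewTsColumnInfo("series_column_blob", TsColumnBlob),
	NewTsColumnInfo("series_column_double", TsColumnDouble),
)
if err != nil {
	// handle error
}
defer bulk.Release()

// Select the rows to iterate over, then walk them column by column.
r := NewRange(time.Unix(0, 0), time.Unix(40, 5))
if err := bulk.GetRanges(r); err != nil {
	// handle error
}
for {
	timestamp, err := bulk.NextRow()
	if err != nil {
		break // iteration ends with an error once all rows are consumed
	}
	blobValue, _ := bulk.GetBlob()     // column 0
	doubleValue, _ := bulk.GetDouble() // column 1
	fmt.Println(timestamp.UTC(), string(blobValue), doubleValue)
}
```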

type TsColumnInfo

type TsColumnInfo struct {
	// contains filtered or unexported fields
}

TsColumnInfo : column information in timeseries

func NewSymbolColumnInfo added in v3.13.0

func NewSymbolColumnInfo(columnName string, symtableName string) TsColumnInfo

func NewTsColumnInfo

func NewTsColumnInfo(columnName string, columnType TsColumnType) TsColumnInfo

NewTsColumnInfo : create a column info structure

func (TsColumnInfo) Name

func (t TsColumnInfo) Name() string

Name : return column name

func (TsColumnInfo) Symtable added in v3.13.0

func (t TsColumnInfo) Symtable() string

Symtable : return column symbol table name

func (TsColumnInfo) Type

func (t TsColumnInfo) Type() TsColumnType

Type : return column type
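A short sketch of building column descriptions and reading them back through the accessors above; the symbol table name `"my_symtable"` is purely illustrative:

```go
// Describe a blob column and a symbol column before creating a timeseries.
colBlob := NewTsColumnInfo("series_column_blob", TsColumnBlob)
colSym := NewSymbolColumnInfo("series_column_symbol", "my_symtable")

fmt.Println("name:", colBlob.Name(), "- type:", colBlob.Type())
fmt.Println("name:", colSym.Name(), "- symtable:", colSym.Symtable())
```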

type TsColumnType

type TsColumnType C.qdb_ts_column_type_t

TsColumnType : Timeseries column types

Values

TsColumnDouble : column is a double point
TsColumnBlob : column is a blob point
TsColumnInt64 : column is an int64 point
TsColumnTimestamp : column is a timestamp point
TsColumnString : column is a string point
TsColumnSymbol : column is a symbol point

type TsDoubleAggregation

type TsDoubleAggregation struct {
	// contains filtered or unexported fields
}

TsDoubleAggregation : Aggregation of double type

func NewDoubleAggregation

func NewDoubleAggregation(kind TsAggregationType, rng TsRange) *TsDoubleAggregation

NewDoubleAggregation : Create new timeseries double aggregation

func (TsDoubleAggregation) Count

func (t TsDoubleAggregation) Count() int64

Count : returns the number of points aggregated into the result

func (TsDoubleAggregation) Range

func (t TsDoubleAggregation) Range() TsRange

Range : returns the range of the aggregation

func (TsDoubleAggregation) Result

func (t TsDoubleAggregation) Result() TsDoublePoint

Result : result of the aggregation

func (TsDoubleAggregation) Type

func (t TsDoubleAggregation) Type() TsAggregationType

Type : returns the type of the aggregation

type TsDoubleColumn

type TsDoubleColumn struct {
	// contains filtered or unexported fields
}

TsDoubleColumn : a time series double column

func (TsDoubleColumn) Aggregate

func (column TsDoubleColumn) Aggregate(aggs ...*TsDoubleAggregation) ([]TsDoubleAggregation, error)

Aggregate : Aggregate a sub-part of a timeseries from the specified aggregations.

It is an error to call this function on a non-existing time series.
Example
SetLogFile(ExamplesLogFilePath)
h, timeseries := MustCreateTimeseriesWithData("ExampleTsDoubleColumn_Aggregate")
defer h.Close()

column := timeseries.DoubleColumn("series_column_double")

r := NewRange(time.Unix(0, 0), time.Unix(40, 5))
aggFirst := NewDoubleAggregation(AggFirst, r)
aggMean := NewDoubleAggregation(AggArithmeticMean, r)
results, err := column.Aggregate(aggFirst, aggMean)
if err != nil {
	// handle error
}
fmt.Println("first:", results[0].Result().Content())
fmt.Println("mean:", results[1].Result().Content())
fmt.Println("number of elements reviewed for mean:", results[1].Count())
Output:

first: 0
mean: 1.5
number of elements reviewed for mean: 4

func (TsDoubleColumn) EraseRanges

func (column TsDoubleColumn) EraseRanges(rgs ...TsRange) (uint64, error)

EraseRanges : erase all points in the specified ranges

Example
SetLogFile(ExamplesLogFilePath)
h, timeseries := MustCreateTimeseriesWithData("ExampleTsDoubleColumn_EraseRanges")
defer h.Close()

column := timeseries.DoubleColumn("series_column_double")

r := NewRange(time.Unix(0, 0), time.Unix(40, 5))
numberOfErasedValues, err := column.EraseRanges(r)
if err != nil {
	// handle error
}
fmt.Println("Number of erased values:", numberOfErasedValues)
Output:

Number of erased values: 4

func (TsDoubleColumn) GetRanges

func (column TsDoubleColumn) GetRanges(rgs ...TsRange) ([]TsDoublePoint, error)

GetRanges : Retrieves doubles in the specified range of the time series column.

It is an error to call this function on a non-existing time series.
Example
SetLogFile(ExamplesLogFilePath)
h, timeseries := MustCreateTimeseriesWithData("ExampleTsDoubleColumn_GetRanges")
defer h.Close()

column := timeseries.DoubleColumn("series_column_double")

r := NewRange(time.Unix(0, 0), time.Unix(40, 5))
doublePoints, err := column.GetRanges(r)
if err != nil {
	// handle error
}
for _, point := range doublePoints {
	fmt.Println("timestamp:", point.Timestamp().UTC(), "- value:", point.Content())
}
Output:

timestamp: 1970-01-01 00:00:10 +0000 UTC - value: 0
timestamp: 1970-01-01 00:00:20 +0000 UTC - value: 1
timestamp: 1970-01-01 00:00:30 +0000 UTC - value: 2
timestamp: 1970-01-01 00:00:40 +0000 UTC - value: 3

func (TsDoubleColumn) Insert

func (column TsDoubleColumn) Insert(points ...TsDoublePoint) error

Insert double points into a timeseries

Example
SetLogFile(ExamplesLogFilePath)
h, timeseries := MustCreateTimeseriesWithColumns("ExampleTsDoubleColumn_Insert")
defer h.Close()

column := timeseries.DoubleColumn("series_column_double")

// Insert only one point:
column.Insert(NewTsDoublePoint(time.Now(), 3.2))

// Insert multiple points
doublePoints := make([]TsDoublePoint, 2)
doublePoints[0] = NewTsDoublePoint(time.Now(), 3.2)
doublePoints[1] = NewTsDoublePoint(time.Now(), 4.8)

err := column.Insert(doublePoints...)
if err != nil {
	// handle error
}
Output:

type TsDoublePoint

type TsDoublePoint struct {
	// contains filtered or unexported fields
}

TsDoublePoint : timestamped double data point

func NewTsDoublePoint

func NewTsDoublePoint(timestamp time.Time, value float64) TsDoublePoint

NewTsDoublePoint : Create new timeseries double point

func (TsDoublePoint) Content

func (t TsDoublePoint) Content() float64

Content : return data point content

func (TsDoublePoint) Timestamp

func (t TsDoublePoint) Timestamp() time.Time

Timestamp : return data point timestamp

type TsInt64Aggregation

type TsInt64Aggregation struct {
	// contains filtered or unexported fields
}

TsInt64Aggregation : Aggregation of int64 type

func NewInt64Aggregation

func NewInt64Aggregation(kind TsAggregationType, rng TsRange) *TsInt64Aggregation

NewInt64Aggregation : Create new timeseries int64 aggregation

func (TsInt64Aggregation) Count

func (t TsInt64Aggregation) Count() int64

Count : returns the number of points aggregated into the result

func (TsInt64Aggregation) Range

func (t TsInt64Aggregation) Range() TsRange

Range : returns the range of the aggregation

func (TsInt64Aggregation) Result

func (t TsInt64Aggregation) Result() TsInt64Point

Result : result of the aggregation

func (TsInt64Aggregation) Type

Type : returns the type of the aggregation

type TsInt64Column

type TsInt64Column struct {
	// contains filtered or unexported fields
}

TsInt64Column : a time series int64 column

func (TsInt64Column) Aggregate

func (column TsInt64Column) Aggregate(aggs ...*TsInt64Aggregation) ([]TsInt64Aggregation, error)

Aggregate : Aggregate a sub-part of a timeseries from the specified aggregations.

It is an error to call this function on a non-existing time series.
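A sketch mirroring the TsDoubleColumn aggregation example, assuming a timeseries populated as in the other examples on this page:

```go
column := timeseries.Int64Column("series_column_int64")

// Aggregate the first point over the first 40 seconds of data.
r := NewRange(time.Unix(0, 0), time.Unix(40, 5))
aggFirst := NewInt64Aggregation(AggFirst, r)
results, err := column.Aggregate(aggFirst)
if err != nil {
	// handle error
}
fmt.Println("first:", results[0].Result().Content())
```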

func (TsInt64Column) EraseRanges

func (column TsInt64Column) EraseRanges(rgs ...TsRange) (uint64, error)

EraseRanges : erase all points in the specified ranges

Example
SetLogFile(ExamplesLogFilePath)
h, timeseries := MustCreateTimeseriesWithData("ExampleTsInt64Column_EraseRanges")
defer h.Close()

column := timeseries.Int64Column("series_column_int64")

r := NewRange(time.Unix(0, 0), time.Unix(40, 5))
numberOfErasedValues, err := column.EraseRanges(r)
if err != nil {
	// handle error
}
fmt.Println("Number of erased values:", numberOfErasedValues)
Output:

Number of erased values: 4

func (TsInt64Column) GetRanges

func (column TsInt64Column) GetRanges(rgs ...TsRange) ([]TsInt64Point, error)

GetRanges : Retrieves int64s in the specified range of the time series column.

It is an error to call this function on a non-existing time series.
Example
SetLogFile(ExamplesLogFilePath)
h, timeseries := MustCreateTimeseriesWithData("ExampleTsInt64Column_GetRanges")
defer h.Close()

column := timeseries.Int64Column("series_column_int64")

r := NewRange(time.Unix(0, 0), time.Unix(40, 5))
int64Points, err := column.GetRanges(r)
if err != nil {
	// handle error
}
for _, point := range int64Points {
	fmt.Println("timestamp:", point.Timestamp().UTC(), "- value:", point.Content())
}
Output:

timestamp: 1970-01-01 00:00:10 +0000 UTC - value: 0
timestamp: 1970-01-01 00:00:20 +0000 UTC - value: 1
timestamp: 1970-01-01 00:00:30 +0000 UTC - value: 2
timestamp: 1970-01-01 00:00:40 +0000 UTC - value: 3

func (TsInt64Column) Insert

func (column TsInt64Column) Insert(points ...TsInt64Point) error

Insert int64 points into a timeseries

Example
SetLogFile(ExamplesLogFilePath)
h, timeseries := MustCreateTimeseriesWithColumns("ExampleTsInt64Column_Insert")
defer h.Close()

column := timeseries.Int64Column("series_column_int64")

// Insert only one point:
column.Insert(NewTsInt64Point(time.Now(), 3))

// Insert multiple points
int64Points := make([]TsInt64Point, 2)
int64Points[0] = NewTsInt64Point(time.Now(), 3)
int64Points[1] = NewTsInt64Point(time.Now(), 4)

err := column.Insert(int64Points...)
if err != nil {
	// handle error
}
Output:

type TsInt64Point

type TsInt64Point struct {
	// contains filtered or unexported fields
}

TsInt64Point : timestamped int64 data point

func NewTsInt64Point

func NewTsInt64Point(timestamp time.Time, value int64) TsInt64Point

NewTsInt64Point : Create new timeseries int64 point

func (TsInt64Point) Content

func (t TsInt64Point) Content() int64

Content : return data point content

func (TsInt64Point) Timestamp

func (t TsInt64Point) Timestamp() time.Time

Timestamp : return data point timestamp

type TsRange

type TsRange struct {
	// contains filtered or unexported fields
}

TsRange : timeseries range with begin and end timestamp

func NewRange

func NewRange(begin, end time.Time) TsRange

NewRange : creates a time range

func (TsRange) Begin

func (t TsRange) Begin() time.Time

Begin : returns the start of the time range

func (TsRange) End

func (t TsRange) End() time.Time

End : returns the end of the time range
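A minimal sketch of the range used by the GetRanges and EraseRanges examples throughout this page:

```go
// A range from the epoch to 40 seconds (and 5 nanoseconds) after it.
r := NewRange(time.Unix(0, 0), time.Unix(40, 5))
fmt.Println("begin:", r.Begin().UTC(), "- end:", r.End().UTC())
```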

type TsStringAggregation

type TsStringAggregation struct {
	// contains filtered or unexported fields
}

TsStringAggregation : Aggregation of string type

func NewStringAggregation

func NewStringAggregation(kind TsAggregationType, rng TsRange) *TsStringAggregation

NewStringAggregation : Create new timeseries string aggregation

func (TsStringAggregation) Count

func (t TsStringAggregation) Count() int64

Count : returns the number of points aggregated into the result

func (TsStringAggregation) Range

func (t TsStringAggregation) Range() TsRange

Range : returns the range of the aggregation

func (TsStringAggregation) Result

func (t TsStringAggregation) Result() TsStringPoint

Result : result of the aggregation

func (TsStringAggregation) Type

func (t TsStringAggregation) Type() TsAggregationType

Type : returns the type of the aggregation

type TsStringColumn

type TsStringColumn struct {
	// contains filtered or unexported fields
}

TsStringColumn : a time series string column

func (TsStringColumn) Aggregate

func (column TsStringColumn) Aggregate(aggs ...*TsStringAggregation) ([]TsStringAggregation, error)

Aggregate : Aggregate a sub-part of the time series.

It is an error to call this function on a non-existing time series.

func (TsStringColumn) EraseRanges

func (column TsStringColumn) EraseRanges(rgs ...TsRange) (uint64, error)

EraseRanges : erase all points in the specified ranges

func (TsStringColumn) GetRanges

func (column TsStringColumn) GetRanges(rgs ...TsRange) ([]TsStringPoint, error)

GetRanges : Retrieves strings in the specified range of the time series column.

It is an error to call this function on a non-existing time series.

func (TsStringColumn) Insert

func (column TsStringColumn) Insert(points ...TsStringPoint) error

Insert string points into a timeseries
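TsStringColumn has no example of its own; the following sketch follows the pattern of the other column types, assuming a timeseries with a string column as created in the earlier examples:

```go
column := timeseries.StringColumn("series_column_string")

// Insert two string points at fixed timestamps.
err := column.Insert(
	NewTsStringPoint(time.Unix(10, 0), "content_0"),
	NewTsStringPoint(time.Unix(20, 0), "content_1"),
)
if err != nil {
	// handle error
}

// Read them back over a range.
r := NewRange(time.Unix(0, 0), time.Unix(40, 5))
stringPoints, err := column.GetRanges(r)
if err != nil {
	// handle error
}
for _, point := range stringPoints {
	fmt.Println("timestamp:", point.Timestamp().UTC(), "- value:", point.Content())
}
```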

type TsStringPoint

type TsStringPoint struct {
	// contains filtered or unexported fields
}

TsStringPoint : timestamped string data point

func NewTsStringPoint

func NewTsStringPoint(timestamp time.Time, value string) TsStringPoint

NewTsStringPoint : Create new timeseries string point

func (TsStringPoint) Content

func (t TsStringPoint) Content() string

Content : return data point content

func (TsStringPoint) Timestamp

func (t TsStringPoint) Timestamp() time.Time

Timestamp : return data point timestamp

type TsTimestampAggregation

type TsTimestampAggregation struct {
	// contains filtered or unexported fields
}

TsTimestampAggregation : Aggregation of timestamp type

func NewTimestampAggregation

func NewTimestampAggregation(kind TsAggregationType, rng TsRange) *TsTimestampAggregation

NewTimestampAggregation : Create new timeseries timestamp aggregation

func (TsTimestampAggregation) Count

func (t TsTimestampAggregation) Count() int64

Count : returns the number of points aggregated into the result

func (TsTimestampAggregation) Range

func (t TsTimestampAggregation) Range() TsRange

Range : returns the range of the aggregation

func (TsTimestampAggregation) Result

func (t TsTimestampAggregation) Result() TsTimestampPoint

Result : result of the aggregation

func (TsTimestampAggregation) Type

func (t TsTimestampAggregation) Type() TsAggregationType

Type : returns the type of the aggregation

type TsTimestampColumn

type TsTimestampColumn struct {
	// contains filtered or unexported fields
}

TsTimestampColumn : a time series timestamp column

func (TsTimestampColumn) Aggregate

func (column TsTimestampColumn) Aggregate(aggs ...*TsTimestampAggregation) ([]TsTimestampAggregation, error)

Aggregate : Aggregate a sub-part of a timeseries from the specified aggregations.

It is an error to call this function on a non-existing time series.

func (TsTimestampColumn) EraseRanges

func (column TsTimestampColumn) EraseRanges(rgs ...TsRange) (uint64, error)

EraseRanges : erase all points in the specified ranges

Example
SetLogFile(ExamplesLogFilePath)
h, timeseries := MustCreateTimeseriesWithData("ExampleTsTimestampColumn_EraseRanges")
defer h.Close()

column := timeseries.TimestampColumn("series_column_timestamp")

r := NewRange(time.Unix(0, 0), time.Unix(40, 5))
numberOfErasedValues, err := column.EraseRanges(r)
if err != nil {
	// handle error
}
fmt.Println("Number of erased values:", numberOfErasedValues)
Output:

Number of erased values: 4

func (TsTimestampColumn) GetRanges

func (column TsTimestampColumn) GetRanges(rgs ...TsRange) ([]TsTimestampPoint, error)

GetRanges : Retrieves timestamps in the specified range of the time series column.

It is an error to call this function on a non-existing time series.
Example
SetLogFile(ExamplesLogFilePath)
h, timeseries := MustCreateTimeseriesWithData("ExampleTsTimestampColumn_GetRanges")
defer h.Close()

column := timeseries.TimestampColumn("series_column_timestamp")

r := NewRange(time.Unix(0, 0), time.Unix(40, 5))
timestampPoints, err := column.GetRanges(r)
if err != nil {
	// handle error
}
for _, point := range timestampPoints {
	fmt.Println("timestamp:", point.Timestamp().UTC(), "- value:", point.Content().UTC())
}
Output:

timestamp: 1970-01-01 00:00:10 +0000 UTC - value: 1970-01-01 00:00:10 +0000 UTC
timestamp: 1970-01-01 00:00:20 +0000 UTC - value: 1970-01-01 00:00:20 +0000 UTC
timestamp: 1970-01-01 00:00:30 +0000 UTC - value: 1970-01-01 00:00:30 +0000 UTC
timestamp: 1970-01-01 00:00:40 +0000 UTC - value: 1970-01-01 00:00:40 +0000 UTC

func (TsTimestampColumn) Insert

func (column TsTimestampColumn) Insert(points ...TsTimestampPoint) error

Insert timestamp points into a timeseries

Example
SetLogFile(ExamplesLogFilePath)
h, timeseries := MustCreateTimeseriesWithColumns("ExampleTsTimestampColumn_Insert")
defer h.Close()

column := timeseries.TimestampColumn("series_column_timestamp")

// Insert only one point:
column.Insert(NewTsTimestampPoint(time.Now(), time.Now()))

// Insert multiple points
timestampPoints := make([]TsTimestampPoint, 2)
timestampPoints[0] = NewTsTimestampPoint(time.Now(), time.Now())
timestampPoints[1] = NewTsTimestampPoint(time.Now(), time.Now())

err := column.Insert(timestampPoints...)
if err != nil {
	// handle error
}
Output:

type TsTimestampPoint

type TsTimestampPoint struct {
	// contains filtered or unexported fields
}

TsTimestampPoint : timestamped timestamp data point

func NewTsTimestampPoint

func NewTsTimestampPoint(timestamp time.Time, value time.Time) TsTimestampPoint

NewTsTimestampPoint : Create new timeseries timestamp point

func (TsTimestampPoint) Content

func (t TsTimestampPoint) Content() time.Time

Content : return data point content

func (TsTimestampPoint) Timestamp

func (t TsTimestampPoint) Timestamp() time.Time

Timestamp : return data point timestamp

Directories

Path Synopsis
examples
