qdb-api-go: github.com/bureau14/qdb-api-go Index | Examples | Files

package qdb

import "github.com/bureau14/qdb-api-go"

Package qdb provides an API to a quasardb server

Index

Examples

Package Files

cluster.go constants.go direct.go entry.go entry_blob.go entry_integer.go entry_timeseries.go entry_timeseries_aggregation.go entry_timeseries_batch.go entry_timeseries_bulk.go entry_timeseries_columns.go entry_timeseries_points.go entry_timeseries_range.go error.go find.go handle.go json_objects.go library_link.go node.go query.go statistics.go time.go utils.go

func ClusterKeyFromFile Uses

func ClusterKeyFromFile(clusterPublicKeyFile string) (string, error)

ClusterKeyFromFile : retrieve cluster key from a file

func CountUndefined Uses

func CountUndefined() uint64

CountUndefined : return a uint64 value corresponding to quasardb undefined count value

func Int64Undefined Uses

func Int64Undefined() int64

Int64Undefined : return an int64 value corresponding to quasardb undefined int64 value

func MaxTimespec Uses

func MaxTimespec() time.Time

MaxTimespec : return a time value corresponding to quasardb maximum timespec value

func MinTimespec Uses

func MinTimespec() time.Time

MinTimespec : return a time value corresponding to quasardb minimum timespec value

func NeverExpires Uses

func NeverExpires() time.Time

NeverExpires : return a time value corresponding to quasardb never expires value

func PreserveExpiration Uses

func PreserveExpiration() time.Time

PreserveExpiration : return a time value corresponding to quasardb preserve expiration value

func UserCredentialFromFile Uses

func UserCredentialFromFile(userCredentialFile string) (string, string, error)

UserCredentialFromFile : retrieve user credentials from a file

type BlobEntry Uses

type BlobEntry struct {
    Entry
}

BlobEntry : blob data type

Code:

h := MustSetupHandle(insecureURI, 120*time.Second)
defer h.Close()

alias := "BlobAlias"
blob := h.Blob(alias)
defer blob.Remove()

content := []byte("content")
blob.Put(content, NeverExpires())

obtainedContent, _ := blob.Get()
fmt.Println("Get content:", string(obtainedContent))

updateContent := []byte("updated content")
blob.Update(updateContent, PreserveExpiration())

obtainedContent, _ = blob.Get()
fmt.Println("Get updated content:", string(obtainedContent))

newContent := []byte("new content")
previousContent, _ := blob.GetAndUpdate(newContent, PreserveExpiration())
fmt.Println("Previous content:", string(previousContent))

obtainedContent, _ = blob.Get()
fmt.Println("Get new content:", string(obtainedContent))

Output:

Get content: content
Get updated content: updated content
Previous content: updated content
Get new content: new content

func (*BlobEntry) CompareAndSwap Uses

func (entry *BlobEntry) CompareAndSwap(newValue []byte, newComparand []byte, expiry time.Time) ([]byte, error)

CompareAndSwap : Atomically compares the entry with the comparand and updates it to newValue if, and only if, they match.

The function returns the original value of the entry in case of a mismatch; when they match, no content is returned.
The entry must already exist.
The update occurs if and only if the content of the entry matches, bit for bit, the content of the comparand buffer.
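
A minimal sketch of the compare-and-swap pattern, in the style of the examples above (assuming the same insecureURI test cluster, and that a mismatch surfaces as a non-nil error):

```go
h := MustSetupHandle(insecureURI, 120*time.Second)
defer h.Close()

blob := h.Blob("CasAlias")
blob.Put([]byte("v1"), NeverExpires())
defer blob.Remove()

// Swap "v1" for "v2" only if the stored content is still "v1".
// On a match no content is returned; on a mismatch the entry's
// actual content comes back alongside the error.
original, err := blob.CompareAndSwap([]byte("v2"), []byte("v1"), PreserveExpiration())
if err != nil {
	fmt.Println("mismatch, entry contains:", string(original))
}
```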

func (BlobEntry) Get Uses

func (entry BlobEntry) Get() ([]byte, error)

Get : Retrieve an entry's content

If the entry does not exist, the function will fail and return an 'alias not found' error.

func (BlobEntry) GetAndRemove Uses

func (entry BlobEntry) GetAndRemove() ([]byte, error)

GetAndRemove : Atomically gets an entry from the quasardb server and removes it.

If the entry does not exist, the function will fail and return an 'alias not found' error.

func (*BlobEntry) GetAndUpdate Uses

func (entry *BlobEntry) GetAndUpdate(newContent []byte, expiry time.Time) ([]byte, error)

GetAndUpdate : Atomically gets and updates (in this order) the entry on the quasardb server.

The entry must already exist.

func (BlobEntry) GetNoAlloc Uses

func (entry BlobEntry) GetNoAlloc(content []byte) (int, error)

GetNoAlloc : Retrieve an entry's content into an already allocated buffer

If the entry does not exist, the function will fail and return an 'alias not found' error.
If the buffer is not large enough to hold the data, the function will fail
and return a 'buffer is too small' error; the entry's content length is nevertheless
returned so that the caller may resize its buffer and try again.
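
The resize-and-retry idiom this enables might look like the following sketch (a hypothetical blob entry is assumed to already exist):

```go
buf := make([]byte, 16)
n, err := blob.GetNoAlloc(buf)
if err != nil {
	// Assuming a 'buffer is too small' failure: n carries the
	// entry's full size, so grow the buffer and try once more.
	buf = make([]byte, n)
	n, err = blob.GetNoAlloc(buf)
}
if err == nil {
	fmt.Println("content:", string(buf[:n]))
}
```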

func (BlobEntry) Put Uses

func (entry BlobEntry) Put(content []byte, expiry time.Time) error

Put : Creates a new entry and sets its content to the provided blob.

If the entry already exists, the function will fail and return an 'alias already exists' error.
You can specify an expiry or use NeverExpires if you don’t want the entry to expire.

func (BlobEntry) RemoveIf Uses

func (entry BlobEntry) RemoveIf(comparand []byte) error

RemoveIf : Atomically removes the entry on the server if the content matches.

The entry must already exist.
Removal will occur if and only if the content of the entry matches bit for bit the content of the comparand buffer.

func (*BlobEntry) Update Uses

func (entry *BlobEntry) Update(newContent []byte, expiry time.Time) error

Update : Creates or updates an entry and sets its content to the provided blob.

If the entry already exists, the function will modify the entry.
You can specify an expiry or use NeverExpires if you don’t want the entry to expire.

type Cluster Uses

type Cluster struct {
    HandleType
}

Cluster : An object permitting calls to a cluster

func (Cluster) Endpoints Uses

func (c Cluster) Endpoints() ([]Endpoint, error)

Endpoints : Retrieve all endpoints accessible to this handle.

func (Cluster) PurgeAll Uses

func (c Cluster) PurgeAll() error

PurgeAll : Removes irremediably all data from all the nodes of the cluster.

This function is useful when quasardb is used as a cache and is not the golden source.
This call is not atomic: if the command cannot be dispatched on the whole cluster, it will be dispatched on as many nodes as possible and the function will return with a qdb_e_ok code.
By default the cluster does not allow this operation and the function returns a qdb_e_operation_disabled error.

func (Cluster) PurgeCache Uses

func (c Cluster) PurgeCache() error

PurgeCache : Removes all cached data from all the nodes of the cluster.

This function is disabled on a transient cluster.
Prefer PurgeAll in this case.

This call is not atomic: if the command cannot be dispatched on the whole cluster, it will be dispatched on as many nodes as possible and the function will return with a qdb_e_ok code.

func (Cluster) TrimAll Uses

func (c Cluster) TrimAll() error

TrimAll : Trims all data on all the nodes of the cluster.

Quasardb uses Multi-Version Concurrency Control (MVCC) as a foundation of its transaction engine. It will automatically clean up old versions as entries are accessed.
This call is not atomic: if the command cannot be dispatched on the whole cluster, it will be dispatched on as many nodes as possible and the function will return with a qdb_e_ok code.
Entries that are not accessed may not be cleaned up, resulting in increasing disk usage.

This function will request each node to trim all entries, release unused memory, and compact files on disk.
Because this operation is I/O and CPU intensive, it is not recommended to run it while the cluster is heavily used.

func (Cluster) WaitForStabilization Uses

func (c Cluster) WaitForStabilization(timeout time.Duration) error

WaitForStabilization : Wait for all nodes of the cluster to be stabilized.

Takes a timeout value as a time.Duration.

type Compression Uses

type Compression C.qdb_compression_t

Compression : compression parameter

const (
    CompNone Compression = C.qdb_comp_none
    CompFast Compression = C.qdb_comp_fast
    CompBest Compression = C.qdb_comp_best
)

Compression values:

CompNone : No compression.
CompFast : Maximum compression speed, potentially minimum compression ratio. This is currently the default.
CompBest : Maximum compression ratio, potentially minimum compression speed. This is currently not implemented.

type DirectBlobEntry Uses

type DirectBlobEntry struct {
    DirectEntry
}

DirectBlobEntry is an Entry for a blob data type

func (DirectBlobEntry) Get Uses

func (e DirectBlobEntry) Get() ([]byte, error)

Get returns an entry's contents

func (DirectBlobEntry) Put Uses

func (e DirectBlobEntry) Put(content []byte, expiry time.Time) error

Put creates a new entry and sets its content to the provided blob. This will return an error if the entry alias already exists. You can specify an expiry or use NeverExpires if you don’t want the entry to expire.

func (*DirectBlobEntry) Update Uses

func (e *DirectBlobEntry) Update(newContent []byte, expiry time.Time) error

Update creates or updates an entry and sets its content to the provided blob. If the entry already exists, the function will modify the entry. You can specify an expiry or use NeverExpires if you don’t want the entry to expire.

type DirectEntry Uses

type DirectEntry struct {
    DirectHandleType
    // contains filtered or unexported fields
}

DirectEntry is a base type for composition, similar to a regular Entry

func (DirectEntry) Alias Uses

func (e DirectEntry) Alias() string

Alias returns an alias name

func (DirectEntry) Remove Uses

func (e DirectEntry) Remove() error

Remove an entry from the local node's storage, regardless of its type.

This function bypasses the clustering mechanism and accesses the node local storage. Entries in the local node storage are not accessible via the regular API and vice versa.

The call is ACID, regardless of the type of the entry and a transaction will be created if need be.

type DirectHandleType Uses

type DirectHandleType struct {
    // contains filtered or unexported fields
}

DirectHandleType is an opaque handle needed for maintaining a direct connection to a node.

func (DirectHandleType) Blob Uses

func (h DirectHandleType) Blob(alias string) DirectBlobEntry

Blob creates a direct blob entry object

func (DirectHandleType) Close Uses

func (h DirectHandleType) Close() error

Close releases a direct connection previously opened with DirectConnect

func (DirectHandleType) Integer Uses

func (h DirectHandleType) Integer(alias string) DirectIntegerEntry

Integer creates a direct integer entry object

func (DirectHandleType) PrefixGet Uses

func (h DirectHandleType) PrefixGet(prefix string, limit int) ([]string, error)

PrefixGet : Retrieves the list of all entries matching the provided prefix.

A prefix-based search will enable you to find all entries matching a provided prefix.
This function returns the list of aliases. It’s up to the user to query the content associated with every entry, if needed.

func (DirectHandleType) Release Uses

func (h DirectHandleType) Release(buffer unsafe.Pointer)

Release frees API allocated buffers

type DirectIntegerEntry Uses

type DirectIntegerEntry struct {
    DirectEntry
}

DirectIntegerEntry is an Entry for an integer data type

func (DirectIntegerEntry) Add Uses

func (e DirectIntegerEntry) Add(added int64) (int64, error)

Add : Atomically increases or decreases a signed 64-bit integer.

The specified entry will be atomically increased (or decreased) according to the given addend value:
	To increase the value, specify a positive addend
	To decrease the value, specify a negative addend

The function returns the result of the operation.
The entry must already exist.
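
A hypothetical counter sketch over a direct connection (h is an open HandleType, and nodeURI is an assumed node address, not an identifier from this package):

```go
dh, err := h.DirectConnect(nodeURI)
if err != nil {
	panic(err)
}
defer dh.Close()

counter := dh.Integer("hits")
counter.Put(0, NeverExpires())
defer counter.Remove()

total, _ := counter.Add(5) // positive addend: increases the value
total, _ = counter.Add(-2) // negative addend: decreases the value
fmt.Println("total:", total)
```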

func (DirectIntegerEntry) Get Uses

func (e DirectIntegerEntry) Get() (int64, error)

Get returns the value of a signed 64-bit integer

func (DirectIntegerEntry) Put Uses

func (e DirectIntegerEntry) Put(content int64, expiry time.Time) error

Put creates a new signed 64-bit integer.

Atomically creates an entry of the given alias and sets it to a cross-platform signed 64-bit integer.
If the entry already exists, the function returns an error.

You can specify an expiry time or use NeverExpires if you don’t want the entry to expire.
If you want to create or update an entry use Update.

The value will be correctly translated independently of the endianness of the client’s platform.

func (DirectIntegerEntry) Update Uses

func (e DirectIntegerEntry) Update(newContent int64, expiry time.Time) error

Update creates or updates a signed 64-bit integer.

Atomically updates an entry of the given alias to the provided value.
If the entry doesn’t exist, it will be created.

You can specify an expiry time or use NeverExpires if you don’t want the entry to expire.

type Encryption Uses

type Encryption C.qdb_encryption_t

Encryption : encryption option

const (
    EncryptNone Encryption = C.qdb_crypt_none
    EncryptAES  Encryption = C.qdb_crypt_aes_gcm_256
)

Encryption values:

EncryptNone : No encryption.
EncryptAES : Uses AES-GCM 256-bit encryption.

type Endpoint Uses

type Endpoint struct {
    Address string
    Port    int64
}

Endpoint : A structure representing a qdb URL endpoint

func (Endpoint) URI Uses

func (t Endpoint) URI() string

URI : Returns a formatted URI of the endpoint

type Entry Uses

type Entry struct {
    HandleType
    // contains filtered or unexported fields
}

Entry : a base type for composition that cannot be constructed directly

func (Entry) Alias Uses

func (e Entry) Alias() string

Alias : Return an alias string of the object

Code:

h := MustSetupHandle(insecureURI, 120*time.Second)
defer h.Close()

blob1 := h.Blob("BLOB_1")
blob1.Put([]byte("blob 1 content"), NeverExpires())
defer blob1.Remove()
blob2 := h.Blob("BLOB_2")
blob2.Put([]byte("blob 2 content"), NeverExpires())
defer blob2.Remove()

fmt.Println("Alias blob 1:", blob1.Alias())
fmt.Println("Alias blob 2:", blob2.Alias())

tags1 := []string{"tag blob 1", "tag both blob"}
blob1.AttachTags(tags1)
defer blob1.DetachTags(tags1)
tags2 := []string{"tag blob 2", "tag both blob"}
blob2.AttachTags(tags2)
defer blob2.DetachTags(tags2)

resultTagBlob1, _ := blob1.GetTagged("tag blob 1")
fmt.Println("Tagged with 'tag blob 1':", resultTagBlob1)
resultTagBlob2, _ := blob1.GetTagged("tag blob 2")
fmt.Println("Tagged with 'tag blob 2':", resultTagBlob2)
resultTagBoth, _ := blob1.GetTagged("tag both blob")
fmt.Println("Tagged with 'tag both blob':", resultTagBoth)

Output:

Alias blob 1: BLOB_1
Alias blob 2: BLOB_2
Tagged with 'tag blob 1': [BLOB_1]
Tagged with 'tag blob 2': [BLOB_2]
Tagged with 'tag both blob': [BLOB_2 BLOB_1]

func (Entry) AttachTag Uses

func (e Entry) AttachTag(tag string) error

AttachTag : Adds a tag entry.

Tagging an entry enables you to search for entries based on their tags. Tags scale across nodes.
The entry must exist.
The tag may or may not exist.

func (Entry) AttachTags Uses

func (e Entry) AttachTags(tags []string) error

AttachTags : Adds a collection of tags to a single entry.

Tagging an entry enables you to search for entries based on their tags. Tags scale across nodes.
The function will ignore existing tags.
The entry must exist.
The tag may or may not exist.

func (Entry) DetachTag Uses

func (e Entry) DetachTag(tag string) error

DetachTag : Removes a tag from an entry.

Tagging an entry enables you to search for entries based on their tags. Tags scale across nodes.
The entry must exist.
The tag must exist.

func (Entry) DetachTags Uses

func (e Entry) DetachTags(tags []string) error

DetachTags : Removes a collection of tags from a single entry.

Tagging an entry enables you to search for entries based on their tags. Tags scale across nodes.
The entry must exist.
The tags must exist.

func (Entry) ExpiresAt Uses

func (e Entry) ExpiresAt(expiry time.Time) error

ExpiresAt : Sets the absolute expiration time of an entry.

Blobs and integers can have an expiration time and will be automatically removed by the cluster when they expire.

The absolute expiration time is expressed relative to the Unix epoch, that is, the number of milliseconds since 1 January 1970, 00:00:00 UTC.
To use a relative expiration time (that is expiration relative to the time of the call), use ExpiresFromNow.

To remove the expiration time of an entry, pass NeverExpires as the expiry parameter.
Values in the past are refused, but the cluster will have a certain tolerance to account for clock skews.
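
A short sketch of absolute versus relative expiry on a blob (the entry is assumed to already exist):

```go
// Absolute: expire at a specific point in time.
blob.ExpiresAt(time.Now().Add(24 * time.Hour))

// Relative: expire ten minutes from the client's current time.
blob.ExpiresFromNow(10 * time.Minute)

// Remove the expiration entirely.
blob.ExpiresAt(NeverExpires())
```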

func (Entry) ExpiresFromNow Uses

func (e Entry) ExpiresFromNow(expiry time.Duration) error

ExpiresFromNow : Sets the expiration time of an entry, relative to the current time of the client.

Blobs and integers can have an expiration time and will automatically be removed by the cluster when they expire.

The expiration is relative to the current time of the machine.
To remove the expiration time of an entry or to use an absolute expiration time use ExpiresAt.

func (Entry) GetLocation Uses

func (e Entry) GetLocation() (NodeLocation, error)

GetLocation : Returns the primary node of an entry.

The exact location of an entry should be assumed random and users should not bother about its location as the API will transparently locate the best node for the requested operation.
This function is intended for higher level APIs that need to optimize transfers and potentially push computation close to the data.

func (Entry) GetMetadata Uses

func (e Entry) GetMetadata() (Metadata, error)

GetMetadata : Gets the meta-information about an entry, if it exists.

func (Entry) GetTagged Uses

func (e Entry) GetTagged(tag string) ([]string, error)

GetTagged : Retrieves all entries that have the specified tag.

Tagging an entry enables you to search for entries based on their tags. Tags scale across nodes.
The tag must exist.
The complexity of this function is constant.

func (Entry) GetTags Uses

func (e Entry) GetTags() ([]string, error)

GetTags : Retrieves all the tags of an entry.

Tagging an entry enables you to search for entries based on their tags. Tags scale across nodes.
The entry must exist.

func (Entry) HasTag Uses

func (e Entry) HasTag(tag string) error

HasTag : Tests if an entry has the requested tag.

Tagging an entry enables you to search for entries based on their tags. Tags scale across nodes.
The entry must exist.

func (Entry) Remove Uses

func (e Entry) Remove() error

Remove : Removes an entry from the cluster, regardless of its type.

This call will remove the entry, whether it is a blob, integer, deque, or stream.
It will properly untag the entry.
If the entry spans multiple blocks or nodes (deques and streams), all blocks will be properly removed.

The call is ACID, regardless of the type of the entry, and a transaction will be created if need be.

type EntryType Uses

type EntryType C.qdb_entry_type_t

EntryType : An enumeration representing possible entries type.

const (
    EntryUnitialized EntryType = C.qdb_entry_uninitialized
    EntryBlob        EntryType = C.qdb_entry_blob
    EntryInteger     EntryType = C.qdb_entry_integer
    EntryHSet        EntryType = C.qdb_entry_hset
    EntryTag         EntryType = C.qdb_entry_tag
    EntryDeque       EntryType = C.qdb_entry_deque
    EntryStream      EntryType = C.qdb_entry_stream
    EntryTS          EntryType = C.qdb_entry_ts
)

EntryType values:

EntryUnitialized : Uninitialized value.
EntryBlob : A binary large object (blob).
EntryInteger : A signed 64-bit integer.
EntryHSet : A distributed hash set.
EntryTag : A tag.
EntryDeque : A distributed double-ended queue (deque).
EntryStream : A distributed binary stream.
EntryTS : A distributed time series.

type ErrorType Uses

type ErrorType C.qdb_error_t

ErrorType wraps the underlying qdb_error_t

const (
    Success                      ErrorType = C.qdb_e_ok
    Created                      ErrorType = C.qdb_e_ok_created
    ErrUninitialized             ErrorType = C.qdb_e_uninitialized
    ErrAliasNotFound             ErrorType = C.qdb_e_alias_not_found
    ErrAliasAlreadyExists        ErrorType = C.qdb_e_alias_already_exists
    ErrOutOfBounds               ErrorType = C.qdb_e_out_of_bounds
    ErrSkipped                   ErrorType = C.qdb_e_skipped
    ErrIncompatibleType          ErrorType = C.qdb_e_incompatible_type
    ErrContainerEmpty            ErrorType = C.qdb_e_container_empty
    ErrContainerFull             ErrorType = C.qdb_e_container_full
    ErrElementNotFound           ErrorType = C.qdb_e_element_not_found
    ErrElementAlreadyExists      ErrorType = C.qdb_e_element_already_exists
    ErrOverflow                  ErrorType = C.qdb_e_overflow
    ErrUnderflow                 ErrorType = C.qdb_e_underflow
    ErrTagAlreadySet             ErrorType = C.qdb_e_tag_already_set
    ErrTagNotSet                 ErrorType = C.qdb_e_tag_not_set
    ErrTimeout                   ErrorType = C.qdb_e_timeout
    ErrConnectionRefused         ErrorType = C.qdb_e_connection_refused
    ErrConnectionReset           ErrorType = C.qdb_e_connection_reset
    ErrUnstableCluster           ErrorType = C.qdb_e_unstable_cluster
    ErrTryAgain                  ErrorType = C.qdb_e_try_again
    ErrConflict                  ErrorType = C.qdb_e_conflict
    ErrNotConnected              ErrorType = C.qdb_e_not_connected
    ErrResourceLocked            ErrorType = C.qdb_e_resource_locked
    ErrSystemRemote              ErrorType = C.qdb_e_system_remote
    ErrSystemLocal               ErrorType = C.qdb_e_system_local
    ErrInternalRemote            ErrorType = C.qdb_e_internal_remote
    ErrInternalLocal             ErrorType = C.qdb_e_internal_local
    ErrNoMemoryRemote            ErrorType = C.qdb_e_no_memory_remote
    ErrNoMemoryLocal             ErrorType = C.qdb_e_no_memory_local
    ErrInvalidProtocol           ErrorType = C.qdb_e_invalid_protocol
    ErrHostNotFound              ErrorType = C.qdb_e_host_not_found
    ErrBufferTooSmall            ErrorType = C.qdb_e_buffer_too_small
    ErrNotImplemented            ErrorType = C.qdb_e_not_implemented
    ErrInvalidVersion            ErrorType = C.qdb_e_invalid_version
    ErrInvalidArgument           ErrorType = C.qdb_e_invalid_argument
    ErrInvalidHandle             ErrorType = C.qdb_e_invalid_handle
    ErrReservedAlias             ErrorType = C.qdb_e_reserved_alias
    ErrUnmatchedContent          ErrorType = C.qdb_e_unmatched_content
    ErrInvalidIterator           ErrorType = C.qdb_e_invalid_iterator
    ErrEntryTooLarge             ErrorType = C.qdb_e_entry_too_large
    ErrTransactionPartialFailure ErrorType = C.qdb_e_transaction_partial_failure
    ErrOperationDisabled         ErrorType = C.qdb_e_operation_disabled
    ErrOperationNotPermitted     ErrorType = C.qdb_e_operation_not_permitted
    ErrIteratorEnd               ErrorType = C.qdb_e_iterator_end
    ErrInvalidReply              ErrorType = C.qdb_e_invalid_reply
    ErrNoSpaceLeft               ErrorType = C.qdb_e_no_space_left
    ErrQuotaExceeded             ErrorType = C.qdb_e_quota_exceeded
    ErrAliasTooLong              ErrorType = C.qdb_e_alias_too_long
    ErrClockSkew                 ErrorType = C.qdb_e_clock_skew
    ErrAccessDenied              ErrorType = C.qdb_e_access_denied
    ErrLoginFailed               ErrorType = C.qdb_e_login_failed
    ErrColumnNotFound            ErrorType = C.qdb_e_column_not_found
    ErrQueryTooComplex           ErrorType = C.qdb_e_query_too_complex
    ErrInvalidCryptoKey          ErrorType = C.qdb_e_invalid_crypto_key
    ErrInvalidQuery              ErrorType = C.qdb_e_invalid_query
    ErrInvalidRegex              ErrorType = C.qdb_e_invalid_regex
)

ErrorType values:

Success : Success.
Created : Success. A new entry has been created.
ErrUninitialized : Uninitialized error.
ErrAliasNotFound : Entry alias/key was not found.
ErrAliasAlreadyExists : Entry alias/key already exists.
ErrOutOfBounds : Index out of bounds.
ErrSkipped : Skipped operation. Used in batches and transactions.
ErrIncompatibleType : Entry or column is incompatible with the operation.
ErrContainerEmpty : Container is empty.
ErrContainerFull : Container is full.
ErrElementNotFound : Element was not found.
ErrElementAlreadyExists : Element already exists.
ErrOverflow : Arithmetic operation overflows.
ErrUnderflow : Arithmetic operation underflows.
ErrTagAlreadySet : Tag is already set.
ErrTagNotSet : Tag is not set.
ErrTimeout : Operation timed out.
ErrConnectionRefused : Connection was refused.
ErrConnectionReset : Connection was reset.
ErrUnstableCluster : Cluster is unstable.
ErrTryAgain : Please retry.
ErrConflict : There is another ongoing conflicting operation.
ErrNotConnected : Handle is not connected.
ErrResourceLocked : Resource is locked.
ErrSystemRemote : System error on remote node (server-side). Please check errno or GetLastError() for the actual error.
ErrSystemLocal : System error on local system (client-side). Please check errno or GetLastError() for the actual error.
ErrInternalRemote : Internal error on remote node (server-side).
ErrInternalLocal : Internal error on local system (client-side).
ErrNoMemoryRemote : No memory on remote node (server-side).
ErrNoMemoryLocal : No memory on local system (client-side).
ErrInvalidProtocol : Protocol is invalid.
ErrHostNotFound : Host was not found.
ErrBufferTooSmall : Buffer is too small.
ErrNotImplemented : Operation is not implemented.
ErrInvalidVersion : Version is invalid.
ErrInvalidArgument : Argument is invalid.
ErrInvalidHandle : Handle is invalid.
ErrReservedAlias : Alias/key is reserved.
ErrUnmatchedContent : Content did not match.
ErrInvalidIterator : Iterator is invalid.
ErrEntryTooLarge : Entry is too large.
ErrTransactionPartialFailure : Transaction failed partially.
ErrOperationDisabled : Operation has not been enabled in cluster configuration.
ErrOperationNotPermitted : Operation is not permitted.
ErrIteratorEnd : Iterator reached the end.
ErrInvalidReply : Cluster sent an invalid reply.
ErrNoSpaceLeft : No more space on disk.
ErrQuotaExceeded : Disk space quota has been reached.
ErrAliasTooLong : Alias is too long.
ErrClockSkew : Cluster nodes have important clock differences.
ErrAccessDenied : Access is denied.
ErrLoginFailed : Login failed.
ErrColumnNotFound : Column was not found.
ErrQueryTooComplex : Query is too complex.
ErrInvalidCryptoKey : Security key is invalid.
ErrInvalidQuery : Query is invalid.
ErrInvalidRegex : Regular expression is invalid.

func (ErrorType) Error Uses

func (e ErrorType) Error() string
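
Since ErrorType implements the error interface, errors returned by the API can be compared against the constants above. A sketch (assuming errors come back as ErrorType values and h is an open handle):

```go
_, err := h.Blob("missing_alias").Get()
if err == ErrAliasNotFound {
	fmt.Println("no such entry")
} else if err != nil {
	fmt.Println("unexpected error:", err.Error())
}
```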

type Find Uses

type Find struct {
    HandleType
    // contains filtered or unexported fields
}

Find : a builder type for executing a query.

Retrieves all entries’ aliases that match the specified query.
For the complete grammar, please refer to the documentation.
Queries are transactional.
The complexity of this function is dependent on the complexity of the query.

func (Find) Execute Uses

func (q Find) Execute() ([]string, error)

Execute : Execute the current query

func (Find) ExecuteString Uses

func (q Find) ExecuteString(query string) ([]string, error)

ExecuteString : Execute a string query immediately

func (*Find) NotTag Uses

func (q *Find) NotTag(t string) *Find

NotTag : Adds a tag to exclude from the current query results

func (*Find) Tag Uses

func (q *Find) Tag(t string) *Find

Tag : Adds a tag to include into the current query results

func (*Find) Type Uses

func (q *Find) Type(t string) *Find

Type : Restrict the query results to a particular type
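
The builder methods return *Find, so a query can be assembled by chaining. A sketch reusing the tags from the Alias example above (the "blob" type string is an assumption):

```go
aliases, err := h.Find().
	Tag("tag both blob"). // include entries with this tag
	NotTag("tag blob 2"). // ...but exclude entries with this one
	Type("blob").         // restrict results to blob entries
	Execute()
if err == nil {
	fmt.Println("matching aliases:", aliases)
}
```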

type HandleType Uses

type HandleType struct {
    // contains filtered or unexported fields
}

HandleType : An opaque handle to internal API-allocated structures needed for maintaining connection to a cluster.

Code:

var h HandleType
h.Open(ProtocolTCP)

func MustSetupHandle Uses

func MustSetupHandle(clusterURI string, timeout time.Duration) HandleType

MustSetupHandle : Set up a handle, panicking on error

The returned handle is already opened with the TCP protocol
and already connected to the given clusterURI.

func MustSetupSecuredHandle Uses

func MustSetupSecuredHandle(clusterURI, clusterPublicKeyFile, userCredentialFile string, timeout time.Duration, encryption Encryption) HandleType

MustSetupSecuredHandle : Set up a secured handle, panicking on error

The returned handle is already opened with the TCP protocol,
secured with the provided cluster public key and user credential files
(note: the filenames are needed, not the content of the files),
and already connected to the given clusterURI.

func NewHandle Uses

func NewHandle() (HandleType, error)

NewHandle : Create a new handle, returning an error if needed

The returned handle is already opened (but not connected) with the TCP protocol.

func SetupHandle Uses

func SetupHandle(clusterURI string, timeout time.Duration) (HandleType, error)

SetupHandle : Set up a handle, returning an error if needed

The returned handle is already opened with the TCP protocol
and already connected to the given clusterURI.
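
Unlike MustSetupHandle, this variant lets the caller handle failures. A minimal sketch, assuming the insecureURI test cluster used by the examples above:

```go
h, err := SetupHandle(insecureURI, 120*time.Second)
if err != nil {
	// e.g. the cluster is unreachable or the URI is malformed
	log.Fatal(err)
}
defer h.Close()
```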

func SetupSecuredHandle Uses

func SetupSecuredHandle(clusterURI, clusterPublicKeyFile, userCredentialFile string, timeout time.Duration, encryption Encryption) (HandleType, error)

SetupSecuredHandle : Set up a secured handle, returning an error if needed

The returned handle is already opened with the TCP protocol,
secured with the provided cluster public key and user credential files
(note: the filenames are needed, not the content of the files),
and already connected to the given clusterURI.

func (HandleType) APIBuild Uses

func (h HandleType) APIBuild() string

APIBuild : Returns a string describing the exact API build.

func (HandleType) APIVersion Uses

func (h HandleType) APIVersion() string

APIVersion : Returns a string describing the API version.

func (HandleType) AddClusterPublicKey Uses

func (h HandleType) AddClusterPublicKey(secret string) error

AddClusterPublicKey : add the cluster public key from a cluster config file.

func (HandleType) AddUserCredentials Uses

func (h HandleType) AddUserCredentials(name, secret string) error

AddUserCredentials : add user credentials from a user name and secret.

func (HandleType) Blob Uses

func (h HandleType) Blob(alias string) BlobEntry

Blob : Create a blob entry object

func (HandleType) Close Uses

func (h HandleType) Close() error

Close : Closes the handle previously opened.

This results in terminating all connections and releasing all internal buffers,
including buffers which may have been allocated as a result of batch or get operations.

func (HandleType) Cluster Uses

func (h HandleType) Cluster() *Cluster

Cluster : Create a cluster object to execute commands on a cluster

func (HandleType) Connect Uses

func (h HandleType) Connect(clusterURI string) error

Connect : connect a previously opened handle

Binds the client instance to a quasardb cluster and connects to at least one node within it.
Quasardb URIs are of the form qdb://<address>:<port>, where <address> is either an IPv4 address, an IPv6 address (surrounded by square brackets), or a domain name. It is recommended to specify multiple addresses in case the designated node is unavailable.

URI examples:
	qdb://myserver.org:2836 - Connects to myserver.org on the port 2836
	qdb://127.0.0.1:2836 - Connects to the local IPv4 loopback on the port 2836
	qdb://myserver1.org:2836,myserver2.org:2836 - Connects to myserver1.org or myserver2.org on the port 2836
	qdb://[::1]:2836 - Connects to the local IPv6 loopback on the port 2836

func (HandleType) DirectConnect Uses

func (h HandleType) DirectConnect(nodeURI string) (DirectHandleType, error)

DirectConnect opens a connection to a node for use with the direct API

The returned direct handle must be freed with Close(). Releasing the handle has no impact on non-direct connections or other direct handles.

func (HandleType) Find Uses

func (h HandleType) Find() *Find

Find : Create a query object to execute

func (HandleType) GetClientMaxInBufSize Uses

func (h HandleType) GetClientMaxInBufSize() (uint, error)

GetClientMaxInBufSize : Gets the maximum incoming buffer size for all network operations of the client.

func (HandleType) GetClusterMaxInBufSize Uses

func (h HandleType) GetClusterMaxInBufSize() (uint, error)

GetClusterMaxInBufSize : Gets the maximum incoming buffer size for all network operations, as configured on the cluster.

func (HandleType) GetTagged Uses

func (h HandleType) GetTagged(tag string) ([]string, error)

GetTagged : Retrieves all entries that have the specified tag.

Tagging an entry enables you to search for entries based on their tags. Tags scale across nodes.
The tag must exist.
The complexity of this function is constant.

func (HandleType) GetTags Uses

func (h HandleType) GetTags(entryAlias string) ([]string, error)

GetTags : Retrieves all the tags of an entry.

Tagging an entry enables you to search for entries based on their tags. Tags scale across nodes.
The entry must exist.

func (HandleType) Integer Uses

func (h HandleType) Integer(alias string) IntegerEntry

Integer : Create an integer entry object

func (HandleType) Node Uses

func (h HandleType) Node(uri string) *Node

Node : Create a node object

func (HandleType) NodeStatistics Uses

func (h HandleType) NodeStatistics(nodeID string) (Statistics, error)

NodeStatistics : Retrieve statistics for a specific node

Deprecated: Statistics will be fetched directly from the node using the new direct API

func (HandleType) Open Uses

func (h HandleType) Open(protocol Protocol) error

Open : Creates a handle.

No connection will be established.
Not needed if you created your handle with NewHandle.

func (HandleType) PrefixCount Uses

func (h HandleType) PrefixCount(prefix string) (uint64, error)

PrefixCount : Retrieves the count of all entries matching the provided prefix.

A prefix-based count counts all entries matching a provided prefix.

func (HandleType) PrefixGet Uses

func (h HandleType) PrefixGet(prefix string, limit int) ([]string, error)

PrefixGet : Retrieves the list of all entries matching the provided prefix.

A prefix-based search will enable you to find all entries matching a provided prefix.
This function returns the list of aliases. It’s up to the user to query the content associated with every entry, if needed.

func (HandleType) Query Uses

func (h HandleType) Query(query string) *Query

Query : Create a query object to execute

func (HandleType) Release Uses

func (h HandleType) Release(buffer unsafe.Pointer)

Release : Releases an API-allocated buffer.

Failure to properly call this function may result in excessive memory usage.
Most operations that return content (e.g. batch operations, qdb_blob_get, qdb_blob_get_and_update, qdb_blob_compare_and_swap...)
will allocate a buffer for the content and will not release it until you either call this function or close the handle.

The function will be able to release any kind of buffer allocated by a quasardb API call, whether it’s a single buffer, an array or an array of buffers.

func (HandleType) SetClientMaxInBufSize Uses

func (h HandleType) SetClientMaxInBufSize(bufSize uint) error

SetClientMaxInBufSize : Sets the maximum incoming buffer size for all network operations of the client.

Only modify this setting if you expect to receive very large answers from the server.

func (HandleType) SetCompression Uses

func (h HandleType) SetCompression(compressionLevel Compression) error

SetCompression : Set the compression level for all future messages emitted by the specified handle.

Regardless of this parameter, the API will be able to read whatever compression the server uses.

func (HandleType) SetEncryption Uses

func (h HandleType) SetEncryption(encryption Encryption) error

SetEncryption : Sets the encryption scheme used by the specified handle to communicate with the cluster.

func (HandleType) SetMaxCardinality Uses

func (h HandleType) SetMaxCardinality(maxCardinality uint) error

SetMaxCardinality : Sets the maximum allowed cardinality of a quasardb query.

The default value is 10,007. The minimum allowed value is 100.

func (HandleType) SetTimeout Uses

func (h HandleType) SetTimeout(timeout time.Duration) error

SetTimeout : Sets the timeout of all network operations.

The lower the timeout, the higher the risk of having timeout errors.
Keep in mind that the server-side timeout might be shorter.

func (HandleType) Statistics Uses

func (h HandleType) Statistics() (map[string]Statistics, error)

Statistics : Retrieve statistics for all nodes

func (HandleType) Timeseries Uses

func (h HandleType) Timeseries(alias string) TimeseriesEntry

Timeseries : Create a timeseries entry object

func (HandleType) TsBatch Uses

func (h HandleType) TsBatch(cols ...TsBatchColumnInfo) (*TsBatch, error)

TsBatch : create a batch object for the specified columns

type IntegerEntry Uses

type IntegerEntry struct {
    Entry
}

IntegerEntry : int data type

Code:

h := MustSetupHandle(insecureURI, 120*time.Second)
defer h.Close()

alias := "IntAlias"
integer := h.Integer(alias)

integer.Put(int64(3), NeverExpires())
defer integer.Remove()

obtainedContent, _ := integer.Get()
fmt.Println("Get content:", obtainedContent)

newContent := int64(87)
integer.Update(newContent, NeverExpires())

obtainedContent, _ = integer.Get()
fmt.Println("Get updated content:", obtainedContent)

integer.Add(3)

obtainedContent, _ = integer.Get()
fmt.Println("Get added content:", obtainedContent)

Output:

Get content: 3
Get updated content: 87
Get added content: 90

func (IntegerEntry) Add Uses

func (entry IntegerEntry) Add(added int64) (int64, error)

Add : Atomically increases or decreases a signed 64-bit integer.

The specified entry will be atomically increased (or decreased) according to the given addend value:
	To increase the value, specify a positive addend
	To decrease the value, specify a negative addend

The function returns the result of the operation.
The entry must already exist.

func (IntegerEntry) Get Uses

func (entry IntegerEntry) Get() (int64, error)

Get : Atomically retrieves the value of a signed 64-bit integer.

Atomically retrieves the value of an existing 64-bit integer.

func (IntegerEntry) Put Uses

func (entry IntegerEntry) Put(content int64, expiry time.Time) error

Put : Creates a new signed 64-bit integer.

Atomically creates an entry of the given alias and sets it to a cross-platform signed 64-bit integer.
If the entry already exists, the function returns an error.

You can specify an expiry time or use NeverExpires if you don’t want the entry to expire.
If you want to create or update an entry use Update.

The value will be correctly translated independently of the endianness of the client’s platform.

func (*IntegerEntry) Update Uses

func (entry *IntegerEntry) Update(newContent int64, expiry time.Time) error

Update : Creates or updates a signed 64-bit integer.

Atomically updates an entry of the given alias to the provided value.
If the entry doesn’t exist, it will be created.

You can specify an expiry time or use NeverExpires if you don’t want the entry to expire.

type Metadata Uses

type Metadata struct {
    Ref              RefID
    Type             EntryType
    Size             uint64
    ModificationTime time.Time
    ExpiryTime       time.Time
}

Metadata : A structure representing the metadata of an entry in the database.

type Node Uses

type Node struct {
    HandleType
    // contains filtered or unexported fields
}

Node : a structure giving access to various information and actions on a node

Code:

h := MustSetupHandle(insecureURI, 120*time.Second)
defer h.Close()

node := h.Node(insecureURI)

status, _ := node.Status()
fmt.Println("Status - Max sessions:", status.Network.Partitions.MaxSessions)

config, _ := node.Config()
fmt.Println("Config - Listen On:", config.Local.Network.ListenOn)

topology, _ := node.Topology()
fmt.Println("Topology - Successor is same as predecessor:", topology.Successor.Endpoint == topology.Predecessor.Endpoint)

Output:

Status - Max sessions: 64
Config - Listen On: 127.0.0.1:2836
Topology - Successor is same as predecessor: true

func (Node) Config Uses

func (n Node) Config() (NodeConfig, error)

Config :

Returns the configuration of a node.

The configuration is a JSON object, as described in the documentation.

func (Node) RawConfig Uses

func (n Node) RawConfig() ([]byte, error)

RawConfig :

Returns the configuration of a node.

The configuration is a JSON object as a byte array, as described in the documentation.

func (Node) RawStatus Uses

func (n Node) RawStatus() ([]byte, error)

RawStatus :

Returns the status of a node.

The status is a JSON object as a byte array and contains current information of the node state, as described in the documentation.

func (Node) RawTopology Uses

func (n Node) RawTopology() ([]byte, error)

RawTopology :

Returns the topology of a node.

The topology is a JSON object as a byte array containing the node address, and the addresses of its successor and predecessor.

func (Node) Status Uses

func (n Node) Status() (NodeStatus, error)

Status :

Returns the status of a node.

The status is a JSON object and contains current information of the node state, as described in the documentation.

func (Node) Topology Uses

func (n Node) Topology() (NodeTopology, error)

Topology :

Returns the topology of a node.

The topology is a JSON object containing the node address, and the addresses of its successor and predecessor.

type NodeConfig Uses

type NodeConfig struct {
    Local struct {
        Depot struct {
            RocksDB struct {
                SyncEveryWrite         bool   `json:"sync_every_write"`
                Root                   string `json:"root"`
                MaxBytes               int64  `json:"max_bytes"`
                StorageWarningLevel    int    `json:"storage_warning_level"`
                StorageWarningInterval int    `json:"storage_warning_interval"`
                DisableWal             bool   `json:"disable_wal"`
                DirectRead             bool   `json:"direct_read"`
                DirectWrite            bool   `json:"direct_write"`
                MaxTotalWalSize        int    `json:"max_total_wal_size"`
                MetadataMemBudget      int    `json:"metadata_mem_budget"`
                DataCache              int    `json:"data_cache"`
                Threads                int    `json:"threads"`
                HiThreads              int    `json:"hi_threads"`
                MaxOpenFiles           int    `json:"max_open_files"`
            }   `json:"rocksdb"`
            Helium struct {
                Url              string `json:"url"`
                Fanout           int    `json:"fanout"`
                GCFanout         int    `json:"gc_fanout"`
                ReadCache        int64  `json:"read_cache"`
                WriteCache       int64  `json:"write_cache"`
                AutoCommitPeriod int64  `json:"auto_commit_period"`
                AutoCleanPeriod  int64  `json:"auto_clean_period"`
            }   `json:"helium"`
            AsyncTS struct {
                Pipelines           int   `json:"pipelines"`
                PipelineBufferSize  int64 `json:"pipeline_buffer_size"`
                PipelineQueueLength int64 `json:"pipeline_queue_length"`
                FlushDeadline       int   `json:"flush_deadline"`
            }   `json:"async_ts"`
        }   `json:"depot"`
        User struct {
            LicenseFile string `json:"license_file"`
            LicenseKey  string `json:"license_key"`
            Daemon      bool   `json:"daemon"`
        }   `json:"user"`
        Limiter struct {
            MaxResidentEntries int   `json:"max_resident_entries"`
            MaxBytes           int64 `json:"max_bytes"`
            MaxTrimQueueLength int   `json:"max_trim_queue_length"`
        }   `json:"limiter"`
        Logger struct {
            LogLevel      int    `json:"log_level"`
            FlushInterval int    `json:"flush_interval"`
            LogDirectory  string `json:"log_directory"`
            LogToConsole  bool   `json:"log_to_console"`
            LogToSyslog   bool   `json:"log_to_syslog"`
        }   `json:"logger"`
        Network struct {
            ServerSessions  int    `json:"server_sessions"`
            PartitionsCount int    `json:"partitions_count"`
            IdleTimeout     int    `json:"idle_timeout"`
            ClientTimeout   int    `json:"client_timeout"`
            ListenOn        string `json:"listen_on"`
        }   `json:"network"`
        Chord struct {
            NodeID                   string   `json:"node_id"`
            NoStabilization          bool     `json:"no_stabilization"`
            BootstrappingPeers       []string `json:"bootstrapping_peers"`
            MinStabilizationInterval int      `json:"min_stabilization_interval"`
            MaxStabilizationInterval int      `json:"max_stabilization_interval"`
        }   `json:"chord"`
    }   `json:"local"`
    Global struct {
        Cluster struct {
            StorageEngine          string `json:"storage_engine"`
            History                bool   `json:"history"`
            ReplicationFactor      int    `json:"replication_factor"`
            MaxVersions            int    `json:"max_versions"`
            MaxTransactionDuration int    `json:"max_transaction_duration"`
        }   `json:"cluster"`
        Security struct {
            EnableStop         bool   `json:"enable_stop"`
            EnablePurgeAll     bool   `json:"enable_purge_all"`
            Enabled            bool   `json:"enabled"`
            EncryptTraffic     bool   `json:"encrypt_traffic"`
            ClusterPrivateFile string `json:"cluster_private_file"`
            UserList           string `json:"user_list"`
        }   `json:"security"`
    }   `json:"global"`
}

NodeConfig : a JSON representation of a node's configuration

type NodeLocation Uses

type NodeLocation struct {
    Address string
    Port    int16
}

NodeLocation : A structure representing the address of a quasardb node.

type NodeStatus Uses

type NodeStatus struct {
    Memory struct {
        VM  struct {
            Used  int64 `json:"used"`
            Total int64 `json:"total"`
        }   `json:"vm"`
        Physmem struct {
            Used  int64 `json:"used"`
            Total int64 `json:"total"`
        }   `json:"physmem"`
    }   `json:"memory"`
    CPUTimes struct {
        Idle   int64 `json:"idle"`
        System int   `json:"system"`
        User   int64 `json:"user"`
    }   `json:"cpu_times"`
    DiskUsage struct {
        Free  int64 `json:"free"`
        Total int64 `json:"total"`
    }   `json:"disk_usage"`
    Network struct {
        ListeningEndpoint string `json:"listening_endpoint"`
        Partitions        struct {
            Count             int `json:"count"`
            MaxSessions       int `json:"max_sessions"`
            AvailableSessions int `json:"available_sessions"`
        }   `json:"partitions"`
    }   `json:"network"`
    NodeID              string    `json:"node_id"`
    OperatingSystem     string    `json:"operating_system"`
    HardwareConcurrency int       `json:"hardware_concurrency"`
    Timestamp           time.Time `json:"timestamp"`
    Startup             time.Time `json:"startup"`
    EngineVersion       string    `json:"engine_version"`
    EngineBuildDate     time.Time `json:"engine_build_date"`
    Entries             struct {
        Resident struct {
            Count int `json:"count"`
            Size  int `json:"size"`
        }   `json:"resident"`
        Persisted struct {
            Count int `json:"count"`
            Size  int `json:"size"`
        }   `json:"persisted"`
    }   `json:"entries"`
    Operations struct {
        Get struct {
            Count     int `json:"count"`
            Successes int `json:"successes"`
            Failures  int `json:"failures"`
            Pageins   int `json:"pageins"`
            Evictions int `json:"evictions"`
            InBytes   int `json:"in_bytes"`
            OutBytes  int `json:"out_bytes"`
        }   `json:"get"`
        GetAndRemove struct {
            Count     int `json:"count"`
            Successes int `json:"successes"`
            Failures  int `json:"failures"`
            Pageins   int `json:"pageins"`
            Evictions int `json:"evictions"`
            InBytes   int `json:"in_bytes"`
            OutBytes  int `json:"out_bytes"`
        }   `json:"get_and_remove"`
        Put struct {
            Count     int `json:"count"`
            Successes int `json:"successes"`
            Failures  int `json:"failures"`
            Pageins   int `json:"pageins"`
            Evictions int `json:"evictions"`
            InBytes   int `json:"in_bytes"`
            OutBytes  int `json:"out_bytes"`
        }   `json:"put"`
        Update struct {
            Count     int `json:"count"`
            Successes int `json:"successes"`
            Failures  int `json:"failures"`
            Pageins   int `json:"pageins"`
            Evictions int `json:"evictions"`
            InBytes   int `json:"in_bytes"`
            OutBytes  int `json:"out_bytes"`
        }   `json:"update"`
        GetAndUpdate struct {
            Count     int `json:"count"`
            Successes int `json:"successes"`
            Failures  int `json:"failures"`
            Pageins   int `json:"pageins"`
            Evictions int `json:"evictions"`
            InBytes   int `json:"in_bytes"`
            OutBytes  int `json:"out_bytes"`
        }   `json:"get_and_update"`
        CompareAndSwap struct {
            Count     int `json:"count"`
            Successes int `json:"successes"`
            Failures  int `json:"failures"`
            Pageins   int `json:"pageins"`
            Evictions int `json:"evictions"`
            InBytes   int `json:"in_bytes"`
            OutBytes  int `json:"out_bytes"`
        }   `json:"compare_and_swap"`
        Remove struct {
            Count     int `json:"count"`
            Successes int `json:"successes"`
            Failures  int `json:"failures"`
            Pageins   int `json:"pageins"`
            Evictions int `json:"evictions"`
            InBytes   int `json:"in_bytes"`
            OutBytes  int `json:"out_bytes"`
        }   `json:"remove"`
        RemoveIf struct {
            Count     int `json:"count"`
            Successes int `json:"successes"`
            Failures  int `json:"failures"`
            Pageins   int `json:"pageins"`
            Evictions int `json:"evictions"`
            InBytes   int `json:"in_bytes"`
            OutBytes  int `json:"out_bytes"`
        }   `json:"remove_if"`
        PurgeAll struct {
            Count     int `json:"count"`
            Successes int `json:"successes"`
            Failures  int `json:"failures"`
            Pageins   int `json:"pageins"`
            Evictions int `json:"evictions"`
            InBytes   int `json:"in_bytes"`
            OutBytes  int `json:"out_bytes"`
        }   `json:"purge_all"`
    }   `json:"operations"`
    Overall struct {
        Count     int `json:"count"`
        Successes int `json:"successes"`
        Failures  int `json:"failures"`
        Pageins   int `json:"pageins"`
        Evictions int `json:"evictions"`
        InBytes   int `json:"in_bytes"`
        OutBytes  int `json:"out_bytes"`
    }   `json:"overall"`
}

NodeStatus : a JSON representation of a node's status

type NodeTopology Uses

type NodeTopology struct {
    Predecessor struct {
        Reference string `json:"reference"`
        Endpoint  string `json:"endpoint"`
    }   `json:"predecessor"`
    Center struct {
        Reference string `json:"reference"`
        Endpoint  string `json:"endpoint"`
    }   `json:"center"`
    Successor struct {
        Reference string `json:"reference"`
        Endpoint  string `json:"endpoint"`
    }   `json:"successor"`
}

type Protocol Uses

type Protocol C.qdb_protocol_t

Protocol : A network protocol.

const (
    ProtocolTCP Protocol = C.qdb_p_tcp
)

Protocol values:

ProtocolTCP : Uses TCP/IP to communicate with the cluster. This is currently the only supported network protocol.

type Query Uses

type Query struct {
    HandleType
    // contains filtered or unexported fields
}

Query : query object

Code:

h := MustSetupHandle(insecureURI, 120*time.Second)
defer h.Close()

var aliases []string
aliases = append(aliases, generateAlias(16))
aliases = append(aliases, generateAlias(16))

blob := h.Blob("alias_blob")
blob.Put([]byte("asd"), NeverExpires())
defer blob.Remove()
blob.AttachTag("all")
blob.AttachTag("first")

integer := h.Integer("alias_integer")
integer.Put(32, NeverExpires())
defer integer.Remove()
integer.AttachTag("all")
integer.AttachTag("second")

var obtainedAliases []string
obtainedAliases, _ = h.Find().Tag("all").Execute()
fmt.Println("Get all aliases:", obtainedAliases)

obtainedAliases, _ = h.Find().Tag("all").NotTag("second").Execute()
fmt.Println("Get only first alias:", obtainedAliases)

obtainedAliases, _ = h.Find().Tag("all").Type("int").Execute()
fmt.Println("Get only integer alias:", obtainedAliases)

obtainedAliases, _ = h.Find().Tag("adsda").Execute()
fmt.Println("Get no aliases:", obtainedAliases)

_, err := h.Find().NotTag("second").Execute()
fmt.Println("Error:", err)

_, err = h.Find().Type("int").Execute()
fmt.Println("Error:", err)

Output:

Get all aliases: [alias_blob alias_integer]
Get only first alias: [alias_blob]
Get only integer alias: [alias_integer]
Get no aliases: []
Error: query should have at least one valid tag
Error: query should have at least one valid tag

func (Query) Execute Uses

func (q Query) Execute() (*QueryResult, error)

Execute : execute a query

type QueryPoint Uses

type QueryPoint C.qdb_point_result_t

QueryPoint : a variant structure holding the result type as well as the result value

func (*QueryPoint) Get Uses

func (r *QueryPoint) Get() QueryPointResult

Get : retrieve the raw interface

func (*QueryPoint) GetBlob Uses

func (r *QueryPoint) GetBlob() ([]byte, error)

GetBlob : retrieve a blob from the interface

func (*QueryPoint) GetCount Uses

func (r *QueryPoint) GetCount() (int64, error)

GetCount : retrieve the count from the interface

func (*QueryPoint) GetDouble Uses

func (r *QueryPoint) GetDouble() (float64, error)

GetDouble : retrieve a double from the interface

func (*QueryPoint) GetInt64 Uses

func (r *QueryPoint) GetInt64() (int64, error)

GetInt64 : retrieve an int64 from the interface

func (*QueryPoint) GetTimestamp Uses

func (r *QueryPoint) GetTimestamp() (time.Time, error)

GetTimestamp : retrieve a timestamp from the interface

type QueryPointResult Uses

type QueryPointResult struct {
    // contains filtered or unexported fields
}

QueryPointResult : a query result point

func (QueryPointResult) Type Uses

func (r QueryPointResult) Type() QueryResultValueType

Type : gives the type of the query point result

func (QueryPointResult) Value Uses

func (r QueryPointResult) Value() interface{}

Value : gives the interface{} value of the query point result

type QueryResult Uses

type QueryResult struct {
    // contains filtered or unexported fields
}

QueryResult : a query result

func (QueryResult) Columns Uses

func (r QueryResult) Columns(row *QueryPoint) QueryRow

Columns : create columns from a row

func (QueryResult) ColumnsCount Uses

func (r QueryResult) ColumnsCount() int64

ColumnsCount : get the number of columns of each row

func (QueryResult) ColumnsNames Uses

func (r QueryResult) ColumnsNames() []string

ColumnsNames : get the names of the columns of each row

func (QueryResult) RowCount Uses

func (r QueryResult) RowCount() int64

RowCount : the number of returned rows

func (QueryResult) Rows Uses

func (r QueryResult) Rows() QueryRows

Rows : get rows of a query table result

func (QueryResult) ScannedPoints Uses

func (r QueryResult) ScannedPoints() int64

ScannedPoints : number of points scanned

The actual number of scanned points may be greater than the number of returned points

type QueryResultValueType Uses

type QueryResultValueType int64

QueryResultValueType : an enum of possible query point result types

const (
    QueryResultNone      QueryResultValueType = C.qdb_query_result_none
    QueryResultDouble    QueryResultValueType = C.qdb_query_result_double
    QueryResultBlob      QueryResultValueType = C.qdb_query_result_blob
    QueryResultInt64     QueryResultValueType = C.qdb_query_result_int64
    QueryResultTimestamp QueryResultValueType = C.qdb_query_result_timestamp
    QueryResultCount     QueryResultValueType = C.qdb_query_result_count
)

QueryResultNone : query result value none
QueryResultDouble : query result value double
QueryResultBlob : query result value blob
QueryResultInt64 : query result value int64
QueryResultTimestamp : query result value timestamp
QueryResultCount : query result value count

type QueryRow Uses

type QueryRow []QueryPoint

QueryRow : query result table row

type QueryRows Uses

type QueryRows []*QueryPoint

QueryRows : query result table rows

type RefID Uses

type RefID C.qdb_id_t

RefID : Unique identifier

type Statistics Uses

type Statistics struct {
    CPU struct {
        Idle   int64 `json:"idle"`
        System int64 `json:"system"`
        User   int64 `json:"user"`
    }   `json:"cpu"`
    Disk struct {
        BytesFree  int64  `json:"bytes_free"`
        BytesTotal int64  `json:"bytes_total"`
        Path       string `json:"path"`
    }   `json:"disk"`
    EngineBuildDate     string `json:"engine_build_date"`
    EngineVersion       string `json:"engine_version"`
    HardwareConcurrency int64  `json:"hardware_concurrency"`
    Memory              struct {
        BytesResident int64 `json:"bytes_resident_size"`
        ResidentCount int64 `json:"resident_count"`
        Physmem       struct {
            Used  int64 `json:"bytes_used"`
            Total int64 `json:"bytes_total"`
        }   `json:"physmem"`
        VM  struct {
            Used  int64 `json:"bytes_used"`
            Total int64 `json:"bytes_total"`
        }   `json:"vm"`
    }   `json:"memory"`
    Network struct {
        CurrentUsersCount int64 `json:"current_users_count"`
        Sessions          struct {
            AvailableCount   int64 `json:"available_count"`
            UnavailableCount int64 `json:"unavailable_count"`
            MaxCount         int64 `json:"max_count"`
        }   `json:"sessions"`
    }   `json:"network"`
    PartitionsCount int64  `json:"partitions_count"`
    NodeID          string `json:"node_id"`
    OperatingSystem string `json:"operating_system"`
    Persistence     struct {
        BytesCapacity int64 `json:"bytes_capacity"`
        BytesRead     int64 `json:"bytes_read"`
        BytesUtilized int64 `json:"bytes_utilized"`
        BytesWritten  int64 `json:"bytes_written"`
        EntriesCount  int64 `json:"entries_count"`
    }   `json:"persistence"`
    Requests struct {
        BytesOut       int64 `json:"bytes_out"`
        SuccessesCount int64 `json:"successes_count"`
        TotalCount     int64 `json:"total_count"`
    }   `json:"requests"`
    Startup int64 `json:"startup"`
}

Statistics : a JSON-adaptable structure containing node information

type TimeseriesEntry Uses

type TimeseriesEntry struct {
    Entry
}

TimeseriesEntry : timeseries entry data type

Code:

h := MustSetupHandle(insecureURI, 120*time.Second)
defer h.Close()
timeseries := h.Timeseries("alias")

fmt.Println("timeseries:", timeseries.Alias())

Output:

timeseries: alias

func (TimeseriesEntry) BlobColumn Uses

func (entry TimeseriesEntry) BlobColumn(columnName string) TsBlobColumn

BlobColumn : create a column object

Code:

h, timeseries := MustCreateTimeseriesWithData("ExampleTimeseriesEntry_BlobColumn")
defer h.Close()

column := timeseries.BlobColumn("serie_column_blob")
fmt.Println("column:", column.Name())

Output:

column: serie_column_blob

func (TimeseriesEntry) Bulk Uses

func (entry TimeseriesEntry) Bulk(cols ...TsColumnInfo) (*TsBulk, error)

Bulk : create a bulk object for the specified columns

If no columns are specified, the server-side registered columns are used

Code:

h, timeseries := MustCreateTimeseriesWithColumns("ExampleTimeseriesEntry_Bulk")
defer h.Close()

bulk, err := timeseries.Bulk(NewTsColumnInfo("serie_column_blob", TsColumnBlob), NewTsColumnInfo("serie_column_double", TsColumnDouble))
if err != nil {
    return // handle error
}
// Don't forget to release
defer bulk.Release()
fmt.Println("RowCount:", bulk.RowCount())

Output:

RowCount: 0

func (TimeseriesEntry) Columns Uses

func (entry TimeseriesEntry) Columns() ([]TsDoubleColumn, []TsBlobColumn, []TsInt64Column, []TsTimestampColumn, error)

Columns : return the current columns

Code:

h, timeseries := MustCreateTimeseriesWithColumns("ExampleTimeseriesEntry_Columns")
defer h.Close()

doubleColumns, blobColumns, int64Columns, timestampColumns, err := timeseries.Columns()
if err != nil {
    // handle error
}
for _, col := range doubleColumns {
    fmt.Println("column:", col.Name())
    // do something like Insert, GetRanges with a double column
}
for _, col := range blobColumns {
    fmt.Println("column:", col.Name())
    // do something like Insert, GetRanges with a blob column
}
for _, col := range int64Columns {
    fmt.Println("column:", col.Name())
    // do something like Insert, GetRanges with an int64 column
}
for _, col := range timestampColumns {
    fmt.Println("column:", col.Name())
    // do something like Insert, GetRanges with a timestamp column
}

Output:

column: serie_column_double
column: serie_column_blob
column: serie_column_int64
column: serie_column_timestamp

func (TimeseriesEntry) ColumnsInfo Uses

func (entry TimeseriesEntry) ColumnsInfo() ([]TsColumnInfo, error)

ColumnsInfo : return the current columns information

Code:

h, timeseries := MustCreateTimeseriesWithColumns("ExampleTimeseriesEntry_ColumnsInfo")
defer h.Close()

columns, err := timeseries.ColumnsInfo()
if err != nil {
    // handle error
}
for _, col := range columns {
    fmt.Println("column:", col.Name())
}

Output:

column: serie_column_blob
column: serie_column_double
column: serie_column_int64
column: serie_column_timestamp

func (TimeseriesEntry) Create Uses

func (entry TimeseriesEntry) Create(shardSize time.Duration, cols ...TsColumnInfo) error

Create : create a new timeseries

The first parameter is the shard size, i.e. the time span covered by each shard
Ex: shardSize := 24 * time.Hour

Code:

h, timeseries := MustCreateTimeseries("ExampleTimeseriesEntry_Create")
defer h.Close()

// duration, columns...
timeseries.Create(24*time.Hour, NewTsColumnInfo("serie_column_blob", TsColumnBlob), NewTsColumnInfo("serie_column_double", TsColumnDouble))

func (TimeseriesEntry) DoubleColumn Uses

func (entry TimeseriesEntry) DoubleColumn(columnName string) TsDoubleColumn

DoubleColumn : create a column object

Code:

h, timeseries := MustCreateTimeseriesWithColumns("ExampleTimeseriesEntry_DoubleColumn")
defer h.Close()

column := timeseries.DoubleColumn("serie_column_double")
fmt.Println("column:", column.Name())

Output:

column: serie_column_double

func (TimeseriesEntry) InsertColumns Uses

func (entry TimeseriesEntry) InsertColumns(cols ...TsColumnInfo) error

InsertColumns : insert columns into an existing timeseries

Code:

h, timeseries := MustCreateTimeseriesWithColumns("ExampleTimeseriesEntry_InsertColumns")
defer h.Close()

err := timeseries.InsertColumns(NewTsColumnInfo("serie_column_blob_2", TsColumnBlob), NewTsColumnInfo("serie_column_double_2", TsColumnDouble))
if err != nil {
    // handle error
}
columns, err := timeseries.ColumnsInfo()
if err != nil {
    // handle error
}
for _, col := range columns {
    fmt.Println("column:", col.Name())
}

Output:

column: serie_column_blob
column: serie_column_double
column: serie_column_int64
column: serie_column_timestamp
column: serie_column_blob_2
column: serie_column_double_2

func (TimeseriesEntry) Int64Column Uses

func (entry TimeseriesEntry) Int64Column(columnName string) TsInt64Column

Int64Column : create a column object

Code:

h, timeseries := MustCreateTimeseriesWithColumns("ExampleTimeseriesEntry_Int64Column")
defer h.Close()

column := timeseries.Int64Column("serie_column_int64")
fmt.Println("column:", column.Name())

Output:

column: serie_column_int64

func (TimeseriesEntry) TimestampColumn Uses

func (entry TimeseriesEntry) TimestampColumn(columnName string) TsTimestampColumn

TimestampColumn : create a timestamp column object from the column name

Code:

h, timeseries := MustCreateTimeseriesWithColumns("ExampleTimeseriesEntry_TimestampColumn")
defer h.Close()

column := timeseries.TimestampColumn("serie_column_timestamp")
fmt.Println("column:", column.Name())

Output:

column: serie_column_timestamp

type TsAggregationType Uses

type TsAggregationType C.qdb_ts_aggregation_type_t

TsAggregationType typedef of C.qdb_ts_aggregation_type

const (
    AggFirst              TsAggregationType = C.qdb_agg_first
    AggLast               TsAggregationType = C.qdb_agg_last
    AggMin                TsAggregationType = C.qdb_agg_min
    AggMax                TsAggregationType = C.qdb_agg_max
    AggArithmeticMean     TsAggregationType = C.qdb_agg_arithmetic_mean
    AggHarmonicMean       TsAggregationType = C.qdb_agg_harmonic_mean
    AggGeometricMean      TsAggregationType = C.qdb_agg_geometric_mean
    AggQuadraticMean      TsAggregationType = C.qdb_agg_quadratic_mean
    AggCount              TsAggregationType = C.qdb_agg_count
    AggSum                TsAggregationType = C.qdb_agg_sum
    AggSumOfSquares       TsAggregationType = C.qdb_agg_sum_of_squares
    AggSpread             TsAggregationType = C.qdb_agg_spread
    AggSampleVariance     TsAggregationType = C.qdb_agg_sample_variance
    AggSampleStddev       TsAggregationType = C.qdb_agg_sample_stddev
    AggPopulationVariance TsAggregationType = C.qdb_agg_population_variance
    AggPopulationStddev   TsAggregationType = C.qdb_agg_population_stddev
    AggAbsMin             TsAggregationType = C.qdb_agg_abs_min
    AggAbsMax             TsAggregationType = C.qdb_agg_abs_max
    AggProduct            TsAggregationType = C.qdb_agg_product
    AggSkewness           TsAggregationType = C.qdb_agg_skewness
    AggKurtosis           TsAggregationType = C.qdb_agg_kurtosis
)

Each aggregation type computes its value from the points between the begin and end timestamps of the aggregation range

type TsBatch Uses

type TsBatch struct {
    // contains filtered or unexported fields
}

TsBatch : A structure that allows appending data to a timeseries

func (*TsBatch) ExtraColumns Uses

func (t *TsBatch) ExtraColumns(cols ...TsBatchColumnInfo) error

ExtraColumns : Appends columns to the current batch table

func (*TsBatch) Push Uses

func (t *TsBatch) Push() error

Push : Push the inserted data

func (*TsBatch) Release Uses

func (t *TsBatch) Release()

Release : release the memory of the batch table

func (*TsBatch) RowSetBlob Uses

func (t *TsBatch) RowSetBlob(index int64, content []byte) error

RowSetBlob : Set blob at specified index in current row

func (*TsBatch) RowSetBlobNoCopy Uses

func (t *TsBatch) RowSetBlobNoCopy(index int64, content []byte) error

RowSetBlobNoCopy : Set blob at specified index in current row without copying it

func (*TsBatch) RowSetDouble Uses

func (t *TsBatch) RowSetDouble(index int64, value float64) error

RowSetDouble : Set double at specified index in current row

func (*TsBatch) RowSetInt64 Uses

func (t *TsBatch) RowSetInt64(index, value int64) error

RowSetInt64 : Set int64 at specified index in current row

func (*TsBatch) RowSetTimestamp Uses

func (t *TsBatch) RowSetTimestamp(index int64, value time.Time) error

RowSetTimestamp : Set timestamp at specified index in current row

func (*TsBatch) StartRow Uses

func (t *TsBatch) StartRow(timestamp time.Time) error

StartRow : Start a new row
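
The methods above compose a write loop: start a row, set each value by batch-column index, then push. A minimal sketch, assuming a connected handle h (obtained as in the other examples), an existing timeseries "example_ts" with a double column, and a TsBatch constructor on the handle (h.TsBatch, defined outside this excerpt):

```go
// Sketch of the TsBatch write path. Assumes a connected handle h and an
// existing timeseries "example_ts" with a double column "serie_column_double".
// h.TsBatch is an assumption: the constructor is defined outside this excerpt.
batch, err := h.TsBatch(NewTsBatchColumnInfo("example_ts", "serie_column_double", 2))
if err != nil {
    // handle error
}
// Don't forget to release
defer batch.Release()

if err := batch.StartRow(time.Now()); err != nil {
    // handle error
}
if err := batch.RowSetDouble(0, 3.2); err != nil { // index 0 = first batch column
    // handle error
}
if err := batch.Push(); err != nil {
    // handle error
}
```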

type TsBatchColumnInfo Uses

type TsBatchColumnInfo struct {
    Timeseries       string
    Column           string
    ElementCountHint int64
}

TsBatchColumnInfo : Represents one column in a timeseries. Preallocate the underlying structure with the ElementCountHint.

func NewTsBatchColumnInfo Uses

func NewTsBatchColumnInfo(timeseries string, column string, hint int64) TsBatchColumnInfo

NewTsBatchColumnInfo : Creates a new TsBatchColumnInfo

type TsBlobAggregation Uses

type TsBlobAggregation struct {
    // contains filtered or unexported fields
}

TsBlobAggregation : Aggregation of blob type

func NewBlobAggregation Uses

func NewBlobAggregation(kind TsAggregationType, rng TsRange) *TsBlobAggregation

NewBlobAggregation : Create new timeseries blob aggregation

func (TsBlobAggregation) Count Uses

func (t TsBlobAggregation) Count() int64

Count : returns the number of points aggregated into the result

func (TsBlobAggregation) Range Uses

func (t TsBlobAggregation) Range() TsRange

Range : returns the range of the aggregation

func (TsBlobAggregation) Result Uses

func (t TsBlobAggregation) Result() TsBlobPoint

Result : result of the aggregation

func (TsBlobAggregation) Type Uses

func (t TsBlobAggregation) Type() TsAggregationType

Type : returns the type of the aggregation

type TsBlobColumn Uses

type TsBlobColumn struct {
    // contains filtered or unexported fields
}

TsBlobColumn : a time series blob column

func (TsBlobColumn) Aggregate Uses

func (column TsBlobColumn) Aggregate(aggs ...*TsBlobAggregation) ([]TsBlobAggregation, error)

Aggregate : Aggregate a sub-part of the time series.

It is an error to call this function on a non-existing timeseries.

Code:

h, timeseries := MustCreateTimeseriesWithData("ExampleTsBlobColumn_Aggregate")
defer h.Close()

column := timeseries.BlobColumn("serie_column_blob")

r := NewRange(time.Unix(0, 0), time.Unix(40, 5))
aggFirst := NewBlobAggregation(AggFirst, r)
results, err := column.Aggregate(aggFirst)
if err != nil {
    // handle error
}
fmt.Println("first:", string(results[0].Result().Content()))

Output:

first: content_0

func (TsBlobColumn) EraseRanges Uses

func (column TsBlobColumn) EraseRanges(rgs ...TsRange) (uint64, error)

EraseRanges : erase all points in the specified ranges

Code:

h, timeseries := MustCreateTimeseriesWithData("ExampleTsBlobColumn_EraseRanges")
defer h.Close()

column := timeseries.BlobColumn("serie_column_blob")

r := NewRange(time.Unix(0, 0), time.Unix(40, 5))
numberOfErasedValues, err := column.EraseRanges(r)
if err != nil {
    // handle error
}
fmt.Println("Number of erased values:", numberOfErasedValues)

Output:

Number of erased values: 4

func (TsBlobColumn) GetRanges Uses

func (column TsBlobColumn) GetRanges(rgs ...TsRange) ([]TsBlobPoint, error)

GetRanges : Retrieves blobs in the specified range of the time series column.

It is an error to call this function on a non-existing timeseries.

Code:

h, timeseries := MustCreateTimeseriesWithData("ExampleTsBlobColumn_GetRanges")
defer h.Close()

column := timeseries.BlobColumn("serie_column_blob")

r := NewRange(time.Unix(0, 0), time.Unix(40, 5))
blobPoints, err := column.GetRanges(r)
if err != nil {
    // handle error
}
for _, point := range blobPoints {
    fmt.Println("timestamp:", point.Timestamp().UTC(), "- value:", string(point.Content()))
}

Output:

timestamp: 1970-01-01 00:00:10 +0000 UTC - value: content_0
timestamp: 1970-01-01 00:00:20 +0000 UTC - value: content_1
timestamp: 1970-01-01 00:00:30 +0000 UTC - value: content_2
timestamp: 1970-01-01 00:00:40 +0000 UTC - value: content_3

func (TsBlobColumn) Insert Uses

func (column TsBlobColumn) Insert(points ...TsBlobPoint) error

Insert blob points into a timeseries

Code:

h, timeseries := MustCreateTimeseriesWithColumns("ExampleTsBlobColumn_Insert")
defer h.Close()

column := timeseries.BlobColumn("serie_column_blob")

// Insert only one point:
column.Insert(NewTsBlobPoint(time.Now(), []byte("content")))

// Insert multiple points
blobPoints := make([]TsBlobPoint, 2)
blobPoints[0] = NewTsBlobPoint(time.Now(), []byte("content"))
blobPoints[1] = NewTsBlobPoint(time.Now(), []byte("content_2"))

err := column.Insert(blobPoints...)
if err != nil {
    // handle error
}

type TsBlobPoint Uses

type TsBlobPoint struct {
    // contains filtered or unexported fields
}

TsBlobPoint : timestamped data

func NewTsBlobPoint Uses

func NewTsBlobPoint(timestamp time.Time, value []byte) TsBlobPoint

NewTsBlobPoint : Create new timeseries blob point

func (TsBlobPoint) Content Uses

func (t TsBlobPoint) Content() []byte

Content : return data point content

func (TsBlobPoint) Timestamp Uses

func (t TsBlobPoint) Timestamp() time.Time

Timestamp : return data point timestamp

type TsBulk Uses

type TsBulk struct {
    // contains filtered or unexported fields
}

TsBulk : A structure that allows appending data to a timeseries

func (*TsBulk) Append Uses

func (t *TsBulk) Append() error

Append : Adds the current row to the list of rows to be pushed

func (*TsBulk) Blob Uses

func (t *TsBulk) Blob(content []byte) *TsBulk

Blob : adds a blob to the current row transaction

func (*TsBulk) Double Uses

func (t *TsBulk) Double(value float64) *TsBulk

Double : adds a double to the current row transaction

func (*TsBulk) GetBlob Uses

func (t *TsBulk) GetBlob() ([]byte, error)

GetBlob : gets a blob from the current row

func (*TsBulk) GetDouble Uses

func (t *TsBulk) GetDouble() (float64, error)

GetDouble : gets a double from the current row

func (*TsBulk) GetInt64 Uses

func (t *TsBulk) GetInt64() (int64, error)

GetInt64 : gets an int64 from the current row

func (*TsBulk) GetRanges Uses

func (t *TsBulk) GetRanges(rgs ...TsRange) error

GetRanges : creates a bulk query over the specified ranges

func (*TsBulk) GetTimestamp Uses

func (t *TsBulk) GetTimestamp() (time.Time, error)

GetTimestamp : gets a timestamp from the current row

func (*TsBulk) Ignore Uses

func (t *TsBulk) Ignore() *TsBulk

Ignore : ignores this column in the current row transaction

func (*TsBulk) Int64 Uses

func (t *TsBulk) Int64(value int64) *TsBulk

Int64 : adds an int64 to the current row transaction

func (*TsBulk) NextRow Uses

func (t *TsBulk) NextRow() (time.Time, error)

NextRow : advance to the next row, or to the first row if iteration has not started
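
GetRanges, NextRow and the Get* accessors combine into a read loop. A minimal sketch, assuming the same two-column timeseries used in the other examples, already containing data; the loop treats a NextRow error as an assumed end-of-rows signal:

```go
// Sketch of the TsBulk read path. Assumes `timeseries` has the blob and
// double columns used elsewhere in these examples and already contains data.
bulk, err := timeseries.Bulk(NewTsColumnInfo("serie_column_blob", TsColumnBlob), NewTsColumnInfo("serie_column_double", TsColumnDouble))
if err != nil {
    // handle error
}
defer bulk.Release()

if err := bulk.GetRanges(NewRange(time.Unix(0, 0), time.Unix(40, 5))); err != nil {
    // handle error
}
for {
    timestamp, err := bulk.NextRow()
    if err != nil {
        break // assumed to signal that no rows remain
    }
    content, _ := bulk.GetBlob()  // column 0
    value, _ := bulk.GetDouble()  // column 1
    fmt.Println(timestamp.UTC(), string(content), value)
}
```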

func (*TsBulk) Push Uses

func (t *TsBulk) Push() (int, error)

Push : push the list of appended rows and return the number of rows added

Code:

h, timeseries := MustCreateTimeseriesWithColumns("ExampleTsBulk_Push")
defer h.Close()

bulk, err := timeseries.Bulk(NewTsColumnInfo("serie_column_blob", TsColumnBlob), NewTsColumnInfo("serie_column_double", TsColumnDouble))
if err != nil {
    // handle error
    return
}
// Don't forget to release
defer bulk.Release()

bulk.Row(time.Now()).Blob([]byte("content")).Double(3.2).Append()
bulk.Row(time.Now()).Blob([]byte("content 2")).Double(4.8).Append()
rowCount, err := bulk.Push()
if err != nil {
    // handle error
}
fmt.Println("RowCount:", rowCount)

Output:

RowCount: 2

func (*TsBulk) Release Uses

func (t *TsBulk) Release()

Release : release the memory of the local table

func (*TsBulk) Row Uses

func (t *TsBulk) Row(timestamp time.Time) *TsBulk

Row : initialize a row append

func (TsBulk) RowCount Uses

func (t TsBulk) RowCount() int

RowCount : returns the number of rows to be appended

func (*TsBulk) Timestamp Uses

func (t *TsBulk) Timestamp(value time.Time) *TsBulk

Timestamp : adds a timestamp to the current row transaction

type TsColumnInfo Uses

type TsColumnInfo struct {
    // contains filtered or unexported fields
}

TsColumnInfo : column information in timeseries

func NewTsColumnInfo Uses

func NewTsColumnInfo(columnName string, columnType TsColumnType) TsColumnInfo

NewTsColumnInfo : create a column info structure

func (TsColumnInfo) Name Uses

func (t TsColumnInfo) Name() string

Name : return column name

func (TsColumnInfo) Type Uses

func (t TsColumnInfo) Type() TsColumnType

Type : return column type

type TsColumnType Uses

type TsColumnType C.qdb_ts_column_type_t

TsColumnType : Timeseries column types

const (
    TsColumnUninitialized TsColumnType = C.qdb_ts_column_uninitialized
    TsColumnDouble        TsColumnType = C.qdb_ts_column_double
    TsColumnBlob          TsColumnType = C.qdb_ts_column_blob
    TsColumnInt64         TsColumnType = C.qdb_ts_column_int64
    TsColumnTimestamp     TsColumnType = C.qdb_ts_column_timestamp
)

Values

TsColumnDouble : column is a double point
TsColumnBlob : column is a blob point
TsColumnInt64 : column is an int64 point
TsColumnTimestamp : column is a timestamp point
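
All four column types can be declared when the timeseries is created. A minimal sketch, assuming a connected handle h; the h.Timeseries accessor is defined outside this excerpt:

```go
// Sketch: declaring one column of each supported type at creation time.
// Assumes a connected handle h; h.Timeseries is defined outside this excerpt.
timeseries := h.Timeseries("all_types_example")
err := timeseries.Create(24*time.Hour, // shard size
    NewTsColumnInfo("col_double", TsColumnDouble),
    NewTsColumnInfo("col_blob", TsColumnBlob),
    NewTsColumnInfo("col_int64", TsColumnInt64),
    NewTsColumnInfo("col_timestamp", TsColumnTimestamp),
)
if err != nil {
    // handle error
}
```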

type TsDoubleAggregation Uses

type TsDoubleAggregation struct {
    // contains filtered or unexported fields
}

TsDoubleAggregation : Aggregation of double type

func NewDoubleAggregation Uses

func NewDoubleAggregation(kind TsAggregationType, rng TsRange) *TsDoubleAggregation

NewDoubleAggregation : Create new timeseries double aggregation

func (TsDoubleAggregation) Count Uses

func (t TsDoubleAggregation) Count() int64

Count : returns the number of points aggregated into the result

func (TsDoubleAggregation) Range Uses

func (t TsDoubleAggregation) Range() TsRange

Range : returns the range of the aggregation

func (TsDoubleAggregation) Result Uses

func (t TsDoubleAggregation) Result() TsDoublePoint

Result : result of the aggregation

func (TsDoubleAggregation) Type Uses

func (t TsDoubleAggregation) Type() TsAggregationType

Type : returns the type of the aggregation

type TsDoubleColumn Uses

type TsDoubleColumn struct {
    // contains filtered or unexported fields
}

TsDoubleColumn : a time series double column

func (TsDoubleColumn) Aggregate Uses

func (column TsDoubleColumn) Aggregate(aggs ...*TsDoubleAggregation) ([]TsDoubleAggregation, error)

Aggregate : Aggregate a sub-part of a timeseries using the specified aggregations.

It is an error to call this function on a non-existing timeseries.

Code:

h, timeseries := MustCreateTimeseriesWithData("ExampleTsDoubleColumn_Aggregate")
defer h.Close()

column := timeseries.DoubleColumn("serie_column_double")

r := NewRange(time.Unix(0, 0), time.Unix(40, 5))
aggFirst := NewDoubleAggregation(AggFirst, r)
aggMean := NewDoubleAggregation(AggArithmeticMean, r)
results, err := column.Aggregate(aggFirst, aggMean)
if err != nil {
    // handle error
}
fmt.Println("first:", results[0].Result().Content())
fmt.Println("mean:", results[1].Result().Content())
fmt.Println("number of elements reviewed for mean:", results[1].Count())

Output:

first: 0
mean: 1.5
number of elements reviewed for mean: 4

func (TsDoubleColumn) EraseRanges Uses

func (column TsDoubleColumn) EraseRanges(rgs ...TsRange) (uint64, error)

EraseRanges : erase all points in the specified ranges

Code:

h, timeseries := MustCreateTimeseriesWithData("ExampleTsDoubleColumn_EraseRanges")
defer h.Close()

column := timeseries.DoubleColumn("serie_column_double")

r := NewRange(time.Unix(0, 0), time.Unix(40, 5))
numberOfErasedValues, err := column.EraseRanges(r)
if err != nil {
    // handle error
}
fmt.Println("Number of erased values:", numberOfErasedValues)

Output:

Number of erased values: 4

func (TsDoubleColumn) GetRanges Uses

func (column TsDoubleColumn) GetRanges(rgs ...TsRange) ([]TsDoublePoint, error)

GetRanges : Retrieves doubles in the specified range of the time series column.

It is an error to call this function on a non-existing timeseries.

Code:

h, timeseries := MustCreateTimeseriesWithData("ExampleTsDoubleColumn_GetRanges")
defer h.Close()

column := timeseries.DoubleColumn("serie_column_double")

r := NewRange(time.Unix(0, 0), time.Unix(40, 5))
doublePoints, err := column.GetRanges(r)
if err != nil {
    // handle error
}
for _, point := range doublePoints {
    fmt.Println("timestamp:", point.Timestamp().UTC(), "- value:", point.Content())
}

Output:

timestamp: 1970-01-01 00:00:10 +0000 UTC - value: 0
timestamp: 1970-01-01 00:00:20 +0000 UTC - value: 1
timestamp: 1970-01-01 00:00:30 +0000 UTC - value: 2
timestamp: 1970-01-01 00:00:40 +0000 UTC - value: 3

func (TsDoubleColumn) Insert Uses

func (column TsDoubleColumn) Insert(points ...TsDoublePoint) error

Insert double points into a timeseries

Code:

h, timeseries := MustCreateTimeseriesWithColumns("ExampleTsDoubleColumn_Insert")
defer h.Close()

column := timeseries.DoubleColumn("serie_column_double")

// Insert only one point:
column.Insert(NewTsDoublePoint(time.Now(), 3.2))

// Insert multiple points
doublePoints := make([]TsDoublePoint, 2)
doublePoints[0] = NewTsDoublePoint(time.Now(), 3.2)
doublePoints[1] = NewTsDoublePoint(time.Now(), 4.8)

err := column.Insert(doublePoints...)
if err != nil {
    // handle error
}

type TsDoublePoint Uses

type TsDoublePoint struct {
    // contains filtered or unexported fields
}

TsDoublePoint : timestamped double data point

func NewTsDoublePoint Uses

func NewTsDoublePoint(timestamp time.Time, value float64) TsDoublePoint

NewTsDoublePoint : Create new timeseries double point

func (TsDoublePoint) Content Uses

func (t TsDoublePoint) Content() float64

Content : return data point content

func (TsDoublePoint) Timestamp Uses

func (t TsDoublePoint) Timestamp() time.Time

Timestamp : return data point timestamp

type TsInt64Aggregation Uses

type TsInt64Aggregation struct {
    // contains filtered or unexported fields
}

TsInt64Aggregation : Aggregation of int64 type

func NewInt64Aggregation Uses

func NewInt64Aggregation(kind TsAggregationType, rng TsRange) *TsInt64Aggregation

NewInt64Aggregation : Create new timeseries int64 aggregation

func (TsInt64Aggregation) Count Uses

func (t TsInt64Aggregation) Count() int64

Count : returns the number of points aggregated into the result

func (TsInt64Aggregation) Range Uses

func (t TsInt64Aggregation) Range() TsRange

Range : returns the range of the aggregation

func (TsInt64Aggregation) Result Uses

func (t TsInt64Aggregation) Result() TsInt64Point

Result : result of the aggregation

func (TsInt64Aggregation) Type Uses

func (t TsInt64Aggregation) Type() TsAggregationType

Type : returns the type of the aggregation

type TsInt64Column Uses

type TsInt64Column struct {
    // contains filtered or unexported fields
}

TsInt64Column : a time series int64 column

func (TsInt64Column) EraseRanges Uses

func (column TsInt64Column) EraseRanges(rgs ...TsRange) (uint64, error)

EraseRanges : erase all points in the specified ranges

Code:

h, timeseries := MustCreateTimeseriesWithData("ExampleTsInt64Column_EraseRanges")
defer h.Close()

column := timeseries.Int64Column("serie_column_int64")

r := NewRange(time.Unix(0, 0), time.Unix(40, 5))
numberOfErasedValues, err := column.EraseRanges(r)
if err != nil {
    // handle error
}
fmt.Println("Number of erased values:", numberOfErasedValues)

Output:

Number of erased values: 4

func (TsInt64Column) GetRanges Uses

func (column TsInt64Column) GetRanges(rgs ...TsRange) ([]TsInt64Point, error)

GetRanges : Retrieves int64s in the specified range of the time series column.

It is an error to call this function on a non-existing timeseries.

Code:

h, timeseries := MustCreateTimeseriesWithData("ExampleTsInt64Column_GetRanges")
defer h.Close()

column := timeseries.Int64Column("serie_column_int64")

r := NewRange(time.Unix(0, 0), time.Unix(40, 5))
int64Points, err := column.GetRanges(r)
if err != nil {
    // handle error
}
for _, point := range int64Points {
    fmt.Println("timestamp:", point.Timestamp().UTC(), "- value:", point.Content())
}

Output:

timestamp: 1970-01-01 00:00:10 +0000 UTC - value: 0
timestamp: 1970-01-01 00:00:20 +0000 UTC - value: 1
timestamp: 1970-01-01 00:00:30 +0000 UTC - value: 2
timestamp: 1970-01-01 00:00:40 +0000 UTC - value: 3

func (TsInt64Column) Insert Uses

func (column TsInt64Column) Insert(points ...TsInt64Point) error

Insert int64 points into a timeseries

Code:

h, timeseries := MustCreateTimeseriesWithColumns("ExampleTsInt64Column_Insert")
defer h.Close()

column := timeseries.Int64Column("serie_column_int64")

// Insert only one point:
column.Insert(NewTsInt64Point(time.Now(), 3))

// Insert multiple points
int64Points := make([]TsInt64Point, 2)
int64Points[0] = NewTsInt64Point(time.Now(), 3)
int64Points[1] = NewTsInt64Point(time.Now(), 4)

err := column.Insert(int64Points...)
if err != nil {
    // handle error
}

type TsInt64Point Uses

type TsInt64Point struct {
    // contains filtered or unexported fields
}

TsInt64Point : timestamped int64 data point

func NewTsInt64Point Uses

func NewTsInt64Point(timestamp time.Time, value int64) TsInt64Point

NewTsInt64Point : Create new timeseries int64 point

func (TsInt64Point) Content Uses

func (t TsInt64Point) Content() int64

Content : return data point content

func (TsInt64Point) Timestamp Uses

func (t TsInt64Point) Timestamp() time.Time

Timestamp : return data point timestamp

type TsRange Uses

type TsRange struct {
    // contains filtered or unexported fields
}

TsRange : timeseries range with begin and end timestamp

func NewRange Uses

func NewRange(begin, end time.Time) TsRange

NewRange : creates a time range

func (TsRange) Begin Uses

func (t TsRange) Begin() time.Time

Begin : returns the start of the time range

func (TsRange) End Uses

func (t TsRange) End() time.Time

End : returns the end of the time range

type TsTimestampAggregation Uses

type TsTimestampAggregation struct {
    // contains filtered or unexported fields
}

TsTimestampAggregation : Aggregation of timestamp type

func NewTimestampAggregation Uses

func NewTimestampAggregation(kind TsAggregationType, rng TsRange) *TsTimestampAggregation

NewTimestampAggregation : Create new timeseries timestamp aggregation

func (TsTimestampAggregation) Count Uses

func (t TsTimestampAggregation) Count() int64

Count : returns the number of points aggregated into the result

func (TsTimestampAggregation) Range Uses

func (t TsTimestampAggregation) Range() TsRange

Range : returns the range of the aggregation

func (TsTimestampAggregation) Result Uses

func (t TsTimestampAggregation) Result() TsTimestampPoint

Result : result of the aggregation

func (TsTimestampAggregation) Type Uses

func (t TsTimestampAggregation) Type() TsAggregationType

Type : returns the type of the aggregation

type TsTimestampColumn Uses

type TsTimestampColumn struct {
    // contains filtered or unexported fields
}

TsTimestampColumn : a time series timestamp column

func (TsTimestampColumn) EraseRanges Uses

func (column TsTimestampColumn) EraseRanges(rgs ...TsRange) (uint64, error)

EraseRanges : erase all points in the specified ranges

Code:

h, timeseries := MustCreateTimeseriesWithData("ExampleTsTimestampColumn_EraseRanges")
defer h.Close()

column := timeseries.TimestampColumn("serie_column_timestamp")

r := NewRange(time.Unix(0, 0), time.Unix(40, 5))
numberOfErasedValues, err := column.EraseRanges(r)
if err != nil {
    // handle error
}
fmt.Println("Number of erased values:", numberOfErasedValues)

Output:

Number of erased values: 4

func (TsTimestampColumn) GetRanges Uses

func (column TsTimestampColumn) GetRanges(rgs ...TsRange) ([]TsTimestampPoint, error)

GetRanges : Retrieves timestamps in the specified range of the time series column.

It is an error to call this function on a non-existing timeseries.

Code:

h, timeseries := MustCreateTimeseriesWithData("ExampleTsTimestampColumn_GetRanges")
defer h.Close()

column := timeseries.TimestampColumn("serie_column_timestamp")

r := NewRange(time.Unix(0, 0), time.Unix(40, 5))
timestampPoints, err := column.GetRanges(r)
if err != nil {
    // handle error
}
for _, point := range timestampPoints {
    fmt.Println("timestamp:", point.Timestamp().UTC(), "- value:", point.Content().UTC())
}

Output:

timestamp: 1970-01-01 00:00:10 +0000 UTC - value: 1970-01-01 00:00:10 +0000 UTC
timestamp: 1970-01-01 00:00:20 +0000 UTC - value: 1970-01-01 00:00:20 +0000 UTC
timestamp: 1970-01-01 00:00:30 +0000 UTC - value: 1970-01-01 00:00:30 +0000 UTC
timestamp: 1970-01-01 00:00:40 +0000 UTC - value: 1970-01-01 00:00:40 +0000 UTC

func (TsTimestampColumn) Insert Uses

func (column TsTimestampColumn) Insert(points ...TsTimestampPoint) error

Insert timestamp points into a timeseries

Code:

h, timeseries := MustCreateTimeseriesWithColumns("ExampleTsTimestampColumn_Insert")
defer h.Close()

column := timeseries.TimestampColumn("serie_column_timestamp")

// Insert only one point:
column.Insert(NewTsTimestampPoint(time.Now(), time.Now()))

// Insert multiple points
timestampPoints := make([]TsTimestampPoint, 2)
timestampPoints[0] = NewTsTimestampPoint(time.Now(), time.Now())
timestampPoints[1] = NewTsTimestampPoint(time.Now(), time.Now())

err := column.Insert(timestampPoints...)
if err != nil {
    // handle error
}

type TsTimestampPoint Uses

type TsTimestampPoint struct {
    // contains filtered or unexported fields
}

TsTimestampPoint : timestamped timestamp data point

func NewTsTimestampPoint Uses

func NewTsTimestampPoint(timestamp time.Time, value time.Time) TsTimestampPoint

NewTsTimestampPoint : Create new timeseries timestamp point

func (TsTimestampPoint) Content Uses

func (t TsTimestampPoint) Content() time.Time

Content : return data point content

func (TsTimestampPoint) Timestamp Uses

func (t TsTimestampPoint) Timestamp() time.Time

Timestamp : return data point timestamp

Package qdb imports 11 packages. Updated 2019-10-18.