radix


README

Radix


Radix is a full-featured Redis client for Go. See the GoDoc for documentation and general usage examples.

This is the third revision of this project; the previous one has been deprecated but can still be found here.

Features

  • Standard print-like API which supports all current and future redis commands.

  • Support for using an io.Reader as a command argument and writing responses to an io.Writer, as well as marshaling/unmarshaling command arguments from structs.

  • Connection pooling, which takes advantage of implicit pipelining to reduce system calls.

  • Helpers for EVAL, SCAN, and manual pipelining.

  • Support for pubsub, as well as persistent pubsub wherein if a connection is lost a new one transparently replaces it.

  • Full support for sentinel and cluster.

  • Nearly all important types are interfaces, allowing for custom implementations of nearly anything.

Installation and Usage

Radix always aims to support the two most recent versions of Go, and will likely also work with versions prior to those two.

Module-aware mode:

go get github.com/mediocregopher/radix/v3
// import github.com/mediocregopher/radix/v3

Legacy GOPATH mode:

go get github.com/mediocregopher/radix
// import github.com/mediocregopher/radix

Testing

# requires a redis server running on 127.0.0.1:6379
go test github.com/mediocregopher/radix/v3

Benchmarks

Thanks to a huge amount of work put in by @nussjustin, and inspiration from the redispipe project and @funny-falcon, radix/v3 is significantly faster than most redis drivers, including redigo, for normal parallel workloads, and is pretty comparable for serial workloads.

Benchmarks can be run from the bench folder. The following results were obtained by running the benchmarks with -cpu set to 32 and 64, on a 32 core machine, with the redis server on a separate machine. See this thread for more details.

Some of radix's results are not included below because they use a non-default configuration.

# go get rsc.io/benchstat
# cd bench
# go test -v -run=XXX -bench=ParallelGetSet -cpu 32 -cpu 64 -benchmem . >/tmp/radix.stat
# benchstat /tmp/radix.stat
name                                   time/op
ParallelGetSet/radix/default-32        2.15µs ± 0% <--- The good stuff
ParallelGetSet/radix/default-64        2.05µs ± 0% <--- The better stuff
ParallelGetSet/redigo-32               27.9µs ± 0%
ParallelGetSet/redigo-64               28.5µs ± 0%
ParallelGetSet/redispipe-32            2.02µs ± 0%
ParallelGetSet/redispipe-64            1.71µs ± 0%

name                                   alloc/op
ParallelGetSet/radix/default-32         72.0B ± 0%
ParallelGetSet/radix/default-64         84.0B ± 0%
ParallelGetSet/redigo-32                 119B ± 0%
ParallelGetSet/redigo-64                 120B ± 0%
ParallelGetSet/redispipe-32              168B ± 0%
ParallelGetSet/redispipe-64              172B ± 0%

name                                   allocs/op
ParallelGetSet/radix/default-32          4.00 ± 0%
ParallelGetSet/radix/default-64          4.00 ± 0%
ParallelGetSet/redigo-32                 6.00 ± 0%
ParallelGetSet/redigo-64                 6.00 ± 0%
ParallelGetSet/redispipe-32              8.00 ± 0%
ParallelGetSet/redispipe-64              8.00 ± 0%

Unless otherwise noted, the source files are distributed under the MIT License found in the LICENSE.txt file.

Documentation

Overview

Package radix implements all functionality needed to work with redis and all things related to it, including redis cluster, pubsub, sentinel, scanning, lua scripting, and more.

Creating a client

For a single node redis instance use NewPool to create a connection pool. The connection pool is thread-safe and will automatically create, reuse, and recreate connections as needed:

pool, err := radix.NewPool("tcp", "127.0.0.1:6379", 10)
if err != nil {
	// handle error
}

If you're using sentinel or cluster you should use NewSentinel or NewCluster (respectively) to create your client instead.

Commands

Any redis command can be performed by passing a Cmd into a Client's Do method. Each Cmd should only be used once. The return from the Cmd can be captured into any appropriate go primitive type, or a slice, map, or struct, if the command returns an array.

err := client.Do(radix.Cmd(nil, "SET", "foo", "someval"))

var fooVal string
err := client.Do(radix.Cmd(&fooVal, "GET", "foo"))

var fooValB []byte
err := client.Do(radix.Cmd(&fooValB, "GET", "foo"))

var barI int
err := client.Do(radix.Cmd(&barI, "INCR", "bar"))

var bazEls []string
err := client.Do(radix.Cmd(&bazEls, "LRANGE", "baz", "0", "-1"))

var buzMap map[string]string
err := client.Do(radix.Cmd(&buzMap, "HGETALL", "buz"))

FlatCmd can also be used if you wish to use non-string arguments like integers, slices, maps, or structs, and have them automatically be flattened into a single string slice.
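For example, a brief sketch (assuming the pool created above as the client) which stores a map and an integer without converting them to strings by hand:

err := client.Do(radix.FlatCmd(nil, "HMSET", "buz", map[string]string{
	"name": "alice",
	"city": "berlin",
}))

err = client.Do(radix.FlatCmd(nil, "SET", "someCounter", 42))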

Struct Scanning

Cmd and FlatCmd can unmarshal results into a struct. The results must be a key/value array, such as that returned by HGETALL. Exported field names will be used as keys, unless the fields have the "redis" tag:

type MyType struct {
	Foo string               // Will be populated with the value for key "Foo"
	Bar string `redis:"BAR"` // Will be populated with the value for key "BAR"
	Baz string `redis:"-"`   // Will not be populated
}

Embedded structs will inline that struct's fields into the parent's:

type MyOtherType struct {
	// adds fields "Foo" and "BAR" (from above example) to MyOtherType
	MyType
	Biz int
}

The same rules for field naming apply when a struct is passed into FlatCmd as an argument.
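A brief sketch of both directions, assuming the MyType definition above and an existing client:

t := MyType{Foo: "a", Bar: "b", Baz: "ignored"}

// flattened to: "HMSET" "myTypeKey" "Foo" "a" "BAR" "b" (Baz is skipped)
err := client.Do(radix.FlatCmd(nil, "HMSET", "myTypeKey", t))

// read the hash back using the same field naming rules
var t2 MyType
err = client.Do(radix.Cmd(&t2, "HGETALL", "myTypeKey"))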

Actions

Cmd and FlatCmd both implement the Action interface. Other Actions include Pipeline, WithConn, and EvalScript.Cmd. Any of these may be passed into any Client's Do method.

var fooVal string
p := radix.Pipeline(
	radix.FlatCmd(nil, "SET", "foo", 1),
	radix.Cmd(&fooVal, "GET", "foo"),
)
if err := client.Do(p); err != nil {
	panic(err)
}
fmt.Printf("fooVal: %q\n", fooVal)

Transactions

There are two ways to perform transactions in redis. The first is with the MULTI/EXEC commands, which can be done using the WithConn Action (see its example). The second is using EVAL with lua scripting, which can be done using the EvalScript Action (again, see its example).

EVAL with lua scripting is recommended in almost all cases. It only requires a single round-trip, it's infinitely more flexible than MULTI/EXEC, it's simpler to code, and for complex transactions, which would otherwise need a WATCH statement with MULTI/EXEC, it's significantly faster.

AUTH and other settings via ConnFunc and ClientFunc

All the client creation functions (e.g. NewPool) take in either a ConnFunc or a ClientFunc via their options. These can be used in order to set up timeouts on connections, perform authentication commands, or even implement custom pools.

// this is a ConnFunc which will set up a connection which is authenticated
// and has a 1 minute timeout on all operations
customConnFunc := func(network, addr string) (radix.Conn, error) {
	return radix.Dial(network, addr,
		radix.DialTimeout(1 * time.Minute),
		radix.DialAuthPass("mySuperSecretPassword"),
	)
}

// this pool will use our ConnFunc for all connections it creates
pool, err := radix.NewPool("tcp", redisAddr, 10, radix.PoolConnFunc(customConnFunc))

// this cluster will use the ClientFunc to create a pool to each node in the
// cluster. The pools also use our customConnFunc, but have more connections
poolFunc := func(network, addr string) (radix.Client, error) {
	return radix.NewPool(network, addr, 100, radix.PoolConnFunc(customConnFunc))
}
cluster, err := radix.NewCluster([]string{redisAddr1, redisAddr2}, radix.ClusterPoolFunc(poolFunc))

Custom implementations

All interfaces in this package were designed such that they could have custom implementations. There is no dependency within radix that demands any interface be implemented by a particular underlying type, so feel free to create your own Pools or Conns or Actions or whatever makes your life easier.

Errors

Errors returned from redis can be explicitly checked for using the resp2.Error type. Note that the errors.As function, introduced in go 1.13, should be used.

var redisErr resp2.Error
err := client.Do(radix.Cmd(nil, "AUTH", "wrong password"))
if errors.As(err, &redisErr) {
	log.Printf("redis error returned: %s", redisErr.E)
}

Use the golang.org/x/xerrors package if you're using an older version of go.

Implicit pipelining

Implicit pipelining is an optimization implemented and enabled in the default Pool implementation (and therefore also used by Cluster and Sentinel) which involves delaying concurrent Cmds and FlatCmds a small amount of time and sending them to redis in a single batch, similar to manually using a Pipeline. By doing this radix significantly reduces the I/O and CPU overhead for concurrent requests.

Note that only commands which do not block are eligible for implicit pipelining.

See the documentation on Pool for more information about the current implementation of implicit pipelining and for how to configure or disable the feature.

For a performance comparison between Clients with and without implicit pipelining see the benchmark results in the README.md.
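As a rough sketch (assuming the pool created earlier and the standard fmt and sync packages), concurrent calls like the following are candidates for being batched into a single pipeline:

var wg sync.WaitGroup
for i := 0; i < 10; i++ {
	wg.Add(1)
	go func(i int) {
		defer wg.Done()
		// concurrent Cmds/FlatCmds like this one may be written and read
		// together in a single batch by the Pool
		key := fmt.Sprintf("foo%d", i)
		if err := pool.Do(radix.FlatCmd(nil, "SET", key, i)); err != nil {
			// handle error
		}
	}(i)
}
wg.Wait()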


Constants

This section is empty.

Variables

var DefaultClientFunc = func(network, addr string) (Client, error) {
	return NewPool(network, addr, 4)
}

DefaultClientFunc is a ClientFunc which will return a Client for a redis instance using sane defaults.

var DefaultConnFunc = func(network, addr string) (Conn, error) {
	return Dial(network, addr)
}

DefaultConnFunc is a ConnFunc which will return a Conn for a redis instance using sane defaults.

var ErrPoolEmpty = errors.New("connection pool is empty")

ErrPoolEmpty is used by Pools created using the PoolOnEmptyErrAfter option

var ScanAllKeys = ScanOpts{
	Command: "SCAN",
}

ScanAllKeys is a shortcut ScanOpts which can be used to scan all keys

Functions

func CRC16

func CRC16(buf []byte) uint16

CRC16 returns checksum for a given set of bytes based on the crc algorithm defined for hashing redis keys in a cluster setup

func ClusterSlot

func ClusterSlot(key []byte) uint16

ClusterSlot returns the slot number the key belongs to in any redis cluster, taking into account key hash tags
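For instance, keys which share a hash tag (the part of the key between "{" and "}") always map to the same slot, which is what allows them to be used together in multi-key commands:

s1 := ClusterSlot([]byte("{user:1}:profile"))
s2 := ClusterSlot([]byte("{user:1}:sessions"))
fmt.Println(s1 == s2) // true: both keys are hashed only on "user:1"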

Types

type Action

type Action interface {
	// Keys returns the keys which will be acted on. Empty slice or nil may be
	// returned if no keys are being acted on. The returned slice must not be
	// modified.
	Keys() []string

	// Run actually performs the Action using the given Conn.
	Run(c Conn) error
}

Action performs a task using a Conn.

func Pipeline

func Pipeline(cmds ...CmdAction) Action

Pipeline returns an Action which first writes multiple commands to a Conn in a single write, then reads their responses in a single read. This reduces network delay into a single round-trip.

Run will not be called on any of the passed in CmdActions.

NOTE that, while a Pipeline performs all commands on a single Conn, it shouldn't be used by itself for MULTI/EXEC transactions, because if there's an error it won't discard the incomplete transaction. Use WithConn or EvalScript for transactional functionality instead.

Example
client, err := NewPool("tcp", "127.0.0.1:6379", 10) // or any other client
if err != nil {
	// handle error
}
var fooVal string
p := Pipeline(
	FlatCmd(nil, "SET", "foo", 1),
	Cmd(&fooVal, "GET", "foo"),
)
if err := client.Do(p); err != nil {
	// handle error
}
fmt.Printf("fooVal: %q\n", fooVal)
Output:

fooVal: "1"

func WithConn

func WithConn(key string, fn func(Conn) error) Action

WithConn is used to perform a set of independent Actions on the same Conn. key should be a key which one or more of the inner Actions is acting on, or "" if no keys are being acted on. The callback function is what should actually carry out the inner actions, and the error it returns will be passed back up immediately.

NOTE that WithConn only ensures all inner Actions are performed on the same Conn, it doesn't make them transactional. Use MULTI/WATCH/EXEC within a WithConn for transactions, or use EvalScript

Example
client, err := NewPool("tcp", "127.0.0.1:6379", 10) // or any other client
if err != nil {
	// handle error
}

// This example retrieves the current integer value of `key` and sets its
// new value to be the increment of that, all using the same connection
// instance. NOTE that it does not do this atomically like the INCR command
// would.
key := "someKey"
err = client.Do(WithConn(key, func(conn Conn) error {
	var curr int
	if err := conn.Do(Cmd(&curr, "GET", key)); err != nil {
		return err
	}

	curr++
	return conn.Do(FlatCmd(nil, "SET", key, curr))
}))
if err != nil {
	// handle error
}
Output:

Example (Transaction)
client, err := NewPool("tcp", "127.0.0.1:6379", 10) // or any other client
if err != nil {
	// handle error
}

// This example retrieves the current value of `key` and then sets a new
// value on it in an atomic transaction.
key := "someKey"
var prevVal string

err = client.Do(WithConn(key, func(c Conn) error {

	// Begin the transaction with a MULTI command
	if err := c.Do(Cmd(nil, "MULTI")); err != nil {
		return err
	}

	// If any of the calls after the MULTI call error it's important that
	// the transaction is discarded. This isn't strictly necessary if the
	// error was a network error, as the connection would be closed by the
	// client anyway, but it's important otherwise.
	var err error
	defer func() {
		if err != nil {
			// The return from DISCARD doesn't matter. If it's an error then
			// it's a network error and the Conn will be closed by the
			// client.
			c.Do(Cmd(nil, "DISCARD"))
		}
	}()

	// queue up the transaction's commands
	if err = c.Do(Cmd(nil, "GET", key)); err != nil {
		return err
	}
	if err = c.Do(Cmd(nil, "SET", key, "someOtherValue")); err != nil {
		return err
	}

	// execute the transaction, capturing the result
	var result []string
	if err = c.Do(Cmd(&result, "EXEC")); err != nil {
		return err
	}

	// capture the output of the first transaction command, i.e. the GET
	prevVal = result[0]
	return nil
}))
if err != nil {
	// handle error
}

fmt.Printf("the value of key %q was %q\n", key, prevVal)
Output:

type Client

type Client interface {
	// Do performs an Action, returning any error.
	Do(Action) error

	// Once Close() is called all future method calls on the Client will return
	// an error
	Close() error
}

Client describes an entity which can carry out Actions, e.g. a connection pool for a single redis instance or the cluster client.

Implementations of Client are expected to be thread-safe, except in cases like Conn where they specify otherwise.

type ClientFunc

type ClientFunc func(network, addr string) (Client, error)

ClientFunc is a function which can be used to create a Client for a single redis instance on the given network/address.

type Cluster

type Cluster struct {

	// Any errors encountered internally will be written to this channel. If
	// nothing is reading the channel the errors will be dropped. The channel
	// will be closed when the Close method is called.
	ErrCh chan error
	// contains filtered or unexported fields
}

Cluster contains all information about a redis cluster needed to interact with it, including a set of pools to each of its instances. All methods on Cluster are thread-safe

func NewCluster

func NewCluster(clusterAddrs []string, opts ...ClusterOpt) (*Cluster, error)

NewCluster initializes and returns a Cluster instance. It will try every address given until it finds a usable one. From there it uses CLUSTER SLOTS to discover the cluster topology and make all the necessary connections.

NewCluster takes in a number of options which can overwrite its default behavior. The default options NewCluster uses are:

ClusterPoolFunc(DefaultClientFunc)
ClusterSyncEvery(5 * time.Second)
ClusterOnDownDelayActionsBy(100 * time.Millisecond)

func (*Cluster) Client

func (c *Cluster) Client(addr string) (Client, error)

Client returns a Client for the given address, which could be either the primary or one of the secondaries (see Topo method for retrieving known addresses).

NOTE that if there is a failover while a Client returned by this method is being used the Client may or may not continue to work as expected, depending on the nature of the failover.

NOTE the Client should _not_ be closed.

func (*Cluster) Close

func (c *Cluster) Close() error

Close cleans up all goroutines spawned by Cluster and closes all of its Pools.

func (*Cluster) Do

func (c *Cluster) Do(a Action) error

Do performs an Action on a redis instance in the cluster, with the instance being determined by the keys returned from the Action's Keys() method.

This method handles MOVED and ASK errors automatically in most cases, see ClusterCanRetryAction's docs for more.

func (*Cluster) NewScanner

func (c *Cluster) NewScanner(o ScanOpts) Scanner

NewScanner will return a Scanner which will scan over every node in the cluster. This will panic if the ScanOpt's Command isn't "SCAN".

If the cluster topology changes during a scan the Scanner may or may not error out due to it, depending on the nature of the change.

func (*Cluster) Sync

func (c *Cluster) Sync() error

Sync will synchronize the Cluster with the actual cluster, making new pools to new instances and removing ones from instances no longer in the cluster. This will be called periodically automatically, but you can manually call it at any time as well

func (*Cluster) Topo

func (c *Cluster) Topo() ClusterTopo

Topo returns the Cluster's topology as it currently knows it. See ClusterTopo's docs for more on its default order.

type ClusterCanRetryAction

type ClusterCanRetryAction interface {
	Action
	ClusterCanRetry() bool
}

ClusterCanRetryAction is an Action which is aware of Cluster's retry behavior in the event of a slot migration. If an Action receives an error from a Cluster node which is either MOVED or ASK, and that Action implements ClusterCanRetryAction, and the ClusterCanRetry method returns true, then the Action will be retried on the correct node.

NOTE that the Actions which are returned by Cmd, FlatCmd, and EvalScript.Cmd all implicitly implement this interface.
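A minimal sketch of how a custom Action could opt in, by embedding an existing CmdAction and adding the method (the wrapper type here is hypothetical, not part of the package):

type retryableCmd struct {
	CmdAction
}

// ClusterCanRetry marks the wrapped command as safe to retry on MOVED/ASK errors.
func (retryableCmd) ClusterCanRetry() bool { return true }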

type ClusterNode

type ClusterNode struct {
	// older versions of redis might not actually send back the id, so it may be
	// blank
	Addr, ID string
	// start is inclusive, end is exclusive
	Slots [][2]uint16
	// address and id this node is the secondary of, if it's a secondary
	SecondaryOfAddr, SecondaryOfID string
}

ClusterNode describes a single node in the cluster at a moment in time.

type ClusterOpt

type ClusterOpt func(*clusterOpts)

ClusterOpt is an optional behavior which can be applied to the NewCluster function to affect a Cluster's behavior

func ClusterOnDownDelayActionsBy

func ClusterOnDownDelayActionsBy(d time.Duration) ClusterOpt

ClusterOnDownDelayActionsBy tells the Cluster to delay all commands by the given duration while the cluster is seen to be in the CLUSTERDOWN state. This allows fewer actions to be affected by brief outages, e.g. during a failover.

If the given duration is 0 then Cluster will not delay actions during the CLUSTERDOWN state. Note that calls to Sync will not be delayed regardless of this option.

func ClusterPoolFunc

func ClusterPoolFunc(pf ClientFunc) ClusterOpt

ClusterPoolFunc tells the Cluster to use the given ClientFunc when creating pools of connections to cluster members.

func ClusterSyncEvery

func ClusterSyncEvery(d time.Duration) ClusterOpt

ClusterSyncEvery tells the Cluster to synchronize itself with the cluster's topology at the given interval. On every synchronization Cluster will ask the cluster for its topology and make/destroy its connections as necessary.

func ClusterWithTrace

func ClusterWithTrace(ct trace.ClusterTrace) ClusterOpt

ClusterWithTrace tells the Cluster to trace itself with the given ClusterTrace. Note that ClusterTrace will block every point that you set to trace.

type ClusterTopo

type ClusterTopo []ClusterNode

ClusterTopo describes the cluster topology at a given moment. It will be sorted first by slot number of each node and then by secondary status, so primaries will come before secondaries.

func (ClusterTopo) Map

func (tt ClusterTopo) Map() map[string]ClusterNode

Map returns the topology as a mapping of node address to its ClusterNode

func (ClusterTopo) MarshalRESP

func (tt ClusterTopo) MarshalRESP(w io.Writer) error

MarshalRESP implements the resp.Marshaler interface, and will marshal the ClusterTopo in the same format as the return from CLUSTER SLOTS

func (ClusterTopo) Primaries

func (tt ClusterTopo) Primaries() ClusterTopo

Primaries returns a ClusterTopo instance containing only the primary nodes from the ClusterTopo being called on

func (*ClusterTopo) UnmarshalRESP

func (tt *ClusterTopo) UnmarshalRESP(br *bufio.Reader) error

UnmarshalRESP implements the resp.Unmarshaler interface, but only supports unmarshaling the return from CLUSTER SLOTS. The unmarshaled nodes will be sorted before they are returned

type CmdAction

type CmdAction interface {
	Action
	resp.Marshaler
	resp.Unmarshaler
}

CmdAction is a sub-class of Action which can be used in two different ways. The first is as a normal Action, where Run is called with a Conn and returns once the Action has been completed.

The second way is as a Pipeline-able command, where one or more commands are written in one step (via the MarshalRESP method) and their results are read later (via the UnmarshalRESP method).

When used directly with Do then MarshalRESP/UnmarshalRESP are not called, and when used in a Pipeline the Run method is not called.

func Cmd

func Cmd(rcv interface{}, cmd string, args ...string) CmdAction

Cmd is used to perform a redis command and retrieve a result. It should not be passed into Do more than once.

If the receiver value of Cmd is a primitive, a slice/map, or a struct then a pointer must be passed in. It may also be an io.Writer, an encoding.Text/BinaryUnmarshaler, or a resp.Unmarshaler. See the package docs for more on how results are unmarshaled into the receiver.

Example
client, err := NewPool("tcp", "127.0.0.1:6379", 10) // or any other client
if err != nil {
	panic(err)
}

if err := client.Do(Cmd(nil, "SET", "foo", "bar")); err != nil {
	panic(err)
}

var fooVal string
if err := client.Do(Cmd(&fooVal, "GET", "foo")); err != nil {
	panic(err)
}
fmt.Println(fooVal)
Output:

bar

func FlatCmd

func FlatCmd(rcv interface{}, cmd, key string, args ...interface{}) CmdAction

FlatCmd is like Cmd, but the arguments can be of almost any type, and FlatCmd will automatically flatten them into a single array of strings. Like Cmd, a FlatCmd should not be passed into Do more than once.

FlatCmd does _not_ work for commands whose first parameter isn't a key, or (generally) for MSET. Use Cmd for those.

FlatCmd supports using a resp.LenReader (an io.Reader with a Len() method) as an argument. *bytes.Buffer is an example of a LenReader, and the resp package has a NewLenReader function which can wrap an existing io.Reader.

FlatCmd also supports encoding.Text/BinaryMarshalers. It does _not_ currently support resp.Marshaler.

The receiver to FlatCmd follows the same rules as for Cmd.

Example
client, err := NewPool("tcp", "127.0.0.1:6379", 10) // or any other client
if err != nil {
	panic(err)
}

// performs "SET" "foo" "1"
err = client.Do(FlatCmd(nil, "SET", "foo", 1))
if err != nil {
	panic(err)
}

// performs "SADD" "fooSet" "1" "2" "3"
err = client.Do(FlatCmd(nil, "SADD", "fooSet", []string{"1", "2", "3"}))
if err != nil {
	panic(err)
}

// performs "HMSET" "foohash" "a" "1" "b" "2" "c" "3"
m := map[string]int{"a": 1, "b": 2, "c": 3}
err = client.Do(FlatCmd(nil, "HMSET", "fooHash", m))
if err != nil {
	panic(err)
}
Output:

type Conn

type Conn interface {
	// The Do method of a Conn is _not_ expected to be thread-safe with the
	// other methods of Conn, and merely calls the Action's Run method with
	// itself as the argument.
	Client

	// Encode and Decode may be called at the same time by two different
	// go-routines, but each should only be called once at a time (i.e. two
	// routines shouldn't call Encode at the same time, same with Decode).
	//
	// Encode and Decode should _not_ be called at the same time as Do.
	//
	// If either Encode or Decode encounter a net.Error the Conn will be
	// automatically closed.
	//
	// Encode is expected to encode an entire resp message, not a partial one.
	// In other words, when sending commands to redis, Encode should only be
	// called once per command. Similarly, Decode is expected to decode an
	// entire resp response.
	Encode(resp.Marshaler) error
	Decode(resp.Unmarshaler) error

	// Returns the underlying network connection, as-is. Read, Write, and Close
	// should not be called on the returned Conn.
	NetConn() net.Conn
}

Conn is a Client wrapping a single network connection which synchronously reads/writes data using the redis resp protocol.

A Conn can be used directly as a Client, but in general you probably want to use a *Pool instead

func Dial

func Dial(network, addr string, opts ...DialOpt) (Conn, error)

Dial is a ConnFunc which creates a Conn using net.Dial and NewConn. It takes in a number of options which can overwrite its default behavior as well.

In place of a host:port address, Dial also accepts a URI, as per:

https://www.iana.org/assignments/uri-schemes/prov/redis

If the URI has an AUTH password or db specified Dial will attempt to perform the AUTH and/or SELECT as well.

If either DialAuthPass or DialSelectDB is used it overwrites the associated value passed in by the URI.

The default options Dial uses are:

DialTimeout(10 * time.Second)
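A short sketch of dialing with a URI plus an extra option; the password and database index are placeholders, and the DialTimeout is layered on top of what the URI specifies:

conn, err := Dial("tcp", "redis://:mySuperSecretPassword@127.0.0.1:6379/2",
	DialTimeout(30*time.Second),
)
if err != nil {
	// handle error
}
defer conn.Close()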

func NewConn

func NewConn(conn net.Conn) Conn

NewConn takes an existing net.Conn and wraps it to support the Conn interface of this package. The Read and Write methods on the original net.Conn should not be used after calling this method.

func PubSubStub

func PubSubStub(remoteNetwork, remoteAddr string, fn func([]string) interface{}) (Conn, chan<- PubSubMessage)

PubSubStub returns a (fake) Conn, much like Stub does, which pretends it is a Conn to a real redis instance, but is instead using the given callback to service requests. It is primarily useful for writing tests.

PubSubStub differs from Stub in that Encode calls for (P)SUBSCRIBE, (P)UNSUBSCRIBE, MESSAGE, and PING will be intercepted and handled as per redis' expected pubsub functionality. A PubSubMessage may be written to the returned channel at any time, and if the PubSubStub has had (P)SUBSCRIBE called matching that PubSubMessage it will be written to the PubSubStub's internal buffer as expected.

This is intended to be used so that it can mock services which can perform both normal redis commands and pubsub (e.g. a real redis instance, redis sentinel). Once created this stub can be passed into PubSub and treated like a real connection.

Example
// Make a pubsub stub conn which will return nil for everything except
// pubsub commands (which will be handled automatically)
stub, stubCh := PubSubStub("tcp", "127.0.0.1:6379", func([]string) interface{} {
	return nil
})

// These writes shouldn't do anything, initially, since we haven't
// subscribed to anything
go func() {
	for {
		stubCh <- PubSubMessage{
			Channel: "foo",
			Message: []byte("bar"),
		}
		time.Sleep(1 * time.Second)
	}
}()

// Use PubSub to wrap the stub like we would for a normal redis connection
pstub := PubSub(stub)

// Subscribe msgCh to "foo"
msgCh := make(chan PubSubMessage)
if err := pstub.Subscribe(msgCh, "foo"); err != nil {
	log.Fatal(err)
}

// now that msgCh is subscribed, the publishes being made by the go-routine
// above will start being written to it
for m := range msgCh {
	log.Printf("read m: %#v", m)
}
Output:

func Stub

func Stub(remoteNetwork, remoteAddr string, fn func([]string) interface{}) Conn

Stub returns a (fake) Conn which pretends it is a Conn to a real redis instance, but is instead using the given callback to service requests. It is primarily useful for writing tests.

When Encode is called the given value is marshalled into bytes then unmarshalled into a []string, which is passed to the callback. The return from the callback is then marshalled and buffered internally, and will be unmarshalled in the next call to Decode.

remoteNetwork and remoteAddr can be empty, but if given will be used as the return from the RemoteAddr method.

If the internal buffer is empty then Decode will block until Encode is called in a separate go-routine. The SetDeadline and SetReadDeadline methods can be used as usual to limit how long Decode blocks. All other inherited net.Conn methods will panic.

Example
m := map[string]string{}
stub := Stub("tcp", "127.0.0.1:6379", func(args []string) interface{} {
	switch args[0] {
	case "GET":
		return m[args[1]]
	case "SET":
		m[args[1]] = args[2]
		return nil
	default:
		return errors.Errorf("this stub doesn't support command %q", args[0])
	}
})

stub.Do(Cmd(nil, "SET", "foo", "1"))

var foo int
stub.Do(Cmd(&foo, "GET", "foo"))
fmt.Printf("foo: %d\n", foo)
Output:

type ConnFunc

type ConnFunc func(network, addr string) (Conn, error)

ConnFunc is a function which returns an initialized, ready-to-be-used Conn. Functions like NewPool or NewCluster take in a ConnFunc in order to allow for things like calls to AUTH on each new connection, setting timeouts, custom Conn implementations, etc... See the package docs for more details.

type DialOpt

type DialOpt func(*dialOpts)

DialOpt is an optional behavior which can be applied to the Dial function to affect its behavior, or the behavior of the Conn it creates.

func DialAuthPass

func DialAuthPass(pass string) DialOpt

DialAuthPass will cause Dial to perform an AUTH command once the connection is created, using the given pass.

If this is set and a redis URI is passed to Dial which also has a password set, this takes precedence.

func DialConnectTimeout

func DialConnectTimeout(d time.Duration) DialOpt

DialConnectTimeout determines the timeout value to pass into net.DialTimeout when creating the connection. If not set then net.Dial is called instead.

func DialReadTimeout

func DialReadTimeout(d time.Duration) DialOpt

DialReadTimeout determines the deadline to set when reading from a dialed connection. If not set then SetReadDeadline is never called.

func DialSelectDB

func DialSelectDB(db int) DialOpt

DialSelectDB will cause Dial to perform a SELECT command once the connection is created, using the given database index.

If this is set and a redis URI is passed to Dial which also has a database index set, this takes precedence.

func DialTimeout

func DialTimeout(d time.Duration) DialOpt

DialTimeout is the equivalent to using DialConnectTimeout, DialReadTimeout, and DialWriteTimeout all with the same value.

func DialUseTLS

func DialUseTLS(config *tls.Config) DialOpt

DialUseTLS will cause Dial to perform a TLS handshake using the provided config. If config is nil the config is interpreted as equivalent to the zero configuration. See https://golang.org/pkg/crypto/tls/#Config

func DialWriteTimeout

func DialWriteTimeout(d time.Duration) DialOpt

DialWriteTimeout determines the deadline to set when writing to a dialed connection. If not set then SetWriteDeadline is never called.

type EvalScript

type EvalScript struct {
	// contains filtered or unexported fields
}

EvalScript contains the body of a script to be used with redis' EVAL functionality. Call Cmd on an EvalScript to actually create an Action which can be run.

Example
// set as a global variable, this script is equivalent to the builtin GETSET
// redis command
var getSet = NewEvalScript(1, `
		local prev = redis.call("GET", KEYS[1])
		redis.call("SET", KEYS[1], ARGV[1])
		return prev
`)

client, err := NewPool("tcp", "127.0.0.1:6379", 10) // or any other client
if err != nil {
	// handle error
}

key := "someKey"
var prevVal string
if err := client.Do(getSet.Cmd(&prevVal, key, "myVal")); err != nil {
	// handle error
}

fmt.Printf("value of key %q used to be %q\n", key, prevVal)
Output:

func NewEvalScript

func NewEvalScript(numKeys int, script string) EvalScript

NewEvalScript initializes an EvalScript instance. numKeys corresponds to the number of arguments which will be keys when Cmd is called

func (EvalScript) Cmd

func (es EvalScript) Cmd(rcv interface{}, args ...string) Action

Cmd is like the top-level Cmd but it uses the EvalScript to perform an EVALSHA command (and will automatically fallback to EVAL as necessary). args must be at least as long as the numKeys argument of NewEvalScript.

type MaybeNil

type MaybeNil struct {
	Nil bool
	Rcv interface{}
}

MaybeNil is a type which wraps a receiver. It will first detect if what's being received is a nil RESP type (either bulk string or array), and if so set Nil to true. If not the return value will be unmarshaled into Rcv normally.

Example
client, err := NewPool("tcp", "127.0.0.1:6379", 10) // or any other client
if err != nil {
	// handle error
}

var rcv int64
mn := MaybeNil{Rcv: &rcv}
if err := client.Do(Cmd(&mn, "GET", "foo")); err != nil {
	// handle error
} else if mn.Nil {
	fmt.Println("rcv is nil")
} else {
	fmt.Printf("rcv is %d\n", rcv)
}
Output:

func (*MaybeNil) UnmarshalRESP

func (mn *MaybeNil) UnmarshalRESP(br *bufio.Reader) error

UnmarshalRESP implements the method for the resp.Unmarshaler interface.

type PersistentPubSubOpt

type PersistentPubSubOpt func(*persistentPubSubOpts)

PersistentPubSubOpt is an optional parameter which can be passed into PersistentPubSub in order to affect its behavior.

func PersistentPubSubAbortAfter

func PersistentPubSubAbortAfter(attempts int) PersistentPubSubOpt

PersistentPubSubAbortAfter changes PersistentPubSub's reconnect behavior. Usually PersistentPubSub will try to reconnect forever upon a disconnect, blocking any methods which have been called until reconnect is successful.

When PersistentPubSubAbortAfter is used, it will give up after that many attempts and return the error to the method which has been blocked the longest. Another method will need to be called in order for PersistentPubSub to resume trying to reconnect.
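A brief sketch using the option (the address and channel name are placeholders):

ps, err := PersistentPubSubWithOpts("tcp", "127.0.0.1:6379",
	PersistentPubSubAbortAfter(3),
)
if err != nil {
	// handle error
}

msgCh := make(chan PubSubMessage)
if err := ps.Subscribe(msgCh, "myChannel"); err != nil {
	// with AbortAfter set, a method can return a reconnect error here
	// instead of blocking forever
}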

func PersistentPubSubConnFunc

func PersistentPubSubConnFunc(connFn ConnFunc) PersistentPubSubOpt

PersistentPubSubConnFunc causes PersistentPubSub to use the given ConnFunc when connecting to its destination.

type Pool

type Pool struct {

	// Any errors encountered internally will be written to this channel. If
	// nothing is reading the channel the errors will be dropped. The channel
	// will be closed when Close is called.
	ErrCh chan error
	// contains filtered or unexported fields
}

Pool is a dynamic connection pool which implements the Client interface. It takes in a number of options which can affect its specific behavior; see the NewPool method.

Pool is dynamic in that it can create more connections on-the-fly to handle increased load. The maximum number of extra connections (if any) can be configured, along with how long they are kept after load has returned to normal.

Pool also takes advantage of implicit pipelining. If multiple commands are being performed simultaneously, then Pool will write them all to a single connection using a single system call, and read all their responses together using another single system call. Implicit pipelining significantly improves performance during high-concurrency usage, at the expense of slightly worse performance during low-concurrency usage. It can be disabled using PoolPipelineWindow(0, 0).

func NewPool

func NewPool(network, addr string, size int, opts ...PoolOpt) (*Pool, error)

NewPool creates a *Pool which will keep open at least the given number of connections to the redis instance at the given address.

NewPool takes in a number of options which can overwrite its default behavior. The default options NewPool uses are:

PoolConnFunc(DefaultConnFunc)
PoolOnEmptyCreateAfter(1 * time.Second)
PoolRefillInterval(1 * time.Second)
PoolOnFullBuffer((size / 3)+1, 1 * time.Second)
PoolPingInterval(5 * time.Second / (size+1))
PoolPipelineConcurrency(size)
PoolPipelineWindow(150 * time.Microsecond, 0)

The recommended size of the pool depends on the number of concurrent goroutines that will use the pool and whether implicit pipelining is enabled or not.

As a general rule, when implicit pipelining is enabled (the default) the size of the pool can be kept low without problems to reduce resource and file descriptor usage.
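A sketch of a pool with a few options overridden; customConnFunc is the ConnFunc from the package overview, and the pipelining window values are purely illustrative:

pool, err := NewPool("tcp", "127.0.0.1:6379", 10,
	PoolConnFunc(customConnFunc),
	PoolPipelineWindow(500*time.Microsecond, 64), // or PoolPipelineWindow(0, 0) to disable pipelining
)
if err != nil {
	// handle error
}
defer pool.Close()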

func (*Pool) Close

func (p *Pool) Close() error

Close implements the Close method of the Client

func (*Pool) Do

func (p *Pool) Do(a Action) error

Do implements the Do method of the Client interface by retrieving a Conn out of the pool, calling Run on the given Action with it, and returning the Conn to the pool.

If the given Action is a CmdAction, it will be pipelined with other concurrent calls to Do, which can improve the performance and resource usage of the Redis server, but will increase the latency for some of the Actions. To avoid the implicit pipelining you can either set PoolPipelineWindow(0, 0) when creating the Pool or use WithConn. Pipelines created manually (via Pipeline) are also excluded from this and will be executed as if using WithConn.

Due to a limitation in the implementation, custom CmdAction implementations are currently not automatically pipelined.

func (*Pool) NumAvailConns

func (p *Pool) NumAvailConns() int

NumAvailConns returns the number of connections currently available in the pool, as well as in the overflow buffer if that option is enabled.

type PoolOpt

type PoolOpt func(*poolOpts)

PoolOpt is an optional behavior which can be applied to the NewPool function to affect a Pool's behavior

func PoolConnFunc

func PoolConnFunc(cf ConnFunc) PoolOpt

PoolConnFunc tells the Pool to use the given ConnFunc when creating new Conns to its redis instance. The ConnFunc can be used to set timeouts, perform AUTH, or even use custom Conn implementations.

func PoolOnEmptyCreateAfter

func PoolOnEmptyCreateAfter(wait time.Duration) PoolOpt

PoolOnEmptyCreateAfter affects the Pool's behavior when there are no available connections in the Pool. The effect is to cause actions to block until a connection becomes available or until the duration has passed. If the duration is passed a new connection is created and used.

If wait is 0 then a new connection is created immediately upon an empty Pool.

func PoolOnEmptyErrAfter

func PoolOnEmptyErrAfter(wait time.Duration) PoolOpt

PoolOnEmptyErrAfter affects the Pool's behavior when there are no available connections in the Pool. The effect is to cause actions to block until a connection becomes available or until the duration has passed. If the duration is passed then ErrPoolEmpty is returned.

If wait is 0 then ErrPoolEmpty is returned immediately upon an empty Pool.

func PoolOnEmptyWait

func PoolOnEmptyWait() PoolOpt

PoolOnEmptyWait affects the Pool's behavior when there are no available connections in the Pool. The effect is to cause actions to block as long as it takes until a connection becomes available.

func PoolOnFullBuffer

func PoolOnFullBuffer(size int, drainInterval time.Duration) PoolOpt

PoolOnFullBuffer affects the Pool's behavior when it is full. The effect is to give the pool an additional buffer for connections, called the overflow. If a connection is being put back into a full pool it will be put into the overflow. If the overflow is also full then the connection will be closed and discarded.

drainInterval specifies the interval at which a drain event happens. On each drain event a connection will be removed from the overflow buffer (if any are present in it), closed, and discarded.

If drainInterval is zero then drain events will never occur.

func PoolOnFullClose

func PoolOnFullClose() PoolOpt

PoolOnFullClose affects the Pool's behavior when it is full. The effect is to cause any connection which is being put back into a full pool to be closed and discarded.

func PoolPingInterval

func PoolPingInterval(d time.Duration) PoolOpt

PoolPingInterval specifies the interval at which a ping event happens. On each ping event the Pool calls the PING redis command over one of its available connections.

Since connections are used in LIFO order, the ping interval * pool size is the duration of time it takes to ping every connection once when the pool is idle.

A shorter interval means connections are pinged more frequently, but also means more traffic with the server.

func PoolPipelineConcurrency

func PoolPipelineConcurrency(limit int) PoolOpt

PoolPipelineConcurrency sets the maximum number of pipelines that can be executed concurrently.

If limit is greater than the pool size or less than 1, the limit will be set to the pool size.

func PoolPipelineWindow

func PoolPipelineWindow(window time.Duration, limit int) PoolOpt

PoolPipelineWindow sets the duration after which internal pipelines will be flushed and the maximum number of commands that can be pipelined before flushing.

If window is zero then implicit pipelining will be disabled. If limit is zero then no limit will be used and pipelines will only be limited by the specified time window.

func PoolRefillInterval

func PoolRefillInterval(d time.Duration) PoolOpt

PoolRefillInterval specifies the interval at which a refill event happens. On each refill event the Pool checks to see if it is full, and if it's not a single connection is created and added to it.

func PoolWithTrace

func PoolWithTrace(pt trace.PoolTrace) PoolOpt

PoolWithTrace tells the Pool to trace itself with the given PoolTrace. Note that PoolTrace will block every point that you set to trace.

type PubSubConn

type PubSubConn interface {
	// Subscribe subscribes the PubSubConn to the given set of channels. msgCh
	// will receive a PubSubMessage for every publish written to any of the
	// channels. This may be called multiple times for the same channels and
	// different msgCh's, each msgCh will receive a copy of the PubSubMessage
	// for each publish.
	Subscribe(msgCh chan<- PubSubMessage, channels ...string) error

	// Unsubscribe unsubscribes the msgCh from the given set of channels, if it
	// was subscribed at all.
	//
	// NOTE even if msgCh is not subscribed to any other redis channels, it
	// should still be considered "active", and therefore still be having
	// messages read from it, until Unsubscribe has returned
	Unsubscribe(msgCh chan<- PubSubMessage, channels ...string) error

	// PSubscribe is like Subscribe, but it subscribes msgCh to a set of
	// patterns and not individual channels.
	PSubscribe(msgCh chan<- PubSubMessage, patterns ...string) error

	// PUnsubscribe is like Unsubscribe, but it unsubscribes msgCh from a set of
	// patterns and not individual channels.
	//
	// NOTE even if msgCh is not subscribed to any other redis channels, it
	// should still be considered "active", and therefore still be having
	// messages read from it, until PUnsubscribe has returned
	PUnsubscribe(msgCh chan<- PubSubMessage, patterns ...string) error

	// Ping performs a simple Ping command on the PubSubConn, returning an error
	// if it failed for some reason
	Ping() error

	// Close closes the PubSubConn so it can't be used anymore. All subscribed
	// channels will stop receiving PubSubMessages from this Conn (but will not
	// themselves be closed).
	//
	// NOTE all msgChs should be considered "active", and therefore still be
	// having messages read from them, until Close has returned.
	Close() error
}

PubSubConn wraps an existing Conn to support redis' pubsub system. User-created channels can be subscribed to redis channels to receive PubSubMessages which have been published.

If any methods return an error it means the PubSubConn has been Close'd and subscribed msgCh's will no longer receive PubSubMessages from it. All methods are threadsafe and non-blocking.

NOTE if any channels block when being written to they will block all other channels from receiving a publish.
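One way to mitigate that, sketched here, is to give each subscription its own reader goroutine (and optionally a buffered channel) so a slow consumer can't stall the rest:

fastCh := make(chan PubSubMessage)
slowCh := make(chan PubSubMessage, 128) // buffered to absorb bursts

go func() {
	for msg := range fastCh {
		_ = msg // handle quickly
	}
}()
go func() {
	for msg := range slowCh {
		_ = msg // slower handling happens off the PubSubConn's write path
	}
}()

ps := PubSub(conn) // conn from Dial, as in the PubSub example below
if err := ps.Subscribe(fastCh, "fast"); err != nil {
	// handle error
}
if err := ps.Subscribe(slowCh, "slow"); err != nil {
	// handle error
}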

func PersistentPubSub

func PersistentPubSub(network, addr string, connFn ConnFunc) PubSubConn

PersistentPubSub is deprecated in favor of PersistentPubSubWithOpts.

Example (Cluster)
// Example of how to use PersistentPubSub with a Cluster instance.

// Initialize the cluster in any way you see fit
cluster, err := NewCluster([]string{"127.0.0.1:6379"})
if err != nil {
	panic(err)
}

// Have PersistentPubSub pick a random cluster node every time it wants to
// make a new connection. If the node fails PersistentPubSub will
// automatically pick a new node to connect to.
ps := PersistentPubSub("", "", func(string, string) (Conn, error) {
	topo := cluster.Topo()
	node := topo[rand.Intn(len(topo))]
	return Dial("tcp", node.Addr)
})

// Use the PubSubConn as normal.
msgCh := make(chan PubSubMessage)
ps.Subscribe(msgCh, "myChannel")
for msg := range msgCh {
	log.Printf("publish to channel %q received: %q", msg.Channel, msg.Message)
}
Output:

func PersistentPubSubWithOpts

func PersistentPubSubWithOpts(
	network, addr string, options ...PersistentPubSubOpt,
) (
	PubSubConn, error,
)

PersistentPubSubWithOpts is like PubSub, but instead of taking in an existing Conn to wrap it will create one on the fly. If the connection is ever terminated then a new one will be created and will be reset to the previous connection's state.

This is effectively a way to have a permanent PubSubConn established which supports subscribing/unsubscribing but without the hassle of implementing reconnect/re-subscribe logic.

With default options, none of the methods on the returned PubSubConn will ever return an error; they will instead block until a connection can be successfully reinstated.

PersistentPubSubWithOpts takes in a number of options which can overwrite its default behavior. The default options PersistentPubSubWithOpts uses are:

PersistentPubSubConnFunc(DefaultConnFunc)

func PubSub

func PubSub(rc Conn) PubSubConn

PubSub wraps the given Conn so that it becomes a PubSubConn. The passed in Conn should not be used after this call.

Example
// Create a normal redis connection
conn, err := Dial("tcp", "127.0.0.1:6379")
if err != nil {
	panic(err)
}

// Pass that connection into PubSub, conn should never get used after this
ps := PubSub(conn)

// Subscribe to a channel called "myChannel". All publishes to "myChannel"
// will get sent to msgCh after this
msgCh := make(chan PubSubMessage)
if err := ps.Subscribe(msgCh, "myChannel"); err != nil {
	panic(err)
}

for msg := range msgCh {
	log.Printf("publish to channel %q received: %q", msg.Channel, msg.Message)
}
Output:

type PubSubMessage

type PubSubMessage struct {
	Type    string // "message" or "pmessage"
	Pattern string // will be set if Type is "pmessage"
	Channel string
	Message []byte
}

PubSubMessage describes a message being published to a subscribed channel

func (PubSubMessage) MarshalRESP

func (m PubSubMessage) MarshalRESP(w io.Writer) error

MarshalRESP implements the Marshaler interface.

func (*PubSubMessage) UnmarshalRESP

func (m *PubSubMessage) UnmarshalRESP(br *bufio.Reader) error

UnmarshalRESP implements the Unmarshaler interface

type ScanOpts

type ScanOpts struct {
	// The scan command to do, e.g. "SCAN", "HSCAN", etc...
	Command string

	// The key to perform the scan on. Only necessary when Command isn't "SCAN"
	Key string

	// An optional pattern to filter returned keys by
	Pattern string

	// An optional count hint to send to redis to indicate number of keys to
	// return per call. This does not affect the actual results of the scan
	// command, but it may be useful for optimizing certain datasets
	Count int
}

ScanOpts are various parameters which can be passed into NewScanner. Some fields are required depending on which type of scan is being done.

type Scanner

type Scanner interface {
	Next(*string) bool
	Close() error
}

Scanner is used to iterate through the results of a SCAN call (or HSCAN, SSCAN, etc...)

Once created, repeatedly call Next() on it to fill the passed in string pointer with the next result. Next will return false if there's no more results to retrieve or if an error occurred, at which point Close should be called to retrieve any error.

func NewScanner

func NewScanner(c Client, o ScanOpts) Scanner

NewScanner creates a new Scanner instance which will iterate over the keys of the redis instance behind the given Client, using the given ScanOpts.

NOTE if Client is a *Cluster this will not work correctly, use the NewScanner method on Cluster instead.

Example (Hscan)
client, err := DefaultClientFunc("tcp", "127.0.0.1:6379")
if err != nil {
	log.Fatal(err)
}

s := NewScanner(client, ScanOpts{Command: "HSCAN", Key: "somekey"})
var key string
for s.Next(&key) {
	log.Printf("key: %q", key)
}
if err := s.Close(); err != nil {
	log.Fatal(err)
}
Output:

Example (Scan)
client, err := DefaultClientFunc("tcp", "127.0.0.1:6379")
if err != nil {
	log.Fatal(err)
}

s := NewScanner(client, ScanAllKeys)
var key string
for s.Next(&key) {
	log.Printf("key: %q", key)
}
if err := s.Close(); err != nil {
	log.Fatal(err)
}
Output:

type Sentinel

type Sentinel struct {

	// Any errors encountered internally will be written to this channel. If
	// nothing is reading the channel the errors will be dropped. The channel
	// will be closed when Close is called.
	ErrCh chan error
	// contains filtered or unexported fields
}

Sentinel is a Client which, in the background, connects to an available sentinel node and handles all of the following:

* Creates a pool to the current primary instance, as advertised by the sentinel

* Listens for events indicating the primary has changed, and automatically creates a new Client to the new primary

* Keeps track of other sentinels in the cluster, and uses them if the currently connected one becomes unreachable

func NewSentinel

func NewSentinel(primaryName string, sentinelAddrs []string, opts ...SentinelOpt) (*Sentinel, error)

NewSentinel creates and returns a *Sentinel instance. NewSentinel takes in a number of options which can overwrite its default behavior. The default options NewSentinel uses are:

SentinelConnFunc(DefaultConnFunc)
SentinelPoolFunc(DefaultClientFunc)

func (*Sentinel) Addrs

func (sc *Sentinel) Addrs() (string, []string)

Addrs returns the currently known network address of the current primary instance and the addresses of the secondaries.

func (*Sentinel) Client

func (sc *Sentinel) Client(addr string) (Client, error)

Client returns a Client for the given address, which could be either the primary or one of the secondaries (see Addrs method for retrieving known addresses).

NOTE that if there is a failover while a Client returned by this method is being used the Client may or may not continue to work as expected, depending on the nature of the failover.

NOTE the Client should _not_ be closed.

func (*Sentinel) Close

func (sc *Sentinel) Close() error

Close implements the method for the Client interface.

func (*Sentinel) Do

func (sc *Sentinel) Do(a Action) error

Do implements the method for the Client interface. It will pass the given action on to the current primary.

NOTE it's possible that in between Do being called and the Action being actually carried out that there could be a failover event. In that case, the Action will likely fail and return an error.

func (*Sentinel) SentinelAddrs

func (sc *Sentinel) SentinelAddrs() []string

SentinelAddrs returns the addresses of all known sentinels.

type SentinelOpt

type SentinelOpt func(*sentinelOpts)

SentinelOpt is an optional behavior which can be applied to the NewSentinel function to affect a Sentinel's behavior.

func SentinelConnFunc

func SentinelConnFunc(cf ConnFunc) SentinelOpt

SentinelConnFunc tells the Sentinel to use the given ConnFunc when connecting to sentinel instances.

NOTE that if SentinelConnFunc is not used then Sentinel will attempt to retrieve AUTH and SELECT information from the address provided to NewSentinel, and use that for dialing all Sentinels. If SentinelConnFunc is provided, however, those options must be given through DialAuthPass/DialSelectDB within the ConnFunc.
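A short sketch of the second case; the primary name, sentinel address, and password are placeholders:

sentConnFunc := func(network, addr string) (Conn, error) {
	return Dial(network, addr, DialAuthPass("sentinelPassword"))
}

sentinel, err := NewSentinel("myPrimary", []string{"127.0.0.1:26379"},
	SentinelConnFunc(sentConnFunc),
)
if err != nil {
	// handle error
}
defer sentinel.Close()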

func SentinelPoolFunc

func SentinelPoolFunc(pf ClientFunc) SentinelOpt

SentinelPoolFunc tells the Sentinel to use the given ClientFunc when creating a pool of connections to the sentinel's primary.

type StreamEntry

type StreamEntry struct {
	// ID is the ID of the entry in a stream.
	ID StreamEntryID

	// Fields contains the fields and values for the stream entry.
	Fields map[string]string
}

StreamEntry is an entry in a Redis stream as returned by XRANGE, XREAD and XREADGROUP.

func (*StreamEntry) UnmarshalRESP

func (s *StreamEntry) UnmarshalRESP(br *bufio.Reader) error

UnmarshalRESP implements the resp.Unmarshaler interface.

type StreamEntryID

type StreamEntryID struct {
	// Time is the first part of the ID, which is based on the time of the server that Redis runs on.
	Time uint64

	// Seq is the sequence number of the ID for entries with the same Time value.
	Seq uint64
}

StreamEntryID represents an ID used in a Redis stream with the format <time>-<seq>.

func (StreamEntryID) Before

func (s StreamEntryID) Before(o StreamEntryID) bool

Before returns true if s comes before o in a stream (is less than o).

func (*StreamEntryID) MarshalRESP

func (s *StreamEntryID) MarshalRESP(w io.Writer) error

MarshalRESP implements the resp.Marshaler interface.

func (StreamEntryID) Next

func (s StreamEntryID) Next() StreamEntryID

Next returns the next stream entry ID or s if there is no higher id (s is 18446744073709551615-18446744073709551615).

func (StreamEntryID) Prev

func (s StreamEntryID) Prev() StreamEntryID

Prev returns the previous stream entry ID or s if there is no prior id (s is 0-0).

func (StreamEntryID) String

func (s StreamEntryID) String() string

String returns the ID in the format <time>-<seq> (the same format used by Redis).

String implements the fmt.Stringer interface.

func (*StreamEntryID) UnmarshalRESP

func (s *StreamEntryID) UnmarshalRESP(br *bufio.Reader) error

UnmarshalRESP implements the resp.Unmarshaler interface.

type StreamReader

type StreamReader interface {
	// Err returns any error that happened while calling Next or nil if no error happened.
	//
	// Once Err returns a non-nil error, all successive calls will return the same error.
	Err() error

	// Next returns new entries for any of the configured streams.
	//
	// The returned slice is only valid until the next call to Next.
	//
	// If there was an error, ok will be false. Otherwise, even if no entries were read, ok will be true.
	//
	// If there was an error, all future calls to Next will return ok == false.
	Next() (stream string, entries []StreamEntry, ok bool)
}

StreamReader allows reading from one or more streams, always returning newer entries.

func NewStreamReader

func NewStreamReader(c Client, opts StreamReaderOpts) StreamReader

NewStreamReader returns a new StreamReader for the given client.

Any changes on opts after calling NewStreamReader will have no effect.

type StreamReaderOpts

type StreamReaderOpts struct {
	// Streams must contain one or more stream names that will be read.
	//
	// The value for each stream can either be nil or an existing ID.
	// If a value is non-nil, only newer stream entries will be returned.
	Streams map[string]*StreamEntryID

	// Group is an optional consumer group name.
	//
	// If Group is not empty reads will use XREADGROUP with the Group as consumer group instead of XREAD.
	Group string

	// Consumer is an optional consumer name for use with Group.
	Consumer string

	// NoAck optionally enables passing the NOACK flag to XREADGROUP.
	NoAck bool

	// Block specifies the duration in milliseconds that reads will wait for new data before returning.
	//
	// If Block is negative, reads will block indefinitely until new entries can be read or there is an error.
	//
	// The default, if Block is 0, is 5 seconds.
	//
	// If Block is non-negative, the Client used for the StreamReader must not have a timeout for commands or
	// the timeout duration must be substantially higher than the Block duration (at least 50% for small Block values,
	// but may be less for higher values).
	Block time.Duration

	// NoBlock disables blocking when no new data is available.
	//
	// If this is true, setting Block will not have any effect.
	NoBlock bool

	// Count can be used to limit the number of entries retrieved by each call to Next.
	//
	// If Count is 0, all available entries will be retrieved.
	Count int
}

StreamReaderOpts contains various options given for NewStreamReader that influence the behaviour.

The only required field is Streams.
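A rough sketch of reading from a single stream, assuming an existing Client; the stream name and option values are illustrative, and log is the standard library logger:

opts := StreamReaderOpts{
	Streams: map[string]*StreamEntryID{
		"myStream": nil, // nil lets the reader pick its default starting position
	},
	Block: 5 * time.Second,
	Count: 10,
}

r := NewStreamReader(client, opts)
for {
	stream, entries, ok := r.Next()
	if !ok {
		break // Err will report what went wrong
	}
	for _, entry := range entries {
		log.Printf("stream %q entry %s: %v", stream, entry.ID, entry.Fields)
	}
}
if err := r.Err(); err != nil {
	// handle error
}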

Directories

Path	Synopsis
internal/bytesutil	Package bytesutil provides utility functions for working with bytes and byte streams that are useful when working with the RESP protocol.
resp	Package resp is an umbrella package which covers both the old RESP protocol (resp2) and the new one (resp3), allowing clients to choose which one they care to use.
resp/resp2	Package resp2 implements the original redis RESP protocol, a plaintext protocol which is also binary safe.
trace	Package trace contains all the types provided for tracing within the radix package.
