codec

package module
v1.2.12

Note: this package is not in the latest version of its module.
Published: Nov 28, 2023 License: MIT Imports: 22 Imported by: 5,241

README

Package Documentation for github.com/ugorji/go/codec

Package codec provides a High Performance, Feature-Rich Idiomatic Go 1.4+ codec/encoding library for binc, msgpack, cbor, json.

Supported Serialization formats are:

  • msgpack: https://github.com/msgpack/msgpack
  • binc: https://github.com/ugorji/binc
  • cbor: http://cbor.io
  • json: http://json.org
  • simple (a custom binary format, supported via SimpleHandle)

This package will carefully use 'package unsafe' for performance reasons in specific places. You can build without unsafe use by passing the 'codec.safe' or 'appengine' build tag i.e. 'go install -tags=codec.safe ...'.

This library works with both the standard gc and the gccgo compilers.

For detailed usage information, read the primer at http://ugorji.net/blog/go-codec-primer .

The idiomatic Go support is as seen in other encoding packages in the standard library (i.e. json, xml, gob, etc).

Rich Feature Set includes:

  • Simple but extremely powerful and feature-rich API
  • Support for go 1.4 and above, while selectively using newer APIs for later releases
  • Excellent code coverage ( > 90% )
  • Very High Performance. Our extensive benchmarks show us outperforming Gob, Json, Bson, etc by 2-4X.
  • Carefully selected use of 'unsafe' for targeted performance gains.
  • 100% safe mode supported, where 'unsafe' is not used at all.
  • Lock-free (sans mutex) concurrency for scaling to hundreds of cores
  • In-place updates during decode, with option to zero value in maps and slices prior to decode
  • Coerce types where appropriate e.g. decode an int in the stream into a float, decode numbers from formatted strings, etc
  • Corner Cases: Overflows, nil maps/slices, nil values in streams are handled correctly
  • Standard field renaming via tags
  • Support for omitting empty fields during an encoding
  • Encoding from any value and decoding into pointer to any value (struct, slice, map, primitives, pointers, interface{}, etc)
  • Extensions to support efficient encoding/decoding of any named types
  • Support encoding.(Binary|Text)(M|Unm)arshaler interfaces
  • Support using existence of IsZero() bool to determine if a value is a zero value. Analogous to time.Time.IsZero() bool.
  • Decoding without a schema (into an interface{}). Includes options to configure what specific map or slice type to use when decoding an encoded list or map into a nil interface{}
  • Mapping a non-interface type to an interface, so we can decode appropriately into any interface type with a correctly configured non-interface value.
  • Encode a struct as an array, and decode struct from an array in the data stream
  • Option to encode struct keys as numbers (instead of strings) (to support structured streams with fields encoded as numeric codes)
  • Comprehensive support for anonymous fields
  • Fast (no-reflection) encoding/decoding of common maps and slices
  • Code-generation for faster performance, supported in go 1.6+
  • Support binary (e.g. messagepack, cbor) and text (e.g. json) formats
  • Support indefinite-length formats to enable true streaming (for formats which support it e.g. json, cbor)
  • Support canonical encoding, where a value is ALWAYS encoded as the same sequence of bytes. This mostly applies to maps, where iteration order is non-deterministic.
  • NIL in data stream decoded as zero value
  • Never silently skip data when decoding. User decides whether to return an error or silently skip data when keys or indexes in the data stream do not map to fields in the struct.
  • Detect and error when encoding a cyclic reference (instead of stack overflow shutdown)
  • Encode/Decode from/to chan types (for iterative streaming support)
  • Drop-in replacement for encoding/json. json: key in struct tag supported.
  • Provides an RPC Server and Client Codec for the net/rpc communication protocol.
  • Handle unique idiosyncrasies of codecs e.g. For messagepack, configure how ambiguities in handling raw bytes are resolved and provide rpc server/client codec to support msgpack-rpc protocol defined at: https://github.com/msgpack-rpc/msgpack-rpc/blob/master/spec.md

Extension Support

Users can register a function to handle the encoding or decoding of their custom types.

There are no restrictions on what the custom type can be. Some examples:

    type BitSet   []int
    type BitSet64 uint64
    type UUID     string
    type MyStructWithUnexportedFields struct { a int; b bool; c []int; }
    type GifImage struct { ... }

As an illustration, MyStructWithUnexportedFields would normally be encoded as an empty map because it has no exported fields, while UUID would be encoded as a string. However, with extension support, you can encode any of these however you like.

There is also seamless support provided for registering an extension (with a tag) but letting the encoding mechanism default to the standard way.

Custom Encoding and Decoding

This package maintains symmetry in the encoding and decoding halves. We determine how to encode or decode by walking this decision tree:

  • is there an extension registered for the type?
  • is type a codec.Selfer?
  • is format binary, and is type an encoding.BinaryMarshaler and BinaryUnmarshaler?
  • is format specifically json, and is type an encoding/json.Marshaler and Unmarshaler?
  • is format text-based, and is type an encoding.TextMarshaler and TextUnmarshaler?
  • else we use a pair of functions based on the "kind" of the type e.g. map, slice, int64, etc

This symmetry is important to reduce chances of issues happening because the encoding and decoding sides are out of sync e.g. decoded via very specific encoding.TextUnmarshaler but encoded via kind-specific generalized mode.

Consequently, if a type only defines one-half of the symmetry (e.g. it implements UnmarshalJSON() but not MarshalJSON() ), then that type doesn't satisfy the check and we will continue walking down the decision tree.

RPC

RPC Client and Server Codecs are implemented, so the codecs can be used with the standard net/rpc package.

Usage

The Handle is SAFE for concurrent READ, but NOT SAFE for concurrent modification.

The Encoder and Decoder are NOT safe for concurrent use.

Consequently, the usage model is basically:

  • Create and initialize the Handle before any use. Once created, DO NOT modify it.
  • Multiple Encoders or Decoders can now use the Handle concurrently. They only read information off the Handle (never write).
  • However, each Encoder or Decoder MUST NOT be used concurrently.
  • To re-use an Encoder/Decoder, call Reset(...) on it first. This allows you to reuse the state maintained on the Encoder/Decoder.

Sample usage model:

    // create and configure Handle
    var (
      bh codec.BincHandle
      mh codec.MsgpackHandle
      ch codec.CborHandle
    )

    mh.MapType = reflect.TypeOf(map[string]interface{}(nil))

    // configure extensions
    // e.g. for msgpack, define functions and enable Time support for tag 1
    // mh.SetExt(reflect.TypeOf(time.Time{}), 1, myExt)

    // create and use decoder/encoder
    var (
      r io.Reader
      w io.Writer
      b []byte
      h = &bh // or mh to use msgpack
    )

    dec = codec.NewDecoder(r, h)
    dec = codec.NewDecoderBytes(b, h)
    err = dec.Decode(&v)

    enc = codec.NewEncoder(w, h)
    enc = codec.NewEncoderBytes(&b, h)
    err = enc.Encode(v)

    //RPC Server
    go func() {
        for {
            conn, err := listener.Accept()
            rpcCodec := codec.GoRpc.ServerCodec(conn, h)
            //OR rpcCodec := codec.MsgpackSpecRpc.ServerCodec(conn, h)
            rpc.ServeCodec(rpcCodec)
        }
    }()

    //RPC Communication (client side)
    conn, err = net.Dial("tcp", "localhost:5555")
    rpcCodec := codec.GoRpc.ClientCodec(conn, h)
    //OR rpcCodec := codec.MsgpackSpecRpc.ClientCodec(conn, h)
    client := rpc.NewClientWithCodec(rpcCodec)

Running Tests

To run tests, use the following:

    go test

To run the full suite of tests, use the following:

    go test -tags alltests -run Suite

You can use the build tag 'codec.safe' to run tests or build in safe mode, e.g.

    go test -tags codec.safe -run Json
    go test -tags "alltests codec.safe" -run Suite

Running Benchmarks

    cd bench
    go test -bench . -benchmem -benchtime 1s

Please see http://github.com/ugorji/go-codec-bench .

Caveats

Struct fields matching the following are ignored during encoding and decoding:

  • struct tag value set to -
  • func, complex numbers, unsafe pointers
  • unexported and not embedded
  • unexported and embedded and not struct kind
  • unexported and embedded pointers (from go1.10)

Every other field in a struct will be encoded/decoded.

Embedded fields are encoded as if they exist in the top-level struct, with some caveats. See Encode documentation.

Exported Package API

const CborStreamBytes byte = 0x5f ...
const GenVersion = 28
var SelfExt = &extFailWrapper{}
var GoRpc goRpc
var MsgpackSpecRpc msgpackSpecRpc
func GenHelper() (g genHelper)
type BasicHandle struct{ ... }
type BincHandle struct{ ... }
type BytesExt interface{ ... }
type CborHandle struct{ ... }
type DecodeOptions struct{ ... }
type Decoder struct{ ... }
    func NewDecoder(r io.Reader, h Handle) *Decoder
    func NewDecoderBytes(in []byte, h Handle) *Decoder
    func NewDecoderString(s string, h Handle) *Decoder
type EncodeOptions struct{ ... }
type Encoder struct{ ... }
    func NewEncoder(w io.Writer, h Handle) *Encoder
    func NewEncoderBytes(out *[]byte, h Handle) *Encoder
type Ext interface{ ... }
type Handle interface{ ... }
type InterfaceExt interface{ ... }
type JsonHandle struct{ ... }
type MapBySlice interface{ ... }
type MissingFielder interface{ ... }
type MsgpackHandle struct{ ... }
type MsgpackSpecRpcMultiArgs []interface{}
type RPCOptions struct{ ... }
type Raw []byte
type RawExt struct{ ... }
type Rpc interface{ ... }
type Selfer interface{ ... }
type SimpleHandle struct{ ... }
type TypeInfos struct{ ... }
    func NewTypeInfos(tags []string) *TypeInfos

Documentation


Index

Constants

const (
	CborStreamBytes  byte = 0x5f
	CborStreamString byte = 0x7f
	CborStreamArray  byte = 0x9f
	CborStreamMap    byte = 0xbf
	CborStreamBreak  byte = 0xff
)

These define some in-stream descriptors for manual encoding e.g. when doing explicit indefinite-length encoding.

const GenVersion = 28

GenVersion is the current version of codecgen.

Variables

var GoRpc goRpc

GoRpc implements Rpc using the communication protocol defined in net/rpc package.

Note: network connection (from net.Dial, of type io.ReadWriteCloser) is not buffered.

For performance, you should configure WriterBufferSize and ReaderBufferSize on the handle. This ensures we use an adequate buffer during reading and writing. If not configured, we will internally initialize and use a buffer during reads and writes. This can be turned off via the RPCNoBuffer option on the Handle.

var handle codec.JsonHandle
handle.RPCNoBuffer = true // turns off attempt by rpc module to initialize a buffer

Example 1: one way of configuring buffering explicitly:

var handle codec.JsonHandle // codec handle
handle.ReaderBufferSize = 1024
handle.WriterBufferSize = 1024
var conn io.ReadWriteCloser // connection got from a socket
var serverCodec = GoRpc.ServerCodec(conn, handle)
var clientCodec = GoRpc.ClientCodec(conn, handle)

Example 2: you can also explicitly create a buffered connection yourself, and not worry about configuring the buffer sizes in the Handle.

var handle codec.Handle     // codec handle
var conn io.ReadWriteCloser // connection got from a socket
var bufconn = struct {      // bufconn here is a buffered io.ReadWriteCloser
    io.Closer
    *bufio.Reader
    *bufio.Writer
}{conn, bufio.NewReader(conn), bufio.NewWriter(conn)}
var serverCodec = GoRpc.ServerCodec(bufconn, handle)
var clientCodec = GoRpc.ClientCodec(bufconn, handle)
var MsgpackSpecRpc msgpackSpecRpc

MsgpackSpecRpc implements Rpc using the communication protocol defined in the msgpack spec at https://github.com/msgpack-rpc/msgpack-rpc/blob/master/spec.md .

See GoRpc documentation, for information on buffering for better performance.

var SelfExt = &extFailWrapper{}

SelfExt is a sentinel extension signifying that types registered with it SHOULD be encoded and decoded based on the native mode of the format.

This allows users to define a tag for an extension, but signify that the types should be encoded/decoded as the native encoding. This way, users need not also define how to encode or decode the extension.

Functions

func GenHelper added in v1.1.14

func GenHelper() (g genHelper)

GenHelper is exported so that it can be used externally by codecgen.

Library users: DO NOT USE IT DIRECTLY or INDIRECTLY. IT WILL CHANGE CONTINUOUSLY WITHOUT NOTICE.

Types

type BasicHandle deprecated

type BasicHandle struct {

	// TypeInfos is used to get the type info for any type.
	//
	// If not configured, the default TypeInfos is used, which uses struct tag keys: codec, json
	TypeInfos *TypeInfos

	DecodeOptions

	EncodeOptions

	RPCOptions

	// TimeNotBuiltin configures whether time.Time should be treated as a builtin type.
	//
	// All Handlers should know how to encode/decode time.Time as part of the core
	// format specification, or as a standard extension defined by the format.
	//
	// However, users can elect to handle time.Time as a custom extension, or via the
	// standard library's encoding.Binary(M|Unm)arshaler or Text(M|Unm)arshaler interface.
	// To elect this behavior, users can set TimeNotBuiltin=true.
	//
	// Note: Setting TimeNotBuiltin=true can be used to enable the legacy behavior
	// (for Cbor and Msgpack), where time.Time was not a builtin supported type.
	//
	// Note: DO NOT CHANGE AFTER FIRST USE.
	//
	// Once a Handle has been initialized (used), do not modify this option. It will be ignored.
	TimeNotBuiltin bool

	// ExplicitRelease is ignored and has no effect.
	//
	// Deprecated: Pools are only used for long-lived objects shared across goroutines.
	// It is maintained for backward compatibility.
	ExplicitRelease bool
	// contains filtered or unexported fields
}

BasicHandle encapsulates the common options and extension functions.

Deprecated: DO NOT USE DIRECTLY. EXPORTED FOR GODOC BENEFIT. WILL BE REMOVED.

func (*BasicHandle) AddExt deprecated

func (x *BasicHandle) AddExt(rt reflect.Type, tag byte,
	encfn func(reflect.Value) ([]byte, error),
	decfn func(reflect.Value, []byte) error) (err error)

AddExt registers an encode and decode function for a reflect.Type. To deregister an Ext, call AddExt with nil encfn and/or nil decfn.

Deprecated: Use SetBytesExt or SetInterfaceExt on the Handle instead.

func (*BasicHandle) SetExt deprecated

func (x *BasicHandle) SetExt(rt reflect.Type, tag uint64, ext Ext) (err error)

SetExt will set the extension for a tag and reflect.Type. Note that the type must be a named type, and specifically not a pointer or interface. An error is returned if that is not honored. To deregister an ext, call SetExt with a nil Ext.

Deprecated: Use SetBytesExt or SetInterfaceExt on the Handle instead.

func (BasicHandle) TimeBuiltin added in v1.2.3

func (x BasicHandle) TimeBuiltin() bool

TimeBuiltin returns whether time.Time OOTB support is used, based on the initial configuration of TimeNotBuiltin.

type BincHandle

type BincHandle struct {
	BasicHandle

	// AsSymbols defines what should be encoded as symbols.
	//
	// Encoding as symbols can reduce the encoded size significantly.
	//
	// However, during encoding, each string to be encoded as a symbol must
	// be checked to see if it has been seen before. Consequently, encoding time
	// will increase if using symbols, because string comparisons have a clear cost.
	//
	// Values:
	// - 0: default: library uses best judgement
	// - 1: use symbols
	// - 2: do not use symbols
	AsSymbols uint8
	// contains filtered or unexported fields
}

BincHandle is a Handle for the Binc Schema-Free Encoding Format defined at https://github.com/ugorji/binc .

BincHandle currently supports all Binc features with the following EXCEPTIONS:

  • Only integers up to 64 bits of precision are supported; big integers are unsupported.
  • Only IEEE 754 binary32 and binary64 floats are supported (i.e. Go float32 and float64 types); extended precision and decimal IEEE 754 floats are unsupported.
  • Only UTF-8 strings are supported; Unicode_Other Binc types (UTF16, UTF32) are currently unsupported.

Note that these EXCEPTIONS are temporary and full support is possible and may happen soon.

func (*BincHandle) Name

func (h *BincHandle) Name() string

Name returns the name of the handle: binc

func (*BincHandle) SetBytesExt

func (h *BincHandle) SetBytesExt(rt reflect.Type, tag uint64, ext BytesExt) (err error)

SetBytesExt sets an extension

func (BincHandle) TimeBuiltin added in v1.2.5

func (x BincHandle) TimeBuiltin() bool

TimeBuiltin returns whether time.Time OOTB support is used, based on the initial configuration of TimeNotBuiltin.

type BytesExt

type BytesExt interface {
	// WriteExt converts a value to a []byte.
	//
	// Note: v is a pointer iff the registered extension type is a struct or array kind.
	WriteExt(v interface{}) []byte

	// ReadExt updates a value from a []byte.
	//
	// Note: dst is always a pointer kind to the registered extension type.
	ReadExt(dst interface{}, src []byte)
}

BytesExt handles custom (de)serialization of types to/from []byte. It is used by codecs (e.g. binc, msgpack, simple) which do custom serialization of the types.

type CborHandle

type CborHandle struct {

	// noElemSeparators
	BasicHandle

	// IndefiniteLength=true means that we encode using indefinite-length
	IndefiniteLength bool

	// TimeRFC3339 says to encode time.Time using RFC3339 format.
	// If unset, we encode time.Time using seconds past epoch.
	TimeRFC3339 bool

	// SkipUnexpectedTags says to skip over any tags for which extensions are
	// not defined. This is in keeping with the cbor spec on "Optional Tagging of Items".
	//
	// Furthermore, this allows the skipping over of the Self Describing Tag 0xd9d9f7.
	SkipUnexpectedTags bool
	// contains filtered or unexported fields
}

CborHandle is a Handle for the CBOR encoding format, defined at http://tools.ietf.org/html/rfc7049 and documented further at http://cbor.io .

CBOR is comprehensively supported, including support for:

  • indefinite-length arrays/maps/bytes/strings
  • (extension) tags in range 0..0xffff (0 .. 65535)
  • half, single and double-precision floats
  • all numbers (1, 2, 4 and 8-byte signed and unsigned integers)
  • nil, true, false, ...
  • arrays and maps, bytes and text strings

None of the optional extensions (with tags) defined in the spec are supported out-of-the-box. Users can implement them as needed (using SetExt), including spec-documented ones:

  • timestamp, BigNum, BigFloat, Decimals,
  • Encoded Text (e.g. URL, regexp, base64, MIME Message), etc.

func (*CborHandle) Name

func (h *CborHandle) Name() string

Name returns the name of the handle: cbor

func (*CborHandle) SetInterfaceExt

func (h *CborHandle) SetInterfaceExt(rt reflect.Type, tag uint64, ext InterfaceExt) (err error)

SetInterfaceExt sets an extension

func (CborHandle) TimeBuiltin added in v1.2.5

func (x CborHandle) TimeBuiltin() bool

TimeBuiltin returns whether time.Time OOTB support is used, based on the initial configuration of TimeNotBuiltin.

type DecodeOptions

type DecodeOptions struct {
	// MapType specifies type to use during schema-less decoding of a map in the stream.
	// If nil (unset), we default to map[string]interface{} iff json handle and MapKeyAsString=true,
	// else map[interface{}]interface{}.
	MapType reflect.Type

	// SliceType specifies type to use during schema-less decoding of an array in the stream.
	// If nil (unset), we default to []interface{} for all formats.
	SliceType reflect.Type

	// MaxInitLen defines the maximum initial length that we "make" a collection
	// (string, slice, map, chan). If 0 or negative, we default to a sensible value
	// based on the size of an element in the collection.
	//
	// For example, when decoding, a stream may say that it has 2^64 elements.
	// We should not automatically provision a slice of that size, to prevent an Out-Of-Memory crash.
	// Instead, we provision up to MaxInitLen, fill that up, and start appending after that.
	MaxInitLen int

	// ReaderBufferSize is the size of the buffer used when reading.
	//
	// if > 0, we use a smart buffer internally for performance purposes.
	ReaderBufferSize int

	// MaxDepth defines the maximum depth when decoding nested
	// maps and slices. If 0 or negative, we default to a suitably large number (currently 1024).
	MaxDepth int16

	// If ErrorIfNoField, return an error when decoding a map
	// from a codec stream into a struct, and no matching struct field is found.
	ErrorIfNoField bool

	// If ErrorIfNoArrayExpand, return an error when decoding a slice/array that cannot be expanded.
	// For example, the stream contains an array of 8 items, but you are decoding into a [4]T array,
	// or you are decoding into a slice of length 4 which is non-addressable (and so cannot be set).
	ErrorIfNoArrayExpand bool

	// If SignedInteger, use int64 (not uint64) during schema-less decoding of unsigned integer values.
	SignedInteger bool

	// MapValueReset controls how we decode into a map value.
	//
	// By default, we MAY retrieve the mapping for a key, and then decode into that.
	// However, especially with big maps, that retrieval may be expensive and unnecessary
	// if the stream already contains all that is necessary to recreate the value.
	//
	// If true, we will never retrieve the previous mapping,
	// but rather decode into a new value and set that in the map.
	//
	// If false, we will retrieve the previous mapping if necessary e.g.
	// the previous mapping is a pointer, or is a struct or array with pre-set state,
	// or is an interface.
	MapValueReset bool

	// SliceElementReset: on decoding a slice, reset the element to a zero value first.
	//
	// concern: if the slice already contained some garbage, we will decode into that garbage.
	SliceElementReset bool

	// InterfaceReset controls how we decode into an interface.
	//
	// By default, when we see a field that is an interface{...},
	// or a map with interface{...} value, we will attempt decoding into the
	// "contained" value.
	//
	// However, this prevents us from reading a string into an interface{}
	// that formerly contained a number.
	//
	// If true, we will decode into a new "blank" value, and set that in the interface.
	// If false, we will decode into whatever is contained in the interface.
	InterfaceReset bool

	// InternString controls interning of strings during decoding.
	//
	// Some handles, e.g. json, typically will read map keys as strings.
	// If the set of keys are finite, it may help reduce allocation to
	// look them up from a map (than to allocate them afresh).
	//
	// Note: Handles will be smart when using the intern functionality.
	// Not every string should be interned.
	// An excellent use-case for interning is struct field names,
	// or map keys where key type is string.
	InternString bool

	// PreferArrayOverSlice controls whether to decode to an array or a slice.
	//
	// This only impacts decoding into a nil interface{}.
	//
	// Consequently, it has no effect on codecgen.
	//
	// *Note*: This only applies if using go1.5 and above,
	// as it requires reflect.ArrayOf support which was absent before go1.5.
	PreferArrayOverSlice bool

	// DeleteOnNilMapValue controls how to decode a nil value in the stream.
	//
	// If true, we will delete the mapping of the key.
	// Else, just set the mapping to the zero value of the type.
	//
	// Deprecated: This does NOTHING and is left behind for compiling compatibility.
	// This change is necessitated because 'nil' in a stream now consistently
	// means the zero value (ie reset the value to its zero state).
	DeleteOnNilMapValue bool

	// RawToString controls how raw bytes in a stream are decoded into a nil interface{}.
	// By default, they are decoded as []byte, but can be decoded as string (if configured).
	RawToString bool

	// ZeroCopy controls whether decoded values of []byte or string type
	// point into the input []byte parameter passed to a NewDecoderBytes/ResetBytes(...) call.
	//
	// To illustrate, if ZeroCopy and decoding from a []byte (not an io.Reader),
	// then a []byte or string in the output result may just be a slice of (point into)
	// the input bytes.
	//
	// This optimization prevents unnecessary copying.
	//
	// However, it is made optional, as the caller MUST ensure that the input parameter []byte is
	// not modified after the Decode() happens, as any changes are mirrored in the decoded result.
	ZeroCopy bool

	// PreferPointerForStructOrArray controls whether a struct or array
	// is stored in a nil interface{}, or a pointer to it.
	//
	// This mostly impacts when we decode registered extensions.
	PreferPointerForStructOrArray bool

	// ValidateUnicode causes decoding to fail if an expected unicode
	// string is well-formed but includes invalid codepoints.
	//
	// This could have a performance impact.
	ValidateUnicode bool
}

DecodeOptions captures configuration options during decode.

type Decoder

type Decoder struct {
	// contains filtered or unexported fields
}

Decoder reads and decodes an object from an input stream in a supported format.

Decoder is NOT safe for concurrent use i.e. a Decoder cannot be used concurrently in multiple goroutines.

However, as a Decoder could be allocation-heavy to initialize, a Reset method is provided so its state can be reused to decode new input streams repeatedly. This is the idiomatic way to use it.

func NewDecoder

func NewDecoder(r io.Reader, h Handle) *Decoder

NewDecoder returns a Decoder for decoding a stream of bytes from an io.Reader.

For efficiency, users are encouraged to configure ReaderBufferSize on the handle OR pass in a memory buffered reader (eg bufio.Reader, bytes.Buffer).

func NewDecoderBytes

func NewDecoderBytes(in []byte, h Handle) *Decoder

NewDecoderBytes returns a Decoder which efficiently decodes directly from a byte slice with zero copying.

func NewDecoderString added in v1.2.3

func NewDecoderString(s string, h Handle) *Decoder

NewDecoderString returns a Decoder which efficiently decodes directly from a string with zero copying.

It is a convenience function that calls NewDecoderBytes with a []byte view into the string.

This can be an efficient zero-copy if using default mode i.e. without codec.safe tag.

func (*Decoder) Decode

func (d *Decoder) Decode(v interface{}) (err error)

Decode decodes the stream from reader and stores the result in the value pointed to by v. v cannot be a nil pointer. v can also be a reflect.Value of a pointer.

Note that a pointer to a nil interface is not a nil pointer. If you do not know what type of stream it is, pass in a pointer to a nil interface. We will decode and store a value in that nil interface.

Sample usages:

// Decoding into a non-nil typed value
var f float32
err = codec.NewDecoder(r, handle).Decode(&f)

// Decoding into nil interface
var v interface{}
dec := codec.NewDecoder(r, handle)
err = dec.Decode(&v)

When decoding into a nil interface{}, we will decode into an appropriate value based on the contents of the stream:

  • Numbers are decoded as float64, int64 or uint64.
  • Other values are decoded appropriately depending on the type: bool, string, []byte, time.Time, etc
  • Extensions are decoded as RawExt (if no ext function registered for the tag)

Configurations exist on the Handle to override defaults (e.g. for MapType, SliceType and how to decode raw bytes).

When decoding into a non-nil interface{} value, the mode of decoding is based on the type of the value. When a value is seen:

  • If an extension is registered for it, call that extension function
  • If it implements BinaryUnmarshaler, call its UnmarshalBinary(data []byte) error
  • Else decode it based on its reflect.Kind

There are some special rules when decoding into containers (slice/array/map/struct). Decode will typically use the stream contents to UPDATE the container i.e. the values in these containers will not be zero'ed before decoding.

  • A map can be decoded from a stream map, by updating matching keys.
  • A slice can be decoded from a stream array, by updating the first n elements, where n is length of the stream.
  • A slice can be decoded from a stream map, by decoding as if it contains a sequence of key-value pairs.
  • A struct can be decoded from a stream map, by updating matching fields.
  • A struct can be decoded from a stream array, by updating fields as they occur in the struct (by index).

This in-place update maintains consistency in the decoding philosophy (i.e. we ALWAYS update in place by default). However, the consequence of this is that values in slices or maps which are not zero'ed before hand, will have part of the prior values in place after decode if the stream doesn't contain an update for those parts.

This in-place update can be disabled by configuring the MapValueReset and SliceElementReset decode options available on every handle.

Furthermore, when decoding a stream map or array with length of 0 into a nil map or slice, we reset the destination map or slice to a zero-length value.

However, when decoding a stream nil, we reset the destination container to its "zero" value (e.g. nil for slice/map, etc).

Note: we allow nil values in the stream anywhere except for map keys. A nil value in the encoded stream where a map key is expected is treated as an error.

func (*Decoder) HandleName added in v1.2.12

func (d *Decoder) HandleName() string

func (*Decoder) MustDecode

func (d *Decoder) MustDecode(v interface{})

MustDecode is like Decode, but panics if unable to Decode.

Note: This provides insight to the code location that triggered the error.

func (*Decoder) NumBytesRead

func (d *Decoder) NumBytesRead() int

NumBytesRead returns the number of bytes read.

func (*Decoder) Release deprecated

func (d *Decoder) Release()

Release is a no-op.

Deprecated: Pooled resources are not used with a Decoder. This method is kept for compatibility reasons only.

func (*Decoder) Reset

func (d *Decoder) Reset(r io.Reader)

Reset the Decoder with a new Reader to decode from, clearing all state from last run(s).

func (*Decoder) ResetBytes

func (d *Decoder) ResetBytes(in []byte)

ResetBytes resets the Decoder with a new []byte to decode from, clearing all state from last run(s).

func (*Decoder) ResetString added in v1.2.3

func (d *Decoder) ResetString(s string)

ResetString resets the Decoder with a new string to decode from, clearing all state from last run(s).

It is a convenience function that calls ResetBytes with a []byte view into the string.

This can be an efficient zero-copy if using default mode i.e. without codec.safe tag.

type EncodeOptions

type EncodeOptions struct {
	// WriterBufferSize is the size of the buffer used when writing.
	//
	// if > 0, we use a smart buffer internally for performance purposes.
	WriterBufferSize int

	// ChanRecvTimeout is the timeout used when selecting from a chan.
	//
	// Configuring this controls how we receive from a chan during the encoding process.
	//   - If == 0, we only consume the elements currently available in the chan.
	//   - If  < 0, we consume until the chan is closed.
	//   - If  > 0, we consume until this timeout.
	ChanRecvTimeout time.Duration

	// StructToArray specifies to encode a struct as an array, and not as a map
	StructToArray bool

	// Canonical representation means that encoding a value will always result in the same
	// sequence of bytes.
	//
	// This only affects maps, as the iteration order for maps is random.
	//
	// The implementation MAY use the natural sort order for the map keys if possible:
	//
	//     - If there is a natural sort order (ie for number, bool, string or []byte keys),
	//       then the map keys are first sorted in natural order and then written
	//       with corresponding map values to the stream.
	//     - If there is no natural sort order, then the map keys will first be
	//       encoded into []byte, and then sorted,
	//       before writing the sorted keys and the corresponding map values to the stream.
	//
	Canonical bool

	// CheckCircularRef controls whether we check for circular references
	// and error fast during an encode.
	//
	// If enabled, an error is returned if a pointer to a struct
	// references itself either directly or through one of its fields (iteratively).
	//
	// This is opt-in, as there may be a performance hit to checking circular references.
	CheckCircularRef bool

	// RecursiveEmptyCheck controls how we determine whether a value is empty.
	//
	// If true, we descend into interfaces and pointers to recursively check if value is empty.
	//
	// We *might* check struct fields one by one to see if empty
	// (if we cannot directly check if a struct value is equal to its zero value).
	// If so, we honor IsZero, Comparable, IsCodecEmpty(), etc.
	// Note: This *may* make OmitEmpty more expensive due to the large number of reflect calls.
	//
	// If false, we check if the value is equal to its zero value (newly allocated state).
	RecursiveEmptyCheck bool

	// Raw controls whether we encode Raw values.
	// This is a "dangerous" option and must be explicitly set.
	// If set, we blindly encode Raw values as-is, without checking
	// if they are a correct representation of a value in that format.
	// If unset, we error out.
	Raw bool

	// StringToRaw controls how strings are encoded.
	//
	// As a go string is just an (immutable) sequence of bytes,
	// it can be encoded either as raw bytes or as a UTF string.
	//
	// By default, strings are encoded as UTF-8,
	// but can be treated as []byte during an encode.
	//
	// Note that things which we know (by definition) to be UTF-8
	// are ALWAYS encoded as UTF-8 strings.
	// These include encoding.TextMarshaler, time.Format calls, struct field names, etc.
	StringToRaw bool

	// OptimumSize controls whether we optimize for the smallest size.
	//
	// Some formats will use this flag to determine whether to encode
	// in the smallest size possible, even if it takes slightly longer.
	//
	// For example, some formats that support half-floats might check if it is possible
	// to store a float64 as a half float. Doing this check has a small performance cost,
	// but the benefit is that the encoded message will be smaller.
	OptimumSize bool

	// NoAddressableReadonly controls whether we try to force a non-addressable value
	// to be addressable so we can call a pointer method on it e.g. for types
	// that support Selfer, json.Marshaler, etc.
	//
	// Use it in the very rare occurrence that your types modify a pointer value when calling
	// an encode callback function e.g. JsonMarshal, TextMarshal, BinaryMarshal or CodecEncodeSelf.
	NoAddressableReadonly bool
}

EncodeOptions captures configuration options during encode.

type Encoder

type Encoder struct {
	// contains filtered or unexported fields
}

Encoder writes an object to an output stream in a supported format.

Encoder is NOT safe for concurrent use i.e. an Encoder cannot be used concurrently in multiple goroutines.

However, as an Encoder could be allocation-heavy to initialize, a Reset method is provided so its state can be reused to encode new output streams repeatedly. This is the idiomatic way to use it.

func NewEncoder

func NewEncoder(w io.Writer, h Handle) *Encoder

NewEncoder returns an Encoder for encoding into an io.Writer.

For efficiency, users are encouraged to configure WriterBufferSize on the handle OR pass in a memory buffered writer (eg bufio.Writer, bytes.Buffer).

func NewEncoderBytes

func NewEncoderBytes(out *[]byte, h Handle) *Encoder

NewEncoderBytes returns an encoder for encoding directly and efficiently into a byte slice, using zero-copying to temporary slices.

It will potentially replace the output byte slice pointed to. After encoding, the out parameter contains the encoded contents.

func (*Encoder) Encode

func (e *Encoder) Encode(v interface{}) (err error)

Encode writes an object into a stream.

Encoding can be configured via the struct tag for the fields. The key (in the struct tags) that we look at is configurable.

By default, we look up the "codec" key in the struct field's tags, and fall back to the "json" key if "codec" is absent. That key in the struct field's tag value is the key name, followed by an optional comma and options.

To set an option on all fields (e.g. omitempty on all fields), you can create a field called _struct, and set flags on it. The options which can be set on _struct are:

  • omitempty: so all fields are omitted if empty
  • toarray: so struct is encoded as an array
  • int: so struct key names are encoded as signed integers (instead of strings)
  • uint: so struct key names are encoded as unsigned integers (instead of strings)
  • float: so struct key names are encoded as floats (instead of strings)

More details on these below.

Struct values "usually" encode as maps. Each exported struct field is encoded unless:

  • the field's tag is "-", OR
  • the field is empty (empty or the zero value) and its tag specifies the "omitempty" option.

When encoding as a map, the first string in the tag (before the comma) is the map key string to use when encoding. ... This key is typically encoded as a string. However, there are instances where the encoded stream has mapping keys encoded as numbers. For example, some cbor streams have keys as integer codes in the stream, but they should map to fields in a structured object. Consequently, a struct is the natural representation in code. For these, configure the struct to encode/decode the keys as numbers (instead of string). This is done with the int,uint or float option on the _struct field (see above).

However, struct values may encode as arrays. This happens when:

  • StructToArray Encode option is set, OR
  • the tag on the _struct field sets the "toarray" option

Note that omitempty is ignored when encoding struct values as arrays, as an entry must be encoded for each field, to maintain its position.

Values with types that implement MapBySlice are encoded as stream maps.

The empty values (for omitempty option) are false, 0, any nil pointer or interface value, and any array, slice, map, or string of length zero.

Anonymous fields are encoded inline except:

  • the struct tag specifies a replacement name (first value)
  • the field is of an interface type

Examples:

// NOTE: 'json:' can be used as struct tag key, in place 'codec:' below.
type MyStruct struct {
    _struct bool    `codec:",omitempty"`   //set omitempty for every field
    Field1 string   `codec:"-"`            //skip this field
    Field2 int      `codec:"myName"`       //Use key "myName" in encode stream
    Field3 int32    `codec:",omitempty"`   //use key "Field3". Omit if empty.
    Field4 bool     `codec:"f4,omitempty"` //use key "f4". Omit if empty.
    io.Reader                              //use key "Reader".
    MyStruct        `codec:"my1"`          //use key "my1".
    MyStruct                               //inline it
    ...
}

type MyStruct struct {
    _struct bool    `codec:",toarray"`     //encode struct as an array
}

type MyStruct struct {
    _struct bool    `codec:",uint"`        //encode struct with "unsigned integer" keys
    Field1 string   `codec:"1"`            //encode Field1 key using: EncodeInt(1)
    Field2 string   `codec:"2"`            //encode Field2 key using: EncodeInt(2)
}

The mode of encoding is based on the type of the value. When a value is seen:

  • If a Selfer, call its CodecEncodeSelf method
  • If an extension is registered for it, call that extension function
  • If implements encoding.(Binary|Text|JSON)Marshaler, call Marshal(Binary|Text|JSON) method
  • Else encode it based on its reflect.Kind

Note that struct field names and keys in map[string]XXX will be treated as symbols. Some formats support symbols (e.g. binc) and will properly encode the string only once in the stream, and use a tag to refer to it thereafter.

func (*Encoder) HandleName added in v1.2.12

func (e *Encoder) HandleName() string

func (*Encoder) MustEncode

func (e *Encoder) MustEncode(v interface{})

MustEncode is like Encode, but panics if unable to Encode.

Note: This provides insight to the code location that triggered the error.

func (*Encoder) Release deprecated

func (e *Encoder) Release()

Release is a no-op.

Deprecated: Pooled resources are not used with an Encoder. This method is kept for compatibility reasons only.

func (*Encoder) Reset

func (e *Encoder) Reset(w io.Writer)

Reset resets the Encoder with a new output stream.

This accommodates using the state of the Encoder, where it has "cached" information about sub-engines.

func (*Encoder) ResetBytes

func (e *Encoder) ResetBytes(out *[]byte)

ResetBytes resets the Encoder with a new destination output []byte.

func (*Encoder) WriteStr added in v1.2.9

func (z *Encoder) WriteStr(s string)

type Ext

type Ext interface {
	BytesExt
	InterfaceExt
}

Ext handles custom (de)serialization of custom types / extensions.

type Handle

type Handle interface {
	Name() string
	// contains filtered or unexported methods
}

Handle defines a specific encoding format. It also stores any runtime state used during an Encoding or Decoding session e.g. stored state about Types, etc.

Once a handle is configured, it can be shared across multiple Encoders and Decoders.

Note that a Handle is NOT safe for concurrent modification.

A Handle also should not be modified after it is configured and has been used at least once. This is because stored state may be out of sync with the new configuration, and a data race can occur when multiple goroutines access it. i.e. multiple Encoders or Decoders in different goroutines.

Consequently, the typical usage model is that a Handle is pre-configured before first time use, and not modified while in use. Such a pre-configured Handle is safe for concurrent access.

type InterfaceExt

type InterfaceExt interface {
	// ConvertExt converts a value into a simpler interface for easy encoding
	// e.g. convert time.Time to int64.
	//
	// Note: v is a pointer iff the registered extension type is a struct or array kind.
	ConvertExt(v interface{}) interface{}

	// UpdateExt updates a value from a simpler interface for easy decoding
	// e.g. convert int64 to time.Time.
	//
	// Note: dst is always a pointer kind to the registered extension type.
	UpdateExt(dst interface{}, src interface{})
}

InterfaceExt handles custom (de)serialization of types to/from another interface{} value. The Encoder or Decoder will then handle the further (de)serialization of that known type.

It is used by codecs (e.g. cbor, json) which use the format to do custom serialization of types.

type JsonHandle

type JsonHandle struct {
	BasicHandle

	// Indent indicates how a value is encoded.
	//   - If positive, indent by that number of spaces.
	//   - If negative, indent by that number of tabs.
	Indent int8

	// IntegerAsString controls how integers (signed and unsigned) are encoded.
	//
	// Per the JSON Spec, JSON numbers are 64-bit floating point numbers.
	// Consequently, integers > 2^53 cannot be represented as a JSON number without losing precision.
	// This can be mitigated by configuring how to encode integers.
	//
	// IntegerAsString interprets the following values:
	//   - if 'L', then encode integers > 2^53 as a json string.
	//   - if 'A', then encode all integers as a json string
	//             containing the exact integer representation as a decimal.
	//   - else    encode all integers as a json number (default)
	IntegerAsString byte

	// HTMLCharsAsIs controls how to encode some special characters to html: < > &
	//
	// By default, we encode them as \uXXXX
	// to prevent security holes when served from some browsers.
	HTMLCharsAsIs bool

	// PreferFloat says that we will default to decoding a number as a float.
	// If not set, we will examine the characters of the number and decode as an
	// integer type if it doesn't have any of the characters [.eE].
	PreferFloat bool

	// TermWhitespace says that we add a whitespace character
	// at the end of an encoding.
	//
	// The whitespace is important, especially if using numbers in a context
	// where multiple items are written to a stream.
	TermWhitespace bool

	// MapKeyAsString says to encode all map keys as strings.
	//
	// Use this to enforce strict json output.
	// The only caveat is that a nil value is ALWAYS written as null (never as "null").
	MapKeyAsString bool

	// RawBytesExt, if configured, is used to encode and decode raw bytes in a custom way.
	// If not configured, raw bytes are encoded to/from base64 text.
	RawBytesExt InterfaceExt
	// contains filtered or unexported fields
}

JsonHandle is a handle for JSON encoding format.

Json is comprehensively supported:

  • decodes numbers into interface{} as int, uint or float64 based on how the number looks and some config parameters e.g. PreferFloat, SignedInt, etc.
  • decode integers from float formatted numbers e.g. 1.27e+8
  • decode any json value (numbers, bool, etc) from quoted strings
  • configurable way to encode/decode []byte; by default, encodes and decodes []byte using base64 Std Encoding
  • UTF-8 support for encoding and decoding

It has better performance than the json library in the standard library, by leveraging the performance improvements of the codec library.

In addition, it doesn't read more bytes than necessary during a decode, which allows reading multiple values from a stream containing json and non-json content. For example, a user can read a json value, then a cbor value, then a msgpack value, all from the same stream in sequence.

Note that, when decoding quoted strings, invalid UTF-8 or invalid UTF-16 surrogate pairs are not treated as an error. Instead, they are replaced by the Unicode replacement character U+FFFD.

Note also that the float values for NaN, +Inf or -Inf are encoded as null, as suggested by NOTE 4 of the ECMA-262 ECMAScript Language Specification 5.1 edition. see http://www.ecma-international.org/publications/files/ECMA-ST/Ecma-262.pdf .

Note the following behaviour differences vs std-library encoding/json package:

  • struct field names matched in case-sensitive manner

func (*JsonHandle) Name

func (h *JsonHandle) Name() string

Name returns the name of the handle: json

func (*JsonHandle) SetInterfaceExt

func (h *JsonHandle) SetInterfaceExt(rt reflect.Type, tag uint64, ext InterfaceExt) (err error)

SetInterfaceExt sets an extension

func (JsonHandle) TimeBuiltin added in v1.2.5

func (x JsonHandle) TimeBuiltin() bool

TimeBuiltin returns whether time.Time OOTB support is used, based on the initial configuration of TimeNotBuiltin

type MapBySlice

type MapBySlice interface {
	MapBySlice()
}

MapBySlice is a tag interface that denotes the slice or array value should encode as a map in the stream, and can be decoded from a map in the stream.

The slice or array must contain a sequence of key-value pairs. The length of the slice or array must be even (fully divisible by 2).

This affords storing a map in a specific sequence in the stream.

Example usage:

type T1 []string         // or []int or []Point or any other "slice" type
func (_ T1) MapBySlice() {} // T1 now implements MapBySlice, and will be encoded as a map
type T2 struct { KeyValues T1 }

var kvs = []string{"one", "1", "two", "2", "three", "3"}
var v2 = T2{ KeyValues: T1(kvs) }
// v2 will be encoded like the map: {"KeyValues": {"one": "1", "two": "2", "three": "3"} }

The support of MapBySlice affords the following:

  • A slice or array type which implements MapBySlice will be encoded as a map
  • A slice can be decoded from a map in the stream

type MissingFielder

type MissingFielder interface {
	// CodecMissingField is called to set a missing field and value pair.
	//
	// It returns true if the missing field was set on the struct.
	CodecMissingField(field []byte, value interface{}) bool

	// CodecMissingFields returns the set of fields which are not struct fields.
	//
	// Note that the returned map may be mutated by the caller.
	CodecMissingFields() map[string]interface{}
}

MissingFielder defines the interface allowing structs to internally decode or encode values which do not map to struct fields.

We expect that this interface is bound to a pointer type (so the mutation function works).

A use-case is if a version of a type unexports a field, but you want compatibility between both versions during encoding and decoding.

Note that the interface is completely ignored during codecgen.

type MsgpackHandle

type MsgpackHandle struct {
	BasicHandle

	// NoFixedNum says to output all signed integers as 2-bytes, never as 1-byte fixednum.
	NoFixedNum bool

	// WriteExt controls whether the new spec is honored.
	//
	// With WriteExt=true, we can encode configured extensions with extension tags
	// and encode string/[]byte/extensions in a way compatible with the new spec
	// but incompatible with the old spec.
	//
	// For compatibility with the old spec, set WriteExt=false.
	//
	// With WriteExt=false:
	//    configured extensions are serialized as raw bytes (not msgpack extensions).
	//    reserved byte descriptors like Str8 and those enabling the new msgpack Binary type
	//    are not encoded.
	WriteExt bool

	// PositiveIntUnsigned says to encode positive integers as unsigned.
	PositiveIntUnsigned bool
	// contains filtered or unexported fields
}

MsgpackHandle is a Handle for the Msgpack Schema-Free Encoding Format.

func (*MsgpackHandle) Name

func (h *MsgpackHandle) Name() string

Name returns the name of the handle: msgpack

func (*MsgpackHandle) SetBytesExt

func (h *MsgpackHandle) SetBytesExt(rt reflect.Type, tag uint64, ext BytesExt) (err error)

SetBytesExt sets an extension

func (MsgpackHandle) TimeBuiltin added in v1.2.5

func (x MsgpackHandle) TimeBuiltin() bool

TimeBuiltin returns whether time.Time OOTB support is used, based on the initial configuration of TimeNotBuiltin

type MsgpackSpecRpcMultiArgs

type MsgpackSpecRpcMultiArgs []interface{}

MsgpackSpecRpcMultiArgs is a special type which signifies to the MsgpackSpecRpcCodec that the backend RPC service takes multiple arguments, which have been arranged in sequence in the slice.

The Codec then passes it AS-IS to the rpc service (without wrapping it in an array of 1 element).

type RPCOptions

type RPCOptions struct {
	// RPCNoBuffer configures whether we attempt to buffer reads and writes during RPC calls.
	//
	// Set RPCNoBuffer=true to turn buffering off.
	// Buffering can still be done if buffered connections are passed in, or
	// buffering is configured on the handle.
	RPCNoBuffer bool
}

RPCOptions holds options specific to rpc functionality

type Raw

type Raw []byte

Raw represents raw formatted bytes. We "blindly" store it during encode and retrieve the raw bytes during decode. Note: it is dangerous during encode, so we may gate the behaviour behind an Encode flag which must be explicitly set.

type RawExt

type RawExt struct {
	Tag uint64
	// Data is the []byte which represents the raw ext. If nil, ext is exposed in Value.
	// Data is used by codecs (e.g. binc, msgpack, simple) which do custom serialization of types
	Data []byte
	// Value represents the extension, if Data is nil.
	// Value is used by codecs (e.g. cbor, json) which leverage the format to do
	// custom serialization of the types.
	Value interface{}
}

RawExt represents raw unprocessed extension data. Some codecs will decode extension data as a *RawExt if there is no registered extension for the tag.

Only one of Data or Value is nil. If Data is nil, then the content of the RawExt is in the Value.

type Rpc

type Rpc interface {
	ServerCodec(conn io.ReadWriteCloser, h Handle) rpc.ServerCodec
	ClientCodec(conn io.ReadWriteCloser, h Handle) rpc.ClientCodec
}

Rpc provides a rpc Server or Client Codec for rpc communication.

type Selfer

type Selfer interface {
	CodecEncodeSelf(*Encoder)
	CodecDecodeSelf(*Decoder)
}

Selfer defines methods by which a value can encode or decode itself.

Any type which implements Selfer will be able to encode or decode itself. Consequently, during (en|de)code, this takes precedence over (text|binary)(M|Unm)arshal or extension support.

By definition, it is not allowed for a Selfer to directly call Encode or Decode on itself. If that is done, Encode/Decode will rightfully fail with a Stack Overflow style error. For example, the snippet below will cause such an error.

type testSelferRecur struct{}
func (s *testSelferRecur) CodecEncodeSelf(e *Encoder) { e.MustEncode(s) }
func (s *testSelferRecur) CodecDecodeSelf(d *Decoder) { d.MustDecode(s) }

Note: *the first set of bytes of any value MUST NOT represent nil in the format*. This is because, during each decode, we first check whether the next set of bytes represents nil, and if so, we just set the value to nil.

type SimpleHandle

type SimpleHandle struct {
	BasicHandle
	// EncZeroValuesAsNil says to encode zero values for numbers, bool, string, etc as nil
	EncZeroValuesAsNil bool
	// contains filtered or unexported fields
}

SimpleHandle is a Handle for a very simple encoding format.

simple is a simplistic codec similar to binc, but not as compact.

  • Encoding of a value is always preceded by the descriptor byte (bd)
  • True, false, nil are encoded fully in 1 byte (the descriptor)
  • Integers (intXXX, uintXXX) are encoded in 1, 2, 4 or 8 bytes (plus a descriptor byte). There are positive (uintXXX and intXXX >= 0) and negative (intXXX < 0) integers.
  • Floats are encoded in 4 or 8 bytes (plus a descriptor byte)
  • Lengths of containers (strings, bytes, array, map, extensions) are encoded in 0, 1, 2, 4 or 8 bytes. Zero-length containers have no length encoded. For others, the number of bytes is given by pow(2, bd%3)
  • maps are encoded as [bd] [length] [[key][value]]...
  • arrays are encoded as [bd] [length] [value]...
  • extensions are encoded as [bd] [length] [tag] [byte]...
  • strings/bytearrays are encoded as [bd] [length] [byte]...
  • time.Time are encoded as [bd] [length] [byte]...

The full spec will be published soon.

func (*SimpleHandle) Name

func (h *SimpleHandle) Name() string

Name returns the name of the handle: simple

func (*SimpleHandle) SetBytesExt

func (h *SimpleHandle) SetBytesExt(rt reflect.Type, tag uint64, ext BytesExt) (err error)

SetBytesExt sets an extension

func (SimpleHandle) TimeBuiltin added in v1.2.5

func (x SimpleHandle) TimeBuiltin() bool

TimeBuiltin returns whether time.Time OOTB support is used, based on the initial configuration of TimeNotBuiltin

type TypeInfos

type TypeInfos struct {
	// contains filtered or unexported fields
}

TypeInfos caches typeInfo for each type on first inspection.

It is configured with a set of tag keys, which are used to get configuration for the type.

func NewTypeInfos

func NewTypeInfos(tags []string) *TypeInfos

NewTypeInfos creates a TypeInfos given a set of struct tags keys.

This allows users to customize the struct tag keys which contain configuration of their types.

Directories

Path Synopsis
bench module
codecgen module
