Documentation ¶
Index ¶
- func CanAppearInNumber(c rune) bool
- func HexBytesToUint(in []byte) (result uint64, err error)
- func IsDelim(c rune) bool
- func IsHexDigit(c rune) bool
- func IsValidEscapedSymbol(c rune) bool
- func StringDeepCopy(s string) string
- func UnescapeBytesInplace(input []byte) ([]byte, error)
- type JSONLexer
- type TokenGeneric
- type TokenType
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func CanAppearInNumber ¶ added in v0.2.2
CanAppearInNumber reports whether the given rune can appear in a JSON number.
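A predicate along these lines could look as follows. This is a sketch of what such a check plausibly covers based on the JSON number grammar (digits, signs, decimal point, exponent marker), not the package's actual implementation:

```go
package main

import "fmt"

// canAppearInNumber sketches a check for runes that may occur in a JSON
// number: digits, a sign, a decimal point, or an exponent marker.
func canAppearInNumber(c rune) bool {
	return (c >= '0' && c <= '9') ||
		c == '-' || c == '+' || c == '.' || c == 'e' || c == 'E'
}

func main() {
	fmt.Println(canAppearInNumber('7')) // true
	fmt.Println(canAppearInNumber('E')) // true
	fmt.Println(canAppearInNumber('x')) // false
}
```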
func HexBytesToUint ¶ added in v0.2.2
func IsHexDigit ¶ added in v0.2.0
IsHexDigit reports whether the given rune is a valid hexadecimal digit.
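The two hex helpers fit together naturally: a digit test and a fold of hex bytes into a uint64. The sketch below is a guess at their behavior from the index signatures (`IsHexDigit(c rune) bool`, `HexBytesToUint(in []byte) (uint64, error)`), not the package's code:

```go
package main

import "fmt"

// isHexDigit mirrors what IsHexDigit reports: 0-9, a-f, A-F.
func isHexDigit(c rune) bool {
	return (c >= '0' && c <= '9') ||
		(c >= 'a' && c <= 'f') ||
		(c >= 'A' && c <= 'F')
}

// hexBytesToUint sketches HexBytesToUint: fold a slice of ASCII hex
// digits into a uint64, failing on any non-hex byte.
func hexBytesToUint(in []byte) (uint64, error) {
	var result uint64
	for _, b := range in {
		var d uint64
		switch {
		case b >= '0' && b <= '9':
			d = uint64(b - '0')
		case b >= 'a' && b <= 'f':
			d = uint64(b-'a') + 10
		case b >= 'A' && b <= 'F':
			d = uint64(b-'A') + 10
		default:
			return 0, fmt.Errorf("not a hex digit: %q", b)
		}
		result = result<<4 | d
	}
	return result, nil
}

func main() {
	v, err := hexBytesToUint([]byte("00FF"))
	fmt.Println(v, err) // 255 <nil>
}
```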
func IsValidEscapedSymbol ¶ added in v0.2.0
IsValidEscapedSymbol reports whether the given rune is one of the special symbols permitted in JSON
func StringDeepCopy ¶ added in v0.2.0
StringDeepCopy creates a copy of the given string with its own underlying byte array. Use this function to make a copy of a string returned by Token().
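The essence of such a deep copy can be sketched in one line: converting through []byte forces a fresh allocation, so the result no longer aliases the lexer's internal buffer. This is an illustration of the technique, not necessarily the package's exact implementation (the standard library's strings.Clone does the same since Go 1.18):

```go
package main

import "fmt"

// stringDeepCopy sketches the deep-copy idea: the string -> []byte -> string
// round trip allocates a new backing array for the result.
func stringDeepCopy(s string) string {
	return string([]byte(s))
}

func main() {
	tok := "transient" // imagine this aliases the lexer's ring buffer
	saved := stringDeepCopy(tok)
	fmt.Println(saved) // safe to keep after the next Token call
}
```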
func UnescapeBytesInplace ¶ added in v0.2.2
UnescapeBytesInplace iterates over the given byte slice, unescaping all escaped symbols in place. Since the unescaped symbols take less space, the shrunken slice is returned.
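The in-place technique can be sketched with a write cursor trailing a read cursor: each two-byte escape such as `\n` collapses into one byte, and the slice is re-sliced to its shrunken length. This sketch handles only single-character escapes and omits `\uXXXX`; the package's actual escape coverage may differ:

```go
package main

import (
	"errors"
	"fmt"
)

// unescapeBytesInplace sketches in-place unescaping: w (write) trails
// r (read), so the output overwrites the input and never needs a second
// buffer. The returned slice shares the input's backing array.
func unescapeBytesInplace(input []byte) ([]byte, error) {
	w := 0
	for r := 0; r < len(input); r++ {
		if input[r] != '\\' {
			input[w] = input[r]
			w++
			continue
		}
		r++ // consume the backslash, look at the escaped symbol
		if r == len(input) {
			return nil, errors.New("dangling backslash at end of input")
		}
		switch input[r] {
		case 'n':
			input[w] = '\n'
		case 't':
			input[w] = '\t'
		case '"', '\\', '/':
			input[w] = input[r]
		default:
			return nil, fmt.Errorf(`unsupported escape: \%c`, input[r])
		}
		w++
	}
	return input[:w], nil
}

func main() {
	out, err := unescapeBytesInplace([]byte(`say \"hi\"`))
	fmt.Printf("%q %v\n", out, err)
}
```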
Types ¶
type JSONLexer ¶
type JSONLexer struct {
// contains filtered or unexported fields
}
JSONLexer is a JSON lexical analyzer with a streaming API, where the stream is a sequence of JSON tokens. JSONLexer does its own IO buffering, so prefer low-level readers if you want to minimize the memory footprint.
JSONLexer uses a ring buffer for parsing tokens; every token must fit within it, otherwise the buffer is grown automatically. The initial buffer size is 4096 bytes, but you can tweak it with SetBufSize() if you know that most tokens are going to be long.
JSONLexer uses unsafe pointers into the underlying buf to minimize allocations, see Token() for the provided guarantees.
func NewJSONLexer ¶
NewJSONLexer creates a new JSONLexer with the given reader.
func (*JSONLexer) SetBufSize ¶
SetBufSize creates a new buffer of the given size. MUST be called before parsing starts.
func (*JSONLexer) SetSkipDelims ¶
SetSkipDelims tells JSONLexer to skip delimiters and return only keys and values. This can be useful in case you want to simply match the input to some specific grammar and have no intention of doing full syntax analysis.
func (*JSONLexer) Token ¶
Token returns the next JSON token; all delimiters are skipped. Token will return io.EOF when all input has been exhausted. All strings returned by Token are guaranteed to be valid only until the next Token call; to keep one longer, you MUST make a deep copy.
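A typical streaming loop over the documented API might look as follows. The import path is an assumption (substitute the module's real path), and the example uses TokenFast, whose signature the index gives explicitly; io.EOF handling follows the Token documentation above:

```go
package main

import (
	"fmt"
	"io"
	"strings"

	// Hypothetical import path; replace with the module's actual path.
	jlexer "github.com/example/jsonlexer"
)

func main() {
	l := jlexer.NewJSONLexer(strings.NewReader(`{"id": 42}`))
	for {
		tok, err := l.TokenFast()
		if err == io.EOF {
			break // all input has been exhausted
		}
		if err != nil {
			panic(err)
		}
		// tok.String() would alias the lexer's buffer; use StringCopy()
		// (or StringDeepCopy) for anything that must outlive this iteration.
		fmt.Println(tok.Type())
	}
}
```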
func (*JSONLexer) TokenFast ¶ added in v0.2.0
func (l *JSONLexer) TokenFast() (TokenGeneric, error)
TokenFast is a more efficient version of Token(). All strings returned by TokenFast are guaranteed to be valid only until the next call; to keep one longer, you MUST make a deep copy.
type TokenGeneric ¶ added in v0.2.0
type TokenGeneric struct {
// contains filtered or unexported fields
}
TokenGeneric is a generic struct used to represent any possible JSON token.
func (*TokenGeneric) Bool ¶ added in v0.2.0
func (t *TokenGeneric) Bool() bool
func (*TokenGeneric) Delim ¶ added in v0.2.0
func (t *TokenGeneric) Delim() byte
func (*TokenGeneric) IsNull ¶ added in v0.2.0
func (t *TokenGeneric) IsNull() bool
func (*TokenGeneric) Number ¶ added in v0.2.0
func (t *TokenGeneric) Number() float64
func (*TokenGeneric) String ¶ added in v0.2.0
func (t *TokenGeneric) String() string
String returns a string that points into the internal lexer buffer and is guaranteed to be valid only until the next Token call; to keep it longer, you MUST make a deep copy.
func (*TokenGeneric) StringCopy ¶ added in v0.2.0
func (t *TokenGeneric) StringCopy() string
StringCopy returns a deep copy of the string.
func (*TokenGeneric) Type ¶ added in v0.2.0
func (t *TokenGeneric) Type() TokenType
Type returns the type of the token.