codf

package module v0.3.0

Published: Mar 2, 2020 License: BSD-2-Clause

README

codf

$ go get -u go.spiff.io/codf

codf is a personal config language for declaring structured data with support for a somewhat-wide range of built-in types.

The root codf package only covers lexing, parsing, and the AST right now.

Rationale

Alternatively: Why yet another config library?

Codf exists primarily to make expressive, structured configuration easy to use. Expressive in this case means, more or less, relatively complex but easy to read and write. There are programs like this in the wild, such as nginx, where configuration is integral to making use of them. Codf takes inspiration from nginx in particular, along with other curly-braced config languages. The goal is to provide structure for programs where the status quo of JSON-equivalent languages does not.

With that in mind, codf is the result of several years of building programs that require configuration to define their runtime behavior. This includes several programs whose configuration borders on scripting. These programs need configuration not only for inputs (sockets, files, DBs) and outputs (metrics, logs, more DBs), but also tasks, schedules, data pipelines, state transitions, and so on. Without these, the programs are mostly inert and do nothing — this makes configuration crucial to their operation. Thus far, all of these programs have used some JSON-like language, be it JSON itself, HCL, or YAML (with TOML showing up only in other programs I use).

While all of these provide some structure, they’re ultimately poor expressions of complex program configuration. They require a lot of work-arounds to play nice with Go: often an AST is unavailable (I’ve only seen this exposed in HCL), so all pre-processing, such as including documents or expanding variables, requires you to decode to a map to manipulate the document before consuming it; the structure of the config becomes tangled up in not violating a language spec (especially where keys are unique); and error messages are often cryptic and useless to users. All of this can be made to work, but it looks lazy and feels like a kludge.

If you look outside of the common languages, you see configuration like nginx, apt.conf, gnatsd (a mishmash of YAML/JSON/HCL; notable more for being specific to gnatsd), Caddyfiles, and others (Lisps and Erlang terms deserve honorable mention since a program’s language can work as configuration as well). So, it’s clear that there’s a way to express more than just key = value for configuration, but if there are any libraries for this, I haven’t yet found them.

So, I wrote codf. Codf allows me to write config files with directives and structure that expresses more than just key-value pairs. In fact, it can parse a wide range of nginx configs with some adjustments for comments and quoting style. Using it to handle program configuration is fairly easy by walking the AST, and enough metadata is available through it that it’s possible to provide users with helpful error feedback. Overall, it leads to much cleaner configuration with less reliance on reflection, fewer unmarshaling workarounds, and better help for users. This in turn makes me happier, because I can write programs that function the way I really want them to.

Language

A codf document is a UTF-8 text document containing statements and sections. Sections, in turn, contain further statements and sections. Documents may also contain comments, but comments are not part of a document's structure or AST.

Statements

A statement is a name followed by optional values and terminated by a semicolon (;):

enable-gophers;
enable-gophers yes;
enable-gophers yes with-butter;
; // This line doesn't have a statement -- a semicolon on its own is
  // an empty statement and does nothing.
Sections

A section is a name followed by optional values and braces, enclosing further sections or statements:

outer-section {
    inner-section /some/path {
        // ...
    }

    enable-gophers yes;
}
Parameters

Both sections and statements may have an optional set of values following their name, as above. These are called parameters for lack of a better term.

Parameters must be one of the types described below.

Comments

A comment begins with two forward slashes (//) and extends to the nearest line ending (a linefeed / "\n" / 0x0A byte):

// This is a comment
//This is also a comment
this// is not a comment;

Comments are not included in the parsed AST and may not be used to influence configuration.

Types

Supported value types are integers, floats, rationals, durations, strings, booleans, regular expressions, arrays, and maps.

Integers

Integers can be written in base 10, 16, 8, and 2, as well as arbitrary bases in the range of 2-36:

base-10 12345;
base-16 0x70f; // 1807
base-8  0712;  // 458
base-2  0b101; // 5
base-3  3#210; // 21
base-36 36#zz; // 1295

Integers are arbitrarily long and represented as a *big.Int.

Floats / Decimals

Floats are written either as integers with exponents or decimal numbers (with optional exponent):

float-decimal   1.23456;    // positive
float-exponent  -123456e-5; // negative
float-big       1.23456789e200;

Floats are represented using a *big.Float. Precision can be adjusted by changing the Parser's Precision field -- if 0, it defaults to DefaultPrecision.

Rationals

Rationals are expressed as numerator/denominator, similar to lisps. It is illegal to use a denominator of zero (0):

rational -5/40; // -1/8
rational 0/40;  // 0/1

Rationals are represented using a *big.Rat.

Durations

Durations are expressed as a sequence of integers or decimal numbers followed by an interval unit (ns, us, ms, s, m, or h). This is compatible with the Go stdlib's durations, but does not allow decimals beginning with a period as Go does (e.g., ".5s" -- this has to be written as "0.5s" in codf). As with Go, it's valid to use "µs" or "us" for microseconds.

durations 0s -1s 1h 500ms;  // 0s -1s 1h0m0s 500ms
decimals  0.5us 0.5s 0.5ms; // 500ns 500ms 500µs

Durations are represented using time.Duration.

Strings

Strings take three forms: double-quoted sequences of characters, raw strings, and barewords.

Double-quoted strings

Double-quoted strings are surrounded by double quotes ("...") and permit all Go string escape codes (such as \n or \Uhhhhhhhh). In addition, in contrast to Go, newlines in double-quoted strings are permitted without escaping them.

simple-string "foobar";
escapes       "foo\nbar"; // "foo\nbar"
newline       "foo
bar";                     // "foo\nbar"
Raw strings

Raw strings are surrounded by backquotes (or backticks -- the "`" character). Like Go raw string literals, raw strings can contain almost anything. Unlike Go raw string literals, a backquote can be escaped inside of a raw string by writing two of them: "``". For example:

empty           ``;           // ""
with-quotes     `"foobar"`;   // "\"foobar\""
with-backquotes ```foobar```; // "`foobar`"
Barewords

Barewords are unquoted strings and usually more convenient than other strings.

A bareword is any text that begins with a Unicode graphical character minus syntactically-important characters: decimal numbers, quotes, semicolons, braces, pound, and plus/minus. The rest of a bareword may contain decimal numbers, pound, and plus/minus -- semicolons, braces, and quotes are still reserved.

leading-dot .dot;           // ".dot"
symbols     $^--;           // "$^--"
slashes     /foo/bar;       // "/foo/bar"
commas      Hello, World;   // "Hello," "World" -- two strings
unicode     こんにちは世界; // "こんにちは世界"

It is not possible to write a bareword that uses a boolean keyword except as a statement name (described below).

Barewords are represented as string.

Booleans

Booleans can be represented using the following values:

| True | False |
|------+-------|
| TRUE | FALSE |
| True | False |
| true | false |
| YES  | NO    |
| Yes  | No    |
| yes  | no    |

All keywords can be written in lowercase, uppercase, or titlecase. For example:

t-values YES True true; // true true true
f-values FALSE No no;   // false false false

Other case combinations are not valid (i.e., boolean keywords are case-sensitive).

Booleans can only occur in parameters to statements and sections, in arrays, and as map values. For example, the bareword "true" as a statement name is just the string "true", while the bareword "true" in an array or as a map value is the boolean true. Booleans are not permitted as map keys.

Booleans are represented as bool.

Regular Expressions

Regular expressions are written as #/regex/, where internal /s can be escaped with a backslash (\/). These are treated as RE2 regular expressions and parsed using the stdlib regexp package.

empty-regex  #//;
simple-regex #/foo/;
slash-regex  #/foo\/bar/;

Regular expressions are represented as *regexp.Regexp.

Arrays

Arrays are ordered lists of values between square brackets ([]). Values are delimited by whitespace or other sentinel tokens (such as brackets and comments):

empty-array [];
numbers     [1 2 3];
nested      [[1 2] [3 4]];

Any of the above value types can be held by an array.

An array in the AST is represented as an *Array, which contains a sequence of []ExprNode.

Maps

Maps are unordered sets of space-delimited key-value pairs between curly braces, prefixed by a pound (#{}). Key-value pairs in a map are written as KEY VALUE (minus quotes), where each KEY must be followed by a VALUE (separated by a space). For example:

empty-map #{};
normal-map #{
    // Key    Value
    foo      1234      // "foo" => 1234
    "bar"    #/baz/    // "bar"  => #/baz/
};

Map keys must be strings, either bare or quoted. If a key occurs more than once in a map, only the last value is kept.

Maps are represented as a *Map, which contains a map of strings to *MapEntry. Each *MapEntry contains the original key, value, and the order that it was parsed in -- as above, codf maps are unordered, so ordering is intended only to be kept for reformatting and other tools right now.

License

codf is licensed under the BSD two-clause license. The license can be read in the COPYING file.


Documentation

Overview

Package codf is used to lex and parse codf configuration files.

Codf is similar in syntax to nginx configurations, albeit with differences (to keep parsing simple). Like its peers, codf structures configuration as a combination of sections and statements, both of which may accept values. For example, an HTTP server's configuration might look like the following (in keeping with the nginx references):

server go.spiff.io {
    listen 0.0.0.0:80;
    control unix:///var/run/httpd.sock;
    proxy unix:///var/run/go-redirect.sock {
        strip-x-headers yes;
        log-access no;
    }
    // keep caches in 64mb of memory
    cache memory 64mb {
         expire 10m 404;
         expire 1h  301 302;
         expire 5m  200;
    }
}

codf is intended to accept generic input that is then filtered by the program using it, so syntax is kept restrictive enough to avoid too-complex parsing rules.

Constants

const (
	TEOF // !.

	TWhitespace   // [ \n\r\t]+
	TComment      // '//' { !EOL . } ( EOL | EOF )
	TWord         // BarewordRune {BarewordRune}
	TSemicolon    // ';'
	TCurlOpen     // '{'
	TCurlClose    // '}'
	TBracketOpen  // '['
	TBracketClose // ']'
	TMapOpen      // '#{'
	TRegexp       // '#/' { '\\/' | [^/] } '/'

	// Strings also include TWord, above, which is an unquoted string.
	// Escape := '\\' ( [abfnrtv\\"] | 'x' Hex2 | 'u' Hex4 | 'U' Hex8 | Oct3 )
	TString    // '"' ( Escape | [^"] )* '"'
	TRawString // '`' ( '``' | [^`] )* '`'

	// TBoolean is produced by the parser transforming a boolean TWord into a TBoolean with
	// a corresponding bool value.
	TBoolean // Title/lower/UPPER of: 'true' | 'false' | 'yes' | 'no'

	TInteger  // '0' | [1-9] [0-9]*
	TFloat    // Integer '.' Integer Exponent? | Integer Exponent
	THex      // '0' [Xx] [a-fA-F0-9]+
	TOctal    // '0' [0-7]+
	TBinary   // '0' [bB] [01]+
	TBaseInt  // 2-36 '#' [a-zA-Z0-9]+ (corresponding to base)
	TDuration // 1m1.033s1h...
	TRational // Integer '/' Integer
)

Lex-able Token kinds encountered in codf.

const DefaultPrecision = 80

DefaultPrecision is the default precision of TFloat tokens produced by Lexer. This can be overridden in the Lexer by setting its Precision field to a non-zero value.

Variables

var ErrTooManyExprs = errors.New("too many expresssions")

ErrTooManyExprs is returned by ParseExpr when ParseExpr would return more than a single ExprNode.

var ErrUnexpectedEOF = errors.New("unexpected EOF")

ErrUnexpectedEOF is returned by the Lexer when EOF is encountered mid-token where a valid token cannot be cut off.

Functions

func BigFloat

func BigFloat(node Node) (v *big.Float)

BigFloat returns the value held by node as a *big.Float. If the node doesn't hold a float, it returns nil.

func BigInt

func BigInt(node Node) (v *big.Int)

BigInt returns the value held by node as a *big.Int. If the node doesn't hold an integer, it returns nil.

func BigRat

func BigRat(node Node) (v *big.Rat)

BigRat returns the value held by node as a *big.Rat. If the node doesn't hold a rational, it returns nil.

func Bool

func Bool(node Node) (v, ok bool)

Bool returns the value held by node as a boolean and true. If the node doesn't hold a boolean, it returns false for both values (v and ok).

func Duration

func Duration(node Node) (v time.Duration, ok bool)

Duration returns the value held by node as a time.Duration and true. If the node doesn't hold a duration, it returns 0 and false.

func Float64

func Float64(node Node) (v float64, ok bool)

Float64 returns the value held by node as a float64 and true. Integer and rational nodes are converted to floats. If the node doesn't hold a float, integer, or rational, it returns 0 and false.

func Int64

func Int64(node Node) (v int64, ok bool)

Int64 returns the value held by node as an int64 and true. Float and rational nodes are truncated to integers; a rational that is not already an integer is first converted to a float and then truncated. If the node doesn't hold an integer, float, or rational, it returns 0 and false.

func Quote

func Quote(node Node) (str string, ok bool)

Quote returns the string value of node if and only if node is a quoted string.

func Regexp

func Regexp(node Node) (v *regexp.Regexp)

Regexp returns the value held by node as a *regexp.Regexp. If the node doesn't hold a regexp, it returns nil.

func String

func String(node Node) (str string, ok bool)

String returns the value held by node as a string and true. If the node doesn't hold a string or word, it returns the empty string and false.

func Value

func Value(node Node) interface{}

Value returns the value of node. If node is an ExprNode, it will return that node's value. Otherwise, it will return any value associated with the node's token. It may be nil for nodes whose token is punctuation or an opening brace or bracket.

func Walk

func Walk(parent ParentNode, walker Walker) (err error)

Walk walks a codf AST starting with, but not including, parent. The parent itself is assumed to have already been walked.

Walk will call walker.Statement for each statement encountered in parent and walker.EnterSection for each section (and walker.ExitSection if implemented).

If walker.EnterSection returns a non-nil Walker, Walk will recursively call Walk with the section and the returned Walker.

Walk will return a *WalkError if any error occurs during a walk. The WalkError will contain both the parent and child node that the error occurred for.

If the walker is a WalkMapper, any error in attempting to map a node will return a WalkError with the original node, not any resulting node. If the mapping is to a nil node without error, the node is deleted from the parent.

Nil child nodes are skipped.

func Word

func Word(node Node) (str string, ok bool)

Word returns the string value of node if and only if node is a word.

Types

type Array

type Array struct {
	StartTok Token
	EndTok   Token
	Elems    []ExprNode
}

Array is an ExprNode for a '[ value ]' array in a document.

func (*Array) String

func (a *Array) String() string

func (*Array) Token

func (a *Array) Token() Token

Token returns the first Token of the array (its opening bracket).

func (*Array) Value

func (a *Array) Value() interface{}

Value returns the elements of the array. This is always a value of the type []ExprNode.

type Document

type Document struct {
	// Name is usually a filename assigned by the user. It is not assigned by a parser and is
	// just metadata on the Document.
	Name     string
	Children []Node // Sections and Statements that make up the Document.
}

Document is the root of a codf document -- it is functionally similar to a Section, but has no parameters.

func (*Document) Nodes

func (d *Document) Nodes() []Node

Nodes returns the child nodes of the document.

func (*Document) String

func (d *Document) String() string

func (*Document) Token

func (*Document) Token() Token

Token returns an empty token, as documents are the root node and only contain other nodes.

type ExpectedError

type ExpectedError struct {
	// Tok is the token that did not meet expectations.
	Tok Token
	// Msg is a message describing the expected token(s).
	Msg string
}

ExpectedError is returned when a token, Tok, is encountered that does not meet expectations.

func (*ExpectedError) Error

func (e *ExpectedError) Error() string

Error is an implementation of error.

type ExprNode

type ExprNode interface {
	Node
	Value() interface{}
}

ExprNode is a Node that has a concrete value associated with itself, such as a string, bool, rational, or other parse-able value.

type Lexer

type Lexer struct {
	// Precision is the precision used in *big.Float when taking the actual value of a TFloat
	// token.
	Precision uint
	// Name is the name of the token source currently being lexed. It is used to identify the
	// source of a location by name. It is not necessarily a filename, but usually is.
	//
	// If the scanner provided to the Lexer implements NamedScanner, the scanner's name takes
	// priority.
	Name string

	// Flags is a set of Lex flags that can be used to change lexer behavior.
	Flags LexerFlag
	// contains filtered or unexported fields
}

Lexer takes an input sequence of runes and constructs Tokens from it.

func NewLexer

func NewLexer(r io.Reader) *Lexer

NewLexer allocates a new Lexer that reads runes from r.

func (*Lexer) ReadToken

func (l *Lexer) ReadToken() (tok Token, err error)

ReadToken returns a token or an error. If EOF occurs, a TEOF token is returned without an error, and will be returned by all subsequent calls to ReadToken.

type LexerFlag added in v0.2.0

type LexerFlag uint64

LexerFlag is a bitset representing a combination of zero or more Lex flags, such as LexNoRegexps, LexWordLiterals, and others. These Lex flags affect the Lexer's output, allowing one to disable specific tokenization behavior.

const (
	// LexDefaultFlags is the empty flag set (the default).
	LexDefaultFlags LexerFlag = 0

	// LexWordLiterals treats all literals, other than strings and compounds (maps, arrays) as
	// words. This is the union of LexNo* flags.
	LexWordLiterals = LexNoRegexps |
		LexNoBools |
		LexNoDurations |
		LexNoRationals |
		LexNoFloats |
		LexNoBaseInts |
		LexNoNumbers
)
const (
	// LexNoRegexps disables regular expressions.
	LexNoRegexps LexerFlag = 1 << iota
	// LexNoBools disables true/false/yes/no parsing.
	LexNoBools
	// LexNoDurations disables durations.
	LexNoDurations
	// LexNoRationals disables rationals.
	LexNoRationals
	// LexNoFloats disables floating point numbers.
	LexNoFloats
	// LexNoBaseInts disables non-base-10 number forms.
	LexNoBaseInts
	// LexNoNumbers disables all numbers.
	// Implies NoBaseInts, NoFloats, NoRationals, and NoDurations
	LexNoNumbers
)

type Literal

type Literal struct {
	Tok Token
}

Literal is an ExprNode containing a value that is either a string, number (integer, float, or rational), regexp, duration, or boolean.

func (*Literal) Token

func (l *Literal) Token() Token

Token returns the literal's corresponding Token.

func (*Literal) Value

func (l *Literal) Value() interface{}

Value returns the literal's value. Depending on the token, this can be a value of type string, boolean, *big.Int, *big.Float, *big.Rat, time.Duration, or *regexp.Regexp. The entire AST is invalid if this returns nil.

type Location

type Location struct {
	Name   string // Name is an identifier, usually a file path, for the location.
	Offset int    // A byte offset into an input sequence. Starts at 0.
	Line   int    // A line number, delimited by '\n'. Starts at 1.
	Column int    // A column number. Starts at 1.
}

Location describes a location in an input byte sequence.

func (Location) String

func (l Location) String() string

type Map

type Map struct {
	StartTok Token
	EndTok   Token
	// Elems is a map of the string keys to their key-value pairs.
	Elems map[string]*MapEntry
}

Map is an ExprNode for a '#{ key value }' map in a document.

func (*Map) Pairs

func (m *Map) Pairs() []*MapEntry

Pairs returns the map's elements as a []*MapEntry. The returned slice is ordered by each MapEntry's Ord field (i.e., the parsing order).

func (*Map) String

func (m *Map) String() string

func (*Map) Token

func (m *Map) Token() Token

Token returns the first Token of the map (its opening '#{' token).

func (*Map) Value

func (m *Map) Value() interface{}

Value returns the map's elements as its value. This is always a value of the type map[string]*MapEntry.

type MapEntry

type MapEntry struct {
	// Ord is an integer for ordering entries in the map.
	// There can be gaps in Ord for a range. Duplicate keys
	// increase Ord and replace the conflicting MapEntry.
	Ord uint
	// Key and Val are the key-value pair.
	Key ExprNode
	Val ExprNode
}

MapEntry is an entry in a codf map, containing the key, the value, and an Ord field -- an integer recording the order in which entries were parsed. codf maps are unordered; ordering information is retained only for writing tools.

func (*MapEntry) Name

func (m *MapEntry) Name() string

Name returns the MapEntry's key as a string.

func (*MapEntry) String

func (m *MapEntry) String() string

func (*MapEntry) Token

func (m *MapEntry) Token() Token

Token returns the first token of the MapEntry's key-value pair (its key token).

func (*MapEntry) Value

func (m *MapEntry) Value() interface{}

Value returns the MapEntry's value. The entire AST is invalid if this returns nil.

type NamedReader

type NamedReader interface {
	io.Reader

	// Name returns a non-empty string identifying the reader's data source. This may be a file,
	// URL, resource ID, or some other thing. If the returned string is empty, it will be
	// treated as unnamed.
	Name() string
}

NamedReader is an optional interface that an io.Reader can implement to provide a name for its data source.

type Node

type Node interface {
	Token() Token
	// contains filtered or unexported methods
}

Node is any parsed element of a codf document. This includes the Section, Statement, Literal, Array, Map, and other types.

type ParamNode

type ParamNode interface {
	Node
	Name() string
	Parameters() []ExprNode
}

ParamNode is a node that has ExprNode parameters.

type ParentNode

type ParentNode interface {
	Node
	Nodes() []Node
}

ParentNode is a node that has sub-nodes.

type Parser

type Parser struct {
	// contains filtered or unexported fields
}

Parser consumes tokens from a TokenReader and constructs a codf *Document from it.

The Document produced by the Parser is kept for the duration of the parser's lifetime, so it is possible to read multiple TokenReaders into a Parser and produce a combined document.

func NewParser

func NewParser() *Parser

NewParser allocates a new *Parser and returns it.

func (*Parser) Document

func (p *Parser) Document() *Document

Document returns the document constructed by Parser. Each call to Parse() modifies the Document, so it is unsafe to use the Document from multiple goroutines during parsing.

func (*Parser) Parse

func (p *Parser) Parse(tr TokenReader) (err error)

Parse consumes tokens from a TokenReader and constructs a Document from its tokens.

If an error occurs during parsing, Parse will return that error for all subsequent calls to Parse, as the parser has been left in a middle-of-parsing state.

func (*Parser) ParseExpr

func (p *Parser) ParseExpr(tr TokenReader) (ExprNode, error)

ParseExpr consumes tokens from a TokenReader and constructs a single ExprNode from its tokens. It returns an error if no ExprNode is produced or if it would parse more than one ExprNode.

If an error occurs during parsing, it has no effect on the behavior of subsequent Parse or ParseExpr calls. Errors returned by Parse do not affect ParseExpr.

If ParseExpr would return more than one ExprNode, it returns nil and ErrTooManyExprs.

type Section

type Section struct {
	NameTok  *Literal
	Params   []ExprNode
	Children []Node

	StartTok Token
	EndTok   Token
}

Section is a single word followed by an optional set of ExprNodes for parameters. A Section may contain child Statements and Sections.

func (*Section) Name

func (s *Section) Name() string

Name returns the name of the Section. For example, the section "proxy http { }" has the name "proxy".

func (*Section) Nodes

func (s *Section) Nodes() []Node

Nodes returns the child nodes of the section.

func (*Section) Parameters

func (s *Section) Parameters() []ExprNode

Parameters returns the parameters the section holds.

func (*Section) String

func (s *Section) String() string

func (*Section) Token

func (s *Section) Token() Token

Token returns the first token of the section (its name token).

type Statement

type Statement struct {
	NameTok *Literal
	Params  []ExprNode

	EndTok Token
}

Statement is any single word followed by an optional set of ExprNodes for parameters.

func (*Statement) Name

func (s *Statement) Name() string

Name returns the name of the Statement. For example, the statement "enable-gophers yes;" has the name "enable-gophers".

func (*Statement) Parameters

func (s *Statement) Parameters() []ExprNode

Parameters returns the parameters the statement holds.

func (*Statement) String

func (s *Statement) String() string

func (*Statement) Token

func (s *Statement) Token() Token

Token returns the first Token of the statement (its name token).

type Token

type Token struct {
	Start, End Location
	Kind       TokenKind
	Raw        []byte
	Value      interface{}
}

Token is a token with a kind and a start and end location. Start, end, and raw fields are considered metadata and should not be used by a parser except to provide information to the user.

Kind is a TokenKind, such as TWord, and Value is the corresponding value of that TokenKind. Depending on the Kind, the Token must have a Value of the types described below. For all other TokenKinds not in the table below, a Value is not expected.

| Kind      | Value Type     |
|-----------+----------------|
| TWord     | string         |
| TString   | string         |
| TRegexp   | *regexp.Regexp |
| TBoolean  | bool           |
| TFloat    | *big.Float     |
| TRational | *big.Rat       |
| TInteger  | *big.Int       |
| THex      | *big.Int       |
| TOctal    | *big.Int       |
| TBinary   | *big.Int       |
| TBaseInt  | *big.Int       |
| TDuration | time.Duration  |

type TokenKind

type TokenKind uint

TokenKind is an enumeration of the kinds of tokens produced by a Lexer and consumed by a Parser.

func (TokenKind) String

func (t TokenKind) String() string

type TokenReader

type TokenReader interface {
	ReadToken() (Token, error)
}

TokenReader is anything capable of reading a token and returning either it or an error.

type WalkError

type WalkError struct {
	// Document is the document the context and node were found in, if the Walk root was
	// a document.
	Document *Document
	// Context is the ParentNode that Node is a child of.
	Context ParentNode
	// Node is the node that was encountered when the error occurred.
	Node Node
	// Err is the error that a Walker returned.
	Err error
}

WalkError is an error returned by Walk if an error occurs during a Walk call.

func (*WalkError) Error

func (e *WalkError) Error() string

type WalkExiter

type WalkExiter interface {
	Walker

	// ExitSection is called with the parent Walker, the exited node, and its parent node
	// (either the root document given to Walk or a section).
	ExitSection(Walker, *Section, ParentNode) error
}

WalkExiter is an optional interface implemented for a Walker to have Walk call ExitSection when it has finished consuming all children in a section.

type WalkMapper

type WalkMapper interface {
	Walker

	Map(node Node) (Node, error)
}

WalkMapper is an optional interface implemented for a Walker to have Walk replace the node passed to Map with the returned Node. If Map returns a nil node without an error, the mapped node is removed from its parent.

type Walker

type Walker interface {
	Statement(*Statement) error
	EnterSection(*Section) (Walker, error)
}

Walker is used by Walk to consume statements and sections, recursively, in ParentNodes (sections and documents).

Optionally, Walkers may also implement WalkExiter to receive an ExitSection call when exiting a section.
