d2lexer

package v0.0.0-...-7f92c57
Published: Oct 21, 2021 License: GPL-3.0 Imports: 4 Imported by: 0

Documentation

Overview

Package d2lexer contains the code for tokenizing calculation strings.

Index

Constants

const (
	// Name represents a name token, such as skill, par1, etc.
	Name tokenType = iota

	// String represents a quoted string token, such as "Sacrifice".
	String

	// Symbol represents a symbol token, such as '+', '-', '?', '.', etc.
	Symbol

	// Number represents an integer token.
	Number

	// EOF is the end-of-file token, generated when the end of data is reached.
	EOF
)
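
As a rough illustration of these token kinds (the exact lexeme boundaries are an assumption, not taken from the package documentation), an input such as par1 + 2 would be tokenized as a Name ("par1"), a Symbol ("+") and a Number ("2"), followed by EOF; see the runnable sketch under NextToken below.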

Variables

This section is empty.

Functions

This section is empty.

Types

type Lexer

type Lexer struct {
	CurrentToken Token
	// contains filtered or unexported fields
}

Lexer is the tokenizer for calculation strings.

func New

func New(input []byte) *Lexer

New creates a new Lexer for tokenizing the given data.

func (*Lexer) NextToken

func (l *Lexer) NextToken() Token

NextToken returns the next token and advances the tokenizer.
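
A minimal usage sketch, assuming the import path shown below (adjust it to wherever d2lexer lives in your module); it drains a calculation string token by token until EOF:

package main

import (
	"fmt"

	// Assumed import path; adjust to where d2lexer resides in your module.
	"github.com/OpenDiablo2/OpenDiablo2/d2common/d2calculation/d2lexer"
)

func main() {
	l := d2lexer.New([]byte("par1 + 2"))

	// Advance through the input, printing each token until EOF.
	for {
		tok := l.NextToken()
		if tok.Type == d2lexer.EOF {
			break
		}
		fmt.Printf("type=%v value=%q\n", tok.Type, tok.Value)
	}
}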

func (*Lexer) Peek

func (l *Lexer) Peek() Token

Peek returns the next token, but does not advance the tokenizer. The peeked token is cached until the tokenizer advances.
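
A sketch of the lookahead contract described above (same assumed import path): repeated Peek calls return the cached token, and the following NextToken call returns that same token while advancing.

package main

import (
	"fmt"

	// Assumed import path, as in the sketch above.
	"github.com/OpenDiablo2/OpenDiablo2/d2common/d2calculation/d2lexer"
)

func main() {
	l := d2lexer.New([]byte("skill + 1"))

	peeked := l.Peek()        // look ahead; does not advance the tokenizer
	again := l.Peek()         // cached, so this is the same token
	consumed := l.NextToken() // advances; per the contract, the peeked token

	fmt.Println(peeked == again, peeked == consumed) // true true
}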

type Token

type Token struct {
	Type  tokenType
	Value string
}

Token is a lexical token of a calculation string.

func (*Token) String

func (t *Token) String() string
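
String returns a human-readable representation of the token, making *Token satisfy fmt.Stringer.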
