Go: cmd/asm/internal/lex

package lex

import "cmd/asm/internal/lex"

Package lex implements lexical analysis for the assembler.

Package Files

input.go lex.go slice.go stack.go tokenizer.go

func IsRegisterShift

func IsRegisterShift(r ScanToken) bool

IsRegisterShift reports whether the token is one of the ARM register shift operators.

type Input

type Input struct {
    Stack
    // contains filtered or unexported fields
}

Input is the main input: a stack of readers and some macro definitions. It also handles #include processing (by pushing onto the input stack) and parses and instantiates macro definitions.

func NewInput

func NewInput(name string) *Input

NewInput returns an Input from the given path.
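
A minimal sketch of driving an Input by hand: the caller opens the file and pushes a Tokenizer for it (NewLexer, below, appears to do the same wiring). The file name prog.s is hypothetical, and since the package is internal to cmd/asm, this builds only inside that tree.

    package main

    import (
        "bufio"
        "fmt"
        "log"
        "os"
        "text/scanner"

        "cmd/asm/internal/lex"
    )

    func main() {
        fd, err := os.Open("prog.s")
        if err != nil {
            log.Fatal(err)
        }
        in := lex.NewInput("prog.s")
        in.Push(lex.NewTokenizer("prog.s", bufio.NewReader(fd), fd))
        // Next expands macros and handles #include as it goes.
        for tok := in.Next(); tok != scanner.EOF; tok = in.Next() {
            fmt.Println(tok, in.Text())
        }
        in.Close()
    }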

func (*Input) Close

func (in *Input) Close()

func (*Input) Error

func (in *Input) Error(args ...interface{})

func (*Input) Next

func (in *Input) Next() ScanToken

func (*Input) Push

func (in *Input) Push(r TokenReader)

func (*Input) Text

func (in *Input) Text() string

type Macro

type Macro struct {
    // contains filtered or unexported fields
}

A Macro represents the definition of a #defined macro.

type ScanToken

type ScanToken rune

A ScanToken represents an input item. It is a simple wrapping of rune, as returned by text/scanner.Scanner, plus a couple of extra values.

const (
    // Asm defines some two-character lexemes. We make up
    // a rune/ScanToken value for them - ugly but simple.
    LSH ScanToken = -1000 - iota // << Left shift.
    RSH                          // >> Logical right shift.
    ARR                          // -> Used on ARM for shift type 3, arithmetic right shift.
    ROT                          // @> Used on ARM for shift type 4, rotate right.
)
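
The synthetic values are negative, so they can never collide with an ordinary rune from the input. A small sketch of IsRegisterShift against them:

    package main

    import (
        "fmt"

        "cmd/asm/internal/lex"
    )

    func main() {
        fmt.Println(lex.IsRegisterShift(lex.ROT)) // true: @> is a register shift
        fmt.Println(lex.IsRegisterShift(lex.LSH)) // true: << is a register shift
        fmt.Println(lex.IsRegisterShift('+'))     // false: an ordinary rune
    }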

func (ScanToken) String

func (t ScanToken) String() string

type Slice

type Slice struct {
    // contains filtered or unexported fields
}

A Slice reads from a slice of Tokens.

func NewSlice

func NewSlice(base *src.PosBase, line int, tokens []Token) *Slice
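
Since a macro is stored as a sequence of Tokens (see Token below), a Slice is the natural way to replay one. A sketch with a hand-built token list; the position base and file name x.s are hypothetical:

    package main

    import (
        "fmt"
        "text/scanner"

        "cmd/asm/internal/lex"
        "cmd/internal/src"
    )

    func main() {
        toks := lex.Tokenize("MOVQ $1, AX")
        s := lex.NewSlice(src.NewFileBase("x.s", "x.s"), 1, toks)
        for tok := s.Next(); tok != scanner.EOF; tok = s.Next() {
            fmt.Printf("%v %q\n", tok, s.Text())
        }
    }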

func (*Slice) Base

func (s *Slice) Base() *src.PosBase

func (*Slice) Close

func (s *Slice) Close()

func (*Slice) Col

func (s *Slice) Col() int

func (*Slice) File

func (s *Slice) File() string

func (*Slice) Line

func (s *Slice) Line() int

func (*Slice) Next

func (s *Slice) Next() ScanToken

func (*Slice) SetBase

func (s *Slice) SetBase(base *src.PosBase)

func (*Slice) Text

func (s *Slice) Text() string

type Stack

type Stack struct {
    // contains filtered or unexported fields
}

A Stack is a stack of TokenReaders. As the top TokenReader hits EOF, it resumes reading the next one down.

func (*Stack) Base

func (s *Stack) Base() *src.PosBase

func (*Stack) Close

func (s *Stack) Close()

func (*Stack) Col

func (s *Stack) Col() int

func (*Stack) File

func (s *Stack) File() string

func (*Stack) Line

func (s *Stack) Line() int

func (*Stack) Next

func (s *Stack) Next() ScanToken

func (*Stack) Push

func (s *Stack) Push(tr TokenReader)

Push adds tr to the top (end) of the input stack. (Popping happens automatically.)
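
A sketch of the automatic popping; the file name stack.s and the token text are hypothetical:

    package main

    import (
        "fmt"
        "text/scanner"

        "cmd/asm/internal/lex"
        "cmd/internal/src"
    )

    func main() {
        base := src.NewFileBase("stack.s", "stack.s")
        var s lex.Stack
        s.Push(lex.NewSlice(base, 1, lex.Tokenize("outer")))
        s.Push(lex.NewSlice(base, 2, lex.Tokenize("inner"))) // now on top
        for tok := s.Next(); tok != scanner.EOF; tok = s.Next() {
            fmt.Println(s.Text()) // prints "inner", then "outer"
        }
    }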

func (*Stack) SetBase

func (s *Stack) SetBase(base *src.PosBase)

func (*Stack) Text

func (s *Stack) Text() string

type Token

type Token struct {
    ScanToken
    // contains filtered or unexported fields
}

A Token is a scan token plus its string value. A macro is stored as a sequence of Tokens with spaces stripped.

func Make

func Make(token ScanToken, text string) Token

Make returns a Token with the given rune (ScanToken) and text representation.

func Tokenize

func Tokenize(str string) []Token

Tokenize turns a string into a list of Tokens; used to parse the -D flag and in tests.
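
A sketch of tokenizing a -D style definition (the name DEBUG is hypothetical):

    package main

    import (
        "fmt"

        "cmd/asm/internal/lex"
    )

    func main() {
        for _, tok := range lex.Tokenize("DEBUG=1") {
            fmt.Printf("%v %q\n", tok.ScanToken, tok.String()) // kind and text of each token
        }
    }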

func (Token) String

func (l Token) String() string

type TokenReader

type TokenReader interface {
    // Next returns the next token.
    Next() ScanToken
    // The following methods all refer to the most recent token returned by Next.
    // Text returns the original string representation of the token.
    Text() string
    // File reports the source file name of the token.
    File() string
    // Base reports the position base of the token.
    Base() *src.PosBase
    // SetBase sets the position base.
    SetBase(*src.PosBase)
    // Line reports the source line number of the token.
    Line() int
    // Col reports the source column number of the token.
    Col() int
    // Close does any teardown required.
    Close()
}

A TokenReader is like a reader, but returns lex tokens of type Token. It also can tell you what the text of the most recently returned token is, and where it was found. The underlying scanner elides all spaces except newline, so the input looks like a stream of Tokens; original spacing is lost but we don't need it.
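
Because Tokenizer, Slice, Stack, and Input all implement TokenReader, utilities can be written once against the interface. A sketch, assuming fmt and text/scanner are imported alongside the package; drain is a hypothetical helper, not part of the API:

    // drain prints every remaining token with its source position,
    // then closes the reader.
    func drain(tr lex.TokenReader) {
        for tok := tr.Next(); tok != scanner.EOF; tok = tr.Next() {
            fmt.Printf("%s:%d:%d: %s\n", tr.File(), tr.Line(), tr.Col(), tr.Text())
        }
        tr.Close()
    }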

func NewLexer

func NewLexer(name string) TokenReader

NewLexer returns a lexer for the named file.
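
With the hypothetical drain helper above, reading an entire source file reduces to one line (prog.s is again a made-up name):

    drain(lex.NewLexer("prog.s"))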

type Tokenizer

type Tokenizer struct {
    // contains filtered or unexported fields
}

A Tokenizer is a simple wrapping of text/scanner.Scanner, configured for our purposes and made a TokenReader. It forms the lowest level, turning text from readers into tokens.

func NewTokenizer

func NewTokenizer(name string, r io.Reader, file *os.File) *Tokenizer
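
A sketch of direct use, assuming a nil *os.File is acceptable when the reader is not backed by a file (plausible, since Tokenize builds tokens from a plain string). Note how << comes back as the LSH lexeme defined above:

    package main

    import (
        "fmt"
        "strings"
        "text/scanner"

        "cmd/asm/internal/lex"
    )

    func main() {
        // The name appears in position information.
        t := lex.NewTokenizer("example", strings.NewReader("MOVW R1<<2, R3"), nil)
        for tok := t.Next(); tok != scanner.EOF; tok = t.Next() {
            fmt.Printf("%v %q\n", tok, t.Text())
        }
        t.Close()
    }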

func (*Tokenizer) Base

func (t *Tokenizer) Base() *src.PosBase

func (*Tokenizer) Close

func (t *Tokenizer) Close()

func (*Tokenizer) Col

func (t *Tokenizer) Col() int

func (*Tokenizer) File

func (t *Tokenizer) File() string

func (*Tokenizer) Line

func (t *Tokenizer) Line() int

func (*Tokenizer) Next

func (t *Tokenizer) Next() ScanToken

func (*Tokenizer) SetBase

func (t *Tokenizer) SetBase(base *src.PosBase)

func (*Tokenizer) Text

func (t *Tokenizer) Text() string

Package lex imports 12 packages and is imported by 16 packages.