vugu: github.com/vugu/vugu/internal/htmlx

package htmlx

import "github.com/vugu/vugu/internal/htmlx"

Package htmlx is a fork of https://github.com/golang/net/html. Copyright 2011 The Go Authors. All rights reserved. Use of this source code is governed by a BSD-style license that can be found in the LICENSE file.

Package Files

doc.go entity.go escape.go token.go writer.go

Variables

var ErrBufferExceeded = errors.New("max buffer exceeded")

ErrBufferExceeded means that the buffering limit was exceeded.

func EscapeString

func EscapeString(s string) string

EscapeString escapes special characters like "<" to become "&lt;". It escapes only five such characters: <, >, &, ' and ". UnescapeString(EscapeString(s)) == s always holds, but the converse isn't always true.

func UnescapeString

func UnescapeString(s string) string

UnescapeString unescapes entities like "&lt;" to become "<". It unescapes a larger range of entities than EscapeString escapes. For example, "&aacute;" unescapes to "á", as does "&#225;" and "&#xE1;". UnescapeString(EscapeString(s)) == s always holds, but the converse isn't always true.

type Attribute

type Attribute struct {
    Namespace, Key, Val string
}

An Attribute is an attribute namespace-key-value triple. Namespace is non-empty for foreign attributes like xlink, Key is alphabetic (and hence does not contain escapable characters like '&', '<' or '>'), and Val is unescaped (it looks like "a<b" rather than "a&lt;b").

Namespace is only used by the parser, not the tokenizer.

type Token

type Token struct {
    Type     TokenType
    DataAtom atom.Atom
    Data     string
    Attr     []Attribute
    Column   int
    Line     int
}

A Token consists of a TokenType and some Data (tag name for start and end tags, content for text, comments and doctypes). A tag Token may also contain a slice of Attributes. Data is unescaped for all Tokens (it looks like "a<b" rather than "a&lt;b"). For tag Tokens, DataAtom is the atom for Data, or zero if Data is not a known tag name.

func (Token) String

func (t Token) String() string

String returns a string representation of the Token.

type TokenType

type TokenType uint32

A TokenType is the type of a Token.

const (
    // ErrorToken means that an error occurred during tokenization.
    ErrorToken TokenType = iota
    // TextToken means a text node.
    TextToken
    // A StartTagToken looks like <a>.
    StartTagToken
    // An EndTagToken looks like </a>.
    EndTagToken
    // A SelfClosingTagToken tag looks like <br/>.
    SelfClosingTagToken
    // A CommentToken looks like <!--x-->.
    CommentToken
    // A DoctypeToken looks like <!DOCTYPE x>.
    DoctypeToken
)

func (TokenType) String

func (t TokenType) String() string

String returns a string representation of the TokenType.

type Tokenizer

type Tokenizer struct {
    // contains filtered or unexported fields
}

A Tokenizer returns a stream of HTML Tokens.

func NewTokenizer

func NewTokenizer(r io.Reader) *Tokenizer

NewTokenizer returns a new HTML Tokenizer for the given Reader. The input is assumed to be UTF-8 encoded.

func NewTokenizerFragment

func NewTokenizerFragment(r io.Reader, contextTag string) *Tokenizer

NewTokenizerFragment returns a new HTML Tokenizer for the given Reader, for tokenizing an existing element's InnerHTML fragment. contextTag is that element's tag, such as "div" or "iframe".

For example, how the InnerHTML "a<b" is tokenized depends on whether it is for a <p> tag or a <script> tag.

The input is assumed to be UTF-8 encoded.

func (*Tokenizer) AllowCDATA

func (z *Tokenizer) AllowCDATA(allowCDATA bool)

AllowCDATA sets whether or not the tokenizer recognizes <![CDATA[foo]]> as the text "foo". The default value is false, which means to recognize it as a bogus comment "<!-- [CDATA[foo]] -->" instead.

Strictly speaking, an HTML5 compliant tokenizer should allow CDATA if and only if tokenizing foreign content, such as MathML and SVG. However, tracking foreign-contentness is difficult to do purely in the tokenizer, as opposed to the parser, due to HTML integration points: an <svg> element can contain a <foreignObject> that is foreign-to-SVG but not foreign-to- HTML. For strict compliance with the HTML5 tokenization algorithm, it is the responsibility of the user of a tokenizer to call AllowCDATA as appropriate. In practice, if using the tokenizer without caring whether MathML or SVG CDATA is text or comments, such as tokenizing HTML to find all the anchor text, it is acceptable to ignore this responsibility.

func (*Tokenizer) Buffered

func (z *Tokenizer) Buffered() []byte

Buffered returns a slice containing data buffered but not yet tokenized.

func (*Tokenizer) Err

func (z *Tokenizer) Err() error

Err returns the error associated with the most recent ErrorToken token. This is typically io.EOF, meaning the end of tokenization.

func (*Tokenizer) Next

func (z *Tokenizer) Next() TokenType

Next scans the next token and returns its type.

func (*Tokenizer) NextIsNotRawText

func (z *Tokenizer) NextIsNotRawText()

NextIsNotRawText instructs the tokenizer that the next token should not be considered as 'raw text'. Some elements, such as script and title elements, normally require the next token after the opening tag to be 'raw text' that has no child elements. For example, tokenizing "<title>a<b>c</b>d</title>" yields a start tag token for "<title>", a text token for "a<b>c</b>d", and an end tag token for "</title>". There are no distinct start tag or end tag tokens for the "<b>" and "</b>".

This tokenizer implementation will generally look for raw text at the right times. Strictly speaking, an HTML5 compliant tokenizer should not look for raw text if in foreign content: <title> generally needs raw text, but a <title> inside an <svg> does not. Another example is that a <textarea> generally needs raw text, but a <textarea> is not allowed as an immediate child of a <select>; in normal parsing, a <textarea> implies </select>, but one cannot close the implicit element when parsing a <select>'s InnerHTML. Similarly to AllowCDATA, tracking the correct moment to override raw-text-ness is difficult to do purely in the tokenizer, as opposed to the parser. For strict compliance with the HTML5 tokenization algorithm, it is the responsibility of the user of a tokenizer to call NextIsNotRawText as appropriate. In practice, like AllowCDATA, it is acceptable to ignore this responsibility for basic usage.

Note that this 'raw text' concept is different from the one offered by the Tokenizer.Raw method.

func (*Tokenizer) Raw

func (z *Tokenizer) Raw() []byte

Raw returns the unmodified text of the current token. Calling Next, Token, Text, TagName or TagAttr may change the contents of the returned slice.

func (*Tokenizer) SetMaxBuf

func (z *Tokenizer) SetMaxBuf(n int)

SetMaxBuf sets a limit on the amount of data buffered during tokenization. A value of 0 means unlimited.

func (*Tokenizer) TagAttr

func (z *Tokenizer) TagAttr() (key, val []byte, moreAttr bool)

TagAttr returns the lower-cased key and unescaped value of the next unparsed attribute for the current tag token and whether there are more attributes. The contents of the returned slices may change on the next call to Next.

func (*Tokenizer) TagName

func (z *Tokenizer) TagName() (name []byte, hasAttr bool)

TagName returns the lower-cased name of a tag token (the `img` out of `<IMG SRC="foo">`) and whether the tag has attributes. The contents of the returned slice may change on the next call to Next.

func (*Tokenizer) Text

func (z *Tokenizer) Text() []byte

Text returns the unescaped text of a text, comment or doctype token. The contents of the returned slice may change on the next call to Next.

func (*Tokenizer) Token

func (z *Tokenizer) Token() Token

Token returns the current Token. The result's Data and Attr values remain valid after subsequent Next calls.

Directories

Path       Synopsis
atom       Package atom provides integer codes (also known as atoms) for a fixed set of frequently occurring HTML strings: tag names and attribute keys such as "p" and "id".
charset    Package charset provides common text encodings for HTML documents.

Package htmlx imports 7 packages and is imported by 2 packages. Updated 2019-04-18.