logs

package
v0.0.0-...-5655933

Published: Oct 20, 2023 License: MIT Imports: 7 Imported by: 0

Documentation

Constants

const (
	ChanSize          = 100
	NumberOfPipelines = 4
)

Pipeline constraints

const (
	TCPType           = "tcp"
	UDPType           = "udp"
	FileType          = "file"
	DockerType        = "docker"
	JournaldType      = "journald"
	WindowsEventType  = "windows_event"
	SnmpTrapsType     = "snmp_traps"
	StringChannelType = "string_channel"

	// UTF16BE for UTF-16 Big endian encoding
	UTF16BE string = "utf-16-be"
	// UTF16LE for UTF-16 Little Endian encoding
	UTF16LE string = "utf-16-le"

	// https://en.wikipedia.org/wiki/GB_2312
	// https://en.wikipedia.org/wiki/GBK_(character_encoding)
	// https://en.wikipedia.org/wiki/GB_18030
	// https://en.wikipedia.org/wiki/Big5
	GB18030  string = "gb18030"
	GB2312   string = "gb2312"
	HZGB2312 string = "hz-gb2312"
	GBK      string = "gbk"
	BIG5     string = "big5"
)

Logs source types

const (
	ForceBeginning = iota
	ForceEnd
	Beginning
	End
)

Tailing Modes

const (
	ExcludeAtMatch = "exclude_at_match"
	IncludeAtMatch = "include_at_match"
	MaskSequences  = "mask_sequences"
	MultiLine      = "multi_line"
)

Processing rule types

const ContainerCollectAll = "container_collect_all"

ContainerCollectAll is the name of the docker integration that collects logs from all containers

const (
	// DateFormat is the default date format.
	DateFormat = "2006-01-02T15:04:05.000000000Z"
)
const SnmpTraps = "snmp_traps"

SnmpTraps is the name of the integration that collects logs from SNMP traps received by the Agent

Variables

This section is empty.

Functions

func AggregationTimeout

func AggregationTimeout() time.Duration

AggregationTimeout is used when performing aggregation operations

func CompileProcessingRules

func CompileProcessingRules(rules []*ProcessingRule) error

CompileProcessingRules compiles all processing rule regular expressions.

func ContainsWildcard

func ContainsWildcard(path string) bool

ContainsWildcard returns true if the path contains any wildcard character

func ExpectedTagsDuration

func ExpectedTagsDuration() time.Duration

ExpectedTagsDuration returns the duration for which expected tags will be submitted.

func IsExpectedTagsSet

func IsExpectedTagsSet() bool

IsExpectedTagsSet returns whether the expected tags feature is enabled.

func TaggerWarmupDuration

func TaggerWarmupDuration() time.Duration

TaggerWarmupDuration is used to configure the tag providers

func ValidateProcessingRules

func ValidateProcessingRules(rules []*ProcessingRule) error

ValidateProcessingRules validates the rules and returns an error if one is misconfigured. Each processing rule must have:
  - a valid name
  - a valid type
  - a valid pattern that compiles
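
A minimal sketch of how the two rule helpers are typically combined: validate the rules first, then compile their patterns so they can be applied to log lines. The import path below is a placeholder for this package's real module path.

package main

import (
	"fmt"

	logsconfig "example.com/datadog-agent/pkg/logs" // placeholder import path for this package
)

func main() {
	rules := []*logsconfig.ProcessingRule{
		{
			Type:    logsconfig.ExcludeAtMatch, // drop lines matching the pattern
			Name:    "exclude_health_checks",
			Pattern: `GET /healthz`,
		},
		{
			Type:               logsconfig.MaskSequences, // replace matches with the placeholder
			Name:               "mask_api_keys",
			Pattern:            `api_key=\w+`,
			ReplacePlaceholder: "api_key=[REDACTED]",
		},
	}

	// Reject rules that are missing a name, a type, or a valid pattern.
	if err := logsconfig.ValidateProcessingRules(rules); err != nil {
		fmt.Println("invalid rules:", err)
		return
	}

	// Compile the regular expressions so the rules are ready to use.
	if err := logsconfig.CompileProcessingRules(rules); err != nil {
		fmt.Println("compile error:", err)
		return
	}
	fmt.Println("rules ready")
}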

Types

type ChannelMessage

type ChannelMessage struct {
	Content []byte
	// Optional. Must be UTC. If not provided, time.Now().UTC() will be used
	// Used in the Serverless Agent
	Timestamp time.Time
	// Optional.
	// Used in the Serverless Agent
	Lambda *Lambda
}

ChannelMessage represents a log line sent to Datadog, with its metadata

func NewChannelMessageFromLambda

func NewChannelMessageFromLambda(content []byte, utcTime time.Time, ARN, reqID string) *ChannelMessage

NewChannelMessageFromLambda constructs a message with content and with the given timestamp and Lambda metadata
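
A short sketch of building a ChannelMessage for the Serverless Agent and pushing it onto a channel; the ARN and request ID are made-up values, and the import path is the same placeholder as above.

package main

import (
	"fmt"
	"time"

	logsconfig "example.com/datadog-agent/pkg/logs" // placeholder import path
)

func main() {
	ch := make(chan *logsconfig.ChannelMessage, logsconfig.ChanSize)

	// The timestamp must be UTC; Lambda metadata travels with the message.
	msg := logsconfig.NewChannelMessageFromLambda(
		[]byte("START RequestId: 8476a536"),
		time.Now().UTC(),
		"arn:aws:lambda:us-east-1:123456789012:function:my-fn", // example ARN
		"8476a536", // example request ID
	)
	ch <- msg

	received := <-ch
	fmt.Println(received.Lambda.ARN, received.Lambda.RequestID)
}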

type CountInfo

type CountInfo struct {
	// contains filtered or unexported fields
}

CountInfo records a simple count

func NewCountInfo

func NewCountInfo(key string) *CountInfo

NewCountInfo creates a new CountInfo instance

func (*CountInfo) Add

func (c *CountInfo) Add(v int32)

Add a new value to the count

func (*CountInfo) Info

func (c *CountInfo) Info() []string

Info returns the info

func (*CountInfo) InfoKey

func (c *CountInfo) InfoKey() string

InfoKey returns the key
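
As a usage sketch (the key name is arbitrary), a counter can be bumped from several places and later rendered on the status page through the InfoProvider methods:

package main

import (
	"fmt"

	logsconfig "example.com/datadog-agent/pkg/logs" // placeholder import path
)

func main() {
	bytesRead := logsconfig.NewCountInfo("Bytes Read")
	bytesRead.Add(1024)
	bytesRead.Add(512)

	fmt.Println(bytesRead.InfoKey()) // "Bytes Read"
	fmt.Println(bytesRead.Info())    // the aggregated count, formatted for the status page
}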

type EPIntakeVersion

type EPIntakeVersion uint8

EPIntakeVersion is the events platform intake API version

const (

	// EPIntakeVersion1 is version 1 of the events platform intake API
	EPIntakeVersion1 EPIntakeVersion
	// EPIntakeVersion2 is version 2 of the events platform intake API
	EPIntakeVersion2
)

type Endpoint

type Endpoint struct {
	APIKey                  string `mapstructure:"api_key" json:"api_key"`
	Addr                    string
	Topic                   string
	Host                    string
	Port                    int
	UseSSL                  bool
	UseCompression          bool `mapstructure:"use_compression" json:"use_compression"`
	CompressionLevel        int  `mapstructure:"compression_level" json:"compression_level"`
	ProxyAddress            string
	ConnectionResetInterval time.Duration

	BackoffFactor    float64
	BackoffBase      float64
	BackoffMax       float64
	RecoveryInterval int
	RecoveryReset    bool

	Version   EPIntakeVersion
	TrackType IntakeTrackType
	Protocol  IntakeProtocol
	Origin    IntakeOrigin
}

Endpoint holds all the organization and network parameters to send logs

type Endpoints

type Endpoints struct {
	Main                   Endpoint
	Additionals            []Endpoint
	UseProto               bool
	Type                   string
	BatchWait              time.Duration
	BatchMaxConcurrentSend int
	BatchMaxSize           int
	BatchMaxContentSize    int
}

Endpoints holds the main endpoint and additional ones to dual-ship logs.
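
A hedged sketch of assembling an Endpoints value by hand, e.g. for a test; real agents normally build this from configuration, and all the field values below are illustrative only.

package main

import (
	"time"

	logsconfig "example.com/datadog-agent/pkg/logs" // placeholder import path
)

func main() {
	primary := logsconfig.Endpoint{
		APIKey:           "xxxxxxxx",                 // illustrative value
		Host:             "agent-intake.example.com", // illustrative value
		Port:             443,
		UseSSL:           true,
		UseCompression:   true,
		CompressionLevel: 6,
	}

	endpoints := logsconfig.Endpoints{
		Main:        primary,
		Additionals: []logsconfig.Endpoint{}, // extra endpoints for dual-shipping
		UseProto:    true,
		BatchWait:   5 * time.Second,
	}
	_ = endpoints
}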

type HTTPConnectivity

type HTTPConnectivity bool

HTTPConnectivity is the status of the HTTP connectivity

var (
	// HTTPConnectivitySuccess is the status for successful HTTP connectivity
	HTTPConnectivitySuccess HTTPConnectivity = true
	// HTTPConnectivityFailure is the status for failed HTTP connectivity
	HTTPConnectivityFailure HTTPConnectivity = false
)

type InfoProvider

type InfoProvider interface {
	InfoKey() string
	Info() []string
}

InfoProvider is a general interface to provide info about a log source. It is used on the agent status page. The expected usage is for a piece of code that wants to surface something on the status page to register an info provider with the source, using a unique key/name. This file contains useful base implementations, but InfoProvider can be extended/implemented for more complex data.

When implementing InfoProvider, be aware of the two ways it is used by the status page:

  1. when a single message is returned, the status page displays a single line:

     InfoKey(): Info()[0]

  2. when multiple messages are returned, the status page displays an indented list:

     InfoKey():
       Info()[0]
       Info()[1]
       Info()[n]

InfoKey only needs to be unique per source, and should be human readable.
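
A sketch of a custom InfoProvider; the type and its field names are hypothetical. Anything implementing these two methods can be registered on a source with LogSource.RegisterInfo and will show up on the status page.

package main

import (
	"fmt"

	logsconfig "example.com/datadog-agent/pkg/logs" // placeholder import path
)

// retryInfo is a hypothetical provider surfacing retry details for a source.
type retryInfo struct {
	attempts int
	lastErr  string
}

func (r *retryInfo) InfoKey() string { return "Retries" }

func (r *retryInfo) Info() []string {
	// One entry renders as a single line; several entries render as an indented list.
	return []string{
		fmt.Sprintf("Attempts: %d", r.attempts),
		fmt.Sprintf("Last error: %s", r.lastErr),
	}
}

// Compile-time check that retryInfo satisfies the interface.
var _ logsconfig.InfoProvider = (*retryInfo)(nil)

func main() {
	source := logsconfig.NewLogSource("my-integration", &logsconfig.LogsConfig{Type: logsconfig.FileType})
	source.RegisterInfo(&retryInfo{attempts: 3, lastErr: "connection reset"})
	fmt.Println(source.GetInfoStatus())
}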

type IntakeOrigin

type IntakeOrigin string

IntakeOrigin indicates the log source to use for an endpoint intake.

const DefaultIntakeOrigin IntakeOrigin = "agent"

DefaultIntakeOrigin indicates that no special DD_SOURCE header is in use for the endpoint intake track type.

const ServerlessIntakeOrigin IntakeOrigin = "lambda-extension"

ServerlessIntakeOrigin is the lambda extension origin

type IntakeProtocol

type IntakeProtocol string

IntakeProtocol indicates the protocol to use for an endpoint intake.

const DefaultIntakeProtocol IntakeProtocol = ""

DefaultIntakeProtocol indicates that no special protocol is in use for the endpoint intake track type.

type IntakeTrackType

type IntakeTrackType string

IntakeTrackType indicates the type of an endpoint intake.

type Lambda

type Lambda struct {
	ARN          string
	RequestID    string
	FunctionName string
}

Lambda is a struct storing information about the Lambda function and function execution.

type LogSource

type LogSource struct {
	// Put expvar Int first because it's modified with sync/atomic, so it needs to
	// be 64-bit aligned on 32-bit systems. See https://golang.org/pkg/sync/atomic/#pkg-note-BUG
	BytesRead expvar.Int

	Name   string
	Config *LogsConfig
	Status *LogStatus

	Messages *Messages

	// In the case that the source is overridden, keep a reference to the parent for bubbling up information about the child
	ParentSource *LogSource
	// LatencyStats tracks internal stats on the time spent by messages from this source in a processing pipeline, i.e.
	// the duration between when a message is decoded by the tailer/listener/decoder and when the message is handled by a sender
	LatencyStats *StatsTracker
	// contains filtered or unexported fields
}

LogSource holds a reference to an integration name and a log configuration, and allows tracking errors and successful operations on it. Both the name and the configuration are static for now and determined at creation time. Changing the status is designed to be thread safe.
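
As an illustrative sketch, a tailer-like component might create a source, record which inputs it handles, and tag it with a parsing format; the names and container ID used here are made up.

package main

import (
	"fmt"

	logsconfig "example.com/datadog-agent/pkg/logs" // placeholder import path
)

func main() {
	source := logsconfig.NewLogSource("my-container", &logsconfig.LogsConfig{
		Type:       logsconfig.DockerType,
		Identifier: "0123456789ab", // example container ID
	})

	// Record which inputs this source currently handles.
	source.AddInput("0123456789ab")
	fmt.Println(source.GetInputs())

	// Tag the source with the parsing format used for its lines.
	source.SetSourceType(logsconfig.DockerSourceType)
	fmt.Println(source.GetSourceType())
}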

func ContainerCollectAllSource

func ContainerCollectAllSource(containerCollectAll bool) *LogSource

ContainerCollectAllSource returns a source to collect all logs from all containers.

func NewLogSource

func NewLogSource(name string, config *LogsConfig) *LogSource

NewLogSource creates a new log source.

func (*LogSource) AddInput

func (s *LogSource) AddInput(input string)

AddInput registers an input as being handled by this source.

func (*LogSource) GetInfo

func (s *LogSource) GetInfo(key string) InfoProvider

GetInfo gets an InfoProvider instance by the key

func (*LogSource) GetInfoStatus

func (s *LogSource) GetInfoStatus() map[string][]string

GetInfoStatus returns a primitive representation of the info for the status page

func (*LogSource) GetInputs

func (s *LogSource) GetInputs() []string

GetInputs returns the inputs handled by this source.

func (*LogSource) GetSourceType

func (s *LogSource) GetSourceType() SourceType

GetSourceType returns the sourceType used by this source

func (*LogSource) RegisterInfo

func (s *LogSource) RegisterInfo(i InfoProvider)

RegisterInfo registers some info to display on the status page

func (*LogSource) RemoveInput

func (s *LogSource) RemoveInput(input string)

RemoveInput removes an input from this source.

func (*LogSource) SetSourceType

func (s *LogSource) SetSourceType(sourceType SourceType)

SetSourceType sets a format that gives information on how the source lines should be parsed

type LogSources

type LogSources struct {
	// contains filtered or unexported fields
}

LogSources stores a list of log sources.

func NewLogSources

func NewLogSources() *LogSources

NewLogSources creates a new LogSources instance.

func (*LogSources) AddSource

func (s *LogSources) AddSource(source *LogSource)

AddSource adds a new source.

func (*LogSources) GetAddedForType

func (s *LogSources) GetAddedForType(sourceType string) chan *LogSource

GetAddedForType returns the newly added sources matching the provided type.

func (*LogSources) GetRemovedForType

func (s *LogSources) GetRemovedForType(sourceType string) chan *LogSource

GetRemovedForType returns the newly removed sources matching the provided type.

func (*LogSources) GetSources

func (s *LogSources) GetSources() []*LogSource

GetSources returns all the sources currently held. The returned slice is a copy and will not be modified after it is returned. However, the underlying list held by the LogSources instance may keep changing (indexes may shift, entries may be added or removed).
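
A sketch of the add/subscribe flow, assuming a consumer subscribes for a source type before sources of that type are added (the file path is illustrative):

package main

import (
	"fmt"
	"time"

	logsconfig "example.com/datadog-agent/pkg/logs" // placeholder import path
)

func main() {
	sources := logsconfig.NewLogSources()

	// Subscribe to file sources; new additions of that type arrive on the channel.
	added := sources.GetAddedForType(logsconfig.FileType)
	go func() {
		for src := range added {
			fmt.Println("new file source:", src.Name)
		}
	}()

	sources.AddSource(logsconfig.NewLogSource("nginx", &logsconfig.LogsConfig{
		Type: logsconfig.FileType,
		Path: "/var/log/nginx/access.log",
	}))

	// In a real agent the consumer runs for the process lifetime; here we just
	// give the goroutine a moment to drain the channel before exiting.
	time.Sleep(100 * time.Millisecond)
	fmt.Println(len(sources.GetSources()), "source(s) registered")
}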

func (*LogSources) RemoveSource

func (s *LogSources) RemoveSource(source *LogSource)

RemoveSource removes a source.

type LogStatus

type LogStatus struct {
	// contains filtered or unexported fields
}

LogStatus tracks errors and success.

func NewLogStatus

func NewLogStatus() *LogStatus

NewLogStatus creates a new log status.

func (*LogStatus) Error

func (s *LogStatus) Error(err error)

Error records the given error and invalidates the source.

func (*LogStatus) GetError

func (s *LogStatus) GetError() string

GetError returns the error.

func (*LogStatus) IsError

func (s *LogStatus) IsError() bool

IsError returns whether the current status is an error.

func (*LogStatus) IsPending

func (s *LogStatus) IsPending() bool

IsPending returns whether the current status is not yet determined.

func (*LogStatus) IsSuccess

func (s *LogStatus) IsSuccess() bool

IsSuccess returns whether the current status is a success.

func (*LogStatus) Success

func (s *LogStatus) Success()

Success sets the status to success.
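
A small sketch of the status life cycle: a status starts out pending and moves to error or success as operations are reported.

package main

import (
	"errors"
	"fmt"

	logsconfig "example.com/datadog-agent/pkg/logs" // placeholder import path
)

func main() {
	status := logsconfig.NewLogStatus()
	fmt.Println(status.IsPending()) // true: nothing reported yet

	status.Error(errors.New("permission denied"))
	fmt.Println(status.IsError(), status.GetError())

	status.Success()
	fmt.Println(status.IsSuccess()) // true: the last reported operation succeeded
}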

type LogsConfig

type LogsConfig struct {
	Type string

	Port        int    // Network
	IdleTimeout string `mapstructure:"idle_timeout" json:"idle_timeout" toml:"idle_timeout"` // Network
	Path        string // File, Journald
	Topic       string `mapstructure:"topic" json:"topic" toml:"topic"`

	Encoding     string   `mapstructure:"encoding" json:"encoding" toml:"encoding"`                   // File
	ExcludePaths []string `mapstructure:"exclude_paths" json:"exclude_paths" toml:"exclude_paths"`    // File
	TailingMode  string   `mapstructure:"start_position" json:"start_position" toml:"start_position"` // File

	IncludeUnits  []string `mapstructure:"include_units" json:"include_units" toml:"include_units"`    // Journald
	ExcludeUnits  []string `mapstructure:"exclude_units" json:"exclude_units" toml:"exclude_units"`    // Journald
	ContainerMode bool     `mapstructure:"container_mode" json:"container_mode" toml:"container_mode"` // Journald

	Image string // Docker
	Label string // Docker
	// Name contains the container name
	Name string // Docker
	// Identifier contains the container ID
	Identifier string // Docker

	ChannelPath string `mapstructure:"channel_path" json:"channel_path" toml:"channel_path"` // Windows Event
	Query       string // Windows Event

	// Channel is used as input only by the Channel tailer.
	// It could have been unidirectional, but then the tailer could not close it.
	Channel chan *ChannelMessage `json:"-"`

	Service         string
	Source          string
	SourceCategory  string
	Tags            []string
	ProcessingRules []*ProcessingRule `mapstructure:"log_processing_rules" json:"log_processing_rules" toml:"log_processing_rules"`

	AutoMultiLine               bool    `mapstructure:"auto_multi_line_detection" json:"auto_multi_line_detection" toml:"auto_multi_line_detection"`
	AutoMultiLineSampleSize     int     `mapstructure:"auto_multi_line_sample_size" json:"auto_multi_line_sample_size" toml:"auto_multi_line_sample_size"`
	AutoMultiLineMatchThreshold float64 `mapstructure:"auto_multi_line_match_threshold" json:"auto_multi_line_match_threshold" toml:"auto_multi_line_match_threshold"`
}

LogsConfig represents a log source config, which can be for instance a file to tail or a port to listen to.

func (*LogsConfig) Validate

func (c *LogsConfig) Validate() error

Validate returns an error if the config is misconfigured
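
A brief validation sketch: a file config needs a path, so the second config below is expected to fail (the exact error text depends on the implementation).

package main

import (
	"fmt"

	logsconfig "example.com/datadog-agent/pkg/logs" // placeholder import path
)

func main() {
	ok := &logsconfig.LogsConfig{
		Type:    logsconfig.FileType,
		Path:    "/var/log/app/app.log",
		Service: "app",
		Source:  "go",
	}
	fmt.Println(ok.Validate()) // expected: <nil>

	missingPath := &logsconfig.LogsConfig{Type: logsconfig.FileType}
	fmt.Println(missingPath.Validate()) // expected: an error about the missing path
}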

type MappedInfo

type MappedInfo struct {
	// contains filtered or unexported fields
}

MappedInfo collects multiple info messages with a unique key

func NewMappedInfo

func NewMappedInfo(key string) *MappedInfo

NewMappedInfo creates a new MappedInfo instance

func (*MappedInfo) Info

func (m *MappedInfo) Info() []string

Info returns the info

func (*MappedInfo) InfoKey

func (m *MappedInfo) InfoKey() string

InfoKey returns the key

func (*MappedInfo) RemoveMessage

func (m *MappedInfo) RemoveMessage(key string)

RemoveMessage removes a message with a unique key

func (*MappedInfo) SetMessage

func (m *MappedInfo) SetMessage(key string, message string)

SetMessage sets a message with a unique key
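
A short sketch: MappedInfo keeps one message per sub-key, so a message can be updated or cleared independently of the others (the keys and messages are arbitrary).

package main

import (
	"fmt"

	logsconfig "example.com/datadog-agent/pkg/logs" // placeholder import path
)

func main() {
	info := logsconfig.NewMappedInfo("File Status")
	info.SetMessage("access.log", "tailing /var/log/nginx/access.log")
	info.SetMessage("error.log", "waiting for /var/log/nginx/error.log to appear")

	fmt.Println(info.InfoKey(), info.Info())

	// Clear a single entry once its condition is resolved.
	info.RemoveMessage("error.log")
	fmt.Println(info.Info())
}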

type Messages

type Messages struct {
	// contains filtered or unexported fields
}

Messages holds messages and warnings that can be displayed in the status. Warnings are displayed at the top of the logs section of the status, and messages are displayed under the log source that generated them.

func NewMessages

func NewMessages() *Messages

NewMessages initializes Messages with the default values

func (*Messages) AddMessage

func (m *Messages) AddMessage(key string, message string)

AddMessage creates a message

func (*Messages) GetMessages

func (m *Messages) GetMessages() []string

GetMessages returns all the messages

func (*Messages) RemoveMessage

func (m *Messages) RemoveMessage(key string)

RemoveMessage removes a message

type ProcessingRule

type ProcessingRule struct {
	Type               string `mapstructure:"type" json:"type" toml:"type"`
	Name               string `mapstructure:"name" json:"name" toml:"name"`
	ReplacePlaceholder string `mapstructure:"replace_placeholder" json:"replace_placeholder" toml:"replace_placeholder"`
	Pattern            string `mapstructure:"pattern" json:"pattern" toml:"pattern"`
	// TODO: should be moved out
	Regex       *regexp.Regexp
	Placeholder []byte
}

ProcessingRule defines an exclusion or a masking rule to be applied on log lines

type SourceType

type SourceType string

SourceType used for log line parsing logic. TODO: remove this logic.

const (
	// DockerSourceType docker source type
	DockerSourceType SourceType = "docker"
	// KubernetesSourceType kubernetes source type
	KubernetesSourceType SourceType = "kubernetes"
)

type StatsTracker

type StatsTracker struct {
	// contains filtered or unexported fields
}

StatsTracker keeps track of simple stats over its lifetime and a configurable time range. StatsTracker is designed to be memory efficient by aggregating data into buckets. For example, a time frame of 24 hours with a bucket size of 1 hour ensures that only 24 points are ever kept in memory. New data is reflected in the stats immediately, while old data is removed by dropping expired aggregated buckets.
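
A usage sketch with an assumed time frame of one hour bucketed by minute; the values fed in are arbitrary latencies in milliseconds.

package main

import (
	"fmt"
	"time"

	logsconfig "example.com/datadog-agent/pkg/logs" // placeholder import path
)

func main() {
	// Keep at most 60 one-minute buckets in memory.
	tracker := logsconfig.NewStatsTracker(time.Hour, time.Minute)

	for _, latencyMs := range []int64{12, 35, 8, 120, 42} {
		tracker.Add(latencyMs)
	}

	fmt.Println("moving avg:  ", tracker.MovingAvg())
	fmt.Println("moving peak: ", tracker.MovingPeak())
	fmt.Println("all-time avg: ", tracker.AllTimeAvg())
	fmt.Println("all-time peak:", tracker.AllTimePeak())
}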

func NewStatsTracker

func NewStatsTracker(timeFrame time.Duration, bucketSize time.Duration) *StatsTracker

NewStatsTracker creates a new StatsTracker instance

func NewStatsTrackerWithTimeProvider

func NewStatsTrackerWithTimeProvider(timeFrame time.Duration, bucketSize time.Duration, timeProvider timeProvider) *StatsTracker

NewStatsTrackerWithTimeProvider creates a new StatsTracker instance with a time provider closure (mostly for testing)

func (*StatsTracker) Add

func (s *StatsTracker) Add(value int64)

Add records a new value in the stats tracker

func (*StatsTracker) AllTimeAvg

func (s *StatsTracker) AllTimeAvg() int64

AllTimeAvg gets the all-time average of the values seen so far

func (*StatsTracker) AllTimePeak

func (s *StatsTracker) AllTimePeak() int64

AllTimePeak gets the largest value seen so far

func (*StatsTracker) MovingAvg

func (s *StatsTracker) MovingAvg() int64

MovingAvg gets the moving average of the values within the time frame

func (*StatsTracker) MovingPeak

func (s *StatsTracker) MovingPeak() int64

MovingPeak gets the largest value seen within the time frame

type TailingMode

type TailingMode uint8

TailingMode type

func TailingModeFromString

func TailingModeFromString(mode string) (TailingMode, bool)

TailingModeFromString parses a string and returns the corresponding tailing mode, defaulting to End if not found

func (TailingMode) String

func (mode TailingMode) String() string

String returns the string representation for a specified tailing mode. Returns "" for an invalid tailing mode.
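
A small sketch of the round trip; "beginning" is assumed here to be one of the recognized values, and unknown strings fall back to End with ok == false.

package main

import (
	"fmt"

	logsconfig "example.com/datadog-agent/pkg/logs" // placeholder import path
)

func main() {
	mode, ok := logsconfig.TailingModeFromString("beginning") // assumed recognized value
	fmt.Println(mode.String(), ok)

	fallback, ok := logsconfig.TailingModeFromString("not-a-mode")
	fmt.Println(fallback == logsconfig.End, ok) // unknown strings fall back to End
}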
