logs

package
v1.0.2
Published: Mar 22, 2024 License: Apache-2.0, BSD-3-Clause, MIT Imports: 36 Imported by: 0

README

Custom Exporter Configuration

If your service produces a large volume of logs, the default exporter parameters may not be suitable; you can tune them to match your actual workload.

plugins:
  telemetry: # note the indentation level
    opentelemetry:
      addr: your.own.server.ip:port  # cluster address (make sure the domain resolves in your environment)
      tenant_id: your-tenant-id      # tenant ID; "default" means the default tenant (switch to your business tenant ID)
      logs:
        enabled: true
        export_option: # optional; the defaults cover most scenarios
          queue_size: 2048  # number of logs that can be buffered in local memory
          batch_size: 512  # number of logs sent per batch; a send is triggered when the buffer holds more than this many logs
          batch_timeout: 5s  # flush the buffer at least every 5s
          max_batch_packet_size: 2097152  # flush when buffered logs exceed 2097152 bytes (2 MiB)

How do I know if I need to modify the default exporter configuration?

  • Execute sum(rate(opentelemetry_sdk_batch_process_counter{status=~"dropped|failed", telemetry="logs"}[5m])) by (server, status) to check if there are any reporting failures.
  • If a server has logs dropped, queue_size is too small and the log production rate exceeds the flush rate. You can:
    • Increase queue_size. For example, if your drop rate is 2000/s, raise queue_size to at least 2048 + 2000
    • Modify the code to enable multi-channel reporting:

      import "trpc.group/trpc-go/trpc-opentelemetry/exporter/asyncexporter"

      func main() {
          // report logs over 3 concurrent export channels
          asyncexporter.Concurrency = 3
      }
      
  • If a server has logs reported as failed, the network requests to the collector are failing, possibly because the collector is under heavy load. You can try the following (see the config sketch after this list):
    • Increase batch_timeout so reports are sent less frequently
    • Increase batch_size to reduce the number of requests
    • Increase max_batch_packet_size; if individual logs are large, this avoids frequent requests to the collector
    • The risk of raising these three settings is that logs sit in memory longer before being flushed to the collector, so if the process is killed, that data may be lost
    • Bigger values are not always better; adjust incrementally and check whether failures persist
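
For example, a tuned export_option for the "failed" case might look like the sketch below; the values are illustrative and should be adjusted against your own failure metrics:

plugins:
  telemetry:
    opentelemetry:
      logs:
        enabled: true
        export_option:
          queue_size: 8192                 # extra headroom for bursts
          batch_size: 2048                 # fewer, larger requests to the collector
          batch_timeout: 10s               # flush less often
          max_batch_packet_size: 4194304   # 4 MiB per flush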

Documentation

Index

Constants

This section is empty.

Variables

var DefaultRecoveryHandler = handlePanic

DefaultRecoveryHandler default recovery

var ParseConfig = func(decoder *log.Decoder) (*config.Config, error) {
	cfg := &config.Config{
		Addr:     opentelemetry.DefaultExporterAddr,
		TenantID: opentelemetry.DefaultTenantID,
		Logs: config.LogsConfig{
			Enabled: false,
			Level:   opentelemetry.DefaultLogLevel,
		},
	}

	if err := loadConfig(cfg); err != nil {
		return nil, err
	}

	if decoder.OutputConfig.Level != "" {
		var s logtps.Level
		err := s.UnmarshalText([]byte(decoder.OutputConfig.Level))
		if err != nil {
			return nil, errors.New("opentelemetry level invalid: " + decoder.OutputConfig.Level)
		}

		cfg.Logs.Level = s
	}
	if cfg.Logs.Addr != "" {
		cfg.Addr = cfg.Logs.Addr
	}
	return cfg, nil
}

ParseConfig parse config from decoder

Functions

func ClientFilter

func ClientFilter() filter.ClientFilter

ClientFilter get client filters

func Debug

func Debug(ctx context.Context, args ...interface{})

Debug log with debug level

func Debugf

func Debugf(ctx context.Context, format string, args ...interface{})

Debugf Debug

func Error

func Error(ctx context.Context, args ...interface{})

Error log with error level

func Errorf

func Errorf(ctx context.Context, format string, args ...interface{})

Errorf error

func Fatal

func Fatal(ctx context.Context, args ...interface{})

Fatal log with fatal level

func Fatalf

func Fatalf(ctx context.Context, format string, args ...interface{})

Fatalf fatal

func Info

func Info(ctx context.Context, args ...interface{})

Info log with Info level

func Infof

func Infof(ctx context.Context, format string, args ...interface{})

Infof Info

func LogRecoveryFilter

func LogRecoveryFilter(opts ...FilterOption) filter.ServerFilter

LogRecoveryFilter log recovery filter

func ServerFilter

func ServerFilter() filter.ServerFilter

ServerFilter get server filters
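
As a usage sketch, the client and server filters can be wired into trpc-go's filter registry. The logs import path and the registration name below are assumptions, and the call assumes trpc-go's filter.Register(name, serverFilter, clientFilter) signature:

import (
	"trpc.group/trpc-go/trpc-go/filter"

	logs "trpc.group/trpc-go/trpc-opentelemetry/oteltrpc/logs" // import path assumed
)

func init() {
	// register the log filters under a name that trpc_go.yaml can reference
	filter.Register("otellog", logs.ServerFilter(), logs.ClientFilter())
}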

func Warn

func Warn(ctx context.Context, args ...interface{})

Warn log with warn level

func Warnf

func Warnf(ctx context.Context, format string, args ...interface{})

Warnf warn
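
A minimal usage sketch for the leveled, context-aware logging functions above; only the signatures are taken from this page, and the import path is assumed:

import (
	"context"

	logs "trpc.group/trpc-go/trpc-opentelemetry/oteltrpc/logs" // import path assumed
)

func handle(ctx context.Context, userID string) {
	// log at different levels, passing the request context
	logs.Debugf(ctx, "handling request for user %s", userID)
	logs.Info(ctx, "request accepted")
	logs.Errorf(ctx, "downstream call failed for user %s", userID)
}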

Types

type Factory

type Factory struct {
}

Factory logger factory; the framework reads the config and initializes the logger

func (*Factory) Setup

func (f *Factory) Setup(name string, configDec plugin.Decoder) error

Setup setup for log

func (*Factory) Type

func (f *Factory) Type() string

Type log plugin type

type FilterOption

type FilterOption func(*FilterOptions)

FilterOption filter options

type FilterOptions

type FilterOptions struct {
	DisableRecovery bool
}

FilterOptions filter
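
Because FilterOption is simply func(*FilterOptions), panic recovery can be disabled with an inline option. A sketch (the logs package alias is assumed):

// newLogFilter builds a server filter with panic recovery turned off.
func newLogFilter() filter.ServerFilter {
	return logs.LogRecoveryFilter(func(o *logs.FilterOptions) {
		o.DisableRecovery = true
	})
}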

type FlowKind

type FlowKind trace.SpanKind
const (
	FlowKindServer FlowKind = FlowKind(trace.SpanKindServer)
	FlowKindClient FlowKind = FlowKind(trace.SpanKindClient)
)

func (FlowKind) MarshalJSON

func (k FlowKind) MarshalJSON() ([]byte, error)

MarshalJSON return byte slice of flowkind

func (FlowKind) String

func (k FlowKind) String() string

String return string of flowkind

type FlowLog

type FlowLog struct {
	Kind     FlowKind `json:"kind,omitempty"`
	Source   Service  `json:"source,omitempty"`
	Target   Service  `json:"target,omitempty"`
	Request  Request  `json:"request,omitempty"`
	Response Response `json:"response,omitempty"`
	Cost     string   `json:"cost,omitempty"`
	Status   Status   `json:"status,omitempty"`
}

FlowLog log model for rpc

func (FlowLog) MultilineString

func (f FlowLog) MultilineString() string

MultilineString ...

func (FlowLog) OneLineString

func (f FlowLog) OneLineString() string

OneLineString ...

func (FlowLog) String

func (f FlowLog) String() string

String ...

type RecoveryHandler

type RecoveryHandler func(ctx context.Context, panicErr interface{}) error

RecoveryHandler recovery
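
A custom handler matching the RecoveryHandler signature can log the recovered panic and surface it as an error. The sketch below assumes DefaultRecoveryHandler accepts such a function value:

import (
	"context"
	"fmt"
)

func init() {
	// replace the default panic handler with one that logs and returns the panic
	logs.DefaultRecoveryHandler = func(ctx context.Context, panicErr interface{}) error {
		logs.Errorf(ctx, "recovered from panic: %v", panicErr)
		return fmt.Errorf("panic: %v", panicErr)
	}
}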

type Request

type Request struct {
	Head string `json:"head,omitempty"`
	Body string `json:"body,omitempty"`
}

Request rpc request

type Response

type Response struct {
	Head string `json:"head,omitempty"`
	Body string `json:"body,omitempty"`
}

Response rpc response

type Service

type Service struct {
	Name      string `json:"service,omitempty"`
	Method    string `json:"method,omitempty"`
	Namespace string `json:"namespace,omitempty"`
	Address   string `json:"address,omitempty"`
}

Service rpc service

func (Service) String

func (s Service) String() string

String return service as string

type Status

type Status struct {
	Code    int32  `json:"code,omitempty"`
	Message string `json:"message,omitempty"`
	Type    string `json:"type,omitempty"`
}

Status rpc status

func (Status) String

func (s Status) String() string

String return status as string
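
A sketch that assembles the flow-log model from the types above and prints it; all field values are purely illustrative:

func printFlowLog() {
	fl := logs.FlowLog{
		Kind:     logs.FlowKindClient,
		Source:   logs.Service{Name: "trpc.app.greeter", Method: "/SayHello", Namespace: "Production"},
		Target:   logs.Service{Name: "trpc.app.hello", Address: "127.0.0.1:8000"},
		Request:  logs.Request{Body: `{"msg":"hello"}`},
		Response: logs.Response{Body: `{"code":0}`},
		Cost:     "2ms",
		Status:   logs.Status{Code: 0, Message: "OK"},
	}
	fmt.Println(fl.OneLineString())   // single-line form
	fmt.Println(fl.MultilineString()) // multi-line form
}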

Directories

Path Synopsis
Package log keep the function signatures of trpc-go log
