collector

package module
v0.46.0
Published: Mar 26, 2024 License: Apache-2.0 Imports: 67 Imported by: 6

README

Google Cloud Exporter

This exporter can be used to send metrics and traces to Google Cloud Monitoring and Trace (formerly known as Stackdriver), respectively.

Getting started

In general, authenticating with the Collector exporter works like any other application, following the steps documented for Application Default Credentials. This section explains the specific use cases relevant to the exporter.

Prerequisite: Authenticating

The exporter relies on GCP client libraries to send data to Google Cloud. Use of these libraries requires the caller (the Collector) to be authenticated with a GCP account and project. This should be done using a GCP service account with at minimum the following IAM roles (depending on the type of data you wish to send):

  • Metrics: roles/monitoring.metricWriter
  • Traces: roles/cloudtrace.agent
  • Logs: roles/logging.logWriter

The Compute Engine default service account has all of these permissions by default, but if you are running on a different platform or with a different GCP service account you will need to ensure your service account has these permissions.

Options for different environments

Depending on the environment where your Collector is running, you can authenticate in one of several ways:

GCE instances

On GCE it is recommended to use the GCP service account associated with your instance. If this is the Compute Engine default service account or another GCP service account with sufficient IAM permissions, then there is nothing additional you need to do to authenticate the Collector process. Simply run the Collector on your instance, and it will inherit these permissions.

GKE / Workload Identity

On GKE clusters with Workload Identity enabled (including GKE Autopilot), follow the steps to configure a Workload Identity ServiceAccount in your cluster (if you do not already have one). Then, deploy the Collector as you would with any other workload, setting the serviceAccountName field in the Collector Pod’s .spec to the WI-enabled ServiceAccount.
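
For reference, a minimal Pod sketch using a WI-enabled ServiceAccount might look like the following (the names and mounted config path are illustrative, not part of this exporter's documentation):

apiVersion: v1
kind: Pod
metadata:
  name: otel-collector
spec:
  serviceAccountName: otel-collector-sa  # hypothetical WI-enabled ServiceAccount
  containers:
    - name: otel-collector
      image: otel/opentelemetry-collector-contrib
      args: ["--config=/etc/otel/config.yaml"]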

In non-WI clusters, you can use the GCP service account associated with the node the same way as in the instructions for GCE instances above.

Non-GCP (AWS, Azure, on-prem, etc.) or alternative service accounts

In non-GCP environments, a service account key or credentials file is required. The exporter will automatically look for this file using the GOOGLE_APPLICATION_CREDENTIALS environment variable or, if that is unset, one of the other known locations. Note that when using this approach, you may need to explicitly set the project option in the exporter’s config.
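
When ADC cannot infer a project (common outside GCP), set it explicitly in the exporter configuration, for example (the project ID is illustrative):

exporters:
  googlecloud:
    project: my-gcp-project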

When running the Collector in a Docker container, a credentials file can be passed to the container via volume mounts and environment variables at runtime like so:

docker run \
  --volume ~/service-account-key.json:/etc/otel/key.json \
  --volume $(pwd)/config.yaml:/etc/otel/config.yaml \
  --env GOOGLE_APPLICATION_CREDENTIALS=/etc/otel/key.json \
  --expose 4317 \
  --expose 55681 \
  --rm \
  otel/opentelemetry-collector-contrib

Using gcloud auth application-default login

Using gcloud auth application-default login to authenticate is not recommended for production use. Instead, it’s best to use a GCP service account through one of the methods listed above. The gcloud auth command can be useful for development and testing on a user account, and authenticating with it follows the same approach as the service account key method above.

Running the Collector

These instructions are to get you up and running quickly with the GCP exporter in a local development environment. We'll also point out alternatives that may be more suitable for CI or production.

  1. Obtain a Collector binary. Pull a binary or Docker image for the OpenTelemetry contrib collector, which includes the GCP exporter plugin (for example, a release from the open-telemetry/opentelemetry-collector-releases repository on GitHub, or the otel/opentelemetry-collector-contrib image on Docker Hub).

  2. Create a configuration file config.yaml. The example below shows a minimal recommended configuration that receives OTLP and sends data to GCP, in addition to verbose logging to help understand what is going on. It uses application default credentials (which we will set up in the next step).

    Note that this configuration includes the recommended memory_limiter and batch plugins, which avoid high latency for reporting telemetry, and ensure that the collector itself will stay stable (not run out of memory) by dropping telemetry if needed.

    receivers:
      otlp:
        protocols:
          grpc:
          http:
    exporters:
      googlecloud:
        # Google Cloud Monitoring returns an error if any of the points are invalid, but still accepts the valid points.
        # Retrying successfully sent points is guaranteed to fail because the points were already written.
        # This results in a loop of unnecessary retries.  For now, disable retry_on_failure.
        retry_on_failure:
          enabled: false
      logging:
        loglevel: debug
    processors:
      memory_limiter:
      batch:
        send_batch_max_size: 200
        send_batch_size: 200
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [memory_limiter, batch]
          exporters: [googlecloud, logging]
        metrics:
          receivers: [otlp]
          processors: [memory_limiter, batch]
          exporters: [googlecloud, logging]
    
  3. Set up credentials.

    1. Enable billing in your GCP project.

    2. Enable the Cloud Metrics and Cloud Trace APIs.

    3. Ensure that your GCP user has (at minimum) roles/monitoring.metricWriter and roles/cloudtrace.agent. You can learn about metric-related and trace-related IAM in the GCP documentation.

    4. Obtain credentials using one of the methods in the Authenticating section above.

  4. Run the collector. The following runs the collector in the foreground, so please execute it in a separate terminal.

    ./otelcol-contrib --config=config.yaml
    
    Alternatives

    If you obtained OS-specific packages or built your own binary in step 1, you'll need to follow the appropriate conventions for running the collector.

  5. Gather telemetry. Run an application that can submit OTLP-formatted metrics and traces, and configure it to send them to 127.0.0.1:4317 (for gRPC) or 127.0.0.1:55681 (for HTTP).

    Alternatives
    • Set up the host metrics receiver, which will gather telemetry from the host without needing an external application to submit telemetry (see the sketch after this list).

    • Set up an application-specific receiver, such as the Nginx receiver, and run the corresponding application.

    • Set up a receiver for some other protocol (such as Prometheus, StatsD, Zipkin or Jaeger), and run an application that speaks one of those protocols.
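
    For instance, a minimal host metrics sketch might look like the following (the scraper selection is illustrative); add hostmetrics to the receivers list of your metrics pipeline:

      receivers:
        hostmetrics:
          collection_interval: 60s
          scrapers:
            cpu:
            memory: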

  6. View telemetry in GCP. Use the GCP metrics explorer and trace overview to view your newly submitted telemetry.

Configuration reference

The following configuration options are supported:

  • project (optional): GCP project identifier.
  • endpoint (optional): Endpoint where data is going to be sent to.
  • user_agent (optional): Override the user agent string sent on requests to Cloud Monitoring (currently only applies to metrics). Specify {{version}} to include the application version number. Defaults to opentelemetry-collector-contrib {{version}}.
  • use_insecure (optional): If true, uses insecure (non-TLS) gRPC transport. Only has effect if endpoint is not "".
  • timeout (optional): Timeout for all API calls. If not set, defaults to 12 seconds.
  • compression (optional): Enable gzip compression on gRPC calls for Metrics or Logs. Valid values: gzip.
  • resource_mappings (optional): Defines mappings of resources from the source (OpenCensus) to the target (Google Cloud).
    • label_mappings (optional): Defines mappings of labels from source keys to target keys. The optional flag on each mapping signals whether the transformation can proceed if that label is missing in the resource.
  • retry_on_failure (optional): Configuration for how to handle retries when sending data to Google Cloud fails.
    • enabled (default = true)
    • initial_interval (default = 5s): Time to wait after the first failure before retrying; ignored if enabled is false
    • max_interval (default = 30s): Is the upper bound on backoff; ignored if enabled is false
    • max_elapsed_time (default = 120s): Is the maximum amount of time spent trying to send a batch; ignored if enabled is false
  • sending_queue (optional): Configuration for how to buffer traces before sending.
    • enabled (default = true)
    • num_consumers (default = 10): Number of consumers that dequeue batches; ignored if enabled is false
    • queue_size (default = 5000): Maximum number of batches kept in memory before dropping data; ignored if enabled is false. Calculate this as num_seconds * requests_per_second, where:
      • num_seconds is the number of seconds to buffer in case of a backend outage
      • requests_per_second is the average number of requests per second. For example, to buffer 60 seconds of outage at 10 requests per second, set queue_size to 600.
  • destination_project_quota (optional): Counts quota for traces and metrics against the project to which the data is sent (as opposed to the project associated with the Collector's service account), for example when setting project_id or using multi-project export. (default = false)

Note: The retry_on_failure and sending_queue settings are provided (and documented) by the Exporter Helper.

Additional configuration for the metric exporter:

  • metric.prefix (optional): MetricPrefix overrides the prefix / namespace of the Google Cloud metric type identifier. If not set, defaults to "custom.googleapis.com/opencensus/"
  • metric.skip_create_descriptor (optional): Whether to skip creating the metric descriptor.
  • metric.experimental_wal_config.directory (optional): Path to local write-ahead-log file.
  • metric.experimental_wal_config.max_backoff (optional): Maximum duration to retry entries from the WAL on network errors.

Additional configuration for the logging exporter:

  • log.default_log_name (optional): Defines a default name for log entries. If left unset, and a log entry does not have the gcp.log_name attribute set, the exporter will return an error processing that entry.
  • log.error_reporting_type (optional, default = false): If true, log records with a severity of error or higher will be converted to JSON payloads with the @type field set for GCP Error Reporting. If the body is currently a string, it will be converted to a message field in the new JSON payload. If the body is already a map, the @type field will be added to the map. Behavior for other body types (such as bytes) is undefined.
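
For instance, enabling Error Reporting conversion alongside a fallback log name might look like the following sketch (the log name is illustrative):

exporters:
  googlecloud:
    log:
      default_log_name: my-service
      error_reporting_type: true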

Example:

exporters:
  googlecloud:
    # Google Cloud Monitoring returns an error if any of the points are invalid, but still accepts the valid points.
    # Retrying successfully sent points is guaranteed to fail because the points were already written.
    # This results in a loop of unnecessary retries.  For now, disable retry_on_failure.
    retry_on_failure:
      enabled: false
    project: my-project
    endpoint: test-endpoint
    user_agent: my-collector {{version}}
    use_insecure: true
    timeout: 12s

    resource_mappings:
      - source_type: source.resource1
        target_type: target-resource1
        label_mappings:
          - source_key: contrib.opencensus.io/exporter/googlecloud/project_id
            target_key: project_id
            optional: true
          - source_key: source.label1
            target_key: target_label_1

    sending_queue:
      enabled: true
      num_consumers: 2
      queue_size: 50

    metric:
      prefix: prefix
      skip_create_descriptor: true
      compression: gzip

    log:
      default_log_name: my-app

Beyond the standard YAML configuration outlined in the sections above, exporters that leverage the net/http package (all do today) also respect the following proxy environment variables:

  • HTTP_PROXY
  • HTTPS_PROXY
  • NO_PROXY

If these variables are set at Collector start time, they determine whether exporters, regardless of protocol, proxy their traffic.

Preventing metric label collisions

The metrics exporter can add metric labels to timeseries, such as when setting metric.service_resource_labels, metric.instrumentation_library_labels (both on by default), or when using metric.resource_filters to convert resource attributes to metric labels.

However, if your metrics already contain any of these labels they will fail to export to Google Cloud with a Duplicate label key encountered error. Such labels from the default features above include:

  • service_name
  • service_namespace
  • service_instance_id
  • instrumentation_source
  • instrumentation_version

(Note that these are the sanitized versions of OpenTelemetry attributes, with . replaced by _ to be compatible with Cloud Monitoring. For example, service_name comes from the service.name resource attribute.)

To prevent this, it's recommended to use the transform processor in your collector config to rename existing metric labels to preserve them, for example:

processors:
  transform:
    metric_statements:
    - context: datapoint
      statements:
      - set(attributes["exported_service_name"], attributes["service_name"])
      - delete_key(attributes, "service_name")
      - set(attributes["exported_service_namespace"], attributes["service_namespace"])
      - delete_key(attributes, "service_namespace")
      - set(attributes["exported_service_instance_id"], attributes["service_instance_id"])
      - delete_key(attributes, "service_instance_id")
      - set(attributes["exported_instrumentation_source"], attributes["instrumentation_source"])
      - delete_key(attributes, "instrumentation_source")
      - set(attributes["exported_instrumentation_version"], attributes["instrumentation_version"])
      - delete_key(attributes, "instrumentation_version")

The same method can be used for any resource attributes being filtered to metric labels, or metric labels which might collide with the GCP monitored resource used with resource detection.

Keep in mind that your conflicting attributes may contain dots instead of underscores (e.g., service.name), but these will still collide once all attributes are normalized to metric labels. In this case you will need to update the collector config above appropriately.
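
For example, a dotted attribute can be renamed with the same transform processor approach used above (a minimal sketch; extend the statements for each conflicting attribute):

processors:
  transform:
    metric_statements:
    - context: datapoint
      statements:
      - set(attributes["exported_service_name"], attributes["service.name"])
      - delete_key(attributes, "service.name")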

Logging Exporter

The logging exporter processes OpenTelemetry log entries and exports them to GCP Cloud Logging. Logs can be collected using one of the opentelemetry-collector-contrib log receivers, such as the filelogreceiver.

Log entries must contain any Cloud Logging-specific fields as a matching OpenTelemetry attribute (as shown in examples from the logs data model). These attributes can be parsed using the various log collection operators available upstream.

For example, the following config parses the HTTPRequest field from Apache log entries saved in /var/log/apache.log. It also parses out the timestamp and inserts a non-default log_name attribute and GCP MonitoredResource attribute.

receivers:
  filelog:
    include: [ /var/log/apache.log ]
    start_at: beginning
    operators:
      - id: http_request_parser
        type: regex_parser
        regex: '(?m)^(?P<remoteIp>[^ ]*) (?P<host>[^ ]*) (?P<user>[^ ]*) \[(?P<time>[^\]]*)\] "(?P<requestMethod>\S+)(?: +(?P<requestUrl>[^\"]*?)(?: +(?P<protocol>\S+))?)?" (?P<status>[^ ]*) (?P<responseSize>[^ ]*)(?: "(?P<referer>[^\"]*)" "(?P<userAgent>[^\"]*)")?$'
        parse_to: attributes["gcp.http_request"]
        timestamp:
          parse_from: attributes["gcp.http_request"].time
          layout_type: strptime
          layout: '%d/%b/%Y:%H:%M:%S %z'
    converter:
      max_flush_count: 100
      flush_interval: 100ms

exporters:
  googlecloud:
    project: my-gcp-project
    log:
      default_log_name: opentelemetry.io/collector-exported-log

processors:
  memory_limiter:
    check_interval: 1s
    limit_mib: 4000
    spike_limit_mib: 800
  resourcedetection:
    detectors: [gce, gke]
    timeout: 10s
  attributes:
    # Override the default log name.  `gcp.log_name` takes precedence
    # over the `default_log_name` specified in the exporter.
    actions:
      - key: gcp.log_name
        action: insert
        value: apache-access-log

service:
  pipelines:
    logs:
      receivers: [filelog]
      processors: [memory_limiter, resourcedetection, attributes]
      exporters: [googlecloud]

This would parse logs of the following example structure:

127.0.0.1 - - [26/Apr/2022:22:53:36 +0800] "GET / HTTP/1.1" 200 1247

into the following GCP entry structure:

{
  "logName": "projects/my-gcp-project/logs/apache-access-log",
  "resource": {
    "type": "gce_instance",
    "labels": {
      "instance_id": "",
      "zone": ""
    }
  },
  "textPayload": "127.0.0.1 - - [26/Apr/2022:22:53:36 +0800] \"GET / HTTP/1.1\" 200 1247",
  "timestamp": "2022-05-02T12:16:14.574548493Z",
  "httpRequest": {
    "requestMethod": "GET",
    "requestUrl": "/",
    "status": 200,
    "responseSize": "1247",
    "remoteIp": "127.0.0.1",
    "protocol": "HTTP/1.1"
  }
}

The logging exporter also supports the full range of GCP log severity levels, which differ from the available OpenTelemetry log severity levels. To accommodate this, the following mapping is used to equate an incoming OpenTelemetry SeverityNumber to a matching GCP log severity:

OTel SeverityNumber/Name   GCP severity level
Undefined                  Default
1-4 / Trace                Debug
5-8 / Debug                Debug
9-10 / Info                Info
11-12 / Info               Notice
13-16 / Warn               Warning
17-20 / Error              Error
21-22 / Fatal              Critical
23 / Fatal                 Alert
24 / Fatal                 Emergency

The upstream severity parser (along with the regex parser) allows for additional flexibility in parsing log severity from incoming entries.

Multi-Project exporting

By default, the exporter sends telemetry to the project specified by project in the configuration. This can be overridden on a per-metric basis using the gcp.project.id resource attribute. For example, if a metric has a label project, you could use the groupbyattrs processor to promote it to a resource label, and the resource processor to rename the attribute from project to gcp.project.id, as sketched below.
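
A minimal sketch of that promotion and rename (the processor settings shown are illustrative, not a definitive recipe):

processors:
  groupbyattrs:
    keys: [project]
  resource:
    attributes:
      - key: gcp.project.id
        from_attribute: project
        action: upsert
      - key: project
        action: delete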

Multi-Project quota usage

The gcp.project.id label can be combined with the destination_project_quota option to attribute quota usage to the project parsed by the label. This feature is available for traces, metrics, and logs. The Collector's default service account will need roles/serviceusage.serviceUsageConsumer IAM permissions in the destination quota project.

Note that this option will not work if a quota project is already defined in your Collector's GCP credentials; in that case, the telemetry will fail to export with a "project not found" error. To remove the quota project, manually edit your ADC file (if it exists) and delete the quota_project_id entry.

Recommendations

It is recommended to always run the batch processor and memory limiter in tracing pipelines to ensure optimal network usage and to avoid memory overruns. You may also want to run an additional sampler, depending on your needs.
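
For example, head sampling can be added with the probabilistic sampler processor from opentelemetry-collector-contrib; a minimal sketch (the percentage is illustrative), added to the traces pipeline's processors list:

processors:
  probabilistic_sampler:
    sampling_percentage: 10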

Deprecations

The previous trace configuration (v0.21.0) has been deprecated in favor of the common configuration options available in OpenTelemetry. These will cause a failure to start and should be migrated:

  • trace.bundle_delay_threshold (optional): Use batch processor instead (docs).
  • trace.bundle_count_threshold (optional): Use batch processor instead (docs).
  • trace.bundle_byte_threshold (optional): Use memorylimiter processor instead (docs).
  • trace.bundle_byte_limit (optional): Use memorylimiter processor instead (docs).
  • trace.buffer_max_bytes (optional): Use memorylimiter processor instead (docs).

Documentation

Overview

Package collector contains the wrapper for OpenTelemetry-GoogleCloud exporter to be used in opentelemetry-collector.

Index

Constants

const (
	HTTPRequestAttributeKey    = "gcp.http_request"
	LogNameAttributeKey        = "gcp.log_name"
	SourceLocationAttributeKey = "gcp.source_location"
	TraceSampledAttributeKey   = "gcp.trace_sampled"

	GCPTypeKey                 = "@type"
	GCPErrorReportingTypeValue = "type.googleapis.com/google.devtools.clouderrorreporting.v1beta1.ReportedErrorEvent"
)
const (
	SummaryCountPrefix = "_count"
	SummarySumSuffix   = "_sum"
)

Constants we use when translating summary metrics into GCP.

const (
	DefaultTimeout = 12 * time.Second // Consistent with Cloud Monitoring's timeout
)

Variables

This section is empty.

Functions

func MetricViews

func MetricViews() []*view.View

MetricViews returns a slice of views for this exporter's metrics.

func ValidateConfig added in v0.28.0

func ValidateConfig(cfg Config) error

ValidateConfig returns an error if the provided configuration is invalid.

Types

type AttributeMapping added in v0.28.0

type AttributeMapping struct {
	// Key is the OpenTelemetry attribute key
	Key string `mapstructure:"key"`
	// Replacement is the attribute sent to Google Cloud Trace
	Replacement string `mapstructure:"replacement"`
}

AttributeMapping maps from an OpenTelemetry key to a Google Cloud Trace key.
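
As a configuration sketch, a custom mapping could look like the following (the key and replacement shown are illustrative, not the exporter's defaults):

exporters:
  googlecloud:
    trace:
      attribute_mappings:
        - key: http.method
          replacement: /http/method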

type ClientConfig

type ClientConfig struct {
	// GetClientOptions returns additional options to be passed
	// to the underlying Google Cloud API client.
	// Must be set programmatically (no support via declarative config).
	// If GetClientOptions returns any options, the exporter will not add the
	// default credentials, as those could conflict with options provided via
	// GetClientOptions.
	// Optional.
	GetClientOptions func() []option.ClientOption

	Endpoint string `mapstructure:"endpoint"`
	// Compression specifies the compression format for Metrics and Logging gRPC requests.
	// Supported values: gzip.
	Compression string `mapstructure:"compression"`
	// Only has effect if Endpoint is not ""
	UseInsecure bool `mapstructure:"use_insecure"`
	// GRPCPoolSize sets the size of the connection pool in the GCP client
	GRPCPoolSize int `mapstructure:"grpc_pool_size"`
}

type Config

type Config struct {
	// ProjectID is the project telemetry is sent to if the gcp.project.id
	// resource attribute is not set. If unspecified, this is determined using
	// application default credentials.
	ProjectID               string            `mapstructure:"project"`
	UserAgent               string            `mapstructure:"user_agent"`
	ImpersonateConfig       ImpersonateConfig `mapstructure:"impersonate"`
	TraceConfig             TraceConfig       `mapstructure:"trace"`
	LogConfig               LogConfig         `mapstructure:"log"`
	MetricConfig            MetricConfig      `mapstructure:"metric"`
	DestinationProjectQuota bool              `mapstructure:"destination_project_quota"`
}

Config defines configuration for Google Cloud exporter.

func DefaultConfig

func DefaultConfig() Config

DefaultConfig creates the default configuration for exporter.

type ImpersonateConfig added in v0.31.0

type ImpersonateConfig struct {
	TargetPrincipal string   `mapstructure:"target_principal"`
	Subject         string   `mapstructure:"subject"`
	Delegates       []string `mapstructure:"delegates"`
}

ImpersonateConfig defines configuration for service account impersonation.
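
A configuration sketch for impersonation (the target principal is a hypothetical service account):

exporters:
  googlecloud:
    impersonate:
      target_principal: otel-collector@my-project.iam.gserviceaccount.com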

type LogConfig added in v0.29.0

type LogConfig struct {
	// DefaultLogName sets the fallback log name to use when one isn't explicitly set
	// for a log entry. If unset, logs without a log name will raise an error.
	DefaultLogName string `mapstructure:"default_log_name"`
	// ResourceFilters, if provided, provides a list of resource filters.
	// Resource attributes matching any filter will be included in LogEntry labels.
	// Defaults to empty, which won't include any additional resource labels.
	ResourceFilters []ResourceFilter `mapstructure:"resource_filters"`
	ClientConfig    ClientConfig     `mapstructure:",squash"`
	// ServiceResourceLabels, if true, causes the exporter to copy OTel's
	// service.name, service.namespace, and service.instance.id resource attributes into the Cloud Logging LogEntry labels.
	// Disabling this option does not prevent resource_filters from adding those labels. Default is true.
	ServiceResourceLabels bool `mapstructure:"service_resource_labels"`
	// ErrorReportingType enables automatically parsing error logs to a json payload containing the
	// type value for GCP Error Reporting. See https://cloud.google.com/error-reporting/docs/formatting-error-messages#log-text.
	ErrorReportingType bool `mapstructure:"error_reporting_type"`
}

type LogsExporter added in v0.29.0

type LogsExporter struct {
	// contains filtered or unexported fields
}

func NewGoogleCloudLogsExporter added in v0.29.0

func NewGoogleCloudLogsExporter(
	ctx context.Context,
	cfg Config,
	log *zap.Logger,
	version string,
) (*LogsExporter, error)

func (*LogsExporter) ConfigureExporter added in v0.35.2

func (l *LogsExporter) ConfigureExporter(config *logsutil.ExporterConfig)

ConfigureExporter is used by integration tests to set exporter settings not visible to users.

func (*LogsExporter) PushLogs added in v0.29.0

func (l *LogsExporter) PushLogs(ctx context.Context, ld plog.Logs) error

func (*LogsExporter) Shutdown added in v0.29.0

func (l *LogsExporter) Shutdown(ctx context.Context) error

func (*LogsExporter) Start added in v0.46.0

func (l *LogsExporter) Start(ctx context.Context, _ component.Host) error

type MetricConfig

type MetricConfig struct {
	// MapMonitoredResource is not exposed as an option in the configuration, but
	// can be used by other exporters to extend the functionality of this
	// exporter. It allows overriding the function used to map otel resource to
	// monitored resource.
	MapMonitoredResource func(pcommon.Resource) *monitoredrespb.MonitoredResource
	// ExtraMetrics is an extension point for exporters to modify the metrics
	// before they are sent by the exporter.
	ExtraMetrics func(pmetric.Metrics)
	// GetMetricName is not settable in config files, but can be used by other
	// exporters which extend the functionality of this exporter. It allows
	// customizing the naming of metrics. baseName already includes type
	// suffixes for summary metrics, but does not (yet) include the domain prefix
	GetMetricName func(baseName string, metric pmetric.Metric) (string, error)
	// WALConfig holds configuration settings for the write ahead log.
	WALConfig *WALConfig `mapstructure:"experimental_wal_config"`
	Prefix    string     `mapstructure:"prefix"`
	// KnownDomains contains a list of prefixes. If a metric already has one
	// of these prefixes, the prefix is not added.
	KnownDomains []string `mapstructure:"known_domains"`
	// ResourceFilters, if provided, provides a list of resource filters.
	// Resource attributes matching any filter will be included in metric labels.
	// Defaults to empty, which won't include any additional resource labels. Note that the
	// service_resource_labels option operates independently from resource_filters.
	ResourceFilters []ResourceFilter `mapstructure:"resource_filters"`
	ClientConfig    ClientConfig     `mapstructure:",squash"`
	// CreateMetricDescriptorBufferSize is the buffer size for the channel
	// which asynchronously calls CreateMetricDescriptor. Default is 10.
	CreateMetricDescriptorBufferSize int  `mapstructure:"create_metric_descriptor_buffer_size"`
	SkipCreateMetricDescriptor       bool `mapstructure:"skip_create_descriptor"`
	// CreateServiceTimeSeries, if true, this will send all timeseries using `CreateServiceTimeSeries`.
	// Implicitly, this sets `SkipMetricDescriptor` to true.
	CreateServiceTimeSeries bool `mapstructure:"create_service_timeseries"`
	// InstrumentationLibraryLabels, if true, set the instrumentation_source
	// and instrumentation_version labels. Defaults to true.
	InstrumentationLibraryLabels bool `mapstructure:"instrumentation_library_labels"`
	// ServiceResourceLabels, if true, causes the exporter to copy OTel's
	// service.name, service.namespace, and service.instance.id resource attributes into the GCM timeseries metric labels. This
	// option is recommended to avoid writing duplicate timeseries against the same monitored
	// resource. Disabling this option does not prevent resource_filters from adding those
	// labels. Default is true.
	ServiceResourceLabels bool `mapstructure:"service_resource_labels"`
	// CumulativeNormalization normalizes cumulative metrics without start times or with
	// explicit reset points by subtracting subsequent points from the initial point.
	// It is enabled by default. Since it caches starting points, it may result in
	// increased memory usage.
	CumulativeNormalization bool `mapstructure:"cumulative_normalization"`
	// EnableSumOfSquaredDeviation enables calculation of an estimated sum of squared
	// deviation.  It isn't correct, so we don't send it by default, and don't expose
	// it to users. For some uses, it is expected, however.
	EnableSumOfSquaredDeviation bool `mapstructure:"sum_of_squared_deviation"`
}
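
For example, the resource_filters option above might be set as in this sketch (the prefix and regex values are illustrative):

exporters:
  googlecloud:
    metric:
      resource_filters:
        - prefix: k8s.pod.
        - regex: cloud\..*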

type MetricsExporter

type MetricsExporter struct {
	// contains filtered or unexported fields
}

MetricsExporter is the GCM exporter that uses pdata directly.

func NewGoogleCloudMetricsExporter

func NewGoogleCloudMetricsExporter(
	ctx context.Context,
	cfg Config,
	log *zap.Logger,
	version string,
	timeout time.Duration,
) (*MetricsExporter, error)

func (*MetricsExporter) PushMetrics

func (me *MetricsExporter) PushMetrics(ctx context.Context, m pmetric.Metrics) error

PushMetrics pushes pdata metrics to GCM, creating metric descriptors if necessary.

func (*MetricsExporter) Shutdown

func (me *MetricsExporter) Shutdown(ctx context.Context) error

func (*MetricsExporter) Start added in v0.46.0

func (me *MetricsExporter) Start(ctx context.Context, _ component.Host) error

type ResourceFilter

type ResourceFilter struct {
	// Match resource keys by prefix
	Prefix string `mapstructure:"prefix"`
	// Match resource keys by regex
	Regex string `mapstructure:"regex"`
}

type TraceConfig

type TraceConfig struct {
	// AttributeMappings determines how to map from OpenTelemetry attribute
	// keys to Google Cloud Trace keys.  By default, it changes http and
	// service keys so that they appear more prominently in the UI.
	AttributeMappings []AttributeMapping `mapstructure:"attribute_mappings"`

	ClientConfig ClientConfig `mapstructure:",squash"`
}

type TraceExporter

type TraceExporter struct {
	// contains filtered or unexported fields
}

TraceExporter is a wrapper struct around the OpenTelemetry Cloud Trace exporter.

func NewGoogleCloudTracesExporter

func NewGoogleCloudTracesExporter(ctx context.Context, cfg Config, version string, timeout time.Duration) (*TraceExporter, error)

func (*TraceExporter) PushTraces

func (te *TraceExporter) PushTraces(ctx context.Context, td ptrace.Traces) error

PushTraces calls texporter.ExportSpan for each span in the given traces.

func (*TraceExporter) Shutdown

func (te *TraceExporter) Shutdown(ctx context.Context) error

func (*TraceExporter) Start added in v0.46.0

func (te *TraceExporter) Start(ctx context.Context, _ component.Host) error

type WALConfig added in v0.39.1

type WALConfig struct {
	// Directory is the location to store WAL files.
	Directory string `mapstructure:"directory"`
	// MaxBackoff sets the length of time to exponentially re-try failed exports.
	MaxBackoff time.Duration `mapstructure:"max_backoff"`
}

WALConfig defines settings for the write ahead log. WAL buffering writes data points in-order to disk before reading and exporting them. This allows for better retry logic when exporting fails (such as a network outage), because it preserves both the data on disk and the order of the data points.
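
A configuration sketch enabling the WAL (the directory path and backoff duration are illustrative):

exporters:
  googlecloud:
    metric:
      experimental_wal_config:
        directory: /var/lib/otelcol/wal
        max_backoff: 1h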

Directories

Path Synopsis
internal
