v0.0.0-...-d931496 · Published: Jul 15, 2020 · License: Apache-2.0
pipeline-gnmi

NOTE: For a more recently developed collector with more output flexibility and support, please evaluate usage of the following Telegraf plugins for your use case: cisco_telemetry_mdt and cisco_telemetry_gnmi.

A Model-Driven Telemetry collector based on the open-source tool pipeline, including enhancements and bug fixes.

pipeline-gnmi is a Model-Driven Telemetry (MDT) collector based on the open-source tool pipeline, adding gNMI support and fixes for maintainability (e.g. Go modules) and compatibility (e.g. Kafka version support). It supports MDT from IOS XE, IOS XR, and NX-OS, enabling end-to-end Cisco MDT collection for DIY operators.

The original pipeline README is included here for reference.

Usage

pipeline-gnmi is written in Go and targets Go 1.11+. Windows and macOS/Darwin support is experimental.

  1. Download pipeline-gnmi binaries from Releases.
  2. Build from source:
git clone https://github.com/cisco-ie/pipeline-gnmi
cd pipeline-gnmi
make build
  3. Install via go get github.com/cisco-ie/pipeline-gnmi; the binary is placed in $GOPATH/bin.

Configuration

pipeline configuration support is maintained and detailed in the original README. Sample configuration is supplied as pipeline.conf.

gNMI Support

This project introduces support for gNMI. gNMI is a standardized and cross-platform protocol for network management and telemetry. gNMI does not require prior sensor path configuration on the target device, merely enabling gRPC/gNMI is enough. Sensor paths are requested by the collector (e.g. pipeline). Subscription type (interval, on-change, target-defined) can be specified per path.

Retrieved sensor values can be filtered directly at the input stage through selectors in the configuration file, by defining all the sensor paths that should be stored in a TSDB or forwarded via Kafka. Regular metrics filtering through metrics.json files is not implemented for gNMI input, because that configuration format is not user-friendly.

[mygnmirouter]
stage = xport_input
type = gnmi
server = 10.49.234.114:57777

# Sensor Path to subscribe to. No configuration on the device necessary
# Appending an @ with a parameter specifies subscription type:
#   @x where x is a positive number indicates a fixed interval, e.g. @10 -> every 10 seconds
#   @change indicates only changes should be reported
#   omitting @ and the parameter requests a target-defined subscription (not universally supported)
#
path1 = Cisco-IOS-XR-infra-statsd-oper:infra-statistics/interfaces/interface/latest/generic-counters@10
#path2 = /interfaces/interface/state@change

# Whitelist the actual sensor values we are interested in (1 per line) and drop the rest.
# This replaces metrics-based filtering for gNMI input - which is not implemented.
# Note: Specifying one or more selectors will drop all other sensor values and is applied for all paths.
#select1 = Cisco-IOS-XR-infra-statsd-oper:infra-statistics/interfaces/interface/latest/generic-counters/packets-sent
#select2 = Cisco-IOS-XR-infra-statsd-oper:infra-statistics/interfaces/interface/latest/generic-counters/packets-received

# Suppress redundant messages (minimum heartbeat interval)
# If set to 0 or a positive value, redundant messages should be suppressed by the server
# If greater than 0, the number of seconds after which a measurement should be sent, even if no change has occurred
#heartbeat_interval = 0

tls = false
username = cisco
password = ...
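The @-suffix convention described in the sample above can be parsed with a few lines of Go. The following is an illustrative sketch, not pipeline's actual implementation; the `subMode` type and `parsePath` helper are hypothetical names:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// subMode captures the subscription mode derived from an optional
// "@" suffix on a configured sensor path.
type subMode struct {
	Path     string
	OnChange bool // "@change": report only changes
	Interval int  // "@N": sample every N seconds; 0 means target-defined
}

// parsePath splits a configured path into the sensor path and its
// subscription mode, following the convention from the sample config.
func parsePath(p string) (subMode, error) {
	idx := strings.LastIndex(p, "@")
	if idx < 0 {
		return subMode{Path: p}, nil // no suffix: target-defined subscription
	}
	base, suffix := p[:idx], p[idx+1:]
	if suffix == "change" {
		return subMode{Path: base, OnChange: true}, nil
	}
	n, err := strconv.Atoi(suffix)
	if err != nil || n <= 0 {
		return subMode{}, fmt.Errorf("invalid subscription suffix %q", suffix)
	}
	return subMode{Path: base, Interval: n}, nil
}

func main() {
	m, _ := parsePath("Cisco-IOS-XR-infra-statsd-oper:infra-statistics/interfaces/interface/latest/generic-counters@10")
	fmt.Println(m.Interval, m.Path)
}
```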
Kafka 2.x Support

This project supports Kafka 2.x by requiring the Kafka version (kafkaversion) to be specified in the config file stage. This is a requirement of the underlying Kafka library and ensures that the library is communicating with the Kafka brokers effectively.

[kafkaconsumer]
topic=mdt
consumergroup=pipeline-gnmi
type=kafka
stage=xport_input
brokers=kafka-host:9092
encoding=gpb
datachanneldepth=1000
kafkaversion=2.1.0
Docker Environment Variables

This project has improved Docker support. The Dockerfile uses multi-stage builds and builds Pipeline from scratch. The configuration file can now be created from environment variables directly, e.g.

PIPELINE_default_id=pipeline
PIPELINE_mygnmirouter_stage=xport_input
PIPELINE_mygnmirouter_type=gnmi

is translated into a pipeline.conf with the following contents:

[default]
id = pipeline

[mygnmirouter]
stage = xport_input
type = gnmi

If the special variable suffix _password is used, the value is encrypted using the pipeline RSA key before being written to the password option. Similarly, with _secret the value names a file whose contents are read, encrypted using the pipeline RSA key, and written as the password option. If the pipeline RSA key is not given or does not exist, it is created when the container starts.

Additionally, existing replays of sensor data can be fed in efficiently using xz-compressed files.

Licensing

pipeline-gnmi is licensed under the Apache License, Version 2.0, as is pipeline.

Help!

For support, please open a GitHub Issue or email cisco-ie@cisco.com.

Special Thanks

Chris Cassar for implementing pipeline, used by anyone interested in MDT; Steven Barth for gNMI plugin development; and the Cisco teams implementing MDT support in the platforms.

Documentation

Overview

February 2016, cisco

Copyright (c) 2016 by cisco Systems, Inc. All rights reserved.

Codec factory

Author: Steven Barth <stbarth@cisco.com>


Provide JSON codec, such as it is. More effort required here to exploit common bit (Telemetry message) and provide better implementations exporting metadata. Shipping MDT does not support JSON yet, though this is in the works. JSON is largely pass through.

Package jsonpb provides marshaling and unmarshaling between protocol buffers and JSON. It follows the specification at https://developers.google.com/protocol-buffers/docs/proto3#json.

This package produces a different output than the standard "encoding/json" package, which does not operate correctly on protocol buffers.


Control and data message interfaces and common types.



Extract metrics from telemetry data. This module is independent of the specific metrics package (e.g. prometheus, opentsdb etc). The specific metrics package handling can be found in metrics_x.go.


Feed metrics to prometheus


Code orchestrating the various bits of the pipeline

Go generate directives. Think of this as a limited makefile. Some autogeneration of source is required. Currently, we have:

  • autogeneration of .pb.go from xport_grpc_out.proto
  • auto patching of the vendor directory, until and if changes can be backported.


Input node used to replay streaming telemetry archives. Archives can be recorded using the 'tap' output module with "raw = true" set.

Tests for replay module are in tap_test.go


Output node used to tap pipeline for troubleshooting

Author: Steven Barth <stbarth@cisco.com>


gRPC client for server side streaming telemetry, and server for client side streaming.

The client sets up a connection with the server, subscribes to all paths, and then kicks off goroutines to support each subscription, while watching for cancellation from the conductor.

The server listens for gRPC connections from routers and then collects telemetry streams from gRPC clients.

Package main is a generated protocol buffer package.

It is generated from these files:

xport_grpc_out.proto

It has these top-level messages:

SubJSONReqMsg
SubJSONRepMsg

Directories

Path Synopsis
Packages exporting message samples for test purposes.
Packages exporting message samples for test purposes.
