Elktail
THIS IS A FORK that combines a number of upstream changes. The original README is available upstream.
Elktail is a command-line utility to query and tail Elasticsearch logs. Kibana's web interface is powerful, but using it to search and analyze logs is not always practical. Sometimes you just wish to tail -f the logs that you normally view in Kibana to see what is happening right now. Elktail allows you to do just that, and more: tail the logs, search for errors and specific events on the command line, pipe the search results to any of the standard Unix tools, use it in scripts, or redirect the output to a file to effectively download a log from Elasticsearch/Kibana.
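For instance, filtering and archiving with standard Unix tools might look like this (the query terms and file names here are purely illustrative):

```shell
# Follow the logs live, like tail -f:
elktail -f
# Post-filter the last 1000 entries locally with standard Unix tools:
elktail -n 1000 | grep -i 'timeout' | sort | uniq -c
# Redirect the output to a file to effectively download a log:
elktail -n 10000 level:error > errors.log
```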
Docker usage
There is a prebuilt OCI image that can be used like:
# find a tag here https://gitlab.com/piersharding/elktail/container_registry/2728088
docker run --rm -it --net host registry.gitlab.com/piersharding/elktail/elktail:<a tag> --url http://<to-your-elasticsearch>:9200 --raw
Installation
Install Using Go
Elktail is written in Go, and if you have Go installed, you can just type:
$ go install gitlab.com/piersharding/elktail@latest # or any other version in tags https://gitlab.com/piersharding/elktail/-/tags
This will automatically download, compile, and install the latest version of the app. After that you should have the elktail executable in your $GOPATH/bin .
Download Binary
Latest builds can be found in the artefacts of the compile job step - see https://gitlab.com/piersharding/elktail/-/pipelines .
Alternatively, releases are prepared here (download 'Other'): https://gitlab.com/piersharding/elktail/-/releases .
An example of installation for Linux amd64 (you need jq for this):
# get the latest Linux amd64 release eg: https://gitlab.com/api/v4/projects/33492293/packages/generic/elktail/1.2.0/elktail-linux-amd64-1.2.0
export ELKTAIL_RELEASE=`curl -s https://gitlab.com/api/v4/projects/33492293/releases | jq -r .[0].tag_name | sed s/v//`
curl -q -o elktail-linux-amd64-${ELKTAIL_RELEASE} \
https://gitlab.com/api/v4/projects/33492293/packages/generic/elktail/${ELKTAIL_RELEASE}/elktail-linux-amd64-${ELKTAIL_RELEASE} && \
chmod a+x elktail-linux-amd64-${ELKTAIL_RELEASE} && \
sudo mv elktail-linux-amd64-${ELKTAIL_RELEASE} /usr/local/bin/elktail
Basic Usage
If elktail is invoked without any parameters, it will attempt to connect to the ES instance at localhost:9200 and tail the logs in the latest logstash index (the index that matches the pattern filebeat-\d+\.\d+\.\d+-.* ), displaying the contents of the message field. If your logstash logs do not have a message field, you can change the output format using the -F (--format) parameter. For example:
elktail -F '%@timestamp %log'
By default, elktail will query, print, and exit. If you want to continuously tail the logs then specify -f (--follow).
JSON Path Output
The --format string will be automatically interpreted as a JSON Path query after the initial pass for % formatting options has been processed. The query engine is the https://github.com/PaesslerAG/jsonpath library, which is based on the query language described at https://goessner.net/articles/JsonPath/ .
The JSON Path expressions are applied to each raw JSON log row extracted from Elasticsearch based on the query expression. Each expression is encapsulated in curly braces - {...} - and the expression result is evaluated to a string, eg:
elktail --format '{["@timestamp"]}{.agent.name}{.input.type}{.kubernetes.namespace}{.message}' -d "input.type: journald"
NOTE: for magic characters in the element name, use the following syntax style to escape them: .kubernetes.labels[\"app_kubernetes_io/component\"] .
This query will find records with an input.type equal to journald and then output the tab-delimited timestamp, agent name, input type, Kubernetes namespace, and message. The keen observer will note that journald type records will never have a Kubernetes namespace, so the failed JSON Path expression falls back to an empty string:
elktail -n 3 --format '{["@timestamp"]}{.agent.name}{.kubernetes.namespace}' -d "input.type: journald"
2022-02-22T06:58:57.010Z systems-k8s1-worker-1
2022-02-22T06:58:58.137Z systems-k8s1-worker-1
2022-02-22T06:58:58.363Z systems-k8s1-worker-4
To see the failed expression error messages, just turn up the verbosity with --v1 :
$ elktail -n 3 --format '{["@timestamp"]}{.agent.name}{.kubernetes.namespace}' -d --v1 "input.type: journald"
INFO: HTTP Client: URL [http://192.168.99.131:9200] ...
INFO: selectIndices: CatIndices took 126.4188ms
INFO: Using indices: [filebeat-7.17.0-2022.02.22-000049]
2022-02-22T06:59:37.010Z systems-k8s1-worker-1 ERR [jsonpath: $.kubernetes.namespace] unknown key kubernetes
2022-02-22T06:59:38.140Z systems-k8s1-worker-1 ERR [jsonpath: $.kubernetes.namespace] unknown key kubernetes
2022-02-22T06:59:38.365Z systems-k8s1-worker-4 ERR [jsonpath: $.kubernetes.namespace] unknown key kubernetes
If the .message field is actually a serialised JSON value, then it is possible to introspect it by drilling the JSONPath terms down into it: elktail will attempt to unpack .message and drill into it.
eg. if we have a .message with a value of:
"message":"{\"level\":\"info\",\"ts\":\"2022-02-15T05:28:08.400Z\",\"caller\":\"traceutil/trace.go:171\",\"msg\":\"trace[100325659] range\",\"detail\":\"{range_begin:/registry/mutatingwebhookconfigurations/vault-agent-injector-cfg; range_end:; response_count:1; response_revision:14508902; }\",\"duration\":\"146.104307ms\",\"start\":\"2022-02-15T05:28:08.253Z\",\"end\":\"2022-02-15T05:28:08.400Z\",\"steps\":[\"trace[100325659] 'agreement among raft nodes before linearized reading' (duration: 146.002796ms)\"],\"step_count\":1}"
Then drill into it with:
$ elktail -n 1 -a 2022-01-01T00:00 --format "MSG: {.message.level}" vault
MSG: info
Further details on JSONPath can be found at the IETF https://tools.ietf.org/id/draft-goessner-dispatch-jsonpath-00.html .
Connecting Through SSH Tunnel
If the ES instance's endpoint is not publicly available over the internet, you can also connect to it through an SSH tunnel. For example, if the ES instance is installed on elastic.example.com but port 9200 is firewalled, you can connect through an SSH tunnel:
elktail --ssh elastic.example.com
Elktail will connect as the current user to elastic.example.com, establish an SSH tunnel to port 9200, and then connect to ES through it.
You can also specify the ssh user, ssh port, and tunnel local port (9199 by default) in the following format:
elktail --ssh [localport:][user@]sshhost.tld[:sshport]
If forwarding to an internal host (ie. using the ssh host as a bastion/jump host), then specify --ssh and --url , eg:
$ elktail --ssh ubuntu@some.remote --url http://internal-es.from.jumphost:9200 ...
Remember to add your ssh keys to the ssh-agent in your user session - eg:
$ ssh-add /path/to/pem/file.pem
Elktail Remembers Last Successful Connection
Once you successfully connect to ES, elktail will remember the connection parameters for future invocations. You can then invoke elktail without any parameters and it will connect to the last ES server it successfully connected to previously.
For example, once you successfully connect to ES using:
elktail --url "http://elastic.example.com:9200" --save
You can then invoke elktail without any parameters and it will again attempt to connect to elastic.example.com:9200 .
Configuration parameters for the last successful connection are stored in the ~/.elktail/ directory.
You can also specify a configuration file using --config /path/to/config/file.json , so that you can maintain multiple canned queries.
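As a sketch, separate config files let you keep one saved connection and query per environment (the file paths and query terms here are hypothetical):

```shell
# Save the connection and the query terms under a dedicated config file:
elktail --config ~/.elktail/prod-errors.json \
  --url http://elastic.example.com:9200 --save level:error
# Later, replay the saved connection and query with no other arguments:
elktail --config ~/.elktail/prod-errors.json
```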
Queries
Elktail also supports ES query-string searches as the argument. For example, in order to tail logs from host myhost.example.com that have a log level of ERROR you could do the following:
elktail 'host:myhost.example.com AND level:error'
A cheatsheet for the KQL syntax can be found here with a pointer to a blog post - https://www.timroes.de/kibana-search-cheatsheet . The official documentation is here https://www.elastic.co/guide/en/kibana/current/kuery-query.html .
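Because the query string passes through the shell, it is safest to quote it as a single argument. A few more illustrative searches (the field names here are assumptions about your mapping):

```shell
# Quoted phrases and wildcards follow the ES query-string syntax:
elktail 'kubernetes.namespace:kube-* AND message:"connection refused"'
# Negate terms with NOT:
elktail 'level:error AND NOT host:myhost.example.com'
```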
Often, when you start out, you do not know what fields you want to output or to query by. A good place to start is to print the raw JSON output with the -r option and then find fields of interest. This can be made easier with the help of -p , which will pretty-print the output:
$ elktail -p
which will dump a pretty-printed record.
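One quick way to discover field names is to pull a single raw record and list its keys with jq (assuming jq is installed):

```shell
# Fetch one raw JSON record and list its top-level field names:
elktail -n 1 -r | jq -r 'keys[]'
```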
Date Ranges and Elastic's Logstash Indices
Logstash stores the logs in Elasticsearch in one-per-day indices. When specifying a date range, elktail needs to search through the appropriate indices depending on the dates selected. Currently, this will only work if your index name pattern contains dates in YYYY.MM.dd format (which is logstash's default).
Specifying Date Ranges
Elktail supports specifying a date range in order to query the logs at specific times. You can specify the date range by using the after -a and before -b options followed by the date. When specifying dates, use the following format: YYYY-MM-ddTHH:mm:ss.SSS (e.g. 2016-06-17T15:20:00.000). The time part is optional and you can omit parts of it (e.g. you can leave out seconds, milliseconds, or the whole time part and only specify the date).
Examples
Search for errors after 3PM, April 1st, 2016:
elktail -a 2016-04-01T15:00 level:error
Search for errors between 1PM and 3PM on July 1st, 2016:
elktail -a 2016-07-01T13:00 -b 2016-07-01T15:00 level:error
datemath
The -a and -b (after and before) date selector options can accept datemath expressions. These would typically be something like -a 'now-2d' -b 'now-1d' for a one-day window covering the previous day.
Examples:
elktail -a "now-60m" -b "now-15m" level:error
Since tailing the logs does not really make sense when using date ranges, specifying date range options implies list-only mode and following is automatically disabled (i.e. elktail will behave as if you did not specify the -f option).
You can also specify a context time span along with the -a (after) option using -C <datemath value> . This will create a date range by adding and subtracting a time value from the after date, eg: -a '2022-05-01T00:00:00' -C '2d' will create an interval of 2022-05-01 +/- 2 days.
Note: this must always be specified in seconds, days, weeks, months, or years, as it is appended to after as a mathematical expression, eg: <after>||+<context-time> .
See for details: https://www.elastic.co/guide/en/elasticsearch/reference/8.0/common-options.html#date-math and https://github.com/vectordotdev/go-datemath .
Other Options
Options marked with (*) are saved between invocations of the command. Each time you specify an option marked with (*), previously stored settings are erased.
NAME:
elktail - utility for tailing Filebeat logs stored in ElasticSearch
USAGE:
elktail-linux-amd64-1.11.0-dirty [global options] [query-string]
Options marked with (*) are saved between invocations of the command. Each time you specify an option marked with (*) previously stored settings are erased.
VERSION:
1.11.0
GLOBAL OPTIONS:
-c value, --config value Configuration file (default: "/home/piers/.elktail/default.json") [$ELKTAIL_CONFIG]
-s, --save Save query terms - next invocation of elktail (without parameters) will use saved query terms. Any additional terms specified will be applied with AND operator to saved terms (default: false)
-U value, --url value (*) ElasticSearch URL (default: "http://localhost:9200") [$ELKTAIL_URL]
--cacert value (*) ca certificate to use when accessing via TLS [$ELKTAIL_CACERT]
-I, --insecure Insecure skip verify server certificate (default: false)
--cert value (*) certificate to use when accessing via TLS [$ELKTAIL_CERT]
--key value (*) key to use when accessing via TLS [$ELKTAIL_KEY]
--apikey value (*) API key to use when accessing via TLS [$ELKTAIL_APIKEY]
-F value, --format value (*) Message format for the entries - field names are referenced using % sign, for example '%@timestamp %message' (default: "[%@timestamp] %agent.name :: %message")
-d, --delimited Add a tab delimiter to output on field boundaries of --format (default: false)
-f, --follow Follow/stream result, like tail -f (default: false)
-r, --raw Output raw (JSON) records (default: false)
-l, --lineno Output record line numbers (default: false)
-p, --pretty-print Output raw pretty printed (JSON) records (default: false)
-i value, --index-pattern value (*) Index pattern - elktail will attempt to tail only the latest of logstash's indexes matched by the pattern (default: "filebeat-\\d+\\.\\d+\\.\\d+.*")
-t value, --timestamp-field value (*) Timestamp field name used for tailing entries (default: "@timestamp")
-n value, --number value (*) Number of entries fetched initially (default: 250)
-a value, --after value List results after or equal to specified date/time (example: -a "2022-06-17T00:00" also takes datemath expressions eg: -a "now-1d")
-b value, --before value List results before or equal to specified date/time (example: -b "2022-06-17T23:59" also takes datemath expressions eg: -a "now-15m")
-C value, --context-time value +/- context time span relative to after (ignores after, defaults rows to 10,000) - expressed as datemath expression, and selects all records using search criteria eg: -C "15m" is +/- 15 minutes
--ssh value, --ssh-tunnel value (*) Use ssh tunnel to connect. Format for the argument is [localport:][user@]sshhost.tld[:sshport] [$ELKTAIL_SSH_TUNNEL]
--ssh-hop value, --ssh-tunnel-hop value (*) second ssh hop - must be used in conjunction with --ssh. Format for the argument is sshhost.tld[:sshport] [$ELKTAIL_SSH_HOP]
-w value, --tunnel-wait value (*) Number of seconds pause to enable tunnel to establish (default: 1) [$ELKTAIL_SSH_WAIT]
-u value, --user value (*) Username and password for authentication. curl-like format (separated by colon) [$ELKTAIL_USER]
--v1 Enable verbose output (for debugging) (default: false)
--v2 Enable even more verbose output (for debugging) (default: false)
--v3 Same as v2 but also trace requests and responses (for debugging) (default: false)
--print-version, -V Print version (default: false)
-h, --help Show help (default: false)
For extended help, go to https://gitlab.com/piersharding/elktail