Elastic stack on Docker

Run the Elastic stack with Docker Compose.

It gives you the ability to analyze any data set by using the searching/aggregation capabilities of Elasticsearch and the visualization power of Kibana.

Animated demo

Uses the official Docker images from Elastic, and components built from sources.

Contents

  1. Features
  2. Requirements
  3. Usage
  4. Configuration
  5. Extensibility
  6. JVM tuning
  7. Going further

Features

This repository is based on deviantony/docker-elk, but adapted to my own requirements. The main goal of this project is to run a production-ready, single-node Elasticsearch instance.

Compared to the original repo:

  1. Uses the original container images. I don't use plugins here, so there is no point in building custom images.
  2. Uses the basic license by default.
  3. Enables bootstrap checks.
  4. Enables TLS and X-Pack security features.
  5. Configures container memory ulimits according to the Elasticsearch documentation.
  6. Adds healthcheck scripts.
  7. Adds a Logstash pipelines config file binding.

Requirements

Host setup
  • Docker Engine version 18.06 or newer
  • Docker Compose version 1.26.0 or newer ℹ The following instructions assume that you are using Docker Compose V2. If you use legacy docker-compose, use docker-compose instead of docker compose.
  • 3 GB of RAM

ℹ Especially on Linux, make sure your user has the required permissions to interact with the Docker daemon. Adjust the Java heap size to your requirements. ⚠ The Docker Compose commands below assume Docker Compose V2.

Docker Desktop
Windows

If you are using the legacy Hyper-V mode of Docker Desktop for Windows, ensure File Sharing is enabled for the C: drive.

macOS

The default configuration of Docker Desktop for Mac allows mounting files from /Users/, /Volumes/, /private/, /tmp and /var/folders exclusively. Make sure the repository is cloned in one of those locations, or follow the instructions from the documentation to add more locations.

Usage

Prepare docker host

Increase virtual memory map:

$ sudo sysctl -w vm.max_map_count=262144

⚠ This change is not persistent. To persist it, create a file in the /etc/sysctl.d/ directory with this setting.

Please check the Elastic docs for more information.
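
For example, a minimal sketch of making the setting persistent (the file name here is an arbitrary choice):

$ echo 'vm.max_map_count=262144' | sudo tee /etc/sysctl.d/90-elasticsearch.conf
$ sudo sysctl --system # reload settings from all sysctl configuration files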

Initial setup

⚠ This project is set up to run Elasticsearch with TLS enabled. You can disable TLS in the service configs and healthcheck scripts if you don't need it.

  1. Clone this repository onto the Docker host.
  2. Follow the TLS setup instructions.
  3. Enable built-in system accounts:
    1. Start Elasticsearch with docker compose up -d elastic
    2. After a few seconds run
      docker compose exec elastic bin/elasticsearch-setup-passwords auto --batch -u https://localhost:9200
      
      That will generate passwords for system accounts.
    3. Add the logstash_writer role and logstash_internal user, if needed, with POST requests ℹ Replace the variables below with your values
      # Create role
      curl --insecure \
        --user elastic:${ELASTIC_PASSWORD} \
        --request POST \
        --header "Content-Type: application/json" \
        --data '{"cluster":["manage_index_templates","monitor","manage_ilm"],"indices":[{"names":["logs-generic-default","logstash-*","ecs-logstash-*"],"privileges":["write","create","create_index","manage","manage_ilm"]},{"names":["logstash","ecs-logstash"],"privileges":["write","manage"]}]}' \
        https://localhost:9200/_security/role/logstash_writer
      # Create user
      curl --insecure \
        --user elastic:${ELASTIC_PASSWORD} \
        --request POST \
        --header "Content-Type: application/json" \
        --data '{"password":"${LOGSTASH_INTERNAL_PASSWD}","roles":["logstash_writer"]}' \
        https://localhost:9200/_security/user/logstash_internal
      
    4. Add the remote_logging_agent role and beats_writer user, if needed, with POST requests ℹ Replace the variables below with your values
      # Create role
      curl --insecure \
        --user elastic:${ELASTIC_PASSWORD} \
        --request POST \
        --header "Content-Type: application/json" \
        --data '{"cluster":["manage_index_templates","manage_ingest_pipelines","monitor","manage_ilm","manage_pipeline"],"indices":[{"names":["logs-*","filebeat-*","metrics-*","metricbeat-*"],"privileges":["write","create","create_index","manage","manage_ilm"]}]}' \
        https://localhost:9200/_security/role/remote_logging_agent
      # Create user
      curl --insecure \
        --user elastic:${ELASTIC_PASSWORD} \
        --request POST \
        --header "Content-Type: application/json" \
        --data '{"password":"${BEATS_WRITER_PASSWD}","roles":["remote_logging_agent","remote_monitoring_agent"]}' \
        https://localhost:9200/_security/user/beats_writer
      
    5. Fill in the generated passwords in the following files: .env, logstash/pipeline/main.conf
  4. Fill in the .env file (a sketch follows this list).
  5. Load Filebeat and Metricbeat Kibana settings with
    docker compose run filebeat setup -E output.elasticsearch.username=elastic -E output.elasticsearch.password=${your_elastic_root_password} -c config/filebeat.docker.yml --strict.perms=false
    docker compose run metricbeat setup -E output.elasticsearch.username=elastic -E output.elasticsearch.password=${your_elastic_root_password} -c config/metricbeat.docker.yml --strict.perms=false
    
  6. Start services with docker compose up. You can also run all services in the background (detached mode) by adding the -d flag to the above command.
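
Here is a rough sketch of the .env contents; the exact variable set depends on the repository, so treat these names and values as placeholders:

LOGSTASH_VERSION=8.1.2
ELASTIC_PASSWORD=<generated elastic password>
LOGSTASH_INTERNAL_PASSWD=<generated logstash_internal password>
BEATS_WRITER_PASSWD=<generated beats_writer password>
FLEET_SERVER_POLICY_ID=<Fleet policy id from the Kibana UI>
FLEET_SERVER_SERVICE_TOKEN=<Fleet service token from the Kibana UI>
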
Docker network driver

There are two network drivers that can be used with docker-compose: bridge and host.

bridge: adds a virtual network and passes through selected ports. It also provides internal domain names (elastic, kibana, etc.), but adds some routing overhead.

host: uses the host network directly. No network isolation, no internal domains, no overhead.

According to Rally testing with the metricbeat race, there is no significant difference between the two.

Using the host network: to use the host network for the Elastic stack, remove the networks and ports sections from the docker-compose.yml file and add the network_mode: host key to the services that should use the host network driver. All services can run in host network mode. When Elasticsearch is set to use the host network, change elasticsearch.hosts to localhost in both the Kibana and Logstash configs.
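
For example, a minimal sketch of the change for Elasticsearch (shown as an excerpt of the service definition, like the other snippets in this README):

elastic:

  network_mode: host
  # the ports and networks sections of this service must be removed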

Check docker compose reference for more information.

Custom changes in docker-compose.yml

To stay synced with the remote repo, it's recommended to put all local changes into docker-compose.override.yml.

The override file uses the same format as the compose file, but it does not need to specify every section: only the values you want to override.

More info in the Docker docs.
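
For example, a minimal docker-compose.override.yml that only raises the Logstash heap (values are illustrative; the service block nests under the top-level services: key, as in the main compose file):

services:
  logstash:
    environment:
      LS_JAVA_OPTS: -Xmx2g -Xms2g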

Cleanup

Elasticsearch data is persisted inside a volume by default.

In order to entirely shutdown the stack and remove all persisted data, use the following Docker Compose command:

$ docker compose down -v
Access Kibana

Give Kibana about a minute to initialize, then access the Kibana web UI by opening http://localhost:5601 in a web browser and use the following credentials to log in:

  • user: elastic
  • password: <your generated elastic password>
Default Kibana index pattern creation

When Kibana launches for the first time, it is not configured with any index pattern.

Via the Kibana web UI

ℹ You need to inject data into Logstash before being able to configure a Logstash index pattern via the Kibana web UI.

Navigate to the Discover view of Kibana from the left sidebar. You will be prompted to create an index pattern. Enter logstash-* to match Logstash indices then, on the next page, select @timestamp as the time filter field. Finally, click Create index pattern and return to the Discover view to inspect your log entries.

Refer to Connect Kibana with Elasticsearch and Creating an index pattern for detailed instructions about the index pattern configuration.

On the command line

Create an index pattern via the Kibana API:

$ curl -XPOST -D- 'http://localhost:5601/api/saved_objects/index-pattern' \
    -H 'Content-Type: application/json' \
    -H 'kbn-version: 8.1.2' \
    -u elastic:<your generated elastic password> \
    -d '{"attributes":{"title":"logstash-*","timeFieldName":"@timestamp"}}'

The created pattern will automatically be marked as the default index pattern as soon as the Kibana UI is opened for the first time.

Configuration

ℹ Configuration is not dynamically reloaded; you will need to restart individual components after any configuration change.

How to configure Elasticsearch

Learn more about the security of the Elastic stack at Secure the Elastic Stack.

The Elasticsearch configuration is stored in elastic/elasticsearch.yml.

You can also specify the options you want to override by setting environment variables inside the Compose file:

elastic:

  environment:
    network.host: _non_loopback_
    cluster.name: my-cluster

Please refer to the following documentation page for more details about how to configure Elasticsearch inside Docker containers: Install Elasticsearch with Docker.

How to configure Kibana

The Kibana default configuration is stored in kibana/config/kibana.yml.

Kibana TLS

It's highly recommended to access Kibana over a secure TLS connection. There are two ways to achieve that:

  • Set up a reverse proxy (like Nginx).
  • Configure Kibana to serve TLS itself.

You can find Kibana TLS setup instructions in tls/README.md.
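
A minimal sketch of the relevant kibana.yml settings when Kibana terminates TLS itself (the certificate paths are assumptions; see tls/README.md for the actual setup):

server.ssl.enabled: true
server.ssl.certificate: /usr/share/kibana/config/certs/kibana.crt
server.ssl.key: /usr/share/kibana/config/certs/kibana.key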

Please refer to the following documentation page for more details about how to configure Kibana inside Docker containers: Install Kibana with Docker.

How to configure Logstash

ℹ Do not use the logstash_system user inside the Logstash pipeline file, it does not have sufficient permissions to create indices. Follow the instructions at Configuring Security in Logstash to create a user with suitable roles.

The Logstash configuration is stored in logstash/logstash.yml; the Logstash pipelines configuration is in logstash/pipelines.yml.
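
For illustration, a hedged sketch of an Elasticsearch output in logstash/pipeline/main.conf using the logstash_internal user created during the initial setup (the hostname and TLS options depend on your setup):

output {
  elasticsearch {
    hosts => ["https://elastic:9200"]
    user => "logstash_internal"
    password => "${LOGSTASH_INTERNAL_PASSWD}"
    ssl => true
    ssl_certificate_verification => false
  }
}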

Please refer to the following documentation page for more details about how to configure Logstash inside Docker containers: Configuring Logstash for Docker.

How to configure Beats

Filebeat and Metricbeat are used for Elastic stack monitoring.

Beats can be configured with the beats/filebeat.docker.yml file or with Docker labels, but for some reason X-Pack monitoring configured with labels doesn't work.
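
A hedged sketch of the Elasticsearch output section in beats/filebeat.docker.yml, using the beats_writer user created during the initial setup (the hosts and certificate handling are assumptions):

output.elasticsearch:
  hosts: ["https://localhost:9200"]
  username: beats_writer
  password: ${BEATS_WRITER_PASSWD}
  ssl.verification_mode: none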

Please refer to the official documentation for more details about how to configure Filebeat inside Docker containers.

How to configure Fleet server

Fleet is a new way to manage log shippers. Instead of a bundle of Beats, you can now run a single service called Elastic Agent, and Fleet is the management server for Elastic Agent.

Since Kibana cannot be preconfigured for the Fleet server with environment variables, use the web UI to configure Fleet and then fill in FLEET_SERVER_POLICY_ID and FLEET_SERVER_SERVICE_TOKEN with your values.
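
A hedged sketch of how those values could be passed to the Fleet server service in the Compose file (the service name fleet and the variable wiring are assumptions):

fleet:

  environment:
    FLEET_SERVER_POLICY_ID: ${FLEET_SERVER_POLICY_ID}
    FLEET_SERVER_SERVICE_TOKEN: ${FLEET_SERVER_SERVICE_TOKEN}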

Elastic registry

The Elastic Package Registry is a service that Kibana and Fleet use to fetch integration packages. It's usually optional, but it is required when running Fleet isolated from the official Elastic registry.

How to scale out the Elasticsearch cluster

Follow the instructions from the Wiki: Scaling out Elasticsearch

Healthcheck

The repo contains healthcheck bash scripts and a healthcheck utility built with Go. You can choose one of them, or skip service healthchecks entirely.

Healthcheck Go utility

Usage: healthcheck [options] [elastic | kibana | logstash] [host]

By default, the tool is configured for the default repo settings (HTTPS for Elasticsearch, default ports, ignoring invalid certs).

⚠ Flags must come before the service type and host!

  • To use basic auth, add the -u <username> (default: remote_monitoring_user) and -p <password> flags.
  • The trigger status can be set with a RegExp via the -s flag, e.g.: healthcheck -s 'green|yellow' elastic
  • A non-default hostname/scheme can be passed as the host argument, e.g.: healthcheck elastic http://elastic
Healthcheck scripts
  1. Add a mount point for each script to the corresponding service.
  2. Change the healthcheck test command (healthcheck: test: "CMD" ...) to the service's healthcheck script (see the sketch after this list).
  3. Change the endpoint being checked and the username/password.
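
For illustration, a hedged sketch of what that could look like for the elastic service (the mount path and script name are assumptions):

elastic:

  volumes:
    - ./elastic/healthcheck.sh:/usr/share/elasticsearch/healthcheck.sh:ro
  healthcheck:
    test: ["CMD", "bash", "/usr/share/elasticsearch/healthcheck.sh"]
    interval: 30s
    timeout: 10s
    retries: 5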

Extensibility

How to add plugins

To add plugins to any Elastic stack component, you have to:

  1. Create a Dockerfile for the service you want to add the plugin to.
  2. Add a RUN statement to that Dockerfile (e.g. RUN logstash-plugin install logstash-filter-json):
# https://www.docker.elastic.co/
FROM docker.elastic.co/logstash/logstash:${LOGSTASH_VERSION}

# Add your logstash plugins setup here
RUN logstash-plugin install logstash-filter-json
  3. Add the associated plugin configuration to the service configuration (e.g. Logstash input/output).
  4. Add the following to the Compose section of the service you want the plugin applied to (e.g. Logstash):
build:
      context: logstash/
  5. (Re)build the images using the docker compose build command.

JVM tuning

How to specify the amount of memory used by a service

By default, both Elasticsearch and Logstash start with 1/4 of the total host memory allocated to the JVM Heap Size.

The startup scripts for Elasticsearch and Logstash can append extra JVM options from the value of an environment variable, allowing the user to adjust the amount of memory that can be used by each component:

Service         Environment variable
Elasticsearch   ES_JAVA_OPTS
Logstash        LS_JAVA_OPTS

For example, to increase the maximum JVM Heap Size for Logstash:

logstash:

  environment:
    LS_JAVA_OPTS: -Xmx1g -Xms1g
How to enable a remote JMX connection to a service

As for the Java Heap memory (see above), you can specify JVM options to enable JMX and map the JMX port on the Docker host.

Update the {ES,LS}_JAVA_OPTS environment variable with the following content (the JMX service is mapped to port 18080 here; you can change that). Do not forget to update the -Djava.rmi.server.hostname option with the IP address of your Docker host (replace DOCKER_HOST_IP):

logstash:

  environment:
    LS_JAVA_OPTS: -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.port=18080 -Dcom.sun.management.jmxremote.rmi.port=18080 -Djava.rmi.server.hostname=DOCKER_HOST_IP -Dcom.sun.management.jmxremote.local.only=false

Going further

Swarm mode

There are currently no plans to support Docker Swarm mode.
