ELK Stack Installation and Configuration With Docker Compose

1. Introduction

The ELK Stack commonly refers to three tools: Elasticsearch, Logstash, and Kibana. There are different ways of installing these tools individually. In this tutorial, we will cover how to install and configure all of them at once using Docker Compose. We will be installing ELK Stack version 8.14.3.

2. Prerequisites

You are expected to have basic knowledge of Docker and Docker Compose. This tutorial uses Docker Desktop for Windows, but you can use any other Docker installation depending on your operating system. Moving forward, we assume that you have a working installation of Docker and Docker Compose on your workstation.

3. Directory Structure

First things first. Let’s start by creating a directory where we will keep the configuration files of the three services (Elasticsearch, Logstash, and Kibana) and the docker-compose.yml file. We will name that directory “elk”.

Below is the directory structure of “elk”:

C:\apps\elk>tree /f
Folder PATH listing for volume Windows
Volume serial number is XXX-XXXX
C:.
│   docker-compose.yml
│
├───elasticsearch
│   └───config
│           elasticsearch.yml
│
├───kibana
│   └───config
│           kibana.yml
│
└───logstash
    ├───config
    │       logstash.yml
    │
    ├───pipeline
    │       logstash.conf
    │
    └───sample-logs
            access_log.log

Make sure you create all these files and put them in the exact locations shown under the “elk” directory.

4. Configuration files

Fill the configuration files of each of the three services with the content given below. We won’t go deeper into the configurations themselves. You can refer to these tutorials to understand more about each service configuration: Elasticsearch, Logstash, and Kibana.

4.1. Elasticsearch

# Elasticsearch configuration file
# elasticsearch.yml
cluster.name: "docker-cluster"
network.host: 0.0.0.0

4.2. Logstash

Pipeline
We configure an input of type file: Logstash tails the dummy file “access_log.log”, which we use for testing purposes only. Logstash outputs its data to Elasticsearch, and we also print each event to the console for debugging.

# Logstash pipeline configuration file
# logstash.conf
input {
    file {
        path => "/tmp/access_log.log"
    }
}

output {
    elasticsearch {
        hosts => ["http://elasticsearch:9200"]
        user => "elastic"
        password => "${ELASTIC_PASSWORD}"
        ssl_certificate_verification => false
        data_stream => false
        index => "logstash-%{+YYYY.MM.dd}"
    }
    # For debugging
    stdout {
        codec => rubydebug
    }
}
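If your access_log.log contains real Apache access-log lines rather than arbitrary text, you could optionally parse them into structured fields by adding a grok filter between the input and output blocks. Here is a minimal sketch; this filter is an illustration, not part of the setup above:

# Optional filter block for logstash.conf
filter {
    grok {
        # COMBINEDAPACHELOG is a pattern shipped with Logstash
        match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
}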

Settings file

# Logstash settings file
# logstash.yml
http.host: "0.0.0.0"
path.config: /usr/share/logstash/pipeline

4.3. Kibana

# Kibana configuration file
# kibana.yml
server.name: kibana
server.host: "0.0.0.0"
elasticsearch.hosts: [ "http://elasticsearch:9200" ]

5. The docker-compose file

In the docker-compose.yml file, you’ll need to declare the three services and connect them to the same network. We will call that network “elk”.
Here is the content of our docker-compose.yml file:

services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.14.3
    container_name: elasticsearch
    environment:
      - discovery.type=single-node
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
    volumes:
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - ./elasticsearch/data:/usr/share/elasticsearch/data
    ports:
      - "9200:9200"
    healthcheck:
      test: ["CMD", "curl", "-s", "http://localhost:9200/_cluster/health?wait_for_status=yellow&timeout=50s"]
      interval: 30s
      timeout: 10s
      retries: 5      
    networks:
      - elk

  logstash:
    image: docker.elastic.co/logstash/logstash:8.14.3
    container_name: logstash
    environment:
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
      - ./logstash/pipeline/:/usr/share/logstash/pipeline/
      - ./logstash/sample-logs/access_log.log:/tmp/access_log.log
    ports:
      - "5044:5044"
    networks:
      - elk
    depends_on:
      elasticsearch:
        condition: service_healthy

  kibana:
    image: docker.elastic.co/kibana/kibana:8.14.3
    container_name: kibana
    environment:
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}    
    volumes:
      - ./kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      elasticsearch:
        condition: service_healthy

networks:
  elk:
    driver: bridge

Pro-tip: We have created an environment variable ELASTIC_PASSWORD on the host machine, and we are passing it to the three services under the environment setting.
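For example, instead of exporting the variable in your shell, you can place a .env file next to docker-compose.yml; Docker Compose reads it automatically. A minimal sketch, where the password value is just a placeholder:

# .env
ELASTIC_PASSWORD=changeme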

5.1. Elasticsearch Service

  • The discovery type is set to “single-node” because we are in a testing environment and only have a single instance of Elasticsearch. We don’t want the overhead of cluster formation and management.
  • Instead of letting Elasticsearch generate a password at startup, we have created a password and saved it in an environment variable. This way, we have total control over the password itself.
  • We use “Volume binding” to bind the configuration file that we created on the host machine to the one in the Docker image. We don’t want the content of the Elasticsearch indices to be lost if the service restarts, so we bind the data directory “/usr/share/elasticsearch/data” to a directory on our host machine, “./elasticsearch/data”.
  • We have created a health check for Elasticsearch to make sure it is ready before the other services start; you can inspect its status with the command shown after this list.
  • Lastly, we use port mapping to make Elasticsearch available on port 9200 from outside the container, and we attach the container to the “elk” network.
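To see what the health check reports at runtime, you can query the container state with docker inspect (a standard Docker CLI command; the container name matches the one declared above):

docker inspect --format "{{.State.Health.Status}}" elasticsearch

It prints “starting”, “healthy”, or “unhealthy”.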

5.2. Logstash Service

  • Logstash has two types of configuration: pipeline and setting configurations. We also use “Volume binding” to bind these configuration files from our host machine to those in the container.
  • Logstash will be available on port 5044 and is attached to the “elk” network.
  • Elasticsearch has to be ready before Logstash can start. We use “depends_on” to configure that behavior.
  • For testing purposes, we have created a file “./logstash/sample-logs/access_log.log” on the host machine and bound it to the file “/tmp/access_log.log” in the container; the snippet below shows how to exercise this binding.
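Once the stack is running, appending a line to the host file is enough for Logstash to pick it up. For example, from the “elk” directory in a Windows Command Prompt (matching the tutorial’s environment):

echo hello-elk >> logstash\sample-logs\access_log.log

Logstash should then print the new event to the console and ship it to Elasticsearch.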

5.3. Kibana Service

  • We also use “Volume binding” to bind the Kibana configuration file kibana.yml from our host machine to the one in the container.
  • Kibana will be available on port 5601 and is attached to the “elk” network.
  • Just like Logstash, Kibana depends on Elasticsearch to be able to start.

Pro-tip: When you configure Elasticsearch and Kibana using Docker Compose in this way, you don’t need to enroll Kibana with an enrollment token, as you would if you had configured each of the services individually.
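Once the stack is up (see section 6), a quick way to confirm that Kibana itself is responding, before opening a browser, is its status endpoint:

curl http://localhost:5601/api/status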

6. Start the Services

Once the configuration files are created and the docker-compose file is ready, use the following command to start the services:

docker-compose up
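This runs the services in the foreground. If you prefer to get your terminal back, you can start them in detached mode instead:

docker-compose up -d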

You can access the container logs using the following command:

docker-compose logs -f

If you need to access the logs of a specific service, just add the service name like this:

docker-compose logs -f elasticsearch

The “-f” option streams the logs in real time, similar to “tail -f”.

7. Verify the installation

7.1. Elasticsearch

To test your setup, you can make a REST API call to Elasticsearch using a command-line tool like curl, or a tool like Postman. We will be using curl here.
Open a command prompt (or a terminal) and enter the following command:

curl -u elastic:%ELASTIC_PASSWORD% http://localhost:9200
  • %ELASTIC_PASSWORD% expands to the Elasticsearch password that we saved earlier as an environment variable (Windows Command Prompt syntax).
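On Linux or macOS, the variable expansion syntax differs slightly:

curl -u elastic:$ELASTIC_PASSWORD http://localhost:9200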

If the command is successful, you’ll get the following output (or similar):

{
  "name" : "elasticsearch",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "UIdL-JY_QR6kHpg4IpTUbg",
  "version" : {
    "number" : "8.14.3",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "d55f984299e0e88dee72ebd8255f7ff130859ad0",
    "build_date" : "2024-07-07T22:04:49.882652950Z",
    "build_snapshot" : false,
    "lucene_version" : "9.10.0",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}
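You can also query the cluster health endpoint to confirm that the single-node cluster has reached at least “yellow” status, which is the same condition the Docker health check waits for:

curl -u elastic:%ELASTIC_PASSWORD% "http://localhost:9200/_cluster/health?pretty"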

7.2. Logstash

We have configured Logstash to listen for changes in the file /tmp/access_log.log and to output its data to Elasticsearch.
Since the file /tmp/access_log.log in the container is bound to the file ./logstash/sample-logs/access_log.log on the host, any text typed in the latter is reflected in the former.

Open the file with a text editor like Notepad and type any text:

[Image: the access_log.log file opened in Notepad with sample text typed in]

Logstash will automatically send the text to Elasticsearch in an index named “logstash-%{+YYYY.MM.dd}”, where %{+YYYY.MM.dd} resolves to the current date.

[Image: Logstash console output showing the new event from access_log.log]
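To confirm that the daily index was created, you can list matching indices with the _cat API:

curl -u elastic:%ELASTIC_PASSWORD% "http://localhost:9200/_cat/indices/logstash-*?v"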

Issue the following command to query the index in Elasticsearch:

curl -u elastic:%ELASTIC_PASSWORD% http://localhost:9200/logstash-*/_search

If everything is OK, you should be able to see the text you typed in the access_log.log file.

[Image: Elasticsearch search response containing the text typed in access_log.log]

7.3. Kibana

Navigate to http://127.0.0.1:5601/ and you should see the Kibana welcome page. If you have already ingested some data into Elasticsearch, Kibana will invite you to create a data view to be able to visualize the content of Elasticsearch indices.

[Image: Kibana welcome page prompting to create a data view]

At this stage, you need to create a data view to be able to visualize the content of the Logstash indices. Follow this tutorial to configure the Logstash data view. Once everything is configured, you should see a screen similar to this:

[Image: Kibana Discover page showing documents from the Logstash index]
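If you prefer to script this step, Kibana 8.x also exposes a data views API. Here is a minimal sketch using Linux/macOS shell syntax (verify the endpoint path against your Kibana version):

curl -u elastic:$ELASTIC_PASSWORD -X POST "http://localhost:5601/api/data_views/data_view" \
  -H "kbn-xsrf: true" -H "Content-Type: application/json" \
  -d '{"data_view": {"title": "logstash-*", "name": "Logstash"}}'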

8. Conclusion

In this tutorial, you learned how to install the ELK Stack using Docker Compose. You should be aware that the setup we did in this tutorial is only for a testing environment and is unsuitable for production. Kindly have a look at these recommendations if you plan to run the ELK Stack in production with Docker.

Noel Kamphoa

Experienced software engineer with expertise in Telecom, Payroll, and Banking. Now Senior Software Engineer at Societe Generale Paris.
