1. Introduction
Logstash is an open-source data collection engine with real-time pipelining capabilities, and it is part of the ELK Stack (Elasticsearch, Logstash, Kibana). In this tutorial, you will learn how to install Logstash using Docker, create a Logstash pipeline, and send data to an Elasticsearch backend.
2. Prerequisites
- You are expected to have basic knowledge of Docker. This tutorial uses Docker Desktop for Windows, but you can use any other Docker installation, depending on your operating system. From here on, we assume that you have a working Docker installation on your workstation.
- This tutorial also assumes that you have Elasticsearch running in your local environment; follow this tutorial if that is not the case. You can verify that Elasticsearch is up with the quick check below.
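As a quick sanity check, assuming the elastic password is stored in the ELASTIC_PASSWORD environment variable (as in the Elasticsearch tutorial), you can query the cluster root endpoint:
>curl -k -u elastic:%ELASTIC_PASSWORD% https://localhost:9200
A JSON document describing the cluster indicates that Elasticsearch is reachable.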
3. Create the Network
Logstash will be communicating with Elasticsearch, so let’s start by creating a Docker network.
Important: Skip this step if you have already followed this other tutorial dedicated to Elasticsearch.
Open a command prompt (or a terminal) and type the following command:
>docker network create elk
With this command, we create a network named “elk”.
Your output will be similar to this:
759d536b180c54486af7414d1f9f7017f9df405ef41c991db5e3f011c20cd5ab
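You can optionally confirm that the network was created:
>docker network ls --filter name=elk
The output should list a network named “elk” using the default bridge driver.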
4. Pull the Logstash Docker image
You can find the Docker images for Logstash in the Docker Elastic Registry. For this tutorial, we will install Logstash version 8.14.3.
Open a command prompt (or a terminal) and issue the following command:
>docker pull docker.elastic.co/logstash/logstash:8.14.3
After some time, the download completes and Docker extracts the image layers.
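You can confirm that the image is now available locally:
>docker image ls docker.elastic.co/logstash/logstash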
5. Create Logstash Configuration files
Logstash uses two types of configuration: pipeline configuration and settings configuration.
5.1. Pipeline Configuration
A pipeline configuration file defines where the data comes from, how it is processed, and where it is sent. The configuration file has the following structure:
input {}
filter {}
output {}
- input: Specifies the source of the data. Examples: stdin, file, etc.
- filter: Defines how the data should be processed and transformed. Examples: filtering events, adding new fields, etc.
- output: Defines where the data should be sent. Examples: stdout, elasticsearch, etc.
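For illustration only (this is not the file we will use later in this tutorial), a minimal pipeline that reads from the standard input, adds a field in the filter section, and prints each event to the standard output could look like this:
# illustrative pipeline: stdin -> mutate -> stdout
input {
  stdin {}
}
filter {
  mutate {
    # add a custom field to every event (the field name here is arbitrary)
    add_field => { "source" => "tutorial" }
  }
}
output {
  stdout {}
}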
5.1.1. Create a directory to store the pipeline configuration file
By default, the container looks for pipeline configuration files under /usr/share/logstash/pipeline/. In our case, we will create a logstash folder on our host machine. Under that folder, we will create a pipeline folder in which we will store the pipeline configuration files. Finally, we will bind that folder to the /usr/share/logstash/pipeline/ directory of the container using volume binding.
Here is the logstash directory structure:
C:\apps\logstash>tree .
Folder PATH listing for volume Windows
Volume serial number is XXXXXX
C:\APPS\LOGSTASH
└───pipeline
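If the folders do not exist yet, you can create them from a command prompt (using the same C:\apps\logstash base path as the rest of this tutorial):
>mkdir C:\apps\logstash\pipeline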
5.1.2. Create the file logstash.conf
In our case, we want to read data from the standard input (stdin) and send it to our Elasticsearch backend. Hence, we create the following pipeline configuration file:
# Logstash pipeline configuration file logstash.conf
input {
  stdin {}
}

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    user => "elastic"
    password => "${ELASTIC_PASSWORD}"
    ssl_certificate_verification => false
    data_stream => false
    index => "logstash-%{+YYYY.MM.dd}"
  }
}
Explanation of the Pipeline file
- The input section simply reads from the standard input.
- hosts: The Elasticsearch backend. We use the hostname "elasticsearch" instead of "localhost" because Elasticsearch and Logstash run in separate Docker containers; on the shared "elk" network, each container is reachable by its name.
- ssl_certificate_verification: We disable SSL certificate verification for simplicity.
- data_stream: We disable the data stream behavior so that we can specify an index name.
- index: The name of the index in Elasticsearch. The index is created automatically when the first message is pushed.
The file is saved under ./logstash/pipeline/logstash.conf on the host machine.
C:\apps\logstash>tree . /F
Folder PATH listing for volume Windows
Volume serial number is XXXXX
C:\APPS\LOGSTASH
└───pipeline
logstash.conf
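Before starting the real container, you can optionally ask Logstash to validate the pipeline syntax and exit. This is a sketch assuming the paths used above, that ELASTIC_PASSWORD is set on the host (the pipeline references it), and that the image entrypoint forwards extra arguments to the logstash command, as the official images do:
>docker run --rm -e ELASTIC_PASSWORD=%ELASTIC_PASSWORD% -v C:/apps/logstash/pipeline/:/usr/share/logstash/pipeline/ docker.elastic.co/logstash/logstash:8.14.3 -f /usr/share/logstash/pipeline/logstash.conf --config.test_and_exit
If the file is valid, Logstash reports that the configuration is OK and then exits.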
5.2. Settings
There are different approaches to defining the Logstash settings (configuration file, environment variables, etc.). The most common one is a configuration file, and that is what we are going to use.
Just like the pipeline configuration, Logstash expects to find the settings files under /usr/share/logstash/config. You may provide a single file or a whole directory; if you provide a directory, Logstash treats every file in it as a configuration file.
To keep it simple, we create a settings file logstash.yml under C:\apps\logstash\config and pass it to the container at runtime.
Here is the configuration file:
# Logstash setting file logstash.yml
http.host: "0.0.0.0"
path.config: /usr/share/logstash/pipeline
config.reload.automatic: true
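A quick word on these settings:
- http.host: Binds the Logstash HTTP API to all network interfaces of the container, so it can be reached from outside the container.
- path.config: Tells Logstash where to look for pipeline configuration files; it matches the directory we bind from the host.
- config.reload.automatic: Makes Logstash reload the pipeline automatically whenever the configuration file changes, which is convenient while experimenting.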
Here is the structure of the logstash directory on the host machine after creating the config file:
C:\apps\logstash>tree . /F
Folder PATH listing for volume Windows
Volume serial number is XXXXX
C:\APPS\LOGSTASH
├───config
│ logstash.yml
│
└───pipeline
logstash.conf
6. Start the Container
Once the image is pulled and the configuration files are created, we use the following command to start the Logstash container:
>docker run --name logstash -h logstash --net elk -it -m 1GB -e ELASTIC_PASSWORD=%ELASTIC_PASSWORD% -v C:/apps/logstash/pipeline/:/usr/share/logstash/pipeline/ -v C:/apps/logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml docker.elastic.co/logstash/logstash:8.14.3
- --name logstash: the name we are giving to the container
- -h logstash: the hostname of the Docker container
- --net elk: specifies the network to which the container is attached
- -it: runs the container interactively, keeping the standard input open so we can type messages into Logstash
- -m 1GB: limits the memory available to the container
- -e ELASTIC_PASSWORD=%ELASTIC_PASSWORD%: passes the environment variable from the host machine to the container
- -v ...: binds a directory (or a single file) on the host machine to a path inside the container. This is how the container picks up our pipeline and settings files.
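Note: the %ELASTIC_PASSWORD% syntax is specific to the Windows command prompt. If you run the command from PowerShell, pass the variable as -e ELASTIC_PASSWORD=$env:ELASTIC_PASSWORD instead, and from a Linux or macOS shell as -e ELASTIC_PASSWORD=$ELASTIC_PASSWORD.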
If everything goes well, Logstash starts, loads the pipeline, and waits for input from the standard input.
7. Verify the installation
We have configured Logstash to read from the standard input and send the data to Elasticsearch.
Generate sample input
Enter the following input in the Logstash console:
>hello from logstash
Check the result in Elasticsearch
You can use a tool like Postman or curl to send a GET query to Elasticsearch. We are using curl here. Open another command prompt, navigate to the folder with the certificate file “http_ca.crt”, and enter the following query:
C:\apps\elasticsearch>curl -k -u elastic:%ELASTIC_PASSWORD% https://localhost:9200/logstash-*/_search
The previous command makes an HTTP call to Elasticsearch, querying the data in all indices whose names match “logstash-*”. We use the “-k” option to skip certificate verification. If everything is OK, Elasticsearch returns the indexed documents.
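The exact index name, identifiers, and timestamps will differ on your machine, but the response should look roughly like this (trimmed for readability):
{
  "hits": {
    "total": { "value": 1, "relation": "eq" },
    "hits": [
      {
        "_index": "logstash-2024.07.30",
        "_source": {
          "message": "hello from logstash",
          "@timestamp": "..."
        }
      }
    ]
  }
}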
8. Conclusion
In this tutorial, you learned how to install Logstash using Docker. To complete your journey on the ELK stack, read this other post about installing Kibana, the data analytics and visualization tool.