1. Introduction
Debugging a distributed Java application by SSH-ing into individual servers to grep through log files is a nightmare. What if you could search and visualize all your Spring Boot logs from a single, powerful dashboard? In this tutorial, we’ll walk through installing the ELK Stack (Elasticsearch, Logstash, Kibana) using Docker Compose—a perfect setup for development and testing. Then, most importantly, I’ll show you exactly how to configure your Java application to send its logs straight to ELK. We will be installing the ELK Stack version 8.14.3.
2. Prerequisites
- Basic knowledge of Docker and Docker Compose. This tutorial uses Docker Desktop for Windows, but you can use any other Docker installation, depending on your operating system.
- A running Docker and Docker Compose installation on your workstation.
- A Java application that you want to monitor. You may use this Spring Boot REST Application.
3. Directory Structure
First things first. Let’s start by creating a directory to hold the configuration files of the three services (Elasticsearch, Logstash, Kibana) and the docker-compose.yml file. We will name that directory “elk”.
Below is the directory structure of “elk”:
C:\apps\elk>tree /f
Folder PATH listing for volume Windows
Volume serial number is XXX-XXXX
C:.
│   docker-compose.yml
│
├───elasticsearch
│   └───config
│           elasticsearch.yml
│
├───kibana
│   └───config
│           kibana.yml
│
└───logstash
    ├───config
    │       logstash.yml
    │
    ├───pipeline
    │       logstash.conf
    │
    └───sample-logs
            access_log.log
Make sure you create all these files and put them in the exact location under the “elk” directory.
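You can create this skeleton from a Windows command prompt; for example (assuming you start from an empty C:\apps\elk directory, and create the files themselves with your editor):
C:\apps\elk>mkdir elasticsearch\config kibana\config logstash\config logstash\pipeline logstash\sample-logs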
4. Configuration files
Fill the configuration files of each of the three services with the content given below. We won’t go deeper into the configurations themselves. You can refer to these tutorials to understand more about each service configuration: Elasticsearch, Logstash, and Kibana.
4.1. Elasticsearch
# Elasticsearch configuration file
# elasticsearch.yml
cluster.name: "docker-cluster"
network.host: 0.0.0.0
4.2. Logstash
Pipeline
We are configuring an input of type file. Logstash will output its data to Elasticsearch. We also print the data to the console for debugging purposes.
The file path refers to our application logs inside the container. We will bind that path to the application logs path outside the container in the docker-compose file.
# Logstash pipeline configuration file
# logstash.conf
input {
  file {
    path => "/usr/share/logstash/logs/*.log"  # Path inside the container
    start_position => "beginning"
    sincedb_path => "/dev/null"               # Optional: for testing, read from the start every time
    codec => multiline {                      # CRITICAL for Java stack traces
      pattern => "^%{TIMESTAMP_ISO8601} "
      negate => true
      what => "previous"
    }
  }
}

filter {
  # Normalize CRLF line endings
  mutate { gsub => ["message", "\r$", ""] }

  grok {
    match => {
      "message" => [
        "^%{TIMESTAMP_ISO8601:timestamp}\s+%{LOGLEVEL:level}\s+--- \[(?<thread>[^\]]+)\]\s+%{JAVACLASS:logger}\s+method=%{WORD:http_method}\s+path=%{DATA:http_path}\s+status=%{NUMBER:http_status:int}\s+duration_ms=%{NUMBER:duration_ms:int}\s*:\s%{GREEDYDATA:msg}$"
      ]
    }
  }

  date {
    match => [ "timestamp", "yyyy-MM-dd HH:mm:ss.SSS" ]
    target => "@timestamp"
    timezone => "Europe/Paris"
  }

  mutate {
    rename => {
      "msg" => "message"
      "level" => "[log][level]"
      "pid" => "[process][pid]"
      "thread" => "[process][thread][name]"
      "logger" => "[log][logger]"
    }
  }
}

output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    user => "elastic"
    password => "${ELASTIC_PASSWORD}"
    ssl_certificate_verification => false
    data_stream => false
    index => "logstash-%{+YYYY.MM.dd}"
  }
  # Print each event to the console for debugging
  stdout {
    codec => rubydebug
  }
}
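For reference, here is a hypothetical log line (in the format produced by the Logback pattern we define in Section 8) that this grok pattern parses into timestamp, level, thread, logger, HTTP fields, and message:
2024-07-20 10:15:30.123 INFO  --- [http-nio-8080-exec-1] c.b.demo.UserController method=GET path=/api/users status=200 duration_ms=12 : Request completed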
Settings file
# Logstash settings file
# logstash.yml
http.host: "0.0.0.0"
path.config: /usr/share/logstash/pipeline
4.3. Kibana
# Kibana configuration file
# kibana.yml
server.name: kibana
server.host: "0.0.0.0"
elasticsearch.hosts: [ "http://elasticsearch:9200" ]
5. The docker-compose file
In the docker-compose.yml file, you’ll need to declare the three services and connect them to the same network. We will call that network “elk”.
Here is the content of our docker-compose.yml file:
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.14.3
    container_name: elasticsearch
    environment:
      - discovery.type=single-node
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
    volumes:
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - ./elasticsearch/data:/usr/share/elasticsearch/data
    ports:
      - "9200:9200"
    healthcheck:
      test: ["CMD", "curl", "-s", "http://localhost:9200/_cluster/health?wait_for_status=yellow&timeout=50s"]
      interval: 30s
      timeout: 10s
      retries: 5
    networks:
      - elk

  logstash:
    image: docker.elastic.co/logstash/logstash:8.14.3
    container_name: logstash
    environment:
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
      - ./logstash/pipeline/:/usr/share/logstash/pipeline/
      - C:/apps/logs:/usr/share/logstash/logs
    ports:
      - "5044:5044"
    networks:
      - elk
    depends_on:
      elasticsearch:
        condition: service_healthy

  kibana:
    image: docker.elastic.co/kibana/kibana:8.14.3
    container_name: kibana
    environment:
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
    volumes:
      - ./kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      elasticsearch:
        condition: service_healthy

networks:
  elk:
    driver: bridge
The line C:/apps/logs:/usr/share/logstash/logs binds our Spring Boot application’s log directory on the host to a path inside the Logstash container. This step is CRITICAL: it is how the logs move from your application to Logstash, and then on to Elasticsearch.
Pro-tip: We have created an environment variable ELASTIC_PASSWORD on the host machine, and we are passing it to the three services under the environment setting.
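If you don’t want to export the variable in your shell session, Docker Compose will also pick it up from a .env file placed next to docker-compose.yml. A minimal example (the password here is just a placeholder; choose your own):
# .env (read automatically by Docker Compose for variable substitution)
ELASTIC_PASSWORD=changeme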
5.1. Elasticsearch Service
- The discovery type is set to “single-node” because we are in a testing environment and only have a single instance of Elasticsearch. We don’t want the overhead of cluster formation and management.
- Instead of leaving Elasticsearch to generate a password at startup, we have created a password and saved it in an environment variable. This way, we have total control over the password itself.
- We use “Volume binding” to bind the configuration file that we created on the host machine to the one in the Docker image. We don’t want the content of the Elasticsearch indices to be lost if the service restarts, so we bind the data directory “/usr/share/elasticsearch/data” to a directory on our host machine, “./elasticsearch/data”.
- We have created a health check for Elasticsearch to make sure it is ready before the other services start.
- Lastly, we use port mapping to make Elasticsearch available on port 9200 from outside the container, and we attach the container to the “elk” network.
5.2. Logstash Service
- Logstash has two types of configuration: pipeline and setting configurations. We also use “Volume binding” to bind these configuration files from our host machine to those in the container.
- Logstash will be available on port 5044 and is attached to the “elk” network.
- Elasticsearch has to be ready before Logstash can start. We use “depends_on” to configure that behavior.
- Our Java application writes its logs to “C:/apps/logs/springboot-rest.log” on the host machine. We have bound the “C:/apps/logs” directory to “/usr/share/logstash/logs” in the container.
5.3. Kibana Service
- We also use “Volume binding” to bind the Kibana configuration file kibana.yml from our host machine to the one in the container.
- Kibana will be available on port 5601 and is attached to the “elk” network.
- Just like Logstash, Kibana depends on Elasticsearch to be able to start.
Pro-tip: When you configure Elasticsearch and Kibana with Docker Compose in this way, you don’t need to enroll Kibana using an enrollment token, as you would if you had configured each of the services individually.
6. Start the Services
Once the configuration files are created and the docker-compose file is ready, use the following command to start the services:
docker-compose up
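If you prefer to run the containers in the background, add the detached flag:
docker-compose up -d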
You can access the container logs using the following command:
docker-compose logs -f
7. Verify the installation
Look into each service’s logs to ensure it started properly.
Elasticsearch
docker-compose logs -f elasticsearch
Logstash
docker-compose logs -f logstash
Kibana
docker-compose logs -f kibana
The “-f” option streams the logs in real time, similar to “tail -f”.
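Optionally, you can confirm that Elasticsearch responds over HTTP from the host. A quick check (substitute your ELASTIC_PASSWORD value):
curl -u elastic:<your-password> "http://localhost:9200/_cluster/health?pretty"
A healthy single-node setup reports a "status" of "green" or "yellow".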
8. Shipping Your Java Application Logs
Now that your ELK stack is running, let’s get your Java logs into it. You don’t need to change your existing logging statements (like logger.info()); you just need to ensure your application logs are written to a file.
8.1. For Our Spring Boot REST Application using Logback
Create or edit your logback-spring.xml file (in src/main/resources for a standard Spring Boot project):
<?xml version="1.0" encoding="UTF-8"?>
<configuration>

    <property name="LOG_FILE" value="C:/apps/logs/springboot-rest.log"/>
    <property name="LOG_PATTERN" value="%d{yyyy-MM-dd HH:mm:ss.SSS} %-5level --- [%thread] %logger{36} method=%X{method} path=%X{path} status=%X{status} duration_ms=%X{duration_ms} : %msg%n"/>

    <!-- Appender to write to the console -->
    <appender name="Console" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>${LOG_PATTERN}</pattern>
        </encoder>
    </appender>

    <!-- Appender to write to a rolling file -->
    <appender name="File" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${LOG_FILE}</file>
        <encoder>
            <pattern>${LOG_PATTERN}</pattern>
        </encoder>
        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
            <fileNamePattern>C:/apps/logs/springboot-rest.log.%d{yyyy-MM-dd}.%i.log.gz</fileNamePattern>
            <maxFileSize>10MB</maxFileSize>
            <maxHistory>30</maxHistory>
        </rollingPolicy>
    </appender>

    <!-- Root logging level -->
    <root level="info">
        <appender-ref ref="Console" />
        <appender-ref ref="File" />
    </root>

</configuration>
In this config, C:/apps/logs/springboot-rest.log is the location of our Spring Boot application logs. Feel free to adjust it accordingly.
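Note that the pattern above reads method, path, status, and duration_ms from the SLF4J MDC; those keys are not filled automatically. Below is a minimal, hypothetical sketch of a servlet filter that populates them, assuming a Spring Boot 3.x application (jakarta.servlet imports); adapt the class and package names to your project:
package com.example.logging; // hypothetical package

import java.io.IOException;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;
import org.springframework.stereotype.Component;
import org.springframework.web.filter.OncePerRequestFilter;

import jakarta.servlet.FilterChain;
import jakarta.servlet.ServletException;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;

// Populates the MDC keys referenced by %X{...} in logback-spring.xml,
// then writes one access-log style line per request.
@Component
public class MdcLoggingFilter extends OncePerRequestFilter {

    private static final Logger log = LoggerFactory.getLogger(MdcLoggingFilter.class);

    @Override
    protected void doFilterInternal(HttpServletRequest request,
                                    HttpServletResponse response,
                                    FilterChain chain) throws ServletException, IOException {
        long start = System.currentTimeMillis();
        try {
            chain.doFilter(request, response);
        } finally {
            MDC.put("method", request.getMethod());
            MDC.put("path", request.getRequestURI());
            MDC.put("status", String.valueOf(response.getStatus()));
            MDC.put("duration_ms", String.valueOf(System.currentTimeMillis() - start));
            log.info("Request completed");
            MDC.clear(); // Avoid leaking values into the next request on this thread
        }
    }
}
If your application doesn’t use MDC, the %X{...} placeholders simply render as empty strings, and the grok pattern from Section 4.2 would need to be relaxed accordingly.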
Restart your Java application.
mvn spring-boot:run
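To generate a few log lines, call any endpoint your application exposes, for example (the path below is hypothetical):
curl http://localhost:8080/api/users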
Your logs will now be shipped to Logstash and appear in Kibana!
8.2. View the Logs in Kibana
Navigate to http://127.0.0.1:5601/ and you should see the Kibana welcome page. Kibana will invite you to create a data view so you can visualize the content of the Elasticsearch indices.

At this stage, you need to create a data view to be able to visualize the content of the Logstash indices. Follow this tutorial about debugging a Java application with Kibana to configure your Logstash data view. Once everything is configured, you should see a screen similar to this:

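If the data view shows no documents, verify that the logstash-* indices were actually created (again, substitute your ELASTIC_PASSWORD value):
curl -u elastic:<your-password> "http://localhost:9200/_cat/indices/logstash-*?v"
If no index shows up, check the Logstash container logs for grok parse failures or connection errors.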
That’s it! You’ve set up a centralized logging infrastructure for your Java application.
9. Conclusion
You’ve now successfully built a powerful logging infrastructure for your Java projects. This Docker Compose setup is ideal for development and staging environments. By configuring your Java app to ship its logs to ELK, you’ve closed the loop and can now debug and monitor your applications more effectively than ever. Keep in mind that the setup in this tutorial is for a testing environment only and is unsuitable for production. Have a look at these recommendations if you plan to run the ELK Stack in production with Docker.