
Docker Swarm : How to Collect logs from multiple containers and write to a single file

Write multiple docker container logs into a single file in Docker Swarm

Introduction

So recently I deployed scalable microservices using docker stack deploy on a Docker swarm, which left me with multiple microservices running across multiple nodes.

To analyse any microservice I had to log on to the manager node and find out which node (manager or worker) the service was running on. If the service was scaled to more than one instance, that would mean logging on to more than one machine and checking each Docker container's (microservice's) logs just to get a glimpse of an exception. That was quite annoying and time-consuming.

Fluentd to the rescue

Fluentd is an open-source data collector for a unified logging layer. It can collect logs from various backends and stream them to various output mechanisms such as MongoDB, Elasticsearch, a plain file, etc.
In this tutorial, I will create a single log file for each service in a separate folder, irrespective of whether the service has one instance or more.

Setting up the fluent.conf

So to start with, we need to override the default fluent.conf with our custom configuration. More about the config file can be read on the Fluentd website.
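A minimal sketch of such a configuration, assuming the standard forward input plugin and the file output plugin (the /fluentd/log paths match the volume mapping used later in the compose file):

    # Listen for log messages forwarded by the Docker fluentd logging driver
    <source>
      @type forward
      port 24224
      bind 0.0.0.0
    </source>

    # Logs tagged "tutum" go to their own file
    <match tutum>
      @type file
      path /fluentd/log/tutum.*.log
    </match>

    # Logs tagged "visualizer" go to a separate file
    <match visualizer>
      @type file
      path /fluentd/log/visualizer.*.log
    </match>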

In the above config, we listen for anything forwarded on port 24224 and then match incoming log messages by tag. So for a log message tagged tutum we create tutum.log, and for logs matching the visualizer tag we create another file called visualizer.log.
In the path we have specified tutum.*.log; the * will be replaced with a date and a buffer identifier, so the final file will be named something like tutum.20230630.b5532f4bcd1ec79b0.log

Create a Dockerfile with our custom configuration

So the next step is to create a custom Fluentd image that contains the above configuration file.
Save the above file as fluent.conf in a folder named conf, and then create a file called Dockerfile at the same level as the conf folder.
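A sketch of that Dockerfile, assuming the official fluent/fluentd base image, which reads its configuration from /fluentd/etc/:

    FROM fluent/fluentd:latest

    # Overwrite the default configuration with our custom fluent.conf
    COPY conf/fluent.conf /fluentd/etc/fluent.conf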

In this Dockerfile, as you can see, we are replacing the fluent.conf in the base image with our own version.

Now let us create the Docker image:

    docker build -t ##YourREPOname##/myfluentd:latest .


and then push it to the Docker Hub repository:

    docker push ##YourREPOname##/myfluentd

Fluentd as logging driver

So now we need to tell our Docker service to use fluentd as its logging driver.
In this case, I am using tutum/hello-world, which displays the container name on the page.
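A sketch of what the service definition could look like, assuming compose file format version 3 (the port mapping and replica count are illustrative):

    version: "3"
    services:
      # Simple web service whose container logs are shipped to fluentd
      tutum:
        image: tutum/hello-world
        ports:
          - "80:80"
        logging:
          driver: "fluentd"
          options:
            tag: tutum
        deploy:
          replicas: 2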

The logging section in this snippet defines that our service should use fluentd as its logging driver (logging: driver: "fluentd"). You might also have noticed the options: tag: tutum entry; this tag is used as the identifier to distinguish the various services. Remember the match directives in the config file fluent.conf.

We need to define our fluentd image in the docker-compose file too.
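A sketch of the fluentd service entry, assuming the custom image pushed above; port 24224 is published so the logging driver on every node can reach the collector:

    fluentd:
      image: ##YourREPOname##/myfluentd:latest
      ports:
        - "24224:24224"
      volumes:
        # Write the collected log files to ./Logs on the host
        - ./Logs:/fluentd/log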

As you might have noticed above, we are storing the logs via the volume mapping

    volumes:
      - ./Logs:/fluentd/log

so you need to create a “Logs” directory at the same path from which you will run the docker-compose file on your manager node.

This is how the complete file will look:
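A sketch of the full docker-swarm.yml, assembled from the pieces above; the visualizer service is assumed to be the standard dockersamples/visualizer image, which must run on a manager node, and fluentd is pinned to the manager so the Logs directory lives there:

    version: "3"

    services:
      # Demo web service, scaled to two replicas, logging via fluentd
      tutum:
        image: tutum/hello-world
        ports:
          - "80:80"
        logging:
          driver: "fluentd"
          options:
            tag: tutum
        deploy:
          replicas: 2

      # Swarm visualizer, also logging via fluentd with its own tag
      visualizer:
        image: dockersamples/visualizer
        ports:
          - "8080:8080"
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
        logging:
          driver: "fluentd"
          options:
            tag: visualizer
        deploy:
          placement:
            constraints: [node.role == manager]

      # Our custom fluentd collector, writing log files to ./Logs on the manager
      fluentd:
        image: ##YourREPOname##/myfluentd:latest
        ports:
          - "24224:24224"
        volumes:
          - ./Logs:/fluentd/log
        deploy:
          placement:
            constraints: [node.role == manager]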


You can run the above services on Docker swarm by using the below command; make sure you save the file under the name docker-swarm.yml.
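Assuming a stack name of logging (any name will do):

    docker stack deploy --compose-file docker-swarm.yml logging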

In your Logs directory there should now be two log files, named something like tutum.*.*.log and visualizer.*.*.log

Fluentd Log files in Docker Swarm

Log analysis becomes much easier when this setup is combined with Elasticsearch and Kibana, as that eliminates the need to log in to each machine, and log searching, filtering and analysis can be done far more easily. I intend to cover that in my next blog.


