Write multiple Docker container logs into a single file in Docker Swarm
Introduction
So recently I deployed scalable microservices with docker stack deploy on a Docker Swarm, which left me with multiple microservices running across multiple nodes.
To analyse any microservice I had to log on to the manager node and find out on which node (manager or worker) the service was running. If the service was scaled to more than one replica, that meant logging on to more than one machine and checking each Docker container's (microservice's) logs just to get a glimpse of an exception. That is quite annoying and time-consuming.
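To make the pain concrete, this is roughly the routine the rest of this post replaces (a sketch; my_service, the node name and the container id are placeholders):

# find out which node the task landed on
docker service ps my_service
# log on to that node, locate the container, read its logs
ssh worker-1
docker ps --filter name=my_service
docker logs <container-id>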
Fluentd to the rescue
Fluentd is an open source data collector for a unified logging layer. We can collect logs from various backends and stream them to various output mechanisms such as MongoDB, Elasticsearch, files, etc.
In this tutorial, I will create a single log file for each service in a dedicated folder, irrespective of whether that service runs one instance or many.
Setting up the fluent.conf
So to start with, we need to override the default fluent.conf with our custom configuration. More about the config file can be found on the Fluentd website.
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

<match tutum>
  @type copy
  <store>
    @type file
    path /fluentd/log/tutum.*.log
    time_slice_format %Y%m%d
    time_slice_wait 10m
    time_format %Y%m%dT%H%M%S%z
    compress gzip
    utc
    format json
  </store>
</match>

<match visualizer>
  @type copy
  <store>
    @type file
    path /fluentd/log/visualizer.*.log
    time_slice_format %Y%m%d
    time_slice_wait 10m
    time_format %Y%m%dT%H%M%S%z
    compress gzip
    utc
    format json
  </store>
</match>
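Before baking this into an image, you can optionally sanity-check the syntax by running it through the stock Fluentd image with --dry-run (a sketch; it assumes the image lets you override the default command):

# validate conf/fluent.conf without actually starting the collector
docker run --rm -v $(pwd)/conf:/fluentd/etc \
  fluent/fluentd:v0.12-debian fluentd --dry-run -c /fluentd/etc/fluent.conf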
In the above config, we listen for anything forwarded to port 24224 and then match on the tag. So for a log message with the tag tutum we create a tutum.log, and for logs matching the visualizer tag we create another file called visualizer.log.
In the path we have specified tutum.*.log; the * will be replaced with the date and a buffer id, so the final file will be something like tutum.20230630.b5532f4bcd1ec79b0..log
Create a Dockerfile with our custom configuration
So the next step is to create a custom Fluentd image that contains the above configuration file.
Save the above file as fluent.conf in a folder named conf, and then create a file called Dockerfile at the same level as the conf folder.
# fluentd/Dockerfile
FROM fluent/fluentd:v0.12-debian

RUN rm /fluentd/etc/fluent.conf
COPY ./conf/fluent.conf /fluentd/etc
As you can see, in this Dockerfile we replace the fluent.conf shipped in the base image with our own version.
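If you later need extra output plugins (for example the Elasticsearch output mentioned at the end of this post), the same Dockerfile is the natural place to install them. This is only a sketch; depending on the base image you may need to switch to the root user before running gem install:

# fluentd/Dockerfile (extended sketch)
FROM fluent/fluentd:v0.12-debian

# optional: install an extra output plugin, e.g. for Elasticsearch
RUN gem install fluent-plugin-elasticsearch --no-rdoc --no-ri

RUN rm /fluentd/etc/fluent.conf
COPY ./conf/fluent.conf /fluentd/etc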
Now let us build the Docker image:

docker build -t ##YourREPOname##/myfluentd:latest .

and then push it to your Docker Hub repository:

docker push ##YourREPOname##/myfluentd
Fluentd as logging driver
So now we need to tell our Docker service to use fluentd as its logging driver.
In this case, I am using tutum/hello-world, which displays the container name on the page.
whoami:
  image: tutum/hello-world
  networks:
    - net
  ports:
    - "80:80"
  logging:
    driver: "fluentd"
    options:
      tag: tutum
  deploy:
    restart_policy:
      max_attempts: 5
    mode: replicated
    replicas: 5
    placement:
      constraints: [node.role == worker]
    update_config:
      delay: 2s
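The fluentd logging driver sends logs to localhost:24224 by default; because the Fluentd service defined below publishes port 24224 through the swarm routing mesh, that default works from every node. If your collector lived elsewhere, you could point at it explicitly with the driver's fluentd-address option (illustrative value):

logging:
  driver: "fluentd"
  options:
    tag: tutum
    # only needed when Fluentd is not reachable on localhost:24224
    fluentd-address: "fluentd-host:24224"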
These are the lines that define that our service should use fluentd as its logging driver:

logging:
  driver: "fluentd"
You might also have noticed

options:
  tag: tutum

This tag is used as an identifier to distinguish the various services. Remember the <match> sections in the config file fluent.conf.
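For reference, each record that the fluentd logging driver forwards carries the log line plus some container metadata, so a line in tutum.*.log will look roughly like this (an illustrative example, all field values made up):

{"container_id":"9f2c1a7b3d4e","container_name":"/test_whoami.3.abc123","source":"stdout","log":"GET / HTTP/1.1 200"}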
We also need to define our Fluentd image in the docker-compose file.
fluentd:
  image: ##YourRepo##/myfluentd
  volumes:
    - ./Logs:/fluentd/log
  ports:
    - "24224:24224"
    - "24224:24224/udp"
  networks:
    - net
  deploy:
    restart_policy:
      condition: on-failure
      delay: 20s
      max_attempts: 3
      window: 120s
    mode: replicated
    replicas: 1
    placement:
      constraints: [node.role == manager]
    update_config:
      delay: 2s
As you might have noticed above, we are storing the logs on the host via

volumes:
  - ./Logs:/fluentd/log

so you need to create a "Logs" directory at the same path from which you will run the docker-compose file on your manager node.
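On the manager node, that boils down to something like this (assuming you deploy from the directory holding the compose file):

# on the manager node, next to the compose file
mkdir -p Logs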
This is how the complete file will look:
version: "3"
services:
  whoami:
    image: tutum/hello-world
    networks:
      - net
    ports:
      - "80:80"
    logging:
      driver: "fluentd"
      options:
        tag: tutum
    deploy:
      restart_policy:
        condition: on-failure
        delay: 20s
        max_attempts: 3
        window: 120s
      mode: replicated
      replicas: 4
      placement:
        constraints: [node.role == worker]
      update_config:
        delay: 2s

  vizualizer:
    image: dockersamples/visualizer
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - "8080:8080"
    networks:
      - net
    logging:
      driver: "fluentd"
      options:
        tag: visualizer
    deploy:
      restart_policy:
        condition: on-failure
        delay: 20s
        max_attempts: 3
        window: 120s
      mode: replicated
      # one container per manager node
      replicas: 1
      update_config:
        delay: 2s
      placement:
        constraints: [node.role == manager]

  fluentd:
    image: ##YOUR-REPO##/myfluentd
    volumes:
      - ./Logs:/fluentd/log
    ports:
      - "24224:24224"
      - "24224:24224/udp"
    networks:
      - net
    deploy:
      restart_policy:
        condition: on-failure
        delay: 20s
        max_attempts: 3
        window: 120s
      mode: replicated
      replicas: 1
      placement:
        constraints: [node.role == manager]
      update_config:
        delay: 2s

networks:
  net:
You can run the above services on Docker Swarm with the command below; make sure you save the file under the name docker-swarm.yml.
docker stack deploy -c docker-swarm.yml test
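Once deployed, you can check that everything came up with the standard swarm commands (test is the stack name used above):

# list the services in the stack and their replica counts
docker stack services test
# see which node each task was scheduled on
docker service ps test_whoami
docker service ps test_fluentd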
In your Logs directory there should now be two log files, something like tutum.*.*.log and visualizer.*.*.log
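Because the fluent.conf above enables gzip compression and a 10-minute time_slice_wait, the files may take a little while to be flushed and may carry a .gz suffix. A quick way to peek at them (a sketch; adjust the glob to what you actually see on disk):

ls -l Logs/
# zcat handles the gzip-compressed slices written by the file output
zcat Logs/tutum.*.log* | tail -n 20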

That said, log analysis becomes much easier still when this is combined with Elasticsearch and Kibana, as it eliminates the need to log in to the machine at all, and searching, filtering and analysing the logs can be done far more easily. I intend to cover that in my next blog.