Five points guide to become a Docker Guru for beginners
Introduction

When it comes to Docker, many people ask about courses for learning and mastering containerization and deployment using Docker. Based on my experience of using Docker, I feel the best resources to learn Docker are available at the official Docker Documentation. Here I will list out 5 basic steps which will help all who wish to learn and master Docker.

 

5 Points Guide to become a Docker Guru

1. Installing Docker on your Operating system/local machine
 

The first baby step is installing Docker on your local machine. Make sure the Docker recommendations for memory (at least 4GB of RAM) and other hardware requirements are met. Docker is available for both Windows and Mac.

Docker Community Edition is the open-source version, which can be used free of cost and has most of the Docker features, minus support.

When the installation finishes, Docker starts automatically, and you should see a whale icon in the notification area, which means Docker is running and accessible from a terminal. Open a command-line terminal and run docker version to check the version, then run docker run hello-world to verify that Docker can pull and run images from DockerHub. You may be asked for your DockerHub credentials when pulling images.

2. Familiarizing yourself with docker pull and docker run commands 

The next step is to familiarize yourself with the docker pull and docker run commands. docker pull gets an image from the Docker registry, and docker run runs the downloaded image. A running image is also called a Docker container.

If you just use the docker run command, it will first pull the image and then start running it. While using docker run, make sure that you familiarize yourself with the various flags that can be used. There are many flags, but some of the handy ones are listed below; see the example after the list.

  • -d: run the container detached, in the background
  • --name: if you do not assign a container name with the --name option, the daemon generates a random string name for you
  • -P: publish exposed ports to the host interfaces
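
For example, pulling an image and then running it detached, named, and with its exposed ports published (a small sketch; the nginx image and the container name are just illustrations):

# Fetch the image from the registry
docker pull nginx
# Run it in the background, with a fixed name and published ports
docker run -d --name mynginx -P nginx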

At this point, I strongly advise downloading Kitematic. Kitematic's one-click install gets Docker running on your Mac and lets you control your app containers from a graphical user interface (GUI). If you missed any flags with your run command, you can fix them using Kitematic's UI. Another feature I like about Kitematic is that it makes it easy to remove unused images, get into the bash shell of a Docker container, see the logs, etc.

3. Creating Images using Dockerfile and pushing images to the registry using docker push  

A Dockerfile is the building block of Docker images and containers. Dockerfiles use a simple DSL which allows you to automate the steps to create an image. A Docker image consists of read-only layers, each of which represents a Dockerfile instruction. The layers are stacked, and each one is a delta of the changes from the previous layer.
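
As a minimal sketch, a Dockerfile for a hypothetical Java application (the base image and file names are illustrative assumptions, not from the original post) could look like this:

# Start from a Java runtime base image
FROM openjdk:8-jre
# Copy the built artifact into the image (a new layer)
COPY target/app.jar /opt/app.jar
# Run the application when the container starts
CMD ["java", "-jar", "/opt/app.jar"]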

Once your application is ready and you want to package it into a Docker image, you will need to use the Dockerfile DSL instructions. If all is well at this point, all the instructions of the Dockerfile have completed, and a Docker image has been created, do a sanity check using the docker run command to verify that things are working.

Use docker push to share your images to the DockerHub registry or to a self-hosted one, so they can be used across various environments and by various users/team members.
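
A hedged example of tagging and pushing (the user and image names are placeholders):

# Tag the local image against your registry account
docker tag myapp mydockerhubuser/myapp:1.0
# Push it so other hosts and team members can pull it
docker push mydockerhubuser/myapp:1.0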

4. Docker Compose and Persistent Data Storage 

Considering that you are now a master of all the above commands and able to create Docker images, the next step is to use multiple Docker images together. With Compose, you use a YAML file to define your application's services. You can configure as many containers as you want, how they should be built and connected, and where data should be stored. When the YAML file is complete, you can run a single command to build, run, and configure all of the containers.
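
A minimal sketch of such a YAML file, with two hypothetical services (the service names and images are illustrative only):

version: "3"
services:
  web:
    image: nginx
    ports:
      - "80:80"
  cache:
    image: redis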

Since you now have multiple Docker containers running and interacting with each other, you will want to persist the data too. Docker volumes are your saviour for this cause: you can use the -v or --mount flag to map data from Docker containers to disk. Since Docker containers are ephemeral, once they are killed any data stored within them is also lost, hence it is important to use Docker volumes so data can be persisted.
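
For instance, a named volume can be attached at run time (a sketch; the volume name, path, and image are placeholders):

# Data written to /data inside the container survives container removal
docker run -d --name mycache -v cachedata:/data redis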

5. Docker Swarm 

So now you have multiple containers running using docker-compose; the next step is to ensure availability and high performance for your application by distributing it over a number of Docker hosts inside a cluster. With Docker Swarm, you can control a lot of things (see the commands sketched after this list):

  • You can have multiple instances of a single service (container) running on the same or different machines. 
  • You can scale up or scale down the number of containers at the runtime. 
  • Service discovery, rolling updates and load balancing are provided by default. 
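
A few of the core commands, as a sketch (the service name is a placeholder):

# Initialise a swarm on the current node (it becomes a manager)
docker swarm init
# Run three replicas of a service across the cluster
docker service create --name web --replicas 3 -p 80:80 nginx
# Scale the service up at runtime
docker service scale web=5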

If you are using Docker Swarm, make sure that you familiarize yourself with the quorum of nodes and the manager/worker configuration. Docker Swarm needs a quorum of managers, which, in simple terms, means that the number of available manager nodes should always be greater than or equal to (n+1)/2, where n is the number of manager nodes. So if you have 3 manager nodes, 2 should always be up, and if you have 5 manager nodes, 3 should be up. Also, it is a good idea to have managers running in different geographic locations.

Hopefully, the above points will help give novices an insight into the Docker world. I have documented my Docker journey here: Docker Archives – Little Big Extra.

The best way to learn is by doing it.   

How to maintain Session Persistence(Sticky Session) in Docker Swarm with multiple containers

Introduction

Stateless services are in vogue, and rightfully so, as they are easy to scale up and are loosely coupled. However, it is practically impossible to stay away from stateful services completely. For example, you might need a login application where user session details must be maintained across several pages.

Session state can be maintained either using

  • Session Replication
  • Session Stickiness

or a combination of both.

 

Maintaining a user session is relatively easy if you are using a typical monolithic architecture, where your application is installed on a couple of servers and you can change the server configuration to facilitate session replication using some cache mechanism, or session stickiness using a load balancer/reverse proxy.

However, in the case of microservices, where the scale can range from 10 to 10000s of instances, session replication might slow things down, as each and every service needs to look up session information in the centralised cache.

The other approach, session stickiness, where each following request keeps going to the same server (Docker container) and hence preserves the session, is the one looked at in this article.

Why session persistence is hard to maintain with containers

A load balancer typically works at Layer 7 of the OSI model, the application layer (HTTP lives at this layer), and then distributes the requests across multiple machines, but the Docker ingress routing mesh works at Layer 4 of the OSI model.

Someone on Stack Overflow summarized the solution to the above problem as follows: to implement sticky sessions, you would need to implement a reverse proxy inside of Docker that supports sticky sessions and communicates directly with the containers by their container ID (rather than doing a DNS lookup on the service name, which would again go to the round-robin load balancer). Implementing that load balancer would also require you to implement your own service discovery tool so that it knows which containers are available.

Possible options explored

Take 1

So I tried implementing the reverse proxy with Nginx, and it worked with multiple containers on a single machine, but when deployed on Docker Swarm it didn't work, probably because I was using service discovery by name; as suggested above, I should have used the container ID to communicate, not container names.

Take 2

I read about the jwilder Nginx proxy, which seems to work for everyone; it worked on my local machine too, but when deployed on Swarm it wouldn't generate any container IPs inside the

upstream { server }

block.

Take 3

Desperate by this time, I was going through all the possible solutions people had to offer on the internet (Stack Overflow, Docker community forums...), and one gentleman mentioned something about Traefik. My eyes glittered when I read that it works on Swarm, and here we go.

Sticky Session with Traefik in Docker Swarm with multiple containers

Even though I was very comfortable with Nginx, I assumed that learning Traefik would again be an overhead. It wasn't the case: Traefik is simple to learn and easy to understand, and the good thing is that you need not fiddle with any conf files.

The only constraint is that Traefik should run on a manager node.

I have tested the configuration with Docker Compose file version 3, which is the latest, and deployed it using docker stack deploy.

To start off, you need to create a docker-compose.yml (version 3) and add the Traefik image as the load balancer. This is how it looks:

loadbalancer:
    image: traefik
    command: --docker \
      --docker.swarmmode \
      --docker.watch \
      --web \
      --loglevel=DEBUG
    ports:
      - 80:80
      - 9090:8080
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      restart_policy:
        condition: any
      mode: replicated
      replicas: 1
      update_config:
        delay: 2s
      placement:
         constraints: [node.role == manager]
    networks:
      - net

A few things to note here:

  • Traefik listens to the Docker daemon on the manager node and stays aware of new containers, so there is no need to restart it if you scale your services.
    volumes: - /var/run/docker.sock:/var/run/docker.sock
  • Traefik provides a dashboard to check the health of services, so port 9090 can be kept inside a firewall for monitoring purposes.
    Also, note that
    placement: constraints: [node.role == manager]
    specifies that Traefik runs only on a manager node.

Adding the Image for sticky session

To add a Docker image which will hold session stickiness, we need to add something like this:

whoami:
    image: tutum/hello-world
    networks:
      - net
    ports:
      - "80"
    deploy:
      restart_policy:
        condition: any
      mode: replicated
      replicas: 5
      placement:
        constraints: [node.role == worker]
      update_config:
        delay: 2s
      labels:
        - "traefik.docker.network=test_net"
        - "traefik.port=80"
        - "traefik.frontend.rule=PathPrefix:/hello;"
        - "traefik.backend.loadbalancer.sticky=true"

This is a hello-world image which displays the name of the container it's running on. We are defining in this file that there should be 5 replicas of this container. The important section, where Traefik does the magic, is "labels":

  • - "traefik.docker.network=test_net"
    Tells on which network this image will run on. Please note that the network name is test_net, where test is the stack name. In the load balancer service we just gave net as name.
  • - "traefik.port=80"
    This Helloworld is running on docker port 80 so lets map the traefik port to 80
  • - "traefik.frontend.rule=PathPrefix:/hello"
    All URLs starting with {domainname}/hello/ will be redirected to this container/application
  • - "traefik.backend.loadbalancer.sticky=true"
    The magic happens here, where we are telling to make sessions sticky.

The Complete Picture

Try to use the file below as it is and see if it works; if it does, then fiddle with it and make your changes accordingly.

You will need to create a file called docker-compose.yml on your Docker manager node and run this command

docker stack deploy -c docker-compose.yml test

where "test" is the stack namespace.

Read Here about deploying in Swarm: How to Install Stack of services in Docker Swarm

version: "3"

services:

  whoami:
    image: tutum/hello-world
    networks:
      - net
    ports:
      - "80"
    deploy:
      restart_policy:
        condition: any
      mode: replicated
      replicas: 5
      placement:
        constraints: [node.role == worker]
      update_config:
        delay: 2s
      labels:
        - "traefik.docker.network=test_net"
        - "traefik.port=80"
        - "traefik.frontend.rule=PathPrefix:/hello;"
        - "traefik.backend.loadbalancer.sticky=true"

  loadbalancer:
    image: traefik
    command: --docker \
      --docker.swarmmode \
      --docker.watch \
      --web \
      --loglevel=DEBUG
    ports:
      - 80:80
      - 9090:8080
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      restart_policy:
        condition: any
      mode: replicated
      replicas: 1
      update_config:
        delay: 2s
      placement:
         constraints: [node.role == manager]
    networks:
      - net

networks:
  net:

Now you can test the service at http://{Your-Domain-name}/hello, and http://{Your-Domain-name}:9090 should show the Traefik dashboard.

Though there are 5 replicas of the above "whoami" service, it should always display the same container ID. If it does, congratulations: your session persistence is working.
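
If you prefer the command line, you can also check stickiness with curl by persisting the cookie jar between requests (the domain is a placeholder); with the saved cookie, repeated requests should hit the same container:

# The first request stores the sticky cookie, subsequent ones send it back
curl -c cookies.txt -b cookies.txt http://{Your-Domain-name}/hello
curl -c cookies.txt -b cookies.txt http://{Your-Domain-name}/hello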

This is how the Traefik dashboard looks:

Testing session stickiness on a local machine

In case you don't have a swarm node and just want to test on your localhost machine, you can use the following docker-compose file. To run it successfully, create a directory called test, which is required for the namespace since we have given our network name as test_net in the label

- "traefik.docker.network=test_net"

(change the directory name if you use a different network), and then run

docker-compose up -d

version: "3"

services: 

  whoami:
    image: tutum/hello-world
    networks:
      - net
    ports:
      - "80"  
    labels:
        - "traefik.backend.loadbalancer.sticky=true"
        - "traefik.docker.network=test_net"
        - "traefik.port=80"
        - "traefik.frontend.rule=PathPrefix:/hello"
      
  
  loadbalancer:
    image: traefik
    command: --docker \
      --docker.watch \
      --web \
      --loglevel=DEBUG
    ports:
      - 80:80
      - 25581:8080
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - net

networks:
  net:

Docker Compose should create the required services, and the whoami service should be available at http://localhost/hello.

Scale this service to, say, 5 replicas

docker-compose scale whoami=5

and test again.

Follow this video to see things in action.

Installing Docker Images from private repositories in Docker Swarm

Introduction

On a Docker Swarm you can deploy only those images which come from a Docker registry, as all the manager/worker nodes need to pull them separately.
When Docker images are pulled from a public repository like DockerHub, they can be easily deployed using a simple command such as

docker service create nginx

and if you happen to use a compose file, you can use

docker stack deploy -c "name of the compose file"

However, in the case of private repositories, you need to provide Docker credentials and Docker repository details too.

In this tutorial, I will list out the steps needed for deploying Docker images from a DockerHub private repository.
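
As a hedged illustration, a compose file might reference a private image like this (the organisation, image name, and tag are placeholders):

version: "3"
services:
  myservice:
    # Private image; every node that runs this service must be able to pull it
    image: myorg/private-image:1.0
    deploy:
      placement:
        constraints: [node.role == worker]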

Login to DockerHub

Considering that you have a substantial number of images and are using a docker-compose file listing the Docker images needed from private repositories, the first step is to log on to DockerHub from all nodes (manager and worker).
Use this command

docker login

and input your username and password.
The reason this step is important (in my experience) is that we might want to use the deploy constraint available in docker-compose (see below) to run Docker containers only on worker nodes. I believe in this case the docker pull is run from the worker machines only, and that is why it is essential to log in on the worker nodes; not logging in from the worker nodes in this scenario failed to pull images on the worker nodes for me.

deploy:
      	placement:
        	constraints: [node.role == worker]

Deploying images on Swarm

When deploying on Swarm, considering that you have created a compose file, you need to run the following command:

docker login -u #DockerHub Username# -p #DockerHub Password# registry.hub.docker.com/#Organization-Or-DockerHubUserName# && docker stack deploy -c docker-swarm.yml #STACK-NAME# --with-registry-auth

Breaking the above command down for clarification:

  • -u #DockerHub Username#: the DockerHub username
  • -p #DockerHub Password#: the DockerHub password
  • #Organization-Or-DockerHubUserName#: in case you have created a team/organisation on DockerHub and are pushing images like organisation/Docker-Image. This is the scenario where multiple team members work on the same Docker image and push it using something like TEAM/DockerImage. In case you are a single user and don't have any team defined on DockerHub, you can use your username.
  • #STACK-NAME#: the stack name; it could be just "test" or a more meaningful name suited to your requirement
  • --with-registry-auth: this option sends the registry authentication details to the swarm nodes, so that they are authorized to pull images from the private repository

A simple example of the above command would be:

docker login -u username -p password registry.hub.docker.com/myproject && docker stack deploy -c docker-swarm.yml test --with-registry-auth

Write multiple docker container logs into a single file in Docker Swarm

Introduction

So recently I had deployed scalable microservices using docker stack deploy on Docker Swarm, which left me with multiple microservices running on multiple nodes.

To analyse any microservice I had to log on to the manager node and find out on which node (manager/worker) the service was running. If the service was scaled to more than 1 replica, that would mean logging on to more than one machine and checking the Docker container (microservice) logs just to get a glimpse of an exception. That was quite annoying and time-consuming.

Fluentd to the rescue

Fluentd is an open-source data collector for a unified logging layer. It can collect logs from various backends and stream them to various output mechanisms like MongoDB, Elasticsearch, plain files, etc.
In this tutorial, I will create a single log file for each service in a separate folder, irrespective of whether the service has 1 or more instances.

Setting the Fluent Conf

So to start with, we need to override the default fluent.conf with our custom configuration. More about the config file can be read on the Fluentd website.

<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>
<match tutum>
  @type copy
   <store>
    @type file
    path /fluentd/log/tutum.*.log
    time_slice_format %Y%m%d
    time_slice_wait 10m
    time_format %Y%m%dT%H%M%S%z
    compress gzip
    utc
    format json
  </store>
</match>
<match visualizer>
  @type copy
   <store>
    @type file
    path /fluentd/log/visualizer.*.log
    time_slice_format %Y%m%d
    time_slice_wait 10m
    time_format %Y%m%dT%H%M%S%z
    compress gzip
    utc
    format json
  </store>
</match>

In the above config, we listen to anything being forwarded on port 24224 and then do a match based on the tag. So for a log message with the tag tutum we create tutum.log, and for logs matching the visualizer tag we create another file called visualizer.log.
In the path we specified tutum.*.log; the * will be replaced with a date and a buffer identifier, so the final file name will be something like tutum.20230630.b5532f4bcd1ec79b0.log.

Create Dockerfile with our custom configuration

So the next step is to create a custom image of fluentd which has the above configuration file.
Save the above file as fluent.conf in a folder named conf, and then create a file called Dockerfile at the same level as the conf folder.

# fluentd/Dockerfile
FROM fluent/fluentd:v0.12-debian
RUN rm /fluentd/etc/fluent.conf
COPY ./conf/fluent.conf /fluentd/etc

In this Dockerfile, as you can see, we replace the fluent.conf in the base image with our version.

Now let us create the Docker image:

docker build -t ##YourREPOname##/myfluentd:latest .

and then push it to the DockerHub repository:

docker push ##YourREPOname##/myfluentd

Fluentd as logging driver

So now we need to tell our Docker service to use fluentd as the logging driver.
In this case, I am using tutum/hello-world, which displays the container name on the page.

whoami:
    image: tutum/hello-world
    networks:
      - net
    ports:
      - "80:80"
    logging:
      driver: "fluentd"
      options:
        tag: tutum      
    deploy:
      restart_policy:
        max_attempts: 5
      mode: replicated
      replicas: 5
      placement:
        constraints: [node.role == worker]
      update_config:
        delay: 2s

So in this section, we define that our service should use fluentd as the logging driver:

logging:
  driver: "fluentd"

You might also have noticed

options:
  tag: tutum

This tag will be used as an identifier to distinguish the various services. Remember the match tag in the config file fluent.conf.
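
Outside of a compose file, the same driver options can be tried on a one-off container (a sketch; it assumes the fluentd container defined below is already listening on localhost:24224):

docker run -d --log-driver=fluentd --log-opt fluentd-address=localhost:24224 --log-opt tag=tutum tutum/hello-world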

We also need to define our fluentd image in the docker-compose file:

fluentd:
    image: ##YourRepo##/myfluentd
    volumes:
      - ./Logs:/fluentd/log
    ports:
      - "24224:24224"
      - "24224:24224/udp"
    networks:
      - net
    deploy:
      restart_policy:
           condition: on-failure
           delay: 20s
           max_attempts: 3
           window: 120s
      mode: replicated
      replicas: 1
      placement:
        constraints: [node.role == manager]
      update_config:
        delay: 2s

As you might have noticed above, we are storing the logs via

volumes: - ./Logs:/fluentd/log

so you need to create a "Logs" directory at the same path from where you will run the docker-compose file on your manager node.

This is how the complete file looks:

version: "3"
 
services:
       
  whoami:
    image: tutum/hello-world
    networks:
      - net
    ports:
      - "80:80"
    logging:
      driver: "fluentd"
      options:
        tag: tutum    
    deploy:
      restart_policy:
           condition: on-failure
           delay: 20s
           max_attempts: 3
           window: 120s
      mode: replicated
      replicas: 4
      placement:
        constraints: [node.role == worker]
      update_config:
        delay: 2s
 
  vizualizer:
      image: dockersamples/visualizer
      volumes:
         - /var/run/docker.sock:/var/run/docker.sock
      ports:
        - "8080:8080"
      networks:
        - net
      logging:
        driver: "fluentd"
        options:
         tag: visualizer    
      deploy:
          restart_policy:
             condition: on-failure
             delay: 20s
             max_attempts: 3
             window: 120s
          mode: replicated # one container per manager node
          replicas: 1
          update_config:
            delay: 2s
          placement:
             constraints: [node.role == manager]
 
        
  fluentd:
    image: ##YOUR-REPO##/myfluentd
    volumes:
      - ./Logs:/fluentd/log
    ports:
      - "24224:24224"
      - "24224:24224/udp"
    networks:
      - net
    deploy:
      restart_policy:
           condition: on-failure
           delay: 20s
           max_attempts: 3
           window: 120s
      mode: replicated
      replicas: 1
      placement:
        constraints: [node.role == manager]
      update_config:
        delay: 2s
 
 
networks:
  net:


You can run the above services on Docker Swarm using the command below; make sure you save the file under the name docker-swarm.yml.

docker stack deploy -c docker-swarm.yml test

In your Logs directory there should now be 2 log files, something like tutum.*.*.log and visualizer.*.*.log.

Fluentd Log files in Docker Swarm

Log analysis becomes much easier when Fluentd is combined with Elasticsearch and Kibana, as that eliminates the need to log in to each machine, and log searches, filtering, and analysis can be done much more easily. I intend to cover that in my next blog.

 

Follow the video to see things in action

How to install Nginx as a reverse proxy server with Docker

Introduction

On a single Docker host machine, we can run hundreds of containers, and each container can be accessed by exposing a port on the host machine and binding it to the Docker port.

This is the most standard practice: we use the docker run command with the -p option to bind a Docker port to a host machine port. If we have to do this for a couple of services, the process works well, but if we had to cater for a large number of containers, remembering port numbers and managing them could be a herculean task.

This problem can be dealt with by installing Nginx, which is a reverse proxy server, to direct client requests to the appropriate Docker container.

Installing Nginx Base Image

The Nginx image can be downloaded from Docker Hub and run by simply using

docker run nginx

The Nginx configuration is stored in the file /etc/nginx/nginx.conf. This file holds a reference to default.conf through the line

include /etc/nginx/conf.d/*.conf;

Follow the steps below to run an Nginx server and have a peek around the Nginx configuration.

  • Run the latest Nginx docker Image using

docker run -d --name nginx nginx

  • Open the bash console for accessing Nginx configuration

bash -c "clear && docker exec -it nginx sh"

Navigate to /etc/nginx/conf.d and run

cat default.conf

then copy the file contents. We will use these contents in the next steps.
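
Alternatively (a small convenience, not part of the original steps), you can copy the file straight to your host with docker cp, assuming the container is named nginx:

docker cp nginx:/etc/nginx/conf.d/default.conf .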

Creating our own custom Nginx Image

In this step we will modify the base Nginx image with changes to default.conf.

Create a simple project in Eclipse (File -> New -> General -> Project) and create a new file called default.conf in the project directory.
In this file add a location block of the form

location /<URL-To-BE-ACCESSED> {  
        proxy_pass http://<DOCKER_CONTAINER_NAME>:<DOCKER-PORT>;  
    }

for eg,

location /app1 {  
        proxy_pass http://microservice1:8080;  
    }

where app1 is the URL path, microservice1 is the Docker container name, and 8080 is the Docker port; this info can be found using

docker ps -a

While running a Docker container, make sure that you use the --name attribute so the container name remains consistent. If no name is given, Docker assigns a random name to the container.

This is how default.conf looks for 2 Docker containers named microservice1 and microservice2:

server {
    listen       80;
    server_name  nginxserver;
    
    location /app1 {  
        proxy_pass http://microservice1:8080;  
    }
    
    location /app2 {  
        proxy_pass http://microservice2:8080;  
    }

    
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }

   
}

Creating Docker Image

The next step is to create a Dockerfile, where we will replace the default configuration file default.conf in the Nginx base image with our version of default.conf.

Create a file called Dockerfile and add the contents below. Make sure the file is at the same level as default.conf, under the project root directory.

#GET the base default nginx image from docker hub
FROM nginx

#Delete the Existing default.conf
RUN rm /etc/nginx/conf.d/default.conf 

#Copy the custom default.conf to the nginx configuration
COPY default.conf /etc/nginx/conf.d

Now all we need to do is build this Docker image. Open a terminal/command prompt, navigate to the project directory, and run

docker build -t  mynginx .

If the above command was successful, our own custom Nginx image is ready with our configuration.

Running our own Custom Nginx Image

Each Docker container is a separate process and is unaware of other Docker containers. However, Docker has a --link attribute which can be used to create links between 2 Docker containers and make them aware of each other's existence.

Before we run our image, we need to make sure that the services mentioned in the location blocks are up and running, in our case microservice1 and microservice2.

docker ps -a

Next we need to link these 2 docker containers with our nginx container using this command

docker run -d --name mynginx -p 80:80 -p 443:443 --link=microservice1 --link=microservice2 mynginx

Make sure that you stop the default Nginx container created in step 1, as it might be holding ports 80 and 443.

If the above command completed with no errors, we have successfully installed Nginx as a reverse proxy server, which can be tested by opening a browser and accessing

http://localhost/app1

http://localhost/app2

For reference, please see the video below.


 

How to create docker image of spring MVC project

Introduction

There is a lot of documentation available on running Spring Boot as a Docker container, and it is quite easy to achieve too; follow my tutorial, which gives a step-by-step guide on creating a Spring Boot Docker image.
But what about running an existing Spring MVC project, say one written a couple of years ago when Spring Boot was new or not as popular, which you now want to port without converting it into a Spring Boot app?
The good news is that it is very simple to create a Docker image of a Spring MVC project and run it as a Docker container.

In this tutorial I will be using Tomcat, but you can choose other application servers too.

How to do it?

Well, get your thinking hat on. In Spring Boot we embed a Tomcat application server and run our app on top of it, so what about running our application in a standalone Tomcat server?
Yes, that's all we will do: we will take a Tomcat image and deploy our WAR on top of it.

Create Dockerfile

In the project root folder, you need to create a Docker file, which should be named Dockerfile without any extension. In this file we will add a couple of steps.

FROM tomcat:8.0.20-jre8
COPY /target/myApplication.war /usr/local/tomcat/webapps/

The first line says to get the Tomcat image from Docker Hub.
The second line says to copy your WAR file to Tomcat's webapps directory, where it will be installed.

If you are familiar with Tomcat, you will know that any WAR file in the webapps folder is installed on server start-up.

You can use Dockerfile commands like RUN, COPY, ADD etc. to update the config folder; for example, RUN rm -rf /usr/local/tomcat/webapps/ROOT will remove/uninstall the existing ROOT application.
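
Putting those pieces together, a variant that serves the application at the root context might look like this (a sketch; the WAR name is whatever your build produces):

# Base Tomcat image from Docker Hub
FROM tomcat:8.0.20-jre8
# Remove the stock ROOT application
RUN rm -rf /usr/local/tomcat/webapps/ROOT
# Deploy our WAR as the new root context
COPY /target/myApplication.war /usr/local/tomcat/webapps/ROOT.war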

Create Docker Image

Once you have the Dockerfile ready in your project, we need to create a Docker image.
To create the Docker image, open a terminal/command prompt, navigate to the project root directory, and build the image (you need to have Docker installed on your system).

docker build -t myapp .

(don't miss the ".", as it tells Docker that the Dockerfile is in the current directory)
If the above command is successful, then use

docker images
to see if the image has been created. There should be a Docker image called myapp:

REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
myapp               latest              36ada7cb0f68        6 seconds ago       539 MB

Run Docker Image

docker run -d  -p 8080:8080  --name mydockerapp myapp

Here we are asking Docker to run our application on localhost port 8080; since Tomcat starts on port 8080 by default, we point to that port within Docker. Docker assigns a random name to a container if none is given; it's good practice to name the container, and we have done that using the --name flag, so our Docker container will be called mydockerapp.

If the above command was successful, use

docker ps -a

It should show the running containers; under the NAMES column you should see mydockerapp.

CONTAINER ID        IMAGE                      COMMAND             CREATED             STATUS                        PORTS                    NAMES
45046a198154        myapp                      "catalina.sh run"   9 seconds ago       Up 8 seconds                  0.0.0.0:8080->8080/tcp   mydockerapp

Test it

The Docker container is running on port 8080, so your application should be available at localhost:8080/myApplication/ (the context path comes from the WAR file name).

How to see docker container logs

Introduction

If you are running your container in -d (detached) mode or via the Docker remote API, you will not see any logs in the console. To access a running Docker image or container's logs, you need to use the docker logs command.

In this tutorial, I will list out various commands to display logs

To see a Docker container's logs, first of all make sure that the container is running; you can check this by using

docker ps -a

Once you have confirmed that the Docker container is up and running, you can use the following commands (a combined example follows after the list):

  • To see all the logs of a particular container

    docker logs ContainerName/ContainerID

  • To follow docker log output or tail continuously

    docker logs --follow ContainerName/ContainerID

  • To see last n lines of logs

    In this case, last 2500 lines will be displayed

    docker logs --tail 2500 ContainerName/ContainerID

  • To see logs since a particular date or timestamp

    First get the timestamp format in the logs using

    docker logs --timestamps ContainerName/ContainerID

    Now take a timestamp, which should be in a format like 2023-05-03T10:00:59.935397007Z, and use it to display logs.

    For example, if only a day's logs need to be viewed:

    docker logs --since 2023-05-03 ContainerName/ContainerID

    For example, if a day's logs need to be viewed since 10 am:
    docker logs --since 2023-05-03T10:00 ContainerName/ContainerID
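
These flags can also be combined, and --since additionally accepts relative durations (a quick sketch):

# Follow only the last 100 lines of the past 10 minutes
docker logs --since 10m --tail 100 --follow ContainerName/ContainerID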

How to use Jenkins to Build and Deploy docker images without Jenkins Docker plugin

Introduction

Jenkins is one of the most popular continuous integration tools and is widely used. It can be used to create pipelines, run jobs and do automated deployments.
There are many plugins available which can be added to Jenkins and make it really powerful.

Recently I wanted to do automated deployment of Docker images to a Docker server and tried using the docker-plugin, but after spending some time it seemed to ask for too much information: for each operation you need to provide arguments. I would prefer a solution which is more dynamic and picks things up automatically, with the user providing only the bare essential parameters.
This is how it looks:

 

In a nutshell, all I wanted was

  • Jenkins to automatically start building the Docker Image once the code is pushed to GIT.
  • The image should be deployed to the remote Docker host, which should then start the container

How to do it

  • Well, the first step is to enable Jenkins to auto-build with a Bitbucket server or another code repository server, if you want your build process to start as soon as code is committed to the GIT repository.
    Here is an example of how to enable it with Bitbucket Server; for other GIT repository servers like GitHub, Jenkins has different plugins. In case you want the build process to start manually or periodically, you can skip this step.

  • Once the build is automated, the second step is to enable the Remote API on the Docker host. Docker provides a remote REST API which is beneficial if you want to connect to a remote Docker host; this API allows you to do a set of operations from creating and managing containers to starting and stopping them. This step is pivotal before we move on to the next one.
  • The third step is to modify your pom to build and deploy the Docker image. This is the key step, as in this step you will literally do all the work, from creating the image to building it and starting the container; a sketch of what this might look like follows below.

    Once all 3 steps have been tested and completed, all that is left to be done in Jenkins is to invoke the clean install goal as shown below.

Jenkins Build step
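
For the third step, purely as a sketch, one common choice is the Spotify docker-maven-plugin bound to the package phase; the plugin choice, image name, and host URL here are assumptions, not taken from the original post:

<plugin>
  <groupId>com.spotify</groupId>
  <artifactId>docker-maven-plugin</artifactId>
  <version>1.0.0</version>
  <configuration>
    <!-- Hypothetical image name; use your own repo/name -->
    <imageName>myrepo/myapp</imageName>
    <!-- Directory containing the Dockerfile -->
    <dockerDirectory>src/main/docker</dockerDirectory>
    <!-- Remote Docker host with the REST API enabled (step 2) -->
    <dockerHost>http://my-docker-host:2375</dockerHost>
  </configuration>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>build</goal>
      </goals>
    </execution>
  </executions>
</plugin>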

If you want to push the Docker image to a repository after building, testing and deploying, please follow this link.

The post Using Jenkins to Build and Deploy Docker images appeared first on Little Big Extra.

]]> http://littlebigextra.com/using-jenkins-to-build-and-deploy-docker-images/feed/ 0 How to List and Delete Docker Images http://littlebigextra.com/delete_docker_images/ http://littlebigextra.com/delete_docker_images/#respond Tue, 28 Mar 2023 14:41:57 +0000 http://littlebigextra.com/?p=721 Introduction Docker image is an immutable file composed of layers of other images.In this tutorial, I will list out various useful commands to list, delete or remove docker images and also fix some common errors with below commands. Listing the images To list all Docker Images [crayon-5d7029a2c5e14437132658/] To list Docker Images including composition of layers […]

The post How to List and Delete Docker Images appeared first on Little Big Extra.

]]>
Introduction

A Docker image is an immutable file composed of layers of other images. In this tutorial, I will list out various useful commands to list, delete, or remove Docker images, and also fix some common errors with the commands below.

Listing the images

  • To list all Docker Images

docker images

  • To list Docker Images including composition of layers

docker images -a

  • To list just the Image ID’s of Docker Images

docker images -q

  • To list all dangling images (dangling images are ones not referenced by any container, just using disk space)

docker images -f dangling=true

 

Deleting the images

  • To delete/remove a particular Image use

docker rmi IMAGE_ID

  • To delete all dangling images from your Docker host (dangling images are ones not referenced by any container, just using disk space)

docker rmi $(docker images -f dangling=true -q)

  • To delete all images

docker rmi $(docker images -a -q)

  • To delete a single Docker image forcefully, even when it is being used by a container

docker rmi -f IMAGE_ID

  • To delete all Docker images forcefully, even when they are being used by containers (see also the prune shortcut after this list)

docker rmi -f `docker images -q`
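
On newer Docker versions (1.13 and later), the dangling-image cleanup above is also available as a single built-in command:

# Remove dangling images only
docker image prune
# Remove all images not used by at least one container
docker image prune -a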

 

Common errors while using the above commands

Error response from daemon: No such image: 018cfe87f870:latest

  • Your image ID is not correct; check that you are not using a container ID by mistake (docker ps -a). Use the commands above to get the correct image ID.

 

Error response from daemon: conflict: unable to delete ad912aaaca3b (cannot be forced) - image is being used by running container a17e2251b82a

The image is being referenced by a running container.

  1. Either stop the container and then remove the image:

    docker stop CONTAINER_ID

    and then

    docker rmi IMAGE_ID

    OR

  2. Forcefully remove the image using

    docker rmi -f IMAGE_ID

 

 

 

Enable Docker Remote REST API on Docker Host in Ubuntu

 

Introduction

Docker provides a remote REST API which is beneficial if you want to connect to a remote Docker host. A few of the functions you can achieve using the Docker REST API from a simple browser are:

  • Create and Manage Containers
  • Get low-level information about a container
  • Get Container Logs
  • Start/Stop container
  • Kill a container

My remote Docker host was an Ubuntu virtual machine on Microsoft Azure.

In this tutorial, I will show you

  • What didn't work
  • What really worked

Things that didn't work

On the internet, most people suggest editing the DOCKER_OPTS variable.

  • I changed DOCKER_OPTS in the
    /etc/default/docker
    file, but it didn't have any effect.
  • Then I tried changing DOCKER_OPTS in the file
    /etc/init/docker.conf
    but again no success.

What really worked for me to enable the Docker remote API on the Docker host

  • Navigate to /lib/systemd/system in your terminal and open the docker.service file
    vi /lib/systemd/system/docker.service
  • Find the line which starts with ExecStart and add -H=tcp://0.0.0.0:2375 to make it look like
    ExecStart=/usr/bin/docker daemon -H=fd:// -H=tcp://0.0.0.0:2375
  • Save the modified file
  • Reload the docker daemon
    systemctl daemon-reload
  • Restart the docker service
    sudo service docker restart
  • Test if it is working by using this command; if everything is fine, the command below should return JSON
    curl http://localhost:2375/images/json
  • To test remotely, use the PC name or IP address of the Docker host; an example follows below
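
For instance, listing the running containers on the remote host (the host name is a placeholder; /containers/json is the endpoint equivalent of docker ps):

curl http://my-docker-host:2375/containers/json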
Please note that 2376 is Docker's standard TLS port, and 2375 the standard unencrypted port. As we have bound to 0.0.0.0, the interface is open to everyone, and anyone with network access to this port will have full root access on the host. I have restricted access by creating an access-control list in Azure, so please use a similar mechanism to restrict access.
