
Five-Point Guide to Becoming a Docker Guru for Beginners

Introduction

When it comes to Docker, many people ask about courses for learning and mastering containerization and deployment with Docker. Based on my experience of using Docker, I feel the best resources to learn Docker are available at . Here I will list five basic steps that will help anyone who wishes to learn and master Docker.


5-Point Guide to Become a Docker Guru

1. Installing Docker on your operating system/local machine


The first baby step is installing Docker on your local machine. Make sure Docker's memory recommendation (at least 4GB of RAM) and other hardware requirements are met. Docker is available for both Windows and Mac.

Docker Community Edition is the open-source version, which can be used free of cost and has most of Docker's features, minus commercial support.

When the installation finishes, Docker starts automatically, and you should see a whale icon in the notification area, which means Docker is running and accessible from a terminal. Open a command-line terminal and run docker version to check the version, then run docker run hello-world to verify that Docker can pull and run images from Docker Hub. You may be asked for your Docker Hub credentials when pulling images.
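
For reference, those two verification commands look like this at the terminal:

    # Check the installed client and server version
    docker version

    # Pull the hello-world image from Docker Hub and run it
    docker run hello-world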

2. Familiarizing yourself with the docker pull and docker run commands

The next step is to familiarize yourself with the docker pull and docker run commands. docker pull fetches an image from a Docker registry, and docker run runs the downloaded image. A running image is also called a Docker container.

If you just use the docker run command, it will first pull the image and then run it. While using docker run, make sure you familiarize yourself with the various flags it accepts. There are many, but some of the handy ones are listed below, with a short usage sketch after the list.

  • -d: Run the container detached (in the background) instead of in the foreground.
  • --name: Assign a name to the container; if you do not, the daemon generates a random string name for you.
  • -P: Publish all exposed ports to random ports on the host interfaces.
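
As a short usage sketch of these flags, assuming the official nginx image from Docker Hub (the container name my-web is just an example):

    # Download the image without running it
    docker pull nginx

    # Run it detached, with a chosen name, publishing its exposed
    # ports to random high ports on the host
    docker run -d --name my-web -P nginx

    # See which host port was mapped to the container's port 80
    docker port my-web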

At this point, I strongly advise downloading Kitematic. Kitematic's one-click install gets Docker running on your Mac and lets you control your app containers from a graphical user interface (GUI). If you missed any flags with your run command, you can fix them using Kitematic's UI. Another feature I like about Kitematic is that it makes it easy to remove unused images, get into the bash shell of a Docker container, see the logs, and so on.

3. Creating images using a Dockerfile and pushing them to the registry using docker push

A Dockerfile is the building block of Docker images and containers. Dockerfiles use a simple DSL that lets you automate the steps to create an image. A Docker image consists of read-only layers, each of which represents a Dockerfile instruction. The layers are stacked, and each one is a delta of the changes from the previous layer.

Once your application is ready and you want to package it into a Docker image, you will need the Dockerfile DSL instructions. If all is well at this point, all the Dockerfile instructions have completed, and an image has been created, then do a sanity check with docker run to verify that things are working.
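
For illustration, here is a minimal sketch of a Dockerfile for a hypothetical static site served by nginx; every instruction becomes one read-only layer of the resulting image:

    # A minimal sketch: each instruction produces one read-only layer.
    # Start from the official nginx base image
    FROM nginx:alpine
    # Copy your static files into the web root as a new layer
    COPY ./site /usr/share/nginx/html
    # Document the port the server listens on
    EXPOSE 80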

Use docker push to share your images to the Docker Hub registry or to a self-hosted one so they can be used across various environments and by various users/team members.
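
Putting the build, the sanity check, and the push together, assuming a hypothetical Docker Hub account named myuser:

    # Build an image from the Dockerfile in the current directory
    docker build -t myuser/my-site:1.0 .

    # Sanity check: run the image and publish its exposed port
    docker run -d --name my-site-test -P myuser/my-site:1.0

    # Log in, then push the image to the Docker Hub registry
    docker login
    docker push myuser/my-site:1.0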

4. Docker Compose and Persistent Data Storage 

Considering that you have now mastered all the above commands and can create Docker images, the next step is to run multiple Docker images together. With Compose, you use a YAML file to configure your application's services. You can configure as many containers as you want, how they should be built and connected, and where data should be stored. When the YAML file is complete, you can run a single command to build, run, and configure all of the containers.
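
As a minimal sketch, assuming a hypothetical web service built from the local Dockerfile plus a Redis backend, a docker-compose.yml could look like this:

    version: "3"
    services:
      web:
        build: .             # build the image from the local Dockerfile
        ports:
          - "8080:80"        # map host port 8080 to container port 80
        depends_on:
          - redis            # start redis before web
      redis:
        image: redis:alpine  # pull the official Redis image

With this file saved as docker-compose.yml, a single docker-compose up -d builds, creates, and starts both containers.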

Since you now have multiple Docker containers running and interacting with each other, you will want to persist their data too. Docker volumes are your saviour for this cause: you can use the -v or --mount flag to map data from Docker containers to disk. Because Docker containers are ephemeral, any data stored within them is lost once they are killed, so it is important to use Docker volumes to persist it.
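
A minimal sketch of a named volume in action, using hypothetical names (app-data, store):

    # Create a named volume managed by Docker
    docker volume create app-data

    # Mount the volume into a container; anything written under /data
    # survives even after this container is removed
    docker run -d --name store -v app-data:/data redis:alpine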

5. Docker Swarm 

So now you have multiple containers running using docker-compose; the next step is to ensure availability and high performance for your application by distributing it over a number of Docker hosts inside a cluster. With Docker Swarm, you can control a lot of things:

  • You can have multiple instances of a single service (container) running on the same or different machines.
  • You can scale the number of containers up or down at runtime.
  • Service discovery, rolling updates and load balancing are provided by default.

If you are using Docker Swarm, make sure you familiarize yourself with the quorum of nodes and the manager/worker configuration. The managers in Docker Swarm must maintain a quorum, which, in simple terms, means that the number of available manager nodes should always be greater than or equal to (n+1)/2, where n is the total number of manager nodes. So if you have 3 manager nodes, 2 should always be up, and if you have 5 manager nodes, 3 should be up. It is also a good idea to have managers running in different geographic locations.
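
As a minimal sketch, the commands below set up a Swarm and run a hypothetical service named web using the official nginx image:

    # Turn the current Docker host into a Swarm manager
    docker swarm init

    # Run three replicas of a service, load-balanced on port 8080 of every node
    docker service create --name web --replicas 3 -p 8080:80 nginx

    # Scale the service up at runtime
    docker service scale web=5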

Hopefully, the above points will give novices some insight into the Docker world. I have documented my Docker journey here: Docker Archives – Little Big Extra.

The best way to learn is by doing it.   
