The post Five points guide to become a Docker Guru for beginners appeared first on Little Big Extra.
When it comes to Docker, many people ask about courses for learning and mastering containerization and deployment. Based on my experience of using Docker, I feel the best resources to learn it are freely available online. Here I will list five basic steps which will help anyone who wishes to learn and master Docker.
1. Installing Docker on your Operating system/local machine
The first baby step is installing Docker on your local machine. Make sure the recommended hardware requirements (at least 4 GB of RAM) are met. Docker is available for all major operating systems.
Docker Community Edition is the open-source version, which can be used free of cost and has most of the Docker features, minus commercial support.
When the installation finishes, Docker starts automatically, and you should see a whale icon in the notification area, which means Docker is running and accessible from a terminal. Open a command-line terminal and run docker version to check the version. Run docker run hello-world to verify that Docker can pull and run images from Docker Hub. You may be asked for your Docker Hub credentials for pulling images.
2. Familiarizing yourself with docker pull and docker run commands
The next step is to familiarize yourself with the docker pull and docker run commands. docker pull fetches an image from the Docker registry, and docker run runs the downloaded image; a running image is called a Docker container.
If you just use the docker run command, it will first pull the image and then run it. While using docker run, make sure that you familiarize yourself with the various flags that can be used. There are many, but some of the handy ones are listed below.
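For illustration, here are a few of the handy flags in action (the images and names are just examples, not an exhaustive list):

```shell
docker run -d nginx                      # -d: run detached, in the background
docker run -p 8080:80 nginx              # -p: bind host port 8080 to container port 80
docker run --name web nginx              # --name: give the container a fixed name
docker run --rm nginx                    # --rm: remove the container when it exits
docker run -e MY_VAR=value nginx         # -e: set an environment variable
docker run -v /host/data:/data nginx     # -v: mount a host directory into the container
docker run -it ubuntu bash               # -i -t: interactive terminal session
```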
At this point, I strongly advise downloading Kitematic. Kitematic’s one-click install gets Docker running on your Mac and lets you control your app containers from a graphical user interface (GUI). If you missed any flags with your run command, you can fix them using Kitematic’s UI. Another feature I like about Kitematic is that it makes it easy to remove unused images, get into the bash shell of a container, see the logs, and so on.
3. Creating Images using Dockerfile and pushing images to the registry using docker push
The Dockerfile is the building block of Docker images and containers. Dockerfiles use a simple DSL which allows you to automate the steps to create an image. A Docker image consists of read-only layers, each of which represents a Dockerfile instruction. The layers are stacked, and each one is a delta of the changes from the previous layer.
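As a sketch, here is a minimal Dockerfile where each instruction produces one read-only layer (the image, paths, and jar name are hypothetical):

```dockerfile
FROM openjdk:8-jre                    # base image layers
COPY target/app.jar /app/app.jar      # new layer: adds the jar
ENV JAVA_OPTS="-Xmx256m"              # new layer: records the env change
CMD ["java", "-jar", "/app/app.jar"]  # new layer: sets the default command
```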
Once your application is ready and you want to package it into a Docker image, you will need to use the Dockerfile DSL instructions. If all is well at this point, all the instructions in the Dockerfile have completed, and an image has been created, do a sanity check using the docker run command to verify that things are working.
Use docker push to share your images to a registry, public or self-hosted, so they can be used across various environments and by various users/team members.
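A typical tag-and-push sequence looks roughly like this (the user and image names are placeholders):

```shell
docker tag myapp:latest mydockerhubuser/myapp:1.0   # tag the local image for the registry
docker login                                        # authenticate to Docker Hub
docker push mydockerhubuser/myapp:1.0               # upload the image layers
```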
4. Docker Compose and Persistent Data Storage
Considering that you are now a master of all the above commands and able to create Docker images, the next step is to use multiple Docker images together. With Compose, you use a YAML file to define your application’s services. You can configure as many containers as you want, how they should be built and connected, and where data should be stored. When the YAML file is complete, you can run a single command to build, run, and configure all of the containers.
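As an illustration, a minimal docker-compose.yml along these lines might look like this (the service names and images are placeholders):

```yaml
version: "3"
services:
  web:
    image: nginx
    ports:
      - "80:80"
    depends_on:
      - db
  db:
    image: postgres:9.6
    environment:
      POSTGRES_PASSWORD: example
```

With this file in place, a single docker-compose up -d builds, runs, and wires both containers.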
Since you now have multiple Docker containers running and interacting with each other, you will want to persist data too. Docker volumes are your saviour for this cause: you can use the -v or --mount flag to map data from Docker containers to disk. Since Docker containers are ephemeral, once they are killed any data stored within them is also lost, hence it is important to use Docker volumes so data can be persisted.
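As a sketch, the same idea in a compose file: a named volume keeps the data outside the container's writable layer (the names here are illustrative):

```yaml
services:
  db:
    image: postgres:9.6
    volumes:
      - dbdata:/var/lib/postgresql/data   # data survives container removal
volumes:
  dbdata:
```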
5. Docker Swarm
So now you have multiple containers running using docker-compose; the next step is to ensure availability and high performance for your application by distributing it over a number of Docker hosts inside a cluster. With Docker Swarm, you can control a lot of things.
If you are using Docker Swarm, make sure that you familiarize yourself with the quorum of nodes and the manager/worker configuration. The managers in Docker Swarm need to maintain a quorum which, in simple terms, means that the number of available manager nodes should always be greater than or equal to (n+1)/2, where n is the total number of manager nodes. So if you have 3 manager nodes, 2 should always be up, and if you have 5 manager nodes, 3 should be up. Also, it is a good idea to have managers running in different geographic locations.
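The majority rule above can be written as a one-line calculation: floor(n/2) + 1 managers must stay reachable, which matches (n+1)/2 for odd n. A minimal sketch in shell:

```shell
# Quorum (majority) of n manager nodes: floor(n/2) + 1 must stay reachable
quorum() { echo $(( $1 / 2 + 1 )); }

quorum 3   # prints 2: a 3-manager swarm survives 1 manager failure
quorum 5   # prints 3: a 5-manager swarm survives 2 manager failures
```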
Hopefully, the above points will help give novices an insight into the Docker world. I have documented my Docker journey over here: Docker Archives – Little Big Extra.
The best way to learn is by doing it.
The post How to maintain Session Persistence (Sticky Session) in Docker Swarm appeared first on Little Big Extra.
Stateless services are in vogue, and rightfully so, as they are easy to scale up and loosely coupled. However, it is practically impossible to stay away from stateful services completely. For example, you might need a login application where user session details must be maintained across several pages.
Session state can be maintained using session replication, sticky sessions, or a combination of both.
Maintaining a user session is relatively easy if you are using a typical monolithic architecture where your application is installed on a couple of servers and you can change the configuration in servers to facilitate session replication using some cache mechanism or session stickiness using a load balancer/reverse proxy.
However, in the case of microservices, where the scale can range from 10 to 10,000s of instances, session replication might slow things down, as each and every service needs to look up session information in the centralised cache.
This article looks at the other approach, session stickiness, where each subsequent request keeps going to the same server (Docker container), preserving the session.
A load balancer typically works at Layer 7 of the OSI model, the application layer (where HTTP lives), and distributes requests across multiple machines, but the Docker ingress routing mesh works at Layer 4 of the OSI model.
Someone on Stack Overflow summarized the solution to the above problem: to implement sticky sessions, you need a reverse proxy inside Docker that supports sticky sessions and communicates directly with the containers by their container ID (rather than doing a DNS lookup on the service name, which would again go to the round-robin load balancer). Implementing that load balancer also requires implementing your own service discovery tool so that it knows which containers are available.
So I tried implementing the reverse proxy with Nginx, and it worked with multiple containers on a single machine, but when deployed on Docker Swarm it didn’t work, probably because I was doing service discovery by name when, as suggested above, I should communicate by container ID and not container name.
I then read about the jwilder Nginx proxy, which works for many people, and it worked on my local machine, but when deployed on Swarm it wouldn’t generate any container IPs inside the upstream { server } block.
Desperate by this time, I was going through all possible solutions people had to offer on the internet (Stack Overflow, Docker community forums...), and one gentleman mentioned something about Traefik. My eyes glittered when I read that it works on Swarm, and here I go.
Even though I was very comfortable with Nginx and assumed that learning Traefik would be an overhead, it wasn’t the case: Traefik is simple to learn and easy to understand, and the good thing is that you need not fiddle with any conf files.
I have tested the configuration with Docker Compose version 3, which is the latest, and deployed it using docker stack deploy.
To start off, you need to create a docker-compose.yml (version 3) and add the Traefik load balancer image. This is how it looks:
loadbalancer:
  image: traefik
  command: --docker \
    --docker.swarmmode \
    --docker.watch \
    --web \
    --loglevel=DEBUG
  ports:
    - 80:80
    - 9090:8080
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  deploy:
    restart_policy:
      condition: any
    mode: replicated
    replicas: 1
    update_config:
      delay: 2s
    placement:
      constraints: [node.role == manager]
  networks:
    - net
A few things to note here:
placement: constraints: [node.role == manager] specifies that Traefik runs only on a manager node.
To add a Docker image which will hold session stickiness, we need to add something like this:
whoami:
  image: tutum/hello-world
  networks:
    - net
  ports:
    - "80"
  deploy:
    restart_policy:
      condition: any
    mode: replicated
    replicas: 5
    placement:
      constraints: [node.role == worker]
    update_config:
      delay: 2s
    labels:
      - "traefik.docker.network=test_net"
      - "traefik.port=80"
      - "traefik.frontend.rule=PathPrefix:/hello;"
      - "traefik.backend.loadbalancer.sticky=true"
This is a hello-world image which displays the name of the container it’s running on. We are defining 5 replicas of this container in the file. The important section, where Traefik does the magic, is "labels".
- "traefik.docker.network=test_net" tells Traefik which network this service runs on. Please note that the network name is test_net, where test is the stack name; in the service definition we just gave net as the name.
- "traefik.port=80" — this hello-world app runs on container port 80, so we map the Traefik port to 80.
- "traefik.frontend.rule=PathPrefix:/hello" means all URLs starting with {domainname}/hello/ will be routed to this container/application.
- "traefik.backend.loadbalancer.sticky=true" is where the magic happens: we tell Traefik to make sessions sticky.
Try to use the below file as-is and see if it works; if it does, then fiddle with it and make your changes accordingly.
You will need to create a file called docker-compose.yml on your Docker manager node and run this command
docker stack deploy -c docker-compose.yml test, where "test" is the namespace (stack name).
version: "3"
services:
  whoami:
    image: tutum/hello-world
    networks:
      - net
    ports:
      - "80"
    deploy:
      restart_policy:
        condition: any
      mode: replicated
      replicas: 5
      placement:
        constraints: [node.role == worker]
      update_config:
        delay: 2s
      labels:
        - "traefik.docker.network=test_net"
        - "traefik.port=80"
        - "traefik.frontend.rule=PathPrefix:/hello;"
        - "traefik.backend.loadbalancer.sticky=true"
  loadbalancer:
    image: traefik
    command: --docker \
      --docker.swarmmode \
      --docker.watch \
      --web \
      --loglevel=DEBUG
    ports:
      - 80:80
      - 9090:8080
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      restart_policy:
        condition: any
      mode: replicated
      replicas: 1
      update_config:
        delay: 2s
      placement:
        constraints: [node.role == manager]
    networks:
      - net
networks:
  net:
Now you can test the service at http://{Your-Domain-name}/hello, and http://{Your-Domain-name}:9090 should show the Traefik dashboard.
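One way to check stickiness from the command line, assuming the stack is up and curl is available, is to reuse the cookie that Traefik sets:

```shell
# First request: Traefik picks a backend and writes its cookie to the jar
curl -c cookies.txt http://localhost/hello
# Replaying the cookie should keep hitting the same container
curl -b cookies.txt http://localhost/hello
curl -b cookies.txt http://localhost/hello
```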
Though there are 5 replicas of the "whoami" service above, it should always display the same container ID. If it does, congratulations: your session persistence is working.
This is how the Traefik dashboard looks:
In case you don’t have a swarm and just want to test on your localhost machine, you can use the following docker-compose file. For it to run successfully, create a directory called test (required for the namespace, as we have given our network name as test_net in "traefik.docker.network=test_net"; change the directory name if you have a different network) and run
docker-compose up -d
version: "3"
services:
  whoami:
    image: tutum/hello-world
    networks:
      - net
    ports:
      - "80"
    labels:
      - "traefik.backend.loadbalancer.sticky=true"
      - "traefik.docker.network=test_net"
      - "traefik.port=80"
      - "traefik.frontend.rule=PathPrefix:/hello"
  loadbalancer:
    image: traefik
    command: --docker \
      --docker.watch \
      --web \
      --loglevel=DEBUG
    ports:
      - 80:80
      - 25581:8080
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - net
networks:
  net:
docker-compose should create the required services, and the whoami service should be available at http://localhost/hello.
Scale the service to, say, 5 instances with docker-compose scale whoami=5 and test again.
Follow this Video to see things in action
The post Installing Docker Images from private repositories in Docker Swarm appeared first on Little Big Extra.
On a Docker Swarm you can deploy only those images which come from a Docker registry, as all the manager/worker nodes need to pull them separately.
When Docker images are pulled from a public repository like Docker Hub, they can be easily deployed using a simple command such as
docker service create nginx
or docker stack deploy -c "name of the compose file".
However, in the case of private repositories, you need to provide Docker credentials and repository details too.
In this tutorial, I will list out the steps needed for deploying Docker images from a DockerHub private repository.
Consider that you have a substantial number of images and are using a docker-compose file listing the Docker images needed from private repositories. The first step is to log in to Docker Hub from all nodes (manager and worker).
Use the command docker login and input your username and password.
deploy:
  placement:
    constraints: [node.role == worker]
When deploying on Swarm you need to run the following command, considering that you have created a compose file
docker login -u #DockerHub Username# -p #DockerHub Password# registry.hub.docker.com/#Organization-Or-DockerHubUserName# && docker stack deploy -c docker-swarm.yml #STACK-NAME# --with-registry-auth
Breaking the above command down for clarification:
A simple example of the above command would be:
docker login -u username -p password registry.hub.docker.com/myproject && docker stack deploy -c docker-swarm.yml test --with-registry-auth
The post Docker Swarm : How to Collect logs from multiple containers and write to a single file appeared first on Little Big Extra.
So recently I had deployed scalable microservices using docker stack deploy on a Docker swarm, and had multiple microservices running on multiple nodes.
To analyse any microservice, I had to log on to the manager node and find out on which node (manager/worker) the service was running. If the service was scaled to more than 1 replica, I would have to log on to more than one machine and check the Docker container (microservice) logs just to get a glimpse of an exception. That was quite annoying and time-consuming.
Fluentd is an open-source data collector for a unified logging layer. It can collect logs from various backends and stream them to various output mechanisms like MongoDB, Elasticsearch, file, etc.
In this tutorial, I will create a single log file for each service, in a separate folder, irrespective of whether the service has 1 or more instances.
To start with, we need to override the default fluent.conf with our custom configuration. More about the config file can be read on the Fluentd website.
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>
<match tutum>
  @type copy
  <store>
    @type file
    path /fluentd/log/tutum.*.log
    time_slice_format %Y%m%d
    time_slice_wait 10m
    time_format %Y%m%dT%H%M%S%z
    compress gzip
    utc
    format json
  </store>
</match>
<match visualizer>
  @type copy
  <store>
    @type file
    path /fluentd/log/visualizer.*.log
    time_slice_format %Y%m%d
    time_slice_wait 10m
    time_format %Y%m%dT%H%M%S%z
    compress gzip
    utc
    format json
  </store>
</match>
In the above config, we listen for anything forwarded on port 24224 and then match on the tag: for a log message tagged tutum, create tutum.log, and for logs matching the visualizer tag, create another file called visualizer.log.
In the path we specified a file called tutum.*.log; the * is replaced with a date and a buffer identifier, so the final file name looks something like tutum.20230630.b5532f4bcd1ec79b0..log.
So the next step is to create a custom image of fluentd which has the above configuration file.
Save the above file as fluent.conf in a folder named conf, and then create a file called Dockerfile at the same level as the conf folder.
# fluentd/Dockerfile
FROM fluent/fluentd:v0.12-debian
RUN rm /fluentd/etc/fluent.conf
COPY ./conf/fluent.conf /fluentd/etc
As you can see, in this Dockerfile we are replacing the fluent.conf in the base image with our version.
Now let us create the Docker image and push it to your repository:
docker build -t ##YourREPOname##/myfluentd:latest .
docker push ##YourREPOname##/myfluentd
So now we need to tell our Docker service to use fluentd as the logging driver.
In this case, I am using tutum/hello-world, which displays the container name on the page.
whoami:
  image: tutum/hello-world
  networks:
    - net
  ports:
    - "80:80"
  logging:
    driver: "fluentd"
    options:
      tag: tutum
  deploy:
    restart_policy:
      max_attempts: 5
    mode: replicated
    replicas: 5
    placement:
      constraints: [node.role == worker]
    update_config:
      delay: 2s
In these lines, we define that our service should use fluentd as the logging driver:
logging:
  driver: "fluentd"
options: tag: tutum sets the tag that is used as an identifier to distinguish the various services. Remember the match tag in the config file fluent.conf.
We also need to define our fluentd image in the docker-compose file.
fluentd:
  image: ##YourRepo##/myfluentd
  volumes:
    - ./Logs:/fluentd/log
  ports:
    - "24224:24224"
    - "24224:24224/udp"
  networks:
    - net
  deploy:
    restart_policy:
      condition: on-failure
      delay: 20s
      max_attempts: 3
      window: 120s
    mode: replicated
    replicas: 1
    placement:
      constraints: [node.role == manager]
    update_config:
      delay: 2s
As you might have noticed above, we are storing the logs via the volume mapping ./Logs:/fluentd/log, so you need to create a "Logs" directory at the same path from where you will run the docker-compose file on your manager node.
This is how the complete file looks:
version: "3"
services:
  whoami:
    image: tutum/hello-world
    networks:
      - net
    ports:
      - "80:80"
    logging:
      driver: "fluentd"
      options:
        tag: tutum
    deploy:
      restart_policy:
        condition: on-failure
        delay: 20s
        max_attempts: 3
        window: 120s
      mode: replicated
      replicas: 4
      placement:
        constraints: [node.role == worker]
      update_config:
        delay: 2s
  vizualizer:
    image: dockersamples/visualizer
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - "8080:8080"
    networks:
      - net
    logging:
      driver: "fluentd"
      options:
        tag: visualizer
    deploy:
      restart_policy:
        condition: on-failure
        delay: 20s
        max_attempts: 3
        window: 120s
      mode: replicated
      # one container per manager node
      replicas: 1
      update_config:
        delay: 2s
      placement:
        constraints: [node.role == manager]
  fluentd:
    image: ##YOUR-REPO##/myfluentd
    volumes:
      - ./Logs:/fluentd/log
    ports:
      - "24224:24224"
      - "24224:24224/udp"
    networks:
      - net
    deploy:
      restart_policy:
        condition: on-failure
        delay: 20s
        max_attempts: 3
        window: 120s
      mode: replicated
      replicas: 1
      placement:
        constraints: [node.role == manager]
      update_config:
        delay: 2s
networks:
  net:
You can run the above services on Docker swarm using the below command; make sure you save the file with the name docker-swarm.yml.
docker stack deploy -c docker-swarm.yml test
In your Logs directory there should now be 2 log files, something like tutum.*.*.log and visualizer.*.*.log.
Log analysis becomes much easier when combined with Elasticsearch and Kibana, as it eliminates the need to log in to machines, and log searching, filtering and analysis can be done more easily. I intend to cover that in my next blog.
The post How to install Nginx as a reverse proxy server with Docker appeared first on Little Big Extra.
On a single Docker host machine we can run hundreds of containers, and each container can be accessed by exposing a port on the host machine and binding it to the Docker port.
This is the most standard practice: we use the docker run command with the -p option to bind a container port to a host machine port. With a couple of services this process works well, but if we have to cater for a large number of containers, remembering and managing port numbers becomes a herculean task.
This problem can be dealt with by installing Nginx, a reverse proxy server which directs client requests to the appropriate Docker container.
The Nginx image can be downloaded from Docker Hub and run simply by using docker run nginx. The Nginx configuration is stored in the file /etc/nginx/nginx.conf, which holds a reference to default.conf via the line
include /etc/nginx/conf.d/*.conf;
Follow the steps below to run an Nginx server and have a peek around the Nginx configuration.
docker run -d --name nginx nginx
bash -c "clear && docker exec -it nginx sh"
Navigate to /etc/nginx/conf.d, then cat default.conf and copy the file contents. We will use them in the next steps.
In this step we will modify the base Nginx image with changes to default.conf.
Create a simple project in eclipse(File->New ->General-> Project) and create a new file called default.conf in the project directory.
In this file, add a location block of the form:
location /<URL-To-BE-ACCESSED> {
    proxy_pass http://<DOCKER_CONTAINER_NAME>:<DOCKER-PORT>;
}
For example:
location /app1 {
    proxy_pass http://microservice1:8080;
}
where app1 is the URL path, microservice1 is the Docker container name, and 8080 is the Docker port; this info can be found using docker ps -a.
This is how default.conf looks for 2 Docker containers named microservice1 and microservice2:
server {
    listen 80;
    server_name nginxserver;

    location /app1 {
        proxy_pass http://microservice1:8080;
    }

    location /app2 {
        proxy_pass http://microservice2:8080;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
Next step is to create a Dockerfile where we will replace the default configuration file default.conf in the nginx base image with our version of default.conf
Create a file called Dockerfile and add the below contents. Make sure the file is at the same level as default.conf, under the project root directory.
#GET the base default nginx image from docker hub
FROM nginx
#Delete the existing default.conf
RUN rm /etc/nginx/conf.d/default.conf
#Copy the custom default.conf to the nginx configuration
COPY default.conf /etc/nginx/conf.d
Now all we need is to build this Docker image. Open a terminal/command prompt and navigate to the project directory:
docker build -t mynginx .
If the above command was successful, our own custom Nginx image is ready with our configuration.
Each Docker container is a separate process and is unaware of other containers. However, Docker has a --link option which can be used to create links between 2 containers and make them aware of each other's existence.
Before we run our image, we need to make sure that the services mentioned in the location blocks are up and running, in our case microservice1 and microservice2.
docker ps -a
Next, we link these 2 containers with our Nginx container using this command:
docker run -d --name mynginx -p 80:80 -p 443:443 --link=microservice1 --link=microservice2 mynginx
If the above command completed with no errors, we have successfully installed Nginx as a reverse proxy server; it can be tested by opening a browser and accessing
http://localhost/app1
http://localhost/app2
For reference, please see the video below.
The post How to create docker image of Standalone Spring MVC project appeared first on Little Big Extra.
There is a lot of documentation available on running Spring Boot as a Docker container, and it is quite easy to achieve too. Follow my tutorial, which gives a step-by-step guide on creating a Spring Boot Docker image.
But what about an existing Spring MVC project, say one done a couple of years ago when Spring Boot was new or not as popular, which you now want to containerize without converting it to a Spring Boot app?
The good news is that it is very simple to create a Docker image of a Spring MVC project and run it as a Docker container.
In this tutorial I will be using Tomcat, but you can choose other application servers too.
Well, get your thinking hat on. In Spring Boot we embed a Tomcat application server and run our app on top of it, so what about running our application in a standalone Tomcat server?
Yes, that’s all we will do. We will get a Tomcat Image and deploy our WAR on top of it.
In the project root folder, you need to create a Docker file named Dockerfile, without any extension. In this file we will add a couple of steps:
FROM tomcat:8.0.20-jre8
COPY /target/myApplication.war /usr/local/tomcat/webapps/
The first line pulls the Tomcat image from Docker Hub.
The second line copies your WAR file to Tomcat's webapps directory, where it will be deployed.
If you are familiar with Tomcat, you know that any WAR file in the webapps folder is deployed on server startup.
You can use Dockerfile commands like RUN, COPY, ADD, etc. to update the config folder; for example, RUN rm -rf /usr/local/tomcat/webapps/ROOT will remove/uninstall the existing ROOT application.
Once you have the docker file ready in your project, we need to create a Docker image.
To create the Docker image, open a terminal/command prompt, navigate to the project root directory, and build the image (you need Docker installed on your system):
docker build -t myapp .
(don't miss the ".", as it tells Docker that the Dockerfile is in the current directory)
If the above command is successful, then use docker images to see if the image has been created. There should be a Docker image called myapp:
REPOSITORY          TAG       IMAGE ID       CREATED          SIZE
myapp               latest    36ada7cb0f68   6 seconds ago    539 MB
docker run -d -p 8080:8080 --name mydockerapp myapp
Here we are asking Docker to run our application on host port 8080, mapped to port 8080 inside the container, where Tomcat listens by default. Docker assigns a random name to the container each time; it's good practice to name the container, and we have done that using the --name flag, so our container will be called mydockerapp.
If the above command was successful, use docker ps -a; it should show the running containers, and under the NAMES column you should see mydockerapp:
CONTAINER ID   IMAGE   COMMAND             CREATED         STATUS         PORTS                    NAMES
45046a198154   myapp   "catalina.sh run"   9 seconds ago   Up 8 seconds   0.0.0.0:8080->8080/tcp   mydockerapp
The Docker container is running on port 8080, so your application should be available at http://localhost:8080/myApplication/ (the context path comes from the WAR file name).
The post How to view docker container logs appeared first on Little Big Extra.
If you are running your container with -d (detached mode) or via the Docker remote API, you will not see any logs in the console. To access a running Docker container's logs, you need to use the docker logs command.
In this tutorial, I will list out various commands to display logs
To see a Docker container's logs, first make sure the container is running; you can check this by using
docker ps -a
Once you have confirmed that the container is up and running, you can use the following commands.
docker logs ContainerName/ContainerID
docker logs --follow ContainerName/ContainerID
In this case, last 2500 lines will be displayed
docker logs --tail 2500 ContainerName/ContainerID
First, get the timestamps in the logs using
docker logs --timestamps ContainerName/ContainerID
Now use the timestamp, which should be in a format like 2023-05-03T10:00:59.935397007Z, to display logs.
For example, if only one day's logs need to be viewed:
docker logs --since 2023-05-03 ContainerName/ContainerID
docker logs --since 2023-05-03T10:00 ContainerName/ContainerID
The post Using Jenkins to Build and Deploy Docker images appeared first on Little Big Extra.
Jenkins is one of the most popular continuous integration tools and is widely used. It can be used to create pipelines, run jobs and do automated deployments.
There are many plugins available which can be added to Jenkins and make it really powerful.
Recently I wanted to do an automated deployment of Docker images to a Docker server and tried using docker-plugin, but after spending some time it seemed to ask for too much information: for each operation you need to provide arguments. I would prefer a solution which is more dynamic and picks things up automatically, with the user providing only the bare essential parameters.
This is how it looks:
In a nutshell, all I wanted was
Once all 3 steps have been tested and completed, all that is left to do in Jenkins is to invoke the clean install goal as shown below.
If you want to push the Docker image to a repository after building, testing and deploying, please follow this link.
The post How to List and Delete Docker Images appeared first on Little Big Extra.
A Docker image is an immutable file composed of layers of other images. In this tutorial, I will list various useful commands to list and delete Docker images, and also fix some common errors seen with the commands below.
docker images
docker images -a
docker images -q
docker images -f dangling=true
docker rmi IMAGE_ID
docker rmi $(docker images -f dangling=true -q)
docker rmi $(docker images -a -q)
docker rmi -f IMAGE_ID
docker rmi -f `docker images -q`
Error response from daemon: No such image: 018cfe87f870:latest
Error response from daemon: conflict: unable to delete ad912aaaca3b (cannot be forced) - image is being used by running container a17e2251b82a
This error means the image is being used by a running container. To fix it, either:
1. Stop the container and then remove the image using docker stop CONTAINER_ID and then docker rmi IMAGE_ID
OR
2. Forcefully stop the container and remove the image using
docker rmi -f IMAGE_ID
The post How to Enable Docker Remote REST API on Docker Host appeared first on Little Big Extra.
Docker provides a remote REST API, which is beneficial if you want to connect to a remote Docker host. A few of the functions you can achieve using the Docker REST API from a simple browser are:
In this tutorial, I will show you
Over the internet, most people suggest editing the DOCKER_OPTS variable in the /etc/default/docker file, but that didn't have any effect; I also tried /etc/init/docker.conf, but again no success.
What finally worked was editing the systemd service file: vi /lib/systemd/system/docker.service
ExecStart=/usr/bin/docker daemon -H=fd:// -H=tcp://0.0.0.0:2375
systemctl daemon-reload
sudo service docker restart
curl http://localhost:2375/images/json
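A few other handy endpoints of the same REST API (assuming it is exposed on port 2375 as configured above):

```shell
curl http://localhost:2375/version          # engine version details
curl http://localhost:2375/info             # system-wide information
curl http://localhost:2375/containers/json  # list running containers
```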