Define Docker : Docker is an open-source platform that automates the deployment of applications inside lightweight, portable containers.
Define Docker image : A Docker image is a read-only template that contains the code, dependencies, and environment variables necessary to run the application in a container.
Define Container : A container is a software package that contains everything an application needs to run and function seamlessly.
Define docker daemon : A long-running background process that manages Docker objects, such as images, containers, networks, and storage volumes.
Define Docker Engine : the core client-server technology (daemon, REST API, and CLI) used to create and run containers.
Define Docker Desktop: Docker Desktop is an easy-to-install application for your Mac, Windows or Linux environment that enables you to build and share containerized applications and microservices.
Define Docker Registries: A Docker registry stores Docker images. Docker Hub is a public registry that anyone can use, and Docker is configured to look for images on Docker Hub by default. You can even run your own private registry.
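A sketch of the typical registry workflow (the image name and `<username>` are placeholders for your own Docker Hub account):

```shell
# Tag a local image with a registry-qualified name
docker tag myapp:latest <username>/myapp:latest

# Log in and push the image to Docker Hub (the default registry)
docker login
docker push <username>/myapp:latest

# Anyone with access can now pull it
docker pull <username>/myapp:latest
```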
- Docker Architecture: Docker uses a client-server architecture; the Docker client talks to the Docker daemon, which builds, runs, and distributes the containers.
- Docker lifecycle?
-> The lifecycle of Docker can be understood through three important commands,
docker build -> builds a Docker image from a Dockerfile
docker run -> runs a container from a Docker image
docker push -> pushes the image to a public/private registry to share Docker images.
Image vs Container :
image : a read-only template
container : a running instance of an image, packaged with everything needed to run the app seamlessly.
What is port mapping and why is it needed? : a technique that exposes a container's network services to the host or to other devices on the network; without it, the application inside the container is reachable only from within the Docker network.
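For example (nginx is used here only as an illustration):

```shell
# Map port 8080 on the host to port 80 inside the container,
# so the container's web server is reachable at http://localhost:8080
docker run -d -p 8080:80 nginx
```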
What is a Docker layer : a layer is a building block of a Docker image; each instruction in the Dockerfile creates a new layer, and the final image is built up from these layers incrementally. The benefit of building incrementally is that layers can be cached and reused in future builds.
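A sketch of how layer caching plays out (a hypothetical Node.js app): copying the dependency manifest before the source code means the expensive install layer is reused as long as package.json is unchanged.

```dockerfile
FROM node:16-alpine
WORKDIR /app
# Layer: dependency manifest only — cached until package.json changes
COPY package.json .
# Layer: install — reused from cache when the layer above is unchanged
RUN npm install
# Layer: source code — changes here do not invalidate the install layer
COPY . .
CMD ["node", "index.js"]
```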
Docker volume : a file system used to store data outside the container. When a container is deleted or goes down, the data inside it is lost; volumes preserve that data independently of the container's lifecycle.
What is a Docker network? : a feature that allows containers to communicate with each other, or with the outside host.
-> When a container is created, it is attached by default to the bridge network, whose interface on the host is known as docker0. Containers connect to this bridge using veth pairs.
What is a veth? -> a virtual Ethernet device that acts like a cable connecting the container to the host's bridge network.
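These details can be inspected on a Linux host (a sketch; the output depends on your setup):

```shell
# List networks; "bridge" is the default docker0 network
docker network ls

# Inspect the bridge network to see its subnet and attached containers
docker network inspect bridge

# On the host, veth interfaces appear alongside docker0 (Linux only)
ip link show type veth
```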
What is a bind mount : it binds a file or directory of the host machine into the container.
Basic commands:
docker run <image> : to run a container from an image
docker images : to list the images
docker pull <image> : to pull an image from a registry (Docker Hub by default)
docker ps : to list the running containers
docker kill <container> : to kill a running container
docker rmi <image> : to remove an image
docker build -t <name>:<tag> . : to build an image from the Dockerfile at the given location
docker exec -it <container> /bin/bash : to open an interactive shell inside a running container
docker volume create <name> : to create a volume
docker volume ls : to list the volumes created
docker run -v <volume>:<container_path> -p <host_port>:<container_port> <image> : to run with a volume and port mapping
docker network create <name> : to create a network
docker logs <container> : to get the logs of the container if run in detached mode
docker run -v <host_path>:<container_path> <image> : for bind mount
docker network ls : to get the list of networks
docker inspect <container> : to get the details of the container
docker volume inspect <volume> : to get the details of the volume
docker volume rm <volume> : to delete the volume
Attributes :
-d : detached mode, to run the container in the background and free the terminal
-p : for port mapping (host_port:container_port)
-t : for giving the tag (with docker build) or allocating a TTY (with docker run)
-e : for passing an env variable
-it : to run in an interactive manner with a TTY attached
--name : to give a name to the container
--network : to attach a network to the container
-v : for specifying a volume or bind mount
Examples of docker command:
docker build -t test:latest .
docker network create check
docker run -p 3000:3000 --network=check test
docker run -p 3000:3000 --name=test_container --network=check test
docker run -v /host_app:/container_app node:16-alpine or
docker run -d --mount type=volume,source=<volume_name>,target=/app <image_name>
docker run -it --name=test_mount \
--mount type=bind,source=$(pwd)/app,target=/usr/src/app \
node:16-alpine
Commands to install docker on ubuntu :
You can create an Ubuntu EC2 Instance on AWS and run the below commands to install docker.
sudo apt update
sudo apt install docker.io -y
You use the below command to verify if the docker daemon is actually started and Active
sudo systemctl status docker
If you notice that the docker daemon is not running, you can start the daemon using the below command
sudo systemctl start docker
To grant your user access to run docker commands, add the user to the docker Linux group, which is created by default when Docker is installed. Log out and back in for the group change to take effect.
sudo usermod -aG docker ubuntu
docker run hello-world
Demo Dockerfile:
FROM ubuntu:latest
WORKDIR /app
COPY . /app
RUN apt-get update && apt-get install -y python3 python3-pip
ENV key=value
CMD ["python3", "app.py"]
Docker multistage example :
###########################################
# BASE IMAGE
###########################################
FROM ubuntu AS build
RUN apt-get update && apt-get install -y golang-go
ENV GO111MODULE=off
COPY . .
RUN CGO_ENABLED=0 go build -o /app .
############################################
# HERE STARTS THE MAGIC OF MULTI STAGE BUILD
############################################
FROM scratch
# Copy the compiled binary from the build stage
COPY --from=build /app /app
# Set the entrypoint for the container to run the binary
ENTRYPOINT ["/app"]
Demo compose.yaml file:
services:
  backend:
    build: ./mern/backend
    ports:
      - "5050:5050"
    networks:
      - mern_network
    environment:
      MONGO_URI: mongodb://mongodb:27017/mydatabase
    depends_on:
      - mongodb
  frontend:
    build: ./mern/frontend
    ports:
      - "5173:5173"
    networks:
      - mern_network
    environment:
      REACT_APP_API_URL: http://backend:5050
  mongodb:
    image: mongo:latest
    ports:
      - "27017:27017"
    networks:
      - mern_network
    volumes:
      - mongo-data:/data/db
networks:
  mern_network:
    driver: bridge
volumes:
  mongo-data:
    driver: local # Persist MongoDB data locally
Files and Folders that containers use from host operating system:
The host's file system: Docker containers can access the host file system using bind mounts, which allow the container to read and write files in the host file system.
Networking stack: The host's networking stack is used to provide network connectivity to the container. Docker containers can be connected to the host's network directly or through a virtual network.
System calls: The host's kernel handles system calls from the container, which is how the container accesses the host's resources, such as CPU, memory, and I/O.
Namespaces: Docker containers use Linux namespaces to create isolated environments for the container's processes. Namespaces provide isolation for resources such as the file system, process ID, and network.
Control groups (cgroups): Docker containers use cgroups to limit and control the amount of resources, such as CPU, memory, and I/O, that a container can access.
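Resource limits enforced via cgroups can be set directly on `docker run` (a sketch; nginx is only a placeholder image):

```shell
# Limit the container to 512 MB of memory and half a CPU core via cgroups
docker run -d --memory=512m --cpus=0.5 nginx

# Show a one-shot snapshot of resource usage per container
docker stats --no-stream
```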
- What is the issue in production while using Docker, and how can you overcome it?
-> Even for smaller applications the container size increases significantly, and a single-stage Dockerfile built on a full OS base image carries OS-related vulnerabilities. To overcome this problem, we use a multi-stage Dockerfile together with a distroless image.
Define distroless image: A distroless image is a Docker image that contains only the essential components needed to run an application or service.
What are the advantages of a distroless image?
-> It significantly reduces the size of the Docker container and provides a higher level of security: since it contains only minimal software, most OS-related vulnerabilities are removed by using the distroless image.
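A sketch of a multi-stage build with a distroless final image (the Go program and paths are hypothetical; gcr.io/distroless/static is one of Google's distroless base images):

```dockerfile
# Build stage: full Go toolchain
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Final stage: distroless image with no shell or package manager
FROM gcr.io/distroless/static
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```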
What is the issue related to containers when it comes to the file system?
-> Containers are ephemeral in nature, meaning they do not have permanent storage, so when a container goes down all its files are lost. In order to overcome this problem there are two ways: 1) Bind mounts 2) Volumes
Define Bind mounts: a way to bind a file or directory of the host's file system into the container, allowing the container to read and write files on the host system.
Define volume: It is a persistent data store that allows users to manage and store data outside of containers.
Difference between bind mounts and volumes:
-> Bind mounts attach a specific host folder to the container, whereas volumes create a new directory inside Docker's storage directory on the host machine.
Bind mounts are managed by the host file system, whereas volumes are managed by Docker.
Volumes can be easily shared and reused between containers, and Docker manages their lifecycle: you can create, list, and destroy them with docker volume commands.
Volumes can also be backed by external storage drivers if you have little space on the host system.
Diff Btw Docker COPY & ADD: ADD can copy files from a URL and automatically extracts local tar archives, while COPY only copies files from the local build context.
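A sketch illustrating the difference (the URL and file names are placeholders):

```dockerfile
# COPY: files from the local build context only
COPY ./config.json /app/config.json

# ADD: can fetch from a URL ...
ADD https://example.com/archive.tar.gz /tmp/
# ... and auto-extracts a local tar archive into the target directory
ADD archive.tar.gz /app/
```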
Diff Btw Docker CMD & ENTRYPOINT: CMD provides default arguments that are easily overridden at docker run, while ENTRYPOINT defines the main executable, which can only be overridden explicitly with the --entrypoint flag.
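A minimal sketch of how the two interact:

```dockerfile
FROM ubuntu:latest
# ENTRYPOINT fixes the executable; CMD supplies its default arguments
ENTRYPOINT ["echo"]
CMD ["hello from CMD"]
```

Running `docker run <image>` echoes the default "hello from CMD"; `docker run <image> something-else` replaces only the CMD arguments, while swapping echo itself requires `docker run --entrypoint <binary> <image>`.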
Types of Docker Network:
Bridge networks: The default network type that is suitable for most scenarios.
Overlay networks: Used to allow containers on different Docker hosts to communicate with each other
Macvlan networks: Used to connect Docker containers to host network interfaces, making them appear as a physical device on the host's network
Host networks: Used to bind ports directly to the host's interfaces
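Sketches of creating each type (overlay requires Swarm mode, and the macvlan parent interface name eth0 depends on your host):

```shell
# Bridge (the default driver)
docker network create my_bridge

# Overlay — requires Docker Swarm mode to be initialized first
docker network create -d overlay my_overlay

# Macvlan — bound to a physical host interface
docker network create -d macvlan -o parent=eth0 my_macvlan

# Host networking: no -p needed, the container shares the host's stack
docker run -d --network host nginx
```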
Define multi-stage build in Docker: building the image in multiple stages, copying only the needed artifacts from one stage to the next so the final image stays small.
Steps to secure a container: use a distroless image, use a scanner such as Snyk or Trivy to scan the container image for vulnerabilities, and ensure the network is configured properly.
Real-time challenges: if the Docker daemon goes down, nothing works (a single point of failure); creating a large number of big containers can slow the host down; and the daemon runs as the root user, which creates vulnerabilities at the root level.