No matter where we work in IT, exposure to Docker is hard to avoid. Used in the design of applications, it makes them easily portable, scalable and independent of the environment. The software has been gaining popularity for several years now thanks to these capabilities. But let's start at the beginning.
Containerization vs. virtualization
To understand how Docker works, we first need to understand the reason behind containerization and its differences from traditional virtualization.
| VIRTUAL MACHINES | CONTAINERS |
| --- | --- |
| App 1, App 2 | App 1, App 2 |
| Bins / libs (per VM) | Bins / libs (per container) |
| Guest OS (per VM) | Container daemon |
| Hypervisor | |
| Host operating system | Host operating system |
| Infrastructure | Infrastructure |
Virtualization
Standard virtualization involves creating a stack consisting of (from the lowest level) a hardware layer, the host operating system, a hypervisor, and (as required) virtual machines. Each virtual machine runs its own guest operating system with independent libraries and binary files, on top of which the application finally runs.
The process is not complicated, but it carries quite a few drawbacks. The most important of these is the drop in application performance caused by the additional layers that containerization does not need: the guest operating system running in each VM and the hypervisor. Every guest operating system consumes its own share of CPU, RAM and disk, which slows down the whole system.
Containerization
Containerization solves the performance problem by dropping the duplicated layers: the guest operating system and the hypervisor. Nonetheless, all the major advantages of virtualization, such as portability and environment separation, are preserved (although there are vulnerabilities that allow an attacker to "escape" a container [we will deal with this in future posts]).
However, it is important to keep in mind that containerization cannot replace traditional virtualization in every case. The security of containerization (because these aspects interest us most :)) is at a lower level than that of virtualization. Although Docker breaks an application into smaller parts (containers), they all share a single host operating system kernel. This introduces, for example, the risk of running Docker containers with incomplete isolation.
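A quick way to observe this shared-kernel property (a sketch, assuming Docker is installed and the ubuntu:18.04 image can be pulled) is to compare the kernel release reported on the host and inside a container:

```shell
# Kernel release reported by the host
uname -r

# Kernel release reported inside a fresh Ubuntu container
# (pulls the ubuntu:18.04 image if it is not cached locally)
docker run --rm ubuntu:18.04 uname -r
```

Both commands print the same kernel release, because a container is just an isolated set of processes on the host kernel, not a separate operating system, which is exactly why container escapes are a concern.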
Docker
Above we mentioned that "Docker breaks an application into containers." So what is Docker? Wikipedia defines it as:
Docker - open source software that serves as "a platform for developers and administrators to create, deploy and run distributed applications." Docker is defined as a tool that allows you to put a program and its dependencies in a lightweight, portable, virtual container that can be run on almost any Linux server
Generalizing the above definition, the Docker tool allows us to design applications that have multiple independent modules (containers), having their own "closed" environment. In theory, containers do not know of each other's existence. In practice, however, they can communicate with each other under strict rules.
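One place these "strict rules" are visible is Docker networking. A minimal sketch (assuming Docker is installed; the network and container names here are arbitrary examples, not part of the original setup):

```shell
# Create a user-defined bridge network; containers attached to it
# can discover each other by name via Docker's embedded DNS
docker network create demo-net

# Start a container named "web" on that network
docker run -d --name web --network demo-net nginx

# A second container on the same network can reach "web" by name
docker run --rm --network demo-net busybox ping -c 1 web

# A container outside the network cannot resolve the name at all
docker run --rm busybox ping -c 1 web   # fails with "bad address"

# Clean up
docker rm -f web && docker network rm demo-net
```

So by default containers know nothing about each other, and communication only happens along the paths we explicitly configure.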
Why do I need Docker if I'm not an administrator or developer?
Here we get to the essence. Containerization of separated application processes is not only useful in web applications. As a pentester, you can use Docker to create an easily portable environment designed to automate the running of hundreds of vulnerability-finding scripts. This bypasses the hassle of reinstalling languages and libraries on another system. An automation tester, on the other hand, will use Docker in their daily work of comparing images and screenshots for UI testing. This rules out the problem of responsive page elements displaying differently on each computer.
Dockerfile
The basic element on which Docker operates is the Dockerfile: a text file storing all the information needed to build a single image, from which we then create containers.
Let's take a look at an example. We assume that we want to create an environment to automate several tools, including xira. The contents of the directory holding our scripts:
┌──(figaro㉿kali)-[~/Desktop/myScripts]
└─$ ls
Dockerfile xira
An example of the contents of a Dockerfile could look as follows:
FROM ubuntu:18.04
COPY . /scripts
RUN set -xe \
    && apt-get update -y \
    && apt-get install -y python3-pip
RUN pip3 install --upgrade pip
RUN pip3 install -r /scripts/xira/requirements.txt
CMD python3 /scripts/xira/xira.py -u https://google.com
Let's take a look at the next parts of the file to find out what's going on.
- FROM - creates a layer from the indicated image.
- COPY - as the name suggests, copies the designated files (in our case, the dot means the entire contents of the directory) to the indicated place inside the image (in our case, it is the scripts folder).
- RUN - executes the indicated commands at build time. The result of each RUN instruction is committed as a new layer of the image.
- CMD - the default command that is executed when a container is started from the image.
In the above example, we first create a layer from the ubuntu:18.04 image, then copy all the files from the current directory into the "scripts" folder, install and upgrade pip, and finally install the libraries required to run the xira script.
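Because each COPY and RUN produces a cached layer, and changing any copied file invalidates the cache from that COPY onwards, a slightly more cache-friendly variant of the same Dockerfile copies the requirements list before the rest of the scripts. This is only a sketch; it assumes requirements.txt sits in the xira/ subdirectory, as in the listing above:

```dockerfile
FROM ubuntu:18.04

# Install system dependencies in a single layer
RUN set -xe \
    && apt-get update -y \
    && apt-get install -y python3-pip

# Copy only the requirements first, so the layers below are rebuilt
# only when the dependency list itself changes
COPY xira/requirements.txt /tmp/requirements.txt
RUN pip3 install --upgrade pip \
    && pip3 install -r /tmp/requirements.txt

# Copy the scripts last; editing them no longer re-installs dependencies
COPY . /scripts

CMD python3 /scripts/xira/xira.py -u https://google.com
```

With this ordering, day-to-day edits to the scripts rebuild only the final COPY layer instead of re-running apt-get and pip.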
Building an image
To create an image from our Dockerfile, use the command:
docker build -t scriptsimage -f Dockerfile .
With the "-t" parameter (from the word tag) we define the name and tag of our image in "name:tag" format. In turn, "-f" indicates the Dockerfile to use (useful when the file has a different name or lives in another directory). After running the above command, we should get output similar to the following.
Sending build context to Docker daemon 1.305MB
Step 1/6 : FROM ubuntu:18.04
18.04: Pulling from library/ubuntu
feac53061382: Pulling fs layer
feac53061382: Verifying Checksum
feac53061382: Download complete
feac53061382: Pull complete
Digest: sha256:7bd7a9cdc9f8ewbf69c4b6212f6432af8e243f97ba13abgr3e641e03a7ewb59e8
Status: Downloaded newer image for ubuntu:18.04
---> 39a8cfeef173
Step 2/6 : COPY . /scripts
---> bbcad8f5d005
Step 3/6 : RUN set -xe && apt-get update -y && apt-get install -y python3-pip
---> Running in ca5f1154a80e
+ apt-get update -y
Get:1 http://archive.ubuntu.com/ubuntu bionic InRelease [242 kB]
Get:2 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
(...)
Updating certificates in /etc/ssl/certs...
0 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
done.
Removing intermediate container ca5f1154a80e
---> 6b5a7d6e8192
Step 4/6 : RUN pip3 install --upgrade pip
---> Running in b3a18b37c4f8
Collecting pip
Downloading https://files.pythonhosted.org/packages/8a/d7/f505e91e2cdea54cfcf51f43c478a8cd64fb3bc1042629ceddk50d9a6a9b/pip-21.2.2-py3-none-any.whl (1.6MB)
Installing collected packages: pip
Found existing installation: pip 9.0.1
Not uninstalling pip at /usr/lib/python3/dist-packages, outside environment /usr
Successfully installed pip-21.2.2
Removing intermediate container b3a18b37c4f8
---> 13454eef844d
Step 5/6 : RUN pip3 install -r /scripts/xira/requirements.txt
---> Running in 8db10b181f06
Collecting beautifulsoup4==4.9.3
(...)
Removing intermediate container 8db10b181f06
---> cc35d7c4ef10
Step 6/6 : CMD python3 /scripts/xira/xira.py -u https://google.com
---> Running in dbfd45c84d87
Removing intermediate container dbfd45c84d87
---> c997586dc481
Successfully built c997586dc481
Successfully tagged scriptsimage:latest
We can see that our Docker image has been created. We can display the list of available images with the "docker images" command. As a result, we get:
┌──(figaro㉿kali)-[~/Desktop/myScripts]
└─$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
scriptsimage latest 1aed353475ad 2 minutes ago 503MB
ubuntu 18.04 39a8cfeef173 8 days ago 63.1MB
As you can see in the output above, since we did not indicate any tag for the "scriptsimage" image, it was assigned the default tag "latest".
Running a container based on an existing image
Now, having created the image, we can run it and thus create a container based on it. We will use the "docker run -i <IMAGE_ID>" command to do this. Note that in this case we are running the container based on the ID of the existing image.
┌──(figaro㉿kali)-[~/Desktop/myScripts]
└─$ docker run -i 1aed353475ad
_ __ ________ ___
| |/ // _/ __ \/ |
| / / // /_/ / /| |
/ |_/ // _, _/ ___ |
/_/|_/___/_/ |_/_/ |_|
~# Coded by Adhrit. twitter -- @xadhrit
~# Contributor: Naivenom. twitter -- @naivenom
(...)
After running the command, the previously indicated script should run inside the container. Once the command finishes, we can use the "docker ps -a" command to check existing containers.
┌──(figaro㉿kali)-[~/Desktop/myScripts]
└─$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
eaf3ff93f771 1aed353475ad "/bin/sh -c 'python3…" 2 minutes ago Exited (1) About a minute ago brave_hypatia
To avoid creating a new container every time we want to run the CMD command indicated earlier, we can restart the existing container with the "docker start -i <CONTAINER_ID>" command. OK, but what if we want to "enter" the container to invoke some custom command? Help comes from the "docker run -it <IMAGE_ID> bash" command, which will drop us into a bash shell inside the container. Once in it, we can use standard commands, for example:
┌──(figaro㉿kali)-[~/Desktop/myScripts]
└─$ docker run -it 1aed353475ad bash
root@decc8a95f3aa:/# ls
bin boot dev etc home lib lib64 media mnt opt proc root run sbin scripts srv sys tmp usr var
root@decc8a95f3aa:/# pwd
/
root@decc8a95f3aa:/#
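For completeness: if a container is already running, we do not have to create a new one with "docker run" to get a shell in it; "docker exec" opens a process inside the existing container. A sketch with placeholder IDs:

```shell
# Open an interactive bash shell inside an already-running container
docker exec -it <CONTAINER_ID> bash

# One-off commands work the same way, without an interactive shell
docker exec <CONTAINER_ID> ls /scripts
```

Note that "docker exec" only works on running containers; for a stopped one, use "docker start -i <CONTAINER_ID>" first.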
Platforms you should know about
Docker Hub
An easy-to-use repository for Docker images. On it we can create our own images, manage them and share them with the community.
Play with Docker
A platform for learning and experimenting with Docker. It lets us run Docker processes, build containers and generally test them, all in the browser.
Cheatsheet summary
A list of useful commands to have on hand when working with docker.
- docker build -t <name>:<tag> -f <FILE> . - create a Docker image with name <name> and tag <tag> from the file <FILE>
- docker images - display Docker images
- docker ps -a - display all Docker containers
- docker rm <CONTAINER_ID> - remove a container
- docker rmi <IMAGE_ID> - remove an image
- docker run -i <IMAGE_ID> - create and start a container from an image
- docker start -i <CONTAINER_ID> - start an existing container
- docker run -it <IMAGE_ID> bash - run a bash shell in a container from the indicated image
- docker logs <CONTAINER_ID> - read logs from a container