A few days ago I was refactoring a Docker Compose file. I've been working with Docker for a while now, but I had always understood volumes only as a way to persist data, with the files living on the host machine next to the container. Out of that doubt I opened a question on Stack Overflow, and first I want to thank the user Hakro for the help. If you want to check it out, the question is "Communication between services within the docker compose network".
Please don't mind my designer side, haha.
Usually when I have an idea involving processes I like to sketch it on paper, hence the image above. From that point on I wanted to understand whether it was possible for the containers to communicate. To give them names and simplify things, I was creating containerA with a volume mapped to the host machine and containerB with another volume mapped to the host machine, just so the two containers could share files.
Soon after Hakro's help I was able to get the containers sharing files without needing a volume mapped to the host machine. Now I'm writing this article to help the next person with the same doubt, and I'll try to be as clear as possible in the lines below.
First let's create a very simple project, nothing complicated, just enough to go through the steps. I'll leave the code available on my GitHub so you can clone it and test it too.
Dockerfile
In the Dockerfile I opted for the Ubuntu image, nothing unusual, it's one of the images I use most often. I set a WORKDIR so we work in a clean area without other directories, and followed some of the setup suggested on the Ubuntu image page on Docker Hub. I added Python so we can use the FastAPI library as an example service, and finally I created a logs folder, just to simulate a service log that I might want to reuse from the other container.
FROM ubuntu:20.04

# Work in a dedicated directory inside the image
WORKDIR /application

# Install Python and pip so we can run the FastAPI example service
RUN apt-get update && apt-get install -y \
    python3.8 \
    python3-pip

COPY ./requirements.txt /application/requirements.txt
COPY ./app /application/app

RUN pip install --no-cache-dir --upgrade -r /application/requirements.txt

# Folder that will hold the service "logs" shared between containers
RUN mkdir -p /application/logs
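The Dockerfile expects a requirements.txt and an app/ package next to it. The actual files are in the GitHub repository, but here is a minimal sketch of what they could look like, assuming a bare FastAPI app served by Uvicorn (the names requirements.txt and app/main.py match what the compose commands expect; the example route is just an illustration):

# Hypothetical minimal versions of the files the Dockerfile copies.
# requirements.txt: FastAPI plus the Uvicorn server used in the compose commands.
cat > requirements.txt <<'EOF'
fastapi
uvicorn
EOF

# app/main.py: a bare FastAPI app so that "uvicorn app.main:app" has something to serve.
mkdir -p app
cat > app/main.py <<'EOF'
from fastapi import FastAPI

app = FastAPI()


@app.get("/")
def read_root():
    return {"message": "hello from the container"}
EOF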
Docker Compose
Now, creating the Docker Compose file, I chose to use the same Dockerfile for both services. I was imagining something complicated to create this communication between two containers, but with Hakro's help it became much easier. First we need to declare a named volume.
volumes:
  volume_between_containers:
After that, inside each service, we mount that volume at whatever path we want inside the container, for example:
volumes:
  - volume_between_containers:/app/ia-data
In our case we do the same for both containers, mounting the volume at /application/logs. Let's look at how the complete file turned out.
version: '3'

services:
  container_ubuntu_1:
    container_name: ubuntu_1
    build:
      dockerfile: ./Dockerfile
      context: .
    entrypoint: ['/bin/sh', '-c']
    command:
      - |
        uvicorn app.main:app --host 0.0.0.0 --port 9100
    ports:
      - '9100:9100'
    volumes:
      - volume_between_containers:/application/logs

  container_ubuntu_2:
    container_name: ubuntu_2
    build:
      dockerfile: ./Dockerfile
      context: .
    entrypoint: ['/bin/sh', '-c']
    command:
      - |
        uvicorn app.main:app --host 0.0.0.0 --port 9200
    ports:
      - '9200:9200'
    volumes:
      - volume_between_containers:/application/logs

volumes:
  volume_between_containers:
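One thing worth noticing: volume_between_containers is a named volume, so Docker manages its storage itself and there is no folder for it inside the project directory; it gets created when the stack comes up. Once it exists you can list and inspect it (the exact volume name depends on the Compose project name, here assumed to be the folder name communicate_between_containers):

# List the named volumes Docker is managing
docker volume ls

# Inspect the volume created by this compose file
# (the prefix comes from the Compose project name, so adjust it if yours differs)
docker volume inspect communicate_between_containers_volume_between_containers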
Now, all good. Let's run a test to verify that this volume is really working.
docker compose -f docker-compose.yml up --build -d
output:
[+] Building 0.5s (15/19)
=> [communicate_between_containers-container_ubuntu_1 internal] load build d 0.0s
=> => transferring dockerfile: 345B 0.0s
=> [communicate_between_containers-container_ubuntu_1 internal] load .docker 0.0s
=> => transferring context: 2B 0.0s
=> [communicate_between_containers-container_ubuntu_2 internal] load .docker 0.0s
=> => transferring context: 2B 0.0s
=> [communicate_between_containers-container_ubuntu_2 internal] load build d 0.0s
=> => transferring dockerfile: 345B 0.0s
=> [communicate_between_containers-container_ubuntu_1 internal] load metadat 0.5s
=> [communicate_between_containers-container_ubuntu_1 1/7] FROM docker.io/li 0.0s
=> [communicate_between_containers-container_ubuntu_2 internal] load build c 0.0s
=> => transferring context: 122B 0.0s
=> [communicate_between_containers-container_ubuntu_1 internal] load build c 0.0s
=> => transferring context: 122B 0.0s
=> CACHED [communicate_between_containers-container_ubuntu_2 2/7] WORKDIR /a 0.0s
=> CACHED [communicate_between_containers-container_ubuntu_2 3/7] RUN apt-ge 0.0s
=> CACHED [communicate_between_containers-container_ubuntu_2 4/7] COPY ./req 0.0s
=> CACHED [communicate_between_containers-container_ubuntu_2 5/7] COPY ./app 0.0s
=> CACHED [communicate_between_containers-container_ubuntu_2 6/7] RUN pip in 0.0s
=> CACHED [communicate_between_containers-container_ubuntu_2 7/7] RUN mkdir 0.0s
=> [communicate_between_containers-container_ubuntu_1] exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:9870e94af1cdda9d5007793fd81c173bdb54b0c754b56ce61 0.0s
=> => writing image sha256:d7404e99aed9adf6c652ba2b47db48529567e0bdd225a3629 0.0s
=> => naming to docker.io/library/communicate_between_containers-container_u 0.0s
=> => naming to docker.io/library/communicate_between_containers-container_u 0.0s
[+] Running 2/0
⠿ Container ubuntu_2  Running 0.0s
⠿ Container ubuntu_1  Running
docker ps
A note "CONTAINER ID", may change with runs. That's why in Docker compose I added the name ubuntu_1 and ubuntu_2.
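Since the IDs change, a handy trick is asking docker ps to print only the fields we care about:

# Show just names, status and published ports, so the changing container IDs don't matter
docker ps --format 'table {{.Names}}\t{{.Status}}\t{{.Ports}}'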
Now, with the services running, let's look at the logs and check whether anything loads in the browser.
docker compose logs -f
ubuntu_2 | INFO: Started server process [7]
ubuntu_2 | INFO: Waiting for application startup.
ubuntu_2 | INFO: Application startup complete.
ubuntu_2 | INFO: Uvicorn running on http://0.0.0.0:9200 (Press CTRL+C to quit)
ubuntu_1 | INFO: Started server process [7]
ubuntu_1 | INFO: Waiting for application startup.
ubuntu_1 | INFO: Application startup complete.
ubuntu_1 | INFO: Uvicorn running on http://0.0.0.0:9100 (Press CTRL+C to quit)
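If you prefer the terminal to the browser, a quick request against the two published ports confirms both services are answering (any HTTP response here, even a 404 for a route the example app doesn't define, means Uvicorn is up):

# Hit each service through the ports published in docker-compose.yml
curl -i http://localhost:9100/
curl -i http://localhost:9200/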
At this point I opened another terminal in VS Code, so I could look inside each of the two containers.
docker exec -ti name_container /bin/sh
With this command we can get a shell inside the container.
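For this project the names were fixed in the compose file, so we can open one shell per container (each in its own terminal):

# One terminal per container
docker exec -ti ubuntu_1 /bin/sh
docker exec -ti ubuntu_2 /bin/sh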
With command "ls" we can see inside that the folder logs was created. But is there persistence within the two containers?
For now the directory is empty.
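You can confirm that from inside either container; the shared volume is mounted at /application/logs, as declared in the compose file:

# The shared mount point exists, but there is nothing in it yet
ls -la /application/logs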
Now let's write some text to a file in the logs directory (/application/logs) of ubuntu_1 and see whether it shows up in ubuntu_2.
echo "Hey !!! Ubuntu 2. How are you ? " > ubuntu_1_hey.txt
In the ubuntu_1 terminal we create the file in the logs directory, and running ls shows it sitting there. Now, switching to the ubuntu_2 terminal, we see that the same file also appears in its logs directory.
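You can also check it without an interactive shell, straight from the host (assuming the file was written inside /application/logs, where the shared volume is mounted):

# Read, from ubuntu_2, the file that ubuntu_1 wrote into the shared volume
docker exec ubuntu_2 cat /application/logs/ubuntu_1_hey.txt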
Conclusion
I found it really incredible to have this possibility of sharing volumes between containers, every day is a learning experience. We can see it's not complicated, it just takes following the steps calmly and testing the possible options. I hope this helps you. If you have anything to add, please leave it in the comments.
Comments:
Thanks for reading this far. I hope it helps your understanding. If you find any code or text errors, please don't hesitate to let me know. Don't forget to leave a like so the post can reach more people.
Resources:
sc0v0ne / communicate_between_containers: Create a volume between two containers
I created this project with the intention of putting into practice the doubt I had on Stack Overflow. I also wrote this article to show that it's not that complicated.
About the author:
A little more about me...
I have a bachelor's degree in Information Systems, and in college I had contact with different technologies. Along the way I took an Artificial Intelligence course, where I had my first contact with machine learning and Python, and learning about this area became a passion. Today I work with machine learning and deep learning, developing communication software. I also created a blog where I post about subjects I'm studying and share them to help other users.
I'm currently learning TensorFlow and Computer Vision
Curiosity: I love coffee