Welcome to the next pikoTutorial!
If you ever wanted to ensure a consistent environment for your application, you most probably know that containerization is the way to go. To containerize a C++ application, let's first set up a simple file structure:
cpp_docker
|- src
| |- main.cpp
|- Dockerfile
|- CMakeLists.txt
Now add a simple main function:
#include <iostream>
#include <thread>
#include <chrono>

int main(int argc, char** argv)
{
    while (true) {
        std::cout << "Hello from the inside of the container" << std::endl;
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }
}
And a simple CMakeLists.txt file:
cmake_minimum_required(VERSION 3.15)
project(CppDocker)
add_executable(${PROJECT_NAME})
target_sources(${PROJECT_NAME} PRIVATE ${CMAKE_CURRENT_SOURCE_DIR}/src/main.cpp)
Having this, it's time to write the Dockerfile:
# Use an official Ubuntu base image
FROM ubuntu:20.04
# Ensure non-interactive mode for package installers
ENV DEBIAN_FRONTEND=noninteractive
# Install dependencies
RUN apt-get update && apt-get install -y \
build-essential \
cmake
# Copy necessary files
COPY ./src /app/src
COPY ./CMakeLists.txt /app
# Build the application
WORKDIR /app/build
RUN cmake ..
RUN cmake --build .
# Run the application
CMD ["./CppDocker"]
When the Dockerfile is ready, you can build a Docker image by running the following command:
docker build -t cpp_docker_image .
If you don't want to use sudo every time you call the docker command, you may want to change the permissions of the Docker socket by running sudo chmod 666 /var/run/docker.sock.
Note for beginners: remember that changing the permissions of /var/run/docker.sock has security implications, so think twice before doing this in a production environment!
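A commonly used alternative is to add your user to the docker group instead of changing the socket permissions (a sketch, assuming the docker group exists, as it does with standard Docker packages):
sudo usermod -aG docker $USER
You have to log out and back in for the new group membership to take effect. Keep in mind that docker group membership still effectively grants root-level access to the host, so the security trade-off is similar.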
After a successful build, you can check the list of images by running:
docker images
This will show you a table like the following:
REPOSITORY TAG IMAGE ID CREATED SIZE
cpp_docker_image latest e132d245100b About a minute ago 450MB
To create a container out of your image, use the command:
docker run cpp_docker_image
This is the simplest form of the command, but most likely you will want to extend it with the following options and flags:
- --rm - removes the container after it has been stopped
- --name - allows you to assign a custom name to your container (otherwise Docker will choose a random name)
- -d - detaches the process from your current terminal, which effectively makes the container run in the background
Our final command looks like this:
docker run --rm --name my_cpp_application -d cpp_docker_image
Now you can check the table of currently running containers with the docker ps command. If everything went OK, you should see output like this:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a99c1e9e0bbc cpp_docker_image "./CppDocker" 12 seconds ago Up 11 seconds my_cpp_application
Finally, let's look at the logs from the application. Run docker logs my_cpp_application
and check if you see the following output:
Hello from the inside of the container
Hello from the inside of the container
Hello from the inside of the container
If yes, you've just successfully containerized a C++ application.
Note for beginners: if you leave the container like this, it will run in the background forever. To stop the application, run the docker container stop my_cpp_application command.
Note for advanced: if you need a more in-depth look into your container, you can get into the container's shell using the command docker container exec -it my_cpp_application /bin/bash. You can also do this at container startup by using the command docker run -it cpp_docker_image /bin/bash.
Top comments (5)
I believe this solution may be unfortunate (or at least has huge drawbacks).
Here, you have a very simple project, with a single cpp file. And yet, there is an issue: each time you modify either this cpp file or CMakeLists.txt, the COPY layers invalidate the next layers, and you have to run both your CMake commands. You even regenerate the project each time. This takes time. Since the project is small, it's little time, so it might be OK. But what happens when you have hundreds of files, and you modify just one (only one) of them? You will have to rebuild every cpp file, even if you simply fix a typo in a comment of CMakeLists.txt. You probably don't want this. What is your actual purpose here: (1) a consistent environment to build and test in during development, or (2) delivering the application as a Docker image?
If case 1, I suggest you take a look at my article on the subject: dev.to/pgradot/cmake-on-smt32-epis... The idea is to build inside the container, but the source files remain on your host (they are not copied inside the image).
If case 2, your solution is acceptable if you want your CI to generate a Docker image with the application. You ensure that the application is built and run in the same environment, with the same standard libraries. However, it's absolutely not convenient for testing the application on your own machine. Instead, we may want to build the application (inside Docker as in case 1, or not) and then create an image with only the executable (not the source code).
You have something like this:
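A minimal sketch of such a runtime-only Dockerfile might look like this (assuming the CppDocker binary has already been built outside the image, e.g. into a local build/ directory; the paths are illustrative):
# Runtime-only image: no compiler, no sources, just the prebuilt binary
FROM ubuntu:20.04
# The source path of the binary is an assumption for illustration
COPY ./build/CppDocker /usr/local/bin/CppDocker
CMD ["/usr/local/bin/CppDocker"]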
You probably want the FROM image to use the same version as your host OS (or, if you build inside a container, use the same FROM for both images).
Indeed, I have actually built large C++ projects in Docker; it can work, but not like this.
First, you want a multi-stage Dockerfile, so that you do not get the build environment and artifacts layered into your final image.
Naturally I start from Alpine too, rather than the host development platform, because this is for real-world delivery: I am rebuilding from scratch anyway, and I want small images. I also turn the project into a source archive tarball, transfer it, and unpack it in one operation. Consider this...
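A minimal sketch of that approach might look like the following (the Alpine tag, the tarball name and the paths are assumptions for illustration; the project is assumed to be packed into cpp_docker.tar.gz with CMakeLists.txt at its root):
# --- Build stage: toolchain and sources live only here ---
FROM alpine:3.19 AS builder
RUN apk add --no-cache build-base cmake
# ADD unpacks a local tarball automatically; the archive name is an assumption
ADD cpp_docker.tar.gz /src/
WORKDIR /src/build
RUN cmake .. && cmake --build .

# --- Final stage: only the resulting executable is copied over ---
FROM alpine:3.19
COPY --from=builder /src/build/CppDocker /usr/local/bin/CppDocker
CMD ["/usr/local/bin/CppDocker"]
Only the second FROM contributes to the image you ship, so the compiler, CMake and the sources never end up in its layers.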
Good comment, multi-stage builds were a must-have in almost all projects I've been on. Here, however, I didn't want to overwhelm people who are looking for an answer to the question "what is the minimum number of steps I need to containerize my app?". But there will be more on that topic in the future, for example on how to save local storage space when building a Docker image using Bazel and its remote caching capabilities.
To me the one real downside to C++ in Docker is that you have to build the app in Docker too, so as a practical matter you cannot avoid using a multi-stage Dockerfile. Using Go (or C# dotnet) you can build the app entirely locally regardless of platform and only have to copy a cross-compiled target executable and its immediate support. You can also do local cross-compiling for C/C++ instead, but the complexity of setting that up remains appalling.
It's true that rebuilding such a Docker image will take longer, and I am planning upcoming posts about optimization of things like image build time, image size etc., because it's always worth considering such things when making design decisions.
However, notice that there are at least 2 types of projects in which the impact of changed sources on the image build time does not matter.
1. Projects where Docker is just a matter of deployment, not development
In such projects, although the final system is deployed in the form of a Docker container, an individual developer may not even be aware of the Docker image on a daily basis. During their work, they compile, run unit tests etc. using just CMake/Bazel or whatever the project is based on. The image is built only on CI, where the entire build must contain all the source code changes anyway.
2. Projects too big to compile on an individual software developer's machine
I've been on projects so big that the entire build process takes 8, 10 or more hours when run on the machine assigned to an individual software developer. This means that the only thing a developer builds is the component they are currently working on, and the only things they run are the unit or software component tests of that component. In such projects, no one even attempts to rebuild the entire project, let alone the whole image. Building the image makes sense only on powerful CI machines, where build time is often optimized with incremental builds that reuse the actual built artifacts (stored in dedicated remote cache storage) instead of reusing image layers.