Introduction
Docker is quickly becoming one of the most popular technologies for hosting web applications. It is a set of tools for packaging, distributing, and running software applications. Developers can write configuration files to create packages called images, which are distributed via decentralized, web-based repositories (some public, some private). Images downloaded from repositories are used as templates to create isolated environments called "containers" that run applications within them. Many containers may exist alongside each other on a single host. Memory and CPU resources are shared between all the containers running on a machine, but each container has its own fully isolated file system and environment. This is convenient for a number of reasons, but most of all, it simplifies the process of installing and running one or more applications on a single host machine.
Installing Docker
If you are on macOS or Windows, the best way to install Docker is with Docker Desktop. It provides a complete installation of Docker along with a GUI for managing it. You can use the GUI to start or stop the Docker daemon, or to install updates to the Docker platform. (Bonus: Docker Desktop can also manage a local Kubernetes cluster for you. It's not relevant to this article, but it provides a straightforward way to get started with Kubernetes, a platform for managing running containers across a scalable number of hosts.) Linux users can install Docker from their distribution's package manager, but the Docker Desktop GUI is not included. Installation instructions for the most popular Linux distributions can be found in the Docker documentation.
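Once Docker is installed, a quick way to confirm everything is working is to check the version and daemon status from a terminal:

# Print the installed Docker version
docker --version
# Show details about the Docker daemon (an error here usually means the daemon isn't running)
docker info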
Working With 3rd Party Containers
The first thing to try once you've installed Docker on your computer is running containers based on 3rd party images. This exercise is a great way to quickly see the power of Docker. First, open your favorite system terminal and enter docker pull nginx.
This command will download the official nginx image from Docker Hub. Docker Hub is a managed host for Docker images. You can think of it sort of like npm for Docker. We've pulled the newest version of the nginx image; however, as with npm, we could have chosen a specific version to download by changing the command to docker pull nginx:1.18. You can find more details about an image, including which versions are available for download, on its Docker Hub page.
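For example, to grab a specific nginx version and then see which images are stored locally:

# Pull a specific version of the official nginx image
docker pull nginx:1.18
# List the images downloaded to this machine
docker images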
Now that we've downloaded an image, we can use it to create a container on our local machine just as easily. Run docker run -d -p 8080:80 nginx to start an nginx container. I've added a couple of options to the command. By default, nginx runs on port 80, and your system configuration likely prevents you from exposing port 80. Therefore, we use -p 8080:80 to bind port 80 in the container to port 8080 on your local machine. We use -d to detach the running container from the terminal session, which allows us to continue using the same terminal while the nginx container runs in the background.
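Here's the same command annotated, plus an optional --name flag you could add so the container is easier to refer to later (my-nginx below is just an example name):

# -d            run the container in the background (detached)
# -p 8080:80    forward port 8080 on your machine to port 80 in the container
# --name        optional: give the container a memorable name instead of a generated one
docker run -d -p 8080:80 --name my-nginx nginx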
Now, you can navigate to http://localhost:8080 with your web browser and see the nginx welcome page being served from within Docker. You can stop the nginx container running in the background with the docker kill command. First, you'll need to use docker ps to get its container ID, then you can run docker kill <container ID>. Now, if you navigate to http://localhost:8080 again, you will be met with an error, and docker ps will show no containers running.
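The full stop-and-clean-up workflow looks something like this (the container ID will differ on your machine):

# List running containers and note the CONTAINER ID column
docker ps
# Stop the container, replacing <container ID> with the ID from docker ps
docker kill <container ID>
# Killed containers still exist in a stopped state; list them with -a and remove them when done
docker ps -a
docker rm <container ID>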
The ability to simply download and run any published image is one of the most powerful features of Docker. Docker Hub hosts millions of prebuilt images, many of which are officially supported by the developer of the software contained within. This allows you to quickly and easily deploy 3rd party software to your servers and workstations without having to follow bespoke installation processes. However, this isn't all that Docker can do. You can also use it to build your own images so that you can benefit from the same streamlined deployment processes for your own software.
Build Your Own
As I said before, Docker isn't only good for running software applications from 3rd parties. You can build and publish your own images so that your applications can also benefit from the streamlined deployment workflows that Docker provides. Docker images are built using two configuration files, Dockerfile and .dockerignore. Dockerfile is the more important of the two. It contains instructions telling Docker how to build an image that runs your application within a container. The .dockerignore file is similar to Git's .gitignore file. It contains a list of project files that should never be copied into container images.
For this example, we'll Dockerize a dead-simple "hello world" app written with Node.js and Express. Our example project has a package.json and an index.js like the following:
package.json:
{
  "name": "hiwrld",
  "version": "1.0.0",
  "description": "hi world",
  "main": "index.js",
  "scripts": {
    "start": "node index.js"
  },
  "author": "Adam Clarke",
  "license": "MIT",
  "dependencies": {
    "express": "^4.17.1"
  }
}
index.js:
// Pull in the express framework and create an app instance
const express = require('express')
const app = express()
const port = 3000

// Read an optional greeting from the environment
const greeting = process.env.GREETING

// Respond to requests on the root path, falling back to a default greeting
app.get('/', (req, res) => {
  res.send(greeting || 'Hello World!')
})

// Start listening for requests
app.listen(port, () => {
  console.log('Example app listening at http://localhost:' + port)
})
The package.json manages our single express dependency, and configures an npm start command with which to start the application. In index.js, I've defined a basic express app that responds to requests on the root path with a greeting message.
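Before containerizing anything, it can help to confirm the app runs on its own. This assumes you have Node.js and npm installed locally:

# Install the express dependency declared in package.json
npm install
# Start the app using the start script
npm start
# In another terminal, request the root path (requires curl)
curl http://localhost:3000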
The first step to Dockerizing this application is creating a Dockerfile. The first thing we should do with our empty Dockerfile is add a FROM directive. This tells Docker which image we want to use as the base for our application image. Any Docker image published to a repository can be used in your FROM directive. Since we've created a Node.js application, we'll use the official node Docker image. This will prevent us from needing to install Node.js on our own. Add the following to the top of your empty Dockerfile:
FROM node:15
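As an aside, the official node image also publishes slimmer variants; if image size matters to you, you could use one of those instead. The rest of this article assumes the plain node:15 tag, but a swap would look like this:

# Optional alternative: the Alpine-based variant of the official node image is much smaller
FROM node:15-alpine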
Next, we need to make sure that our npm dependencies are installed into the container so that the application will run. We will use the COPY and RUN directives to copy our package.json file (along with the package-lock.json that was generated when modules were installed locally) and run npm install. We'll also use the WORKDIR directive to create a folder and make it the image's working directory. Add the following to the bottom of your Dockerfile:
# Create a directory for the app and make it the working directory
WORKDIR /usr/src/app
# Copy package files from the local filesystem directory to the working directory of the container
# You can use a wildcard character to capture multiple files for copying. In this case we capture
# package.json and package-lock.json
COPY package*.json ./
# now install the dependencies into the container image
RUN npm install
Now that we've configured the image so that Docker installs the application dependencies, we need to copy our application code and tell Docker how to run our application. We will again use COPY, but we'll add CMD and EXPOSE directives as well. These explain to Docker how to start our application and which ports it needs exposed to operate. Copying the package files and installing dependencies before copying the rest of the source also lets Docker cache the npm install step, so rebuilds are faster when only your application code has changed. Add these lines to your Dockerfile:
# Copy everything from the local filesystem directory to the working directory. Including
# the source code
COPY . .
# The app runs on port 3000
EXPOSE 3000
# Use the start script defined in package.json to start the application
CMD ["npm", "start"]
Your completed Dockerfile should look like this:
FROM node:15
# Create a directory for the app and make it the working directory
WORKDIR /usr/src/app
# Copy package files from the local filesystem directory to the working directory of the container
# You can use a wildcard character to capture multiple files for copying. In this case we capture
# package.json and package-lock.json
COPY package*.json ./
# now install the dependencies into the container image
RUN npm install
# Copy everything from the local filesystem directory to the working directory. Including
# the source code
COPY . .
# The app runs on port 3000
EXPOSE 3000
# Use the start script defined in package.json to start the application
CMD ["npm", "start"]
Now that we have a complete Dockerfile, we need to create a .dockerignore as well. Since our project is simple, we only need to ignore our local node_modules folder. That will ensure that the locally installed modules aren't copied from your local disk via the COPY . . directive in our Dockerfile after they've already been installed into the container image with npm. We'll also ignore npm debug logs since they're never needed, and it's a best practice to keep Docker images' storage footprints as small as possible. Add the following .dockerignore to the project directory:
node_modules
npm-debug.log
On a larger project, you would want to add things like the .git folder and any text and/or configuration files that aren't required for the app to run, like continuous integration configuration or project readme files.
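For illustration, a .dockerignore for a larger project might look something like this (the exact entries depend on your project layout, and the paths below are just examples):

# Dependencies are installed inside the image, so never copy the local copy
node_modules
npm-debug.log
# Version control history isn't needed inside the image
.git
# CI configuration and documentation aren't required at runtime
.github
README.md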
Now that we've got our Docker configuration files, we can build an image and run it! To build your Docker image, open your terminal, navigate to the location of your Dockerfile, and run docker build -t hello-world . (the trailing dot tells Docker to use the current directory as the build context). Docker will look for your Dockerfile in the working folder and will build an image, giving it a tag of "hello-world". The "tag" is just a name we can use later to reference the image.
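For reference, here's what building looks like in a terminal, along with a couple of optional variations (hello-world:1.0 is just an example tag):

# Build an image from the Dockerfile in the current directory and name it "hello-world"
docker build -t hello-world .
# You can also include an explicit version tag
docker build -t hello-world:1.0 .
# List local images to confirm the build succeeded
docker images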
Once your image build has completed, you can run it! Just as you did before with nginx, simply run docker run -d -p 3000:3000 hello-world. Now, you can navigate your browser to http://localhost:3000, and you will be politely greeted by our example application. You may also use docker ps and docker kill as before in order to verify or stop the running container.
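If you want to customize the greeting, recall that index.js reads a GREETING environment variable; docker run's -e flag lets you set it when the container starts (the message below is just an example):

# Start the app with a custom greeting, mapping port 3000 and detaching from the terminal
docker run -d -p 3000:3000 -e GREETING="Hi from Docker" hello-world
# Confirm the response (requires curl)
curl http://localhost:3000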
Conclusion
By now, the power that Docker provides should be clear. Not only does Docker make it incredibly easy to run 3rd party software and applications in your cloud, it also gives you tools for making it just as simple to deploy your own applications. Here, we've only scratched the surface of what Docker is capable of. Stay tuned to the This Dot blog for more information about how you can use Docker and other cloud native technologies with your applications.
This Dot Labs is a modern web consultancy focused on helping companies realize their digital transformation efforts. For expert architectural guidance, training, or consulting in React, Angular, Vue, Web Components, GraphQL, Node, Bazel, or Polymer, visit thisdotlabs.com.
This Dot Media is focused on creating an inclusive and educational web for all. We keep you up to date with advancements in the modern web through events, podcasts, and free content. To learn, visit thisdot.co.