
Shubham Tiwary

Use Docker, No more setup Hell! (Part-1)

[image: docker-dashboard]

A case scenario - setup hell

These are the Docker containers I'm currently running on my PC.
Earlier, I'd have had to set each of these up manually, which would look something like:

  • Go to the official site, get hit with the register, verify account & login flow (forgot your password? another mess to recover that)
  • After that, it turns out it doesn't work: go to Reddit, find some solution from 7 years ago, a 50/50 hit or miss whether it works (bless those guys though)

Another case scenario - "But it works on my machine"

  • You've got a Windows PC, your friend's got Linux/Mac, and you wanna deploy to AWS, so there's something like an AMI.
  • Code is simple: use git. Wanna share & deploy your local dependencies? Good luck testing on each platform & writing "how to run" README files (then realizing nobody actually reads those)

Creating a container

First, let's see how easy it is to create a container

Let's go to Docker Hub, a repository for docker "images". Think of images like blueprints to make a "container".
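As an aside, once Docker is installed you can query this registry from the terminal too. A quick sketch (guarded so it's a harmless no-op if the docker CLI isn't on your machine yet):

```shell
# Search Docker Hub for nginx images straight from the CLI.
if command -v docker >/dev/null 2>&1; then
  docker search nginx --limit 5 \
    && STATUS="search ok" \
    || STATUS="search failed"
else
  STATUS="docker CLI not found - installation is covered below"
fi
echo "$STATUS"
```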

We'll try to setup a simple nginx container:
[image: docker-hub-nginx]

There are instructions on how to use it on the same page:

[image: nginx-docker-use]

Running the command to start the container:

docker run --name nginx-dummy -d -p 8080:80 nginx

Then open your web-browser at localhost:8080, and you'll see the following:

[image: nginx-browser-ui]

See? It's pretty easy once you get the hang of it. You can do the same for things like:

  • setting up a database locally, like Redis, Postgres, MongoDB
  • setting up services like nginx, Grafana, Selenium, etc.
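For example, a throwaway local Postgres is a single command. A sketch (the container name and password here are placeholders, and the snippet is guarded so it's safe to paste even before you've installed Docker):

```shell
# Start a disposable Postgres 16 container, mapping host port 5432 to the container.
# POSTGRES_PASSWORD is required by the official image; "secret" is just a placeholder.
if command -v docker >/dev/null 2>&1; then
  docker run --name pg-local -d \
    -e POSTGRES_PASSWORD=secret \
    -p 5432:5432 \
    postgres:16 \
    && STATUS="postgres started on localhost:5432" \
    || STATUS="docker run failed"
else
  STATUS="docker CLI not found"
fi
echo "$STATUS"
```

When you're done, `docker rm -f pg-local` removes it, data and all.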

Now let's set this up on your machine - come back and try running this nginx container after the Docker installation!


Docker Installation

Skip if you already have this done.

What you wanna install is "Docker Desktop". Docker in itself is a collection of tools such as:

  • Docker Engine.
  • Docker CLI.
  • Docker daemon.

Docker Desktop has all of these packaged, so install only that and you're set to go.

For Windows:

  • Go here and follow the steps
  • Note that for Windows, you have to set up WSL (Windows Subsystem for Linux, a Linux-like environment) - I'll link to a really great tutorial here for this one.

For Linux:

  1. For a GUI based environment:

    • Go here, same as Windows: set up Docker Desktop based on your distro.
    • Pre-requisite: set up KVM (virtualization) on your system (how to do so is in the same link above)
  2. For a TTY environment:

    • In systems like AMI or Ubuntu Server, you may not have a GUI, and it's better suited to set up Docker Engine (which comes with the Docker CLI)
    • Go here and check your distro.



So far, I've tested it in these environments:

  • Windows (with WSL)
  • Ubuntu (gnome)
  • Arch (hyprland - wayland)
  • Ubuntu Server.
  • AMI (amazon machine image)

After setup:

Try using the command:

docker --help

To start creating and using Docker containers, you'll have to turn on the "Docker daemon". How? Since Docker Desktop already has everything, just open the desktop app - you'll see a whale icon at the top right of the screen - then try again.
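On a server install (Docker Engine without the Desktop app), the daemon runs as a system service instead. A quick way to check either setup (a sketch; the systemctl line applies to systemd-based Linux only):

```shell
# "docker info" talks to the daemon - if it errors, the daemon isn't running.
if command -v docker >/dev/null 2>&1; then
  docker info >/dev/null 2>&1 \
    && STATUS="daemon is running" \
    || STATUS="daemon not running - open Docker Desktop, or on a server: sudo systemctl start docker"
else
  STATUS="docker CLI not found"
fi
echo "$STATUS"
```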


The basis: Images & Containers

[image: docker-flow]

Gotta get a little bit theoretical on this part

Docker has two fundamental components:

  • Containers are isolated services, like virtual machines: they provide an isolated environment where you can run, test, and do anything like it's a whole new system inside. Containers don't persist data to your disk like a normal service - delete a container and all its data goes with it. They are therefore: stateless
    • Advantage? You can create/destroy containers with ease.
    • Problem? The isolation can cut a container off from others (we'll learn about controlling isolation with volumes & networks in the next part!)
  • Images are blueprints for creating these containers. An example image can be a set of instructions like:
    • Clone a repository, install its dependencies.
    • Run the main service (ex: express) on port 3000. Containers replicate this formula, so you can turn any service into an image --> and then make a container from that image!

So basically, Docker Hub (from the nginx example) is just a central registry hosting these images!
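The image --> container relationship maps onto a handful of everyday commands. A sketch of the full lifecycle, reusing the nginx image from earlier (guarded in case Docker isn't set up yet):

```shell
# Pull the blueprint, list it, stamp a container from it, then destroy it.
# Destroying the container takes its data with it - the "stateless" part above.
if command -v docker >/dev/null 2>&1; then
  docker pull nginx \
    && docker image ls nginx \
    && docker run --name lifecycle-demo -d nginx \
    && docker rm -f lifecycle-demo \
    && STATUS="lifecycle ok" \
    || STATUS="a docker command failed"
else
  STATUS="docker CLI not found"
fi
echo "$STATUS"
```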


Writing our own custom Docker Image

  • Create some random directory, initialize a Node.js project & set up express:
mkdir test && cd test
npm init -y && npm install express dotenv
touch index.js 
  • Make up a simple index.js script for the express server:
const express = require('express')
const app = express();
require('dotenv').config()

const port = 3000;
const num = process.env.num;

app.get('/', (req,res)=>{
  res.send(`Hello ${num}`);
})

app.listen(port, ()=>{
  console.log(`Server connected at port: ${port}`)
})

We write the image's recipe in a "Dockerfile", which looks something like this:

#needs a "base image"; we have an express app, so the base will be the nodejs image
FROM node:18

#inside the container, make an /app folder to store our project files
WORKDIR /app
#indicate that port 3000 runs the service (doesn't actually expose it, just an indicator)
EXPOSE 3000
#env variable used in the script
ENV num=10

#now copy our index.js + package.json into the /app folder for use
COPY index.js /app
COPY package.json /app

#run command in the workdir (/app in this case): install packages from package.json (i.e. express & dotenv)
RUN npm install
#start the application when the container launches
ENTRYPOINT ["node", "index.js"]
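One optional refinement (not required for this demo): copying package.json and running npm install before copying the source lets Docker cache the dependency layer, so edits to index.js don't re-trigger the install on every rebuild. A sketch of the same Dockerfile with that ordering:

```dockerfile
FROM node:18
WORKDIR /app
EXPOSE 3000
ENV num=10

#dependency layer first - cached until package.json changes
COPY package.json /app
RUN npm install

#source last - editing index.js only rebuilds from here
COPY index.js /app
ENTRYPOINT ["node", "index.js"]
```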

Now let's create the image from this Dockerfile:

#build from the Dockerfile in this directory (assuming you're in the same directory as the Dockerfile)
docker build -t express-image:v1 .

Confirm this image is created:

docker image ls

You should notice an "express-image:v1" image in the list.



Now the image is set, so create a container from it:

docker container run --name express-app -d -p 3000:3000 express-image:v1

Notice the -p 3000:3000? Remember the EXPOSE 3000 line back when we wrote the Dockerfile: here -p <my-port>:<container-port> maps our PC's port 3000 to the container's port 3000.
So, for example: if you do -p 8080:3000, our port 8080 will act like the container's port 3000!
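If you ever forget what you mapped, docker port prints a running container's mappings. A sketch, reusing the express-app container name from above (and skipping cleanly if Docker or the container isn't there):

```shell
# List the host->container port mappings of the express-app container.
if command -v docker >/dev/null 2>&1; then
  docker port express-app 2>/dev/null \
    && STATUS="mappings listed" \
    || STATUS="container not running"
else
  STATUS="docker CLI not found"
fi
echo "$STATUS"
```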



Since we set the 3000:3000 port mapping, make a request to localhost:3000:

wget localhost:3000

You should get an "index.html" file with contents: "Hello 10"



We used an environment variable "num" earlier, but that was static. Let's try passing a new value for the env variable:

docker container run --name express-app-2 -d -e num=50 -p 8080:3000 express-image:v1

In this new container from the same image, we pass the num variable via the -e key=value flag.
And this should show at localhost:8080:

[image: hello-50]
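You can also confirm the override from inside the container: docker exec runs a command in a running container. A sketch, reusing the express-app-2 name from above (guarded as before):

```shell
# printenv inside express-app-2 should report num=50, the value passed via -e.
if command -v docker >/dev/null 2>&1; then
  docker exec express-app-2 printenv num \
    && STATUS="env checked" \
    || STATUS="container not running"
else
  STATUS="docker CLI not found"
fi
echo "$STATUS"
```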

I honestly don't remember all the syntax for Dockerfiles, so using something like the Dockerfile reference is recommended


Summary

This is just the basics, but you should be able to use it as a base to start working with containers - easily setting them up or even making your own. Still, there are many more things remaining:

  • Containers are isolated, so how do we let multiple containers connect? So far we've only connected our PC to a container. - docker network
  • These containers drop everything when removed - what about containers running DBs? We'd want some to persist state, right? - docker volumes
  • Is there a simpler way to handle multiple containers? - docker compose




Next Part: Docker continued - Volumes & Network (part-2)