Mubbashir Mustafa
Containerize React app with Docker for Production

Docker - an overview
Docker is an open platform for developing, shipping, and running applications. Docker ensures quick delivery of your software by providing isolation between your app and the infrastructure. Docker packages and runs everything inside a loosely isolated environment called the container.

Key Terms
Image - a complete package that contains everything (application code, required libraries, software dependencies, configurations, etc.) needed to run your app (just like a Class in OOP)

Container - an instance of the image, just like an object in OOP

Volume - images are read-only, to persist data you have to use volumes. In simplest terms, you share a folder (on host OS) with your docker image to read/write data from/to it.

Dockerfile - the blueprint of an image. This is where you define what goes inside the image you are building, such as the OS (e.g. Ubuntu 16), software (e.g. Node), etc.

Tag - a label that identifies a specific version or variant of an image (e.g. in node:12.16.1-alpine3.9, the tag is 12.16.1-alpine3.9).
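To make the tag idea concrete, here is a short sketch (requires Docker installed, so treat it as an illustration; mynode:stable is a hypothetical name):

```shell
docker pull node:12.16.1-alpine3.9                 # pull the image "node" at tag "12.16.1-alpine3.9"
docker tag node:12.16.1-alpine3.9 mynode:stable    # give the same image an additional tag
docker images                                      # both tags point at the same image ID
```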

I assume you already have a React application that you want to containerize with Docker. If you don't, you can clone this sample React application and follow along.

Step 1: Install Docker

Download and install Docker.

Step 2: Prepare Configuration Files

You need to create two configuration files, for:

  1. Nginx (webserver)
  2. Docker (to build the Docker image)

Nginx
The build output of a React app is just static files (HTML, CSS, JS, etc.), and you need a webserver such as Nginx, Apache, or OpenLiteSpeed to serve them.
Inside your React app, create a directory named nginx. Inside the nginx directory (you just created), create a new file named nginx.conf. You can also use the following commands (one by one) to achieve this.

cd my-app
mkdir nginx
cd nginx
touch nginx.conf

Edit the "nginx.conf" file and add the following code to it.

server {

  listen 80;

  location / {
    root   /usr/share/nginx/html;
    index  index.html index.htm;

    # to redirect all the requests to index.html, 
    # useful when you are using react-router

    try_files $uri /index.html; 
  }

  error_page   500 502 503 504  /50x.html;

  location = /50x.html {
    root   /usr/share/nginx/html;
  }

}

The gist of this code block is that you're telling Nginx to listen on port 80, serve files from /usr/share/nginx/html (the root), and fall back to index.html for any request that doesn't match a file - which is exactly what client-side routing with react-router needs.
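Once the container from Step 4 is up, you can verify this fallback behaviour yourself. A quick check (assumes curl is installed and the container is bound to port 80; the route path is just an example):

```shell
curl -I http://localhost/                 # serves the real index.html
curl -I http://localhost/some/app/route   # no such file, but try_files still returns index.html (HTTP 200)
```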

Dockerfile
Inside your app directory, create a new file named Dockerfile.prod and add the following code to it:

# stage1 - build react app first 
FROM node:12.16.1-alpine3.9 as build
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY ./package.json /app/
COPY ./yarn.lock /app/
RUN yarn
COPY . /app
RUN yarn build

# stage 2 - build the final image and copy the react build files
FROM nginx:1.17.8-alpine
COPY --from=build /app/build /usr/share/nginx/html
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx/nginx.conf /etc/nginx/conf.d
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

Create a new file named .dockerignore and add node_modules to it. This tells Docker to ignore the node_modules directory when copying files into the image.
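A minimal .dockerignore might look like this (only node_modules is required for this guide; the other entries are common optional additions, not part of the original setup):

```
node_modules
build
.git
```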

So your directory structure should be like this

my-app
│   Dockerfile.prod
│   .dockerignore    
│
└───nginx
      nginx.conf


Explanation

Stage 1

  1. Use a multi-stage Docker build (supported since Docker 17.05)
  2. FROM tells what base image to use (required), you can check out base images at Docker Hub
  3. WORKDIR is used to specify the working directory (inside the image, not your host OS)
  4. ENV PATH adds node_modules in the PATH
  5. COPY is used to copy package.json and yarn.lock from the current directory (on the host) to the working directory (in the image).
  6. RUN is used to run the command, here we want to run Yarn to install the dependencies mentioned in package.json
  7. COPY is run again to copy all the code from the host OS to the working directory in the image
  8. Run yarn build to build our app

You copy package.json first and install the dependencies and don't copy node_modules into the image. This is to leverage the excellent caching system of Docker and reduce build times.
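You can see this caching in action by rebuilding after touching only a source file - a sketch of what to expect (requires Docker and the files from this guide):

```shell
docker build -f Dockerfile.prod -t my-first-image:latest .  # first build: yarn installs all dependencies
# now edit a file under src/ (but not package.json) and rebuild:
docker build -f Dockerfile.prod -t my-first-image:latest .  # the yarn layer is reused ("Using cache"); only later layers rebuild
```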

Stage 2

In the first stage, you copied package.json to the working directory, installed the dependencies, copied your code, and built the final static files. In stage 2:

  1. Use Nginx as a base image. (nginx is the image, and 1.17.8-alpine is the tag. It's like you are telling what particular version/release of the Nginx base image you would like to use).
  2. Copy the build files from stage 1 to /usr/share/nginx/html (the default directory where Nginx serves from)
  3. Remove the default Nginx configuration file present at /etc/nginx/conf.d/default.conf
  4. Copy the configuration file you created earlier into the docker image
  5. Use EXPOSE to document which port the container listens on. One pitfall here is that EXPOSE doesn't actually publish the port; it only serves as documentation between the image author and whoever runs the container (ports are published with -p at run time)
  6. Run Nginx in the foreground, not as a daemon (i.e in the background).

Both CMD and RUN are used to run commands. The difference is that RUN is an image build step, whereas CMD is the command a container executes by default when it is started.
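A minimal sketch of that difference (a hypothetical Dockerfile, not part of this setup):

```dockerfile
FROM alpine:3.11
# RUN executes while the image is being built; its result is baked into the image
RUN echo "hello" > /greeting.txt
# CMD only records the default command; it executes each time a container starts
CMD ["cat", "/greeting.txt"]
```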

Step 3: Build and Tag Image
From the root directory of your app, run the following command to build & tag your docker image:

docker build -f Dockerfile.prod -t my-first-image:latest .

  1. -f is used to specify the filename. If you don't specify it then you must rename your file to Dockerfile - that's what the build command looks for in the current directory by default.
  2. -t is used to tag the image. You can tag your image the way you want (e.g v1.0.0, v2.0.0, production, latest, etc.)
  3. . at the end is important - it tells Docker to use the current directory as the build context.
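As a side note, you can apply more than one tag in a single build - a hypothetical example (the v1.0.0 tag is just an illustration):

```shell
# Tag the same image as both "latest" and a version number in one build
docker build -f Dockerfile.prod -t my-first-image:latest -t my-first-image:v1.0.0 .
```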

Step 4: Run Container
The final step is to run the built image (as a container)
docker run -it -p 80:80 --rm my-first-image:latest

  1. -it for interactive mode
  2. -p to publish and bind ports, in the form -p <host port>:<container port>. The first number is the port on your machine (host OS) and the second is the port inside the container. Here we bind port 80 of the host to port 80 of the container. For example, if you use -p 1234:80 then you will need to go to http://localhost:1234 in your browser.
  3. --rm to remove the container once it is stopped
  4. my-first-image:latest the name:tag of the image we want to run container of
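For longer-running use you would typically not keep a terminal attached. A common variation (the container name my-app is just an example):

```shell
docker run -d -p 80:80 --name my-app my-first-image:latest  # -d runs the container in the background
docker logs my-app   # inspect the Nginx output
docker stop my-app   # stop the container when you're done
```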

Now open your browser and go to http://localhost and you will see your app being served from the Docker container. If you make any changes to your React application code, you will need to rebuild the image (Step 3) and run it again (Step 4).

Extra

  1. Run docker image ls to see a list of all the images on your machine
  2. Run docker container ls to see all the running containers
  3. Run docker system prune to remove stopped containers, unused networks, and dangling images (be careful with this command; read the docs for its options before using them)
  4. Read the Docker getting started guide

Let's connect:

Linkedin: https://www.linkedin.com/in/mubbashir10/

Twitter: https://twitter.com/mubbashir100

Top comments (13)

xai1983kbu

Hi! Thank you for your article!
Is it possible to containerize a NextJs app with Nginx in one container?
I don't understand how to run two processes in one container.
I found this answer stackoverflow.com/a/63038894/9783262 but have had no success yet.

Can we containerize only the NextJs app (without Nginx) for use with "Fargate + Application Load Balancer"?

xai1983kbu • Edited

I managed to launch a containerized NextJs app without Nginx
Example with Pulumi
github.com/pulumi/examples/issues/...

Now I'm curious - is it the right way (not using Nginx)?

Mubbashir Mustafa

Umm, I would throw in Nginx there to do the load balancing, caching, compressing, etc. But yeah that's just my opinion :)
If you are generally interested in why we want to use Nginx with node server, this might help: expressjs.com/en/advanced/best-pra...

Mubbashir Mustafa

Btw you can add more than one container in ECS. That's how Node.js apps are usually deployed (the other container being a sidecar).

xai1983kbu

Thank you soo much!

Abhishek Pise

Can you guide me on how to deploy a Preact app to ECS?

Jhonatan Giraldo

Hey, great article. A couple of questions:

  1. Why do we need to add node_modules to the path? ENV PATH /app/node_modules/.bin:$PATH
  2. Why do you explicitly copy package.json if you will copy everything from the root anyway? COPY ./package.json /app/ # <-- redundant? COPY . /app
  3. What is the advantage of running nginx in the foreground instead of daemon? CMD ["nginx", "-g", "daemon off;"]
Ben Keller

The answer to 2 is that Docker builds images in layers with rebuilds being triggered for all layers after a change has been detected. Since installing dependencies takes time and happens less frequently than code updates, it is better to isolate that step to avoid having dependencies installed every time you tweak your code.

Jay Amaranayake

Well-written series! Nicely done!

Mubbashir Mustafa

Thank you!

Jhonatan Giraldo

Sorry, I have another question. Why did you use the index directive? Trying it out myself, Nginx serves index.html when accessing the URL even without declaring index index.html index.htm.

Kvs Sankar Kumar

Why do we need an Nginx server? Can't we use Node to serve the static files directly?