Chandra Panta Chhetri

Docker, Postgres, Node, Typescript Setup

When setting up the backend for my project, I ran into many issues configuring & connecting to the DB running in a Docker container via Node & PgAdmin. So I wanted to explain how I fixed these issues, in the hope that it saves you hours of frustration.

We will be learning to:

  • Configure Typescript for Node.js
  • Run Node.js & Postgres in Docker containers
  • Use env variables in Docker Compose & Node.js
  • Connect to the DB running in a container via PgAdmin
  • Use Nodemon to automatically restart the server whenever the code changes

Prerequisite

  1. Docker Desktop

Typescript & Nodemon

We will start by creating a basic Express server.

First, let's install the packages we will need:

# Dev dependencies
npm i --save-dev typescript nodemon @types/pg @types/express dotenv

# Dependencies
npm i pg express

Add the following scripts in package.json:

"scripts": {
    "start": "node ./dist/app.js",
    "dev": "nodemon -L -e ts --exec \"npm run build && npm start\"",
    "build": "tsc"
  }
  • build converts all our .ts files to .js and puts them in a dist folder (as configured below in tsconfig.json)
  • dev uses nodemon to watch for changes in any .ts file ('-e ts'). When there is a change, it runs the build & start scripts. Nodemon saves us from having to stop and restart the server manually each time
    • '-L' (legacy watch, i.e. polling) is required when running nodemon inside a container with a mounted volume
  • start runs the compiled server from the dist folder

To configure Typescript, create a tsconfig.json file at the root with the following:

{
    "compilerOptions": {  
      "target": "es6" /* Specify ECMAScript target version: 'ES3' (default), 'ES5', 'ES2015', 'ES2016', 'ES2017', 'ES2018', 'ES2019', 'ES2020', or 'ESNEXT'. */,
      "module": "commonjs" /* Specify module code generation: 'none', 'commonjs', 'amd', 'system', 'umd', 'es2015', 'es2020', or 'ESNext'. */,
      "outDir": "./dist" /* Redirect output structure to the directory. */,
      "strict": true /* Enable all strict type-checking options. */,
      "typeRoots": ["./node_modules/@types"] /* List of folders to include type definitions from. */,
      "esModuleInterop": true /* Enables emit interoperability between CommonJS and ES Modules via creation of namespace objects for all imports. Implies 'allowSyntheticDefaultImports'. */,
      "skipLibCheck": true /* Skip type checking of declaration files. */,
      "forceConsistentCasingInFileNames": true /* Disallow inconsistently-cased references to the same file. */
    }
}

Next, create a .env file at the root so that we can use the same variables when configuring both Docker Compose & the server. It also keeps those values out of version control, since docker-compose.yml is committed to GitHub whereas the .env file is not.

For now, add a PORT variable to set the port the server will run at:

PORT=5000

Create an app.ts file in a new src folder with the following content:

import express, { NextFunction, Request, Response } from "express";
import dotenv from "dotenv";

const app = express();
dotenv.config(); //Reads .env file and makes it accessible via process.env

app.get("/test", (req: Request, res: Response, next: NextFunction) => {
  res.send("hi");
});

app.listen(process.env.PORT, () => {
  console.log(`Server is running at ${process.env.PORT}`);
});

To verify everything is set up correctly so far, start the server:

npm run dev

Now, make a GET request to localhost:5000/test. The response should be hi. Also, notice there should now be a dist folder containing the compiled .js files.

Docker

Now, we will run the server & Postgres in Docker containers.

Before that, you might ask: why use Docker at all?

Docker allows your app to run in isolated environments known as containers. Consequently, this solves the age-old problem of "the code works on my machine".

It also lets you use whatever tools you want without installing them locally, by using images instead.

Docker images can be pulled from Docker Hub or created using a Dockerfile.

Create a file named Dockerfile at the root:

# Uses the Node.js 16 Alpine image as the base
FROM node:16.13.1-alpine3.14

# sets the working directory for any RUN, CMD, COPY command
# all files we put in the Docker container running the server will be in /usr/src/app (e.g. /usr/src/app/package.json)
WORKDIR /usr/src/app

# Copies package.json, package-lock.json, tsconfig.json, .env to the root of WORKDIR
COPY ["package.json", "package-lock.json", "tsconfig.json", ".env", "./"]

# Copies everything in the src directory to WORKDIR/src
COPY ./src ./src

# Installs all packages
RUN npm install

# Runs the dev npm script to build & start the server
CMD npm run dev

The Dockerfile will build our Express Server as an image, which we can then run in a container.

When creating applications that use multiple containers, it is best to use Docker Compose to configure them.

But before Docker Compose, let's add some more variables to the .env file as we will require them shortly.

DB_USER='postgres'
DB_HOST='db'
DB_NAME='db_name'
DB_PASSWORD='password'
DB_PORT=5432
  • DB_HOST corresponds to the name of the db service defined below. Containers on the same Docker Compose network reach each other by service name, so from the api container the database is reachable at db rather than localhost
  • DB_PORT is the default port Postgres listens on
  • DB_USER is the default Postgres user, and DB_PASSWORD is the password we will pass to the Postgres image below

Create a docker-compose.yml file at the root:

version: '3.8'
services:
  api:
    container_name: api
    restart: always
    build: .
    ports:
      - ${PORT}:${PORT}
    depends_on:
      - db
    volumes:
      - .:/usr/src/app

  db:
    container_name: postgres
    image: postgres
    ports:
      - '5433:${DB_PORT}'
    volumes:
      - data:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=${DB_PASSWORD}
      - POSTGRES_DB=${DB_NAME}

volumes:
  data: {}

Note: The ${VARIABLE_NAME} syntax lets us use variables from the .env file; Docker Compose automatically picks them up from the .env file at the project root.

For the api service, we are:

  • using the Dockerfile to build the container
  • publishing ${PORT} (which is 5000 from the .env file), which lets us access the server via localhost:${PORT}
  • only starting the container once the db container has started (note that depends_on waits for the container to start, not for Postgres to be ready to accept connections)
  • mapping all the files in the project directory to the WORKDIR of the container using volumes

For the db service, we are:

  • using the postgres image from Docker Hub
  • using a named volume so that our DB data is not erased when we shut down the container
  • mapping port 5432 of the container to port 5433 of our localhost
  • passing env variables from the .env file to the postgres image. The image requires at least POSTGRES_PASSWORD, as per the documentation on Docker Hub. We also include POSTGRES_DB, which specifies a different name for the default database that is created when the image first starts

Connecting To Postgres

To connect the server to the Postgres container, add the following to app.ts:

import { Pool } from "pg";
const pool = new Pool({
  host: process.env.DB_HOST,
  user: process.env.DB_USER,
  database: process.env.DB_NAME,
  password: process.env.DB_PASSWORD,
  port: parseInt(process.env.DB_PORT || "5432")
});

const connectToDB = async () => {
  try {
    await pool.connect();
  } catch (err) {
    console.log(err);
  }
};
connectToDB();
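To check that the connection actually works end to end, you could also add a simple query route below the pool setup. This is only a minimal sketch (the /db-test path and the SELECT NOW() query are illustrative, not part of the original setup); pool.query checks out a client, runs the query, and releases the client automatically, so it is a convenient way to run one-off queries per request:

// Hypothetical route for verifying the DB connection (add below the pool setup in app.ts)
app.get("/db-test", async (req: Request, res: Response, next: NextFunction) => {
  try {
    const result = await pool.query("SELECT NOW()"); // asks Postgres for the current time
    res.json(result.rows[0]);
  } catch (err) {
    next(err); // hands DB errors to Express's error handling
  }
});

Once the containers are running (next step), a GET request to localhost:5000/db-test should return the current time reported by Postgres.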

Now, we can start up the server & DB with the following command:

docker-compose up

This will build & start both containers (api & db). Remember, db starts first and then api, since api depends on db.

Try making the same GET request we did earlier and you should get the same response.

Before we end the tutorial, you might be wondering: how do I view the DB and its contents? There are 2 ways:

  1. You can add a new service to the docker-compose.yml file that uses the pgadmin4 image
  2. If you have PgAdmin installed locally:
    • Use localhost as the host & 5433 as the port when adding a new server. Why 5433 and not 5432, the default Postgres port? Earlier, we mapped port 5432 of the container to port 5433 of our localhost. It could have been almost any free port, just not 5432 if you already have Postgres installed locally, since the local install is already listening on 5432 and the container cannot publish the same port. The sketch after this list shows the same connection from a Node script running on the host.
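If you want to sanity-check that port mapping without PgAdmin, here is a minimal sketch of connecting from the host machine with a throwaway Node script. The file name is hypothetical and the credentials simply mirror the .env values above; compile it with tsc (or adapt it to plain Node) and run it on the host, not inside a container, while the containers are up:

// check-db.ts — hypothetical script, run on the host while docker-compose is up
import { Client } from "pg";

const client = new Client({
  host: "localhost", // the container's port 5432 is published on the host as 5433
  port: 5433,
  user: "postgres",
  password: "password",
  database: "db_name",
});

const main = async () => {
  await client.connect();
  const { rows } = await client.query("SELECT version()");
  console.log(rows[0].version); // prints the Postgres version string
  await client.end();
};

main().catch(console.error);

The same host, port, and credentials are what you would enter in PgAdmin when adding the new server.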

Conclusion

I hope my explanation was clear & helped you in some way. If you want the source code, you can find the full code here.

Top comments (5)

devil 2010

I got this error:

#9 0.327 Error connecting to PostgreSQL database Error: getaddrinfo ENOTFOUND db
#9 0.327     at GetAddrInfoReqWrap.onlookupall [as oncomplete] (node:dns:120:26) {
#9 0.327   errno: -3008,
#9 0.327   code: 'ENOTFOUND',
#9 0.327   syscall: 'getaddrinfo',
#9 0.327   hostname: 'db'
#9 0.327 }

Guilherme

Great post! Helped me out a lot :P

Just so you know, I looked up the -L flag for nodemon, because I didn't know what it was, and I saw this in the docs:

In some networked environments (such as a container running nodemon reading across a mounted drive), you will need to use the legacyWatch: true which enables Chokidar's polling.

Via the CLI, use either --legacy-watch or -L for short:

nodemon -L

Though this should be a last resort as it will poll every file it can find.

So, maybe it's not required. It looks worth trying to run things without it first :)

João Pedro Holanda Neves

Great post, cleared up a lot of questions I had about it. Thank you very much!

Stanislav Baraniuk

FATAL: password authentication failed for user

Leonardo Moreira

I removed the quotes from .env and ran docker compose down and then docker compose up