Setup & Dockerize a React / Nest Monorepo application

Montacer Dkhilali
5 min read · Aug 31, 2021

Today, there is a lot of buzz around containerization and Docker. Software development companies use Docker to simplify and accelerate their workflows. It really makes it easier to create, deploy, and run applications by using containers.

Containers allow a developer to package up an application with all of the parts it needs, such as libraries and other dependencies, and ship it all out as one image.

In this blog, we are going to use Docker and Docker Compose to “Dockerize” a monorepo React.js / Nest.js application with a PostgreSQL database.

Prerequisites

This guide uses a Makefile to shorten the Docker commands, so install make:

sudo apt install make
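Beyond make, the walkthrough also assumes Docker, Docker Compose, Node.js, and the Nest CLI are already installed (the original lists only make, so take this summary as my addition):

npm i -g @nestjs/cli
docker --version && docker-compose --version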

Project structure

At Breakpoint Technology, we generally work with the monorepo or Git submodules patterns.
As you can see in the image below, our project contains the backend Nest.js folder created with the Nest CLI and the frontend React.js folder created with Create React App, both under packages/. We also have other global files at the root level of our project.

In order to create the same project structure as we have in the image below, follow these steps:

  • Initialize an empty Git repository and add these empty files to it:
.dockerignore
.gitignore
.env
docker-compose.yaml
Makefile
  • Create the packages/ folder that will hold the backend and frontend projects:
mkdir packages && cd packages
  • Create the frontend project using Create React App:
npx create-react-app frontend --template typescript
  • Create the backend project using the Nest CLI:
nest new backend
  • Delete the .git/ folder (created by the Nest CLI) inside the backend project, since we already initialized a Git repository at the root.
  • Inside .eslintrc.js, update the path of the tsconfig.json file to packages/backend/tsconfig.json (see the snippet after this list). Otherwise, ESLint will not work in the backend.
  • Now try to run the backend and the frontend projects without Docker to verify that everything is okay!
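For reference, here is roughly what that ESLint change looks like, assuming the default .eslintrc.js generated by the Nest CLI (the exact template may differ between versions):

// packages/backend/.eslintrc.js (excerpt)
module.exports = {
  parser: '@typescript-eslint/parser',
  parserOptions: {
    // Was 'tsconfig.json'; ESLint now runs from the monorepo root,
    // so the path must be relative to the root:
    project: 'packages/backend/tsconfig.json',
    sourceType: 'module',
  },
  // ...the rest of the generated config stays unchanged
};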

Dockerize Nest.js backend project

As you may have noticed, the backend folder contains two Dockerfiles. The local one, referenced from the docker-compose.yaml file, is used for running the project locally, whereas the other one is used for production.

Dockerfile.local

  • Adding --silent to npm install is a personal choice. It basically hides the logs when building the Docker image.
  • The $BACKEND_PORT value is set in the .env file; you can also pass it explicitly here.
FROM node:14-alpine

RUN mkdir -p /svr/app
WORKDIR /svr/app

RUN npm i -g @nestjs/cli --silent

COPY package.json .
COPY package-lock.json .
RUN npm install --silent

COPY . .

# Value set in .env file. Declared as a build ARG (with a default matching
# .env) so that EXPOSE has something to expand at build time.
ARG BACKEND_PORT=4000
EXPOSE $BACKEND_PORT

CMD ["npm", "run", "start:debug"]

Dockerfile.production

  • The main difference between the two Dockerfiles is that for production we run npm run build, and we start node from the generated JavaScript in the dist folder.
FROM node:12-alpine

ARG NODE_ENV=staging

WORKDIR /app

COPY . .
RUN npm install --silent
RUN npm run build

EXPOSE 3000

CMD ["node", "dist/main"]
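The production image can be built and tried the same way (commands are my own example):

docker build -f packages/backend/Dockerfile.production -t backend:prod packages/backend
# the image EXPOSEs 3000; adjust the mapping if your main.ts reads a different port
docker run -p 3000:3000 backend:prod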

Dockerize React.js frontend project

Dockerfile.local

  • In Dockerfile.local we only copy the files and run npm install; in docker-compose.yaml, we will run the npm start command that launches the development server.
FROM node:14-slim

WORKDIR /usr/src/app

COPY . .
RUN npm install --silent
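Note that this Dockerfile defines no CMD; the command comes from docker-compose.yaml, as described above. If you ever run the image by hand, pass the command yourself (example mine):

docker build -f packages/frontend/Dockerfile.local -t frontend:local packages/frontend
docker run -it -p 3000:3000 frontend:local npm start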

Dockerfile.production

  • For production, we will use the npm run build command to create a production build of our React app.
  • Then, we take advantage of the multistage build pattern to create a temporary image used for building the artifact — the production-ready React static files — that is then copied over to the production image.
FROM node:14-slim as build

WORKDIR /app

COPY package.json .
RUN npm install --silent

COPY . .
RUN npm run build

FROM nginx:alpine

COPY nginx.conf /etc/nginx/conf.d/default.conf
COPY --from=build /app/build /usr/share/nginx/html

EXPOSE 80

CMD ["nginx", "-g", "daemon off;"]
  • In the frontend project, we added the nginx.conf file that contains the Nginx configuration.
server {
  listen 80;

  location / {
    root /usr/share/nginx/html;
    index index.html;
    try_files $uri /index.html;
  }
}
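To try the production frontend image on its own (commands are my own example, not part of the article's flow):

docker build -f packages/frontend/Dockerfile.production -t frontend:prod packages/frontend
docker run -p 8080:80 frontend:prod
# the static build is now served by Nginx at http://localhost:8080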

Create Docker Compose file and run project

Set up environment variables

Copy and paste these environment variables into your .env file.

NODE_ENV=development
FRONTEND_PORT=3000
BACKEND_PORT=4000
JWT_SECRET=jwt_secret_key_here
JWT_EXPIRES_IN=30d
DB_HOST=bp-pg-db
DB_NAME=bp-pg-db
DB_USER=postgres
DB_PASSWORD=root
DB_PORT=5432
PGADMIN_DEFAULT_EMAIL=admin@backend.com
PGADMIN_DEFAULT_PASSWORD=pass@123
PGADMIN_PORT=5055
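These variables only matter if the application actually reads them. As a sketch of how the backend might consume the database values (the article does not show this part; the snippet assumes @nestjs/typeorm and pg are installed):

// packages/backend/src/app.module.ts (illustrative sketch)
import { Module } from '@nestjs/common';
import { TypeOrmModule } from '@nestjs/typeorm';

@Module({
  imports: [
    TypeOrmModule.forRoot({
      type: 'postgres',
      host: process.env.DB_HOST, // 'bp-pg-db', the Compose service name
      port: parseInt(process.env.DB_PORT ?? '5432', 10),
      username: process.env.DB_USER,
      password: process.env.DB_PASSWORD,
      database: process.env.DB_NAME,
      autoLoadEntities: true,
      // never enable synchronize in production
      synchronize: process.env.NODE_ENV === 'development',
    }),
  ],
})
export class AppModule {}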

Set up configuration files

  • .gitignore
/**/node_modules/
/**/build/
/**/dist/
.vscode
.env
  • .dockerignore
packages/**/node_modules
.gitignore
.git
*.md

Docker Compose

Services (each service represents a Docker container that will be created)

  • frontend: Based on Dockerfile.local of the frontend project. The volume created for this container lets us track changes to the project files.
  • backend: Based on Dockerfile.local of the backend project. Depends on the database container.
  • bp-pg-db: Based on the postgres:12-alpine Docker image. Gets its environment variables from the .env file.
  • pgadmin-portal: Based on the dpage/pgadmin4 Docker image. Depends on the bp-pg-db database container.

Volumes

  • pgdata: A Docker volume used by the bp-pg-db database to store its data persistently.
  • pgadmin: A Docker volume used by pgadmin-portal to store its configuration persistently.

Networks

  • bp-network: Based on the default bridge driver; we use this network to allow the containers to communicate with each other.
  • docker-compose.yaml
version: "3.9"services:
frontend:
container_name: frontend
build:
context: ./packages/frontend
dockerfile: Dockerfile.local
restart: always
env_file: .env
ports:
- "${FRONTEND_PORT}:${FRONTEND_PORT}"
volumes:
- "./packages/frontend/src:/usr/src/app/src"
networks:
- bp-network
command: "npm start"
backend:
container_name: backend
build:
context: ./packages/backend
dockerfile: Dockerfile.local
restart: always
env_file: .env
volumes:
- ./packages/backend:/svr/app
- "./scripts/wait.sh:/wait.sh"
- /svr/app/node_modules
networks:
- bp-network
ports:
- "${BACKEND_PORT}:${BACKEND_PORT}"
depends_on:
- bp-pg-db
links:
- bp-pg-db
bp-pg-db:
image: postgres:12-alpine
restart: always
container_name: bp-pg-db
env_file:
- .env
environment:
POSTGRES_PASSWORD: ${DB_PASSWORD}
PGDATA: /var/lib/postgresql/data
POSTGRES_USER: ${DB_USER}
POSTGRES_DB: ${DB_NAME}
ports:
- "${DB_PORT}:${DB_PORT}"
volumes:
- pgdata:/var/lib/postgresql/data
networks:
- bp-network
pgadmin-portal:
image: dpage/pgadmin4
restart: always
container_name: pgadmin-portal
env_file:
- .env
environment:
PGADMIN_DEFAULT_PASSWORD: "${PGADMIN_DEFAULT_PASSWORD}"
PGADMIN_DEFAULT_EMAIL: "${PGADMIN_DEFAULT_EMAIL}"
volumes:
- pgadmin:/root/.pgadmin
ports:
- "${PGADMIN_PORT}:80"
depends_on:
- bp-pg-db
networks:
- bp-network
volumes:
pgdata:
pgadmin:
networks:
bp-network:
driver: bridge
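Note that the compose file mounts scripts/wait.sh into the backend container but its contents are not shown in this post. Such scripts usually block until Postgres actually accepts connections, because depends_on only waits for the container to start, not for the database to be ready. A minimal sketch of what scripts/wait.sh could look like (my assumption, using node since it is available in the backend image):

#!/bin/sh
# scripts/wait.sh (sketch) -- wait until $DB_HOST:$DB_PORT accepts TCP
# connections, then exec the given command, e.g. /wait.sh npm run start:debug
until node -e "require('net').connect(Number(process.env.DB_PORT), process.env.DB_HOST).on('connect', () => process.exit(0)).on('error', () => process.exit(1))"; do
  echo "waiting for $DB_HOST:$DB_PORT..."
  sleep 1
done
exec "$@"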

Run project

  • Makefile
local:
	@docker-compose stop && docker-compose up --build -d --remove-orphans
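The Makefile can grow with more convenience targets over time; for example (these are my additions, not from the article):

stop:
	@docker-compose stop

logs:
	@docker-compose logs -f backend

down:
	@docker-compose down --remove-orphans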

After running the project using the make local command, you can check the status of your Docker containers by running docker container ls or simply docker ps.

Now your boilerplate project is set up! You can start coding ;)
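With the ports from the .env file above, the React app should be reachable at http://localhost:3000, the Nest API at http://localhost:4000, and pgAdmin at http://localhost:5055.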
