Dockerizing a Postgres, NestJS & React application with separate development and test databases. Run it all with a single command thanks to docker-compose.
In this approach the only public-facing part of the application will be the frontend, which will have access to the backend thanks to a shared network; likewise, the backend will have access to the database the same way.
You can check out the finished code on GitHub.
Frontend
First off, the frontend needs a proxy server that will either serve the static files, if it’s a page request, or forward the request to the backend REST endpoint.
const express = require("express")
const { createProxyMiddleware } = require("http-proxy-middleware")

const app = express()
const port = 3001

const options = {
  /* we will use this environment variable to
     feed the backend server url via a service name */
  target: process.env.REST_API_URL,
  // changes the origin of the host header to the target URL
  changeOrigin: true,
  pathRewrite: {
    /* client will send all backend requests to /api/path/to/endpoint
       this will remove the /api prefix when forwarding the request to the server */
    "^/api": "",
  },
}

// adding proxy middleware for /api requests
app.use("/api", createProxyMiddleware(options))
// standard static file serve
app.use(express.static("public"))

app.listen(port, () => {
  console.log(`Example app listening at http://localhost:${port}`)
})
Naturally, to run this in docker we need a Dockerfile that will download a Node.js image, install all dependencies, build the site and start up our simple proxy server to serve it.
FROM node:14.15.0
COPY . .
RUN npm install && npm run build
EXPOSE 3001
CMD node server/main.js
Use a .dockerignore file to avoid copying unnecessary folders into the container. Here I recommend adding .cache, public and node_modules to avoid long build times as well as Gatsby build errors.
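For example, the frontend’s .dockerignore could be as short as:

.cache
public
node_modules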
You can build and run this image with
docker build -t my-frontend-app .
docker run -e REST_API_URL=http://localhost:3000 -p 3001:3001 my-frontend-app
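Once it’s up, a quick smoke test from the host should return the static site (the /api routes won’t work until the backend is actually running):

curl -I http://localhost:3001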
Database
Setting up a database with docker-compose is as easy as it can be. Define the image you want to use, expose a port, add database variables and define a volume for persistent data.
version: '3.9'

services:
  exanubes-database:
    image: postgres:12-alpine
    container_name: exanubes-database
    expose:
      - "5432"
    environment:
      - POSTGRES_PASSWORD=exanubes
      - POSTGRES_USER=exanubes
      - POSTGRES_DB=exanubes-prod
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
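Note that expose only publishes the port to other containers on the shared network, not to the host, so a quick sanity check has to go through the container itself, e.g.:

docker-compose up -d exanubes-database
docker exec -it exanubes-database psql -U exanubes -d exanubes-prod -c '\l'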
Backend
The backend is a little more complicated, as it needs to connect to different databases depending on the environment it’s running in, so we’re going to use some environment variables to configure the connection.
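As a rough sketch of how the backend might consume those variables at runtime, assuming @nestjs/sequelize is used for the connection (the variable names match the ones we’ll pass in from docker-compose later):

// app.module.ts - a sketch, not the exact code from the repo
import { Module } from '@nestjs/common'
import { SequelizeModule } from '@nestjs/sequelize'

@Module({
  imports: [
    SequelizeModule.forRoot({
      dialect: 'postgres',
      host: process.env.DB_HOST, // "database" inside docker
      username: process.env.DB_USER,
      password: process.env.DB_PASSWORD,
      database: process.env.DB_NAME,
    }),
  ],
})
export class AppModule {}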
Sequelize
Aside from that, sequelize-cli, which is responsible for performing migrations on the database, has to be configured as well. So in db/config.json add:
{
  "production": {
    "username": "exanubes",
    "password": "exanubes",
    "database": "exanubes-prod",
    "host": "database",
    "dialect": "postgres"
  }
}
Each key in this config corresponds to a value of the NODE_ENV environment variable. It’s completely arbitrary and can be set to anything.
Dockerfile
One caveat in the backend’s Dockerfile is that we have to run migrations before starting the server. Also, to make sure NODE_ENV is what we expect it to be, I set it explicitly when running the commands. The rest is pretty much the same as in the frontend image.
FROM node:14.15.0-alpine
COPY . .
RUN npm install && npm run build
EXPOSE 3000
CMD NODE_ENV='production' npm run migrate && NODE_ENV='production' node dist/main.js
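The migrate script referenced in CMD isn’t shown in this post; in package.json it could be as simple as delegating to sequelize-cli (an assumption, your setup may differ):

"scripts": {
  "migrate": "sequelize db:migrate"
}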
It’s also worth creating a .dockerignore file here to prevent copying node_modules, .env files and the dist folder.
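For reference, something like:

node_modules
dist
.env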
Docker Compose
To make everything easier on us when deploying this application, we’ll add backend and frontend services to docker-compose.yml
exanubes-backend:
  restart: always
  build: ./backend
  container_name: exanubes-backend
  links:
    - "exanubes-database:database"
  depends_on:
    - exanubes-database
  environment:
    - DB_USER=exanubes
    - DB_PASSWORD=exanubes
    - DB_NAME=exanubes-prod
    - DB_HOST=database
First of all, docker-compose needs to know where the Docker image it’s supposed to build is. In the database service we used an existing postgres image; here we point to the backend directory and from there docker-compose will build the Dockerfile that’s inside.
We link and alias the database service, tell docker that we rely on that service so it gets spun up first, and finally pass in the environment variables needed to connect to the database. These should match the values in db/config.json.
exanubes-frontend:
  restart: always
  build: ./frontend
  container_name: exanubes-frontend
  links:
    - "exanubes-backend:backend"
  ports:
    - "3001:3001"
  environment:
    - REST_API_URL=http://backend:3000
Very similar to the backend service: we set the restart option and point Docker to the image definition, but this time we link and alias the backend service. Now, because the frontend has to be accessible from outside the container, we have to map the container port to a host port. Last but not least, we pass the backend url to the proxy server using the backend alias.
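With all three services defined, the whole production stack comes up with the single command promised at the start:

docker-compose up --build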
Test & Development
Adding the test and development databases is very similar to the production database.
dev-database:
  image: postgres:12-alpine
  container_name: exanubes-dev-database
  profiles: ["dev"]
  ports:
    - "5431:5432"
  environment:
    - POSTGRES_PASSWORD=exanubes
    - POSTGRES_USER=exanubes
    - POSTGRES_DB=exanubes-db-dev
  volumes:
    - ./pg-data:/var/lib/postgresql/data
Each of the databases definitely needs a different db name in POSTGRES_DB. The image should stay the same, as we do not want a different version of postgres depending on the environment. Define a separate mapping for the dev database volume. I also like to define profiles for non-production services. This way I make sure that when I run this service all other dev services will run as well, but also that this service will not run when I start the production app with docker-compose up.
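For example, with the dev profile (profiles require docker-compose 1.28 or newer):

docker-compose --profile dev up -d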
test-database:
  image: postgres:12-alpine
  container_name: exanubes-test-database
  profiles: ["test"]
  ports:
    - "5433:5432"
  environment:
    - POSTGRES_PASSWORD=exanubes
    - POSTGRES_USER=exanubes
    - POSTGRES_DB=exanubes-db-test
Pretty much the same as dev, but this time with a test profile and without a volume, as we don’t need persistent data for automated tests. Last but not least, both of these databases map the container port to a host machine port so we’re able to connect to them from localhost.
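Likewise, the test database can be started on its own:

docker-compose --profile test up -d test-database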
To get all this to work I also added development and test configs for the sequelize-cli in db/config.json:
{
  "development": {
    "username": "exanubes",
    "password": "exanubes",
    "database": "exanubes-db-dev",
    "host": "localhost",
    "dialect": "postgres",
    "port": "5431"
  },
  "test": {
    "username": "exanubes",
    "password": "exanubes",
    "database": "exanubes-db-test",
    "host": "localhost",
    "dialect": "postgres",
    "port": "5433"
  }
}
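With these in place, migrations can be run from the host against either database just by switching NODE_ENV, e.g.:

NODE_ENV=development npm run migrate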
You can check out the e2e tests by running npm run test:e2e in the backend directory. This should bring up the test-database service, run the tests and tear it down before exiting.
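The script itself isn’t shown here, but a sketch that matches this behaviour could look something like the following (the jest config path is an assumption):

"test:e2e": "docker-compose --profile test up -d test-database && NODE_ENV=test npm run migrate && NODE_ENV=test jest --config ./test/jest-e2e.json; docker-compose rm -sf test-database"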
In a future article I’ll cover how to deploy a dockerized application on Elastic Container Service.