Deployment & DevOps for Node.js
Introduction to Node.js Deployment
Deploying a Node.js application to production involves much more than just running node app.js. You need to consider process management, environment configuration, security, monitoring, and scalability. This lesson covers modern deployment strategies and DevOps practices.
Production vs Development: Production environments require a different configuration from development: source maps disabled, assets minified, structured logging, secrets supplied via environment variables, and processes monitored and restarted automatically.
Environment Variables
Environment variables separate configuration from code, making applications portable and secure:
# .env file (NEVER commit to git)
NODE_ENV=production
PORT=3000
DATABASE_URL=postgresql://user:pass@localhost:5432/mydb
JWT_SECRET=your-secret-key-here
API_KEY=your-api-key
REDIS_URL=redis://localhost:6379
SMTP_HOST=smtp.gmail.com
SMTP_PORT=587
SMTP_USER=your-email@gmail.com
SMTP_PASS=your-password
// config.js - centralized configuration
require('dotenv').config();

module.exports = {
  env: process.env.NODE_ENV || 'development',
  port: parseInt(process.env.PORT, 10) || 3000,
  database: {
    url: process.env.DATABASE_URL,
    pool: {
      min: 2,
      max: 10
    }
  },
  jwt: {
    secret: process.env.JWT_SECRET,
    expiresIn: '7d'
  },
  redis: {
    url: process.env.REDIS_URL
  },
  email: {
    host: process.env.SMTP_HOST,
    port: parseInt(process.env.SMTP_PORT, 10),
    auth: {
      user: process.env.SMTP_USER,
      pass: process.env.SMTP_PASS
    }
  },
  isProduction: process.env.NODE_ENV === 'production',
  isDevelopment: process.env.NODE_ENV === 'development'
};
// Usage in app
const config = require('./config');

if (config.isProduction) {
  // Production-specific setup
  app.use(compression());
  app.use(helmet());
}
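Because a missing variable otherwise surfaces only when it is first used, it helps to validate required settings at startup and fail fast. A minimal sketch (the variable names follow the .env example above; libraries such as envalid or joi are common alternatives to hand-rolling this):

```javascript
// validate-env.js - fail fast at boot when required settings are missing
const REQUIRED = ['DATABASE_URL', 'JWT_SECRET', 'REDIS_URL'];

function validateEnv(env = process.env) {
  const missing = REQUIRED.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(', ')}`);
  }
  return env;
}

module.exports = validateEnv;
```

Calling validateEnv() at the top of config.js makes a misconfigured container exit immediately at startup instead of failing on its first request.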
Security: Never hardcode secrets in your code. Always use environment variables and add .env to your .gitignore file.
Process Management with PM2
PM2 is a production process manager for Node.js applications with built-in load balancing:
# Installation
npm install -g pm2
# Start application
pm2 start app.js --name "my-app"
# Start in cluster mode (one process per CPU core)
pm2 start app.js -i max --name "my-app"
// ecosystem.config.js - PM2 configuration
module.exports = {
  apps: [{
    name: 'my-app',
    script: './app.js',
    instances: 'max',
    exec_mode: 'cluster',
    env: {
      NODE_ENV: 'development'
    },
    env_production: {
      NODE_ENV: 'production',
      PORT: 3000
    },
    error_file: './logs/err.log',
    out_file: './logs/out.log',
    log_date_format: 'YYYY-MM-DD HH:mm:ss Z',
    merge_logs: true,
    max_memory_restart: '1G',
    autorestart: true,
    watch: false,
    ignore_watch: ['node_modules', 'logs']
  }]
};
# Common PM2 commands
pm2 start ecosystem.config.js --env production
pm2 restart my-app
pm2 reload my-app   # Zero-downtime reload
pm2 stop my-app
pm2 delete my-app
pm2 list            # List all processes
pm2 logs my-app     # View logs
pm2 monit           # Monitor CPU/memory
pm2 save            # Save process list
pm2 startup         # Generate startup script
pm2 resurrect       # Restore saved processes
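Cluster mode runs one process per core, and each process has its own memory. A quick illustration (hypothetical, not part of PM2 itself) of why in-memory state such as sessions, counters, or caches must move to an external store like Redis once you cluster:

```javascript
// Each PM2 cluster instance is a separate OS process with separate memory.
let hits = 0; // visible only to THIS process

function handleRequest() {
  hits += 1; // sibling instances never see this increment
  return { pid: process.pid, hits };
}

// Two requests load-balanced to different instances would each report
// hits: 1 from different pids - hence the need for shared external state.
module.exports = handleRequest;
```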
Docker Containerization
Docker packages your application with all dependencies into a portable container:
# Dockerfile - multi-stage build
# Stage 1: install production dependencies
FROM node:18-alpine AS builder
WORKDIR /app
# Copy package files
COPY package*.json ./
# Install production dependencies (--only=production is deprecated in favor of --omit=dev)
RUN npm ci --omit=dev
# Copy source code
COPY . .
# Stage 2: Production
FROM node:18-alpine
WORKDIR /app
# Create non-root user
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nodejs -u 1001
# Copy from builder
COPY --from=builder --chown=nodejs:nodejs /app/node_modules ./node_modules
COPY --chown=nodejs:nodejs . .
# Switch to non-root user
USER nodejs
# Expose port
EXPOSE 3000
# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=40s \
    CMD node healthcheck.js
# Start app
CMD ["node", "app.js"]
# .dockerignore
node_modules
npm-debug.log
.env
.git
.gitignore
README.md
docker-compose*.yml
Dockerfile*
Build and run the Docker container:
# Build image
docker build -t my-app:1.0.0 .
# Run container
docker run -d \
  --name my-app \
  -p 3000:3000 \
  -e NODE_ENV=production \
  -e DATABASE_URL=postgresql://... \
  --restart unless-stopped \
  my-app:1.0.0
# View logs
docker logs -f my-app
# Execute commands in container
docker exec -it my-app sh
# Stop and remove container
docker stop my-app
docker rm my-app
Docker Compose for Multi-Container Apps
Docker Compose orchestrates multiple containers:
# docker-compose.yml
version: '3.8'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=production
      - DATABASE_URL=postgresql://postgres:password@db:5432/mydb
      - REDIS_URL=redis://redis:6379
    depends_on:
      - db
      - redis
    restart: unless-stopped
    volumes:
      - ./logs:/app/logs
  db:
    image: postgres:15-alpine
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=mydb
    volumes:
      - postgres_data:/var/lib/postgresql/data
    restart: unless-stopped
  redis:
    image: redis:7-alpine
    restart: unless-stopped
    volumes:
      - redis_data:/data
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      - ./ssl:/etc/nginx/ssl:ro
    depends_on:
      - app
    restart: unless-stopped
volumes:
  postgres_data:
  redis_data:
# Commands
docker-compose up -d # Start all services
docker-compose down # Stop all services
docker-compose logs -f app # View logs
docker-compose ps # List services
docker-compose restart app # Restart service
Nginx Reverse Proxy
Nginx acts as a reverse proxy, handling SSL, caching, and load balancing:
# nginx.conf
events {
    worker_connections 1024;
}

http {
    upstream app_servers {
        least_conn;
        server app:3000 max_fails=3 fail_timeout=30s;
        # Add more servers for load balancing
        # server app2:3000 max_fails=3 fail_timeout=30s;
    }

    # Rate limiting
    limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

    # Cache settings
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m max_size=1g inactive=60m;

    server {
        listen 80;
        server_name example.com www.example.com;
        # Redirect HTTP to HTTPS
        return 301 https://$host$request_uri;
    }

    server {
        listen 443 ssl http2;
        server_name example.com www.example.com;

        # SSL configuration
        ssl_certificate /etc/nginx/ssl/fullchain.pem;
        ssl_certificate_key /etc/nginx/ssl/privkey.pem;
        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_ciphers HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers on;

        # Security headers
        add_header X-Frame-Options "SAMEORIGIN" always;
        add_header X-Content-Type-Options "nosniff" always;
        add_header X-XSS-Protection "1; mode=block" always;
        add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

        # Compression
        gzip on;
        gzip_vary on;
        gzip_types text/plain text/css application/json application/javascript text/xml application/xml;

        # Static files
        location /static/ {
            alias /app/public/;
            expires 1y;
            add_header Cache-Control "public, immutable";
        }

        # API routes with rate limiting
        location /api/ {
            limit_req zone=api_limit burst=20 nodelay;
            proxy_pass http://app_servers;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_cache_bypass $http_upgrade;
            # Timeouts
            proxy_connect_timeout 60s;
            proxy_send_timeout 60s;
            proxy_read_timeout 60s;
        }

        # All other routes
        location / {
            proxy_pass http://app_servers;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            # Caching for GET requests
            proxy_cache my_cache;
            proxy_cache_valid 200 60m;
            proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
            add_header X-Cache-Status $upstream_cache_status;
        }
    }
}
Pro Tip: Use Let's Encrypt with Certbot for free SSL certificates: certbot --nginx -d example.com -d www.example.com
Continuous Integration/Continuous Deployment (CI/CD)
Automate testing and deployment with GitHub Actions:
# .github/workflows/deploy.yml
name: Deploy to Production
on:
  push:
    branches: [main]
jobs:
  test:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:15
        env:
          POSTGRES_PASSWORD: password
          POSTGRES_DB: test_db
        ports:
          - 5432:5432   # map the service port so tests can reach localhost:5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - uses: actions/checkout@v3
      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '18'
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Run linting
        run: npm run lint
      - name: Run tests
        run: npm test
        env:
          DATABASE_URL: postgresql://postgres:password@localhost:5432/test_db
      - name: Run security audit
        run: npm audit --audit-level=moderate
  build-and-deploy:
    needs: test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Login to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}
      - name: Build and push Docker image
        uses: docker/build-push-action@v4
        with:
          context: .
          push: true
          tags: |
            myapp/app:latest
            myapp/app:${{ github.sha }}
      - name: Deploy to server
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.SERVER_HOST }}
          username: ${{ secrets.SERVER_USERNAME }}
          key: ${{ secrets.SSH_PRIVATE_KEY }}
          script: |
            cd /var/www/myapp
            docker-compose pull
            docker-compose up -d
            docker system prune -f
      - name: Health check
        run: |
          sleep 10
          curl -f https://example.com/health || exit 1
      - name: Notify deployment
        if: success()
        run: |
          curl -X POST ${{ secrets.SLACK_WEBHOOK }} \
            -H 'Content-Type: application/json' \
            -d '{"text":"Deployment successful! 🚀"}'
Cloud Hosting Options
Popular platforms for Node.js deployment:
# 1. Heroku - simple PaaS
# Procfile
web: node app.js
# Deploy
heroku create my-app
git push heroku main
heroku config:set NODE_ENV=production
heroku ps:scale web=1
# 2. AWS EC2 - full control
# Install Node.js on Ubuntu
curl -fsSL https://deb.nodesource.com/setup_18.x | sudo -E bash -
sudo apt-get install -y nodejs
sudo npm install -g pm2
# Deploy with PM2
pm2 start ecosystem.config.js --env production
pm2 save
pm2 startup
# 3. DigitalOcean App Platform
# app.yaml
name: my-app
services:
  - name: web
    github:
      repo: username/repo
      branch: main
    build_command: npm install
    run_command: npm start
    envs:
      - key: NODE_ENV
        value: production
    http_port: 3000
// 4. Vercel - Serverless (Next.js optimized)
// vercel.json
{
  "version": 2,
  "builds": [
    { "src": "app.js", "use": "@vercel/node" }
  ],
  "routes": [
    { "src": "/(.*)", "dest": "/app.js" }
  ]
}
# Deploy
npm i -g vercel
vercel --prod
Database Migrations in Production
Safely manage database schema changes:
// migrations/001_create_users.js
exports.up = async (db) => {
  await db.schema.createTable('users', (table) => {
    table.increments('id').primary();
    table.string('email').unique().notNullable();
    table.string('password').notNullable();
    table.timestamps(true, true);
  });
};

exports.down = async (db) => {
  await db.schema.dropTable('users');
};
// migrate.js - migration runner (knex is assumed here; any query builder
// with a compatible schema API works)
const config = require('./config');
const db = require('knex')({ client: 'pg', connection: config.database.url });
const migrations = require('./migrations'); // index.js exporting an ordered array

async function runMigrations() {
  // Create the bookkeeping table on first run
  // (knex's createTableIfNotExists is deprecated, so check explicitly)
  if (!(await db.schema.hasTable('migrations'))) {
    await db.schema.createTable('migrations', (table) => {
      table.string('name').primary();
      table.timestamp('run_at').defaultTo(db.fn.now());
    });
  }

  // Get completed migrations
  const completed = await db('migrations').pluck('name');

  // Run pending migrations in order
  for (const migration of migrations) {
    if (!completed.includes(migration.name)) {
      console.log(`Running migration: ${migration.name}`);
      try {
        await migration.up(db);
        await db('migrations').insert({ name: migration.name });
        console.log(`✓ ${migration.name} completed`);
      } catch (error) {
        console.error(`✗ ${migration.name} failed:`, error);
        throw error;
      }
    }
  }
}

// Run before starting the app
runMigrations()
  .then(() => {
    console.log('All migrations completed');
    require('./app');
  })
  .catch((error) => {
    console.error('Migration failed:', error);
    process.exit(1);
  });
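One refinement worth considering: wrap each migration and its bookkeeping insert in a single transaction, so a crash cannot record a migration that only half-applied. A sketch against the knex-style API assumed above (PostgreSQL supports transactional DDL, so the schema change rolls back along with the bookkeeping row):

```javascript
// Run one migration atomically: the schema change and the bookkeeping row
// either both commit or both roll back.
async function runOneMigration(db, migration) {
  await db.transaction(async (trx) => {
    await migration.up(trx);
    await trx('migrations').insert({ name: migration.name });
  });
}

module.exports = runOneMigration;
```

Note this relies on each migration file accepting the transaction handle in place of the plain connection, which knex's schema builder supports.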
Zero-Downtime Deployment
Deploy updates without service interruption:
# 1. Blue-green deployment
# Run two identical environments; switch traffic once the new version is ready
# docker-compose-blue-green.yml
services:
  app-blue:
    image: myapp:1.0.0
    # ...
  app-green:
    image: myapp:1.1.0
    # ...
  nginx:
    # Switch the upstream to app-green when ready
    # ...

# 2. Rolling deployment with PM2
pm2 reload ecosystem.config.js --update-env
// 3. Graceful Shutdown
process.on('SIGTERM', () => {
  console.log('SIGTERM received, closing server...');
  // Stop accepting new connections
  server.close(async () => {
    console.log('Server closed');
    // Close database connections
    await db.destroy();
    // Close Redis connection
    await redis.quit();
    console.log('All connections closed');
    process.exit(0);
  });
  // Force shutdown after 30 seconds
  setTimeout(() => {
    console.error('Forced shutdown');
    process.exit(1);
  }, 30000);
});
Exercise: Deploy a Full-Stack Application
Deploy a Node.js application with the following requirements:
- Dockerize the application with multi-stage build
- Use Docker Compose with Node.js, PostgreSQL, and Redis
- Configure Nginx as reverse proxy with SSL
- Set up PM2 for process management
- Create a CI/CD pipeline with GitHub Actions
- Implement health checks and graceful shutdown
- Configure environment variables for different environments
Test zero-downtime deployment by updating the application while monitoring uptime.