GraphQL
Deployment and Production
Deploying a GraphQL API to production requires careful planning and configuration to ensure security, performance, and reliability. This lesson covers deployment strategies, production best practices, and cloud hosting options.
Production Environment Configuration
Before deploying, configure your application for production:
// .env.production
NODE_ENV=production
PORT=4000
DATABASE_URL=postgresql://user:password@host:5432/dbname
REDIS_URL=redis://host:6379
JWT_SECRET=your-production-secret-here
CORS_ORIGIN=https://yourdomain.com
RATE_LIMIT_MAX=100
INTROSPECTION_ENABLED=false
PLAYGROUND_ENABLED=false
Security Alert: Never commit production secrets to version control. Use environment variables or secret management services like AWS Secrets Manager or HashiCorp Vault.
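Misconfiguration is easiest to catch at boot. A minimal startup check (variable names here are examples) can refuse to start when a required variable is unset or empty:

```javascript
// Returns the names of required environment variables that are unset
// or empty. Unset and empty-string values both count as missing.
function missingEnvVars(required, env = process.env) {
  return required.filter((name) => !env[name]);
}

// Call before creating the server, e.g.:
// const missing = missingEnvVars(['DATABASE_URL', 'JWT_SECRET', 'CORS_ORIGIN']);
// if (missing.length > 0) {
//   console.error(`Missing env vars: ${missing.join(', ')}`);
//   process.exit(1);
// }
```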
Deploying Apollo Server to Node.js Hosting
Deploy to traditional Node.js hosting platforms:
// server.js - Production configuration
const { ApolloServer } = require('apollo-server-express');
const {
  ApolloServerPluginLandingPageDisabled,
  ApolloServerPluginInlineTrace,
} = require('apollo-server-core');
const express = require('express');
const helmet = require('helmet');
const compression = require('compression');
const typeDefs = require('./schema');
const resolvers = require('./resolvers');
const { pool, redisClient } = require('./datasources');

const app = express();

// Security middleware
app.use(helmet({
  contentSecurityPolicy: process.env.NODE_ENV === 'production',
  crossOriginEmbedderPolicy: false,
}));

// Compression middleware
app.use(compression());

// Health check endpoint
app.get('/health', (req, res) => {
  res.status(200).json({ status: 'ok', timestamp: new Date() });
});

const server = new ApolloServer({
  typeDefs,
  resolvers,
  context: ({ req }) => ({
    user: req.user,
    dataSources: {
      db: pool,
      cache: redisClient,
    },
  }),
  introspection: process.env.INTROSPECTION_ENABLED === 'true',
  // Apollo Server 3 removed the `playground` option; the landing page
  // plugin below replaces it.
  plugins: [
    ApolloServerPluginLandingPageDisabled(),
    ApolloServerPluginInlineTrace(),
  ],
  formatError: (error) => {
    // Hide internal error details in production
    if (process.env.NODE_ENV === 'production') {
      console.error(error);
      return new Error('Internal server error');
    }
    return error;
  },
});

async function start() {
  // server.start() must complete before applyMiddleware (Apollo Server 3)
  await server.start();
  server.applyMiddleware({ app, path: '/graphql' });
  const PORT = process.env.PORT || 4000;
  app.listen(PORT, () => {
    console.log(`🚀 Server ready at http://localhost:${PORT}${server.graphqlPath}`);
  });
}

start();
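One production detail the snippet above omits is graceful shutdown: when the platform sends SIGTERM, the server should stop accepting new connections and drain in-flight requests before closing database pools and cache clients. A sketch (the `httpServer` and `closeables` names are hypothetical):

```javascript
// Stop accepting connections, let in-flight requests drain, then close
// pools/clients. `closeables` is any list of objects with a close() method.
function gracefulShutdown({ httpServer, closeables = [] }) {
  return new Promise((resolve, reject) => {
    httpServer.close(async (err) => {
      if (err) return reject(err);
      for (const resource of closeables) {
        await resource.close();
      }
      resolve();
    });
  });
}

// Orchestrators (Docker, Kubernetes, Heroku) send SIGTERM before killing:
// process.on('SIGTERM', () => {
//   gracefulShutdown({ httpServer, closeables: [pool, redisClient] })
//     .then(() => process.exit(0))
//     .catch(() => process.exit(1));
// });
```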
Serverless GraphQL with AWS Lambda
Deploy GraphQL as a serverless function:
// lambda.js
const { ApolloServer } = require('apollo-server-lambda');
const typeDefs = require('./schema');
const resolvers = require('./resolvers');

const server = new ApolloServer({
  typeDefs,
  resolvers,
  context: ({ event, context }) => ({
    headers: event.headers,
    functionName: context.functionName,
    event,
    context,
  }),
  introspection: false,
});

exports.handler = server.createHandler({
  // Apollo Server 3 routes CORS options through the underlying
  // Express middleware it builds for the Lambda handler.
  expressGetMiddlewareOptions: {
    cors: {
      origin: 'https://yourdomain.com',
      credentials: true,
    },
  },
});
# serverless.yml
service: graphql-api

provider:
  name: aws
  runtime: nodejs18.x
  region: us-east-1
  environment:
    DATABASE_URL: ${env:DATABASE_URL}
    JWT_SECRET: ${env:JWT_SECRET}

functions:
  graphql:
    handler: lambda.handler
    events:
      - http:
          path: graphql
          method: post
          cors: true
      - http:
          path: graphql
          method: get
          cors: true
Serverless Benefits: Pay only for what you use, automatic scaling, zero server maintenance. Ideal for APIs with variable traffic patterns.
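One serverless-specific pattern worth noting: initialize expensive resources (database pools, Redis clients) in module scope rather than inside the handler, so warm invocations reuse the connection instead of reconnecting on every request. A minimal sketch, where `createPool` stands in for whatever factory your database client provides:

```javascript
// Module-scope cache: survives between warm Lambda invocations,
// so the connection is established only once per container.
let cachedPool = null;

function getPool(createPool) {
  // `createPool` is a hypothetical factory, e.g. () => new Pool(config)
  if (!cachedPool) {
    cachedPool = createPool();
  }
  return cachedPool;
}

// Inside a resolver or handler:
// const db = getPool(() => new Pool({ connectionString: process.env.DATABASE_URL }));
```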
Docker Deployment
Containerize your GraphQL API with Docker:
# Dockerfile
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY . .
FROM node:18-alpine
WORKDIR /app
COPY --from=builder /app .
ENV NODE_ENV=production
EXPOSE 4000
USER node
CMD ["node", "server.js"]
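The Dockerfile above copies the entire build context into the image. A .dockerignore keeps secrets and development artifacts out of that context; the entries below are typical examples, adjust for your project:

```
# .dockerignore
node_modules
.env*
.git
coverage
*.md
```

Excluding `.env*` files matters for the same reason as the secrets warning earlier: environment files should reach the container via the orchestrator, never be baked into the image.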
# docker-compose.yml
version: '3.8'

services:
  graphql-api:
    build: .
    ports:
      - "4000:4000"
    environment:
      - DATABASE_URL=postgresql://postgres:password@db:5432/mydb
      - REDIS_URL=redis://redis:6379
    depends_on:
      - db
      - redis

  db:
    image: postgres:15-alpine
    environment:
      POSTGRES_DB: mydb
      POSTGRES_PASSWORD: password
    volumes:
      - postgres_data:/var/lib/postgresql/data

  redis:
    image: redis:7-alpine
    volumes:
      - redis_data:/data

volumes:
  postgres_data:
  redis_data:
CDN and Caching Strategy
Implement caching headers for GET requests:
const express = require('express');
const app = express();

// Cache control for GraphQL GET requests
app.get('/graphql', (req, res, next) => {
  if (req.query.query && !req.query.query.includes('mutation')) {
    // Cache read-only queries for 5 minutes
    res.set('Cache-Control', 'public, max-age=300');
  } else {
    res.set('Cache-Control', 'no-store');
  }
  next();
});

// Use CDN for static assets
app.use(express.static('public', {
  maxAge: '1y',
  etag: true,
}));
CDN Integration: Use services like CloudFlare, AWS CloudFront, or Fastly to cache GraphQL responses at edge locations worldwide, reducing latency for global users.
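Apollo Server can also compute cache lifetimes from the schema itself via its @cacheControl directive, which feeds the response's Cache-Control header per query. A sketch (Apollo Server 3 requires declaring the directive and scope enum in your SDL, as shown; the types are illustrative):

```graphql
enum CacheControlScope {
  PUBLIC
  PRIVATE
}

directive @cacheControl(
  maxAge: Int
  scope: CacheControlScope
  inheritMaxAge: Boolean
) on FIELD_DEFINITION | OBJECT | INTERFACE | UNION

type Post @cacheControl(maxAge: 300) {
  id: ID!
  title: String
}

type Query {
  posts: [Post] @cacheControl(maxAge: 300, scope: PUBLIC)
}
```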
Production Checklist
Pre-Deployment Checklist:
- ✅ Disable introspection and playground in production
- ✅ Implement rate limiting and query complexity limits
- ✅ Set up logging and error monitoring (Sentry, LogRocket)
- ✅ Configure CORS with specific allowed origins
- ✅ Use HTTPS/TLS for all traffic
- ✅ Implement database connection pooling
- ✅ Set up health check endpoints
- ✅ Configure automatic backups for database
- ✅ Test with production-like data volume
- ✅ Document API with schema documentation
- ✅ Set up CI/CD pipeline for automated deployments
- ✅ Configure environment variables securely
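The query complexity item in the checklist deserves a concrete illustration. In production you would typically add a validation rule from a package such as graphql-depth-limit, but the core idea, bounding how deeply selections nest, can be shown with a naive standalone counter (a sketch only: it ignores braces inside string literals and comments):

```javascript
// Naive selection-set depth counter -- illustrative only; real servers
// should use a proper validation rule (e.g. the graphql-depth-limit package).
function queryDepth(query) {
  let depth = 0;
  let max = 0;
  for (const ch of query) {
    if (ch === '{') {
      depth += 1;
      if (depth > max) max = depth;
    } else if (ch === '}') {
      depth -= 1;
    }
  }
  return max;
}

// A server could reject abusive queries before executing them:
// if (queryDepth(req.body.query) > 7) return res.status(400).end();
```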
Monitoring and Observability
// Using Apollo Studio for monitoring
// (requires APOLLO_KEY / APOLLO_GRAPH_REF environment variables)
const { ApolloServer } = require('apollo-server');
const { ApolloServerPluginUsageReporting } = require('apollo-server-core');

const server = new ApolloServer({
  typeDefs,
  resolvers,
  plugins: [
    ApolloServerPluginUsageReporting({
      sendVariableValues: { none: true },  // never report variable values
      sendHeaders: { none: true },         // never report request headers
    }),
  ],
});

// Custom metrics with Prometheus (npm package: prom-client)
const prometheus = require('prom-client');
const register = new prometheus.Registry();

const httpRequestDuration = new prometheus.Histogram({
  name: 'http_request_duration_seconds',
  help: 'Duration of HTTP requests in seconds',
  labelNames: ['method', 'route', 'status_code'],
  registers: [register],
});

// Expose metrics for scraping on an Express app
const express = require('express');
const app = express();

app.get('/metrics', async (req, res) => {
  res.set('Content-Type', register.contentType);
  res.end(await register.metrics());
});
Performance: Monitor query execution times, error rates, and server resources. Set up alerts for anomalies to catch issues before they impact users.
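A histogram like the one above only fills up if something records durations into it. A generic wrapper for timing individual resolvers is one way; in this sketch `record` is whatever observer you choose (it could forward to the histogram's observe method):

```javascript
// Wrap a resolver so its duration is reported to `record(name, seconds)`.
// `record` is a hypothetical callback supplied by the caller.
function timedResolver(name, resolver, record) {
  return async (...args) => {
    const start = process.hrtime.bigint();
    try {
      return await resolver(...args);
    } finally {
      const seconds = Number(process.hrtime.bigint() - start) / 1e9;
      record(name, seconds);
    }
  };
}

// Usage: resolvers.Query.user = timedResolver('Query.user', userResolver, record);
```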
Scaling Strategies
Horizontal scaling with load balancer:
# nginx.conf - Load balancer configuration
upstream graphql_backend {
    least_conn;
    server graphql-server-1:4000;
    server graphql-server-2:4000;
    server graphql-server-3:4000;
}

server {
    listen 80;
    server_name api.yourdomain.com;

    location /graphql {
        proxy_pass http://graphql_backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
Auto-Scaling: Use Kubernetes or cloud auto-scaling groups to automatically add/remove server instances based on traffic patterns. Set CPU and memory thresholds for scaling triggers.
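As a concrete example of the Kubernetes option, a HorizontalPodAutoscaler can scale a Deployment on CPU utilization. The manifest below is a sketch assuming a Deployment named graphql-api already exists; replicas and the 70% threshold are illustrative:

```yaml
# hpa.yaml - scale the graphql-api Deployment on average CPU utilization
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: graphql-api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: graphql-api
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```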