Redis Performance Optimization
Redis is designed for high performance, but proper configuration and best practices are essential to maximize throughput, minimize latency, and optimize memory usage in production environments.
Memory Optimization Strategies
Memory is Redis's most critical resource. Efficient memory usage directly impacts performance and cost:
- Use appropriate data structures: Hashes for objects, Sets for unique items, Sorted Sets with scores
- Enable compression: Use Redis's built-in encoding for small collections
- Set expiration times: Prevent memory leaks with TTL
- Use memory-efficient encodings: ziplist, intset, listpack
- Avoid large keys: Split large values into smaller chunks
- Monitor memory fragmentation: if mem_fragmentation_ratio exceeds 1.5, enable active defragmentation (activedefrag yes) or restart Redis
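The last point — splitting large values into smaller chunks — can be sketched in plain Python. The `mykey:chunk:<n>` key scheme and the helper names below are illustrative conventions, not anything Redis prescribes:

```python
def split_value(key: str, value: bytes, chunk_size: int = 64 * 1024):
    """Split a large value into (key, chunk) pairs of at most chunk_size bytes."""
    return [
        (f"{key}:chunk:{i}", value[off:off + chunk_size])
        for i, off in enumerate(range(0, len(value), chunk_size))
    ]

def join_value(chunks):
    """Reassemble chunks produced by split_value (assumes original order)."""
    return b"".join(chunk for _, chunk in chunks)

parts = split_value("cache:homepage", b"x" * 150_000, chunk_size=64 * 1024)
# 150,000 bytes at 65,536 bytes per chunk -> 3 chunks
assert len(parts) == 3
assert join_value(parts) == b"x" * 150_000
```

Each pair would then be written with its own SET (ideally in a pipeline), keeping every individual key well under the size that slows down replication and blocking commands.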
# Total memory usage
INFO memory
# Memory usage by key
MEMORY USAGE mykey
# Sample output:
# used_memory_human:2.50M
# used_memory_peak_human:3.12M
# mem_fragmentation_ratio:1.23
# Find largest keys
redis-cli --bigkeys
# Detailed memory analysis
redis-cli --memkeys --memkeys-samples 1000
Key Naming Conventions
Consistent key naming improves organization, performance, and memory efficiency:
# Format: object:id:field or namespace:object:id
user:1000:name
user:1000:email
user:1000:settings
session:abc123:data
cache:homepage:html
rate_limit:api:user:1000
# Use colons as separators (Redis convention)
# Namespaces help organize and pattern match
# Shorter keys = less memory usage
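A tiny helper can enforce the convention consistently across an application. `make_key` is a hypothetical name used only to illustrate the scheme:

```python
def make_key(*parts) -> str:
    """Join key segments with the conventional ':' separator."""
    return ":".join(str(p) for p in parts)

assert make_key("user", 1000, "name") == "user:1000:name"
assert make_key("rate_limit", "api", "user", 1000) == "rate_limit:api:user:1000"
```

Centralizing key construction like this also makes it trivial to change a namespace or add a prefix later without hunting down string literals.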
SCAN vs KEYS - Critical Performance Difference
Never use KEYS in production - it blocks Redis while scanning all keys:
❌ Bad - Blocking:
KEYS user:*
# Blocks entire server until complete
# O(N) complexity where N = total keys
✅ Good - Non-blocking:
SCAN 0 MATCH user:* COUNT 100
# Returns cursor + batch of keys
# Non-blocking, can be paginated
# O(1) per call
# SCAN iteration example (bash)
cursor="0"
while true; do
  # Quote the pattern so the shell does not glob-expand it
  result=$(redis-cli SCAN "$cursor" MATCH "user:*" COUNT 100)
  cursor=$(echo "$result" | head -1)
  keys=$(echo "$result" | tail -n +2)
  echo "$keys"
  [ "$cursor" = "0" ] && break
done
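The cursor contract can also be illustrated without a server. The in-memory `scan` below is a deliberately simplified model of SCAN's behavior — real SCAN walks a hash table with a reverse-binary cursor, and COUNT is only a hint — but the caller-side loop is identical:

```python
import fnmatch

def scan(keys, cursor, match="*", count=100):
    """Toy model of SCAN: return (next_cursor, batch). Cursor 0 ends iteration."""
    batch = [k for k in keys[cursor:cursor + count] if fnmatch.fnmatch(k, match)]
    next_cursor = cursor + count
    return (0 if next_cursor >= len(keys) else next_cursor), batch

keys = [f"user:{i}" for i in range(250)] + ["session:abc"]
cursor, seen = 0, []
while True:
    cursor, batch = scan(keys, cursor, match="user:*", count=100)
    seen.extend(batch)
    if cursor == 0:
        break
assert len(seen) == 250  # every user:* key visited, in small batches
```

The key property the loop relies on is that each call returns quickly with a small batch plus a cursor, so other clients are never blocked between calls.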
Pipeline Optimization
Pipelining reduces network round trips by batching multiple commands:
Without Pipeline (Slow):
# 1000 commands = 1000 round trips
for i in range(1000):
    redis.set(f"key:{i}", f"value{i}")
# ~500ms with 0.5ms latency per command
With Pipeline (Fast):
# 1000 commands = 1 round trip
pipe = redis.pipeline()
for i in range(1000):
    pipe.set(f"key:{i}", f"value{i}")
pipe.execute()
# ~10ms total
PHP Pipeline Example:
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);
$pipe = $redis->multi(Redis::PIPELINE);
for ($i = 0; $i < 1000; $i++) {
    $pipe->set("key:$i", "value$i");
}
$pipe->exec();
Node.js Pipeline Example:
const pipeline = redis.pipeline();
for (let i = 0; i < 1000; i++) {
  pipeline.set(`key:${i}`, `value${i}`);
}
await pipeline.exec();
Connection Pooling
Reuse connections instead of creating new ones for each request:
// config/database.php (Laravel)
'redis' => [
    'client' => 'predis',
    'options' => [
        'cluster' => env('REDIS_CLUSTER', 'redis'),
        'prefix' => env('REDIS_PREFIX', Str::slug(env('APP_NAME', 'laravel'), '_').'_database_'),
    ],
    'default' => [
        'host' => env('REDIS_HOST', '127.0.0.1'),
        'password' => env('REDIS_PASSWORD'),
        'port' => env('REDIS_PORT', 6379),
        'database' => env('REDIS_DB', 0),
        'persistent' => true, // Enable persistent connections
    ],
],
const Redis = require('ioredis');
const redis = new Redis({
  host: '127.0.0.1',
  port: 6379,
  maxRetriesPerRequest: 3,
  enableReadyCheck: true,
  lazyConnect: false,
  keepAlive: 30000 // Keep connections alive (ms)
});
// Reuse this connection throughout your app
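The idea behind every client's pool can be sketched in a few lines of plain Python. This is a generic illustration, not redis-py's actual pool; the `factory` callable is a stand-in for whatever creates a real connection:

```python
import queue

class ConnectionPool:
    """Minimal fixed-size pool: acquire blocks when all connections are in use."""
    def __init__(self, factory, size=10):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(factory())  # pre-create all connections up front

    def acquire(self, timeout=5):
        return self._pool.get(timeout=timeout)

    def release(self, conn):
        self._pool.put(conn)

# Stand-in factory; in practice this would open a real client connection
pool = ConnectionPool(factory=object, size=1)
c1 = pool.acquire()
pool.release(c1)
assert pool.acquire() is c1  # the connection is reused, not recreated
```

Reusing connections this way avoids paying the TCP handshake (and any AUTH/SELECT round trips) on every request.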
Memory Analysis Tools
Identify memory bottlenecks and optimize storage:
redis-cli --bigkeys
# Output:
# Biggest string found: "cache:homepage" has 2048576 bytes
# Biggest list found: "queue:jobs" has 50000 items
# Biggest hash found: "user:1000" has 1000 fields
MEMORY DOCTOR:
MEMORY DOCTOR
# Provides memory optimization suggestions
MEMORY STATS:
MEMORY STATS
# Detailed memory allocation statistics
# Install RMA
pip install rma
# Generate memory report
rma -s 127.0.0.1 -p 6379
# Output: Detailed breakdown of memory by key pattern
Eviction Policies
Configure Redis to automatically remove keys when memory limit is reached:
# Set maximum memory limit
maxmemory 2gb
# Eviction policy options:
maxmemory-policy allkeys-lru
# Policies:
# noeviction: Return error when memory limit reached
# allkeys-lru: Remove least recently used keys
# allkeys-lfu: Remove least frequently used keys
# allkeys-random: Remove random keys
# volatile-lru: Remove LRU keys with TTL set
# volatile-lfu: Remove LFU keys with TTL set
# volatile-random: Remove random keys with TTL
# volatile-ttl: Remove keys with shortest TTL
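allkeys-lru semantics can be sketched with an OrderedDict: reads refresh recency, and a write that exceeds the limit evicts the least recently used entry. This illustrates the policy only — Redis itself uses approximate LRU via random sampling, and its limit is bytes (maxmemory), not a key count:

```python
from collections import OrderedDict

class LRUStore:
    """Toy allkeys-lru: evicts the least recently used key past max_keys."""
    def __init__(self, max_keys):
        self.max_keys = max_keys
        self.data = OrderedDict()

    def get(self, key):
        if key in self.data:
            self.data.move_to_end(key)  # mark as most recently used
        return self.data.get(key)

    def set(self, key, value):
        self.data[key] = value
        self.data.move_to_end(key)
        if len(self.data) > self.max_keys:
            self.data.popitem(last=False)  # evict least recently used

store = LRUStore(max_keys=2)
store.set("a", 1)
store.set("b", 2)
store.get("a")       # touch "a", so "b" is now least recently used
store.set("c", 3)    # over the limit: "b" is evicted
assert store.get("b") is None
assert store.get("a") == 1
```

The volatile-* variants apply the same idea but only consider keys that have a TTL set.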
Persistence Trade-offs
Balance durability with performance by choosing the right persistence strategy:
# No persistence (fastest, no durability)
save ""
appendonly no
# Best for: Pure cache, temporary data
# RDB only (good performance, some data loss)
save 900 1
save 300 10
save 60 10000
appendonly no
# Best for: Tolerable data loss (minutes)
# AOF with fsync everysec (balanced)
appendonly yes
appendfsync everysec
# Best for: Production with minimal data loss
# AOF with fsync always (slowest, most durable)
appendonly yes
appendfsync always
# Best for: Critical data requiring no loss
Slow Query Logging
Identify and optimize slow commands:
# redis.conf
# Log queries slower than 10ms
slowlog-log-slower-than 10000
# Keep last 128 slow queries
slowlog-max-len 128
View Slow Log:
# Get last 10 slow queries
SLOWLOG GET 10
# Example output:
1) 1) (integer) 12           # entry ID
   2) (integer) 1634567890   # unix timestamp
   3) (integer) 15234        # duration in microseconds
   4) 1) "KEYS"              # command and arguments
      2) "user:*"
# Reset slow log
SLOWLOG RESET
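Each entry's core fields are (id, unix timestamp, duration in microseconds, command with arguments); newer Redis versions append client address and name. A small helper — a hypothetical convenience, not part of any client library — makes entries easier to read:

```python
def format_slowlog_entry(entry):
    """Render one SLOWLOG GET entry as a human-readable line."""
    entry_id, timestamp, duration_us, command = entry[:4]
    return (f"#{entry_id} at {timestamp}: "
            f"{' '.join(command)} took {duration_us / 1000:.1f} ms")

line = format_slowlog_entry((12, 1634567890, 15234, ["KEYS", "user:*"]))
assert line == "#12 at 1634567890: KEYS user:* took 15.2 ms"
```

Note the duration field is microseconds, matching the slowlog-log-slower-than unit, so dividing by 1000 yields milliseconds.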
Latency Monitoring
Track and diagnose latency spikes:
# Monitor events taking >100ms
CONFIG SET latency-monitor-threshold 100
# View latency events
LATENCY LATEST
LATENCY HISTORY command
LATENCY DOCTOR
# Latency graph
LATENCY GRAPH command
Configuration Tuning
Optimize Redis configuration for your workload:
# redis.conf
# Disable transparent huge pages (Linux)
# echo never > /sys/kernel/mm/transparent_hugepage/enabled
# TCP backlog
tcp-backlog 511
# Max clients
maxclients 10000
# Timeout idle connections
timeout 300
# TCP keepalive
tcp-keepalive 300
# Faster replication
repl-diskless-sync yes
repl-diskless-sync-delay 5
# Lazy freeing (non-blocking deletes)
lazyfree-lazy-eviction yes
lazyfree-lazy-expire yes
lazyfree-lazy-server-del yes
Best Practices Summary
- ✅ Use pipelining for bulk operations
- ✅ Use SCAN instead of KEYS
- ✅ Enable connection pooling
- ✅ Set appropriate TTLs to prevent memory leaks
- ✅ Use efficient data structures (Hashes > Strings for objects)
- ✅ Monitor memory with --bigkeys
- ✅ Configure maxmemory and eviction policy
- ✅ Enable lazy freeing for non-blocking deletes
- ✅ Monitor slow log and optimize slow queries
- ✅ Use shorter key names to save memory
- ✅ Disable persistence for pure cache workloads
- ✅ Use AOF with everysec for durability