Security & Performance
Performance Monitoring in Production
Production performance monitoring helps identify bottlenecks, track user experience, and prevent performance regressions before they impact users.
Real User Monitoring (RUM)
Measure actual user experience with RUM:
<!-- Include RUM script in layout -->
<script>
(function () {
    // Measure page load performance.
    // Note: performance.timing (Navigation Timing Level 1) is deprecated;
    // new code should read performance.getEntriesByType('navigation') instead.
    window.addEventListener('load', function () {
        // Defer one tick: loadEventEnd is still 0 while the load
        // handler itself is running.
        setTimeout(function () {
            if (window.performance && window.performance.timing) {
                var timing = window.performance.timing;
                var loadTime = timing.loadEventEnd - timing.navigationStart;
                var domReady = timing.domContentLoadedEventEnd - timing.navigationStart;
                var ttfb = timing.responseStart - timing.requestStart;

                // Send metrics to the backend
                fetch('/api/rum-metrics', {
                    method: 'POST',
                    headers: { 'Content-Type': 'application/json' },
                    body: JSON.stringify({
                        page: window.location.pathname,
                        load_time: loadTime,
                        dom_ready: domReady,
                        ttfb: ttfb,
                        user_agent: navigator.userAgent,
                        connection: navigator.connection?.effectiveType,
                    })
                });
            }
        }, 0);
    });
})();
</script>
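Since `performance.timing` is deprecated, the same measurement can be taken from the newer `PerformanceNavigationTiming` entry, where all timestamps are already relative to the navigation start. A minimal sketch, reusing the `/api/rum-metrics` endpoint from above; `navigator.sendBeacon` is used because a plain `fetch` fired while a page unloads can be dropped. The payload builder is kept as a pure function so it works outside a browser:

```javascript
// Build the RUM payload from a PerformanceNavigationTiming entry.
// Level 2 timestamps are relative to the time origin, so no
// navigationStart subtraction is needed.
function buildRumPayload(entry, pathname) {
  return {
    page: pathname,
    load_time: entry.loadEventEnd,              // ms since navigation start
    dom_ready: entry.domContentLoadedEventEnd,  // ms since navigation start
    ttfb: entry.responseStart - entry.requestStart,
  };
}

// Browser wiring (sketch): sendBeacon queues the POST even during unload.
function reportNavigationTiming() {
  const [entry] = performance.getEntriesByType('navigation');
  if (!entry) return;
  const payload = buildRumPayload(entry, window.location.pathname);
  navigator.sendBeacon('/api/rum-metrics', JSON.stringify(payload));
}
```

Call `reportNavigationTiming()` from a `load` (or `visibilitychange`) handler; the payload fields match what the backend above already accepts.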
Key RUM Metrics:
- TTFB: Time to First Byte (server response time)
- FCP: First Contentful Paint (first visible content)
- LCP: Largest Contentful Paint (main content visible)
- FID: First Input Delay (interactivity; superseded by INP, Interaction to Next Paint, as a Core Web Vital in 2024)
- CLS: Cumulative Layout Shift (visual stability)
Core Web Vitals Tracking
Monitor Google's Core Web Vitals:
<script type="module">
// Using the web-vitals library (v3+ API; resolve the bare 'web-vitals'
// specifier with a bundler or an import map)
import {onCLS, onFID, onFCP, onLCP, onTTFB} from 'web-vitals';

function sendToAnalytics(metric) {
    // Send to your analytics endpoint
    fetch('/api/web-vitals', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
            name: metric.name,
            value: metric.value,
            id: metric.id,
            delta: metric.delta,
            rating: metric.rating, // 'good', 'needs-improvement', 'poor'
        })
    });
}

onCLS(sendToAnalytics);
onFID(sendToAnalytics);
onFCP(sendToAnalytics);
onLCP(sendToAnalytics);
onTTFB(sendToAnalytics);
</script>
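The `rating` field follows Google's published Core Web Vitals thresholds. A minimal lookup reproducing them, useful for rating metrics server-side once they arrive at `/api/web-vitals` (values are the documented good / needs-improvement boundaries; CLS is a unitless score, the rest are milliseconds):

```javascript
// [good upper bound, poor lower bound] per metric
// (milliseconds, except CLS which is unitless)
const THRESHOLDS = {
  LCP:  [2500, 4000],
  FID:  [100, 300],
  INP:  [200, 500],   // replaced FID as a Core Web Vital in 2024
  CLS:  [0.1, 0.25],
  FCP:  [1800, 3000],
  TTFB: [800, 1800],
};

function rateMetric(name, value) {
  const [good, poor] = THRESHOLDS[name];
  if (value <= good) return 'good';
  if (value <= poor) return 'needs-improvement';
  return 'poor';
}
```

For example, an LCP of 2000 ms rates 'good', while a CLS of 0.2 rates 'needs-improvement'.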
Backend Performance Tracking
Laravel middleware to track request performance:
// app/Http/Middleware/PerformanceMonitoring.php
namespace App\Http\Middleware;

use Closure;
use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Log;

class PerformanceMonitoring
{
    public function handle($request, Closure $next)
    {
        $startTime = microtime(true);
        $startMemory = memory_get_usage();
        DB::enableQueryLog(); // the query log is off by default

        // Process request
        $response = $next($request);

        // Calculate metrics
        $duration = (microtime(true) - $startTime) * 1000; // ms
        $memoryUsed = memory_get_usage() - $startMemory;
        $queryCount = count(DB::getQueryLog());

        // Log slow requests
        if ($duration > 1000) { // > 1 second
            Log::warning('Slow request detected', [
                'url' => $request->fullUrl(),
                'method' => $request->method(),
                'duration_ms' => $duration,
                'memory_mb' => round($memoryUsed / 1024 / 1024, 2),
                'query_count' => $queryCount,
            ]);
        }

        // Add performance headers
        $response->header('X-Response-Time', round($duration, 2) . 'ms');
        $response->header('X-Query-Count', $queryCount);

        return $response;
    }
}
Synthetic Monitoring
Automated testing from multiple locations:
// Using the Pingdom API or a similar service
// config/monitoring.php
return [
    'synthetic_checks' => [
        [
            'name' => 'Homepage Load Time',
            'url' => 'https://example.com',
            'frequency' => 60, // seconds
            'locations' => ['US-East', 'EU-West', 'Asia-Pacific'],
            'threshold_ms' => 2000,
        ],
        [
            'name' => 'API Health Check',
            'url' => 'https://api.example.com/health',
            'frequency' => 30,
            'expected_status' => 200,
        ],
    ],
];

// Custom synthetic monitoring command
// app/Console/Commands/SyntheticMonitoring.php
namespace App\Console\Commands;

use Illuminate\Console\Command;
use Illuminate\Support\Facades\Http;
use Illuminate\Support\Facades\Log;

class SyntheticMonitoring extends Command
{
    protected $signature = 'monitor:synthetic';

    public function handle()
    {
        $checks = config('monitoring.synthetic_checks');

        foreach ($checks as $check) {
            $start = microtime(true);
            $response = Http::timeout(10)->get($check['url']);
            $duration = (microtime(true) - $start) * 1000;

            if (isset($check['threshold_ms']) && $duration > $check['threshold_ms']) {
                Log::warning('Performance threshold exceeded', [
                    'check' => $check['name'],
                    'duration_ms' => $duration,
                ]);
            }
        }
    }
}
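The same pattern can run outside the application, e.g. as a small Node script (18+, for built-in `fetch`) on a cron host near your users. A minimal sketch; the check shape mirrors config/monitoring.php, and the pass/fail decision is kept in a pure `evaluateCheck` function so it can be tested without network access:

```javascript
// Decide whether a completed check should raise an alert.
// Returns a list of failure descriptions; empty means the check passed.
function evaluateCheck(check, durationMs, status) {
  const failures = [];
  if (check.threshold_ms !== undefined && durationMs > check.threshold_ms) {
    failures.push(`slow: ${Math.round(durationMs)}ms > ${check.threshold_ms}ms`);
  }
  if (check.expected_status !== undefined && status !== check.expected_status) {
    failures.push(`status: got ${status}, expected ${check.expected_status}`);
  }
  return failures;
}

// Runner sketch: time each request and report failures.
async function runChecks(checks) {
  for (const check of checks) {
    const start = Date.now();
    const res = await fetch(check.url);
    const failures = evaluateCheck(check, Date.now() - start, res.status);
    if (failures.length) {
      console.warn(`[synthetic] ${check.name}: ${failures.join('; ')}`);
    }
  }
}
```

Wall-clock timing from a single process measures the whole round trip from that location only, which is why the config above spreads checks across regions.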
Performance Regression Detection
Detect performance degradation over time:
// Store baseline performance metrics
// database/migrations/xxxx_create_performance_baselines.php
Schema::create('performance_baselines', function (Blueprint $table) {
    $table->id();
    $table->string('endpoint');
    $table->decimal('p50_ms', 8, 2); // Median response time
    $table->decimal('p95_ms', 8, 2); // 95th percentile
    $table->decimal('p99_ms', 8, 2); // 99th percentile
    $table->integer('sample_size');
    $table->timestamp('measured_at');
});

// Regression detection algorithm
class PerformanceRegressionDetector
{
    public function detectRegression(string $endpoint, float $currentP95): bool
    {
        $baseline = PerformanceBaseline::where('endpoint', $endpoint)
            ->latest('measured_at')
            ->first();

        if (!$baseline) {
            return false;
        }

        // Alert if current p95 is 50% worse than baseline
        $threshold = $baseline->p95_ms * 1.5;

        if ($currentP95 > $threshold) {
            $this->sendAlert([
                'endpoint' => $endpoint,
                'baseline_p95' => $baseline->p95_ms,
                'current_p95' => $currentP95,
                'degradation_percent' => (($currentP95 / $baseline->p95_ms) - 1) * 100,
            ]);
            return true;
        }

        return false;
    }
}
Common Pitfalls: Don't monitor only averages (use percentiles), avoid sampling bias, monitor during peak hours, and account for geographic variations.
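Percentiles are cheap to compute app-side when the database lacks a percentile aggregate. A nearest-rank sketch; the same method can populate the p50/p95/p99 baseline columns above:

```javascript
// Nearest-rank percentile: p in (0, 100]; samples need not be pre-sorted.
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

// Example: one slow outlier barely moves the median but dominates the tail.
const durations = [120, 95, 400, 130, 110, 2100, 105, 140, 90, 100];
const stats = {
  p50: percentile(durations, 50),  // 110
  p95: percentile(durations, 95),  // 2100
  p99: percentile(durations, 99),  // 2100
};
```

The example illustrates the pitfall directly: the average of these samples is ~339 ms, which neither describes the typical request (p50 = 110 ms) nor the worst experiences (p95 = 2100 ms).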
A/B Testing Performance
Compare performance between variants:
// Middleware to track variant performance
class ABTestPerformance
{
    public function handle($request, Closure $next)
    {
        $variant = $request->cookie('ab_variant', 'control');
        $startTime = microtime(true);

        $response = $next($request);

        $duration = (microtime(true) - $startTime) * 1000;

        // Log performance by variant (consider sampling on high-traffic
        // sites: one insert per request adds load of its own)
        DB::table('ab_performance')->insert([
            'variant' => $variant,
            'endpoint' => $request->path(),
            'duration_ms' => $duration,
            'created_at' => now(),
        ]);

        return $response;
    }
}

// Analyze variant performance
// (PERCENTILE_CONT is supported by PostgreSQL; MySQL has no built-in
// percentile aggregate, so compute percentiles in application code there)
$analysis = DB::table('ab_performance')
    ->select(
        'variant',
        DB::raw('AVG(duration_ms) as avg_duration'),
        DB::raw('PERCENTILE_CONT(0.95) WITHIN GROUP (ORDER BY duration_ms) as p95')
    )
    ->where('created_at', '>', now()->subDays(7))
    ->groupBy('variant')
    ->get();
Database Query Performance
Track slow queries in production:
// app/Providers/AppServiceProvider.php
use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Log;

public function boot()
{
    DB::listen(function ($query) {
        if ($query->time > 1000) { // $query->time is in ms, so > 1 second
            Log::warning('Slow query detected', [
                'sql' => $query->sql,
                'bindings' => $query->bindings,
                'time_ms' => $query->time,
            ]);
        }
    });
}
Client-Side Performance Budgets
Enforce performance budgets in CI/CD:
// performance-budget.json (Lighthouse budget format: resource size
// budgets are in KiB, timing budgets in ms; JSON itself cannot hold comments)
[
  {
    "resourceSizes": [
      { "resourceType": "script", "budget": 300 },
      { "resourceType": "stylesheet", "budget": 50 },
      { "resourceType": "image", "budget": 500 }
    ],
    "timings": [
      { "metric": "interactive", "budget": 3000 },
      { "metric": "first-contentful-paint", "budget": 1500 }
    ]
  }
]

# Use Lighthouse CI in the pipeline
npm install -g @lhci/cli
lhci autorun --config=lighthouserc.json
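A budget file is only useful if something fails the build when it is exceeded. Lighthouse CI does this for you; for builds that bypass it, a minimal checker sketch is shown below. The budget shape follows the Lighthouse format above, while `measuredKib` is an illustrative stand-in for sizes read from your build output:

```javascript
// Compare measured resource sizes (KiB) against resourceSizes budgets.
// Returns a list of violation messages; empty means the budget passes.
function checkResourceBudgets(resourceSizes, measuredKib) {
  const violations = [];
  for (const { resourceType, budget } of resourceSizes) {
    const actual = measuredKib[resourceType];
    if (actual !== undefined && actual > budget) {
      violations.push(
        `${resourceType}: ${actual} KiB exceeds budget of ${budget} KiB`
      );
    }
  }
  return violations;
}

const budget = [
  { resourceType: 'script', budget: 300 },
  { resourceType: 'stylesheet', budget: 50 },
];
const measured = { script: 340, stylesheet: 42 }; // from your bundler's stats
const violations = checkResourceBudgets(budget, measured);
// One violation: script is 40 KiB over budget
```

In CI, exit non-zero when `violations.length > 0` so an oversized bundle blocks the merge rather than reaching production.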
Best Practice: Set performance budgets early, track trends over time, monitor percentiles (p50, p95, p99), and correlate performance with business metrics (conversion, bounce rate).
Exercise: Implement RUM tracking for your application's main pages. Measure Core Web Vitals for one week. Identify the slowest 5% of requests and investigate root causes. Set a performance baseline for critical endpoints.