Performance Optimization Project (Part 2)
Building on the fundamentals covered in Part 1, this lesson explores advanced performance optimization techniques, automation strategies, and comprehensive monitoring approaches. We'll implement sophisticated optimization patterns, establish performance testing pipelines, and create a culture of performance awareness within your development workflow.
Advanced Code Splitting Strategies
Code splitting goes beyond basic route-level splitting. Modern applications require sophisticated strategies to optimize bundle delivery and execution timing. Let's explore advanced patterns that dramatically improve initial load times and runtime performance.
```jsx
// React component lazy loading
import { lazy, Suspense } from 'react';

const HeavyChart = lazy(() => import('./components/HeavyChart'));
const DataTable = lazy(() => import('./components/DataTable'));
const VideoPlayer = lazy(() => import('./components/VideoPlayer'));

function Dashboard() {
  return (
    <Suspense fallback={<LoadingSpinner />}>
      <HeavyChart data={chartData} />
      <DataTable rows={tableData} />
      <VideoPlayer src={videoUrl} />
    </Suspense>
  );
}

// Vue 2 async component with loading and error states
const AsyncComponent = () => ({
  component: import('./components/HeavyComponent.vue'),
  loading: LoadingComponent,
  error: ErrorComponent,
  delay: 200,
  timeout: 10000
});
```

```javascript
// Prefetch - loads during browser idle time
const CalendarModule = () => import(
  /* webpackPrefetch: true */
  /* webpackChunkName: "calendar" */
  './modules/Calendar'
);

// Preload - loads in parallel with the parent chunk
const CriticalModule = () => import(
  /* webpackPreload: true */
  /* webpackChunkName: "critical" */
  './modules/Critical'
);

// Conditional loading based on viewport width
if (window.innerWidth > 768) {
  import(/* webpackChunkName: "desktop-features" */ './desktop-features');
} else {
  import(/* webpackChunkName: "mobile-features" */ './mobile-features');
}
```

Memory Profiling and Leak Detection
Memory leaks are silent killers of web application performance. They gradually degrade user experience and can cause crashes on long-running applications. Let's implement comprehensive memory profiling strategies to detect and eliminate leaks.
```javascript
class MemoryProfiler {
  constructor() {
    this.snapshots = [];
    this.baseline = null;
  }

  takeSnapshot(label) {
    // performance.memory is a non-standard, Chromium-only API
    if (performance.memory) {
      const snapshot = {
        label,
        timestamp: Date.now(),
        usedJSHeapSize: performance.memory.usedJSHeapSize,
        totalJSHeapSize: performance.memory.totalJSHeapSize,
        jsHeapSizeLimit: performance.memory.jsHeapSizeLimit
      };
      this.snapshots.push(snapshot);
      return snapshot;
    }
    return null;
  }

  setBaseline() {
    this.baseline = this.takeSnapshot('baseline');
  }

  getMemoryGrowth() {
    if (!this.baseline || this.snapshots.length < 2) return null;
    const latest = this.snapshots[this.snapshots.length - 1];
    return {
      absolute: latest.usedJSHeapSize - this.baseline.usedJSHeapSize,
      percentage: ((latest.usedJSHeapSize - this.baseline.usedJSHeapSize) / this.baseline.usedJSHeapSize) * 100,
      duration: latest.timestamp - this.baseline.timestamp
    };
  }

  detectLeak(threshold = 10) {
    const growth = this.getMemoryGrowth();
    if (growth && growth.percentage > threshold) {
      console.warn('Potential memory leak detected!', growth);
      return true;
    }
    return false;
  }
}

// Usage
const profiler = new MemoryProfiler();
profiler.setBaseline();

// After user interactions
setTimeout(() => {
  profiler.takeSnapshot('after-interaction');
  profiler.detectLeak();
}, 5000);
```

```javascript
class DOMLeakDetector {
  constructor() {
    this.nodeCount = 0;
    this.listenerCount = 0;
  }

  countNodes() {
    return document.getElementsByTagName('*').length;
  }

  trackListeners() {
    // Wrap addEventListener to track registered listeners. This patches a
    // global prototype, so enable it in development builds only.
    const original = EventTarget.prototype.addEventListener;
    const listeners = new Map();

    EventTarget.prototype.addEventListener = function(type, listener, options) {
      if (!listeners.has(this)) {
        listeners.set(this, []);
      }
      listeners.get(this).push({ type, listener, options });
      return original.call(this, type, listener, options);
    };

    return listeners;
  }

  detectDetachedNodes() {
    // Check for nodes outside <body>. Caveat: querySelectorAll only returns
    // nodes still attached to the document, so this flags <head> content and
    // stray nodes appended to <html>; truly detached subtrees kept alive only
    // by JavaScript references require a DevTools heap snapshot to find.
    const detachedNodes = [];
    const allNodes = document.querySelectorAll('*');

    allNodes.forEach(node => {
      if (!document.body.contains(node)) {
        detachedNodes.push(node);
      }
    });

    if (detachedNodes.length > 0) {
      console.warn(`Found ${detachedNodes.length} detached nodes`, detachedNodes);
    }
    return detachedNodes;
  }
}
```

Performance Testing Automation
Manual performance testing is time-consuming and inconsistent. Automated performance testing ensures that performance regressions are caught early in the development cycle, before they reach production.
```javascript
// lighthouserc.js
module.exports = {
  ci: {
    collect: {
      numberOfRuns: 3,
      startServerCommand: 'npm run serve',
      url: [
        'http://localhost:3000/',
        'http://localhost:3000/products',
        'http://localhost:3000/checkout'
      ],
      settings: {
        preset: 'desktop',
        throttling: {
          rttMs: 40,
          throughputKbps: 10240,
          cpuSlowdownMultiplier: 1
        }
      }
    },
    assert: {
      assertions: {
        'categories:performance': ['error', { minScore: 0.9 }],
        'categories:accessibility': ['error', { minScore: 0.9 }],
        'categories:best-practices': ['error', { minScore: 0.9 }],
        'categories:seo': ['error', { minScore: 0.9 }],
        'first-contentful-paint': ['error', { maxNumericValue: 2000 }],
        'largest-contentful-paint': ['error', { maxNumericValue: 2500 }],
        'cumulative-layout-shift': ['error', { maxNumericValue: 0.1 }],
        'total-blocking-time': ['error', { maxNumericValue: 300 }]
      }
    },
    upload: {
      target: 'temporary-public-storage'
    }
  }
};
```

```javascript
// performance-tests.js
const puppeteer = require('puppeteer');

class PerformanceTestSuite {
  async runTest(url, config = {}) {
    const browser = await puppeteer.launch({
      headless: true,
      args: ['--no-sandbox', '--disable-dev-shm-usage']
    });

    const results = {
      url,
      timestamp: new Date().toISOString(),
      metrics: {}
    };

    try {
      const page = await browser.newPage();

      // Measure page load time
      const startTime = Date.now();
      await page.goto(url, { waitUntil: 'networkidle2' });
      const loadTime = Date.now() - startTime;

      // Collect Core Web Vitals inside the page context
      const vitals = await page.evaluate(() => {
        return new Promise((resolve) => {
          const metrics = {};

          // LCP
          new PerformanceObserver((list) => {
            const entries = list.getEntries();
            const lastEntry = entries[entries.length - 1];
            metrics.lcp = lastEntry.renderTime || lastEntry.loadTime;
          }).observe({ entryTypes: ['largest-contentful-paint'] });

          // FID
          new PerformanceObserver((list) => {
            const entries = list.getEntries();
            entries.forEach(entry => {
              metrics.fid = entry.processingStart - entry.startTime;
            });
          }).observe({ entryTypes: ['first-input'] });

          // CLS
          let clsScore = 0;
          new PerformanceObserver((list) => {
            list.getEntries().forEach(entry => {
              if (!entry.hadRecentInput) {
                clsScore += entry.value;
              }
            });
            metrics.cls = clsScore;
          }).observe({ entryTypes: ['layout-shift'] });

          setTimeout(() => resolve(metrics), 3000);
        });
      });

      results.metrics = {
        loadTime,
        ...vitals,
        passed: loadTime < 3000 && vitals.lcp < 2500 && vitals.cls < 0.1
      };

    } catch (error) {
      results.error = error.message;
    } finally {
      await browser.close();
    }

    return results;
  }

  async runSuite(urls) {
    const results = [];
    for (const url of urls) {
      const result = await this.runTest(url);
      results.push(result);
    }
    return results;
  }
}

// Usage in CI/CD (wrapped in an async IIFE, since top-level await
// requires an ES module)
(async () => {
  const suite = new PerformanceTestSuite();
  const results = await suite.runSuite([
    'https://example.com',
    'https://example.com/products',
    'https://example.com/checkout'
  ]);

  if (results.some(r => !r.metrics.passed)) {
    console.error('Performance tests failed!');
    process.exit(1);
  }
})();
```

Real User Monitoring (RUM)
Synthetic testing only tells part of the story. Real User Monitoring captures actual user experiences across diverse devices, networks, and geographic locations. This data is invaluable for understanding real-world performance.
```javascript
class RealUserMonitoring {
  constructor(config = {}) {
    this.apiEndpoint = config.apiEndpoint || '/api/rum';
    this.sampleRate = config.sampleRate || 0.1;
    this.sessionId = this.generateSessionId();
    this.metrics = [];
  }

  init() {
    if (Math.random() > this.sampleRate) return;

    // Collect navigation timing
    window.addEventListener('load', () => {
      this.collectNavigationTiming();
      this.collectResourceTiming();
      this.observeWebVitals();
    });

    // Send data when the page is dismissed ('pagehide' fires more reliably
    // than 'beforeunload', particularly on mobile)
    window.addEventListener('pagehide', () => {
      this.sendBeacon();
    });
  }

  collectNavigationTiming() {
    const timing = performance.getEntriesByType('navigation')[0];
    if (!timing) return;

    this.addMetric('navigation', {
      dns: timing.domainLookupEnd - timing.domainLookupStart,
      tcp: timing.connectEnd - timing.connectStart,
      ttfb: timing.responseStart - timing.requestStart,
      download: timing.responseEnd - timing.responseStart,
      domInteractive: timing.domInteractive - timing.fetchStart,
      domComplete: timing.domComplete - timing.fetchStart,
      loadComplete: timing.loadEventEnd - timing.fetchStart
    });
  }

  collectResourceTiming() {
    const resources = performance.getEntriesByType('resource');
    const byType = {};

    resources.forEach(resource => {
      const type = this.getResourceType(resource.name);
      if (!byType[type]) byType[type] = [];

      byType[type].push({
        name: resource.name,
        duration: resource.duration,
        size: resource.transferSize
      });
    });

    this.addMetric('resources', byType);
  }

  observeWebVitals() {
    // LCP
    new PerformanceObserver((list) => {
      const entries = list.getEntries();
      const lastEntry = entries[entries.length - 1];
      this.addMetric('lcp', {
        value: lastEntry.renderTime || lastEntry.loadTime,
        element: lastEntry.element?.tagName
      });
    }).observe({ entryTypes: ['largest-contentful-paint'] });

    // FID
    new PerformanceObserver((list) => {
      list.getEntries().forEach(entry => {
        this.addMetric('fid', {
          value: entry.processingStart - entry.startTime,
          name: entry.name
        });
      });
    }).observe({ entryTypes: ['first-input'] });

    // CLS
    let clsScore = 0;
    new PerformanceObserver((list) => {
      list.getEntries().forEach(entry => {
        if (!entry.hadRecentInput) {
          clsScore += entry.value;
        }
      });
      this.addMetric('cls', { value: clsScore });
    }).observe({ entryTypes: ['layout-shift'] });
  }

  addMetric(type, data) {
    this.metrics.push({
      type,
      data,
      timestamp: Date.now(),
      url: window.location.href,
      userAgent: navigator.userAgent
    });
  }

  sendBeacon() {
    const payload = JSON.stringify({
      sessionId: this.sessionId,
      metrics: this.metrics,
      context: {
        url: window.location.href,
        referrer: document.referrer,
        viewport: `${window.innerWidth}x${window.innerHeight}`,
        connection: navigator.connection?.effectiveType
      }
    });

    navigator.sendBeacon(this.apiEndpoint, payload);
  }

  getResourceType(url) {
    if (url.match(/\.js$/)) return 'script';
    if (url.match(/\.css$/)) return 'stylesheet';
    if (url.match(/\.(png|jpg|jpeg|gif|svg|webp)$/)) return 'image';
    if (url.match(/\.(woff|woff2|ttf|eot)$/)) return 'font';
    return 'other';
  }

  generateSessionId() {
    return `${Date.now()}-${Math.random().toString(36).slice(2, 11)}`;
  }
}

// Initialize RUM
const rum = new RealUserMonitoring({
  apiEndpoint: 'https://api.example.com/rum',
  sampleRate: 0.1 // Monitor 10% of users
});
rum.init();
```

Performance Regression Testing
Performance improvements can easily regress during rapid development. Automated regression testing catches performance degradations before they reach production, maintaining the hard-won gains from optimization efforts.
```jsonc
// performance-budget.json
{
  "budgets": [
    {
      "path": "/*",
      "resourceSizes": [
        { "resourceType": "script", "budget": 300 },
        { "resourceType": "stylesheet", "budget": 100 },
        { "resourceType": "image", "budget": 500 },
        { "resourceType": "font", "budget": 150 },
        { "resourceType": "total", "budget": 1000 }
      ],
      "timings": [
        { "metric": "first-contentful-paint", "budget": 2000 },
        { "metric": "largest-contentful-paint", "budget": 2500 },
        { "metric": "time-to-interactive", "budget": 3500 },
        { "metric": "total-blocking-time", "budget": 300 },
        { "metric": "cumulative-layout-shift", "budget": 0.1 }
      ]
    },
    {
      "path": "/products/*",
      "resourceSizes": [
        { "resourceType": "script", "budget": 350 },
        { "resourceType": "image", "budget": 800 }
      ]
    }
  ]
}
```

```javascript
const fs = require('fs');

class PerformanceRegressionTester {
  constructor(baselinePath, budgetPath) {
    this.baseline = JSON.parse(fs.readFileSync(baselinePath, 'utf8'));
    this.budget = JSON.parse(fs.readFileSync(budgetPath, 'utf8'));
  }

  compareMetrics(current) {
    const regressions = [];
    const improvements = [];

    // Compare against baseline
    Object.keys(this.baseline.metrics).forEach(metric => {
      const baselineValue = this.baseline.metrics[metric];
      const currentValue = current.metrics[metric];
      const diff = currentValue - baselineValue;
      const percentChange = (diff / baselineValue) * 100;

      if (percentChange > 10) {
        regressions.push({
          metric,
          baseline: baselineValue,
          current: currentValue,
          change: percentChange.toFixed(2) + '%'
        });
      } else if (percentChange < -10) {
        improvements.push({
          metric,
          baseline: baselineValue,
          current: currentValue,
          change: percentChange.toFixed(2) + '%'
        });
      }
    });

    // Check against budget
    const budgetViolations = [];
    this.budget.budgets.forEach(budget => {
      budget.timings?.forEach(timing => {
        const value = current.metrics[timing.metric];
        if (value > timing.budget) {
          budgetViolations.push({
            metric: timing.metric,
            value,
            budget: timing.budget,
            exceeded: value - timing.budget
          });
        }
      });
    });

    return { regressions, improvements, budgetViolations };
  }

  generateReport(comparison) {
    let report = '\n=== Performance Regression Report ===\n\n';

    if (comparison.regressions.length > 0) {
      report += 'REGRESSIONS DETECTED:\n';
      comparison.regressions.forEach(r => {
        report += `  ❌ ${r.metric}: ${r.baseline}ms → ${r.current}ms (${r.change})\n`;
      });
      report += '\n';
    }

    if (comparison.budgetViolations.length > 0) {
      report += 'BUDGET VIOLATIONS:\n';
      comparison.budgetViolations.forEach(v => {
        report += `  ⚠️ ${v.metric}: ${v.value}ms exceeds budget of ${v.budget}ms by ${v.exceeded}ms\n`;
      });
      report += '\n';
    }

    if (comparison.improvements.length > 0) {
      report += 'IMPROVEMENTS:\n';
      comparison.improvements.forEach(i => {
        report += `  ✅ ${i.metric}: ${i.baseline}ms → ${i.current}ms (${i.change})\n`;
      });
    }

    return report;
  }
}

// Usage in a CI/CD pipeline (async IIFE, since top-level await requires an
// ES module; runPerformanceTests() is assumed to return { metrics: {...} })
(async () => {
  const tester = new PerformanceRegressionTester(
    './baseline.json',
    './performance-budget.json'
  );

  const currentMetrics = await runPerformanceTests();
  const comparison = tester.compareMetrics(currentMetrics);
  const report = tester.generateReport(comparison);

  console.log(report);

  if (comparison.regressions.length > 0 || comparison.budgetViolations.length > 0) {
    process.exit(1);
  }
})();
```

Final Optimization Checklist
This comprehensive checklist ensures no optimization opportunity is missed. Use it for final reviews before major releases or during performance audits.
Frontend Optimization:
☐ Images optimized (WebP/AVIF format, responsive images, lazy loading)
☐ CSS optimized (minified, critical CSS inlined, unused CSS removed)
☐ JavaScript optimized (minified, tree-shaken, code-split)
☐ Fonts optimized (WOFF2 format, font-display: swap, subsetting)
☐ Resources compressed (Brotli/Gzip enabled)
☐ HTTP/2 or HTTP/3 enabled
☐ CDN configured for static assets
☐ Service Worker implemented for offline support
☐ Resource hints implemented (preconnect, prefetch, preload)
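The resource-hints item in the list above is often the quickest win. As a minimal sketch (the CDN origin and file paths are placeholders, and `resourceHint` is an illustrative helper, not a library API), hints can be built as plain attribute objects and injected into `<head>` at runtime:

```javascript
// Sketch: building and injecting resource hints. All URLs below are
// placeholder examples -- substitute your own origins and assets.
function resourceHint(rel, href, as) {
  const attrs = { rel, href };
  if (as) attrs.as = as; // "as" is required for preload hints
  return attrs;
}

const hints = [
  resourceHint('preconnect', 'https://cdn.example.com'),  // warm up the connection
  resourceHint('prefetch', '/modules/calendar.js'),       // fetch during idle time
  resourceHint('preload', '/fonts/inter.woff2', 'font'),  // fetch with high priority
];

// In a browser, attach each hint as a <link> element in <head>
if (typeof document !== 'undefined') {
  hints.forEach(attrs => {
    const link = document.createElement('link');
    Object.assign(link, attrs);
    document.head.appendChild(link);
  });
}
```

Declaring the same hints statically in HTML is equally valid; the programmatic form is useful when the asset list is computed at runtime.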
Backend Optimization:
☐ Database queries optimized (indexed, no N+1 queries)
☐ Response caching implemented (Redis/Memcached)
☐ API responses compressed
☐ Connection pooling configured
☐ Background jobs for heavy operations
☐ Rate limiting implemented
☐ Server-side caching (OPcache, APCu)
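To make the response-caching item concrete: in production this role is normally filled by Redis or Memcached, but the TTL semantics are the same everywhere. The sketch below is a deliberately minimal in-memory stand-in (the `TTLCache` class and its interface are illustrative, not a real library):

```javascript
// Minimal in-memory TTL cache sketch. Illustrates expiry semantics only;
// use Redis/Memcached for shared, persistent caching across processes.
class TTLCache {
  constructor(defaultTtlMs = 60000) {
    this.defaultTtlMs = defaultTtlMs;
    this.store = new Map();
  }

  set(key, value, ttlMs = this.defaultTtlMs) {
    // Record an absolute expiry time alongside the value
    this.store.set(key, { value, expiresAt: Date.now() + ttlMs });
  }

  get(key) {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      this.store.delete(key); // lazy eviction on read
      return undefined;
    }
    return entry.value;
  }
}

const cache = new TTLCache();
cache.set('user:42', { name: 'Ada' }, 5000); // cache for 5 seconds
```

A real deployment would add size limits and cache-key conventions, but the get/set/expire contract shown here is what the checklist item is asking you to verify.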
Monitoring & Testing:
☐ Performance monitoring active (RUM + synthetic)
☐ Alerts configured for metric thresholds
☐ Performance budgets defined and enforced
☐ Automated performance tests in CI/CD
☐ Regular performance audits scheduled
☐ Core Web Vitals tracked and optimized
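The alerting item above can be reduced to a small pure function. In this sketch the limits mirror the commonly cited "good" Core Web Vitals boundaries, and `checkThresholds` is an illustrative name rather than any monitoring vendor's API; adjust the numbers to your own SLOs:

```javascript
// Sketch: evaluate collected metrics against alert thresholds.
// Threshold values follow the widely used "good" Core Web Vitals limits.
const THRESHOLDS = {
  lcp: 2500, // ms
  fid: 100,  // ms
  cls: 0.1,  // unitless layout-shift score
};

function checkThresholds(metrics, thresholds = THRESHOLDS) {
  const violations = [];
  for (const [name, limit] of Object.entries(thresholds)) {
    // Only evaluate metrics that were actually collected
    if (metrics[name] !== undefined && metrics[name] > limit) {
      violations.push({ metric: name, value: metrics[name], limit });
    }
  }
  return violations;
}

const violations = checkThresholds({ lcp: 3100, fid: 40, cls: 0.05 });
```

In practice this check would run on the RUM backend over aggregated percentiles (e.g. p75), with the returned violations feeding your paging or chat alerts.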
Implementation:
Conduct a full audit of your application using this checklist. Document current status, identify gaps, prioritize improvements, and create an action plan. Share the audit results with your team and establish ongoing performance review processes.
Advanced performance optimization is an ongoing journey, not a destination. By implementing these sophisticated monitoring, testing, and optimization strategies, you create a foundation for sustained high performance. The key is making performance a core part of your development culture, not an afterthought. With automated testing, comprehensive monitoring, and clear budgets, you ensure that your application remains fast and responsive as it evolves.