# Performance & Profiling

Measure, analyze, and optimize application performance.
## Philosophy

> "Premature optimization is the root of all evil." — Donald Knuth

Workflow:

1. **Measure** — Profile before optimizing
2. **Identify** — Find actual bottlenecks
3. **Optimize** — Fix the real problems
4. **Verify** — Confirm improvements
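The measure-and-verify loop can be sketched with the standard library's `timeit`; the two string-building functions below are illustrative stand-ins for any pair of candidate implementations:

```python
import timeit

# Two candidate implementations of the same task (illustrative).
def concat_plus(n=1000):
    s = ""
    for i in range(n):
        s += str(i)
    return s

def concat_join(n=1000):
    return "".join(str(i) for i in range(n))

# Step 1 (Measure): time both before deciding which to keep.
plus_t = timeit.timeit(concat_plus, number=200)
join_t = timeit.timeit(concat_join, number=200)

# Step 4 (Verify): the optimized version must still produce identical output.
assert concat_plus() == concat_join()
print(f"+= loop: {plus_t:.4f}s  str.join: {join_t:.4f}s")
```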
## Section Contents
| Topic | Description |
|---|---|
| Python Profiling | cProfile, py-spy, memory profiling |
| Frontend Profiling | Lighthouse, React DevTools, bundle analysis |
| Load Testing | Locust, k6, interpreting results |
| Optimization Patterns | Caching, batching, N+1 prevention |
| Benchmarking | Writing and running benchmarks |
## Quick Profiling Commands

### Python

```bash
# Profile a script, sorted by cumulative time
python -m cProfile -s cumtime script.py

# Live profiling with py-spy (attach to a running process)
py-spy top --pid 12345
py-spy record -o profile.svg --pid 12345

# Memory profiling
python -m memory_profiler script.py
```
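`cProfile` can also be driven from code when you want to profile a single code path rather than a whole script; `busy` below is a hypothetical hot function standing in for the code under investigation:

```python
import cProfile
import io
import pstats

def busy():
    # Hypothetical hot path worth profiling.
    return sum(i * i for i in range(100_000))

profiler = cProfile.Profile()
profiler.enable()
busy()
profiler.disable()

# Summarize the top entries, sorted by cumulative time.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```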
### Frontend

```bash
# Analyze bundle size
bun run build && bunx source-map-explorer dist/**/*.js

# Lighthouse CLI
npx lighthouse https://example.com --output html
```
## Performance Targets
| Metric | Target | Critical |
|---|---|---|
| API response (P50) | < 100ms | < 500ms |
| API response (P99) | < 500ms | < 2s |
| Page load (LCP) | < 2.5s | < 4s |
| First Input Delay | < 100ms | < 300ms |
| Time to Interactive | < 3.8s | < 7.3s |
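Checking an endpoint against the P50/P99 targets above amounts to computing percentiles over observed latencies; a stdlib sketch, using made-up sample data:

```python
import statistics

# Sample latencies in milliseconds (made-up data for illustration).
latencies_ms = [12, 15, 18, 22, 30, 45, 80, 120, 400, 950]

# quantiles(n=100) yields the 99 percentile cut points P1..P99.
qs = statistics.quantiles(latencies_ms, n=100, method="inclusive")
p50, p99 = qs[49], qs[98]

print(f"P50={p50:.1f}ms  P99={p99:.1f}ms")
print("P50 within target:", p50 < 100)
print("P99 within target:", p99 < 500)
```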
## Quick Wins

### Backend
```sql
-- 1. Add database indexes on frequently filtered columns
CREATE INDEX idx_bookings_user_id ON bookings(user_id);
```

```python
# 2. Eager-load relationships (selectinload/joinedload in SQLAlchemy,
#    select_related/prefetch_related in Django) to avoid N+1 queries
users = await db.execute(
    select(User).options(selectinload(User.bookings))
)

# 3. Paginate results
results = await db.execute(
    select(User).limit(20).offset(page * 20)
)

# 4. Cache expensive queries
@cache(ttl=300)
async def get_dashboard_stats():
    return await compute_stats()
```
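The `@cache(ttl=300)` decorator above is assumed rather than tied to a specific library; a minimal in-process sketch of what such a decorator might look like:

```python
import asyncio
import functools
import time

def cache(ttl: float):
    """Minimal async TTL cache keyed by positional args (illustrative only)."""
    def decorator(fn):
        store = {}  # args tuple -> (expires_at, value)

        @functools.wraps(fn)
        async def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit and hit[0] > now:
                return hit[1]  # fresh entry: skip the expensive call
            value = await fn(*args)
            store[args] = (now + ttl, value)
            return value

        return wrapper
    return decorator

@cache(ttl=300)
async def get_dashboard_stats():
    # Stand-in for an expensive aggregate query.
    return {"bookings": 42}

async def main():
    first = await get_dashboard_stats()   # computed
    second = await get_dashboard_stats()  # served from cache
    return first, second

print(asyncio.run(main()))
```

A production setup would typically use a shared store such as Redis instead of a per-process dict, so cache entries survive restarts and are shared across workers.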
### Frontend

```tsx
// 1. Code splitting
const Dashboard = lazy(() => import('./Dashboard'));

// 2. Memoize expensive components
const MemoizedList = memo(ExpensiveList);

// 3. Virtualize long lists
<VirtualList items={items} height={400} itemHeight={50} />

// 4. Debounce search input
const debouncedSearch = useDebouncedCallback(search, 300);
```
## Monitoring Performance

```python
# Timing middleware: record request duration and flag slow requests
import logging
import time

from fastapi import FastAPI, Request

app = FastAPI()
logger = logging.getLogger(__name__)

@app.middleware("http")
async def timing_middleware(request: Request, call_next):
    start = time.perf_counter()
    response = await call_next(request)
    duration = time.perf_counter() - start
    response.headers["X-Response-Time"] = f"{duration:.3f}s"
    if duration > 1.0:
        logger.warning(f"Slow request: {request.url.path} took {duration:.2f}s")
    return response
```
## Related Documentation
- Database Performance — SQL optimization
- Concurrency — Async patterns
- Monitoring & Observability — Production observability
- Backend Performance Standards — Backend performance patterns
- Concurrency Cookbook — Common concurrent and parallel task recipes