Cache-Aside Pattern
Canonical description of a caching pattern with practical production trade-offs.
The caching strategy determines not only latency but also consistency boundaries, recovery complexity, and behavior under traffic spikes. In production, teams usually combine multiple approaches: for example, cache-aside for read paths and write-through for critical entities with strict read-after-write requirements.
Four Core Strategies
Cache-Aside
The application controls cache reads and loads data from DB on misses.
Read Path
1. App → Cache: GET key
2. On a miss, App → DB: SELECT
3. App → Cache: SET key (warm for subsequent reads)
4. App: return value
Write Path
1. App → DB: WRITE (source of truth)
2. App → Cache: invalidate/update key
3. App: ack
What happens
- Reads go to cache first; on a miss, data is loaded from DB.
- After a miss, the app warms cache for subsequent requests.
- Writes go to DB first, then cache is invalidated/updated to control stale data.
Risk: invalidation is the critical step; a weak invalidation path quickly drives up the stale-read rate.
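The read and write paths above can be sketched in a few lines. This is a minimal illustration using in-memory dicts as hypothetical stand-ins for the cache and the database; in production the cache would be something like Redis and the DB a real store, and the class and parameter names here are illustrative, not a fixed API.

```python
import time

class CacheAside:
    def __init__(self, db, ttl_seconds=300):
        self.db = db                      # dict standing in for the database
        self.cache = {}                   # key -> (value, expires_at)
        self.ttl = ttl_seconds

    def read(self, key):
        entry = self.cache.get(key)
        if entry is not None:
            value, expires_at = entry
            if time.monotonic() < expires_at:
                return value              # cache hit
            del self.cache[key]           # drop the expired entry
        value = self.db.get(key)          # miss: load from source of truth
        if value is not None:
            # warm the cache for subsequent requests
            self.cache[key] = (value, time.monotonic() + self.ttl)
        return value

    def write(self, key, value):
        self.db[key] = value              # write DB first (source of truth)
        self.cache.pop(key, None)         # then invalidate to bound staleness
```

Note the ordering in `write`: the DB is updated before the cache entry is dropped, so a concurrent reader sees either the old cached value or a miss that reloads the new one, never a cache entry newer than the DB.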
Quick Strategy Selection
| Strategy | Read latency | Write latency | Consistency | Complexity | Best fit |
|---|---|---|---|---|---|
| Cache-Aside | Low on hit, higher on miss | Low (DB-only + invalidate) | Eventual (depends on invalidation) | Low/medium | General-purpose read-heavy services |
| Read-Through | Stable through a unified cache layer | Depends on paired write policy | Depends on write-side strategy | Medium | Platform cache layers |
| Write-Through | Low | Higher (synchronous double write) | High after successful write | Medium/high | Read-after-write critical flows |
| Write-Back | Low | Very low | Eventual, complex recovery | High | Write-heavy ingestion |
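The write-side difference between the last two rows of the table can be made concrete. A minimal sketch, again with dicts as hypothetical stand-ins: write-through performs a synchronous double write, while write-back acknowledges from the cache and flushes to the DB later (a real system would replace the in-memory buffer with a durable queue or journal).

```python
from collections import deque

class WriteThrough:
    def __init__(self, db):
        self.db, self.cache = db, {}

    def write(self, key, value):
        self.db[key] = value     # synchronous double write: higher write
        self.cache[key] = value  # latency, but reads see the value immediately

class WriteBack:
    def __init__(self, db):
        self.db, self.cache = db, {}
        self.buffer = deque()    # durable queue/journal in a real system

    def write(self, key, value):
        self.cache[key] = value  # acknowledge fast; DB not yet updated
        self.buffer.append((key, value))

    def flush(self):
        # replay buffered writes in order; re-running a flush after a
        # partial failure is safe because each write is a plain overwrite
        while self.buffer:
            key, value = self.buffer.popleft()
            self.db[key] = value
```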
Practical Rules
What to do
- Define freshness/staleness SLA before selecting a strategy.
- Design cache invalidation and eviction policy as separate concerns.
- For write-back, use durable buffer plus idempotent flush pipeline.
- Add cache-stampede protection (single-flight, TTL jitter).
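The stampede protection mentioned above can be sketched as a per-key single-flight lock plus TTL jitter. All names here are illustrative; real implementations would also evict idle locks and apply the jittered TTL when setting the cache entry.

```python
import random
import threading

_locks = {}
_locks_guard = threading.Lock()

def jittered_ttl(base_ttl, jitter=0.1):
    """Spread expirations across +/- jitter fraction of the base TTL,
    so entries written together do not all expire at the same instant."""
    return base_ttl * (1 + random.uniform(-jitter, jitter))

def get_with_single_flight(cache, key, loader):
    value = cache.get(key)
    if value is not None:
        return value
    with _locks_guard:
        lock = _locks.setdefault(key, threading.Lock())
    with lock:
        value = cache.get(key)   # re-check: another thread may have loaded it
        if value is None:
            value = loader(key)  # only one caller hits the backend per key
            cache[key] = value   # a real cache would set TTL via jittered_ttl
    return value
```

During a cache-miss wave, only one caller per key reaches `loader`; the rest block briefly on the lock and then read the freshly warmed entry.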
Common mistakes
- Caching without an explicit invalidation strategy and TTL policy.
- Using write-back for critical financial data without durable queue/journal.
- Caching everything instead of focusing on hot keys and expensive queries.
- Ignoring stampede/thundering herd during large cache-miss waves.
- Not tracking hit rate, p95/p99 latency, and stale-read rate.
Mini Implementation Checklist
Short selection rule:
- If predictability and simplicity matter most, start with Cache-Aside.
- If strict read-after-write consistency is critical, choose Write-Through.
- If write throughput is the priority and eventual consistency is acceptable, use Write-Back with a reliable flush pipeline.
