System Design Space

Updated: February 21, 2026 at 11:40 PM

Caching strategies: Cache-Aside, Read-Through, Write-Through, Write-Back


Practical analysis of the main cache patterns, latency/consistency trade-offs and choice of strategy for different workloads.

Pattern reference: Cache-Aside Pattern — canonical description of the caching pattern with practical production trade-offs.

The caching strategy determines not only latency, but also consistency boundaries, recovery complexity, and behavior under traffic spikes. In production, teams usually combine multiple approaches: for example, cache-aside for read paths and write-through for critical entities with strict read-after-write requirements.

Four Core Strategies

Cache-Aside

The application controls cache reads and loads data from DB on misses.

Read Path

1. App calls read(key).
2. App → Cache: GET key.
3. On a miss, App → DB: SELECT.
4. App → Cache: SET key (warm the cache for subsequent reads).
5. Return value to the caller.

Write Path

1. App calls update(key).
2. App → DB: WRITE (source of truth).
3. App → Cache: invalidate/update key.
4. Ack to the caller.

What happens

  • Reads go to cache first; on a miss, data is loaded from DB.
  • After a miss, the app warms cache for subsequent requests.
  • Writes go to DB first, then cache is invalidated/updated to control stale data.

Risk: invalidation is the critical step; weak invalidation quickly raises the stale-read rate.
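The read and write paths above can be sketched in a few lines. This is a minimal illustration, not a production client: the dict-based `db` and `cache` are stand-ins for a real database and a cache like Redis, and the TTL value is arbitrary.

```python
import time

# Stand-ins for a real database and an external cache.
db = {"user:1": {"name": "Ada"}}
cache = {}          # key -> (value, expires_at)
TTL = 60.0

def read(key):
    entry = cache.get(key)
    if entry is not None and entry[1] > time.time():
        return entry[0]                          # cache hit
    value = db.get(key)                          # miss: load from source of truth
    if value is not None:
        cache[key] = (value, time.time() + TTL)  # warm cache for later readers
    return value

def update(key, value):
    db[key] = value          # 1. write to the DB first (source of truth)
    cache.pop(key, None)     # 2. invalidate so the next read reloads fresh data
```

Note that `update` invalidates rather than writes the new value into the cache; updating in place is also valid but risks interleaving with a concurrent read that re-caches an older value.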

Quick Strategy Selection

| Strategy | Read latency | Write latency | Consistency | Complexity | Best fit |
|---|---|---|---|---|---|
| Cache-Aside | Low on hit, higher on miss | Low (DB-only + invalidate) | Eventual (depends on invalidation) | Low/medium | General-purpose read-heavy services |
| Read-Through | Stable through a unified cache layer | Depends on paired write policy | Depends on write-side strategy | Medium | Platform cache layers |
| Write-Through | Low | Higher (synchronous double write) | High after successful write | Medium/high | Read-after-write critical flows |
| Write-Back | Low | Very low | Eventual, complex recovery | High | Write-heavy ingestion |
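To make the write-through row concrete, here is a sketch of the synchronous double write that buys read-after-write consistency at the cost of write latency. The dict stores and function names are illustrative, not a specific library API.

```python
# Stand-ins for a real database and cache.
db = {}
cache = {}

def write_through(key, value):
    db[key] = value      # source of truth first
    cache[key] = value   # synchronous cache update: the write is slower,
                         # but a subsequent read is guaranteed to see it

def read(key):
    if key in cache:
        return cache[key]        # hit: no DB round trip
    value = db.get(key)
    if value is not None:
        cache[key] = value
    return value
```

In a real system the two writes should be ordered DB-first and the cache write treated as best-effort or retried, otherwise a cache failure can block the write path entirely.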

Practical Rules

What to do

  • Define freshness/staleness SLA before selecting a strategy.
  • Design cache invalidation and eviction policy as separate concerns.
  • For write-back, use durable buffer plus idempotent flush pipeline.
  • Add cache-stampede protection (single-flight, TTL jitter).
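The last two bullets can be combined in one small sketch: single-flight loading so only one caller populates a missing key, plus TTL jitter so hot keys do not expire at the same instant. The loader, lock registry, and jitter parameters are illustrative assumptions.

```python
import random
import threading

cache = {}                       # key -> value (stand-in for a real cache)
locks = {}                       # key -> per-key lock for single-flight
locks_guard = threading.Lock()   # protects the locks registry itself

def ttl_with_jitter(base=60.0, spread=0.1):
    # Randomize TTL by +/- spread so simultaneous expirations are spread out.
    return base * (1 + random.uniform(-spread, spread))

def get_or_load(key, loader):
    if key in cache:
        return cache[key]
    with locks_guard:
        lock = locks.setdefault(key, threading.Lock())
    with lock:                   # only one loader runs per key at a time
        if key in cache:         # another thread may have filled it meanwhile
            return cache[key]
        value = loader(key)      # single DB hit even under a miss wave
        cache[key] = value
        return value
```

During a miss wave, concurrent callers for the same key block on the per-key lock and then hit the double-checked cache read instead of each issuing their own DB query.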

Common mistakes

  • Caching without an explicit invalidation strategy and TTL policy.
  • Using write-back for critical financial data without durable queue/journal.
  • Caching everything instead of focusing on hot keys and expensive queries.
  • Ignoring stampede/thundering herd during large cache-miss waves.
  • Not tracking hit rate, p95/p99 latency, and stale-read rate.

Mini Implementation Checklist

1. Measure baseline p95/p99 and hit rate before rollout.
2. Define the source of truth and invalidation policy.
3. Constrain key size and plan namespace/versioning.
4. Add fallback behavior for cache outage/degradation.
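Checklist item 4 can be sketched as a degrade-to-DB wrapper: cache errors are treated as misses so the cache stays an optimization rather than a hard dependency. The callable parameters are hypothetical stand-ins for real cache and DB clients.

```python
def read_with_fallback(key, cache_get, db_get):
    # cache_get / db_get are injected stand-ins for real client calls.
    try:
        value = cache_get(key)
        if value is not None:
            return value
    except Exception:
        pass                     # cache outage: degrade to DB-only reads
    return db_get(key)           # source of truth always answers
```

In production this should also emit a metric on the fallback path, since a silent cache outage shifts the full read load onto the DB.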

Short selection rule: if predictability and simplicity matter most, start with Cache-Aside; if strict read-after-write is critical, choose Write-Through; if write throughput is the priority and eventual consistency is acceptable, use Write-Back with a reliable flush pipeline.

Caching strategy should be driven by business SLA, not team habit.


© 2026 Alexander Polomodov