System Design Space

Updated: March 24, 2026 at 5:36 PM

Caching strategies: Cache-Aside, Read-Through, Write-Through, Write-Back


A practical analysis of the main caching patterns, their latency/consistency trade-offs, and how to choose a strategy for different workloads.

Caching almost always buys speed at the price of staleness, invalidation complexity, and a more fragile operating loop.

The chapter compares Cache-Aside, Read-Through, Write-Through, and Write-Back through read-after-write semantics, cache miss behavior, stampede protection, recovery after failure, and the cost of maintaining a source of truth across layers.

In architecture conversations, it helps you treat cache placement, staleness tolerance, and cache-layer failure modes as real design decisions rather than as the automatic move of 'let's add Redis.'

Practical value of this chapter

Cache as contract

Treat cache policy as a product contract: define where stale reads are acceptable and where strict freshness is required.

Invalidation strategy

Choose invalidation by business semantics: write-through, event-driven invalidation, TTL, or hybrid versioning.

Anti-storm controls

Prevent stampede with TTL jitter, single-flight, background refresh, and controlled origin fan-out.

Interview confidence

Demonstrate you understand not only read acceleration, but also consistency impact and operational risks.

Pattern

Cache-Aside Pattern

Canonical description of a caching pattern with practical production trade-offs.


The caching strategy determines not only latency, but also consistency boundaries, recovery complexity, and behavior under traffic spikes. In production, teams usually combine multiple approaches: for example, cache-aside for read paths and write-through for critical entities with strict read-after-write requirements.

Four Core Strategies

Cache-Aside

The application controls cache reads and loads data from DB on misses.

Read Path

1. App issues read(key).
2. Cache: GET key; on a hit, the value is returned directly.
3. On a miss, DB: SELECT the data.
4. Cache: SET key with the loaded value.
5. App receives the value.

Write Path

1. App issues update(key).
2. DB: WRITE (the database remains the source of truth).
3. Cache: invalidate or update key.
4. App receives the ack.

What happens

  • Reads go to cache first; on a miss, data is loaded from DB.
  • After a miss, the app warms cache for subsequent requests.
  • Writes go to DB first, then cache is invalidated/updated to control stale data.

Risk: Invalidation is critical; weak invalidation quickly increases stale-read rate.
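The read and write paths above can be sketched in a few lines. This is a minimal illustration, not a production client: plain dicts stand in for the database and for a cache like Redis, and `TTL`, `read`, and `write` are names chosen for this example.

```python
import time

# Stand-ins for real infrastructure: `db` for the source of truth,
# `cache` for a store like Redis (hypothetical in-memory substitutes).
db = {"user:1": {"name": "Ada"}}
cache = {}  # key -> (value, expires_at)
TTL = 60.0

def read(key):
    entry = cache.get(key)
    if entry and entry[1] > time.time():         # cache hit, still fresh
        return entry[0]
    value = db.get(key)                          # miss: load from source of truth
    if value is not None:
        cache[key] = (value, time.time() + TTL)  # warm cache for later reads
    return value

def write(key, value):
    db[key] = value          # write the DB first
    cache.pop(key, None)     # then invalidate the cached copy

read("user:1")                       # miss: loads from DB, warms cache
write("user:1", {"name": "Grace"})   # DB updated, cached entry dropped
read("user:1")                       # miss again, returns the fresh value
```

Note the ordering in `write`: invalidating before the DB write would let a concurrent reader re-cache the old value, which is exactly the stale-read risk described above.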

Quick Strategy Selection

Strategy      | Read latency                         | Write latency                     | Consistency                        | Complexity  | Best fit
Cache-Aside   | Low on hit, higher on miss           | Low (DB-only + invalidate)        | Eventual (depends on invalidation) | Low/medium  | General-purpose read-heavy services
Read-Through  | Stable through a unified cache layer | Depends on paired write policy    | Depends on write-side strategy     | Medium      | Platform cache layers
Write-Through | Low                                  | Higher (synchronous double write) | High after successful write        | Medium/high | Read-after-write critical flows
Write-Back    | Low                                  | Very low                          | Eventual, complex recovery         | High        | Write-heavy ingestion

Practical Rules

What to do

  • Define freshness/staleness SLA before selecting a strategy.
  • Design cache invalidation and eviction policy as separate concerns.
  • For write-back, use durable buffer plus idempotent flush pipeline.
  • Add cache-stampede protection (single-flight, TTL jitter).

Common mistakes

  • Caching without an explicit invalidation strategy and TTL policy.
  • Using write-back for critical financial data without durable queue/journal.
  • Caching everything instead of focusing on hot keys and expensive queries.
  • Ignoring stampede/thundering herd during large cache-miss waves.
  • Not tracking hit rate, p95/p99 latency, and stale-read rate.

Mini Implementation Checklist

1. Measure baseline p95/p99 and hit rate before rollout.
2. Define the source of truth and invalidation policy.
3. Constrain key size and plan namespace/versioning.
4. Add fallback behavior for cache outage/degradation.

Short selection rule: if predictability and simplicity matter most, start with Cache-Aside; if strict read-after-write is critical, choose Write-Through; if write throughput is the priority and eventual consistency is acceptable, use Write-Back with a reliable flush pipeline.
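The write-back half of that rule is worth one concrete sketch: writes land in the cache plus a buffer, and a separate flush applies them to the DB idempotently via per-key versions. This is a single-process toy under stated assumptions: dicts stand in for the cache and DB, an in-memory `queue.Queue` stands in for what must be a durable journal (e.g. a persistent log) in production, and `write`/`flush_once` are names invented for the example.

```python
import queue

cache = {}
db = {}
applied_versions = {}    # key -> last version flushed (makes flush idempotent)
buffer = queue.Queue()   # stand-in for a durable journal; NOT durable here

_version = 0

def write(key, value):
    global _version
    _version += 1
    cache[key] = value                  # fast path: cache absorbs the write
    buffer.put((key, value, _version))  # record the write for async flush

def flush_once():
    # Drain the buffer; replaying an already-applied record is a no-op,
    # so a crash-and-retry of the flush pipeline cannot double-apply.
    while not buffer.empty():
        key, value, version = buffer.get()
        if applied_versions.get(key, 0) < version:
            db[key] = value
            applied_versions[key] = version

write("k", 1)
write("k", 2)
flush_once()   # db now holds the latest value for "k"
```

The version check is the idempotence guard the checklist calls for: without it, a replayed buffer could overwrite a newer value with an older one.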

Caching strategy should be driven by business SLA, not team habit.
