System Design Space

Updated: March 15, 2026 at 11:00 AM

NewSQL: TiDB, CockroachDB, and YDB

Difficulty: medium

A consolidated NewSQL chapter: SQL + ACID on a distributed architecture, a practical comparison of TiDB, CockroachDB, and YDB, and selection guidance by workload.

This Theme 8 chapter focuses on the NewSQL approach and the SQL+ACID trade-offs of distributed environments.

In real system design work, this material helps you choose storage and data boundaries using measurable constraints: workload profile, consistency, latency, operating cost, and evolution risks.

For system design interviews, the chapter gives you a clear trade-off vocabulary and a defensible decision narrative: why this choice fits, where its boundaries are, and how it behaves under failure.

Practical value of this chapter

Global ACID fit

Identify where NewSQL preserves SQL+ACID under horizontal scale without manual sharding complexity.

Region-aware design

Model data locality and inter-region RTT so transactional paths do not pay unnecessary latency.

Cost of consistency

Balance strong consistency benefits against latency, throughput, and infrastructure cost.

Interview comparison

Compare NewSQL with Postgres+sharding and NoSQL using business constraints instead of vendor messaging.

Primary source

NewSQL (Wikipedia): a compact historical overview of NewSQL as the evolution of SQL/ACID systems toward horizontal scale. (Open material)

TiDB docs

TiDB Architecture: the official architectural overview of TiDB covering TiKV, PD, the SQL layer, and HTAP components. (Open documentation)

NewSQL is usually considered when a team wants to keep SQL and ACID semantics, but the workload has already outgrown comfortable single-node OLTP limits. In this chapter we compare three practical implementations: TiDB, CockroachDB, and YDB.

The goal is not to choose one universal winner, but to map workload, transaction constraints, and operational context to the most suitable engine.

What NewSQL usually means in practice

SQL + ACID without giving up distribution

Instead of manual app-level sharding, the DBMS coordinates partitioning and transaction execution itself.

Strong consistency as a default posture

The baseline objective is correctness under failures, not only peak write throughput.

Higher operational complexity than classic OLTP

Distributed SQL requires disciplined key design, observability, and capacity planning.

Shared architectural pattern

SQL as the primary interface

NewSQL keeps the relational model and SQL ecosystem so teams do not need to rewrite everything around a NoSQL API.

Consensus replication

Data is replicated with Raft/Paxos-class protocols to preserve write correctness under node and network failures.

Automatic sharding

The system splits and rebalances ranges/shards automatically, reducing manual partition lifecycle work.

Transactions across distributed state

ACID and serializable-like guarantees are implemented on top of a distributed KV layer with transaction coordination.
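The automatic-sharding idea above can be sketched as a sorted range map that splits a range once it grows past a size threshold. This is an illustrative toy, not TiKV's or CockroachDB's actual implementation; `RangeMap`, `SPLIT_THRESHOLD`, and the split-at-median policy are assumptions made for the sketch.

```python
# Toy sketch of range-based auto-sharding (illustrative, not any engine's real API).
from bisect import bisect_right

SPLIT_THRESHOLD = 4  # split a range once it holds more than this many keys

class RangeMap:
    """Maps sorted key ranges to shards; splits ranges automatically on growth."""

    def __init__(self):
        # Each range is (start_key, sorted list of keys it owns);
        # one initial range covers the whole keyspace.
        self.ranges = [("", [])]

    def _find(self, key):
        """Locate the range whose start key is the greatest one <= key."""
        starts = [start for start, _ in self.ranges]
        return bisect_right(starts, key) - 1

    def put(self, key):
        idx = self._find(key)
        start, keys = self.ranges[idx]
        if key not in keys:
            keys.append(key)
            keys.sort()
        if len(keys) > SPLIT_THRESHOLD:
            # Automatic split at the median key; no manual partition work.
            mid = len(keys) // 2
            left, right = keys[:mid], keys[mid:]
            self.ranges[idx] = (start, left)
            self.ranges.insert(idx + 1, (right[0], right))

rm = RangeMap()
for k in ["a", "b", "c", "d", "e", "f"]:
    rm.put(k)
print(len(rm.ranges))  # → 2: the single initial range has split in two
```

A real engine would split by byte size, move replicas between nodes, and rebalance leaders, but the lifecycle (grow, split, route by range start key) is the same shape.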

TiDB vs CockroachDB vs YDB

Core architecture

  • TiDB: TiDB server + TiKV (Raft KV) + PD; HTAP extension via TiFlash.
  • CockroachDB: Single-binary model in which the SQL layer, KV layer, and Raft-replicated ranges run in one cluster process.
  • YDB: Tablets/actors on a shared-nothing architecture, with compute and storage separated into layers.

Multi-region model

  • TiDB: Placement rules are available, but deployments often require explicit regional tuning.
  • CockroachDB: Strong built-in focus on geo-partitioning and locality policies.
  • YDB: Multi-AZ and geo-distributed setups depend on cluster topology and domain configuration.
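The latency consequences of these multi-region models follow directly from consensus: a Raft-style leader commits once a majority of replicas acknowledge, so commit latency is roughly the round trip to the farthest replica in the nearest majority. A back-of-the-envelope model, with all RTT numbers illustrative:

```python
# Illustrative quorum-latency model (not any engine's actual implementation):
# commit waits for the slowest member of the fastest possible majority.

def quorum_commit_latency_ms(leader_rtts_ms):
    """leader_rtts_ms: round-trip times from the leader to every replica,
    including itself (0 ms). Returns the estimated commit latency."""
    majority = len(leader_rtts_ms) // 2 + 1
    # The fastest majority is the `majority` closest replicas;
    # the commit waits for the slowest of those.
    return sorted(leader_rtts_ms)[majority - 1]

# Leader plus two replicas; one replica is nearby, one cross-region:
print(quorum_commit_latency_ms([0, 25, 90]))            # → 25 (local majority)
# Five replicas spread across distant regions:
print(quorum_commit_latency_ms([0, 25, 90, 120, 180]))  # → 90
```

This is why locality policies matter: placing a quorum of replicas close to the leader keeps the commit path off the inter-continental RTT.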

Compatibility and ecosystem

  • TiDB: MySQL-compatible wire protocol and DDL; convenient for MySQL migration paths.
  • CockroachDB: PostgreSQL wire compatibility and PostgreSQL-like SQL workflows.
  • YDB: Its own SQL dialect (YQL) and APIs, with strong integration into the Yandex Cloud ecosystem.

Primary strengths

  • TiDB: Balanced OLTP + near-real-time analytics (HTAP) with practical MySQL migration support.
  • CockroachDB: Strong global OLTP story with automated failover in multi-region environments.
  • YDB: High-load transactional workloads, auto-sharding, and row+column tables in one platform.

Operational risks

  • TiDB: Requires careful hot-region management and key design discipline, especially for HTAP.
  • CockroachDB: Cross-region transaction latency cost and high sensitivity to schema/key design.
  • YDB: Needs a mature distributed-operations model and strong partitioning practices.
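The hot-region risk shared by all three engines is often caught with a simple skew check: compare each shard's request rate against the mean and flag outliers. A hypothetical sketch; the shard names and the `factor` threshold are assumptions, not any engine's built-in metric:

```python
# Hypothetical hot-shard detection sketch: flag shards whose request share is
# far above the mean, the kind of skew signal operators of TiDB/CockroachDB/YDB
# clusters watch for. Names and threshold are illustrative.

def hot_shards(requests_per_shard, factor=3.0):
    """Return shards receiving more than `factor` times the mean load."""
    mean = sum(requests_per_shard.values()) / len(requests_per_shard)
    return sorted(s for s, n in requests_per_shard.items() if n > factor * mean)

load = {"shard-1": 120, "shard-2": 90, "shard-3": 2400, "shard-4": 110}
print(hot_shards(load))  # → ['shard-3']
```

A sequential key (timestamp, auto-increment ID) routes most writes to one range and produces exactly this pattern, which is why key design discipline appears under every engine's risks.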

Fast scenario-based selection

Global SaaS with active multi-region writes

Often CockroachDB

A good fit when locality policies, region-failure survivability, and predictable geo-failover are critical.

Growth from MySQL toward distributed SQL without fully rewriting the solution

Often TiDB

Useful when MySQL compatibility, incremental migration, and OLTP+HTAP in one platform are priorities.

High-load transactional services in the Yandex Cloud ecosystem

Often YDB

A common fit when managed/native YDB integration and auto-sharding are key requirements.

Standard regional OLTP without hard scale-out pressure

Often not NewSQL

If PostgreSQL/MySQL already meets workload and SLA, distributed SQL can add unnecessary complexity.

Recommendations

Start with a key design document: partition key, transaction boundaries, and expected hot keys.

Measure p95/p99 latency under node/AZ failures and network partitions, not only steady-state throughput.

Separate local and cross-region transaction paths explicitly to control latency and cost.

Build observability around shard/range/tablet behavior: skew, rebalance, retries, and lock contention.
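The p95/p99 recommendation above can be made concrete with a nearest-rank percentile over latency samples collected while failures are injected. The samples below are synthetic placeholders for a real load-test run:

```python
# Minimal sketch of the p95/p99 measurement the recommendations call for.
# In practice the samples would come from a load test run during injected
# node/AZ failures, not from a hard-coded list.

def percentile(samples, p):
    """Nearest-rank percentile for p in (0, 100]."""
    ranked = sorted(samples)
    rank = max(1, -(-len(ranked) * p // 100))  # ceil(n * p / 100)
    return ranked[rank - 1]

# 100 synthetic latency samples (ms): mostly fast, with a failover tail.
latencies = [10] * 90 + [20] * 5 + [50] * 4 + [400]
print(percentile(latencies, 95), percentile(latencies, 99))  # → 20 50
```

Note how the single 400 ms outlier is invisible at p95 and p99 but would dominate a max or p99.9 view: choose the percentile that matches the SLA you actually promise.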

Common mistakes

Treating NewSQL as a silver bullet and migrating schemas without redesigning key strategy.

Ignoring transaction boundaries and allowing frequent cross-shard or cross-region writes.

Comparing engines only with synthetic benchmarks, without testing failure and rebalance behavior.

Choosing distributed SQL when single-node OLTP already satisfies workload and SLA.


© 2026 Alexander Polomodov