Sources
- Wikipedia: CockroachDB — timeline, release milestones, and broader context for its distributed SQL positioning.
- Official website — CockroachDB Product Overview: resilience, horizontal scaling, locality controls, and target workloads.
CockroachDB is a distributed SQL DBMS focused on strong consistency, fault tolerance, and horizontal scale. In system design, CockroachDB is commonly evaluated for business-critical OLTP services where multi-region deployment, automatic failover, and SQL semantics without manual sharding are key requirements.
History and context
Initial project concept (2014)
Spencer Kimball publishes the early design and open-source prototype of the distributed system that later becomes CockroachDB.
Company foundation (2015)
Cockroach Labs is founded to develop the project into a distributed SQL platform for fault-tolerant services.
First production-ready release (2017)
The 1.0 line establishes the SQL interface, transactional guarantees, and cluster deployment for production workloads.
Shift to source-available licensing (2019)
The project's license changes from Apache 2.0 to the source-available Business Source License (BSL).
Stable 25.1 release line (2025)
The 25.1 branch continues enterprise-focused improvements in performance, scaling behavior, and operational maturity.
Core architecture elements
SQL gateway + PostgreSQL compatibility
Any node can accept SQL traffic over PostgreSQL wire protocol and serve as a gateway for distributed execution.
Ranges, leaseholder, and Raft
Global keyspace is split into ranges; leaseholder coordinates access while replication and consensus are handled by Raft.
ACID transactions and Parallel Commits
Transaction layer uses write intents and atomic commit protocol to provide strong consistency in distributed execution.
Geo-locality and auto-rebalancing
Cluster supports multi-region placement, automatic data rebalancing, and horizontal growth without manual sharding.
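The range mechanics above can be inspected and influenced directly from SQL. A minimal sketch, assuming a running cluster with the `orders` table defined later in this document (the split key is an illustrative UUID; CockroachDB normally splits and rebalances ranges automatically):

```sql
-- Show how the table's rows are currently split into ranges:
-- boundary keys, range IDs, leaseholder, and replica placement.
SHOW RANGES FROM TABLE orders;

-- Manually split off a range at a chosen primary-key boundary,
-- e.g. to isolate a hot key span for a busy tenant.
ALTER TABLE orders
  SPLIT AT VALUES ('4f9f8a10-0000-0000-0000-000000000000'::UUID);
```

Manual splits are an exception, not the norm: the cluster splits ranges by size and load on its own, and `SHOW RANGES` is mainly a diagnostic tool.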
Data model and transaction contour
The section below explains how CockroachDB combines SQL semantics with a distributed KV core: ranges, replicas, write intents, isolation modes, and locality controls.
CockroachDB data model: ranges, replicas, transactions
CockroachDB builds SQL semantics on top of a distributed KV engine where data is split into ranges and replicated via Raft.
Why CockroachDB differs from classic single-node SQL
- Tables and indexes map to a distributed KV keyspace that auto-splits into ranges.
- Each range has replicas; leaseholder coordinates reads/writes for that range.
- Transactions rely on write intents, lock table, and atomic commit protocol (Parallel Commits).
- Multi-region locality controls are available for data placement and latency goals.
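The locality controls in the last bullet are exposed as declarative table-level SQL. A hedged sketch, assuming a database already configured with multiple regions; the `accounts` table appears later in this document, while `currencies` is a hypothetical reference table:

```sql
-- Pin each row to the region recorded in its hidden crdb_region
-- column, keeping tenant data close to its users.
ALTER TABLE accounts SET LOCALITY REGIONAL BY ROW;

-- Optimize a small, rarely-written reference table for low-latency
-- reads from every region, at the cost of slower writes.
ALTER TABLE currencies SET LOCALITY GLOBAL;
```

The trade-off is explicit: REGIONAL BY ROW favors write locality per region, while GLOBAL favors read latency everywhere.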
SQL -> KV keyspace
Table rows and secondary index entries are stored as key-value pairs in a global keyspace.
Typical use cases
- Horizontal growth
- Hot key isolation
- Large table partitioning
Example
CREATE TABLE orders (
id UUID PRIMARY KEY,
tenant_id UUID,
status STRING,
created_at TIMESTAMPTZ
);
High-Level Architecture
High-level CockroachDB flow: SQL gateway, transaction layer, range distribution, Raft replication, and storage/locality mechanics.
The architecture can be examined from three angles: the system view, the workload profile, and the operational trade-offs.
Read / Write Path through components
A request moves through the gateway, range routing, leaseholder handling, Raft consensus, and transaction commit before a response returns to the client. The walkthrough below traces this path through the gateway, leaseholder, Raft, and transaction layer.
Write path
- Tables/indexes are split into ranges; transaction keys decide single-range vs multi-range execution.
- Writes are first recorded as intents (provisional values with lock semantics).
- Commit requires Raft majority per affected range plus transaction-layer coordination.
- Under contention, retryable errors are expected and clients should retry transactions.
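The retry expectation in the last bullet is usually handled by driver-level retries or by CockroachDB's documented client-side retry protocol, sketched here (the UPDATE statements are an illustrative transfer; real code substitutes actual row IDs and re-runs the body on error 40001):

```sql
-- Client-side retry protocol: if any statement fails with a retryable
-- error (SQLSTATE 40001), issue ROLLBACK TO SAVEPOINT cockroach_restart
-- and re-execute the transaction body from the savepoint.
BEGIN;
SAVEPOINT cockroach_restart;

UPDATE accounts SET balance = balance - 100 WHERE status = 'debit-side';
UPDATE accounts SET balance = balance + 100 WHERE status = 'credit-side';

RELEASE SAVEPOINT cockroach_restart;
COMMIT;
```

The savepoint name `cockroach_restart` is special-cased by CockroachDB; many PostgreSQL drivers and ORMs wrap this loop for you, so application code only needs to keep transaction bodies re-executable.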
When to choose CockroachDB
Good fit
- Mission-critical OLTP systems that require strong consistency, ACID semantics, and cross-zone/region survivability.
- Products with growing load where reads/writes must scale by adding nodes without manual shard management.
- Global SaaS and fintech workloads that need locality controls, failover readiness, and SQL continuity.
- Teams ready to invest in schema/index/key design and distributed SQL operational discipline.
Avoid when
- Simple single-node applications where a classic local SQL database is enough and cheaper to run.
- Heavy analytical scan workloads better served by specialized OLAP engines.
- Systems that cannot tolerate retry-oriented transaction handling under contention.
- Teams without capacity to operate multi-node infrastructure and deep observability practices.
Practice: DDL and DML
Below are practical CockroachDB SQL examples: DDL for schema/index and multi-region settings, plus DML for transactions, UPSERT, and concurrent row access.
DDL and DML examples in CockroachDB
DDL controls schema/indexes; DML handles transactional and distributed read/write paths.
CockroachDB supports PostgreSQL-like SQL DDL with online schema changes, but key/index design is critical for distributed performance.
Create table with primary key
CREATE TABLE
Primary key shape influences distribution in the keyspace/ranges.
CREATE TABLE accounts (
id UUID PRIMARY KEY,
tenant_id UUID NOT NULL,
balance DECIMAL(18,2) NOT NULL,
status STRING NOT NULL,
created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
Covering secondary index
CREATE INDEX ... STORING
Helps avoid extra lookups for hot read endpoints.
CREATE INDEX idx_accounts_tenant_status
ON accounts (tenant_id, status)
STORING (balance, created_at);
Configure multi-region database
ALTER DATABASE ... REGION
Multi-region SQL features support locality and survivability goals.
ALTER DATABASE appdb SET PRIMARY REGION "us-east1";
ALTER DATABASE appdb ADD REGION "eu-west1";
ALTER DATABASE appdb ADD REGION "ap-southeast1";
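The DDL above covers schema and placement; the DML side promised earlier (transactions, UPSERT, concurrent row access) can be sketched against the same `accounts` table. The values are illustrative:

```sql
-- UPSERT writes the row, replacing any existing row that has the same
-- primary key (insert-or-overwrite semantics in a single statement).
UPSERT INTO accounts (id, tenant_id, balance, status)
  VALUES (gen_random_uuid(), gen_random_uuid(), 0.00, 'active');

-- SELECT ... FOR UPDATE takes the row lock up front, so concurrent
-- writers queue behind this transaction instead of hitting retryable
-- 40001 errors at commit time.
BEGIN;
SELECT balance FROM accounts WHERE status = 'active' LIMIT 1 FOR UPDATE;
UPDATE accounts SET balance = balance + 50 WHERE status = 'active';
COMMIT;
```

Early locking via FOR UPDATE trades some throughput for fewer client-visible retries on hot rows, which complements the retry-oriented transaction handling described above.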