System Design Space

Updated: March 2, 2026 at 7:35 PM

VictoriaMetrics: history and architecture


VictoriaMetrics from a system design perspective: timeline, layered architecture, write/read flow, and a practical DDL-like/DML-like model for monitoring at scale.

Source

VictoriaMetrics docs

Official documentation for VictoriaMetrics architecture, components, and operating model.


VictoriaMetrics is a high-performance time-series database (TSDB) built for cost-efficient metric storage that scales from a single node to a cluster. On the practical TSDB map it is often considered alongside Prometheus as a backend for long retention and high-cardinality workloads.

History: key milestones

2018

Public launch

VictoriaMetrics was released as an open-source TSDB focused on efficient metric storage.

2019

Performance and storage-density focus

The project gained traction as a low-resource option for Prometheus-compatible workloads.

2020

Cluster profile maturation

The vmselect/vminsert/vmstorage architecture stabilized for horizontal scalability.

2021

Ecosystem expansion

vmagent, vmalert, and multi-tenant deployment patterns became more widely adopted.

2023

Production-scale adoption

Migration patterns from Prometheus-based stacks to VictoriaMetrics for long retention became common.

2024+

Observability-stack evolution

Cluster deployment, cost optimization, and enterprise monitoring integration patterns continued to mature.

VictoriaMetrics specifics

Prometheus-compatible interface

Prometheus API and remote_write/read support simplify integration into existing monitoring stacks.

Efficient metric storage

Storage optimizations and background merges allow longer retention windows with fewer resources.
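The merge idea can be shown with a toy sketch: many small sorted parts are folded into one larger, better-compressible part, with later writes winning on timestamp collisions. This is an illustration of the LSM-style pattern, not VictoriaMetrics' actual merge code.

```python
def merge_parts(*parts):
    """Merge sorted (timestamp, value) runs into one sorted part.

    A toy sketch of LSM-style background merging: many small parts
    become one larger part; later parts win on duplicate timestamps.
    """
    merged = {}
    for part in parts:
        for ts, v in part:
            merged[ts] = v  # later parts overwrite earlier samples
    return sorted(merged.items())

small_a = [(1, 10.0), (3, 12.0)]
small_b = [(2, 11.0), (3, 12.5)]
print(merge_parts(small_a, small_b))  # [(1, 10.0), (2, 11.0), (3, 12.5)]
```

Fewer, larger parts mean fewer files to scan at query time and better compression ratios, which is what enables the longer retention windows mentioned above.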

Cluster architecture

Splitting responsibilities across vmagent/vminsert/vmstorage/vmselect gives a clear write/read path.

Rule-driven monitoring

vmalert plus Alertmanager integration creates a controlled recording/alerting rule loop.

VictoriaMetrics architecture by layers

At a high level, the pipeline can be read as: ingest -> write routing -> storage parts/merge -> query fan-out -> rules/alerts -> external integrations.

Ingestion layer
vmagent · Prometheus scrape · remote_write ingest · Relabeling
↓
Write routing
vminsert · Shard routing · Tenant routing · Replication fan-out
↓
Storage layer
vmstorage · Compressed parts · Background merge · Retention cleanup
↓
Query execution
vmselect · Fan-out reads · Deduplication · MetricsQL/PromQL
↓
Rules and alerting
vmalert · Recording rules · Alerting rules · Alertmanager
↓
Integrations and operations
Single-node/cluster · Grafana · vmauth · Backup/restore
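The ingestion layer can be exercised with a minimal sketch that serializes samples into the Prometheus exposition format accepted by the documented `/api/v1/import/prometheus` endpoint. The metric names, labels, and local base URL below are illustrative assumptions.

```python
import urllib.request

def to_exposition_line(metric, labels, value, ts_ms=None):
    """Serialize one sample into Prometheus exposition format.

    An optional millisecond timestamp may follow the value; labels
    are sorted so the same series always produces the same line.
    """
    label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
    line = f"{metric}{{{label_str}}} {value}"
    if ts_ms is not None:
        line += f" {ts_ms}"
    return line

def push(base_url, lines):
    """POST exposition lines to a VictoriaMetrics ingest endpoint.

    base_url is an assumption; a local single-node instance listens
    on port 8428 by default.
    """
    body = ("\n".join(lines) + "\n").encode()
    req = urllib.request.Request(
        base_url + "/api/v1/import/prometheus",
        data=body,
        headers={"Content-Type": "text/plain"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

line = to_exposition_line(
    "http_requests_total",
    {"job": "demo", "instance": "app-1"},
    42,
    ts_ms=1700000000000,
)
print(line)
# http_requests_total{instance="app-1",job="demo"} 42 1700000000000
```

In practice vmagent or a Prometheus `remote_write` sender produces this traffic; the sketch only makes the wire-level shape of the ingest step concrete.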

Key features

VictoriaMetrics is optimized for cost-efficient metric storage, Prometheus-compatible APIs, and growth from single-node to cluster deployments.

Compression and storage

High storage density · Part merging · Long retention profile

Prometheus compatibility

PromQL-compatible API · remote_write/read · Grafana integration

Scalability

Cluster mode · Tenant isolation · Horizontal scale-out
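Prometheus compatibility means existing tooling reads through the standard query API. A minimal sketch of building an instant query against the `/api/v1/query` endpoint that VictoriaMetrics serves (the base URL and PromQL expression are illustrative):

```python
import json
import urllib.parse
import urllib.request

def build_query_url(base_url, promql, time_s=None):
    """Build a Prometheus-compatible instant-query URL.

    VictoriaMetrics serves the same /api/v1/query endpoint that
    Grafana and promtool expect, so compatible clients work as-is.
    """
    params = {"query": promql}
    if time_s is not None:
        params["time"] = str(time_s)
    return base_url + "/api/v1/query?" + urllib.parse.urlencode(params)

def instant_query(base_url, promql):
    """Run the query and decode the standard {status, data} envelope."""
    with urllib.request.urlopen(build_query_url(base_url, promql)) as resp:
        return json.load(resp)

url = build_query_url("http://localhost:8428", "rate(http_requests_total[5m])")
print(url)
```

The same URL shape works against a single node or against vmselect in cluster mode, which is what keeps dashboards portable during a migration.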

DDL vs DML: VictoriaMetrics model

Like most TSDB engines, VictoriaMetrics has no literal SQL DDL/DML layer. For system-design reasoning it is useful to split DDL-like operations (topology/configuration updates) and DML-like operations (sample movement and query read execution).

How the DDL/DML model works in VictoriaMetrics

DDL-like: topology/config updates. DML-like: sample flow and query read path.


1. Ingest samples

vmagent or remote_write sends fresh metrics to the write endpoint.

2. Parse and relabel

Samples are parsed, labels are enriched, and data is prepared for routing.

3. vminsert shard routing

vminsert distributes data to vmstorage nodes using hash/tenant routing.

4. vmstorage append + merge

vmstorage appends samples to local parts and merges them in the background.

5. vmselect read path

vmselect fans out across shards, applies dedup/aggregation, and returns results.
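The shard-routing step (3) can be sketched with a stable hash over a canonical series key. This is illustrative only: vminsert's real routing and replication fan-out are internal to VictoriaMetrics, but the core property is the same, namely that identical series always land on the same vmstorage node.

```python
import hashlib

def series_key(metric, labels):
    """Canonical key for a time series: metric name plus sorted labels."""
    parts = [metric] + [f"{k}={v}" for k, v in sorted(labels.items())]
    return ";".join(parts)

def route_shard(key, num_shards):
    """Map a series key to a shard index with a stable hash.

    Sorting the labels in series_key ensures that the same series,
    however its labels are ordered by the sender, routes identically.
    """
    digest = hashlib.sha256(key.encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_shards

key = series_key("http_requests_total", {"job": "demo", "instance": "app-1"})
print(route_shard(key, 4))  # deterministic shard index in [0, 4)
```

On the read side (step 5), vmselect inverts this picture: because a series lives on a known shard, a query can fan out to all shards and deduplicate/merge the partial results.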


Data and query path

  • The DML-like path covers ingestion, storage, compaction, and query execution.
  • In cluster mode, write/read paths scale horizontally across shards.
  • Label cardinality and tenant skew are key drivers of latency and cost.
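The cardinality point can be made concrete: the number of stored series for a metric is the number of distinct label combinations, which multiplies quickly. The label sets below are hypothetical, and the function assumes the worst case where every combination actually occurs.

```python
def series_count(label_values):
    """Worst-case series count for one metric: the product of the
    number of possible values per label, assuming every combination
    of label values actually occurs."""
    total = 1
    for values in label_values.values():
        total *= len(values)
    return total

base = {
    "job": ["api"],
    "instance": [f"app-{i}" for i in range(10)],
    "status": ["200", "404", "500"],
}
print(series_count(base))  # 30 series

# A hypothetical per-user label explodes cardinality 10,000-fold.
exploded = {**base, "user_id": [str(i) for i in range(10_000)]}
print(series_count(exploded))  # 300000 series
```

This is why high-cardinality labels such as user or request IDs dominate both index size and query latency, regardless of how efficiently individual samples are compressed.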

Source

Prometheus docs

Reference context for a Prometheus-compatible observability stack.


VictoriaMetrics vs Prometheus

Core profile

VictoriaMetrics: Strong focus on cost-efficient storage, scalable write/read paths, and Prometheus compatibility.

Prometheus: Canonical monitoring stack with pull-based collection, PromQL, and a built-in TSDB for operational use.

Query model

VictoriaMetrics: PromQL-compatible querying with MetricsQL extensions for production analytics.

Prometheus: PromQL as the baseline language for metric analysis and alert-driven workflows.

Scalability

VictoriaMetrics: Cluster mode (vminsert/vmstorage/vmselect) for large-scale data and long-term retention.

Prometheus: Often scaled via single-node + federation/remote storage patterns.

Operating model

VictoriaMetrics: Often used as a consolidated metrics backend in large observability platforms.

Prometheus: Often acts as the primary scrape/rule engine with external long-term storage integration.

Why VictoriaMetrics is often chosen in production

Practical interpretation for system design workloads:

  • VictoriaMetrics is often chosen for high storage efficiency and predictable long-retention cost.
  • Prometheus compatibility lowers migration effort and preserves existing dashboards and alert definitions.
  • The vmagent/vminsert/vmstorage/vmselect write/read path scales cleanly using shard-based topology.
  • Single-node and cluster deployment modes provide a practical growth path from small setups to large-scale production.


© 2026 Alexander Polomodov