System Design Space

Updated: March 24, 2026 at 11:23 AM

Containerization

Difficulty: medium

How containers are structured: the layered file system, cgroups and resource limits, and a comparison with virtual machines.

This chapter is useful because it removes the main myth: a container is not a 'lightweight VM,' but a way to package and isolate a process using kernel primitives.

In real engineering work, it helps you understand how namespaces, cgroups, the layered filesystem, and the application image become repeatable delivery and a more predictable runtime.

In interviews and design reviews, it gives you a mature way to explain where containers truly simplify architecture and where they merely add another layer of operational complexity.

Practical value of this chapter

Container primitives

Builds understanding of namespaces/cgroups as the basis for resource predictability.

Deploy consistency

Reduces environment drift between local development and production runtime.

Operational limits

Keeps container-model limitations explicit: state handling, networking, observability, and security.

Interview readiness

Supports mature discussion of where containers simplify architecture and where they add complexity.

Source

Containerization

Definition of containerization and key principles.


Containerization is virtualization at the OS level: applications are isolated but share the host kernel. This makes containers lightweight, fast, and easy to scale.

How containerization works

  • Namespaces isolate processes, the network stack, the filesystem, and users.
  • Cgroups limit and allocate CPU, memory, I/O, and process counts.
  • A union/overlay filesystem provides image layers and fast container startup.
  • The container runtime manages the container life cycle (create/start/stop).
  • Registries and images let you move applications between environments.
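The moving parts above can be sketched as a toy lifecycle model. This is an illustration only, assuming nothing beyond the list itself: `Runtime` and `Container` are invented names for the sketch, not a real runtime's API.

```python
# Toy model of a container runtime's lifecycle handling.
# Illustration only: Runtime/Container are invented names, not a real API.
from dataclasses import dataclass

@dataclass
class Container:
    name: str
    image_layers: list       # read-only layers pulled from a registry
    limits: dict             # cgroup-style resource limits
    state: str = "created"   # created -> running -> stopped

class Runtime:
    def __init__(self):
        self.containers = {}

    def create(self, name, image_layers, limits):
        # A real runtime would set up namespaces and write cgroup
        # limits here, before the container process ever starts.
        self.containers[name] = Container(name, image_layers, limits)

    def start(self, name):
        c = self.containers[name]
        assert c.state == "created"
        c.state = "running"

    def stop(self, name):
        c = self.containers[name]
        assert c.state == "running"
        c.state = "stopped"

rt = Runtime()
rt.create("web", ["base", "deps", "app"], {"cpus": 2, "memory_mb": 512})
rt.start("web")
print(rt.containers["web"].state)   # running
```

The point of the sketch: isolation (namespaces) and limits (cgroups) are applied at create time, before the workload runs, which is why the container's resource envelope is predictable from the start.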
Host machine (Linux): diagram summary

  • Physical resources: CPU, RAM, disk, NIC.
  • Container runtime: manages namespaces and cgroups, starts containers, and enforces resource limits.
  • Containers inside the host (isolation via namespaces + cgroups):
      • Container A (Nginx, App): CPU 2 cores, RAM 512 MB, I/O limited.
      • Container B (Worker, Queue): CPU 1 core, RAM 256 MB, I/O limited.
      • Container C (DB, Backup): CPU 2 cores, RAM 1 GB, I/O priority.
  • Layered filesystem (overlay): base image → runtime layer → app layer → writable layer.
Cgroups & limits

They control CPU, memory, and I/O per container, preventing one process from consuming all resources.

Shared kernel

All containers share the host kernel, so containerization is lighter and faster than VMs, but needs a compatible kernel.

Layered file system

Container images are built from several layers, and the container adds its own writable layer.

  • Base layer: minimal OS image or runtime.
  • Intermediate layers: dependencies, libraries, configurations.
  • Top layer: application and its files.
  • Writable layer: changes at the level of a specific container.
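The layer stack above can be modeled in a few lines: reads resolve top-down through the layers, and writes only ever touch the container's own writable layer. This is a toy model, not overlayfs itself; the `OverlayFS` class and the sample paths are invented for illustration.

```python
# Toy model of overlay-style layer lookup with copy-on-write.
# Reads resolve top-down; writes land only in the writable layer.
class OverlayFS:
    def __init__(self, *ro_layers):
        self.ro_layers = list(ro_layers)   # bottom-to-top read-only layers
        self.rw = {}                       # this container's writable layer

    def read(self, path):
        if path in self.rw:                # writable layer wins
            return self.rw[path]
        for layer in reversed(self.ro_layers):   # then the topmost RO layer
            if path in layer:
                return layer[path]
        raise FileNotFoundError(path)

    def write(self, path, data):
        self.rw[path] = data               # RO image layers stay untouched

base = {"/etc/os-release": "minimal-os"}
deps = {"/usr/lib/libfoo.so": "v1"}
app  = {"/app/main.py": "print('hi')"}

c = OverlayFS(base, deps, app)
print(c.read("/app/main.py"))        # served from the app layer
c.write("/app/config", "debug=true") # goes to the writable layer only
assert "/app/config" not in app      # shared image layers are unchanged
```

Because the read-only layers are never mutated, many containers can share one copy of the image on disk, which is where the density advantage over VMs comes from.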

Layered file system visual model

Each container has its own writable layer, while image read-only layers are reused.

Container writable layers (RW)

  • Container A: changes, temp files, logs.
  • Container B: changes, temp files, logs.

Shared image layers (RO)

  • Application layer: application code and static artifacts.
  • Dependencies layer: libraries, runtime, and system packages.
  • Base image layer: minimal OS image or base runtime.

Copy-on-write

On write, only the writable layer of that container changes. Base image layers remain untouched.

Layer cache

Lower layers are commonly cached across builds, so changes in upper layers build and deploy faster.
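The caching behavior can be shown with a toy content-addressed model: a layer's cache key covers its own build step plus everything below it, so changing an upper layer invalidates only itself and the layers above it. The `layer_keys` helper and the step strings are invented for illustration.

```python
# Toy model of layer build caching: each layer's key hashes its own
# build step chained onto all lower layers' steps.
import hashlib

def layer_keys(build_steps):
    keys, running = [], hashlib.sha256()
    for step in build_steps:
        running = running.copy()           # extend the chain of lower layers
        running.update(step.encode())
        keys.append(running.hexdigest())
    return keys

v1 = layer_keys(["FROM base", "RUN install deps", "COPY app v1"])
v2 = layer_keys(["FROM base", "RUN install deps", "COPY app v2"])

assert v1[0] == v2[0] and v1[1] == v2[1]   # lower layers come from cache
assert v1[2] != v2[2]                      # only the app layer rebuilds
```

This is why build files conventionally put slow-changing steps (base image, dependencies) below fast-changing ones (application code).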

Cgroups and limits

  • CPU limits: quotas and shares for container processes.
  • Memory limits: hard caps, enforced by the OOM killer when exceeded.
  • I/O limits: throttling of disk operations.
  • PID limits: a cap on the number of processes.
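These limits are exposed as plain files in the cgroup hierarchy. The sketch below interprets cgroup-v2-style values; the file formats (`cpu.max` as "quota period" in microseconds, `memory.max` as bytes or "max") match cgroup v2, while the parsing functions themselves are simplified helpers written for this example.

```python
# Hedged sketch: interpreting cgroup-v2-style limit values.
# cpu.max holds "<quota> <period>" in microseconds, or "max" for unlimited;
# memory.max holds a byte count, or "max".

def cpu_cores(cpu_max: str):
    quota, period = cpu_max.split()
    if quota == "max":
        return None                       # no CPU limit
    return int(quota) / int(period)       # fraction of total CPU time

def memory_bytes(memory_max: str):
    return None if memory_max == "max" else int(memory_max)

assert cpu_cores("200000 100000") == 2.0          # two CPUs' worth of time
assert memory_bytes("536870912") == 512 * 1024 * 1024
assert cpu_cores("max 100000") is None            # unlimited
```

Note that a CPU quota caps *time*, not specific cores: "2.0" means the container may consume two CPUs' worth of cycles per period, spread across any cores.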

Containers vs virtual machines

Containers

  • They share the host kernel, so they start quickly.
  • Less overhead and higher density.
  • They require a kernel compatible with the host and share it with other containers.

Virtual machines

  • Each VM has its own guest OS.
  • Stronger isolation, but higher overhead.
  • Suitable for different OS and strict security boundaries.

Container evolution timeline

2008

LXC in Linux

LXC combines namespaces and cgroups into a practical format, making Linux containerization broadly usable.

2013

Docker mainstreams containers

Docker simplifies UX with image layers, Dockerfile, registry, and a portable workflow for developers and platform teams.

2015

OCI standardizes formats

Open Container Initiative defines runtime and image specifications so the ecosystem is not tied to one vendor.

2016

CRI appears in Kubernetes

Kubernetes introduces Container Runtime Interface, decoupling orchestration from a specific runtime.

2017–2022

containerd/CRI-O and dockershim removal

Runtimes mature; Kubernetes moves to direct CRI integration and eventually removes dockershim.

2023+

Era of secure and specialized runtimes

For multi-tenant and sensitive workloads, adoption grows for gVisor, Kata Containers, and microVM-style isolation.

VM underlay comparison for container workloads

Full virtualization

  • Best at: Maximum guest OS compatibility.
  • Container impact: More I/O and CPU overhead for container nodes inside VMs.
  • Typical scenario: Legacy and mixed-OS environments.

Paravirtualization

  • Best at: Optimized network and disk drivers.
  • Container impact: Better latency/throughput for container workloads in VMs.
  • Typical scenario: Cloud VMs using paravirtualized drivers (virtio, VMXNET3, PVSCSI).

Hardware-accelerated

  • Best at: Near-native performance via CPU virtualization extensions; the default model for production clusters.
  • Container impact: Best balance of isolation and performance for Kubernetes/containers.
  • Typical scenario: Mainstream approach in public cloud and enterprise DC.

Solutions used today

Runtime and low-level layer

  • containerd and CRI-O are the primary Kubernetes runtimes via CRI.
  • runc/crun are common OCI executors.
  • gVisor and Kata Containers provide stronger workload isolation.

Developer and local environments

  • Docker Desktop is the most common local stack.
  • Podman and Podman Desktop support daemonless and rootless workflows.
  • Colima/OrbStack (macOS) are alternatives for local Linux VM + containers.

Orchestration and platform layer

  • Kubernetes is the industry-standard container orchestrator.
  • Nomad and Docker Swarm remain lighter alternatives for specific scenarios.
  • Managed Kubernetes in AWS/GCP/Azure underpins many production platforms.

Request path to nginx inside the container

The request path is similar to that of a virtual machine, but without a separate hypervisor: the container uses the host kernel directly.


Request path: internet → host → runtime → container

  • Layer 1 (external): client → internet.
  • Layer 2 (host Linux): NIC → kernel network stack.
  • Layer 3 (container runtime): bridge/NAT → network namespace.
  • Layer 4 (container): nginx → app.
  • Layer 5 (service): HTTP handler → business logic.

Why is this important for systems design?

  • Containers speed up delivery and simplify environments (dev/test/prod).
  • Understanding cgroups helps you set limits and avoid noisy neighbors.
  • The container network model affects latency and security rules.
  • Containers have become the basic unit of deployment in Kubernetes and the cloud.
