This chapter is useful because it dispels the main myth: a container is not a 'lightweight VM,' but a way to package and isolate a process using kernel primitives.
In real engineering work, it helps you understand how namespaces, cgroups, the layered filesystem, and application images translate into repeatable delivery and a more predictable runtime.
In interviews and design reviews, it gives you a mature way to explain where containers truly simplify architecture and where they merely add another layer of operational complexity.
Practical value of this chapter
- Container primitives: builds understanding of namespaces/cgroups as the basis for resource predictability.
- Deploy consistency: reduces environment drift between local development and production runtime.
- Operational limits: keeps container-model limitations explicit (state handling, networking, observability, and security).
- Interview readiness: supports mature discussion of where containers simplify architecture and where they add complexity.
Containerization
Definition of containerization and key principles.
Containerization is virtualization at the OS level: applications are isolated but share the host kernel. This makes containers lightweight, fast, and easy to scale.
How containerization works
- Namespaces isolate processes, networking, the filesystem, and users.
- Cgroups limit and allocate CPU, memory, I/O, and the number of processes.
- A union/overlay filesystem provides image layers and fast container startup.
- The container runtime manages the container lifecycle (create/start/stop).
- Registries and images let you move applications between environments.
The runtime manages namespaces and cgroups, starts containers, and enforces resource limits.
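These primitives are visible from userspace. A minimal sketch, assuming a Linux host with procfs mounted: each process's namespace membership is exposed as symlinks under `/proc/<pid>/ns`, and two processes share a namespace exactly when the corresponding links resolve to the same inode.

```shell
# List the namespaces this shell belongs to (Linux, procfs assumed).
# Each entry is a symlink whose target encodes the namespace type and inode.
ls -l /proc/self/ns/

# Resolve a single namespace link; the target looks like pid:[<inode>].
readlink /proc/self/ns/pid
```

Comparing `readlink /proc/self/ns/pid` between a host shell and a containerized process is a quick way to confirm they live in different PID namespaces.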
Containers inside a host
Isolation via namespaces + cgroups:
- Container A: Nginx, App
- Container B: Worker, Queue
- Container C: DB, Backup
Cgroups control CPU, memory, and I/O per container, preventing one process from consuming all resources.
All containers share the host kernel, so containerization is lighter and faster than VMs, but needs a compatible kernel.
Layered file system
Container images are built from several layers, and the container adds its own writable layer.
- Base layer: minimal OS image or runtime.
- Intermediate layers: dependencies, libraries, configurations.
- Top layer: application and its files.
- Writable layer: changes at the level of a specific container.
Layered file system visual model
Each container has its own writable layer, while image read-only layers are reused.
- Container A (RW): changes, temp files, logs
- Container B (RW): changes, temp files, logs
- Application layer (RO): application code and static artifacts
- Dependencies layer (RO): libraries, runtime, and system packages
- Base image layer (RO): minimal OS image or base runtime
Copy-on-write
On write, only the writable layer of that container changes. Base image layers remain untouched.
Layer cache
Lower layers are commonly cached across builds, so changes in upper layers build and deploy faster.
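This caching behavior is why Dockerfiles are conventionally ordered from least to most frequently changed. A sketch (the base image, file names, and entrypoint here are illustrative assumptions):

```dockerfile
# Base image layer (RO): rarely changes, cached across builds
FROM python:3.12-slim
WORKDIR /app

# Dependencies layer (RO): rebuilt only when requirements.txt changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Application layer (RO): rebuilt on every code change,
# while all the layers above stay cached
COPY . .
CMD ["python", "main.py"]
```

Putting `COPY . .` last means an ordinary code edit invalidates only the final layer, so rebuilds skip dependency installation entirely.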
Cgroups and limits
- CPU limits: quotas/shares for container processes.
- Memory limits: hard limits and OOM-killer.
- I/O limits: control of disk operations.
- PIDs limits: limit the number of processes.
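These limits are exposed through the cgroup filesystem. A quick way to inspect them on a Linux host (the exact paths differ between cgroup v1 and v2, so treat the commented file names as assumptions about a v2 layout):

```shell
# Which cgroup the current process belongs to:
# one line per hierarchy on cgroup v1, a single "0::<path>" line on v2.
cat /proc/self/cgroup

# On cgroup v2, per-group limit files live under /sys/fs/cgroup/<group>/,
# e.g. (v2-only paths, left commented):
# cat /sys/fs/cgroup/memory.max   # hard memory limit, or "max"
# cat /sys/fs/cgroup/cpu.max      # CPU quota and period
# cat /sys/fs/cgroup/pids.max     # process-count limit
```

Container runtimes write these same files on your behalf when you pass flags such as `--memory` or `--cpus`.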
Containers vs virtual machines
Containers
- They share the host kernel, so they start quickly.
- Less overhead and higher density.
- Requires a compatible kernel and shares it with other containers.
Virtual machines
- Each VM has its own guest OS.
- Stronger isolation, but higher overhead.
- Suitable for running different OSes and enforcing strict security boundaries.
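The shared-kernel point is directly observable: every container on a host reports the host's kernel release, because containers never boot their own kernel. A sketch (the docker invocation is left commented since it assumes Docker is installed):

```shell
# Kernel release as seen on the host
uname -r

# Any container on this host reports the same release:
# docker run --rm alpine uname -r   # assumes Docker and image access
```

A VM run on the same host would instead report whatever kernel its guest OS booted, which is the practical difference between the two isolation models.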
Container evolution timeline
LXC in Linux
LXC combines namespaces and cgroups into a practical format, making Linux containerization broadly usable.
Docker mainstreams containers
Docker simplifies UX with image layers, Dockerfile, registry, and a portable workflow for developers and platform teams.
OCI standardizes formats
Open Container Initiative defines runtime and image specifications so the ecosystem is not tied to one vendor.
CRI appears in Kubernetes
Kubernetes introduces Container Runtime Interface, decoupling orchestration from a specific runtime.
containerd/CRI-O and dockershim removal
Runtimes mature; Kubernetes moves to direct CRI integration and eventually removes dockershim.
Era of secure and specialized runtimes
For multi-tenant and sensitive workloads, adoption grows for gVisor, Kata Containers, and microVM-style isolation.
VM underlay comparison for container workloads
Full virtualization
- Best at: Maximum guest OS compatibility.
- Container impact: More I/O and CPU overhead for container nodes inside VMs.
- Typical scenario: Legacy and mixed-OS environments.
Paravirtualization
- Best at: Optimized network and disk drivers.
- Container impact: Better latency/throughput for container workloads in VMs.
- Typical scenario: Cloud VMs using paravirtualized drivers (virtio, VMXNET3, PVSCSI).
Hardware-accelerated
- Best at: Default model for production clusters.
- Container impact: Best balance of isolation and performance for Kubernetes/containers.
- Typical scenario: Mainstream approach in public cloud and enterprise DC.
Solutions used today
Runtime and low-level layer
- containerd and CRI-O are the primary Kubernetes runtimes via CRI.
- runc/crun are common OCI executors.
- gVisor and Kata Containers provide stronger workload isolation.
Developer and local environments
- Docker Desktop is the most common local stack.
- Podman and Podman Desktop support daemonless and rootless workflows.
- Colima/OrbStack (macOS) are alternatives for local Linux VM + containers.
Orchestration and platform layer
- Kubernetes is the industry-standard container orchestrator.
- Nomad and Docker Swarm remain lighter alternatives for specific scenarios.
- Managed Kubernetes in AWS/GCP/Azure underpins many production platforms.
Request path to nginx inside the container
The request path is similar to that of a virtual machine, but without a separate hypervisor: the container uses the host kernel.
Request path: internet → host Linux → container runtime → container → service.
- External: incoming request from the internet
- Layer 1: host Linux
- Layer 2: container runtime
- Layer 3: container
- Layer 4: service (nginx)
Why is this important for systems design?
- Containers speed up delivery and simplify environments (dev/test/prod).
- Understanding cgroups helps you set limits and avoid noisy neighbors.
- The container network model affects latency and security rules.
- Containers have become the basic unit of deployment in Kubernetes and the cloud.
Related chapters
- Why is fundamental knowledge needed? - explains how OS, networking, and hardware constraints shape container execution.
- Operating system: overview - builds the user/kernel model needed to reason about namespaces and cgroups.
- Linux: architecture and popularity - shows the Linux foundation behind modern container runtimes.
- Virtualization and virtual machines - helps choose between VM and containers for different isolation and performance targets.
- Why know Cloud Native and 12 factors - connects containerization to cloud-native operating and delivery models.
- Kubernetes Fundamentals (v1.35): Architecture, Objects, and Core Practices - moves from single-container usage to production orchestration.
- Infrastructure as Code - how to describe and reproducibly provision container infrastructure.
- GitOps - how to run container deployments through declarative, pull-based delivery.
