Piter Publishing House (St. Petersburg), anniversary edition.
Computer Networks: Principles, Technologies, Protocols
Authors: V. G. Olifer, N. A. Olifer
Publisher: Piter, 2026
Length: 1008 pages
A classic textbook: layers, protocols, routing, security, and network applications.
What is this book about?
This is a fundamental textbook on networking with an engineering focus, from physical signals to application protocols. It gives a complete picture of how data flows and why delays, losses and errors occur in practice.
Models and levels
Systematic understanding of where one layer ends and another begins.
Protocols and devices
Switches, routers, addressing and delivery mechanisms.
Reliability and performance
Loss, windows, congestion, and how to assess real network constraints.
Key topics
Layers, encapsulation, and responsibility boundaries
A core message of the book is to treat networking as independent layers with explicit contracts.
- The OSI and TCP/IP models help isolate failures to the link, transport, or application layer.
- Encapsulation explains header overhead and MTU constraints in real traffic.
- In system design, this maps directly to service boundaries and API contracts.
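The header overhead mentioned above can be illustrated with a back-of-envelope sketch (assuming IPv4 and TCP headers without options and a standard 1500-byte Ethernet MTU; Ethernet framing is ignored):

```python
MTU = 1500          # standard Ethernet MTU, bytes
IPV4_HEADER = 20    # IPv4 header without options
TCP_HEADER = 20     # TCP header without options

def mss(mtu: int = MTU) -> int:
    """Maximum TCP payload per packet (the MSS)."""
    return mtu - IPV4_HEADER - TCP_HEADER

def packets_needed(payload_bytes: int, mtu: int = MTU) -> int:
    """How many packets a payload splits into (ceiling division)."""
    return -(-payload_bytes // mss(mtu))

def header_overhead_ratio(payload_bytes: int, mtu: int = MTU) -> float:
    """Share of bytes on the wire spent on L3/L4 headers."""
    headers = packets_needed(payload_bytes, mtu) * (IPV4_HEADER + TCP_HEADER)
    return headers / (payload_bytes + headers)
```

For a 100-byte payload the header share is roughly 29%, while full-MSS bulk transfers amortize it to under 3% — one reason MTU and header sizes matter for chatty protocols.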
Switching, routing, and packet path decisions
The book details how packets move through L2/L3 devices in production networks.
- Switches and routers make decisions at different layers and performance profiles.
- ARP, routing tables, and next-hop logic influence both latency and failure modes.
- Hop-by-hop visibility improves observability and incident triage.
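The next-hop logic above boils down to longest-prefix matching; here is a minimal sketch (the route table and gateway names are invented for illustration):

```python
import ipaddress

# Toy routing table: (prefix, next hop). Names are hypothetical.
ROUTES = [
    (ipaddress.ip_network("0.0.0.0/0"), "gw-default"),
    (ipaddress.ip_network("10.0.0.0/8"), "gw-core"),
    (ipaddress.ip_network("10.1.2.0/24"), "gw-edge"),
]

def next_hop(dst: str) -> str:
    """Pick the next hop for a destination address."""
    addr = ipaddress.ip_address(dst)
    candidates = [(net, hop) for net, hop in ROUTES if addr in net]
    # The most specific (longest) matching prefix wins.
    _, hop = max(candidates, key=lambda c: c[0].prefixlen)
    return hop
```

A /24 route beats a covering /8, which beats the default route — the same tie-breaking a production router applies on every packet.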
Transport layer: TCP, UDP, and congestion behavior
Transport choice determines latency profile, delivery guarantees, and retry cost.
- TCP provides ordering and flow control but adds handshake latency and head-of-line blocking risks.
- UDP reduces overhead and fits realtime workloads with app-level reliability.
- Windows, retransmissions, and congestion control define behavior under load.
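The interplay of RTT measurement and retransmission timers can be sketched with the standard smoothed-RTT estimator from RFC 6298 (a sketch of the well-known algorithm, not code from the book):

```python
class RtoEstimator:
    """Smoothed RTT and retransmission timeout per RFC 6298."""
    ALPHA = 1 / 8   # gain for the smoothed RTT
    BETA = 1 / 4    # gain for the RTT variance

    def __init__(self):
        self.srtt = None
        self.rttvar = None

    def update(self, rtt: float) -> float:
        """Feed one RTT sample (seconds); return the current RTO."""
        if self.srtt is None:
            self.srtt = rtt
            self.rttvar = rtt / 2
        else:
            # Variance is updated with the old SRTT, then SRTT itself.
            self.rttvar = (1 - self.BETA) * self.rttvar + self.BETA * abs(self.srtt - rtt)
            self.srtt = (1 - self.ALPHA) * self.srtt + self.ALPHA * rtt
        return max(1.0, self.srtt + 4 * self.rttvar)  # 1 s floor per RFC 6298
```

Stable RTTs keep the RTO near its floor; one spiky sample inflates the variance term and backs the timer off — exactly the "behavior under load" the bullets describe.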
Reliability, failures, and operational metrics
Reliability is treated as an engineering discipline, not just protocol theory.
- Packet loss, jitter, and burst errors explain unstable p95/p99 latency.
- Timeout budgets and retry policies must be grounded in real measurements.
- RTT, loss, and retransmit telemetry are required for diagnosing degradations.
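One way to ground a retry policy in a latency budget, as the bullets suggest, is to cap the cumulative backoff; this sketch uses exponential backoff with full jitter and illustrative constants:

```python
import random

def backoff_schedule(base=0.1, factor=2.0, cap=2.0, budget=3.0,
                     max_attempts=6, rng=random.random):
    """Return sleep times between retries, truncated to a total budget."""
    delays, total = [], 0.0
    for i in range(max_attempts):
        delay = min(cap, base * factor ** i) * rng()  # full jitter
        if total + delay > budget:
            break  # out of budget: stop retrying
        delays.append(delay)
        total += delay
    return delays
```

Jitter spreads retries from many clients in time; the budget bounds the worst-case delay a caller can observe, so upstream timeouts stay meaningful.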
Application protocols and user-perceived latency
DNS and HTTP are shown as part of an end-to-end user journey, not isolated components.
- Fast backend logic cannot compensate for slow DNS/TLS phases.
- DNS/HTTP caching, keep-alive, and compression directly affect perceived performance.
- Protocol choices must account for mobile and multi-region access patterns.
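A rough RTT accounting shows why connection reuse dominates perceived latency; the sketch assumes an uncached DNS lookup (1 RTT), a TCP handshake (1 RTT), and a TLS 1.2-style handshake (2 RTTs; TLS 1.3 needs 1):

```python
def request_time(rtt: float, fresh_connection: bool = True) -> float:
    """Approximate network time for one HTTPS request, in seconds."""
    rtts = 1                  # the HTTP request/response itself
    if fresh_connection:
        rtts += 1 + 1 + 2     # DNS + TCP + TLS setup
    return rtts * rtt
```

At a 50 ms RTT a cold request costs about 250 ms before any server work, while a kept-alive connection pays only about 50 ms — which is why keep-alive and DNS caching matter so much on mobile and cross-region paths.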
Wireless networks and channel variability
Wi-Fi and mobile channels vary significantly, so network behavior is non-stationary.
- High RTT/loss variance requires adaptive timeouts and retry strategies.
- Mobile clients need graceful degradation and offline-aware design.
- This is critical for realtime and high-throughput systems.
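Under high RTT variance a fixed timeout misfires constantly; one common approach (a sketch, not from the book, with an assumed 1.5x safety margin) derives the timeout from a high percentile of recent samples:

```python
from statistics import quantiles

def adaptive_timeout(rtt_samples, margin=1.5):
    """Timeout = p99 of recent RTT samples times a safety margin."""
    p99 = quantiles(rtt_samples, n=100)[-1]  # 99th percentile cut point
    return p99 * margin
```

On a stable wired link this tracks a tight timeout; on a jittery Wi-Fi or cellular link the p99 widens and the timeout follows, avoiding spurious retries.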
OSI model and the role of each layer
Each OSI layer has a distinct role and its own protocol examples; Layer 7 (Application) covers application-level interfaces and protocols.
Packet path from client to server
DNS and connection establishment
Name resolution, TCP/TLS handshake, first RTTs.
Routing and transport
Packets traverse hops, subject to MTU limits, windows, and congestion control.
Server and application
Decoding, request processing, queues and business logic.
Response and retries
Retransmissions, timeouts, and stability under packet loss.
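The phases above can be summed into a toy end-to-end model (assumptions: DNS = 1 RTT, TCP + TLS 1.2 handshakes = 3 RTTs, request/response = 1 RTT, and each retransmission stalls for one RTO):

```python
def request_latency(rtt, server_time, retransmits=0, rto=1.0):
    """Toy end-to-end time for one request, in seconds."""
    # DNS (1) + handshakes (3) + request/response (1) RTTs,
    # plus server-side work and any retransmission stalls.
    return (1 + 3 + 1) * rtt + server_time + retransmits * rto
```

With a 50 ms RTT and 20 ms of server work the happy path takes about 0.27 s, but a single retransmission with a 1 s RTO pushes it past 1.2 s — a hint at why loss, not bandwidth, often drives tail latency.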
What is useful to take away
- Where latency comes from and how each layer of the stack adds to it.
- Why loss handling and congestion control are key to reliability.
- How to choose a protocol (TCP/UDP/QUIC) for a given business scenario.
Why it matters for System Design
- Helps you think in layers and clearly separate the responsibilities of components.
- Provides a basis for assessing latency, throughput and network limitations.
- Teaches you to look at fault tolerance from a protocol perspective.
- Lets you design timeouts, retries, and load balancing more deliberately.
Who is it suitable for?
Engineers who want to gain a systematic understanding of network protocols and better understand the limitations of distributed systems.
