Structured Computer Organization
Authors: Andrew S. Tanenbaum, Todd Austin
Publisher: Pearson, 2013 (6th Edition)
Length: ~800 pages
Basic computer architecture: abstraction layers, ISA, memory, input/output and interaction with the OS.
Key topics
Abstraction layers and system contracts
The book shows how hardware and software layers stay decoupled through stable interfaces.
- ISA separates software from a specific chip implementation: compilers and OS target a contract, not transistor wiring.
- Microarchitecture may evolve without breaking applications as long as the external contract is preserved.
- For system design, this is the same decomposition rule: hide internals and keep boundaries explicit.
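The decomposition rule above can be sketched in code. This is a minimal illustration (the class names `Adder`, `SimpleAdder`, and `CachedAdder` are hypothetical, not from the book): two implementations differ internally, but clients target only the stable contract, just as compilers target an ISA rather than a specific chip.

```python
from abc import ABC, abstractmethod

class Adder(ABC):
    """The stable 'contract': callers depend only on this interface."""
    @abstractmethod
    def add(self, a: int, b: int) -> int: ...

class SimpleAdder(Adder):
    # Straightforward internal implementation.
    def add(self, a: int, b: int) -> int:
        return a + b

class CachedAdder(Adder):
    # A different internal strategy (memoization); the contract is unchanged.
    def __init__(self) -> None:
        self._cache: dict[tuple[int, int], int] = {}

    def add(self, a: int, b: int) -> int:
        key = (a, b)
        if key not in self._cache:
            self._cache[key] = a + b
        return self._cache[key]

def run(adder: Adder) -> int:
    # Client code never touches internals; either implementation can be
    # swapped in without breaking it.
    return adder.add(2, 3)
```

Swapping `CachedAdder` for `SimpleAdder` changes performance characteristics, not behavior, which mirrors how a new microarchitecture can ship under an unchanged ISA.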
ISA, microarchitecture, and execution cost
The same algorithm can perform very differently depending on decoding, pipelining, and branch behavior.
- RISC/CISC and microcode help explain trade-offs between instruction complexity and execution simplicity.
- Pipeline hazards, branch prediction, and out-of-order execution directly affect real latency.
- In production CPU-bound services, bottlenecks come from both algorithmic complexity and data locality.
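The effect of branch behavior on latency can be shown with a toy cost model (a sketch; the 15-cycle penalty and the 1-bit predictor are simplified assumptions, not figures for any real CPU): two workloads execute the same number of branches, but the predictable one avoids pipeline flushes.

```python
import random

# Hypothetical cost model: each branch costs 1 cycle; a misprediction
# flushes the pipeline and adds a fixed penalty.
PENALTY = 15  # cycles per misprediction; illustrative only

def run_cost(branch_outcomes: list[bool]) -> int:
    """Estimate cycles with a 1-bit 'predict the last outcome' predictor."""
    cycles = 0
    prediction = True  # initial guess
    for taken in branch_outcomes:
        cycles += 1  # the branch instruction itself
        if taken != prediction:
            cycles += PENALTY  # pipeline flush on misprediction
        prediction = taken  # remember the last outcome
    return cycles

random.seed(0)
n = 10_000
regular = [True] * n                                  # perfectly predictable
erratic = [random.random() < 0.5 for _ in range(n)]   # ~50% mispredicted
```

With identical instruction counts, `run_cost(erratic)` comes out several times higher than `run_cost(regular)`, which is the intuition behind "same algorithm, different real latency."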
Memory hierarchy and locality
A core message is that data access cost differs by orders of magnitude, so architecture must follow that ladder.
- Temporal and spatial locality explain why cache-aware access patterns often beat raw CPU upgrades.
- Cache misses and page faults can dominate response time even when business logic is simple.
- This supports practical choices like prefetching, batching, and cache-friendly data layout.
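The cache-friendly-layout point can be sketched with two traversals of the same row-major grid (sizes are arbitrary; in CPython the interpreter overhead hides most of the hardware effect, so this only illustrates the access patterns, which in a compiled language translate directly into cache hits vs misses):

```python
N = 1_000
# A 2-D grid flattened into one contiguous list, row-major order.
grid = list(range(N * N))

def sum_rows() -> int:
    # Sequential access: consecutive indexes walk each cache line fully
    # before moving on (spatial locality).
    total = 0
    for i in range(N):
        for j in range(N):
            total += grid[i * N + j]
    return total

def sum_cols() -> int:
    # Strided access: each step jumps N elements, so in compiled code
    # almost every access touches a new cache line.
    total = 0
    for j in range(N):
        for i in range(N):
            total += grid[i * N + j]
    return total
```

Both functions compute the same result; only the order of memory accesses differs, which is exactly the distinction the memory-hierarchy chapters make.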
I/O path: controllers, interrupts, DMA
Input/output is treated as a pipeline from device to controller, driver, kernel, and user process.
- Polling vs interrupts is a workload decision: lower latency versus lower CPU overhead.
- DMA minimizes CPU involvement in bulk transfer, critical for network and storage-heavy workloads.
- Batching and event coalescing reduce context-switch and syscall overhead.
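The batching point can be sketched with an in-memory sink standing in for a file descriptor (a simplification: `io.BytesIO` replaces a real socket or file, and the batch size of 100 is an arbitrary assumption): coalescing messages cuts the number of write calls, which in a real system means fewer syscalls and context switches.

```python
import io

messages = [f"event-{i}\n".encode() for i in range(1000)]

def write_unbatched(sink: io.BytesIO) -> int:
    """One write call per message; returns the call count."""
    calls = 0
    for m in messages:
        sink.write(m)
        calls += 1
    return calls

def write_batched(sink: io.BytesIO, batch_size: int = 100) -> int:
    """Coalesce messages into a buffer and flush per batch."""
    calls = 0
    buf = bytearray()
    for idx, m in enumerate(messages, 1):
        buf += m
        if idx % batch_size == 0:
            sink.write(bytes(buf))
            buf.clear()
            calls += 1
    if buf:  # flush any trailing partial batch
        sink.write(bytes(buf))
        calls += 1
    return calls
```

Both variants deliver identical bytes; the batched one does it in 10 writes instead of 1000, trading a little buffering latency for much lower per-call overhead.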
Parallelism, synchronization, and scaling limits
The book connects hardware and software parallelism, from pipelines to multithreaded programs.
- Instruction-level and thread-level parallelism work only when tasks are sufficiently independent.
- Lock contention, false sharing, and memory barriers can erase expected speedups.
- Amdahl's law is a fast sanity check for both vertical and horizontal scaling assumptions.
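Amdahl's law is simple enough to encode directly; this sketch makes the sanity check concrete:

```python
def amdahl_speedup(parallel_fraction: float, workers: int) -> float:
    """Upper bound on speedup when only a fraction of the work parallelizes.

    speedup = 1 / (serial + parallel / workers)
    """
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / workers)
```

For example, if 90% of a request parallelizes, even a million workers cannot push speedup past 10x, because the 10% serial part dominates. That is the "fast sanity check" before assuming horizontal scaling will save a design.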
Levels of computer organization
Digital logic
Basic elements, bits, logic circuits.
Microarchitecture
ISA, microcode, pipelines, basic performance trade-offs.
Memory and I/O
Caches, buses, DMA, external devices and access speed.
Operating systems
Scheduler, virtual memory, syscalls and abstractions.
Access cost ladder: registers → caches → main memory → SSD/disk → network, each step roughly an order of magnitude slower than the last.
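The ladder can be made concrete with classic order-of-magnitude estimates in the spirit of the widely cited "latency numbers every programmer should know" (the exact figures vary by hardware and are rounded assumptions here):

```python
# Approximate access costs in nanoseconds; rounded, hardware-dependent.
ACCESS_NS = {
    "L1 cache hit": 1,
    "L2 cache hit": 4,
    "main memory": 100,
    "SSD random read": 100_000,
    "datacenter round trip": 500_000,
    "disk seek": 10_000_000,
}

def slowdown(level: str) -> float:
    """How many times slower a level is than an L1 hit."""
    return ACCESS_NS[level] / ACCESS_NS["L1 cache hit"]
```

The spread is the whole point: a main-memory access is ~100x an L1 hit, and a disk seek is millions of times slower, which is why the chapters treat caching and locality as first-class design concerns.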
What is really useful in system design
- Understanding why caches deliver order-of-magnitude gains in latency.
- Evaluating bottlenecks: CPU-bound vs I/O-bound.
- Why batching and parallelism are necessary in large systems.
- How virtual memory and paging affect latency predictability.
Why does this matter for system design?
- Understanding latency and throughput at the CPU/memory level helps evaluate bottlenecks.
- The model of I/O and caching explains why some queries are expensive.
- Basic knowledge of concurrency helps when designing multithreaded and distributed systems.
- Layers of abstraction make it easier to talk about tradeoffs in architecture.
Who is it suitable for?
Engineers who want a deeper understanding of hardware and computational costs: useful for optimization, backend development, and system design.
