Related topic
Data Pipeline / ETL / ELT Architecture
Background: ingestion, orchestration, data quality, and recovery processes.
Apache Iceberg is an open table format for analytics that brings DWH-level manageability to the data lake: atomic commits, schema evolution, time travel, and predictable reads over large tables. In practice, it is the foundation of the lakehouse approach, where streaming and batch pipelines operate on a single tabular representation of the data.
Evolution of data approaches
Data Warehouse (1990s)
Nightly ETL, a strict schema, and reports that lag until the next day.
High quality control, but low flexibility and expensive changes.
Data Lake (2010s)
Schema-on-read, ELT and scale on object storage (S3/GCS/Blob).
Greater flexibility, but transactions and consistency are hard to guarantee.
Lakehouse / Open Table Format
Iceberg adds ACID, schema evolution, and time travel on top of the data lake.
More metadata and operational discipline are required, but manageability improves sharply.
Pains of the classic data lake
- Slow list operations and unpredictable query planning with large numbers of files.
- No ACID for concurrent writes; race conditions on overwrite.
- Difficult schema evolution: renaming/dropping columns often breaks compatibility.
- Manual partition maintenance and painful reprocessing/backfills.
Iceberg architectural layers
Shows how the engine reads only relevant files via metadata pruning.
Catalog lookup
The engine gets a pointer to the current table metadata file.
Read metadata file
Schema, partition spec, and available snapshots list are loaded.
Select snapshot
A consistent snapshot is selected for query execution and time travel.
Load manifest list
The set of manifest files referenced by the snapshot is resolved.
Predicate pruning
Only relevant data files are selected using min/max/null stats.
Scan data files
The engine scans only selected files and returns the query result.
Shows both paths: query path down and commit path up.
Snapshot
Captures a consistent table version for reads and time travel.
Contains: Snapshot ID, commit timestamp, pointer to manifest list.
Why needed: Enables reproducible queries, rollback, and change audits.
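A snapshot record with the fields listed above can be modeled minimally as follows. The field names and the `by_id` helper are illustrative assumptions, not the actual Iceberg metadata classes:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Snapshot:
    snapshot_id: int
    timestamp_ms: int    # commit timestamp
    manifest_list: str   # pointer to the manifest-list file

# A table's snapshot log: one immutable entry per commit.
log = [
    Snapshot(101, 1_700_000_000_000, "s3://bucket/meta/snap-101.avro"),
    Snapshot(102, 1_700_000_060_000, "s3://bucket/meta/snap-102.avro"),
]

def by_id(log, snapshot_id):
    """Resolve a snapshot for time travel, rollback, or change audit."""
    return next(s for s in log if s.snapshot_id == snapshot_id)

print(by_id(log, 101).manifest_list)  # s3://bucket/meta/snap-101.avro
```

Because snapshots are immutable, rolling back is just repointing the table's current-snapshot reference to an earlier entry in this log.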
Data Governance & Compliance
Row-level deletes and lineage are especially important for regulatory requirements.
What exactly does Iceberg solve?
ACID transactions
Copy-on-write + optimistic concurrency at the metadata commit level.
Safe parallel INSERT/DELETE/MERGE without table corruption.
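The optimistic-concurrency commit can be sketched as a compare-and-swap on the table's metadata pointer. `TablePointer` is a hypothetical stand-in for the catalog entry; the real mechanism swaps the path to a new metadata file, but the retry logic is the same:

```python
import threading

class TablePointer:
    """Hypothetical catalog entry: compare-and-swap on the metadata version."""
    def __init__(self):
        self.version = 0
        self._lock = threading.Lock()

    def commit(self, expected_version, new_version):
        # The swap succeeds only if nobody committed in between; the loser
        # must re-read metadata, rebase its changes, and retry.
        with self._lock:
            if self.version != expected_version:
                return False
            self.version = new_version
            return True

ptr = TablePointer()
assert ptr.commit(0, 1) is True   # first writer wins
assert ptr.commit(0, 2) is False  # concurrent writer sees a stale base
assert ptr.commit(1, 2) is True   # retry on top of version 1 succeeds
print(ptr.version)  # 2
```

Writers never block readers: until the swap lands, readers keep resolving the previous metadata file, which is what makes the parallel INSERT/DELETE/MERGE safe.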
Time Travel
Reading by snapshot ID or timestamp.
Reproducible queries, change auditing and rollback scripts.
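Reproducibility follows from snapshot immutability: a read pinned to a snapshot returns the same rows even after later commits. A toy sketch with an in-memory "table" (hypothetical structure, not Iceberg's API):

```python
# Hypothetical in-memory table: immutable row sets keyed by snapshot ID.
table = {"snapshots": {1: [10, 20], 2: [10, 20, 30]}, "current": 2}

def read(table, snapshot_id=None):
    """Read from a pinned snapshot (time travel) or from the current one."""
    sid = snapshot_id if snapshot_id is not None else table["current"]
    return list(table["snapshots"][sid])

pinned = read(table, snapshot_id=1)   # reproducible historical read
# A new commit adds snapshot 3; the pinned read is unaffected.
table["snapshots"][3] = [10, 20, 30, 40]
table["current"] = 3
assert read(table, snapshot_id=1) == pinned == [10, 20]
print(read(table))  # [10, 20, 30, 40]
```

In SQL engines this pinning is expressed by querying a specific snapshot ID or timestamp instead of the table head.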
Schema Evolution
Column IDs and schema metadata in JSON, not positional indexes.
Adding/renaming columns without completely rewriting the table.
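Resolution by stable column ID is why renames are cheap. A minimal sketch (the schemas, row layout, and `project` helper are illustrative assumptions): data files store values keyed by field ID, and the reader maps IDs to whatever names the current schema declares.

```python
# Columns are resolved by stable field IDs, not by position or name,
# so renaming a column does not invalidate old data files.
schema_v1 = {1: "user_name", 2: "created_at"}
schema_v2 = {1: "username", 2: "created_at", 3: "country"}  # rename + add

# A row written under schema v1, keyed by field ID.
data_file_row = {1: "alice", 2: "2024-01-01"}

def project(row, schema):
    """Read an old row under a newer schema: match by ID, missing IDs -> None."""
    return {name: row.get(field_id) for field_id, name in schema.items()}

print(project(data_file_row, schema_v2))
# {'username': 'alice', 'created_at': '2024-01-01', 'country': None}
```

Field ID 1 surfaces as `username` without touching the file, and the newly added `country` column reads as NULL for pre-existing rows.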
Hidden Partitioning
Partition transforms (bucket/truncate/day) are hidden behind the table abstraction.
Faster scans and fewer errors from manually choosing a partition key in queries.
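The transforms named above can be approximated in a few lines. These are simplified stand-ins for illustration only; in particular, real Iceberg buckets with a 32-bit Murmur3 hash, while this sketch substitutes CRC32:

```python
import datetime
import zlib

def day(ts: datetime.datetime) -> str:
    """day transform: derive a date partition from a timestamp column."""
    return ts.date().isoformat()

def truncate(width: int, value: str) -> str:
    """truncate transform: partition by a fixed-width prefix."""
    return value[:width]

def bucket(n: int, value: str) -> int:
    """bucket transform sketch (CRC32 stand-in for Iceberg's Murmur3)."""
    return zlib.crc32(value.encode()) % n

row = {"event_ts": datetime.datetime(2024, 5, 17, 13, 45), "user_id": "u-12345"}
# The engine derives partition values from source columns; queries filter on
# event_ts/user_id directly and never need to reference the partition key.
print(day(row["event_ts"]), truncate(2, row["user_id"]))  # 2024-05-17 u-
```

Because the table spec records which transform applies to which source column, a filter like `WHERE event_ts >= DATE '2024-05-17'` is automatically converted into day-partition pruning.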
Row-level deletes
V2 specification with delete files and positional references.
GDPR/FZ-152 deletion workflows, upserts, and targeted data corrections.
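The positional-delete mechanism can be sketched as follows (hypothetical structures, not the real file formats): a delete file lists `(data_file_path, row_position)` pairs, and readers skip those rows at scan time instead of rewriting the data files.

```python
# v2 positional deletes, simplified: data files stay immutable,
# a separate delete file marks which row positions are gone.
data_files = {
    "f1.parquet": ["alice", "bob", "carol"],
    "f2.parquet": ["dave", "erin"],
}
delete_file = {("f1.parquet", 1), ("f2.parquet", 0)}  # delete bob and dave

def scan(data_files, deletes):
    """Yield live rows, skipping positions marked in the delete file."""
    for path, rows in data_files.items():
        for pos, row in enumerate(rows):
            if (path, pos) not in deletes:
                yield row

print(list(scan(data_files, delete_file)))  # ['alice', 'carol', 'erin']
```

Compaction later merges delete files into rewritten data files, so the read-time merge cost does not grow without bound.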
Physical model and deployment
- Iceberg is not a separate server but an open specification plus libraries and engine integrations.
- Data and metadata are stored as regular files in object storage.
- The catalog is an external component (Hive Metastore, AWS Glue, JDBC, REST catalog).
- Spark/Flink/Trino/Impala/Hive read the same table using the same metadata format.
- The design avoids rename/list bottleneck patterns that are critical for object storage.
Tableflow and the streaming pipeline
- Kafka topics are automatically materialized into Iceberg tables for near real-time analytics.
- Hand-made ETL code between streaming ingestion and BI/lakehouse layers is reduced.
- Data contracts and schema governance are needed, otherwise garbage-in scales quickly.
- Useful as a bridge between the operational stream and analytical SLAs for freshness.
