Query less, ship faster: a lakehouse with DuckDB and DuckLake

Data and AI teams increasingly find that the fastest queries are the ones that read the least data. This talk shows how a SQL‑first lakehouse design puts lakehouse metadata where it belongs: in a database. I'll unpack how this architecture enables efficient file‑ and segment‑level pruning, true ACID transactions, and fresh small writes, without the long read paths typical of file‑only metadata stacks. I'll walk through three concrete patterns for "reading less data": partition elimination, segment‑level pruning with column statistics, and metadata‑only planning, contrasting each with comparable techniques in Iceberg and BigQuery. Finally, I'll connect the dots to today's AI workflows: agent pipelines and evaluation loops benefit from snapshot isolation (reproducibility), time travel (rollbacks), and near‑real‑time availability of small updates.
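To give a flavor of the "metadata in a database" idea, here is a minimal, hypothetical sketch in plain Python, with sqlite3 standing in for the catalog database. The table and column names are invented for illustration and are not DuckLake's actual schema; the point is only that a planner can decide which data files to scan by querying per‑file min/max statistics, without touching the data files at all:

```python
import sqlite3

# Toy catalog: one row of min/max stats per data file for a single column.
# (Invented schema for illustration; not DuckLake's real catalog tables.)
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE file_stats (
        file_name TEXT,
        col_min   INTEGER,
        col_max   INTEGER
    )
""")
con.executemany(
    "INSERT INTO file_stats VALUES (?, ?, ?)",
    [("part-0.parquet", 1, 100),
     ("part-1.parquet", 101, 200),
     ("part-2.parquet", 201, 300)],
)

def files_to_scan(lo, hi):
    """Metadata-only planning: keep only files whose [min, max] range
    overlaps the query predicate's [lo, hi] range."""
    rows = con.execute(
        "SELECT file_name FROM file_stats "
        "WHERE col_max >= ? AND col_min <= ? ORDER BY file_name",
        (lo, hi),
    )
    return [r[0] for r in rows]

# A predicate like `WHERE col BETWEEN 150 AND 180` prunes two of three files:
print(files_to_scan(150, 180))  # ['part-1.parquet']
```

Because the stats live in a transactional database rather than in manifest files on object storage, this planning step is a single indexed SQL query instead of a chain of metadata-file reads.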