> ALTER TABLE events ADD COLUMN version INT DEFAULT 1;
I’ve always disliked this approach. It conflates two things: the value to put in preexisting rows and the default going forward. I often want to add a column, backfill it, and not have a default.
Fortunately, the Iceberg spec at least got this right under the hood. There’s “initial-default”, which is the value implicitly inserted in rows that predate the addition of the column, and there’s “write-default”, which is the default for new rows.
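The split between the two defaults can be sketched in a few lines. This is a toy model in plain Python, not actual Iceberg library code; only the field names "initial-default" and "write-default" come from the spec, everything else is illustrative:

```python
# Toy reader/writer semantics for Iceberg's two column defaults:
#   initial-default: value materialized for rows that predate the column
#                    (old data files simply don't contain the column)
#   write-default:   value a writer fills in for new rows that omit it
# The two are independent, so you can backfill old rows one way while
# requiring (or defaulting) new rows another way.

column = {"name": "version", "initial-default": 1, "write-default": 2}

def read_row(stored_row, col):
    # Old data files lack the column entirely; the reader fills it in.
    if col["name"] not in stored_row:
        return {**stored_row, col["name"]: col["initial-default"]}
    return stored_row

def write_row(new_row, col):
    # New rows that omit the column get the write-default.
    if col["name"] not in new_row:
        return {**new_row, col["name"]: col["write-default"]}
    return new_row

old = read_row({"id": 7}, column)    # row written before the column existed
new = write_row({"id": 8}, column)   # row written after the schema change
print(old)  # {'id': 7, 'version': 1}
print(new)  # {'id': 8, 'version': 2}
```

Note that no old files are rewritten: the backfill is purely logical, applied at read time.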
hodgesrm 15 hours ago [-]
This Google article was a nice high-level overview of Iceberg V3. I wish the V3 spec (and Iceberg specs in general) were more readable. For now the best approach seems to be to read the Javadoc for the Iceberg Java API. [0]
By contrast, the Delta Lake paper is extremely easy to read and to implement the basics of (I did); Iceberg has nothing so concise and clear.
twoodfin 10 hours ago [-]
If I implement what’s described in the Delta Lake paper, will I be able to query and update arbitrary Delta Lake tables as populated by Databricks in 2025?
(Would be genuinely excited if the answer is yes.)
eatonphil 10 hours ago [-]
Not sure (probably not). But it's definitely much easier to immediately understand IMO.
twoodfin 10 hours ago [-]
OK, but at least from my perspective, the point of OTFs is to allow ongoing interoperability between query and update engines.
A “standard” getting semi-monthly updates via random Databricks-affiliated GitHub accounts doesn’t really fit that bill.
Many companies seem to be using Apache Iceberg, but the ecosystem feels immature outside of Java. For instance, iceberg-rust doesn't even support HDFS. (Though admittedly, Iceberg's tendency to create many small files makes it a poor fit for HDFS anyway.)
hodgesrm 13 hours ago [-]
Seems like this is going to be a permanent issue, no? Library level storage APIs are complex and often quite leaky. That's based on looking at the innards of MySQL and ClickHouse for a while.
It seems quite possible that there will be maybe three libraries that can write to Iceberg (Java, Python, Rust, maybe Golang), while the rest at best will offer read access only. And those language choices will condition and be conditioned by the languages that developers use to write applications that manage Iceberg data.
ozgrakkurt 11 hours ago [-]
This was the same with the Arrow/Parquet libraries. It takes a long time for all the implementations to catch up.
sgarland 8 hours ago [-]
I read this [0] (I also recommend reading part 1 for background) a few weeks ago, and found it quite interesting.
The entire concept of data lakes seems odd to me, as a DBRE. If you want performant OLAP, then get an OLAP DB. If you want temporality, have a created_at column and filter. If the problem is that you need to ingest petabytes of data, fix your source: your OLTP schema probably sucks and is causing massive storage amplification.
Cool to see Iceberg getting these kinds of upgrades. Deletion vectors and default column values sound like real quality-of-life improvements, especially for big, messy datasets. Curious to hear if anyone’s tried V3 in production yet and what the performance looks like.
jamesblonde 12 hours ago [-]
Is it out yet?
talatuyarer 15 hours ago [-]
This new version has some great new features, including deletion vectors for more efficient transactions and default column values to make schema evolution a breeze. The full article has all the details.
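Roughly, a deletion vector lets a writer mark rows of an immutable data file as deleted by position instead of rewriting the file. The sketch below is a plain-Python toy of that read-side semantic; the real V3 format serializes deletion vectors as bitmaps in separate files, which this does not attempt to model:

```python
# Toy model of a deletion vector: a per-data-file set of deleted row
# positions, kept alongside the (immutable) file rather than inside it.

class DataFile:
    def __init__(self, rows):
        self.rows = rows                 # immutable data file contents
        self.deletion_vector = set()     # row positions marked deleted

    def delete_where(self, predicate):
        # A "delete" only records positions; the file itself is untouched.
        for pos, row in enumerate(self.rows):
            if predicate(row):
                self.deletion_vector.add(pos)

    def scan(self):
        # Readers skip any position present in the deletion vector.
        return [row for pos, row in enumerate(self.rows)
                if pos not in self.deletion_vector]

f = DataFile([{"id": 1}, {"id": 2}, {"id": 3}])
f.delete_where(lambda r: r["id"] == 2)
print(f.scan())  # [{'id': 1}, {'id': 3}]
```

The efficiency win is that a delete touching a handful of rows writes a small vector instead of copying a large data file.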
jamesblonde 12 hours ago [-]
When will open source v3 come out?
It's supposed to be in Apache Iceberg 1.10, right?
talatuyarer 12 hours ago [-]
Yes, 1.10 will be the first release for the V3 spec, but not all features are implemented in engines such as Spark or Flink.
Of course I haven't seen any implementations supporting these yet.
talatuyarer 11 hours ago [-]
Yes, the specification will be finalized with version 1.10; earlier releases also included specification changes. Iceberg's implementation of V3 happens in three stages: specification change, core implementation, and Spark/Flink implementation.
So far only Variant is supported in Spark; with 1.10, Spark will support the nanosecond timestamp and unknown types, I believe.
jamesblonde 4 hours ago [-]
Any idea when 1.10 will be released?
robertlagrant 12 hours ago [-]
> default column values
The way they implemented this seems really useful for any database.
nojito 10 hours ago [-]
It's a mismatch that this is on the official blog: their implementation of Iceberg is still behind and doesn't have feature parity with the spec.
[0] https://javadoc.io/doc/org.apache.iceberg/iceberg-api/latest...
https://github.com/delta-io/delta/blob/master/PROTOCOL.md
Look at something like this:
https://github.com/delta-io/delta/blob/master/PROTOCOL.md#wr...
Ouch.
[0]: https://database-doctor.com/posts/iceberg-is-wrong-2.html
https://cloud.google.com/bigquery/docs/iceberg-tables#limita...