Iceberg: Choosing a catalog when using "Dremio Software" with other compute engines

There are many catalog implementations that Iceberg currently supports for common engines like Spark and Flink. I was wondering what the best catalog would be when Iceberg and Dremio are used in a multi-engine environment.

This thread is inspired by the discussions in:

  1. Can Dremio be co-used with other compute engine for modifying Iceberg table?
  2. Incremental data reflection with Iceberg

I’d like to scope this to the “Dremio Software” edition only, since Arctic is available in Dremio Cloud and is an obvious choice there (unless the “Preview” disclaimer is a blocker for some). Regarding thread 1, I suggest at least initially also scoping the discussion to Dremio only reading data, in order to avoid getting into concurrent writes.

The options known to me are:

  1. Glue catalog
  2. Hive catalog
  3. Hadoop catalog
  4. Nessie catalog (using an undocumented services.nessie.remote-uri config in dremio.conf)
  5. Roll your own support for the JDBC catalog, REST catalog or similar
  6. Wait for Arctic to come to the “Software” version
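For option 4, a sketch of what the setting might look like in dremio.conf. To be clear, this is speculative: `services.nessie.remote-uri` is undocumented, and the host, port, and API path below are assumptions based on Nessie's defaults, not on documented Dremio behavior:

```
# dremio.conf - hypothetical sketch; services.nessie.remote-uri is
# undocumented, so both its behavior and this endpoint are assumptions
services.nessie.remote-uri: "http://nessie-host:19120/api/v1"
```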

Short comments on the options above:

  1. Glue may be an option, but not for non-AWS clouds or on-prem deployments
  2. I would rather avoid all the complexities of Hive (unless you’re deeply invested in Hive already, which not everyone is - my company isn’t)
  3. The lowest common denominator. For one, engines doing writes need an explicit lock manager to coordinate commits, since locking isn’t provided by the catalog.
  4. This might not even be officially supported, and as far as I can tell from testing, Dremio treats it very similarly to the Hadoop catalog
  5. This seems like a daunting task. It may also be in vain, since Dremio may add more catalogs in the future (Dremio’s roadmap isn’t open, so no one knows)
  6. One could hope. I haven’t heard of any plans for it being added to the “Software” version (again, the roadmap isn’t open, so no one knows)
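To illustrate the point about option 3: with a Hadoop catalog, every writing engine must be configured with a lock manager explicitly. A sketch of the relevant Spark catalog properties, assuming Iceberg's DynamoDB-based lock manager is used (the catalog name `hadoop_cat`, warehouse path, and lock table name are placeholders):

```properties
# Spark catalog properties - sketch only; names and paths are placeholders
spark.sql.catalog.hadoop_cat=org.apache.iceberg.spark.SparkCatalog
spark.sql.catalog.hadoop_cat.type=hadoop
spark.sql.catalog.hadoop_cat.warehouse=s3a://warehouse/
# Without the two lines below, writes still "succeed" but are unsafe
# under concurrency - exactly the failure mode described in this thread
spark.sql.catalog.hadoop_cat.lock-impl=org.apache.iceberg.aws.dynamodb.DynamoDbLockManager
spark.sql.catalog.hadoop_cat.lock.table=icebergLockTable
```

The dangerous part is that the lock configuration is opt-in per engine, so a single misconfigured writer silently bypasses the coordination.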

For context, in my company we’re currently:

  1. Running Dremio OSS on-prem
  2. Using a per-table Hadoop catalog
  3. Using an AWS S3 remote file source (MinIO) and manually configuring folders as Iceberg tables
  4. Using a custom-built LockManager to coordinate commits by writers
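For reference, the S3A settings for pointing a Hadoop-based engine like Spark at MinIO look roughly like the following; the endpoint and flags here are illustrative placeholders, not our exact configuration:

```properties
# Hadoop/S3A settings for a MinIO endpoint - values are placeholders
spark.hadoop.fs.s3a.endpoint=http://minio:9000
spark.hadoop.fs.s3a.path.style.access=true
spark.hadoop.fs.s3a.connection.ssl.enabled=false
```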

I’d like to do better, and there are a lot of good reasons to use a proper catalog. Item 4 above is an issue for me in particular, since it makes writing to Iceberg risky. You need to be certain you have nailed the LockManager configuration, because, unlike with other catalogs, if you forget or misconfigure the LockManager, it is still possible to write to the table. We have corrupted more than one table this way.

TL;DR - can we do better than the Hadoop catalog and still use Dremio for reading Iceberg tables - now or in the near future? @Benny_Chow may have some insights/recommendations, but what are others in the community doing?