The Hive metastore is already supported by many big-data products, and it can now be split out of Hadoop and run as a standalone cloud-native service.
Dremio can read the Hive metastore even if it is outside of Hadoop. Is that your question? As long as Dremio can talk to the metastore host and get the locations of the files on HDFS, it should work.
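For illustration, connecting Dremio to an external metastore boils down to pointing a Hive source at the metastore's Thrift endpoint. Below is a sketch of the JSON payload one might send to Dremio's REST catalog API (`POST /api/v3/catalog`); the source name, host, and exact field names are assumptions, so check the API docs for your Dremio version.

```python
import json

# Sketch of a Hive source definition for Dremio's REST catalog API.
# Field names are assumptions based on typical Dremio source configs.
payload = {
    "entityType": "source",
    "type": "HIVE",
    "name": "hive_prod",                      # hypothetical source name
    "config": {
        "hostname": "metastore.example.com",  # Hive metastore host (placeholder)
        "port": 9083,                          # default Hive metastore Thrift port
    },
}

print(json.dumps(payload, indent=2))
```

The key point is that only the metastore host and port are needed; the data itself stays on HDFS (or object storage), and Dremio resolves file locations through the metastore.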
Currently Dremio holds its metadata in a directory on disk.
Can I configure a custom storage backend for Dremio's metadata?
Dremio's metadata (stored in RocksDB) and the Hive metastore are two entirely different things; they cannot be combined.
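For reference, the location of that local RocksDB-backed metadata directory is set in `dremio.conf`; a minimal sketch, with an illustrative path:

```
# dremio.conf -- location of Dremio's local KV store (RocksDB);
# the path shown here is illustrative
paths: {
  local: "/var/lib/dremio"
}
```

This directory lives on the coordinator's disk, which is why it is separate from any Hive metastore the sources point at.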
Is there a way to register in the Hive Metastore a dataset served by Apache Arrow Flight?
In such an architecture Dremio is not the datalake engine itself but part of a bigger one. Is that what you meant, koolay?
Is that making sense?
Yes, that makes sense, and it would be very useful.
Dremio's metastore requires a mounted disk in a distributed deployment.
Dremio would be stateless if its metastore were backed by the Hive metastore.