After restarting Dremio, reflection failed

Hi @summersmd

The JDBC error is a known issue and we will be addressing it. The reflection matching but not getting picked up is something we need to investigate; we will have to get back to you on that.

Thanks
@balaji.ramaswamy

Balaji,

We’re getting this error while trying to build an aggregation reflection.

CompileException: File 'com.dremio.exec.compile.DremioJavaFileObject[HashTableGen9463.java]', Line 13930, Column 24: HashTableGen9463.java:13930: error: code too large

Is this a different issue? If so, please start a new thread and tell us how you have Dremio deployed, how much memory you have, and the size of the physical data.

I now have this issue. The only thing I can think of that I did was set my quote character to a control character in my source to avoid quoting on ".

I’m going to try restarting my cluster to solve it, but I get this error on every page.

Full error:

org.apache.lucene.store.AlreadyClosedException: this IndexWriter is closed

It appears that Lucene on the master node has gone defunct. I'll look into restarting the master node and/or fixing the Lucene issue.
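For reference, a minimal sketch of how I restarted the master, assuming Dremio is installed under /opt/dremio and runs as a systemd service named dremio (service name and log location vary by install, so adjust to your deployment):

# restart the coordinator (master) process; use "service dremio restart" on non-systemd hosts
sudo systemctl restart dremio
# then watch the server log for Lucene/KVStore errors while it comes back up
# (the log lives under /var/log/dremio or $DREMIO_HOME/log depending on the install)
sudo tail -f /opt/dremio/log/server.log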

I found that my master node’s disk was full:

Filesystem 1K-blocks Used Available Use% Mounted on
devtmpfs 16046920 0 16046920 0% /dev
tmpfs 16060392 0 16060392 0% /dev/shm
tmpfs 16060392 372 16060020 1% /run
tmpfs 16060392 0 16060392 0% /sys/fs/cgroup
/dev/nvme0n1p1 20959212 20959192 20 100% /
tmpfs 3212080 0 3212080 0% /run/user/0

And that the Dremio db/catalog was huge:
du -h . | grep -E '[0-9.]+G'
4.0K ./usr/share/locale/zh_CN.GB2312/LC_MESSAGES
4.0K ./usr/share/locale/zh_CN.GB2312
8.1G ./opt/dremio/data/db/catalog
8.2G ./opt/dremio/data/db
8.2G ./opt/dremio/data
3.5G ./opt/dremio/restore
5.7G ./opt/dremio/backup/dremio_backup_2019-12-07_08.00
5.7G ./opt/dremio/backup
18G ./opt/dremio
19G ./opt
20G .

This could be related. Is there any reason the db/catalog needs to use so much space? I thought it was just metadata. Does this mean that the DIST location on S3 is incorrectly configured?
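For what it's worth, here is how I checked where dist is actually pointing; this is only a sketch and assumes the default config location /opt/dremio/conf (adjust paths and bucket names to your setup):

# show the paths block of the coordinator config, which includes the dist store location
grep -A5 'paths' /opt/dremio/conf/dremio.conf
# an S3-backed dist store is typically of the form: paths.dist: "dremio+s3://<bucket>/<folder>"
# if dist still points at the local disk (the default pdfs location), reflection data lands on the master's volume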

@kprifogle

dist:// is different from the metadata store (RocksDB). dist:// is used more for reflections, uploads, downloads, etc., while RocksDB stores metadata about reflections, datasets, VDSs, users, spaces, etc. Although we do not require terabytes to start with, we have seen that, depending on how often you drop and create new things, it can get big. I hope you are on our latest version, in which we self-clean more efficiently. You can also manually reindex, compact, etc.; see the link below.

Clean Metadata
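For example, something along these lines on the coordinator, with Dremio stopped first (this is only a sketch; flag names can differ between versions, so check dremio-admin clean -h before running it):

# stop the coordinator before touching the KVStore
sudo systemctl stop dremio
# delete orphaned entries (-o), compact (-c) and reindex (-i) the local KVStore
# (verify the exact flags with: /opt/dremio/bin/dremio-admin clean -h)
/opt/dremio/bin/dremio-admin clean -o -c -i
# start the coordinator again
sudo systemctl start dremio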

Thanks
@balaji.ramaswamy