Running out of file descriptors

We are using Dremio for a small business to generate reports daily.

We recently turned on reflections for 5 of our most-used queries and observed what looks like a memory / fd leak.

Dremio is deployed in a Docker container as a standalone server (coordinator + executor).
We are using 3.0.0, as it is the last version that supports our old Elasticsearch 1.7.
Most of our queries are pure MySQL queries.

The server starts at ~1.7 GB of memory, grows gradually to 4 GB, and dies, with the following memory settings:
DREMIO_MAX_HEAP_MEMORY_SIZE_MB=1536
DREMIO_MAX_DIRECT_MEMORY_SIZE_MB=3072

The server starts with ~13,000 fds and reaches ~130,000 overnight (we haven't captured the actual count at the moment the server dies).
The limit printed by prlimit is:
RESOURCE DESCRIPTION SOFT HARD UNITS
NOFILE max number of open files 1048576 1048576
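To measure the growth rate more precisely, a simple sketch like the following can log the fd count periodically. It assumes Dremio runs as PID 1 inside the container (as in our setup); adjust the PID if yours differs.

```shell
#!/bin/sh
# Periodically log the open-fd count of a process to quantify the leak rate.
# PID 1 is an assumption based on Dremio being the container's main process.
PID="${1:-1}"
while true; do
    printf '%s %s\n' "$(date -Is)" "$(ls "/proc/$PID/fd" 2>/dev/null | wc -l)"
    sleep 600   # sample every 10 minutes
done
```

Running this inside the container (e.g. via `docker exec`) and redirecting to a file gives a time series that shows whether the fd count correlates with reflection refresh jobs.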

The server dies after running for ~3 days and reports:

standalone_1  | 2019-12-06 15:45:54,797 [2215884f-c9b7-09bb-0bd0-d0b5c0bccf00:foreman] INFO  c.dremio.exec.catalog.DatasetManager - User Error Occurred [ErrorId: 9248282e-2293-4fe4-8ddc-1f46d8678dc0]
standalone_1  | com.dremio.common.exceptions.UserException: Failure while attempting to read metadata for table "__accelerator"."11944439-893f-490e-b2fb-5ba3370e99ef"."f2fac94c-610b-4941-8221-f351e83f9272" from source.
standalone_1  |      at com.dremio.common.exceptions.UserException$Builder.build(UserException.java:746) ~[dremio-common-3.0.0-201810262305460004-5c90d75.jar:3.0.0-201810262305460004-5c90d75]

standalone_1  | 2019-12-06 15:45:54,809 [dremio-general-8] ERROR c.d.s.reflection.ReflectionManager - Couldn't handle reflection entry 11944439-893f-490e-b2fb-5ba3370e99ef
standalone_1  | java.lang.RuntimeException: java.nio.file.FileSystemException: /opt/dremio/data/db/search/materialization_store/core/_bpi.fdt: Too many open files
standalone_1  |      at com.google.common.base.Throwables.propagate(Throwables.java:240) ~[guava-20.0.jar:na]

My current observation is that most of the fds are links to deleted files, associated with jobs like the following:

[Screenshot (2019-12-09, 10:35 AM): Jobs page showing the jobs associated with the deleted fd links]

If I run:

ls -l /proc/1/fd | grep 49ddce69-7831-4a37-8919-516008133194

inside the Docker container, the results are all links to deleted files, such as:

lr-x------ 1 dremio dremio 64 Dec  9 02:10 203939 -> /opt/dremio/data/pdfs/accelerator/11944439-893f-490e-b2fb-5ba3370e99ef/_49ddce69-7831-4a37-8919-516008133194_0_2455120465742423200_-5324881200546519941/20847 (deleted)
lr-x------ 1 dremio dremio 64 Dec  9 02:10 203940 -> /opt/dremio/data/pdfs/accelerator/11944439-893f-490e-b2fb-5ba3370e99ef/_49ddce69-7831-4a37-8919-516008133194_0_2455120465742423200_-5324881200546519941/20847/131949 (deleted)
lr-x------ 1 dremio dremio 64 Dec  9 02:10 203941 -> /opt/dremio/data/pdfs/accelerator/11944439-893f-490e-b2fb-5ba3370e99ef/_49ddce69-7831-4a37-8919-516008133194_0_2455120465742423200_-5324881200546519941/20847/131949 (deleted)
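To see which reflections account for the leaked descriptors, the deleted links can be grouped by accelerator directory. This is a sketch assuming PID 1 and the `/opt/dremio/data/pdfs/accelerator/<id>/...` path layout shown above (the id is the 7th `/`-separated field).

```shell
# Count deleted-file fd links per accelerator id, highest first.
# Assumes Dremio is PID 1 and targets live under
# /opt/dremio/data/pdfs/accelerator/<id>/... as in the output above.
ls -l /proc/1/fd \
  | grep '(deleted)' \
  | awk '{print $(NF-1)}' \
  | awk -F/ '{print $7}' \
  | sort | uniq -c | sort -rn
```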

So my questions are: is it normal for Dremio to keep all of these deleted fds open?
And is there any advice on how to solve this problem?