Query was cancelled because it exceeded the memory limits set by the administrator

Receiving this error while refreshing a reflection:
“Query was cancelled because it exceeded the memory limits set by the administrator”

I’m not aware of having set a memory limit for queries … which setting does this correlate to?

Thanks

Hi @stevenyk

This just means you do not have enough direct memory on your Dremio executors to run this query. On your Dremio executor, run ps -ef | grep Dremio and see how much direct memory is allocated. You might want to consider increasing that via the dremio-env file on all the executors.
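A minimal sketch of that check, assuming the Dremio launcher passes the standard -XX:MaxDirectMemorySize JVM flag (flag names and paths can vary by version and install):

```sh
# Look at the running Dremio process and pull the JVM direct-memory
# flag out of its arguments.
ps -ef | grep -i "[d]remio" | grep -o "MaxDirectMemorySize=[^ ]*"
# Example output: MaxDirectMemorySize=8192m  -> 8 GB of direct memory
```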

Thanks
@balaji.ramaswamy

Could you elaborate on which variable you are referring to?
The default values in the dremio-env file are:
DREMIO_MAX_HEAP_MEMORY_SIZE_MB=4096
DREMIO_MAX_DIRECT_MEMORY_SIZE_MB=8192
DREMIO_MAX_PERMGEN_MEMORY_SIZE_MB=512

The second variable, DREMIO_MAX_DIRECT_MEMORY_SIZE_MB=8192

Worth reading this from the docs: https://docs.dremio.com/deployment/dremio-config.html#recommended-memory-setup
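If it helps, the change itself is a one-line edit on each executor followed by a restart. A minimal sketch, assuming an RPM/tarball-style install (the conf path, example value, and restart command below are illustrative and vary by deployment):

```sh
# On every executor: raise direct memory in conf/dremio-env, then restart.
sudo vi /opt/dremio/conf/dremio-env
#   DREMIO_MAX_DIRECT_MEMORY_SIZE_MB=16384   # example value; size it to your nodes
sudo systemctl restart dremio
```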

Any pointers on what could be a good setting for a dataset (single table) of around 3TB?

How many executor nodes?

2 executors - m5d.4xl
2 coordinators - m5d.2xl

It really depends on the queries. Many join and aggregation operations require the dataset to fit in memory. You have 128GB RAM across the two executor nodes.

Have you looked at Data Reflections? This can significantly reduce the requisite memory for many query patterns.

I’m just trying out Dremio as of now, so all I’m trying to do is a simple group by on a couple of columns with a few sums and counts in the projections.
I tried creating a raw reflection, but even that job fails with the same memory-limits error or gets cancelled with some memory / GC overhead / heap space issue.

You could try increasing the DIRECT setting to 54GB.
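For context, that number is a sizing judgment rather than a documented formula: each m5d.4xlarge has 64 GB of RAM, so giving 54 GB to direct memory leaves roughly 10 GB for the JVM heap and the OS. A sketch of what that would look like in dremio-env (values are illustrative):

```sh
# Per m5d.4xlarge executor (64 GB RAM): most of it goes to direct memory.
DREMIO_MAX_HEAP_MEMORY_SIZE_MB=4096        # default heap from the earlier post
DREMIO_MAX_DIRECT_MEMORY_SIZE_MB=55296     # 54 GB = 54 * 1024 MB
```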

What is the data source?

s3 files (parquet format)

Hi,

I am facing the same problem

I already increased the direct memory by increasing the node memory in the provisioning step:
DREMIO_MAX_DIRECT_MEMORY_SIZE_MB is now 12 GB (instead of 8 GB before)
and the node memory went from 16 GB to 24 GB,
but I still get the error.

Is there also a limit in the support keys that needs to be modified?

Thanks

c7ec8023-5f49-4a69-8c5a-a16ffbce110f.zip (59.4 KB)

@balaji.ramaswamy @kelly how can this configuration (DREMIO_MAX_DIRECT_MEMORY_SIZE_MB) be set by default after creating/destroying nodes?
I’m using Dremio on AWS, build 22.1.1-202208230402290397-a7010f28.

@Diego If this is an executor, Dremio should automatically set the heap to 8 GB, leave some memory for the OS, and give the rest to direct memory (a rough sketch of that split is below). What is your issue? Are you running out of direct memory? Kindly send us the following:

  • Profile of the failed job
  • One of your GC logs so we can review your JVM flags
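Roughly, the split works out like this (an illustrative sketch, not the actual provisioning code; the OS reserve below is an assumption):

```sh
# Example: a 24 GB executor node, as in the post above.
TOTAL_MB=24576          # total node RAM
HEAP_MB=8192            # executor heap Dremio sets automatically
OS_RESERVE_MB=2048      # assumption: ~2 GB kept free for the OS
DIRECT_MB=$((TOTAL_MB - HEAP_MB - OS_RESERVE_MB))
echo "direct memory ~ ${DIRECT_MB} MB"   # ~14 GB on a 24 GB node
```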

Thanks
Bali

@balaji.ramaswamy this is the profile of query error
751e9cd2-4f7a-4659-a0bf-1ebf9af9d536.zip (613.4 KB)

Where could I get the GC logs?

@balaji.ramaswamy here another example
cc32c3c4-f0f5-4d30-aa6e-87e7ce6daeac.zip (99.5 KB)

@Diego Can you enable planner.memory.aggressive and see if this helps?

Sure. I will try this. I’ll come back here to say if it worked

@balaji.ramaswamy planner.memory.aggressive was already enabled by default

@Diego I see a total of 3 profiles from you: one is running out of direct memory during execution, the second is failing the direct-memory check during execution planning, and the third is the coordinator heap monitor cancelling queries to avoid a heap outage. I see you have configured 12 GB of direct memory. This seems a little low and is causing queries to either run out of memory or fail the check.

What is the total RAM on the coordinator and the executors?