Autoscaling Dremio

Hello Team,

We have deployed Dremio 4.7 on a Kubernetes cluster using the Helm chart.
We are using disk as a local volume.
Is there a way to autoscale the executors based on load or any other metric?

Alternatively, if we deploy Dremio on AWS using CloudFormation, as mentioned here, is there a way to autoscale the executors in that setup based on load or any other metric?

@ankita9 Currently Dremio does not support this. If Kubernetes decides to scale up based on resource utilization, currently running queries will not benefit, but new queries that are submitted will leverage the increased number of executors.
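If you want to add or remove executors by hand in the meantime, here is a minimal sketch from Python, assuming the Helm chart deployed the executors as a StatefulSet named dremio-executor in a dremio namespace (both names are assumptions; check them with kubectl get statefulsets in your release):

```python
# Minimal sketch: set the Dremio executor StatefulSet to a target replica count.
# Uses the official kubernetes Python client; the StatefulSet name "dremio-executor"
# and namespace "dremio" are assumptions taken from a typical Helm release.
from kubernetes import client, config

def scale_executors(replicas: int,
                    name: str = "dremio-executor",
                    namespace: str = "dremio") -> None:
    config.load_kube_config()  # use config.load_incluster_config() inside the cluster
    apps = client.AppsV1Api()
    body = {"spec": {"replicas": replicas}}
    apps.patch_namespaced_stateful_set_scale(name, namespace, body)

if __name__ == "__main__":
    scale_executors(5)  # e.g. grow the executor pool to 5 pods
```

As noted above, only queries submitted after the new pods register with the coordinator will benefit from the extra executors.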

I can't find anywhere in the documentation which queue and node metrics Dremio provides.
I am looking for metrics such as queue capacity and the number of queries (and resources per query) waiting in the queue; I want to use this information to scale Dremio up.
I am also looking for metrics about the current node state, such as the number of queries running and any caches or splits live on the node; I want to use this information for scaling down, i.e. to understand whether a node can be safely removed.

P.S. For now, I plan to ignore any information about reflections.
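For now I am experimenting with polling the system tables over the REST API and deriving a scaling signal from that. A rough sketch, assuming the v3 SQL/job endpoints and that sys.nodes is available in this edition (please correct me if there is a proper metrics endpoint I missed):

```python
# Rough sketch: poll Dremio node state by running SQL against sys.nodes over the
# REST API. The endpoint paths and the sys.nodes table are assumptions; verify
# them against the docs for your Dremio version before relying on this.
import time
import requests

DREMIO = "http://dremio-client:9047"  # assumed coordinator service URL

def login(user: str, password: str) -> dict:
    resp = requests.post(f"{DREMIO}/apiv2/login",
                         json={"userName": user, "password": password})
    resp.raise_for_status()
    return {"Authorization": f"_dremio{resp.json()['token']}"}

def run_sql(headers: dict, sql: str) -> list:
    job = requests.post(f"{DREMIO}/api/v3/sql", json={"sql": sql}, headers=headers)
    job.raise_for_status()
    job_id = job.json()["id"]
    # Poll until the job reaches a terminal state, then fetch its result rows.
    while True:
        state = requests.get(f"{DREMIO}/api/v3/job/{job_id}", headers=headers).json()
        if state["jobState"] in ("COMPLETED", "FAILED", "CANCELED"):
            break
        time.sleep(1)
    results = requests.get(f"{DREMIO}/api/v3/job/{job_id}/results", headers=headers)
    results.raise_for_status()
    return results.json().get("rows", [])

if __name__ == "__main__":
    headers = login("admin", "secret")  # placeholder credentials
    nodes = run_sql(headers, "SELECT * FROM sys.nodes")
    print(f"{len(nodes)} nodes visible to the coordinator")
```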

Dremio plans the query before executing it, so I believe you could modify the source code to add some kind of “interceptor” that scales your Kubernetes deployment based on the calculated query cost. This is partly the same behavior as the AWS deployment, where queries can be routed to different executors based on query cost.
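As a thought experiment, such an “interceptor” could be as small as a function that maps the planner's estimated cost to a target executor count and then applies it through the Kubernetes API (as in the earlier sketch). The thresholds and bounds below are made-up placeholders, not values Dremio uses:

```python
# Illustrative only: map an estimated query cost to a target executor count.
# The cost thresholds and min/max bounds are arbitrary placeholders; a real
# interceptor would derive them from the actual workload and hardware.

MIN_EXECUTORS = 2
MAX_EXECUTORS = 10

def target_executors(estimated_cost: float, current: int) -> int:
    # Scale up for expensive plans, drift back down for cheap ones.
    if estimated_cost > 1e9:
        desired = current + 2
    elif estimated_cost > 1e6:
        desired = current + 1
    else:
        desired = current - 1
    return max(MIN_EXECUTORS, min(MAX_EXECUTORS, desired))

# Example: a plan costed at 5e6 on a 3-pod executor pool suggests 4 executors.
print(target_executors(5e6, current=3))
```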