We are currently comparing and benchmarking Dremio against our existing setup. For some important usage scenarios we already get sub-second response times from our current system, and right now we cannot tell whether Dremio is faster or slower.
Is there maybe a display setting to adjust this? Or is Dremio too shy to show the real figures?
You can see the total time in the query profile if you look into the details.
A better approach may be to run your tests through a tool like JMeter and record the round trip. This would give you a better apples-to-apples comparison.
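To complement a JMeter run, here is a minimal sketch of measuring the client-side round trip yourself. The `run_query` function is a placeholder assumption; swap in your actual JDBC/ODBC/REST call:

```python
import time

def run_query(sql: str) -> None:
    """Placeholder for the real client call -- an assumption;
    replace with your driver's execute()."""
    time.sleep(0.05)  # simulate a 50 ms round trip

def measure_round_trip_ms(sql: str) -> float:
    """Wall-clock round trip in milliseconds, as seen by the client."""
    start = time.perf_counter()
    run_query(sql)
    return (time.perf_counter() - start) * 1000.0

elapsed = measure_round_trip_ms("SELECT 1")
print(f"round trip: {elapsed:.1f} ms")
```

Running the same query repeatedly and looking at the distribution (not just one number) gives a fairer comparison across stacks.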
In analytics my experience is that “a few seconds or less” is the goal. Of course this statement ignores concurrency. In OLTP systems a few ms or sub-ms is the goal in order to handle high concurrency.
Fortunately with data reflections Dremio is good in terms of latency and concurrency.
I think that's not an answer to my question/demand. I always see full seconds, but I'm asking for ms everywhere in the GUI, mainly for sub-5-second query times where ms really matter.
In addition, I cannot share your "a few seconds or less"-is-enough experience in analytics. Every ms counts if you have SLAs you get paid or punished for.
Please try to use the REST API for this.
It should show you start and end time in milliseconds.
Here is an example of an output:
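The actual payload isn't reproduced here, but as a hedged sketch: assuming the job-status JSON carries ISO-8601 `startedAt`/`endedAt` timestamps (these field names and the sample values are my assumption; check your server's actual response), the millisecond duration can be derived like this:

```python
from datetime import datetime

# Hypothetical job-status payload; field names and values are assumptions.
job = {
    "jobState": "COMPLETED",
    "startedAt": "2019-03-01T10:15:02.123+00:00",
    "endedAt":   "2019-03-01T10:15:02.987+00:00",
}

started = datetime.fromisoformat(job["startedAt"])
ended = datetime.fromisoformat(job["endedAt"])
elapsed_ms = (ended - started).total_seconds() * 1000.0
print(f"query time: {elapsed_ms:.0f} ms")  # 864 ms for this sample
```

This gives you millisecond precision regardless of how the GUI rounds its display.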
We hear your request for ms in the GUI and will track this internally.
For your workloads do you have sub-second SLAs for large analytical queries? It would be good to hear more about your requirements. Of course, faster is always better, I agree. I was simply contrasting latency and concurrency requirements for OLTP v. OLAP, which tend to vary by orders of magnitude.
I continue to believe that if you are comparing two different systems it is good to measure round trip for the entire stack.
Many thanks for taking this up.
Our workloads are mainly smaller-size aggregates (most often a few tens to hundreds of thousands of records aggregated down to 10 to 100 records) from a set of 1 billion records, filtered by certain dimensions. But we have high concurrency, with hundreds of parallel queries. If we can halve the query time, we can either serve twice as many users or halve their individual response time.
And yes, we will measure the entire round trip, but the query time is by far the most important aspect.