Hi All,
I am getting the IllegalStateException error below while running the following query in Dremio. Any help is greatly appreciated.
Query: SELECT "@timestamp" AS REQ_DATE, extract_pattern(log_message, '(?<=Label=)(.*)(?=,LastValue)', 0, 'INDEX') AS operation,
extract_pattern(log_message, '\d+', 0, 'INDEX') AS lastvalue, serviceName
FROM "E-167"."logstash_cloudMetrics".syslog AS syslog WHERE regexp_like( log_message, '.*?\QLabel=\E.*?')
UNION ALL
SELECT "@timestamp" AS REQ_DATE, extract_pattern(log_message, '(?<=Label=)(.*)(?=,LastValue)', 0, 'INDEX') AS operation,
extract_pattern(log_message, '\d+', 0, 'INDEX') AS lastvalue, serviceName
FROM "E-196"."logstash_cloudMetrics".syslog AS syslog
WHERE regexp_like( log_message, '.*?\QLabel=\E.*?')
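For anyone reading along, here is a small sketch of what those two extraction patterns are doing, shown in Python for illustration (Dremio's regex engine is Java-based, but the lookaround syntax is the same). The sample log line is hypothetical, since the actual `log_message` contents are not shown in this thread:

```python
import re

# Hypothetical log line in the shape the query expects
# (assumed format: "... Label=<operation>,LastValue=<n> ...").
msg = "2018-03-21 08:30:00 metrics: Label=GetAccount,LastValue=42,Host=node1"

# Same idea as the first extract_pattern() call: capture everything
# between the literal "Label=" and the literal ",LastValue".
operation = re.search(r"(?<=Label=)(.*)(?=,LastValue)", msg).group(0)

# Same idea as the second extract_pattern() call: the first run of
# digits in the message. Note this grabs the earliest digits, which
# here come from the timestamp rather than from LastValue.
first_number = re.search(r"\d+", msg).group(0)

print(operation)     # GetAccount
print(first_number)  # 2018
```

Worth noting: because `\d+` matches the first run of digits anywhere in the string, the `lastvalue` column may not actually contain the LastValue number if the message starts with a timestamp; anchoring it with a lookbehind like `(?<=LastValue=)\d+` would be more precise.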
Error in server.log:
C:/Users/hdevar001c/AppData/Local/Dremio/pdfs/results/25757a07-380c-6723-9845-940851bc0d00
2018-03-21 08:15:44,928 [scheduler-4] INFO c.d.service.jobs.JobResultsStore - Deleted job output directory : C:/Users/hdevar001c/AppData/Local/Dremio/pdfs/results/25757937-3ccd-0920-75aa-c18a6705b900
2018-03-21 08:30:44,438 [FABRIC-rpc-event-queue] ERROR c.d.services.fabric.FabricServer - IllegalStateException: Received data batch for 254db37f-f73d-e97a-a396-cc194b26ea00:1:1 from 3:2 before fragment executor started
com.dremio.common.exceptions.UserException: IllegalStateException: Received data batch for 254db37f-f73d-e97a-a396-cc194b26ea00:1:1 from 3:2 before fragment executor started
at com.dremio.common.exceptions.UserException$Builder.build(UserException.java:648) ~[dremio-common-1.4.4-201801230630490666-6d69d32.jar:1.4.4-201801230630490666-6d69d32]
Unfortunately, there’s not enough information to figure out what could be causing the issue. Can you share the query profile? Also, did you look at the logs from node CSPAUTO-PO-1P?
When I ran the above query in Dremio, I got the error "SYSTEM ERROR: IllegalStateException: Received data batch for 2545edbc-f56e-7b90-63d1-798e87b98a00:1:2 from 3:2 before fragment executor started" and no results were returned.
This is a known bug that we have fixed; the fix will be part of 1.5.0.
Basically, once the limit is reached, the fragments cancel themselves, but the cancellation message was not propagated correctly to all fragments, causing them to run much longer than needed. With the fix, you should expect this query not only to finish successfully, but also much sooner (in under 15s).
I have reviewed the profile you attached, and this is one of the issues we addressed in our recent 4.0.4 release, so you can upgrade and validate.