Previously stable dataset has started failing with "One or more nodes lost connectivity during query"

A dataset query/reflection that we have used for over a year has started failing with this error. It is a relatively complex query, but I see no reason why it should have started to fail. I've added debug logging but do not see anything jumping out. I've also added more memory to the host, which made no difference. Can you suggest any logging/investigation I can do, please? We only have one host; there are no separate worker nodes.

2023-12-29 14:20:55,804 [e2 - 1a712ab5-6287-fb08-5cbc-852f6947a000:frag:1:0] DEBUG com.dremio.sabot.driver.SmartOp - Closing operator: 1:0:5
2023-12-29 14:20:55,804 [Curator-ServiceCache-0] INFO c.d.s.c.TaskLeaderStatusListener - New Leader node for task MASTER dremioHost:45679 registered itself.
2023-12-29 14:20:55,804 [e2 - 1a712ab5-6287-fb08-5cbc-852f6947a000:frag:1:0] DEBUG com.dremio.sabot.driver.SmartOp - Closing operator: 1:0:6
2023-12-29 14:20:55,804 [e2 - 1a712ab5-6287-fb08-5cbc-852f6947a000:frag:1:0] DEBUG com.dremio.sabot.driver.SmartOp - Closing operator: 1:0:7
2023-12-29 14:20:55,807 [1a712ab5-6287-fb08-5cbc-852f6947a000:foreman] DEBUG - 1a712ab5-6287-fb08-5cbc-852f6947a000: cleaning up.
2023-12-29 14:20:55,807 [1a712ab5-6287-fb08-5cbc-852f6947a000:foreman] ERROR - ExecutionSetupException: One or more nodes lost connectivity during query. Identified nodes were [dremioHost:0].
com.dremio.common.exceptions.UserException: ExecutionSetupException: One or more nodes lost connectivity during query. Identified nodes were [dremioHost:0].
at com.dremio.common.exceptions.UserException$
at com.dremio.common.EventProcessor.processEvents(
at com.dremio.common.EventProcessor.sendEvent(
at com.dremio.exec.maestro.FragmentTracker.checkAndUpdateFirstError(
at com.dremio.exec.maestro.FragmentTracker.handleFailedNodes(
at com.dremio.exec.maestro.FragmentTracker$ExecutorNodeStatusListener.nodesUnregistered(
at com.dremio.service.coordinator.LocalExecutorSetService$1$1.nodesUnregistered(
at com.dremio.service.coordinator.AbstractServiceSet.nodesUnregistered(
at com.dremio.service.coordinator.zk.ZKServiceSet.updateEndpoints(
at com.dremio.service.coordinator.zk.ZKServiceSet.access$000(
at com.dremio.service.coordinator.zk.ZKServiceSet$EndpointListener.cacheChanged(
at org.apache.curator.x.discovery.details.ServiceCacheImpl$2.apply(
at org.apache.curator.x.discovery.details.ServiceCacheImpl$2.apply(
at org.apache.curator.framework.listen.ListenerContainer$
at org.apache.curator.framework.listen.ListenerContainer.forEach(
at org.apache.curator.x.discovery.details.ServiceCacheImpl.childEvent(
at org.apache.curator.framework.listen.ListenerContainer$
at org.apache.curator.framework.listen.ListenerContainer.forEach(
at java.util.concurrent.Executors$
at java.util.concurrent.Executors$
at java.util.concurrent.ThreadPoolExecutor.runWorker(
at java.util.concurrent.ThreadPoolExecutor$
Caused by: com.dremio.common.exceptions.ExecutionSetupException: One or more nodes lost connectivity during query. Identified nodes were [dremioHost:0].
at com.dremio.exec.maestro.FragmentTracker.handleFailedNodes(
… 27 common frames omitted
2023-12-29 14:20:55,811 [e3 - 1a712ab5-6287-fb08-5cbc-852f6947a000:frag:3:1] DEBUG c.d.exec.compile.ClassTransformer - Done compiling (bytecode size=14.9 KiB, time:16 millis).
2023-12-29 14:20:55,820 [Curator-ServiceCache-0] DEBUG c.d.exec.maestro.FragmentTracker - node completed for query 1a712ab5-6287-fb08-5cbc-852f6947a000 from node dremioHost:45679, pending nodes
2023-12-29 14:20:55,829 [Curator-ServiceCache-0] DEBUG - 1a712ab5-6287-fb08-5cbc-852f6947a000: State change requested FAILED → COMPLETED
2023-12-29 14:20:55,829 [Curator-ServiceCache-0] WARN - Dropping request to move to COMPLETED state as query 1a712ab5-6287-fb08-5cbc-852f6947a000 is already at FAILED state (which is terminal).

The dataset queries a number of folders of JSON data on an S3 bucket. Other queries on those S3 folders work without issue.

WITH ordered_basket_execs AS (
  SELECT e1.dir0 AS "BasketDate",
    CASE
      WHEN foMisc9.message.body.FOMisc9 IS NOT NULL THEN foMisc9.message.body.FOMisc9
      ELSE e1.message.body.BasketName
    END AS BasketName,
    e1.message.body.OrderId, e1.message.body.ClOrdID, e1.message.body.TransactTime,
    e1.message.body.NoAllocs[0].AllocAccount AS "AllocAccount",
    e1.message.body.SecurityDesc, e1.message.body.OrdType[1] AS "OrdType", e1.message.body.TimeInForce[1] AS "TimeInForce",
    newOrder.message.body.Text AS "Text", e1.message.body.CumQty, e1.message.body.AvgPx, e1.message.body.Price,
    e1.message.body.Currency, e1.message.body.OrderQty, e1.message.body.OrdStatus[0] AS "OrdStatus",
    ROW_NUMBER() OVER (PARTITION BY e1.message.body.OrderId ORDER BY RIGHT(e1.message.body.ExecID, 20) DESC) AS rn
  FROM s3Bucket.execReportsByDateGrouped e1
  LEFT JOIN s3Bucket.execReportsByDateGrouped foMisc9
    ON e1.message.body.OrderId = foMisc9.message.body.OrderId
    AND foMisc9.message.body.FOMisc9 IS NOT NULL
  LEFT JOIN s3Bucket.ordersByDateGrouped newOrder
    ON e1.message.body.ClOrdId = newOrder.message.body.ClOrdId
  WHERE TO_DATE(e1.dir0, 'YYYYMMDD', 0) >= CURRENT_DATE - INTERVAL '5' DAY
    AND ((e1.message.body.BasketName IS NOT NULL) OR (foMisc9.message.body.FOMisc9 IS NOT NULL))
)
SELECT * FROM ordered_basket_execs
WHERE rn = 1 AND OrdStatus != '4'

@mildlyth I remember seeing a similar issue on JSON files. Can you please look at server.out and see if there is an error file path printed like /some/path/hs_err_<pid>? If yes, can you please attach server.out and that error file?

@balaji.ramaswamy thanks for the reply. No sign of a pid file. I've attached server.out and server.log. In this scenario I started the server at 13:53 and attempted to run a select on an S3 JSON data source (just a select * on the raw dataset); the query failed with the same error.

@mildlyth I do see the below

Dremio is exiting. Node lost its master status.
pure virtual method called
terminate called without an active exception

Can you please send me the below

  • GC logs when the executor went unavailable
  • /var/log/messages covering the time this query ran
  • server.log as the one attached only starts at 13:55 while Dremio went down at 13:53
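
Since the attached logs keep missing the relevant window, a quick pre-flight check before attaching can help. This is a minimal sketch; the paths are hypothetical and need adjusting for the actual install:

```python
from pathlib import Path

# Hypothetical paths; adjust for your install layout.
requested = [
    Path("/var/log/dremio/server.gc"),   # GC logs
    Path("/var/log/messages"),           # kernel/system log
    Path("/var/log/dremio/server.log"),  # Dremio server log
]

def report(paths):
    """Return (path, exists, size_bytes) for each requested artifact."""
    return [
        (str(p), p.exists(), p.stat().st_size if p.exists() else 0)
        for p in paths
    ]

for name, found, size in report(requested):
    print(f"{name}: {'found' if found else 'MISSING'} ({size} bytes)")
```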

server.log first entry

2023-12-31 13:55:28,977 [e3] DEBUG c.d.scheduler.cfs.MonitoredThreadMQ - Thread 0 got a request from thread 3 asking for work

Dremio start time

Sun Dec 31 13:53:55 UTC 2023 Starting dremio on dremiohost

I want to rule out GC pause, JVM crash and/or oom-kill
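
For the oom-kill part specifically, the kernel logs lines containing "oom-killer" and "Out of memory: Kill process" when it kills a process, so /var/log/messages (or dmesg) can be scanned for them. A minimal sketch, with invented sample log lines for illustration:

```python
import re

# Invented sample of /var/log/messages content; in practice the real file
# would be read instead of this string.
sample = """\
Jan  2 10:46:50 dremiohost kernel: java invoked oom-killer: gfp_mask=0x201da
Jan  2 10:46:50 dremiohost kernel: Out of memory: Kill process 4321 (java) score 912 or sacrifice child
Jan  2 10:46:51 dremiohost systemd: dremio.service: main process exited
"""

def find_oom_kills(text):
    """Return lines indicating the kernel OOM killer fired."""
    pattern = re.compile(r"oom-killer|Out of memory: Kill process", re.IGNORECASE)
    return [line for line in text.splitlines() if pattern.search(line)]

for line in find_oom_kills(sample):
    print(line)
```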


Hi @balaji.ramaswamy, I've recreated from scratch. The server starts at 10:44 and I attempt to run the reflection at 10:46:10; it fails after 1:22. I do not have permissions to access /var/log/messages. UPDATE: I have attached the relevant section of the messages file.

@mildlyth In the current logs you sent,

  • server.out does not have any errors
  • server.log has only one DEBUG message, coming from turning on c.d.s.namespace.NamespaceServiceImpl. Do you have any other DEBUG loggers turned on? Can we first disable all DEBUG logging?
  • server.gc does indicate a problem
2024-01-02T10:46:53.503+0000: 163.556: [Full GC (Allocation Failure) 2024-01-02T10:46:53.503+0000: 163.556: [Class Histogram (before full gc): 
 num     #instances         #bytes  class name
   1:     366713428     5867414848  com.dremio.exec.expr.fn.FunctionErrorContextImpl
   2:        287897     5663089768  [Ljava.lang.Object;
   3:         86574      139835048  [I
   4:        568877       53492872  [B
   5:        294956       28379104  [C
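
A quick back-of-the-envelope check on that histogram (figures copied from it) shows why allocation is failing: the FunctionErrorContextImpl entries alone account for roughly 5.5 GiB of heap.

```python
# Figures copied from the class histogram above.
instances = 366_713_428
bytes_total = 5_867_414_848

per_instance = bytes_total / instances
gib_total = bytes_total / 2**30
print(f"~{per_instance:.0f} bytes per instance, {gib_total:.2f} GiB total")
# prints "~16 bytes per instance, 5.46 GiB total"
```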

There is a 30-second Full GC pause, which will cause the executor to restart.
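
Pauses like this can be spotted mechanically in server.gc. A minimal sketch that scans a classic (pre-unified) HotSpot GC log for pauses above a threshold; the log lines below are invented, and the trailing ", NN secs]" suffix is an assumption about the log format in use:

```python
import re

# Invented GC log excerpt; classic HotSpot GC logging reports the total
# pause as ", NN secs]" at the end of each event (assumed here).
gc_log = """\
2024-01-02T10:45:10.101+0000: 60.154: [GC (Allocation Failure) 4096M->512M(8192M), 0.2101230 secs]
2024-01-02T10:46:53.503+0000: 163.556: [Full GC (Allocation Failure) 8100M->8050M(8192M), 30.0412345 secs]
"""

PAUSE = re.compile(r"\[(Full GC|GC) \(.*?\).*?, ([0-9.]+) secs\]")

def long_pauses(text, threshold_secs=5.0):
    """Return (event, seconds) for every GC pause at or above the threshold."""
    hits = []
    for line in text.splitlines():
        m = PAUSE.search(line)
        if m and float(m.group(2)) >= threshold_secs:
            hits.append((m.group(1), float(m.group(2))))
    return hits

print(long_pauses(gc_log))  # only the 30 s Full GC exceeds the default threshold
```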

What version of Dremio is this?

Apologies for the delay. We were running dremio-community-20.1.0-202202061055110045-36733c65. We installed a new version, which has not had this issue, but I am now experiencing different problems (new post raised!).


Thanks for the update. This issue is resolved in a later version; I will answer your new post shortly.