Out_of_memory error

The server has enough memory, CPU, and disk, but the query still fails with an out-of-memory error.
The error message is as follows:

OUT_OF_MEMORY ERROR: One or more nodes ran out of memory while executing the query.

Failure trying to allocate initial reservation for Allocator. Attempted to allocate 2000000 bytes and received an outcome of FAILED_PARENT.
Fragment 2:1

[Error Id: 013000b3-31c9-41c8-8239-6aee6a4b50b1 on scq02-304j08u2930-dp-app-48-35-msxf.host:-1]

(org.apache.arrow.memory.OutOfMemoryException) Failure trying to allocate initial reservation for Allocator. Attempted to allocate 2000000 bytes and received an outcome of FAILED_PARENT.
org.apache.arrow.memory.Accountant.&lt;init&gt;():81
org.apache.arrow.memory.BaseAllocator.&lt;init&gt;():81
org.apache.arrow.memory.BaseAllocator.&lt;init&gt;():72
org.apache.arrow.memory.ChildAllocator.&lt;init&gt;():48
org.apache.arrow.memory.BaseAllocator.newChildAllocator():332
com.dremio.sabot.exec.QueriesClerk$FragmentTicket.newChildAllocator():169
com.dremio.sabot.exec.fragment.FragmentExecutorBuilder.build():147
com.dremio.sabot.exec.rpc.CoordToExecHandlerImpl.startFragment():59
com.dremio.sabot.exec.rpc.CoordToExecHandlerImpl.startFragments():50
com.dremio.sabot.rpc.CoordExecService$CoordExecProtocol.handle():183
com.dremio.services.fabric.FabricMessageHandler.handle():76
com.dremio.services.fabric.FabricServer.handle():85
com.dremio.services.fabric.FabricServer.handle():39
com.dremio.exec.rpc.RpcBus$RequestEvent.run():411
com.dremio.common.SerializedExecutor$RunnableProcessor.run():87
com.dremio.exec.rpc.RpcBus$SameExecutor.execute():277
com.dremio.common.SerializedExecutor.execute():121
com.dremio.exec.rpc.RpcBus$InboundHandler.decode():311
com.dremio.exec.rpc.RpcBus$InboundHandler.decode():282
io.netty.handler.codec.MessageToMessageDecoder.channelRead():88
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead():356
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead():342
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead():335
io.netty.handler.timeout.IdleStateHandler.channelRead():286
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead():356
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead():342
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead():335
io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead():312
io.netty.handler.codec.ByteToMessageDecoder.channelRead():286
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead():356
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead():342
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead():335
io.netty.channel.ChannelInboundHandlerAdapter.channelRead():86
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead():356
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead():342
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead():335
io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead():1294
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead():356
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead():342
io.netty.channel.DefaultChannelPipeline.fireChannelRead():911
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read():131
io.netty.channel.nio.NioEventLoop.processSelectedKey():645
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized():580
io.netty.channel.nio.NioEventLoop.processSelectedKeys():497
io.netty.channel.nio.NioEventLoop.run():459
io.netty.util.concurrent.SingleThreadEventExecutor$2.run():131
java.lang.Thread.run():745
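
For context, this exception comes from Arrow's hierarchical allocators: when Dremio builds a fragment executor it creates a child allocator with an initial reservation, and FAILED_PARENT means the parent (node-level) allocator could not cover that reservation. Below is a minimal sketch of the same failure mode using the Arrow Java allocator API directly; the limits are purely illustrative and are not Dremio's actual settings.

import org.apache.arrow.memory.BufferAllocator;
import org.apache.arrow.memory.OutOfMemoryException;
import org.apache.arrow.memory.RootAllocator;

public class ChildReservationDemo {
  public static void main(String[] args) {
    // Root allocator capped at 1 MB; in Dremio the real cap comes from the node's memory config.
    try (BufferAllocator root = new RootAllocator(1_000_000)) {
      // Requesting a 2,000,000-byte initial reservation exceeds what the parent can grant,
      // so newChildAllocator() throws OutOfMemoryException with a FAILED_PARENT outcome.
      try (BufferAllocator child = root.newChildAllocator("fragment", 2_000_000, 4_000_000)) {
        // Never reached.
      } catch (OutOfMemoryException e) {
        // Prints a reservation-failure message similar to the one above.
        System.out.println(e.getMessage());
      }
    }
  }
}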

Thanks.

@JoyJava The in-memory representation of data is often larger than expected, and larger than its footprint on disk. Could you retry the query with more cluster memory to confirm this is not a scale issue?
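
If this is a standalone deployment, one common way to give the nodes more memory is to raise the limits in conf/dremio-env on each executor and restart the service. A rough sketch, assuming the default install layout; the values below are illustrative only and should be sized to your hosts:

# conf/dremio-env on each executor node (example values only)
DREMIO_MAX_HEAP_MEMORY_SIZE_MB=8192
DREMIO_MAX_DIRECT_MEMORY_SIZE_MB=32768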

After restarting the server, running the same SQL returns the correct result.