Hey there!
I’ve been using Dremio Cloud for six months without an issue. This morning I found that my reflections were not being refreshed. Taking a closer look, I see I can’t reach the data stored in my S3 object store; running a preview I get StatusRuntimeException: INTERNAL. This happened overnight, without any changes to my Dremio account or my AWS account.
I am using the Data Source credentials authentication method, and the default CTAS format is ICEBERG.
I’ve tried a couple of things, to no avail:
- changing settings in the source settings pane
- changing the default CTAS format to PARQUET (a minimal CTAS test is sketched after this list)
- recreating the format on top of the files (fails with "Unexpected error occurred")
- making the S3 bucket public
- connecting with Project Data credentials and setting up IAM policies, attaching them to the relevant roles
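For reference, the sort of minimal CTAS test I mean above (it exercises the write path using whichever default CTAS format is set) looks roughly like this; the source and folder names are placeholders, not my actual project paths:

-- "s3_source" and "scratch" are hypothetical names; substitute a real source and folder.
CREATE TABLE s3_source.scratch.ctas_smoke_test AS SELECT 1 AS id;

-- Remove the test table afterwards.
DROP TABLE s3_source.scratch.ctas_smoke_test;

Switching the default format to PARQUET didn’t change the outcome for me, which is why I don’t think the format itself is the problem.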
The next thing I’ll try is deploying a new CloudStack (new project) and testing there.
Any advice would be greatly appreciated.
Edit: I tried deploying a new CloudStack within the same organization and I’m getting the same error.
annol
May 24, 2023, 9:35am
We are also seeing this issue, but only on reflection refreshes, on both S3 and PGSQL sources. It started happening around 4am.
Error from the raw profile:
SYSTEM ERROR: StatusRuntimeException: INTERNAL
SqlOperatorImpl WRITER_COMMITTER
Location 0:0:5
SqlOperatorImpl WRITER_COMMITTER
Location 0:0:5
Fragment 0:0
[Error Id: 93d8676a-3f99-49b9-9b10-25c952019131 on 192.168.4.111:0]
(io.grpc.StatusRuntimeException) INTERNAL
io.grpc.stub.ClientCalls.toStatusRuntimeException():271
io.grpc.stub.ClientCalls.getUnchecked():252
io.grpc.stub.ClientCalls.blockingUnaryCall():165
com.dremio.services.nessie.grpc.api.TreeServiceGrpc$TreeServiceBlockingStub.getDefaultBranch():641
com.dremio.services.nessie.grpc.client.impl.GrpcApiImpl.lambda$getDefaultBranch$0():111
com.dremio.services.nessie.grpc.client.GrpcExceptionMapper.handleNessieNotFoundEx():246
com.dremio.services.nessie.grpc.client.impl.GrpcApiImpl.getDefaultBranch():108
com.dremio.plugins.NessieClientImpl.getDefaultBranch():138
com.dremio.exec.store.iceberg.nessie.IcebergNessieTableOperations.getDefaultBranch():151
com.dremio.exec.store.iceberg.nessie.IcebergNessieTableOperations.doRefresh():85
org.apache.iceberg.BaseMetastoreTableOperations.refresh():97
org.apache.iceberg.BaseMetastoreTableOperations.current():80
com.dremio.exec.store.iceberg.model.IcebergBaseCommand.beginCreateTableTransaction():127
com.dremio.exec.store.iceberg.model.IcebergTableCreationCommitter.<init>():58
com.dremio.exec.store.iceberg.model.IcebergTableCreationCommitter.<init>():63
com.dremio.exec.store.iceberg.model.IcebergBaseModel.getCreateTableCommitter():88
com.dremio.exec.store.iceberg.manifestwriter.IcebergCommitOpHelper.setup():147
com.dremio.sabot.op.writer.WriterCommitterOperator.setup():142
com.dremio.sabot.driver.SmartOp$SmartSingleInput.setup():282
com.dremio.sabot.driver.Pipe$SetupVisitor.visitSingleInput():74
com.dremio.sabot.driver.Pipe$SetupVisitor.visitSingleInput():64
com.dremio.sabot.driver.SmartOp$SmartSingleInput.accept():227
com.dremio.sabot.driver.StraightPipe.setup():103
com.dremio.sabot.driver.StraightPipe.setup():102
com.dremio.sabot.driver.StraightPipe.setup():102
com.dremio.sabot.driver.StraightPipe.setup():102
com.dremio.sabot.driver.StraightPipe.setup():102
com.dremio.sabot.driver.StraightPipe.setup():102
com.dremio.sabot.driver.Pipeline.setup():71
com.dremio.sabot.exec.fragment.FragmentExecutor.setupExecution():619
com.dremio.sabot.exec.fragment.FragmentExecutor.run():441
com.dremio.sabot.exec.fragment.FragmentExecutor.access$1700():107
com.dremio.sabot.exec.fragment.FragmentExecutor$AsyncTaskImpl.run():999
com.dremio.sabot.task.AsyncTaskWrapper.run():122
com.dremio.sabot.task.slicing.SlicingThread.mainExecutionLoop():249
com.dremio.sabot.task.slicing.SlicingThread.run():171
Hello,
Thanks for sharing these details. We are looking into the issue and will get back to you.
Best,
Payal
Dremio
Hey!
This was solved on my side by the Dremio Team internally. Much appreciated!
@klemens - Thanks for confirming!
@annol - could you also confirm whether your issue has been resolved, or are you still facing the error?
Thanks.
annol
May 24, 2023, 1:18pm
Yes! It is now working again, thank you!