DiskErrorException when inserting into AWS Glue Iceberg table

Hi.
I am getting the following error:
DiskErrorException: Could not find any valid local directory for s3ablock-0001-
when trying to insert into an AWS Glue Iceberg table (not created by Dremio) from another Glue table:

INSERT INTO Iceberg(columns) SELECT columns FROM GlueTable WHERE ..

7c68d260-f8bc-42ff-b95e-6b9cd1ccdce0.zip (92.5 KB)

I get the same error when running DELETE FROM Iceberg WHERE col1=.. AND col2=..

UPDATE: I found a similar query inserting into an Iceberg table that I ran successfully in Dremio 2 weeks ago, but it now fails with the same error.

Build
25.0.7-202407181421320869-2632b04f
Edition
AWS Edition (activated)

It turns out that any Iceberg DML statement (INSERT, TRUNCATE, DELETE, OPTIMIZE, etc.) fails with this error. This was definitely not the case a few weeks ago, when I was able to insert into, delete from, etc. any Iceberg table.

After googling the error text, I found that the error may be caused by Dremio being unable to write to a local tmp folder, due to either 1) not enough space or 2) missing permissions.
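For context, as far as I understand the s3a connector (this is my reading of the Hadoop defaults, not anything Dremio-specific): each block of an upload is buffered to a local directory before being sent to S3, and that directory is chosen via the Hadoop property fs.s3a.buffer.dir, which by default falls under hadoop.tmp.dir, i.e. somewhere in /tmp:

fs.s3a.buffer.dir: ${hadoop.tmp.dir}/s3a
hive.metastore.warehouse.dir aside, hadoop.tmp.dir: /tmp/hadoop-${user.name}

So "Could not find any valid local directory for s3ablock-0001-" would mean that none of the configured buffer directories was writable with enough free space.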

My first guess was not enough space on the executor node (it only has an 8 GB volume), so I restarted the engine. After the engine restarted and the node instance was recreated:

  1. OPTIMIZE statements started to run fine
  2. TRUNCATE statements still fail, BUT now with slightly different symptoms:
    the same DiskErrorException: Could not find any valid local directory for s3ablock-0001- error, BUT
    now it seems to be raised on the coordinator node (Failure node: 10.0.1… = coordinator address), with verbose stack trace errors:
Unexpected exception during fragment initialization: java.io.UncheckedIOException: Failed to create file: s3://l...../metadata/4fcce574-433c-4651-a38c-550467ec1464-m0.avro
....
Caused By (org.apache.hadoop.util.DiskChecker.DiskErrorException) Could not find any valid local directory for s3ablock-0001-

So it seems we ran out of space on the coordinator node?? Judging by the stack trace, the failure happens while writing the new metadata .avro file, which is done from the coordinator, and that would explain why the error moved there.
But also, how could we have run out of space on the executor node in the first place?

Hello.
UPDATE: Actually, DELETE and the other DML statements run fine; only TRUNCATE fails with this error.
Here are two profiles
TRUNCATE fails:
17129c30-f3ce-4146-875c-efff73465201.zip (9.0 KB)
DELETE runs:
1d987ff9-7924-4c8e-95d1-9e563b5f19ed.zip (31.5 KB)

Also, I checked the coordinator: it has enough free memory. As I understand it, this error is connected to the s3a buffer used for writing to S3?
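One thing I realized only later (assuming the buffer directory really does default to somewhere under /tmp, as above): what matters for this error is free disk space in the s3a buffer directory, not free memory, so checking with something like df -h /tmp on the node would be more telling.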

UPDATE: The error went away after I updated Glue data source settings → Advanced Options → Connection properties with:
hive.metastore.warehouse.dir: s3://....
The reason I did so: CREATE TABLE .. also failed with the same error (DiskErrorException), so I read https://docs.dremio.com/24.3.x/reference/sql/commands/apache-iceberg-tables/apache-iceberg-create/#location-in-amazon-glue-data-sources
and found this setting. I cannot explain why adding it helped, but CREATE/TRUNCATE etc. now work fine.
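In case it is useful to others: if I read the linked docs right, an alternative to the source-level property is to give the table an explicit S3 location at creation time. A minimal sketch (the source, schema, table, and bucket names here are placeholders):

CREATE TABLE glue.db.my_table (id INT, col1 VARCHAR)
LOCATION 's3://my-bucket/warehouse/my_table'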

I had added this setting once before, but removed it after a while.

@vladislav-stolyarov When this query fails, can you check if /tmp on the coordinator is full?