3. I can see a positional-delete file referenced in the table info -> ‘total-delete-files’: ‘1’, ‘total-position-deletes’: ‘1’, ‘total-equality-deletes’: ‘0’ (see the snapshot query sketch after this list).
4. Spark queries return the correct result.
5. Dremio returns all rows without applying the delete file.
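For reference, those ‘total-delete-files’ / ‘total-position-deletes’ counters come from the current snapshot's summary, which can be inspected from Spark. A minimal PySpark sketch, where the catalog, namespace, and table names are placeholders rather than the actual table:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Inspect the snapshot summaries of the Iceberg table; the summary map
# carries the 'total-position-deletes' / 'total-equality-deletes' counters.
# "my_catalog.db.tbl" is a placeholder for the real table identifier.
spark.sql("""
    SELECT committed_at, snapshot_id, operation, summary
    FROM my_catalog.db.tbl.snapshots
    ORDER BY committed_at
""").show(truncate=False)
```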
Dremio version. Support for reading tables with positional deletes was added in version 23. This includes v2 tables created and modified in Spark with merge-on-read enabled.
Spark version.
Your Spark configuration for the Iceberg catalog, excluding any secrets. These would be all the spark.sql.catalog.* properties you have configured for your S3 Iceberg catalog, excluding access keys, secret keys, and other credentials; an illustrative sketch follows.
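For illustration only, here is a hedged sketch of the kind of spark.sql.catalog.* properties meant here, assuming an S3-backed catalog; the catalog name, warehouse path, and endpoint are placeholders, and your setup may use a different catalog implementation (Hive, Glue, REST, Nessie, etc.):

```python
from pyspark.sql import SparkSession

# Example only: an S3-backed Iceberg catalog named "my_catalog" using the
# Hadoop catalog implementation. Names and paths are placeholders;
# access keys/secrets are intentionally omitted, as requested above.
spark = (
    SparkSession.builder
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config("spark.sql.catalog.my_catalog", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.my_catalog.type", "hadoop")
    .config("spark.sql.catalog.my_catalog.warehouse", "s3a://my-bucket/warehouse")
    # Depending on the setup, there may also be properties such as
    # "spark.sql.catalog.my_catalog.io-impl" or a custom S3 endpoint.
    .getOrCreate()
)
```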
If you can reproduce the issue on a small Iceberg table that you can share, zipping up the entire table (data + metadata) and sharing it with us would help as well.
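In case it helps with putting a small reproduction together, here is a hedged PySpark sketch that creates a tiny v2 merge-on-read table and deletes one row so that a positional delete file is written; "my_catalog.db.repro" and the column names are placeholders:

```python
# Assumes a SparkSession with the Iceberg catalog configured as above.
spark.sql("""
    CREATE TABLE my_catalog.db.repro (id INT, name STRING)
    USING iceberg
    TBLPROPERTIES (
        'format-version' = '2',
        'write.delete.mode' = 'merge-on-read',
        'write.update.mode' = 'merge-on-read',
        'write.merge.mode' = 'merge-on-read'
    )
""")

spark.sql("INSERT INTO my_catalog.db.repro VALUES (1, 'a'), (2, 'b'), (3, 'c')")

# With merge-on-read enabled, this DELETE should write a positional delete
# file instead of rewriting the data file.
spark.sql("DELETE FROM my_catalog.db.repro WHERE id = 2")

# Spark should now return two rows; per the report above, Dremio returns
# all three because the delete file is not applied.
spark.sql("SELECT * FROM my_catalog.db.repro ORDER BY id").show()
```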
I suspect the problem is related to path/URI normalization differences between Dremio and Spark. Being able to look at the table metadata and delete files should help confirm whether this is the case.
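In the meantime, one hedged way to spot path/URI differences from the Spark side is to look at the file paths Iceberg has recorded: the files metadata table lists both data and delete files (content 0 = data, 1 = position deletes), and mismatched schemes such as s3:// vs s3a:// would show up in the file_path column. The table identifier below is a placeholder:

```python
# List the data and delete files the table currently references, so the
# recorded file_path values (scheme, bucket, key) can be compared with
# what Dremio resolves. content = 0 for data files, 1 for position deletes.
spark.sql("""
    SELECT content, file_path, record_count
    FROM my_catalog.db.tbl.files
""").show(truncate=False)
```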