ArrayIndexOutOfBoundsException when querying Iceberg table with null values

I have an Iceberg table created with Iceberg 0.14.0 and Spark 3.3.0.
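
For context, here is a minimal sketch of how such a table might be created from spark-shell. The catalog, table, and column names are hypothetical placeholders, not my actual schema:

```scala
// spark-shell 3.3.0 with the Iceberg 0.14.0 runtime, e.g.:
//   spark-shell --packages org.apache.iceberg:iceberg-spark-runtime-3.3_2.12:0.14.0 \
//     --conf spark.sql.catalog.demo=org.apache.iceberg.spark.SparkCatalog \
//     --conf spark.sql.catalog.demo.type=hadoop \
//     --conf spark.sql.catalog.demo.warehouse=/tmp/warehouse

// One fully populated column and one column holding only nulls; the
// all-null column yields Parquet pages with no min/max statistics.
spark.sql("CREATE TABLE demo.db.events (id BIGINT, amount BIGINT) USING iceberg")
spark.sql("INSERT INTO demo.db.events VALUES (1, NULL), (2, NULL)")

// Filtering on the null-bearing column works from spark-shell; the
// equivalent query submitted through Dremio fails as shown below.
spark.sql("SELECT * FROM demo.db.events WHERE amount > 10").show()
```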

When querying the table in Dremio with a WHERE condition on a column that contains null values, the query fails with an ArrayIndexOutOfBoundsException:

coordinator-0_1  | 2022-09-07 04:20:49,444 [Fabric-RPC-Offload4] INFO  c.d.exec.work.foreman.AttemptManager - 1ce7e3df-8350-c339-56b2-eb3e03fc8500: State change requested RUNNING --> FAILED, Exception com.dremio.common.exceptions.UserRemoteException: DATA_READ ERROR: Failed to decode column <colname>::int64
coordinator-0_1  |
coordinator-0_1  | Total records decoded and sent upstream 0
coordinator-0_1  | PLAIN encoded pages read 0
coordinator-0_1  | DICTIONARY encoded pages read 0
coordinator-0_1  | Total records decoded in current page and sent upstream after passing filter 0
coordinator-0_1  | File path /<bucket>/iceberg/<table>/data/<partition>/00000-882-aa04bc6d-55c6-40c1-b725-718bf0bab390-00001.parquet
coordinator-0_1  | Rowgroup index 0
coordinator-0_1  | SqlOperatorImpl TABLE_FUNCTION
coordinator-0_1  | Location 1:11:6
coordinator-0_1  | Fragment 1:0
coordinator-0_1  |
coordinator-0_1  | [Error Id: ccc65246-9c70-451a-adfe-29972cb2957b on dev-dremio-1:0]
coordinator-0_1  |
coordinator-0_1  |   (java.lang.ArrayIndexOutOfBoundsException) Index 7 out of bounds for length 0
coordinator-0_1  |     org.apache.parquet.bytes.BytesUtils.bytesToLong():313
coordinator-0_1  |     org.apache.parquet.column.statistics.LongStatistics.setMinMaxFromBytes():75
coordinator-0_1  |     com.dremio.parquet.reader.metadata.LazyPageStatsProvider.getPageStats():47
coordinator-0_1  |     com.dremio.parquet.reader.metadata.PageMetadata.getPageStats():86
coordinator-0_1  |     com.dremio.parquet.pages.FilteringPageReaderIterator.next():35
coordinator-0_1  |     com.dremio.parquet.pages.MemoizingPageIterator.next():32
coordinator-0_1  |     com.dremio.parquet.pages.PageIterator.nextPage():108
coordinator-0_1  |     com.dremio.parquet.pages.PageIterator.hasNextPage():63
coordinator-0_1  |     com.dremio.parquet.reader.column.generics.BigIntDeltaReader.evalNextBatch():158
coordinator-0_1  |     com.dremio.parquet.reader.FilteringRowGroupReader.eval():46
coordinator-0_1  |     com.dremio.extra.exec.store.dfs.parquet.ParquetVectorizedReader.next():786
coordinator-0_1  |     com.dremio.exec.store.parquet.UnifiedParquetReader.readEnsuringReadersReturnSameNumberOfRecords():383
coordinator-0_1  |     com.dremio.exec.store.parquet.UnifiedParquetReader.next():366
coordinator-0_1  |     com.dremio.exec.store.parquet.TransactionalTableParquetReader.next():233
coordinator-0_1  |     com.dremio.exec.store.parquet.ParquetCoercionReader.next():125
coordinator-0_1  |     com.dremio.exec.store.parquet.ScanTableFunction.processRow():186
coordinator-0_1  |     com.dremio.sabot.op.tablefunction.TableFunctionOperator.outputData():103
coordinator-0_1  |     com.dremio.sabot.driver.SmartOp$SmartSingleInput.outputData():193
coordinator-0_1  |     com.dremio.sabot.driver.StraightPipe.pump():56
coordinator-0_1  |     com.dremio.sabot.driver.Pipeline.doPump():111
coordinator-0_1  |     com.dremio.sabot.driver.Pipeline.pumpOnce():101
coordinator-0_1  |     com.dremio.sabot.exec.fragment.FragmentExecutor$DoAsPumper.run():418
coordinator-0_1  |     com.dremio.sabot.exec.fragment.FragmentExecutor.run():355
coordinator-0_1  |     com.dremio.sabot.exec.fragment.FragmentExecutor.access$1600():97
coordinator-0_1  |     com.dremio.sabot.exec.fragment.FragmentExecutor$AsyncTaskImpl.run():820
coordinator-0_1  |     com.dremio.sabot.task.AsyncTaskWrapper.run():120
coordinator-0_1  |     com.dremio.sabot.task.slicing.SlicingThread.mainExecutionLoop():247
coordinator-0_1  |     com.dremio.sabot.task.slicing.SlicingThread.run():171
coordinator-0_1  |
coordinator-0_1  | 2022-09-07 04:20:49,497 [async-query-logger2] INFO  query.logger - Query: 1ce7e3df-8350-c339-56b2-eb3e03fc8500; outcome: FAILED

This seems to be a bug in the handling of missing page statistics: the stack trace shows Dremio's page-filtering path (LazyPageStatsProvider.getPageStats) handing empty min/max byte arrays to Parquet's LongStatistics.setMinMaxFromBytes, which then fails on index 7 of a zero-length array inside BytesUtils.bytesToLong. The same query works in a spark-shell.
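
For illustration, the failure mode can be reproduced in isolation against parquet-mr: setMinMaxFromBytes passes the raw min/max bytes to BytesUtils.bytesToLong, which unconditionally assembles a long from eight bytes, so the empty min/max arrays written for an all-null page blow up exactly as in the trace. A minimal sketch:

```scala
import org.apache.parquet.column.statistics.LongStatistics

// Empty min/max byte arrays, as carried by the statistics of a page
// that contains no non-null values.
val emptyMin = Array.empty[Byte]
val emptyMax = Array.empty[Byte]

// setMinMaxFromBytes delegates to BytesUtils.bytesToLong, which reads
// bytes(7) first while assembling the little-endian long, producing:
// java.lang.ArrayIndexOutOfBoundsException: Index 7 out of bounds for length 0
new LongStatistics().setMinMaxFromBytes(emptyMin, emptyMax)
```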

Filtering on columns that do not have any null values works.
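
If it helps with triage, the statistics of the offending data file can be inspected directly with parquet-mr. This reads the column-chunk statistics from the file footer (a proxy for the page-header statistics that LazyPageStatsProvider reads); an all-null chunk reports a null count equal to its value count and no min/max. The file path below is a placeholder for the one in the error above:

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path
import org.apache.parquet.hadoop.ParquetFileReader
import org.apache.parquet.hadoop.util.HadoopInputFile
import scala.collection.JavaConverters._

// Placeholder path: substitute the data file from the error message.
val file = HadoopInputFile.fromPath(
  new Path("/bucket/iceberg/table/data/partition/part-00000.parquet"),
  new Configuration())

val reader = ParquetFileReader.open(file)
try {
  for {
    block <- reader.getFooter.getBlocks.asScala
    col   <- block.getColumns.asScala
  } {
    val stats = col.getStatistics
    // An all-null chunk shows nulls == value count and hasMinMax = false.
    println(s"${col.getPath}: nulls=${stats.getNumNulls}, hasMinMax=${stats.hasNonNullValue}")
  }
} finally {
  reader.close()
}
```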