Why does adding more partition columns when writing to an Iceberg table cause GC / out-of-memory errors?

When writing, partitioning by a single column works fine, and it's even smoother if I don't partition at all (see the unpartitioned variant right below).
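For reference, the unpartitioned write that causes no problems is just the same call without .partitionedBy():

(df_pos_sapo_order_orders.writeTo("pos_sapo_order.orders")
    .createOrReplace()                  # no .partitionedBy(...) at all
)

The single-column version, which also works fine, is: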

from pyspark.sql import functions as f   # days() and bucket() partition transforms

(df_pos_sapo_order_orders.writeTo("pos_sapo_order.orders")
    .partitionedBy(
        f.days("ModifiedOn")            # partition by day of ModifiedOn
        # f.days("CreatedOn"),          # partition by day of CreatedOn
        # f.bucket(20, "TenantId"),     # hash partition by TenantId
        # f.bucket(100, "Id")           # hash partition by Id
    )
    .createOrReplace()
)

But as soon as I add more partition columns, for example just adding days("CreatedOn"), I immediately get an out-of-memory error (executors die, and so on).

(df_pos_sapo_order_orders.writeTo("pos_sapo_order.orders")
    .partitionedBy(
        f.days("ModifiedOn"),           # partition by day of ModifiedOn
        f.days("CreatedOn")             # partition by day of CreatedOn
        # f.bucket(20, "TenantId"),     # hash partition by TenantId
        # f.bucket(100, "Id")           # hash partition by Id
    )
    .createOrReplace()
)

The same thing happens, of course, if I add more partition columns such as bucket(…).
The odd part is that altering the table afterwards to add the partition fields works fine, but I still hit the same problem when writing additional data into the table after re-partitioning (see the append sketch after the ALTER statements below).

spark.sql("ALTER TABLE pos_sapo_order.orders ADD PARTITION FIELD day(ModifiedOn)")
spark.sql("ALTER TABLE pos_sapo_order.orders ADD PARTITION FIELD day(CreatedOn)")
spark.sql("ALTER TABLE pos_sapo_order.orders ADD PARTITION FIELD bucket(100, TenantId)")

Why? I have already increased executor memory to close to the maximum our cluster allows, and I cache the DataFrame before writing so that recomputation overhead shouldn't be a factor (roughly as sketched below).
Are there any tips or tricks here? Why does adding partition columns cause so much trouble when writing to the Iceberg table?
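For context, this is roughly the setup before the write; the memory values and app name are placeholders, not our exact configuration:

from pyspark.sql import SparkSession

spark = (SparkSession.builder
    .appName("pos_sapo_order_ingest")               # placeholder app name
    .config("spark.executor.memory", "24g")         # raised to near the cluster limit
    .config("spark.executor.memoryOverhead", "4g")  # placeholder value
    .getOrCreate())

df_pos_sapo_order_orders.cache()    # cache before writing to avoid recomputation
df_pos_sapo_order_orders.count()    # materialize the cache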