feat: Implement shared delete file loading and caching for ArrowReader #1941


Which issue does this PR close?
What changes are included in this PR?
Currently, ArrowReader instantiates a new CachingDeleteFileLoader (and consequently a new DeleteFilter) for each FileScanTask when calling load_deletes. This
results in the DeleteFilter state being isolated per task. If multiple tasks reference the same delete file (common in positional deletes), that delete file is
re-read and re-parsed for every task, leading to significant performance overhead and redundant I/O.
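To make the cost concrete, here is a deliberately simplified, hypothetical sketch (DeleteLoader is a stand-in, not the real CachingDeleteFileLoader API). Because each task constructs its own loader, the loader's cache can never produce a hit across tasks:

```rust
// Hypothetical, simplified illustration; DeleteLoader is a stand-in for
// CachingDeleteFileLoader, not its real API.
use std::collections::HashMap;

struct DeleteLoader {
    cache: HashMap<String, Vec<u64>>, // delete file path -> parsed positions
}

impl DeleteLoader {
    fn new() -> Self {
        Self { cache: HashMap::new() }
    }

    // Reads and parses a delete file unless this loader already cached it.
    fn load(&mut self, path: &str) -> &Vec<u64> {
        self.cache.entry(path.to_string()).or_insert_with(|| {
            println!("reading and parsing {path}"); // the redundant I/O
            vec![1, 2, 3]
        })
    }
}

fn main() {
    // One loader per task, mirroring the current per-task load_deletes flow:
    // the cache is rebuilt from scratch for every task, so it never hits.
    for task in ["task-a", "task-b"] {
        let mut loader = DeleteLoader::new(); // fresh, empty cache each time
        loader.load("s3://bucket/deletes/d1.parquet");
        println!("{task} done");
    }
    // "reading and parsing ..." prints twice: the same delete file was
    // re-read and re-parsed for each task.
}
```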
Changes
- The CachingDeleteFileLoader is now created once per ArrowReader and reused for its lifetime; the DeleteFilter state is now effectively shared across all file scan tasks processed by that reader.
- Multiple tasks can now race to load the same delete file within a single ArrowReader. Therefore, if a task encounters a file that is currently being loaded by another task, it must asynchronously wait (notify.notified().await) during the loading phase to ensure the data is fully populated before ArrowReader proceeds in that context (see the sketch after this list).
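The wait/notify coordination is the subtle part of this change, so here is a minimal, self-contained sketch of the pattern the second bullet describes. Every name here (Entry, Action, get_or_load, read_and_parse, the map layout) is a hypothetical stand-in rather than the actual iceberg-rust implementation; only the use of tokio::sync::Notify with notified().await matches what the PR text states:

```rust
// Hypothetical sketch of "first task loads, later tasks wait on a Notify";
// none of these names are the real iceberg-rust types.
// Cargo.toml: tokio = { version = "1", features = ["full"] }
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use tokio::sync::Notify;

enum Entry {
    Loading(Arc<Notify>),  // some task is reading/parsing this file right now
    Loaded(Arc<Vec<u64>>), // parsed positional deletes, shared via Arc
}

// What the current task should do after inspecting the map.
enum Action {
    Done(Arc<Vec<u64>>),
    Wait(Arc<Notify>),
    Load,
}

type State = Arc<Mutex<HashMap<String, Entry>>>;

async fn read_and_parse(path: &str) -> Vec<u64> {
    println!("reading and parsing {path}"); // should print exactly once
    tokio::time::sleep(std::time::Duration::from_millis(50)).await;
    vec![1, 2, 3] // stand-in for the parsed delete positions
}

async fn get_or_load(state: State, path: &str) -> Arc<Vec<u64>> {
    loop {
        // Inspect the map under the lock; never hold the guard across .await.
        let action = {
            let mut map = state.lock().unwrap();
            match map.get(path) {
                Some(Entry::Loaded(data)) => Action::Done(data.clone()),
                Some(Entry::Loading(n)) => Action::Wait(n.clone()),
                None => {
                    // We arrived first: mark the file as in-flight, then load.
                    map.insert(path.to_string(), Entry::Loading(Arc::new(Notify::new())));
                    Action::Load
                }
            }
        };
        match action {
            Action::Done(data) => return data,
            Action::Load => {
                let data = Arc::new(read_and_parse(path).await);
                let mut map = state.lock().unwrap();
                if let Some(Entry::Loading(n)) =
                    map.insert(path.to_string(), Entry::Loaded(data.clone()))
                {
                    n.notify_waiters(); // wake every task parked below
                }
                return data;
            }
            Action::Wait(notify) => {
                // Register interest *before* the final re-check so a
                // notify_waiters() firing in between cannot be lost.
                let notified = notify.notified();
                tokio::pin!(notified);
                notified.as_mut().enable();
                if let Some(Entry::Loaded(data)) = state.lock().unwrap().get(path) {
                    return data.clone(); // loader finished while we registered
                }
                notified.await; // woken once the entry flips to Loaded
                // Loop back and pick up the Loaded entry.
            }
        }
    }
}

#[tokio::main]
async fn main() {
    let state: State = Arc::new(Mutex::new(HashMap::new()));
    let tasks: Vec<_> = (0..4)
        .map(|_| tokio::spawn(get_or_load(state.clone(), "s3://bucket/deletes/d1.parquet")))
        .collect();
    for t in tasks {
        assert_eq!(*t.await.unwrap(), vec![1, 2, 3]);
    }
}
```

The enable() call registers the waiter before the final re-check, guarding against the lost-wakeup race where the loader finishes and calls notify_waiters() between the waiter's map check and its await.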
Are these changes tested?
Added test_caching_delete_file_loader_caches_results to verify that repeated loads of the same delete file return the same shared in-memory objects.
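The test itself lives in the PR; as an illustration of the assertion style, a hypothetical, self-contained analogue (a simplified synchronous Cache stand-in with made-up data, not the real CachingDeleteFileLoader) could check that two loads of the same path return the same Arc allocation:

```rust
// Hypothetical, self-contained analogue of the test's assertion; the real
// test exercises CachingDeleteFileLoader, while Cache here is a stand-in.
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

struct Cache {
    inner: Mutex<HashMap<String, Arc<Vec<u64>>>>,
}

impl Cache {
    fn load(&self, path: &str) -> Arc<Vec<u64>> {
        self.inner
            .lock()
            .unwrap()
            .entry(path.to_string())
            .or_insert_with(|| Arc::new(vec![1, 2, 3])) // parse happens once
            .clone()
    }
}

#[test]
fn caching_loader_returns_shared_objects() {
    let cache = Cache { inner: Mutex::new(HashMap::new()) };
    let first = cache.load("d1.parquet");
    let second = cache.load("d1.parquet");
    // Pointer equality of the two Arcs proves the second load came from the
    // cache rather than a fresh read/parse of the file.
    assert!(Arc::ptr_eq(&first, &second));
}
```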