feat(spark): implement column pruning for incremental queries #17514
+125
−32
Describe the issue this Pull Request addresses
This PR implements column pruning optimization for incremental queries by migrating
IncrementalRelationV1 and IncrementalRelationV2 from the TableScan interface to the PrunedScan interface. Currently, incremental queries read all columns from the source files even when only a subset is needed, leading to unnecessary I/O and memory overhead.
Summary and Changelog
Users gain improved query performance for incremental reads by only reading required columns from source files instead of the full schema.
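For context, a minimal sketch of the two Spark SQL sources interfaces involved in the migration (the interfaces are from org.apache.spark.sql.sources; the relation class below is illustrative only, not the actual Hudi implementation):

```scala
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{Row, SQLContext}
import org.apache.spark.sql.sources.{BaseRelation, PrunedScan}
import org.apache.spark.sql.types.StructType

// TableScan:  def buildScan(): RDD[Row]                               -> must read every column
// PrunedScan: def buildScan(requiredColumns: Array[String]): RDD[Row] -> reads only what the query needs
class ExamplePrunedRelation(val sqlContext: SQLContext, val schema: StructType)
    extends BaseRelation with PrunedScan {

  // Spark passes in the columns the query actually references; the relation
  // is expected to return rows containing exactly those columns, in order.
  override def buildScan(requiredColumns: Array[String]): RDD[Row] = {
    // Read only `requiredColumns` (plus any fields the source needs
    // internally, such as _hoodie_commit_time) from the data files.
    ???
  }
}
```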
Changes:
- Migrated IncrementalRelationV1 and IncrementalRelationV2 from TableScan to PrunedScan
- Added buildScan(requiredColumns: Array[String]) to accept column pruning
- Added getPrunedSchema() to build a schema with the required columns plus mandatory fields (_hoodie_commit_time and partition columns)
- Added filterRequiredColumnsFromDF() to remove auxiliary columns from the final result
- Updated HoodieStreamSourceV1 and HoodieStreamSourceV2 to pass required columns
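A minimal sketch of the schema-pruning idea behind getPrunedSchema(); the helper name, parameters, and field handling below are illustrative assumptions rather than the exact code in this PR:

```scala
import org.apache.spark.sql.types.StructType

// Keep only the columns the query asks for, plus fields the incremental
// path always needs: the commit-time meta column and the partition columns.
// The auxiliary columns added here are later dropped from the final result,
// which is the role filterRequiredColumnsFromDF() plays in this PR.
def prunedSchema(
    fullSchema: StructType,
    requiredColumns: Array[String],
    partitionColumns: Seq[String]): StructType = {
  val keep = ((requiredColumns ++ partitionColumns) :+ "_hoodie_commit_time").toSet
  StructType(fullSchema.fields.filter(f => keep.contains(f.name)))
}
```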
Impact
Performance improvement - Incremental queries will read fewer columns from Parquet files, reducing I/O and improving query latency, especially for wide tables where only a few columns are selected.
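As an illustration of that scenario (assuming an existing SparkSession `spark`; the table path, column names, and begin instant are placeholders, while the read options are the standard Hudi incremental-query options):

```scala
// Incremental query that touches only two user columns of a wide table.
// With PrunedScan, only these columns (plus mandatory fields such as
// _hoodie_commit_time and partition columns) are read from the Parquet files.
val incremental = spark.read.format("hudi")
  .option("hoodie.datasource.query.type", "incremental")
  .option("hoodie.datasource.read.begin.instanttime", "20240101000000")
  .load("/tmp/hudi_trips_table")
  .select("uuid", "fare")
```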
Risk Level
Low - This is an optimization that maintains backward compatibility. The changes only affect the incremental query path and fall back to reading all necessary fields if required.
Documentation Update
None - This is an internal optimization with no user-facing configuration changes or API modifications.
Contributor's checklist