[Feature] Support partition pushdown in Flink connector #196
Comments
I would like to contribute to this issue, can you assign it to me?
Currently flussAdmin.listPartitionInfos only returns the partition values and is missing the partition keys. Flink requires a map containing both partition keys and values, so this must be blocked by #195.
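For context, a minimal sketch of the shape Flink expects: one key-to-value map per partition. The partition key/value data below is hypothetical, and obtaining the keys from table metadata is an assumption for illustration, not the actual Fluss client API.

import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class PartitionSpecExample {

    // Flink's partition pushdown works with partition "specs": one
    // Map<String, String> per partition, mapping partition key -> value,
    // e.g. {region=eu, date=2024-01-01}.
    static Map<String, String> toSpec(List<String> partitionKeys, List<String> partitionValues) {
        Map<String, String> spec = new LinkedHashMap<>();
        for (int i = 0; i < partitionKeys.size(); i++) {
            spec.put(partitionKeys.get(i), partitionValues.get(i));
        }
        return spec;
    }

    public static void main(String[] args) {
        // Hypothetical data: listPartitionInfos currently yields only the values...
        List<String> valuesOnly = Arrays.asList("eu", "2024-01-01");
        // ...so the keys have to come from somewhere else (e.g. table metadata),
        // which is why this depends on #195.
        List<String> partitionKeys = Arrays.asList("region", "date");
        System.out.println(toSpec(partitionKeys, valuesOnly)); // {region=eu, date=2024-01-01}
    }
}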
@Alibaba-HZY you can get the partition keys from the […]. For multiple partition pushdown, yes, we need to extend […].
@wuchong Is partition pushdown executed only in batch mode? If so, batch mode currently only supports datalake-enabled tables or point queries on primary keys.
After the discussion with @luoyuxia […]
@Alibaba-HZY Yes. For batch mode, we have to wait for #40, and for streaming mode, we need to leverage […]
Search before asking
Motivation
Partition pushdown is a performance optimization technique that allows the query engine to filter out unnecessary data early in the query processing pipeline. By pushing down partition filters, we can significantly reduce the amount of data transferred and processed, leading to improved query performance and resource efficiency.
Consider a scenario where a user queries a large dataset partitioned by region and date. Without partition pushdown, the entire dataset needs to be scanned, which is inefficient. With partition pushdown, only the relevant partitions (e.g., data for a specific region and date range) are scanned, resulting in faster query execution and reduced resource usage.

Solution
- FlinkTableSource implements SupportsPartitionPushDown to push down partitions (see the sketch below).
- FlinkSourceEnumerator only discovers buckets for the specific partitions to read.
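A minimal sketch of the source-side change, using Flink's SupportsPartitionPushDown ability interface. This is an assumption-laden illustration, not the actual implementation: the real FlinkTableSource also implements ScanTableSource and other ability interfaces, and how it queries Fluss for partitions is omitted here.

import java.util.List;
import java.util.Map;
import java.util.Optional;

import org.apache.flink.table.connector.source.abilities.SupportsPartitionPushDown;

// Sketch only: ScanTableSource and the other interfaces of FlinkTableSource are
// left out to keep the example self-contained.
public class PartitionPushDownSketch implements SupportsPartitionPushDown {

    // Partitions that remain after the planner applied the partition filters;
    // null means "read all partitions".
    private List<Map<String, String>> remainingPartitions;

    @Override
    public Optional<List<Map<String, String>>> listPartitions() {
        // Would ask the Fluss cluster for the existing partitions and convert each one
        // into a key -> value spec (see the mapping sketch above); returning
        // Optional.empty() lets the planner derive the partitions from the catalog instead.
        return Optional.empty();
    }

    @Override
    public void applyPartitions(List<Map<String, String>> remainingPartitions) {
        // The planner calls this with the partitions that survived the filter.
        // The FlinkSourceEnumerator should then only discover buckets for these partitions.
        this.remainingPartitions = remainingPartitions;
    }
}

With applyPartitions in place, the remaining partitions can be handed to the FlinkSourceEnumerator so that bucket discovery is limited to the pushed-down partitions.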
Anything else?

No response
Willingness to contribute