Watch for Multi-Cluster Resources #452
Labels: kind/feature (New feature)
We want to gather information about what users want from Watch capabilities.
Currently, a user who wants change events for resources in multiple clusters needs to run an informer to list-watch the resources in each cluster. The disadvantages of this are obvious: each informer maintains its own long-lived watch connection to a member kube-apiserver and keeps a full in-memory cache, so both client resource usage and the load on the member apiservers grow with the number of clusters (see the sketch below).
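For illustration, here is a minimal client-go sketch of the status quo: one informer per member cluster, each with its own watch connection and full object cache. The kubeconfig paths are hypothetical placeholders.

```go
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig paths, one per member cluster.
	memberKubeconfigs := []string{
		"/path/to/member-1.kubeconfig",
		"/path/to/member-2.kubeconfig",
	}

	stopCh := make(chan struct{})
	for _, path := range memberKubeconfigs {
		cfg, err := clientcmd.BuildConfigFromFlags("", path)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// One informer per cluster: each opens its own list-watch
		// against that cluster's kube-apiserver and caches full objects.
		factory := informers.NewSharedInformerFactory(client, 30*time.Minute)
		factory.Core().V1().Pods().Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
			AddFunc: func(obj interface{}) {
				pod := obj.(*corev1.Pod)
				fmt.Printf("ADD %s/%s\n", pod.Namespace, pod.Name)
			},
		})
		factory.Start(stopCh)
	}

	select {} // block forever; real code would handle shutdown signals
}
```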
Clusterpedia plans to provide a Watch capability that lets users observe resource changes across multiple clusters using the same principle as an Informer, replacing <N informers> => <N member kube-apiservers> with <1 informer> => <1 Clusterpedia APIServer> to receive resource change events from N member clusters.
By connecting to Clusterpedia instead of to each cluster, this avoids putting uncontrollable pressure on the member apiservers.
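A rough sketch of what that could look like from the client side, assuming (as this issue proposes) that Clusterpedia's aggregated resources API at `/apis/clusterpedia.io/v1beta1/resources` supports Watch: a single informer pointed at the Clusterpedia APIServer receives events for all member clusters.

```go
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/host-cluster.kubeconfig")
	if err != nil {
		panic(err)
	}
	// Route all requests through Clusterpedia's aggregated resources API,
	// so a single list-watch covers resources from every member cluster.
	// (Watch support on this endpoint is what this issue proposes.)
	cfg.Host += "/apis/clusterpedia.io/v1beta1/resources"

	client := kubernetes.NewForConfigOrDie(cfg)
	factory := informers.NewSharedInformerFactory(client, 30*time.Minute)
	factory.Core().V1().Pods().Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			pod := obj.(*corev1.Pod)
			fmt.Printf("ADD %s/%s\n", pod.Namespace, pod.Name)
		},
	})

	stopCh := make(chan struct{})
	factory.Start(stopCh)
	select {} // block forever; real code would handle shutdown signals
}
```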
In addition, with the Watch capability in place, Clusterpedia will also consider providing a new Informer that works like the native one, except that instead of caching all resource data in memory, it keeps only the metadata needed for filtering and fetches the full resource from Clusterpedia on demand (sketched below).
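The native client-go metadata informer already demonstrates half of that idea: it caches only `PartialObjectMetadata` and leaves fetching the full object to the consumer. A hedged sketch of the pattern, assuming both clients are pointed at Clusterpedia; the kubeconfig path and the `app=demo` filter label are made up for illustration.

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/metadata"
	"k8s.io/client-go/metadata/metadatainformer"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/host-cluster.kubeconfig")
	if err != nil {
		panic(err)
	}
	// Assumption: both the metadata watch and the on-demand full fetch
	// go through Clusterpedia's aggregated resources API.
	cfg.Host += "/apis/clusterpedia.io/v1beta1/resources"

	metaClient := metadata.NewForConfigOrDie(cfg)
	fullClient := kubernetes.NewForConfigOrDie(cfg)

	// The metadata informer caches only ObjectMeta, not full resources.
	factory := metadatainformer.NewSharedInformerFactory(metaClient, 30*time.Minute)
	gvr := schema.GroupVersionResource{Version: "v1", Resource: "pods"}
	factory.ForResource(gvr).Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		UpdateFunc: func(_, obj interface{}) {
			m := obj.(*metav1.PartialObjectMetadata)
			// Filter on cached metadata; fetch the full object only when needed.
			if m.Labels["app"] != "demo" { // hypothetical filter
				return
			}
			pod, err := fullClient.CoreV1().Pods(m.Namespace).Get(context.TODO(), m.Name, metav1.GetOptions{})
			if err == nil {
				fmt.Printf("fetched full pod %s/%s phase=%s\n", pod.Namespace, pod.Name, pod.Status.Phase)
			}
		},
	})

	stopCh := make(chan struct{})
	factory.Start(stopCh)
	select {} // block forever; real code would handle shutdown signals
}
```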
We have already implemented the Watch capability in the memory storage layer, and we hope to provide a generic solution that brings Watch capability to most storage layers.
If you feel that multi-cluster Watch capabilities would be useful to you, please comment with your usage scenarios and suggestions to drive the design and development of the Watch feature.
We can also discuss the Watch implementation in "Implementations of Watch that can be used for most storage layers".