Describe the feature you'd like to have.
The operator needs to be aware of failure domains so that it can:
Create Gluster pods that target different domains (see the affinity sketch below this list)
Maintain a pool of storage across domains so that resilient volumes can be provisioned
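As a rough illustration of the first point, a Gluster pod pinned to one failure domain could combine node affinity on a zone label with anti-affinity across hosts. The label key, zone value, and image below are assumptions, not anything the operator generates today:

```yaml
# Illustrative only: label key, zone value, and image are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: gluster-zone-a
  labels:
    app: glusterfs
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: failure-domain.beta.kubernetes.io/zone   # zone topology label
            operator: In
            values: ["zone-a"]
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: glusterfs
        topologyKey: kubernetes.io/hostname               # one Gluster pod per node
  containers:
  - name: glusterd2
    image: gluster/glusterd2-nightly                      # hypothetical image
```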
What is the value to the end user? (why is it a priority?)
Users want to be able to control how their data is spread relative to failure boundaries. For example, they may want a replica-3 (R3) volume to use 3 different domains so that an infrastructure outage does not affect the storage. They may also want to co-locate their storage with their workload to increase performance, since crossing infrastructure boundaries tends to increase latency & decrease bandwidth.
How will we know we have a good solution? (acceptance criteria)
Different templates can be defined to place pods into different failure domains. This includes both node-based affinities (for rack/quadrant/host/DC/AZ granularity) and storage affinities (for obtaining South PVs from different pools)
Gluster's presence in each domain can be scaled independently
The failure domains can be used as part of volume provisioning. The template and topology information feeds into both the CSI driver and Gluster (a hypothetical template sketch follows below).
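One hypothetical shape for such a topology template is sketched here; the CRD group, kind, and every field name are invented purely to show the idea (per-template affinity, backing storage pool, and node count), not an existing API:

```yaml
# Hypothetical sketch only; none of these fields exist in the operator today.
apiVersion: operator.gluster.org/v1alpha1
kind: GlusterCluster
metadata:
  name: example
spec:
  topologyTemplates:
  - name: zone-a
    nodeCount: 3                       # node count can vary per template
    nodeSelector:
      failure-domain.beta.kubernetes.io/zone: zone-a
    storage:
      storageClassName: local-zone-a   # pool the South (backing) PVs are drawn from
  - name: zone-b
    nodeCount: 2
    nodeSelector:
      failure-domain.beta.kubernetes.io/zone: zone-b
    storage:
      storageClassName: local-zone-b
```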
Work items
Support for topology templates
Be able to vary node count per template
Additional context
Dependencies:
GD2 node-level tags to designate topology template
Topology tag can be used as a filter by GD2 intelligent volume provisioning (IVP)
CSI can use topology information from a StorageClass to send tags in the provisioning request to GD2
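A sketch of how that StorageClass-to-GD2 path might look; the provisioner name and the parameter key carrying the tag are assumptions, while allowedTopologies is the standard Kubernetes field for constraining CSI topology:

```yaml
# Sketch only: the parameter key forwarding a tag to GD2 IVP is invented.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-zone-a
provisioner: org.gluster.glusterfs       # assumed CSI driver name
parameters:
  topologyTag: zone-a                    # hypothetical: passed to GD2 as a tag filter
allowedTopologies:
- matchLabelExpressions:
  - key: failure-domain.beta.kubernetes.io/zone
    values:
    - zone-a
```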