proposal: PodConfig as a custom resource #64
kind: PodConfig
metadata:
  name: containers
  namespace: blah
spec:
  podConfigResult:
    name: final
    priority: 100
  state:
    spec:
      containers:
      - name: stress
        image: stress:v1
        resources:
          requests:
            memory: "50Mi"
          limits:
            memory: "100Mi"
---
kind: PodConfig
metadata:
  name: tolerations
spec:
  podConfigResult:
    name: final
    priority: 100
  state:
    spec:
      tolerations:
      - key: "example-key"
        operator: "Exists"
        effect: "NoSchedule"
---
kind: PodConfigResult
metadata:
  name: containers
  namespace: blah
status:
  state:
    spec:
      containers:
      - name: stress
        image: stress:v1
        resources:
          requests:
            memory: "50Mi"
          limits:
            memory: "100Mi"
      tolerations:
      - key: "example-key"
        operator: "Exists"
        effect: "NoSchedule"
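The merge behaviour implied above can be sketched in a few lines. This is a minimal, hypothetical illustration (not an actual implementation from this project), assuming the controller merges the `state` of each PodConfig into one PodConfigResult in ascending `priority` order, so a higher priority wins on conflicting keys; the function names are invented for the sketch.

```python
# Hypothetical sketch: merge several PodConfig `state` fields into one
# PodConfigResult, with higher `priority` winning on conflicting keys.

def deep_merge(base, overlay):
    """Recursively overlay dicts; lists and scalars are replaced wholesale."""
    if isinstance(base, dict) and isinstance(overlay, dict):
        merged = dict(base)
        for key, value in overlay.items():
            merged[key] = deep_merge(merged[key], value) if key in merged else value
        return merged
    return overlay

def build_pod_config_result(pod_configs):
    """pod_configs: list of dicts shaped like the PodConfig specs above."""
    ordered = sorted(pod_configs,
                     key=lambda c: c["spec"]["podConfigResult"]["priority"])
    state = {}
    for config in ordered:
        state = deep_merge(state, config["spec"]["state"])
    return {"kind": "PodConfigResult", "status": {"state": state}}

containers_cfg = {
    "kind": "PodConfig",
    "spec": {
        "podConfigResult": {"name": "final", "priority": 100},
        "state": {"spec": {"containers": [
            {"name": "stress", "image": "stress:v1"}]}},
    },
}
tolerations_cfg = {
    "kind": "PodConfig",
    "spec": {
        "podConfigResult": {"name": "final", "priority": 100},
        "state": {"spec": {"tolerations": [
            {"key": "example-key", "operator": "Exists",
             "effect": "NoSchedule"}]}},
    },
}

# The two non-conflicting states end up side by side, as in the
# PodConfigResult shown above.
result = build_pod_config_result([containers_cfg, tolerations_cfg])
```

A production controller would use a Kubernetes strategic merge rather than this plain dict overlay, but the ordering-by-priority idea is the same.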
Great @AmitKumarDas, this can definitely help us in building the
@sagarkrsd can you explain how exactly this can help? For example, can you list the pain points this solution tries to solve, and I will try to fine-tune the approach. This approach considers both the controller & users making their changes; the change with the higher priority gets accepted as the final state. For example, this should help in arriving at the desired states for STS, Deployment & DaemonSet. Can this reduce the amount of code/boilerplate in openebs upgrade?
When we say this (PodConfig) can be edited by users as well, we should consider the case where it seems too generic: users would be more interested in providing the config at the Deployment, DaemonSet, etc. level instead of coming all the way down to the Pod.
Some of the pain points that it can solve wrt
@sagarkrsd If PodConfig becomes too generic, we can have specific ones. Third-party operators such as openebs upgrade will be looking out for specific resources. In other words, if we think of OpenEBS upgrade as a 3rd-party controller, it will now deal with various custom resources like PodConfig, PodConfigResult, DeploymentConfig, DeploymentConfigResult & so on. The amount of boilerplate code in the upgrade controller should reduce significantly.
# PodBuilder will look up various configs & an optional existing Pod
# resource to build a new Pod specification.
kind: PodBuilder
spec:
  # prefer Config over Pod or vice-versa;
  # Config or Pod are valid values
  prefer:
  # selection may include one or more containers;
  # they will be added/merged into the generated Pod spec
  # based on the priorities assigned
  containerConfigSelector:
    matchLabels:
  # selection may include one or more tolerations;
  # they will be merged into the generated Pod spec
  # based on the priorities assigned
  tolerationsConfigSelector:
    matchLabels:
  # selection should merge the matching pods based on
  # priorities. Priority can be set as an annotation. If no
  # priority is found then use the first possible match.
  # Log a warning if other matches are ignored.
  podSelector:
    matchLabels:
  # the resource that is applied after the pod spec is built;
  # PodPreview is the default
  output: # PodPreview, or Pod
status:
---
# This will reflect one kubernetes Container spec
kind: ContainerConfig
metadata:
  name: CC-1
  labels:
    cstor: mgmt
spec:
  priority:
  template:
    name:
    image: cstor-pool-mgmt
    command:
    args:
    env:
    resources:
      cpu:
      mem:
---
# This will reflect one kubernetes Container spec
kind: ContainerConfig
metadata:
  name: CC-2
  labels:
    cstor: monitor
spec:
  priority:
  template:
    name:
    image: cstor-pool-monitor
    command:
    args:
    env:
    resources:
      cpu:
      mem:
---
# This will reflect one kubernetes tolerations spec
kind: TolerationsConfig
metadata:
  name:
  namespace:
  labels:
spec:
  priority:
  template:
  - key:
    operator:
    effect:
    tolerationSeconds:
---
# This will be the output of PodBuilder.
# PodBuilder might apply a Pod as well if set.
kind: PodPreview
result:
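The `podSelector` rule described in the PodBuilder comments (match by labels, pick by a priority annotation, fall back to the first match, warn when other matches are ignored) can be sketched as follows. The annotation key and function names are assumptions made for the sketch, not part of any real API.

```python
# Hypothetical sketch of the podSelector behaviour: match pods by labels,
# choose the highest-priority match via an annotation, fall back to the
# first match, and warn when other matches are ignored.
import logging

PRIORITY_ANNOTATION = "example.io/priority"  # annotation key is an assumption

def matches(selector, labels):
    """True when every matchLabels pair is present in the object's labels."""
    return all(labels.get(k) == v
               for k, v in selector.get("matchLabels", {}).items())

def select_pod(selector, pods):
    candidates = [p for p in pods
                  if matches(selector, p["metadata"].get("labels", {}))]
    if not candidates:
        return None
    prioritized = [p for p in candidates
                   if PRIORITY_ANNOTATION in p["metadata"].get("annotations", {})]
    if prioritized:
        chosen = max(prioritized, key=lambda p: int(
            p["metadata"]["annotations"][PRIORITY_ANNOTATION]))
    else:
        chosen = candidates[0]  # no priority found: first possible match
    if len(candidates) > 1:
        # log a warning if other matches are ignored
        logging.warning("ignoring %d other matching pods", len(candidates) - 1)
    return chosen

pods = [
    {"metadata": {"name": "first", "labels": {"app": "cstor"}}},
    {"metadata": {"name": "preferred", "labels": {"app": "cstor"},
                  "annotations": {PRIORITY_ANNOTATION: "10"}}},
]
chosen = select_pod({"matchLabels": {"app": "cstor"}}, pods)
```

Here the annotated pod wins even though it matches second; with no annotations present, `first` would be chosen.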
How about just limiting to ... We might want to include ... This might be something to do with this enhancement at the metac library.
Problem Statement: As a DevOps engineer, I want a k8s controller that accepts values from various sources and provides a final set of merged values that can in turn be used to arrive at a desired state of a Pod.
For example:
- A provides taints & tolerations for the Pod
- B provides resource limits (memory & cpu) for the Pod
- C provides environment variables for containers of the Pod
- F should merge all the above & present a final state that is a 3-way merge of all the above, based on priority.
Now there can be custom controllers that make use of resource F to build the desired Pod state & subsequently apply this Pod state against the kubernetes cluster.
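The conflict-resolution aspect of the problem statement can be made concrete with a small sketch. This is an illustration under assumed inputs (sources A, B and C with made-up priorities), using a shallow merge for brevity where a real controller would do a strategic 3-way merge; a conflicting key is kept from whichever source has the higher priority.

```python
# Hypothetical illustration: sources A, B and C each contribute a fragment
# of the Pod's desired state; on conflict, the higher priority wins.

def merge_fragments(fragments):
    """fragments: list of (priority, dict); applied in ascending priority,
    so later (higher-priority) fragments override earlier ones. Shallow
    merge for brevity."""
    final = {}
    for _, fragment in sorted(fragments, key=lambda f: f[0]):
        final.update(fragment)
    return final

a = (10, {"tolerations": [{"key": "example-key", "operator": "Exists"}]})
b = (20, {"limits": {"memory": "100Mi", "cpu": "100m"}})
c = (30, {"limits": {"memory": "200Mi", "cpu": "100m"},  # conflicts with B
          "env": [{"name": "LOG_LEVEL", "value": "debug"}]})

# C's higher priority wins the conflicting `limits` entry;
# A's tolerations and C's env survive untouched.
final_state = merge_fragments([a, b, c])
```

A resource like F would hold `final_state`, and downstream controllers would read it to build and apply the actual Pod.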