
fix ceph_orch_apply multiple yaml #9

Open

NikolayDachev wants to merge 6 commits into main
Conversation

@NikolayDachev commented Jan 21, 2025

This fix loads the existing orch service when more than one YAML document is returned for the same service.

Currently, yaml.safe_load raises an exception when more than one YAML document is returned by

ceph orch ls {SERVICE} --format=yaml

This is expected behavior for yaml.safe_load; however, we should expect more than one YAML "document", for example:

ceph1:~ # ceph orch ls ingress --format=yaml
service_type: ingress
service_id: nfs.iac
service_name: ingress.nfs.iac
placement:
  count: 1
  label: nfs
spec:
  backend_service: nfs.iac
  first_virtual_router_id: 50
  frontend_port: 2049
  monitor_port: 9001
  virtual_ip: 172.16.20.11/24
status:
  created: '2025-01-21T17:57:52.630212Z'
  last_refresh: '2025-01-21T17:53:42.051637Z'
  ports:
  - 2049
  - 9001
  running: 2
  size: 2
  virtual_ip: 172.16.20.11/24
events:
- 2025-01-21T17:57:54.044109Z service:ingress.nfs.iac [INFO] "service was created"
---
service_type: ingress
service_id: rgw.iac
service_name: ingress.rgw.iac
placement:
  count: 1
  label: rgw
spec:
  backend_service: rgw.iac
  first_virtual_router_id: 50
  frontend_port: 8080
  monitor_port: 9002
  virtual_ip: 172.16.20.12/24
status:
  created: '2025-01-21T09:06:37.514928Z'
  last_refresh: '2025-01-21T17:57:50.437677Z'
  ports:
  - 8080
  - 9002
  running: 2
  size: 2
  virtual_ip: 172.16.20.12/24
events:
- 2025-01-21T09:06:53.351531Z service:ingress.rgw.iac [INFO] "service was created"

To handle that, we should use yaml.safe_load_all, as sketched below.
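
A minimal illustration of the change (the function name and surrounding code are just a sketch, not the module's actual source):

import yaml

def load_existing_specs(raw_output):
    # "ceph orch ls {SERVICE} --format=yaml" can return several YAML
    # documents separated by "---". yaml.safe_load() raises a
    # ComposerError ("expected a single document in the stream") on
    # such input, while yaml.safe_load_all() yields each document.
    return [doc for doc in yaml.safe_load_all(raw_output) if doc]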

This is a simple fix. It's my first time working with ceph.automation; from what I can see, this module is ceph/cephadm "on steroids :)", so no surprises. My point is, I hope I haven't missed anything, so please double-check.

Regards,

this fix will load existing orch when we have more
than one yaml returned for same service.
@NikolayDachev (Author)

I see now there is an issue with the idempotency check. It looks like it compares keys, but the problem appears when we have a list of dicts (service specs).

I will try to find a solution for this. We can probably use the service_type and service_id keys: if they match, then continue with the existing key check. I'm not sure how relevant this check is, but I think a Ceph service can be identified by the pair of those two keys.

this helps when we have more than one service,
since ceph always sets it to service_type+service_id

if we use a custom service_name then we will always
get status changed
@NikolayDachev (Author)

The latest changes handle all cases with multiple existing services.
service_name = (service_type + service_id)

So we use two parameters, one of which is unique, which helps to pick the correct existing service to compare against (see the sketch below).
I also updated the documentation. In general, if we want more than one spec, then a loop must be used (everything else would lead to more complex validation, and I'm not sure we want that).
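
A minimal sketch of that matching step (function and variable names are illustrative, not the actual module code):

def find_existing_spec(existing_docs, wanted):
    # Ceph builds service_name as "<service_type>.<service_id>"
    # (e.g. "ingress.nfs.iac"), so the (service_type, service_id) pair
    # uniquely identifies a service among the documents returned by
    # "ceph orch ls".
    for doc in existing_docs:
        if (doc.get("service_type") == wanted.get("service_type")
                and doc.get("service_id") == wanted.get("service_id")):
            return doc
    return None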

@NikolayDachev (Author)

I will wait for this MR to be reviewed. If it is merged, or another solution is implemented to fix the existing problem, then I think I can also add state: (service to be present or removed :) ), but IMO changes should be made one at a time.

@NikolayDachev (Author)

Also, please let me know if I should bump the version (1.1.0 > 1.1.1, since this is a bug fix?).

@asm0deuz (Collaborator)

Also, please let me know if I should bump the version (1.1.0 > 1.1.1, since this is a bug fix?).

No need for now, we will do it just before releasing a new version

@asm0deuz (Collaborator)

@NikolayDachev As you can see in the checks (https://github.com/ceph/ceph.automation/actions/runs/12905142747/job/36124058813?pr=9), the one related to the changelog failed.
Can you create a new yml file in /changelogs/fragments, following the documentation, with a bugfixes section?

Thx
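
For reference, a minimal fragment along those lines might look like this (the file name and wording are illustrative):

# changelogs/fragments/fix_ceph_orch_apply_multiple_yaml.yml
bugfixes:
  - ceph_orch_apply - handle multiple YAML documents returned by ``ceph orch ls`` (https://github.com/ceph/ceph.automation/pull/9).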

@asm0deuz (Collaborator) left a comment

LGTM

@NikolayDachev (Author) commented Jan 31, 2025

@asm0deuz Thank you for the approval; however, it looks like I left some trailing spaces in the fragment YAML. Please approve the workflow when possible.

Regards,
