
Cleanup multus conf on exit #1299

Merged

Conversation

adrianchiris
Contributor

@adrianchiris adrianchiris commented Jun 18, 2024

  • if cleanup-config-on-exit is set, delete the generated multus
    config file on exit.

  • add an option to skip the watch for master CNI config and kubeconfig,
    since cleanup-config-on-exit with multus-conf-file=auto also triggered
    the watch, for cases when deletion of the multus config is desired but the
    watch isn't

  • set up signal handling to allow config file cleanup on exit (a rough Go
    sketch of this pattern follows below)
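
A minimal Go sketch of the pattern described in the bullets above: a signal-handler context plus a deferred removal of the generated config. The flag wiring, config path, and variable names here are illustrative assumptions, not the actual multus-cni code.

```go
package main

import (
	"context"
	"log"
	"os"
	"os/signal"
	"syscall"
)

func main() {
	// Derive a context that is cancelled when SIGINT or SIGTERM arrives.
	ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGINT, syscall.SIGTERM)
	defer stop()

	cleanupConfigOnExit := true                               // would come from the cleanup-config-on-exit flag
	generatedConfPath := "/host/etc/cni/net.d/00-multus.conf" // placeholder path

	if cleanupConfigOnExit {
		// Deferred cleanup runs once the blocking wait below returns on a signal.
		defer func() {
			if err := os.Remove(generatedConfPath); err != nil && !os.IsNotExist(err) {
				log.Printf("failed to remove generated multus config: %v", err)
			}
		}()
	}

	// Block until a termination signal is received; the real daemon would also
	// run the optional config watch here.
	<-ctx.Done()
	log.Println("signal received, cleaning up and exiting")
}
```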

@adrianchiris adrianchiris changed the title Cleanup multus conf Cleanup multus conf on exit Jun 18, 2024
@adrianchiris
Contributor Author

This is a proposed alternative to #1298.

We used to clean up the multus conf file in a preStop hook; however, since the container no longer has a shell, we cannot run simple sh commands for cleanup.

@coveralls

Coverage Status

coverage: 62.736% (-0.4%) from 63.116%
when pulling 4198a76 on adrianchiris:cleanup-multus-conf
into aff99fc on k8snetworkplumbingwg:master.

@coveralls

Coverage Status

coverage: 62.667% (-0.4%) from 63.116%
when pulling 273c175 on adrianchiris:cleanup-multus-conf
into 41013e7 on k8snetworkplumbingwg:master.

@coveralls

Coverage Status

coverage: 62.667% (-0.4%) from 63.116%
when pulling ecc67c7 on adrianchiris:cleanup-multus-conf
into 41013e7 on k8snetworkplumbingwg:master.

@dougbtv
Member

dougbtv commented Jul 18, 2024

this provides a simple way to handle incoming os signals using context

Signed-off-by: adrianc <[email protected]>
- if cleanup-config-on-exit is set, delete the generated multus
  config file on exit.

- add an option to skip the watch for master cni config and kubeconfig,
  since cleanup-config-on-exit with multus-conf-file=auto also triggered
  the watch, for cases when deletion of the multus config is desired but the watch isn't

- set up signal handling to allow config file cleanup on exit

Signed-off-by: adrianc <[email protected]>
@coveralls

coveralls commented Jul 18, 2024

Coverage Status

coverage: 62.667% (-0.7%) from 63.364%
when pulling 6ade0ce on adrianchiris:cleanup-multus-conf
into f54693f on k8snetworkplumbingwg:master.

@adrianchiris
Contributor Author

@dougbtv docs update.

additional feedback/review appreciated :)

Signed-off-by: adrianc <[email protected]>
Member

@dougbtv dougbtv left a comment


Appreciate the updates for the docs!

@dougbtv dougbtv merged commit b11ea82 into k8snetworkplumbingwg:master Aug 1, 2024
24 checks passed
@seastco

seastco commented Aug 8, 2024

@dougbtv @adrianchiris ty for landing this.

is disabling the watch functionality necessary if CleanupConfigOnExit=true and MultusConfFile=auto because the file may be recreated before we're able to completely exit? does having cleanupMultusConf in the defer block not ensure this? I'm not seeing how the file could be recreated after it is called.

if I'm understanding correctly, CleanupConfigOnExit and the watch functionality are basically incompatible. happy to attempt improving this once I clarify my understanding.

@adrianchiris
Contributor Author

adrianchiris commented Aug 11, 2024

@seastco there is a select statement on the signal-handler context, so cleanup config on exit should work whether we are watching or not.

the "skip-config-watch" flag was added to preserve backward compatibility, since cleanup-config-on-exit originally triggered the watch for some reason (imo it shouldn't; it is counter-intuitive).
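
A minimal Go sketch of the interaction being described here: the watch loop selects on the signal-handler context, so it returns before the caller's deferred cleanup runs and nothing is left running that could recreate the config file. The package name, events channel, and regenerate callback are hypothetical placeholders, not the multus-cni API.

```go
package configwatch

import (
	"context"
	"log"
)

// watchLoop regenerates the multus config when the master CNI config or
// kubeconfig changes, and stops as soon as the signal-handler context is
// cancelled so that deferred cleanup in the caller can safely remove the file.
func watchLoop(ctx context.Context, events <-chan string, regenerate func() error) error {
	for {
		select {
		case <-ctx.Done():
			// Termination signal received: stop watching before cleanup runs.
			return ctx.Err()
		case ev := <-events:
			// A watched file changed: regenerate the multus config.
			if err := regenerate(); err != nil {
				log.Printf("failed to regenerate config after %q changed: %v", ev, err)
			}
		}
	}
}
```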
