[Decommission Hub] Callysto #3689
Comments
@Chealion Can you confirm that deletions are fine?
@damianavila does February 29, 2024, work for you as the decommission date for our 2i2c hub? How much lead time do you need to decommission the hub? We're still finalizing our data migration plans but should have them in place soon; we want to give you enough time on your end to complete the work.
@byrcyb @damianavila Can confirm that we're slated to do our backup tomorrow, and after that everything will be clear for the terraform destroy to delete everything.
@damianavila As a heads up, we have backed up all the user home directories, although we're not sure if there is an easy way to use CILogon later on to find out the user ID used for each home folder. Is there any documentation on how to use CILogon after the fact to determine what user ID it would calculate?
Unfortunately I don't think there is. I think each user would have to log in and take note of their user ID, which is displayed in the top right near the logout button. @GeorgianaElena am I right?
Thanks @sgibson91 - if it's not, we'll have to run with it. That said, the hub is ready to be taken down anytime between now and the 29th as well. (We'd like to make sure the terraform destroy has been run before EOD on the 29th to avoid any unexpected March charges.)
Thanks @Chealion - I will start work on decommissioning the hub today
@Chealion, @sgibson91, if I've understood the question correctly, you could also get the user ID directly from CILogon.
This still involves every user logging into CILogon and reporting what their user ID and email are, so it's not very automatable 😕
Note that there were two CILogon apps for Callysto's prod hub:

```
$ deployer cilogon-client get-all | grep "${CLUSTER_NAME}"
{'client_id': 'cilogon:/client_id/5e08f6663d8004c17ef6cf6c597b1d6e', 'name': 'callysto-prod'}
{'client_id': 'cilogon:/client_id/62c981a3d525b4d25f0f771624989ec2', 'name': 'callysto-prod'}
```

The deployer didn't have the correct information to delete one of them:

```
deployer cilogon-client delete --client-id cilogon:/client_id/5e08f6663d8004c17ef6cf6c597b1d6e $CLUSTER_NAME $HUB_NAME
{
"app_type": "web",
"at_lifetime": 0,
"client_id": "cilogon:/client_id/62c981a3d525b4d25f0f771624989ec2",
"client_id_issued_at": 1662020080,
"client_name": "callysto-prod",
"client_uri": "",
"ersatz_client": false,
"ersatz_inherit_id_token": false,
"extends_provisioners": false,
"forward_scopes_to_proxy": false,
"is_service_client": false,
"proxy_claims_list": [],
"proxy_request_scopes": [
"*"
],
"redirect_uris": [
"https://2i2c.callysto.ca/hub/oauth_callback"
],
"registration_client_uri": "https://cilogon.org/oauth2/oidc-cm?client_id=cilogon:/client_id/62c981a3d525b4d25f0f771624989ec2",
"rt_lifetime": 0,
"scope": [
"org.cilogon.userinfo",
"openid",
"email"
],
"service_client_users": [
"*"
],
"skip_server_scripts": false,
"strict_scopes": true
}
CILogon records are different than the client app stored in the configuration file. Consider updating the file.
```

Hopefully there is a web UI where I can manually delete this? EDIT:
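For what it's worth, the JSON above includes a `registration_client_uri`. As a rough sketch only - assuming you can obtain the bearer token that CILogon's `oidc-cm` client-management endpoint accepts (here called `CILOGON_ADMIN_TOKEN`, a placeholder name, not something confirmed in this issue) - a stale client could in principle be removed with a plain HTTP DELETE against that endpoint instead of a web UI:

```bash
# Hypothetical manual cleanup of a stale CILogon OIDC client.
# CILOGON_ADMIN_TOKEN is an assumed placeholder for whatever credential
# the oidc-cm client-management endpoint requires.
CLIENT_ID="cilogon:/client_id/5e08f6663d8004c17ef6cf6c597b1d6e"

curl -X DELETE \
  -H "Authorization: Bearer ${CILOGON_ADMIN_TOKEN}" \
  "https://cilogon.org/oauth2/oidc-cm?client_id=${CLIENT_ID}"
```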
The cluster is now removed; just waiting for #3741 to be merged (for file cleanup purposes)
Awesome, and thank you so very much everyone! The need for usernames is only in the event a user reports that they forgot to retrieve something, so knowing we could have them log into CILogon to get that ID is far more than we expected.
Summary
Callysto's funding from the Government of Canada ends soon.
Info
Task List
Phase I
Phase II - Hub Removal
(These steps are described in more detail in the docs at https://infrastructure.2i2c.org/en/latest/hub-deployment-guide/hubs/other-hub-ops/delete-hub.html)
- Remove the `config/clusters/<cluster_name>/<hub_name>.values.yaml` files. A complete list of relevant files can be found under the appropriate entry in the associated `cluster.yaml` file.
- Remove the hub's entry from the `config/clusters/<cluster_name>/cluster.yaml` file.
- Delete the hub deployment (see the sketch after this list): `helm --namespace HUB_NAME delete HUB_NAME`, then `kubectl delete namespace HUB_NAME`
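A minimal sketch of that last step, assuming the hub being removed is Callysto's `prod` hub (the hub name here is illustrative) and that your kubectl context already points at the Callysto cluster:

```bash
# Illustrative hub teardown; HUB_NAME is assumed to be "prod" for Callysto.
export HUB_NAME=prod

# Delete the Helm release that deployed the hub...
helm --namespace "${HUB_NAME}" delete "${HUB_NAME}"

# ...then delete the namespace and anything left inside it (PVCs, secrets, etc.).
kubectl delete namespace "${HUB_NAME}"
```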
Phase III - Cluster Removal
This phase is only necessary for single-hub clusters.
- Remove the cluster's datasource from the central Grafana: `deployer grafana central-ds remove CLUSTER_NAME`
- Run `terraform plan -destroy` and `terraform apply` from the appropriate workspace, to destroy the cluster (see the sketch after this list)
- Delete the terraform workspace: `terraform workspace delete <NAME>`
- Remove the `config/clusters/<cluster_name>` directory and all its contents
- Remove the cluster from the `deploy-hubs.yaml` and `validate-clusters.yaml` files
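A minimal sketch of the terraform steps, assuming the workspace is named `callysto` (the workspace name is illustrative, and any `-var-file` flags the cluster's config requires are omitted here):

```bash
# Illustrative cluster teardown with terraform; "callysto" is an assumed
# workspace name, not confirmed in this issue.
terraform workspace select callysto

# Build and review a destroy plan before applying it.
# (Any -var-file flags required by the cluster's config are omitted.)
terraform plan -destroy -out=destroy.plan
terraform apply destroy.plan

# A workspace cannot be deleted while selected, so switch away first.
terraform workspace select default
terraform workspace delete callysto
```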