[catalyst-latam] Standardize profileList #3721
Conversation
Merging this PR will trigger the following deployment actions: support and staging deployments, and production deployments.
Nice!! I suggest we try to go for a merge and iterate from there!
There are some inefficiencies around requesting exactly 8 GB, for example: that would only fit 3 users on a 32 GB node, because some capacity is reserved by k8s and some is needed by the few system pods running on the node besides the user pods (see the rough sketch below). I discussed this in #3631 (comment).
When I wrote that I was more positive towards using n2-highmem-2 nodes, but now that I've remembered how the overhead becomes relatively large on smaller nodes, I think it's good to avoid using a combination of 2 CPU and 4 CPU nodes, and I figure it may very well be that 4 CPU nodes are needed to fit a few users per node and avoid regular startup delays.
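A rough back-of-the-envelope sketch of that packing math, in Python; the reserved-memory and system-pod overheads below are illustrative assumptions rather than measured values for this cluster:

```python
# Rough estimate of how many user pods fit per node. The reserved and
# system-pod overheads are illustrative assumptions, not measured values.

def user_pods_per_node(node_mem_gb, request_gb, reserved_gb=3.0, system_pods_gb=1.0):
    """How many pods requesting `request_gb` GB of memory fit on one node,
    after subtracting memory reserved by k8s and used by system pods."""
    allocatable_gb = node_mem_gb - reserved_gb - system_pods_gb
    return int(allocatable_gb // request_gb)

# n2-highmem-4 (4 CPU / 32 GB): exact 8 GB requests fit only 3 users
print(user_pods_per_node(32, 8))  # 3
# n2-highmem-2 (2 CPU / 16 GB): an exact 8 GB request fits only 1 user
print(user_pods_per_node(16, 8))  # 1
```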
Deliberations
- Should memory requests be lowered to account for 32 GB not being fully available?
  I think let's not do that and keep the UX simple: if you request 1, 2, 4, or 8 GB, you get exactly that. At worst we use ~25% of the capacity inefficiently on a 4 CPU node, which happens if all users start 8 GB servers (see the profileList sketch after this list).
- Should we have a node running at all times?
  I think having one node running at all times would be a reasonable optimization to ensure startup times. It adds cloud costs, but I think getting quick access is quite a big deal.
- Should n2-highmem-2 or n2-highmem-4 nodes be used?
  On a 2 CPU / 16 GB node, an 8 GB request would only fit a single user pod, wasting at worst ~50% of the capacity. If we use 2 CPU nodes, I think at least the 8 GB request must be tweaked so that at least two such pods fit on a node by themselves, wasting at worst ~25% of the capacity. Overall, I lean towards 4 CPU nodes as the better compromise, weighing in the operational costs etc. of handling 2 CPU nodes, which may at some point be considered too small since they could force new node startups too often if users request 4 GB / 8 GB in classes of 30, for example.
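To make the first point concrete, here is a minimal sketch of a profile list where selecting 1, 2, 4, or 8 GB gives exactly that amount. It is written as a KubeSpawner profile_list in Python for illustration; the display names, option keys, and values are assumptions, not the actual configuration in this PR (which lives in the cluster's helm values under jupyterhub.singleuser.profileList):

```python
# jupyterhub_config.py style sketch (illustrative, not the actual config
# in this PR). Each choice requests and limits exactly the advertised
# amount of memory, so users get precisely what they pick.
c.KubeSpawner.profile_list = [
    {
        "display_name": "CPU only",
        "description": "Choose how much memory your server gets.",
        "profile_options": {
            "requests": {
                "display_name": "Resource allocation",
                "choices": {
                    "mem_1": {
                        "display_name": "1 GB RAM",
                        "default": True,
                        "kubespawner_override": {"mem_guarantee": "1G", "mem_limit": "1G"},
                    },
                    "mem_2": {
                        "display_name": "2 GB RAM",
                        "kubespawner_override": {"mem_guarantee": "2G", "mem_limit": "2G"},
                    },
                    "mem_4": {
                        "display_name": "4 GB RAM",
                        "kubespawner_override": {"mem_guarantee": "4G", "mem_limit": "4G"},
                    },
                    "mem_8": {
                        "display_name": "8 GB RAM",
                        "kubespawner_override": {"mem_guarantee": "8G", "mem_limit": "8G"},
                    },
                },
            }
        },
    }
]
```

With requests equal to limits like this, the worst-case waste on an n2-highmem-4 node comes from the 8 GB choice, as discussed above.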
Force-pushed from e49161c to 11a29b9
Force-pushed from 11a29b9 to dbdc182
Force-pushed from a5be3da to fd5b4bf
🎉🎉🎉🎉 Monitor the deployment of the hubs here 👉 https://github.com/2i2c-org/infrastructure/actions/runs/8116989943
This change was deployed to the staging hub: https://staging.latam.catalystproject.2i2c.cloud
ref: #3631