fix instances ui problem #1602
base: master
Conversation
In a Kubernetes environment, when deploying the application, everything works fine on the cluster side: all pods are up, all connected to the database, etc., yet the instances in the bunkerweb-ui always appear down. This is caused by the condition I removed: if you read it, you'll see that the status will always be "down", or will only be "up" when the pods are *not* up. I think it was just a typo and the logic should have been inverted: declare status = "down" and set it to "up" inside the if condition. I've added a simpler, more reliable check of the pod status. This also fixes the pods always appearing as down in the bunkerweb-ui (which is clear from the old condition). We can discuss.
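As a rough illustration of the inverted-condition bug described above (the function and variable names here are hypothetical, not BunkerWeb's actual code):

```python
# Illustrative reconstruction of the described bug; instance_status_old /
# instance_status_fixed and pod_ready are assumed names, not the real API.

def instance_status_old(pod_ready: bool) -> str:
    # Old logic as described in the PR: the default and the value set
    # inside the condition are swapped, so a ready pod reports "down".
    status = "up"
    if pod_ready:
        status = "down"  # inverted: ready pods are shown as down
    return status

def instance_status_fixed(pod_ready: bool) -> str:
    # Fixed logic: the status follows pod readiness directly.
    return "up" if pod_ready else "down"
```

With the old logic a ready pod yields "down"; the fixed version returns "up" exactly when the pod is ready.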
…stances fix instances ui problem
In a Kubernetes environment, when deploying the application following the documentation, everything works fine on the cluster side: all pods are up, etc. I've added a parameter to the constructor, which should be an empty list; it is used to make warning/error messages unique in the pod logs, instead of having the same log line repeat itself infinitely. The problem in my case was that some ingresses point to a port name instead of a port number, and when the _to_services function does not find a valid port number it logs the line "Ignoring unsupported ingress rule without backend service port number." The check I've added tests whether the service name is already present in the list; if not, it appends it and logs the message once, because the next time the name will already be in the list (this is obviously clear :) ). We can discuss, and I can provide more information if needed. Regards
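A minimal sketch of the log-deduplication idea described above (the class and method names are assumptions for illustration, not the actual `IngressController.py` code):

```python
from typing import List, Optional

class IngressWatcher:
    """Hypothetical sketch of the constructor-parameter approach from the PR."""

    def __init__(self, already_warned: Optional[List[str]] = None):
        # New constructor parameter described in the PR: a list tracking
        # which services have already triggered the warning.
        # (A None default avoids Python's shared-mutable-default pitfall.)
        self.already_warned = already_warned if already_warned is not None else []

    def warn_once(self, service_name: str) -> bool:
        # Log the "unsupported ingress rule" warning only once per service.
        if service_name in self.already_warned:
            return False  # already logged for this service; stay silent
        self.already_warned.append(service_name)
        print(
            "Ignoring unsupported ingress rule without backend "
            f"service port number: {service_name}"
        )
        return True
```

The first call for a given service logs and returns True; subsequent calls find the name in the list and return False, so the line no longer repeats on every reconciliation pass.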
This isn't related to this PR but to #1603.
I agree. I was going to push them both in one MR, then I reverted, but I don't know how they stayed (.gitignore & IngressController.py :/ )
This shouldn't be in the PR.
Interesting approach, I'll have a look at it once the other requested changes have been made.
I'll be here if you need any help 😄
In a Kubernetes environment, when deploying the application following the documentation, everything works fine on the cluster side: all pods are up, etc., yet the instances in the bunkerweb-ui always appear down. This is caused by the condition I removed: if you read it, you'll see that the status will always be "down", or will only be "up" when the pods are *not* up. I think it was just a typo and the logic should have been inverted: declare status = "down" and set it to "up" inside the if condition, or change the condition to != instead of ==, etc.
I've added a simpler, more reliable check of the pod status. This also fixes the pods always appearing as down in the bunkerweb-ui (which is clear from the old condition).
Regards