This repository has been archived by the owner on Oct 3, 2020. It is now read-only.

Pod restarting due to liveness failure #246

Open
steven-ellis opened this issue Oct 1, 2019 · 1 comment

steven-ellis commented Oct 1, 2019

I'm seeing the following at an OpenShift level for the kube-ops-view pod

Liveness probe failed: Get http://10.130.4.158:8080/health: net/http: request canceled (Client.Timeout exceeded while awaiting headers)

Looking at the pod's logs, I can see the health requests succeeding and then a graceful exit:

10.130.4.1 - - [2019-10-01 03:04:45] "GET /health HTTP/1.1" 200 115 0.000597
10.130.4.1 - - [2019-10-01 03:04:48] "GET /health HTTP/1.1" 200 117 0.000535
INFO:kube_ops_view.main:Received TERM signal, shutting down..
I'm running this on OpenShift 4.1.3
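The probe failure above ("Client.Timeout exceeded while awaiting headers") means the kubelet's HTTP client gave up before /health answered, so one common mitigation is to loosen the probe settings on the container spec. A minimal sketch of what that could look like — the timeout and threshold values here are illustrative assumptions, not the project's shipped defaults:

```yaml
# Hypothetical liveness-probe tuning for the kube-ops-view container.
# Raising timeoutSeconds and failureThreshold gives a briefly busy
# process more room before the kubelet sends TERM and restarts the pod.
livenessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 30   # wait for the app to finish starting
  periodSeconds: 10         # probe every 10s
  timeoutSeconds: 5         # was effectively 1s; allow slower replies
  failureThreshold: 3       # restart only after 3 consecutive failures
```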


ivaanko commented Jul 10, 2020

Hi! I have the same problem on EKS (Ubuntu 18.04 nodes). CPU and memory usage seem to be OK. What could be the reason for the TERM signal followed by 503 health checks?

2 participants