It was so close, question for Local Quickstart: Back-off pulling image "gcr.io/kubebuilder/kube-rbac-proxy:v0.8.0" #172
OOAAHH asked this question in Questions & Answers. Unanswered, 0 replies.
Back-off pulling image "gcr.io/kubebuilder/kube-rbac-proxy:v0.8.0"
While observing the deployment of KF-TOOLS--TENSORBOARDS--TENSORBOARD-CONTROLLER, I found that one of my pods reported an error: Back-off pulling image "gcr.io/kubebuilder/kube-rbac-proxy:v0.8.0". This is very strange, because when I check with `docker images`, I can see that the image `gcr.io/kubebuilder/kube-rbac-proxy:v0.8.0` is already saved on my machine. However, as shown in the screenshot below, my pod still fails during the image pull. As a workaround, I changed the image source directly in the corresponding Live Manifest, and my pod then worked normally. But when I ran `bash ./sync_argocd_apps.sh` to sync all my applications, I found that my Live Manifest had reverted to its original state before my changes. Perhaps I should fork deployKF into my own repository and then change the image there? Requesting assistance: which file in the repository should I change to affect the behavior of `bash ./sync_argocd_apps.sh`?
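For context on what I have tried: my understanding is that `docker images` only lists the host Docker daemon's cache, while the quickstart cluster's nodes run their own container runtime (containerd inside the k3d/kind node containers), which cannot see host images, so the kubelet still tries to pull from gcr.io. A minimal sketch of importing the host-cached image into the cluster, assuming a k3d or kind cluster; the cluster name `deploykf` is an assumption, substitute your own:

```shell
# Import the host-cached image into the cluster nodes' container runtime,
# so the kubelet finds it locally instead of pulling from gcr.io.
# The cluster name "deploykf" is an ASSUMPTION; check yours with
# `k3d cluster list` or `kind get clusters`.
IMAGE="gcr.io/kubebuilder/kube-rbac-proxy:v0.8.0"
if command -v k3d >/dev/null 2>&1; then
  k3d image import "$IMAGE" --cluster deploykf
elif command -v kind >/dev/null 2>&1; then
  kind load docker-image "$IMAGE" --name deploykf
fi
```

Note that this only helps when the container's `imagePullPolicy` is not `Always`; with `Always`, the kubelet contacts the registry on every pod start regardless of the local cache.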
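As for why my edit was reverted: I believe Argo CD treats the rendered manifests as the source of truth, so `sync_argocd_apps.sh` overwrites any change made only to the Live Manifest. If that is right, the durable fix is to change the image in whatever generates those manifests, i.e. the values file passed to `deploykf generate`, then regenerate, commit, and re-run the sync script. I do not know the actual values key for this image (it would need to be looked up in the deployKF values reference), so the key names below are HYPOTHETICAL, just to illustrate the shape of the override:

```yaml
# custom-values.yaml -- HYPOTHETICAL key names, for illustration only;
# look up the real keys in the deployKF values reference before using.
kubeflow_tools:
  tensorboards:
    kubeRbacProxy:
      image:
        repository: my-registry.example.com/kubebuilder/kube-rbac-proxy
        tag: v0.8.0
```

The idea would then be to regenerate the manifests from the updated values file, commit the output, and run `bash ./sync_argocd_apps.sh` again so Argo CD syncs the new image from the source of truth instead of reverting it.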