volumeMounts in PodConfiguration are being overridden #55

Open
alexfrancavilla opened this issue Jun 29, 2018 · 6 comments

@alexfrancavilla

Hi everyone,

In order to access private Git repositories through SSH, I was trying to mount a Secret or ConfigMap under /home/go/.ssh that contains everything required to SSH into our GitLab repositories (private keypair, prefilled known_hosts).

Basically I did the same thing that the images/profile-with-pod-yaml.png image on the install.md page in this repo shows, just with a different directory. Is this intentional behaviour for the .ssh directory, or am I facing a bug? (Yes, the secret exists and contains all the data; I tested it with a busybox pod mounting the directory in the same way.)

Here is my pod configuration from the elastic agent profile:

apiVersion: v1
kind: Pod
metadata:
  name: pod-name-prefix-{{ POD_POSTFIX }}
  labels:
    app: web
spec:
  containers:
    - name: gocd-agent-container-{{ CONTAINER_POSTFIX }}
      image: {{ GOCD_AGENT_IMAGE }}:{{ LATEST_VERSION }}
      securityContext:
        privileged: true
      volumeMounts:
        - name: ssh
          mountPath: /home/go/.ssh
          readOnly: true
  volumes:
    - name: ssh
      secret:
        secretName: gocd-ssh-key
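
For reference, the gocd-ssh-key secret referenced above could be created along these lines (the file paths below are just placeholders for the actual key material):

# Placeholder file names; the private key and a prefilled known_hosts become keys in the secret
kubectl create secret generic gocd-ssh-key -n infra \
  --from-file=id_rsa=./id_rsa \
  --from-file=known_hosts=./known_hosts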

Here is the output of kubectl describe -n infra pod/k8s-ea-77f0a914-7877-494f-9fb6-872044ab3b5a, which is the elastic agent pod. As you can see, no extra volume was mounted:

Name:         k8s-ea-77f0a914-7877-494f-9fb6-872044ab3b5a
Namespace:    infra
Node:         MY-NODE/MY-IP
Start Time:   Thu, 28 Jun 2018 15:08:10 +0200
Labels:       Elastic-Agent-Created-By=cd.go.contrib.elasticagent.kubernetes
              Elastic-Agent-Job-Id=7
              kind=kubernetes-elastic-agent
Annotations:  Elastic-Agent-Job-Identifier={"pipeline_name":"hello_world_ssh","pipeline_counter":1,"pipeline_label":"1","stage_name":"default_stage","stage_counter":"1","job_name":"default_job","job_id":7}
              Environment=
              Image=gocd/gocd-agent-docker-dind:v18.6.0
              MaxCPU=
              MaxMemory=
              PodConfiguration=apiVersion: v1
kind: Pod
metadata:
  name: pod-name-prefix-{{ POD_POSTFIX }}
  labels:
    app: web
spec:
  containers:
    - name: gocd-agent-container-{{ CONTAINER_POSTFIX }}
      ...
         Privileged=true
         SpecifiedUsingPodConfiguration=false
Status:  Running
IP:      100.96.0.22
Containers:
  k8s-ea-77f0a914-7877-494f-9fb6-872044ab3b5a:
    Container ID:   docker://179c413dec10dd689e19af2794acba14c47a998fbb322628d5809501ffd4fe14
    Image:          gocd/gocd-agent-docker-dind:v18.6.0
    Image ID:       docker-pullable://gocd/gocd-agent-docker-dind@sha256:90521ed917de7c6535c072eae8432870e7d9004e0f08100a0dc7aa01b01107ac
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 28 Jun 2018 15:08:23 +0200
    Ready:          True
    Restart Count:  0
    Environment:
      GO_EA_SERVER_URL:                       https://gocd-server:8154/go
      GO_EA_AUTO_REGISTER_KEY:                1df623f2-601f-45ec-8578-711a2ca9ba2a
      GO_EA_AUTO_REGISTER_ELASTIC_AGENT_ID:   k8s-ea-77f0a914-7877-494f-9fb6-872044ab3b5a
      GO_EA_AUTO_REGISTER_ELASTIC_PLUGIN_ID:  cd.go.contrib.elasticagent.kubernetes
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-ms8jc (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          True
  PodScheduled   True
Volumes:
  default-token-ms8jc:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-ms8jc
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason                 Age   From                                                     Message
  ----    ------                 ----  ----                                                     -------
  Normal  Scheduled              9m    default-scheduler                                        Successfully assigned k8s-ea-77f0a914-7877-494f-9fb6-872044ab3b5a to MY-NODE
  Normal  SuccessfulMountVolume  9m    kubelet, MY-NODE  MountVolume.SetUp succeeded for volume "default-token-ms8jc"
  Normal  Pulling                9m    kubelet, MY-NODE  pulling image "gocd/gocd-agent-docker-dind:v18.6.0"
  Normal  Pulled                 9m    kubelet, MY-NODE  Successfully pulled image "gocd/gocd-agent-docker-dind:v18.6.0"
  Normal  Created                9m    kubelet, MY-NODE  Created container
  Normal  Started                9m    kubelet, MY-NODE  Started container

And here is the output of kubectl get po -n infra k8s-ea-77f0a914-7877-494f-9fb6-872044ab3b5a -o yaml, which correctly shows my pod configuration template in the annotation, but is missing the actual mount in the spec below:

apiVersion: v1
kind: Pod
metadata:
  annotations:
    Elastic-Agent-Job-Identifier: '{"pipeline_name":"hello_world_ssh","pipeline_counter":1,"pipeline_label":"1","stage_name":"default_stage","stage_counter":"1","job_name":"default_job","job_id":7}'
    Environment: ""
    Image: gocd/gocd-agent-docker-dind:v18.6.0
    MaxCPU: ""
    MaxMemory: ""
    PodConfiguration: |-
      apiVersion: v1
      kind: Pod
      metadata:
        name: pod-name-prefix-{{ POD_POSTFIX }}
        labels:
          app: web
      spec:
        containers:
          - name: gocd-agent-container-{{ CONTAINER_POSTFIX }}
            image: {{ GOCD_AGENT_IMAGE }}:{{ LATEST_VERSION }}
            securityContext:
              privileged: true
            volumeMounts:
              - name: ssh
                mountPath: /home/go/.ssh
                readOnly: true
        volumes:
          - name: ssh
            secret:
              secretName: gocd-ssh-key
    Privileged: "true"
    SpecifiedUsingPodConfiguration: "false"
  creationTimestamp: 2018-06-28T13:08:10Z
  labels:
    Elastic-Agent-Created-By: cd.go.contrib.elasticagent.kubernetes
    Elastic-Agent-Job-Id: "7"
    kind: kubernetes-elastic-agent
  name: k8s-ea-77f0a914-7877-494f-9fb6-872044ab3b5a
  namespace: infra
  resourceVersion: "2879120"
  selfLink: /api/v1/namespaces/infra/pods/k8s-ea-77f0a914-7877-494f-9fb6-872044ab3b5a
  uid: 4ea4b29d-7ad4-11e8-940f-02118ba64a3e
spec:
  containers:
  - env:
    - name: GO_EA_SERVER_URL
      value: https://gocd-server:8154/go
    - name: GO_EA_AUTO_REGISTER_KEY
      value: 1df623f2-601f-45ec-8578-711a2ca9ba2a
    - name: GO_EA_AUTO_REGISTER_ELASTIC_AGENT_ID
      value: k8s-ea-77f0a914-7877-494f-9fb6-872044ab3b5a
    - name: GO_EA_AUTO_REGISTER_ELASTIC_PLUGIN_ID
      value: cd.go.contrib.elasticagent.kubernetes
    image: gocd/gocd-agent-docker-dind:v18.6.0
    imagePullPolicy: IfNotPresent
    name: k8s-ea-77f0a914-7877-494f-9fb6-872044ab3b5a
    resources: {}
    securityContext:
      privileged: true
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-ms8jc
      readOnly: true
  dnsPolicy: ClusterFirst
  nodeName: MY-NODE
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: default-token-ms8jc
    secret:
      defaultMode: 420
      secretName: default-token-ms8jc
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2018-06-28T13:08:10Z
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: 2018-06-28T13:08:23Z
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: 2018-06-28T13:08:10Z
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://179c413dec10dd689e19af2794acba14c47a998fbb322628d5809501ffd4fe14
    image: gocd/gocd-agent-docker-dind:v18.6.0
    imageID: docker-pullable://gocd/gocd-agent-docker-dind@sha256:90521ed917de7c6535c072eae8432870e7d9004e0f08100a0dc7aa01b01107ac
    lastState: {}
    name: k8s-ea-77f0a914-7877-494f-9fb6-872044ab3b5a
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: 2018-06-28T13:08:23Z
  hostIP: MY-IP
  phase: Running
  podIP: 100.96.0.22
  qosClass: BestEffort
  startTime: 2018-06-28T13:08:10Z
@alexfrancavilla changed the title from "volumeMounts in PodConfiguration are being overriden" to "volumeMounts in PodConfiguration are being overridden" on Jun 29, 2018
@arvindsv
Member

arvindsv commented Jul 9, 2018

I see that /home/go is expected to be mounted (according to this). Maybe that's the problem, since you seem to be mounting /home/go/.ssh?

Maybe @GaneshSPatil or @varshavaradarajan have some idea?

@sheroy
Contributor

sheroy commented Jul 9, 2018

Hi @alexfrancavilla, this appears to me to be a defect in the elastic agent implementation. We'll take a look and keep this issue updated on the fix.

@varshavaradarajan
Member

varshavaradarajan commented Jul 10, 2018

Since you seem to be mounting /home/go/.ssh.

@arvindsv - It's okay to mount any directory for Docker; I checked, and that works.

@alexfrancavilla - do you mind sharing the go-server logs? I want to check if there are any errors. One thing I did run into while checking the volume mounts was that the material update would get stuck with a prompt to add GitHub to known hosts. This would be present in the logs.

I got the following prompt -

The authenticity of host 'github.com (192.30.253.113)' can't be established.
RSA key fingerprint is ...
Are you sure you want to continue connecting (yes/no)?

This will be followed by Skipping update of material ... which has been in-progress since .... The dashboard will not show any errors when you trigger the pipeline because of a known bug on the dashboard - gocd/gocd#4647.

If you got a similar prompt, you can solve this by also mounting the known_hosts file.
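
For example, one way to pre-populate known_hosts before creating the secret is ssh-keyscan (the host name below is just an example); the resulting file can then go into the same secret that gets mounted at /home/go/.ssh:

# Record the Git host's public host key ahead of time so SSH never prompts interactively
ssh-keyscan gitlab.example.com >> known_hosts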

@darkedges

Here is the version I have working:

apiVersion: v1
kind: Pod
metadata:
  name: pod-name-prefix-{{ POD_POSTFIX }}
  labels:
    app: web
spec:
  containers:
    - name: gocd-agent-container-{{ CONTAINER_POSTFIX }}
      image: {{ GOCD_AGENT_IMAGE }}:{{ LATEST_VERSION }}
      securityContext:
        privileged: true 
      volumeMounts:
        - name: git-ssh-key
          mountPath: /home/go/.ssh/
          readOnly: true
  volumes:
    - name: git-ssh-key
      secret:
        secretName: git-ssh-key

The Kubernetes secret was created this way:

kubectl create secret generic git-ssh-key -n gocd --from-file=c:\development\forgerock\tls\id_rsa,c:\development\forgerock\tls\config

c:\development\forgerock\tls\config

StrictHostKeyChecking no
UserKnownHostsFile /dev/null

c:\development\forgerock\tls\id_rsa

-----BEGIN RSA PRIVATE KEY-----
xxxxxxxxx
-----END RSA PRIVATE KEY-----

@alexfrancavilla
Author

@varshavaradarajan Sorry for the late response. I can't provide the logs anymore since they're gone already. My solution was to build my own agent image on top of your official agent image, which includes the key as well as a prepared known_hosts file (generated with ssh-keyscan). This setup runs fine and I haven't changed a thing since.
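
Roughly, that image looks something like this (the base tag, host name and file paths are illustrative, not my exact Dockerfile):

FROM gocd/gocd-agent-docker-dind:v18.6.0

# Illustrative only: bake a deploy key and a pre-scanned known_hosts into the go user's home.
# Assumes ssh-keyscan is available in the base image.
COPY id_rsa /home/go/.ssh/id_rsa
RUN ssh-keyscan gitlab.example.com >> /home/go/.ssh/known_hosts \
    && chown -R go:go /home/go/.ssh \
    && chmod 700 /home/go/.ssh \
    && chmod 600 /home/go/.ssh/id_rsa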

@simcomp2003

Working solution as proposed:

apiVersion: v1
kind: Pod
metadata:
  name: pod-name-prefix-{{ POD_POSTFIX }}
  labels:
    app: web
spec:
  containers:
    - name: gocd-agent-container-{{ CONTAINER_POSTFIX }}
      image: {{ GOCD_AGENT_IMAGE }}:{{ LATEST_VERSION }}
      securityContext:
        privileged: true
      volumeMounts:
        - name: git-ssh-key
          mountPath: /home/go/.ssh
          readOnly: true
  volumes:
    - name: git-ssh-key
      secret:
        secretName: git-ssh-key

The problem was the trailing "/" at the end of the mount path.
