
vernemq on openshift okd #130

Closed
yeganx opened this issue Apr 10, 2019 · 11 comments

@yeganx commented Apr 10, 2019

I want to use VerneMQ in my OpenShift project.
I added the image stream and deployed the vernemq:1.7.1 image.

#oc logs vernmq-1-65m48
sed: can't read /vernemq/etc/vm.args: No such file or directory
id: cannot find name for user ID 1000390000
vm.args needs to have a -name parameter.
  -sname is not supported.
/usr/sbin/start_vernemq: line 122: ps: command not found
@larshesel (Contributor)
Can you launch the Docker image locally to test whether that works? I believe there was a 1.7.1-2 image published, so perhaps you could give that (or latest) a go and see if that helps.
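
A quick local sanity check could look like the following (assuming that tag is indeed published on Docker Hub):

#docker run -p 1883:1883 --name vernemq-test -d erlio/docker-vernemq:1.7.1-2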

@yeganx (Author) commented Apr 10, 2019

I tried this:
#docker run -p 1883:1883 --name vernemq1 -d erlio/docker-vernemq:1.7.1
It seems OK:
#docker logs vernemq1
13:59:31.369 [info] Try to start vmq_plumtree: ok
13:59:32.111 [info] loaded 0 subscriptions into vmq_reg_trie
13:59:32.119 [info] cluster event handler 'vmq_cluster' registered

@yeganx (Author) commented Apr 10, 2019

My problem is running the image in OpenShift.
I tried deploying it with statefuls-set.yaml:



apiVersion: apps/v1
kind: StatefulSet
metadata:
  creationTimestamp: '2019-04-10T13:44:19Z'
  generation: 1
  labels:
    app.kubernetes.io/instance: dunking-marmot
    app.kubernetes.io/managed-by: Tiller
    app.kubernetes.io/name: vernemq
    helm.sh/chart: vernemq-1.2.0
  name: test-vernemq
  namespace: test-vern
  resourceVersion: '5743717'
  selfLink: /apis/apps/v1/namespaces/test-vern/statefulsets/test-vernemq
  uid: bd7e4d7f-5b96-11e9-8809-000c296169fc
spec:
  podManagementPolicy: OrderedReady
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/instance: dunking-marmot
      app.kubernetes.io/name: vernemq
  serviceName: dunking-marmot-vernemq-headless
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/instance: dunking-marmot
        app.kubernetes.io/name: vernemq
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: app
                      operator: In
                      values:
                        - vernemq
                    - key: release
                      operator: In
                      values:
                        - dunking-marmot
                topologyKey: kubernetes.io/hostname
              weight: 100
      containers:
        - env:
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
            - name: DOCKER_VERNEMQ_DISCOVERY_KUBERNETES
              value: '1'
            - name: DOCKER_VERNEMQ_KUBERNETES_LABEL_SELECTOR
              value: >-
                app.kubernetes.io/name=vernemq,app.kubernetes.io/instance=dunking-marmot
            - name: DOCKER_VERNEMQ_LISTENER__TCP__LOCALHOST
              value: '127.0.0.1:1883'
            - name: DOCKER_VERNEMQ_ALLOW_REGISTER_DURING_NETSPLIT
              value: 'on'
            - name: DOCKER_VERNEMQ_ALLOW_PUBLISH_DURING_NETSPLIT
              value: 'on'
            - name: DOCKER_VERNEMQ_ALLOW_SUBSCRIBE_DURING_NETSPLIT
              value: 'on'
            - name: DOCKER_VERNEMQ_ALLOW_UNSUBSCRIBE_DURING_NETSPLIT
              value: 'on'
          image: 'erlio/docker-vernemq:1.7.1'
          imagePullPolicy: IfNotPresent
          livenessProbe:
            exec:
              command:
                - /bin/sh
                - '-c'
                - /vernemq/bin/vernemq ping | grep pong
            failureThreshold: 3
            initialDelaySeconds: 90
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 5
          name: vernemq
          ports:
            - containerPort: 1883
              name: mqtt
              protocol: TCP
            - containerPort: 8883
              name: mqtts
              protocol: TCP
            - containerPort: 4369
              name: epmd
              protocol: TCP
            - containerPort: 44053
              name: vmq
              protocol: TCP
            - containerPort: 8080
              name: ws
              protocol: TCP
            - containerPort: 8888
              name: prometheus
              protocol: TCP
            - containerPort: 9100
              protocol: TCP
            - containerPort: 9101
              protocol: TCP
            - containerPort: 9102
              protocol: TCP
            - containerPort: 9103
              protocol: TCP
            - containerPort: 9104
              protocol: TCP
            - containerPort: 9105
              protocol: TCP
            - containerPort: 9106
              protocol: TCP
            - containerPort: 9107
              protocol: TCP
            - containerPort: 9108
              protocol: TCP
            - containerPort: 9109
              protocol: TCP
          readinessProbe:
            exec:
              command:
                - /bin/sh
                - '-c'
                - /vernemq/bin/vernemq ping | grep pong
            failureThreshold: 3
            initialDelaySeconds: 90
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 5
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /vernemq/log
              name: logs
            - mountPath: /vernemq/data
              name: data
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        fsGroup: 10000
        runAsUser: 10000
      serviceAccount: dunking-marmot-vernemq
      serviceAccountName: dunking-marmot-vernemq
      terminationGracePeriodSeconds: 60
      volumes:
        - emptyDir: {}
          name: logs
        - emptyDir: {}
          name: data
  updateStrategy:
    type: RollingUpdate
status:
  replicas: 0

The log is:
create Pod test-vernemq-0 in StatefulSet test-vernemq failed error: pods "test-vernemq-0" is forbidden: unable to validate against any security context constraint: [fsGroup: Invalid value: []int64{10000}: 10000 is not an allowed group spec.containers[0].securityContext.securityContext.runAsUser: Invalid value: 10000: must be in the ranges: [1000450000, 1000459999]]


Does the image run as the root user? I guess this is because OpenShift doesn't allow running images as root.
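
For reference, the user an image is configured to run as can be checked with the plain Docker CLI, e.g.:

#docker inspect --format '{{.Config.User}}' erlio/docker-vernemq:1.7.1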

@francois-travais (Contributor) commented Apr 10, 2019 via email

@yeganx (Author) commented Apr 11, 2019

The image runs with UID and GID 10000; apparently OpenShift does not allow a UID or GID lower than 1000450000.


Now how can I fix it?

@JohnCMcDonough (Contributor) commented Apr 12, 2019

You need to override these values in the helm chart:

securityContext:
  runAsUser: 10000
  runAsGroup: 10000
  fsGroup: 10000

To be something like:

securityContext:
  runAsUser: 1000450001
  runAsGroup: 1000450001
  fsGroup: 1000450001
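
If the chart is installed with Helm (as the Tiller label suggests), these can also be overridden on the command line; a rough sketch, assuming the chart exposes them under the securityContext values key (the chart path is a placeholder):

#helm install --name dunking-marmot <path-to-vernemq-chart> \
  --set securityContext.runAsUser=1000450001 \
  --set securityContext.runAsGroup=1000450001 \
  --set securityContext.fsGroup=1000450001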

@blazdivjak commented Apr 12, 2019

@yeganx please check PR: #131 .

Container images are also available on Docker Hub: https://hub.docker.com/r/blazdivjak/docker-vernemq .

@yeganx (Author) commented Apr 13, 2019

You need to override these values in the helm chart:

securityContext:
  runAsUser: 10000
  runAsGroup: 10000
  fsGroup: 10000

To be something like:

securityContext:
  runAsUser: 1000450001
  runAsGroup: 1000450001
  fsGroup: 1000450001

#oc describe project test-vern

Name:			test-vern
Created:		2 days ago
Labels:			<none>
Annotations:		openshift.io/description=
			openshift.io/display-name=
			openshift.io/requester=admin
			openshift.io/sa.scc.mcs=s0:c21,c15
			openshift.io/sa.scc.supplemental-groups=1000450000/10000
			openshift.io/sa.scc.uid-range=1000450000/10000
Display Name:		<none>
Description:		<none>
Status:			Active
Node Selector:		<none>
Quota:			<none>
Resource limits:	<none>

I overrode my YAML file to:

securityContext:
  runAsUser: 1000450010
  runAsGroup: 1000450010
  fsGroup: 1000450010

and created it, but the log again shows:

create Pod test-vernemq-0 in StatefulSet test-vernemq failed error: pods "test-vernemq-0" is forbidden: unable to validate against any security context constraint: [fsGroup: Invalid value: []int64{10000}: 10000 is not an allowed group spec.containers[0].securityContext.securityContext.runAsUser: Invalid value: 10000: must be in the ranges: [1000450000, 1000459999]]

I also added the anyuid policy:
#oc adm policy add-scc-to-user anyuid -z default -n test-vern
but nothing changed!

@yeganx (Author) commented Apr 13, 2019

I found I should add the policy to the dunking-marmot-vernemq service account instead, and then the deployment succeeded once.
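
For the record, that would be something like the earlier command, but targeting the pod's service account (namespace as above):

#oc adm policy add-scc-to-user anyuid -z dunking-marmot-vernemq -n test-vern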
But then I created another test project, and now the error is:
Liveness probe failed: Node '[email protected]' not responding to pings.

Inside the pod, too:

#vmq-admin cluster show
Node '[email protected]' not responding to pings.

@yeganx (Author) commented Apr 13, 2019

#nano cluster_vernmq.yaml

apiVersion: v1
kind: Service
metadata:
  name: vernemq
  labels:
    app: vernemq
spec:
  ports:
  - port: 80
    name: vernemq
  clusterIP: None
  selector:
    app: vernemq
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: vernemq
spec:
  selector:
    matchLabels:
      app: vernemq # has to match .spec.template.metadata.labels
  serviceName: "vernemq"
  replicas: 1 # by default is 1
  template:
    metadata:
      labels:
        app: vernemq # has to match .spec.selector.matchLabels
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: app
                      operator: In
                      values:
                        - vernemq
                    - key: release
                      operator: In
                      values:
                        - dunking-marmot
                topologyKey: kubernetes.io/hostname
              weight: 100
      containers:
        - env:
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
            - name: DOCKER_VERNEMQ_DISCOVERY_KUBERNETES
              value: '1'
            - name: DOCKER_VERNEMQ_KUBERNETES_LABEL_SELECTOR
              value: >-
                app.kubernetes.io/name=vernemq,app.kubernetes.io/instance=dunking-marmot
            - name: DOCKER_VERNEMQ_LISTENER__TCP__LOCALHOST
              value: '127.0.0.1:1883'
            - name: DOCKER_VERNEMQ_ALLOW_REGISTER_DURING_NETSPLIT
              value: 'on'
            - name: DOCKER_VERNEMQ_ALLOW_PUBLISH_DURING_NETSPLIT
              value: 'on'
            - name: DOCKER_VERNEMQ_ALLOW_SUBSCRIBE_DURING_NETSPLIT
              value: 'on'
            - name: DOCKER_VERNEMQ_ALLOW_UNSUBSCRIBE_DURING_NETSPLIT
              value: 'on'
          image: 'erlio/docker-vernemq:1.7.1'
          imagePullPolicy: IfNotPresent
          livenessProbe:
            exec:
              command:
                - /bin/sh
                - '-c'
                - /vernemq/bin/vernemq ping | grep pong
            failureThreshold: 3
            initialDelaySeconds: 90
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 5
          name: vernemq
          ports:
            - containerPort: 1883
              name: mqtt
              protocol: TCP
            - containerPort: 8883
              name: mqtts
              protocol: TCP
            - containerPort: 4369
              name: epmd
              protocol: TCP
            - containerPort: 44053
              name: vmq
              protocol: TCP
            - containerPort: 8080
              name: ws
              protocol: TCP
            - containerPort: 8888
              name: prometheus
              protocol: TCP
            - containerPort: 9100
              protocol: TCP
            - containerPort: 9101
              protocol: TCP
            - containerPort: 9102
              protocol: TCP
            - containerPort: 9103
              protocol: TCP
            - containerPort: 9104
              protocol: TCP
            - containerPort: 9105
              protocol: TCP
            - containerPort: 9106
              protocol: TCP
            - containerPort: 9107
              protocol: TCP
            - containerPort: 9108
              protocol: TCP
            - containerPort: 9109
              protocol: TCP
          readinessProbe:
            exec:
              command:
                - /bin/sh
                - '-c'
                - /vernemq/bin/vernemq ping | grep pong
            failureThreshold: 3
            initialDelaySeconds: 90
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 5
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /vernemq/log
              name: logs
            - mountPath: /vernemq/data
              name: data
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        fsGroup: 10000
        runAsUser: 10000
      serviceAccount: dunking-marmot-vernemq
      serviceAccountName: dunking-marmot-vernemq
      terminationGracePeriodSeconds: 60
      volumes:
        - emptyDir: {}
          name: logs
        - emptyDir: {}
          name: data
  updateStrategy:
    type: RollingUpdate
#oc create -f cluster_vernmq.yaml -n test
#oc adm policy add-scc-to-user privileged system:serviceaccount:test:dunking-marmot-vernemq
#oc adm policy add-scc-to-user anyuid system:serviceaccount:test:dunking-marmot-vernemq
#oc get pods -n test
vernemq-0       0/1       Running   6          18m

The error is:
Readiness probe failed: Node '[email protected]' not responding to pings.
What is wrong?

@francois-travais (Contributor)
DOCKER_VERNEMQ_KUBERNETES_LABEL_SELECTOR must match the labels you are using on your pods. In the snippet it does not.
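
In the second StatefulSet above the pod template is labelled app: vernemq, so the selector would need to use that label instead, e.g. something like:

- name: DOCKER_VERNEMQ_KUBERNETES_LABEL_SELECTOR
  value: 'app=vernemq'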
