Hi Team Could you please help me #188

Open

Dev-siva opened this issue Apr 27, 2021 · 6 comments

Comments
@Dev-siva

Hi Team,
I took the CKAD exam on 27th April 2021. Could you please help me with the questions below?

[Four screenshots of the exam questions were attached here]

@ahmedqazi444

First pic, Q1:
I have made some assumptions here; this is just my take on the problem.
For the first part, you can update the nginxsvc using kubectl edit svc nginxsvc; this will open the svc config in YAML format. Change the port of the service to 9090, save, and check the svc using kubectl get svc nginxsvc.
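For illustration, a minimal sketch of what the edited Service might look like; the selector and targetPort below are assumptions, keep whatever the existing Service already defines:

apiVersion: v1
kind: Service
metadata:
  name: nginxsvc
spec:
  selector:
    app: nginx          # assumption: leave the existing selector untouched
  ports:
  - port: 9090          # the only change: the Service port becomes 9090
    targetPort: 80      # assumption: the container port stays as it was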
For the second part, I am assuming that there is already a pod running with a single container and you want to add the ambassador. Get the YAML of the pod using kubectl get pod nameofthepod -o yaml > podname.yaml, or you can even use the poller pod YAML file located at /opt/KDMxxxxx/poller.yaml. Once you have the YAML file,
follow the steps in order:
1. Create the ConfigMap: kubectl create configmap haproxy-config --from-file=/opt/KDMCxxx/haproxy.cfg
2. Mount the ConfigMap into a volume, bind port 60 on the ambassador container, and add the volumeMount for the ConfigMap.
3. Update the args under the poller container to connect to localhost; removing the svc name and using localhost or 127.0.0.1 should work, although I am not sure what the exact arguments are:

Code snippet (format as per YAML syntax):
spec:
  volumes:
  - name: my-vol
    configMap:
      name: haproxy-config
  containers:
  - image: haproxy
    name: ambassador-container
    ports:
    - containerPort: 60
    volumeMounts:
    - name: my-vol
      mountPath: /usr/local/etc/haproxy/haproxy.cfg
      subPath: haproxy.cfg   # mount only the haproxy.cfg key at this exact path
  - image: poller
    name: poller
    args:
    # remove the name of the svc from the existing args and use localhost instead
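A hedged guess at how the poller's args could end up; the flag name --endpoint is invented, keep the poller's real argument names and only swap the service hostname for localhost:

  - image: poller
    name: poller
    args:
    - --endpoint=http://localhost:60   # hypothetical flag; previously pointed at the svc name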

@ahmedqazi444

You can refer to the solution for Question 2 here:
#187 (comment)

@Dev-siva

Hi ahmedqazi444,

Thank you so much for Question 1.

For Question 2, I need to use the -o wide flag to get the error of the pod.

I think I need to run "kubectl get po -o wide" or "kubectl get events -A | grep -i "liveness probe failed" | awk '{print $1 "/" $5}'" or "kubectl get events -A -o wide"?

To fix the issue: I think it will be related to the port, right?

Question 3:
What changes do I need to make to kdsn00201-newpod so that it sends and receives traffic only to and from the proxy and db pods?

NOTE: I have seen that two network policies named proxy-networkpolicy and db-networkpolicy have already been created.

Question 4: Is the below command correct?

kubectl create cronjob --image=busybox:stable --schedule="* /1 * * * *" --/bin/sh -c "uname" --dry-run=client -o yaml > pod.yaml

In the edit, I need to add it under spec.jobTemplate.spec.template.spec.activeDeadlineSeconds, right?

Please help me here, Team / @ahmedqazi444

@ahmedqazi444

ahmedqazi444 commented Apr 28, 2021

For Question 2: -A is used for all namespaces; you can use -n namespacename to scope to a single namespace, and -o wide gives you much more information. You should use it and adjust the command for the second part of the question.

In order to fix the issue you need to know what is defined for the liveness probe: if it's an exec probe, it's probably the command; if it's httpGet, the HTTP path or port; and for tcpSocket, the right port. As I have no clue why the liveness probe is failing, it is difficult for me to guess what needs to be fixed. What gave you the impression that the port needs to be modified?
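For reference, a minimal sketch of where a liveness probe sits and which fields usually need fixing; the container name, path, and port below are made up:

containers:
- name: example                # hypothetical container, just to show placement
  image: nginx
  livenessProbe:
    httpGet:                   # could equally be exec: or tcpSocket:
      path: /healthz           # a wrong path makes the probe fail
      port: 8080               # a wrong port makes the probe fail
    initialDelaySeconds: 5
    periodSeconds: 10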
For Question 3: you will need to match the labels on the pods. The question clearly mentions that the policies are configured properly and you shouldn't modify them, so you can only change the labels so that they match the appropriate policies. For example, for the db network policy, check the labels defined in its pod selector and match these with the labels on the db pod.
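A hedged sketch of that matching; the label keys and values (app: db, db-access: "true") are invented, the real ones come from the existing policies:

# excerpt from the existing db-networkpolicy (not to be modified)
spec:
  podSelector:
    matchLabels:
      app: db                  # hypothetical: selects the db pod
  ingress:
  - from:
    - podSelector:
        matchLabels:
          db-access: "true"    # hypothetical label the policy allows traffic from
---
# kdsn00201-newpod then needs that same label on its metadata
metadata:
  name: kdsn00201-newpod
  labels:
    db-access: "true"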
For Question 4: you need to add completions: 1 to the YAML file, as the question says the job must be completed once. The name should be hello. activeDeadlineSeconds should be added under the job template section; it can be confusing with the multiple spec sections. The first one is for the CronJob and the second one, under jobTemplate, is for the Job. activeDeadlineSeconds works with a Job and not the CronJob itself, as per my understanding; do verify this.

kubectl create cronjob hello --image=busybox:stable --schedule="*/1 * * * *" --dry-run=client -o yaml -- /bin/sh -c "date" > pod.yaml
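A rough sketch of where completions and activeDeadlineSeconds land in the generated YAML (the deadline value 30 is just a placeholder; on older clusters the apiVersion may be batch/v1beta1):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:                              # CronJob spec
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:                          # Job spec: completions and the deadline go here
      completions: 1
      activeDeadlineSeconds: 30    # placeholder value; use what the question asks for
      template:
        spec:                      # Pod spec
          containers:
          - name: hello
            image: busybox:stable
            command: ["/bin/sh", "-c", "date"]
          restartPolicy: OnFailure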
If you have any other questions, I would be happy to help with them.

@Dev-siva

@ahmedqazi444 Thank you so much for your response and answer...

I have a few questions that I just want you to verify; if I am incorrect, please correct me.

Context: Your application's namespace requires a specific service account to be used

Task: Update the "appa" deployment in the "production-app" namespace to run as the "restrictedservice" service account. The service account has already been created.

Answer: kubectl get serviceaccounts -n production-app # it will show "restrictedservice"
kubectl edit -n production-app deploy appa # then set serviceAccountName: restrictedservice in the pod template (spec.template.spec), as sketched below
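For reference, a minimal sketch of where the field sits in the Deployment; the container name and image are placeholders:

spec:
  template:
    spec:
      serviceAccountName: restrictedservice
      containers:
      - name: appa              # placeholder container name/image
        image: nginx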

Context:
You are tasked to create a ConfigMap and consume the ConfigMap in a pod using a volume mount.

Task:
. Create a ConfigMap named "another-config" containing the key/value pair key4/value4
. Start a pod named "nginx-configmap" containing a single container using the "nginx" image, and mount the key you just created into the pod under the directory /yet/another/path

Answer: kubectl create configmap another-config --from-literal=key4=value4
kubectl run nginx-configmap --image=nginx --dry-run=client -o yaml >pod.yaml

Then edit pod.yaml to add the lines below; the volumeMounts block goes under the nginx container and the volumes block at the pod spec level (a fuller sketch follows):
    volumeMounts:
    - name: config-volume
      mountPath: /yet/another/path
  volumes:
  - name: config-volume
    configMap:
      name: another-config
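Putting it together, a hedged sketch of the full pod; assuming the whole ConfigMap is mounted as a volume, key4 then shows up as the file /yet/another/path/key4:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-configmap
spec:
  containers:
  - name: nginx-configmap
    image: nginx
    volumeMounts:
    - name: config-volume
      mountPath: /yet/another/path
  volumes:
  - name: config-volume
    configMap:
      name: another-config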

Context:
You are required to create a pod that requests a certain amount of CPU and memory, so it gets scheduled to a node that
has those resources available

Task:
. Create a pod named nginx-resources in the pod-resources namespace that requests a minimum of 400m CPU and 2Gi memory for its container
. The pod should use image nginx
. The pod-resources namespace has already been created

Answer: kubectl -n pod-resources run nginx-resources --image=nginx --requests='cpu=400m,memory=2Gi'
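If the --requests flag isn't available on your kubectl version (it has been deprecated/removed in newer releases), the equivalent request in the pod YAML would be roughly:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-resources
  namespace: pod-resources
spec:
  containers:
  - name: nginx-resources
    image: nginx
    resources:
      requests:
        cpu: "400m"
        memory: "2Gi"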

@ahmedqazi444

I believe your approach is correct. Furthermore, you can practice these exact questions with the exercises in this repo to get a solid understanding. Good luck.
