CKAD - Killer.sh Practice Question Walkthrough
Question 1.
The DevOps team would like to get the list of all Namespaces in the cluster. Get the list and save it to /opt/course/1/namespaces on ckad5601.
Answer
kubectl get ns > /opt/course/1/namespaces
Question 2.
Create a single Pod of image httpd:2.4.41-alpine in Namespace default. The Pod should be named pod1 and the container should be named pod1-container.
Your manager would like to run a command manually on occasion to output the status of that exact Pod. Please write a command that does this into /opt/course/2/pod1-status-command.sh on ckad5601. The command should use kubectl.
Answer
kubectl run pod1 --image httpd:2.4.41-alpine --dry-run=client -o yaml > 2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
  - name: pod1-container
    image: httpd:2.4.41-alpine
kubectl apply -f 2.yaml
echo 'kubectl describe pods pod1 | grep -i status:' > /opt/course/2/pod1-status-command.sh
# or
echo 'kubectl get pod pod1 -o jsonpath="{.status.phase}"' > /opt/course/2/pod1-status-command.sh
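To confirm the saved command works, it can simply be executed; with the Pod running it should print Running:
sh /opt/course/2/pod1-status-command.sh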
Question 3.
Team Neptune needs a Job template located at /opt/course/3/job.yaml. This Job should run image busybox:1.31.0 and execute sleep 2 && echo done. It should be in namespace neptune, run a total of 3 times and should execute 2 runs in parallel.
Start the Job and check its history. Each pod created by the Job should have the label id: awesome-job. The job should be named neb-new-job and the container neb-new-job-container.
Answer
kubectl create job -n neptune neb-new-job --image busybox:1.31.0 --dry-run=client -o yaml > /opt/course/3/job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: neb-new-job
  namespace: neptune
  labels:
    id: awesome-job
spec:
  completions: 3
  parallelism: 2
  template:
    metadata:
      labels:
        id: awesome-job    # required so every Pod created by the Job carries the label
    spec:
      restartPolicy: OnFailure
      containers:
      - image: busybox:1.31.0
        name: neb-new-job-container
        command:
        - sh
        - -c
        - sleep 2 && echo done
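To start the Job and check its runs as the question asks, apply the template and watch the Pods it creates:
kubectl apply -f /opt/course/3/job.yaml
kubectl get jobs,pods -n neptune -l id=awesome-job   # 3 completions, 2 running in parallel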
Question 4.
Team Mercury asked you to perform some operations using Helm, all in Namespace mercury:
- Delete release internal-issue-report-apiv1
- Upgrade release internal-issue-report-apiv2 to any newer version of chart bitnami/nginx available
- Install a new release internal-issue-report-apache of chart bitnami/apache. The Deployment should have two replicas, set these via Helm-values during install
- There seems to be a broken release, stuck in pending-install state. Find it and delete it
Answer
helm uninstall -n mercury internal-issue-report-apiv1
helm repo update
helm upgrade -n mercury internal-issue-report-apiv2 bitnami/nginx
helm show values bitnami/apache > values.yaml
helm install -n mercury internal-issue-report-apache bitnami/apache --set replicaCount=2
helm list -n mercury -a
NAME NAMESPACE ... STATUS CHART APP VERSION
internal-issue-report-apache mercury ... deployed apache-11.2.20 2.4.62
internal-issue-report-apiv2 mercury ... deployed nginx-18.2.0 1.27.1
internal-issue-report-app mercury ... deployed nginx-18.1.14 1.27.1
internal-issue-report-daniel mercury ... pending-install nginx-18.1.14 1.27.1
helm uninstall -n mercury internal-issue-report-daniel
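To double-check the replica requirement of the new apache release, list the Deployments Helm created:
kubectl get deployments -n mercury   # the apache Deployment should show 2/2 ready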
Question 5.
Team Neptune has its own ServiceAccount named neptune-sa-v2 in Namespace neptune. A coworker needs the token from the Secret that belongs to that ServiceAccount. Write the base64 decoded token to file /opt/course/5/token on ckad7326.
Answer
kubectl get serviceaccount -n neptune neptune-sa-v2
kubectl describe serviceaccount -n neptune neptune-sa-v2
kubectl get secret -n neptune
kubectl describe secret -n neptune neptune-secret-1
kubectl get secret -n neptune neptune-secret-1 -o jsonpath='{.data.token}' | base64 -d > /opt/course/5/token
Question 6.
Create a single Pod named pod6 in Namespace default of image busybox:1.31.0. The Pod should have a readiness-probe executing cat /tmp/ready. It should initially wait 5 and periodically wait 10 seconds. This will set the container ready only if the file /tmp/ready exists.
The Pod should run the command touch /tmp/ready && sleep 1d, which will create the necessary file to be ready and then idles. Create the Pod and confirm it starts.
Answer
kubectl run pod6 --image busybox:1.31.0 --dry-run=client -o yaml > 6.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod6
spec:
  containers:
  - image: busybox:1.31.0
    name: pod6
    command:
    - sh
    - -c
    - touch /tmp/ready && sleep 1d
    readinessProbe:
      initialDelaySeconds: 5
      periodSeconds: 10
      exec:
        command:
        - sh
        - -c
        - cat /tmp/ready
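Create the Pod and confirm it becomes ready once the readiness probe finds /tmp/ready:
kubectl apply -f 6.yaml
kubectl get pod pod6   # READY 1/1 shortly after the initial 5s delay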
Question 7.
The board of Team Neptune decided to take over control of one e-commerce webserver from Team Saturn. The administrator who once set up this webserver is not part of the organisation any longer. All information you could get was that the e-commerce system is called my-happy-shop.
Search for the correct Pod in Namespace saturn and move it to Namespace neptune. It doesn't matter if you shut it down and spin it up again, it probably hasn't any customers anyways.
Answer
kubectl get pods -n saturn
kubectl get pods -o yaml -n saturn | grep -i my-happy-shop
kubectl get pods -n saturn webserver-sat-003 -o yaml > 7.yaml
# change the namespace in 7.yaml to neptune
apiVersion: v1
kind: Pod
metadata:
  annotations:
    description: this is the server for the E-Commerce System my-happy-shop
  labels:
    id: webserver-sat-003
  name: webserver-sat-003
  namespace: neptune # new namespace here
spec:
  containers:
  - image: nginx:1.16.1-alpine
    imagePullPolicy: IfNotPresent
    name: webserver-sat
  restartPolicy: Always
kubectl apply -f 7.yaml
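A Pod cannot be moved across Namespaces in place, so once the copy is running in neptune, delete the original from saturn to finish the move:
kubectl delete pod -n saturn webserver-sat-003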
Question 8.
There is an existing Deployment named api-new-c32 in Namespace neptune. A developer did make an update to the Deployment but the updated version never came online. Check the Deployment history and find a revision that works, then rollback to it. Could you tell Team Neptune what the error was so it doesn't happen again?
Answer
kubectl get deployment -n neptune
kubectl describe deployment -n neptune api-new-c32
kubectl rollout status -n neptune deployment api-new-c32
kubectl rollout -n neptune history deployment api-new-c32
kubectl rollout undo -n neptune deployment api-new-c32 --to-revision 4
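To tell Team Neptune what went wrong, the Pod template of the broken revision can be inspected (the revision number below is only an example); the describe output typically points to something like a mistyped image causing ImagePullBackOff:
kubectl rollout history -n neptune deployment api-new-c32 --revision 5   # shows that revision's image and settings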
Question 9.
In Namespace pluto there is a single Pod named holy-api. It has been working okay for a while now but Team Pluto needs it to be more reliable.
Convert the Pod into a Deployment named holy-api with 3 replicas and delete the single Pod once done. The raw Pod template file is available at /opt/course/9/holy-api-pod.yaml.
In addition, the new Deployment should set allowPrivilegeEscalation: false and privileged: false for the security context on container level.
Please create the Deployment and save its yaml under /opt/course/9/holy-api-deployment.yaml on ckad9043.
- holy-api-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    id: holy-api
  name: holy-api
  namespace: pluto
spec:
  volumes:
  - name: cache-volume1
    emptyDir: {}
  - name: cache-volume2
    emptyDir: {}
  - name: cache-volume3
    emptyDir: {}
  containers:
  - image: nginx:1.17.3-alpine
    name: holy-api-container
    volumeMounts:
    - mountPath: /cache1
      name: cache-volume1
    - mountPath: /cache2
      name: cache-volume2
    - mountPath: /cache3
      name: cache-volume3
    env:
    - name: CACHE_KEY1
      value: "cache1"
    - name: CACHE_KEY2
      value: "cache2"
    - name: CACHE_KEY3
      value: "cache3"
  dnsPolicy: ClusterFirst
  restartPolicy: Always
Answer
cp /opt/course/9/holy-api-pod.yaml /opt/course/9/holy-api-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    id: holy-api
    app: holy-api
  name: holy-api
  namespace: pluto
spec:
  replicas: 3
  selector:
    matchLabels:
      app: holy-api
  template:
    metadata:
      labels:
        app: holy-api
    spec:
      volumes:
      - name: cache-volume1
        emptyDir: {}
      - name: cache-volume2
        emptyDir: {}
      - name: cache-volume3
        emptyDir: {}
      containers:
      - image: nginx:1.17.3-alpine
        name: holy-api-container
        securityContext:
          allowPrivilegeEscalation: false
          privileged: false
        env:
        - name: CACHE_KEY1
          value: "cache1"
        - name: CACHE_KEY2
          value: "cache2"
        - name: CACHE_KEY3
          value: "cache3"
        volumeMounts:
        - mountPath: /cache1
          name: cache-volume1
        - mountPath: /cache2
          name: cache-volume2
        - mountPath: /cache3
          name: cache-volume3
      dnsPolicy: ClusterFirst
      restartPolicy: Always
kubectl apply -f /opt/course/9/holy-api-deployment.yaml
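With the Deployment's 3 replicas running, delete the original single Pod as the question requires:
kubectl delete pod holy-api -n pluto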
Question 10.
Team Pluto needs a new cluster internal Service. Create a ClusterIP Service named project-plt-6cc-svc in Namespace pluto. This Service should expose a single Pod named project-plt-6cc-api of image nginx:1.17.3-alpine, create that Pod as well. The Pod should be identified by label project: plt-6cc-api. The Service should use tcp port redirection of 3333:80.
Finally use for example curl from a temporary nginx:alpine Pod to get the response from the Service. Write the response into /opt/course/10/service_test.html on ckad9043. Also check if the logs of Pod project-plt-6cc-api show the request and write those into /opt/course/10/service_test.log on ckad9043.
Answer
kubectl run project-plt-6cc-api --image nginx:1.17.3-alpine -n pluto --dry-run=client -o yaml > 10.yaml
apiVersion: v1
kind: Pod
metadata:
  name: project-plt-6cc-api
  namespace: pluto
  labels:
    project: plt-6cc-api
spec:
  containers:
  - image: nginx:1.17.3-alpine
    name: project-6cc-api-container
kubectl expose pod project-plt-6cc-api -n pluto --name project-plt-6cc-svc --port 3333 --target-port 80
kubectl run verify-pod -n pluto --image nginx:alpine
kubectl exec verify-pod -n pluto -- curl -s project-plt-6cc-svc.pluto.svc.cluster.local:3333 > /opt/course/10/service_test.html
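The request should also appear in the access log of Pod project-plt-6cc-api, which goes into the second file:
kubectl logs project-plt-6cc-api -n pluto > /opt/course/10/service_test.log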
Question 11.
There are files to build a container image located at /opt/course/11/image on ckad9043. The container will run a Golang application which outputs information to stdout. You're asked to perform the following tasks:
ℹ️ Run all Docker and Podman commands as user root. Use sudo docker and sudo podman or become root with sudo -i
- Change the Dockerfile: set ENV variable SUN_CIPHER_ID to hardcoded value 5b9c1065-e39d-4a43-a04a-e59bcea3e03f
- Build the image using sudo docker, tag it registry.killer.sh:5000/sun-cipher:v1-docker and push it to the registry
- Build the image using sudo podman, tag it registry.killer.sh:5000/sun-cipher:v1-podman and push it to the registry
- Run a container using sudo podman, which keeps running detached in the background, named sun-cipher using image registry.killer.sh:5000/sun-cipher:v1-podman
- Write the logs your container sun-cipher produces into /opt/course/11/logs on ckad9043
- Dockerfile
FROM docker.io/library/golang:1.15.15-alpine3.14
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o bin/app .

FROM docker.io/library/alpine:3.12.4
COPY --from=0 /src/bin/app app
ENV SUN_CIPHER_ID=5b9c1065-e39d-4a43-a04a-e59bcea3e03f
CMD ["./app"]
Answer
cd /opt/course/11/image
sudo docker build -t registry.killer.sh:5000/sun-cipher:v1-docker .
sudo podman build -t registry.killer.sh:5000/sun-cipher:v1-podman .
sudo docker push registry.killer.sh:5000/sun-cipher:v1-docker
sudo podman push registry.killer.sh:5000/sun-cipher:v1-podman
sudo podman run -d --name sun-cipher registry.killer.sh:5000/sun-cipher:v1-podman
sudo podman logs sun-cipher > /opt/course/11/logs
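To verify the detached container keeps running and the log file was written:
sudo podman ps | grep sun-cipher
cat /opt/course/11/logs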
Question 12.
Create a new PersistentVolume named earth-project-earthflower-pv. It should have a capacity of 2Gi, accessMode ReadWriteOnce, hostPath /Volumes/Data and no storageClassName defined.
Next create a new PersistentVolumeClaim in Namespace earth named earth-project-earthflower-pvc. It should request 2Gi storage, accessMode ReadWriteOnce and should not define a storageClassName. The PVC should be bound to the PV correctly.
Finally create a new Deployment project-earthflower in Namespace earth which mounts that volume at /tmp/project-data. The Pods of that Deployment should be of image httpd:2.4.41-alpine.
Answer
## earth-project-earthflower-pv
apiVersion: v1
kind: PersistentVolume
metadata:
  name: earth-project-earthflower-pv
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 2Gi
  hostPath:
    path: /Volumes/Data
## earth-project-earthflower-pvc
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: earth-project-earthflower-pvc
  namespace: earth
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
kubectl create deployment project-earthflower -n earth --image httpd:2.4.41-alpine --dry-run=client -o yaml > 12.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: earth
  name: project-earthflower
  labels:
    app: project-earthflower
spec:
  replicas: 2
  selector:
    matchLabels:
      app: project-earthflower
  template:
    metadata:
      labels:
        app: project-earthflower
    spec:
      containers:
      - image: httpd:2.4.41-alpine
        name: project-earthflower-container
        volumeMounts:
        - mountPath: /tmp/project-data
          name: earth-pv
      volumes:
      - name: earth-pv
        persistentVolumeClaim:
          claimName: earth-project-earthflower-pvc
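After applying the three manifests above, the PVC should report STATUS Bound and the Deployment's Pods should be Running with the volume mounted:
kubectl get pv,pvc -n earth
kubectl get pods -n earth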
Question 13.
Team Moonpie, which has the Namespace moon, needs more storage. Create a new PersistentVolumeClaim named moon-pvc-126 in that namespace. This claim should use a new StorageClass moon-retain with the provisioner set to moon-retainer and the reclaimPolicy set to Retain. The claim should request storage of 3Gi, an accessMode of ReadWriteOnce and should use the new StorageClass.
The provisioner moon-retainer will be created by another team, so it's expected that the PVC will not bind yet. Confirm this by writing the event message from the PVC into file /opt/course/13/pvc-126-reason on ckad9043.
Answer
## moon-retain
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: moon-retain
provisioner: moon-retainer
reclaimPolicy: Retain
## moon-pvc-126
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: moon-pvc-126
  namespace: moon
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
  storageClassName: moon-retain
kubectl describe pvc -n moon moon-pvc-126
/opt/course/13/pvc-126-reason
Waiting for a volume to be created either by the external provisioner 'moon-retainer' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
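The message can be copied from the describe output into the file by hand; a quick shortcut (the grep pattern is just a suggestion) would be:
kubectl describe pvc -n moon moon-pvc-126 | grep -i waiting > /opt/course/13/pvc-126-reason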
Question 14.
You need to make changes on an existing Pod in Namespace moon called secret-handler. Create a new Secret secret1 which contains user=test and pass=pwd. The Secret's content should be available in Pod secret-handler as environment variables SECRET1_USER and SECRET1_PASS. The yaml for Pod secret-handler is available at /opt/course/14/secret-handler.yaml.
There is existing yaml for another Secret at /opt/course/14/secret2.yaml, create this Secret and mount it inside the same Pod at /tmp/secret2. Your changes should be saved under /opt/course/14/secret-handler-new.yaml on ckad9043. Both Secrets should only be available in Namespace moon.
Answer
# Create Secret secret1
kubectl create secret generic secret1 -n moon --from-literal=user=test --from-literal=pass=pwd
# Create Secret secret2
kubectl apply -f /opt/course/14/secret2.yaml
cp /opt/course/14/secret-handler.yaml /opt/course/14/secret-handler-new.yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-handler
  namespace: moon
spec:
  volumes:
  - name: secret2
    secret:
      secretName: secret2
  containers:
  - name: secret-handler-container
    image: # keep the image from the original secret-handler.yaml
    env:
    - name: SECRET1_USER
      valueFrom:
        secretKeyRef:
          name: secret1
          key: user
    - name: SECRET1_PASS
      valueFrom:
        secretKeyRef:
          name: secret1
          key: pass
    volumeMounts:
    - name: secret2
      mountPath: /tmp/secret2
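Environment variables and volumes cannot be changed on a running Pod, so the existing Pod has to be recreated from the new yaml:
kubectl delete pod -n moon secret-handler
kubectl apply -f /opt/course/14/secret-handler-new.yaml
kubectl exec -n moon secret-handler -- env | grep SECRET1   # verify both variables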
Question 15.
Team Moonpie has a nginx server Deployment called web-moon in Namespace moon. Someone started configuring it but it was never completed. To complete please create a ConfigMap called configmap-web-moon-html containing the content of file /opt/course/15/web-moon.html under the data key-name index.html.
The Deployment web-moon is already configured to work with this ConfigMap and serve its content. Test the nginx configuration for example using curl from a temporary nginx:alpine Pod.
Answer
kubectl describe deployment -n moon web-moon
kubectl create configmap -n moon configmap-web-moon-html --from-file=index.html=/opt/course/15/web-moon.html
kubectl rollout restart deployment -n moon web-moon
kubectl run tmp --restart=Never --rm -i --image nginx:alpine -- curl -s <web-moon Pod IP>
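The curl target can be the IP of a web-moon Pod (taken from -o wide); the response should contain the content of web-moon.html:
kubectl get pods -n moon -o wide | grep web-moon   # note the Pod IP and use it as the curl target above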
Question 16.
The Tech Lead of Mercury2D decided it's time for more logging, to finally fight all these missing data incidents. There is an existing container named cleaner-con in Deployment cleaner in Namespace mercury. This container mounts a volume and writes logs into a file called cleaner.log.
The yaml for the existing Deployment is available at /opt/course/16/cleaner.yaml. Persist your changes at /opt/course/16/cleaner-new.yaml on ckad7326 but also make sure the Deployment is running.
Create a sidecar container named logger-con, image busybox:1.31.0 , which mounts the same volume and writes the content of cleaner.log to stdout, you can use the tail -f command for this. This way it can be picked up by kubectl logs.
Check if the logs of the new container reveal something about the missing data incidents.
- cleaner.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  name: cleaner
  namespace: mercury
spec:
  replicas: 2
  selector:
    matchLabels:
      id: cleaner
  template:
    metadata:
      labels:
        id: cleaner
    spec:
      volumes:
      - name: logs
        emptyDir: {}
      initContainers:
      - name: init
        image: bash:5.0.11
        command: ['bash', '-c', 'echo init > /var/log/cleaner/cleaner.log']
        volumeMounts:
        - name: logs
          mountPath: /var/log/cleaner
      containers:
      - name: cleaner-con
        image: bash:5.0.11
        args: ['bash', '-c', 'while true; do echo `date`: "remove random file" >> /var/log/cleaner/cleaner.log; sleep 1; done']
        volumeMounts:
        - name: logs
          mountPath: /var/log/cleaner
Answer
cp /opt/course/16/cleaner.yaml /opt/course/16/cleaner-new.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  name: cleaner
  namespace: mercury
spec:
  replicas: 2
  selector:
    matchLabels:
      id: cleaner
  template:
    metadata:
      labels:
        id: cleaner
    spec:
      volumes:
      - name: logs
        emptyDir: {}
      initContainers:
      - name: init
        image: bash:5.0.11
        command: ['bash', '-c', 'echo init > /var/log/cleaner/cleaner.log']
        volumeMounts:
        - name: logs
          mountPath: /var/log/cleaner
      - name: logger-con              # sidecar: init container with restartPolicy Always
        image: busybox:1.31.0
        restartPolicy: Always
        volumeMounts:
        - name: logs
          mountPath: /var/log/cleaner
        command:
        - sh
        - -c
        - tail -f /var/log/cleaner/cleaner.log
      containers:
      - name: cleaner-con
        image: bash:5.0.11
        args: ['bash', '-c', 'while true; do echo `date`: "remove random file" >> /var/log/cleaner/cleaner.log; sleep 1; done']
        volumeMounts:
        - name: logs
          mountPath: /var/log/cleaner
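Apply the change and read the sidecar's stdout, which should reveal details about the missing data incidents:
kubectl apply -f /opt/course/16/cleaner-new.yaml
kubectl logs -n mercury deployment/cleaner -c logger-con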
Question 17.
Last lunch you told your coworker from department Mars Inc how amazing InitContainers are. Now he would like to see one in action. There is a Deployment yaml at /opt/course/17/test-init-container.yaml. This Deployment spins up a single Pod of image nginx:1.17.3-alpine and serves files from a mounted volume, which is empty right now.
Create an InitContainer named init-con which also mounts that volume and creates a file index.html with content check this out! in the root of the mounted volume. For this test we ignore that it doesn't contain valid html.
The InitContainer should be using image busybox:1.31.0. Test your implementation for example using curl from a temporary nginx:alpine Pod.
- test-init-container.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-init-container
  namespace: mars
spec:
  replicas: 1
  selector:
    matchLabels:
      id: test-init-container
  template:
    metadata:
      labels:
        id: test-init-container
    spec:
      volumes:
      - name: web-content
        emptyDir: {}
      containers:
      - image: nginx:1.17.3-alpine
        name: nginx
        volumeMounts:
        - name: web-content
          mountPath: /usr/share/nginx/html
        ports:
        - containerPort: 80
Answer
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-init-container
  namespace: mars
spec:
  replicas: 1
  selector:
    matchLabels:
      id: test-init-container
  template:
    metadata:
      labels:
        id: test-init-container
    spec:
      volumes:
      - name: web-content
        emptyDir: {}
      initContainers:
      - name: init-con
        image: busybox:1.31.0
        volumeMounts:
        - name: web-content
          mountPath: /tmp/web-content
        command: ["sh", "-c", "echo check this out! > /tmp/web-content/index.html"]
      containers:
      - image: nginx:1.17.3-alpine
        name: nginx
        volumeMounts:
        - name: web-content
          mountPath: /usr/share/nginx/html
        ports:
        - containerPort: 80
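Apply the yaml and test with curl from a temporary Pod against the Pod IP (taken from -o wide); the response should be check this out!:
kubectl apply -f /opt/course/17/test-init-container.yaml
kubectl get pods -n mars -o wide | grep test-init-container
kubectl run tmp --restart=Never --rm -i --image nginx:alpine -- curl -s <Pod IP>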
Question 18.
There seems to be an issue in Namespace mars where the ClusterIP service manager-api-svc should make the Pods of Deployment manager-api-deployment available inside the cluster.
You can test this with curl manager-api-svc.mars:4444 from a temporary nginx:alpine Pod. Check for the misconfiguration and apply a fix.
Answer
The Service selector does not match the labels of the manager-api-deployment Pods, so it has to be corrected.
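A quick way to confirm this is to check whether the Service has any Endpoints, compare its selector with the Pod labels, and fix it in place (the exact label key/value has to be taken from the Pods):
kubectl get endpoints -n mars manager-api-svc        # empty if the selector does not match
kubectl get pods -n mars --show-labels | grep manager-api
kubectl edit service -n mars manager-api-svc         # set spec.selector to the Pod labels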
kubectl run tmp --image nginx:alpine --restart=Never --rm -i -- curl -m 5 manager-api-svc.mars.svc.cluster.local:4444
Question 19.
In Namespace jupiter you'll find an apache Deployment (with one replica) named jupiter-crew-deploy and a ClusterIP Service called jupiter-crew-svc which exposes it. Change this service to a NodePort one to make it available on all nodes on port 30100.
Test the NodePort Service using the internal IP of all available nodes and the port 30100 using curl, you can reach the internal node IPs directly from your main terminal. On which nodes is the Service reachable? On which node is the Pod running?
Answer
kubectl get svc -n jupiter
kubectl edit svc -n jupiter jupiter-crew-svc
apiVersion: v1
kind: Service
metadata:
  name: jupiter-crew-svc
  namespace: jupiter
  ...
spec:
  clusterIP: 10.3.245.70
  ports:
  - name: 8080-80
    port: 8080
    protocol: TCP
    targetPort: 80
    nodePort: 30100 # add the nodePort
  selector:
    id: jupiter-crew
  sessionAffinity: None
  #type: ClusterIP
  type: NodePort # change type
status:
  loadBalancer: {}
kubectl -n jupiter run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 jupiter-crew-svc:8080
kubectl get pods -n jupiter -o wide
kubectl get nodes -o wide
curl <node-internal-ip>:30100   # test against each node's INTERNAL-IP
Because kube-proxy serves a NodePort Service on every node, port 30100 answers on all nodes, while the single Pod of jupiter-crew-deploy runs on only one of them.
Question 20.
In Namespace venus you'll find two Deployments named api and frontend. Both Deployments are exposed inside the cluster using Services. Create a NetworkPolicy named np1 which restricts outgoing tcp connections from Deployment frontend and only allows those going to Deployment api. Make sure the NetworkPolicy still allows outgoing traffic on UDP/TCP ports 53 for DNS resolution.
Test using: wget www.google.com and wget api:2222 from a Pod of Deployment frontend.
Answer
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: np1
  namespace: venus
spec:
  podSelector:
    matchLabels:
      app: frontend
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: api
  - ports:
    - protocol: TCP
      port: 53
    - protocol: UDP
      port: 53
kubectl -n venus exec deployment/frontend -- wget -O- -T 2 www.google.com   # should time out, blocked by np1
kubectl -n venus exec deployment/frontend -- wget -O- api:2222              # should return the api response
Question 21.
Team Neptune needs 3 Pods of image httpd:2.4-alpine, create a Deployment named neptune-10ab for this. The containers should be named neptune-pod-10ab. Each container should have a memory request of 20Mi and a memory limit of 50Mi.
Team Neptune has its own ServiceAccount neptune-sa-v2 under which the Pods should run. The Deployment should be in Namespace neptune.
Answer
kubectl create deployment -n neptune neptune-10ab --image httpd:2.4-alpine --dry-run=client -o yaml > 21.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: neptune-10ab
  namespace: neptune
  labels:
    app: neptune-10ab
spec:
  replicas: 3
  selector:
    matchLabels:
      app: neptune-10ab
  template:
    metadata:
      labels:
        app: neptune-10ab
    spec:
      serviceAccountName: neptune-sa-v2
      containers:
      - name: neptune-pod-10ab
        image: httpd:2.4-alpine
        resources:
          requests:
            memory: 20Mi
          limits:
            memory: 50Mi
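Apply the manifest and verify that the Pods run under the requested ServiceAccount:
kubectl apply -f 21.yaml
kubectl get pods -n neptune -l app=neptune-10ab -o jsonpath='{.items[0].spec.serviceAccountName}'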
Question 22.
Team Sunny needs to identify some of their Pods in namespace sun. They ask you to add a new label protected: true to all Pods with an existing label type: worker or type: runner. Also add an annotation protected: do not delete this pod to all Pods having the new label protected: true.
Answer
kubectl get pods -n sun --show-labels
kubectl get pods -n sun -l type=runner
kubectl get pods -n sun -l type=worker
kubectl label pods -n sun -l type=runner protected=true
kubectl label pods -n sun -l type=worker protected=true
kubectl annotate pods -n sun -l protected=true protected="do not delete this pod"
kubectl get pods -n sun -l protected=true -o yaml | grep -A 8 metadata: