Killer.sh CKA Solutions
Question 1 |
You have access to multiple clusters from your main terminal through kubectl contexts. Write all those context names into /opt/course/1/contexts.
Next write a command to display the current context into /opt/course/1/context_default_kubectl.sh, the command should use kubectl.
Finally write a second command doing the same thing into /opt/course/1/context_default_no_kubectl.sh, but without the use of kubectl.
kubectl config get-contexts -o name > /opt/course/1/contexts
echo "kubectl config current-context" > /opt/course/1/context_default_kubectl.sh
echo 'cat ~/.kube/config | grep current | sed -e "s/current-context: //"' > /opt/course/1/context_default_no_kubectl.sh
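A quick sanity check (assuming the two script files were written as above) is to run them and compare, both should print the same context name:
sh /opt/course/1/context_default_kubectl.sh
sh /opt/course/1/context_default_no_kubectl.sh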
Question 2 |
Create a single Pod of image httpd:2.4.41-alpine in Namespace default. The Pod should be named pod1 and the container should be named pod1-container. This Pod should only be scheduled on controlplane nodes. Do not add new labels to any nodes.
kubectl describe node cluster1-controlplane1 | grep Taint -A1
kubectl get node cluster1-controlplane1 --show-labels
kubectl run pod1 --image httpd:2.4.41-alpine --dry-run=client -o yaml > 2.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
  - name: pod1-container
    image: httpd:2.4.41-alpine
  tolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/control-plane
  nodeSelector:
    node-role.kubernetes.io/control-plane: ""
kubectl apply -f 2.yaml
kubectl get pods -o wide
Question 3 |
There are two Pods named o3db-* in Namespace project-c13. C13 management asked you to scale the Pods down to one replica to save resources.
kubectl scale sts o3db -n project-c13 --replicas 1
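In the killer.sh environment the o3db Pods are owned by a StatefulSet, which is why scaling the sts works; a quick way to confirm the owning controller before scaling:
kubectl -n project-c13 get deploy,ds,sts | grep o3db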
Question 4 |
Do the following in Namespace default. Create a single Pod named ready-if-service-ready of image nginx:1.16.1-alpine. Configure a LivenessProbe which simply executes command true. Also configure a ReadinessProbe which does check if the url http://service-am-i-ready:80 is reachable, you can use wget -T2 -O- http://service-am-i-ready:80 for this. Start the Pod and confirm it isn't ready because of the ReadinessProbe.
Create a second Pod named am-i-ready of image nginx:1.16.1-alpine with label id: cross-server-ready. The already existing Service service-am-i-ready should now have that second Pod as endpoint.
Now the first Pod should be in ready state, confirm that.
kubectl run ready-if-service-ready --image nginx:1.16.1-alpine --dry-run=client -o yaml > 4.yaml
first pod
---
apiVersion: v1
kind: Pod
metadata:
  name: ready-if-service-ready
  namespace: default
spec:
  containers:
  - name: ready-if-service-ready
    image: nginx:1.16.1-alpine
    livenessProbe:
      exec:
        command:
        - 'true'
    readinessProbe:
      exec:
        command:
        - sh
        - -c
        - 'wget -T2 -O- http://service-am-i-ready:80'
kubectl apply -f 4.yaml
kubectl run am-i-ready --image nginx:1.16.1-alpine --labels id=cross-server-ready
---
apiVersion: v1
kind: Pod
metadata:
  name: am-i-ready
  labels:
    id: cross-server-ready
spec:
  containers:
  - name: am-i-ready
    image: nginx:1.16.1-alpine
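To confirm the readiness chain described in the question (the first Pod only becomes ready once the second Pod backs the Service), a minimal check could look like this:
kubectl get pod ready-if-service-ready        # READY 0/1 while the Service has no endpoints
kubectl get ep service-am-i-ready             # should now list the am-i-ready Pod IP
kubectl get pod ready-if-service-ready        # READY 1/1 shortly afterwards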
Question 5 |
There are various Pods in all namespaces. Write a command into /opt/course/5/find_pods.sh which lists all Pods sorted by their AGE (metadata.creationTimestamp).
Write a second command into /opt/course/5/find_pods_uid.sh which lists all Pods sorted by field metadata.uid. Use kubectl sorting for both commands.
echo "kubectl get pods -A --sort-by=.metadata.creationTimestamp" > /opt/course/5/find_pods.sh
echo "kubectl get pods -A --sort-by=.metadata.uid" > /opt/course/5/find_pods_uid.sh
Question 6 |
Create a new PersistentVolume named safari-pv. It should have a capacity of 2Gi, accessMode ReadWriteOnce, hostPath /Volumes/Data and no storageClassName defined.
Next create a new PersistentVolumeClaim in Namespace project-tiger named safari-pvc. It should request 2Gi storage, accessMode ReadWriteOnce and should not define a storageClassName. The PVC should bound to the PV correctly.
Finally create a new Deployment safari in Namespace project-tiger which mounts that volume at /tmp/safari-data. The Pods of that Deployment should be of image httpd:2.4.41-alpine.
safari-pv.yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: safari-pv
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 2Gi
  hostPath:
    path: "/Volumes/Data"
safari-pvc.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: safari-pvc
  namespace: project-tiger
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
deployment.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: safari
  namespace: project-tiger
  labels:
    app: safari
spec:
  replicas: 1
  selector:
    matchLabels:
      app: safari
  template:
    metadata:
      labels:
        app: safari
    spec:
      containers:
      - name: safari
        image: httpd:2.4.41-alpine
        volumeMounts:
        - name: safari-pvc
          mountPath: /tmp/safari-data
      volumes:
      - name: safari-pvc
        persistentVolumeClaim:
          claimName: safari-pvc
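A short verification that the PVC bound and the Deployment mounts the volume (a sketch of the usual checks):
kubectl get pv safari-pv
kubectl -n project-tiger get pvc safari-pvc
kubectl -n project-tiger describe pod -l app=safari | grep -A2 Mounts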
Question 7 |
The metrics-server has been installed in the cluster. Your colleague would like to know the kubectl commands to:
- show Nodes resource usage
- show Pods and their containers resource usage
Please write the commands into /opt/course/7/node.sh and /opt/course/7/pod.sh.
echo "kubectl top node" > /opt/course/7/node.sh
echo "kubectl top pod --containers=true" > /opt/course/7/pod.sh
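Running the two scripts is a simple sanity check that metrics-server actually answers (the numbers will differ per cluster):
sh /opt/course/7/node.sh
sh /opt/course/7/pod.sh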
Question 8 |
Ssh into the controlplane node with ssh cluster1-controlplane1. Check how the controlplane components kubelet, kube-apiserver, kube-scheduler, kube-controller-manager and etcd are started/installed on the controlplane node. Also find out the name of the DNS application and how it's started/installed on the controlplane node.
Write your findings into file /opt/course/8/controlplane-components.txt. The file should be structured like:
# /opt/course/8/controlplane-components.txt
kubelet: [TYPE]
kube-apiserver: [TYPE]
kube-scheduler: [TYPE]
kube-controller-manager: [TYPE]
etcd: [TYPE]
dns: [TYPE] [NAME]
Choices of [TYPE] are: not-installed, process, static-pod, pod
ssh cluster1-controlplane1
kubectl get pods -n kube-system
ps aux | grep kubelet
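The static-pod answers can additionally be confirmed by listing the kubelet's manifest directory (assuming the default kubeadm path):
find /etc/kubernetes/manifests/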
# /opt/course/8/controlplane-components.txt
kubelet: process
kube-apiserver: static-pod
kube-scheduler: static-pod
kube-controller-manager: static-pod
etcd: static-pod
dns: pod coredns
Question 9 |
Ssh into the controlplane node with ssh cluster2-controlplane1. Temporarily stop the kube-scheduler, this means in a way that you can start it again afterwards.
Create a single Pod named manual-schedule of image httpd:2.4-alpine, confirm it's created but not scheduled on any node.
Now you're the scheduler and have all its power, manually schedule that Pod on node cluster2-controlplane1. Make sure it's running.
Start the kube-scheduler again and confirm it's running correctly by creating a second Pod named manual-schedule2 of image httpd:2.4-alpine and check if it's running on cluster2-node1.
ssh cluster2-controlplane1
cd /etc/kubernetes/manifests
mv kube-scheduler.yaml ..
kubectl run manual-schedule --image httpd:2.4-alpine
kubectl get pods
kubectl get pods manual-schedule -o yaml > 9.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: manual-schedule
spec:
  containers:
  - name: manual-schedule
    image: httpd:2.4-alpine
  nodeName: cluster2-controlplane1
kubectl replace -f 9.yaml --force
mv ../kube-scheduler.yaml .
kubectl run manual-schedule2 --image httpd:2.4-alpine
kubectl get pods
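A minimal confirmation that the scheduler is back and placed the second Pod on a worker node:
kubectl get pod manual-schedule2 -o wide    # should be Running on cluster2-node1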
Question 10 | RBAC ServiceAccount Role RoleBinding
Create a new ServiceAccount processor in Namespace project-hamster. Create a Role and RoleBinding, both named processor as well. These should allow the new SA to only create Secrets and ConfigMaps in that Namespace.
kubectl create serviceaccount processor -n project-hamster
kubectl create role processor --verb=create --resource=cm,secret -n project-hamster
kubectl create rolebinding processor -n project-hamster --role processor --serviceaccount project-hamster:processor
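kubectl auth can-i gives a quick check of the new permissions; the expected answers are shown as comments:
kubectl auth can-i create secret -n project-hamster --as system:serviceaccount:project-hamster:processor      # yes
kubectl auth can-i create configmap -n project-hamster --as system:serviceaccount:project-hamster:processor   # yes
kubectl auth can-i delete secret -n project-hamster --as system:serviceaccount:project-hamster:processor      # no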
Question 11 | DaemonSet in all Nodes
Use Namespace project-tiger for the following. Create a DaemonSet named ds-important with image httpd:2.4-alpine and labels id=ds-important and uuid=18426a0b-5f59-4e10-923f-c0e078e82462. The Pods it creates should request 10 millicore cpu and 10 mebibyte memory. The Pods of that DaemonSet should run on all nodes, also controlplanes.
kubectl create deployment ds-important --image httpd:2.4-alpine --dry-run=client -o yaml > 11.yaml    # generate a skeleton, then edit it into a DaemonSet
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ds-important
  namespace: project-tiger
  labels:
    id: ds-important
    uuid: 18426a0b-5f59-4e10-923f-c0e078e82462
spec:
  selector:
    matchLabels:
      id: ds-important
      uuid: 18426a0b-5f59-4e10-923f-c0e078e82462
  template:
    metadata:
      labels:
        id: ds-important
        uuid: 18426a0b-5f59-4e10-923f-c0e078e82462
    spec:
      containers:
      - name: ds-important
        image: httpd:2.4-alpine
        resources:
          requests:
            cpu: 10m
            memory: 10Mi
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/control-plane
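To verify the DaemonSet schedules onto every node including the controlplane (node names depend on the cluster):
kubectl -n project-tiger get ds ds-important
kubectl -n project-tiger get pods -l id=ds-important -o wide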
Question 12 | Deployment on all Nodes
Use Namespace project-tiger for the following. Create a Deployment named deploy-important with label id=very-important (the Pods should also have this label) and 3 replicas. It should contain two containers, the first named container1 with image nginx:1.17.6-alpine and the second one named container2 with image google/pause.
There should be only ever one Pod of that Deployment running on one worker node. We have two worker nodes: cluster1-node1 and cluster1-node2. Because the Deployment has three replicas the result should be that on both nodes one Pod is running. The third Pod won't be scheduled, unless a new worker node will be added.
In a way we kind of simulate the behaviour of a DaemonSet here, but using a Deployment and a fixed number of replicas.
kubectl create deployment deploy-important --image nginx:1.17.6-alpine -n project-tiger --replicas 3 --dry-run=client -o yaml > 12.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-important
  namespace: project-tiger
  labels:
    id: very-important
spec:
  replicas: 3
  selector:
    matchLabels:
      id: very-important
  template:
    metadata:
      labels:
        id: very-important
    spec:
      containers:
      - name: container1
        image: nginx:1.17.6-alpine
      - name: container2
        image: google/pause
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: id
                operator: In
                values:
                - very-important
            topologyKey: kubernetes.io/hostname
Alternative solution: topologySpreadConstraints instead of podAntiAffinity achieve the same one-Pod-per-node behaviour:
---
apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
id: very-important # change
name: deploy-important
namespace: project-tiger # important
spec:
replicas: 3 # change
selector:
matchLabels:
id: very-important # change
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
id: very-important # change
spec:
containers:
- image: nginx:1.17.6-alpine
name: container1 # change
resources: {}
- image: google/pause # add
name: container2 # add
topologySpreadConstraints: # add
- maxSkew: 1 # add
topologyKey: kubernetes.io/hostname # add
whenUnsatisfiable: DoNotSchedule # add
labelSelector: # add
matchLabels: # add
id: very-important # add
status: {}
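With two worker nodes and three replicas, two Pods should end up Running and one Pending; a quick check:
kubectl -n project-tiger get deploy deploy-important
kubectl -n project-tiger get pods -l id=very-important -o wide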
Question 13 | Multi Containers and Pod shared Volume
Create a Pod named multi-container-playground in Namespace default with three containers, named c1, c2 and c3. There should be a volume attached to that Pod and mounted into every container, but the volume shouldn't be persisted or shared with other Pods.
Container c1 should be of image nginx:1.17.6-alpine and have the name of the node where its Pod is running available as environment variable MY_NODE_NAME.
Container c2 should be of image busybox:1.31.1 and write the output of the date command every second in the shared volume into file date.log. You can use while true; do date >> /your/vol/path/date.log; sleep 1; done for this.
Container c3 should be of image busybox:1.31.1 and constantly send the content of file date.log from the shared volume to stdout. You can use tail -f /your/vol/path/date.log for this.
Check the logs of container c3 to confirm correct setup.
---
apiVersion: v1
kind: Pod
metadata:
  name: multi-container-playground
  namespace: default
spec:
  containers:
  - name: c1
    image: nginx:1.17.6-alpine
    env:
    - name: MY_NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
    volumeMounts:
    - name: vol
      mountPath: /vol
  - name: c2
    image: busybox:1.31.1
    command: ["sh", "-c", "while true; do date >> /vol/date.log; sleep 1; done"]
    volumeMounts:
    - name: vol
      mountPath: /vol
  - name: c3
    image: busybox:1.31.1
    command: ["sh", "-c", "tail -f /vol/date.log"]
    volumeMounts:
    - name: vol
      mountPath: /vol
  volumes:
  - name: vol
    emptyDir: {}
kubectl exec multi-container-playground -c c1 -- env | grep MY_NODE_NAME
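Checking the logs of c3 confirms that c2 keeps appending to date.log on the shared volume:
kubectl logs multi-container-playground -c c3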
Question 14 | Find out Cluster Information
You're asked to find out the following information about the cluster k8s-c1-H:
- How many controlplane nodes are available?
- How many worker nodes are available?
- What is the Service CIDR?
- Which Networking (or CNI Plugin) is configured and where is its config file?
- Which suffix will static pods have that run on cluster1-node1?
Write your answers into file /opt/course/14/cluster-info, structured like this:
# /opt/course/14/cluster-info
1: [ANSWER]
2: [ANSWER]
3: [ANSWER]
4: [ANSWER]
5: [ANSWER]
kubectl get node
ssh cluster1-controlplane1
cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep range
find /etc/cni/net.d
cat /etc/cni/net.d/10-weave.conflist
-cluster1-node1
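The answer file then takes roughly this shape; values 1-3 come from the command output above, 4 and 5 follow from the Weave config file and the static Pod naming scheme:
# /opt/course/14/cluster-info
1: [number of controlplane nodes from kubectl get node]
2: [number of worker nodes from kubectl get node]
3: [the --service-cluster-ip-range value from kube-apiserver.yaml]
4: Weave, /etc/cni/net.d/10-weave.conflist
5: -cluster1-node1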
Question 15 | Cluster Event Logging
Write a command into /opt/course/15/cluster_events.sh which shows the latest events in the whole cluster, ordered by time (metadata.creationTimestamp). Use kubectl for it.
Now delete the kube-proxy Pod running on node cluster2-node1 and write the events this caused into /opt/course/15/pod_kill.log.
Finally kill the containerd container of the kube-proxy Pod on node cluster2-node1 and write the events into /opt/course/15/container_kill.log.
Do you notice differences in the events both actions caused?
echo "kubectl get events -A --sort-by=.metadata.creationTimestamp" > /opt/course/15/cluster_events.sh
kubectl delete pods -n kube-system kube-proxy-z64cg
kubectl get events -A --sort-by=.metadata.creationTimestamp > /opt/course/15/pod_kill.log
ssh cluster2-node1
crictl ps | grep kube-proxy
crictl stop <containerID>
crictl rm <containerID>
exit
kubectl get events -A --sort-by=.metadata.creationTimestamp > /opt/course/15/container_kill.log
Question 16 | Namespaces and Api Resources
Write the names of all namespaced Kubernetes resources (like Pod, Secret, ConfigMap...) into /opt/course/16/resources.txt.
Find the project-* Namespace with the highest number of Roles defined in it and write its name and amount of Roles into /opt/course/16/crowded-namespace.txt.
kubectl api-resources
kubectl api-resources -h
kubectl api-resources --namespaced -o name > /opt/course/16/resources.txt
kubectl get ns
kubectl get role -n project-c13 --no-headers | wc -l
kubectl get role -n project-c14 --no-headers | wc -l
kubectl get role -n project-hamster --no-headers | wc -l
kubectl get role -n project-snake --no-headers | wc -l
kubectl get role -n project-tiger --no-headers | wc -l
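The per-namespace counting can also be done in one loop (namespaces taken from the list above):
for ns in project-c13 project-c14 project-hamster project-snake project-tiger; do
  echo -n "$ns: "; kubectl -n $ns get role --no-headers | wc -l
done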
echo "project-c14 with 300 resources" > /opt/course/16/crowded-namespace.txt
Question 17 | Find Container of Pod and check info
In Namespace project-tiger create a Pod named tigers-reunite of image httpd:2.4.41-alpine with labels pod=container and container=pod. Find out on which node the Pod is scheduled. Ssh into that node and find the containerd container belonging to that Pod.
Using command crictl:
- Write the ID of the container and the info.runtimeType into /opt/course/17/pod-container.txt
- Write the logs of the container into /opt/course/17/pod-container.log
kubectl run -n project-tiger tigers-reunite --image httpd:2.4.41-alpine --labels pod=container,container=pod
kubectl get pods -n project-tiger -o wide | grep tigers-reunite
ssh cluster1-node2
crictl ps | grep tigers-reunite
crictl inspect <containerID> | grep runtimeType
exit
echo "<containerID> <runtimeType>" > /opt/course/17/pod-container.txt
ssh cluster1-node2 "crictl logs <containerID>" &> /opt/course/17/pod-container.log
Question 18 | Fix Kubelet
There seems to be an issue with the kubelet not running on cluster3-node1. Fix it and confirm that cluster has node cluster3-node1 available in Ready state afterwards. You should be able to schedule a Pod on cluster3-node1 afterwards.
Write the reason of the issue into /opt/course/18/reason.txt.
ssh cluster3-node1
ps aux | grep kubelet
systemctl status kubelet
which kubelet
cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
change the kubelet binary path in the ExecStart line to /usr/bin/kubelet
systemctl daemon-reload
systemctl restart kubelet
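The reason still has to be written out; something along these lines (exact wording is up to you):
echo "wrong path to the kubelet binary in the kubelet systemd service file (fixed to /usr/bin/kubelet)" > /opt/course/18/reason.txt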
Question 19 | Create Secret and mount into Pod
Do the following in a new Namespace secret. Create a Pod named secret-pod of image busybox:1.31.1 which should keep running for some time.
There is an existing Secret located at /opt/course/19/secret1.yaml, create it in the Namespace secret and mount it readonly into the Pod at /tmp/secret1.
Create a new Secret in Namespace secret called secret2 which should contain user=user1 and pass=1234. These entries should be available inside the Pod's container as environment variables APP_USER and APP_PASS.
Confirm everything is working.
kubectl create ns secret
vi /opt/course/19/secret1.yaml    # set namespace: secret
kubectl apply -f /opt/course/19/secret1.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-pod
  namespace: secret
spec:
  containers:
  - name: secret-pod
    image: busybox:1.31.1
    command: ["sh", "-c", "sleep 1d"]
    volumeMounts:
    - name: secret1
      mountPath: /tmp/secret1
      readOnly: true
    env:
    - name: APP_USER
      valueFrom:
        secretKeyRef:
          name: secret2
          key: user
    - name: APP_PASS
      valueFrom:
        secretKeyRef:
          name: secret2
          key: pass
  volumes:
  - name: secret1
    secret:
      secretName: secret1
kubectl create secret generic secret2 -n secret --from-literal user=user1 --from-literal pass=1234
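A short verification that both the mounted Secret and the environment variables are visible inside the container:
kubectl -n secret exec secret-pod -- ls /tmp/secret1
kubectl -n secret exec secret-pod -- env | grep APP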
Question 20 | Update Kubernetes Version and join cluster
Your coworker said node cluster3-node2 is running an older Kubernetes version and is not even part of the cluster. Update Kubernetes on that node to the exact version that's running on cluster3-controlplane1. Then add this node to the cluster. Use kubeadm for this.
kubectl get node
ssh cluster3-node2
kubeadm version
kubectl version --short
kubelet --version
apt update
apt show kubectl -a | grep 1.28
apt install kubectl=1.28.2-00 kubelet=1.28.2-00
kubelet --version
systemctl restart kubelet
exit
ssh cluster3-controlplane1
kubeadm token create --print-join-command
kubeadm token list
exit
ssh cluster3-node2
kubeadm join ...    # paste the full join command printed on cluster3-controlplane1
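Back on the main terminal the node should eventually report Ready (it can take a moment):
kubectl get node cluster3-node2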
Question 21 | Create a Static Pod and Service
Create a Static Pod named my-static-pod in Namespace default on cluster3-controlplane1. It should be of image nginx:1.16-alpine and have resource requests for 10m CPU and 20Mi memory.
Then create a NodePort Service named static-pod-service which exposes that static Pod on port 80 and check if it has Endpoints and if it's reachable through the cluster3-controlplane1 internal IP address. You can connect to the internal node IPs from your main terminal.
ssh cluster3-controlplane1
cd /etc/kubernetes/manifests
vi 21.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: my-static-pod
  labels:
    app: my-static-pod
spec:
  containers:
  - name: my-static-pod
    image: nginx:1.16-alpine
    resources:
      requests:
        cpu: 10m
        memory: 20Mi
kubectl expose pod my-static-pod-cluster3-controlplane1 --name static-pod-service --type NodePort --port 80
kubectl get ep
Question 22 | Check how long certificates are valid
Check how long the kube-apiserver server certificate is valid on cluster2-controlplane1. Do this with openssl or cfssl. Write the expiration date into /opt/course/22/expiration.
Also run the correct kubeadm command to list the expiration dates and confirm both methods show the same date.
Write the correct kubeadm command that would renew the apiserver server certificate into /opt/course/22/kubeadm-renew-certs.sh.
ssh cluster2-controlplane1
cd /etc/kubernetes/pki
ls | grep apiserver
openssl x509 -noout -text -in apiserver.crt | grep Validity -A2
kubeadm certs check-expiration | grep apiserver
echo "kubeadm certs renew apiserver" > /opt/course/22/kubeadm-renew-certs.sh
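The expiration date itself still has to land in /opt/course/22/expiration on the main terminal; one way is to note the notAfter value and write it out (the actual date will differ per cluster):
openssl x509 -noout -enddate -in apiserver.crt    # notAfter=<date>
exit
echo "<the notAfter date from above>" > /opt/course/22/expiration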
Question 23 | Kubelet client/server cert info
Node cluster2-node1 has been added to the cluster using kubeadm and TLS bootstrapping.
Find the "Issuer" and "Extended Key Usage" values of the cluster2-node1:
- kubelet client certificate, the one used for outgoing connections to the kube-apiserver.
- kubelet server certificate, the one used for incoming connections from the kube-apiserver.
Write the information into file /opt/course/23/certificate-info.txt.
Compare the "Issuer" and "Extended Key Usage" fields of both certificates and make sense of these.
ssh cluster2-node1
ps aux | grep kubelet
cat /etc/kubernetes/kubelet.conf
cd /var/lib/kubelet/pki
openssl x509 -noout -text -in /var/lib/kubelet/pki/kubelet-client-current.pem | grep Issuer
openssl x509 -noout -text -in /var/lib/kubelet/pki/kubelet-client-current.pem | grep "Extended Key Usage" -A1
openssl x509 -noout -text -in /var/lib/kubelet/pki/kubelet.crt | grep Issuer
openssl x509 -noout -text -in /var/lib/kubelet/pki/kubelet.crt | grep "Extended Key Usage" -A1
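Roughly what the two certificates should show (the exact strings come from the openssl output above): the client certificate is signed by the cluster CA and carries client-auth usage, the server certificate is signed by the kubelet's own CA and carries server-auth usage. Paste the real lines into the answer file:
# /opt/course/23/certificate-info.txt
Issuer: CN = kubernetes                                      # kubelet client certificate
X509v3 Extended Key Usage: TLS Web Client Authentication
Issuer: CN = cluster2-node1-ca@...                           # kubelet server certificate
X509v3 Extended Key Usage: TLS Web Server Authentication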
Question 24 | NetworkPolicy
There was a security incident where an intruder was able to access the whole cluster from a single hacked backend Pod.
To prevent this create a NetworkPolicy called np-backend in Namespace project-snake. It should allow the backend-* Pods only to:
- connect to db1-* Pods on port 1111
- connect to db2-* Pods on port 2222
Use the app label of Pods in your policy.
After implementation, connections from backend-* Pods to vault-* Pods on port 3333 should for example no longer work.
kubectl get pods -n project-snake
kubectl api-resources | grep netpol
kubectl get pods -n project-snake -L app
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: np-backend
  namespace: project-snake
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: db1
    ports:
    - protocol: TCP
      port: 1111
  - to:
    - podSelector:
        matchLabels:
          app: db2
    ports:
    - protocol: TCP
      port: 2222
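A hedged way to test the policy from one of the backend Pods (Pod names and IPs are placeholders, look them up with the get pods commands above):
kubectl -n project-snake exec <backend-pod> -- wget -O- -T2 <db1-pod-ip>:1111     # should still work
kubectl -n project-snake exec <backend-pod> -- wget -O- -T2 <vault-pod-ip>:3333   # should now time out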
Question 25 | Etcd Snapshot Save and Restore
Make a backup of etcd running on cluster3-controlplane1 and save it on the controlplane node at /tmp/etcd-backup.db.
Then create any kind of Pod in the cluster.
Finally restore the backup, confirm the cluster is still working and that the created Pod is no longer with us.
ssh cluster3-controlplane1
## etcd backup
cat /etc/kubernetes/manifests/etcd.yaml
ETCDCTL_API=3 etcdctl snapshot save /tmp/etcd-backup.db \
--cacert /etc/kubernetes/pki/etcd/ca.crt \
--cert /etc/kubernetes/pki/etcd/server.crt \
--key /etc/kubernetes/pki/etcd/server.key
kubectl run test --image nginx
kubectl get pods -l run=test -w
## etcd restore
cd /etc/kubernetes/manifests
mv * ..
watch crictl ps
ETCDCTL_API=3 etcdctl snapshot restore /tmp/etcd-backup.db \
--data-dir /var/lib/etcd-backup \
--cacert /etc/kubernetes/pki/etcd/ca.crt \
--cert /etc/kubernetes/pki/etcd/server.crt \
--key /etc/kubernetes/pki/etcd/server.key
vi /etc/kubernetes/etcd.yaml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
component: etcd
tier: control-plane
name: etcd
namespace: kube-system
spec:
...
- mountPath: /etc/kubernetes/pki/etcd
name: etcd-certs
hostNetwork: true
priorityClassName: system-cluster-critical
volumes:
- hostPath:
path: /etc/kubernetes/pki/etcd
type: DirectoryOrCreate
name: etcd-certs
- hostPath:
path: /var/lib/etcd-backup # change
type: DirectoryOrCreate
name: etcd-data
status: {}
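To finish the restore, the manifests that were moved away earlier have to come back into the manifests directory so kubelet restarts the controlplane against the restored data dir; afterwards the test Pod should be gone:
mv /etc/kubernetes/*.yaml /etc/kubernetes/manifests/
watch crictl ps
kubectl get pod -l run=test    # no resources found once the old state is restored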
Extra Question 1 | Find Pods first to be terminated
Check all available Pods in the Namespace project-c13 and find the names of those that would probably be terminated first if the nodes run out of resources (cpu or memory) to schedule all Pods. Write the Pod names into /opt/course/e1/pods-not-stable.txt.
Kubernetes assigns Quality of Service classes to Pods based on the defined resources and limits, read more here: https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod
kubectl get pods -n project-c13
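The QoS class per Pod shows which ones are BestEffort and therefore the first candidates for eviction; one way to list it (jsonpath field .status.qosClass):
kubectl get pods -n project-c13 -o jsonpath='{range .items[*]}{.metadata.name} {.status.qosClass}{"\n"}{end}'
The BestEffort Pod names then go into /opt/course/e1/pods-not-stable.txt.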
Preview Question 1
The cluster admin asked you to find out the following information about etcd running on cluster2-controlplane1:
- Server private key location
- Server certificate expiration date
- Is client certificate authentication enabled
Write this information into /opt/course/p1/etcd-info.txt.
Finally you're asked to save an etcd snapshot at /etc/etcd-snapshot.db on cluster2-controlplane1 and display its status.
kubectl get nodes
ssh cluster2-controlplane1
cd /etc/kubernetes/manifests
cat etcd.yaml
openssl x509 -text -noout -in /etc/kubernetes/pki/etcd/server.crt | grep Validity -A2
# /opt/course/p1/etcd-info.txt
Server private key location: /etc/kubernetes/pki/etcd/server.key
Server certificate expiration date: <the Not After date from the openssl output>
Is client certificate authentication enabled: yes (--client-cert-auth=true in etcd.yaml)
ETCDCTL_API=3 etcdctl snapshot save /etc/etcd-snapshot.db \
--cacert /etc/kubernetes/pki/etcd/ca.crt \
--cert /etc/kubernetes/pki/etcd/server.crt \
--key /etc/kubernetes/pki/etcd/server.key
ETCDCTL_API=3 etcdctl snapshot status /etc/etcd-snapshot.db
Preview Question 2
You're asked to confirm that kube-proxy is running correctly on all nodes. For this perform the following in Namespace project-hamster:
Create a new Pod named p2-pod with two containers, one of image nginx:1.21.3-alpine and one of image busybox:1.31. Make sure the busybox container keeps running for some time.
Create a new Service named p2-service which exposes that Pod internally in the cluster on port 3000->80.
Find the kube-proxy container on all nodes cluster1-controlplane1, cluster1-node1 and cluster1-node2 and make sure that it's using iptables. Use command crictl for this.
Write the iptables rules of all nodes belonging to the created Service p2-service into file /opt/course/p2/iptables.txt.
Finally delete the Service and confirm that the iptables rules are gone from all nodes.
kubectl get pods -n kube-system -o wide | grep kube-proxy
---
apiVersion: v1
kind: Pod
metadata:
  name: p2-pod
  namespace: project-hamster
  labels:
    run: p2-pod
spec:
  containers:
  - name: container1
    image: nginx:1.21.3-alpine
  - name: container2
    image: busybox:1.31
    command: ["sh", "-c", "sleep 1d"]
kubectl expose pod p2-pod -n project-hamster --name p2-service --port 3000 --target-port 80
ssh cluster1-controlplane1
crictl ps | grep kube-proxy
crictl logs <containerID>
ssh cluster1-controlplane1 iptables-save | grep p2-service > /opt/course/p2/iptables.txt
ssh cluster1-node1 iptables-save | grep p2-service >> /opt/course/p2/iptables.txt
ssh cluster1-node2 iptables-save | grep p2-service >> /opt/course/p2/iptables.txt
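The final step deletes the Service and confirms the rules disappeared (same grep on every node, now expected to return nothing):
kubectl -n project-hamster delete svc p2-service
ssh cluster1-controlplane1 iptables-save | grep p2-service
ssh cluster1-node1 iptables-save | grep p2-service
ssh cluster1-node2 iptables-save | grep p2-service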
Preview Question 3
Create a Pod named check-ip in Namespace default using image httpd:2.4.41-alpine. Expose it on port 80 as a ClusterIP Service named check-ip-service. Remember/output the IP of that Service.
Change the Service CIDR to 11.96.0.0/12 for the cluster.
Then create a second Service named check-ip-service2 pointing to the same Pod to check if your settings did take effect. Finally check if the IP of the first Service has changed.
kubectl run check-ip --image httpd:2.4.41-alpine
kubectl expose pod check-ip --name check-ip-service --port 80
kubectl get svc
cd /etc/kubernetes/manifests
vi kube-apiserver.yaml             # change --service-cluster-ip-range=11.96.0.0/12
vi kube-controller-manager.yaml    # change --service-cluster-ip-range=11.96.0.0/12
crictl ps | grep -E "kube-apiserver|kube-controller"    # wait until both static Pods have restarted
kubectl expose pod check-ip --name check-ip-service2 --port 80
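A final check (the exact IPs depend on the cluster): the new Service should get an address from 11.96.0.0/12, while the first Service typically keeps its originally assigned IP:
kubectl get svc check-ip-service check-ip-service2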