We have deployed a Pod. Inspect it and wait for it to reach the Running state.
$ kubectl get pods webapp -o yaml
apiVersion: v1
kind: Pod
metadata:
name: webapp
namespace: default
spec:
containers:
- image: kodekloud/event-simulator
name: event-simulator
env:
- name: LOG_HANDLERS
value: file
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-klz6d
readOnly: true
restartPolicy: Always
volumes:
- name: kube-api-access-klz6d
projected:
defaultMode: 420
sources:
- serviceAccountToken:
expirationSeconds: 3607
path: token
- configMap:
name: kube-root-ca.crt
items:
- key: ca.crt
path: ca.crt
- downwardAPI:
items:
- fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
path: namespace
$ kubectl describe pods webapp
Name: webapp
Namespace: default
Status: Running
IP: 10.244.0.4
Containers:
event-simulator:
Container ID: docker://7111346e4a1e6740d27be3e88833
Image: kodekloud/event-simulator
Image ID: docker-pullable://kodekloud/event-simulator
Port: <none>
Host Port: <none>
State: Running
Started: Mon, 08 Aug 2022 15:22:55 +0000
Ready: True
Restart Count: 0
Environment:
LOG_HANDLERS: file
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-klz6d (ro)
Volumes:
kube-api-access-klz6d:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
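Instead of re-running kubectl get pods until the status flips to Running, kubectl can block until the Pod reports Ready; a minimal sketch:
$ kubectl wait --for=condition=Ready pod/webapp --timeout=60s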
The webapp container stores its logs in a file named app.log in the /log directory. This means the data is lost whenever the Pod is deleted or the container is restarted. Configure a volume to store these logs at /var/log/webapp on the host.
apiVersion: v1
kind: Pod
metadata:
name: webapp
spec:
containers:
- name: event-simulator
image: kodekloud/event-simulator
env:
- name: LOG_HANDLERS
value: file
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-klz6d
readOnly: true
- name: webapp
mountPath: /log
imagePullPolicy: Always
nodeName: controlplane
restartPolicy: Always
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- name: kube-api-access-klz6d
projected:
defaultMode: 420
sources:
- serviceAccountToken:
expirationSeconds: 3607
path: token
- configMap:
items:
- key: ca.crt
path: ca.crt
name: kube-root-ca.crt
- downwardAPI:
items:
- fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
path: namespace
- name: webapp
hostPath:
path: /var/log/webapp
type: Directory
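A running Pod's volumes cannot be edited in place, so the Pod has to be recreated. Assuming the edited spec was saved to webapp.yaml (an illustrative file name), delete and recreate it in one step:
$ kubectl replace --force -f webapp.yaml
Note that hostPath type Directory expects /var/log/webapp to already exist on the node; DirectoryOrCreate would create it if missing.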
Create a Persistent Volume with the given specification.
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-log
spec:
storageClassName: ""
persistentVolumeReclaimPolicy: Retain
capacity:
storage: 100Mi
accessModes:
- ReadWriteMany
hostPath:
path: /pv/log
type: DirectoryOrCreate
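Save the spec and create the volume, then check that it comes up as Available (pv-log.yaml is an assumed file name):
$ kubectl apply -f pv-log.yaml
$ kubectl get pv pv-log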
Let us claim some of that storage for our application. Create a Persistent Volume Claim with the given specification.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: claim-log-1
spec:
storageClassName: ""
resources:
requests:
storage: 50Mi
accessModes:
- ReadWriteOnce
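Save the claim to a file and apply it; pvc.yaml is the same file name used when the claim is deleted later on:
$ kubectl apply -f pvc.yaml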
What is the state of the Persistent Volume Claim?
$ kubectl get pvc -o wide
NAME **STATUS** AGE VOLUMEMODE
claim-log-1 **Pending** 31s Filesystem
What is the state of the Persistent Volume?
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY **STATUS** CLAIM
pv-log 100Mi RWX Retain **Available**
Why is the claim not bound to the available Persistent Volume? Because the access modes do not match: the PV offers ReadWriteMany, while the claim requests ReadWriteOnce.
$ kubectl get PersistentVolume pv-log -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
annotations:
name: pv-log
spec:
**accessModes:
- ReadWriteMany**
capacity:
storage: 100Mi
hostPath:
path: /pv/log
type: DirectoryOrCreate
persistentVolumeReclaimPolicy: Retain
volumeMode: Filesystem
**status:
phase: Available**
$ kubectl get PersistentVolumeClaim claim-log-1 -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: claim-log-1
spec:
**accessModes:
- ReadWriteOnce**
resources:
requests:
storage: 50Mi
storageClassName: ""
volumeMode: Filesystem
**status:
phase: Pending**
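The two access-mode lists can also be compared directly with jsonpath instead of reading the full YAML dumps:
$ kubectl get pv pv-log -o jsonpath='{.spec.accessModes}'
$ kubectl get pvc claim-log-1 -o jsonpath='{.spec.accessModes}'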
Update the Access Mode on the claim to bind it to the PV. Delete and recreate claim-log-1.
$ kubectl delete -f pvc.yaml
persistentvolumeclaim "claim-log-1" deleted
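Before recreating the claim, change accessModes in pvc.yaml from ReadWriteOnce to ReadWriteMany, then apply the file again:
$ kubectl apply -f pvc.yaml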
$ kubectl get PersistentVolumeClaim claim-log-1 -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: claim-log-1
spec:
**accessModes:
- ReadWriteMany**
resources:
requests:
storage: 50Mi
storageClassName: ""
volumeMode: Filesystem
**status:
phase: Bound**
You requested 50Mi; how much capacity is now available to the PVC? 100Mi. A claim binds to an entire Persistent Volume, so the PVC gets the full capacity of pv-log even though it asked for less.
$ kubectl get pvc claim-log-1 -o wide
NAME **STATUS** **VOLUME** **CAPACITY** ACCESSMODES AGE VOLUMEMODE
claim-log-1 **Bound** **pv-log** **100Mi** RWX 68s Filesystem
Update the webapp pod to use the persistent volume claim as its storage. Replace the hostPath configured earlier with the newly created PersistentVolumeClaim.
$ kubectl get pvc
**NAME** STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
**claim-log-1** Bound pv-log 100Mi RWX 8m33s
apiVersion: v1
kind: Pod
metadata:
name: webapp
spec:
containers:
- name: event-simulator
image: kodekloud/event-simulator
(...)
volumeMounts:
- (...)
- name: webapp
mountPath: /log
volumes:
- (...)
- name: webapp
persistentVolumeClaim:
claimName: claim-log-1
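As before, the volume change means the Pod must be recreated (again assuming the spec file is webapp.yaml):
$ kubectl replace --force -f webapp.yaml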
What is the Reclaim Policy set on the Persistent Volume pv-log?
$ kubectl get pv
NAME CAPACITY ACCESSMODES **RECLAIMPOLICY** STATUS CLAIM
pv-log 100Mi RWX **Retain** Bound default/claim-log-1
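As an aside, the reclaim policy of an existing PV can be changed with a patch; this lab does not require it, and Delete would remove the underlying volume once released (Recycle, the third value, is deprecated):
$ kubectl patch pv pv-log -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'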
What would happen to the PV if the PVC was destroyed? With the Retain policy, the PV is not deleted, but it is not made available to other claims either.
Try deleting the PVC and notice what happens. The PVC is stuck in the Terminating state.
$ kubectl delete pvc claim-log-1
persistentvolumeclaim "claim-log-1" deleted
$ kubectl get pvc
NAME **STATUS** VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
claim-log-1 **Terminating** pv-log 100Mi RWX 12m
Why is the PVC stuck in the Terminating state? Because it is still being used by a Pod.
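You can confirm which Pod holds the claim: kubectl describe shows a Used By field, and it is the kubernetes.io/pvc-protection finalizer on the claim that blocks deletion while a Pod mounts it:
$ kubectl describe pvc claim-log-1 | grep 'Used By'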
Let us now delete the webapp Pod. Once deleted, wait for it to fully terminate.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
webapp 1/1 Running 0 9m14s
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
claim-log-1 Terminating pv-log 100Mi RWX 15m
$ kubectl delete pods webapp --force
pod "webapp" force deleted
$ kubectl get pvc
No resources found in default namespace.
What is the state of the Persistent Volume now?
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv-log 100Mi RWX Retain Released default/claim-log-1 38m
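The Released volume still carries a claimRef pointing at the deleted claim, so no new PVC can bind to it as-is. If you wanted to reuse it, one common approach is to clear that reference; the data under /pv/log is left intact:
$ kubectl patch pv pv-log -p '{"spec":{"claimRef":null}}'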