1. PV and PVC

PV

PV: PersistentVolume

A PV is storage that Kubernetes has abstracted as a cluster resource. In practice it is backed by real storage: a local disk, a network file system (NFS), LVM, RAID, object storage (e.g. Ceph), or cloud storage.

PVC

PVC: PersistentVolumeClaim

A PVC is a user's request for storage. It declares how much space is needed and which access mode is required.

Once a PVC is created, Kubernetes matches it against the available PVs. When a match is found the two are bound, and after a successful binding the pod can use the PV's storage.

1.1 Static: the PV is created manually

1.2 Dynamic: the PV is created automatically


1.3 The PV and PVC lifecycle

1. Provisioning (define the PVC request in detail) ------- match a PV ------ bind ------- use -------- release -------- reclaim the PV

A PV can be in one of four states:

1. Available: unbound and ready to be matched by a PVC.

2. Bound: the PV has been bound to a PVC and is in use.

3. Released: the PVC has been deleted, but the PV's resources have not yet been reclaimed; the PV is not usable in this state.

4. Failed: automatic reclamation of the PV failed; the PV is unusable.
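These state transitions can be watched live while claims are created and deleted; the commands below are standard kubectl usage against a running cluster:

```shell
# Watch PV phase changes (Available -> Bound -> Released) as PVCs come and go
kubectl get pv -w

# A compact view showing only name and phase
kubectl get pv -o custom-columns=NAME:.metadata.name,PHASE:.status.phase
```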

1.4 Access modes a PVC can request:

ReadWriteOnce (RWO): the volume is readable and writable, but can only be mounted read-write by a single node.

ReadOnlyMany (ROX): the volume can be mounted read-only by many pods.

ReadWriteMany (RWX): the volume can be mounted read-write by many pods.

NFS supports all three modes.

hostPath supports only ReadWriteOnce.

Cloud storage (object storage) can additionally support dynamic expansion and shrinking.

iSCSI does not support ReadWriteMany.

iSCSI is a network storage technology that runs the SCSI protocol over an IP network.

[root@master01 k8s-yaml]# lsscsi
[0:0:0:0] disk VMware, VMware Virtual S 1.0 /dev/sda
[2:0:0:0] cd/dvd NECVMWar VMware IDE CDR10 1.00 /dev/sr0

1.5 PV reclaim policies:

Retain:

After the PVC is deleted the PV stays in the Released state. Even once it is returned to Available, the data from the previous mount is preserved.

Delete:

After the PVC is deleted the PV stays in the Released state; when the PV is reclaimed and returned to Available, all of the data is deleted.

Recycle:

After the PVC is deleted the PV enters the Released state, then automatically scrubs the volume (deletes the data) and returns itself to Available.
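The reclaim policy of an existing PV can also be changed in place without re-editing and re-applying the YAML file; `kubectl patch` with a merge patch is the standard way (the PV name here matches the examples below):

```shell
# Switch pv002's reclaim policy to Retain without touching the manifest file
kubectl patch pv pv002 -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
```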

The shared file system we use below is NFS.

Configure the NFS exports
[root@k8s5 ~]# vim /etc/exports

/data/v1 192.168.168.0/24(rw,no_root_squash)
/data/v2 192.168.168.0/24(rw,no_root_squash)
/data/v3 192.168.168.0/24(rw,no_root_squash)
[root@k8s5 /]# rm -rf data
[root@k8s5 /]# mkdir -p /data/v{1,2,3}
[root@k8s5 /]# cd data
[root@k8s5 data]# ls
v1  v2  v3
[root@k8s5 data]# systemctl restart rpcbind
[root@k8s5 data]# systemctl restart nfs
[root@k8s5 data]# showmount -e
Export list for k8s5:
/data/v3 192.168.168.0/24
/data/v2 192.168.168.0/24
/data/v1 192.168.168.0/24
[root@master01 k8s-yaml]# showmount -e 192.168.168.85
Export list for 192.168.168.85:
/data/v3 192.168.168.0/24
/data/v2 192.168.168.0/24
/data/v1 192.168.168.0/24
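Beyond `showmount`, it can be worth verifying that a node can actually mount the export before creating PVs on top of it (a quick manual sketch; /mnt is an arbitrary mount point):

```shell
mount -t nfs 192.168.168.85:/data/v1 /mnt   # mount the export
df -h /mnt                                  # confirm it shows up as an NFS mount
umount /mnt                                 # clean up
```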



PV definition
[root@master01 k8s-yaml]# kubectl explain pv
KIND:     PersistentVolume




[root@master01 k8s-yaml]# vim pv.yaml

#Define three PVs: the directory path, access mode, and size of each
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
  labels:
    name: pv001
spec:
  nfs:
#The file system backing this PV
    server: 192.168.168.85
    path: /data/v1
#A claim matched to this PV uses /data/v1 on the NFS host
  accessModes: ["ReadWriteOnce"]
#Access mode
  capacity:
    storage: 1Gi
#Units: Mi, Gi, Ti
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv002
  labels:
    name: pv002
spec:
  nfs:
    server: 192.168.168.85
    path: /data/v2
  accessModes: ["ReadWriteMany"]
  capacity:
    storage: 2Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv003
  labels:
    name: pv003
spec:
  nfs:
    server: 192.168.168.85
    path: /data/v3
  accessModes: ["ReadOnlyMany"]
  capacity:
    storage: 3Gi




[root@master01 k8s-yaml]# kubectl apply -f pv.yaml 
persistentvolume/pv001 created
persistentvolume/pv002 created
persistentvolume/pv003 created

[root@master01 k8s-yaml]# kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM           STORAGECLASS   REASON   AGE
pv001   1Gi        RWO            Retain           Available                                           49m
pv002   2Gi        RWX            Retain           Bound       default/mypvc                           49m
pv003   3Gi        ROX            Retain           Available   




PVC definition
[root@master01 k8s-yaml]# kubectl explain pvc
KIND:     PersistentVolumeClaim


[root@master01 k8s-yaml]# vim pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 2Gi
#Request a PV with ReadWriteMany access and 2Gi of space
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx1
  labels:
    app: nginx1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx1
  template:
    metadata:
      labels:
        app: nginx1
    spec:
      containers:
      - name: nginx1
        image: nginx:1.22
        volumeMounts:
        - name: xy102
          mountPath: /usr/share/nginx/html/
      volumes:
      - name: xy102
        persistentVolumeClaim:
#Mount the container directory onto the PV matched by this claim.
          claimName: mypvc

[root@master01 k8s-yaml]# kubectl apply -f pvc.yaml 
persistentvolumeclaim/mypvc unchanged
deployment.apps/nginx1 created
[root@master01 k8s-yaml]# kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM           STORAGECLASS   REASON   AGE
pv001   1Gi        RWO            Retain           Available                                           23m
pv002   2Gi        RWX            Retain           Bound       default/mypvc                           23m
pv003   3Gi        ROX            Retain           Available                                           23m

[root@master01 k8s-yaml]# kubectl exec -it nginx1-7fd846678-jfs42 bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@nginx1-7fd846678-jfs42:/# cd /usr/share/nginx/
root@nginx1-7fd846678-jfs42:/usr/share/nginx# ls
html
root@nginx1-7fd846678-jfs42:/usr/share/nginx# cd html/
root@nginx1-7fd846678-jfs42:/usr/share/nginx/html# ls
index.html
root@nginx1-7fd846678-jfs42:/usr/share/nginx/html# cat index.html 
123


[root@master01 k8s-yaml]# kubectl expose deployment nginx1 --port=80 --target-port=80 --type=NodePort
service/nginx1 exposed            ##expose the deployment via a NodePort service
[root@master01 k8s-yaml]# kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        9d
nginx1       NodePort    10.96.38.241   <none>        80:30972/TCP   21s
[root@master01 k8s-yaml]# curl 192.168.168.81:30972
123


[root@k8s5 v2]# ls
index.html
[root@k8s5 v2]# rm -rf *
[root@k8s5 v2]# ls
[root@k8s5 v2]# echo 456 > index.html
[root@master01 k8s-yaml]# curl 192.168.168.81:30972
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.22.1</center>
</body>
</html>
[root@master01 k8s-yaml]# curl 192.168.168.81:30972
456



1. Tear down and reclaim with the default policy, Retain

[root@master01 k8s-yaml]# kubectl delete -f pvc.yaml 
persistentvolumeclaim "mypvc" deleted
deployment.apps "nginx1" deleted
[root@master01 k8s-yaml]# kubectl get pod -o wide
No resources found in default namespace.
[root@master01 k8s-yaml]# kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM           STORAGECLASS   REASON   AGE
pv001   1Gi        RWO            Retain           Available                                           55m
pv002   2Gi        RWX            Retain           Released    default/mypvc                           55m
pv003   3Gi        ROX            Retain           Available    

[root@master01 k8s-yaml]# kubectl edit pv pv002
claimRef:                 ##delete this claimRef block in the editor
    nfs:

persistentvolume/pv002 edited


[root@master01 k8s-yaml]# kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv001   1Gi        RWO            Retain           Available                                   56m
pv002   2Gi        RWX            Retain           Available                                   56m
pv003   3Gi        ROX            Retain           Available                                   56m

[root@k8s5 v2]# ls
index.html

The file on the NFS host still exists.
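Deleting the stale claimRef through `kubectl edit` works, but the same cleanup can be done in one non-interactive command with a JSON patch (same PV name as above):

```shell
# Drop the claimRef so the Released PV returns to Available
kubectl patch pv pv002 --type=json -p '[{"op":"remove","path":"/spec/claimRef"}]'
```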

2. Reclaim policy: Delete

[root@master01 k8s-yaml]# vim pv.yaml 

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv002
  labels:
    name: pv002
spec:
  nfs:
    server: 192.168.168.85
    path: /data/v2
  accessModes: ["ReadWriteMany"]
  persistentVolumeReclaimPolicy: Delete
  capacity:
    storage: 2Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv003
  labels:
    name: pv003
spec:
  nfs:
    server: 192.168.168.85
    path: /data/v3
  accessModes: ["ReadOnlyMany"]
  capacity:
    storage: 3Gi

[root@master01 k8s-yaml]# kubectl apply -f pv.yaml 
persistentvolume/pv001 unchanged
persistentvolume/pv002 configured
persistentvolume/pv003 unchanged

[root@master01 k8s-yaml]# kubectl apply -f pvc.yaml 
persistentvolumeclaim/mypvc created
deployment.apps/nginx1 created


[root@master01 k8s-yaml]# kubectl get pod -o wide


[root@master01 k8s-yaml]# kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM           STORAGECLASS   REASON   AGE
pv001   1Gi        RWO            Retain           Available                                           146m
pv002   2Gi        RWX            Delete           Bound       default/mypvc                           146m
pv003   3Gi        ROX            Retain           Available                                           146m

[root@master01 k8s-yaml]# curl 192.168.168.81:30972
123
[root@k8s5 v2]# echo 456 > index.html
[root@master01 k8s-yaml]# curl 192.168.168.81:30972
456
[root@master01 k8s-yaml]# kubectl delete -f pvc.yaml 
persistentvolumeclaim "mypvc" deleted
deployment.apps "nginx1" deleted
[root@master01 k8s-yaml]# kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM           STORAGECLASS   REASON   AGE
pv001   1Gi        RWO            Retain           Available                                           150m
pv002   2Gi        RWX            Delete           Failed      default/mypvc                           150m
pv003   3Gi        ROX            Retain           Available                                           150m
[root@master01 k8s-yaml]# kubectl edit pv pv002
claimRef:                 ##delete this claimRef block in the editor
    nfs:


persistentvolume/pv002 edited

[root@master01 k8s-yaml]# kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv001   1Gi        RWO            Retain           Available                                   151m
pv002   2Gi        RWX            Delete           Available                                   151m
pv003   3Gi        ROX            Retain           Available                                   151m


[root@k8s5 v2]# cat index.html 
456

The file on the host is still there; it was not deleted. The in-tree NFS plugin cannot actually perform the Delete reclamation, which is why the PV went to the Failed state.

3. Reclaim policy: Recycle

#Define three PVs: the directory path, access mode, and size of each
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
  labels:
    name: pv001
spec:
  nfs:
#The file system backing this PV
    server: 192.168.168.85
    path: /data/v1
#A claim matched to this PV uses /data/v1 on the NFS host
  accessModes: ["ReadWriteOnce"]
#Access mode
  capacity:
    storage: 1Gi
#Units: Mi, Gi, Ti
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv002
  labels:
    name: pv002
spec:
  nfs:
    server: 192.168.168.85
    path: /data/v2
  accessModes: ["ReadWriteMany"]
  persistentVolumeReclaimPolicy: Recycle
  capacity:
    storage: 2Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv003
  labels:
    name: pv003
spec:
  nfs:
    server: 192.168.168.85
    path: /data/v3
  accessModes: ["ReadOnlyMany"]
  capacity:
    storage: 3Gi
    
[root@master01 k8s-yaml]# kubectl apply -f pv.yaml 
persistentvolume/pv001 unchanged
persistentvolume/pv002 configured
persistentvolume/pv003 unchanged

[root@master01 k8s-yaml]# kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv001   1Gi        RWO            Retain           Available                                   154m
pv002   2Gi        RWX            Recycle          Available                                   154m
pv003   3Gi        ROX            Retain           Available                                   154m

[root@master01 k8s-yaml]# kubectl apply -f pvc.yaml 
persistentvolumeclaim/mypvc created
deployment.apps/nginx1 created
[root@master01 k8s-yaml]# kubectl get pod -o wide

[root@master01 k8s-yaml]# kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM           STORAGECLASS   REASON   AGE
pv001   1Gi        RWO            Retain           Available                                           155m
pv002   2Gi        RWX            Recycle          Bound       default/mypvc                           155m
pv003   3Gi        ROX            Retain           Available                                           155m

[root@master01 k8s-yaml]# curl 192.168.168.81:30972
456
[root@k8s5 v2]# echo 789 > index.html
[root@master01 k8s-yaml]# curl 192.168.168.81:30972
789

[root@master01 k8s-yaml]# kubectl delete -f pvc.yaml 
[root@master01 k8s-yaml]# kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv001   1Gi        RWO            Retain           Available                                   91m
pv002   2Gi        RWX            Recycle          Available                                   91m
pv003   3Gi        ROX            Retain           Available                                   91m

[root@k8s5 v2]# ls
[root@k8s5 v2]# 

The host files were reclaimed automatically along with the workload, and pv002's binding was released back to Available.

2. Dynamic PV

No PVs are created by hand; a PV is created automatically from the PVC request, then mounted and used.

Kubernetes creates dynamic PVs through a StorageClass, which effectively serves as a PV template.

StorageClass + NFS: dynamically create NFS-backed PVs.

Kubernetes itself does not support dynamic PV creation on NFS; an external plugin is used.

Provisioner: the storage allocator, which automatically creates PVs on the configured NFS share.

In Kubernetes, the provisioner is what creates the dynamic PVs, configured together with a StorageClass.

The provisioner holds the NFS connection details and the PVC carries the request; the StorageClass uses this configuration to create the PV.

pod ----------> provisioner ----------> storageclass ------> pv

Provisioners:

nfs-client: cooperates with NFS network shares.

aws-ebs: cooperates with AWS EBS dynamic volumes.

local-storage: creates PVs on the local disks of Kubernetes nodes, generally used for internal testing.

external-storage: cooperates with object storage on cloud platforms.

[root@k8s5 v2]# vim /etc/exports

/data/v1 192.168.168.0/24(rw,no_root_squash)
/data/v2 192.168.168.0/24(rw,no_root_squash)
/data/v3 192.168.168.0/24(rw,no_root_squash)
/opt/k8s 192.168.168.0/24(rw,no_root_squash)
[root@k8s5 v2]# systemctl restart rpcbind
[root@k8s5 v2]# systemctl restart nfs
[root@k8s5 v2]# showmount -e
Export list for k8s5:
/opt/k8s 192.168.168.0/24
/data/v3 192.168.168.0/24
/data/v2 192.168.168.0/24
/data/v1 192.168.168.0/24
[root@k8s5 opt]# mkdir k8s
[root@k8s5 opt]# chmod 777 k8s/
[root@master01 k8s-yaml]# showmount -e 192.168.168.85
Export list for 192.168.168.85:
/opt/k8s 192.168.168.0/24
/data/v3 192.168.168.0/24
/data/v2 192.168.168.0/24
/data/v1 192.168.168.0/24


2.1 Dynamic PVC workflow

1. Create a ServiceAccount for the nfs-client provisioner.

2. Define a cluster role with the required permissions.

3. Bind the permissions to the ServiceAccount.

4. Create the NFS provisioner as a Deployment, declaring: the storage endpoint (the server providing the NFS service), the storage path (the shared directory), and the mount point.

5. Create a StorageClass as the PV template and associate it with the NFS provisioner.

6. Create the PVC and a test workload pod.

Steps 1-3: create the ServiceAccount for the nfs-client provisioner, define the cluster role, grant the permissions, and bind them to the ServiceAccount.

[root@master01 ~]# kubectl explain clusterrole
KIND:     ClusterRole
VERSION:  rbac.authorization.k8s.io/v1

[root@master01 k8s-yaml]# vim nfs-client-rbac.yaml

#Create the service account:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner

---
#Grant permissions
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-client-provisioner-clusterrole
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]
#These rules let the nfs provisioner create, delete, and update PVs, watch PVCs for changes, and keep endpoint/mount information up to date
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nfs-provisioner-clusterrolebinging
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-clusterrole
  apiGroup: rbac.authorization.k8s.io

[root@master01 k8s-yaml]#  kubectl apply -f nfs-client-rbac.yaml 
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-clusterrole created
clusterrolebinding.rbac.authorization.k8s.io/nfs-provisioner-clusterrolebinging created


Steps 1-3 above are now complete.

Re-enable the SelfLink field (this provisioner depends on it; on Kubernetes v1.20+ it must be turned back on with a feature gate):
[root@master01 k8s-yaml]# vim /etc/kubernetes/manifests/kube-apiserver.yaml 

spec:
  containers:
  - command:
    - kube-apiserver
    - --feature-gates=RemoveSelfLink=false
[root@master01 k8s-yaml]# kubectl apply -f /etc/kubernetes/manifests/kube-apiserver.yaml
pod/kube-apiserver created

[root@master01 k8s-yaml]# kubectl get pod -n kube-system 
[root@master01 k8s-yaml]# kubectl delete pod -n kube-system kube-apiserver
pod "kube-apiserver" deleted

[root@master01 k8s-yaml]# kubectl get pod -n kube-system 
NAME                               READY   STATUS    RESTARTS   AGE
coredns-7f89b7bc75-6z2pg           1/1     Running   4          9d
coredns-7f89b7bc75-lg4gw           1/1     Running   4          9d
etcd-master01                      1/1     Running   4          9d
kube-apiserver-master01            1/1     Running   0          2m36s
kube-controller-manager-master01   1/1     Running   6          9d
kube-flannel-ds-48rnt              1/1     Running   5          9d
kube-flannel-ds-wphvj              1/1     Running   7          9d
kube-proxy-d5fnf                   1/1     Running   4          9d
kube-proxy-kpvs2                   1/1     Running   5          9d
kube-proxy-nrszf                   1/1     Running   5          9d
kube-scheduler-master01            1/1     Running   5          9d


4. Create the provisioner pod as a Deployment, declaring the NFS server (storage endpoint), the exported path (shared directory), and the mount point.
[root@master01 k8s-yaml]# vim nfs-client-provisioner.yaml

#Provisioner pod: declares the NFS path and the mount point
#The ServiceAccount created above is used by this pod
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs1
  strategy:
    type: Recreate
#Recreate: every upgrade or update stops all old pods before starting new ones,
#which causes a brief service interruption
  template:
    metadata:
      labels:
        app: nfs1
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
      - name: nfs1
        image: quay.io/external_storage/nfs-client-provisioner:latest
        volumeMounts:
        - name: nfs1
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: nfs-storage
#Name of this provisioner (the StorageClass refers to it)
        - name: NFS_SERVER
          value: 192.168.168.85
#NFS server IP, set as an env var inside the container
        - name: NFS_PATH
          value: /opt/k8s
#Exported NFS directory to bind
      volumes:
#Declare the nfs volume type:
      - name: nfs1
        nfs:
          server: 192.168.168.85
          path: /opt/k8s
          
[root@master01 k8s-yaml]# kubectl apply -f nfs-client-provisioner.yaml 
deployment.apps/nfs1 created

[root@master01 k8s-yaml]# kubectl get pod -o wide
NAME                   READY   STATUS    RESTARTS   AGE     IP             NODE     NOMINATED NODE   READINESS GATES
nfs1-76f66b958-8ffpz   1/1     Running   0          2m19s   10.244.1.217   node01   <none>           <none>
5. Create the StorageClass as the PV template and associate it with the NFS provisioner.
[root@master01 k8s-yaml]# vim nfs-client-storageclass.yaml

#Define the template
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client-storageclass
provisioner: nfs-storage
parameters:
  archiveOnDelete: "false"
#"true": archive on delete; the backing directory is kept (renamed with an archived- prefix), so the data survives but is no longer bound to new PVCs
#"false": when the PVC is deleted the PV goes to Released and then back to Available
reclaimPolicy: Retain
#PV reclaim policy
allowVolumeExpansion: true
#must be true for the PVs to support dynamic expansion



[root@master01 k8s-yaml]# kubectl apply -f nfs-client-storageclass.yaml 
storageclass.storage.k8s.io/nfs-client-storageclass created
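Optionally, the new StorageClass can be marked as the cluster default so that PVCs which omit storageClassName still use it; this relies on the standard is-default-class annotation:

```shell
# Mark this StorageClass as the cluster default
kubectl patch storageclass nfs-client-storageclass \
  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
```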


[root@master01 k8s-yaml]# vim nfs-client-storageclass.yaml

#Define the template
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client-storageclass
provisioner: nfs-storage
parameters:
  archiveOnDelete: "false"
#"true": archive on delete; the backing directory is kept (renamed with an archived- prefix), so the data survives but is no longer bound to new PVCs
#"false": when the PVC is deleted the PV goes to Released and then back to Available
reclaimPolicy: Delete
#PV reclaim policy
allowVolumeExpansion: true
#must be true for the PVs to support dynamic expansion





6. Create the PVC and a test workload pod.

[root@master01 k8s-yaml]# vim pvc-pod.yaml


#Define the PVC request:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-mypvc
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: nfs-client-storageclass
  resources:
    requests:
      storage: 2Gi
#Request a PV with ReadWriteMany access and 2Gi of space
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx1
  labels:
    app: nginx1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx1
  template:
    metadata:
      labels:
        app: nginx1
    spec:
      containers:
      - name: nginx1
        image: nginx:1.22
        volumeMounts:
        - name: xy102
          mountPath: /usr/share/nginx/html/
      volumes:
      - name: xy102
#Mount the container directory onto the PV matched by this claim.
        persistentVolumeClaim:
          claimName: nfs-mypvc

[root@master01 k8s-yaml]# kubectl apply -f pvc-pod.yaml 
persistentvolumeclaim/nfs-mypvc created
deployment.apps/nginx1 created
[root@master01 k8s-yaml]# kubectl get pod -o wide
NAME                      READY   STATUS    RESTARTS   AGE     IP             NODE     NOMINATED NODE   READINESS GATES
nfs1-76f66b958-7g8jt      1/1     Running   0          7m31s   10.244.2.165   node02   <none>           <none>
nginx1-74ddb78f7d-l5xjf   1/1     Running   0          5s      10.244.1.219   node01   <none>           <none>
nginx1-74ddb78f7d-rhmz5   1/1     Running   0          5s      10.244.2.168   node02   <none>           <none>
nginx1-74ddb78f7d-zntjd   1/1     Running   0          5s      10.244.2.169   node02   <none>           <none>

[root@master01 k8s-yaml]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM               STORAGECLASS              REASON   AGE
pvc-0c83315d-d923-4bf9-9144-91595e5ed0fe   2Gi        ROX            Retain           Released   default/nfs-mypvc   nfs-client-storageclass            5m38s
pvc-463c89ac-5120-4f4a-ba18-86454382ba20   2Gi        ROX            Retain           Released   default/nfs-mypvc   nfs-client-storageclass            5m48s
pvc-7894a5a0-73e0-493b-a502-f610f7d33968   2Gi        RWX            Retain           Bound      default/nfs-mypvc   nfs-client-storageclass            44s
[root@master01 k8s-yaml]# kubectl get pvc
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS              AGE
nfs-mypvc   Bound    pvc-7894a5a0-73e0-493b-a502-f610f7d33968   2Gi        RWX            nfs-client-storageclass   68s


[root@master01 k8s-yaml]# kubectl logs -f nfs1-76f66b958-8ffpz   ##tail the provisioner logs


[root@k8s5 k8s]# cd default-nfs-mypvc-pvc-7894a5a0-73e0-493b-a502-f610f7d33968
[root@k8s5 default-nfs-mypvc-pvc-7894a5a0-73e0-493b-a502-f610f7d33968]# ls
[root@k8s5 default-nfs-mypvc-pvc-7894a5a0-73e0-493b-a502-f610f7d33968]# echo 789 > index.html
[root@k8s5 default-nfs-mypvc-pvc-7894a5a0-73e0-493b-a502-f610f7d33968]# ls
index.html

[root@master01 k8s-yaml]# curl 192.168.168.81:30972
789

[root@master01 k8s-yaml]# vim nfs-client-storageclass.yaml

#Define the template
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client-storageclass
provisioner: nfs-storage
parameters:
  archiveOnDelete: "false"
#"true": archive on delete; the backing directory is kept (renamed with an archived- prefix), so the data survives but is no longer bound to new PVCs
#"false": when the PVC is deleted the PV goes to Released and then back to Available
reclaimPolicy: Delete
#PV reclaim policy
allowVolumeExpansion: true
#must be true for the PVs to support dynamic expansion

[root@master01 k8s-yaml]# vim nfs-client-storageclass.yaml
[root@master01 k8s-yaml]# kubectl delete -f pvc-pod.yaml 
persistentvolumeclaim "nfs-mypvc" deleted
deployment.apps "nginx1" deleted
[root@master01 k8s-yaml]# kubectl get pod -o wide
NAME                   READY   STATUS    RESTARTS   AGE   IP             NODE     NOMINATED NODE   READINESS GATES
nfs1-76f66b958-7g8jt   1/1     Running   0          21m   10.244.2.165   node02   <none>           <none>
[root@master01 k8s-yaml]# kubectl apply -f nfs-client-storageclass.yaml
The StorageClass "nfs-client-storageclass" is invalid: reclaimPolicy: Forbidden: updates to reclaimPolicy are forbidden.
[root@master01 k8s-yaml]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM               STORAGECLASS              REASON   AGE
pvc-0c83315d-d923-4bf9-9144-91595e5ed0fe   2Gi        ROX            Retain           Released   default/nfs-mypvc   nfs-client-storageclass            22m
pvc-463c89ac-5120-4f4a-ba18-86454382ba20   2Gi        ROX            Retain           Released   default/nfs-mypvc   nfs-client-storageclass            23m
pvc-7894a5a0-73e0-493b-a502-f610f7d33968   2Gi        RWX            Retain           Released   default/nfs-mypvc   nfs-client-storageclass            17m
[root@master01 k8s-yaml]# kubectl get pvc
No resources found in default namespace.
[root@master01 k8s-yaml]# kubectl delete pv pvc-7894a5a0-73e0-493b-a502-f610f7d33968 
persistentvolume "pvc-7894a5a0-73e0-493b-a502-f610f7d33968" deleted
[root@master01 k8s-yaml]# kubectl delete pv pvc-463c89ac-5120-4f4a-ba18-86454382ba20 
persistentvolume "pvc-463c89ac-5120-4f4a-ba18-86454382ba20" deleted
[root@master01 k8s-yaml]# kubectl delete pv pvc-0c83315d-d923-4bf9-9144-91595e5ed0fe 
persistentvolume "pvc-0c83315d-d923-4bf9-9144-91595e5ed0fe" deleted
[root@master01 k8s-yaml]# kubectl get pvc
No resources found in default namespace.
[root@master01 k8s-yaml]# kubectl get pv
No resources found



[root@k8s5 k8s]# ll
总用量 0
drwxrwxrwx. 2 root root 24 9月   5 16:31 default-nfs-mypvc-pvc-0c83315d-d923-4bf9-9144-91595e5ed0fe
drwxrwxrwx. 2 root root 24 9月   5 16:32 default-nfs-mypvc-pvc-463c89ac-5120-4f4a-ba18-86454382ba20
drwxrwxrwx. 2 root root 24 9月   5 16:33 default-nfs-mypvc-pvc-7894a5a0-73e0-493b-a502-f610f7d33968


[root@master01 k8s-yaml]# kubectl apply -f nfs-client-storageclass.yaml
The StorageClass "nfs-client-storageclass" is invalid: reclaimPolicy: Forbidden: updates to reclaimPolicy are forbidden.
[root@master01 k8s-yaml]# kubectl apply -f nfs-client-storageclass.yaml --force
storageclass.storage.k8s.io/nfs-client-storageclass configured
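The `--force` flag appears to succeed here because reclaimPolicy is immutable on a StorageClass, so kubectl falls back to deleting and re-creating the object. The explicit equivalent, spelled out step by step, is:

```shell
# Remove the old StorageClass, then re-create it from the updated manifest
kubectl delete storageclass nfs-client-storageclass
kubectl apply -f nfs-client-storageclass.yaml
```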

[root@master01 k8s-yaml]# kubectl apply -f pvc-pod.yaml 
persistentvolumeclaim/nfs-mypvc created
deployment.apps/nginx1 created
[root@master01 k8s-yaml]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS              REASON   AGE
pvc-92a53a5e-ea41-4a46-af36-48ec9bc4900b   2Gi        RWX            Delete           Bound    default/nfs-mypvc   nfs-client-storageclass            5s



drwxrwxrwx. 2 root root  6 9月   5 16:47 default-nfs-mypvc-pvc-92a53a5e-ea41-4a46-af36-48ec9bc4900b
[root@k8s5 k8s]# cd default-nfs-mypvc-pvc-92a53a5e-ea41-4a46-af36-48ec9bc4900b
[root@k8s5 default-nfs-mypvc-pvc-92a53a5e-ea41-4a46-af36-48ec9bc4900b]# echo 777 > index.html
[root@k8s5 default-nfs-mypvc-pvc-92a53a5e-ea41-4a46-af36-48ec9bc4900b]# ls
index.html

[root@master01 k8s-yaml]# kubectl get pvc
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS              AGE
nfs-mypvc   Bound    pvc-92a53a5e-ea41-4a46-af36-48ec9bc4900b   2Gi        RWX            nfs-client-storageclass   41s
[root@master01 k8s-yaml]# curl 192.168.168.81:30972
777



[root@master01 k8s-yaml]# vim nfs-client-storageclass.yaml

#Define the template
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client-storageclass
provisioner: nfs-storage
parameters:
  archiveOnDelete: "false"
#"true": archive on delete; the backing directory is kept (renamed with an archived- prefix), so the data survives but is no longer bound to new PVCs
#"false": when the PVC is deleted the PV goes to Released and then back to Available
reclaimPolicy: Recycle
#PV reclaim policy
allowVolumeExpansion: true
#must be true for the PVs to support dynamic expansion

[root@master01 k8s-yaml]# kubectl apply -f nfs-client-storageclass.yaml
The StorageClass "nfs-client-storageclass" is invalid: 
* reclaimPolicy: Unsupported value: "Recycle": supported values: "Delete", "Retain"
* reclaimPolicy: Forbidden: updates to reclaimPolicy are forbidden.

[root@master01 k8s-yaml]# kubectl apply -f nfs-client-storageclass.yaml --force
The StorageClass "nfs-client-storageclass" is invalid: reclaimPolicy: Unsupported value: "Recycle": supported values: "Delete", "Retain"
* reclaimPolicy: Forbidden: updates to reclaimPolicy are forbidden.
[root@master01 k8s-yaml]# kubectl apply -f nfs-client-storageclass.yaml --force
The StorageClass "nfs-client-storageclass" is invalid: reclaimPolicy: Unsupported value: "Recycle": supported values: "Delete", "Retain"
[root@master01 k8s-yaml]# kubectl delete -f pvc-pod.yaml 
persistentvolumeclaim "nfs-mypvc" deleted
deployment.apps "nginx1" deleted

[root@k8s5 k8s]# ll
总用量 0
drwxrwxrwx. 2 root root 24 9月   5 16:31 default-nfs-mypvc-pvc-0c83315d-d923-4bf9-9144-91595e5ed0fe
drwxrwxrwx. 2 root root 24 9月   5 16:32 default-nfs-mypvc-pvc-463c89ac-5120-4f4a-ba18-86454382ba20
drwxrwxrwx. 2 root root 24 9月   5 16:33 default-nfs-mypvc-pvc-7894a5a0-73e0-493b-a502-f610f7d33968
drwxrwxrwx. 2 root root  6 9月   5 16:47 default-nfs-mypvc-pvc-92a53a5e-ea41-4a46-af36-48ec9bc4900b
[root@k8s5 k8s]# cd default-nfs-mypvc-pvc-92a53a5e-ea41-4a46-af36-48ec9bc4900b
[root@k8s5 default-nfs-mypvc-pvc-92a53a5e-ea41-4a46-af36-48ec9bc4900b]# echo 777 > index.html
[root@k8s5 default-nfs-mypvc-pvc-92a53a5e-ea41-4a46-af36-48ec9bc4900b]# ls
index.html
[root@k8s5 default-nfs-mypvc-pvc-92a53a5e-ea41-4a46-af36-48ec9bc4900b]# cd ..
[root@k8s5 k8s]# ll
总用量 0
drwxrwxrwx. 2 root root 24 9月   5 16:31 default-nfs-mypvc-pvc-0c83315d-d923-4bf9-9144-91595e5ed0fe
drwxrwxrwx. 2 root root 24 9月   5 16:32 default-nfs-mypvc-pvc-463c89ac-5120-4f4a-ba18-86454382ba20
drwxrwxrwx. 2 root root 24 9月   5 16:33 default-nfs-mypvc-pvc-7894a5a0-73e0-493b-a502-f610f7d33968


Because the PV's reclaim policy is Delete, the dynamically provisioned directory default-nfs-mypvc-pvc-92a53a5e-ea41-4a46-af36-48ec9bc4900b was deleted from the host along with the PV.

2.2 Summary

When defining a StorageClass, the only safe reclaim policy for preserving data is Retain: with Delete, once the PVC is deleted the mount directory is deleted too and the data is gone.

Recycle cannot be used as a dynamic reclaim policy at all.

7. Delete removes the PVC, the backing mount directory is deleted, and the data disappears.

8. Dynamic provisioning does not support Recycle.
