LINSTOR creates block devices that are accessible from only a single pod at a time (RWO, or ReadWriteOnce). However, it's possible to create an NFS server pod that shares a LINSTOR volume with many pods, indirectly enabling RWX (ReadWriteMany) support.
You need to have set up a LINSTOR® storageClass in Kubernetes that the NFS server pod can use for its persistent storage. Make sure you set the appropriate storageClass name in the following PVC definition that will be used by the NFS server pod (mine is linstor-csi-lvm-thin-r3). Also, set the size of the NFS server's volume accordingly:
cat << EOF > nfs-server-pv.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pv-provisioning-demo
  labels:
    demo: nfs-pv-provisioning
spec:
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 4Gi
  # some existing LINSTOR storageClass
  storageClassName: linstor-csi-lvm-thin-r3
EOF
kubectl create -f nfs-server-pv.yaml
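If you want to verify the claim before continuing, check its status. Depending on your storageClass's volume binding mode, it will show Bound (Immediate) or remain Pending (WaitForFirstConsumer) until the NFS server pod below mounts it:
kubectl get pvc nfs-pv-provisioning-demo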
Then, create the nfs-server pod (controlled by a ReplicationController):
cat << EOF > nfs-server-rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-server
spec:
  replicas: 1
  selector:
    role: nfs-server
  template:
    metadata:
      labels:
        role: nfs-server
    spec:
      containers:
      - name: nfs-server
        image: k8s.gcr.io/volume-nfs:0.8
        ports:
        - name: nfs
          containerPort: 2049
        - name: mountd
          containerPort: 20048
        - name: rpcbind
          containerPort: 111
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /exports
          name: mypvc
      volumes:
      - name: mypvc
        persistentVolumeClaim:
          claimName: nfs-pv-provisioning-demo
EOF
kubectl create -f nfs-server-rc.yaml
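As a quick sanity check, you can confirm the NFS server pod is Running and that its export directory is backed by the LINSTOR volume (the label matches the ReplicationController's selector; the jsonpath query just grabs the pod's name):
kubectl get pods -l role=nfs-server
kubectl exec $(kubectl get pods -l role=nfs-server -o jsonpath='{.items[0].metadata.name}') -- df -h /exports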
Create the service for the NFS server, then store the service's ClusterIP in an environment variable:
cat << EOF > nfs-server-service.yaml
kind: Service
apiVersion: v1
metadata:
  name: nfs-server
spec:
  ports:
    - name: nfs
      port: 2049
    - name: mountd
      port: 20048
    - name: rpcbind
      port: 111
  selector:
    role: nfs-server
EOF
kubectl create -f nfs-server-service.yaml
NFSIP=$(kubectl describe services nfs-server | grep ^IP\: | awk '{print $2}')
echo $NFSIP
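If your kubectl's describe output is formatted differently and the grep above comes back empty, an equivalent way to capture the ClusterIP is a JSONPath query:
NFSIP=$(kubectl get service nfs-server -o jsonpath='{.spec.clusterIP}')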
You now have an NFS server running in your cluster, exporting a filesystem backed by LINSTOR. You could mount this NFS share manually within your application's pods, but it's more likely you'll want to consume the share as a persistent volume. Using a PV adds a layer of indirection to the shared filesystem, so you won't have to hard-code the NFS server's IP into your pods.
Create the PV and PVC for your applications to consume the shared NFS filesystem. If you're copy/pasting these commands exactly as they're written, $NFSIP will be expanded and written into the definition when the file is created. If you're not following along like that, just be sure to replace $NFSIP with the correct value for your cluster. Also, the storage size here can be set to anything less than the size you gave the NFS server's volume:
cat << EOF > nfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    server: $NFSIP
    path: "/"
  mountOptions:
    - nfsvers=4.2
EOF
kubectl create -f nfs-pv.yaml
cat << EOF > nfs-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 1Mi
EOF
kubectl create -f nfs-pvc.yaml
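Both objects should show a STATUS of Bound once the PVC has matched the PV:
kubectl get pv nfs
kubectl get pvc nfs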
You can now consume the "nfs" PVC from many pods. Also, you can use the same NFS server pod/PV to host more than one share, but you'll have to create a directory for each share under the NFS server's /exports/ directory, and a separate PV/PVC for the indirection of each, as sketched below.
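As a rough sketch of that idea (the directory name share2 and the object names are just examples, and this assumes $NFSIP is still set from earlier), you could create a second directory inside the running NFS server pod and then point a second PV/PVC at it:
kubectl exec $(kubectl get pods -l role=nfs-server -o jsonpath='{.items[0].metadata.name}') -- mkdir -p /exports/share2
cat << EOF > nfs-share2-pv-pvc.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-share2
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    server: $NFSIP
    path: "/share2"
  mountOptions:
    - nfsvers=4.2
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-share2
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 1Mi
EOF
kubectl create -f nfs-share2-pv-pvc.yaml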
A simple test of the ReadWriteMany PV/PVC created above is to run 2 (or more) pods that update the NFS share/PVC while 2 (or more) other pods read data from the NFS share/PVC:
First, create the back-end test pods that update the NFS share/PVC with dummy data (the date and hostname):
cat << EOF > nfs-busybox-backend.yaml
# This mounts the nfs volume claim into /mnt and continuously
# overwrites /mnt/index.html with the time and hostname of the pod.
apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-busybox
spec:
  replicas: 2
  selector:
    name: nfs-busybox
  template:
    metadata:
      labels:
        name: nfs-busybox
    spec:
      containers:
      - image: busybox
        command:
          - sh
          - -c
          # the backslash escapes keep the heredoc from expanding RANDOM locally;
          # the pod's shell evaluates it on every loop instead
          - 'while true; do date > /mnt/index.html; hostname >> /mnt/index.html; sleep \$((\$RANDOM % 5 + 5)); done'
        imagePullPolicy: IfNotPresent
        name: busybox
        volumeMounts:
          # name must match the volume name below
          - name: nfs
            mountPath: "/mnt"
      volumes:
      - name: nfs
        persistentVolumeClaim:
          claimName: nfs
EOF
kubectl create -f nfs-busybox-backend.yaml
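To confirm the writers are working, check that both busybox pods are Running and peek at the file they keep overwriting (the jsonpath query picks the first pod):
kubectl get pods -l name=nfs-busybox
kubectl exec $(kubectl get pods -l name=nfs-busybox -o jsonpath='{.items[0].metadata.name}') -- cat /mnt/index.html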
Next, create the front-end test pods and a service for reading the dummy data from the NFS share/PVC:
cat << EOF > nfs-nginx-frontend.yaml
# This pod mounts the nfs volume claim into /usr/share/nginx/html and
# serves a simple web page.
apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-web
spec:
  replicas: 2
  selector:
    role: web-frontend
  template:
    metadata:
      labels:
        role: web-frontend
    spec:
      containers:
      - name: web
        image: nginx
        ports:
          - name: web
            containerPort: 80
        volumeMounts:
          # name must match the volume name below
          - name: nfs
            mountPath: "/usr/share/nginx/html"
      volumes:
      - name: nfs
        persistentVolumeClaim:
          claimName: nfs
EOF
kubectl create -f nfs-nginx-frontend.yaml
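Again, a quick check that both web front-end pods came up:
kubectl get pods -l role=web-frontend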
cat << EOF > nfs-frontend-service.yaml
kind: Service
apiVersion: v1
metadata:
  name: nfs-web
spec:
  ports:
    - port: 80
  selector:
    role: web-frontend
EOF
kubectl create -f nfs-frontend-service.yaml
You can now, from a pod or a Kubernetes cluster node, access the front end to see the updates/changes:
WWWIP=$(kubectl describe services nfs-web | grep ^IP\: | awk '{print $2}')
echo $WWWIP
watch -d -n10 curl http://$WWWIP
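If you'd rather test from inside the cluster, the service is also reachable by its DNS name (assuming cluster DNS is in place); for example, a throwaway busybox pod can fetch the page once and then clean itself up:
kubectl run nfs-web-test --image=busybox --rm -it --restart=Never -- wget -qO- http://nfs-web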
Reviewed 2021/12/08 – MDK