kubernetes
nfs
storage
csi
container-storage-interface
nginx

How To Set Up ReadWriteMany (RWX) Persistent Volumes with NFS on Kubernetes

Jun 22, 2020 by: Tarek Elsamni

Introduction

With the distributed and dynamic nature of containers, managing and configuring storage statically has become a difficult problem on Kubernetes, since workloads can now move from one Virtual Machine (VM) to another in a matter of seconds. To address this, Kubernetes manages volumes with a system of PersistentVolumes (PV), API objects that represent a storage configuration or provisioned volume, and PersistentVolumeClaims (PVC), requests for storage to be satisfied by a PersistentVolume. Additionally, Container Storage Interface (CSI) drivers can help automate and manage the handling and provisioning of storage for containerized workloads.

These drivers are responsible for provisioning, mounting, unmounting, removing, and snapshotting volumes.

The Network File System (NFS) protocol supports exporting the same share to many consumers. In Kubernetes terms, this access mode is called ReadWriteMany (RWX), because many nodes can mount the volume as read-write.
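
For reference, the sketch below shows how an NFS export maps onto a Kubernetes PersistentVolume with the RWX access mode. The server address and export path are placeholders, and you will not need to write a PersistentVolume by hand in this tutorial, since the volume will be provisioned dynamically for you:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-example
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteMany
  nfs:
    server: 10.0.0.10      # placeholder NFS server address
    path: /srv/nfs/share   # placeholder export path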

Prerequisites

Before you begin this guide you’ll need the following:

  • The kubectl command-line interface installed on your local machine. You can read more about installing and configuring kubectl in its official documentation.
  • A Kubernetes cluster with your connection configured as the kubectl default.
  • The Helm package manager installed on your local machine. Note: as of Helm 3.0, Tiller is no longer required; if you are using Helm 2, you will also need Tiller installed on your cluster. See the Helm installation documentation for instructions. You can verify your tooling with the commands shown after this list.
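
You can optionally confirm that kubectl is pointed at the intended cluster and that Helm is installed by running:

$ kubectl config current-context
$ kubectl version
$ helm version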

Step 1 — Setting Up the NFS Server

Follow this tutorial to set up an NFS server on Ubuntu 20.04.
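
Alternatively, you can run the NFS server inside the cluster with the nfs-server-provisioner Helm chart, which is what produces the nfs-server-nfs-server-provisioner-0 Pod and the nfs StorageClass referenced later in this tutorial. The repository URL and chart values below are a sketch based on the chart's current home in the kubernetes-sigs organization and may differ between versions, so check the chart's documentation before running them:

$ helm repo add nfs-ganesha-server-and-external-provisioner https://kubernetes-sigs.github.io/nfs-ganesha-server-and-external-provisioner/
$ helm install nfs-server nfs-ganesha-server-and-external-provisioner/nfs-server-provisioner \
    --set persistence.enabled=true,persistence.size=10Gi

With the release named nfs-server, the chart creates a Pod named nfs-server-nfs-server-provisioner-0 and, by default, a StorageClass named nfs that dynamically provisions NFS-backed volumes.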

Step 2 — Deploying an Application Using a Shared PersistentVolumeClaim

In this step, you will create an example deployment on your Kubernetes cluster in order to test your storage setup. This will be an Nginx web server app named web.

To deploy this application, first write a YAML file that defines the deployment and its PersistentVolumeClaim. Open up an nginx-test.yaml file with your text editor; this tutorial will use nano:

$ nano nginx-test.yaml

In this file, add the following lines to define the deployment with a PersistentVolumeClaim named nfs-data:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: web
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: web
    spec:
      containers:
      - image: nginx:latest
        name: nginx
        resources: {}
        volumeMounts:
        - mountPath: /data
          name: data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: nfs-data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-data
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  storageClassName: nfs

Save the file and exit the text editor.

This deployment is configured to use the accompanying PersistentVolumeClaim nfs-data and mount it at /data.

In the PVC definition, you will find that the storageClassName is set to nfs. This tells the cluster to satisfy the claim using the rules of the nfs StorageClass set up in the previous step. The PersistentVolumeClaim will be processed, and an NFS share will be provisioned to satisfy it in the form of a PersistentVolume. The Pod will attempt to mount the PVC once it has been provisioned, and once the mount completes you will verify the ReadWriteMany (RWX) functionality.

Run the deployment with the following command:

$ kubectl apply -f nginx-test.yaml

This will give the following output:

deployment.apps/web created
persistentvolumeclaim/nfs-data created
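
Behind the scenes, the cluster matches the nfs-data claim against the nfs StorageClass and provisions an NFS-backed volume for it. You can optionally watch the claim become Bound with:

$ kubectl get pvc nfs-data

Once the STATUS column shows Bound, the claim has been satisfied and the Pod can mount it.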

Next, check on the web Pod as it spins up:

$ kubectl get pods

This will output the following:

NAME                                                   READY   STATUS    RESTARTS   AGE
nfs-server-nfs-server-provisioner-0                    1/1     Running   0          23m
web-64965fc79f-b5v7w                                   1/1     Running   0          4m

Now that the example deployment is up and running, you can scale it out to three instances using the kubectl scale command:

$ kubectl scale deployment web --replicas=3

This will give the output:

deployment.extensions/web scaled

You now have three instances of your Nginx deployment connected to the same PersistentVolume. In the next step, you will make sure that they can share data with each other.
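
Optionally, you can check where the new replicas were scheduled. On a multi-node cluster they may land on different nodes, all mounting the same RWX volume:

$ kubectl get pods -o wide

The NODE column in the output shows which node each replica is running on.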

Step 3 — Validating NFS Data Sharing

For the final step, you will validate that the data is shared across all the instances that are mounted to the NFS share. To do this, you will create a file under the /data directory in one of the pods, then verify that the file exists in another pod’s /data directory.

To validate this, you will use the kubectl exec command, which lets you specify a Pod and run a command inside it.

To create a file named hello_world within one of your web Pods, use kubectl exec to run the touch command. Note that the hash suffix after web- in the Pod name will be different for you, so make sure to replace the example Pod name below with one of your own Pods from the kubectl get pods output in the last step.

$ kubectl exec web-64965fc79f-q9626 -- touch /data/hello_world

Next, change the name of the pod and use the ls command to list the files in the /data directory of a different pod:

$ kubectl exec web-64965fc79f-qgd2w -- ls /data

Your output will show the file you created within the first pod:

hello_world

This shows that all the pods share data using NFS and that your setup is working properly.
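
When you are done testing, you can remove the example deployment and its claim with kubectl delete. Keep in mind that, depending on the StorageClass reclaim policy, deleting the PVC may also delete the dynamically provisioned volume and the data on it:

$ kubectl delete -f nginx-test.yaml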

Conclusion

In this tutorial, you set up an NFS server that exports shares to your Kubernetes workloads with the ReadWriteMany (RWX) access mode. In doing this, you were able to work around a technical limitation of block storage and share the same PVC data across many pods and nodes.


