
Get dirty with Kubernetes Persistent Volume Claims on AWS

So you want to persist your containers' data no matter which availability zone your node is running in. If you provision EBS volumes manually via the AWS CLI, you risk a zone mismatch, because an EBS volume can only be attached to nodes in its own availability zone. Instead, you can rely on Kubernetes' built-in AWS EBS provisioner: with a StorageClass and a PersistentVolumeClaim it creates the volume and schedules your pod in the same availability zone.

You will need three things in your cluster:

  • StorageClass
  • Persistent Volume Claim
  • Container Spec

StorageClass

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ebs-ssd-eu-west-2a
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2              # general purpose SSD
  zone: eu-west-2a       # pin provisioned volumes to this availability zone
  encrypted: 'false'
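
To try it out, apply the manifest and confirm the class is registered (the file name below is just an example):

kubectl apply -f storageclass.yaml
kubectl get storageclass ebs-ssd-eu-west-2a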

Persistent Volume Claim

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-ssd-eu-west-2a
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ebs-ssd-eu-west-2a
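
Once the claim is applied, the built-in provisioner should create a matching EBS volume in eu-west-2a and bind it to the claim. You can check with kubectl (again, the file name is just an example):

kubectl apply -f pvc.yaml
kubectl get pvc ebs-ssd-eu-west-2a
kubectl get pv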

Container Spec

spec:
  containers:
  - name: app                  # container name and image are up to you
    image: your-image:latest
    volumeMounts:
    - mountPath: /someDir
      name: ebs-ssd-eu-west-2a
  volumes:
  - name: ebs-ssd-eu-west-2a
    persistentVolumeClaim:
      claimName: ebs-ssd-eu-west-2a
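
For context, here is the same wiring as a complete Pod manifest; the pod name and the nginx image are placeholders, not part of the original setup:

apiVersion: v1
kind: Pod
metadata:
  name: ebs-demo
  namespace: default
spec:
  containers:
  - name: app
    image: nginx             # placeholder image
    volumeMounts:
    - mountPath: /someDir    # the EBS volume appears here inside the container
      name: ebs-ssd-eu-west-2a
  volumes:
  - name: ebs-ssd-eu-west-2a
    persistentVolumeClaim:
      claimName: ebs-ssd-eu-west-2a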

That's it. Roll out with kubectl or helm as usual. You can then copy files into your new persistent directory with kubectl cp sourceFile your-pod:/someDir.
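
For example, assuming the hypothetical ebs-demo pod from the sketch above, copying a file and verifying it landed could look like this:

kubectl cp ./sourceFile ebs-demo:/someDir/sourceFile
kubectl exec ebs-demo -- ls -l /someDir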

Be aware that EBS-backed volumes only support the ReadWriteOnce access mode on AWS: the volume can be attached to a single node at a time, so in practice it serves one pod. To get ReadWriteMany you will need a network file system such as EFS or another distributed storage solution for Kubernetes on AWS.

Sources: https://kubernetes.io/docs/concepts/storage/persistent-volumes/
