Quick Start with YAML files

Start kadalu operator

kubectl apply -f https://raw.githubusercontent.com/kadalu/kadalu/devel/manifests/kadalu-operator.yaml

Different flavors of the operator yaml file are available, mainly for Kubernetes (default), OpenShift, RKE, and MicroK8s. Find the relevant file from here. Just remember to use the raw file as the argument to kubectl.
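
For example, to deploy on MicroK8s, apply the flavor-specific manifest instead. The filename below is an assumption based on the flavor name; confirm it against the manifests directory:

# Assumed filename; check the kadalu manifests directory for the exact name
kubectl apply -f https://raw.githubusercontent.com/kadalu/kadalu/devel/manifests/kadalu-operator-microk8s.yaml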

Note: OpenShift may have some Security Context Constraints which can be applied only by admins. Run oc login -u system:admin to log in as admin.

Prepare and start storage exports

Identify the devices available on the nodes and run the following command to find the node names; both are needed in the storage config that adds the storage to kadalu.

$ kubectl get nodes
kube1 # <-- note this hostname; it needs to be given in the storage config below.
...
...
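
To see which block devices on a node are free to use, one option is to list them directly on that node. This assumes shell access to the node; the device names in this guide are only examples:

# Run on the node (e.g. over ssh); devices without a filesystem and mount point are candidates
lsblk -f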

NOTE: if your host is running the RHEL/CentOS 7.x series, or Ubuntu/Debian releases older than 18.04, you may need to perform the tasks below before adding storage to kadalu.

# On CentOS7.x/Ubuntu-16.04
sudo wipefs -a -t dos -f /dev/sdc
sudo mkfs.xfs /dev/sdc

Once the device is ready, inform the kadalu operator that it can be used as storage.

# File: sample-storage-config.yaml
---
apiVersion: kadalu-operator.storage/v1alpha1
kind: KadaluStorage
metadata:
  # This will be used as name of PV Hosting Volume
  name: storage-pool-1
spec:
  type: Replica1
  storage:
    - node: kube1
      device: /dev/sdc
  options: {}

Apply the config using,

$ kubectl apply -f sample-storage-config.yaml

Other than device, kadalu also supports path and pvc as storage options.
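
For instance, a storage entry backed by a directory path or by an existing PVC could look like the sketch below (layout assumed from the device example above; the names are placeholders):

  # Sketch: alternative storage entries in the same spec (names are placeholders)
  storage:
    # A directory on the node which is already formatted and mounted
    - node: kube1
      path: /exports/sdc1/data
    # An existing PersistentVolumeClaim used as backing storage
    - pvc: local-pvc-1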

kadalu supports Replica1, Replica2 and Replica3 types, which map to the glusterfs volume replication feature. It also supports External as a type, where the gluster volume is managed externally and kadalu is used only to provide CSI access.
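
A config for an externally managed gluster volume might look like the sketch below (the field names under details are assumptions; check the kadalu documentation on external storage for the exact spec):

# File: sample-external-storage.yaml (sketch; host and volume names are placeholders)
---
apiVersion: kadalu-operator.storage/v1alpha1
kind: KadaluStorage
metadata:
  name: ext-pool-1
spec:
  type: External
  details:
    gluster_host: gluster1.example.com
    gluster_volname: gvol1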

Check status

The operator will start the storage export pods as required. With that, in just two steps, your storage system is up and running.

Check the status of Pods using,

$ kubectl get pods -n kadalu
NAME                             READY   STATUS    RESTARTS   AGE
server-storage-pool-1-kube1-0    1/1     Running   0          84s
csi-attacher-0                   2/2     Running   0          30m
csi-nodeplugin-5hfms             2/2     Running   0          30m
csi-nodeplugin-924cc             2/2     Running   0          30m
csi-nodeplugin-cbjl9             2/2     Running   0          30m
csi-provisioner-0                3/3     Running   0          30m
operator-6dfb65dcdd-r664t        1/1     Running   0          30m
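
The operator also registers the storage classes that PVCs refer to in the next section. List them using the command below; you should see entries such as kadalu.replica1 (the exact set depends on the kadalu version):

$ kubectl get storageclass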

CSI to claim Persistent Volumes (PVC/PV)

Now we are ready to create Persistent volumes and use them in application Pods.

# File: sample-pvc.yaml
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pv1
spec:
  storageClassName: kadalu.replica1
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

Create PVC using,

$ kubectl apply -f sample-pvc.yaml
persistentvolumeclaim/pv1 created

and check the status of PVC using,

$ kubectl get pvc
NAME   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
pv1    Bound    pvc-8cbe80f1-428f-11e9-b31e-525400f59aef   1Gi        RWO            kadalu.replica1  42s
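
The PersistentVolume that was provisioned automatically for this claim (the VOLUME name in the output above) can be inspected with,

$ kubectl get pv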

Now, this PVC is ready to be consumed in your application pod. A sample usage of the PVC in an application pod is shown below:

# File: sample-app.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  labels:
    app: sample-app
spec:
  containers:
  - name: sample-app
    image: docker.io/kadalu/sample-pv-check-app:latest
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - mountPath: "/mnt/pv"
      name: csivol
  volumes:
  - name: csivol
    persistentVolumeClaim:
      claimName: pv1
  restartPolicy: OnFailure

$ kubectl apply -f sample-app.yaml
pod/pod1 created
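
Once the pod is running, one way to confirm that the PV is mounted inside it (the mount path is taken from the pod spec above):

$ kubectl get pod pod1
$ kubectl exec pod1 -- df -h /mnt/pv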