We believe the success of a project depends on how easy it is for a developer or a user to try out the project and get the hang of it. As part of that, we have decided the overall install process of the storage system should be a two-step process.

In the initial versions, we would like to keep the second step manual, i.e., the admin has to provide the details of the storage. Later, we can enhance it to pick the storage based on tags etc. If it is cloud storage, with the required auth keys, the operator itself can set up the storage.
You can follow the ‘Install minikube’ document to set up minikube. Please note that if you are using a minikube version below 1.17.0, use the `--vm-driver=none` option. More on this issue is recorded at kadalu/issue#351.
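For example, on an older minikube the start command would look like the below sketch (any additional flags depend on your setup):

```console
$ minikube start --vm-driver=none
```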
For testing, log in to minikube and create a virtual device as below.

```console
$ cd /mnt/vda1/
$ sudo truncate -s 10G storage-pool-1.disk.img
```
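This image file can then be used as the storage device in the storage config. A minimal sketch, assuming the node is named `minikube` (as shown in `kubectl get nodes`):

```yaml
# File: storage-config.yaml (sketch for the minikube test setup)
---
apiVersion: kadalu-operator.storage/v1
kind: KadaluStorage
metadata:
  name: storage-pool-1
spec:
  type: Replica1
  storage:
    - node: minikube
      device: /mnt/vda1/storage-pool-1.disk.img
```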
After this, follow our Homepage and you are good to get started.
We understand many of the examples given are for setting it up on 1 node with 1 device. When you have more storage to export, just export more devices in multiples of the replica count.
```yaml
# File: storage-config.yaml
---
apiVersion: kadalu-operator.storage/v1
kind: KadaluStorage
metadata:
  # This will be used as the name of the PV Hosting Volume
  name: storage-pool-1
spec:
  type: Replica1
  storage:
    - node: kube1       # node name as shown in `kubectl get nodes`
      device: /dev/vdc
    - node: kube2
      device: /dev/vdd
    - node: kube3
      device: /dev/vdc
```
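Apply the config to create the storage pool (standard kubectl usage):

```console
$ kubectl apply -f storage-config.yaml
```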
NOTE: If you are using kadalu versions below 0.8.0, then please refer to the document for the 0.7.7 version.
The answer we have is: provide the kadalu config using 3 nodes, and use the gluster replicate module (replica 3). The sample looks something like below:
```yaml
# File: storage-config.yaml
---
apiVersion: kadalu-operator.storage/v1
kind: KadaluStorage
metadata:
  # This will be used as the name of the PV Hosting Volume
  name: storage-replica-pool-1
spec:
  type: Replica3         # Notice that this field tells the kadalu operator to use the replicate module.
  storage:
    - node: kube1        # node name as shown in `kubectl get nodes`
      device: /dev/vdc   # Device to provide storage to all PVs
    - node: kube2        # node name as shown in `kubectl get nodes`
      device: /dev/vdd   # Device to provide storage to all PVs
    - node: kube3        # node name as shown in `kubectl get nodes`
      device: /dev/vdc   # Device to provide storage to all PVs
---
```
With this, there will be 3 bricks, and the kadalu CSI driver will mount the corresponding volume and serve the data.
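To confirm that the storage pods came up, something like the below can be used (assuming the default `kadalu` namespace):

```console
$ kubectl get pods -n kadalu
```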
NOTE: Both replica1 and replica3 type volumes can co-exist in the system. While claiming a PV, you just need to provide `storageClassName: kadalu.replica1` or `storageClassName: kadalu.replica3` to use the relevant option.
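For example, a claim against the replica3 pool would look like the below sketch (the PVC name, access mode, and size are illustrative):

```yaml
# File: sample-pvc.yaml
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: sample-pvc    # illustrative name
spec:
  storageClassName: kadalu.replica3
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi    # illustrative size
```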
As we use glusterfs as the storage backend, without any sharding/striping/disperse mode, the data remains as-is on your backend storage; each PV is just a subdirectory on your storage. So, no need to panic.
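As an illustration (the mount point here is hypothetical), you can mount a brick device directly on the node that exports it and inspect the layout; each PV appears as a plain subdirectory under the mount:

```console
$ sudo mount /dev/vdc /mnt/brick
$ ls /mnt/brick
```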
As long as glusterfs promises to keep the backend layout the same, and continues to provide storage after an upgrade, we don’t see any issue with upgrades. Currently, one known issue is that our operator does not check the heal pending count while upgrading storage pods.
We have compiled a list of things here.