HDD-based Ceph Cluster with Open Cache Acceleration Software (Open CAS)
Creating Block Storage
Block storage allows a single pod to mount storage. For more background, see the Block Storage Overview in the Rook documentation.
Create a StorageClass and CephBlockPool:

```console
kubectl create -f csi/rbd/storageclass.yaml
```
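For reference, the example manifest defines roughly the following CephBlockPool and StorageClass. This is a sketch based on Rook's stock `csi/rbd/storageclass.yaml`; the pool name, replica count, and secret names below are typical defaults and may differ in your Rook release:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 3            # three data copies; reduce for small test clusters
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph
  pool: replicapool
  imageFormat: "2"
  imageFeatures: layering
  csi.storage.k8s.io/fstype: ext4   # filesystem created on the RBD image
reclaimPolicy: Delete
```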
Rook can then provision block storage for pods. Create a PersistentVolumeClaim backed by the `rook-ceph-block` StorageClass, and a test Deployment that mounts it:
```console
cat <<EOF > test.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pv-claim
  labels:
    app: test
spec:
  storageClassName: rook-ceph-block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
  labels:
    app: test
spec:
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
        - name: test
          image: alpine:3.17
          command: ["sleep", "infinity"]
          volumeMounts:
            - name: test-pv
              mountPath: /data
      volumes:
        - name: test-pv
          persistentVolumeClaim:
            claimName: test-pv-claim
EOF
kubectl apply -f test.yaml
# wait until it is ready
kubectl get po -w
```
Enter the container and check that the volume is mounted:
```console
kubectl exec -it deploy/test -- sh

# check the volume mounted
mount | grep /data
/dev/rbd0 on /data type ext4 (rw,relatime,stripe=16)

# install fio and do some tests
apk add fio
fio --name=read_iops --directory=/data --size=1G \
  --time_based --runtime=60s --ramp_time=2s --ioengine=libaio --direct=1 \
  --verify=0 --bs=4K --iodepth=64 --rw=randread --group_reporting=1
```
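fio prints a summary block per job. If you only want the headline IOPS figure, for example to compare Open CAS-accelerated runs against plain HDD runs, a one-line `sed` extraction is enough. The sample line below is hypothetical output for illustration, not from a real run:

```shell
# hypothetical line copied from fio's summary output; your numbers will differ
line='read: IOPS=52.3k, BW=204MiB/s (214MB/s)(12.0GiB/60001msec)'
# pull out just the IOPS figure
iops=$(echo "$line" | sed -n 's/.*IOPS=\([^,]*\),.*/\1/p')
echo "$iops"
```

Capturing this value before and after enabling the cache gives a quick single-number comparison, though the full latency percentiles in fio's output are usually more telling.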