Self Hosting Part VI - Storage
13 Jul 2023

Now that the applications in the Kubernetes cluster are reachable (at least from within the network), the last missing piece of the puzzle is the storage.
In Self Hosting Part II - Ubuntu Server Installation with PXE Booting I set aside about 20% of the disk space for the operating system and the remaining 80% as one unformatted raw partition (here is the full CloudInit user-data file).
To use these raw partitions as cloud-native distributed storage for Kubernetes we'll use Rook, which orchestrates Ceph on top of the cluster. This will allow us to create and consume block storage, object storage, and shared file systems from our pods.
The installation is straightforward (a sketch is shown below), and the project provides a Toolbox container for verification and troubleshooting.
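For reference, a minimal installation sketch based on the upstream Rook quickstart manifests; the release branch and the use of the stock cluster.yaml are assumptions, and the CephCluster spec may need to be adjusted to point at the raw partitions on each node:

# Clone the Rook example manifests (branch is an assumption)
git clone --single-branch --branch release-1.11 https://github.com/rook/rook.git
cd rook/deploy/examples
# CRDs, RBAC, and the Rook operator
kubectl create -f crds.yaml -f common.yaml -f operator.yaml
# CephCluster definition that consumes the nodes' raw devices
kubectl create -f cluster.yaml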
After the installation, use the Rook Toolbox to check the Ceph cluster status:
kubectl create -f toolbox.yaml
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash
bash-4.4$ ceph status
  cluster:
    id:     86957b71-ea16-4975-8963-b20b4604e872
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum a,b,c (age 82s)
    mgr: a(active, starting, since 0.721464s), standbys: b
    osd: 3 osds: 3 up (since 28s), 3 in (since 48s)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   22 MiB used, 1.1 TiB / 1.1 TiB avail
    pgs:     100.000% pgs not active
             1 creating+peering
bash-4.4$ ceph osd status
ID  HOST   USED   AVAIL  WR OPS  WR DATA  RD OPS  RD DATA  STATE
 0  node1  9004k  363G      0       0       0       0      exists,up
 1  node3  9004k  363G      0       0       0       0      exists,up
 2  node2  8940k  363G      0       0       0       0      exists,up
As expected, Rook manages roughly 1.1 TiB of raw capacity distributed across the three nodes (three OSDs of about 363 GiB each).
Now we are ready to create some storage classes to use in pod deployments.
kubectl apply -f deploy/examples/csi/rbd/storageclass-ec.yaml
kubectl apply -f deploy/examples/filesystem-ec.yaml
kubectl get storageclass
NAME              PROVISIONER                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
rook-ceph-block   rook-ceph.rbd.csi.ceph.com      Delete          Immediate           true                   18d
rook-cephfs       rook-ceph.cephfs.csi.ceph.com   Delete          Immediate           true                   18d
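To confirm the block storage class works end to end, a quick smoke test is to create a PersistentVolumeClaim against rook-ceph-block and mount it in a throwaway pod; the names, namespace, and size below are illustrative only:

# Illustrative manifest: resource names and the 1Gi size are arbitrary choices
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-block-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-ceph-block
---
apiVersion: v1
kind: Pod
metadata:
  name: test-block-pod
spec:
  containers:
    - name: app
      image: busybox
      # Write a file to the mounted volume, then idle so the pod stays up
      command: ["sh", "-c", "echo hello > /data/hello && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: test-block-pvc

Once the pod is Running, kubectl get pv should show a bound volume provisioned by rook-ceph.rbd.csi.ceph.com, backed by the Ceph cluster we just verified.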