Guide: Installing Ceph on Every Proxmox VE Node and Using it in Kubernetes
This guide will show how to install Ceph on all Proxmox VE nodes, set up a Ceph cluster, and use it as storage in a Kubernetes cluster.
1. Prerequisites
- 6 Proxmox VE servers installed and clustered.
- Dedicated disks on each server for Ceph OSDs (minimum 1 per server, ideally 2+).
- Network setup:
- Public network: 10.42.0.0/24
- Cluster/Replication network: 10.43.0.0/24
- Root or sudo access on all nodes.
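Before touching Ceph, it is worth confirming that all nodes have actually joined the Proxmox cluster. A quick sanity check, run from any node:
```bash
# Show cluster membership and quorum state
pvecm status

# List all cluster nodes
pvecm nodes
```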
2. Check Network and IPs
Before installing Ceph, verify the network and IP addresses.
Check IP Address
```bash
ip a
```
- Confirm IPs for both public and cluster/replication networks.
Ping Test
```bash
ping -c 3 <other-node-ip>
```
- Ensure all nodes can communicate on both public and cluster networks.
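To check every node in one pass, a small loop like the sketch below can help. The `.101`–`.106` host addresses are placeholders based on the subnets above; substitute your actual node IPs.
```bash
# Ping each node on the public (10.42.0.0/24) and cluster (10.43.0.0/24) networks.
# The .101-.106 host addresses are examples only.
for i in 101 102 103 104 105 106; do
  ping -c 1 -W 2 10.42.0.$i > /dev/null && echo "10.42.0.$i OK" || echo "10.42.0.$i FAILED"
  ping -c 1 -W 2 10.43.0.$i > /dev/null && echo "10.43.0.$i OK" || echo "10.43.0.$i FAILED"
done
```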
Update /etc/hosts (Optional)
```bash
# On all nodes
nano /etc/hosts

# Add entries (public network IPs)
10.42.0.101 proxmox1
10.42.0.102 proxmox2
10.42.0.103 proxmox3
10.42.0.104 proxmox4
10.42.0.105 proxmox5
10.42.0.106 proxmox6
```
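If you maintain `/etc/hosts` entries, verify that each hostname resolves to the intended address (hostnames `proxmox1`–`proxmox6` as in the example above):
```bash
# Confirm each hostname resolves to the intended public IP
for h in proxmox1 proxmox2 proxmox3 proxmox4 proxmox5 proxmox6; do
  getent hosts $h
done
```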
3. Install Ceph on All Proxmox VE Nodes
Update System
```bash
apt update && apt full-upgrade -y
```
Install Ceph Packages
```bash
apt install ceph ceph-common ceph-fuse ceph-mgr ceph-mon ceph-osd ceph-mds -y
```
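Alternatively, on Proxmox VE you can let the `pveceph` tool install the packages, which pulls them from the Proxmox-maintained Ceph repository:
```bash
# Alternative: install Ceph via the Proxmox tooling (uses the Proxmox Ceph repository)
pveceph install
```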
Initialize Ceph Cluster (Only on First Node)
```bash
pveceph init --cluster-network 10.43.0.0/24 --public-network 10.42.0.0/24
```
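The init step writes the cluster-wide configuration to `/etc/pve/ceph.conf`, which Proxmox replicates to all nodes. A quick check that both networks were recorded:
```bash
# Verify the generated configuration contains the public and cluster networks
grep -i network /etc/pve/ceph.conf
```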
Create MON Nodes
```bash
# Run on the first node
pveceph mon create

# Repeat on at least 2 more nodes for HA
pveceph mon create
```
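Once the monitors are created, confirm that they have formed a quorum before moving on:
```bash
# Show the monitor map and quorum membership
ceph mon stat
ceph quorum_status --format json-pretty
```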
Add OSDs (Object Storage Daemons) on All Nodes
```bash
# Replace /dev/sdb with your dedicated storage disk
pveceph osd create /dev/sdb
```
- Repeat for each server and each disk you want Ceph to manage.
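Note that OSD creation will fail on a disk that still carries partitions or an old filesystem. If a disk has been used before, you can inspect and wipe it first; this is destructive, and the device name below is only an example, so double-check it.
```bash
# List block devices and any existing partitions/filesystems
lsblk

# DESTRUCTIVE: wipe leftover signatures on a previously used disk (example device)
ceph-volume lvm zap /dev/sdb --destroy
```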
Deploy Manager Daemons (MGR)
```bash
pveceph mgr create
```
- Deploy at least one MGR per cluster for monitoring; a second MGR on another node acts as a standby for HA.
Verify Ceph Cluster
```bash
ceph status
ceph osd tree
ceph df
```
4. Prepare Ceph for Kubernetes
Create Pool for Kubernetes PVs
```bash
pveceph pool create k8s-pool --pg_num 128
```
- 128 is the number of placement groups (adjust based on the total number of OSDs).
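The Kubernetes side will need the cluster FSID and a Ceph client with access to this pool. The client name `client.kubernetes` below is only a suggested name (it mirrors the upstream ceph-csi examples); record the printed key for later:
```bash
# Cluster FSID - this becomes the clusterID used by ceph-csi
ceph fsid

# Create a dedicated Ceph user restricted to the k8s-pool (name is an example)
ceph auth get-or-create client.kubernetes \
  mon 'profile rbd' \
  osd 'profile rbd pool=k8s-pool' \
  mgr 'profile rbd pool=k8s-pool'
```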
Install RBD CSI Driver in Kubernetes
```bash
kubectl apply -f https://raw.githubusercontent.com/ceph/ceph-csi/devel/deploy/rbd/kubernetes/csi-rbdplugin.yaml
```
- This manifest deploys only the RBD node plugin; a complete ceph-csi deployment also needs the provisioner, RBAC, and CSI ConfigMap manifests from the same `deploy/rbd/kubernetes/` directory (or use the ceph-csi Helm chart).
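ceph-csi also has to be told which Ceph cluster to talk to. In the upstream examples this is done through a ConfigMap named `ceph-csi-config` holding a `config.json` entry with the FSID and monitor addresses; a sketch (the namespace, FSID placeholder, and monitor IPs are assumptions, adjust to your deployment):
```bash
# Sketch of the ceph-csi cluster ConfigMap (FSID and monitor IPs are placeholders)
kubectl create configmap ceph-csi-config \
  --from-literal=config.json='[
    {
      "clusterID": "<cluster-id>",
      "monitors": ["10.42.0.101:6789", "10.42.0.102:6789", "10.42.0.103:6789"]
    }
  ]'
```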
Create Kubernetes StorageClass
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: <cluster-id>
  pool: k8s-pool
  csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Delete
```
- Replace `<cluster-id>` with your Ceph cluster's FSID (shown in the `ceph status` or `ceph fsid` output).
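In addition to `clusterID` and `pool`, the ceph-csi provisioner needs Ceph credentials. These are usually supplied as a Kubernetes Secret referenced from the StorageClass via the `csi.storage.k8s.io/*-secret-name` and `*-secret-namespace` parameters; a minimal sketch using the `client.kubernetes` user created earlier (secret name and namespace are assumptions):
```bash
# Secret consumed by ceph-csi; userID is the Ceph client name without the "client." prefix,
# userKey is the key printed by "ceph auth get-or-create" (placeholder below)
kubectl create secret generic csi-rbd-secret \
  --namespace default \
  --from-literal=userID=kubernetes \
  --from-literal=userKey=<ceph-auth-key>
```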
5. Using Ceph Storage in Kubernetes
Example: Deploy MongoDB with Ceph PVC
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodb-pvc
spec:
  storageClassName: ceph-rbd
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
```
- Attach the PVC to your MongoDB Deployment/StatefulSet.
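As a minimal sketch of attaching the claim, the Deployment below mounts `mongodb-pvc` at MongoDB's default data path; the image tag and labels are assumptions, not part of the original guide:
```bash
# Minimal MongoDB Deployment using the Ceph-backed PVC (illustrative only)
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongodb
          image: mongo:7
          volumeMounts:
            - name: data
              mountPath: /data/db   # MongoDB's default data directory
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: mongodb-pvc
EOF
```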
Example: Deploy MinIO with Ceph PVC
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: minio-pvc
spec:
  storageClassName: ceph-rbd
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Gi
```
- Use this PVC in the MinIO Deployment.
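After creating either PVC, confirm that it binds and that a corresponding RBD image appears in the pool:
```bash
# On the Kubernetes side: the PVCs should reach the Bound state
kubectl get pvc mongodb-pvc minio-pvc

# On a Ceph/Proxmox node: ceph-csi creates one RBD image per bound PVC
rbd ls -p k8s-pool
```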
6. Notes & Best Practices
- Always install Ceph on all nodes contributing OSDs.
- Use a separate network for Ceph replication for performance.
- For HA, run MONs on at least 3 nodes.
- Monitor the cluster with `ceph status`, `ceph osd tree`, and `ceph df`.
- Run Kubernetes workloads (databases, file servers) on worker nodes, not on master/control-plane nodes.
- Before deploying, always check network connectivity and correct IPs.
This setup ensures that Ceph provides reliable, highly available, and scalable storage to your Kubernetes workloads across all Proxmox VE nodes.