Complete Guide: Setting up Proxmox Storage and Using it in Kubernetes
This guide walks through:
- Creating storage in Proxmox VE.
- Making it available for Kubernetes workloads.
- Deploying stateful applications like MongoDB and MinIO using this storage.
Step 1: Prepare Storage in Proxmox VE
Assuming a multi-node Proxmox cluster.
1.1 Check existing storage
```bash
pvesm status
lsblk -f
```
You should see your existing storage entries, such as a ZFS pool (zfspool) and the default local directory storage.
1.2 Create a dedicated directory for Kubernetes PVs
Example on each node:
```bash
mkdir -p /mnt/pve/data/k8s-pv1
mkdir -p /mnt/pve/data/k8s-pv2
mkdir -p /mnt/pve/data/k8s-pv3
chmod 777 /mnt/pve/data/k8s-pv*
```
chmod 777 keeps the example simple; outside a lab, prefer tighter ownership and permissions on these directories.
Using ZFS datasets is also possible if you want snapshot support:
```bash
zfs create data/k8s-pv1
zfs create data/k8s-pv2
zfs create data/k8s-pv3
```
1.3 Add the storage in Proxmox GUI (optional)
- Go to Datacenter -> Storage -> Add -> Directory
- Set Directory to /mnt/pve/data/k8s-pv1 (or the ZFS dataset)
- Set Content to VZDump backup file, ISO image, Container template (optional)
- Repeat for the other directories if desired
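The same can be done from the Proxmox shell with pvesm; a minimal sketch, assuming k8s-pv1 as the storage ID:
```bash
# k8s-pv1 is an arbitrary storage ID; adjust the path and content types to your setup
pvesm add dir k8s-pv1 --path /mnt/pve/data/k8s-pv1 --content backup,iso,vztmpl
```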
Step 2: Install Local Path Provisioner in Kubernetes
2.1 Deploy Local Path Provisioner
```bash
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
```
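Confirm the provisioner pod is running and the local-path StorageClass was created:
```bash
kubectl -n local-path-storage get pods
kubectl get storageclass local-path
```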
2.2 Configure it to use Proxmox directories
```bash
kubectl edit configmap local-path-config -n local-path-storage
```
Modify config.json:
```json
{
  "nodePathMap": [
    { "node": "k8s-node-1", "paths": ["/mnt/pve/data/k8s-pv1", "/mnt/pve/data/k8s-pv2", "/mnt/pve/data/k8s-pv3"] },
    { "node": "k8s-node-2", "paths": ["/mnt/pve/data/k8s-pv1", "/mnt/pve/data/k8s-pv2", "/mnt/pve/data/k8s-pv3"] },
    { "node": "k8s-node-3", "paths": ["/mnt/pve/data/k8s-pv1", "/mnt/pve/data/k8s-pv2", "/mnt/pve/data/k8s-pv3"] }
  ]
}
```
Ensure the paths exist on each node and that permissions allow Kubernetes to write. Node names must exactly match the output of kubectl get nodes; listing all three directories lets the provisioner place volumes in any of them.
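If an edit does not seem to take effect, restarting the provisioner forces a config reload (the Deployment is named local-path-provisioner in the upstream manifest):
```bash
kubectl -n local-path-storage rollout restart deployment/local-path-provisioner
```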
Step 3: Create Persistent Volume Claims (PVC)
Example for MongoDB. A standalone PVC like this is what you would use with a Deployment; the StatefulSets in Step 4 instead create their PVCs automatically through volumeClaimTemplates:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodb-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  storageClassName: local-path
```
Apply with:
```bash
kubectl apply -f mongodb-pvc.yaml
```
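Because local-path uses volumeBindingMode: WaitForFirstConsumer, the PVC will show Pending until a pod actually mounts it; that is expected:
```bash
kubectl get pvc mongodb-pvc
```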
Step 4: Deploy Stateful Applications
4.1 MongoDB StatefulSet
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb
  namespace: default
spec:
  serviceName: "mongodb"
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongodb
          image: mongo:6
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongodb-storage
              mountPath: /data/db
  volumeClaimTemplates:
    - metadata:
        name: mongodb-storage
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: "local-path"
        resources:
          requests:
            storage: 20Gi
```
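serviceName: "mongodb" refers to a headless Service that the StatefulSet needs for stable per-pod DNS (mongodb-0.mongodb); a minimal sketch:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: mongodb
  namespace: default
spec:
  clusterIP: None  # headless: matches serviceName in the StatefulSet
  selector:
    app: mongodb
  ports:
    - port: 27017
```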
4.2 MinIO StatefulSet
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: minio
  namespace: default
spec:
  serviceName: "minio"
  replicas: 1
  selector:
    matchLabels:
      app: minio
  template:
    metadata:
      labels:
        app: minio
    spec:
      containers:
        - name: minio
          image: minio/minio:RELEASE.2025-01-01T00-00-00Z
          args: ["server", "/data"]
          ports:
            - containerPort: 9000
          volumeMounts:
            - name: minio-storage
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: minio-storage
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: "local-path"
        resources:
          requests:
            storage: 50Gi
```
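MinIO likewise needs the headless Service its serviceName points at. Also note that without MINIO_ROOT_USER / MINIO_ROOT_PASSWORD environment variables the server falls back to the default minioadmin credentials, so set them (ideally from a Secret) for anything you keep around. A minimal Service sketch:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: minio
  namespace: default
spec:
  clusterIP: None  # headless: matches serviceName in the StatefulSet
  selector:
    app: minio
  ports:
    - port: 9000
```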
Step 5: Verify Storage and Pods
- List PVCs:
```bash
kubectl get pvc -A
```
- Check PV bindings:
```bash
kubectl get pv -o wide
```
- Check pods and mounted storage:
```bash
kubectl get pods -o wide
kubectl describe pod <pod-name>
```
You should see the pods using the Proxmox directories via the local-path provisioner.
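To double-check on the Proxmox side, list the provisioner's volume directories on the node hosting the pod (local-path typically names them pvc-<uid>_<namespace>_<pvc-name>):
```bash
# Run on the Proxmox node that hosts the pod
ls -l /mnt/pve/data/k8s-pv1/
```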
✅ Result:
- Kubernetes uses Proxmox storage without needing Ceph.
- Stateful apps like MongoDB and MinIO persist data.
- Each node contributes its local storage for workloads.
Notes:
- For production, consider using NFS, Ceph, or GlusterFS for replicated storage.
- Local-path works well for testing or small clusters, but each volume is pinned to the node that created it, so pods cannot fail over to another node with their data.
- Ensure proper backups of data if using local ZFS datasets or directories.
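If you used ZFS datasets, snapshots make a cheap first line of backup; a minimal sketch (the snapshot name is arbitrary):
```bash
# Snapshot the dataset; zfs send/recv can ship snapshots to another box
zfs snapshot data/k8s-pv1@k8s-backup-1
zfs list -t snapshot data/k8s-pv1
```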