Complete Guide: 6-Server Proxmox VE Kubernetes Setup
This guide walks through, step by step, how to set up a 6-server Proxmox VE cluster and deploy a highly available (HA) Kubernetes cluster on top of it, with multiple master (control-plane) nodes and worker nodes. The design targets workloads on the order of 600,000 users and can scale out as needed.
1. Physical Server Specs
Each server:
- 40 CPU cores / 80 threads
- 512 GB RAM
- 8 TB storage
- Redundant power supply
- 10GbE network interface
Total servers: 6
2. Proxmox VE Installation
- Download Proxmox VE ISO from the official site.
- Boot each physical server from ISO.
- Install Proxmox VE using default settings.
- Configure the management network with a static IP (a sketch of the resulting network config follows at the end of this section).
- Update Proxmox packages:
```bash
apt update && apt full-upgrade -y
```
- Repeat on all 6 servers.
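For reference, the management network from the step above normally lives in /etc/network/interfaces as a Linux bridge. A minimal sketch, assuming the physical NIC is named eno1 and the host uses 192.168.1.21/24 (both are assumptions to adjust per host):
```bash
# Sketch of /etc/network/interfaces on one Proxmox host.
# NIC name (eno1) and addresses are assumptions; adjust per host.
cat <<'EOF' > /etc/network/interfaces
auto lo
iface lo inet loopback

auto eno1
iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.21/24
    gateway 192.168.1.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
EOF
ifreload -a   # apply; ifupdown2 ships with Proxmox VE 7+
```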
3. Proxmox Cluster Setup
- On the first server:
```bash
pvecm create my-cluster
```
- On other servers, join the cluster:
```bash
pvecm add <IP-of-first-server>
```
- Verify cluster status:
```bash
pvecm status
```
4. VM Layout for Kubernetes
Goal: 3 master nodes (HA control plane) + 6 worker nodes
| Physical Server | VM Name | Role | vCPU | RAM | Storage |
|---|---|---|---|---|---|
| Server 1 | Master-1 | Master | 8 | 64GB | 100GB |
| Server 1 | Worker-1 | Worker | 16 | 128GB | 500GB |
| Server 2 | Master-2 | Master | 8 | 64GB | 100GB |
| Server 2 | Worker-2 | Worker | 16 | 128GB | 500GB |
| Server 3 | Master-3 | Master | 8 | 64GB | 100GB |
| Server 3 | Worker-3 | Worker | 16 | 128GB | 500GB |
| Server 4 | Worker-4 | Worker | 16 | 128GB | 500GB |
| Server 5 | Worker-5 | Worker | 16 | 128GB | 500GB |
| Server 6 | Worker-6 | Worker | 16 | 128GB | 500GB |
Notes:
- Masters are distributed across servers to avoid single points of failure.
- Workers are spread for balanced resource utilization.
- VMs can be adjusted based on workload needs.
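The VMs can be created in the Proxmox web UI or with the qm CLI. A hedged sketch for Master-1 on Server 1 follows; the VM ID, storage pool names (local-lvm, local), and ISO filename are assumptions:
```bash
# Example: create the Master-1 VM (run on the Proxmox host that should own it).
# VM ID 101, storage pool "local-lvm", and the ISO name are assumptions.
qm create 101 \
  --name master-1 \
  --ostype l26 \
  --cores 8 --sockets 1 \
  --memory 65536 \
  --net0 virtio,bridge=vmbr0 \
  --scsihw virtio-scsi-pci \
  --scsi0 local-lvm:100 \
  --ide2 local:iso/debian-12-netinst.iso,media=cdrom \
  --boot order='scsi0;ide2'
qm start 101
```
Worker VMs follow the same pattern with 16 cores, 131072 MB of RAM, and a 500 GB disk, per the table above.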
5. Debian Installation on VMs
- Install a minimal Debian system on all VMs.
- Update OS packages:
```bash
sudo apt update && sudo apt upgrade -y
```
- Set a static IP on each VM.
- Disable swap (kubelet requires swap to be off):
```bash
sudo swapoff -a
sudo sed -i '/swap/d' /etc/fstab
```
- Install dependencies and a container runtime (the docker.io package also pulls in containerd, which recent Kubernetes releases talk to via CRI):
```bash
sudo apt install -y apt-transport-https curl docker.io
sudo systemctl enable --now docker
```
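kubeadm's preflight checks also expect the br_netfilter module and a couple of sysctls on every VM; a minimal sketch:
```bash
# Load kernel modules needed for container networking (persisted across reboots)
cat <<'EOF' | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter

# Let iptables see bridged traffic and enable IP forwarding
cat <<'EOF' | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sudo sysctl --system
```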
6. Kubernetes Installation
- Add Kubernetes repo:
```bash
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update
```
- Install Kubernetes components:
```bash
sudo apt install -y kubelet kubeadm kubectl
sudo systemctl enable --now kubelet
```
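Note: apt-key is deprecated and the Google-hosted apt.kubernetes.io repository has been frozen, so the commands above may fail on current systems. A sketch of the community-hosted pkgs.k8s.io alternative (the v1.30 minor version in the URLs is an assumption; pick the release you want to track):
```bash
# Alternative: community-hosted packages (adjust the minor version in both URLs)
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key \
  | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' \
  | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update && sudo apt install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl   # pin versions; upgrades should be deliberate
```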
7. Initialize First Master (Master-1)
Run this on Master-1. Use the HA endpoint address (see section 11) as the control-plane endpoint, and make sure the pod network CIDR does not overlap with the VM network; with nodes on 192.168.1.x, a non-overlapping range such as 10.244.0.0/16 is safer (Calico's IP pool must then be set to match):
```bash
sudo kubeadm init \
  --control-plane-endpoint "192.168.1.10:6443" \
  --upload-certs \
  --pod-network-cidr=192.168.0.0/16
```
- Copy kubeconfig for kubectl access:
```bash
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
- Install network plugin (Calico):
```bash
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
```
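Before joining more nodes, it can help to wait for Calico to finish rolling out (the resource names below match the stock calico.yaml manifest):
```bash
# Wait for the CNI components, then confirm Master-1 goes Ready
kubectl -n kube-system rollout status daemonset/calico-node
kubectl -n kube-system rollout status deployment/calico-kube-controllers
kubectl get nodes
```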
8. Add Additional Masters (Master-2, Master-3)
On each additional master VM:
```bash
sudo kubeadm join 192.168.1.10:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --control-plane \
  --certificate-key <cert-key-from-init>
```
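The token, CA-cert hash, and certificate key come from the kubeadm init output; if that output is lost or the values have expired, they can be regenerated on Master-1:
```bash
# Run on Master-1 to regenerate join credentials
sudo kubeadm token create --print-join-command        # prints a fresh worker join command (token + CA hash)
sudo kubeadm init phase upload-certs --upload-certs   # re-uploads control-plane certs and prints a new certificate key
```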
9. Add Worker Nodes
On each worker VM:
```bash
sudo kubeadm join 192.168.1.10:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
```
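Optionally, label each worker so that kubectl get nodes shows its role (the node names are assumed to match the VM hostnames):
```bash
# Run from any machine with kubectl access; repeat for worker-2 ... worker-6
kubectl label node worker-1 node-role.kubernetes.io/worker=
```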
10. Verify Cluster
```bash
kubectl get nodes -o wide
kubectl get pods -A
```
- All masters and workers should show Ready.
- kube-system pods should be Running (network plugin, CoreDNS, etc.).
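As a quick smoke test (the deployment name and image are arbitrary), a throwaway nginx workload confirms scheduling and pod networking across the workers:
```bash
# Deploy a small test workload, verify it spreads across workers, then clean up
kubectl create deployment hello-nginx --image=nginx --replicas=3
kubectl get pods -o wide                       # pods should land on different workers
kubectl expose deployment hello-nginx --port=80 --type=NodePort
kubectl get svc hello-nginx                    # note the NodePort, then curl <any-node-ip>:<nodeport>
kubectl delete service hello-nginx && kubectl delete deployment hello-nginx
```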
11. Notes / Best Practices
- HA API endpoint: Put HAProxy and/or keepalived in front of the three masters' kube-apiservers and use that address as the control-plane endpoint (see the sketch after this list).
- VM distribution: Don’t put multiple masters on the same physical server.
- Networking: Ensure all VMs can reach the control-plane endpoint.
- Storage: Use Ceph or NFS for persistent volumes.
- Backups: Regularly back up etcd and critical manifests.
- Scaling: Worker VMs can be resized or new VMs added as needed.
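For the HA API endpoint above, a minimal HAProxy sketch, assuming a small dedicated load-balancer VM (or a keepalived pair) whose address, e.g. 192.168.1.10, is used as the control-plane endpoint, and masters at 192.168.1.11-13 (all addresses are assumptions):
```bash
# On the load-balancer VM: TCP pass-through to the three kube-apiservers.
# Addresses are examples; align them with your control-plane endpoint and master IPs.
sudo apt install -y haproxy
cat <<'EOF' | sudo tee -a /etc/haproxy/haproxy.cfg

frontend k8s-api
    bind *:6443
    mode tcp
    option tcplog
    default_backend k8s-masters

backend k8s-masters
    mode tcp
    balance roundrobin
    option tcp-check
    server master-1 192.168.1.11:6443 check
    server master-2 192.168.1.12:6443 check
    server master-3 192.168.1.13:6443 check
EOF
sudo systemctl restart haproxy
```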
12. Summary
- Physical servers: 6
- Master VMs: 3 (HA control plane)
- Worker VMs: 6
- VM placement: Masters distributed for HA, workers balanced
- Software: Proxmox VE, Debian, Docker, Kubernetes, Calico
- Networking: Static IPs, Pod network 192.168.0.0/16
- Storage: Optional Ceph for persistence
- Power / Resources: Ensure enough CPU/RAM for cluster workloads
This setup provides a robust, highly available Kubernetes cluster sized for roughly 600k users, with headroom to scale by adding worker VMs or physical servers.