Preventing Workloads on Kubernetes Master / Control-Plane Nodes

In Kubernetes, master or control-plane nodes are responsible for managing the cluster: they run the API server, scheduler, controller manager, and etcd. Deploying application workloads on these nodes is not recommended in production, because resource contention can destabilize the control plane and disrupt cluster operations.

This guide explains how to prevent scheduling workloads on master nodes and how to verify that no workloads are running there.


1. Understanding Taints

Kubernetes uses taints to repel pods from nodes. On kubeadm-based clusters, control-plane nodes are tainted by default with:

node-role.kubernetes.io/control-plane:NoSchedule

Clusters created before v1.24 use node-role.kubernetes.io/master:NoSchedule instead (kubeadm applied both keys from v1.20 through v1.23, then dropped the master key in v1.24). A taint has one of three effects:
  • NoSchedule: prevents pods that do not tolerate this taint from scheduling on the node.
  • NoExecute: evicts already-running pods that do not tolerate the taint.
  • PreferNoSchedule: tries to avoid scheduling pods on the node but does not enforce it strictly.
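As a quick reference, the taint argument that kubectl expects has the shape key[=value]:effect. A minimal sketch that only constructs and prints an example command (the node name is a placeholder):

```shell
# The taint argument format is key[=value]:effect; the value part is optional,
# and the effect must be NoSchedule, PreferNoSchedule, or NoExecute.
KEY="node-role.kubernetes.io/control-plane"
EFFECT="NoSchedule"
TAINT="${KEY}:${EFFECT}"

# <node-name> is a placeholder; substitute your actual node name.
echo "kubectl taint nodes <node-name> ${TAINT}"
```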

2. Check Current Node Taints

Run:

bash
kubectl get nodes -o json | jq '.items[] | {name:.metadata.name, taints:.spec.taints}'

Example output:

json
{
  "name": "k8s-master",
  "taints": null
}

null means no taints exist — pods can schedule here.


3. Add Taints to Master Node

To prevent workloads from scheduling on the master/control-plane node:

bash
kubectl taint nodes k8s-master node-role.kubernetes.io/master=:NoSchedule
kubectl taint nodes k8s-master node-role.kubernetes.io/control-plane=:NoSchedule
  • Replace k8s-master with your node name.
  • Applying both taint keys covers clusters on either the legacy master key or the newer control-plane key; pods without a matching toleration will not schedule on the node.
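If you ever need to undo this (for example on a single-node test cluster), the same command with a trailing hyphen removes a taint. A sketch that only prints the removal commands, reusing the k8s-master example name from above:

```shell
# Removing a taint uses the same syntax with "-" appended to the effect.
# This function only prints the commands; run them against a live cluster.
print_removal_commands() {
  for KEY in node-role.kubernetes.io/master node-role.kubernetes.io/control-plane; do
    echo "kubectl taint nodes k8s-master ${KEY}:NoSchedule-"
  done
}

print_removal_commands
```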

4. Verify Taints

bash
kubectl describe node k8s-master | grep -i taint

Expected output:

Taints: node-role.kubernetes.io/master:NoSchedule
        node-role.kubernetes.io/control-plane:NoSchedule

5. Moving Existing Pods Off the Master

If pods are already scheduled on the master node, you can drain the node:

bash
kubectl drain k8s-master --ignore-daemonsets --delete-emptydir-data
  • This evicts all pods except DaemonSet pods; pods managed by a controller (Deployment, StatefulSet, etc.) are recreated on worker nodes, while bare pods are simply deleted.
  • The --delete-emptydir-data flag permits evicting pods that use emptyDir volumes; their emptyDir data is lost.

Draining also cordons the node (marks it SchedulingDisabled). Uncordon it afterwards so scheduling is governed by the taints alone; with the NoSchedule taints in place, only pods that tolerate them, such as control-plane components, can land on the node:

bash
kubectl uncordon k8s-master

6. Check for Existing or Future Workloads on Master

To ensure no application pods are running on the master node and monitor for future deployments:

bash
kubectl get pods -A -o wide | grep k8s-master
  • Only system/DaemonSet pods should appear.
  • If any deployment or StatefulSet pods appear on the master, investigate immediately.

Optional: create a monitoring alert or script to notify if non-system pods are scheduled on the master.
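Such a script can be a small shell filter over the pod listing. A sketch, assuming the node is named k8s-master and treating anything outside kube-system as suspect; here the listing is a captured sample so the filtering logic is visible, while in a live cluster it would come from kubectl itself:

```shell
#!/usr/bin/env bash
# Flag pods on the master that are not in the kube-system namespace.
# Live input would come from:
#   kubectl get pods -A -o wide --field-selector spec.nodeName=k8s-master
filter_non_system() {
  # Skip the header row; print namespace/name for anything outside kube-system.
  awk 'NR > 1 && $1 != "kube-system" { print $1 "/" $2 }'
}

# Captured sample listing used here to demonstrate the filtering logic.
sample_listing() {
  cat <<'EOF'
NAMESPACE     NAME                         READY   STATUS    NODE
kube-system   kube-proxy-abcde             1/1     Running   k8s-master
kube-system   etcd-k8s-master              1/1     Running   k8s-master
default       web-7f9c4b6d5-xyz12          1/1     Running   k8s-master
EOF
}

sample_listing | filter_non_system   # prints default/web-7f9c4b6d5-xyz12
```

Any line this prints is a candidate for investigation or for wiring into an alerting system.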


7. Optional: Allow Specific Pods on Master

Sometimes, you may want to run system or special pods on master nodes. Use tolerations in the pod spec:

yaml
apiVersion: v1
kind: Pod
metadata:
  name: special-pod
spec:
  tolerations:
  - key: "node-role.kubernetes.io/master"
    operator: "Exists"
    effect: "NoSchedule"
  - key: "node-role.kubernetes.io/control-plane"
    operator: "Exists"
    effect: "NoSchedule"
  containers:
  - name: nginx
    image: nginx
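To try this out, the manifest can be generated and applied from the shell. A sketch covering both taint keys; special-pod is the hypothetical name from above, and the kubectl steps are shown as comments since they need a live cluster:

```shell
# Generate the toleration example manifest; apply it with:
#   pod_manifest | kubectl apply -f -
# then confirm placement with:
#   kubectl get pod special-pod -o wide
# Note: a toleration allows, but does not force, scheduling on the master.
pod_manifest() {
  cat <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: special-pod
spec:
  tolerations:
  - key: "node-role.kubernetes.io/master"
    operator: "Exists"
    effect: "NoSchedule"
  - key: "node-role.kubernetes.io/control-plane"
    operator: "Exists"
    effect: "NoSchedule"
  containers:
  - name: nginx
    image: nginx
EOF
}

pod_manifest
```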

8. Best Practices

  • Always taint master/control-plane nodes to protect cluster stability.
  • Run all application workloads, databases, and file servers on worker nodes only.
  • Use node selectors or affinity to direct specific workloads to designated worker nodes.
  • Keep master nodes reserved for control-plane and cluster management tasks.
  • Regularly check with kubectl get pods -A -o wide | grep <master-node> to ensure no workloads accidentally schedule on master nodes.
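The node-selector bullet above can be sketched as follows; the label (workload=apps), node name, and pod name are assumptions, and the kubectl steps are shown as comments since they need a live cluster:

```shell
# Label a worker so workloads can target it:
#   kubectl label nodes worker-1 workload=apps
# Then pin a pod to labeled nodes via nodeSelector; apply with:
#   app_manifest | kubectl apply -f -
app_manifest() {
  cat <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  nodeSelector:
    workload: apps
  containers:
  - name: nginx
    image: nginx
EOF
}

app_manifest
```

Because app-pod carries no toleration for the control-plane taints, the nodeSelector and the taints together keep it strictly on labeled workers.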

Summary

  • Problem: Workloads running on master nodes can interfere with cluster management.
  • Solution: Add NoSchedule taints on master/control-plane nodes and drain existing pods.
  • Verification: Check regularly for pods on master nodes using kubectl get pods -A -o wide.
  • Optional: Use tolerations for exceptions.
  • Result: Separation of control-plane and worker workloads, improving stability and production reliability.