Okay...Where to begin?
I'm going to assume you already have some wits about you or you wouldn't be here. So... we have some VMs deployed via XCP-ng.
Six in this case:
1 master node and 5 worker nodes.
You provided SSH keys during the wizard, and you can access the VMs once cloud-init has run. The default user in this case is named 'debian'.
The cluster CIDR that you selected was 10.1.40.0/24 - this is the overlay (pod) network.
Power down all the VMs, expand each disk out to 25 GB, and power them back on.
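A sketch of that resize, assuming the VDI UUID placeholders below and the usual Debian-on-Xen device names (/dev/xvda, partition 1) - yours may differ:

```shell
# On the XCP-ng host: shut the VM down, grow its disk to 25 GiB, start it again.
xe vm-shutdown uuid=<vm-uuid>
xe vdi-resize uuid=<vdi-uuid> disk-size=25GiB
xe vm-start uuid=<vm-uuid>

# Inside the guest: grow the partition and filesystem to use the new space.
sudo apt install -y cloud-guest-utils   # provides growpart
sudo growpart /dev/xvda 1               # assumed root disk and partition number
sudo resize2fs /dev/xvda1               # assumes an ext4 root filesystem
```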
kubectl get nodes will show if your nodes are healthy.
Install nfs-common on all nodes.
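One quick way to do that from your workstation, assuming hypothetical hostnames (master, node1..node5) and the 'debian' user from cloud-init:

```shell
# Hostnames are placeholders - substitute your six VMs' names or IPs.
for host in master node1 node2 node3 node4 node5; do
  ssh debian@"$host" 'sudo apt update && sudo apt install -y nfs-common'
done
```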
On Master:
sudo kubeadm reset
rm -f $HOME/.kube/config
sudo kubeadm init --pod-network-cidr=10.1.40.0/24
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Copy the kubeadm join command from the output of kubeadm init above and run it on each worker node - after first running this on that node:
sudo kubeadm reset
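The join command printed by kubeadm init looks roughly like this - the master IP, token, and hash below are illustrative placeholders, so use the exact line from your own output:

```shell
# Run on each worker node after kubeadm reset; values here are placeholders.
sudo kubeadm join 10.1.40.10:6443 \
  --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:<hash-from-your-init-output>
```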
To troubleshoot the NFS CSI driver later, you can dump the controller's logs to a file (the pod name's hash will differ on your cluster):
kubectl logs csi-nfs-controller-5dcb5446f9-58lxt -c nfs -n kube-system > csi-nfs-controller.log
If you're using Flannel as the CNI, init with its default pod CIDR instead (reset first if you already ran init):
sudo kubeadm init --pod-network-cidr 10.244.0.0/16
Flannel's default manifest expects 10.244.0.0/16 - if you want a different pod CIDR, you have to edit kube-flannel.yml to match.
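With the master initialized on 10.244.0.0/16, Flannel can then be installed from the flannel-io project's released manifest (verify the URL against the current release before relying on it):

```shell
# Installs the Flannel CNI cluster-wide from the latest released manifest.
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
```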