
Cloud Cluster Testing



Hi All,
I’ve been doing some testing with the IFS Cloud cluster setup in 24R2. The issue I’m facing is that the kubeconfig is only created for one node, even though 3 nodes are defined. Because of this limitation, PowerShell cannot connect and issue any kubectl commands if the first node is down.
My expectation was that the kubeconfig would include all 3 nodes, or at least that there would be 3 config files, but that doesn’t appear to be the case. What is the process for managing Kubernetes when the first node is down? Is there a way around this?
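
For reference, a kubeconfig normally targets a single API server endpoint, so this matches what a standard setup produces. Assuming a default kubeconfig layout, you can confirm which node yours points at with:

$ kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'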

2 replies

  • Hero (Employee)
  • 185 replies
  • March 10, 2025

If the master is “gone”, that should be a temporary state. If you need to connect to the k8s cluster during this time, you can SSH to one of the available nodes and run kubectl commands locally, e.g.:

$ sudo microk8s kubectl get nodes
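
Going a step further, you can also export a kubeconfig from a surviving node and use it from your workstation until the first node is back. This is a sketch assuming a default MicroK8s install; the user and hostname (admin, node2.example.com) are hypothetical:

$ ssh admin@node2.example.com
$ sudo microk8s config > node2-kubeconfig.yaml
# copy node2-kubeconfig.yaml to your workstation, then:
$ kubectl --kubeconfig node2-kubeconfig.yaml get nodes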


  • Author
  • 6 replies
  • March 10, 2025

Thank you for the reply @hhanse. Is this the expected behavior? Say we permanently lose the master: is there a way to promote another node to master and create a kubeconfig to connect to and manage the Kubernetes cluster? If we have to reinstall Kubernetes on all cluster nodes, that defeats the purpose of having high availability.
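
For context, MicroK8s with three or more nodes normally runs its datastore (dqlite) in high-availability mode, so there is no single fixed master and any surviving node can keep serving the API. A sketch of checking HA status and evicting a permanently lost node, assuming a default MicroK8s HA setup; node1 is a hypothetical hostname for the dead node:

$ sudo microk8s status
# look for "high-availability: yes" and the list of datastore master nodes
$ sudo microk8s remove-node node1 --force
# run from a surviving node to remove the unreachable member from the cluster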



