
Hi All,

We see that a few pods are restarting many times in our instance, and we found the error below in the hostpath-provisioner pod in the kube-system namespace.


I1128 08:18:28.901804       1 leaderelection.go:248] attempting to acquire leader lease kube-system/microk8s.io-hostpath...
I1128 08:19:01.716190       1 leaderelection.go:258] successfully acquired lease kube-system/microk8s.io-hostpath
I1128 08:19:01.734941       1 event.go:285] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"microk8s.io-hostpath", UID:"c78c2716-8ea7-4a14-af83-5e53b7aa866a", APIVersion:"v1", ResourceVersion:"492052", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' hostpath-provisioner-7df77bc496-js5zf_0f88d79c-fec8-4b74-bef5-b5f107295c29 became leader
I1128 08:19:01.771083       1 controller.go:810] Starting provisioner controller microk8s.io/hostpath_hostpath-provisioner-7df77bc496-js5zf_0f88d79c-fec8-4b74-bef5-b5f107295c29!
I1128 08:19:02.974741       1 controller.go:859] Started provisioner controller microk8s.io/hostpath_hostpath-provisioner-7df77bc496-js5zf_0f88d79c-fec8-4b74-bef5-b5f107295c29!
E1128 08:19:48.463858       1 leaderelection.go:367] Failed to update lock: Put "https://10.152.183.1:443/api/v1/namespaces/kube-system/endpoints/microk8s.io-hostpath": context deadline exceeded
I1128 08:19:48.528501       1 leaderelection.go:283] failed to renew lease kube-system/microk8s.io-hostpath: timed out waiting for the condition
F1128 08:19:48.589396       1 controller.go:888] leaderelection lost


Based on the provided log, it is difficult to pinpoint the exact cause of the pod restarts. Since the issue occurs across all the pods, it could indicate that the resources allocated to the middleware are insufficient; the lease-renewal call to the API server timing out (context deadline exceeded against https://10.152.183.1:443) would be consistent with the node or API server being under resource pressure.
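For context, the fatal "leaderelection lost" at controller.go:888 is the usual client-go leader-election pattern: the provisioner runs its controller only while it holds the lease, and if it cannot renew the lock within the renew deadline it deliberately exits so another replica can take over, which is what surfaces as a pod restart. Below is a minimal sketch of that wiring, not the actual hostpath-provisioner code; it assumes client-go and uses a hypothetical Lease lock named example-hostpath-lock, whereas the log shows the real provisioner using an Endpoints lock named microk8s.io-hostpath.

package main

import (
	"context"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
	"k8s.io/klog/v2"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		klog.Fatalf("failed to load in-cluster config: %v", err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	id, _ := os.Hostname()

	// Hypothetical Lease lock in kube-system; the log above shows an
	// Endpoints-based lock, but the election flow is the same.
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "example-hostpath-lock", Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second, // how long an acquired lease stays valid
		RenewDeadline: 10 * time.Second, // must renew within this window or give up leadership
		RetryPeriod:   2 * time.Second,  // interval between renew attempts
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// The provisioner controller would run here, only while the lease is held.
				klog.Info("started leading; running controller")
				<-ctx.Done()
			},
			OnStoppedLeading: func() {
				// If the renew calls to the API server time out (as in the log above),
				// this fires and the process exits, so kubelet restarts the pod.
				klog.Fatal("leaderelection lost")
			},
		},
	})
}

Exiting on lost leadership is intentional (it prevents two replicas from provisioning the same volume), so the repeated restarts point at the API server responding too slowly to renew the lease within the deadline rather than at a crash inside the provisioner itself.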


Thanks,

Ashen

