Changing the Pod CIDR range in Kubernetes (including MicroK8s) is definitely possible, but it needs to be done carefully because it affects all networking between your pods.
- Your pods are getting IPs from the default CNI (Container Network Interface) range.
- In MicroK8s (and many other distros), the default is usually something like 10.1.0.0/24, which can overlap with existing corporate or service networks.
Changing the pod CIDR will break existing pods. You’ll need to delete/redeploy them (or recycle the whole cluster if it’s new and not production-critical).
Best Practices:
- Pick a non-overlapping private subnet (e.g., 10.14.0.0/16 or 172.30.0.0/16).
- Document this so future clusters don’t overlap.
- If you’re in production, test the new CIDR in DEV/TEST first.
- If you’re already deep in PROD, a safer option may be to migrate workloads gradually to a new cluster with the desired subnet.
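Before committing to a new range, you can sanity-check a candidate pod CIDR against the networks you already route to. A minimal sketch using Python's standard `ipaddress` module (the `existing` subnet list is a made-up example; substitute your own corporate ranges):

```python
from ipaddress import ip_network

# Hypothetical subnets already in use on your network; replace with your own.
existing = [ip_network("10.1.0.0/16"), ip_network("172.16.0.0/20")]

def is_safe_pod_cidr(candidate: str) -> bool:
    """Return True if the candidate pod CIDR overlaps none of the known subnets."""
    cand = ip_network(candidate)
    return not any(cand.overlaps(net) for net in existing)

print(is_safe_pod_cidr("10.1.204.0/24"))  # overlaps 10.1.0.0/16 -> False
print(is_safe_pod_cidr("10.14.0.0/16"))   # no overlap -> True
```

Running a quick check like this against every subnet your pods must reach (DB hosts, integration VMs, VPN ranges) is an easy way to avoid repeating the problem with the next cluster.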
The best approach is to create the entire IFS Cloud namespace on a non-overlapping private subnet.
The networks inside k8s are private, i.e. they will never disturb other applications outside the k8s cluster. However, if a pod inside a k8s cluster with network CIDR 10.1.204.0/24 tries to access an external application, e.g. a DB with IP 10.1.204.100, the pod will not be able to reach the DB: it assumes the DB is inside the cluster's private network and never routes the call out to the external network.
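The routing trap described above is easy to verify: the DB address literally falls inside the pod range, so the CNI treats it as cluster-internal. A small illustration with Python's standard `ipaddress` module, using the addresses from this thread:

```python
from ipaddress import ip_address, ip_network

pod_cidr = ip_network("10.1.204.0/24")  # example pod CIDR from this thread
db_ip = ip_address("10.1.204.100")      # external DB address

# The DB address is a member of the pod range, so traffic to it is kept
# inside the cluster instead of being routed to the external network.
print(db_ip in pod_cidr)  # -> True
```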
What kind of issues do you have with the listed pods? They only connect to the DB, not to other external applications.
To change the private network in k8s, use the script here: https://docs.ifs.com/techdocs/25r1/070_remote_deploy/010_installing_fresh_system/030_preparing_server/50_management_server/010_Setting_up_An_Environment/015_Custom_installation/#change_pod_ip_range
Hi @hhanse ,
Thank you for the reply. We have an internal VM running in the 10.1.x.x range, and we've set up a third-party application to communicate with the IFS Cloud middleware. Because of this IP range overlap, as you mentioned in your comment, the communication has failed.
That's why I'm looking into changing the pod IP range to a different one. Thank you for your support.