Does anyone have advice or experience on how to safely change the pod IP range to a different subnet (like `10.14.x.x`)? Any tips or best practices would be really appreciated.
Thanks in advance!
Best answer by hhanse
The networks inside k8s are private, i.e. they will never interfere with applications outside the k8s cluster. However, if a pod inside a cluster with network CIDR 10.1.204.0/24 tries to access an external application, e.g. a DB with IP 10.1.204.100, the pod will not be able to reach the DB: it will assume the DB is inside the cluster's private network and never route the call out to the external network.
What kind of issues do you have with the listed pods? Do they only connect to the DB, or to other external applications as well?
Changing the Pod CIDR range in Kubernetes (including MicroK8s) is definitely possible, but it needs to be done carefully because it affects all networking between your pods.
Your pods are getting IPs from the default CNI (Container Network Interface) range.
In MicroK8s (and many other distros), the default is usually something like 10.1.0.0/16, which can overlap with existing corporate or service networks.
Changing the pod CIDR will break existing pods. You’ll need to delete/redeploy them (or recycle the whole cluster if it’s new and not production-critical).
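For a Calico-backed MicroK8s install, the change itself can be sketched roughly as below. This is a config-change sketch, not an exact recipe: the file paths, the default CIDR being replaced, and the exact restart steps can differ between MicroK8s versions, so verify each value against your own install (and back the files up) before editing.

```shell
# Rough sketch for MicroK8s with the bundled Calico CNI.
# Paths and the current default CIDR may differ on your version -- check first.

# 1. Point the Calico IP pool at the new, non-overlapping range.
sudo sed -i 's|10.1.0.0/16|10.14.0.0/16|g' \
    /var/snap/microk8s/current/args/cni-network/cni.yaml

# 2. Keep kube-proxy's --cluster-cidr in sync with the pool.
sudo sed -i 's|10.1.0.0/16|10.14.0.0/16|g' \
    /var/snap/microk8s/current/args/kube-proxy

# 3. Re-apply the CNI manifest and restart MicroK8s.
microk8s kubectl apply -f /var/snap/microk8s/current/args/cni-network/cni.yaml
microk8s stop && microk8s start

# 4. Delete/redeploy existing pods so they pick up IPs from the new range.
```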
Best practices:
Pick a non-overlapping private subnet (e.g., 10.14.0.0/16 or 172.30.0.0/16).
Document this so future clusters don’t overlap.
If you’re in production, test the new CIDR in DEV/TEST first.
If you’re already deep in PROD, a safer option may be to migrate workloads gradually to a new cluster with the desired subnet.
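Before committing to a new range, it is cheap to verify on paper that the candidate pod CIDR does not overlap anything the pods must reach. A small sketch using Python's standard `ipaddress` module (the specific networks below are just the examples from this thread):

```python
import ipaddress

# Candidate pod CIDR from the question.
pod_cidr = ipaddress.ip_network("10.14.0.0/16")

# External networks the pods must be able to reach (examples from this thread).
external = [
    ipaddress.ip_network("10.1.0.0/16"),    # corporate VM range
    ipaddress.ip_network("10.1.204.0/24"),  # example DB subnet
]

for net in external:
    # overlaps() is True when any address is shared between the two networks.
    print(f"{pod_cidr} overlaps {net}: {pod_cidr.overlaps(net)}")
```

If any line prints `True`, pick a different pod CIDR before building the cluster.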
The best approach is to create the entire IFS Cloud namespace on a non-overlapping private subnet.
Thank you for the reply. We have an internal VM running in the 10.1.x.x range, and we've set up a third-party application to communicate with the IFS Cloud middleware. Because of this overlapping IP range, as you mentioned in your comment, the communication has failed.
That's why I'm looking into changing the pod IP range to a different one. Thank you for your support.
Maybe I am reading too much into this, or complicating it too much, but has anyone considered using CIDR to subdivide this down for each IFS Cloud environment?
eg. DEV = 10.10.201.64/26
TST = 10.10.201.128/26
PRD = 10.10.201.192/26
Thus making each environment unique while not using up too many available networks.
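For reference, the per-environment split above can be generated with Python's standard `ipaddress` module; the `10.10.201.0/24` parent network is just the example from this post:

```python
import ipaddress

# Subdivide one /24 into four /26 blocks, one per IFS Cloud environment.
parent = ipaddress.ip_network("10.10.201.0/24")
blocks = list(parent.subnets(new_prefix=26))

# The first /26 (10.10.201.0/26) is left spare in this scheme.
for env, block in zip(["(spare)", "DEV", "TST", "PRD"], blocks):
    print(env, block)
# DEV 10.10.201.64/26, TST 10.10.201.128/26, PRD 10.10.201.192/26
```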
There is no real reason to have different internal networks on dev/tst/prd. The only important thing is that the internal network CIDR shouldn't overlap with any external network that the internal pods might make calls to. dev/tst/prd will not be aware of each other's internal network ranges. For simplicity I would keep the dev/tst/prd internal network ranges the same if possible.
“Thus making each environment unique while not using up too many available networks.” is not really correct. The internal networks are not known outside the k8s cluster and will not use up any available networks. It's the other way around: if the internal network CIDR is the same as an external network, calls from the internal network to e.g. a database or DNS server will not be routed to the external network.
Ok, so with that in mind, @darshana 's examples show a class A and a class B scheme, but is there any reason I cannot assign a private class C range to the Kubernetes cluster (e.g. 192.168.254.0/24) for each of my application (middle-tier) servers?