Does anyone have an idea about how garbage collection and memory reclamation work in IFS Cloud K8s pods?
Recently, I was investigating one of our customers' ifsapp-connect pods, which was crashing repeatedly due to OOM kills, and for further analysis I created a Java heap dump of the pod.
I noticed that before the heap dump was generated the pod was consuming 2397Mi, but soon after the heap dump was generated this dropped to 907Mi.
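For context, my understanding is that a heap dump of live objects forces a full GC before the objects are written out. The sketch below shows one way to trigger such a dump from inside the JVM via the HotSpotDiagnosticMXBean; the output path and the live=true flag are my own example values, not necessarily how the dump was actually taken on the pod:

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

public class HeapDumpExample {
    public static void main(String[] args) throws Exception {
        // Obtain the HotSpot diagnostic MXBean from the platform MBean server.
        HotSpotDiagnosticMXBean diagnostic = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);

        // live = true dumps only reachable objects, which triggers a full GC first.
        // (jcmd <pid> GC.heap_dump behaves the same way unless -all is specified.)
        diagnostic.dumpHeap("/tmp/ifsapp-connect.hprof", true);
    }
}
```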


The reason for this is that when the Java heap dump is generated, a full garbage collection (GC) runs and dead objects are reclaimed.
As far as my investigation goes, this reduction is only visible when a heap dump is generated. When I trigger a GC run manually, the JVM does not release the freed memory back to the operating system; it keeps the cleared heap reserved for future allocations. Therefore, we cannot see a significant difference in the kubectl top output.
Yet I can see that manually running the GC does reduce the heap usage reported by the JVM itself.
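To illustrate the difference, here is a small sketch (my own example, not IFS code) comparing the heap "used" value, which drops after a manual GC, with the "committed" value, which is the memory the JVM keeps reserved from the OS and which is closer to what kubectl top reflects:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class GcVisibilityExample {
    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();

        MemoryUsage before = memory.getHeapMemoryUsage();
        System.out.printf("before GC: used=%dMi committed=%dMi%n",
                before.getUsed() >> 20, before.getCommitted() >> 20);

        // Suggest a full GC (effectively the same as System.gc() or jcmd <pid> GC.run).
        memory.gc();

        MemoryUsage after = memory.getHeapMemoryUsage();
        System.out.printf("after GC:  used=%dMi committed=%dMi%n",
                after.getUsed() >> 20, after.getCommitted() >> 20);

        // 'used' typically drops after the GC, but 'committed' (what the JVM has
        // reserved from the OS) usually stays the same unless the collector is
        // configured to shrink the heap.
    }
}
```

As far as I understand, whether the JVM ever shrinks the committed heap depends on the collector and flags such as -XX:MinHeapFreeRatio/-XX:MaxHeapFreeRatio; by default it tends to keep the freed memory reserved, which would match what I am seeing.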


The OOM kill issue is very common in IFS Cloud, especially for the ifsapp-odata, ifsapp-reporting, ifsapp-reporting-ren, ifsapp-connect, and ifsapp-client-services pods.
Based on the above findings, I would like to know how garbage collection and memory reclamation are handled within the K8s pods. Is this the default JVM/K8s behaviour, or has IFS also configured something specifically?