Question

IFS Cloud K8s PODs Garbage memory collection

  • January 5, 2026
  • 11 replies
  • 107 views


Does anyone have an idea about garbage memory collection in IFS Cloud K8s PODs?

Recently, I was investigating an ifsapp-connect pod at one of our customers that was crashing multiple times due to OOM, and for further analysis I created a Java heap dump of the pod.

Then I noticed that before heap dump generation, it was consuming 2397Mi, but soon after heap dump generation, it was reduced to 907Mi.
 

Before heap dump

 

After heap dump


The reason for this is that when I generate the Java heap dump, the full garbage collector (GC) runs and dead objects get reclaimed.

As far as my investigation goes, this can only be achieved by generating a heap dump. When we trigger a GC run manually, the JVM does not release the memory back to the OS; it keeps the cleared memory reserved for future use. Therefore, we cannot see a significant difference in the kubectl top output.

Yet I can still see that manually running the GC has reduced the heap memory usage.
 

Before the manual run of GC
After the manual run of GC
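For reference, the manual GC run above can be requested with jcmd. This is only a sketch of one way to do it, assuming jcmd is available inside the container image and <PID> is the JVM process ID reported by jps; GC.run is a request to the JVM, not a guarantee of when or how much is collected.

kubectl exec -n <namespace> <pod name> -c <deployment name> -- jcmd <PID> GC.run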


OOM kills are very common in IFS Cloud, especially for the ifsapp-odata, ifsapp-reporting, ifsapp-reporting-ren, ifsapp-connect, and ifsapp-client-services pods.

Given the above findings, I would like to know how garbage collection and memory are handled within the K8s pods. Is this default JVM/K8s behaviour, or has IFS configured something specifically?
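For anyone who wants to reproduce the numbers above, the usage figures and restart reasons can be checked with standard kubectl commands; a minimal sketch, with placeholders for the environment-specific names:

Current usage per container
kubectl top pod -n <namespace> --containers | grep ifsapp-connect

Configured requests/limits (under Containers in the output)
kubectl describe pod <pod name> -n <namespace>

Reason for the last restart (shows OOMKilled after an OOM kill)
kubectl get pod <pod name> -n <namespace> -o jsonpath="{.status.containerStatuses[0].lastState.terminated.reason}"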

11 replies

  • Sidekick (Customer)
  • January 6, 2026

@HashanD, is it possible to share how you generated the heap dump and manually triggered the GC, please?


  • Do Gooder (Customer)
  • January 6, 2026

I tried that myself.

  1. Run “jps” to find the process ID
  2. Create a dump with “jcmd <ID> GC.heap_dump dump.hprof”

In my case, the used memory didn’t change much, less than 10% for sure.


  • Author
  • Sidekick (Partner)
  • January 6, 2026

@HashanD, is it possible to share how you generated the heap dump and manually triggered the GC, please?

 

Linux
kubectl exec -n <namespace> <pod name> -c <deployment name> -- jps | grep .jar

Windows
kubectl exec -n <namespace> <pod name> -c <deployment name> -- jps | Select-String ".jar"

In the output, the first column is the PID.

Linux
kubectl exec -n <namespace> -it <pod name> -c <deployment name> -- jmap -dump:live,format=b,file=/tmp/heap-ifsapp-<container name>.hprof <PID>

Windows
kubectl exec -n <namespace> -it <pod name> -c <deployment name> -- jmap "-dump:live,format=b,file=/tmp/heap-ifsapp-<container name>.hprof" <PID>

The heap dump will be saved inside the pod at /tmp/heap-ifsapp-<container name>.hprof.

Copy the heap dump file to the local host:

Linux / Windows
kubectl cp -n <namespace> -c <deployment name> <pod name>:<full path of the file inside the pod> <target location and file name on the local machine>
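To make this concrete, an illustrative run could look like the following. The pod name, container name and PID are made-up examples, not values from the environment above; the resulting .hprof file can then be opened in a heap analyzer such as Eclipse MAT or VisualVM.

kubectl exec -n <namespace> ifsapp-connect-7d9f8c6b5-abcde -c ifsapp-connect -- jps | grep .jar
kubectl exec -n <namespace> -it ifsapp-connect-7d9f8c6b5-abcde -c ifsapp-connect -- jmap -dump:live,format=b,file=/tmp/heap-ifsapp-connect.hprof 187
kubectl cp -n <namespace> -c ifsapp-connect ifsapp-connect-7d9f8c6b5-abcde:/tmp/heap-ifsapp-connect.hprof ./heap-ifsapp-connect.hprof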

 


  • Author
  • Sidekick (Partner)
  • January 6, 2026

I tried that myself.

  1. Run “jps” to find the process ID
  2. Create a dump with “jcmd <ID> GC.heap_dump dump.hprof”

In my case, the used memory didn’t change much, less than 10% for sure.

kubectl exec -n <namespace> -it <pod name> -c <deployment name> -- jmap -dump:live,format=b,file=/tmp/heap-ifsapp-<container name>.hprof <PID>

Have you used dump:live?
When you use live, the JVM must first run a full garbage collection to determine which objects are still reachable. If you include live, temporary collections, buffers, caches held via weak/soft references, etc. are forced out by the GC, and the JVM then releases memory back to the OS.

Also, you will have to wait a few seconds to see a significant difference in the kubectl top output.
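A simple way to watch that difference is to keep polling the per-container figures while the dump is being generated; a small sketch using the same placeholders as before:

Linux
watch -n 5 "kubectl top pod <pod name> -n <namespace> --containers"

Windows (PowerShell)
while ($true) { kubectl top pod <pod name> -n <namespace> --containers; Start-Sleep 5 }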


  • Do Gooder (Customer)
  • January 6, 2026

That does indeed make a difference: 75% used memory down to 25%.

Interesting! Definitely something worth analysing!


  • Author
  • Sidekick (Partner)
  • January 6, 2026

That does indeed make a difference: 75% used memory down to 25%.

Interesting! Definitely something worth analysing!

Yup.
But I forgot to mention that it's better to do this during off-peak hours, as heap dump generation may add some latency for the services currently executing in the pod.

I have raised an IFS Support case as well. I hope I will receive positive feedback from them 🙂


  • Hero (Employee)
  • January 7, 2026

First - global.scale is set to 100, I assume?

If you get OOM, that means the pod is out of memory, not that the GC is not working as it should.
In my experience, OOM happens when large Java metaspace structures are created. Java metaspace is not included in the GC and is not part of the Java memory sizing at pod startup.

Monitor it with:
jcmd <pid> VM.metaspace

You should not get frequent OOMs in pods. Increase the number of pods to spread the metaspace over more pods. If the issue persists, report it as a bug.
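If jcmd is available inside the container image, the same check can be run through kubectl without opening a shell in the pod; a sketch along the lines of the commands earlier in the thread:

kubectl exec -n <namespace> <pod name> -c <deployment name> -- jcmd <PID> VM.metaspace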

 


  • Author
  • Sidekick (Partner)
  • January 7, 2026

First - global.scale is set to 100, I assume?

If you get OOM, that means the pod is out of memory, not that the GC is not working as it should.
In my experience, OOM happens when large Java metaspace structures are created. Java metaspace is not included in the GC and is not part of the Java memory sizing at pod startup.

Monitor it with:
jcmd <pid> VM.metaspace

You should not get frequent OOMs in pods. Increase the number of pods to spread the metaspace over more pods. If the issue persists, report it as a bug.

 

Thank you for your input. I understand what an OOM condition means. If garbage collection is working correctly, the pod should have additional memory available to use.

For some of our customers, we have configured four replicas for the ifsapp-odata deployment and increased both memory requests and memory limits. Despite this, the pods still encounter OOM restarts.

My main point is that when I generate a Java heap dump, a full garbage collection is triggered, and a significant amount of memory is immediately released back to the system. This clearly shows that a large amount of reclaimable memory was being retained.

Another user has confirmed observing the same behavior as described above.


  • Hero (Employee)
  • January 7, 2026

What I'm trying to say is that the oData pod is running out of the memory that is not handled by the GC. Memory areas like these:
 

  • Metaspace
  • Code cache
  • Thread stacks
  • JIT compiler memory
  • Direct ByteBuffers (off‑heap)
  • GC native structures
  • NIO buffers
  • Class data sharing
  • Other JVM native allocations
  • …. and other Linux OS memory 

Even if you set the pod's memory limit really large, that will not affect these memory areas. If you look at your GC snapshots, you can see that none of the metaspace and class space memory is released by your GC, just the “normal” heap memory. In your picture we can also see that 166 MB of metaspace memory is used (which is a big chunk of the available 200 MB in oData).

There is a hidden parameter that controls how much of the requested memory should be used for this non-heap memory. For oData it is set to 200 MB, and the metaspace alone uses 166 MB of that. That leaves 34 MB for the rest of the memory areas in the list above, so try increasing it to 500 to see if that reduces the OOMs.

ifsappodata:
   minStatic: 500

 
This is not an official IFS recommendation, as it comes through a community discussion. Using hidden parameters is not really supported, but it might help the troubleshooting via your normal support channel.
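As a side note on the non-heap areas listed above: the JVM can break down its own native memory usage with jcmd, but only if Native Memory Tracking was enabled at JVM startup (-XX:NativeMemoryTracking=summary), which may not be the case in a standard IFS Cloud container; a sketch, not an official procedure:

kubectl exec -n <namespace> <pod name> -c <deployment name> -- jcmd <PID> VM.native_memory summary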


  • Author
  • Sidekick (Partner)
  • January 7, 2026

What I'm trying to say is that the oData pod is running out of the memory that is not handled by the GC. Memory areas like these:
 

  • Metaspace
  • Code cache
  • Thread stacks
  • JIT compiler memory
  • Direct ByteBuffers (off‑heap)
  • GC native structures
  • NIO buffers
  • Class data sharing
  • Other JVM native allocations
  • …. and other Linux OS memory 

Even if you set the pod's memory limit really large, that will not affect these memory areas. If you look at your GC snapshots, you can see that none of the metaspace and class space memory is released by your GC, just the “normal” heap memory. In your picture we can also see that 166 MB of metaspace memory is used (which is a big chunk of the available 200 MB in oData).

There is a hidden parameter that controls how much of the requested memory should be used for this non-heap memory. For oData it is set to 200 MB, and the metaspace alone uses 166 MB of that. That leaves 34 MB for the rest of the memory areas in the list above, so try increasing it to 500 to see if that reduces the OOMs.

ifsappodata:
   minStatic: 500

 
This is not an official IFS recommendation, as it comes through a community discussion. Using hidden parameters is not really supported, but it might help the troubleshooting via your normal support channel.

Yeah, I get what you mean, and thank you for providing this hidden parameter.


Ced
  • Do Gooder (Customer)
  • January 15, 2026

Hi all 

@HashanD Could the minStatic parameter solve your issue with the OOM of the pods?

We are having the same issue: the ifsapp-odata pod gobbles up every bit of memory, and as soon as it goes over 95% of the pod's memory resources it restarts with an OOM.
Is restarting the pods the intended solution from IFS when garbage collection is not working? ;) Or can we do something else there?

Thank you very much for your help.