Problem:

The FTP reader cannot proceed with the Optimity integration - a java.lang.OutOfMemoryError: Java heap space error appears in the application message.

Checked the Integration Server logs (ifsconnect, intserver and others) and found "Caused by: java.lang.OutOfMemoryError: Java heap space" in the intserver logs.

  • Checked the heap dump and could not find a heap issue.

  • Of the two generated XML files, one is 75 MB and the other is 56 MB. The file reader can still read them: with a 2 GB heap, a file of about 100 MB can be processed. For larger files, the heap space allocated for the Integration Server will have to be increased accordingly.

  • This issue was resolved by increasing the Integration Server memory to 8 GB (16 GB across both Integration Servers), and the file was then processed within the 8 GB heap. For now the issue is sorted temporarily, but if the file grows beyond 56 MB the issue can reoccur in the same environments, and tweaking the memory between managed servers will no longer be a solution.
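
Whether such a heap increase has actually taken effect can be confirmed from inside the JVM. A minimal Java sketch, assuming the Integration Server heap is set through standard JVM arguments such as -Xmx8g (the class name here is illustrative, not part of IFS):

```java
/**
 * Prints the maximum heap the running JVM will use (the effective -Xmx value).
 * Useful for confirming that a configured heap increase has taken effect.
 */
public class HeapCheck {
    public static void main(String[] args) {
        long maxMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("Max heap: " + maxMb + " MB");
    }
}
```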

Suggestions to avoid this issue

  1. The file size can be reduced by compressing it before writing the file to the FTP location. This has to be done at the customer's end (a compression sketch follows this list).
  2. Optimize the Optimity integration when writing the file to the FTP location.
  3. Find an alternative way to achieve this process without using IFS Connect.
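
To illustrate suggestion 1, below is a minimal Java sketch of how the sending system could GZIP-compress the XML before uploading it. The class and file names are hypothetical, and the receiving side would need a matching decompression step before IFS Connect processes the payload:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.zip.GZIPOutputStream;

/**
 * GZIP-compresses a file before it is written to the FTP location.
 * XML typically compresses very well, so a 75 MB document shrinks
 * to a fraction of its original size.
 */
public class FtpFileCompressor {

    public static Path gzip(Path source) throws IOException {
        Path target = source.resolveSibling(source.getFileName() + ".gz");
        try (InputStream in = Files.newInputStream(source);
             OutputStream out = new GZIPOutputStream(Files.newOutputStream(target))) {
            in.transferTo(out); // streams the data, so heap usage stays small
        }
        return target;
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical file name for the generated XML
        Path compressed = gzip(Path.of("optimity_export.xml"));
        System.out.println("Wrote " + compressed);
    }
}
```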


Requested from R&D

Are there any limitations to IFS Connect, and are there opportunities for optimizing its performance and capabilities?

Hi @udlelk ,

Could you please assist with this question?


@Takesha Kaluarachchi Please find the answers for your questions below:

IFS Connect is a lightweight integration broker and it has known limitations with file integrations. One of those limitations is that IFS Connect's memory consumption is high when processing files: to read/process a file of around 100 MB, IFS Connect consumes around 2 GB of heap memory.

This sizing is possible assuming the messages are processed InOrder (one at a time). If the execution is Parallel, which means 100 MB × (n parallel threads), then the heap consumption also increases along with the number of running parallel threads. In addition, the other integrations running in the environment consume the same heap memory, which should be a considerable factor in the memory allocation.
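
For example, under this rule of thumb, three 100 MB messages processed in Parallel would need roughly 3 × 2 GB = 6 GB of heap for this integration alone, before allowing headroom for the other integrations sharing the same Integration Server.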

With that IFS Connect Framework limitation in mind, R&D can recommend some changes/modifications to the integration solution:

  1. Send files in smaller chunks – requires changes in the file-pushing solution and may also require changes in the data-processing solution (a chunking sketch follows this list).
  2. Send files in compressed format – requires changes in both the file-pushing solution and the data-processing solution.
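
To make option 1 concrete, here is a minimal Java sketch of the chunking idea, assuming a record-per-line payload; for XML, the split would additionally have to fall on record boundaries so that every chunk remains a well-formed document. The file name and chunk size are illustrative assumptions:

```java
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

/**
 * Splits a large record-per-line file into smaller chunk files so that each
 * one stays well below the size IFS Connect can comfortably process.
 */
public class FileChunkSplitter {

    private static final long MAX_CHUNK_BYTES = 10L * 1024 * 1024; // ~10 MB per chunk

    public static void split(Path source) throws IOException {
        int chunk = 0;
        long written = 0;
        BufferedWriter out = null;
        try (BufferedReader in = Files.newBufferedReader(source)) {
            String line;
            while ((line = in.readLine()) != null) {
                // Open the next chunk file when the current one is full
                if (out == null || written >= MAX_CHUNK_BYTES) {
                    if (out != null) out.close();
                    Path target = source.resolveSibling(
                            source.getFileName() + ".part" + (++chunk));
                    out = Files.newBufferedWriter(target);
                    written = 0;
                }
                out.write(line);
                out.newLine();
                written += line.length() + 1; // rough byte count, fine for sizing
            }
        } finally {
            if (out != null) out.close();
        }
    }

    public static void main(String[] args) throws IOException {
        split(Path.of("optimity_export.dat")); // hypothetical file name
    }
}
```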

Please note that optimizing the IFS Connect Framework to use less heap memory in file integrations requires a redesign of the framework, which would have to be done in a future IFS release.


@udlelk Have there been any changes to this in the latest releases?
We also have some issues with “java out of memory”, both when reading files through IFS Connect and when rendering Quick Reports to Excel format, which is the default action when selecting to send the output as mail.

Is there any documentation around this so we know which pods are involved for the different actions and how to size the pods?
This topic states that a 100 MB file will consume around 2 GB of heap memory. Does that mean a 200 MB file will consume 4 GB of heap memory, or how does the scaling work, so we know how to size?

