We have a barcode report that generates approximately 1,800 pages with 3 barcodes per page. The resulting XML is around 16 kB. When this report is printed to a logical printer, the issues start: the IFSAPP-REPORTING-REN pods start crash-looping with OOMKilled. The pod does not manage to update the status of the print job, so rendering starts all over again after every restart, ending up in a never-ending crash loop for the report rendering pod.
We increased the memory for the pod from the default of 2 GB to 6 GB and the printout then went through, but the pod used almost 5.8 GB of memory to complete it.
My question: in IFS Applications 8, 9 and 10 we had a system parameter to control memory usage and make the rendering process use disk, based on the size of the XML result:
**Breakpoint XML size for when to format Report Designer reports in memory or using disk storage (kB)** (Reporting): This parameter controls the disc cache. With it, you can make sure that small reports are handled in memory without being swapped out to disc. The disc cache will slow PDF generation down by 30%, but large reports need to be swapped out to disc in order to save memory. The parameter sets the limit on the XML data size (kB); if the data is larger than this limit, the Formatting Processor (FOP) will use the disc cache. A good setting for this is 100 kB, which is also the default.
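For illustration, here is a minimal Java sketch of the idea behind such a breakpoint, assuming a plain Apache FOP pipeline. The threshold constant and the memory-versus-disk buffering of the output are my own assumptions for the example; the real IFS disc cache sits inside the Formatting Processor and may work quite differently.

```java
import java.io.*;
import javax.xml.transform.*;
import javax.xml.transform.sax.SAXResult;
import javax.xml.transform.stream.StreamSource;
import org.apache.fop.apps.*;

public class BreakpointRenderSketch {

    // Hypothetical threshold mirroring the old "Breakpoint XML size" parameter (kB).
    private static final long BREAKPOINT_KB = 100;

    public static void render(File xml, File xslt, File pdf) throws Exception {
        FopFactory fopFactory = FopFactory.newInstance(new File(".").toURI());

        boolean useDisk = xml.length() > BREAKPOINT_KB * 1024;

        // Small reports: buffer the PDF in memory.
        // Large reports: stream straight to a file so the heap is not the limit.
        try (OutputStream out = useDisk
                ? new BufferedOutputStream(new FileOutputStream(pdf))
                : new ByteArrayOutputStream()) {

            Fop fop = fopFactory.newFop(MimeConstants.MIME_PDF, out);
            Transformer transformer = TransformerFactory.newInstance()
                    .newTransformer(new StreamSource(xslt));
            transformer.transform(new StreamSource(xml),
                                  new SAXResult(fop.getDefaultHandler()));

            if (!useDisk) {
                // Flush the in-memory buffer to the target file at the end.
                try (OutputStream fileOut = new FileOutputStream(pdf)) {
                    ((ByteArrayOutputStream) out).writeTo(fileOut);
                }
            }
        }
    }
}
```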
Where did this parameter go in IFS Cloud? Is it fixed in IFS Cloud so that rendering switches to disk automatically, to avoid the crash-looping issue we see now? Currently the only way to fix the issue is to delete the print job that is stuck in status Working.
Or should a solution be implemented that keeps track of how many times rendering of a report has been attempted, so that processing can be stopped instead of retrying forever? A rough sketch of what I mean is below.
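Something along these lines, where the counter is incremented and persisted before each rendering attempt so that a crash (OOMKilled) still counts as an attempt. The PrintJob class, status values and attempt limit are all assumptions for the sake of the sketch, not the actual IFS data model:

```java
public class RenderRetryGuard {

    static final int MAX_ATTEMPTS = 3; // assumed limit

    // Simplified stand-in for a persisted print job record.
    static class PrintJob {
        final String id;
        String status;        // e.g. "Working", "Error"
        int renderAttempts;   // counter, stored with the job
        PrintJob(String id) { this.id = id; this.status = "Working"; }
    }

    /** Returns true if the job may be rendered, false if it should be parked. */
    static boolean claimForRendering(PrintJob job) {
        // Count the attempt *before* rendering starts, so a crash mid-render
        // is still recorded and the loop eventually stops.
        job.renderAttempts++;
        if (job.renderAttempts > MAX_ATTEMPTS) {
            job.status = "Error"; // stop the crash loop instead of retrying forever
            return false;
        }
        return true;
    }

    public static void main(String[] args) {
        PrintJob job = new PrintJob("BARCODE-REPORT-1");
        for (int i = 0; i < 5; i++) {
            System.out.println("Attempt " + (i + 1) + ": render="
                    + claimForRendering(job) + ", status=" + job.status);
        }
    }
}
```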
Kjell Åge