Solved

Reporting Pods Require Frequent Restarts in IFS 22R2

  • March 30, 2026
  • 2 replies
  • 49 views


Hi,

We are currently experiencing an issue in IFS 22R2 (22.2.9) where the reporting pods require frequent restarts, several times each month during month-end activities. We would like to understand whether this is a known issue, and we are seeking guidance on the root cause and the recommended resolution.

When the issue occurs, print jobs enter a “Waiting” status within IFS. No explicit errors are logged, and pod health indicators appear normal. However, report processing does not continue until the reporting pod is restarted.

Restarting the ifsapp-reporting-<id> pod (stop followed by start) immediately resumes report processing. This behavior is observed more frequently during high-volume printing.

 

Observations During the Issue

  • Print jobs remain in Waiting status
  • No errors are displayed against the print jobs
  • Pod status appears healthy
  • Restarting the reporting pod resolves the issue temporarily

Request

We have noted from community discussions that a similar issue has been addressed in later versions of IFS. We would appreciate your assistance in confirming:

  • Whether this is a known issue in 22R2
  • Details of any fix or configuration changes applied in later versions
  • Whether the same fix or a recommended workaround can be safely implemented in our current environment

We look forward to your guidance and recommendations.

Thank you for your support.

Regards,

Hruday Gupta.

Best answer by Lingesan08

Hi @Hruday Gupta,

Yes, this behavior is commonly observed in IFS Cloud 22R2 (22.2.x), especially during high-volume reporting scenarios such as month-end processing.

Based on your symptoms:

  • Jobs stuck in Waiting
  • No errors in UI
  • Pod appears healthy
  • Restart fixes immediately

This strongly indicates an internal processing stall, not a Kubernetes health issue.

How to Confirm the Root Cause from Logs

When the issue occurs, capture logs from the reporting pod:

# Capture the last 30 minutes of reporting-pod logs
kubectl logs -n <namespace> ifsapp-reporting-<id> --since=30m > reporting.log
# Record pod events, restart counts and container state
kubectl describe pod -n <namespace> ifsapp-reporting-<id> > describe.txt
# Snapshot current CPU/memory usage (requires metrics-server)
kubectl top pod -n <namespace>

 

Check for These Patterns

1. Thread / Worker Exhaustion (most common)

  • No new “processing job” logs
  • Jobs remain in the queue but are not picked up
  • Pod is alive but inactive

Confirms: the reporting engine is stuck internally.

2. Memory / JVM Pressure

Look for:

  • OutOfMemoryError
  • Java heap space
  • Long GC pauses

Confirms: the pod becomes unresponsive under load.

3. Connection / Queue Issues

Look for:

  • timeout
  • connection reset
  • pool exhausted

Confirms: reporting lost its connection to the queue/DB.

4. Kubernetes Resource Issues

Check:

  • OOMKilled
  • CPU throttling
  • Node pressure

If none of these are present, the issue is inside the reporting service itself.
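To make the triage above repeatable, the captured reporting.log can be scanned for each pattern category in one pass. A minimal sketch, assuming the log was saved as reporting.log with the kubectl command above; the sample log lines here are illustrative stand-ins, not actual IFS log output, so adjust the patterns to what your pods really emit:

```shell
# Illustrative stand-in for a captured pod log; real IFS log lines will differ.
cat > reporting.log <<'EOF'
2026-03-30 01:02:03 INFO  starting report worker pool
2026-03-30 01:05:10 WARN  connection reset by peer
2026-03-30 01:07:44 ERROR java.lang.OutOfMemoryError: Java heap space
EOF

# Scan one category at a time; a hit narrows down the likely stall cause.
check() {
  label=$1; pattern=$2
  if grep -Eqi "$pattern" reporting.log; then
    echo "MATCH: $label"
  else
    echo "clear: $label"
  fi
}

check "JVM memory pressure"   'OutOfMemoryError|Java heap space|GC pause'
check "Connection/queue loss" 'timeout|connection reset|pool exhausted'

# Thread exhaustion shows up as an *absence* of activity rather than an error:
if ! grep -qi 'processing job' reporting.log; then
  echo "suspect: no job-processing activity in the captured window"
fi
```

If every category comes back clear and there is still no job-processing activity, that points at the silent internal stall described in pattern 1.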

Key Confirmation Pattern

If you observe:

  • No errors
  • No processing logs
  • Jobs stuck in Waiting
  • A restart instantly resumes processing

then the root cause is a thread/queue processing stall inside the reporting pod. (This matches known behavior reported in IFS community discussions.)
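If the stall needs harder evidence than an absence of logs, a JVM thread dump is the direct confirmation. Assuming the reporting container image ships JDK tooling (an assumption; not every image does), a dump can be taken with `kubectl exec -n <namespace> ifsapp-reporting-<id> -- jcmd 1 Thread.print > threads.txt`. The sketch below then tallies thread states in such a dump; the sample dump is synthetic:

```shell
# Synthetic jcmd/jstack-style dump standing in for a captured threads.txt.
cat > threads.txt <<'EOF'
"report-worker-1" #12 prio=5 tid=0x1 nid=0x2 waiting on condition
   java.lang.Thread.State: WAITING (parking)
"report-worker-2" #13 prio=5 tid=0x3 nid=0x4 waiting for monitor entry
   java.lang.Thread.State: BLOCKED (on object monitor)
"report-worker-3" #14 prio=5 tid=0x5 nid=0x6 runnable
   java.lang.Thread.State: RUNNABLE
EOF

# A worker pool where every thread is BLOCKED/WAITING and none is RUNNABLE
# is the classic signature of the internal processing stall described above.
blocked=$(grep -c 'State: BLOCKED' threads.txt)
waiting=$(grep -c 'State: WAITING' threads.txt)
runnable=$(grep -c 'State: RUNNABLE' threads.txt)
echo "blocked=$blocked waiting=$waiting runnable=$runnable"
```

Two dumps taken a minute apart showing the same threads parked in the same place is stronger evidence than a single snapshot.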

Recommended Fix

Immediate:

  • Increase pod memory & CPU
  • Monitor JVM heap usage
  • Split large print batches

Medium-term:

  • Scale out the reporting pods (avoid a single-pod bottleneck)

Long-term:

  • Upgrade to a newer IFS version (23R1/23R2 or later), where reporting stability improvements are available
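For the "monitor JVM heap / pod memory" item, even a simple watchdog over `kubectl top pod` output can flag the pod before it stalls. A sketch under stated assumptions: the sample numbers, the 4Gi limit, and the 80% warning threshold are all illustrative, and the output would normally come from `kubectl top pod -n <namespace> > top.txt` rather than a here-doc:

```shell
# Sample `kubectl top pod` output (illustrative numbers).
cat > top.txt <<'EOF'
NAME                      CPU(cores)   MEMORY(bytes)
ifsapp-reporting-abc123   850m         3400Mi
EOF

limit_mi=4096   # assumed pod memory limit of 4Gi; use your pod's real limit
used_mi=$(awk '/ifsapp-reporting/ {gsub(/Mi/,"",$3); print $3}' top.txt)
pct=$(( used_mi * 100 / limit_mi ))
echo "reporting pod memory: ${used_mi}Mi / ${limit_mi}Mi (${pct}%)"
if [ "$pct" -ge 80 ]; then
  echo "WARN: approaching memory limit; stall risk during month-end load"
fi
```

Running a check like this on a schedule during month-end gives early warning, so the batch-splitting and restart decisions can be made before jobs pile up in Waiting.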

Conclusion

1. This is a known stability issue under high load in 22R2.
2. A restart works because it resets the stuck processing threads.
3. The root cause is typically one of:

  • Thread exhaustion
  • Memory pressure
  • Queue handling limitations

Connect

If you need help analyzing logs or tuning your setup, feel free to connect:

LinkedIn: www.linkedin.com/in/lingesanr86

Happy to assist further.

2 replies

  • Hero (Partner)
  • Answer
  • March 31, 2026



  • Author
  • Do Gooder (Partner)
  • April 1, 2026


Thank you for the detailed explanation. This information was very helpful. I will try to collect the logs when the issue reoccurs, and I will definitely reach out if I need any further assistance. Thank you for your time and support.