Question

BPA Workflow - Size

  • January 31, 2025
  • 4 replies
  • 140 views


We are currently implementing a workflow that allows us to create customer sales orders.

Tests on a small scale are okay and we are happy.

However, as soon as we move to a larger scale, the workflow crashes without an error.

The dataset is composed of the same order created X times, so we know we have no functional problem: iterating 30 creations, for example, works, but 100 does not.
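To make the test concrete, the driver is essentially a loop like the sketch below; the base URL, entity set, and payload fields are hypothetical stand-ins, not the real projection:

```python
import requests

# Hypothetical values: the real projection URL, entity set, and payload differ per install.
BASE = "https://ifs.example.com/main/ifsapplications/projection/v1"
ORDER = {"CustomerNo": "CUST-001", "WantedDeliveryDate": "2025-02-01"}

def create_order(session: requests.Session, payload: dict) -> None:
    """Create one customer order through a hypothetical OData entity set."""
    resp = session.post(f"{BASE}/CustomerOrderHandling.svc/CustomerOrderSet", json=payload)
    resp.raise_for_status()

with requests.Session() as s:
    s.headers["Authorization"] = "Bearer <token>"  # auth omitted for brevity
    for _ in range(100):  # 30 iterations pass; 100 is where the workflow dies
        create_order(s, ORDER)
```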

We have already limited the return of the projection as much as possible by using $select to retrieve only the desired columns.

We have not exceeded the line-reading limit per projection (10,000); at most we read 5,000 lines. We also don't think we have a problem with CASCADE_LIMIT, because we set it to 250,000 for testing.
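As an illustration of what those trimmed reads look like, here is a minimal sketch using $select and $top (the entity set and column names are hypothetical):

```python
import requests

BASE = "https://ifs.example.com/main/ifsapplications/projection/v1"  # hypothetical URL

def fetch_order_lines(session: requests.Session) -> list[dict]:
    """Read only the needed columns, staying well under the 10,000-line projection limit."""
    params = {
        "$select": "OrderNo,LineNo,CatalogNo,BuyQtyDue",  # only the desired columns
        "$top": "5000",                                   # we never read more than 5,000 lines
    }
    resp = session.get(f"{BASE}/CustomerOrderHandling.svc/OrderLineSet", params=params)
    resp.raise_for_status()
    return resp.json()["value"]  # OData returns the result rows in a "value" array
```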

We suspect instead a memory/CPU/disk limit on one of the pods.

Is there any documentation, or does anyone have experience, on how to size the memory, CPU, or disk for these pods?
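For anyone wanting to check what the pods are currently allowed, a sketch using the official kubernetes Python client (the namespace is an assumption about the environment):

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when run inside the cluster
v1 = client.CoreV1Api()

NAMESPACE = "ifs-cloud"  # assumption: the middle-tier namespace of the install

for pod in v1.list_namespaced_pod(NAMESPACE).items:
    for c in pod.spec.containers:
        print(pod.metadata.name, c.name,
              "requests:", c.resources.requests,  # e.g. {'cpu': '250m', 'memory': '512Mi'}
              "limits:", c.resources.limits)      # e.g. {'cpu': '1', 'memory': '2Gi'}
```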

@dsj @kamnlk @IFS RD Product Management

Thanks in advance

4 replies

  • Author
  • Hero (Customer)
  • 118 replies
  • February 3, 2025

IFS Support: There is no specific documentation or memory-sizing guide for workflows.


  • Author
  • Hero (Customer)
  • 118 replies
  • February 14, 2025

It was indeed a memory limitation on the OData pod.
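For completeness, one way to lift such a limit is to patch the deployment, sketched below; the namespace, deployment, and container names are assumptions, and on a managed IFS Cloud installation the sizing would normally be changed through the installation parameters rather than a direct patch:

```python
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

NAMESPACE = "ifs-cloud"      # assumption
DEPLOYMENT = "ifsapp-odata"  # assumption: the OData deployment's name

# Strategic-merge patch: raise only the memory limit of the named container.
patch = {"spec": {"template": {"spec": {"containers": [
    {"name": DEPLOYMENT, "resources": {"limits": {"memory": "4Gi"}}}
]}}}}

apps.patch_namespaced_deployment(DEPLOYMENT, NAMESPACE, patch)
```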

  • Do Gooder (Employee)
  • 6 replies
  • March 11, 2025

“However, as soon as we move to a larger scale, the workflow crashes without an error.” Does this mean there are no errors even in the OData pod logs?
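A quick sketch of pulling those logs with the kubernetes Python client (matching the pods by name is an assumption; adjust it to the install):

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

NAMESPACE = "ifs-cloud"  # assumption

for pod in v1.list_namespaced_pod(NAMESPACE).items:
    if "odata" in pod.metadata.name:  # assumption: OData pods carry "odata" in the name
        log = v1.read_namespaced_pod_log(pod.metadata.name, NAMESPACE, tail_lines=200)
        print(f"--- {pod.metadata.name} ---\n{log}")
```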


  • Author
  • Hero (Customer)
  • 118 replies
  • March 11, 2025

@Chanaka Perera we get an OOM error:

[screenshot of the OOM error]

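An OOM kill also shows up on the pod status itself; here is a minimal sketch of confirming it (same namespace assumption as the earlier sketches):

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

NAMESPACE = "ifs-cloud"  # assumption

for pod in v1.list_namespaced_pod(NAMESPACE).items:
    for cs in pod.status.container_statuses or []:
        term = cs.last_state.terminated
        if term and term.reason == "OOMKilled":  # set by the kubelet after an OOM kill
            print(pod.metadata.name, cs.name, "restart count:", cs.restart_count)
```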