Question

IFS Cloud Remote Deployment HA Sizing



Hi all,

 

Our customer is planning a 3-node IFS Cloud middle-tier cluster and has asked for the sizing of the three nodes. The available sizing for the Remote Deployment is based on a single-node installation and seems a bit high for a 3-node setup.

 

The customer has around 300 concurrent users.

 

Do you have any sizing guidance, and also a recommendation on how to scale the IFS Cloud pods across the cluster?

 

Thank you in advance.

 

Best Regards

Marion

5 replies

Malindu Fernando
Hero (Employee)

Hi @FleMarioE ,

For Kubernetes clusters (specifically the control plane), quorum needs to be maintained, which means that at least 2 out of the 3 nodes must be available. Based on that, I believe you should size the nodes so that the application can still run with just 2 of the 3 nodes available.
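To make that concrete, here is a rough back-of-the-envelope sketch in Python. The 128 GB figure is only an assumption (it is the minimum mentioned later in this thread), not official IFS sizing:

```python
# Rough sizing sketch, not official IFS guidance. Assumes the middle tier
# needs TOTAL_GB of memory in total and must keep running after losing 1 node.
TOTAL_GB = 128           # assumed total middle-tier memory requirement
NODES = 3
SURVIVORS = NODES - 1    # quorum: at least 2 of the 3 nodes must stay up

per_node_gb = TOTAL_GB / SURVIVORS
print(f"Each node should provide ~{per_node_gb:.0f} GB for the application")
# -> Each node should provide ~64 GB for the application
```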

Beyond that, the sizing depends on how you want to scale each micro-service/pod/deployment.
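As an illustration, scaling an individual deployment can be done through the Kubernetes API. This is only a sketch using the official Kubernetes Python client; the deployment and namespace names are placeholders, not confirmed IFS Cloud identifiers:

```python
# Hypothetical sketch: scale one deployment to 2 replicas with the official
# Kubernetes Python client. Names below are placeholders.
from kubernetes import client, config

config.load_kube_config()    # uses the current kubeconfig context
apps = client.AppsV1Api()

apps.patch_namespaced_deployment_scale(
    name="ifsapp-odata",     # placeholder deployment name
    namespace="ifs-cloud",   # placeholder namespace
    body={"spec": {"replicas": 2}},
)
```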

Best Regards,
Malindu Fernando


untsrikanth
Sidekick (Customer)
September 27, 2023

@Malindu Fernando Does this mean the HA remote installation can run on 2 nodes as well?


SamiL
Sidekick (Employee)
September 24, 2025

Any updates on this? Recent tests seem to indicate there may be disruption to the application when run with replicas:1; therefore, sizing should be done so that replicas:2 can be used, with each replica distributed to a different node.

However, no clear sizing for replicas:2 in an HA configuration has been given.

Please make clear whether the minimum memory requirement of 128 GB has to be met on every single node, or whether it is enough to distribute 128 GB across the cluster (e.g., 3x64 GB, so that 128 GB remains available with 2 nodes).
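For what it is worth, "each replica on a different node" is usually enforced with a pod anti-affinity rule (or a topology spread constraint). A minimal sketch, again with placeholder names and the Kubernetes Python client:

```python
# Hypothetical sketch: require the 2 replicas to be scheduled on different
# nodes via pod anti-affinity. Deployment/namespace/labels are placeholders.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

patch = {
    "spec": {
        "replicas": 2,
        "template": {
            "spec": {
                "affinity": {
                    "podAntiAffinity": {
                        "requiredDuringSchedulingIgnoredDuringExecution": [
                            {
                                "labelSelector": {"matchLabels": {"app": "ifsapp-odata"}},
                                "topologyKey": "kubernetes.io/hostname",
                            }
                        ]
                    }
                }
            }
        },
    }
}
apps.patch_namespaced_deployment(name="ifsapp-odata", namespace="ifs-cloud", body=patch)
```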


maheshmuz
Hero (Partner)
October 6, 2025

Have you tried using the manual method to join the nodes? We currently have an HA setup with three middleware servers and two replicas for the OData pods. Increasing the OData pods to two replicas helped resolve our application latency issue.

If you haven’t already, please try scaling the OData pods further. Currently, we cannot distribute memory individually across the nodes, as all three servers are using the same memory allocation.
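If it helps, a quick way to check how the replicas are actually spread is to list each pod with the node it landed on (a sketch, with assumed namespace and label selector):

```python
# Hypothetical sketch: print which node each OData pod runs on.
# Namespace and label selector are placeholders.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pods = core.list_namespaced_pod(
    namespace="ifs-cloud", label_selector="app=ifsapp-odata"
)
for pod in pods.items:
    print(pod.metadata.name, "->", pod.spec.node_name)
```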


Sidekick (Customer)
October 16, 2025

 

SamiL wrote:

Any updates on this? Recent tests seem to indicate there may be disruption to the application when run with replicas:1; therefore, sizing should be done so that replicas:2 can be used, with each replica distributed to a different node.

However, no clear sizing for replicas:2 in an HA configuration has been given.

Please make clear whether the minimum memory requirement of 128 GB has to be met on every single node, or whether it is enough to distribute 128 GB across the cluster (e.g., 3x64 GB, so that 128 GB remains available with 2 nodes).

 

@SamiL, any updates on your questions, please?



