Question

About the global.scale parameter in ifscloud-values.yaml.

  • 28 March 2024
  • 5 replies
  • 52 views

Userlevel 1
Badge +5

Hi All.

I have a question about the global.scale parameter in ifscloud-values.yaml.
Please let me know what kind of change will occur in CPU and memory usage when changing this parameter.

I would like to know more about what "the scaling of cpu/memory" means in the documentation.
When global.scale is set to 100, how much of the CPU and memory installed in the AP server will IFS use?
How does CPU and memory usage change if you set it to 50 or 10 instead of 100?

https://docs.ifs.com/techdocs/23r1/070_remote_deploy/010_installing_fresh_system/200_installing_ifs_cloud/035_ifs_cloud_ifsinstaller/030_installation_parameters/#general_parameters
=========================================================================
Defines the scaling of cpu/memory compared to the production mode 100%.
Default: 100
A scale of 10-20 is a small development environment.
A scale of aprox 50 is a small test environment.
Scale should be set to 100 in all production like environments.
=========================================================================


5 replies

Userlevel 5
Badge +12

Hi Wakabayashi-san,

The global.scale parameter affects Kubernetes deployments and containers. For deployments, it defines memory and cpu for resource requests and limits. Please see the Kubernetes documentation for further details.

For target containers, it defines an environment variable ODP_JAVA_OPTS with a value of -XX:MaxRAMPercentage=<value>. Please see the JDK documentation for further details.
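As a rough sketch of what that JVM flag does: -XX:MaxRAMPercentage caps the maximum heap at that percentage of the memory the JVM sees, which in Kubernetes is the container memory limit. The container size below is a hypothetical number, not from any real IFS pod:

```python
# -XX:MaxRAMPercentage=<value> caps the JVM max heap at <value>% of the
# memory visible to the JVM (the container limit in Kubernetes).
# The container size used here is hypothetical.

def max_heap_mb(container_mem_mb: float, max_ram_percentage: float) -> float:
    """Maximum JVM heap (MB) for a given container memory limit."""
    return container_mem_mb * max_ram_percentage / 100

print(max_heap_mb(2048, 75.0))  # -> 1536.0
print(max_heap_mb(1024, 75.0))  # -> 768.0
```

So when global.scale shrinks the container limit, the JVM heap shrinks proportionally with it.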

You can view the definition in the Helm templates. It also shows up in the verbose installer logs. Essentially, it multiplies the predefined additional container memory and CPU values by global.scale, with some scaling and rounding applied before the result is added to the minimum memory and CPU.

Please check a target deployment for the exact values, in particular the resources.limits, resources.requests, and containers.env sections:

kubectl get deployment/<name> -n <ifs namespace> -o yaml

Best regards -- Ben

Userlevel 5
Badge +10

Be aware that if you lower the memory size (using scale) in a test or dev environment, it might not be possible to test production-size reports or Connect messages.

There is no single answer to how much memory a pod will use when it is scaled down.

The formula is:

memory = minMem + addMem * (scale / 100)

minMem and addMem are set per pod. minMem is what is required to get the pod running without any load or requests sent to it; adding the full addMem makes it production grade. The scale parameter adds a fraction of addMem.

If you know where to look, you will find these values in the values.yaml of each subchart of the IFS Cloud Helm chart.
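The formula can be checked with hypothetical per-pod values (the minMem and addMem numbers below are made up for illustration, not taken from any real subchart):

```python
# Sketch of: memory = minMem + addMem * (scale / 100)
# minMem/addMem values are hypothetical, not from a real IFS subchart.

def scaled_memory(min_mem_mb: int, add_mem_mb: int, scale: int) -> int:
    """Memory (MB) a pod is given at a particular global.scale."""
    return min_mem_mb + round(add_mem_mb * scale / 100)

# Example pod: 500 MB just to start, plus up to 1500 MB under full load.
for scale in (10, 50, 100):
    print(scale, scaled_memory(500, 1500, scale))
# -> 10 650
# -> 50 1250
# -> 100 2000
```

Note that even at scale 10 the pod keeps its full minMem, which is why a scaled-down environment does not use 10% of the production memory.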

Userlevel 2
Badge +5

I am assuming you are using a Remote Deployment based on the question.

If it is a remote deployment, this information can be viewed in the Helm charts, but you need to understand Helm templates to read it.


When running the mtinstaller, the Helm charts are downloaded based on the chart version used and stored in the local Helm repository cache.

 

You can find the location of the Helm charts by checking the Helm environment variables with the following PowerShell command:

helm env

Usually the path of the repository cache is “C:\Users\<username>\AppData\Local\Temp\helm\repository”.

Once the chart file is extracted (first .tgz and then the .tar), the whole Helm chart can be inspected.

The file “ifs-cloud\charts\applicationtemplate\templates\_application.tpl” contains the calculation used for setting up memory and CPU. Below is from a 23R1 chart. Note that this can be overridden by an individual chart (container).

 

resources:
  limits:
    memory: "{{ .Values.memLimit | default (add .Values.minMemory ( round ( div ( mul .Values.addMemory .Values.global.scale ) 100 ) 0 )) }}M"
    {{ if .Values.global.limitCpu }}
    cpu: "{{ .Values.cpuLimit | default (add .Values.minCpu ( round (div ( mul .Values.addCpu .Values.global.scale ) 100 ) 0 )) }}m"
    {{ else }}
    cpu: "{{ .Values.cpuLimit | default 10000 }}m"
    {{ end }}
    ephemeral-storage: "{{ .Values.limitsEphemeralStorage }}M"
  requests:
    memory: "{{ .Values.memRequest | default ( round ( div ( mul (add .Values.minMemory ( round ( div ( mul .Values.addMemory .Values.global.scale ) 100 ) 0 )) .Values.global.memRatio ) 100 ) 0 ) }}M"
    cpu: "{{ .Values.cpuRequest | default ( round ( div ( mul (add .Values.minCpu ( round ( div ( mul .Values.addCpu .Values.global.scale ) 100 ) 0 )) .Values.global.cpuRatio ) 100 ) 0 ) }}m"
    ephemeral-storage: "{{ .Values.requestsEphemeralStorage }}M"
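As a rough sketch, the template arithmetic can be reproduced in Python. All input values below are hypothetical, and Sprig's integer div/mul is approximated here with floor division, so treat this as an illustration of the shape of the calculation rather than an exact reimplementation:

```python
# Sketch of the 23R1 template arithmetic (hypothetical inputs; Sprig's
# div/mul operate on integers, approximated here with floor division).

def limits_and_requests(min_mem, add_mem, min_cpu, add_cpu,
                        scale, mem_ratio, cpu_ratio):
    """Return (memory limit M, cpu limit m, memory request M, cpu request m)."""
    mem_limit = min_mem + (add_mem * scale) // 100   # limits.memory
    cpu_limit = min_cpu + (add_cpu * scale) // 100   # limits.cpu (limitCpu case)
    mem_request = (mem_limit * mem_ratio) // 100     # requests.memory
    cpu_request = (cpu_limit * cpu_ratio) // 100     # requests.cpu
    return mem_limit, cpu_limit, mem_request, cpu_request

# Hypothetical pod: 500M + 1500M memory, 100m + 900m CPU, scale 50,
# with requests set to 75% of limits via memRatio/cpuRatio.
print(limits_and_requests(500, 1500, 100, 900, 50, 75, 75))
# -> (1250, 550, 937, 412)
```

This also shows why requests and limits differ: memRatio and cpuRatio shrink the requests relative to the limits after scaling has been applied.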

Refer to https://helm.sh/ for details on Helm templates.

Sanjaya

Userlevel 1
Badge +5
Hi All.

Thank you for your information. I will review the content with the technical team and discuss the configuration of our environment.

Let me ask you one question: where can I set the pod's memory size?

Best Regards.
Userlevel 5
Badge +10

You should not change an individual pod’s memory size - from a support perspective, that is hardcoded. If you need more resources, you start a second pod with the replicas parameter.
