Hi Wakabayashi-san,
The global.scale parameter affects Kubernetes deployments and containers. For deployments, it determines the memory and CPU values used for resource requests and limits. Please see the Kubernetes documentation for further details.
For target containers, it defines an environment variable ODP_JAVA_OPTS with a value of -XX:MaxRAMPercentage=<value>. Please see the JDK documentation for further details.
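As a rough sketch of what -XX:MaxRAMPercentage does (the numbers below are made up for illustration, not taken from any IFS chart): the JVM derives its maximum heap from the container memory limit.

```shell
# Illustration only: how -XX:MaxRAMPercentage turns a container memory
# limit into a JVM max heap. Both values here are hypothetical.
container_mem_mb=2048        # container memory limit (M)
max_ram_percentage=75        # percentage passed via ODP_JAVA_OPTS
max_heap_mb=$(( container_mem_mb * max_ram_percentage / 100 ))
echo "${max_heap_mb}"        # 1536
```

So when global.scale shrinks the container limit, the JVM heap shrinks proportionally with it.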
You can view the definition in the Helm templates. It also shows up in the verbose installer logs. Essentially, it multiplies the predefined additional container memory and CPU values by global.scale, applies some scaling and rounding, and then adds the result to the minimum memory and CPU.
Please check a target deployment for specific details, in particular the resources.limits, resources.requests, and containers.env sections:
kubectl get deployment/<name> -n <ifs namespace> -o yaml
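To quickly compare the resulting values, the resources section of that YAML can be narrowed down with grep. The excerpt below is a hypothetical sample of what the output might contain; the real numbers depend on global.scale and the chart defaults.

```shell
# Hypothetical excerpt of a deployment's resources section, as it might
# appear in `kubectl get deployment ... -o yaml` output.
yaml='    resources:
      limits:
        cpu: 10000m
        memory: 1500M
      requests:
        cpu: 50m
        memory: 1125M'
# Keep only the cpu/memory lines for a quick side-by-side look.
echo "$yaml" | grep -E 'cpu|memory'
```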
Best regards -- Ben
Be aware that if you lower the memory size (using scale) in a test or dev environment, it might not be possible to test production-size reports or Connect messages.
There is no single answer for how much memory a pod will use when it is scaled down.
The formula is:
memory = minMem + addMem * (scale / 100)
minMem and addMem are set per pod: minMem is what is required to get the pod running with no load or requests sent to it, and adding addMem makes it production grade. scale controls what fraction of addMem is applied.
If you know how, you can find these values in the values.yaml of each subchart of the IFS Cloud Helm chart.
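A worked example of the formula above, using made-up minMem/addMem values (the real ones live in each subchart's values.yaml):

```shell
# memory = minMem + addMem * (scale / 100), with hypothetical inputs.
minMem=500      # M: enough to start the pod with no load
addMem=1000     # M: extra memory needed for production load
scale=50        # global.scale, as a percentage
memory=$(( minMem + addMem * scale / 100 ))
echo "${memory}M"   # 1000M
```

At scale=100 the same pod would get 1500M, so halving the scale here does not halve the memory; the minMem floor stays fixed.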
I am assuming you are using a Remote Deployment based on the question.
If it is a remote deployment, then it is possible to view this information in the Helm charts, but reading it requires some understanding of Helm templates.
When running the mtinstaller, the Helm charts are downloaded, based on the chart version used, into the local Helm repository cache.
You can find the location of the Helm charts by looking at the Helm environment variables, using the following PowerShell command:
helm env
Usually the path of the repository cache is "C:\Users\<username>\AppData\Local\Temp\helm\repository".
Once the chart file is extracted (first .tgz and then the .tar), the whole Helm chart can be inspected.
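A stand-in illustration of the unpacking step. The chart name and version below are hypothetical; note that a .tgz is simply a gzipped tarball, so a single tar invocation unpacks it in one go (tools like 7-Zip show it as two steps, .tgz then .tar).

```shell
# Build a stand-in chart archive, then unpack it the way you would a
# cached Helm chart. Names and contents are made up for the demo.
mkdir -p ifs-cloud/charts/applicationtemplate/templates
printf 'name: ifs-cloud\n' > ifs-cloud/Chart.yaml
tar -czf ifs-cloud-23.1.0.tgz ifs-cloud
rm -r ifs-cloud
tar -xzf ifs-cloud-23.1.0.tgz        # recreates the ifs-cloud/ tree
cat ifs-cloud/Chart.yaml             # name: ifs-cloud
```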
The file "ifs-cloud\charts\applicationtemplate\templates\_application.tpl" contains the calculation used for setting up the memory and CPU. The excerpt below is from a 23R1 chart. Note that this can be overridden by an individual chart (container).
resources:
  limits:
    memory: "{{ .Values.memLimit | default (add .Values.minMemory ( round ( div ( mul .Values.addMemory .Values.global.scale ) 100 ) 0 )) }}M"
    {{ if .Values.global.limitCpu }}
    cpu: "{{ .Values.cpuLimit | default (add .Values.minCpu ( round (div ( mul .Values.addCpu .Values.global.scale ) 100 ) 0 )) }}m"
    {{ else }}
    cpu: "{{ .Values.cpuLimit | default 10000 }}m"
    {{ end }}
    ephemeral-storage: "{{ .Values.limitsEphemeralStorage }}M"
  requests:
    memory: "{{ .Values.memRequest | default ( round ( div ( mul (add .Values.minMemory ( round ( div ( mul .Values.addMemory .Values.global.scale ) 100 ) 0 )) .Values.global.memRatio ) 100 ) 0 ) }}M"
    cpu: "{{ .Values.cpuRequest | default ( round ( div ( mul (add .Values.minCpu ( round ( div ( mul .Values.addCpu .Values.global.scale ) 100 ) 0 )) .Values.global.cpuRatio ) 100 ) 0 ) }}m"
    ephemeral-storage: "{{ .Values.requestsEphemeralStorage }}M"
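In plain arithmetic, the template computes the limit first and then scales it down by the ratio to get the request. A worked example with hypothetical values for minMemory, addMemory, and memRatio (the real ones come from the chart):

```shell
# requests.memory = round(limit * memRatio / 100), where
# limit = minMemory + round(addMemory * scale / 100). Inputs are made up.
minMemory=500; addMemory=1000; scale=100; memRatio=75
limit=$(( minMemory + addMemory * scale / 100 ))     # 1500
request=$(( limit * memRatio / 100 ))                # 1125
echo "limit=${limit}M request=${request}M"
```

The CPU request follows the same pattern with minCpu, addCpu, and cpuRatio.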
Refer to https://helm.sh/ for more on Helm templates.
Sanjaya
Hi All,
Thank you for the information. I will review the content with the technical team and discuss the configuration of our environment. Let me ask one question: where can I set a pod's memory size?
Best regards.
You should not change an individual pod's memory size; from a support perspective, that is hardcoded. If you need more resources, start a second pod using the replicas parameter.
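A hypothetical values override for scaling out with replicas instead of resizing memory. The subchart name below is made up; check the relevant subchart's values.yaml for the real key path.

```yaml
# Hypothetical override file passed to the installer/helm upgrade.
# "somepod" stands in for the actual subchart name.
somepod:
  replicas: 2
```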