[Mon Nov 28 01:42:17 UTC 2022] - INFO: Installing ifs-cloud
[Mon Nov 28 01:42:17 UTC 2022] - INFO: Using chart ifscloud/ifs-cloud --version 222.0.0
[Mon Nov 28 01:42:17 UTC 2022] - INFO: Installing ifs-cloud
[Mon Nov 28 01:42:17 UTC 2022] - INFO: Running helm upgrade
[Mon Nov 28 01:57:44 UTC 2022] - SEVERE: UPGRADE FAILED: post-upgrade hooks failed: timed out waiting for the condition
[Mon Nov 28 01:57:44 UTC 2022] - SEVERE: Failed to install ifs-cloud
[Mon Nov 28 01:57:44 UTC 2022] - INFO: ifs-db-init log:
I found this in a bug update for 21 R2 SU2:
Container deploy: intermittent failure in ifs-db-init job timeout. Fix: increase helm timeout (permanent solution). If the hardware is slow, the helm default timeout of 5 min is not sufficient and the installer will fail.
I guess if I wanted to extend the timeout, my command would look like this:
.\installer.cmd --set action=mtinstaller --timeout 20m
The default is currently 15 minutes.
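If you want to check whether ifs-db-init is simply slow (so a longer timeout would help) or genuinely stuck, you can watch the job while the installer is waiting. This is only a rough sketch and assumes you have kubectl access to the cluster; the job name ifs-db-init is taken from the log above:
Code:
kubectl get jobs --namespace <namespace> -w                  # watch the install jobs progress
kubectl logs -f job/ifs-db-init --namespace <namespace>      # follow the db-init output live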
I couldn’t find the log file either, and I have the same problem in 21.1.8.
Code:
helm upgrade --install ifs-cloud %chart% %chartVersion% %helmConfigFlag% --debug --timeout 15m --namespace %namespace% %helmArgs%
:: helm template %chart% %chartVersion% %helmConfigFlag% --debug --timeout 15m --namespace %namespace% %helmArgs%
if errorlevel 1 (
    echo SEVERE: Failed to install ifs-cloud
    goto exit_error
)
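Since the installer is reporting a post-upgrade hook timeout, it is also worth looking at the hook job itself after the failure rather than only at the helm output. A rough sketch, assuming kubectl access and that the hook is the ifs-db-init job named in the log:
Code:
helm status ifs-cloud --namespace <namespace>                          # state of the failed release
kubectl describe job ifs-db-init --namespace <namespace>               # hook job conditions and pod status
kubectl get events --namespace <namespace> --sort-by=.lastTimestamp    # recent cluster events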
I extended the timeout to 60m during testing without any success. I have created a support ticket and will let you know if IFS Support manages to resolve it.
I am having the same issue and would be interested in the solution.
Cheers
Lee
Any luck? We have found the same problem.
I have raised the issue with IFS Support; investigations are ongoing.
One avenue we have looked at is whether the Kubernetes cluster installed correctly, as all the pods were stuck in a Pending state.
Hi,
We have the same issue. How did you resolve it?
Thanks,
Gary
It’s been my experience that if you have pods stuck in a Pending state, you need to look at the logs using the ./mtctl dump --namespace [namespace] command. This creates a dump folder in the delivery folder, and you can see which pod is holding things up. They come up one by one, but new pods will not start if one errors. It could be that you simply don’t have enough memory, or a database connection error, but you will only know by looking at the logs.
Thanks,
Alex
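If you would rather query the cluster directly than go through the dump, kubectl will usually tell you why a pod is stuck in Pending (insufficient memory or CPU shows up in the pod's events). A rough sketch, assuming kubectl access to the IFS namespace; <pending-pod> stands for whichever pod is stuck:
Code:
kubectl get pods --namespace <namespace>                       # find the Pending pod
kubectl describe pod <pending-pod> --namespace <namespace>     # the Events section names the blocker, e.g. Insufficient memory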
Hi Alex,
Thanks for your response. We have resolved this now. It looks like we had some conflicting IP addresses between the database and the local IP addresses.
We amended them, re-ran the mtinstaller, and that resolved the issue for us.
Thanks
Gary
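For anyone else who hits this, one quick way to spot that kind of address clash before re-running the mtinstaller is to compare the addresses the cluster is handing out with the database host. A rough check, assuming kubectl access; <database-host> is a placeholder for your database host name:
Code:
kubectl get nodes -o wide         # node internal IPs
kubectl get pods -A -o wide       # pod IPs assigned by the cluster network
nslookup <database-host>          # compare the database address against the ranges above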