Solved

An error has occurred while remote deploying IFSCLOUD

  • November 28, 2022
  • 8 replies
  • 589 views


[Mon Nov 28 01:42:17 UTC 2022] - INFO: Installing ifs-cloud
[Mon Nov 28 01:42:17 UTC 2022] - INFO: Using chart ifscloud/ifs-cloud --version 222.0.0
[Mon Nov 28 01:42:17 UTC 2022] - INFO: Installing ifs-cloud
[Mon Nov 28 01:42:17 UTC 2022] - INFO: Running helm upgrade
[Mon Nov 28 01:57:44 UTC 2022] - SEVERE: UPGRADE FAILED: post-upgrade hooks failed: timed out waiting for the condition
[Mon Nov 28 01:57:44 UTC 2022] - SEVERE: Failed to install ifs-cloud
[Mon Nov 28 01:57:44 UTC 2022] - INFO: ifs-db-init log:

Best answer by crpgaryw

PROAHAR wrote:

It's been my experience that if you have pods stuck in a Pending state, you need to look at the logs using the ./mtctl dump --namespace [namespace] command. This creates a dump folder in the delivery folder, and from it you can see which pod is holding things up. The pods start one by one, and no new pods are started if one errors. It could be that you simply don't have enough memory, or that there is a database connection error, but you will only know by looking at the logs.

Thanks,

Alex

Hi Alex,

Thanks for your response. We have resolved this now. It looks like we had a conflict between the database IP address and the local IP addresses.

We amended them and reran the mtinstaller, and that resolved the issue for us.

Thanks,

Gary


  • Sidekick (Customer)
  • December 7, 2022

I found this in a bug update for 21 R2 SU2:

 

Container deploy: intermittent failure in ifs-db-init job timeout - increase helm timeout - permanent solution

If the hardware is slow, the Helm default timeout of 5 minutes is not sufficient and the installer will fail.
An increased timeout has been added to Helm when it is called from mt-installer.cmd/sh. The Helm --timeout can still be overridden from the command line if needed.

 

I guess if I wanted to extend the time, my command would look like this:

.\installer.cmd --set action=mtinstaller --timeout 20m

The default is currently 15 minutes.

I couldn't find the log file either, and I have the same problem in 21.1.8.

Code:

helm upgrade --install ifs-cloud %chart% %chartVersion% %helmConfigFlag% --debug --timeout 15m --namespace %namespace% %helmArgs%
:: helm template %chart% %chartVersion% %helmConfigFlag% --debug  --timeout 15m  --namespace %namespace% %helmArgs%
if errorlevel 1 (
  echo SEVERE: Failed to install ifs-cloud
  goto exit_error
)
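
Since the failure in the original log is the ifs-db-init post-upgrade hook timing out, another way to see what that job was actually doing is to query it directly with kubectl. This is only a rough sketch - replace <namespace> with your own middle-tier namespace:

kubectl get jobs --namespace <namespace>
kubectl get pods --namespace <namespace>
kubectl logs job/ifs-db-init --namespace <namespace>

If the job's pod never got scheduled, kubectl describe on that pod should say why it was still waiting.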


  • Sidekick (Customer)
  • December 9, 2022

I extended the timeout to 60m during testing without any success. I have created a support ticket and will let you know if IFS Support manages to resolve it.


  • Hero (Partner)
  • April 27, 2023

@PROAHAR - Did you get this issue resolved with the assistance of IFS Support? 

I am having the same issue and would be interested in the solution.

 

Cheers

Lee


  • Do Gooder (Partner)
  • May 2, 2023

Any luck? We have run into the same problem.


  • Hero (Partner)
  • May 2, 2023

@cblome 

I have raised the issue with IFS Support - investigations are ongoing.

One avenue we have looked at is whether the Kubernetes cluster installed correctly, as all the pods were stuck in a Pending state.

 

 


  • Sidekick (Partner)
  • December 14, 2023
WyrLeeLeW wrote:

@cblome

I have raised the issue with IFS Support - investigations are ongoing.

One avenue we have looked at is whether the Kubernetes cluster installed correctly, as all the pods were stuck in a Pending state.

Hi,

We have the same issue. How did you resolve it?

Thanks,

Gary

  • Sidekick (Customer)
  • December 14, 2023
crpgaryw wrote:

We have the same issue. How did you resolve it?

It's been my experience that if you have pods stuck in a Pending state, you need to look at the logs using the ./mtctl dump --namespace [namespace] command. This creates a dump folder in the delivery folder, and from it you can see which pod is holding things up. The pods start one by one, and no new pods are started if one errors. It could be that you simply don't have enough memory, or that there is a database connection error, but you will only know by looking at the logs.

Thanks,

Alex
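
To make that concrete, this is roughly the sequence I would run (the pod name below is just a placeholder):

./mtctl dump --namespace <namespace>
kubectl get pods --namespace <namespace>
kubectl describe pod <pending-pod-name> --namespace <namespace>

The Events section at the end of the describe output usually tells you whether the pod is waiting on CPU/memory, a volume, or an image pull.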


  • Sidekick (Partner)
  • December 14, 2023
PROAHAR wrote:

It's been my experience that if you have pods stuck in a Pending state, you need to look at the logs using the ./mtctl dump --namespace [namespace] command.

Hi Alex,

Thanks for your response. We have resolved this now. It looks like we had a conflict between the database IP address and the local IP addresses.

We amended them and reran the mtinstaller, and that resolved the issue for us.

Thanks,

Gary
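
For anyone who hits the same thing, a quick sanity check before rerunning the installer is to confirm what the database hostname resolves to from the server you are installing from, and that the address is reachable. The hostname below is made up for the example:

nslookup ifsclouddb.example.com
ping ifsclouddb.example.com

If the name resolves to an address that clashes with the addresses in use locally, amend one side and rerun the mtinstaller as above.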



