
On running the installer.cmd script I receive the following error:


[Thu May 11 13:49:51 UTC 2023] - INFO: Installing ifs-cloud
[Thu May 11 13:49:51 UTC 2023] - INFO: Using chart ifscloud/ifs-cloud --version 222.6.0
[Thu May 11 13:49:51 UTC 2023] - INFO: Installing ifs-cloud
[Thu May 11 13:49:51 UTC 2023] - INFO: Running helm upgrade
[Thu May 11 13:52:13 UTC 2023] - SEVERE: UPGRADE FAILED: post-upgrade hooks failed: job failed: BackoffLimitExceeded
[Thu May 11 13:52:13 UTC 2023] - SEVERE: Failed to install ifs-cloud
[Thu May 11 13:52:13 UTC 2023] - INFO: ifs-db-init log:
Container initiated at Thu May 11 13:49:59 UTC 2023
SECRETIMAGE_VERSION=Wed Feb 15 07:36:21 UTC 2023 1.0.6
ALPINE_VERSION=Wed Feb 15 07:36:21 UTC 2023 3.16.3
BASEIMAGE_VERSION=Wed Feb 15 07:36:21 UTC 2023 1.0.43
CERTIFICATE_HANDLER_VERSION=Wed Feb 15 07:38:20 UTC 2023 1.0.8
JAVAIMAGE_VERSION=Wed Feb 15 07:38:20 UTC 2023 1.0.52
DRIVER_VERSION=Wed Feb 15 10:30:00 UTC 2023 22.2.6.0.0
DBINITIMAGE_VERSION=Wed Feb 15 10:30:00 UTC 2023 22.2.6.0.0
Using API to fetch certificates
/opt/ifs/get_certs.sh: line 50: $(sed -e 's/^"//' -e 's/"$//' <<< "$(echo -e "${split[0]}" | tr -d '[:space:]')"): ambiguous redirect
No certificates are loaded for database-certs ..
Using API to fetch certificates
No certificates are loaded for ifs-db-init-certs ..
/opt/ifs/get_certs.sh: line 50: $(sed -e 's/^"//' -e 's/"$//' <<< "$(echo -e "${split[0]}" | tr -d '[:space:]')"): ambiguous redirect
Failed updating database. IO Error: The Network Adapter could not establish the connection.
Failing updating, check credentials
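(For reference, bash reports “ambiguous redirect” when the word after a redirection operator expands to zero words or to more than one word, which is what the unquoted command substitution on line 50 of get_certs.sh appears to be doing. A minimal, illustrative reproduction, unrelated to the actual script contents:)

```shell
#!/usr/bin/env bash
# Hypothetical reproduction of bash's "ambiguous redirect".
# If the redirect target expands to multiple words (or to nothing),
# bash refuses the redirection and the command fails.

target="two words"                  # expands to two words after word splitting
echo hi > $target 2>/dev/null || echo "ambiguous redirect triggered"

# Quoting the expansion keeps it a single word, so the redirect succeeds:
echo hi > "$target" && echo "ok: wrote to './two words'"
rm -- "$target"                     # clean up the demo file
```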

 

All steps prior to this have worked correctly. It seems to happen during the MTInstaller portion of the script.

 

The VMs have fully open communication between themselves, so that should not be causing any issues either.

 

Does anyone have any ideas as to what might be causing this?

 

 

Hi owen,

I have seen that error before. In my case, the solutionset.yaml file was in the wrong folder. Moving it to ifsroot\deliveries\build-home\ifsinstaller fixed it.

Please check where your solutionset.yaml file is located.

Best regards, Ben


I think the focus should be on “IO Error: The Network Adapter could not establish the connection.”
In other words, the container cannot connect to the DB.
 


I’m having the same issue with two new environments using an existing PDB plugged into the new CDB.

I didn’t have solutionset.yaml in ifsroot\deliveries\build-home\ifsinstaller but putting it there doesn’t affect this error for me.

@owen did you get this resolved by some other means?

My hosts can connect to the database, so why wouldn’t the container be able to?


Hi @SamiL,

To answer: “My hosts can connect to the database, why wouldn’t the container be able to do this?”

The k8s cluster uses a virtual network, so testing DB connectivity from the host where k8s runs doesn’t prove much.
If your DB IP is in the same IP range as the virtual network inside k8s, k8s will assume your DB is located inside the k8s cluster (as a pod) and will not bridge the DB traffic outside the k8s cluster’s virtual network.

You will need to change the IP range of the k8s virtual network; see:
https://docs.ifs.com/techdocs/23r1/070_remote_deploy/010_installing_fresh_system/030_preparing_server/50_windows_managementserver/#change_pod_ip_range
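One way to check connectivity from inside the cluster (rather than from the host) is to probe the DB listener port from a throwaway pod. A sketch, where <db-host> is a placeholder for your database host and the flags assume a busybox nc build with the port-scan option compiled in:

```shell
# Probe the Oracle listener port from inside the cluster's virtual network.
# <db-host> is a placeholder; 1521 is the default Oracle listener port.
kubectl run dbtest --rm -it --restart=Never --image=busybox:1.36 -- \
  nc -zv -w 3 <db-host> 1521
```

If this times out while the same probe from the host succeeds, the traffic is being swallowed inside the cluster network, which points at the IP-range overlap described above.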

 


Good idea, thanks for the info.

But my setup uses
"PodCidrRange": "10.64.0.0/16",
"LocalNetworkIpRange": "10.1.0.0/16"

so they should not be within the same range.
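(As a quick sanity check, whether an address falls inside a CIDR range can be tested with a few lines of plain bash; the addresses below are illustrative:)

```shell
#!/usr/bin/env bash
# Illustrative check: does an IP (e.g. the DB host) fall inside a CIDR range
# such as the k8s PodCidrRange?

ip_to_int() {                 # dotted quad -> 32-bit integer
  local IFS=. a b c d
  read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

in_cidr() {                   # usage: in_cidr <ip> <network>/<bits>
  local ip=$1 net=${2%/*} bits=${2#*/}
  local mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  (( ($(ip_to_int "$ip") & mask) == ($(ip_to_int "$net") & mask) ))
}

in_cidr 10.64.3.7 10.64.0.0/16 && echo overlap || echo ok   # → overlap
in_cidr 10.1.2.3  10.64.0.0/16 && echo overlap || echo ok   # → ok
```

With PodCidrRange 10.64.0.0/16 and LocalNetworkIpRange 10.1.0.0/16, a DB on 10.1.x.x indeed falls outside the pod range, so on paper these two should not conflict.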

Unless there’s a bug that makes it treat the ranges as the same just because both start with 10.
I could try a 192.x range just to be safe.


Oh boy, I did change the range and it worked. It probably fixed something else, since I have the same installation running in the same network with the default 10.64.x and 10.1.x address ranges.

But whatever it was, it’s great that it got resolved, even though we may never know the root cause.
Thanks for your input!


I think you need to run “main.ps1 CHANGE-POD-IP-RANGE” to activate the setup you have in your main_config.json file; otherwise the default 10.1.0.0/16 will be used.
The k8s network actually in use can be seen with “kubectl get pods -A -o wide”, which shows the IPs of your pods in the k8s cluster.
  


“kubectl get pods -A -o wide” gives me 10.64.x addresses on a 22R2 SU8 fresh install, at least without running CHANGE-POD-IP-RANGE. Of course it could be different in older versions.

