Hi IFS Community,

 

During the ‘ingress’ Helm chart installation stage on a remote environment, we encountered the issue below.

Error: chart "priority-class" matching --kubeconfig=F:/ifs_25r1/ifsroot/utils/../config/kube/config not found in ifscloud index. (try 'helm repo update'): improper constraint: --kubeconfig=F:/ifs_25r1/ifsroot/utils/../config/kube/config
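For reference, the message reads as if Helm is treating the --kubeconfig path as a version constraint for the priority-class chart. A quick way to see what the ifscloud index actually contains (repo and chart names are taken from the error message) is:

helm repo update
helm search repo ifscloud/priority-class --versions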
 

The ingress namespace has been created, but due to the above error none of the deployments have been initiated. The kubeconfig file was retrieved from the middle-tier server and placed inside ifsroot\config\kube, as well as in the working .kube directory in the user’s home directory.

 

Furthermore, the above error only occurs during the ifs-ingress Helm chart installation; the ifs-storage Helm chart installation is successful.
 

Has anyone encountered this issue before? Thanks in advance for your recommendations.

Hi Herath,

Was there a previous installation? There may be leftovers from it.

Please type “helm repo update” and give it another try.
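For example (assuming the repo alias matches the ifscloud index named in the error):

helm repo list
helm repo update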

Best regards -- Ben


Hi @Ben Monroe,
We previously had a 23R2 middle tier. We recreated the cluster because its internal certificates were also due to expire soon.

Using the main.ps1 script, we installed Kubernetes again on the Ubuntu server and continued with the remaining steps from main.ps1. That is where we encountered the issue. We also tried helm repo update, but the error is still logged during the ingress Helm chart installation.
 

Do we need to submit an IFS ticket for this?


It seems likely that something from the previous environment is interfering with this. The problem could be on the Ubuntu server or the Windows Management server. Was Kubernetes purged prior to the reinstallation? Something like:

sudo snap remove microk8s --purge

On the Windows Management server, the user’s .kube and .ssh folders may point to the previous instance. If the ifshome folder contains traces of the previous instance, recreating it may be beneficial.
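For example, to check whether those folders still reference the old cluster and set them aside (paths assume the default user profile locations; back everything up before removing anything):

Get-ChildItem "$env:USERPROFILE\.kube"
Rename-Item "$env:USERPROFILE\.kube" ".kube.bak"
Rename-Item "$env:USERPROFILE\.ssh" ".ssh.bak"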

I am not sure how much Support can assist with cleaning up previous installations. If possible, it may be easier to just prepare new VMs to avoid such cleanup issues.

Best regards -- Ben


We are using the same Ubuntu server. After the release update process, we completed the database deployment. Afterwards, we opted to recreate the cluster (since the internal certificates used by the cluster are only valid for a year on remote environments) before performing the middle-tier installation.

For the recreation process, we completed the steps below:

The ifsroot folder was updated with the build artifacts for 25R1 SU2 (downloaded from the Managed Deployments section in RU Studio).

  • .\main.ps1 -resource 'KUBERNETES' - successful
  • .\main.ps1 -resource 'GETKUBECONFIG' - successful; the config file was added to the .kube directory
  • .\main.ps1 -resource 'SETK8SDNS' - successful
  • .\main.ps1 -resource 'STORAGE' - successful
  • .\main.ps1 -resource 'INGRESS' - unsuccessful with the above error
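For reference, a quick sanity check between the GETKUBECONFIG and INGRESS steps could look like this (the kubeconfig path is taken from the error message; the chart name is an assumption):

kubectl --kubeconfig F:\ifs_25r1\ifsroot\config\kube\config get nodes
helm search repo ifscloud/ifs-ingress --versions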

Did you attempt deleting the kube and secret folders under the ifsroot/config directory and re-running the middleware installation from scratch?


We tried this option as well, but it was not successful.


I assume there are no network or firewall restrictions between the middleware and the management server. Please also check the Helm version on the management server by running helm version. Additionally, confirm that the customer Helm username and password are working, that the account is not locked, and that it can access the Helm repository at https://ifscloud.jfrog.io/artifactory/helm/. Also, ensure that you have downloaded the correct artifact containing the appropriate Helm version.
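For example (the username and password are placeholders for the customer Helm credentials):

helm version
helm repo add ifscloud https://ifscloud.jfrog.io/artifactory/helm/ --username <helm-user> --password <helm-password>
helm repo update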


The issue was resolved by updating the ingress version and the priority-class version for the remote artifacts in the main_config.json file.
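For anyone hitting the same error, the change was along these lines (the key names and structure shown here are hypothetical; check your own main_config.json for the exact names, and take the versions from your build artifacts):

{
  "ingressversion": "<version from the 25R1 SU2 artifacts>",
  "priorityclassversion": "<version from the 25R1 SU2 artifacts>"
}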