Question

.\main.ps1 -resource 'MONITORING' and IFS goes down

  • November 13, 2025
  • 5 replies
  • 13 views


Hello, 

 

I followed the steps on

https://docs.ifs.com/techdocs/24r2/070_remote_deploy/010_installing_fresh_system/030_preparing_server/50_management_server/010_Setting_up_An_Environment/#5_install_ifs-monitoring_helm_chart_command

 

After this step I was unable to log on to IFS Cloud Remote.

 

After removing it, IFS worked again.
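In case it helps anyone else, one way to remove it again is a plain Helm uninstall of the monitoring chart (the release and namespace names below are assumptions; use helm list -A to confirm what they are on your system):

helm list -A                                  # find the monitoring release and its namespace
helm uninstall ifs-monitoring -n ifs-monitoring
kubectl delete namespace ifs-monitoring       # optional cleanup once the release is gone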

 

Has anyone experienced this?

5 replies

  • Sidekick (Customer)
  • 49 replies
  • November 13, 2025

@MCIPSTEV Could you please elaborate a bit more? We were able to connect normally to IFS, Grafana, and Kibana.

When you say you’re unable to log in, do you mean that IFS isn’t loading at all, or that it loads but you get an incorrect username/password error?

 

Have you checked whether all your pods are running? Also, please make sure there are enough resources available so Kubernetes can reserve resources when starting the pods.
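For example, something like this (the namespace names are assumptions; replace them with the ones used in your installation):

kubectl get pods -n ifscloud                  # every pod should be Running and Ready
kubectl get pods -n ifs-monitoring
kubectl describe nodes                        # check the "Allocated resources" section for CPU/memory headroom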

 

Also, have you changed the IFSCloudNamespace and Linuxhost variables?
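If you are not sure where those ended up, a quick search over the deployment folder usually finds them (the file patterns are just a guess; adjust to your setup):

Select-String -Path .\*.yaml, .\*.json -Pattern 'IFSCloudNamespace|Linuxhost'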


  • Hero (Employee)
  • 202 replies
  • November 13, 2025

Linuxhost ("yourvmname.yourdomain.com") should be the same as the system_url defined in ifscloud-values.yaml, as they share the same certificate. That is why monitoring needs to be installed after IFS Cloud.
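A quick way to cross-check (assuming ifscloud-values.yaml is in your current folder):

Select-String -Path .\ifscloud-values.yaml -Pattern 'system_url'
# the host printed here should match the value you set for Linuxhost exactly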


  • Author
  • Sidekick (Customer)
  • 73 replies
  • November 13, 2025

Hello,

All my pods for IFS were running, but the frontend did not reply anymore. Once monitoring was removed, IFS Cloud worked again.
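For reference, a simple probe from outside the cluster along these lines (the hostname is a placeholder) would show whether the ingress still answers at all and which certificate it is serving:

curl -vk https://yourvmname.yourdomain.com/
# -v prints the TLS handshake and the served certificate, -k skips certificate validation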


  • Sidekick (Customer)
  • 49 replies
  • November 13, 2025

I haven’t encountered this issue before.

I’d start by checking the IFS Client and IFS Client Services pod logs, as well as calling a few REST API endpoints.
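For example, along these lines (the deployment names and namespace are assumptions; list them first to get the real ones):

kubectl get deploy -n ifscloud                # confirm the actual deployment names first
kubectl logs deployment/ifsapp-client -n ifscloud --tail=100
kubectl logs deployment/ifsapp-clientservices -n ifscloud --tail=100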

 

Creating a support case would also be helpful.

 

Good luck!


  • Hero (Employee)
  • 202 replies
  • November 14, 2025

It’s most probably the ingress controller that got messed up.
When monitoring is installed, a copy of the ifs-ingress.crt secret is copied from the ifscloud namespace to the monitoring namespace… maybe something went wrong there.


This is roughly how my conf looks. 

kubectl exec -it svc/ingress-ingress-nginx-controller -- cat /etc/nginx/nginx.conf | grep -E 'server_name |location_path'

server_name _ ;
                        set $location_path  "";
                server_name ds1k8s.corpnet.ifsworld.com ;
                        set $location_path  "/elasticsearch(/|${literal_dollar})(.*)";
                        set $location_path  "/grafana/?(.*)";
                        set $location_path  "/favicon.ico";
                        set $location_path  "/kibana";
                        set $location_path  "/kibana";
                        set $location_path  "/";

kubectl get ingress -A
NAME                            HOSTS
ifs-stateless-ingress           ds1k8s.corpnet.ifsworld.com
ifs-sticky-ingress              ds1k8s.corpnet.ifsworld.com
elasticsearch-master            ds1k8s.corpnet.ifsworld.com
kibana                          ds1k8s.corpnet.ifsworld.com
kube-prometheus-stack-grafana   ds1k8s.corpnet.ifsworld.com

kubectl get secrets -A | findstr ifs-ingress.crt
hhanseremote     ifs-ingress.crt
ifs-monitoring   ifs-ingress.crt
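If the copied certificate is the suspect, one way to verify it is to dump the data of both secrets and compare them (the namespace names are the ones from my output above; replace them with yours):

kubectl get secret ifs-ingress.crt -n hhanseremote -o jsonpath="{.data}"
kubectl get secret ifs-ingress.crt -n ifs-monitoring -o jsonpath="{.data}"
# the two base64 payloads should be identical; if they differ, the copy into the monitoring namespace went wrong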