Hi everyone.
This is regarding IFS 9. When I apply deliveries or perform reconfigurations on IFS instances, I take a snapshot of the server the instance is running on beforehand, and if an error leaves the instance unusable I roll back to that snapshot. However, I'm noticing some issues when I do it this way:
- Sometimes WLS diagnostic files keep being generated and are never deleted. They eventually fill all the space on the drive and the instance stops working until the diagnostic files are removed and the application server is stopped and restarted (see the cleanup sketch after this list).
- Reconfigurations no longer work properly. When I perform a reconfiguration it throws errors; the instance still works afterwards, but the node managers no longer seem to function correctly.
- As mentioned above, the node managers stop working properly. For example, the administration scripts “stop_http_server” and “start_http_server” no longer do anything, so I can't stop or start the HTTP server at all. The “check_server_status” script reports that both the node manager and the HTTP server are fine, but in reality something isn't linking up (see the port check after this list).
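
For reference, this is roughly what I do today to clear out the old diagnostic dumps before restarting the application server. It's only a sketch: the domain path, server name, and the assumption that the dumps land as .zip files under logs\diagnostic_images are specific to my setup and may differ on yours.

    import time
    from pathlib import Path

    # Hypothetical location -- substitute your own IFS middleware domain path.
    DIAG_DIR = Path(r"D:\IFS\mws_domain\servers\MainServer1\logs\diagnostic_images")
    MAX_AGE_DAYS = 2  # keep only the last couple of days of diagnostic dumps

    cutoff = time.time() - MAX_AGE_DAYS * 24 * 3600

    # Delete diagnostic dumps older than the cutoff so they cannot
    # fill the drive before the application server is restarted.
    for dump in DIAG_DIR.glob("*.zip"):
        if dump.stat().st_mtime < cutoff:
            size_mb = dump.stat().st_size / 1_048_576
            print(f"Removing {dump.name} ({size_mb:.1f} MB)")
            dump.unlink()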
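To convince myself that the status report doesn't match reality, I've been probing the HTTP server's listen port directly and comparing that with what “check_server_status” prints. Again just a minimal sketch; the host name and port below are placeholders for my environment.

    import socket

    # Hypothetical host and port -- use whatever your IFS HTTP server listens on.
    HTTP_HOST = "ifs-app-server"
    HTTP_PORT = 8080

    # check_server_status says the HTTP server is up, so probe the listen port
    # directly to see whether anything is actually accepting connections.
    try:
        with socket.create_connection((HTTP_HOST, HTTP_PORT), timeout=5):
            print(f"{HTTP_HOST}:{HTTP_PORT} is accepting connections")
    except OSError as exc:
        print(f"{HTTP_HOST}:{HTTP_PORT} is NOT reachable: {exc}")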
Is there a different rollback process I should be using? Or has anyone found ways around these issues?
Thanks,
Lavon