Solved

IFS session management

  • February 16, 2026
  • 1 reply
  • 14 views


During our recent production outage, we observed multiple servers attempting to delete the same message repeatedly, causing database blocking. The repeated deletion attempts occurred within active transactions, producing conflicts and prolonged transaction locks. The server logs also showed the message 'trying to delete non-existent record'. We therefore assume the request had already been processed by server A, and server B unknowingly processed the same request again, which produced the 'trying to delete non-existent record' error.

Could you help us understand:
1. Why is the same request (delete commands) being handled by multiple servers?
2. How does IFS manage user sessions?
3. How can we route requests from a single user/device session to the same node until they complete their tasks?
4. Why can’t we enforce a “single active session per user” policy?
5. Can this be resolved by configuring session stickiness at the F5 level?

Can any expert help us understand this scenario, please?

Best answer by Mehmetkilivan

This looks more like a load balancer and retry behavior issue rather than a pure session problem.

IFS Cloud works in a stateless way. Each request is handled independently, and if you have multiple application servers behind a load balancer, consecutive requests from the same user can be routed to different nodes. That is normal behavior unless session persistence is configured.
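To make that concrete, here is a toy Python sketch (not IFS code; the node names are made up) of what round-robin distribution without persistence means for one user's consecutive requests:

```python
from itertools import cycle

# Toy round-robin balancer: with no persistence profile, consecutive
# requests from the same user land on different application nodes.
nodes = cycle(["app-node-1", "app-node-2"])

for request in ["DELETE message 42 (original)", "DELETE message 42 (retry)"]:
    print(f"{request} -> {next(nodes)}")

# Output:
# DELETE message 42 (original) -> app-node-1
# DELETE message 42 (retry) -> app-node-2
```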

If there was a production outage or a timeout, it is also possible that the client (for example FSM mobile or web client) retried the same request. In that case, one server may have already processed the delete successfully, while another server receives the same request again and tries to delete a record that no longer exists. That would explain the “trying to delete non-existent record” message.
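As a rough illustration of that retry path (the endpoint URL and client logic here are assumptions, not actual FSM client code): a timeout does not mean the server failed, only that the response never arrived, so a blind retry can duplicate a delete that already committed.

```python
import requests
from requests.exceptions import Timeout

def delete_message(msg_id: str, attempts: int = 2) -> None:
    # Hypothetical endpoint, for illustration only.
    url = f"https://ifs.example.com/api/messages/{msg_id}"
    for attempt in range(1, attempts + 1):
        try:
            requests.delete(url, timeout=5).raise_for_status()
            return
        except Timeout:
            # Node A may have committed the delete even though the reply
            # was lost; this retry then reaches node B as a duplicate.
            print(f"attempt {attempt} timed out, retrying")
    raise RuntimeError(f"delete of {msg_id} did not confirm after {attempts} attempts")
```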

IFS sessions are token-based and not tied to a specific node by default. So enforcing a “single active session per user” is not straightforward in this architecture.
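The reason it is not straightforward: a "single active session per user" rule needs a session registry that every node can see and update. The sketch below is conceptual only; the shared store and revoke hook are assumptions, not IFS features.

```python
# Conceptual sketch: in production this registry would be a database
# table or distributed cache shared by all nodes, not a local dict.
active_sessions: dict[str, str] = {}  # user_id -> current token

def revoke(token: str) -> None:
    print(f"token {token} revoked")  # placeholder for real invalidation

def issue_token(user_id: str, new_token: str) -> str:
    previous = active_sessions.get(user_id)
    if previous is not None:
        revoke(previous)  # enforce one live session per user
    active_sessions[user_id] = new_token
    return new_token
```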

You can configure session persistence (stickiness) on F5 to make requests from the same user go to the same node. That may reduce this behavior, but it will not completely solve it if retries or long-running transactions are involved.

I would suggest checking:

  • Whether the client is retrying requests after timeout

  • Transaction timeout settings

  • If the delete operation can safely handle repeated calls, for example by checking whether the record still exists before deleting (see the sketch after this list)

  • Database locking behavior during long transactions
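On the third point, here is a minimal sketch of an idempotent delete (the table and column names are invented, and a real IFS delete would go through PL/SQL rather than SQLite): treat "zero rows deleted" as an already-processed request instead of an error.

```python
import sqlite3

def delete_message(conn: sqlite3.Connection, msg_id: int) -> bool:
    # Returns True if this call removed the row, False if it was already gone.
    cur = conn.execute("DELETE FROM messages WHERE id = ?", (msg_id,))
    conn.commit()
    if cur.rowcount == 0:
        # Another node already processed the request: log and move on
        # instead of raising an error and holding the transaction open.
        print(f"message {msg_id} already deleted; ignoring duplicate request")
        return False
    return True
```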

This is usually more about concurrency and retry handling in a distributed setup than about session management itself.
