Solved

Event archiving processing times

  • 17 October 2022
  • 3 replies
  • 84 views

Userlevel 2
Badge +5

Hello. Has anyone in the community had success using the dbArchive tool to archive and/or delete large numbers of events? We are currently investigating archiving in assyst, and we have about 1,400,000 events to either delete or archive. From our testing, it seems that the tool takes about 21 seconds per event, which would mean archiving the required events in our database would take months. We posed a question to IFS support about how to increase the speed of archiving, but they were not able to provide any guidance.
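For scale, here is the back-of-envelope arithmetic behind that estimate, using only our own observed per-event time (runnable as T-SQL if you want to plug in your own numbers):

-- ~21 seconds per event across 1,400,000 events, expressed in days
SELECT 1400000 * 21 / 3600.0 / 24.0 AS projected_days;  -- roughly 340 days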


Best answer by Teslatradeup 1 November 2022, 15:34


3 replies

Badge +6

Hi Duncan,

We are currently working with IFS for an automated archive solution as we have millions of records to delete as well.

We had extremely slow performance: deleting 200 events would take 45 minutes.

They identified an index that could be added, and it lowered the processing time for those same 200 events from 45 minutes to 2.5 minutes.

I can’t seem to find the index details, however, but if you quote our IFS incident number, 369418, they should be able to share it. Perhaps your DB would benefit from that same index… or at least you could test it in non-prod to see if it helps.
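While I dig for it, one generic way to see which indexes SQL Server itself thinks are missing is the missing-index DMVs. This is standard SQL Server tooling, not necessarily the exact index IFS identified, so treat it as a starting point only:

SELECT TOP 10
    mid.statement AS table_name,
    mid.equality_columns,
    mid.inequality_columns,
    migs.user_seeks,
    migs.avg_user_impact
FROM sys.dm_db_missing_index_details AS mid
JOIN sys.dm_db_missing_index_groups AS mig
    ON mid.index_handle = mig.index_handle
JOIN sys.dm_db_missing_index_group_stats AS migs
    ON mig.index_group_handle = migs.group_handle
ORDER BY migs.avg_total_user_cost * migs.avg_user_impact DESC;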

One thing to note though: when running multiple batches, archiving so many events is causing our DB to temporarily disconnect. We are still working on that one… :(

Cheers.

Martin

Userlevel 2
Badge +5

Hi Martin,

Thanks a lot for your reply. I was able to get the index from Gary at IFS after quoting your ticket number. For anyone else having similar issues, the index to add is below:


CREATE NONCLUSTERED INDEX incide_dx_372714 ON [dbo].[incident] ([parent_event_id])
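If you want the script to be safe to re-run, a guarded variant (standard T-SQL, not something IFS specified) would be:

IF NOT EXISTS (
    SELECT 1 FROM sys.indexes
    WHERE name = 'incide_dx_372714'
      AND object_id = OBJECT_ID('dbo.incident')
)
    -- Same index as above, created only if it is not already present
    CREATE NONCLUSTERED INDEX incide_dx_372714
        ON [dbo].[incident] ([parent_event_id]);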


This, along with running the dbArchive tool directly on the DB server instead of on a web server, has greatly reduced the processing time for archiving. We ran a test batch of 1,000 records, and it took about 4 minutes (compared to about 70 minutes before adding the index). At that rate, our full 1,400,000 events would take roughly 93 hours of continuous processing rather than months. A great improvement.

Can you describe a bit more about your issue with your database disconnecting? Are you running multiple instances of the dbArchive tool at once, or do you just mean that with so many events to archive, the sheer number of batches is causing the issue? We have only tested small batches so far (up to 2,000 records).

Also, have you tried changing the database.batchSize value? If so, what value have you found works best? It looks like IFS recommends keeping it below 100 to avoid performance issues, so I am not sure increasing it will really make a difference.
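For reference, the setting is a plain properties entry in the dbArchive configuration; the property name is as discussed here, but exactly which file it lives in varies by installation, so treat this as a sketch:

# Example value only; IFS reportedly recommends staying below 100
database.batchSize=50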

Thanks for sharing.

Cheers,

Duncan

Badge +6

No sweat Duncan, glad to help!

We are running only one instance at a time, sending 5,000 events per run. It looks like our batch size is set to 50.

Our automated program sends 5,000 events to the Archive Utility; once that completes, it waits 5 minutes and then kicks off another run. It continues this loop until we stop the process.

It has actually been running okay with no DB disconnects since 2pm on Nov 1st. Running non-stop, it has deleted 565,455 actual events in 58 hours (roughly 9,750 per hour), and this is in our TEST environment.

