
Hi everyone!

I'm fairly new to ETM, and even after reading the documentation I think I need your help.

Due to corporate policy we have to delete all events older than 24 months, regardless of the type or status of the event. Right now we are using the archive/delete function in Assyst, where I run a query that I modify every time I need it.

Reading the documentation, I learned that a Datamapper can perform a delete record operation, so ideally I want to turn this into an automatic process rather than doing it manually every time. However, I'm not sure how to tell the Datamapper to search only for events with a logged date older than 24 months (24 months before the current date, with that date changing dynamically so the whole thing runs automatically).

Ideally I would like to run this every night with a CRON function, but right now I don't know how to manage it.

Could anyone help me understand how to do this? If there are examples or guidelines in the docs that I missed, feel free to point them out.

Many thanks in advance!

Hi ​@akudama,

I’ve been wondering the same and took the opportunity to create an example channel and data mappers. I’d like to share my approach to this problem:

The channel uses a Timer as the source, along with an initializer data mapper and a follow-up data mapper that is iterated upon using the channel settings.

The idea of the first data mapper is to generate an array of events that the follow-up data mapper can then delete. Doing this in a data mapper rather than in the channel allows for dynamically fetching the datetime of your choosing by subtracting x days, hours, or minutes from a new Date().

Using this dynamic date, I’ve added it as a query parameter in a variable Assyst search for events. This event search can also include other logic to limit or expand the list of events returned. Once the Assyst event search is completed, the events are accessible from this array, which will be used in the deletion data mapper. The data mapper ends with the "no value when" field set to true, as it has served its purpose and should gracefully stop.

This follow-up data mapper utilizes iterations to look up and delete the events, alongside some basic ETM log output. I couldn’t do this exactly as I wanted due to not being on IPaaS 1.10 yet and can’t use date ranges, so the example includes a workaround to simulate something similar to what will hopefully be possible in 1.10. This is not documented on the wiki and is done purely for debugging purposes and consists of “faking” the data fetched from Assyst by modifying the dateLogged in outbound.self, which was created by the Record to update field. To help visualize this, I’ve used a number of variable fields to show how the data changes.

I realize this does not exactly cover what you need, but I'm limited to IPaaS 1.8, which does not support date ranges, and I did not spend much time thinking of an alternative approach, with the new version planned for installation soon.

Hopefully, it should be easy to modify to your specifications if you are on IPaaS 1.10, or at least set you on the right track.

I’d be happy to answer any questions or see your take on the problem😊


A minor addendum to the example in the post above:

This example channel has the Continue on Error setting enabled. 😅 This is necessary because I keep getting an error on random iterations (about 10% of the iterated records in a run, between 5 and 11 out of 50), and it's always the same error:

The requested resource cannot be created as its identifier is already in use. Type: ObjectInUseException. Diagnostic: could not execute batch Cannot insert duplicate key row in object 'dbo.arch_session' with unique index 'arch_sess1_ux'. The duplicate key value is (Thu Jul 03 20:00:35 CEST 2025 -  ZZ_ETM_REST_USER]).], (Cause exception: Cannot insert duplicate key row in object 'dbo.arch_session' with unique index 'arch_sess1_ux'. The duplicate key value is (Thu Jul 03 20:00:35 CEST 2025 -  ZZ_ETM_REST_USER]).).

 

I don't have access to another ETM test/dev environment to check whether this is reproducible elsewhere, but I get it consistently.

From what I can tell, it seems harmless and appears to be a result of ETM making connections too rapidly. The channel is therefore set to continue on errors so that the remaining iterations are still executed. The error just means the record in that iteration was skipped in that import.

While the error appears mostly harmless, if you experience a similar issue and plan to use this delete record feature in a production environment, it might be worth logging a support ticket with IFS to clarify.

Since you mentioned you're new to ETM, here’s also a quick tip about using logging in datamappers:

You can use logger.debug, logger.info, logger.warn, and logger.error to silently log messages to the import log. These are super useful when working with try-catch blocks where you want the data mapper to continue running and use a provided default value even if something goes wrong.

However, using throw new Error will cause the data mapper (and the entire channel) to stop, even if the Continue on Error setting is enabled in the import channel configuration. This is at least my experience working with IPaaS 1.8 connected to a 24R2 Assyst enterprise installation.

Happy datamapping :)



Hi ​@Richard Ellingsen ,

 

First of all, thanks for the examples and all the effort in general; it really helped me understand how to proceed (I never thought about the timer) and how to implement it! There is just one thing I don't understand: for the sake of testing with a "basic" array, I tried using an Assyst variable search with just the loggedDate, following your example. But when I try to debug it with a previous import (this import was essentially just a list of all events), the debugging lists all of my events, as if it were ignoring the logged date criterion in the search. My only hypothesis is that the search is looking for the dynamic date I previously set and, not finding it, simply returns all the results. Am I correct? Long story short, it's not filtering correctly for the range, I guess. Do you have any idea how to get around this limit? I really don't want to delete currently valid events 😁

Many thanks again in advance!


Hi ​@akudama,

 

I’m delighted to hear it’s been helpful~ 😊

 

Just a quick note: if you’re planning to use the datamappers I attached, by all means feel free to do so—but please double-check that the logic is sound and that the inputs are properly validated. I can’t vouch for their reliability in a production environment. This was more of a learning exercise for me to test some theories and share initial ideas on how one might go about solving this.

 

Now, regarding the issue at hand: I’m not 100% sure I understand the exact problem you're encountering, but I’ll make a best guess:

In the Archive events - 0 - Initializer datamapper, there’s a variable Assyst search field named EventArray. This includes a loggedDate query parameter. In my example, a single date is added, as this is expected and supported in IPaaS 1.8. It’s equivalent to calling:

assystREST/v2/events?loggedDate=2023-07-01T20:00:00Z

However, this returns events logged after that date. For your use case, you want to specify an upper bound instead, and this is where my previous example sort of falls apart. (The extra date in the next datamapper steps in to simulate something similar, but it's not ideal, has some edge cases I don't like, and is really a poor man's version of an actual upper limit in the first data mapper.)

To create a date range with only an upper bound, you would use this format in assystREST:

assystREST/v2/events?loggedDate=,2023-07-01T20:00:00Z

Note the empty value before the comma; this tells Assyst to return events before the specified date.

This is where IPaaS 1.10 becomes especially useful, as it introduces native support for date ranges in variable Assyst searches:

Date ranges

All date-type query parameters used in assyst mapper lookups and variable assyst searches support date ranges. The 'from' and 'to' dates are passed to assystREST according to the format defined in assystREST Date Ranges. Dates can be specified as literal values or as JavaScript or Velocity expressions. Both dates are specified in the same way (i.e. both literal, both JavaScript etc.). Either end of the range can be empty - in which case the search is just a 'before' or 'after' search rather than 'between'. Note that ETM assumes that all date-type query parameters support ranges.

So if you're using IPaaS 1.10, you can define both from and to dates directly in the datamapper, and the DynamicArchiveDate variable could then be used for this upper bound.

 

If this isn’t the issue you’re facing, could you clarify a few things?

  • Which datamapper did you add the new variable to?
  • How is that variable being used in the mapping logic?
  • What version of IPaaS are you using?
  • Are you seeing this issue specifically when using debug with import?

It might be easier to understand if you could share the channel with the modifications you’ve made. That way, I can take a closer look and better understand what’s going on.

Just a heads-up: the JSON export of a datamapper can’t be uploaded directly to the community forum, so you’ll need to zip or compress the file before attaching it to your reply.


Hi ​@Richard Ellingsen ,

Sorry for the late reply, I was on vacation. I tried some tests based on the tips you provided; it took me longer than I'd like to admit 😁

I found that the problem I was facing was caused by the lack of a precise "identification" of the events I was searching for. After that, I tried some other ways to achieve the result. I created this version of the channel trying to do things "in the most essential way". I'll explain it here in case it's needed in the future, and then attach the zip file to the reply (in the zip you will find the channel + mapper set, and the mapper only):

 

First, the channel's source is now an Assyst REST HTTP source; for testing purposes it is basically /events (I have just a few events logged in the test environment).

Then I created just one datamapper that finds the records I need and deletes them. I use the dynamic archive date variable you suggested, but I keep it in the YYYY-MM-DD format. After that, I created another variable that I call "normalized date logged", which is simply the logged date of the event in the format YYYY-MM-DD. I do this because I want to be able to compare the two dates, and to do so I have to get rid of the hours (which I also really don't need anyway).

After declaring the two variables, I do a record to update operation on the event. Here I put a condition on the search so it can be skipped if the logged date is newer than my archive date; this way the import will run on all tickets but ignore the ones I don't need. Then I use the eventRefRange to determine the specific ticket I want to update, using the formattedReference value, and finally I add a delete record operation.

When I try an import, this version returns something like this:

 

As you can see, the "record to update not found" entries are the events newer than my archive date. Then some tickets are effectively deleted, but others fail to delete. Looking at one of them, I get this message:

Once the import finishes its job (it finishes with a failed status), if I run it again it does the same thing: it deletes some events and fails on others (it also correctly finds which events have to be ignored). The only thing that came to mind was to change the channel configuration record order to the "one record at a time" option, but the situation remains the same.

In your opinion, what could it be? Am I missing something obvious about the error or my configuration?

I almost forgot: my IPaaS version is 1.7.2.

Many thanks in advance, as usual, for the very useful help!


I'm currently supposed to be on vacation, so I tried to be brief, but failed 😅.

I can’t see the entire error message in the screenshot, but to me it seems to be the same error I mentioned in my second reply:

The requested resource cannot be created as its identifier is already in use. Type: ObjectInUseException. Diagnostic: could not execute batch Cannot insert duplicate key row in object 'dbo.arch_session' with unique index 'arch_sess1_ux'. The duplicate key value is (Thu Jul 03 20:00:35 CEST 2025 - 2ZZ_ETM_REST_USER]).], (Cause exception: Cannot insert duplicate key row in object 'dbo.arch_session' with unique index 'arch_sess1_ux'. The duplicate key value is (Thu Jul 03 20:00:35 CEST 2025 - 2ZZ_ETM_REST_USER]).).

My theory is that this is due to the archive/delete method not supporting multiple active sessions from the same REST user at the same time.

I haven’t tested this extensively, but you might encounter fewer errors if you try setting the channel priority to low instead of high, as this could reduce the number of simultaneous REST connections made by ETM.

While testing the channel, I noticed some strange behavior: ETM reports that the event is deleted, but when running the channel again, the same event id is returned and "deleted" again. This also happens with my previous example channel; I only just noticed it.

This is even though the channel does NOT have Preview Only checked, and the import log returns a 204 status.

A sanitized example from the import log:

2025-07-19T19:10:23,503 7CEST] | DEBUG | Camel (importToolProcessingContext) thread #99 - JmsConsumer imports] | Process Mapper Iteration         | 225 - org.apache.camel.camel-core-reifier - 3.22.1| Identified finalOperation: DELETE for record 1 for datamapper 0 (Iteration: 0)
2025-07-19T19:10:23,505 7CEST] | DEBUG | Camel (importToolProcessingContext) thread #99 - JmsConsumer imports] | AssystServiceImpl | 120 - com.axiossystems.integration.assyst-service - 1.8.0.STABLE2024-04-12T114824Z| Request URL: https://<server>:<port>/assystREST/v2/events/10000018 Method: DELETE Request Headers:
2025-07-19T19:10:23,506 7CEST] | DEBUG | Camel (importToolProcessingContext) thread #99 - JmsConsumer imports] | AssystServiceImpl | 120 - com.axiossystems.integration.assyst-service - 1.8.0.STABLE2024-04-12T114824Z| 2]
2025-07-19T19:10:23,506 7CEST] | DEBUG | Camel (importToolProcessingContext) thread #99 - JmsConsumer imports] | AssystServiceImpl | 120 - com.axiossystems.integration.assyst-service - 1.8.0.STABLE2024-04-12T114824Z| Request Payload:
2025-07-19T19:10:23,506 7CEST] | DEBUG | Camel (importToolProcessingContext) thread #99 - JmsConsumer imports] | AssystServiceImpl | 120 - com.axiossystems.integration.assyst-service - 1.8.0.STABLE2024-04-12T114824Z| <?xml version='1.0' encoding='UTF-8'?><event><custom>{}</custom></event>
2025-07-19T19:10:23,561 7CEST] | DEBUG | Camel (importToolProcessingContext) thread #99 - JmsConsumer imports] | RESTResponseHandler | 120 - com.axiossystems.integration.assyst-service - 1.8.0.STABLE2024-04-12T114824Z| delete: 55ms DELETE Response Code 204 https://<server>:<port>/assystREST/v2/events/10000018

Are you experiencing the same behavior for the imports that don’t return the error you mentioned?

 

In any case, further troubleshooting might need to involve IFS directly, as this delete record functionality may not be officially supported and/or may have an associated problem record that IFS support is aware of. If I were you, my next step would be to log a support ticket at: https://support.axiossystems.com/assystnet/

I haven’t submitted a ticket to IFS myself, as this has mostly been an academic endeavor on my part.

As a short aside:
There is a way to archive events rather than delete them using the REST API (events are moved from the incident table to incident_archive). However, there doesn’t yet seem to be a way to use Assyst Request Mappers to archive events via ETM (at least in IPaaS 1.8), as there is no field for this in the data mapping.

I’ve submitted an idea to have this added for increased usability:

This limitation could potentially be worked around using a regular HTTP datamapper that calls the standard Assyst REST API. I'll see if I can make a datamapper that deletes and/or archives events using this method once I'm back from vacation on August 11th.
