Question

EXT_FILE_TRANS_TAB growing and cannot be cleaned


  • Sidekick (Customer)
  • 15 replies

The EXT_FILE_TRANS_TAB is taking up more and more space in the database and we have no clue how to clean or purge it. It looks like the table still contains rows related to load file IDs that are no longer available in EXT_FILE_LOAD.
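To verify the orphans, a check along these lines should show them; a sketch only, assuming the two tables are linked by a load_file_id column (verify the actual column names against your IFS data model):

    -- Count transaction rows whose load file id no longer exists
    -- in EXT_FILE_LOAD; the join column name is an assumption
    SELECT COUNT(*)
    FROM   ext_file_trans_tab t
    WHERE  NOT EXISTS (SELECT 1
                       FROM   ext_file_load l
                       WHERE  l.load_file_id = t.load_file_id);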

Has anyone already run into the same issue and/or know how to clean the EXT_FILE_TRANS_TAB?

Thank you in advance for any help, comments or remarks.

10 replies

  • Hero (Employee)
  • 185 replies
  • March 31, 2023

Hi,

I have a faint memory (this was > 10 years ago...) that we found a load job that will “unload” or truncate the EXT_FILE_TRANS_TAB, and that it is scheduled like any other file load job.

/H

hhanse wrote:

I have a faint memory (this was > 10 years ago...) that we found a load job that will “unload” or truncate the EXT_FILE_TRANS_TAB, and that it is scheduled like any other file load job.

/H
Your memory is correct. There is a file template you can run to clean up old transactions.

Make sure to enter the number of days on the templates you want to clean out transactions for, and also decide whether to include the file log in the cleanup by ticking “Remove Completely”.

 


  • Author
  • Sidekick (Customer)
  • 15 replies
  • March 31, 2023

Thank you guys, that is the first promising answer I have received, including from IFS Cloud support and our consultant.


  • Superhero (Partner)
  • 446 replies
  • December 17, 2024

Has anyone faced a problem using this cleanup method? With over 60 GB of data to remove it is not possible to purge this table; “snapshot too old” (ORA-01555) errors occurred.


  • Hero (Employee)
  • 185 replies
  • December 17, 2024

This has nothing to do with the application itself; it relates to how the Oracle DB is configured.

Try to run the first large cleanup when there are fewer other transactions in the DB.
Also perhaps increase the DB parameter UNDO_RETENTION, and maybe increase the size of the UNDO datafiles.

Is there a “date” on the “remove job”, so that you can remove the data in smaller chunks?
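For reference, adjusting the undo configuration looks roughly like this. A sketch only: the retention value, datafile path and size below are assumptions (check DBA_DATA_FILES for the real undo datafile), and it requires DBA privileges, which a hosted environment may not give you.

    -- Allow undo to be kept for 3 hours (tune to how long the big delete runs)
    ALTER SYSTEM SET undo_retention = 10800;

    -- Grow the undo datafile; path and size are assumptions
    ALTER DATABASE DATAFILE '/u01/oradata/IFSDB/undotbs01.dbf' RESIZE 32G;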


  • Superhero (Partner)
  • 446 replies
  • January 14, 2025

There is a date, but there is no easy way to establish which external file type holds the big amount of data, and estimating date ranges to remove the data in smaller chunks is also not easy.

I understand that we can change the UNDO_RETENTION parameter, but there is an easier way to avoid the problem: commit after each chunk. This pattern is used in many cleanup processes, for example:

Batch_SyS.Fnd_Light_Cleanup and its subsequent calls.
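The commit-per-chunk pattern looks roughly like this. A minimal sketch of the idea only, not the actual Batch_SyS.Fnd_Light_Cleanup code: the date column and cutoff are assumptions, and deleting directly against the table bypasses the application API, so in practice you would let the application's cleanup job do the work.

    BEGIN
       LOOP
          -- Delete at most 10000 rows per pass; the column name and
          -- 12-month cutoff are assumptions for illustration
          DELETE FROM ext_file_trans_tab
          WHERE  created_date < ADD_MONTHS(SYSDATE, -12)
          AND    ROWNUM <= 10000;
          EXIT WHEN SQL%ROWCOUNT = 0;
          COMMIT;  -- committing per chunk keeps undo small and avoids ORA-01555
       END LOOP;
       COMMIT;
    END;
    /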


  • Do Gooder (Partner)
  • 8 replies
  • June 24, 2025

Hi Piotr,
last time I copied the records I wanted to keep (e.g. the last month) to a new table.
After that I renamed the base table to a different name and the new table to the base table name.
It worked for a table with 2.5 TB of data.
After all that I was able to use the available method to clean the old data with success.
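Sketched out, the approach looks something like this. The table and column names and the one-month window are assumptions, and indexes, constraints, triggers and grants have to be recreated on the new table; it also needs direct DB access.

    -- Copy only the rows to keep (e.g. the last month)
    CREATE TABLE ext_file_trans_keep AS
       SELECT * FROM ext_file_trans_tab
       WHERE  created_date >= ADD_MONTHS(SYSDATE, -1);

    -- Swap the tables by renaming
    ALTER TABLE ext_file_trans_tab  RENAME TO ext_file_trans_old;
    ALTER TABLE ext_file_trans_keep RENAME TO ext_file_trans_tab;

    -- Once verified, the old table can be dropped:
    -- DROP TABLE ext_file_trans_old PURGE;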
 


  • Superhero (Partner)
  • 446 replies
  • June 24, 2025

Hi @RafalO

I know how to work around the problem, but:

  1. it is time-consuming to establish the file types and time frames
  2. please imagine your steps on an IFS Cloud hosted environment, without access to the ifsapp/sys account

  • Do Gooder (Partner)
  • 8 replies
  • June 24, 2025

@knepiosko I can imagine that :)

It would be nice to add some global variables and/or parameters to handle any cleanup or template.
I know that such a solution is used in some cleanup processes (older than .. days, states, etc.).


  • Superhero (Partner)
  • 446 replies
  • June 24, 2025

There is a way to set up a cleanup process, but it is not easy to just do it in one go with 2.5 TB!

 

