Solved

Database size and Archiving old data Apps9

  • 9 January 2020
  • 7 replies
  • 889 views

Badge +2

Hello All,

Looking for some guidance on database size and overall data cleardown in some of the tables.

 

We have been on the system for 20 months now and the database is at 2 TB - not massive in the grand scheme of things, but we need to know the potential size moving forwards, as well as ways of managing the size of the database so it doesn't continue to grow uncontrollably.

 

As far as I can see we can clear some old data in a select number of tables, but this doesn't have much of a substantial impact on the overall size. The tables in question are shop order clockings and lines.

 

Any assistance or experience is appreciated.

 

Thanks


Mike


Best answer by NickPorter 9 January 2020, 16:49


7 replies

Userlevel 6
Badge +18

I would say that 20TB is a very large IFS database, especially for only 20 months of use, although I suppose the time frame would mean little if you pulled in a ton of legacy data at go-live.

You mentioned that clearing shop order clockings and lines wouldn't free much, but do you know what is actually taking up the space?  Do you have a lot of images or documents (i.e. BLOBs) stored in there?

As a starting point you could check table sizes at the Oracle level using something like the following query - something might really stand out:

-- top 10 table segments by size, in MB
select
   *
from (
   select
      owner,
      segment_name,
      bytes/1024/1024 meg
   from
      dba_segments
   where
      segment_type = 'TABLE'
   order by
      bytes/1024/1024 desc)
where
   rownum <= 10;
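
If BLOBs are the suspect, the same idea can be pointed at LOB segments rather than tables - a rough sketch, assuming access to the standard DBA_LOBS and DBA_SEGMENTS views, that maps the biggest LOB segments back to their owning table and column:

-- top 10 LOB segments by size in MB, with the table and column they belong to
select
   *
from (
   select
      l.owner,
      l.table_name,
      l.column_name,
      s.bytes/1024/1024 meg
   from
      dba_lobs l
      join dba_segments s
        on s.owner = l.owner
       and s.segment_name = l.segment_name
   order by
      s.bytes desc)
where
   rownum <= 10;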

 

It is also possible that your datafiles are taking up a lot of disk but don't represent your true data size… that could indicate an issue with the way your Oracle storage was configured (e.g. datafile initial sizes and extent sizes).  In some cases Oracle will keep allocating new disk rather than reusing free space if it is allowed to.
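
To check whether that is happening, a rough comparison of allocated versus free space per tablespace (a sketch using only the standard DBA_DATA_FILES and DBA_FREE_SPACE views) could look like this; a large gap between allocated and free would suggest the datafiles are holding far more disk than the data needs:

-- allocated vs free space per tablespace, in GB
select
   a.tablespace_name,
   round(a.alloc_gb, 1) allocated_gb,
   round(nvl(f.free_gb, 0), 1) free_gb
from
   (select tablespace_name, sum(bytes)/1024/1024/1024 alloc_gb
      from dba_data_files
     group by tablespace_name) a
   left join
   (select tablespace_name, sum(bytes)/1024/1024/1024 free_gb
      from dba_free_space
     group by tablespace_name) f
   on a.tablespace_name = f.tablespace_name
order by
   a.alloc_gb desc;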

Badge +2

Thanks Nick,

Sorry, I only said 2 TB. There is little to no legacy data from go-live; we cleared a lot of historic data to ensure we only brought usable data into the system.

 

We store PDF copies of all printed documentation for traceability within our industry, but equally we don't know how much this is growing in size.

 

For reference, the code you have given us - is that run in SQL Developer or directly in Oracle?

Userlevel 6
Badge +18

My mistake, but 2 TB is still pretty big for that time period.  For context, we've been live on IFS over multiple versions for about 15 years now, with roughly 1000 users, and are under 500 GB.  Obviously system usage makes a big difference, but it's a data point for you, and we haven't done much in the way of data archiving.

Yes, that code can be run when connected to the database as your appowner (IFSAPP?), either in SQL Developer or any other command line tool, even SQL*Plus.
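
For example, the query could be saved as a small script (the file name top_tables.sql is just an example) and run with sqlplus ifsapp@<your_connect_string> @top_tables.sql, or pasted straight into a SQL Developer worksheet:

-- top_tables.sql : the 10 largest table segments, sized in MB
set linesize 200 pagesize 100
column owner format a15
column segment_name format a40
select *
from (
   select owner, segment_name, bytes/1024/1024 meg
   from dba_segments
   where segment_type = 'TABLE'
   order by bytes desc)
where rownum <= 10;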

Userlevel 5
Badge +10

2 TB does sound large! Like Nick, we have been using IFS for around 17 years and have not used archiving in the application. Our production DB size is currently at 552 GB.

It's always worth keeping an eye on your History Log configuration - we were bitten by this on a previous version. Check what is being logged and how many days of history logging you are keeping.

I use this code to keep an eye on the History:
select table_name, count(*) from history_log_tab group by table_name;
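
If you also want to see how far back the logging goes for each table, a variation along these lines might help (this assumes the log table has a TIME_STAMP column - check the actual column name in your version):

-- row count and date range per logged table
select
   table_name,
   count(*) entries,
   min(time_stamp) oldest_entry,
   max(time_stamp) newest_entry
from
   history_log_tab
group by
   table_name
order by
   count(*) desc;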

Hope you find your problem. A bigger database means longer recovery times, more backup capacity required, etc.

Userlevel 5
Badge +7

A couple of fun notes about archiving:

  • There are two types of archiving:
    • Moving records to an archive table, in case you want to reference them again in the future.
    • Deleting records because you will never, ever want the data again. This is tough and tricky, as you can accidentally remove records that are still required. This method is not advised.

When you archive data by moving records into an archive table, the size of your database will actually grow, because Oracle does not automatically shrink the source tables. If you archive 20 GB of records, the archive tables grow by 20 GB and you are left with 20 GB of free space in the source table extents for new records to be inserted into - so archiving 20 GB of records makes your database grow by 20 GB. To recover that free space you'll need to ask your DBA to shrink the tables, which requires the table to be locked while the shrink operation takes place.
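
As a rough sketch of what that shrink looks like on the Oracle side (the table name below is just a placeholder - substitute whichever source table was archived):

-- enable row movement, shrink the table and its dependent segments, then tidy up
alter table ifsapp.some_source_tab enable row movement;
alter table ifsapp.some_source_tab shrink space cascade;
alter table ifsapp.some_source_tab disable row movement;

Shrink only works for tables in ASSM (automatic segment space management) tablespaces; otherwise the classic alternative is an ALTER TABLE ... MOVE followed by index rebuilds, which needs a longer maintenance window.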

 

Before archiving, we always check for areas that can simply be cleaned up - for example, the history log, bucket cost history, etc. Sometimes the jobs that keep these areas clean haven't been scheduled or maintained.

Userlevel 4
Badge +8


@MikeArbon When you say you haven't used archiving in the application, do you mean the report archive only, or more?
We've had IFS for around 3 years but our DB is over 1 TB already... trying to see why that is. I did use your query for the history log, which is super handy, so thanks for sharing!

Userlevel 5
Badge +10

600 GB and 200 users here. DocMan takes up a lot of it, and we have cost price roll-ups running for a silly number of parts.

 

Beware of quotes for IFS Cloud Hosting that make assumptions based on user count - we were quoted for a 200 GB DB.

 

 
