I would say that 20TB is a very large IFS database, especially for only 20 months of use, although I suppose the time frame would mean little if you pulled in a ton of legacy data at go-live.
You mentioned that clearing up shop order clockings and lines wouldn't free much, but do you know what is actually taking up that much space? Do you have a lot of images or documents stored in there (aka BLOBs) taking up all that space?
As a starting point you could check table sizes at the Oracle level using something like the following query... something might really stand out:
select *
from  (select owner,
              segment_name,
              bytes/1024/1024 meg
       from   dba_segments
       where  segment_type = 'TABLE'
       order  by bytes/1024/1024 desc)
where  rownum <= 10;
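One caveat since you mentioned documents and images: BLOBs are stored in their own LOB segments, which the TABLE filter above won't show. A variant with a wider filter (same view, just more segment types; LOB segment names are system-generated, so join dba_lobs if you need to map one back to its table) would be:
select *
from  (select owner,
              segment_name,
              segment_type,
              bytes/1024/1024 meg
       from   dba_segments
       where  segment_type in ('TABLE', 'LOBSEGMENT', 'INDEX')
       order  by bytes desc)
where  rownum <= 10;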
It is also possible that your datafiles are taking up a lot of disk without representing your true data size... that could indicate an issue with the way your Oracle storage was configured (e.g. datafile initial sizes and extent sizes). In some cases Oracle might keep taking new disk rather than reusing free space if allowed to.
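A quick sanity check there is to compare allocated vs. used space per tablespace, using only standard dictionary views:
select df.tablespace_name,
       round(df.bytes / 1024 / 1024 / 1024, 1) allocated_gb,
       round((df.bytes - nvl(fs.bytes, 0)) / 1024 / 1024 / 1024, 1) used_gb
from   (select tablespace_name, sum(bytes) bytes
        from   dba_data_files
        group  by tablespace_name) df
       left join
       (select tablespace_name, sum(bytes) bytes
        from   dba_free_space
        group  by tablespace_name) fs
       on df.tablespace_name = fs.tablespace_name
order  by allocated_gb desc;
A big gap between the two columns means the space is already allocated to Oracle but sitting empty.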
Thanks Nick,
Sorry, I actually said 2TB. There is little to no legacy data from go-live; we cleared a lot of historic data to make sure we only brought usable data into the system.
We store PDF copies of all printed documentation for traceability within our industry, but equally we don't know how much that is contributing to the growth.
For reference, the code you have given us: is that run in SQL Developer, or directly against Oracle?
My mistake, but 2TB is still pretty big for that time period. For context, we've been live on IFS across multiple versions for about 15 years now, with roughly 1000 users, and are under 500GB. Obviously system usage makes a big difference, but it's a data point for you, and we haven't done much in the way of data archiving.
Yes, that code can be run when connected to the database as your appowner (IFSAPP?), either in SQL Developer or any other command line tool, even SQL*Plus.
2TB does sound large! Like Nick we have been using IFS for around 17 years and have not used archiving in the application. Our production DB size is currently at 552GB.
It's always worth keeping an eye on your history log configuration; we have been bitten by this in a previous version. What is being logged, and how many days of history logging are you keeping?
I use this code to keep an eye on the history log:
select table_name, count(*) from history_log_tab group by table_name;
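If you also want to see how far back the log reaches (this assumes your version has the usual TIME_STAMP column on history_log_tab; check the table first), something like this works:
select table_name,
       count(*) entries,
       min(time_stamp) oldest,
       max(time_stamp) newest
from   history_log_tab
group  by table_name
order  by entries desc;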
Hope you find your problem. A bigger database means longer recovery times, more backup capacity required, etc...
A couple of fun notes about archiving:
- There are two types of archiving:
  - Moving records to an archive table, in case you want to reference them again in the future.
  - Deleting records because you will never ever want the data again. This is tough and tricky, as you can accidentally remove records that are still required. This method is not advised.
When you archive data by moving records into an archive table, the size of your database will actually grow, because Oracle does not automatically shrink the source tables. If you archive 20 GB of records, the archive tables grow by 20 GB and the source table extents are left holding 20 GB of free space for new records; the net effect is a database that is 20 GB bigger. To recover that free space you'll need to ask your DBA to shrink the tables, which requires a brief exclusive lock on each table during the final phase of the shrink.
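For reference, a shrink for a single table looks roughly like the below (SHOP_ORD_TAB is just an illustrative name, and the table needs to live in an ASSM tablespace; have your DBA run it in a quiet window):
alter table ifsapp.shop_ord_tab enable row movement;
alter table ifsapp.shop_ord_tab shrink space compact;  -- bulk of the work, runs online
alter table ifsapp.shop_ord_tab shrink space cascade;  -- moves the high-water mark; takes a brief exclusive lock
alter table ifsapp.shop_ord_tab disable row movement;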
Before archiving we always check on areas that can be cleaned first, for example history log, bucket cost history, etc. Sometimes the jobs that keep these areas clean haven't been scheduled or maintained.
@MikeArbon When you say you haven't used archiving in the application, do you mean report archive only? Or more?
We've had IFS for around 3 years but our DB is over 1TB already... trying to see why that is. I did use your query for the history log, which is super handy, so thanks for sharing!
600GB and 200 users here. DocMan takes up a lot, and we have cost price roll-ups running for a silly number of parts.
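If you want to put a number on the DocMan piece, something like this gives the total size of the document BLOBs (EDM_FILE_STORAGE_TAB is the usual in-database storage table in IFS Apps, but verify the name and owner in your version):
select round(sum(s.bytes) / 1024 / 1024 / 1024, 1) docman_gb
from   dba_lobs l
       join dba_segments s
         on  s.owner = l.owner
         and s.segment_name = l.segment_name
where  l.table_name = 'EDM_FILE_STORAGE_TAB';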
Beware of quotes for IFS Cloud Hosting that make assumptions based on users. We were quoted for a 200GB DB.