Solved

Document Management - Moving File Repository


Userlevel 5
Badge +9

We are currently on IFS Apps10 UPD6.  When we originally implemented IFS at Apps8 SP1+, we made the decision to set up Document Management to store files in the Oracle database.

When we upgraded to Apps10, we moved our quote process into IFS. With that and other changes, we are seeing significant DB growth, mainly due to the number of documents we are storing in Doc Man. Feedback from our IT team is that it’s taking noticeably longer to back up Oracle and to refresh our development environments from Production. I’m starting to think about moving our Doc Man repository out of Oracle.

Looking for guidance / words of wisdom / lessons learned

  1. Does anyone have experience to share about moving document management out of the Oracle database?
  2. Pros/cons of the different Doc Man repository options?
    1. Do we lose functionality if we move out of Oracle?
  3. At what point would you consider an Oracle database to be “too big”?

Best answer by david.harmer 27 April 2020, 15:35


33 replies

Userlevel 7
Badge +30

Hi @Mathias Dahl 

I have the same question as @nicwlk. I’m almost sure I’ve seen somewhere that it is possible but not recommended for performance reasons, though I may be mixing it up with something else.

If you mean checking the file size, yes, it is discussed above, and I also mention a possible performance impact there.

 

 

We have currently moved the majority of documents out of the DB to local shares on the application server; that way we are 100% sure there is never a mix-up between the Prod and Test environments. There are some APIs that don’t work well with “share”, but nothing we haven’t been able to work our way around, except the file size that is needed on occasion.

Good to know, thanks!

 

Userlevel 1
Badge +3

Hi @Mathias Dahl 

I have the same question as @nicwlk. I’m almost sure I’ve seen somewhere that it is possible but not recommended for performance reasons, though I may be mixing it up with something else.

We have currently moved the majority of documents out of the DB to local shares on the application server; that way we are 100% sure there is never a mix-up between the Prod and Test environments. There are some APIs that don’t work well with “share”, but nothing we haven’t been able to work our way around, except the file size that is needed on occasion.

Cheers, Bjørn

Userlevel 7
Badge +30

Thanks for the input, @BjornH !

Userlevel 1
Badge +3

Thank you Mathias for the answer. 

The customer uses this custom field to see whether the physical file is connected to the ORIGINAL or VIEW record in EDM_FILE_STORAGE. 

Do they use the size to determine whether it is an original file or a view copy? Why not look at the file type?

 

 

I do believe the majority of customers in Norway have this set in the base profile. It is nice to see the size of a file before you download it (the largest we have is 5.5 GB), as well as to see that there actually is a file and not only an “empty” record. For documents with original files you can also see whether a view file exists.

Userlevel 7
Badge +30

Thank you Mathias for the answer. 

The customer uses this custom field to see whether the physical file is connected to the ORIGINAL or VIEW record in EDM_FILE_STORAGE. 

Do they use the size to determine whether it is an original file or a view copy? Why not look at the file type?

 

 

Badge +2

Thank you Mathias for the answer. 

 

The customer uses this custom field to see whether the physical file is connected to the ORIGINAL or VIEW record in EDM_FILE_STORAGE. 

 

/Nicum

Userlevel 7
Badge +30

Currently, one of the customers I am working with uses the DBMS_LOB.GETLENGTH function in a custom field to get the size of a document directly from Edm File Storage. 

In case we move documents from the database to an FTP server, will it be possible to fetch the file size? Is there any method by which we can fetch the size from an FTP/shared folder? 

Hi,

It is possible, but I'm not sure I would recommend it. We have a method for it, but, as you might expect, it needs to go "outside" the database, communicating with the FTP server. We do this via our middle-tier code in Java, and we call that Java code via the PL/SQL access provider.

Having that called as the user populates data is asking for performance problems. If I am right and they do get problems, we would not accept that as a bug.

The warning aside, this is the method:

Edm_File_API.Get_Repository_File_Size

Why are they doing this in the first place? What is this information telling the user?

Thanks!

/Mathias
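 

For anyone wanting to try the custom-field approach discussed above, a sketch of the SQL might look like the following. The BLOB column name FILE_DATA and the document-key columns are assumptions based on a typical EDM_FILE_STORAGE layout, so verify them against your own schema before using this:

```sql
-- Hypothetical custom-field SQL showing a stored document's size in MB.
-- Assumes EDM_FILE_STORAGE holds the file contents in a BLOB column named
-- FILE_DATA -- check the actual column and key names in your installation.
SELECT ROUND(DBMS_LOB.GETLENGTH(efs.file_data) / 1024 / 1024, 2) AS size_mb
FROM   edm_file_storage efs
WHERE  efs.doc_class = :doc_class
AND    efs.doc_no    = :doc_no
AND    efs.doc_sheet = :doc_sheet
AND    efs.doc_rev   = :doc_rev
AND    efs.file_type = 'ORIGINAL';
```

Note that DBMS_LOB.GETLENGTH only works while the file is stored in the database; for files moved out to FTP or a share, something like the Edm_File_API.Get_Repository_File_Size method mentioned above would be needed instead, with the performance caveat described.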
 

Userlevel 7
Badge +30

 

Badge +2

Currently, one of the customers I am working with uses the DBMS_LOB.GETLENGTH function in a custom field to get the size of a document directly from Edm File Storage. 

In case we move documents from the database to an FTP server, will it be possible to fetch the file size? Is there any method by which we can fetch the size from an FTP/shared folder? 

 

/Nicum

Badge +2

Thank you for the quick reply Mathias.

By “this” what I meant was moving files from the database repository to an FTP server, and setting the FTP server as the default repository, instead of the database repository, for documents created in the future.

The dialog window you mentioned seems to do the trick. :)

 

/Nicum

Userlevel 7
Badge +30

Can I ask about the specifics of how to do this from the IFS Apps10 client? 

For example: which settings should be changed in the IEE client?

Just to be sure, what do you mean by “do this”? Perhaps the documentation can help:

http://docweb.corpnet.ifsworld.com/ifsdoc/Apps10/documentation/en/Docman/dlgChangeDocumentRepository.htm?StandAlone=true

(that link is only accessible by IFS employees)

 

Badge +2

Can I ask about the specifics of how to do this from the IFS Apps10 client? 

For example: which settings should be changed in the IEE client, etc.?

 

/Nicum

Userlevel 7
Badge +30

@Mathias Dahl Do you think the new options in 21R2 would be backwards compatible to Apps10?

Not sure what that would actually mean, so I will say “no” :)

 

Userlevel 5
Badge +10

@Mathias Dahl Do you think the new options in 21R2 would be backwards compatible to Apps10?

Userlevel 5
Badge +10

Thanks @Mathias Dahl 

Userlevel 7
Badge +30

@Mathias Dahl is it still the case for hosted environments that document storage is database only?

Yes. We do plan to add other storage options in later releases, though, which should be supported. That could come in 21R2 at the earliest.

 

 

Userlevel 5
Badge +10

@Mathias Dahl is it still the case for hosted environments that document storage is database only?

Thanks.

Userlevel 7
Badge +30

Also, when you keep the files outside, and if you sometimes clone your PROD into TEST, there is a risk, if you don’t change the “pointers”, that users testing in TEST will overwrite production documents.

Cloning is a challenge throughout the entire application. It requires each customer to develop a bespoke solution to copy the database, reconfigure the extended server, and prevent impersonation of PROD through various means. Handling the repointing of document repositories is just one tiny piece of developing this, since a customer would already need to engage a skilled developer to do it at all.

 

Our non-production environments use two repositories. Each environment has its own FTP repository for writing its own documents, which is set as the primary repository. As a second repository, each has a link to PROD using a different FTP user who has read-only permissions. This gives us, in essence, copy-on-write functionality: we can read PROD files and write TEST files as though it were one seamless copy of PROD.

Thanks for sharing that, it should be useful to others!

/Mathias

 

Userlevel 7
Badge +18

Also, when you keep the files outside, and if you sometimes clone your PROD into TEST, there is a risk, if you don’t change the “pointers”, that users testing in TEST will overwrite production documents.

Cloning is a challenge throughout the entire application. It requires each customer to develop a bespoke solution to copy the database, reconfigure the extended server, and prevent impersonation of PROD through various means. Handling the repointing of document repositories is just one tiny piece of developing this, since a customer would already need to engage a skilled developer to do it at all.

 

Our non-production environments use two repositories. Each environment has its own FTP repository for writing its own documents, which is set as the primary repository. As a second repository, each has a link to PROD using a different FTP user who has read-only permissions. This gives us, in essence, copy-on-write functionality: we can read PROD files and write TEST files as though it were one seamless copy of PROD.

Userlevel 7
Badge +30

Absolutely, on-prem and cloud are two different options. In my case, we had an on-prem installation, as replication was part of the solution, so we kept separate tablespaces for each different type of LOB and monitored them very closely.

@Mathias Dahl what options do you have for having the documents stored in Oracle but in a separate DB in the cloud? (Assuming docs are not updated as frequently as data, so they can have different DR options.)

-/thusitha

This will have to be our official answer, for now:

We have to ensure the cloud service meets high standards in a number of areas (performance, security, availability, manageability, etc.), and there is no perfect answer to your question due to a combination of complications and constraints that exist when running Oracle in Azure.

 

Userlevel 1
Badge +2

Absolutely, on-prem and cloud are two different options. In my case, we had an on-prem installation, as replication was part of the solution, so we kept separate tablespaces for each different type of LOB and monitored them very closely.

@Mathias Dahl what options do you have for having the documents stored in Oracle but in a separate DB in the cloud? (Assuming docs are not updated as frequently as data, so they can have different DR options.)

-/thusitha

Userlevel 7
Badge +30

Sometimes, from the DB admin point of view, you can keep this LOB data in a separate tablespace for better housekeeping. By doing that you can even split your storage between less expensive storage for the documents and high-speed storage for other data. You can also monitor exactly which tablespace is growing, in order to make better decisions.

Yes, and I would have hoped that we could do more of that, but I think that “something” in Azure makes it hard or impossible.

 

Yes, as a general rule (not pertaining only to IFS), it should not be blindly assumed that what is possible “on prem” is possible in the cloud. Big disappointments can sometimes result.

Good point. I guess the “cloud hype” has something to do with it: “Move it all to the cloud, we can do everything in the cloud…” Still, there are things we cannot yet do “there”, or which are harder or have complications…

 

Userlevel 2
Badge +6

Sometimes, from the DB admin point of view, you can keep this LOB data in a separate tablespace for better housekeeping. By doing that you can even split your storage between less expensive storage for the documents and high-speed storage for other data. You can also monitor exactly which tablespace is growing, in order to make better decisions.

Yes, and I would have hoped that we could do more of that, but I think that “something” in Azure makes it hard or impossible.

 

Yes, as a general rule (not pertaining only to IFS), it should not be blindly assumed that what is possible “on prem” is possible in the cloud. Big disappointments can sometimes result.

Userlevel 7
Badge +30

Sometimes, from the DB admin point of view, you can keep this LOB data in a separate tablespace for better housekeeping. By doing that you can even split your storage between less expensive storage for the documents and high-speed storage for other data. You can also monitor exactly which tablespace is growing, in order to make better decisions.

Yes, and I would have hoped that we could do more of that, but I think that “something” in Azure makes it hard or impossible.

 

Userlevel 1
Badge +2

As mentioned above, the ability to do content search was a strong reason to have the documents in the database. In addition, a full backup/restore will take care of the document metadata, and the same security is applied when you have them in the database. Sometimes, from the DB admin point of view, you can keep this LOB data in a separate tablespace for better housekeeping. By doing that you can even split your storage between less expensive storage for the documents and high-speed storage for other data. You can also monitor exactly which tablespace is growing, in order to make better decisions.
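
To illustrate the separate-tablespace idea from a plain Oracle point of view, a rough sketch follows. The tablespace name, datafile path, and the FILE_DATA column name are placeholders/assumptions, so adapt them to your environment and rehearse outside production first:

```sql
-- Create a dedicated tablespace for document LOBs, e.g. on cheaper storage
-- (datafile path and sizes are placeholders).
CREATE TABLESPACE docman_lobs
  DATAFILE '/u02/oradata/ifsprod/docman_lobs01.dbf'
  SIZE 10G AUTOEXTEND ON NEXT 1G;

-- Move the Doc Man BLOB column into it. The column name FILE_DATA is an
-- assumption; moving LOBs rebuilds them, so plan a maintenance window.
ALTER TABLE edm_file_storage
  MOVE LOB (file_data) STORE AS (TABLESPACE docman_lobs);

-- Monitoring growth per tablespace then becomes straightforward:
SELECT tablespace_name, ROUND(SUM(bytes)/1024/1024/1024, 1) AS gb
FROM   dba_segments
GROUP  BY tablespace_name
ORDER  BY gb DESC;
```

This keeps the in-database benefits (content search, one backup, one security model) while making the document growth visible and movable as a unit.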
