I’ve seen the same thing. The Oracle DB grows fast, and backup/restore takes a long time. We have moved to an FTP document repository, and I’ve found it more robust than UNC. It also helps with segregating document classes/servers if needed (ITAR, classification, etc.), as you can use firewalls to stop even IFS from accessing the FTP sites.
- The batch transfer process is good. Just remember you need space on the MWS server, as the process stages the files there before putting them into the final FTP destination.
- I’ve not noticed any functionality difference within IFS. If your file servers/FTP are less secure, that is another risk, but it is one for the IT team.
- I’d recommend that nearly anyone avoid putting Doc Man inside the IFS Oracle DB, but hopefully someone can explain the real benefits. In terms of performance or access, I have found no noticeable difference between Oracle and FTP.
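On the staging-space point above, a minimal pre-flight check can avoid a failed batch transfer. This is only an illustrative sketch (the staging path and batch size are placeholders, not IFS settings):

```python
import shutil

def staging_has_room(staging_path: str, batch_bytes: int,
                     safety_factor: float = 1.2) -> bool:
    """Return True if the staging volume can hold the batch plus a margin."""
    free = shutil.disk_usage(staging_path).free
    return free >= batch_bytes * safety_factor

# Example: check whether a 10 GB batch would fit on the current volume.
if not staging_has_room(".", 10 * 1024**3):
    print("Not enough space on the staging volume; free up disk first.")
```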
Hope that helps.
Agree with @david.harmer, FTP is the way to go, with no loss of functionality. We had the same problems with database bloat and backup length, so we made the move to FTP a couple of years ago. IFS consultants and/or partners should never have a customer set up documents to go directly to the database; it is fine only until Doc Man is used prolifically.
One advantage of storing documents inside the database is the ability to search for text within the documents themselves. You lose this ability when you host them outside the IFS database, although how much this matters varies from client to client.
As @Srikanth said, you lose the ability to search “inside” the checked-in files if you keep them outside the database. Another advantage is that when you back up and restore Oracle, you also get the Docman files included, so there is no risk of things getting out of sync. And if you care about traceability (who accessed which document), you get an almost 100% guarantee of tracking document usage when the documents are stored in the database. When keeping the files outside the database, there is in theory always a risk that someone accesses the files without going through IFS.
Also, when you keep the files outside, and if you sometimes clone your PROD into TEST, there is a risk, if you don’t change the “pointers”, that users testing in TEST will overwrite production documents.
But yes, a growing database has its problems. How quickly it grows of course depends on what kind of files you check in. You can use a mixed approach though, and have some files on FTP and some in the database. And, as others have mentioned, you can also migrate from one repository to another if the size becomes a problem.
So, if database size is not a problem, either because you don't check in that many documents, or many large documents, or because you have a backup and recovery strategy that can handle it, then database storage is the most functional, most secure, most convenient and most reliable option, in our view. These are some of the reasons why it is currently the only option in IFS Managed Cloud offering.
Hi,
If I’m not mistaken, in Apps 9 you would lose the ability to see attached documents in mobile apps such as Notify Me if the document is not attached to a class stored in the database. I haven’t tested this in Apps 10, so it is probably worth a quick test.
My experience so far has been to use mixed repositories to get the best of both worlds. Separating the classes into FTP and database based on their expected usage and file sizes gives you the best chance of keeping the DB size in check while still getting the most out of the features available for database storage.
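To illustrate the mixed-repository idea, here is a hedged sketch of the kind of routing policy one might document for administrators. The class names and size threshold are invented; in IFS the repository is actually configured per document class in the setup forms, not in code:

```python
# Illustrative policy only: IFS assigns a repository per document class in
# its configuration; this just encodes an example decision rule.
LARGE_MEDIA_CLASSES = {"VIDEO", "AUDIO", "SCAN"}   # hypothetical class names
SIZE_THRESHOLD_MB = 50                             # hypothetical cutoff

def suggested_repository(doc_class: str, typical_size_mb: float) -> str:
    """Suggest 'FTP' for bulky/low-search classes, 'DATABASE' otherwise."""
    if doc_class in LARGE_MEDIA_CLASSES or typical_size_mb > SIZE_THRESHOLD_MB:
        return "FTP"
    return "DATABASE"  # keeps full-text search and backup consistency
```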
Cheers
Make use of the built-in functionality to transfer between repositories. This will amend all of the pointers for any existing files.
As @Srikanth said, ....
So, if database size is not a problem, either because you don't check in that many documents, or many large documents, or because you have a backup and recovery strategy that can handle it, then database storage is the most functional, most secure, most convenient and most reliable option, in our view. These are some of the reasons why it is currently the only option in IFS Managed Cloud offering.
Hello Mathias, a lot of interesting inputs in your message.
When you write “currently”… is FTP repository excluded now but could come in later upgrade? Or are there today no plans for this?
Thanks
Olivier
I think adding support for “basic” FTP, i.e. unencrypted, will not happen. Simply put, we are not allowing it because we want to fulfill certain “quality standards” (there are a lot of details that I don’t know well enough, and also don’t want to discuss here), which are very high in a cloud setting.
What we want to look into though is to use some sort of BLOB storage in the cloud, like Azure Blob Storage or similar. There are plans to do it, but I would not dare to say in which release such an option can come. I hope “soon” though… We have some customers who want to run the solution in the cloud and where they want to keep the database size down.
As of right now, the options are to NOT import all the types of documents you want (for example, images, video or audio might fill the database quickly, at least if the volumes are large) and manage them in some other way, or to try an approach where you keep links to the files in IFS, with the links (URLs) pointing to the location where the file really is. This works for simple viewing of documents in many cases, but it is harder to “edit” such documents in a simple manner from IFS.
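The link-based workaround could be modelled like this. This is a sketch, not an IFS feature: the mapping and the document keys are invented to show the idea of storing only a URL and resolving it at view time:

```python
# Hypothetical mapping of (document number, revision) to external locations.
# In a real setup this could live in a custom field, not in Python.
doc_links = {
    ("DOC-1001", "A1"): "https://files.example.com/manuals/pump-overhaul.pdf",
}

def view_url(doc_no: str, revision: str) -> str:
    """Resolve a document to the external URL where the file really lives."""
    try:
        return doc_links[(doc_no, revision)]
    except KeyError:
        raise LookupError(f"No external link registered for {doc_no} rev {revision}")
```

Viewing then becomes a simple redirect; editing still requires fetching the file from wherever the URL points, which is why this approach suits read-mostly documents.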
/Mathias
As mentioned above, the ability to do content search was a strong reason to have the documents in the database. In addition, a full backup/restore will take care of the document metadata, and the same security applies when the documents are in the database. Sometimes, from the DB admin point of view, you can keep this LOB data in a separate tablespace for better housekeeping. By doing that you can even split your storage options: less expensive storage for the documents and high-speed storage for other data. You can also monitor exactly which tablespace is growing in order to make better decisions.
Sometimes, from the DB admin point of view, you can keep this LOB data in a separate tablespace for better housekeeping. By doing that you can even split your storage options: less expensive storage for the documents and high-speed storage for other data. You can also monitor exactly which tablespace is growing in order to make better decisions.
Yes, and I would have hoped that we could do more of that but I think that “something” in Azure, makes it hard or impossible.
Yes, as a general rule (not pertaining only to IFS), it should not be blindly assumed that what is possible “on prem” is possible in the cloud. Big disappointments can sometimes result.
Good point. I guess the “cloud hype” has something to do with it. “Move it all to the cloud, we can do everything in the cloud…” Still, there are things we cannot yet do “there” or which are harder or have complications…
Absolutely, on-prem and cloud are two different options. In my case, we had an on-prem installation, as replication was part of the solution, so we kept separate tablespaces for each different type of LOB and monitored them very closely.
@Mathias Dahl, what options do you have for having the documents stored in Oracle but in a separate DB in the cloud (assuming docs are not updated as frequently as other data, so they can have different DR options)?
-/thusitha
This will have to be our official answer, for now:
We have to ensure the cloud service meets high standards in a number of areas (performance, security, availability, manageability, etc) and there’s no perfect answer to your question due to a combination of complications and constraints which exist when running Oracle in Azure.
Also, when you keep the files outside, and if you sometimes clone your PROD into TEST, there is a risk, if you don’t change the “pointers”, that users testing in TEST will overwrite production documents.
Cloning is a challenge throughout the entire application. It requires each customer to develop a bespoke solution to copy the database, reconfigure the extended server, and prevent the impersonation of PROD through various means. Handling the repointing of document repositories is just one tiny piece in developing this since a customer would already need to engage with a skilled developer to do this at all.
Our nonproduction environments use two repositories. Each environment has its own FTP repository for writing its own documents, which gets set as the primary repository. As a second repository, each has a link to PROD using a different FTP user who only has read-only permissions. This gives us in essence copy-on-write functionality. We can read PROD files and write TEST files as though it was one seamless copy of PROD.
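The two-repository setup described above can be sketched as a read-through, write-to-primary resolver. Repositories are modelled as plain dicts here purely for illustration; in IFS this is repository configuration, not code:

```python
class CopyOnWriteRepo:
    """Read from TEST first, fall back to read-only PROD; write only to TEST."""

    def __init__(self, primary: dict, readonly_fallback: dict):
        self.primary = primary              # TEST's own FTP repository
        self.fallback = readonly_fallback   # PROD via a read-only FTP user

    def read(self, path: str) -> bytes:
        if path in self.primary:
            return self.primary[path]
        return self.fallback[path]          # raises KeyError if truly missing

    def write(self, path: str, data: bytes) -> None:
        self.primary[path] = data           # PROD is never touched

# Usage: TEST sees PROD's files, but edits land only in TEST's repository.
prod = {"spec.pdf": b"released content"}
test_env = CopyOnWriteRepo(primary={}, readonly_fallback=prod)
test_env.write("spec.pdf", b"draft edit")
assert test_env.read("spec.pdf") == b"draft edit"
assert prod["spec.pdf"] == b"released content"
```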
Thanks for sharing that, it should be useful to others!
/Mathias
@Mathias Dahl is it still the case for hosted environments that document storage is database only?
Thanks.
Yes. We plan to add other storage options in later releases though, which should be supported. That can come in 21R2 at the earliest.
@Mathias Dahl Do you think the new options in 21R2 would be backwards compatible to Apps10?
Not sure what that would actually mean, so I will say “no” :)
Can I ask for specifics on how to do this from the IFS Apps 10 client?
For example, which settings need to be changed in the IEE client, etc.?
/Nicum
Just to be sure, what do you mean by “do this”? Perhaps the documentation can help:
http://docweb.corpnet.ifsworld.com/ifsdoc/Apps10/documentation/en/Docman/dlgChangeDocumentRepository.htm?StandAlone=true
(that link is only accessible by IFS employees)
Thank you for the quick reply Mathias.
By “this” I meant moving files from the database repository to an FTP repository, and setting FTP as the default repository for documents created in the future.
The dialog window you mentioned seems to do the trick.
/Nicum
Currently, one of the customers I am working with uses the DBMS_LOB.GETLENGTH function in a custom field to get the size of a document directly from EDM File Storage.
If we move documents from the database to FTP, will it still be possible to fetch the file size? Is there a method to fetch the size from an FTP/shared folder?
/Nicum
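For an FTP repository, the file size can usually be fetched with the FTP `SIZE` command, which Python’s `ftplib` exposes as `FTP.size()`. A hedged sketch follows: the host, credentials and path are placeholders, the server must support `SIZE`, and the result is typically only reliable in binary mode:

```python
from ftplib import FTP

def remote_file_size(ftp, path: str) -> int:
    """Return the size in bytes of a file on the FTP server via SIZE."""
    ftp.voidcmd("TYPE I")          # binary mode, so SIZE reports exact bytes
    size = ftp.size(path)
    if size is None:
        raise OSError(f"Server did not return a size for {path}")
    return size

# Placeholder usage; replace host/credentials/path with your repository's.
# with FTP("ftp.example.com") as ftp:
#     ftp.login("docman_reader", "secret")
#     print(remote_file_size(ftp, "/ifsdoc/DOC-1001/A1/file.pdf"))
```

For a shared (UNC) folder, `os.path.getsize(r"\\server\share\file.pdf")` does the same job.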