Question

IFS Cloud 23R2 - Setting up SMB storage

  • 8 January 2024
  • 27 replies
  • 254 views

Badge +3

Hello,
For one of our Remote IFS Cloud 23R2 customers, we are trying to set up SMB storage. The SMB share was created by the customer's IT team, and we are trying to configure the parameters in the ifscloud-values.yaml file by following the guide: https://docs.ifs.com/techdocs/23r2/070_remote_deploy/400_installation_options/120_file_storage_for_remote/122_installation_guide/#activating_file_storage_service

The MT installer completes successfully, but the ifs-file-storage pod is stuck in the Init state.

 

Values in the config file:

[Screenshot: SMB parameters from ifscloud-values.yaml]

Has anyone experienced this issue before?

Thanks,
 Nishanth


27 replies

Userlevel 7
Badge +30

I'll move this to the framework section. It's of course related to Docman since Docman can use it, but…

@chanaka-shanil 

 

Userlevel 6
Badge +15

@Nishanth, can you describe the pod? kubectl describe pod ifs-file-storage -n <namespace>

Badge +3

Hi @chanaka-shanil,
Below is the last segment of the describe output, which shows the error message. Let me know if you need the full output.

Events:
  Type     Reason              Age                  From                     Message
  ----     ------              ----                 ----                     -------
  Warning  FailedMount         5m7s (x61 over 17h)  kubelet                  Unable to attach or mount volumes: unmounted volumes=[fss-volume], unattached volumes=[kube-api-access-rctjv linkerd-identity-end-entity linkerd-identity-token labelinfo secrets fss-volume linkerd-proxy-init-xtables-lock]: timed out waiting for the condition
  Warning  FailedAttachVolume  17s (x271 over 17h)  attachdetach-controller  AttachVolume.Attach failed for volume "ifs-fss-pv-smb-aspiit-448281875b94fdf905e063d09a09d46d5303ff0273703ff1e6486d2fc2e984ed" : timed out waiting for external-attacher of smb.csi.k8s.io CSI driver to attach volume ifs-fss-pv-smb-aspiit-448281875b94fdf905e063d09a09d46d5303ff0273703ff1e6486d2fc2e984ed


/Nishanth

Userlevel 6
Badge +15

@Nishanth, can you check whether the deployment can reach the file share? I suspect that the share is not reachable. Also try using the IP address instead of the hostname.
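
For example, something like this (a rough diagnostic sketch; the file server name is a placeholder, and it assumes you run it from the management server with kubectl access):

ps> Test-NetConnection -ComputerName <file-server> -Port 445   # is the SMB port reachable?
ps> kubectl get csidrivers                                     # is smb.csi.k8s.io registered in the cluster?
ps> kubectl get pods -A | Select-String 'csi-smb'              # are the SMB CSI driver pods running?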

Badge +3

Hi @chanaka-shanil ,
Thanks for the input. I confirmed the connection from the k8s server to the file share, and it is working fine. I couldn't test from the file-storage pod itself, though, since that pod has not started yet. I tried via another pod (via shell login), and the deployment resolves the host/IP successfully. Is there any other way we could verify this?

I will reconfigure using the IP instead of the hostname and get back to you.

Thanks,
/Nishanth

Badge +3

Hi @chanaka-shanil ,
I added the IP instead of the hostname, and the result is unfortunately the same. I described the pod, and the error message is also similar:

  Type     Reason              Age                 From                     Message
  ----     ------              ----                ----                     -------
  Warning  FailedMount         95s (x3 over 10m)   kubelet                  Unable to attach or mount volumes: unmounted volumes=[fss-volume], unattached volumes=[labelinfo secrets fss-volume linkerd-proxy-init-xtables-lock kube-api-access-f6mkl linkerd-identity-end-entity linkerd-identity-token]: timed out waiting for the condition
  Warning  FailedAttachVolume  56s (x12 over 31m)  attachdetach-controller  AttachVolume.Attach failed for volume "ifs-fss-pv-smb-aspiit-4249944e4445c893f4a70ac42f5f82effb23163d969aa7bdeafe6bc0ad858420" : timed out waiting for external-attacher of smb.csi.k8s.io CSI driver to attach volume ifs-fss-pv-smb-aspiit-4249944e4445c893f4a70ac42f5f82effb23163d969aa7bdeafe6bc0ad858420

Badge +3

Hi @chanaka-shanil , 
Thanks for the pointers. With your and Pasindu's help, we found the root cause and managed to fix it.

The issue was: I set up the 23R2 middleware infrastructure using the "Advanced/Manual" mode. Apart from the usual commands (e.g. 'KEY', 'KUBERNETES', etc.), I also had to run 'STORAGE' for this environment, which I had missed.

ps> .\main.ps1 -resource 'STORAGE'

https://docs.ifs.com/techdocs/23r2/070_remote_deploy/010_installing_fresh_system/030_preparing_server/50_windows_managementserver/#install_ifs-storage_helm_chart

Once this was executed successfully, I could see a new namespace with two pods in it.

[Screenshot: the new storage namespace and its two pods]
After that, the ifs-file-storage pod started to work fine.
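
For anyone verifying the same thing, a quick check (the namespace name is a placeholder, since I'm not sure it's the same in every setup):

ps> kubectl get pods -n <storage-namespace>

Both pods should be in the Running state before retrying the file storage deployment.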
 



Thanks,
/Nishanth

Userlevel 3
Badge +7

Hi @Mathias Dahl / @chanaka-shanil,
We are actually facing the same issue; the file storage pod remains in the Init status.

[Screenshots: ifs-file-storage pod stuck in Init status]

Any idea what might be the issue?

I'm also a little confused by the highlighted statement in the technical doc below; can you please explain?

[Screenshot: the highlighted statement from the technical documentation]

Userlevel 7
Badge +30

@ORFJAN 

I can only comment on this:

> I'm also a little confused by the highlighted statement in the technical doc below; can you please explain?

It means that IFS Cloud File Storage, in a Remote deployment/residency, does not support using Azure Blob Storage as the storage location. Instead, SMB ("network share" technology) is used.

 

Userlevel 3
Badge +7

Hi @Mathias Dahl, then it's OK. However, I expect an Azure file share will work too.

Userlevel 7
Badge +30

> Hi @Mathias Dahl, then it's OK. However, I expect an Azure file share will work too.

If that is some kind of software simulating/enabling the SMB protocol with Azure Blob Storage as a backend, then it should work; from IFS Cloud File Storage's point of view it's just a normal SMB share.
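
If it's Azure Files specifically, it exposes a native SMB endpoint, so it can be sanity-checked like any other share. Note that SMB runs on port 445, which must be open outbound; the storage account name below is a placeholder:

ps> Test-NetConnection -ComputerName <storageaccount>.file.core.windows.net -Port 445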

As I remember it, this was actually discussed in another thread here on IFS Community and they got it to work.

Good luck!

Userlevel 3
Badge +7

Hi @Mathias Dahl 
yes, we have got the Azure share working and we have also done our first testing, with a few questions:

1. It seems all the documents are stored in one subdirectory called "docman" on the Azure share, and we were not able to force it to store them in another subdirectory (the subdirectory IFSOCTEST on the screenshot below was created manually), despite the repository setting below. So it's not possible to store particular doc classes in their own subdirectories, as we do today with FTP?

2. I cannot see any connection between the document in IFS and the file on the share, since I did not find any properties/etag or metadata stored with the file on the share.

[Screenshots: files on the Azure share and the repository settings]

Userlevel 7
Badge +30

@ORFJAN 

Hi again,

> It seems all the documents are stored in one subdirectory called "docman" on the Azure share, and we were not able to force it to store them in another subdirectory (the subdirectory IFSOCTEST on the screenshot below was created manually), despite the repository setting below. So it's not possible to store particular doc classes in their own subdirectories, as we do today with FTP?

That's correct. Will it be a problem for you, and if so, what is the problem? (Provide as many details as you can about how you use this.)

As for how the files are named, this was discussed recently, here:

https://community.ifs.com/document-management-docman-248/file-names-in-file-storage-47260

 

Userlevel 3
Badge +7

Hi @Mathias Dahl,


Ref. 1 - We have an FTP repository for Oriflame R&D which keeps all the documents related to Oriflame products, raw materials, ingredients, packaging, etc. The repository is not accessed via the application only, but directly via FTP too, and the docs are structured in subfolders so that the same type of product-related documentation (doc class) is stored in one folder. For instance the subfolder RMSPEC, so one knows it holds Raw Material Specification documents; please see the screenshot below. The files are named so that one can see which raw material a document is for. Direct access is used mainly to get documents in bulk, but I will check further with our R&D what particular business scenarios they use the direct access for.

[Screenshot: FTP repository subfolder structure, e.g. RMSPEC]

Ref. 2 - I have read the post. Well, it seems to be a mindset changer. Today users and consultants are used to seeing a connection between the doc record and the file. I can understand that the file names are Base64-encoded, that one should not care what the file name is, and that it should just be perceived as a storage service, but on the other hand, what is the issue with showing the Base64-encoded file name (see the screenshot below) and keeping it in a table in the backend? It would make life easier in case of investigations or document migrations.

[Screenshot: file name shown on the document record in IFS]
 

Userlevel 7
Badge +30

@ORFJAN 

> The repository is not accessed via the application only, but directly via FTP too

I had a strong hunch you would say that; if not, I would have told you not to care about how the file is named there... 🙂

> Direct access is used mainly to get documents in bulk, but I will check further with our R&D what particular business scenarios they use the direct access for.

It would be interesting to know more details, so feel free to share what you find. It's more work, at least initially, of course, but the documents can always be extracted using REST/projection calls as well. Then you don't need to care about where the files are stored in the backend, or how they are named.
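
Just to illustrate the idea, a minimal sketch; the projection and entity set names below are hypothetical placeholders, since the real ones depend on your IFS Cloud version and use case:

ps> $token = '<bearer-token>'   # obtained through your normal IAM/OAuth flow
ps> Invoke-RestMethod -Uri 'https://<host>/main/ifsapplications/projection/v1/<DocmanProjection>.svc/<FileEntitySet>' -Headers @{ Authorization = "Bearer $token" }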

> on the other hand, what is the issue with showing the Base64-encoded file name (see the screenshot below) and keeping it in a table in the backend

I can only speculate, as I did in that other post. It was news to me as well that the file names are Base64-encoded. In a Cloud deployment, as I remember it, the "file name" (if we can talk of a file name when it is just the ID or name of a BLOB) is not Base64-encoded.

Whatever you "do" with these files, I hope it's done by automatic means and, if I'm right, it is hopefully not that hard to do a Base64 decode operation on those names before they are used in various "integrations" with other systems.

Good luck discussing with your R&D :)

 

Userlevel 3
Badge +7

 Hi @Mathias Dahl,
we have discussed the new way of saving files further internally, and the good news is that we can survive with one subfolder and the new file naming.
However, we have identified further business scenarios where we access files directly or via an HTTP link, and unfortunately we will have to invest some effort to migrate these cases to the REST API.
Since only one subfolder is allowed, there is also the question of how to separate documents from PROD and TEST/UAT instances.

Regarding the reference between the IFS doc record and the file, it seems it's not only me who is convinced it's not a big deal to see the reference on the IFS screen and/or on the Azure file, and it would bring more confidence and transparency; some screenshots below to inspire ;-)
Have a nice weekend!
Jan

[Screenshots: suggested ways to show the file reference in IFS and on the Azure file]

Userlevel 7
Badge +30

@ORFJAN 

> we have discussed the new way of saving files further internally, and the good news is that we can survive with one subfolder and the new file naming.

Good to hear!

> However, we have identified further business scenarios where we access files directly or via an HTTP link, and unfortunately we will have to invest some effort to migrate these cases to the REST API.

Aha, so I think you are saying that in your current setup you are sharing some of the files from an HTTP server. Apart from having to figure out a good way to handle authentication and access, accessing the files via one of our REST APIs/projections is better, since you don't need to care where the files are stored. And I understand it is some work to go from one solution to the other (the URLs look different, for one thing, and the access and authentication might make things harder).

> Since only one subfolder is allowed, there is also the question of how to separate documents from PROD and TEST/UAT instances.

The safest way is to use two separate shares for different environments. You don't want to risk overwriting PROD, at least, with files people are playing with in TEST/DEV...

> Regarding the reference between the IFS doc record and the file, it seems it's not only me who is convinced it's not a big deal to see the reference on the IFS screen and/or on the Azure file, and it would bring more confidence and transparency; some screenshots below to inspire ;-)

I think you missed the word "not" in there somewhere, but I think I get the message anyway... 🙂

This transparency you talk about cannot be something a regular user needs to care about, right? It's "an admin thing". And an admin can, once, to feel safe, use a tool or site to decode the Base64 encoding, to be convinced the file is actually where it should be. Once people understand how this works, do you think it's a big problem that you cannot look at the file names on the screen, in IFS and in the Azure/SMB share, and visually match them?
 

Userlevel 3
Badge +7

Hi @Mathias Dahl 
 

 

> The safest way is to use two separate shares for different environments. You don't want to risk overwriting PROD, at least, with files people are playing with in TEST/DEV...

Yes, indeed. However, that means there is no possibility to open existing documents from PROD in TEST/UAT, since only one file share is allowed per instance, or?

> I think you missed the word "not" in there somewhere, but I think I get the message anyway... 🙂

Could be;-)

> This transparency you talk about cannot be something a regular user needs to care about, right? It's "an admin thing". And an admin can, once, to feel safe, use a tool or site to decode the Base64 encoding, to be convinced the file is actually where it should be. Once people understand how this works, do you think it's a big problem that you cannot look at the file names on the screen, in IFS and in the Azure/SMB share, and visually match them?

Let me illustrate this with a case I faced on Friday. I was searching for a document in IFS and wanted to get a preview of the file from there. When I pushed the "preview" button, the document was downloaded but with 0 size. Then I started to explore further; my first thought was to check whether the file was on the share, but looking at the folder with the files I could only guess based on the creation date/time. Does one really need to go to PL/SQL and write a script to be 100% sure?
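
"Guessing" in practice meant listing the newest files on the share, something like this (the share path is a placeholder):

ps> Get-ChildItem '\\<server>\<share>\docman' | Sort-Object CreationTime -Descending | Select-Object Name, CreationTime, Length -First 5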

Userlevel 7
Badge +30

@ORFJAN 

> Yes, indeed. However, that means there is no possibility to open existing documents from PROD in TEST/UAT, since only one file share is allowed per instance, or?

Yes, that's a limitation today: only one SMB share per instance/environment.

I understand how it can be convenient to be logged in to TEST and be able to see documents from PROD, but it feels dangerous to try to do that by "reusing" the share. How about just copying a small subset from PROD to TEST to be used for testing and verification purposes, if they are needed at all? (I would just create new ones; it takes a minute to upload a bunch of files to test with.)
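
If you do copy a subset, something like robocopy can handle it; a sketch with placeholder paths, where /E includes subfolders and /MAXAGE:30 limits the copy to files from the last 30 days:

ps> robocopy \\prod-server\ifs-share\docman \\test-server\ifs-share\docman /E /MAXAGE:30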

I strongly recommend against reusing the same share for PROD and TEST/UAT.

> Let me illustrate this with a case I faced on Friday. I was searching for a document in IFS and wanted to get a preview of the file from there. When I pushed the "preview" button, the document was downloaded but with 0 size. Then I started to explore further; my first thought was to check whether the file was on the share, but looking at the folder with the files I could only guess based on the creation date/time. Does one really need to go to PL/SQL and write a script to be 100% sure?

Thanks for the scenario! One way to do it is to take the file name as seen in Docman (say, TE_DOC-4657860-1-1-1.JPG) and run it through an online Base64 encoder (like https://www.base64encode.org/).

Encoding the file name above (minus the period and file extension) will get you this:

VEVfRE9DLTQ2NTc4NjAtMS0xLTE=

Then look for that file on the share.
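
If you'd rather not paste file names into a website, PowerShell can do the same encode/decode locally:

ps> [Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes('TE_DOC-4657860-1-1-1'))
VEVfRE9DLTQ2NTc4NjAtMS0xLTE=
ps> [Text.Encoding]::UTF8.GetString([Convert]::FromBase64String('VEVfRE9DLTQ2NTc4NjAtMS0xLTE='))
TE_DOC-4657860-1-1-1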

No PL/SQL scripts needed 🙂
 

Userlevel 3
Badge +7

@Mathias Dahl ,
 

> I understand how it can be convenient to be logged in to TEST and be able to see documents from PROD, but it feels dangerous to try to do that by "reusing" the share. How about just copying a small subset from PROD to TEST to be used for testing and verification purposes, if they are needed at all? (I would just create new ones; it takes a minute to upload a bunch of files to test with.)


Here is an attempt at an answer: which subset, since I don't know which documents the files are connected to? ;-) Well, keeping an additional share and copy means additional costs too. We will need to test and think further; thanks for confirming the limitations.

> Thanks for the scenario! One way to do it is to take the file name as seen in Docman (say, TE_DOC-4657860-1-1-1.JPG) and run it through an online Base64 encoder (like https://www.base64encode.org/).
>
> Encoding the file name above (minus the period and file extension) will get you this:
>
> VEVfRE9DLTQ2NTc4NjAtMS0xLTE=
>
> Then look for that file on the share.
>
> No PL/SQL scripts needed 🙂


I see, but still, in the case of particular files one needs to go to a page, do a copy/paste, and check. However, I still don't understand why this information cannot be available on the IFS screen directly; is there any technical challenge I cannot see? From my perspective it's low-hanging fruit that a lot of other customers will appreciate, and on top of that you will save time on similar discussions with other stakeholders once this is used broadly ;-)
Thanks

Userlevel 7
Badge +30

@ORFJAN 

> I see, but still, in the case of particular files one needs to go to a page, do a copy/paste, and check. However, I still don't understand why this information cannot be available on the IFS screen directly; is there any technical challenge I cannot see?

It's not impossible. Firstly, the Base64 encoding was introduced recently in IFS Cloud File Storage, and we who work with Docman were not aware of it, so it was never even discussed.

Then again, why should Docman "care" how the file is stored or named behind the black box that is File Storage? It's "internal information" to File Storage, an "implementation detail".

Docman asks File Storage to store a file under a certain file name (the one you see in Document Revision / File Refs) and File Storage takes care of the rest. Whether it changes the name in the actual storage is something we don't need to care about.

It's good and interesting input regardless, and we can think of some way to help customers troubleshoot these cases. I'm thinking that we should perhaps have an option to navigate into File Storage's own file registry, where an admin can see the details.
 

Userlevel 3
Badge +7

@Mathias Dahl 
ok, thanks, that would be great from my perspective.

Thanks for your inputs; we will update you with further progress of the transition. We need to move a significant volume of docs from FTP and even more from the DB.
We are curious about the performance of the move, i.e. whether the functionality can handle hundreds or thousands of docs. We also hope to significantly reduce our DB size by doing that.

Userlevel 7
Badge +30

@ORFJAN 

The built-in assistant for moving document files between repositories is not very performant, partly because it goes via IFS Connect, which in turn calls the projection that does the heavy lifting.

Make sure to try it out with a hundred or so documents to see how fast it is for you. (I think IFS Connect can be tweaked in various ways, but I don't know much about it.)

For documents stored in the database, the IFS Cloud File Storage migration tool might be an option. It's really meant for Apps 8/9/10 -> IFS Cloud migrations, but it does actually work between IFS Cloud versions as well...

You might want to develop something "custom" as well that takes shortcuts that work in your scenario/setup. Moving the files is what takes time; the rest is "changing the file pointers".
 

Userlevel 3
Badge +7

Hi @Mathias Dahl,

thanks for sharing. Then it's a little odd to use the screen, since the selection can be limited by Doc Class only. See the count of documents per class on the screenshot (sorted in descending order) 😉

[Screenshot: document counts per Doc Class, sorted descending]
 

Userlevel 7
Badge +30

@ORFJAN 

Sorry, I'm probably slow :), but what's odd there? Are you thinking about the fact that there are many documents there and that it will take a long time to run a small test? If that's the case, create a dummy class, import a few (a hundred?) documents of various (or at least known) sizes, and transfer all documents in that class to the new repository.
 
