We’re currently upgrading, and for DocMan, we’d like to move away from FTP and start using SMB. Our new application on version 22R1SU3 isn’t working with DocMan over a UNC share.
We’re a Windows shop, and we have our Linux middleware segregated off into its own network so we can properly treat it like a black box. What port do we need to open for the middleware to talk to the file server? Is it enough to just open 445?
I used telnet to confirm the middleware host can see port 445 on the file server. Can I do any further checking to see if the containers can see the port?
From EDM Basic, I tried the hostname and the IP address, and neither worked, but nevertheless, is there a way to confirm DNS is working? I would much prefer to use human-readable hostnames.
For a UNC path, can you confirm which //direction \\the //slashes \\ought //to \\go?
The username is on our domain. What should the format look like, e.g. acme.com\coyote or coyote@acme.com?
Are there restrictions on the characters allowed in the password? (Our character set is pretty simple right now, but I just want to confirm.)
Do you have any further troubleshooting tips?
Hi,
This is a weak part of the SMB/Shared repository concept today: it is hard to know what's wrong when things fail. Some of it we have a hard time doing anything about, I'm afraid, because of how the platform is built.
I will try to comment on your different questions.
About the port number: in theory SMB can use two different ports, 445 (SMB directly over TCP) and the older 139 (the NetBIOS session service), but in practice I think it's port 445 that should be open.
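If netcat happens to be installed on the middleware host, a quick check of both candidates could look something like this (the hostname is just an example, substitute your file server's name or IP):

# 445 is SMB directly over TCP; 139 is the older NetBIOS session service
nc -zv fileserver.acme.com 445
nc -zv fileserver.acme.com 139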
It's good that you tried using telnet from the k8s machine, but it's not the whole story. There is also a kind of firewall inside the k8s cluster itself: the linkerd service mesh, which proxies the pods' network traffic. If it does not allow outgoing calls on port 445, the connection will not work. This can be configured (through annotations in a YAML file), but I don't remember the exact details. I have been part of investigating customer installations where this was configured in the wrong way.
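If you want to look at how linkerd is configured for the deployment in question, something like this should print the relevant annotations (the namespace and deployment names here are only examples; substitute your own):

kubectl -n appf-ci-dev-r2 get deployment ifsapp-odata -o jsonpath="{.spec.template.metadata.annotations}"

Look for linkerd.io/inject and config.linkerd.io/skip-outbound-ports in the output.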
If you cannot verify that by looking at the configuration file(s), there is a way to connect to a particular container and do some testing inside it. Here are some notes I made when I had to do this earlier this year:
===
How to get terminal access to a container
Prerequisites
You need kubectl set up and working on your PC. This should be documented in the technical documentation for IFS Cloud.
Commands
With this in place, use the commands below.
First, let us list all namespaces:
C:\foo>kubectl get ns
This is how the result can look:
NAME              STATUS   AGE
kube-system       Active   70d
kube-public       Active   70d
kube-node-lease   Active   70d
default           Active   70d
ifs-ingress       Active   70d
appf-ci-dev-r2    Active   70d
You need to know which one is the relevant one for you. Let's pick the last one (appf-ci-dev-r2).
Next, we need to list all pods (containers) in the selected namespace:
C:\foo>kubectl -n appf-ci-dev-r2 get pods
NAME                            READY   STATUS    RESTARTS   AGE
...
ifsapp-odata-76cf4fb745-wk7x5   2/2     Running   0          17h
...
We're looking for the odata container, which is the one listed above.
As the last step we will connect to the container, providing the namespace and the pod/container name:
                  namespace name          pod name
                  |                       |
                  --------------          -----------------------------
C:\foo>kubectl -n appf-ci-dev-r2 exec -it ifsapp-odata-76cf4fb745-wk7x5 -- bash
After doing this you should get a bash prompt looking something like this:
bash-5.1$
===
Using the domain/machine name does work, but that also requires the k8s cluster to have the correct DNS configuration. I know of a customer project where this too was a problem. Using the IP address will work regardless.
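If getent happens to exist in the image (many of the usual tools don't), it can be used to check whether the name resolves from inside the container (the hostname is just an example):

getent hosts fileserver.acme.com

If that prints nothing, the container cannot resolve the name and the IP address is the safer choice.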
I think both types of slashes work, but use "forward" slashes (/). I think the documentation has this information, but what you should enter under the Repository Address screen is something like //server/share
As for the username, we have seen problems when including the domain name, like this:
domain\user
Avoid it if possible and just use the username.
I actually don't know if the username@domain form will work. Try only the username.
The password can be a problem, or at least we have seen that with FTP. It's due to the simplistic encryption algorithm we use when we save it. Some passwords, especially longer ones, do not work because they get "corrupted". As I remember it, long passwords in combination with high-ASCII characters can be a problem. If you can, experiment with a shorter password first, at least to rule out problems.
Yeah, those are the tips I can give you right now. I wish it were easier to troubleshoot this. FTP can be hard to troubleshoot as well, because of how that protocol works with firewalls...
Good luck!
It looks like a firewall issue between the container and its host. If you can dig up any more details about linkerd configuration, I’d be very appreciative.
The odata pod contains multiple containers, and I had to specify the container with a -c argument to get a bash prompt, because the linkerd-proxy container doesn't have bash in its $PATH. (I also tried sh, csh, ksh, fish, and even /bin/bash in case it was a path issue.)
PS C:\Users\Me> kubectl -n mynamespace exec -it ifsapp-odata-0000000000-00000 -- bash
Defaulting container name to linkerd-proxy.
Use 'kubectl describe pod/ifsapp-odata-0000000000-00000 -n mynamespace' to see all of the containers in this pod.
error: Internal error occurred: error executing command in container: failed to exec in container: failed to start exec "acb0bddc50469a9f94050dd70f73d9c9d9bb6f1aab29604cf9f6781fb9c1cb1e": OCI runtime exec failed: exec failed: container_linux.go:380: starting container process caused: exec: "bash": executable file not found in $PATH: unknown

PS C:\Users\Me> kubectl -n mynamespace exec -it ifsapp-odata-0000000000-00000 -c ifsapp-odata -- bash
bash-5.1$
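For anyone else who runs into this, the container names in a pod can also be listed directly, which is quicker than reading the describe output (same placeholder names as above):

kubectl -n mynamespace get pod ifsapp-odata-0000000000-00000 -o jsonpath="{.spec.containers[*].name}"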
The container doesn't have most tools installed, like netcat or telnet, but I did manage to use bash redirection and its built-in /dev/tcp pseudo-device to prove the port isn't answering the same way from the container as it does from the Ubuntu host.
From the container, this returns nothing right away, whereas the "Connection reset by peer" answer is what I get from a machine that I've confirmed can see the port properly.
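For reference, the kind of bash-only check I mean looks roughly like this (the hostname is just an example; timeout may not exist in the image either, in which case the redirection on its own works but can hang for a long time when packets are silently dropped):

# bash's built-in /dev/tcp pseudo-device opens a TCP connection, so no netcat or telnet is needed
timeout 5 bash -c 'echo > /dev/tcp/fileserver.acme.com/445' && echo "445 answered" || echo "445 did not answer"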
I'll see if I can get a linkerd expert to comment here. In the meantime, here is something from my notes from when we did a similar investigation for a customer. The notes are about how to disable or enable linkerd. This might or might not be relevant to your use case. Also, use this at your own risk:
kubectl edit deployment …
then find this section of the file/config that opens:
...
template:
  metadata:
    annotations:
      config.linkerd.io/proxy-cpu-limit: 1000m
      config.linkerd.io/proxy-cpu-request: 20m
      config.linkerd.io/proxy-memory-limit: 128Mi
      config.linkerd.io/proxy-memory-request: 32Mi
      config.linkerd.io/skip-outbound-ports: 20,21,1025-8079,8081-9099,9101-9989,9991-65535
      linkerd.io/inject: enabled        <----------------- should be enableD or disableD
      prometheus.io/port: "8080"
...
My suggestion is that you first try to disable linkerd, just to see if it is the culprit. If it is, the next step could be to modify the allowed outbound port range that you can also see above.
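As an illustration only (same "at your own risk" caveat, and the namespace/deployment names are just examples), adding 445 to the skip list from a bash shell could look something like this; since it changes the pod template, the pods are restarted with the new setting:

kubectl -n appf-ci-dev-r2 patch deployment ifsapp-odata --type merge -p '{"spec":{"template":{"metadata":{"annotations":{"config.linkerd.io/skip-outbound-ports":"20,21,445,1025-8079,8081-9099,9101-9989,9991-65535"}}}}}'

On Windows the quoting is different, so kubectl edit as shown above may be easier there.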
I didn't find this in the IFS technical documentation, but you can find documentation about these settings on linkerd's website: