Hi all! I’ve moved the Nextcloud and Immich data folders from local storage to an NFS share. Everything works fine except when snapraid runs on the NFS server. Structure:
- Proxmox host that also acts as an NFS server
- The NFS-exported folders sit in a MergerFS pool (over BTRFS drives) that is part of a snapraid array (2 data drives and 1 parity drive)
- A Proxmox VM with all the Docker containers
- Inside the VM I’ve mounted the NFS shares that are always available (even after the backup)
In the containers the folders are bind mounted like this: /mnt/nfs/nextcloud/data:/var/www/html/data
After the backup the NFS shares are still available in the VM, but if I enter the container I get:
root@nextcloud-app:/var/www/html# ls -latr data
ls: cannot access 'data': Stale file handle
The NFS shares are mounted like this in /etc/fstab:
172.5.0.5:/mnt/pool/@nextcloud /mnt/nfs/nextcloud nfs vers=4,rw,hard,intr,timeo=600,retrans=10,_netdev,nofail,x-systemd.requires=network-online.target 0 0
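On the server side, the exports look roughly like this (a sketch from memory — the exact fsid values and client subnet may differ):

```shell
# /etc/exports on the Proxmox host, exporting subfolders of the mergerfs pool.
# fsid pins a stable filesystem id for each export (required for FUSE-backed
# filesystems like mergerfs, since they have no stable device UUID).
/mnt/pool/@nextcloud  172.5.0.0/24(rw,no_subtree_check,fsid=10)
/mnt/pool/@immich     172.5.0.0/24(rw,no_subtree_check,fsid=11)
```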
How can I solve this problem?


I have found Docker hates bind mounts that point at locally mounted NFS shares. It’s kind of like symlinking a symlink of a symlink.
The best way I have found to make this work is to use CIFS: declare the NAS share as an actual Docker volume, then mount that inside the container like any other volume.
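As a sketch of what I mean (the share name, credentials, and uid/gid here are placeholders — adjust to your NAS; uid 33 is www-data in the Nextcloud image):

```shell
# Create a named Docker volume backed by a CIFS/SMB share on the NAS.
# The daemon mounts the share itself when a container using the volume starts,
# instead of relying on a long-lived mount inside the VM.
docker volume create \
  --driver local \
  --opt type=cifs \
  --opt device=//172.5.0.5/nextcloud \
  --opt o=username=svc_nextcloud,password=changeme,vers=3.0,uid=33,gid=33 \
  nextcloud_data

# Then mount it in the container like any other named volume:
docker run -d --name nextcloud-app \
  -v nextcloud_data:/var/www/html/data \
  nextcloud
```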
Personally I would have preferred NFS, but here we are.
It has something to do with how NFS maintains its connection. It’s not “always on”, and it’s inefficient when it quickly needs to reconnect for a read/write request. Or at least something like that.
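For what it’s worth, if you want to retry NFS, Docker’s local volume driver can also mount NFS directly, so the share again follows the container lifecycle instead of a long-lived VM mount. I never got this working reliably myself, but the syntax (paths taken from your post, so treat it as a sketch) is:

```shell
# Named volume backed by NFS, mounted by the Docker daemon itself
docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=172.5.0.5,rw,nfsvers=4 \
  --opt device=:/mnt/pool/@nextcloud \
  nextcloud_data

docker run -d --name nextcloud-app \
  -v nextcloud_data:/var/www/html/data \
  nextcloud
```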