I have two local dirs on my server mounted on a remote server. The local side maintains the connection with:
autossh -M 23 -R 24:localhost:22 user@server
while the remote side mounts with dirs-sshfs.sh:
sshfs -o reconnect,ServerAliveInterval=15,ServerAliveCountMax=3,cache_timeout=3600 [email protected]:/mnt/localdir1/ /home/user/dir1/ -p 24
sshfs -o reconnect,ServerAliveInterval=15,ServerAliveCountMax=3,cache_timeout=3600 [email protected]:/mnt/localdir2/ /home/user/dir2/ -p 24
When it locks up, sshfs-restart.sh is used to reset the connection:
pkill -kill -f "sshfs"
fusermount -uz dir1
fusermount -uz dir2
./dirs-sshfs.sh
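The unmount step above could be guarded so it only acts on dirs that are actually mounted, which avoids errors on a half-dead state. This is just a sketch; restart_mount is a made-up helper name, and mountpoint comes from util-linux:

```shell
# Sketch of a guarded lazy unmount (assumes the same dir layout as above).
# restart_mount is a hypothetical helper, not part of the original scripts.
restart_mount() {
    dir=$1
    # Only lazy-unmount if the path is actually a mountpoint.
    if mountpoint -q "$dir"; then
        fusermount -uz "$dir"
    fi
}

restart_mount /home/user/dir1
restart_mount /home/user/dir2
```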
That all works fine, but I have to 1) notice that it locked up and 2) manually reset it.
What makes this tricky is that when it locks up, even an ls of the home dir hangs forever until the reset. Because of this, I have given up on managing it from the remote server side. On my local server side, where the autossh connection is maintained, I have the following script that almost works: it does catch the failure and tries to reset the connection, but it will not remount after unmounting. I tried putting the dirs-sshfs.sh content inside sshfs-restart.sh, and even a run-sshfs-restart.sh on the remote side that contained ./sshfs-restart.sh &, but did not get it to work. The test script (testsshfs2.sh) contains ls; echo; ls dir1; echo; ls dir2; echo; date; echo, created as a quick and easy way to check whether everything is mounted and working.
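The timeout trick can also be applied per directory, so a single hung mount is detectable on its own. A minimal sketch, assuming the mounts live under the current dir as in testsshfs2.sh; check_mount is a made-up name:

```shell
# Hedged sketch: probe one directory with a timeout so a hung sshfs
# mount cannot block the whole check. check_mount is a hypothetical
# helper, not one of the scripts described above.
check_mount() {
    # timeout kills the ls if the mount hangs (exit status 124).
    timeout 10 ls "$1" > /dev/null 2>&1
}

for d in dir1 dir2; do
    if check_mount "$d"; then
        echo "$d responsive"
    else
        echo "$d hung or missing"
    fi
done
```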
This is sshfsmanager.sh, run from the local server inside a while loop with a sleep command. In the future it will probably move to a cronjob:
sshout=$(timeout 300 ssh user@host ./testsshfs2.sh)
# The timeout duration probably doesn't need to be that long, but it sometimes
# takes 90+ seconds to ls the remote dirs, as they are network shares mounted
# to a VM that are then mounted across an ssh tunnel. Once this is working,
# that duration would be fine-tuned. Also, 100 is just an arbitrary number
# larger than `ls | wc -l` would produce; it should be updated so it catches
# when either mount fails instead of only when both do. This timeout method
# is the only way I've found to get around the infinite ls lock.
if [ "$(echo "$sshout" | wc -l)" -le 100 ]; then
    echo "$sshout" | wc -l
    ssh user@host ./sshfs-restart.sh
    # adding a "&& sleep 60" or using the run-sshfs-restart.sh script did not work
    echo sshfs restarted
else
    echo sshfs is fine
    echo "$sshout" | wc -l
fi
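One way to catch a single failed mount (which the arbitrary 100-line threshold misses) would be to have testsshfs2.sh print a sentinel line per directory and grep for each. This is a sketch under that assumption; DIR1_OK/DIR2_OK and mounts_ok are invented names:

```shell
# Assumes testsshfs2.sh is amended to print "DIR1_OK" after `ls dir1`
# succeeds and "DIR2_OK" after `ls dir2` succeeds.
mounts_ok() {
    # $1 = captured remote output; succeed only if every sentinel appears
    printf '%s\n' "$1" | grep -q DIR1_OK &&
    printf '%s\n' "$1" | grep -q DIR2_OK
}

sample=$'DIR1_OK\nDIR2_OK'
if mounts_ok "$sample"; then
    echo "both mounts healthy"
fi
```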
Most of the scripts have logging that was removed when posting here (as well as changing ports and removing user/host). Almost all the logging lines are just date >> sshfsmanager.log.
The local VM is running Ubuntu 18.04.5 and the remote server is a shared VPS running Gentoo (kernel 5.10.27).
ssh user@host ./sshfs-restart.sh does not remount from within the script, but ./sshfs-restart.sh does work by hand over ssh.
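One possible cause worth ruling out (an assumption on my part, not something confirmed above): when sshfs is started through a non-interactive ssh command, it can stay tied to that session's stdio and get killed when the ssh command returns, whereas an interactive shell leaves it running. Fully detaching the restart from the session is one thing to try; run_detached is a made-up helper shown with a local stand-in command:

```shell
# run_detached is a hypothetical helper: start a command detached from
# the current session's stdio so it survives the ssh channel closing.
run_detached() {
    nohup "$@" > /dev/null 2>&1 < /dev/null &
    echo $!   # PID of the detached process
}

# Intended remote invocation (user/host are placeholders):
#   ssh user@host 'nohup ./sshfs-restart.sh > /dev/null 2>&1 < /dev/null &'
run_detached sleep 1 > /dev/null   # local stand-in so the sketch runs
```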