This happens because the volume is using private mount propagation: once the mount happens, any changes that occur on the origin side (e.g. the "host" side in the case of Docker) will not be visible underneath the mount.
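To see the effect, here is a quick sketch; the paths and the server name nfs-server are hypothetical:

$ docker run -d --name test -v /mnt:/mnt alpine sleep 1d
$ mkdir -p /mnt/nfs
$ mount -t nfs nfs-server:/export /mnt/nfs   # on the host, after the container started
$ docker exec test ls /mnt/nfs               # empty: the new mount did not propagate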
There are a couple of ways to handle this:
Do the NFS mount first, then start the container. The mount will propagate to the container; however, as before, any later changes to the mount (including unmounts) will not be seen by the container.
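For example, with a hypothetical export nfs-server:/export and mount point /mnt/nfs:

# on the host, before starting the container
$ mkdir -p /mnt/nfs
$ mount -t nfs nfs-server:/export /mnt/nfs
# the pre-existing mount is visible inside the container
$ docker run --rm -v /mnt/nfs:/data alpine ls /data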
Use "slave" propagation. This means that once the mount is created, any changes on the origin side (docker host) will be able to be seen in the target (in the container). If you happen to be doing nested mounts, you'll want to use rslave
(r
for recursive).
There is also "shared" propagation. This mode would make changes to the mountpoint from inside the container propagate to the host, as well as the other way around. Since your user wouldn't even have privileges to make such changes (unless you add CAP_SYS_ADMIN), this is probably not what you want.
You can set the propagation mode when creating the mount like so:
$ docker run -v /foo:/bar:private
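Similarly, for the "slave" and "shared" modes described above (note that "shared" also requires the host-side path to itself be on a shared mount):

$ docker run -v /foo:/bar:slave alpine sh
$ docker run -v /foo:/bar:rslave alpine sh
$ docker run -v /foo:/bar:shared alpine sh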
The other alternative would be to use a volume rather than a host mount.
You can do this like so:
$ docker volume create \
    --name mynfs \
    --opt type=nfs \
    --opt device=:<nfs export path> \
    --opt o=addr=<nfs host>
$ docker run -it -v mynfs:/foo alpine sh
This makes sure the NFS share is always mounted in the container for you; it doesn't rely on the host being set up in any particular way, or on dealing with mount propagation.
note: the ":" at the front of the device path is required; it's just a quirk of the nfs kernel module.
note: Docker does not currently resolve <nfs host> from a DNS name (it will in 1.13), so you will need to supply the IP address here.
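Putting both notes together, a filled-in version of the command above might look like this (the export path /var/nfs and the address 10.0.0.10 are made-up examples):

$ docker volume create \
    --name mynfs \
    --opt type=nfs \
    --opt device=:/var/nfs \
    --opt o=addr=10.0.0.10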
More details on "shared subtree" mounts: https://www.kernel.org/doc/Documentation/filesystems/sharedsubtree.txt