I have several Linux VMs on VMware + SAN.
What happened
A problem occurred on the SAN (a failed path), so for some time there were I/O errors on the Linux VMs' drives. By the time the path failover completed, it was too late: every Linux machine no longer considered most of its drives trustworthy and set them to read-only. The drives holding the root filesystems were affected as well.
What I tried
- `mount -o rw,remount /` (without success)
- `echo running > /sys/block/sda/device/state` (without success)
- dug into `/sys` looking for a solution (without success; see the sketch below)
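For reference, a minimal sketch of what I ran, assuming the affected disk is `/dev/sda` and the read-only filesystem is the root one (adjust both to your layout):

```
# Try to remount the root filesystem read-write (failed)
mount -o rw,remount /

# Tell the SCSI layer the device is healthy again (failed)
echo running > /sys/block/sda/device/state
```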
What I may not have tried
blockdev --setrw /dev/sda
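If I hit this again, a possible sequence to try would be the following; this is only a sketch, assuming the SAN path is healthy again and the disk is still `/dev/sda`:

```
# Clear the read-only flag on the block device
blockdev --setrw /dev/sda

# Then try remounting the filesystem read-write
mount -o rw,remount /
```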
Finally...
I had to reboot all my Linux VMs. The Windows VMs were fine...
Some more info from VMware...
The problem is described here. VMware suggests increasing the Linux SCSI timeout to keep this problem from happening.
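For completeness, the timeout can be raised per disk through sysfs. A sketch, assuming the disk is `/dev/sda`; the 180-second value is an assumption, check the VMware article for the recommended figure:

```
# Raise the SCSI command timeout for sda (value in seconds)
echo 180 > /sys/block/sda/device/timeout

# Verify the new value
cat /sys/block/sda/device/timeout
```

The setting does not survive a reboot, so it has to be reapplied at each boot (for example from a udev rule or an init script).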
The question!
However, when the problem does eventually happen, is there a way to get the drives back into read-write mode once the SAN is back to normal?
On a real (physical) system, `mount -o remount /mountpoint` would do the trick. Perhaps that would work inside a VM too.