This is an unusual problem, and it is probably the consequence of bad design. If somebody can suggest anything better, I'd be happy to hear it. But right now I want to solve it "as is".
There is a bunch of interacting scripts. The details don't matter for the sake of the question, but for completeness: these scripts switch an Oracle database standby node between PHYSICAL STANDBY and SNAPSHOT STANDBY, create a snapshot database and add some grants for our reporting team, releasing obsolete archive logs in the process.
There are:
- `delete_archivelogs.sh`
- `switch_to_physical_standby.sh`, which also calls `delete_archivelogs.sh` at the end
- `switch_to_snapshot_standby.sh`
- `sync_standby.sh`, which calls `switch_to_physical_standby.sh`, waits for the standby to catch up and then calls `switch_to_snapshot_standby.sh`
The last one, `sync_standby.sh`, is typically run from a cron job, but it should also be possible to run each script at will if a DBA decides to do so.
Each script has lock-file-based protection (via `flock`) against running twice. However, it is clear that these scripts need shared, common locking: for instance, it should be impossible to start `switch_to_snapshot_standby.sh` alone while, say, `sync_standby.sh` is running, so that a DBA won't accidentally run one script while another is working.
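For reference, the per-script protection is the usual `flock` boilerplate; here is a minimal sketch (the lock file path is made up for illustration):

```bash
#!/bin/bash
# Minimal sketch of the existing per-script lock (path is hypothetical).
LOCKFILE=/var/lock/switch_to_snapshot_standby.lock

exec 200>"$LOCKFILE"          # keep FD 200 open for the script's lifetime
if ! flock -n 200; then       # non-blocking exclusive lock
    echo "Another instance is already running, exiting." >&2
    exit 1
fi

# ... actual work here ...
# The lock is released automatically when the script exits and FD 200 closes.
```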
Normally I would just configure the same lock file in all scripts. In this case that is not possible, because if `sync_standby.sh` acquires the lock, the scripts it calls won't run.
What is the best way to implement shared locking in this case? Is it feasible to implement a command-line switch that skips the locking code, and use it in calls from the parent script?
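To illustrate that last idea, here is a hypothetical sketch of what such a switch could look like (the `--no-lock` flag name and the shared lock path are my own inventions, not existing code): every script would share one lock file, but the parent could tell its children to skip locking because it already holds the lock.

```bash
#!/bin/bash
# Hypothetical sketch: all four scripts share one lock file, and a
# --no-lock switch lets the parent bypass locking in the children.
LOCKFILE=/var/lock/standby_maintenance.lock   # assumed shared path

if [ "$1" = "--no-lock" ]; then
    shift                     # called from the parent, which already holds the lock
else
    exec 200>"$LOCKFILE"
    if ! flock -n 200; then
        echo "Another standby maintenance script is running." >&2
        exit 1
    fi
fi

# ... script body ...

# sync_standby.sh would then call its children like this:
#   ./switch_to_physical_standby.sh --no-lock
#   ./switch_to_snapshot_standby.sh --no-lock
```

Since the children inherit FD 200 from the parent, the shared lock stays held for the whole run even while they execute.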