
This is an unusual problem and probably a consequence of bad design. If somebody can suggest anything better, I'd be happy to hear it. But right now I want to solve it "as is".

There is a bunch of interacting scripts. It doesn't matter for the sake of the question, but for completeness: these scripts switch an Oracle database standby node between PHYSICAL STANDBY and SNAPSHOT STANDBY, create a snapshot database and add some grants for our reporting team, releasing obsolete archive logs in the process.

There are:

  • delete_archivelogs.sh
  • switch_to_physical_standby.sh, which also calls delete_archivelogs.sh at the end
  • switch_to_snapshot_standby.sh
  • sync_standby.sh, which calls switch_to_physical_standby.sh, waits for standby to catch up and then calls switch_to_snapshot_standby.sh

The last one, sync_standby.sh, is typically run from a cron job, but it should also be possible to run each script at will if a DBA decides to do so.

Each script has lock-file-based protection (via flock) against running twice. However, it is clear that these scripts need shared common locking: for instance, it should be impossible to start switch_to_snapshot_standby.sh (alone) while, say, sync_standby.sh is running, so a DBA won't accidentally run one script while another is working.

Normally I would just configure the same lock file in all scripts. In this case that is not possible, because if sync_standby.sh acquires the lock, the scripts it calls won't run.
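
For reference, the per-script protection currently sits at the top of each script and looks roughly like this (a sketch; the lock-file path and FD number are illustrative):

exec 9>/var/lock/sync_standby.lock    # one lock file per script today
flock -n 9 || { echo "already running" >&2; exit 1; }    # give up if another copy holds the lock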

What is the best way to have shared locking in this case? Is it feasible to implement a "command line" switch that skips the locking code, and use it in calls from the parent script?

  • It likely is possible through some complicated mechanism, but at that point wouldn't it be better to write an admin tool in a "proper" language with locking and threading rather than rely on bash scripts? Commented Aug 1, 2022 at 12:45
  • This probably needs a state-change diagram, and some means to make it more difficult for users to, for example, up-arrow in shell history and blindly run the wrong thing. Commented Aug 1, 2022 at 13:17

2 Answers


If none of your scripts requires or expects command-line options, you could use a command-line option to indicate whether a "subordinate" script is to use the flock mechanism or not.

So, in your example, the sync_standby.sh script would call switch_to_snapshot_standby.sh with an argument (e.g. subordinate), such as:

./switch_to_snapshot_standby.sh subordinate

The switch_to_snapshot_standby.sh script would then check if it was called with the argument, and bypass the flock code in that case:

if [[ "$1" != "subordinate" ]]
then
    # not called as a subordinate: take the lock as usual
    # (the lock-file path is a placeholder; use your existing one)
    exec 9>/var/lock/standby.lock
    flock -n 9 || exit 1
fi

If your scripts do use command-line arguments, you could still use this mechanism, but you would clearly need to sort the arguments out to see whether the subordinate argument was provided.
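
Putting the pieces together, the parent side might look roughly like this (a sketch; the lock-file path is illustrative, the flag name is the one above):

#!/bin/bash
# sync_standby.sh (sketch)
if [[ "$1" != "subordinate" ]]; then
    exec 9>/var/lock/standby.lock    # one shared lock file for all the scripts
    flock -n 9 || { echo "another standby script is running" >&2; exit 1; }
fi
./switch_to_physical_standby.sh subordinate    # children inherit FD 9, so the lock stays held
# ... wait for the standby to catch up ...
./switch_to_snapshot_standby.sh subordinate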

  • Thanks. I think I will choose this approach because my scripts do not use any command-line arguments and it is unlikely they ever will, so I am free to use them for any purpose. Commented Aug 2, 2022 at 5:50

Each script should check the existence (in practice: emptiness vs. non-emptiness) of a dedicated environment variable. Pick an unused name for the variable. If the variable exists, then the script should assume it already holds the lock (inherited from a parent).

If the variable doesn't exist, the script should try to obtain the lock. If it succeeds, it should ensure other scripts will get the variable in their environment.


Proof of concept

scrpt:

#!/bin/bash
(
  if [ -z "$HAVE_LOCK" ]; then
    # no inherited lock: block until we get an exclusive lock on FD 9
    flock 9 || exit 1
    # children will inherit the variable and skip this block
    export HAVE_LOCK=1
  fi

  date
  sleep 1
  # with high probability, call ourselves recursively as a child process
  [ "$RANDOM" -gt 3000 ] && ./scrpt
  echo done
) 9>/tmp/my-lock

Run ./scrpt in two consoles almost simultaneously. One of them will obtain the lock. There is a good chance it will call itself again and again, but none of the descendants will block. Eventually it will stop and the other instance will proceed.

Notes:

  • bash (not sh) only because I needed $RANDOM.

  • The echo done coming from each instance ensures each ./scrpt is a separate process: it prevents Bash from implicitly exec-ing ./scrpt as the last command. I'm not saying Bash would; but I think technically it could, and then the example would not be convincing.

  • I chose to export the variable right away, so I can forget about it. HAVE_LOCK=1 ./scrpt is a method to run a child ./scrpt with HAVE_LOCK in its environment, with no need for prior export.

  • The method is similar to the one in this other answer, but IMO it has an advantage: regardless of what variables your scripts already use and how, checking an extra variable does not interfere; it's straightforward, and the relevant code can be totally independent. Adding support for an additional command-line option may not be as easy (depending on whether and how your scripts already use command-line options).
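
Applied to the scripts from the question, the same pattern might look roughly like this in sync_standby.sh (a sketch reusing HAVE_LOCK and the lock file from the PoC above; each called script would carry the same wrapper):

#!/bin/bash
(
  if [ -z "$HAVE_LOCK" ]; then
    flock 9 || exit 1
    export HAVE_LOCK=1
  fi
  ./switch_to_physical_standby.sh    # inherits HAVE_LOCK, so it skips its own flock
  # ... wait for the standby to catch up ...
  ./switch_to_snapshot_standby.sh
) 9>/tmp/my-lock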
