I have a list of bash commands that I'd like to execute one at a time. They could be in a file, or the terminal, or wherever is necessary. I'd like to execute the first one, check the results or what have you, and then execute the next one, etc. I don't want a timed pause between each. Better still would be if I could select the next command to execute. Can this be done?

I know copy/paste would work, and I'll do that if I have to, but I'd like something more efficient if possible.

  • could you tell us what the most complex bash script that you want to run is? Is it really just one command after the other, as your choice of words "a list of bash commands" suggests? or might it involve calling bash functions (and would you want that function to be one "command" or "step" through the function's contents as well? how deeply?) or even loops like for f in *.mp3; do id3tag "$f"; done? Commented Oct 11 at 22:59
  • @MarcusMüller Yes, they're simple commands. Almost all will be "mv file1 file2" commands. Commented Oct 11 at 23:02
  • To get to grips with things like the above, and what other options you have in bash, check the Bash guides at www.tldp.org - just browsing through them will teach you more than you currently know. Commented Oct 12 at 9:03
  • 1
    Consider how you would proceed if you actually do find an error at some stage. You will want to fix the issue, and then restart the list of commands partway through. I would probably list the commands in a file, and then have a control script that works through that file from the top, showing every command before it is executed, and offering options to quit, skip, or run one command. You could also add to the list an echo noting the checks you need to do at each point. It is way too easy to lose the thread during this kind of activity unless you have reminders. Commented Oct 12 at 9:49
  • 2
    @Hannu no criticism, just an honest question: are the guides at tldp the thing you would have wanted to read as a beginner? I didn't learn about them before long after I've had amassed experience with shells, and quite honestly, they are didactically not what I would have needed when I was new to bash. But I'm just me, not everyone. Commented Oct 12 at 10:31

3 Answers

3

Well, you could always manually read the commands from a file, and run them after getting confirmation. If you want to support the full shell syntax, e.g. pipelines, you'll need to use eval. Assuming each command is on a single line, you could use something like this script:

#!/bin/bash

while IFS= read -r line; do
        # skip empty lines
        if [[ "$line" =~ ^[[:space:]]*$ ]]; then
                continue
        fi
        printf "Next command:\n"
        printf "  %s\n" "$line"
        printf "Hit enter to run or ^C to break"
        read x < /dev/tty
        echo -------------------------------------------------
        eval "$line"
        ret=$?
        echo -------------------------------------------------
        printf "Previous command exited with status %d\n" "$ret"
        echo
done

Call that single-step.sh, and run it as ./single-step.sh < file-with-commands.txt
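
If you also want the skip/quit options suggested in a comment on the question, a minimal variant of the same idea (an untested sketch, same usage) could look like this:

#!/bin/bash

while IFS= read -r line; do
        # skip empty lines
        [[ "$line" =~ ^[[:space:]]*$ ]] && continue
        printf 'Next command:\n  %s\n' "$line"
        # stdin carries the command list, so prompt on the terminal instead
        read -r -p '[r]un, [s]kip, [q]uit? ' answer < /dev/tty
        case "$answer" in
                q*) break ;;
                s*) continue ;;
                *)  eval "$line"
                    printf 'Exited with status %d\n\n' "$?" ;;
        esac
done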

  • I tried this and it works. Thank you for taking the time to write that. I'm only going to make some small changes to it, but it largely does exactly what I want. Commented Oct 16 at 14:10
0

Usually, when you have a list of commands, the standard approach is to create a shell script, where you can handle errors more thoroughly.

But just for fun, this is what I would do:

file:

uptime
pwd
whoami

command:

set -eo pipefail
xargs < file -n1 | sh

The set -eo pipefail part takes care of exiting on failed commands. It's better to put this in a script, to avoid quitting your interactive shell in case of error.
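
A minimal sketch of such a script (run-list.sh is a hypothetical name):

#!/bin/bash
# a failing command now exits this script, not your interactive shell
set -eo pipefail
xargs < file -n1 | sh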

  • would that not break on any loop? (I just realized the user said "list of commands", so this might be totally OK) Commented Oct 11 at 22:57
  • @GillesQuénot I tried the above, but it executes all three commands without stopping in between. Commented Oct 11 at 23:17
  • @Mike Could something like { readarray -t arr <file; for cmd in "${arr[@]}"; do read -ei " yes" -p"Run: $cmd?" ans; [ "$ans" = "yes" ] && $cmd; done } be what you want? (A formatted version follows after these comments.) Commented Oct 12 at 9:36
  • 1
    @mike The pipefail only operates when a command fails (in the sense of returning a non-zero status). If you want to inspect each result by viewing stuff yourself, you need manual control of the list. You might add the checks in your list, but that would need full testing, so that depends on how often you might want repeat this process. Commented Oct 12 at 9:57
  • xargs -n1 < file will print each "word" in the file on a separate line (according to the quoting rules xargs follows), and that -o pipefail would stop the main shell running the xargs | sh pipeline if xargs exits with a failure. Interesting commands are seldom single words, and that pipefail won't do anything about some command within that inner sh failing. Commented Oct 13 at 11:36
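
Formatted for readability, the readarray variant from the comments looks like this (a sketch; note it expands $cmd without eval, so pipes, redirections and quoting inside a command won't work; the leading space in -i " yes" looked like a typo and is dropped here):

readarray -t arr < file
for cmd in "${arr[@]}"; do
        # pre-fill the answer with "yes" so plain Enter accepts
        read -e -i "yes" -p "Run: $cmd? " ans
        [ "$ans" = "yes" ] && $cmd
done
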
0

"Stepping through my script, inspecting the system after every line": that's the classic job of a debugger!

How to debug a bash script? has quite a few options (I've used none of them). If you prefer the command line, this answer mentions and illustrates bashdb; sadly, the reference documentation seems to be offline, but the man page is still available. The bad news is that it's no longer available for Debian, Fedora, or SUSE, and was dropped from Ubuntu nearly a decade ago, which is often an indication of packaging trouble. It might have become much better since!
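
If you do get bashdb installed (e.g. built from source), a session starts roughly like this; its commands mimic gdb's (next, step, print):

bashdb ./originalscript.sh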


Assuming this holds true:

could you tell us what the most complex bash script that you want to run is? Is it really just one command after the other, as your choice of words "a list of bash commands" suggests? or might it involve calling bash functions (and would you want that function to be one "command" or "step" through the function's contents as well? how deeply?) or even loops like for f in *.mp3; do id3tag "$f"; done?

@MarcusMüller Yes, they're simple commands. Almost all will be "mv file1 file2" commands.

Alternatively, you could just start an interactive shell after every line. A line gets executed, you do whatever you want to do (and quit the "inspection" shell through ctrl+d or exit), and the next line gets executed.

You can do something like (assuming GNU sed, which is more likely than BSD sed for someone who asks about GNU bash):

sed \
 's/.*/echo "Next line is:"\ncat << END_ORIGINAL\n&\nEND_ORIGINAL\nbash -i;&;echo "Last line was:"\ncat << END_ORIGINAL\n&\nEND_ORIGINAL\n\n/' \
  originalscript.sh > steppablescript.sh

which is a way to take each original line (matching the whole line with .*), surround it in a HEREDOC with

<< delimiter_string
&
delimiter_string

outputting that through cat (which is a horrible, but effective, way to avoid having to figure out how to correctly quote the line). (& in sed means "replace this with the whole match", i.e. here, the original line.)

Then it runs an interactive bash (bash -i); then the actual line (& again!); then it outputs the "last line" through the same trick as before.

You can then run steppablescript.sh as you would have run originalscript.sh.
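
For illustration, an input line like mv file1 file2 (the kind of command the asker mentioned) comes out of the sed as this block in steppablescript.sh:

echo "Next line is:"
cat << END_ORIGINAL
mv file1 file2
END_ORIGINAL
bash -i;mv file1 file2;echo "Last line was:"
cat << END_ORIGINAL
mv file1 file2
END_ORIGINAL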

  • The sed solution assumes that each command is just a single line. And even if you put an entire if or while on a single line, it won't stop after each nested command. Commented Oct 13 at 16:14
  • @Barmar fair, but I verified first whether that's the case. You're right, I need to mention that in the answer! Commented Oct 13 at 16:17
