I am using the following version of bash:

GNU bash, version 5.1.16(1)-release (x86_64-pc-linux-gnu)

When I start a command (e.g. ./hiprogram) directly from the terminal, bash forks, execs the command, and places the new process in a new process group that becomes the terminal's foreground group, while bash's own group moves to the background.

However, when I create a script that runs ./hiprogram, the bash instance executing the script forks/execs in the same way, but now it stays in the same process group as the command and thus remains in the foreground group. Why?

The only reason I can see for this is that the bash instance executing the script must be able to receive signals intended for the foreground group, such as CTRL+C, and react appropriately (for CTRL+C, that would mean stopping further execution of the script). Is that the only reason? AIs say that the bash executing the script also remains in the foreground group to manage job control, but that explanation doesn't quite make sense to me – after all, job control works just fine in the first case, when commands are executed directly from the terminal and bash is not part of the foreground group.
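The grouping in the script case can be observed directly from inside the script. A minimal sketch (the file name `showpgid.sh` is arbitrary):

```shell
#!/bin/bash
# showpgid.sh – run as a script, e.g.:  bash showpgid.sh
# The bash executing this script ($$) and the command it launches ($!)
# report the same PGID, i.e. they are in the same process group.
sleep 2 &
ps -o pid=,pgid=,comm= -p "$$,$!"
```

Run the equivalent check interactively and the launched command shows up in a process group of its own, separate from the interactive shell's.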

1 Answer

The Bash process running the interactive shell is the job controller, whereas the Bash process running a script is merely part of the same job as the other programs run by that script. Being non-interactive, it doesn't perform any job control – the script itself is the job after all – and for that matter I don't think it even has the ability to do so, due to not being the session leader (the parent interactive shell is that).
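Whether a given shell performs job control can be checked directly: job control corresponds to monitor mode, the `m` flag in `$-`, which is on in an interactive shell and off in a script. A small sketch:

```shell
#!/bin/bash
# Monitor mode ("m" in $-) is the flag behind job control.
case $- in
  *m*) echo "job control (monitor mode) is ON" ;;   # interactive shell
  *)   echo "job control (monitor mode) is OFF" ;;  # typical script
esac
```

A script can opt in explicitly with `set -m`, after which background commands do get process groups of their own.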

  • A shell doesn't need to be a session leader to perform job control. Look at this answer by Stéphane Chazelas. Quote: "When you start xterm, xterm runs your shell (by default) in the new process that it starts in a new session and that will control the pseudo-terminal slave device. Then that shell will be the session leader. But if you start another interactive shell from that shell, it will not be session leader but will take over job control." Also, a shell can do job control, since you can start commands asynchronously in a script via &. Commented Feb 19 at 12:14
  • Thanks for the correction. But starting commands with & does not imply job control; it's achieved with the same basic fork/exec/waitpid. Commented Feb 19 at 12:19
  • After trying some examples, it looks like you are correct about job control within a script: you can't do it (the fg command doesn't work; error: "fg: no job control"). However, then I don't get what the difference is between starting a command with & and without it within a script. Commented Feb 19 at 12:25
  • Without & the interpreter forks, execs the program, and the parent process waits for the child to exit before proceeding with the next script statement. With & the interpreter forks, execs the program, and the parent process doesn't wait for the child – that's the only difference. Instead it stores the PID in $! and immediately carries on with the next statement. (Though it can be asked to wait explicitly, using the wait builtin.) Commented Feb 19 at 12:38
  • Yes. They're intertwined, of course (when job control is available, each &-command becomes tracked as a job, etc.), but I'm pretty sure & predates the existence of tty job control in Unix shells – after all, it doesn't rely on any feature other than the creation of processes. (Which means it can also work even when there is no controlling tty at all.) Commented Feb 19 at 12:51
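The difference described in the comments above can be sketched as:

```shell
#!/bin/bash
# Without &: the shell forks, execs, and waits for the child to exit
# before moving to the next statement.
date > /dev/null

# With &: the shell forks and execs but does not wait;
# it stores the child's PID in $! and carries on immediately.
sleep 1 &
child=$!
echo "continuing while $child runs"

# The shell can still be asked to wait explicitly, via the wait builtin.
wait "$child"
echo "child exited with status $?"
```

No job control is involved in either path; `wait` here is just the plain waitpid-style synchronization mentioned in the comments.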
