As yet another approach, you can use FIFO files (named pipes) and explicit job control. They make the script much more verbose, but allow for finer control over the execution. FIFOs have an advantage over regular files: they don't consume disk space, which can matter if you're running in a constrained environment or your script works with large files.
It does imply, though, that your execution environment must support FIFOs and your shell must support basic job control.
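To see the disk-space point concretely, here is a quick sketch you can run by hand (the name demo.fifo is arbitrary, and it assumes a system with seq available). The FIFO stays at zero bytes even while a million lines pass through it, because the data is handed between processes in kernel buffers rather than written to disk:
#!/bin/sh
mkfifo demo.fifo
ls -l demo.fifo            # file type "p", size 0
seq 1000000 > demo.fifo &  # the writer blocks until a reader attaches
wc -l < demo.fifo          # the reader drains all 1000000 lines
ls -l demo.fifo            # still 0 bytes on disk
rm demo.fifo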
Rather than, say:
generate-large-data | sort | uniq
You can use:
#!/bin/sh
# Put the fifo files in a secure, temporary directory.
fifos="$( mktemp -d )"
test $? != 0 && exit 1
# And clean up when complete.
trap 'rm -r "${fifos}"' EXIT
# Create the fifo queues
mkfifo "${fifos}/gen.fifo" || exit 1
mkfifo "${fifos}/sort.fifo" || exit 1
# Pipe the generate-large-data output into the gen fifo file
generate-large-data > "${fifos}/gen.fifo" &
# and capture its process ID.
gen_job=$!
# The sort operation reads from the generated data fifo and outputs to its own.
sort < "${fifos}/gen.fifo" > "${fifos}/sort.fifo" &
# and capture its process ID.
sort_job=$!
# uniq, the final stage, runs in the foreground, not as a background job.
uniq < "${fifos}/sort.fifo"
uniq_code=$?
# Now, the script can capture the exit code for each job in the pipeline.
# "wait" blocks until the given job finishes and returns its exit status.
wait ${gen_job}
gen_code=$?
wait ${sort_job}
sort_code=$?
# And the script can access each exit code.
echo "generate-large-data exit code: ${gen_code}"
echo "sort exit code: ${sort_code}"
echo "uniq exit code: ${uniq_code}"
Unlike the simpler approaches, this gives your script access to the exit code of every stage of the pipeline.
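For example, one way to act on those codes (a sketch appended to the script above, using its gen_code, sort_code, and uniq_code variables) is to propagate the first non-zero status, which a plain pipeline in a POSIX shell cannot do:
# Exit with the first failing stage's status, if any.
for code in "${gen_code}" "${sort_code}" "${uniq_code}"; do
    if test "${code}" != 0; then
        exit "${code}"
    fi
done
exit 0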