275

Let's say I have a script like the following:

useless.sh

echo "This Is Error" 1>&2
echo "This Is Output" 

And I have another shell script:

alsoUseless.sh

./useless.sh | sed 's/Output/Useless/'

I want to capture "This Is Error", or any other stderr from useless.sh, into a variable. Let's call it ERROR.

Notice that I am using stdout for something. I want to continue using stdout, so redirecting stderr into stdout is not helpful, in this case.

So, basically, I want to do

./useless.sh 2> $ERROR | ...

but that obviously doesn't work.

I also know that I could do

./useless.sh 2> /tmp/Error
ERROR=`cat /tmp/Error`

but that's ugly and unnecessary.

Unfortunately, if no answers turn up here that's what I'm going to have to do.

I'm hoping there's another way.

Anyone have any better ideas?


22 Answers

142

It would be neater to capture the error file thus:

ERROR=$(</tmp/Error)

The shell recognizes this and doesn't have to run 'cat' to get the data.
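A quick sketch of the difference (bash/ksh/zsh only; the file path is illustrative):

```shell
#!/bin/bash
# $(<file) reads the file in the shell itself; no external `cat` is forked.
printf 'This Is Error\n' > /tmp/Error
ERROR=$(</tmp/Error)        # same result as ERROR=$(cat /tmp/Error)
echo "$ERROR"               # -> This Is Error
rm /tmp/Error
```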

The bigger question is hard. I don't think there's an easy way to do it. You'd have to build the entire pipeline into the sub-shell, eventually sending its final standard output to a file, so that you can redirect the errors to standard output.

ERROR=$( { ./useless.sh | sed s/Output/Useless/ > outfile; } 2>&1 )

Note that the semi-colon is needed (in classic shells - Bourne, Korn - for sure; probably in Bash too). The '{}' does I/O redirection over the enclosed commands. As written, it would capture errors from sed too.

WARNING: Formally untested code - use at own risk.
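For a concrete run, here is a sketch of the same idea with an inline function standing in for ./useless.sh (the function and the /tmp/outfile path are stand-ins, not part of the original answer):

```shell
#!/bin/bash
# Inline stand-in for ./useless.sh: writes to both streams.
useless() { echo "This Is Error" 1>&2; echo "This Is Output"; }

# stderr of the whole group is captured; sed's result lands in the file.
ERROR=$( { useless | sed s/Output/Useless/ > /tmp/outfile; } 2>&1 )

echo "captured: $ERROR"   # -> captured: This Is Error
cat /tmp/outfile          # -> This Is Useless
rm /tmp/outfile
```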


4 Comments

I had hoped that there'd be some really crazy trick I didn't know, but it looks like this is it. Thanks.
If you don't need the standard output, you can redirect it to /dev/null instead of outfile (If you're like me, you found this question via Google, and don't have the same requirements as the OP)
For an answer without temporary files, see here.
Here is a way to do it without redirecting to files; it plays with swapping stdout and stderr back and forth. But beware, as is said there: "In bash, it would be better not to assume that file descriptor 3 is unused."
113

Redirect stderr to stdout, stdout to /dev/null, and then use backticks or $() to capture the redirected stderr:

ERROR=$(./useless.sh 2>&1 >/dev/null)

3 Comments

This is the reason I included the pipe in my example. I still want the standard output, and I want it to do other things, go other places.
For commands that send output only to stderr, the simple way to capture it is, for example PY_VERSION="$(python --version 2>&1)"
is /dev/null necessary?
88

alsoUseless.sh

This will allow you to pipe the output of your useless.sh script through a command such as sed and save the stderr in a variable named error. The result of the pipe is sent to stdout for display or to be piped into another command.

It sets up a couple of extra file descriptors to manage the redirections needed in order to do this.

#!/bin/bash

exec 3>&1 4>&2 #set up extra file descriptors

error=$( { ./useless.sh | sed 's/Output/Useless/' 2>&4 1>&3; } 2>&1 )

echo "The message is \"${error}.\""

exec 3>&- 4>&- # release the extra file descriptors

7 Comments

It is a good technique to use 'exec' to set and close file descriptors. The close isn't really needed if the script exits immediately afterwards.
How would I capture both stderr and stdout in variables?
Excellent. This helps me implement a dry_run function that can reliably choose between echoing its arguments and running them, regardless of whether the command being dry-ran is being piped to some other file.
@t00bs: read doesn't accept input from a pipe. You can use other techniques to achieve what you're trying to demonstrate.
Could be simpler, with: error=$( ./useless.sh | sed 's/Output/Useless/' 2>&1 1>&3 )
34

There are a lot of duplicates for this question, many of which have a slightly simpler usage scenario where you don't want to capture stderr and stdout and the exit code all at the same time.

if result=$(useless.sh 2>&1); then
    stdout=$result
else
    rc=$?
    stderr=$result
fi

works for the common scenario where you expect either proper output in the case of success, or a diagnostic message on stderr in the case of failure.
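A concrete run of the pattern, with a hypothetical `fail` function standing in for useless.sh in the failure case:

```shell
#!/bin/bash
# Stand-in command: writes a diagnostic to stderr and fails with status 3.
fail() { echo "boom" >&2; return 3; }

if result=$(fail 2>&1); then
    echo "ok: $result"
else
    rc=$?                                  # exit status of the failed command
    echo "failed rc=$rc stderr=$result"    # -> failed rc=3 stderr=boom
fi
```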

Note that the shell's control statements already examine $? under the hood; so anything which looks like

cmd
if [ $? -eq 0 ]; then ...

is just a clumsy, unidiomatic way of saying

if cmd; then ...
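A minimal sketch of the idiomatic form (`check` is a hypothetical helper, not from the original answer):

```shell
#!/bin/bash
# check succeeds (exit 0) when its argument is a positive number.
check() { [ "$1" -gt 0 ]; }

# Branch directly on the command's exit status; no explicit $? test needed.
if check 5; then
    echo "positive"       # -> positive
else
    echo "non-positive"
fi
```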

2 Comments

It won't work if in both cases the useless.sh generates stdout.
True; then it's really literally doubly useless.
26

For the benefit of the reader, here is a recipe that:

  • can be re-used as a one-liner to catch stderr into a variable
  • still gives access to the return code of the command
  • sacrifices temporary file descriptor 3 (which you can change, of course)
  • does not expose this temporary file descriptor to the inner command

If you want to catch stderr of some command into var you can do

{ var="$( { command; } 2>&1 1>&3 3>&- )"; } 3>&1;

Afterwards you have it all:

echo "command gives $? and stderr '$var'";

If command is simple (not something like a | b) you can leave the inner {} away:

{ var="$(command 2>&1 1>&3 3>&-)"; } 3>&1;

Wrapped into an easy reusable bash-function (probably needs version 3 and above for local -n):

: catch-stderr var cmd [args..]
catch-stderr() { local -n v="$1"; shift && { v="$("$@" 2>&1 1>&3 3>&-)"; } 3>&1; }

Explained:

  • local -n aliases "$1" (which is the variable for catch-stderr)
  • 3>&1 uses file descriptor 3 to save where stdout currently points
  • { command; } (or "$@") then executes the command within the output capturing $(..)
  • Please note that the exact order is important here (doing it the wrong way shuffles the file descriptors wrongly):
    • 2>&1 redirects stderr to the output capturing $(..)
    • 1>&3 redirects stdout away from the output capturing $(..) back to the "outer" stdout which was saved in file descriptor 3. Note that stderr still refers to where FD 1 pointed before: To the output capturing $(..)
    • 3>&- then closes file descriptor 3, as it is no longer needed, so that command does not suddenly have some unknown open file descriptor showing up. Note that the outer shell still has FD 3 open, but command will not see it.
    • The latter is important, because some programs like lvm complain about unexpected file descriptors. And lvm complains to stderr - just what we are going to capture!

You can catch any other file descriptor with this recipe, if you adapt accordingly. Except file descriptor 1 of course (here the redirection logic would be wrong, but for file descriptor 1 you can just use var=$(command) as usual).

Note that this sacrifices file descriptor 3. If you happen to need that file descriptor, feel free to change the number. But be aware that some shells (from the 1980s) might understand 99>&1 as argument 9 followed by 9>&1 (this is no problem for bash).

Also note that it is not particularly easy to make this FD 3 configurable through a variable. It makes things very unreadable:

: catch-var-from-fd-by-fd variable fd-to-catch fd-to-sacrifice command [args..]
catch-var-from-fd-by-fd()
{
local -n v="$1";
local fd1="$2" fd2="$3";
shift 3 || return;

eval exec "$fd2>&1";
v="$(eval '"$@"' "$fd1>&1" "1>&$fd2" "$fd2>&-")";
eval exec "$fd2>&-";
}

Security note: The first 3 arguments to catch-var-from-fd-by-fd must not be taken from a 3rd party. Always give them explicitly in a "static" fashion.

So no-no-no catch-var-from-fd-by-fd $var $fda $fdb $command, never do this!

If you happen to pass in a variable variable name, at least do it as follows: local -n var="$var"; catch-var-from-fd-by-fd var 3 5 $command

This still will not protect you against every exploit, but at least helps to detect and avoid common scripting errors.

Notes:

  • catch-var-from-fd-by-fd var 2 3 cmd.. is the same as catch-stderr var cmd..
  • shift || return is just a way to prevent ugly errors in case you forget to give the correct number of arguments. Terminating the shell would be another way (but that makes it hard to test from the command line).
  • The routine was written so that it is easier to understand. One could rewrite the function so that it does not need exec, but then it gets really ugly.
  • This routine can be rewritten for non-bash shells as well, so that there is no need for local -n. However, then you cannot use local variables and it gets extremely ugly!
  • Also note that the evals are used in a safe fashion. Usually eval is considered dangerous. However, in this case it is no more evil than using "$@" (to execute arbitrary commands). Be sure to use the exact quoting shown here (else it becomes very, very dangerous).

2 Comments

I don't think it gives access to the return code of the command
I don't see the local -n option in the bash 3.2 nor 4.2 man pages, just in the 5.2 man page.
8
# command receives its input from stdin.
# command sends its output to stdout.
exec 3>&1
stderr="$(command </dev/stdin 2>&1 1>&3)"
exitcode="${?}"
echo "STDERR: $stderr"
exit ${exitcode}

3 Comments

command is a bad choice here, inasmuch as there's actually a builtin by that name. Might make it yourCommand or such, to be more explicit.
exitcode is always zero
is it really necessary to do anything with /dev/stdin? Since FD=0 is not touched anywhere n your command im not sure what is gained
7

POSIX

STDERR can be captured with some redirection magic:

$ { error=$( { { ls -ld /XXXX /bin | tr o Z ; } 1>&3 ; } 2>&1); } 3>&1
lrwxrwxrwx 1 rZZt rZZt 7 Aug 22 15:44 /bin -> usr/bin/

$ echo $error
ls: cannot access '/XXXX': No such file or directory

Note that piping of STDOUT of the command (here ls) is done inside the innermost { }. If you're executing a simple command (eg, not a pipe), you could remove these inner braces.

You can't pipe outside the command as piping makes a subshell in bash and zsh, and the assignment to the variable in the subshell wouldn't be available to the current shell.
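A quick way to see that subshell effect (this is bash's default behaviour; zsh and ksh run the last pipeline element in the current shell, so they behave differently):

```shell
#!/bin/bash
# In bash, each element of a pipeline runs in a subshell by default,
# so the variable set by `read` dies with the subshell.
unset var
echo hello | read var
echo "after pipe: '${var:-unset}'"    # -> after pipe: 'unset'
```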

bash

In bash, it would be better not to assume that file descriptor 3 is unused:

{ error=$( { { ls -ld /XXXX /bin | tr o Z ; } 1>&$tmp ; } 2>&1); } {tmp}>&1; 
exec {tmp}>&-  # With this syntax the FD stays open

Note that this doesn't work in zsh.


Thanks to this answer for the general idea.

2 Comments

can u explain this line with details? did not understood 1>&$tmp ; { error=$( { { ls -ld /XXXX /bin | tr o Z ; } 1>&$tmp ; } 2>&1); } {tmp}>&1;
@ThiagoConrado I assume tmp in that case is just a variable that stores a file descriptor that you know is unused. For example, if tmp=3 then 1>&$tmp would become 1>&3 and the command would be the same as explained previously (it would store stdout (1) in the file descriptor 3, than stderr (2) would go to stdout and be stored in the error variable, and finally the content streamed to the file descriptor 3 goes back to the file descriptor 1, that is, stdout, because of {tmp}>&1 that turns into 3>&1, if I understood correctly).
7

A simple solution

{ ERROR=$(./useless.sh 2>&1 1>&$out); } {out}>&1
echo "-"
echo $ERROR

Will produce:

This Is Output
-
This Is Error

4 Comments

I like this. I tweaked it to this: OUTPUT=$({ ERROR=$(~/code/sh/x.sh 2>&1 1>&$TMP_FD); } {TMP_FD}>&1) this also allows the status to be seen via $?
@kdubs I've tried your solution but the ERROR is always empty. Even if the command failed. Have I missed something? $ STDOUT="$({ STDERR="$(ls -l /tmp/non-existing-file 2>&1 1>&$out)"; } {out}>&1)"; echo "stdout = [$STDOUT], stderr = [$STDERR]"; stdout = [], stderr = [] Have I missed something?
@kdubs I think to make it work in case of the error you have to swap 2>&1 and 1>&TMP_FD So it looks so: STDOUT="$({ STDERR="$(ls -l /tmp/non-existing-file 1>&$out 2>&1)"; } {out}>&1)"; echo "stdout = [$STDOUT], stderr = [$STDERR]";
mine is messed up. I got mislead, because the variables were set from a previous attempt. your ordering appears to write only to STDOUT
6

I think you want to capture stderr, stdout and the exit code. If that is your intention, you can use this code:

## Capture error when 'some_command_with_err' is executed
some_command_with_err() {
    echo 'this is the stdout'
    echo 'this is the stderr' >&2
    exit 1
}

run_command() {
    {
        IFS=$'\n' read -r -d '' stderr;
        IFS=$'\n' read -r -d '' stdout;
        IFS=$'\n' read -r -d '' stdexit;
    } < <((printf '\0%s\0%d\0' "$(some_command_with_err)" "${?}" 1>&2) 2>&1)
    stdexit=${stdexit:-0};
}

echo 'Run command:'
if ! run_command; then
    ## Show the values
    typeset -p stdout stderr stdexit
else
    typeset -p stdout stderr stdexit
fi

This script captures the stderr and stdout as well as the exit code.

But how does it work?

First, we capture the stdout as well as the exitcode using printf '\0%s\0%d\0'. They are separated by the \0 aka 'null byte'.

After that, we redirect the printf to stderr by doing: 1>&2 and then we redirect all back to stdout using 2>&1. Therefore, the stdout will look like:

"<stderr>\0<stdout>\0<exitcode>\0"
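To see that framing, you can pipe it through cat -v, which renders the NUL bytes as ^@ (the `cmd` function here is a hypothetical stand-in):

```shell
#!/bin/bash
# Make the NUL-framed string visible. cmd's stderr arrives first (unframed),
# followed by the framed "\0<stdout>\0<exitcode>\0" payload.
cmd() { echo out; echo err >&2; }
(printf '\0%s\0%d\0' "$(cmd)" "$?" 1>&2) 2>&1 | cat -v
```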

Enclosing the printf command in <( ... ) performs process substitution. Process substitution allows a process's input or output to be referred to using a filename. This means <( ... ) will pipe the stdout of (printf '\0%s\0%d\0' "$(some_command_with_err)" "${?}" 1>&2) 2>&1 into the stdin of the command group via the first <.

Then, we can capture the piped stdout from the stdin of the command group with read. This command reads a line from the file descriptor stdin and split it into fields. Only the characters found in $IFS are recognized as word delimiters. $IFS or Internal Field Separator is a variable that determines how Bash recognizes fields, or word boundaries, when it interprets character strings. $IFS defaults to whitespace (space, tab, and newline), but may be changed, for example, to parse a comma-separated data file. Note that $* uses the first character held in $IFS.

## Shows whitespace as a single space, ^I(horizontal tab), and newline, and display "$" at end-of-line.
echo "$IFS" | cat -vte
# Output:
# ^I$
# $

## Reads commands from string and assign any arguments to pos params
bash -c 'set w x y z; IFS=":-;"; echo "$*"'
# Output:
# w:x:y:z

for l in $(printf %b 'a b\nc'); do echo "$l"; done
# Output: 
# a
# b
# c

IFS=$'\n'; for l in $(printf %b 'a b\nc'); do echo "$l"; done
# Output: 
# a b
# c

That is why we defined IFS=$'\n' (newline) as delimiter. Our script uses read -r -d '', where read -r does not allow backslashes to escape any characters, and -d '' reads until a NUL byte rather than until a newline.

Finally, replace some_command_with_err with your script file and you can capture and handle the stderr, stdout and the exit code as you wish.

1 Comment

Does it work with more than 512 bytes ?
5

Iterating a bit on Tom Hale's answer, I've found it possible to wrap the redirection yoga into a function for easier reuse. For example:

#!/bin/sh

capture () {
    { captured=$( { { "$@" ; } 1>&3 ; } 2>&1); } 3>&1
}

# Example usage; capturing dialog's output without resorting to temp files
# was what motivated me to search for this particular SO question
capture dialog --menu "Pick one!" 0 0 0 \
        "FOO" "Foo" \
        "BAR" "Bar" \
        "BAZ" "Baz"
choice=$captured

clear; echo $choice

It's almost certainly possible to simplify this further. Haven't tested especially-thoroughly, but it does appear to work with both bash and ksh.


EDIT: an alternative version of the capture function which stores the captured STDERR output into a user-specified variable (instead of relying on a global $captured), taking inspiration from Léa Gris's answer while preserving the ksh (and zsh) compatibility of the above implementation:

capture () {
    if [ "$#" -lt 2 ]; then
        echo "Usage: capture varname command [arg ...]"
        return 1
    fi
    typeset var captured; captured="$1"; shift
    { read $captured <<<$( { { "$@" ; } 1>&3 ; } 2>&1); } 3>&1
}

And usage:

capture choice dialog --menu "Pick one!" 0 0 0 \
        "FOO" "Foo" \
        "BAR" "Bar" \
        "BAZ" "Baz"

clear; echo $choice

3 Comments

It looks like the $? value of the command executed is lost in the process. It is possible to preserve it ?
@jplandrain yes, $? here is execution code of read command, not of the called command. Execution code is preserved in the current version of Léa Gris's answer
Why do you use read $captured <<<$(...) with losing execution code of the called function? Why not to use simple assignment captured=$(...) to preserve it?
4

Here's how I did it :

#
# $1 - name of the (global) variable where the contents of stderr will be stored
# $2 - command to be executed
#
captureStderr()
{
    local tmpFile=$(mktemp)

    $2 2> $tmpFile

    eval "$1=$(< $tmpFile)"

    rm $tmpFile
}

Usage example :

captureStderr err "./useless.sh"

echo -$err-

It does use a temporary file. But at least the ugly stuff is wrapped in a function.

5 Comments

@ShadowWizard Little doubt on my side. In French, colon is usually preceded by a space. I mistakenly apply this same rule with english answers. After checking this, I know I won't make this mistake again.
@Stephan cheers, this has also been discussed here. :)
There are safer ways to do this than using eval. For instance, printf -v "$1" '%s' "$(<tmpFile)" doesn't risk running arbitrary code if your TMPDIR variable has been set to a malicious value (or your destination variable name contains such a value).
Similarly, rm -- "$tmpFile" is more robust than rm $tmpFile.
I've improved this solution according to @CharlesDuffy comments, extended it for possibility to capture function out string result, fixed returning return code of called function/command, supported multiple parameters with "$@", used local out variable for returning stderr instead of global var, supported virtual temporary partition in RAM memory (if it configured temporary file will be created there). Improved solution posted here
2

This is an interesting problem to which I hoped there was an elegant solution. Sadly, I end up with a solution similar to Mr. Leffler, but I'll add that you can call useless from inside a Bash function for improved readability:

#!/bin/bash

function useless {
    /tmp/useless.sh | sed 's/Output/Useless/'
}

ERROR=$(useless)
echo $ERROR

All other kinds of output redirection must be backed by a temporary file.

Comments

2

Improving on YellowApple's answer:

This is a Bash function to capture stderr into any variable

stderr_capture_example.sh:

#!/usr/bin/env bash

# Capture stderr from a command to a variable while maintaining stdout
# @Args:
# $1: The variable name to store the stderr output
# $2: Vararg command and arguments
# @Return:
# The Command's Return-Code
capture_stderr() {
    local -n stderr="${1:?}"
    shift
    {
        # shellcheck disable=SC2034 # nameref
        stderr="$({ "$@" 1>&3; } 2>&1)"
    } 3>&1
}

# Testing with a call to erroring ls
LANG=C capture_stderr my_stderr ls "$0" ''

printf 'RC=%d\n' $?
printf 'my_stderr contains:\n%s\n' "$my_stderr"

Testing:

bash stderr_capture_example.sh

Output:

stderr_capture_example.sh
RC=2
my_stderr contains:
ls: cannot access '': No such file or directory

This function can be used to capture the returned choice of a dialog command.

10 Comments

Why LANG=C is in the code?
Why local stderr is not marked with -n flag like local -n stderr=$1 or declare -n stderr=$1 to explicitly specify that it is out param? To work with bashes with versions prior to 5 (released in 2019)?
@AntonSamokat a nameref variable with the -n flag is not needed because printf -v varname already uses the variable name reference.
Return-Code is not properly returned back. It always returns as 0 that shows success of printf command. To return Return-Code for executing inside command something like the following is needed: function capture_stderr_with_return_code { [ $# -lt 2 ] && return 2; declare -n stderr=$1; shift; { stderr=$({ "$@" 1>&3; } 2>&1); local exitcode="$?"; return $exitcode; } 3>&1; }. To check it: capture_stderr_with_return_code errormsg ls 1 2; echo "return_code: $?"; echo "error_msg: $errormsg"
@AntonSamokat LANG=C inline environment ensures the error message to be printed as an example is the same for everyone running this code sample; regardless of system locale.
1

This post helped me come up with a similar solution for my own purposes:

MESSAGE=`{ echo $ERROR_MESSAGE | format_logs.py --level=ERROR; } 2>&1`

Then, as long as our MESSAGE is not an empty string, we pass it on to other stuff. This lets us know if format_logs.py failed with some kind of Python exception.

Comments

1

In zsh:

{ . ./useless.sh > /dev/tty } 2>&1 | read ERROR
$ echo $ERROR
( your message )

Comments

1

Capture AND Print stderr

ERROR=$( ./useless.sh 3>&1 1>&2 2>&3 | tee /dev/fd/2 )

Breakdown

You can use $() to capture stdout, but here you want to capture stderr instead, so you swap stdout and stderr, using fd 3 as the temporary storage in the standard swap algorithm.

If you want to capture AND print use tee to make a duplicate. In this case the output of tee will be captured by $() rather than go to the console, but stderr(of tee) will still go to the console so we use that as the second output for tee via the special file /dev/fd/2 since tee expects a file path rather than a fd number.

NOTE: That is an awful lot of redirection in a single line, and the order matters. $() grabs the stdout of tee at the end of the pipeline, and the pipeline itself routes the stdout of ./useless.sh to the stdin of tee AFTER we swapped stdout and stderr for ./useless.sh.

Using stdout of ./useless.sh

The OP said he still wanted to use (not just print) stdout, like ./useless.sh | sed 's/Output/Useless/'.

No problem: just do it BEFORE swapping stdout and stderr. I recommend moving it into a function or file (also-useless.sh) and calling that in place of ./useless.sh in the line above.

However, if you want to CAPTURE stdout AND stderr, then I think you have to fall back on temporary files because $() will only do one at a time and it makes a subshell from which you cannot return variables.
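A minimal temp-file sketch of that fallback, capturing both streams in one pass (mktemp is assumed to be available):

```shell
#!/bin/bash
# Capture stdout via $() and stderr via a temporary file.
tmp=$(mktemp)
out=$( { echo OUT; echo ERR >&2; } 2>"$tmp" )
err=$(<"$tmp")
rm -- "$tmp"
echo "out=$out err=$err"   # -> out=OUT err=ERR
```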

Comments

1

I'll use the find command

find / -maxdepth 2 -iname 'tmp' -type d

as a non-superuser for the demo. It should complain 'Permission denied' when accessing the / dir.

#!/bin/bash

echo "terminal:"
{ err="$(find / -maxdepth 2 -iname 'tmp' -type d 2>&1 1>&3 3>&- | tee /dev/stderr)"; } 3>&1 | tee /dev/fd/4 2>&1; out=$(cat /dev/fd/4)
echo "stdout:" && echo "$out"
echo "stderr:" && echo "$err"

that gives output:

terminal:
find: ‘/root’: Permission denied
/tmp
/var/tmp
find: ‘/lost+found’: Permission denied
stdout:
/tmp
/var/tmp
stderr:
find: ‘/root’: Permission denied
find: ‘/lost+found’: Permission denied

The terminal output also contains the /dev/stderr content, the same way as if you were running that find command without any script. $out has the /dev/stdout content and $err has the /dev/stderr content.

use:

#!/bin/bash

echo "terminal:"
{ err="$(find / -maxdepth 2 -iname 'tmp' -type d 2>&1 1>&3 3>&-)"; } 3>&1 | tee /dev/fd/4; out=$(cat /dev/fd/4)
echo "stdout:" && echo "$out"
echo "stderr:" && echo "$err"

if you don't want to see /dev/stderr in the terminal output.

terminal:
/tmp
/var/tmp
stdout:
/tmp
/var/tmp
stderr:
find: ‘/root’: Permission denied
find: ‘/lost+found’: Permission denied

Comments

0

If you want to bypass the use of a temporary file you may be able to use process substitution. I haven't quite gotten it to work yet. This was my first attempt:

$ .useless.sh 2> >( ERROR=$(<) )
-bash: command substitution: line 42: syntax error near unexpected token `)'
-bash: command substitution: line 42: `<)'

Then I tried

$ ./useless.sh 2> >( ERROR=$( cat <() )  )
This Is Output
$ echo $ERROR   # $ERROR is empty

However

$ ./useless.sh 2> >( cat <() > asdf.txt )
This Is Output
$ cat asdf.txt
This Is Error

So the process substitution is doing generally the right thing... unfortunately, whenever I wrap STDIN inside >( ) with something in $() in an attempt to capture that to a variable, I lose the contents of $(). I think that this is because $() launches a sub process which no longer has access to the file descriptor in /dev/fd which is owned by the parent process.

Process substitution has bought me the ability to work with a data stream which is no longer in STDERR, unfortunately I don't seem to be able to manipulate it the way that I want.

1 Comment

If you did ./useless.sh 2> >( ERROR=$( cat <() ); echo "$ERROR" ) then you would see output of ERROR. The trouble is that the process substitution is run in a sub-shell, so the value set in the sub-shell doesn't affect the parent shell.
0
$ b=$( ( a=$( (echo stdout;echo stderr >&2) ) ) 2>&1 )
$ echo "a=>$a b=>$b"
a=>stdout b=>stderr

5 Comments

This looks like a good idea, but on Mac OSX 10.8.5, it prints a=> b=>stderr
I agree with @HeathBorders; this does not produce the output shown. The trouble here is that a is evaluated and assigned in a sub-shell, and the assignment in the sub-shell does not affect the parent shell. (Tested on Ubuntu 14.04 LTS as well as Mac OS X 10.10.1.)
The same in Windows GitBash. So, it doesn't work. (GNU bash, version 4.4.12(1)-release (x86_64-pc-msys))
Does not work on SLE 11.4 either and produces the effect described by @JonathanLeffler
While this code may answer the question, providing additional context regarding why and/or how this code answers the question improves its long-term value.
0

For error proofing your commands:

execute [INVOKING-FUNCTION] [COMMAND]

execute () {
    function="${1}"
    command="${2}"
    error=$(eval "${command}" 2>&1 >"/dev/null")

    if [ ${?} -ne 0 ]; then
        echo "${function}: ${error}"
        exit 1
    fi
}

Inspired by Lean manufacturing.

1 Comment

The idiomatic solution is to put the assignment inside the if. Let me post a separate solution.
0

Solutions like the one in Léa Gris's answer have a restriction: they cannot return out variables from the called function to the calling environment. They are based on command substitution with the stderr=$() construction, which executes the called function in a subshell environment and therefore cannot return any variables back to the parent shell.

To address this, below are both Léa Gris's solution and an enhanced version of tfga's solution, taking into account:

  • Charles Duffy's recommendations in comments under tfga's answer
  • Preserving return code of the called function/command
  • Using local out variable for returning stderr instead of global var
  • Supporting multiple parameters in called function with "$@"
  • If temporary partition in RAM "/mnt/tmpfs" exists temporary file will be created there, otherwise in standard "/tmp" directory.

This solution based on temporary file. Temporary file can be kept in RAM memory, not SSD/HDD.

Virtual partition in RAM can be created with this recipe: How to make a temporary partition in RAM?

In the code below, if the temporary partition in RAM "/mnt/tmpfs" exists, the temporary file will be created there; otherwise it will be created in the standard "/tmp" directory. Read the gurus' comments about this approach here. You may not like the idea of using this temporary virtual partition in RAM either. Perhaps it is better to ensure or set up keeping the standard "/tmp" directory in RAM, to prevent SSD/HDD degradation. Related question: How can I use RAM storage for the /tmp directory and how to set a maximum amount of RAM usage for it?

cat capture_part.sh

#!/bin/bash

# For capturing only stderr
# Captures stderr from a command to a variable while maintaining stdout
# @Args:
# $1: The variable name to store the stderr output
# $2: Vararg command and arguments
# @Return:
# The Command's Return-Code or 1 if missing argument
# source: https://stackoverflow.com/a/60936303/1455694
# See test_capture_stderr for usage example.
capture_stderr() {
  local -n stderr="${1:?}"; shift
  { stderr="$({ "$@" 1>&3; } 2>&1)"; } 3>&1
}

func_to_test_all_outs() {
    local param="${1:?}"; shift
    local -n out_result=${1:?}; shift

    echo "test error output" >&2
    echo "test normal output"

    out_result=""
    if [[ "$param" = "return_string_result" ]]; then
        out_result="result string"
    fi

    return 3
}

assert_equals() {
    local error_message=$1; shift
    local expected_result=$1; shift
    local actual_result=$1; shift

    if [ "$expected_result" != "$actual_result" ]; then
        echo "$error_message; Expected: $expected_result; Actual: $actual_result"
    fi
}

# For non english terminal local run with:
# LANG=C ./capture_part.sh test_capture_stderr
test_capture_stderr() {
    #test for capturing ls command error output
    local expected_stderr="ls: cannot access '123': No such file or directory"
    local actual_stderr
    local expected_return_code=2
    local actual_return_code

    capture_stderr actual_stderr ls "123"
    actual_return_code=$?

    assert_equals "test_capture_stderr" "$expected_stderr" "$actual_stderr"
    assert_equals "test_capture_stderr" "$expected_return_code" "$actual_return_code"

    #test for function
    local expected_stderr="test error output"
    local actual_stderr
    local expected_return_code=3
    local actual_return_code
    local expected="result string"
    local result
    capture_stderr actual_stderr func_to_test_all_outs "return_string_result" result
    actual_return_code=$?

    assert_equals "test_capture_stderr-10" "$expected_stderr" "$actual_stderr"
    assert_equals "test_capture_stderr-20" "$expected_return_code" "$actual_return_code"
}

# Calls specified as parameter command/function and captures
# stderr, string result and puts them to separate corresponding out params.
# Return code is returned for called command/function.
# See test_capture_stderr_outres for usage example.
# $1 - name of the variable where the contents of stderr will be stored
# $2 - called function string out parameter
# $3 - command/function to be executed
# $4, $5, ... - parameters for the command/function
capture_stderr_outres() {
    local -n stderr="${1:?}"; shift
    local -n _out_result="${1:?}"; shift # _out_result is string out param, not return code

    local tmp_dir="/tmp"
    local virtual_tmp_dir="/mnt/tmpfs"
    if [ -d "$virtual_tmp_dir" ]; then
        tmp_dir="$virtual_tmp_dir"
    fi
    # local tmp_file_for_stderr=$(mktemp)  # standard tmp file creation in /tmp
    local tmp_file_for_stderr=$(mktemp "${tmp_dir}/tmp_file_for_stderr.XXXXXXXXXX")
    
    # better to separate var declaration with local mark and $? assignment
    local -i return_code 
    "$@" _out_result 2> "$tmp_file_for_stderr"

    # this command must be the first command after calling "$@"
    return_code="$?"  # return code for called function/command will be also returned
     
    stderr="$(<"$tmp_file_for_stderr")" 

    rm -- "$tmp_file_for_stderr"
    return "$return_code"
}

test_capture_stderr_outres() {
    local expected_stderr="test error output"
    local actual_stderr
    local expected_return_code=3
    local actual_return_code
    local expected="result string"
    local actual="some previous value"  # to ensure that test will change it

    capture_stderr_outres actual_stderr actual func_to_test_all_outs "return_string_result"
    actual_return_code=$?

    assert_equals "test_capture_stderr_outres-50" "$expected_return_code" "$actual_return_code"
    assert_equals "test_capture_stderr_outres-60" "$expected_stderr" "$actual_stderr"
    assert_equals "test_capture_stderr_outres-70" "$expected" "$actual"
}

tests() {
    echo "tests started"

    test_capture_stderr
    test_capture_stderr_outres

    echo "tests finished"
}

t() {
    tests
}

"$@"

Usage:

./capture_part.sh t

No error output to the console means that the tests passed successfully.

  • capture_stderr — captures only stderr without temporary file creation. Example of usage in test_capture_stderr
  • capture_stderr_outres — captures stderr and called function out var string result. Example of usage in test_capture_stderr_outres
  • func_to_test_all_outs — example of tested function that produces all possible outcomes that can be captured: stdout, stderr, string result, return code.

Capture functions also return return code of called function/command.

More extended general solution for capturing also stdout to separate var is here: How to put all called function/command output results to different corresponding vars: for stderr, for stdout, for string result and for return code?

Comments

0

I prefer this construct as it is compact and easy to read:

stdout=$(ls /usr /not_exists 2>/dev/shm/$$.stderr); rc=$?; stderr=$(</dev/shm/$$.stderr)
if [[ $rc -eq 0 ]]; then
    echo "Success: stdout: $stdout"
else
    # note: ls will return stdout and stderr as one of the files exists
    echo "Error: stdout: $stdout; stderr: $stderr; rc: $rc"
fi
  • /dev/shm is a ram-based temporary filesystem

  • (<file) is a bash replacement for cat

  • If you think stderr messages could fully utilize /dev/shm, you could add this to the top of your script:

    trap '[[ -f /dev/shm/$$.stderr ]] && rm /dev/shm/$$.stderr' EXIT
    

Comments
