xhienne

We don't know the size of cmd1's output, but pipes have a limited buffer size. Once that amount of data has been written to the pipe, any subsequent write will block until someone reads from the pipe (which is roughly your failed solution 3).
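For illustration, here is a minimal sketch of that blocking behavior (assuming GNU `timeout` and `head -c`; the 64 KiB figure is typical of Linux):

```shell
# A 4 KiB write fits in the kernel pipe buffer, so the writer
# finishes even though `sleep` never reads a byte:
sh -c 'head -c 4096 /dev/zero | sleep 1'            # exits 0

# A 1 MiB write blocks once the buffer (64 KiB on typical Linux)
# is full; `timeout` has to kill the still-blocked pipeline:
timeout 1 sh -c 'head -c 1048576 /dev/zero | sleep 10'
echo "exit status: $?"                              # 124
```

The second pipeline never completes on its own; that is exactly the deadlock your solution 3 ran into.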

You must use a mechanism that guarantees not to block. For very large data, use a temporary file. Otherwise, if you can afford to keep the data in memory (which was the idea with pipes after all), use this:

result=$(cmd1) && cmd2 < <(printf '%s' "$result")
unset result

Here the result of cmd1 is stored in the variable result. If cmd1 is successful, cmd2 is executed and is fed with the data in result. Finally, result is unset to release the associated memory.
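A concrete toy example, with `printf` and `tr` standing in for cmd1 and cmd2 (bash is required for the `< <(...)` process substitution):

```shell
# cmd1 succeeds, so cmd2 runs on the captured data; no pipe
# can fill up because the data sits in a shell variable
result=$(printf 'hello') && tr a-z A-Z < <(printf '%s' "$result")   # HELLO
unset result
```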

Note: formerly, I used a here-string (<<< "$result") to feed cmd2 with data but Stéphane Chazelas observed that bash would then create a temporary file, which you don't want.

Answers to the questions in the comments:

  • Yes, commands can be chained ad libitum:

      result=$(cmd1) \
      && result=$(cmd2 < <(printf '%s' "$result")) \
      && result=$(cmd3 < <(printf '%s' "$result")) \
      ...
      && cmdN < <(printf '%s' "$result")
      unset result
    
  • No, the above solution is not suitable for binary data because:

    1. Command substitution $(...) eats all trailing newlines.
    2. Behavior is unspecified for NUL characters (\0) in the result of a command substitution (e.g. Bash would discard them).
  • Yes, to circumvent all those problems with binary data, you can use an encoder like base64 (or uuencode, or a home-made one that only takes care of NUL characters and the trailing newlines):

      result=$(cmd1 > >(base64)) && cmd2 < <(printf '%s' "$result" | base64 -d)
      unset result
    
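As a sketch, the chained form above with concrete commands substituted in (hypothetical stages: produce words, split them, count them; bash required for `< <(...)`):

```shell
result=$(printf 'one two three') \
&& result=$(tr ' ' '\n' < <(printf '%s' "$result")) \
&& wc -w < <(printf '%s' "$result")                 # 3
unset result
```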

Here, I had to use a process substitution (>(...)) in order to keep cmd1's exit status intact.
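To see why the encoding matters, here is a small round trip on 5 bytes that include a NUL and trailing newlines (a deterministic `printf` stands in for cmd1):

```shell
# without encoding, bash mangles the binary data:
plain=$(printf 'a\0b\n\n')                  # NUL dropped, trailing newlines eaten
printf '%s' "$plain" | wc -c                # 2 bytes instead of 5

# with base64, only ASCII text crosses the command substitution,
# so all 5 original bytes survive the round trip:
result=$(printf 'a\0b\n\n' | base64)
printf '%s' "$result" | base64 -d | wc -c   # 5
unset result
```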

That said, again, this is quite a hassle just to ensure that the data are never written to disk. An intermediary temporary file is a better solution; see Stéphane's answer, which addresses most of your concerns about it.
