Consider this simple program (abort.py) that writes some text to stdout and stderr and then crashes with abort().
import os
import sys
print("end stdout")
print("end stderr", file=sys.stderr)
os.abort()
When I run it in a terminal, both the stdout and stderr output are correctly produced.
$ python3 abort.py 
end stdout
end stderr
Aborted (core dumped)
However, if I redirect stdout and/or stderr to another program (perhaps for logging purposes), it doesn't work any more.
$ python3 abort.py | cat
end stderr
$ python3 abort.py |& cat
$ python3 abort.py | tee -a logs
end stderr
$ python3 abort.py |& tee -a logs
# file `logs` is unchanged
In fact, if the program (abort.py) produces a lot of text on stdout and stderr, only the last section of it is lost from the perspective of the receiving program on the other end of the pipe. I also tried running the program from a bash script and running that script, but the result is the same.
Why is this happening and how should I fix it?
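Update: for reference, a minimal sketch showing that an explicit flush=True makes the stdout text reach the pipe even when the process aborts. It writes a flushing variant of abort.py to a temporary file and runs it with its stdout connected to a pipe, like python3 abort.py | cat:

```python
import subprocess
import sys
import tempfile

# Variant of abort.py that flushes stdout explicitly before crashing.
# flush=True pushes the text out of Python's userspace buffer right away,
# so it is already in the pipe when abort() kills the process.
script = (
    "import os, sys\n"
    "print('end stdout', flush=True)\n"
    "print('end stderr', file=sys.stderr)\n"
    "os.abort()\n"
)

with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write(script)
    path = f.name

# capture_output connects stdout/stderr to pipes, like `| cat` would.
result = subprocess.run([sys.executable, path], capture_output=True, text=True)
print(result.stdout, end="")  # 'end stdout' now arrives despite the crash
print(result.stderr, end="")
```

This only helps if you can edit the crashing program, of course, which is not always the case.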
Background
The example above is obviously contrived. The real-world scenario is that I was debugging a large program that occasionally crashes (segfault or abort) after running for days. It produces a lot of logs on both stdout and stderr, but the most important piece of information about the crash (such as the stack trace) is printed at the end, right before the crash happens. I have a logging pipeline set up in a bash script (involving tee and gzip, for example), but I found that the last section of the saved log is always missing, which is frustrating.
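A side note on getting crash stack traces reliably: the standard-library faulthandler module writes tracebacks with raw os.write() calls on a file descriptor, bypassing Python's userspace buffering entirely, so its output survives SIGSEGV/SIGABRT. Calling faulthandler.enable(file=...) at startup registers crash handlers; the sketch below just dumps the current stack into a temporary file to show the mechanism (the .log filename is arbitrary):

```python
import faulthandler
import tempfile

# dump_traceback() writes straight to the file's descriptor, so nothing
# sits in a userspace buffer waiting to be lost in a crash.
with tempfile.NamedTemporaryFile("w+", suffix=".log", delete=False) as f:
    faulthandler.dump_traceback(file=f)
    f.seek(0)
    print(f.read())  # e.g. 'Current thread 0x...' followed by File/line entries
```

This only applies to Python programs, but for those it is a more direct route to the final stack trace than fighting the buffering in the pipeline.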


Comments

I tried stdbuf -oL python3 abort.py |& cat, which should enable line buffering for stdout, but it produces the exact same result. stdbuf doesn't seem to work with python3.

It's possible that Python does its own buffering, distinct from the C library's logic that stdbuf tries to change. It's also possible for a program to change its buffering mode after stdbuf sets it, which would likewise make stdbuf ineffective.

Redirecting with > x or >& x also loses the buffered output. (But it doesn't lose the reporting of the abnormal death, as the pipeline does, for the reason explained by icarus.) Standard I/O on TCP sockets does this too, but is more finicky to set up.

Instead of stdbuf, which as @ilkkachu says only tweaks stdlib buffering, unbuffer (usually part of the expect package) creates a fake TTY (pseudo-TTY or PTY) that convinces nearly all programs, including perl, to not buffer more than a line.
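Since stdbuf doesn't reach Python's own buffering, the Python-level equivalent is python3 -u (or PYTHONUNBUFFERED=1), which makes Python's stdout and stderr unbuffered. A sketch, again running a temp copy of abort.py with its output piped, this time without any explicit flush in the script:

```python
import subprocess
import sys
import tempfile

# Same abort.py as in the question: no flush, so its final stdout text
# would normally be lost when stdout is a pipe.
script = (
    "import os, sys\n"
    "print('end stdout')\n"
    "print('end stderr', file=sys.stderr)\n"
    "os.abort()\n"
)
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write(script)
    path = f.name

# -u disables Python's stdout/stderr buffering (same as PYTHONUNBUFFERED=1),
# doing at the interpreter level what stdbuf attempts for C stdio.
result = subprocess.run([sys.executable, "-u", path],
                        capture_output=True, text=True)
print(result.stdout, end="")  # 'end stdout' survives the crash with -u
```

Unlike editing the program to flush, this can be applied from the outside in the logging pipeline, at the cost of more write syscalls.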