Running on Red Hat EL7, we have these great long lines in our log files, so I do:
tail -f Log | cut -c1-$COLUMNS
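An aside, separate from the buffering question: COLUMNS is normally set only by interactive shells and is not exported, so if this pipeline moves into a script, $COLUMNS can expand to nothing and cut -c1- then passes lines through untruncated. A quick sanity check, assuming bash:

echo "COLUMNS=${COLUMNS:-unset}"   # empty or unset means cut won't truncate anything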
That pipeline works great on some systems, but on other (apparently identical) systems the pipe holds the data until the buffer is full. As I was typing this, SE suggested this answer, which, when used:
tail -f Log | stdbuf -oL cut -c1-$COLUMNS
does what I need, but I'd like to know what is different. I'd like the systems to behave the same, good or bad.
Is there a default buffering that has been set? How was it set and where?
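For what it's worth, my understanding of the workaround: stdbuf works by LD_PRELOADing a small helper library (libstdbuf.so) that calls setvbuf() on the standard streams before the target command's main() runs, so -oL asks for line-buffered stdout. Two variants of the same idea, assuming GNU coreutils:

tail -f Log | stdbuf -oL cut -c1-$COLUMNS   # line-buffered: flush at each newline
tail -f Log | stdbuf -o0 cut -c1-$COLUMNS   # unbuffered: flush on every write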
Update: I opened two windows into a system where the problem occurs and tried:
while date; do usleep 500000; done | cut -c1-100
and got no output (until the buffer is full). In the other window, I ran strace on the cut process and got an endless series of:
read(0, "Wed Oct 26 13:04:12 CDT 2022\n", 4096) = 29
read(0, "Wed Oct 26 13:04:12 CDT 2022\n", 4096) = 29
read(0, "Wed Oct 26 13:04:13 CDT 2022\n", 4096) = 29
read(0, "Wed Oct 26 13:04:13 CDT 2022\n", 4096) = 29
read(0, "Wed Oct 26 13:04:14 CDT 2022\n", 4096) = 29
read(0, "Wed Oct 26 13:04:14 CDT 2022\n", 4096) = 29
read(0, "Wed Oct 26 13:04:15 CDT 2022\n", 4096) = 29
I think that's pretty conclusive evidence that the cut is doing the buffering. But how does it decide to do so?
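For background (my understanding of glibc stdio, not anything cut decides itself): the buffering mode is picked at the stream's first use. If isatty() reports that stdout is a terminal, the stream is line-buffered; otherwise it is fully buffered with a block-sized buffer, often 4 KiB, which matches the 4096-byte read size in the trace above. The odd part is that in the two-window test, cut's stdout is the terminal, so I'd expect line buffering. A way to watch the flush side directly (a sketch; the pgrep pattern is illustrative):

strace -e trace=write -p "$(pgrep -fn 'cut -c1-100')"   # expect long silence, then one ~4 KiB write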

A commenter asked whether I meant tail or the output of cut, since in what I'm showing, tail -f is the only one writing to a pipe, and AFAIK it shouldn't do buffering. To clarify: the stdbuf command makes the data come out immediately on the systems where the problem occurs. Since the stdbuf precedes the cut, I assume the cut is doing the buffering.
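If the systems really are meant to be identical, here is a short checklist I'd compare on a good box and a bad one (a sketch; assumes GNU coreutils on an RPM-based system):

type -a cut                           # alias, shell function, or a non-coreutils cut?
rpm -q coreutils                      # same coreutils build on both systems?
env | grep -E '_STDBUF_|LD_PRELOAD'   # leftover stdbuf/preload settings force a buffering mode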