Timeline for Why does `tail -c 4097 /dev/zero` exit immediately instead of blocking?
Current License: CC BY-SA 4.0
13 events
| when | what | by | license | comment |
|---|---|---|---|---|
| Apr 17 at 19:01 | comment added | Stephen Kitt | | As you mention, files can grow (or even be truncated) while being read, so seeking to n bytes from the end of a file doesn't guarantee that there are exactly n bytes left to read anyway. Considering `/dev/zero` specifically, it might as well be considered an infinite tape (Turing style), where every value is always zero, so position is meaningless. |
| Apr 17 at 18:59 | comment added | Stephen Kitt | | @ilkkachu Let's see… On the coreutils side of things, the special handling for "a page or less" is indeed a fix for issues with files in `/sys` and the like. On the kernel side of things, allowing `/dev/zero` etc. to handle `lseek` is intended to allow opening them in append mode. Considering the `lseek` "contract", succeeding on `/dev/zero` without actually moving the position seems like an OK compromise: after seeking, the caller wants to know that the correct data will be read, which is the case anywhere in `/dev/zero`. |
| Apr 17 at 9:37 | comment added | ilkkachu | | @StephenKitt Well, exactly, it doesn't give an error, i.e. it succeeds, so it seems "supported" and it's fair to assume it does something useful, right? Not that the position would matter at all for `/dev/zero`, and it looks to always return a new position of zero, but with `SEEK_END`, the caller can't really know what the position should be anyway. It could just return `ESPIPE`, the same as `/dev/tty` appears to do. |
| Apr 17 at 9:10 | comment added | Stephen Kitt | | @ilkkachu "the fact that /dev/zero supports seeking from the end is already wrong" — it doesn't, but that doesn't result in an error when seeking. |
| Apr 17 at 8:47 | comment added | ilkkachu | | What I'm actually wondering is why they bothered to make different behaviour for the case of a page or less in the first place… It's not like the `read()`/`write()` calls really care about page size at all. …Oh, right, for cases like the "files" in `/proc` which look like regular files but where the size is a complete lie. Except that `seek(fd, -N, SEEK_END); read(N)` should still be as valid as it can be. |
| Apr 17 at 8:45 | comment added | ilkkachu | | In any case, if the data changes, it's kinda hard to get consistent results whatever you do, so odd behaviour on something special like `/dev/zero` isn't too bad, whichever way it goes. Actually, I could argue that the fact that `/dev/zero` supports seeking from the end is already wrong, since conceptually it doesn't have an end. And if the OS provides functions that are wrong, it's not the fault of the utility. :) |
| Apr 17 at 8:42 | comment added | ilkkachu | | Reading until getting EOF vs. reading a set amount of bytes can be different on regular files too: if some other process manages to append to the file after the seek but before the reads finish, then you might again read more than the amount of bytes requested. So, in any case, the implementation needs to decide which one to do. At least if there are only appends, seeking and reading the requested amount of bytes gives a set of bytes that were the N last ones at some point, but even that might not hold if concurrent writes also modify existing data. |
| Apr 17 at 8:35 | vote accepted | Isidro Arias | | |
| Apr 17 at 8:25 | history edited | Stephen Kitt | CC BY-SA 4.0 | Clarify page size and equality, thanks Chris Davies! |
| Apr 17 at 5:15 | history edited | Stephen Kitt | CC BY-SA 4.0 | Clarify the expectations. |
| Apr 17 at 5:14 | comment added | Stephen Kitt | | That's how I understand the bug report too, along the same lines as the question; I was surprised by the fix. I suppose it depends on whether one considers the argument to `-c` to be a limit on the output size as well as a starting point in the input! |
| Apr 17 at 4:01 | comment added | Stéphane Chazelas | | I'd agree with the OP here that a `tail -c anyvalue /dev/zero` that doesn't loop forever is a bug, as it's meant to output the end of an infinite stream. |
| Apr 16 at 21:53 | history answered | Stephen Kitt | CC BY-SA 4.0 | |