This happens by default and by design, and I am not sure you can solve it with bash. You can certainly solve it with zsh (its zsh/zselect module exposes the select() syscall), but maybe shell is not the right language for the job at this point.
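For the zsh route, a minimal sketch of waiting on a descriptor with a timeout (the producer command and the 2-second timeout are just examples):

```zsh
zmodload zsh/zselect

# Example producer on fd 3 via process substitution; it stays silent
# for 10 seconds, so the 2-second wait below will time out.
exec 3< <(sleep 10; echo ready)

if zselect -t 200 -r 3; then    # -t is in hundredths of a second
  read -u 3 line
  print -r -- "got: $line"
else
  print -r -- "timed out, no data on fd 3 yet"
fi
exec 3<&-
```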
The issue is simple, as everything in unix is, but actually understanding and solving it requires some deeper thinking and practical knowledge.
The cause of the effect is the blocking flag set on the file descriptor, which the kernel sets by default when you open any VFS object.
In unix, process cooperation is synchronized by blocking at IO boundaries. This is intuitive and elegant, and it is what makes the naive, straightforward programs of beginner and intermediate application programmers "just work".
When you open the object in question (file, fifo or whatever), the blocking flag set on it ensures that any read from the descriptor immediately blocks the whole process when there is no data to be read. The process is unblocked only after some data is written into the object from the other side (in the case of a pipe).
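You can watch this happen with a fifo (a sketch; the path /tmp/demo.fifo is just an example, and note that for a fifo even the open() blocks until a writer shows up):

```sh
mkfifo /tmp/demo.fifo

# This blocks: first in open() until a writer connects, then in read()
# until data actually arrives. The process consumes no CPU meanwhile.
cat /tmp/demo.fifo

# Meanwhile, running this from another shell releases the block:
#   echo hello > /tmp/demo.fifo
```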
Regular files are an "exception", at least compared to pipes: from the point of view of the IO subsystem they "never block" (even though they actually do - clearing the blocking flag on a regular file's fd has no effect, because the wait happens deeper, in the kernel's storage read path). From the process's POV a disk read is always instant, completing in zero time without blocking (even though the system clock actually jumps forward during such a read).
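You can see that the flag is irrelevant for regular files with GNU dd, whose iflag=nonblock sets O_NONBLOCK on the input descriptor (a sketch, using /etc/hosts only as a convenient example file):

```sh
# The read still returns data; O_NONBLOCK never produces EAGAIN on a
# regular file, even if the kernel had to wait on the disk behind the
# scenes to satisfy the read.
dd if=/etc/hosts iflag=nonblock bs=4k count=1 status=none
```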
This design has two effects you are observing:
First, you never really observe the effects of IO blocking when juggling just regular files from the shell (as they never block "for real", as explained above).
Second, once you get blocked in the shell on a read() from a pipe, as happened here, your process is stuck in that block for as long as no more data arrives from the other side of the pipe. The process does not even run or consume CPU time in that state; it is the kernel that holds it blocked from the outside until data arrives, so any timeout logic the process would have to run itself cannot run either (that would require the process to be consuming CPU time, i.e. running). Only a signal, e.g. set up beforehand with alarm(2) or sent by an external watchdog like timeout(1), can interrupt the blocked syscall early.
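For example, a sketch assuming GNU coreutils timeout(1) and an example fifo path:

```sh
mkfifo /tmp/blocked.fifo

# cat blocks in the kernel (no writer, no data) and uses no CPU.
# timeout(1) sends SIGTERM after 5 seconds, interrupting the block
# from outside the stuck process.
timeout 5 cat /tmp/blocked.fifo
echo "exit status: $?"   # 124 means timeout(1) had to kill it
```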
Your process will remain blocked until some data is written into the pipe for it to read; then it is unblocked briefly, until all the data is consumed again and the process blocks once more.
If you ponder it carefully, this is actually what makes pipes in the shell work at all.
Have you ever wondered how a complex shell pipeline adapts to fast and slow programs somehow on its own? This is the mechanism that makes it work. A fast generator spewing output lets the next program in the pipeline read faster, and a slow generator makes every subsequent program read (and therefore run) slower - everything is rate-limited by the pipe blocking reads when it is empty and writes when it is full, synchronizing the whole pipeline as if by magic.
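You can see the rate limiting in both directions with a sketch like this (on Linux the pipe buffer defaults to 64 KiB):

```sh
# yes could write gigabytes per second, but the slow loop throttles it:
# once the pipe buffer is full, yes blocks in write() and stops consuming
# CPU until the loop drains some data. Interrupt with Ctrl-C.
yes | while read -r line; do
  sleep 1
  echo "consumed one line"
done
```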