The short answer is 0, because entropy is not consumed.
There is a common misconception that entropy is consumed: that each time you read a random bit, this removes some entropy from the random source. This is wrong. You do not “consume” entropy. Yes, the Linux documentation gets it wrong.
During the life cycle of a Linux system, there are two stages:
- Initially, there is not enough entropy. /dev/random will block until it thinks it has amassed enough entropy; /dev/urandom happily provides low-entropy data.
- After a while, enough entropy is present in the random generator pool. /dev/random assigns a bogus rate of “entropy leak” and blocks now and then; /dev/urandom happily provides crypto-quality random data. (See the sketch after this list for how to check which stage you are in.)
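If you are curious which stage your system is in, you can peek at the kernel's own estimate; a minimal sketch in Python (the /proc path is Linux-specific, and the exact meaning of the number varies across kernel versions):

```python
# Minimal sketch: read the kernel's entropy estimate on Linux.
# Treat the value as a rough indicator only; newer kernels pin it to a
# constant once the pool has been initialized.
with open("/proc/sys/kernel/random/entropy_avail") as f:
    print("kernel entropy estimate (bits):", int(f.read().strip()))
```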
FreeBSD gets it right: on FreeBSD, /dev/random (or /dev/urandom, which is the same thing) blocks if it doesn't have enough entropy, and once it does, it keeps spewing out random data. On Linux, neither /dev/random nor /dev/urandom does the useful thing.
In practice, use /dev/urandom, and make sure when you provision your system that the entropy pool is fed (from disk, network and mouse activity, from a hardware source, from an external machine, …).
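In application code you normally don't open the device by hand; for illustration, a minimal sketch of both approaches in Python (the 32-byte length is just an example):

```python
import os

# Preferred: os.urandom() asks the kernel CSPRNG for you
# (getrandom() or /dev/urandom under the hood, depending on platform).
key = os.urandom(32)

# Equivalent on Linux/FreeBSD: read the device directly.
with open("/dev/urandom", "rb") as f:
    nonce = f.read(32)

print(len(key), len(nonce))  # 32 32
```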
While you could track how many bytes are read from /dev/urandom, this is completely pointless. Reading from /dev/urandom does not deplete the entropy pool. Each consumer uses up 0 bits of entropy per any unit of time you care to name.
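If you want to convince yourself, pull an arbitrarily large amount of data and note that it never blocks; a rough sketch (the 100 MiB total and 1 MiB chunk size are arbitrary choices):

```python
# Rough sketch: read a lot of data from /dev/urandom. On any system
# whose pool has been initialized, this completes without blocking,
# and it does not starve other consumers of randomness.
total = 0
with open("/dev/urandom", "rb") as f:
    while total < 100 * 1024 * 1024:   # 100 MiB, an arbitrary target
        total += len(f.read(1 << 20))  # 1 MiB per read
print("read", total, "bytes without blocking")
```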