The short answer is 0, because entropy is not consumed.

There is a common misconception that entropy is consumed — that each time you read a random bit, this removes some entropy from the random source. This is wrong. You do not “consume” entropy. Yes, the Linux documentation gets it wrong.

During the life cycle of a Linux system, there are two stages:

  1. Initially, there is not enough entropy. /dev/random will block until it thinks it has amassed enough entropy; /dev/urandom happily provides low-entropy data.
  2. After a while, enough entropy is present in the random generator pool. /dev/random assigns a bogus rate of “entropy leak” and blocks now and then; /dev/urandom happily provides crypto-quality random data.
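You can watch this bookkeeping yourself: the kernel exposes its entropy estimate under /proc. A minimal Python sketch (Linux-specific; the point is that pulling a megabyte out of the CSPRNG does not meaningfully change the figure — on kernels 5.6 and later the estimate is a fixed constant, which underlines how bogus the accounting is):

```python
import os

def entropy_estimate():
    """Kernel's entropy estimate in bits, or None on non-Linux systems."""
    try:
        with open("/proc/sys/kernel/random/entropy_avail") as f:
            return int(f.read())
    except OSError:
        return None

before = entropy_estimate()
os.urandom(1024 * 1024)   # pull 1 MiB out of the kernel CSPRNG
after = entropy_estimate()
print(before, after)      # typically identical (256 on kernels >= 5.6)
```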

FreeBSD gets it right: on FreeBSD, /dev/random (or /dev/urandom, which is the same thing) blocks if it doesn't have enough entropy, and once it does, it keeps spewing out random data. On Linux, neither /dev/random nor /dev/urandom gives you the behavior you actually want.
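Linux did eventually grow an interface with the FreeBSD-style semantics: the getrandom(2) system call blocks only until the pool is initially seeded and never again afterwards. A hedged sketch — os.getrandom is Linux-only (Python 3.6+), so this falls back to os.urandom elsewhere:

```python
import os

# getrandom(2) blocks only until the kernel pool is initially seeded,
# then keeps serving crypto-quality data without ever blocking again.
if hasattr(os, "getrandom"):   # Linux 3.17+, Python 3.6+
    data = os.getrandom(16)
else:
    data = os.urandom(16)      # portable fallback
```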

In practice, use /dev/urandom, and make sure when you provision your system that the entropy pool is fed (from disk, network and mouse activity, from a hardware source, from an external machine, …).
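In application code this usually means never opening the device files directly; standard libraries wrap /dev/urandom (or getrandom) for you. For example, in Python:

```python
import secrets

# The secrets module is backed by os.urandom, i.e. the kernel CSPRNG.
session_key = secrets.token_bytes(32)    # 256-bit random key
csrf_token = secrets.token_urlsafe(24)   # URL-safe text token
```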

While you could measure how many bytes are read from /dev/urandom, doing so is completely pointless: reading from /dev/urandom does not deplete the entropy pool. Each consumer uses up 0 bits of entropy per any unit of time you care to name.

Source Link
Gilles 'SO- stop being evil'
