Entropy is fed into /dev/random at a rather slow rate, so if you use any program that uses /dev/random, it's pretty common for the entropy to be low.

Even if you believe in Linux's definition of entropy, low entropy isn't a security problem. /dev/random blocks until it's satisfied that it has enough entropy. With low entropy, you'll get applications sitting around waiting for you to wiggle the mouse, but not a loss of randomness.

In fact, Linux's definition of entropy is flawed: it is an extremely conservative definition that strives for a theoretical level of randomness which is useless in practice. Entropy does not wear out: once you have enough, you have enough. Unfortunately, Linux only has two interfaces to get random numbers: /dev/random, which blocks when it shouldn't, and /dev/urandom, which never blocks. Fortunately, in practice, /dev/urandom is almost always correct, because a system quickly gathers enough entropy, after which point /dev/urandom is fine forever (including for uses such as generating cryptographic keys).
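
As a minimal illustration (a sketch, not an endorsement of any particular key format): Python's os.urandom draws from the same kernel pool as /dev/urandom, so it returns immediately regardless of the reported entropy estimate.

    import os

    # Read 32 bytes (256 bits) from the kernel's non-blocking pool,
    # the same source as /dev/urandom. This never blocks, even when
    # /proc/sys/kernel/random/entropy_avail reports a low value.
    key = os.urandom(32)
    print(key.hex())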

The only time when /dev/urandom is problematic is when a system doesn't have enough entropy yet, for example on the first boot of a fresh installation, after booting a live CD, or after cloning a virtual machine. In such situations, wait until /proc/sys/kernel/random/entropy_avail reaches 200 or so. After that, you can use /dev/urandom as much as you like.
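
For that first-boot or freshly-cloned case, here is a rough Python sketch of the wait-then-read approach described above. The threshold of 200 mirrors the figure given here, and the one-second polling interval is an arbitrary choice.

    import time

    ENTROPY_AVAIL = "/proc/sys/kernel/random/entropy_avail"
    THRESHOLD = 200      # rough figure suggested above
    POLL_SECONDS = 1     # arbitrary polling interval

    # On a freshly installed or freshly cloned system, wait until the
    # kernel's entropy estimate reaches the threshold before trusting
    # /dev/urandom for key generation.
    def wait_for_entropy():
        while True:
            with open(ENTROPY_AVAIL) as f:
                if int(f.read().strip()) >= THRESHOLD:
                    return
            time.sleep(POLL_SECONDS)

    wait_for_entropy()
    with open("/dev/urandom", "rb") as rnd:
        seed = rnd.read(32)   # safe to use once the pool is seeded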