Fast, cryptographically strong PRNG using /dev/urandom

Edit to make the question more specific:
Can a program just read from /dev/urandom to provide high-quality, cryptographically strong random numbers at a rate of at least 1 MB/s on an Ubuntu LTS VM in the cloud, or are there any pitfalls that I need to take into consideration?

Original:

I need to write a component that will provide random numbers, satisfying a number of conditions:

  1. It has to generate numbers that are statistically independent and linearly distributed, and that pass industry-standard statistical randomness tests.
  2. The algorithm has to be considered "cryptographically strong".
  3. It has to be sufficiently fast.
  4. The target system will be Ubuntu LTS, virtualized in the cloud.

I am thinking of using numbers read from /dev/urandom instead of using a true (hardware) RNG (such as this) or writing CSPRNG code. The information that I have gathered (see below) seems to indicate that this is a valid approach, but I want to be sure that I am not missing anything.

  • According to Documentation and Analysis of the Linux Random Number Generator section 8.2:
    The testing has shown that the output function generating random numbers for /dev/random and /dev/urandom produce data exhibiting the characteristics of an ideal random number generator. Thus, no implementation errors that would diminish the entropy in the random numbers were identified.
    This appears to satisfy the 1st requirement.

  • According to the sources cited by Wikipedia, the Linux kernel uses the ChaCha20 algorithm to generate data for /dev/urandom starting from version 4.8. That algorithm is generally considered "cryptographically secure".
    This appears to satisfy the 2nd requirement.

  • I do not have an Ubuntu system at my disposal, but I ran { timeout --foreground 1s cat /dev/urandom; } | wc -c on my M1 Mac and it reported ~200 MB/s, which is more than fast enough for my requirements. Running the same command in an online Linux environment was about 2.5 times faster.
    A small C++ program using low-level read() calls to repeatedly fill a 40 KB buffer from /dev/urandom achieved speeds of ~1 GB/s. I could not test getrandom() on my Mac, but a quick check on godbolt showed that the performance of both approaches is roughly the same for the same buffer size (a simplified sketch of the read() loop is included after this list).
    This appears to satisfy the 3rd requirement.

  • This article suggests that the speed improvements came in kernel version 5.17 (with further improvements in 5.18), which means that they are already in Ubuntu 22.04.2 LTS (which uses version 5.19 of the kernel).
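
For completeness, here is a minimal sketch of the kind of benchmark loop I used (simplified, not my exact code): it fills a 40 KB buffer repeatedly, first with read() on /dev/urandom and then with getrandom(). The buffer size and iteration count are arbitrary choices, and it assumes a Linux target with a glibc recent enough to provide <sys/random.h>.

```cpp
#include <cerrno>
#include <chrono>
#include <cstdio>
#include <cstdlib>
#include <vector>

#include <fcntl.h>
#include <sys/random.h>   // getrandom(); needs a sufficiently recent glibc
#include <unistd.h>

// Fill buf completely from fd, retrying on short reads and EINTR.
static bool fill_from_fd(int fd, unsigned char* buf, size_t len) {
    size_t off = 0;
    while (off < len) {
        ssize_t n = read(fd, buf + off, len - off);
        if (n < 0) {
            if (errno == EINTR) continue;
            return false;
        }
        off += static_cast<size_t>(n);
    }
    return true;
}

int main() {
    constexpr size_t kBufSize = 40 * 1024;  // 40 KB buffer, as in the test above
    constexpr size_t kIterations = 25000;   // ~1 GB in total; arbitrary choice
    std::vector<unsigned char> buf(kBufSize);

    int fd = open("/dev/urandom", O_RDONLY);
    if (fd < 0) { perror("open /dev/urandom"); return EXIT_FAILURE; }

    auto t0 = std::chrono::steady_clock::now();
    for (size_t i = 0; i < kIterations; ++i) {
        if (!fill_from_fd(fd, buf.data(), buf.size())) {
            perror("read"); return EXIT_FAILURE;
        }
    }
    auto t1 = std::chrono::steady_clock::now();
    close(fd);

    const double total_mb = kBufSize * kIterations / (1024.0 * 1024.0);
    double secs = std::chrono::duration<double>(t1 - t0).count();
    std::printf("read():      %.0f MB in %.2f s -> %.1f MB/s\n", total_mb, secs, total_mb / secs);

    // Same fill via getrandom(); a single call may return fewer bytes than
    // requested, so loop until the buffer is full.
    t0 = std::chrono::steady_clock::now();
    for (size_t i = 0; i < kIterations; ++i) {
        size_t off = 0;
        while (off < buf.size()) {
            ssize_t n = getrandom(buf.data() + off, buf.size() - off, 0);
            if (n < 0) {
                if (errno == EINTR) continue;
                perror("getrandom"); return EXIT_FAILURE;
            }
            off += static_cast<size_t>(n);
        }
    }
    t1 = std::chrono::steady_clock::now();

    secs = std::chrono::duration<double>(t1 - t0).count();
    std::printf("getrandom(): %.0f MB in %.2f s -> %.1f MB/s\n", total_mb, secs, total_mb / secs);
    return EXIT_SUCCESS;
}
```

Compiling with optimizations (e.g. g++ -O2) and running it on the target VM should show directly whether the 1 MB/s requirement is met with margin.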

Are there any considerations that I have missed?
