
Edit to make the question more specific:
Can a program just read from /dev/urandom to provide high-quality, cryptographically strong random numbers at a rate of at least 1 MB/s on an Ubuntu LTS VM in the cloud, or are there any pitfalls that I need to take into consideration?

Original:

I need to write a component that will provide random numbers, satisfying a number of conditions:

  1. It has to generate numbers that are statistically independent, uniformly distributed, and pass industry-standard statistical randomness tests.
  2. The algorithm has to be considered "cryptographically strong".
  3. It has to be sufficiently fast.
  4. The target system will be Ubuntu LTS, virtualized in the cloud.

I am thinking of using numbers read from /dev/urandom instead of using a true (hardware) RNG (such as this) or writing CSPRNG code. The information that I have gathered (see below) seems to indicate that this is a valid approach, but I want to be sure that I am not missing anything.

  • According to Documentation and Analysis of the Linux Random Number Generator, section 8.2:
    The testing has shown that the output function generating random numbers for /dev/random and /dev/urandom produces data exhibiting the characteristics of an ideal random number generator. Thus, no implementation errors that would diminish the entropy in the random numbers were identified.
    This appears to satisfy the 1st requirement.

  • According to the sources cited by Wikipedia, the Linux kernel uses the ChaCha20 algorithm to generate data for /dev/urandom starting from version 4.8. That algorithm is generally considered "cryptographically secure".
    This appears to satisfy the 2nd requirement.

  • I do not have an Ubuntu system at my disposal, but I ran { timeout --foreground 1s cat /dev/urandom; } | wc -c on my M1 Mac and it reported ~200 MB/s, which is more than fast enough for my requirements. Running the same command online was about 2.5 times faster.
    A small C++ program using low-level read() calls to repeatedly fill a 40 KB buffer from /dev/urandom achieved ~1 GB/s. I could not test getrandom() on my Mac, but a quick check on Godbolt showed that the performance of both approaches is roughly the same for the same buffer size. (A sketch of this kind of benchmark is included after this list.)
    This appears to satisfy the 3rd requirement.

  • This article suggests that the speed improvements came in kernel version 5.17 (with further improvements in 5.18), which means that they are already in Ubuntu 22.04.2 LTS (which uses version 5.19 of the kernel).
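
    For reference, a minimal sketch of the kind of throughput benchmark described above, reading from /dev/urandom via read() and, on Linux, via getrandom(). The buffer size, iteration count, and constant names are illustrative assumptions, not the original test program; getrandom() requires Linux with glibc 2.25 or newer.

    // Sketch only: measure /dev/urandom throughput with read(), then getrandom().
    #include <chrono>
    #include <cstdio>
    #include <vector>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/random.h>   // getrandom(), Linux / glibc >= 2.25

    int main() {
        constexpr size_t kBufSize = 40 * 1024;   // 40 KB buffer, as in the test above
        constexpr int kIterations = 25'000;      // ~1 GB total (illustrative choice)
        std::vector<unsigned char> buf(kBufSize);

        int fd = open("/dev/urandom", O_RDONLY);
        if (fd < 0) { std::perror("open"); return 1; }

        auto start = std::chrono::steady_clock::now();
        for (int i = 0; i < kIterations; ++i) {
            size_t got = 0;
            while (got < kBufSize) {             // read() may return short counts
                ssize_t n = read(fd, buf.data() + got, kBufSize - got);
                if (n <= 0) { std::perror("read"); return 1; }
                got += static_cast<size_t>(n);
            }
        }
        double elapsed = std::chrono::duration<double>(
            std::chrono::steady_clock::now() - start).count();
        close(fd);

        double mb = kBufSize * static_cast<double>(kIterations) / (1024.0 * 1024.0);
        std::printf("read(/dev/urandom): %.1f MB in %.2f s -> %.1f MB/s\n",
                    mb, elapsed, mb / elapsed);

        // Same measurement via getrandom(), which draws from the same
        // kernel CSPRNG without needing a file descriptor.
        start = std::chrono::steady_clock::now();
        for (int i = 0; i < kIterations; ++i) {
            size_t got = 0;
            while (got < kBufSize) {             // getrandom() may also return short counts
                ssize_t n = getrandom(buf.data() + got, kBufSize - got, 0);
                if (n < 0) { std::perror("getrandom"); return 1; }
                got += static_cast<size_t>(n);
            }
        }
        elapsed = std::chrono::duration<double>(
            std::chrono::steady_clock::now() - start).count();
        std::printf("getrandom():        %.1f MB in %.2f s -> %.1f MB/s\n",
                    mb, elapsed, mb / elapsed);
        return 0;
    }

    Both loops retry on short counts, which matters because getrandom() only guarantees an uninterrupted fill for requests of 256 bytes or less.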

Are there any considerations that I have missed?

  • You might have forgotten to state what "sufficiently fast" is for your application. Commented Apr 10, 2023 at 6:01
  • Please edit the question to limit it to a specific problem with enough detail to identify an adequate answer. Commented Apr 10, 2023 at 7:46
  • @MarcusMüller, I have indirectly: "... it reported ~200MB/s, which is fast enough for my requirements, with room to spare" Commented Apr 11, 2023 at 17:56
  • hm, then I don't see too many considerations you've missed, aside from your question title being wrong: /dev/urandom is an actual RNG, not a PRNG. So if you need a PRNG, it's the wrong thing. Commented Apr 11, 2023 at 18:01
  • My understanding is that it is a CSPRNG (based on ChaCha20) that is seeded with actual random entropy on startup, but after the seeding (and before re-seeds, if any) it generates numbers algorithmically. If I am wrong, please point me to the relevant info. Thank you! Commented Apr 11, 2023 at 19:58
