
Having no entropy left, and knowing that, would make /dev/random's output deterministic. That would enable other processes (including those started by malicious users) to predict the future output from (i.e. the bytes read from) /dev/random.

On one hand, a good Linux programmer in need of "true" randomness will have carefully read random(4) and will read(2) from /dev/random, which blocks when no entropy is available. That blocking can stall the reading process. See also (as the OP commented) the Myths about /dev/urandom page.

On the other hand, some programmers prefer to read from /dev/urandom, which is almost always good enough and never blocks. AFAIK, GCC's C++11 standard library implements std::random_device that way.

Obviously, randomness matters slightly more on servers than on ordinary desktops.

If your question amounts to "are the Linux programmers who use /dev/random aware of these issues?", it becomes a matter of opinion, or a poll, and so could be off-topic here.

My opinion is that programmers should be expected to have read random(4).

Recent Intel processors have the RDRAND machine instruction, and /dev/random uses it (among other sources) as an entropy source. So I would guess that a lack of entropy in /dev/random doesn't affect (i.e. never happens on) such machines.

So in short, I don't care about low entropy in /dev/random. But I am not writing software (e.g. poker sites, some banking systems) where randomness is critically important (to the point of costing lives, or millions of € or $). I would guess that such systems can afford a better random source (IIRC, a hardware random-number generator with a megabit-per-second bandwidth costs a few hundred euros), and they should have (or buy consultancy for) expertise in randomness and probabilities.

Perhaps you should simply state, in the README and the documentation of your software, whether you use /dev/random and/or /dev/urandom and for what purpose, and explicitly document how much your own code depends on the quality of that randomness.
