A Swarm of Stars / Wikimedia Commons

Entropy Gathering [intro]

When /dev/random blocks

Adrian Seredinschi
CS: traps’n miracles
2 min read · Jul 16, 2013


Entropy never decreases, or so a certain law states. The same holds on any Linux system.

But situations arise in which applications need a good source of entropy and consume it much faster than the system gathers it, e.g. as in this bug (which is not actually a bug). In other words: the entropy growth rate is insufficient; the “bandwidth” of randomness is too small.

The entropy pool behind /dev/random stores noise gathered from various sources: mouse movements, keyboard input, disk I/O, etc. The parameters for this device file and its behavior can be inspected through the files in:

/proc/sys/kernel/random/

Especially relevant at first glance are:

  • /proc/sys/kernel/random/entropy_avail — the current estimate of the gathered noise, in bits,
  • /proc/sys/kernel/random/poolsize — the size of the pool, usually 4096 bits.
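
To make this concrete, here is a minimal C sketch (paths as above, error handling kept short) that prints the current estimate against the pool size:

    #include <stdio.h>

    /* Read a single integer from a /proc file; -1 on failure. */
    static int read_proc_int(const char *path)
    {
        FILE *f = fopen(path, "r");
        int value = -1;
        if (f) {
            if (fscanf(f, "%d", &value) != 1)
                value = -1;
            fclose(f);
        }
        return value;
    }

    int main(void)
    {
        int avail = read_proc_int("/proc/sys/kernel/random/entropy_avail");
        int size  = read_proc_int("/proc/sys/kernel/random/poolsize");
        printf("entropy available: %d of %d bits\n", avail, size);
        return 0;
    }

Watching entropy_avail drop while an application is stuck is usually the quickest way to confirm that entropy starvation is the problem.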

If the available entropy is smaller than the amount an application requests, that application will block until the requested amount has been read. This can become a serious problem, for example, when many private keys need to be generated, leaving the application hanging as if in an infinite loop. In some cases the user is explicitly told to “move the mouse”, as with gpg --gen-key. When that’s not the case, it’s up to the user to figure out what’s wrong and act accordingly, or just be patient and trust that the pool will eventually catch up and the application will move on.
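
The blocking behavior itself is easy to reproduce. The minimal C sketch below reads 32 bytes from /dev/random; on an entropy-starved machine, the read() call simply sits there until the pool catches up:

    #include <stdio.h>
    #include <unistd.h>
    #include <fcntl.h>

    int main(void)
    {
        unsigned char buf[32];
        int fd = open("/dev/random", O_RDONLY);
        if (fd < 0) {
            perror("open /dev/random");
            return 1;
        }
        /* Blocks whenever the kernel's entropy estimate runs dry;
         * the request may be satisfied in several partial reads. */
        ssize_t total = 0;
        while (total < (ssize_t)sizeof buf) {
            ssize_t n = read(fd, buf + total, sizeof buf - total);
            if (n <= 0) {
                perror("read");
                return 1;
            }
            total += n;
        }
        close(fd);
        printf("got %zd random bytes\n", total);
        return 0;
    }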

There are a few not-so-obvious ways of generating entropy:

  1. When a hardware random number generator (RNG) device is available, rng-tools can feed its output into the kernel pool;
  2. haveged — a “good enough” pseudo-random generator that keeps /dev/random filled just as a hardware RNG would (both daemons hand entropy to the kernel the same way; see the sketch after this list);
  3. Taking the problem into one’s own hands and generating noise manually: simply traversing the file system and reading files should do the job, e.g. watch --interval 0.1 'find / > /dev/null'.
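
For the curious: under the hood, daemons such as rngd and haveged hand their gathered noise to the kernel through the RNDADDENTROPY ioctl on /dev/random, which both mixes the bytes into the pool and credits the entropy estimate (a plain write() to the device mixes the data in but credits nothing). A minimal sketch of that mechanism, assuming root privileges and pretending the zeroed buffer were real noise:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/random.h>   /* RNDADDENTROPY, struct rand_pool_info */

    int main(void)
    {
        /* Placeholder: a real daemon fills this from a hardware RNG
         * or another unpredictable source, never with zeros. */
        unsigned char noise[64] = {0};

        struct rand_pool_info *info = malloc(sizeof *info + sizeof noise);
        if (!info)
            return 1;
        info->entropy_count = 8 * sizeof noise;  /* bits being credited */
        info->buf_size = sizeof noise;           /* bytes in the buffer */
        memcpy(info->buf, noise, sizeof noise);

        int fd = open("/dev/random", O_RDWR);
        if (fd < 0 || ioctl(fd, RNDADDENTROPY, info) < 0) {
            perror("RNDADDENTROPY (requires root)");
            return 1;
        }
        close(fd);
        free(info);
        return 0;
    }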

Switching from /dev/random to /dev/urandom to sidestep this blocking behavior should not be considered an option for cryptographic applications (e.g. long-term private keys) or, more generally, in cases where the quality of the randomness is a concern.

My personal opinion, in scenarios where security matters, is to use /dev/random and to inform the user of the application/library/framework both of the consequences of relying on it and of the possible ways of avoiding a temporary lock-up of the application.

I encountered this problem when using a library that calls gcry_pk_genkey() from libgcrypt11.
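
For reference, here is a minimal sketch of that kind of call (the RSA parameters are illustrative, not the ones the library used). gcry_pk_genkey() pulls very strong random numbers, which by default come from /dev/random, so with a depleted pool the marked call below is exactly where an application hangs:

    #include <stdio.h>
    #include <gcrypt.h>

    int main(void)
    {
        gcry_sexp_t params = NULL, keypair = NULL;
        gcry_error_t err;

        /* libgcrypt must be initialized before first use. */
        if (!gcry_check_version(GCRYPT_VERSION)) {
            fprintf(stderr, "libgcrypt version mismatch\n");
            return 1;
        }
        gcry_control(GCRYCTL_DISABLE_SECMEM, 0);  /* fine for a demo */
        gcry_control(GCRYCTL_INITIALIZATION_FINISHED, 0);

        err = gcry_sexp_build(&params, NULL,
                              "(genkey (rsa (nbits 4:2048)))");
        if (!err)
            /* This is where the blocking happens when the
             * /dev/random pool is starved. */
            err = gcry_pk_genkey(&keypair, params);

        if (err)
            fprintf(stderr, "keygen failed: %s\n", gcry_strerror(err));
        else
            puts("key pair generated");

        gcry_sexp_release(params);
        gcry_sexp_release(keypair);
        return 0;
    }

Compile with something like: gcc keygen-demo.c -lgcrypt.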
