Discussion of the Fortuna Randomness Generator
John Denker

## 1  Discussion

The “Fortuna” class of randomness generators was proposed in reference 1. It can be understood as follows: There are 32 accumulator pools, labeled N = 1 through N = 32 inclusive. Each one gets 1/32nd of the incoming entropy. The Nth pool unloads a batch of entropy at intervals of 2^(N−1) × 0.1 second.
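As a sketch, the schedule just described can be written out in a few lines. The pool count, the 0.1-second base interval, and the even 1/32 split come from the description above; the entropy rate is a made-up placeholder.

```python
BASE_INTERVAL = 0.1   # seconds between fast-pool batches (from the text)
NUM_POOLS = 32
RATE = 1.0            # hypothetical raw entropy rate, bits per second

def pool_interval(n):
    """Seconds between batch contributions from pool n (n = 1 .. 32)."""
    return 2 ** (n - 1) * BASE_INTERVAL

def batch_entropy(n, rate=RATE):
    """Bits per batch from pool n: its 1/32 share of the stream,
    accumulated over one of its intervals."""
    return (rate / NUM_POOLS) * pool_interval(n)

for n in (1, 16, 32):
    print(n, pool_interval(n), batch_entropy(n))
```

Note that the batch size doubles with each successive pool, which is what sets up the double exponential discussed next.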

Now, the work that the attacker must do is exponential in the number of bits, and the number of bits per batch is exponential in the pool-number (N), so we are talking about a double exponential, i.e. y=exp(exp(x)). This is an exceedingly nonlinear function. To cut to the chase: The only thing that matters is the one pool that has the baby-bear just-right amount of entropy per batch. That’s because:

• All faster pools are irrelevant. The cumulative effect of exponentially-many small batches is utterly swamped by the double exponential. I can show you the numbers if this is not already obvious to you.
• All slower pools are irrelevant. They happen too late, and they accomplish nothing. The adversary’s task goes from impossible to impossible-squared, which is meaningless.
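The numbers alluded to in the first bullet can be sketched as follows. The 128-bit batch size for the just-right pool is a hypothetical placeholder; the point is only the shape of the double exponential.

```python
# Hypothetical: the just-right pool delivers 128-bit batches, so the pool
# k steps faster delivers batches of 128 / 2**k bits. Attacker work to
# brute-force one batch scales like 2**bits.
just_right_bits = 128
just_right_work = 2 ** just_right_bits

# Total work forced by the faster pools' batches, generously counting
# 1000 batches from each of the next 7 faster pools:
faster_work = sum(1000 * 2 ** (just_right_bits // 2 ** k)
                  for k in range(1, 8))

# Even so, the faster pools are utterly swamped:
print(faster_work / just_right_work)
```

The ratio comes out vanishingly small (on the order of 2^−50 or less), which is the sense in which the faster pools are irrelevant.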

So, whichever pool is the just-right one, you could do a 32-times better job by using that pool alone.

To say the same thing another way: If somebody thinks 32 pools are necessary, they are evidently considering the possibility that the raw incoming entropy stream is so dilute that it will take 13 years for the PRNG to recover from compromise. Most customers would be underwhelmed by such a slow recovery, and would prefer much tighter specifications. Possibilities include:

1. With the existing weak entropy source, use only one pool, so that the PRNG recovers in 5 months instead of 13 years, or, better ...
2. Spend the extra money (usually zero dollars, at most five dollars per machine) to obtain a provably better source of entropy, so that the PRNG recovers in milliseconds rather than megaseconds.
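The 5-months figure in option 1 follows from simple scaling: one pool receiving all 32 shares of the same stream fills 32 times sooner than the slowest of 32 pools.

```python
# 13 years is the recovery figure from the text; a single pool
# collecting all the entropy recovers 32 times faster.
years_with_32_pools = 13
months_with_one_pool = years_with_32_pools * 12 / 32
print(months_with_one_pool)  # a bit under 5 months
```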

Minor point: Actually, in the later stages things are only about half as bad as you might think, because whenever a slow pool makes a contribution, all faster pools contribute at the same time. For the Nth pool, this contributes a factor of 2 − 2^(1−N). You can fix this by dividing the incoming entropy 33 ways (not 32), and giving the fastest pool an extra share.
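The factor of 2 − 2^(1−N) can be checked numerically. Batch sizes proportional to 2^(k−1) follow from the interval-doubling schedule at the top of this section.

```python
def combined_factor(n):
    """When pool n fires, pools 1 .. n all fire together. Batch sizes
    are proportional to 2**(k-1), so the combined contribution relative
    to pool n alone is (2**n - 1) / 2**(n-1)."""
    return sum(2 ** (k - 1) for k in range(1, n + 1)) / 2 ** (n - 1)

for n in (1, 2, 5, 32):
    # agrees with the closed form 2 - 2**(1-n)
    assert abs(combined_factor(n) - (2 - 2 ** (1 - n))) < 1e-12
```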

If we reduce the number of accumulator-pools from 32 to 2, the problem becomes even easier to understand. Suppose we have a source that produces entropy at a rate R, where R might be measured in bits per second. The fast pool contributes at intervals of time τ, and the slow pool contributes at intervals of 2τ. At the beginning of this section we considered the case where τ was fixed at τ = 0.1 second, but now we will adjust τ for better performance. A contribution is considered successful if it delivers an amount of entropy of at least S1.

At every odd multiple of τ, the fast pool contributes an amount (1/2)Rτ. At every even multiple, the two pools together contribute (3/2)Rτ.
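A minimal sketch of this two-pool schedule, with R and τ passed as parameters:

```python
def contribution(step, R, tau):
    """Entropy (bits) delivered at time step*tau, step = 1, 2, 3, ...
    The fast pool releases its half-share of R accumulated over tau;
    the slow pool releases its half-share accumulated over 2*tau, but
    only on even steps."""
    fast = 0.5 * R * tau
    slow = R * tau if step % 2 == 0 else 0.0
    return fast + slow

# Odd steps deliver (1/2)*R*tau; even steps deliver (3/2)*R*tau.
assert contribution(1, R=8.0, tau=0.5) == 2.0
assert contribution(2, R=8.0, tau=0.5) == 6.0
```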

We analyze the situation where we do not know the exact value of R, but we know it lies in some interval [Rmin, Rmax], where Rmax is twice as big as Rmin. We consider three design approaches:

1. The Pollyanna Pangloss approach to Fortuna, i.e. unwarranted optimism. We recklessly choose τ based on the hope that R = Rmax.

 Using two pools, the best choice is to use the even-numbered contributions. That means τ satisfies (3/2)Rmaxτ = S1 so τ = (2/3) S1/Rmax, and the recovery time is 2τ = (4/3) S1/Rmax. In the large-R case, where R is actually equal to Rmax, this gives us a secure system and a reasonably short recovery time. In the small-R case, our unwarranted optimism results in a grossly insecure system. If S1 is 200 bits, we have just lowered the adversary’s workload by a factor of 2^100, which is a lot.

2. The minimax approach to Fortuna, simplified to two pools. The idea is to get a fast recovery when R is large.

 To be safe, we require (1/2)Rmaxτ = S1 so τ = 2 S1/Rmax = S1/Rmin. In the large-R case, the system is secure and it recovers every time the fast pool contributes, so the recovery time is just τ. In the small-R case, the system is secure, but it recovers only on the even-numbered stages, so the recovery time is 2τ = 2 S1/Rmin = 4 S1/Rmax.

 In the large-R case, the minimax approach to Fortuna is slower than the Pollyanna Pangloss approach by a factor of 1.5. In the small-R case, the minimax approach is faster by a factor of infinity, because the Pollyanna Pangloss approach never recovers.

3. Ordinary competent engineering. We use only one pool, and choose τ using the minimax principle. That is, we do the best we can while still being safe in the worst-case scenario.

 This is nice and simple. To be as fast as safely possible, we choose τ = S1/Rmin = 2 S1/Rmax. The system is secure and it recovers every time the pool contributes, so the recovery time is just τ.

 In the large-R case, the one-pool approach recovers at least as fast as the Fortuna approach. Fortuna is wasting entropy by putting more entropy than necessary into the even-numbered contributions. In the small-R case, the one-pool approach recovers twice as fast as the Fortuna approach. Fortuna is wasting entropy coming and going: the small contributions are too small, and the large contributions are larger than they need to be.
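Putting the three designs side by side, as a sketch in arbitrary units: the only structural assumption is Rmax = 2·Rmin, as in the text.

```python
def pollyanna(R, Rmax, S1):
    """Design 1: tau chosen hoping R = Rmax; relies on even steps."""
    tau = 2 * S1 / (3 * Rmax)            # from (3/2)*Rmax*tau = S1
    # secure only if the even-step contribution (3/2)*R*tau reaches S1
    return 2 * tau if 1.5 * R * tau >= S1 else float('inf')

def minimax_fortuna(R, Rmax, S1):
    """Design 2: tau chosen so the fast pool suffices when R = Rmax."""
    tau = 2 * S1 / Rmax                  # from (1/2)*Rmax*tau = S1
    # otherwise wait for the even-numbered (two-pool) stage
    return tau if 0.5 * R * tau >= S1 else 2 * tau

def one_pool(R, Rmin, S1):
    """Design 3: one pool, minimax; every contribution carries >= S1."""
    tau = S1 / Rmin
    return tau

Rmin, Rmax, S1 = 1.0, 2.0, 3.0
for R in (Rmin, Rmax):
    print(R, pollyanna(R, Rmax, S1),
          minimax_fortuna(R, Rmax, S1),
          one_pool(R, Rmin, S1))
```

In the small-R row the Pollyanna design returns infinity (it never recovers), the two-pool minimax design takes twice as long as the one-pool design, and in the large-R row the one-pool design is never slower than either.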

Using more pools just makes Fortuna even less favorable compared to ordinary prudent engineering.

Speaking of “Practical Cryptography” ... proving that the PRNG recovers eventually – no-matter-what – does not sound very practical, if “eventually” might mean 13 years. The recovery-time is a critical part of the engineering, because it determines whether the attacker decides the attack is worthwhile.

 If it takes (say) half a day to break something, and it stays broken for 13 years, that’s a tempting proposition for the attacker. If it only stays broken for a millisecond, that’s not nearly so tempting.

## 2  References

1. Niels Ferguson and Bruce Schneier, Practical Cryptography (2003).