Sobol Sequence Explained by Antoine Savine

Antoine Savine
3 min read · Dec 6, 2018


It is considered best practice in financial Monte-Carlo simulations to apply quasi-random numbers generated by Sobol’s algorithm in place of (pseudo-)random numbers. Sobol numbers offer a lower discrepancy (they fill the space of possibilities more evenly), resulting in faster convergence and more stable estimates of financial product values and risk sensitivities.

Left: random, right: Sobol — Modern Computational Finance (Antoine Savine, Wiley, 2018), page 205

The sequence was invented in the USSR in 1967 by Ilya M. Sobol, and its computation was massively improved in 1979 by Antonov and Saleev, resulting in a remarkably efficient generation of the numbers with just a few low-level bit-wise operations per point.
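To make the Antonov–Saleev trick concrete, here is a minimal sketch in Python, restricted to the first Sobol dimension (the names, and the choice of 32-bit direction integers, are mine for illustration, not the book’s): each new point is obtained from the previous one with a single XOR, selected by the rightmost zero bit of the running counter.

```python
NBITS = 32

# Direction integers for the first Sobol dimension (van der Corput in base 2):
# the k-th direction number is 2^(NBITS-1-k). Higher dimensions use direction
# integers built from primitive polynomials, omitted here.
v = [1 << (NBITS - 1 - k) for k in range(NBITS)]

def rightmost_zero_bit(n):
    """0-based index of the lowest zero bit of n."""
    c = 0
    while n & 1:
        n >>= 1
        c += 1
    return c

def sobol_1d(npoints):
    """First npoints of the 1-D Sobol sequence, in Antonov-Saleev order."""
    points, x = [], 0
    for n in range(npoints):
        points.append(x / 2.0**NBITS)
        x ^= v[rightmost_zero_bit(n)]  # one XOR per point
    return points

print(sobol_1d(8))  # 0, 1/2, 3/4, 1/4, 3/8, 7/8, 5/8, 1/8
```

Note how the first 2, 4, 8, … points always fill the unit interval evenly: this is the low-discrepancy property at work.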

Sobol’s sequence was published in Press et al.’s famous Numerical Recipes in C, which contributed to its adoption in finance and other scientific fields. The sequence is also described on Wikipedia. However, the implementation in Numerical Recipes and other places is only effective in low dimension, making it unsuitable for financial simulations, where the dimension routinely runs in the hundreds (or thousands, or tens of thousands in the case of xVA). It was not until the considerable efforts of Peter Jaeckel, Stephen Joe and Frances Kuo in the early 2000s that an effective implementation in high dimension was produced and Sobol’s sequence was widely adopted in finance.

However, the construction of the sequence is somewhat mind-twisting and remains generally poorly understood. Quants tend to apply the sequence as a black box, dropping the resulting numbers in place of conventional pseudo-random numbers for the generation of random paths. While this approach produces correct results in the context of classic Monte-Carlo simulations, it is short-sighted, particularly when applied to non-trivial variations of the Monte-Carlo algorithm. For example, in the context of xVA and MVA, some implementations apply so-called nested simulations (simulations within simulations) or the more efficient but harder-to-comprehend branching simulations (see this article and these presentations -one- -two-).

Nesting and Branching Simulations

It is not trivial to apply Sobol correctly in these situations and to guarantee that the convergence and stability of the sequence are preserved. In these situations, and in all contexts other than a standard Monte-Carlo (generate one path, evaluate something along the path, repeat), it is necessary to look into the box and understand how Sobol’s sequence works and why it exhibits its desirable properties of low discrepancy, fast convergence and stability.

In addition, to apply Sobol’s sequence in the context of parallel simulations, a “skip ahead” must be implemented, whereby Sobol points n to n+p may be generated on a parallel thread without actually generating the first n-1 points. It so happens that a logarithmic skip ahead can be implemented for Sobol’s sequence, but, evidently, such an implementation requires an intimate understanding of the inner mechanics of Sobol’s algorithm.
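A sketch of why a logarithmic skip ahead is possible, again for the illustrative 1-D case (my names, not the book’s): the n-th point in Antonov–Saleev order is the XOR of the direction integers selected by the set bits of the Gray code of n, so it can be computed directly from n in O(log n) operations.

```python
NBITS = 32
# Direction integers for the first Sobol dimension (2^(NBITS-1-k)).
v = [1 << (NBITS - 1 - k) for k in range(NBITS)]

def sobol_point(n):
    """n-th 1-D Sobol point computed directly, without the n-1 preceding
    points: XOR the direction integers selected by the set bits of the
    Gray code of n."""
    g = n ^ (n >> 1)  # Gray code of n
    x, k = 0, 0
    while g:
        if g & 1:
            x ^= v[k]
        g >>= 1
        k += 1
    return x / 2.0**NBITS

# Skip ahead: a parallel thread can start at point 1000 without
# generating the previous 999 points.
print(sobol_point(1000))
```

A thread assigned points n to n+p would call such a direct formula once to seed its state, then resume the cheap one-XOR-per-point recurrence.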

Finally, a peek inside Sobol’s algorithm reveals treasures of ingenuity and a remarkable application of bit-wise calculus to achieve desirable properties. For all these reasons, I have published, in my book Modern Computational Finance (Wiley, 2018), a short, self-contained section 5.4 where I try to explain Sobol’s sequence and describe its mechanics and intuitions in an understandable manner.

I made this section freely available as a pdf on Wiley’s page:

Click the button “Read an Excerpt” below the cover, select “Chapter 5, section 5.4 (pdf)” and enjoy.

Don’t hesitate to leave your questions, suggestions and comments; I will answer them.

Antoine Savine

Quantitative Finance Practitioner (Danske Bank), Academic (Copenhagen University) and Author (Modern Computational Finance, Wiley)