Bridging the Gap: An Introduction to Brownian Bridge Simulations

Kristof Tabori
6 min read · Feb 17, 2024


After my first article on Brownian Bridges, quite a few people asked me: how do you simulate the paths of a Brownian Motion conditioned to take the value “b” at time t₂? The question can be motivated by several circumstances, like

  • wanting to create figures similar to those in my previous articles
  • needing some sort of Monte Carlo simulation where, knowing the end value of a Brownian Motion, one simulates paths of the process and computes some function of them, such as the expected value of a payout which occurs only if the process (say, the logarithm of an asset price) goes above a predefined level

Well, there are a couple of ways to achieve that, but not all are equally useful. Here are the ones I can think of:

  1. The Brute Force Way: I mention it for entertainment purposes only. One simulates the good old Brownian Motion paths and collects the ones which happen to land in a small neighbourhood of “b” at time t₂. As one can imagine, this is a very wasteful way to do things, totally unfit for Monte Carlo purposes; a minimal sketch follows this list.
  2. The Fractal Way: Motivated by my previous article, we start with a path at a low resolution and progressively increase the granularity until we reach the desired level. This can be made surprisingly efficient computationally. The upside is that if we have a path but aren’t happy with its resolution, we can build on the one already there; there is no need to simulate from scratch.
  3. The Anticipatory Gaussian Process Way: Also motivated by my previous article on the topic, we can define a process based on the original, unrestricted Brownian Motion. As soon as we know its value at time t₂, we can build the Bridge with it. The simulation is straightforward and can be made computationally efficient. The problem is that an existing path can’t be reused to produce one with a higher resolution.
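For illustration, here is what the brute-force rejection sampler could look like; a minimal sketch, where the function name, the tolerance `eps`, and the retry cap are my own illustrative choices. The shrinking acceptance region is exactly why the method is wasteful:

import numpy as np

def brute_force_bridge(a, b, t1, t2, sigma=1.0, steps=1024, eps=0.05, max_tries=100_000):
    """Rejection sampling: simulate plain Brownian paths starting from `a`
    and keep the first one that ends within `eps` of `b` at time t2."""
    dt = (t2 - t1) / steps
    for _ in range(max_tries):
        increments = sigma * np.sqrt(dt) * np.random.normal(size=steps)
        path = a + np.concatenate(([0.0], np.cumsum(increments)))
        if abs(path[-1] - b) < eps:  # accept only if the path lands near b
            return path
    raise RuntimeError("no path accepted; loosen eps or raise max_tries")

Note how the acceptance probability falls as `eps` shrinks: tightening the end condition makes ever more simulated paths go to waste.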

The Fractal Way

As I found out in my previous article:

“if we draw a value from the bridge at time t from its distribution, then we construct a new bridge (“a” at time t₁ and the drawn value at time t) and draw a value from the new bridge’s distribution at time s, we get the same distribution as if we had drawn the value from the original bridge at time s”

This calls for an iterative method, knowing that the density of a general Brownian Bridge with value “a” at t₁ and “b” at t₂ is:

f^{a, b}_{t_1, t_2} \left(x, t \right) = \dfrac{1}{\sqrt{2\pi}\sqrt{\dfrac{(t_2 - t)(t - t_1)}{t_2 - t_1}\sigma^2}} \cdot e^{-\dfrac{1}{2}\dfrac{\left( x - \left(a + \dfrac{t - t_1}{t_2 - t_1}\left( b - a\right) \right) \right)^2}{\dfrac{(t_2 - t)(t - t_1)}{t_2 - t_1}\sigma^2}}

  1. Start with the least granular version of the Brownian Bridge path: it is “a” at t₁ and “b” at t₂, linearly interpolated in between. That’s not much, but it is a good basis for honest work.
  2. Add a point at the halfway time of the interval and interpolate linearly between the three known points: this is again very easy, because we just need to draw from the midpoint distribution derived in my first article (reconstructed after this list).
  3. Draw values at the half-time of each already known interval: those are again Brownian Bridges with endpoints at the consecutive visited points. This is the important consequence of the quote above.
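For step 2, substituting the halfway time t = (t₁ + t₂)/2 into the density above collapses it to a simple Gaussian with the linear interpolation of the endpoints as its mean (my reconstruction of the midpoint formula referenced in step 2):

x \sim \mathcal{N}\left( \dfrac{a+b}{2},\ \dfrac{t_2 - t_1}{4}\sigma^2 \right)

The same shape applies in step 3 to each sub-interval, with its own endpoints and length plugged in; this is exactly what `mu` and `time_scale` compute in the code below.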

You can repeat step 3 as many times as needed. You can also see the algorithm at work by pulling the slider on the figure below:

One can implement this with a simple piece of Python code, like so:

import numpy as np
from typing import List

class BrownianBridgeFractalNaive:
    def __init__(
        self,
        start: tuple,
        end: tuple,
        standard_deviation: float = 1.0,
        depth: int = 10
    ) -> None:
        self.start = start
        self.end = end
        self.standard_deviation = standard_deviation
        self.depth = depth
        # Time grid at the final resolution: 2**depth intervals.
        self.time_steps = np.linspace(0, end[0] - start[0], num=2**depth + 1)

    def build_intermediate(self, starting_list: List[List[float]]) -> List[List[float]]:
        # One refinement step: insert a new point halfway into every interval.
        new_list_length = len(starting_list[0]) - 1
        noise = np.random.normal(size=new_list_length)
        new_time = [(starting_list[0][i] + starting_list[0][i+1])/2.0 for i in range(new_list_length)]
        # Midpoint mean: the linear interpolation of the two neighbours.
        mu = [(starting_list[1][i] + starting_list[1][i+1])/2.0 for i in range(new_list_length)]
        # Bridge standard deviation scale: sqrt((t_next - t)(t - t_prev)/(t_next - t_prev)).
        time_scale = [
            np.sqrt(
                (starting_list[0][i+1] - new_time[i]) * (new_time[i] - starting_list[0][i])
                / (starting_list[0][i+1] - starting_list[0][i])
            )
            for i in range(new_list_length)
        ]
        new_value = [mu[i] + time_scale[i]*self.standard_deviation*noise[i] for i in range(new_list_length)]

        # Interleave the old and the newly drawn points into the refined path.
        time_list = [starting_list[0][0]]
        value_list = [starting_list[1][0]]
        for i in range(new_list_length):
            time_list.append(new_time[i])
            time_list.append(starting_list[0][i+1])
            value_list.append(new_value[i])
            value_list.append(starting_list[1][i+1])

        return [time_list, value_list]

    def sample(self) -> List[float]:
        # Start from the two endpoints and refine `depth` times.
        working_list = [[self.start[0], self.end[0]], [self.start[1], self.end[1]]]
        for _ in range(self.depth):
            working_list = self.build_intermediate(working_list)

        return working_list[1]
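To produce figures similar to those in my previous articles, one can simply plot a handful of sampled paths. A minimal sketch, assuming matplotlib is available:

import matplotlib.pyplot as plt

bridge = BrownianBridgeFractalNaive(start=(0, 2), end=(6, 8), standard_deviation=3, depth=8)
for _ in range(5):
    # time_steps starts at 0, so shift it by the starting time for the x-axis
    plt.plot(bridge.time_steps + bridge.start[0], bridge.sample(), lw=0.8)
plt.xlabel("t")
plt.ylabel("value")
plt.show()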

However, utilising NumPy’s vectorization, for example, can bring significant performance improvements:

class BrownianBridgeFractal:

    def __init__(
        self,
        start: tuple,
        end: tuple,
        standard_deviation: float = 1.0,
        depth: int = 10
    ) -> None:
        self.start = start
        self.end = end
        self.standard_deviation = standard_deviation
        self.depth = depth
        self.time_steps = np.linspace(0, end[0] - start[0], num=2**depth + 1)

    def build_intermediate(self, starting_array: np.ndarray) -> np.ndarray:
        # Midpoint times and means for every interval, computed in one go.
        new_bits = np.empty((2, starting_array.shape[1] - 1))
        noise = np.random.normal(size=starting_array.shape[1] - 1)
        new_bits[0] = (starting_array[0][:-1] + starting_array[0][1:])/2.0
        mu = (starting_array[1][:-1] + starting_array[1][1:])/2.0
        # Bridge standard deviation scale for each interval.
        this_sigma = np.sqrt(
            (starting_array[0][1:] - new_bits[0]) *
            (new_bits[0] - starting_array[0][:-1]) /
            (starting_array[0][1:] - starting_array[0][:-1])
        )

        # Interleave: old points at even indices, new points at odd indices.
        tobereturned = np.empty((2, 2*starting_array.shape[1] - 1))
        tobereturned[0][0::2] = starting_array[0]
        tobereturned[0][1::2] = new_bits[0]
        tobereturned[1][0::2] = starting_array[1]
        tobereturned[1][1::2] = mu + this_sigma*self.standard_deviation*noise

        return tobereturned

    def sample(self) -> np.ndarray:
        working_array = np.empty((2, 2))
        working_array[0] = np.array([self.start[0], self.end[0]])
        working_array[1] = np.array([self.start[1], self.end[1]])
        for _ in range(self.depth):
            working_array = self.build_intermediate(working_array)

        return working_array[1]
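The refinement property advertised above is not exposed as a method, but `build_intermediate` already does the work: it maps a path at one resolution to the next one while keeping every existing point. A sketch of how one could reuse an already simulated path (the variable names are mine):

bridge = BrownianBridgeFractal(start=(0, 2), end=(6, 8), standard_deviation=3, depth=10)
coarse_values = bridge.sample()  # 2**10 + 1 points

# Reassemble the (time, value) array the class works with internally...
working = np.vstack([bridge.time_steps + bridge.start[0], coarse_values])
# ...and halve the grid spacing twice; the original points survive unchanged.
for _ in range(2):
    working = bridge.build_intermediate(working)
fine_values = working[1]  # now 2**12 + 1 points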

I’ve quickly compared the two solutions’ runtimes in a Jupyter notebook with the following settings:

start = (0, 2)
end = (6, 8)
depth = 20
num = 2**depth

It seems that the vectorized version speeds the process up roughly 25-fold:

naive_fractal = BrownianBridgeFractalNaive(start, end, standard_deviation=3, depth=depth)

%%timeit
naive_fractal.sample()

5.23 s ± 1.56 s per loop (mean ± std. dev. of 7 runs, 1 loop each)

fractal = BrownianBridgeFractal(start, end, standard_deviation=3, depth=depth)

%%timeit -n 100 -r 7
fractal.sample()

220 ms ± 116 ms per loop (mean ± std. dev. of 7 runs, 100 loops each)

The Anticipatory Gaussian Process Way

As shown in my previous article, we can define an anticipatory Gaussian process which has the same distribution as the original bridge:

\widetilde{B}_t = \overbrace{a + \dfrac{t-t_1}{t_2-t_1}\left( b - a\right)}^{D_t} + H_{t-t_1}^{(1)} - \dfrac{t-t_1}{t_2-t_1} \cdot H_{t_2-t_1}^{(1)} = D_t + H_{T\cdot\tau_t}^{(1)} - \tau_t\cdot H_{T}^{(1)}

where H is a scaled Wiener process, and T and τ_t are:

H_t^{(1)} = \sigma \cdot W_t
T = t_2-t_1
\tau_t = \dfrac{t-t_1}{t_2-t_1}
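
As a quick check that this process is indeed pinned to the prescribed values: τ_{t₁} = 0, τ_{t₂} = 1 and H₀ = 0, so the random terms cancel at both ends regardless of the realisation of W:

\widetilde{B}_{t_1} = a + H_{0}^{(1)} - 0 \cdot H_{T}^{(1)} = a

\widetilde{B}_{t_2} = b + H_{T}^{(1)} - 1 \cdot H_{T}^{(1)} = b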

The simulation is rather straightforward:

  1. We define a time step with the desired granularity.
  2. We simulate a Brownian Motion path on that grid.
  3. Once the final value is known, we subtract its time-scaled (τ_t-weighted) value from the path and add the deterministic line D_t.

A straightforward Python implementation utilising the vectorization built into NumPy is the following:

class BrownianMotionProcess:

    def __init__(
        self,
        start: tuple,
        end: tuple,
        standard_deviation: float = 1.0,
        times: int = 1024
    ) -> None:
        self.start = start
        self.end = end
        self.standard_deviation = standard_deviation
        self.times = times
        # tau_t: the time grid rescaled to [0, 1].
        self.time_steps = np.linspace(0, end[0] - start[0], num=times + 1)/(end[0] - start[0])
        # Standard deviation of a single Brownian increment: sqrt(T / times).
        self.scale = np.sqrt((end[0] - start[0])/float(times))
        # D_t: the deterministic line connecting the two endpoints.
        self.deterministic = start[1] + self.time_steps*(end[1] - start[1])

    def sample(self) -> np.ndarray:
        # Simulate the unrestricted Brownian Motion H by cumulating increments...
        noise = self.scale*self.standard_deviation*np.random.normal(size=self.times)
        noise = np.insert(noise, 0, 0)
        brownian_motion = np.cumsum(noise)
        # ...then pin it down: D_t + H - tau_t * H_T.
        return self.deterministic + brownian_motion - brownian_motion[-1]*self.time_steps

Again, I measured the time necessary for simulating a path with 2²⁰ steps:

my_bb_process = BrownianMotionProcess(start, end, standard_deviation=3, times=num)

%%timeit -n 100 -r 7
my_bb_process.sample()

145 ms ± 66.4 ms per loop (mean ± std. dev. of 7 runs, 100 loops each)

The efficiency of NumPy’s `cumsum` method gives a further speed-up, roughly 1.5-fold (145 ms vs. 220 ms), compared to the efficient implementation of “The Fractal Way”. However, an already simulated path can’t be reused if we want to improve on the resolution.
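Whichever method one picks, it is worth verifying the output against the closed-form density from the beginning of the article. A minimal sanity check, comparing the empirical mean and variance at the halfway time with the theoretical values (the grid size, the sample count, and the variable names are arbitrary choices of mine):

check = BrownianMotionProcess(start, end, standard_deviation=3, times=256)
# Value of each simulated path at the midpoint of the grid (tau = 0.5).
mid_values = np.array([check.sample()[128] for _ in range(10_000)])

t_mid = (start[0] + end[0])/2.0
theoretical_mean = start[1] + 0.5*(end[1] - start[1])  # (2 + 8)/2 = 5.0
theoretical_var = (end[0] - t_mid)*(t_mid - start[0])/(end[0] - start[0])*3.0**2  # 13.5

print(mid_values.mean(), theoretical_mean)  # both should be close to 5.0
print(mid_values.var(), theoretical_var)    # both should be close to 13.5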

Conclusion

I’ve shown two ways to implement path simulation for a general Brownian Bridge between two arbitrary points in Python. Even though “The Anticipatory Gaussian Process Way” is more computationally efficient, “The Fractal Way” has the added benefit of being able to refine the time resolution of an already simulated path.

If you liked the article, feel free to clap to propel it to a wider audience. If you found a part particularly insightful, don’t hesitate to highlight it. If you have any questions or observations, comment below; I’ll be happy to respond. Also, if you would like to get notified when I publish my next story, follow me.



Kristof Tabori

Originally graduated as a physicist, but delivers value as a quant and by building LLM-backed solutions.