Between Certainty and Chance: Tracing the Probability Distribution of Paths of Brownian Bridges

Kristof Tabori
6 min read · Dec 11, 2023


Imagination of particles moving over a bridge.

I’ve just rediscovered Brownian Bridges. I find it rather intriguing that even if I fix the start and end points of a Brownian Motion, it still keeps the freedom of a Gaussian process in between. It can be used to model processes where both the beginning and the outcome are controlled, but we want to gain insight into the characteristics of the intermediate uncertainty. How can we describe such a process? And how can we derive these results in the easiest way, without getting too deep into the weeds of stochastic calculus?

What are Brownian Bridges?

According to Wikipedia:

A Brownian bridge is a continuous-time stochastic process B(t) whose probability distribution is the conditional probability distribution of a standard Wiener process W(t) (a mathematical model of Brownian motion) subject to the condition (when standardized) that W(T) = 0, so that the process is pinned to the same value at both t = 0 and t = T

B_t = \left(W_t|\; W_T = 0 \right),\; t \in \left[ 0,\; T \right]

However, it looks rather limited, which raises my first question:

Can we define the Brownian Bridge in a more general fashion?

As a first approach, I would generalise it to have:

  • A fixed volatility, but not necessarily 1
  • Start from t₁, not necessarily 0, and end at t₂
  • Arbitrary a and b values at t₁ and t₂ instead of strictly 0

In terms of formulae:

H_t^{(1)} = \sigma \cdot W_t
B_t = \left( H_t^{(1)} \;|\; H_{t_1}^{(1)} = a \cap H_{t_2}^{(1)} = b \right), \; t \in \left[ t_1, \; t_2 \right]

What is the distribution of such a Brownian Bridge at time t?

Visual Exploration

In order to enhance my intuition in a visual way, I’ve simulated and plotted the following:

  • a handful of Brownian Motion paths starting from a at time t₁
  • a couple of Brownian Bridges starting from the same position and ending up at b at time t₂
  • the density of the distribution for the endpoints of the Brownian Motion paths
  • a sketch of the distribution of Bₜ

Example paths of Brownian Motions and Brownian Bridges starting from the same point.
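For reference, here is a minimal sketch of how such a figure can be produced, assuming numpy and matplotlib. The parameter values are purely illustrative (not the ones behind the figure above), and the bridge paths are generated with the standard construction that pins a scaled Brownian Motion to the chosen endpoints.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative parameters
t1, t2 = 0.0, 2.0          # start and end times of the bridge
a, b = 1.0, -0.5           # pinned start and end values
sigma = 0.7                # volatility of H_t = sigma * W_t
n_steps, n_paths = 500, 5

rng = np.random.default_rng(1)
t = np.linspace(t1, t2, n_steps + 1)
dt = (t2 - t1) / n_steps

# Brownian Motion paths started from a at time t1
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.hstack([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)])
bm_paths = a + sigma * W

# Brownian Bridge paths from (t1, a) to (t2, b), obtained by pinning the motion:
# X_t = a + sigma*W_t - (t - t1)/(t2 - t1) * (sigma*W_{t2} - (b - a))
frac = (t - t1) / (t2 - t1)
bridge_paths = a + sigma * W - frac * (sigma * W[:, [-1]] - (b - a))

fig, ax = plt.subplots(figsize=(8, 4))
ax.plot(t, bm_paths.T, color="maroon", alpha=0.5, lw=1)
ax.plot(t, bridge_paths.T, color="steelblue", alpha=0.8, lw=1)
ax.scatter([t1, t2], [a, b], color="black", zorder=3)
ax.set_xlabel("t")
ax.set_ylabel("value of the process")
plt.show()
```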

The probabilities we know

Let’s start with what we do know: the distribution of the Brownian motion at time t conditioned on its value at an earlier time point, which is the following:

P \left(H_t^{(1)} \in \left[x - \dfrac{dx}{2},\; x + \dfrac{dx}{2} \right]|\; H_{t_1}^{(1)}=a \right) = f_{t,\; t_1}^{(1)(a)}\left(x \right) \cdot dx,\; t > t_1

Where f is the density function of the following Gaussian distribution:

f_{t,\; s}^{(1)(a)}\left(x \right) = \dfrac{1}{\sqrt{2\pi}\sqrt{(t - s)\sigma^2}} \cdot e^{-\dfrac{1}{2}\dfrac{{\left( x-a \right)}^2}{(t - s)\sigma^2}}
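For later use, this density is straightforward to code up. A minimal sketch, assuming numpy; the function name transition_density is my own choice:

```python
import numpy as np

def transition_density(x, a, t, s, sigma):
    """Density of H_t = sigma * W_t at x, given that H_s = a (for t > s)."""
    var = (t - s) * sigma**2
    return np.exp(-0.5 * (x - a)**2 / var) / np.sqrt(2.0 * np.pi * var)
```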

The distribution of the Brownian motion at time t conditioned on its value at an earlier point can be rewritten in a more compact form:

P \left(\Delta_{t, x}|\; \Lambda_{t_1, a}\right) = f_{t,\; t_1}^{(1)(a)}\left(x \right) \cdot dx,\; t > t_1

Using the following notation:

\Delta_{t, x} : H_t^{(1)} \in \left[x - \dfrac{dx}{2},\; x + \dfrac{dx}{2} \right]
\Lambda_{t,x} : H_{t}^{(1)}=x

I introduce this notation to save horizontal space further down the line.

Plan for the solution

The main slight difficulty is caused by the fact that we would like to condition the motion on a future value: we would like to calculate the probability that the process ends up in the dx neighbourhood of x at time t, conditioned on both it being a at the beginning and b at the end.

Writing this as a formula:

P \left(\Delta_{t, x}|\; \Lambda_{t_1,a} \cap \Lambda_{t_2,b} \right)

Instead of trying to figure out how the value of the process at t₂ impacts the value at the earlier time t (steel blue arrow pointing backwards), one might want to transform the formula at hand into one which uses only known conditional probabilities (where all the arrows point forward). My plan is visualized in the following figure:

The densities and probabilities of the Brownian motion starting from a at time t₁ and ending up in the dx neighbourhood of x (amber area in the maroon density) at time t (darker maroon arrow), starting from x at time t and ending up in the db neighbourhood of b (light steel blue area in the amber density on the right) at time t₂ (amber arrow), and starting from a at time t₁ and ending up in the db neighbourhood of b (light steel blue area in the maroon density on the left) at time t₂ (fainter maroon arrow)

Basics of conditional probabilities

Back to the formulae: one can see that as dx approaches zero, the earlier mentioned lambda and delta events become the same:

\lim_{dx \to 0} \Delta_{t,x} = \Lambda_{t, x}

Because of this, assuming that the cumulative distribution function of the process at time s is differentiable around the point a, using L’Hôpital’s rule on the resulting 0/0 limit, and using the fact that conditional probabilities are calculated as (if B is not empty)

P \left( A \;|\; B \right) = \dfrac{P \left( A \cap B \right)}{P \left( B \right)}

we can see that if we bring the lambda out of the condition, it becomes a delta:

P \left(\Delta_{t, x}|\; \bm{\Lambda_{s,a}}\right) = \dfrac{\overbrace{P \left(\Delta_{t, x} \cap \Lambda_{s, a} \right)}^{=0}}{\underbrace{P \left(\Lambda_{s, a} \right)}_{=0}} =
= \lim_{da \to 0} \dfrac{\overbrace{\dfrac{F \left(\Delta_{t, x} \cap \{H_s^{(1)} < a+da\} \right) - F \left(\Delta_{t, x} \cap \{H_s^{(1)} < a-da\} \right)}{\cancel{da}}}^{P\left(\Delta_{t, x} \cap \Delta_{s, a}\right)}}{\underbrace{\dfrac{F \left(\{H_s^{(1)} < a+da\} \right) - F \left(\{H_s^{(1)} < a-da\} \right)}{\cancel{da}}}_{P\left(\Delta_{s, a}\right)}} =
= \dfrac{P \left(\Delta_{t, x} \cap \bm{\Delta_{s, a}} \right)}{P \left(\bm{\Delta_{s, a}} \right)}

Calculating the distribution for the Brownian Bridge

Considering all this, the probability of the Brownian motion having its value in the dx neighbourhood of x at time t can be written as:

P \left(\Delta_{t, x}|\; \Lambda_{t_1,a} \cap \Lambda_{t_2,b} \right) = \dfrac{P \left( \Delta_{t, x} \cap \Delta_{t_2, b}|\; \Lambda_{t_1,a} \right)}{\underbrace{P \left( \Delta_{t_2, b}|\; \Lambda_{t_1,a} \right)}_\text{C}}

The denominator is already in good shape (represented by the fainter maroon arrow), because the condition’s time is earlier than the variable’s time (t₂ > t₁); it is basically the light steel blue area drawn in the maroon distribution on the left side of the figure above:

C = f_{t_2, \; t_1}^{(1)(a)}\left( b \right) \cdot db = \dfrac{1}{\sqrt{2\pi}\sqrt{(t_2 - t_1)\sigma^2}} \cdot e^{-\dfrac{1}{2}\dfrac{{\left( b-a \right)}^2}{(t_2 - t_1)\sigma^2}} \cdot db

If we look at the numerator, we can move the Delta at x into the condition (where, as seen above, it becomes a Lambda) in order to make the condition at t₁ obsolete:

P \left(\Delta_{t, x} \cap \Delta_{t_2, b}|\;\Lambda_{t_1,a} \right) = P \left( \Delta_{t_2, b}|\; \overbrace{\Lambda_{t_1,a}}^{\text{no need}\Leftarrow t>t_1} \cap \Lambda_{t, x} \right) \cdot P \left( \Delta_{t, x}|\; \Lambda_{t_1,a} \right)
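For completeness, this step is just the chain rule for conditional probabilities, applied under the condition that the process starts from a at time t₁:

P \left( E_1 \cap E_2 \;|\; F \right) = P \left( E_2 \;|\; E_1 \cap F \right) \cdot P \left( E_1 \;|\; F \right)

with E_1 = \Delta_{t, x}, E_2 = \Delta_{t_2, b} and F = \Lambda_{t_1, a}, where \Delta_{t, x} in the condition is replaced by \Lambda_{t, x} in the dx \to 0 limit, as discussed above.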

One can simplify it to the following expression according to the plan, because the increments after t are independent of the increments before t:

P \left(\Delta_{t, x} \cap \Delta_{t_2, b}|\; \Lambda_{t_1,a} \right) = \underbrace{P \left( \Delta_{t_2, b}|\; \Lambda_{t, x} \right)}_\text{A} \cdot \underbrace{P \left( \Delta_{t, x}|\; \Lambda_{t_1,a} \right)}_\text{B}

This is already very good, because:

A = f_{t_2, \; t}^{(1)(x)}\left( b \right) \cdot db = \dfrac{1}{\sqrt{2\pi}\sqrt{(t_2 - t)\sigma^2}} \cdot e^{-\dfrac{1}{2}\dfrac{{\left( b-x \right)}^2}{(t_2 - t)\sigma^2}} \cdot db
B = f_{t, \; t_1}^{(1)(a)}\left( x \right) \cdot dx = \dfrac{1}{\sqrt{2\pi}\sqrt{(t - t_1)\sigma^2}} \cdot e^{-\dfrac{1}{2}\dfrac{{\left( x-a \right)}^2}{(t - t_1)\sigma^2}} \cdot dx

So, we can obtain the distribution for Bₜ from:

P \left(\Delta_{t, x} \;|\; \Lambda_{t_1, a} \cap \Lambda_{t_2, b} \right) = \dfrac{A \cdot B}{C} = \ldots \text{after careful reordering } \ldots =
= \dfrac{1}{\sqrt{2\pi}\sqrt{\dfrac{(t_2 - t)(t - t_1)}{t_2-t_1}\sigma^2}} \cdot e^{-\dfrac{1}{2}\dfrac{{\left( x- \left(a+\dfrac{t-t_1}{t_2 - t_1}\left( b - a\right) \right) \right)}^2}{\dfrac{(t_2 - t)(t - t_1)}{t_2-t_1}\sigma^2}} \cdot dx
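The “careful reordering” is essentially completing the square in x: after factoring out 1/(2σ^2), the three exponents combine through the identity (which can be verified by expanding both sides)

\dfrac{{\left( b-x \right)}^2}{t_2 - t} + \dfrac{{\left( x-a \right)}^2}{t - t_1} - \dfrac{{\left( b-a \right)}^2}{t_2 - t_1} = \dfrac{{\left( x- \left(a+\dfrac{t-t_1}{t_2 - t_1}\left( b - a\right) \right) \right)}^2}{\dfrac{(t_2 - t)(t - t_1)}{t_2-t_1}}

while the db factors cancel between A \cdot B and C, and the three normalising constants combine into the single prefactor above.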

Based on this, we can see that Bₜ follows a normal distribution where:

\mathbb{E}\left[B_t\right] = a+\dfrac{t-t_1}{t_2 - t_1}\left( b - a\right)
Var \left(B_t \right) = \dfrac{(t_2 - t)(t - t_1)}{t_2-t_1}\sigma^2
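As a sanity check, here is a minimal Monte Carlo sketch (assuming numpy; the parameter values are arbitrary). It conditions simulated paths on ending up in a small db-neighbourhood of b, exactly in the spirit of the delta events above, and compares the empirical mean and variance at time t with the formulas just derived.

```python
import numpy as np

# Arbitrary example parameters
t1, t2, t = 0.0, 2.0, 0.8      # bridge interval and an intermediate time
a, b = 1.0, -0.5               # pinned start and end values
sigma = 0.7                    # volatility of H_t = sigma * W_t
db = 0.02                      # width of the endpoint neighbourhood
n_paths = 2_000_000

rng = np.random.default_rng(0)

# Two independent Gaussian increments: H_t - H_{t1} and H_{t2} - H_t
x_mid = a + rng.normal(0.0, sigma * np.sqrt(t - t1), n_paths)
x_end = x_mid + rng.normal(0.0, sigma * np.sqrt(t2 - t), n_paths)

# Keep only the paths whose endpoint falls in the db-neighbourhood of b,
# i.e. condition on the event Delta_{t2, b}
conditioned = x_mid[np.abs(x_end - b) < db / 2]

# Compare with the derived mean and variance of B_t
mean_theory = a + (t - t1) / (t2 - t1) * (b - a)
var_theory = (t2 - t) * (t - t1) / (t2 - t1) * sigma**2
print(f"accepted paths:     {conditioned.size}")
print(f"mean (MC / theory): {conditioned.mean():.4f} / {mean_theory:.4f}")
print(f"var  (MC / theory): {conditioned.var():.4f} / {var_theory:.4f}")
```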

What about a Brownian Motion with drift?

One might want to consider the following bridge:

H_t^{(2)} = \mu \cdot t + \sigma \cdot W_t
B_t = \left( H_t^{(2)} \;|\; H_{t_1}^{(2)} = a \cap H_{t_2}^{(2)} = b \right), \; t \in \left[ t_1, \; t_2 \right]

One can perform the above exercise, but with the following density function:

f_{t,\; s}^{(2)(a)}\left(x \right) = \dfrac{1}{\sqrt{2\pi}\sqrt{(t - s)\sigma^2}} \cdot e^{-\dfrac{1}{2}\dfrac{{\left( x-\mu \cdot \left(t - s \right) - a \right)}^2}{(t - s)\sigma^2}}

Surprisingly enough, if one does the steps sufficiently carefully, the drift terms fall out.

This means that it doesn’t matter whether we use a Brownian Motion with or without drift: the constructed Brownian Bridge will have the same distribution, thanks to the restriction on the start and end points.
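If one doesn’t feel like pushing the algebra through by hand, a quick symbolic check (a sketch, assuming sympy) confirms it: the combined exponent of A · B / C for the drifted process does not depend on μ at all.

```python
import sympy as sp

a, b, x, mu, sigma = sp.symbols('a b x mu sigma', real=True)
t1, t, t2 = sp.symbols('t1 t t2', real=True)

# Combined exponent of A * B / C for the drifted process H_t = mu*t + sigma*W_t
expo = -((b - x - mu * (t2 - t))**2 / (t2 - t)
         + (x - a - mu * (t - t1))**2 / (t - t1)
         - (b - a - mu * (t2 - t1))**2 / (t2 - t1)) / (2 * sigma**2)

# The derivative with respect to mu simplifies to 0: the drift falls out
print(sp.simplify(sp.diff(expo, mu)))
```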

Conclusion

I’ve run through the main concepts and steps to derive the distribution for a Brownian Bridge with a fixed (not necessarily 0) starting point and a fixed (not necessarily 0) ending point.

Also, I’ve hinted at the conclusion that the drift of the underlying Brownian Motion is irrelevant in determining the Bridge’s distribution, highlighting the unique and sometimes surprising characteristics of these stochastic processes.


Kristof Tabori

Originally graduated as a physicist, but delivers value as a quant and by building LLM-backed solutions.