Beyond the Basics: Two Approaches to Covariance in General Brownian Bridges

Kristof Tabori
11 min read · Jan 3, 2024


[Hero image: intertwining pathways representing the Brownian Bridge, overlaid with faint formulas for covariance and stochastic processes, in cool blues and greens.]
Imagining the covariance of Brownian Bridges

I looked into how to derive the structure of the covariance function of Brownian Bridges with arbitrary (> 0) start and end points in time. The covariance quantifies how much the values along the path tend to deviate in the same direction from their expected values at two different times. I find formulae more expressive, so here is the definition of what I actually mean by the covariance function between times s and t:

\text{Cov}\left[ B_s, B_t\right] = \mathbb{E}\left[B_s \cdot B_t\right] - \mathbb{E}\left[B_s\right]\cdot\mathbb{E}\left[B_t\right], \; t_1<s<t<t_2

I’ve found two equally easy ways to derive the covariance function of the general Brownian Bridge. I’ll show both, because they produce intermediate results which might be important further down the road:

  • I derive the density of the joint distribution of the Brownian Bridge first. Then I calculate the necessary expected values in a brute-force way arriving at the formula.
  • I define the stochastic process which represents the Brownian Bridge first. I show that its distribution is identical to the one derived in my previous article. Then I derive the covariance function by gradually tracing it back to the expectations of its building blocks.

All we need are:

  • a basic understanding of a Brownian motion/Wiener-process
  • a basic understanding of the expected value, variance and covariance of the Brownian motion
  • knowledge of what a Gaussian distribution is
  • how to calculate the expected value of a random variable if we know its density function

Learnings from my previous article

I’ll build the bridge from the same process as found in my earlier writing in the Brownian Bridges series:

H_t^{(1)} = \sigma \cdot W_t

And the Bridge itself is:

B_t = \left( H_t^{(1)}|\; H_{t_1}^{(1)} = a \cap H_{t_2}^{(1)} = b \right),\; t \in \left[t_1,\; t_2 \right]

I’ll also use the shorthand I’ve introduced there for saving some vertical space on writing “the underlying process is in the dx neighbourhood of x at time t” (Delta) and “the underlying process is equal to x at time t” (Lambda):

\Delta_{t, x} : H_t^{(1)} \in \left[x - \dfrac{dx}{2},\; x + \dfrac{dx}{2} \right]
\Lambda_{t,x} : H_{t}^{(1)}=x

Also, an important observation was that we can move the Lambda out of the condition the following way:

P \left(\Delta_{t, x}|\; \bm{\Lambda_{s,a}}\right) = \dfrac{P \left(\Delta_{t, x} \cap \bm{\Delta_{s, a}} \right)}{P \left(\bm{\Delta_{s, a}} \right)}

Also, the conclusion was that the probability that the bridge at time t is in the dx neighbourhood of x can be written as:

\mathbb{P} \left(\Delta_{t, x} \;|\; \Lambda_{t_1, a} \cap \Lambda_{t_2, b} \right) =
= \underbrace{\dfrac{1}{\sqrt{2\pi}\sqrt{\dfrac{(t_2 - t)(t - t_1)}{t_2-t_1}\sigma²}} \cdot e^{-\dfrac{1}{2}\dfrac{{\left( x- \left(a+\dfrac{t-t_1}{t_2 - t_1}\left( b - a\right) \right) \right)}²}{\dfrac{(t_2 - t)(t - t_1)}{t_2-t_1}\sigma²}}}_{f^{a, b}_{t_1, t_2} \left(x, t \right)} \cdot dx

Which implied that the value of the Bridge at time t follows a Gaussian distribution where the expected value is:

\mathbb{E}\left[B_t\right] = a+\dfrac{t-t_1}{t_2 - t_1}\left( b - a\right) = \dfrac{t_2-t}{t_2-t_1}a + \dfrac{t-t_1}{t_2-t_1}b

The variance is the following:

\mathbb{E}\left[B_t²\right] - \mathbb{E}\left[B_t\right]² = \text{Var}\left[ B_t\right] = \dfrac{(t_2 - t)(t - t_1)}{t_2-t_1}\sigma²
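As a quick numerical sanity check, the sketch below evaluates the density above on a fine grid and recovers the stated mean and variance by integration. All parameter values are arbitrary illustrations, not taken from the article:

```python
import numpy as np

# Arbitrary illustrative parameters
a, b = 1.0, 2.5
t1, t2 = 0.5, 3.0
sigma = 0.8
t = 1.7

# Closed-form mean and variance of the bridge value at time t
mean_t = a + (t - t1) / (t2 - t1) * (b - a)
var_t = (t2 - t) * (t - t1) / (t2 - t1) * sigma**2

# Evaluate the Gaussian density f^{a,b}_{t1,t2}(x, t) on a grid
x = np.linspace(mean_t - 10 * np.sqrt(var_t), mean_t + 10 * np.sqrt(var_t), 200_001)
dx = x[1] - x[0]
f = np.exp(-0.5 * (x - mean_t) ** 2 / var_t) / np.sqrt(2 * np.pi * var_t)

# Moments by Riemann sum should reproduce the closed-form values
m0 = np.sum(f) * dx                 # total mass, should be ~1
m1 = np.sum(x * f) * dx             # should reproduce mean_t
m2 = np.sum(x**2 * f) * dx - m1**2  # should reproduce var_t
print(m0, m1, m2)
```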

The expected value, variance and covariance

The Wiener-process, which is the most important building block of the foundational process of the Bridge I discuss, is characterised at times t > 0 by a Gaussian distribution centred around zero, whose standard deviation grows with the square root of time (i.e. its variance grows linearly with time). Based on this, the expected value and the variance of the Wiener-process are:

\mathbb{E}\left[W_t\right] = 0
\text{Var}\left[ W_t\right] = t

Also, one can quickly derive the covariance of the Wiener-process from its definition:

\text{Cov}\left[ W_s, W_t\right] = \min(s, \; t)

Based on this, the expected value, variance and the covariance of the process I’ll build the Brownian Bridge upon are the following:

\mathbb{E}\left[H_t^{(1)}\right] = 0
\text{Var}\left[ H_t^{(1)}\right] = \sigma² t
\text{Cov}\left[ H_s^{(1)}, H_t^{(1)}\right] = \sigma² \min(s, \; t)
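These building-block moments are easy to verify by simulation. A minimal Monte Carlo sketch, with arbitrary illustrative values for σ, s and t:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, s, t = 0.8, 0.6, 1.4  # illustrative values, s < t
n = 400_000

# Simulate H_t = sigma * W_t via independent Gaussian increments
w_s = rng.normal(0.0, np.sqrt(s), n)
w_t = w_s + rng.normal(0.0, np.sqrt(t - s), n)
h_s, h_t = sigma * w_s, sigma * w_t

# Empirical covariance should approach sigma^2 * min(s, t)
cov_mc = np.mean(h_s * h_t) - np.mean(h_s) * np.mean(h_t)
print(cov_mc, sigma**2 * min(s, t))
```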

Brute Force Way

First, let’s look at the expected value of the Bridge at time s. It can also be expressed with the starting value of the Bridge and the expectation at a later time t, which will come in handy down the line:

\mathbb{E}\left[B_s\right] = \dfrac{t_2-s}{t_2-t_1}a + \dfrac{s-t_1}{t_2-t_1}b =
= \overbrace{\dfrac{t_2-s}{t_2-t_1}\dfrac{\bm{t-t_1}}{\bm{t-t_1}}a - \dfrac{\bm{s-t_1}}{\bm{t-t_1}}\dfrac{\bm{t_2-t}}{\bm{t_2-t_1}}\bm{a}}^{\dfrac{t-s}{t-t_1} a} + \dfrac{\bm{s-t_1}}{\bm{t-t_1}}\dfrac{\bm{t_2-t}}{\bm{t_2-t_1}}\bm{a} + \dfrac{s-t_1}{\bm{t-t_1}}\dfrac{\bm{t-t_1}}{t_2-t_1}b=
= \dfrac{s-t_1}{t-t_1}\mathbb{E}\left[B_t\right] + \dfrac{t-s}{t-t_1}a

I will define the joint density of the Brownian Bridge with the probability of the event of being in the dx neighbourhood of x at time s and jointly being in the dy neighbourhood of y at time t:

f_{s, t} \left(x, y \right) \cdot dx \cdot dy = \mathbb{P} \left( \Delta_{s, x} \cap \bm{\Delta_{t, y}} | \Lambda_{t_1, a} \cap \Lambda_{t_2, b} \right) =
= \underbrace{\mathbb{P} \left( \Delta_{s, x} | \Lambda_{t_1, a} \cap \bm{\Lambda_{t, y}} \overbrace{\cancel{\cap \Lambda_{t_2, b}}}^{t_1<\bm{s}<t<t_2} \right)}_{f^{a, y}_{t_1, t} \left(x, s \right)\cdot dx} \cdot \underbrace{\mathbb{P} \left(\bm{\Delta_{t, y}} | \Lambda_{t_1, a} \cap \Lambda_{t_2, b} \right)}_{f^{a, b}_{t_1, t_2} \left(y, t \right)\cdot dy}

We can see that the second factor describes the probability of the bridge being in the dy neighbourhood of y at time t, starting from a at time t₁ and ending up at b at time t₂.
The first factor is basically a bridge where not only the start and end points are fixed, but also an intermediate point: the path is pinned to y at the later time t > s. One can see that this is actually a construction of two Brownian Bridges joined together at time t. I’ve simulated a handful of such paths for visualisation purposes (amber), overlaid with Brownian Bridge paths with starting value a at time t₁ and ending value b at time t₂ (steel blue):

Brownian Bridge paths with starting value “a” at time “t₁” and ending value “b” at time “t₂” (steel blue) overlaid with Brownian Bridge paths where there is an extra restriction of being “y” at time “t” (amber).

The bottom line is that the movements in the first part are independent of the movements in the second part, so the condition at t₂ can be omitted. This way the first probability describes the probability of the bridge being in the dx neighbourhood of x at time s, starting from a at time t₁ and ending up at y at time t.

This way, we’ve arrived at the density of the joint distribution of the bridge at times s and t, which is arguably the most important result of this particular approach:

f_{s, t} \left(x, y \right) = f^{a, y}_{t_1, t} \left(x, s \right) \cdot f^{a, b}_{t_1, t_2} \left(y, t \right)

Also, one important note: if we marginalise over y at time t, we get the distribution of the original bridge at time s. It means that if we draw a value from the bridge’s distribution at time t, then construct a new bridge (a at time t₁ and the drawn value at time t) and draw a value from the new bridge’s distribution at time s, we get the same distribution as if we had drawn the value from the original bridge at time s. It seems like a lot of work for a simple, already known result, but it will come in handy in a later story I’ll write about how to actually simulate paths of a Brownian Bridge.
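This marginalisation claim can be checked numerically. The sketch below (with arbitrary illustrative parameters) integrates the joint density over y and compares the result with the original bridge’s density at time s:

```python
import numpy as np

def bridge_pdf(x, t, a, b, t1, t2, sigma):
    """Density f^{a,b}_{t1,t2}(x, t) of a bridge pinned to a at t1 and b at t2."""
    mean = a + (t - t1) / (t2 - t1) * (b - a)
    var = (t2 - t) * (t - t1) / (t2 - t1) * sigma**2
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

# Arbitrary illustrative parameters
a, b, t1, t2, sigma = 1.0, 2.5, 0.5, 3.0, 0.8
s, t = 1.2, 2.1
x0 = 1.4  # point where we evaluate the marginal at time s

# Marginalise the joint density f_{s,t}(x0, y) over y by Riemann sum ...
y = np.linspace(-8.0, 12.0, 200_001)
dy = y[1] - y[0]
joint = bridge_pdf(x0, s, a, y, t1, t, sigma) * bridge_pdf(y, t, a, b, t1, t2, sigma)
marginal = np.sum(joint) * dy

# ... and compare with the original bridge's density at (x0, s)
print(marginal, bridge_pdf(x0, s, a, b, t1, t2, sigma))
```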

However, we set out to derive the covariance function of the Brownian Bridge, so we need to move forward and calculate the expectation of the product of the Bridge’s values, which is not difficult in possession of the density of the joint distribution:

\mathbb{E}\left[B_s \cdot B_t\right] = \iint_{-\infty}^{\infty} x \cdot y \cdot f_{s, t} \left(x, y \right) \,dx\,dy =
= \int_{-\infty}^{\infty} y \cdot f^{a, b}_{t_1, t_2}\left(y, t \right) \underbrace{\int_{-\infty}^{\infty} x \cdot f^{a, y}_{t_1, t} \left(x, s \right) \,dx}_{\dfrac{t-s}{t - t_1} \cdot a+\dfrac{s-t_1}{t - t_1}\cdot y} \,dy=
= \dfrac{t-s}{t - t_1} \cdot a \cdot \underbrace{\int_{-\infty}^{\infty} y \cdot f^{a, b}_{t_1, t_2}\left(y, t \right) \,dy}_{\mathbb{E}\left[B_t\right]} + \dfrac{s-t_1}{t - t_1}\cdot \underbrace{\int_{-\infty}^{\infty} y² \cdot f^{a, b}_{t_1, t_2}\left(y, t \right) \,dy}_{\text{Var}\left[ B_t\right] + \mathbb{E}\left[B_t\right]²}=
= \dfrac{t-s}{t - t_1} a \cdot \mathbb{E}\left[B_t\right] + \dfrac{s-t_1}{t - t_1}\cdot \left(\text{Var}\left[ B_t\right] + \mathbb{E}\left[B_t\right]² \right) =
= \mathbb{E}\left[B_t\right] \cdot \underbrace{\left( \dfrac{s-t_1}{t-t_1}\mathbb{E}\left[B_t\right] + \dfrac{t-s}{t-t_1}a \right)}_{\mathbb{E}\left[B_s\right]} + \dfrac{s-t_1}{\cancel{t - t_1}}\dfrac{\left(t_2-t \right)\cancel{\left(t-t_1 \right)}}{\left(t_2-t_1 \right)}\sigma² =
= \dfrac{\left(t_2-t \right)\left(s-t_1 \right)}{\left(t_2-t_1 \right)}\sigma² + \mathbb{E}\left[B_t\right] \cdot \mathbb{E}\left[B_s\right]

This is very good, because the covariance is then:

\text{Cov}\left[ B_s, B_t\right] = \dfrac{\left(t_2-t \right)\left(s-t_1 \right)}{\left(t_2-t_1 \right)}\sigma² + \cancel{\mathbb{E}\left[B_t\right] \cdot \mathbb{E}\left[B_s\right]} - \cancel{\mathbb{E}\left[B_s\right]\cdot\mathbb{E}\left[B_t\right]}
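As a cross-check of the brute-force result, the sketch below (arbitrary illustrative parameters) computes E[Bₛ·Bₜ] from the joint density by a two-dimensional Riemann sum and compares the implied covariance with the closed form:

```python
import numpy as np

def bridge_pdf(x, t, a, b, t1, t2, sigma):
    """Density f^{a,b}_{t1,t2}(x, t) of a bridge pinned to a at t1 and b at t2."""
    mean = a + (t - t1) / (t2 - t1) * (b - a)
    var = (t2 - t) * (t - t1) / (t2 - t1) * sigma**2
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

a, b, t1, t2, sigma = 1.0, 2.5, 0.5, 3.0, 0.8
s, t = 1.2, 2.1

# Joint density f_{s,t}(x, y) = f^{a,y}_{t1,t}(x, s) * f^{a,b}_{t1,t2}(y, t) on a grid
x = np.linspace(-4.0, 7.0, 1501)
y = np.linspace(-4.0, 7.0, 1501)
dx, dy = x[1] - x[0], y[1] - y[0]
X, Y = np.meshgrid(x, y, indexing="ij")
joint = bridge_pdf(X, s, a, Y, t1, t, sigma) * bridge_pdf(Y, t, a, b, t1, t2, sigma)

# E[B_s * B_t] by 2-D Riemann sum, then subtract the product of the means
e_prod = np.sum(X * Y * joint) * dx * dy
mean_s = a + (s - t1) / (t2 - t1) * (b - a)
mean_t = a + (t - t1) / (t2 - t1) * (b - a)
cov_num = e_prod - mean_s * mean_t
cov_closed = (t2 - t) * (s - t1) / (t2 - t1) * sigma**2
print(cov_num, cov_closed)
```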

The Gaussian process way

We can define the following process using deterministic functions of time (Dₜ and alpha), the original Brownian motion process, and its value at time t₂-t₁:

\widetilde{B}_t = a + \underbrace{\dfrac{t-t_1}{t_2-t_1}}_{\tau_t}\left( b - a\right) + H_{t-t_1}^{(1)} + \alpha\left(t\right) \cdot H_{t_2-t_1}^{(1)}

The first step is to determine the deterministic alpha function to match the expected value and variance of the original Bridge at all t times (t₁ < t < t₂).

To save some writing, I’ll denote the time-span of the Brownian Bridge by T, alongside the shorthand tau:

\tau_t = \dfrac{t-t_1}{t_2-t_1}
T = t_2-t_1

Also, the variance of the original Brownian Bridge can be written as:

\text{Var}\left[ B_t\right] = \dfrac{(t_2 - t)(t - t_1)}{t_2-t_1}\sigma² = \left(1-\tau_t\right)\cdot \tau_t \cdot T \cdot \sigma²

We can see that the distribution of the new process is Gaussian at all times, because it is a sum of deterministic functions and Gaussian variables. So, if we match the expected value and the variance of the distribution at all times in the (t₁, t₂) interval, we can say the distribution of this process and that of the Brownian Bridge are the same.

Its expected value is already in good shape, since both H terms have zero expectation:

\mathbb{E}\left[\widetilde{B}_t\right] = a + \dfrac{t-t_1}{t_2-t_1}\left( b - a\right) + \underbrace{\mathbb{E}\left[H_{t-t_1}^{(1)}\right]}_{0} + \alpha\left(t\right) \cdot \underbrace{\mathbb{E}\left[H_{t_2-t_1}^{(1)}\right]}_{0} = a + \dfrac{t-t_1}{t_2-t_1}\left( b - a\right)

Which is exactly the expected value of the Brownian Bridge at time t, without posing any restriction on alpha.

Let’s look at the variance:

\text{Var}\left[ \widetilde{B}_t^{(\alpha)}\right] = \mathbb{E}\left[\left(\widetilde{B}_t^{(\alpha)}\right)²\right] - \underbrace{\mathbb{E}\left[\widetilde{B}_t^{(\alpha)}\right]²}_{D_t²} =
=\mathbb{E}\left[\left(D_t + H_{T\cdot \tau_t}^{(1)} + \alpha\left(t \right) \cdot H_{T}^{(1)} \right)²\right] - D_t² =
=\mathbb{E}\left[\cancel{D_t²} + \left(H_{T\cdot \tau_t}^{(1)}\right)² + \alpha\left(t \right)² \cdot \left(H_{T}^{(1)}\right)²\right] - \cancel{D_t²} +
+ 2\cdot \mathbb{E}\left[D_t\cdot \underbrace{H_{T\cdot \tau_t}^{(1)}}_{\mathbb{E}\left[\cdot \right]=0} + D_t\cdot \alpha\left(t \right)\cdot \underbrace{H_{T}^{(1)}}_{\mathbb{E}\left[\cdot \right]=0} +\alpha\left(t \right)\cdot H_{T\cdot \tau_t}^{(1)}\cdot H_{T}^{(1)}\right] =
= \underbrace{\mathbb{E}\left[\left(H_{T\cdot \tau_t}^{(1)}\right)² \right]}_{\sigma²\cdot T\cdot \tau_t} +\alpha\left(t \right)² \cdot \underbrace{\mathbb{E}\left[\left(H_{T}^{(1)}\right)² \right]}_{\sigma²\cdot T} +2\alpha\left(t \right)\cdot\underbrace{\mathbb{E}\left[H_{T\cdot \tau_t}^{(1)}\cdot H_{T}^{(1)} \right]}_{\sigma²\cdot \min\left(T, \; T\cdot \tau_t\right) = \sigma²\cdot T\cdot \tau_t}=
= \sigma²\cdot T \cdot \underbrace{\left( \alpha\left(t \right)² + 2\alpha\left(t \right)\cdot \tau_t + \tau_t\right)}_{=\left(1-\tau_t\right) \cdot \tau_t \Longleftrightarrow \alpha\left(t \right) = -\tau_t}

If we choose alpha so that the last factor equals (1-τₜ)·τₜ, the distribution of the process will be the same as the distribution of our original Bridge. Rearranging α(t)² + 2α(t)·τₜ + τₜ = (1-τₜ)·τₜ gives (α(t) + τₜ)² = 0, so the only choice is α(t) = -τₜ. For calculation purposes, I will therefore use the following process:

\widetilde{B}_t = \overbrace{a + \dfrac{t-t_1}{t_2-t_1}\left( b - a\right)}^{D_t} + H_{t-t_1}^{(1)} - \dfrac{t-t_1}{t_2-t_1} \cdot H_{t_2-t_1}^{(1)} = D_t + H_{T\cdot\tau_t}^{(1)} - \tau_t\cdot H_{T}^{(1)}

Again, an important note: this definition can be used to simulate paths of Brownian Bridges, which I will discuss in a future article.
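As a small preview of that, the sketch below (with arbitrary illustrative parameters) simulates paths with this construction, feeding the same underlying Brownian path into both H terms, and checks that the endpoints are pinned and that the interior mean and variance match the bridge formulas:

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, t1, t2, sigma = 1.0, 2.5, 0.5, 3.0, 0.8  # arbitrary illustrative values
n_paths, n_steps = 20_000, 200

ts = np.linspace(t1, t2, n_steps + 1)
tau = (ts - t1) / (t2 - t1)
D = a + tau * (b - a)

# One Brownian path W per row on [0, t2 - t1], built from independent increments
dt = (t2 - t1) / n_steps
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)], axis=1)

# B~_t = D_t + sigma * W_{t-t1} - tau_t * sigma * W_{t2-t1}
B = D + sigma * W - tau * sigma * W[:, -1:]

# Endpoints are pinned exactly; interior mean/variance match the bridge formulas
k = n_steps // 2
t_mid = ts[k]
var_mid = (t2 - t_mid) * (t_mid - t1) / (t2 - t1) * sigma**2
print(B[:, 0].std(), B[:, -1].std())  # both 0: every path starts at a, ends at b
print(B[:, k].mean(), D[k])           # empirical vs. theoretical mean
print(B[:, k].var(), var_mid)         # empirical vs. theoretical variance
```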

Using this “new” process, we can calculate the remaining necessary input for the covariance:

\mathbb{E}\left[B_s B_t\right] = \mathbb{E}\left[\widetilde{B}_s \widetilde{B}_t\right] = \mathbb{E}\left[\left(D_s + H_{T\cdot\tau_s}^{(1)} - \tau_s\cdot H_{T}^{(1)} \right)\left(D_t + H_{T\cdot\tau_t}^{(1)} - \tau_t\cdot H_{T}^{(1)} \right)\right]=
=\mathbb{E}\left[D_s\cdot D_t+\underbrace{D_s\cdot H_{T\cdot\tau_t}^{(1)}}_{\mathbb{E}\left[\cdot \right]=0} - \underbrace{D_s\cdot \tau_t\cdot H_{T}^{(1)}}_{\mathbb{E}\left[\cdot \right]=0}\right]+
+ \mathbb{E}\left[\underbrace{D_t\cdot H_{T\cdot\tau_s}^{(1)}}_{\mathbb{E}\left[\cdot \right]=0} + \underbrace{H_{T\cdot\tau_t}^{(1)}\cdot H_{T\cdot\tau_s}^{(1)}}_{\mathbb{E}\left[\cdot \right]=\sigma²\cdot T\cdot\tau_s} - \tau_t\cdot \underbrace{H_{T}^{(1)}\cdot H_{T\cdot\tau_s}^{(1)}}_{\mathbb{E}\left[\cdot \right]=\sigma²\cdot T\cdot\tau_s} \right]+
+\mathbb{E}\left[\underbrace{- D_t\cdot \tau_s\cdot H_{T}^{(1)}}_{\mathbb{E}\left[\cdot \right]=0} - \tau_s\cdot \underbrace{H_{T}^{(1)} \cdot H_{T\cdot\tau_t}^{(1)}}_{\mathbb{E}\left[\cdot \right]=\sigma²\cdot T\cdot\tau_t} + \tau_s \cdot \tau_t\cdot \underbrace{H_{T}^{(1)}\cdot H_{T}^{(1)}}_{\mathbb{E}\left[\cdot \right]=\sigma²\cdot T} \right] =
= D_s\cdot D_t + \sigma²\cdot T\cdot \left(\tau_s - \tau_s\tau_t - \cancel{\tau_s\tau_t} + \cancel{\tau_s\tau_t} \right) = \underbrace{D_s}_{\mathbb{E}\left[\widetilde{B}_s \right]}\cdot \underbrace{D_t}_{\mathbb{E}\left[\widetilde{B}_t \right]} + \underbrace{\left(1 - \tau_t \right)\cdot \tau_s\cdot T}_{\dfrac{\left(t_2-t \right)\left(s-t_1 \right)}{t_2-t_1}}\cdot \sigma²

So, because the distribution of the original bridge at all times t, t₁ < t < t₂ is the same as the distribution of the “new” process, their covariance is the same too:

\text{Cov}\left[B_s, B_t \right]=\text{Cov}\left[\widetilde{B}_s, \widetilde{B}_t \right]= \mathbb{E}\left[\widetilde{B}_s \widetilde{B}_t\right] - \mathbb{E}\left[\widetilde{B}_s \right] \cdot \mathbb{E}\left[\widetilde{B}_t\right] = \dfrac{\left(t_2-t \right)\left(s-t_1 \right)}{t_2-t_1}\sigma²
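A Monte Carlo sketch of the same check (arbitrary illustrative parameters), sampling the three correlated values of the underlying process from independent Gaussian increments:

```python
import numpy as np

rng = np.random.default_rng(2)
a, b, t1, t2, sigma = 1.0, 2.5, 0.5, 3.0, 0.8  # arbitrary illustrative values
s, t = 1.2, 2.1
T = t2 - t1
n = 500_000

# Sample (H_{s-t1}, H_{t-t1}, H_T) along one underlying path per draw
g1 = rng.normal(0.0, np.sqrt(s - t1), n)
g2 = rng.normal(0.0, np.sqrt(t - s), n)
g3 = rng.normal(0.0, np.sqrt(t2 - t), n)
H_s, H_t, H_T = sigma * g1, sigma * (g1 + g2), sigma * (g1 + g2 + g3)

# B~ at times s and t from the construction above
tau_s, tau_t = (s - t1) / T, (t - t1) / T
B_s = a + tau_s * (b - a) + H_s - tau_s * H_T
B_t = a + tau_t * (b - a) + H_t - tau_t * H_T

# Empirical covariance vs. the closed-form result
cov_mc = np.mean(B_s * B_t) - np.mean(B_s) * np.mean(B_t)
cov_closed = (t2 - t) * (s - t1) / T * sigma**2
print(cov_mc, cov_closed)
```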

The correlation function

The correlation function is pretty easy to derive:

\rho\left(s, \; t \right) = \dfrac{\text{Cov}\left[B_s, \; B_t \right]}{\sqrt{\text{Var}\left[ B_s \right]}\sqrt{\text{Var}\left[ B_t \right]}} = \sqrt{\dfrac{\left(t_2 - t \right)\left( s -t_1 \right)}{\left(t_2 - s \right)\left( t -t_1 \right)}}

A few words about the results

First of all, because t₁ < s < t < t₂, the covariance is positive, which means that if the path is above its expected value at time s, our expectation is that its value will also be above its expected value at a later time t. This erodes linearly as t gets farther from s, which can be seen if I express t in terms of how much later it is than s:

\delta = \dfrac{t-s}{t_2-s} \Longleftrightarrow t = s + \left( t_2 — s \right) \delta

Then the covariance between time s and t, s < t is:

\text{Cov}\left[ B_s, B_t \right] = \dfrac{(t_2 - \overbrace{(s + ( t_2 - s ) \delta)}^{t})\left( s -t_1\right)}{t_2 - t_1}\sigma² =
= \underbrace{\dfrac{\left(t_2 - s \right)\left(s - t_1 \right)}{t_2 -t_1}\sigma²}_{\text{Var}\left[B_s \right]} - \delta\underbrace{\dfrac{\left(t_2 - s \right)\left(s - t_1 \right)}{t_2 -t_1}\sigma²}_{\text{Var}\left[B_s \right]} = \text{Var}\left[B_s \right] \left( 1 - \delta\right)
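A quick numerical illustration of this linear decay, with arbitrary illustrative parameter values:

```python
# Check that Cov[B_s, B_t] = Var[B_s] * (1 - delta) when t = s + (t2 - s) * delta.
# All parameter values are arbitrary illustrations.
t1, t2, sigma = 0.5, 3.0, 0.8
s, delta = 1.2, 0.4
t = s + (t2 - s) * delta

cov = (t2 - t) * (s - t1) / (t2 - t1) * sigma**2
var_s = (t2 - s) * (s - t1) / (t2 - t1) * sigma**2
print(cov, var_s * (1 - delta))  # the two values agree
```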

For reference purposes, we can compare the results to the covariance of the original foundational process:

\text{Cov} \left[H_s^{(1)}, H_t^{(1)} \right] = \sigma²\min(s, \; t)

And we can see that the covariance of the Bridge is everywhere smaller than the covariance of the original Brownian motion.

Also, it is more interesting to compare the results to the original Brownian motion conditioned on being a at time t₁:

\text{Cov} \left[\left( H_s^{(1)}|\; H_{t_1}^{(1)} = a\right), \left(H_t^{(1)}|\; H_{t_1}^{(1)} = a\right) \right] = \sigma²\left(\min(s, \; t) - t_1\right)

We can see that the covariance of the Brownian Bridge is smaller than that of this conditioned Brownian motion too.

However, if both t and s are close to t₁:

s = t_1 + \delta_s,\; \delta_s << t_2-t_1
t = t_1 + \delta_t,\; \delta_t << t_2-t_1,\; \delta_s < \delta_t
\dfrac{\left(t_2-t \right)\left(s-t_1 \right)}{t_2-t_1}\sigma²=\dfrac{\left(t_2-t_1 - \delta_t \right)\left(\cancel{t_1} + \delta_s - \cancel{t_1}\right)}{t_2-t_1}\sigma²=\underbrace{\left(\delta_s - \dfrac{\delta_s \delta_t}{t_2-t_1} \right)}_{\approx\delta_s=\min(s, \; t)-t_1}\sigma² \approx
\approx \text{Cov} \left[\left( H_s^{(1)}|\; H_{t_1}^{(1)} = a\right), \left(H_t^{(1)}|\; H_{t_1}^{(1)} = a\right) \right]

We can see that near t₁ the covariance function starts out just like that of a Brownian motion conditioned on being a at time t₁, but the further we go from the starting point, the bigger the effect of the condition at the end becomes.

Conclusion

I’ve derived the covariance function for Brownian Bridges in a general case in two ways without getting too deep into stochastic calculus. Surprisingly enough, the journey happened to be more exciting to me than the destination itself:

  • I’ve seen that if we draw a value from the bridge’s distribution at time t, then construct a new bridge (“a” at time t₁ and the drawn value at time t) and draw a value from the new bridge’s distribution at time s, we get the same distribution as if we had drawn the value from the original bridge at time s
  • I’ve derived the joint density function for the joint distribution for the Brownian Bridge at times s and t, s < t
  • I’ve defined the stochastic process for the Brownian Bridge

The bottom line is that the condition at the end makes the covariance and the correlation smaller than those of a normal Brownian motion conditioned only on being a at time t₁.

If you liked the article, feel free to clap to propel it to a wider audience. If you have found a part particularly insightful, don’t hesitate to highlight it. In case of any question or observation, comment below, I’ll be happy to respond. Also, if you would like to get notified when I publish my next story, follow me.


Kristof Tabori

Originally graduated as a physicist, but delivers value as a quant and by building LLM-backed solutions.