Applications and in-depth cases

Up until now, Malliavin calculus has been mostly a game or procedure we apply to stochastic processes. We will dive deeper and reach some interesting results.

Malliavin derivative of the Ornstein-Uhlenbeck process

Let’s consider a random process \(N_t\) that fluctuates around a value \(\mu\) and corrects itself more strongly the farther it is from \(\mu\), plus some noise. So, we want something like this:

\[ dN_t = \theta(\mu - N_t)\,dt+ \sigma\,dW_t \]

For example, let’s say that a lake can support around \(\mu=1000\) trout. If there are more than that, some will starve and die off, and if there are fewer, they will reproduce. Finally, \(\theta\) controls how strong the correction is. Here are some examples with \(\theta=1\) and different values of \(\sigma\):

Code
library(ggplot2)

# Euler-Maruyama discretization of the Ornstein-Uhlenbeck SDE
dt <- 0.01
total_time <- 10
N_0 <- 10
mu_ou <- 1000.0
theta_ou <- 1.0

steps <- total_time / dt
times <- seq(from = 0, to = total_time, length.out = steps)
stretched_times <- times * theta_ou
# One shared noise realization, so all three paths see the same shocks
dBt <- rnorm(steps, mean = 0, sd = sqrt(dt))

# Simulate one path of N for a given sigma
simulate_ou <- function(sigma_ou) {
  N <- rep_len(x = N_0, length.out = length(times))
  for (i in (2:length(times))) {
    N[i] <- N[i-1] + theta_ou * (mu_ou - N[i-1]) * dt + sigma_ou * dBt[i]
  }
  N
}

df <- data.frame(
  time = stretched_times,
  N_0 = simulate_ou(0),
  N_100 = simulate_ou(100),
  N_250 = simulate_ou(250)
)

ggplot(data = df, mapping = aes(x = time)) +
  xlab('Time') +
  ylab('N(t)') +
  geom_line(mapping = aes(y = N_100, colour = "σ=100"), linewidth = 1) +
  geom_line(mapping = aes(y = N_250, colour = "σ=250"), linewidth = 1) +
  geom_line(mapping = aes(y = N_0, colour = "σ=0"), linewidth = 1) +
  scale_colour_manual("Functions", values = c("black", "darkgray", "darkviolet"))

This is a very common process, so much so that it has a name: the Ornstein-Uhlenbeck process. Now, let’s calculate \(DN\). We have \(a(X) = \theta(\mu - X)\) and \(b(X) = \sigma\), so it’s not a simple Ito process that only depends on time. We’d better start from those before tackling the Ornstein-Uhlenbeck process. If we remember that we can split the integrals as \(\int_0^t \ldots\, dW_t=\sum_{i=1}^n{\ldots(W_{t_i}-W_{t_{i-1}})}\), then we can transform a classical Ito process \(X_t\) into something to which we can easily apply the chain and product rules:

\[ \begin{aligned} X_t &= X_0 + \sum_{i=1}^n \mu_{t_i}(t_i-t_{i-1}) + \sum_{i=1}^n \sigma_{t_i}(W_{t_i}-W_{t_{i-1}}) \\ D_rX_t &= \sum_{i=1}^n D_r\mu_{t_i}(t_i-t_{i-1}) + \sum_{i=1}^n D_r[\sigma_{t_i}\cdot(W_{t_i}-W_{t_{i-1}})] \\ &= \sum_{i=1}^n D_r\mu_{t_i}(t_i-t_{i-1}) + \sum_{i=1}^n \sigma_{t_i} \cdot 1_{[t_{i-1},t_{i}]}(r) + \sum_{i=1}^n D_r(\sigma_{t_i})\cdot(W_{t_i}-W_{t_{i-1}}) \\ &= \int_0^t D_r\mu_s\,ds + \sigma_r 1_{[0,t]}(r) + \int_0^t D_r\sigma_s\,dW_s \end{aligned} \]

It looks messy, but things are what we expect: \(t\) is the upper limit of both integrals, \(s\) is the variable we integrate from \(0\) to \(t\), and \(r\) is the variable we introduced with the Malliavin derivative (the starting value \(X_0\) is deterministic, so its derivative vanishes). Note that the middle term makes sense: a shock at time \(r\) enters directly through the volatility \(\sigma_r\), no matter where \(r\) falls along the path.
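As a quick sanity check (my own, not from the text), take \(X_t = \int_0^t W_s\,dW_s\), so that \(\mu_s = 0\) and \(\sigma_s = W_s\). The formula gives:

\[ D_rX_t = W_r 1_{[0,t]}(r) + \int_0^t D_rW_s\,dW_s = W_r + \int_r^t dW_s = W_t \qquad (r \le t) \]

which agrees with differentiating the closed form \(X_t = \frac{1}{2}(W_t^2 - t)\) directly, since \(D_r W_t^2 = 2W_t D_rW_t = 2W_t\) for \(r \le t\).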

Now, we can use a similar argument for the number of trout in the lake, which follows an Ornstein-Uhlenbeck process. Given that \(\mu_s\) and \(\sigma_s\) are now \(a(s,X_s)\) and \(b(s,X_s)\), we apply the chain and product rules as above to arrive at:

\[ D_rX_t = \int_r^t \frac{\partial a}{\partial x}(s, X_s)\,D_rX_s\,ds + b(r,X_r)1_{[0,t]}(r) + \int_r^t \frac{\partial b}{\partial x}(s, X_s)\,D_rX_s\,dW_s \]

Now, we know from above that \(\frac{\partial a}{\partial x} = -\theta\) and \(\frac{\partial b}{\partial x} = 0\), so we get:

\[ \begin{aligned} D_rN_t &= \int_r^t (-\theta)\,D_rN_s\,ds + \sigma1_{[0,t]}(r) \\ &= -\theta\int_r^t \,D_rN_s\,ds + \sigma1_{[0,t]}(r) \\ &= \sigma\,e^{-\theta(t-r)} \end{aligned} \]

In the last step we used that differentiating the integral equation in \(t\) gives the ODE \(\frac{d}{dt}D_rN_t = -\theta\,D_rN_t\) with initial condition \(D_rN_r = \sigma\), which the exponential solves. We arrive at a neat expression that tells us something interesting: the effect of a perturbation on the number of trout is exponentially smaller if \(r \ll t\). This makes sense: the trout population \(N\) wants to be close to \(\mu\), and a perturbation from the distant past is irrelevant. It also tells us that the Malliavin derivative itself has no randomness: it doesn’t depend on any random variable, such as \(W_t\).

Finally, we lucked out above. The expression for \(D_rN_t\) is simple, so we could write an analytic expression for it. If it were more convoluted, it would probably be better to estimate a solution numerically. Alos (2021) does a great job showcasing the above.
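Here is a minimal sketch of what such a numerical estimate could look like, reusing the Euler scheme from before (the bump size eps, the perturbation index r_index, and the grid are choices of this sketch, not from the text): we nudge a single Brownian increment at time \(r\) and measure how much \(N_t\) moves.

Code
# Finite-difference estimate of D_r N_t: bump the Brownian increment at
# time r by eps, re-run the same path, and divide the change in N_t by eps.
dt <- 0.01
total_time <- 10
steps <- total_time / dt
theta_ou <- 1.0; mu_ou <- 1000.0; sigma_ou <- 100; N_0 <- 10
dBt <- rnorm(steps, mean = 0, sd = sqrt(dt))

euler_path <- function(dB) {
  N <- rep_len(N_0, steps)
  for (i in 2:steps) {
    N[i] <- N[i-1] + theta_ou * (mu_ou - N[i-1]) * dt + sigma_ou * dB[i]
  }
  N
}

eps <- 1e-6
r_index <- 500                       # bump the increment near r = 5
dBt_bumped <- dBt
dBt_bumped[r_index] <- dBt[r_index] + eps

estimate <- (euler_path(dBt_bumped)[steps] - euler_path(dBt)[steps]) / eps
exact <- sigma_ou * exp(-theta_ou * (total_time - r_index * dt))
c(estimate = estimate, exact = exact)  # both close to 100 * exp(-5) ≈ 0.67

Because the Euler path is linear in the increments here, the finite difference is exact up to time discretization; for a process with state-dependent volatility, you would average such estimates over many paths.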

Clark-Ocone-Haussmann Formula

Let’s look at a relatively simple application with profound implications, following Friz (2002) closely. Consider this function:

\[ F(t) = e^{\int_0^t h\,dB - \frac{1}{2}\int_0^t{h^2}d\lambda} \]

This function is an exponential martingale, that is, \(\mathbb{E}_s[F(t)]=F(s)\) for \(s \le t\). We now take \(F(1) = \mathcal{E}(h)\) and calculate \(DF\):

\[ \begin{aligned} D_tF &= e^{-\frac{1}{2}\int_0^1{h^2}d\lambda}\cdot D_t(e^{\int_0^1 h\,dB}) \\ &= \underbrace{e^{-\frac{1}{2}\int_0^1{h^2}d\lambda}\cdot e^{\int_0^1 h\,dB}}_{F} \cdot h(t) \\ &= Fh(t) \end{aligned} \]

Now, this is only valid at \(t=1\), but we can use the expectation operator and the martingale property to get the values at a previous time. We will write \(\mathcal{F}_s\) for the filtration up to time \(s\), which is the name we give to the “history” or “information” known up to time \(s\), and then:

\[ \begin{aligned} \mathbb{E}[D_t F | \mathcal{F}_t] &= \mathbb{E}[F(1)h(t)|\mathcal{F}_t] \\ &=h(t) \mathbb{E}[F(1)|\mathcal{F}_t] \\ &=h(t) F(t) \end{aligned} \]

Now, we pull everything together. This martingale \(F(t)\) is the solution of the (stochastic) differential equation \(dF(t) = h(t)F(t)\,dB_t\) with \(F(0)=1=\mathbb{E}[F]\).
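To check this claim (a verification of my own; the source states it directly), apply Ito’s formula to \(F = e^{Y}\) with \(Y_t = \int_0^t h\,dB - \frac{1}{2}\int_0^t h^2\,d\lambda\):

\[ dF_t = F_t\,dY_t + \frac{1}{2}F_t\,d\langle Y\rangle_t = F_t\,h\,dB_t - \frac{1}{2}F_t\,h^2\,dt + \frac{1}{2}F_t\,h^2\,dt = h(t)F_t\,dB_t \]

The \(dt\) terms cancel, which is exactly why \(F\) has no drift and is a martingale. Replacing \(h(t)F(t)\) with the expression above, we get: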

\[ \begin{aligned} F(1) &= \mathbb{E}[F] + \int_0^1 h(t)F(t)\,dB_t \\ &= \mathbb{E}[F] + \int_0^1 \mathbb{E}[D_t F | \mathcal{F}_t]\,dB_t \end{aligned} \]

This is the Clark-Ocone formula, an explicit version of the Martingale Representation Theorem. The Malliavin derivative is the key ingredient that gives an analytic expression for the integrand, and it’s the main reason one would care about it. What this is telling us is that any (sufficiently regular) random variable \(F\) can be split into the sum of a “deterministic” portion, its expected value, and a martingale that fluctuates around it.
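As a quick sanity check (my own, not in the source), take \(F = B_1^2\). The chain rule gives \(D_tF = 2B_1\), so:

\[ \mathbb{E}[D_tF | \mathcal{F}_t] = 2\,\mathbb{E}[B_1 | \mathcal{F}_t] = 2B_t \quad\Rightarrow\quad B_1^2 = \underbrace{\mathbb{E}[B_1^2]}_{1} + \int_0^1 2B_t\,dB_t \]

which matches Ito’s formula, since \(d(B_t^2) = 2B_t\,dB_t + dt\) integrates to \(B_1^2 = \int_0^1 2B_t\,dB_t + 1\).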

We can apply this to the Ornstein-Uhlenbeck process for our trout. We know from before that \(D_rN_t = \sigma\,e^{-\theta(t-r)}\). Notice that there’s no \(W_t\) in it, so the conditional expectation of the Malliavin derivative is just the Malliavin derivative itself. We could estimate the value of \(\mathbb{E}[N_t]\) or try to solve the (stochastic) differential equation, but given that the process already has a known solution, I’ll go ahead and use it. Here it is in all its splendor:

\[ \begin{aligned} N_t &= \mathbb{E}[N_t] + \int_0^t \mathbb{E}_r[D_r N_t]\,dW_r \\ &= \underbrace{N_0e^{-\theta t} + \mu(1-e^{-\theta t})}_{\text{Deterministic}} + \underbrace{\int_0^t \sigma\,e^{-\theta(t-r)}\,dW_r}_{\text{Martingale}} \end{aligned} \]

Why do we care about this? Well, for one, if we wanted to simulate different paths of our fish population \(N\), we would only need to simulate the martingale portion; the deterministic portion is calculated once. Secondly, we can use the Clark-Ocone formula to directly calculate the variance of \(N\), without knowing the full analytic solution. Indeed, remember that \(\mathrm{Var}[N_t]=\mathbb{E}\left[(N_t-\mathbb{E}[N_t])^2\right]\), so then, using Ito’s Isometry:

\[ \begin{aligned} \mathrm{Var}[N_t]&=\mathbb{E}\left[\left(\int_0^t \mathbb{E}_r[D_r N_t]\,dW_r\right)^2\right] \\ &=\mathbb{E}\left[\int_0^t \left(\sigma\,e^{-\theta(t-r)}\right)^2dr\right] \\ &=\sigma^2\int_0^t e^{2\theta(r-t)}\,dr \\ &=\frac{\sigma^2}{2\theta}\left(1-e^{-2\theta t}\right) \end{aligned} \]
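Here’s a minimal sketch (my own check, not from the text) that demonstrates both points at once: it simulates \(N_t\) through the Clark-Ocone decomposition, computing the deterministic part once and each martingale integral as a weighted sum of Brownian increments, and then compares the empirical variance with \(\frac{\sigma^2}{2\theta}(1-e^{-2\theta t})\). The number of paths and the grid are arbitrary choices.

Code
# Simulate N(t) via its Clark-Ocone decomposition and check the variance.
set.seed(42)
dt <- 0.01; t_end <- 10; steps <- t_end / dt
theta <- 1.0; mu <- 1000.0; sigma <- 100; N0 <- 10
n_paths <- 5000

r <- seq(dt, t_end, by = dt)                  # integration grid for the martingale
weights <- sigma * exp(-theta * (t_end - r))  # integrand sigma * e^{-theta * (t - r)}
deterministic <- N0 * exp(-theta * t_end) + mu * (1 - exp(-theta * t_end))

# Each row of dW holds one path's Brownian increments; the stochastic
# integral becomes a weighted sum of those increments.
dW <- matrix(rnorm(n_paths * steps, sd = sqrt(dt)), nrow = n_paths)
N_t <- deterministic + as.vector(dW %*% weights)

c(empirical = var(N_t),
  clark_ocone = sigma^2 / (2 * theta) * (1 - exp(-2 * theta * t_end)))  # both ≈ 5000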

Notice that this works for any (sufficiently regular) random variable: you only need its Malliavin derivative to get the variance. We see above that:

  • Our trout population \(N\) in the long term (\(t \rightarrow \infty\)) will be a process moving around \(\mu\) with a constant variance \(\frac{\sigma^2}{2\theta}\)

  • Variance at the time of trout introduction (\(t \approx 0\)) is very small because the growth of \(N\), trying to reach \(\mu\) as soon as possible, dominates over the noise

  • A large \(\theta\) will not only make the long-term variance be reached sooner, it will also prevent large deviations from \(\mu\)
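To make the last point concrete, here is a tiny sketch (parameter values picked arbitrarily) evaluating the variance formula for two mean-reversion speeds:

Code
# Var[N_t] = sigma^2 / (2 * theta) * (1 - exp(-2 * theta * t))
var_ou <- function(t, sigma, theta) sigma^2 / (2 * theta) * (1 - exp(-2 * theta * t))
var_ou(c(0.01, 0.1, 1, 10), sigma = 100, theta = 1)  # climbs toward 5000
var_ou(c(0.01, 0.1, 1, 10), sigma = 100, theta = 4)  # plateau of 1250, reached sooner

With \(\theta=4\) the long-term variance \(\sigma^2/2\theta = 1250\) is both lower and reached earlier, matching the bullet points above.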


Alòs, Elisa, and David García Lorite. 2021. Malliavin Calculus in Finance: Theory and Practice. 1st ed. Financial Mathematics Series. Chapman & Hall/CRC. https://doi.org/10.1201/9781003018681.
Friz, Peter K. 2002. “An Introduction to Malliavin Calculus.” https://api.semanticscholar.org/CorpusID:2479628.