Homogeneous linear RODE with a Wiener process noise coefficient

We start by considering a homogeneous linear equation in which the coefficient is a Wiener process. In this case, it is already known, by other means, that the Euler method converges strongly with order 1, because the equation can be regarded as a system of stochastic differential equations with additive noise. Nevertheless, we use it here for illustrative purposes.

The equation

We consider the RODE

\[ \begin{cases} \displaystyle \frac{\mathrm{d}X_t}{\mathrm{d} t} = W_t X_t, \qquad 0 \leq t \leq T, \\ \left. X_t \right|_{t = 0} = X_0, \end{cases}\]

where $\{W_t\}_{t\geq 0}$ is a standard Wiener process. The explicit solution is

\[ X_t = e^{\int_0^t W_s \;\mathrm{d}s} X_0.\]
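
The exponential formula holds path-wise for any continuous sample path of the noise. As a quick sanity check (in Python, independently of the Julia workflow of this page), we can substitute a smooth stand-in $w(t) = \sin t$, for which $\int_0^t \sin s\,\mathrm{d}s = 1 - \cos t$, and verify the differential equation numerically:

```python
import math

# Path-wise check that X(t) = x0 * exp(∫_0^t w(s) ds) solves dX/dt = w(t) X(t),
# using the smooth stand-in w(t) = sin(t), so that ∫_0^t w = 1 - cos(t).
x0 = 1.0

def X(t):
    return x0 * math.exp(1.0 - math.cos(t))

t, h = 0.7, 1e-6
dXdt = (X(t + h) - X(t - h)) / (2 * h)  # central finite difference
print(abs(dXdt - math.sin(t) * X(t)))   # should be near zero
```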

Computing a solution with the exact distribution

The integral $\int_0^{t_j} W_s\;\mathrm{d}s$, and hence the exact solution, is not uniquely determined by the values $W_{t_j}$ of the noise at the mesh points, no matter how fine the mesh is. Hence, it makes no sense to compute the strong distance to "the exact solution". But we can estimate the error by drawing sample solutions with the exact distribution conditioned on the mesh values.

We do that by first breaking down the integral into a sum over the mesh intervals:

\[\int_0^{t_j} W_s\;\mathrm{d}s = \sum_{i = 0}^{j-1} \int_{t_i}^{t_{i+1}} W_s\;\mathrm{d}s.\]

On each mesh interval, we use that

\[ B_t^i = W_t - W_{t_i} - \frac{t - t_i}{t_{i+1}-t_i}(W_{t_{i+1}} - W_{t_i})\]

is a Brownian bridge on $[t_i, t_{i+1}]$, independent of $W_{t_i}$ and $W_{t_{i+1}}$.

Since

\[ \mathrm{d}W_t = \mathrm{d}B_t^i + \frac{W_{t_{i+1}}-W_{t_i}}{t_{i+1}-t_i}\;\mathrm{d}t,\]

we obtain

\[\begin{align*} \int_{t_i}^{t_{i+1}} W_s\;\mathrm{d}s & = \int_{t_i}^{t_{i+1}} B_s^i\;\mathrm{d}s + \int_{t_i}^{t_{i+1}} \left( W_{t_i} + \frac{s - t_i}{t_{i+1}-t_i}(W_{t_{i+1}} - W_{t_i})\right)\;\mathrm{d}s \\ & = \frac{1}{2}\left(W_{t_i} + W_{t_{i+1}}\right)(t_{i+1} - t_i) + Z_i, \end{align*}\]

where

\[ Z_i = \int_{t_i}^{t_{i+1}} B_s^i\;\mathrm{d}s.\]

Notice the first term is the trapezoidal rule, while the second term is a Gaussian with zero mean. We need to compute the variance of $Z_i$ to completely characterize it. By translation, it suffices to consider a Brownian bridge $\{B_t\}_{t\in [0, \tau]}$ on an interval $[0, \tau]$, with $\tau = t_{i+1} - t_i$, obtained from $B_t = W_t - (t/\tau)W_\tau$. Since $\mathbb{E}[W_tW_s] = \min\{t, s\}$, we have

\[ \mathbb{E}[B_tB_s] = \min\{t, s\} - \frac{ts}{\tau}.\]

Hence,

\[\begin{align*} \mathbb{E}\left[\left(\int_0^{\tau} B_s\;\mathrm{d}s\right)^2\right] & = \mathbb{E}\left[\int_0^{\tau} \int_0^\tau B_sB_t\;\mathrm{d}s\;\mathrm{d}t\right] \\ & = \int_0^\tau \int_0^\tau \mathbb{E}[B_sB_t] \;\mathrm{d}s\;\mathrm{d}t \\ & = \int_0^\tau \int_0^\tau \left(\min\{t, s\} - \frac{ts}{\tau}\right) \;\mathrm{d}s\;\mathrm{d}t \\ & = \int_0^\tau \int_0^t s\;\mathrm{d}s\;\mathrm{d}t + \int_0^\tau \int_t^\tau t\;\mathrm{d}s\;\mathrm{d}t - \int_0^\tau \int_0^\tau \frac{ts}{\tau} \;\mathrm{d}s\;\mathrm{d}t \\ & = \int_0^\tau \frac{t^2}{2}\;\mathrm{d}t + \int_0^\tau t(\tau - t)\;\mathrm{d}t - \int_0^\tau \frac{t\tau^2}{2\tau}\;\mathrm{d}t \\ & = \frac{\tau^3}{12}. \end{align*}\]
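
This computation can be cross-checked numerically. The following Python snippet (illustrative only, outside the Julia workflow of this page, with an arbitrary $\tau$) approximates the double integral of the bridge covariance with the midpoint rule and compares it with $\tau^3/12$:

```python
import numpy as np

# Approximate the double integral of min(s, t) - s*t/tau over [0, tau]^2
# with the midpoint rule; the result should be close to tau^3 / 12.
tau = 2.0
n = 1000
h = tau / n
g = (np.arange(n) + 0.5) * h          # midpoints of the grid cells
s, t = np.meshgrid(g, g)
cov = np.minimum(s, t) - s * t / tau  # Brownian bridge covariance
approx = cov.sum() * h * h
print(approx, tau**3 / 12)
```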

Back to $Z_i$, this means that

\[ Z_i \sim \mathcal{N}\left(0, \frac{(t_{i+1}- t_i)^3}{12}\right) = \sqrt{\frac{(t_{i+1} - t_i)^3}{12}}\mathcal{N}(0, 1).\]
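
This distribution can also be verified by simulation. The sketch below (again Python, illustrative only, seeded for reproducibility) draws Brownian bridge paths on a fine grid via $B_t = W_t - (t/\tau)W_\tau$ and checks that the sample variance of the time integral is close to $\tau^3/12$:

```python
import numpy as np

# Monte Carlo check that Z = ∫_0^tau B_s ds has variance tau^3 / 12.
rng = np.random.default_rng(12345)
tau, n, m = 1.0, 200, 20000       # interval length, grid cells, sample paths
dt = tau / n
t = np.linspace(0.0, tau, n + 1)
dW = np.sqrt(dt) * rng.standard_normal((m, n))
W = np.concatenate([np.zeros((m, 1)), np.cumsum(dW, axis=1)], axis=1)
B = W - (t / tau) * W[:, -1:]     # bridge: vanishes at both endpoints
# trapezoidal rule for the time integral of each bridge path
Z = dt * (B[:, :-1] + B[:, 1:]).sum(axis=1) / 2
print(Z.var(), tau**3 / 12)
```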

For a normal variable $N \sim \mathcal{N}(\mu, \sigma^2)$, the expectation of the random variable $e^N$ is $\mathbb{E}[e^N] = e^{\mu + \sigma^2/2}$. Hence,

\[ \mathbb{E}[e^{Z_i}] = e^{((t_{i+1}- t_i)^3)/24}.\]

This is the contribution of this random variable to the mean of the exact solution. But in the implementation we actually draw from $Z_i$, not from $e^{Z_i}$.
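
The lognormal mean formula above is easy to check by quadrature. The following Python snippet (illustrative, with an arbitrary mesh spacing) integrates $e^z$ against the $\mathcal{N}(0, \Delta t^3/12)$ density and compares the result with $e^{\Delta t^3/24}$:

```python
import math

# Quadrature check that E[e^Z] = exp(sigma2 / 2) for Z ~ N(0, sigma2),
# with sigma2 = dt^3 / 12 for an (arbitrary, illustrative) mesh spacing dt.
dt = 0.5
sigma2 = dt**3 / 12
sigma = math.sqrt(sigma2)
a, b, n = -8 * sigma, 8 * sigma, 20000   # truncate the tails at +-8 sigma
h = (b - a) / n
total = 0.0
for k in range(n + 1):
    z = a + k * h
    w = 0.5 if k in (0, n) else 1.0      # trapezoidal weights
    total += w * math.exp(z - z * z / (2 * sigma2))
total *= h / math.sqrt(2 * math.pi * sigma2)
print(total, math.exp(dt**3 / 24))
```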

Another way is to use the result in Section 14.2 of Han & Kloeden 2017, which says that the exact distribution of $\int_0^\tau W_s\;\mathrm{d}s$, given the step $\Delta W = W_\tau - W_0 = W_\tau$, is

\[\int_0^{\tau} W_s\;\mathrm{d}s \sim \frac{\tau}{2}\Delta W + \sqrt{\frac{\tau^3}{12}}\mathcal{N}(0, 1) = \frac{\tau}{2}\Delta W + \mathcal{N}\left(0, \frac{\tau^3}{12}\right).\]

Then, for the distribution of the integral over a mesh interval $[t_i, t_{i+1}]$, given the endpoints $W_{t_i}$ and $W_{t_{i+1}}$, we use that $s \mapsto W_{t_i+s} - W_{t_i}$ is a standard Wiener process to find that

\[\begin{align*} \int_{t_i}^{t_{i+1}} W_s\;\mathrm{d}s & = W_{t_i}(t_{i+1} - t_i) + \int_{t_i}^{t_{i+1}} (W_s - W_{t_i})\;\mathrm{d}s \\ & = W_{t_i}(t_{i+1} - t_i) + \int_{0}^{t_{i+1} - t_i} (W_{t_i+s} - W_{t_i})\;\mathrm{d}s \\ & = W_{t_i}(t_{i+1} - t_i) + \frac{(t_{i+1} - t_i)}{2}(W_{t_{i+1}}-W_{t_{i}}) + Z_i \\ & = \frac{(W_{t_{i+1}}+W_{t_{i}})}{2}(t_{i+1} - t_i) + Z_i, \end{align*}\]

where $Z_i$ is as above. Thus, breaking down the sum over the mesh intervals:

\[\int_0^{t_j} W_s\;\mathrm{d}s = \sum_{i = 0}^{j-1} \int_{t_i}^{t_{i+1}} W_s\;\mathrm{d}s = \sum_{i=0}^{j-1} \left( \frac{(W_{t_{i+1}}+W_{t_{i}})}{2}(t_{i+1} - t_i) + Z_i\right).\]

In any case, once an Euler approximation is computed, along with realizations $\{W_{t_i}\}_{i=0}^n$ of a sample path of the noise, we consider an exact sample solution given by

\[ X_{t_j} = X_0 e^{\sum_{i = 0}^{j-1}\left(\frac{1}{2}\left(W_{t_i} + W_{t_{i+1}}\right)(t_{i+1} - t_i) + Z_i\right)},\]

for realizations $Z_i$ drawn from a normal distribution and scaled by the standard deviation $\sqrt{(t_{i+1} - t_i)^3/12}$. This is implemented by computing the integral recursively, via

\[ \begin{cases} I_j = I_{j-1} + \frac{1}{2}\left(W_{t_{j-1}} + W_{t_j}\right)(t_{j} - t_{j-1}) + Z_j, \\ Z_j = \sqrt{\frac{(t_{j} - t_{j-1})^3}{12}} R_j, \\ R_j \sim \mathcal{N}(0, 1), \\ \end{cases}\]

with $I_0 = 0$, and setting

\[ X_{t_j} = X_0 e^{I_j}.\]
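
The recursion above can be sketched as follows (a minimal Python sketch, not the package implementation; the function name `exact_samples` is illustrative, and `rng` is any object with a `gauss` method, such as `random.Random`):

```python
import math

def exact_samples(ts, ws, x0, rng):
    """Sample X at the mesh points ts, with the exact distribution
    conditioned on the Wiener values ws at those points."""
    xs = [x0]
    integral = 0.0                                         # running integral I_j
    for j in range(1, len(ts)):
        dtj = ts[j] - ts[j - 1]
        zj = math.sqrt(dtj**3 / 12) * rng.gauss(0.0, 1.0)  # Z_j ~ N(0, dtj^3/12)
        integral += (ws[j - 1] + ws[j]) * dtj / 2 + zj     # trapezoid + Z_j
        xs.append(x0 * math.exp(integral))
    return xs
```

Setting the Gaussian draws to zero reduces the recursion to the plain trapezoidal rule, which gives a quick way to test the bookkeeping.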

Numerical approximation

Setting up the problem

First, we load the necessary packages:

using Plots
using Random
using Distributions
using RODEConvergence

Then we set up some variables, starting by choosing the Xoshiro256++ pseudo-random number generator, and setting its seed for the sake of reproducibility:

rng = Xoshiro(123)

We set the right hand side of the equation:

f(t, x, y, p) = y * x

Next we set up the time interval and the initial distribution law for the initial value problem, which we take to be a Distributions.Normal random variable:

t0, tf = 0.0, 1.0
x0law = Normal()
Distributions.Normal{Float64}(μ=0.0, σ=1.0)

The noise is a WienerProcess starting at $y_0 = 0$:

y0 = 0.0
noise = WienerProcess(t0, tf, y0)
WienerProcess{Float64}(0.0, 1.0, 0.0)

There is no parameter in the equation, so we just set params to nothing.

params = nothing

The number of mesh points for the target solution and the approximations

ntgt = 2^16
ns = 2 .^ (4:14)
11-element Vector{Int64}:
    16
    32
    64
   128
   256
   512
  1024
  2048
  4096
  8192
 16384

and for a visualization of one of the sample approximations

nsample = ns[[1, 2, 3, 4]]
4-element Vector{Int64}:
  16
  32
  64
 128

Finally, we set up the number of samples for the Monte Carlo estimate of the strong error:

m = 1200

and add some information about the simulation, for the caption of the convergence figure.

info = (
    equation = "\$\\mathrm{d}X_t/\\mathrm{d}t = W_t X_t\$",
    noise = "a standard Wiener process noise \$\\{W_t\\}_t\$",
    ic = "\$X_0 \\sim \\mathcal{N}(0, 1)\$"
)

The target solution as described above is implemented as

target_solver! = function (xt::Vector{T}, t0::T, tf::T, x0::T, f::F, yt::Vector{T}, params::Q, rng::AbstractRNG) where {T, F, Q}
    axes(xt) == axes(yt) || throw(
        DimensionMismatch("The vectors `xt` and `yt` must match indices")
    )

    n = size(xt, 1)
    dt = (tf - t0) / (n - 1)
    i1 = firstindex(xt)
    xt[i1] = x0
    integral = zero(T)
    # standard deviation of Zᵢ = ∫ Bᵢₛ ds over each mesh interval
    zscale = sqrt(dt^3 / 12)
    for i in Iterators.drop(eachindex(xt, yt), 1)
        # trapezoidal rule plus the Brownian bridge correction Zᵢ
        integral += (yt[i] + yt[i1]) * dt / 2 + zscale * randn(rng)
        xt[i] = x0 * exp(integral)
        i1 = i
    end
end

and with that we construct the CustomMethod that solves the problem with this target_solver!:

target = CustomUnivariateMethod(target_solver!, rng)

The method for which we want to estimate the rate of convergence is, naturally, the Euler method, denoted RandomEuler:

method = RandomEuler()
RandomEuler{Float64, Distributions.Univariate}(Float64[])

Order of convergence

With all the parameters set up, we build the ConvergenceSuite:

suite = ConvergenceSuite(t0, tf, x0law, f, noise, params, target, method, ntgt, ns, m)
ConvergenceSuite{Float64, Distributions.Normal{Float64}, WienerProcess{Float64}, Nothing, typeof(Main.var"Main".f), 1, 1, CustomUnivariateMethod{Main.var"Main".var"#2#3", Random.Xoshiro}, RandomEuler{Float64, Distributions.Univariate}}(0.0, 1.0, Distributions.Normal{Float64}(μ=0.0, σ=1.0), Main.var"Main".f, WienerProcess{Float64}(0.0, 1.0, 0.0), nothing, CustomUnivariateMethod{Main.var"Main".var"#2#3", Random.Xoshiro}(Main.var"Main".var"#2#3"(), Random.Xoshiro(0xfefa8d41b8f5dca5, 0xf80cc98e147960c1, 0x20e2ccc17662fc1d, 0xea7a7dcb2e787c01, 0xf4e85a418b9c4f80)), RandomEuler{Float64, Distributions.Univariate}(Float64[]), 65536, [16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384], 1200, [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [6.9452245653375e-310, 2.481397197e-315, 2.36923805e-315, 2.481397197e-315, NaN, NaN, NaN, NaN, NaN, NaN  …  2.50560839e-315, 4.0e-323, NaN, 2.25078079e-315, 4.0e-323, 2.3189696e-315, 4.0e-323, NaN, 2.29560646e-315, 4.0e-323], [6.95273488881485e-310, NaN, 0.0, 2.46397849e-315, 0.0, 2.42592232e-315, 6.9452232881229e-310, 1.0e-323, 0.0, NaN  …  1.51387e-319, 2.280167105e-315, 2.50676411e-315, 2.280167105e-315, 2.50676411e-315, 0.0, 1.0123e-320, 2.411834335e-315, 8.487983164e-314, 0.0], [2.31493075e-315, 2.293594307e-315, 2.31493075e-315, 2.293594307e-315, NaN, 2.41126592e-315, NaN, 2.500879747e-315, NaN, 2.41183208e-315  …  2.410061825e-315, NaN, 2.24711443e-315, NaN, 2.296332064e-315, NaN, 2.437588277e-315, NaN, 2.23853658e-315, 6.4766e-319])

Then we are ready to compute the errors via solve:

@time result = solve(rng, suite)
  3.398272 seconds (4.20 M allocations: 396.284 MiB, 9.51% gc time, 47.83% compilation time)

The computed strong error for each resolution in ns is stored in result.errors, and a raw LaTeX table can be displayed for inclusion in the article:

table = generate_error_table(result, suite, info)
    \begin{center}
        \begin{tabular}[htb]{|r|l|l|l|}
            \hline N & dt & error & std err \\
            \hline \hline
            16 & 0.0625 & 0.0401 & 0.00339 \\
            32 & 0.0312 & 0.0206 & 0.00174 \\
            64 & 0.0156 & 0.0104 & 0.000966 \\
            128 & 0.00781 & 0.00527 & 0.0005 \\
            256 & 0.00391 & 0.00266 & 0.000253 \\
            512 & 0.00195 & 0.0013 & 0.00012 \\
            1024 & 0.000977 & 0.000665 & 6.48e-5 \\
            2048 & 0.000488 & 0.000333 & 3.17e-5 \\
            4096 & 0.000244 & 0.000172 & 1.61e-5 \\
            8192 & 0.000122 & 8.48e-5 & 7.35e-6 \\
            16384 & 6.1e-5 & 4.26e-5 & 4.04e-6 \\
            \hline
        \end{tabular}
    \end{center}

    \bigskip

    \caption{Mesh points (N), time steps (dt), strong error (error), and standard error (std err) of the Euler method for $\mathrm{d}X_t/\mathrm{d}t = W_t X_t$ for each mesh resolution $N$, with initial condition $X_0 \sim \mathcal{N}(0, 1)$ and a standard Wiener process noise $\{W_t\}_t$, on the time interval $I = [0.0, 1.0]$, based on $M = 1200$ sample paths for each fixed time step, with the target solution calculated with $65536$ points. The order of strong convergence is estimated to be $p = 0.989$, with the 95\% confidence interval $[0.8866, 1.0931]$.}

The calculated order of convergence is given by result.p:

println("Order of convergence `C Δtᵖ` with p = $(round(result.p, sigdigits=2)) and 95% confidence interval ($(round(result.pmin, sigdigits=3)), $(round(result.pmax, sigdigits=3)))")
Order of convergence `C Δtᵖ` with p = 0.99 and 95% confidence interval (0.887, 1.09)

Plots

We create a plot with the rate of convergence with the help of a plot recipe for ConvergenceResult:

plt = plot(result)
Example block output

For the sake of illustration, we plot the approximations of a sample target solution:

plt = plot(suite, ns=nsample)
Example block output

Finally, we also visualize the noise associated with this sample solution:

plot(suite, xshow=false, yshow=true, label="Wiener noise")
Example block output

This page was generated using Literate.jl.