Non-homogeneous linear RODE with a Wiener process noise term
In our second linear example, a Wiener process noise enters as the non-homogeneous term.
The equation
More precisely, we consider the RODE
\[ \begin{cases} \displaystyle \frac{\mathrm{d}X_t}{\mathrm{d} t} = - X_t + W_t, \qquad 0 \leq t \leq T, \\ \left. X_t \right|_{t = 0} = X_0, \end{cases}\]
where $\{W_t\}_{t\geq 0}$ is a Wiener process. The explicit solution is
\[ X_t = e^{-t}X_0 + \int_0^t e^{-(t-s)}W_s\;\mathrm{d}s.\]
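This formula follows from the integrating-factor method, applied pathwise to each realization of the noise: multiplying the equation by $e^t$ yields
\[ \frac{\mathrm{d}}{\mathrm{d}t}\left(e^t X_t\right) = e^t\left(\frac{\mathrm{d}X_t}{\mathrm{d}t} + X_t\right) = e^t W_t,\]
and integrating from $0$ to $t$ gives $e^t X_t = X_0 + \int_0^t e^s W_s\;\mathrm{d}s$.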
Computing a solution with the exact distribution
As in the first example, the integral $\int_0^{t_j} e^s W_s\;\mathrm{d}s$, and hence the exact solution, is not uniquely determined by the values $W_{t_j}$ of the noise at the mesh points, no matter how fine the mesh is. Thus we estimate the strong error by drawing sample solutions with the exact distribution conditioned on those mesh values.
We do that by first breaking down the integral into a sum of integrals over the mesh intervals:
\[\int_0^{t_j} e^s W_s\;\mathrm{d}s = \sum_{i = 0}^{j-1} \int_{t_i}^{t_{i+1}} e^s W_s\;\mathrm{d}s.\]
On each mesh interval, we consider again the Brownian bridge
\[ B_t^i = W_t - W_{t_i} - \frac{t - t_i}{t_{i+1}-t_i}(W_{t_{i+1}} - W_{t_i})\]
on $[t_i, t_{i+1}]$, which is independent of $W_{t_i}$ and $W_{t_{i+1}}$.
Then,
\[ \begin{align*} \int_{t_i}^{t_{i+1}} e^s W_s\;\mathrm{d}s & = \int_{t_i}^{t_{i+1}} e^s B_s^i\;\mathrm{d}s + \int_{t_i}^{t_{i+1}} e^s\left( W_{t_i} + \frac{s - t_i}{t_{i+1}-t_i}(W_{t_{i+1}} - W_{t_i})\right)\;\mathrm{d}s \\ & = W_{t_{i+1}}e^{t_{i+1}} - W_{t_i}e^{t_i} - \frac{W_{t_{i+1}}-W_{t_i}}{t_{i+1}-t_i}\left(e^{t_{i+1}}-e^{t_i}\right) + Z_i, \end{align*}\]
where
\[ Z_i = \int_{t_i}^{t_{i+1}} e^s B_s^i\;\mathrm{d}s.\]
As before, the term $Z_i$ is a Gaussian random variable with zero mean, and we need to compute its variance to characterize it completely. By a translation in time, it suffices to consider a Brownian bridge $\{B_t\}_{t\in [0, \tau]}$ on an interval $[0, \tau]$, with $\tau = t_{i+1} - t_i$, obtained from $B_t = W_t - (t/\tau)W_\tau$. Since $\mathbb{E}[W_tW_s] = \min\{t, s\}$, we have
\[ \mathbb{E}[B_tB_s] = \min\{t, s\} - \frac{ts}{\tau}.\]
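Indeed, expanding the product and using $\mathbb{E}[W_tW_\tau] = \min\{t, \tau\} = t$ for $0 \leq t \leq \tau$, we obtain
\[ \mathbb{E}[B_tB_s] = \mathbb{E}[W_tW_s] - \frac{s}{\tau}\mathbb{E}[W_tW_\tau] - \frac{t}{\tau}\mathbb{E}[W_sW_\tau] + \frac{ts}{\tau^2}\mathbb{E}[W_\tau^2] = \min\{t, s\} - \frac{ts}{\tau}.\]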
Hence,
\[ \begin{align*} \mathbb{E}\left[\left(\int_0^{\tau} e^s B_s\;\mathrm{d}s\right)^2\right] & = \mathbb{E}\left[\int_0^{\tau} \int_0^\tau e^s e^t B_sB_t\;\mathrm{d}s\;\mathrm{d}t\right] \\ & = \int_0^\tau \int_0^\tau e^s e^t \mathbb{E}[B_sB_t] \;\mathrm{d}s\;\mathrm{d}t \\ & = \int_0^\tau \int_0^\tau e^s e^t\left(\min\{t, s\} - \frac{ts}{\tau}\right) \;\mathrm{d}s\;\mathrm{d}t \\ & = \int_0^\tau \int_0^t e^s e^t s\;\mathrm{d}s\;\mathrm{d}t + \int_0^\tau \int_t^\tau e^s e^t t\;\mathrm{d}s\;\mathrm{d}t - \int_0^\tau \int_0^\tau e^s e^t \frac{ts}{\tau} \;\mathrm{d}s\;\mathrm{d}t \\ & = \int_0^\tau e^t(te^t-e^t+1)\;\mathrm{d}t + \int_0^\tau te^t(e^\tau - e^t)\;\mathrm{d}t \\ & \qquad - \int_0^\tau \frac{te^t}{\tau}\left(\tau e^\tau - e^\tau + 1\right)\;\mathrm{d}t \\ & = \left(\tau - \frac{3}{2}\right)e^{2\tau} + 2e^\tau - \frac{1}{2} - \frac{\left(\tau e^\tau - e^\tau + 1\right)^2}{\tau} \\ & = \frac{\tau^3}{12} + \mathcal{O}(\tau^4), \end{align*}\]
where the expansion in the last step holds as $\tau \to 0$.
Back to $Z_i$, this means that $Z_i$ is a zero-mean Gaussian with variance given by the expression above with $\tau = t_{i+1} - t_i$, so that, to leading order in the time step,
\[ Z_i \approx \mathcal{N}\left(0, \frac{(t_{i+1}- t_i)^3}{12}\right) = \frac{\sqrt{(t_{i+1} - t_i)^3}}{\sqrt{12}}\mathcal{N}(0, 1).\]
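As a quick numerical sanity check (an illustration added here, not part of the original code; the function name `bridge_integral_variance` is made up), we can approximate the covariance double integral by a midpoint Riemann sum and verify that, for a small interval length $\tau$, it is close to $\tau^3/12$:

```julia
# Midpoint-rule approximation of the double integral
#   ∫₀^τ ∫₀^τ e^(s+t) (min(s,t) - s*t/τ) ds dt,
# which is the variance of ∫₀^τ e^s B_s ds for a Brownian bridge B on [0, τ].
function bridge_integral_variance(τ; n = 400)
    dt = τ / n
    mid = (0.5:n) .* dt  # midpoints of the n subintervals of [0, τ]
    v = 0.0
    for s in mid, t in mid
        v += exp(s + t) * (min(s, t) - s * t / τ) * dt^2
    end
    return v
end

τ = 0.01
v = bridge_integral_variance(τ)
# for this small τ, v agrees with τ^3 / 12 to within a few percent
```
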
Summing up the terms and telescoping the first two, using $t_0 = 0$ and $W_0 = 0$, we find that
\[ \begin{align*} \int_0^{t_j} e^s W_s\;\mathrm{d}s & = \sum_{i = 0}^{j-1} \int_{t_i}^{t_{i+1}} e^s W_s\;\mathrm{d}s \\ & = \sum_{i = 0}^{j-1} \left( W_{t_{i+1}}e^{t_{i+1}} - W_{t_i}e^{t_i} - \frac{W_{t_{i+1}}-W_{t_i}}{t_{i+1}-t_i}\left(e^{t_{i+1}}-e^{t_i}\right) + Z_i\right) \\ & = W_{t_j}e^{t_j} - \sum_{i = 0}^{j-1} \left( \frac{W_{t_{i+1}}-W_{t_i}}{t_{i+1}-t_i}\left(e^{t_{i+1}}-e^{t_i}\right) + Z_i\right). \end{align*}\]
Thus, once an Euler approximation is computed, along with the values $\{W_{t_i}\}_{i=0}^n$ of a sample path of the noise, we consider a sample target solution given by
\[ X_{t_j} = e^{-{t_j}}\left(X_0 - \sum_{i=0}^{j-1} \left(\frac{W_{t_{i+1}} - W_{t_i}}{t_{i+1}-t_i}\left(e^{t_{i+1}} - e^{t_i}\right) + Z_i\right)\right) + W_{t_j},\]
for realizations $Z_i$ drawn from a standard normal distribution and scaled by the standard deviation $\sqrt{(t_{i+1} - t_i)^3/12}$. This is implemented by computing the integral recursively, via
\[ \begin{cases} I_j = I_{j-1} + \frac{W_{t_j} - W_{t_{j-1}}}{t_{j} - t_{j-1}}\left(e^{t_{j}} - e^{t_{j-1}}\right) + Z_j, \\ Z_j = \sqrt{\frac{(t_{j} - t_{j-1})^3}{12}} R_j, \\ R_j \sim \mathcal{N}(0, 1), \\ \end{cases}\]
with $I_0 = 0$, and setting
\[ X_{t_j} = e^{-t_j}\left(X_0 - I_j\right) + W_{t_j}.\]
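The recursion can be sketched in a self-contained snippet as follows (for illustration only: the function name `exact_sample` and the parameter choices are my own; the actual implementation used by the convergence suite is the `target_solver!` defined in the next section):

```julia
using Random

# Illustrative sketch of the recursion for I_j and X_{t_j} on a uniform mesh,
# sampling the Wiener path by accumulating independent N(0, dt) increments.
function exact_sample(rng::AbstractRNG, n::Int, t0::Float64, tf::Float64, x0::Float64)
    dt = (tf - t0) / n
    t = range(t0, tf, length = n + 1)
    W = [0.0; cumsum(sqrt(dt) .* randn(rng, n))]  # W_{t_0} = 0
    X = similar(W)
    X[1] = x0
    I = 0.0                   # running value of the integral I_j
    zscale = sqrt(dt^3 / 12)  # standard deviation of Z_j
    for j in 2:n+1
        I += (W[j] - W[j-1]) * (exp(t[j]) - exp(t[j-1])) / dt + zscale * randn(rng)
        X[j] = exp(-t[j]) * (x0 - I) + W[j]
    end
    return t, W, X
end

t, W, X = exact_sample(Xoshiro(123), 2^8, 0.0, 1.0, 0.1)
```
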
Numerical approximation
Setting up the problem
We load the necessary packages
using Plots
using Random
using Distributions
using RODEConvergence
Then we set up the relevant variables, as in the first example:
rng = Xoshiro(123)
f(t, x, y, p) = - x + y
t0, tf = 0.0, 1.0
x0law = Normal()
y0 = 0.0
noise = WienerProcess(t0, tf, y0)
params = nothing
ntgt = 2^16
ns = 2 .^ (4:14)
11-element Vector{Int64}:
    16
    32
    64
   128
   256
   512
  1024
  2048
  4096
  8192
 16384
The numbers of mesh points for a visualization of some of the sample approximations are set to
nsample = ns[[1, 2, 3, 4]]
4-element Vector{Int64}:
  16
  32
  64
 128
The number of simulations for the Monte Carlo estimate is set to
m = 100
and the info about the simulation, used for the caption of the convergence figure, is given by
info = (
equation = "\$\\mathrm{d}X_t/\\mathrm{d}t = -X_t + W_t\$",
noise = "a standard Wiener process noise \$\\{W_t\\}_t\$",
ic = "\$X_0 \\sim \\mathcal{N}(0, 1)\$"
)
We define the target solution as described above.
target_solver! = function (xt::Vector{T}, t0::T, tf::T, x0::T, f::F, yt::Vector{T}, params::Q, rng::AbstractRNG) where {T, F, Q}
    axes(xt) == axes(yt) || throw(
        DimensionMismatch("The vectors `xt` and `yt` must match indices")
    )
    n = size(xt, 1)
    dt = (tf - t0) / (n - 1)
    i1 = firstindex(xt)
    xt[i1] = x0
    integral = zero(T)        # running value of the integral I_j
    ti1 = zero(T)
    zscale = sqrt(dt^3 / 12)  # standard deviation of Z_j
    for i in Iterators.drop(eachindex(xt, yt), 1)
        ti = ti1 + dt
        # accumulate I_j and set X_{t_j} = e^{-t_j}(X_0 - I_j) + W_{t_j}
        integral += (yt[i] - yt[i1]) * (exp(ti) - exp(ti1)) / dt + zscale * randn(rng)
        xt[i] = exp(-ti) * (x0 - integral) + yt[i]
        ti1 = ti
        i1 = i
    end
end
end
and with that we construct the CustomMethod that solves the problem with this target_solver!:
target = CustomUnivariateMethod(target_solver!, rng)
The method for which we want to estimate the rate of convergence is, naturally, the Euler method, implemented via RandomEuler:
method = RandomEuler()
RandomEuler{Float64, Distributions.Univariate}(Float64[])
Order of convergence
With all the parameters set up, we build the ConvergenceSuite:
suite = ConvergenceSuite(t0, tf, x0law, f, noise, params, target, method, ntgt, ns, m)
Then we are ready to compute the errors via solve:
@time result = solve(rng, suite)
  0.647812 seconds (333.33 k allocations: 197.545 MiB, 5.48% gc time, 68.42% compilation time)
The computed strong error for each resolution in ns is stored in result.errors, and a raw LaTeX table can be displayed for inclusion in the article:
table = generate_error_table(result, suite, info)
\begin{center}
\begin{tabular}[htb]{|r|l|l|l|}
\hline N & dt & error & std err \\
\hline \hline
16 & 0.0625 & 0.0196 & 0.00143 \\
32 & 0.0312 & 0.00944 & 0.000747 \\
64 & 0.0156 & 0.00445 & 0.000372 \\
128 & 0.00781 & 0.00222 & 0.000176 \\
256 & 0.00391 & 0.00122 & 8.44e-5 \\
512 & 0.00195 & 0.000557 & 4.28e-5 \\
1024 & 0.000977 & 0.000285 & 2.36e-5 \\
2048 & 0.000488 & 0.000154 & 1.16e-5 \\
4096 & 0.000244 & 7.95e-5 & 6.05e-6 \\
8192 & 0.000122 & 3.67e-5 & 2.88e-6 \\
16384 & 6.1e-5 & 1.77e-5 & 1.36e-6 \\
\hline
\end{tabular}
\end{center}
\bigskip
\caption{Mesh points (N), time steps (dt), strong error (error), and standard error (std err) of the Euler method for $\mathrm{d}X_t/\mathrm{d}t = -X_t + W_t$ for each mesh resolution $N$, with initial condition $X_0 \sim \mathcal{N}(0, 1)$ and a standard Wiener process noise $\{W_t\}_t$, on the time interval $I = [0.0, 1.0]$, based on $M = 100$ sample paths for each fixed time step, with the target solution calculated with $65536$ points. The order of strong convergence is estimated to be $p = 0.998$, with the 95\% confidence interval $[0.9162, 1.0798]$.}
The calculated order of convergence is given by result.p:
println("Order of convergence `C Δtᵖ` with p = $(round(result.p, sigdigits=2)) and 95% confidence interval ($(round(result.pmin, sigdigits=3)), $(round(result.pmax, sigdigits=3)))")
Order of convergence `C Δtᵖ` with p = 1.0 and 95% confidence interval (0.916, 1.08)
Plots
We draw a plot of the rate of convergence with the help of a plot recipe for ConvergenceResult:
plt = plot(result)
For the sake of illustration, we plot an approximation of a sample target solution:
plot(suite, ns=nsample)
We can also visualize the noise associated with this sample solution:
plot(suite, xshow=false, yshow=true, label="Wiener noise")
This page was generated using Literate.jl.