Linear RODE with fractional Brownian motion
Here we consider a linear equation driven by fractional Brownian motion (fBm) noise. This noise has a parameter $H,$ in the range $0 < H < 1,$ called the Hurst parameter, which controls the correlation between the increments of the process.
When $0 < H < 1/2,$ the increments are negatively correlated, meaning an increment is more likely to be opposite to the previous one, yielding "rougher" sample paths, while for $1/2 < H < 1,$ the correlation is positive, yielding "smoother" paths. For $H = 1/2,$ the increments are uncorrelated and the fBm reduces to a Wiener process.
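The sign of these correlations can be checked directly from the autocovariance of the fBm increments. The following snippet (the helper `gamma_fbm` is ours, just for illustration) evaluates the lag-$n$ autocovariance of the unit-time increments:

```julia
# Autocovariance of unit-time fBm increments at lag n:
# γ_H(n) = (|n+1|^{2H} - 2|n|^{2H} + |n-1|^{2H}) / 2.
# Illustrative helper, not part of RODEConvergence.jl.
gamma_fbm(n, H) = (abs(n + 1)^(2H) - 2abs(n)^(2H) + abs(n - 1)^(2H)) / 2

# The lag-1 autocovariance changes sign at H = 1/2:
@show gamma_fbm(1, 0.25)  # negative: rougher paths
@show gamma_fbm(1, 0.5)   # zero: standard Brownian motion
@show gamma_fbm(1, 0.75)  # positive: smoother paths
```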
For $H \neq 1/2,$ this process is not a Markov process and, in particular, is not a semi-martingale. Thus the result in the paper does not apply, and the strong order of convergence is not necessarily 1. A recent result (see Wang, Cao, Han, & Kloeden (2021)) estimates the order of convergence to be the Hölder exponent of the sample paths, which is exactly $H.$ We show, however, for this particular linear equation with the fBm in the non-homogeneous term, that the convergence is of order $\min\{H + 1/2, 1\},$ hence higher than the previously known rate $H.$ This rate coincides with order 1 in the range $1/2 \leq H < 1,$ but is smaller than 1 in the range $0 < H < 1/2,$ which is consistent with the fact that, as said above, the process is not a semi-martingale for $H$ in this range.
The equation
More precisely, we consider the RODE
\[ \begin{cases} \displaystyle \frac{\mathrm{d}X_t}{\mathrm{d} t} = - X_t + B^H_t, \qquad 0 \leq t \leq T, \\ \left. X_t \right|_{t = 0} = X_0, \end{cases}\]
where the noise $\{B^H_t\}_t$ is assumed to be a fractional Brownian motion (fBm) with Hurst parameter $0 < H < 1$.
The explicit solution is
\[ X_t = e^{-t}X_0 + \int_0^t e^{-(t-s)}B^H_s\;\mathrm{d}s.\]
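For completeness, this expression follows from the usual integrating factor argument: multiplying the equation by $e^t$ gives

\[ \frac{\mathrm{d}}{\mathrm{d}t}\left( e^t X_t \right) = e^t \frac{\mathrm{d}X_t}{\mathrm{d}t} + e^t X_t = e^t B^H_t, \]

and integrating from $0$ to $t$ and multiplying the result by $e^{-t}$ yields the expression for $X_t.$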
We do not compute this integral solution numerically. Instead, we compare the Euler approximation on each coarse mesh with an Euler approximation on a much finer mesh.
Numerical approximation
Setting up the problem
First we load the necessary packages:

```julia
using Plots
using Random
using LinearAlgebra
using Distributions
using RODEConvergence
```

Then we set up some variables:
```julia
rng = Xoshiro(123)
f(t, x, y, p) = - x + y
params = nothing
t0 = 0.0
tf = 1.0
x0law = Normal()
ntgt = 2^18
ns = 2 .^ (6:9)
nsample = ns[[1, 2, 3, 4]]
```

The number of simulations for the Monte Carlo estimate is set to

```julia
m = 600
```

The noise starts at `y0 = 0.0`, and we consider several values of the Hurst parameter:

```julia
y0 = 0.0
hursts = Iterators.flatten((0.1:0.1:0.5, 0.7:0.2:0.9))
```
We initialize the noise process with the first Hurst value:

```julia
noise = FractionalBrownianMotionProcess(t0, tf, y0, first(hursts), ntgt)
```

(The returned `FractionalBrownianMotionProcess` object carries pre-allocated buffers and FFTW plans used for sampling the fBm paths; its lengthy display is omitted here.)

And we add some information about the simulation, for the caption of the convergence figure:
```julia
info = (
    equation = "\$\\mathrm{d}X_t/\\mathrm{d}t = -X_t + B^H_t\$",
    noise = "fBm noise",
    ic = "\$X_0 \\sim \\mathcal{N}(0, 1)\$"
)
```

We define the target solution as the Euler approximation computed with the target number `ntgt` of mesh points; this fine-mesh solution is the one against which we estimate the rate of convergence on the coarser meshes defined by `ns`.
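For reference, on a uniform mesh $t_j = j\,\Delta t,$ the Euler scheme for this equation reads $X_{j+1} = X_j + \Delta t\,(-X_j + Y_{t_j}).$ Here is a minimal standalone sketch (the function `euler_linear_rode` is illustrative, not part of RODEConvergence.jl):

```julia
# Minimal Euler scheme for dX/dt = -X + Y_t on a uniform mesh,
# given precomputed noise values y[j] ≈ Y at the (j-1)-th mesh point.
# Illustrative sketch only; not the RODEConvergence.jl implementation.
function euler_linear_rode(x0, y, t0, tf)
    n = length(y) - 1
    dt = (tf - t0) / n
    x = Vector{Float64}(undef, n + 1)
    x[1] = x0
    for j in 1:n
        x[j+1] = x[j] + dt * (-x[j] + y[j])
    end
    return x
end

# Sanity check: with zero noise the equation reduces to x' = -x,
# whose exact solution at t = 1 is exp(-1).
x = euler_linear_rode(1.0, zeros(1025), 0.0, 1.0)
```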
```julia
target = RandomEuler()
method = RandomEuler()
```

Order of convergence
With all the parameters set up, we build the convergence suite:
```julia
allctd = @allocated suite = ConvergenceSuite(t0, tf, x0law, f, noise, params, target, method, ntgt, ns, m)
pwr = Int(div(round(log10(allctd)), 3)) # approximate since Kb = 1024 bytes not 1000 and so on
@info "`suite` memory: $(round(allctd / 10^(3pwr), digits=2)) $(("bytes", "Kb", "Mb", "Gb", "Tb")[pwr+1])"
```

```
[ Info: `suite` memory: 6.03 Mb
```

Then we are ready to compute the errors:

```julia
@time result = solve(rng, suite)
```

```
 31.560774 seconds (313.91 k allocations: 14.374 MiB, 1.29% compilation time)
```

The computed strong error for each resolution in `ns` is stored in `result.errors`, and a raw LaTeX table can be displayed for inclusion in the article:
```julia
table = generate_error_table(result, suite, info)
```

```
\begin{center}
\begin{tabular}[htb]{|r|l|l|l|}
\hline N & dt & error & std err \\
\hline \hline
64 & 0.0156 & 0.0268 & 0.000843 \\
128 & 0.00781 & 0.0176 & 0.000551 \\
256 & 0.00391 & 0.0112 & 0.00034 \\
512 & 0.00195 & 0.00753 & 0.000223 \\
\hline
\end{tabular}
\end{center}
\bigskip
\caption{Mesh points (N), time steps (dt), strong error (error), and standard error (std err) of the Euler method for $\mathrm{d}X_t/\mathrm{d}t = -X_t + B^H_t$ for each mesh resolution $N$, with initial condition $X_0 \sim \mathcal{N}(0, 1)$ and fBm noise, on the time interval $I = [0.0, 1.0]$, based on $M = 600$ sample paths for each fixed time step, with the target solution calculated with $262144$ points. The order of strong convergence is estimated to be $p = 0.615$, with the 95\% confidence interval $[0.5279, 0.7018]$.}
```

The calculated order of convergence is given by `result.p`:
```julia
println("Order of convergence `C Δtᵖ` with p = $(round(result.p, sigdigits=2)) and 95% confidence interval ($(round(result.pmin, sigdigits=3)), $(round(result.pmax, sigdigits=3)))")
```

```
Order of convergence `C Δtᵖ` with p = 0.61 and 95% confidence interval (0.528, 0.702)
```

Plots
We create a plot of the rate of convergence with the help of a plot recipe for `ConvergenceResult`:

```julia
plot(result)
```

For the sake of illustration, we plot a sample of an approximation along with the corresponding target solution:

```julia
plot(suite, ns=nsample)
```

We can also visualize the noise:

```julia
plot(suite, xshow=false, yshow=true)
```

We save the order of convergence obtained:
```julia
ps = [result.p]
pmins = [result.pmin]
pmaxs = [result.pmax]
nothing
```
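As a side check, the reported order is essentially the least-squares slope of the log of the strong errors against the log of the time steps; it can be reproduced from the values in the error table above (a sketch with the table values hard-coded):

```julia
# Reproduce the least-squares estimate of the order p from the
# time steps and strong errors in the table above (error ≈ C Δtᵖ).
dts  = [0.0156, 0.00781, 0.00391, 0.00195]
errs = [0.0268, 0.0176, 0.0112, 0.00753]
A = [ones(length(dts)) log.(dts)]
logC, p = A \ log.(errs)  # intercept log(C) and slope p of log(error) vs log(Δt)
```

The slope comes out close to the $p = 0.615$ reported in the table caption.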
Now we vary the Hurst parameter and record the corresponding order of convergence.
```julia
@info "h = $(first(hursts)); p = $(result.p)"

for h in Iterators.drop(hursts, 1)
    loc_noise = FractionalBrownianMotionProcess(t0, tf, y0, h, ntgt)
    loc_suite = ConvergenceSuite(t0, tf, x0law, f, loc_noise, params, target, method, ntgt, ns, m)
    @time loc_result = solve(rng, loc_suite)
    @info "h = $h => p = $(loc_result.p) ($(loc_result.pmin), $(loc_result.pmax))"
    push!(ps, loc_result.p)
    push!(pmins, loc_result.pmin)
    push!(pmaxs, loc_result.pmax)
end
```

```
[ Info: h = 0.1; p = 0.6149806924419149
 31.114770 seconds (173 allocations: 147.906 KiB)
[ Info: h = 0.2 => p = 0.7014119810801731 (0.6174065849133366, 0.7854554112038732)
 31.092259 seconds (173 allocations: 147.906 KiB)
[ Info: h = 0.3 => p = 0.8538555447440417 (0.7648821996401457, 0.9430256887398244)
 31.151563 seconds (173 allocations: 147.906 KiB)
[ Info: h = 0.4 => p = 0.9174000369278631 (0.8275865613445237, 1.0070488871268841)
 12.745040 seconds (173 allocations: 147.906 KiB)
[ Info: h = 0.5 => p = 1.0155487191835448 (0.9274563797763518, 1.1042781940331539)
 31.085660 seconds (173 allocations: 147.906 KiB)
[ Info: h = 0.7 => p = 1.0039594190561711 (0.9179018614386478, 1.0898731923966176)
 31.085014 seconds (173 allocations: 147.906 KiB)
[ Info: h = 0.9 => p = 1.0030634299687258 (0.9178604446066508, 1.0882619195104817)
```

We print them out for inclusion in the paper:
```julia
[collect(hursts) ps pmins pmaxs]
```

```
7×4 Matrix{Float64}:
 0.1  0.614981  0.527876  0.701793
 0.2  0.701412  0.617407  0.785455
 0.3  0.853856  0.764882  0.943026
 0.4  0.9174    0.827587  1.00705
 0.5  1.01555   0.927456  1.10428
 0.7  1.00396   0.917902  1.08987
 0.9  1.00306   0.91786   1.08826
```

The following plot helps visualize the result.
```julia
plt = plot(ylims=(-0.1, 1.2), xaxis="H", yaxis="p", guidefont=10, legend=:bottomright)
scatter!(plt, collect(hursts), ps, yerror=(ps .- pmins, pmaxs .- ps), label="computed")
plot!(plt, 0.0:0.5:1.0, p -> min(p + 0.5, 1.0), linestyle=:dash, label="current")
plot!(plt, 0.0:0.5:1.0, p -> p, linestyle=:dot, label="previous")
```

Strong order $p$ of convergence of the Euler method for $\mathrm{d}X_t/\mathrm{d}t = -X_t + B^H_t$ with an fBm process $\{B^H_t\}_t,$ for various values of the Hurst parameter $H$ (scattered dots: computed values; dashed line: the rate $p = \min\{H + 1/2, 1\}$ obtained in this work; dotted line: the previously known rate $p = H$).
This page was generated using Literate.jl.