22 February 2021 | R. Rosa
We addressed, in the previous Time average bounds via Sum of Squares post, the problem of estimating the asymptotic limit of time averages of quantities related to the solutions of a differential equation.
Here, the aim is to consider an example, namely the Van der Pol oscillator, and use two numerical methods to obtain those bounds: via time evolution of the system and via a convex semidefinite programming using Sum of Squares (SoS), both discussed in the previous post.
This example is addressed in detail in Fantuzzi, Goluskin, Huang, and Chernyshenko (2016). My main motivation is to visualize the auxiliary function that yields the optimal bound via SoS. That is the main reason to choose a two-dimensional system.
That bound depends on the chosen degree $m$ for the auxiliary function appearing in the SoS method. Here are the results for specific values of $m$:
[Interactive 3D visualizations of the optimal auxiliary function $V = V(x,y)$ for each degree $m$, with the limit cycle drawn on the plane and lifted to the graph of $V$.]
We discuss, below, the details leading to these results and how to appreciate the plots above. (Notice you can rotate, pan and zoom the previous and subsequent images!)
The Van der Pol oscillator originated in the study of electric circuits and appears in several other phenomena, from control theory to biology and seismology.
In its simplest, and nondimensional, form, the equation reads
$$\frac{d^2x}{dt^2} - \mu(1 - x^2)\frac{dx}{dt} + x = 0, \tag{1}$$
where $\mu > 0$. This equation has the stationary solution $x(t) = 0$, $t \geq 0$, and all other solutions converge to a limit cycle that oscillates around the origin. The form of the limit cycle and the convergence rate to the limit cycle change according to the value of the parameter $\mu$.
Below you will find the evolution of both $x(t)$ and its derivative $dx(t)/dt$ for different values of $\mu$, and with the initial conditions $x(0) = 4.0$ and $dx(0)/dt = 6.0$. Notice that, for small $\mu$, the solution converges faster to the limit cycle, which is nearly a sinusoidal wave; while for large $\mu$, the solution converges slightly slower and develops spikes, as if firing off some signal (think neurons in biological applications).
[Evolution of $x(t)$ and $dx(t)/dt$, for $0 \leq t \leq 80$, with $\mu = 0.2$, $1.0$, $2.0$, and $4.0$.]
Another common representation of the dynamics of an autonomous system is that of a phase portrait, in which we draw the orbit, or trajectory, in the phase space of the system. In this case, the phase space is the plane $\mathbb{R}^2$ of points $(x, y)$, where $y = dx/dt$. We rewrite the second-order differential equation as a system of first order equations
$$\frac{dx}{dt} = y, \qquad \frac{dy}{dt} = \mu(1 - x^2)y - x. \tag{2}$$
The same trajectories above, for the two extreme values of $\mu$, are seen below in phase space.
[Phase portraits of the trajectories above, for $\mu = 0.2$ and $\mu = 4.0$.]
The post Time average bounds via Sum of Squares addressed the problem of estimating the time average
$$\bar\Phi = \limsup_{T \rightarrow \infty} \frac{1}{T}\int_0^T \varphi(u(t))\,dt$$
for the solutions $u = u(t)$, $t \geq 0$, of a differential equation
$$\frac{du}{dt} = F(u),$$
where $\varphi$ is continuous; $F$ is some locally Lipschitz function acting in some finite-dimensional space $X = \mathbb{R}^n$, $n \in \mathbb{N}$; and assuming the solutions generate a continuous semigroup $\{S(t)\}_{t \geq 0}$, where $S(t)u_0 = u(t)$.
We exemplify this here with the Van der Pol system (2) and with the quantity
$$\varphi(x, y) = x^2 + y^2.$$
The most direct way to numerically estimate the bound is via a numerical evolution of the system, for a sufficiently long time, and taking the corresponding time average. Since the system has a globally attracting limit cycle (except for the unstable fixed point at the origin), we may simply consider a single trajectory for the estimate.
[Evolution of the time average $\varphi_u(T) = \frac{1}{T}\int_0^T \|u(t)\|^2\,dt$, for $0 < T \leq 5000$, and the resulting bound on $\bar\Phi$.]
Notice we integrated for a very long time to have a proper convergence of the time average. Alternatively, knowing that the contribution of any fixed initial time interval to the time average vanishes as $T \rightarrow \infty$, we could start integrating at a later time and avoid the initial large values, due to the transient behavior, and the initial bumps, due to the periodic spikes in the solution and its derivative:
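For instance, here is a rough sketch of such an alternative, in which the average is taken only over $[T_0, T_{\max}]$. The starting time $T_0 = 100$ is purely an illustrative choice, and the system and the observable are the same as in the Julia codes section at the end of the post.

using DifferentialEquations
using QuadGK

# Van der Pol system, as in the Julia codes section below
function vdp_du(du, u, params, t)
    x, y = u
    μ, = params
    du[1] = y
    du[2] = μ*(1 - x^2)*y - x
end

ϕ(u) = sum(u.^2)
μ = 4.0
u0 = [4.0, 6.0]
T0, Tmax = 100.0, 5000.0   # T0 is an illustrative choice for skipping the transient
sol = DifferentialEquations.solve(ODEProblem(vdp_du, u0, (0.0, Tmax), [μ]))
# time average of ϕ(u(t)) over [T0, Tmax] only, avoiding the initial transient
ϕ_mean_late = quadgk(t -> ϕ(sol(t)), T0, Tmax, rtol = 1e-6)[1] / (Tmax - T0)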
In general, however, we may not have such a clear understanding of the dynamics so as to start from specific points or at specific times.
We now estimate the upper bound via optimization. As we have seen in Time average bounds via Sum of Squares, since $F$ and $\varphi$ are polynomials, an upper bound is given by
$$\bar\Phi \leq \min\left\{\gamma;\; \gamma \in \mathbb{R},\; V \in \mathcal{P}_m,\; \gamma - \varphi(u) - F(u)\cdot\nabla V(u) \geq 0,\ \forall u \in \mathbb{R}^n\right\}, \tag{7}$$
where $\mathcal{P}_m$ denotes the set of real polynomials on $\mathbb{R}^n$ with degree at most $m$, and where, in this case, $n = 2$. We expect the bound to become sharper as we increase the degree $m$. Recall, as discussed in the previous post, that the degree $m$ has to be even; otherwise the polynomial $\gamma - \varphi(u) - F(u)\cdot\nabla V(u)$ has no chance of being nonnegative.
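This parity requirement can be checked directly: with $\varphi$ of degree 2 and $F$ of degree 3, the polynomial $\gamma - \varphi(u) - F(u)\cdot\nabla V(u)$ has degree $m + 2$, which is odd whenever $m$ is odd, and a polynomial of odd maximal degree changes sign. Here is a small illustrative check with DynamicPolynomials; the generic-looking polynomial V below is just for the sake of computing degrees and is not part of the method.

using DynamicPolynomials

@polyvar x y
μ = 4.0
f = [y, μ*(1 - x^2)*y - x]            # F(x, y), of degree 3
ϕpoly = x^2 + y^2                     # φ(x, y), of degree 2
for m in (3, 4)
    V = sum(monomials([x, y], 1:m))   # a generic-looking polynomial of degree m
    lie = sum(differentiate(V, [x, y]) .* f)   # F(u)⋅∇V(u)
    println("deg V = $m  =>  deg(φ + F⋅∇V) = ", maxdegree(ϕpoly + lie))
end

For $m = 3$ this prints degree 5, so the constraint polynomial cannot be nonnegative on the whole plane, while for $m = 4$ it prints degree 6.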
Below is the result of the estimate, for some choices of $m$:
[Table of the bounds obtained for each degree $m$ of the auxiliary function $V$.]
Here is a corresponding plot, but skipping $m = 4$, for scaling reasons:
[Plot of the bounds on $\bar\Phi$ for the degrees $m = 6, 8, 10$ of the auxiliary function $V$.]
Looking at (7) and its feasibility condition
$$\gamma - \varphi(u) - F(u)\cdot\nabla V(u) \geq 0, \quad \forall u \in \mathbb{R}^n,$$
we see that $\gamma$ is smaller, the greater the term $-F(u)\cdot\nabla V(u)$, i.e. the "closer" $F(u)$ points to $-\nabla V(u)$, or, in other words, the faster the orbit descends along $V$, whenever possible.
The best situation is that of a gradient flow, but that is not always the case. It may not even be possible to have the orbit descend along $V$ all the time; just think of a periodic orbit, with a nontrivial auxiliary function $V$, such as in our case. Nevertheless, the longer the orbit descends along $V$, the better.
With that in mind, you may go back to the visualizations of the auxiliary function $V$ given in the beginning of the post. Notice how the condition just described (of the orbit attempting to descend along the optimal $V$) improves as the degree of $V$ is allowed to increase. You can rotate, pan, and zoom the figures as you wish, to better observe this behavior of the limit cycle with respect to $V$.
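One rough way to quantify this is to check on what fraction of the sampled points of the orbit the Lie derivative $F(u)\cdot\nabla V(u)$ of the optimal $V$ is negative, for each degree. Here is a sketch of such a check; it assumes the objects vdp_sol, vdp_sol_times, optim, and Vdeg_range defined in the Julia codes section below, evaluates polynomials positionally as done in the visualization code there, and the helper F is just the right-hand side of (2).

using DynamicPolynomials
using SumOfSquares

μ = 4.0
F(x, y) = [y, μ*(1 - x^2)*y - x]             # right-hand side of the Van der Pol system (2)
for j in 1:length(optim)
    V = value(optim[j][1])                   # optimal auxiliary polynomial for this degree
    ∇V = differentiate(V, variables(V))      # gradient of V
    descending = count(vdp_sol_times) do t
        xt, yt = vdp_sol[end](t)             # point on the orbit for μ = 4
        gradV = [g(xt, yt) for g in ∇V]      # ∇V evaluated at that point
        sum(gradV .* F(xt, yt)) < 0          # does the orbit descend along V there?
    end
    pct = round(100 * descending / length(vdp_sol_times), digits = 1)
    println("deg V = $(Vdeg_range[j]): descending along V at about $pct% of the sampled times")
end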
We show below a simple performance comparison between the time integration method done in the function get_means_vdp()
and the SoS optimization method done in get_bound_vdp_sos()
(see the codes in Section Julia codes). For that, we use the JuliaCI/BenchmarkTools.jl package. Keep in mind the functions being compared have not been optimized and this is not a thorough assessment of both methods.
For the time integration, instead of integrating from $t = 0$ to $t = T_{\max}$, with $T_{\max} = 5000$, as done in one of the plots above, we only go up to $T_{\max} = 610$, which is enough to reach the same bound as that given by the SoS method with the auxiliary function of degree $m = 10$. Here is the result of @btime.
julia> using BenchmarkTools
julia> @btime get_means_vdp($u0, $μ, $ϕ, 610)[2][end]
768.929 ms (3115182 allocations: 77.89 MiB)
5.021851906510364
julia> @btime get_bound_vdp_sos($μ, $ϕ, 10)[2]
5.324 s (68842 allocations: 5.75 MiB)
5.0219594820626305
For this simple system, with a globally attracting limit cycle (except for the unstable fixed point at the origin), in which we can take a single trajectory to estimate the upper bound, it is computationally more efficient to do the time integration. However, in situations in which there are many different basins of attraction and "hidden" attractors, a single trajectory will not be sufficient for a global bound, and a Monte Carlo method, integrating over tens, maybe hundreds, of solutions, is necessary. In this case, computing a global bound via SoS might be more efficient. We plan to illustrate this situation in a future post.
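Just to sketch what such a Monte Carlo estimate could look like (this is not done in this post; the sampling box, the number of trajectories, and the integration time below are purely illustrative), one could compute the time averages over solutions started from random initial conditions with an EnsembleProblem, reusing vdp_du and ϕ from the Julia codes section below.

using DifferentialEquations
using QuadGK

μ = 4.0
Tmax = 1000.0
n_samples = 50                                    # illustrative number of trajectories
prob = ODEProblem(vdp_du, [4.0, 6.0], (0.0, Tmax), [μ])
# draw each initial condition uniformly from the (illustrative) box [-5, 5] × [-5, 5]
prob_func(prob, i, repeat) = remake(prob, u0 = 10 .* (rand(2) .- 0.5))
eprob = EnsembleProblem(prob, prob_func = prob_func)
sols = solve(eprob, Tsit5(), EnsembleThreads(), trajectories = n_samples)
ϕ_means = [quadgk(t -> ϕ(sol(t)), 0.0, Tmax, rtol = 1e-6)[1] / Tmax for sol in sols]
maximum(ϕ_means)                                  # crude sampled estimate of a global bound

For the Van der Pol system this just reproduces the single-trajectory estimate, since every sampled initial condition converges to the same limit cycle, but the same structure applies when several attractors may be present.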
We present, below, the codes used in this post. You may download the full code with the Download button at the end.
We use The Julia Programming Language for the numerical computations. For the interactive plots, we use plotly/PlotlyJS.jl, which is Julia's interface to the plotly.js visualization library.
For solving the differential equation, we use the SciML/DifferentialEquations.jl package. The following snippet of code is used to solve the system of equations and plot the solutions over time.
using PlotlyJS
using DifferentialEquations
use_style!(:ggplot)
# Van der Pol oscillator as a first-order system: u = (x, y), with y = dx/dt
function vdp_du(du,u,params,t)
x, y = u
μ, = params
dx = y
dy = μ*(1 - x^2)*y - x
du .= [dx, dy]
end
u0 = [4.0, 6.0]
Tmax = 80.0
tspan=(0.0,Tmax)
μ_range=(0.2,1.0,2.0,4.0)
vdp_sol = [DifferentialEquations.solve(ODEProblem(vdp_du,u0,tspan,[μ], saveat=0.0:0.1:Tmax)) for μ in μ_range]
vdp_sol_times = 0.0:0.1:Tmax
plt_mu = [
PlotlyJS.plot(
[PlotlyJS.scatter(;x=vdp_sol_times, y=map(u->u[2], vdp_sol[j].(vdp_sol_times)), line_width=1, name="dx(t)/dt", showlegend=true*(j==1), mode="lines", line_color="orange"),
PlotlyJS.scatter(;x=vdp_sol_times, y=map(u->u[1], vdp_sol[j].(vdp_sol_times)), line_width=3, name="x(t)", showlegend=true*(j==1),mode="lines", line_color="steelblue")],
Layout(;xaxis_title = "t", yaxis_range=[-7.0,7.0], title="μ=$(μ_range[j])")
)
for j in 1:length(μ_range)
]
plt_comb = [plt_mu[1] plt_mu[2]; plt_mu[3] plt_mu[4]]
With the solution at hand, we also plot the corresponding orbits, for two values of $\mu$, with the following code.
plt_phsp = [
PlotlyJS.Plot(
PlotlyJS.scatter(;x=map(u->u[1], vdp_sol[j].(vdp_sol_times)), y=map(u->u[2], vdp_sol[j].(vdp_sol_times)), line_width=2, name="(x(⋅),y(⋅))", showlegend=true*(j==1), mode="lines", line_color="steelblue"),
Layout(;xaxis_title = "x", yaxis_title = "y", xaxis_range=[-7.0,7.0],yaxis_range=[-7.0,7.0], title="Phase portrait μ=$(μ_range[j])")
)
for j in (1:3:4)
]
plt_ps = [plt_phsp[1] plt_phsp[end]]
For integrating over time, since the solution of the differential equation is not simply an array, but its values can be properly calculated at any time by interpolation, we use the JuliaMath/QuadGK.jl package for a better time integration. Such a package is designed for one-dimensional quadrature methods.
using QuadGK
μ=4.0
u0 = [4.0, 6.0]
Tmax = 5000.0
ϕ(u) = sum(u.^2)
function get_means_vdp(u0, μ, ϕ, Tmax)
tspan=(0.0,Tmax)
vdp_sol_long = DifferentialEquations.solve(ODEProblem(vdp_du,u0,tspan,[μ]))
ϕ_times = Tmax/100:Tmax/100:Tmax
ϕ_mean_partialsums = [quadgk(t -> ϕ(vdp_sol_long(t)), j == 1 ? 0 : ϕ_times[j-1], ϕ_times[j], rtol=1e-8)[1] for j in 1:length(ϕ_times)]
ϕ_mean = [sum(ϕ_mean_partialsums[1:n])/ϕ_times[n] for n = 1:length(ϕ_times)]
return ϕ_times, ϕ_mean
end
ϕ_times, ϕ_mean = get_means_vdp(u0, μ, ϕ, Tmax)
plt_int = PlotlyJS.plot(
PlotlyJS.scatter(;x=ϕ_times, y=ϕ_mean, line_width=2, name="ϕ_mean(T)", mode="lines", line_color="red"),
Layout(;yaxis_range=[4.0,8.0], xaxis_title = "T", yaxis_title = "x²+y²", title="Evolution of the time average ϕᵤ(T) = (1/T)∫₀ᵀ ‖u(t)‖² dt;\nbound Φ̄ ≤ $(round(ϕ_mean[end],digits=3))"
)
)
For the optimization method, we use jump-dev/SumOfSquares.jl, which is the package for Sum of Squares programming. This is used in conjunction with JuliaAlgebra/DynamicPolynomials.jl, which defines the polynomial decision variable playing the role of the unknown auxiliary polynomial in the sum-of-squares form of the feasibility condition. For the semidefinite optimization itself, we use ericphanson/SDPAFamily.jl, which is an interface to several SDPA-type solvers (SDPA standing for Semi-Definite Programming Algorithm). Such solvers turn out to be the ones that work for the current problem (see Fantuzzi, Goluskin, Huang, and Chernyshenko (2016), who used similar MATLAB toolboxes).
using DynamicPolynomials
using SumOfSquares
using SDPAFamily
function get_bound_vdp_sos(μ, ϕ, Vdeg)
@polyvar x y
f = [y, μ*(1 - x^2)*y - x]
u = [x, y]
X = monomials([x, y], 1:Vdeg)
solver = optimizer_with_attributes(SDPAFamily.Optimizer{Float64}, MOI.Silent() => true)
model = SOSModel(solver)
@variable(model, V, Poly(X))
@variable(model, γ)
# set_start_value(γ, 6.0) # MathOptInterface VariablePrimalStart() is not supported with SDPAFamily.Optimizer{Float64}
dVdt = sum(differentiate(V, u) .* f) # Lie derivative F(u)⋅∇V(u)
@constraint(model, γ - ϕ(u) - dVdt >= 0) # feasibility condition, imposed as a sum of squares
@objective(model, Min, γ)
JuMP.optimize!(model)
return V, objective_value(model)
end
μ=4
ϕ(u) = sum(u.^2)
Vdeg_range = 4:2:10
optim = [get_bound_vdp_sos(μ, ϕ, Vdeg) for Vdeg in Vdeg_range]
bounds = [optim[j][end] for j in 1:length(optim)]
plt_sos = PlotlyJS.plot(
PlotlyJS.scatter(;x=Vdeg_range[2:end], y=bounds[2:end], line_width=2, name="bound", mode="lines+markers", line_color="green"),
Layout(;yaxis_range=[4.0,6.0], xaxis_title = "degree of auxiliary polynomial V", yaxis_title = "bound", title="Bounds on Φ̄ for different degrees for V"
)
)
The visualization of the auxiliary function and the corresponding orbits (both in the plane and lifted to the graph of the auxiliary function) is done as follows.
Tmax = 50.0
vdp_sol_times = 0.0:0.01:Tmax
plt_composite = []
for j=1:length(optim)
xmesh = -4.2:0.2/j:4.2;
ymesh = -10:0.2/(j+2):10;
xgrid = fill(1,length(ymesh))*xmesh';
ygrid = ymesh*fill(1,length(xmesh))';
V = optim[j][1]
Vmin = minimum([value(V)(x,y) for x in xmesh for y in ymesh])
v(x,y) = log(1 + value(V)(x,y) - Vmin)
vdp_sol_x = map(u->u[1], vdp_sol[end].(vdp_sol_times))
vdp_sol_y = map(u->u[2], vdp_sol[end].(vdp_sol_times))
vgrid = v.(xgrid,ygrid)
tracesurf = PlotlyJS.scatter(;x=xgrid, y=ygrid, z = vgrid, type="surface", name="auxiliary function")
traceline0 = PlotlyJS.scatter(;x=vdp_sol_x, y=vdp_sol_y, z=0.0*vdp_sol_x, line_width=6, line_color="orange", mode="lines", type="scatter3d", name="orbit")
tracelinevxy = PlotlyJS.scatter(;x=vdp_sol_x, y=vdp_sol_y, z=v.(vdp_sol_x,vdp_sol_y), line_width=6, line_color="green", mode="lines", type="scatter3d", name="lifted orbit")
push!(plt_composite, PlotlyJS.Plot([tracesurf,traceline0,tracelinevxy], Layout(;xaxis_title = "x", yaxis_title = "y", zaxis_title="z=ln(1+V-min(V))", legend_x=0.0, legend_y=1.0, title="Auxiliary function V=V(x,y) with degree m=$(Vdeg_range[j]) and bound $(round(bounds[j],digits=3))")))
end
⬇ Download the full julia code

There are many people to thank for getting to this point, but I specifically want to thank Chris Rackauckas, for pointing me to JuliaMath/QuadGK.jl; Eric Hanson, for helping me use his package ericphanson/SDPAFamily.jl; and Thibaut Lienart, for helping me with many of the features of his package tlienart/Franklin.jl.