where $\vec m = \vec m(x)$ is a vector of polynomials in $x$. This yields that $p$ is a SoS.
Hence, the problem becomes to find a symmetric positive semidefinite matrix $Q$ satisfying (16). The polynomials $p$ and $\vec m^{\mathrm T} Q \vec m$ are equal if, and only if, their coefficients are equal, which is a finite-dimensional linear problem for the entries of $Q$. If we define the coefficients of $p$ by $b_\alpha$ and those of $\vec m^{\mathrm T} Q \vec m$ by $\ell_\alpha(Q)$, with each $\ell_\alpha$ linear in the entries of $Q$, then the problem becomes to find a symmetric matrix $Q$ such that
If we further ask to minimize the quantity $\operatorname{tr}(CQ)$ for some desirable symmetric matrix $C$, then we end up with the primal semidefinite programming problem for $Q$.
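As a concrete instance of this Gram-matrix formulation, here is a minimal numerical sketch of my own (the polynomial, the monomial vector, and the matrix are illustrative choices, not from the discussion above): for $p(x) = x^4 + 2x^2 + 1$ and the monomial vector $(1, x, x^2)$, a diagonal Gram matrix satisfies the linear coefficient constraints and is positive semidefinite, certifying that $p$ is a SoS.

```python
import numpy as np

# Illustrative polynomial p(x) = x^4 + 2x^2 + 1 and monomial vector m(x) = (1, x, x^2).
# A Gram matrix Q must satisfy p(x) = m(x)^T Q m(x); matching coefficients gives
# linear constraints on the entries of Q. One admissible choice:
Q = np.diag([1.0, 2.0, 1.0])

# Positive semidefiniteness of Q certifies that p is a sum of squares.
assert np.all(np.linalg.eigvalsh(Q) >= 0)

# Sanity check: p(x) and m(x)^T Q m(x) agree on sample points.
p = lambda x: x**4 + 2 * x**2 + 1
m = lambda x: np.array([1.0, x, x**2])
for x in np.linspace(-2.0, 2.0, 9):
    assert abs(p(x) - m(x) @ Q @ m(x)) < 1e-12

# Indeed, Q = L^T L with L = diag(1, sqrt(2), 1), so p(x) = 1 + (sqrt(2)*x)^2 + (x^2)^2.
print("Q is PSD and matches the coefficients of p")
```

In a genuine SDP one would leave the off-diagonal entries of $Q$ free and let a solver pick a feasible positive semidefinite point; here the feasible matrix was chosen by hand to keep the check self-contained.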
The convex minimization problem (10) can easily be rewritten in the minimax form
With this formulation in mind, Tobasco, Goluskin and Doering (2018) gave a beautiful proof that the bound is actually optimal, and that the supremum on the left-hand side above is achieved:
The proof uses Ergodic Theory and a minimax principle. On a future occasion we will go through the proof, as well as detail the extension to the infinite-dimensional setting, which is briefly discussed next.
The proof in the finite-dimensional case uses a few conditions that are delicate to extend to the infinite-dimensional case:
The positively invariant set has to be compact;
The quantity of interest has to be a continuous function on the phase space;
Borel probability measures are Lagrangian invariant if, and only if, they are Eulerian invariant.
By Lagrangian invariant we mean the classical invariance condition $\mu(S(t)^{-1}E) = \mu(E)$, for any Borel set $E$ and any $t \geq 0$, where $\mu$ is the Borel probability measure in question and $\{S(t)\}_{t\geq 0}$ is the semigroup generated by the equation. By Eulerian invariant we mean that $\mu$ has to satisfy $\int \langle F(u), \Psi'(u)\rangle \,\mathrm{d}\mu(u) = 0$, for all suitable test functionals $\Psi$, where $F$ is the right-hand side of the equation $\mathrm{d}u/\mathrm{d}t = F(u)$.
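To make the Eulerian condition concrete in a simple finite-dimensional setting, here is a toy computation of my own (the vector field, the measure, and the test functions are illustrative choices, not from the post): for the planar rotation field $F(x,y) = (-y, x)$, the uniform probability measure on the unit circle is invariant, and the average of $\langle F(u), \nabla\Psi(u)\rangle$ against that measure vanishes for smooth test functions $\Psi$.

```python
import numpy as np

# Toy example: du/dt = F(u) with F(x, y) = (-y, x) (rigid rotation).
# The uniform probability measure on the unit circle is invariant for this flow.
# Eulerian invariance asserts that the average of <F(u), grad Psi(u)> vanishes
# for every smooth test function Psi.

def eulerian_defect(grad_psi, n=100000):
    """Quadrature of <F(u), grad Psi(u)> against the uniform measure on the circle."""
    theta = 2 * np.pi * (np.arange(n) + 0.5) / n   # midpoint rule on [0, 2*pi)
    x, y = np.cos(theta), np.sin(theta)
    Fx, Fy = -y, x                                  # the rotation field F
    gx, gy = grad_psi(x, y)
    return np.mean(Fx * gx + Fy * gy)

# Psi(x, y) = x**2  =>  grad Psi = (2x, 0); the defect is ~0 (machine precision)
print(eulerian_defect(lambda x, y: (2 * x, np.zeros_like(x))))
# Psi(x, y) = x*y   =>  grad Psi = (y, x); again ~0
print(eulerian_defect(lambda x, y: (y, x)))
```

Both averages reduce to integrals of $\sin 2\theta$ and $\cos 2\theta$ over a full period, which vanish, consistently with the invariance of the uniform measure under rotation.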
The assumption that the phase space be finite-dimensional is not a requirement per se, but it makes the above conditions hold in more generality. For instance, in finite dimensions it suffices to have the positively invariant set closed and bounded for it to be compact. And this compactness is needed both for the passage from time averages to ensemble averages (i.e. averages with respect to the invariant measure) and for the minimax principle.
Concerning the assumption of continuity of the quantity of interest, it is not a big deal in finite dimensions, but it is quite restrictive for partial differential equations. For instance, if the phase space is $L^2$, one cannot consider quantities involving derivatives of the solution $u$. Even if we attempt to use extensions of the minimax principle, they require upper semicontinuity of the quantity, so such derivative-dependent quantities would still not work as is.
But at least in the case of a quantity that is continuous in the infinite-dimensional setting (e.g. the kinetic energy on $L^2$), one can get around the requirement that the positively invariant set be compact by considering dissipative systems which possess a compact attracting set.
The remaining delicate condition is the equivalence between Lagrangian and Eulerian invariance, which is by no means trivial in the infinite-dimensional case. In fact, I know of only two equations for which this has been proved: the two-dimensional Navier-Stokes equations and a globally modified Navier-Stokes equation obtained by truncating the nonlinear term. However, it is my belief that the key tool is simply being able to approximate the system (any solution) by a right-invertible semigroup (e.g. a Galerkin approximation or a hyperbolic/wave-type approximation) and to exploit the usual a priori estimates. It is an open problem to prove this for other systems, or to come up with an easily applicable general statement.
It should be said that even the notion of Eulerian invariance needs to be relaxed, to hold only for special types of functionals, which we call cylindrical test functionals. They are at the core of the notion of statistical solution.
As in the finite-dimensional case, we leave further details about the result in infinite dimensions to a future post. Meanwhile, the details can be found in Rosa and Temam (arXiv 2020).
Selected References: