Abstract
In this article, we study the hyperbolic Anderson model driven by a space-time colored Gaussian homogeneous noise with spatial dimension \(d=1,2\). Under mild assumptions, we provide \(L^p\)-estimates of the iterated Malliavin derivative of the solution in terms of the fundamental solution of the wave equation. To achieve this goal, we rely heavily on the Wiener chaos expansion of the solution. Our first application consists of quantitative central limit theorems for spatial averages of the solution to the hyperbolic Anderson model, where the rates of convergence are described by the total variation distance. These quantitative results have been elusive so far, since the temporal correlation of the noise prevents the use of Itô calculus. A novel ingredient to overcome this difficulty is the second-order Gaussian Poincaré inequality coupled with the application of the aforementioned \(L^p\)-estimates of the first two Malliavin derivatives. In addition, we provide the corresponding functional central limit theorems. As a second application, we establish the absolute continuity of the law for the hyperbolic Anderson model. The \(L^p\)-estimates of Malliavin derivatives are crucial ingredients to verify a local version of the Bouleau-Hirsch criterion for absolute continuity. Our approach substantially simplifies the arguments for the one-dimensional case, which has been studied in the recent work [2].
1 Introduction
One of the main tools of modern stochastic analysis is Malliavin calculus. In short, this is a differential calculus on a Gaussian space that represents an infinite-dimensional generalization of the usual analytical concepts on a Euclidean space. The Malliavin calculus (also known as the stochastic calculus of variations) was initiated by Paul Malliavin [21] to give a probabilistic proof of Hörmander’s “sum of squares” theorem. It has been further developed by Stroock, Bismut, Watanabe and others. One of the main applications of Malliavin calculus is the study of regularity properties of probability laws, for example, the laws of the solutions to certain stochastic differential equations and stochastic partial differential equations (SPDEs), see e.g. [27, Chapter 2]. The Malliavin calculus is also useful in formulating and interpreting stochastic (partial) differential equations when the solution is not adapted to a Brownian filtration, which is the case for SPDEs driven by a Gaussian noise that is colored in time.
Recently, the Malliavin calculus has found another important application in the work of Nualart and Ortiz-Latorre [28], which paved the way for Stein to meet Malliavin. The authors of [28] applied the Malliavin calculus (notably the integration by parts formula) to characterize the convergence in law of a sequence of multiple Wiener integrals, and they were able to give new proofs of the fourth moment theorems of Nualart, Peccati and Tudor [30, 37]. Soon after the work [28], Nourdin and Peccati combined Malliavin calculus and Stein’s method of normal approximation to quantify the fourth moment theorem. Their work [24] marked the birth of the so-called Malliavin-Stein approach. This combination works admirably well, partially because one of the fundamental ingredients in Stein’s method—the so-called Stein’s lemma (2.6)—that characterizes the normal distribution, is nothing else but a particular case of the integration by parts formula (2.5) in Malliavin calculus. We refer interested readers to [44, Section 1.2] for a friendly introduction to this approach.
The central object of study in this paper is the stochastic wave equation with linear Gaussian multiplicative noise (in Skorokhod sense):
$$\begin{aligned} {\left\{ \begin{array}{ll} \dfrac{\partial ^2 u}{\partial t^2} = \Delta u + u \, {\dot{W}}, \quad t>0, \ x\in {\mathbb {R}}^d, \\ u(0,\cdot ) = 1, \qquad \dfrac{\partial u}{\partial t}(0,\cdot ) = 0, \end{array}\right. } \end{aligned}$$(1.1)
where \(\Delta \) is the Laplacian in space variables and the Gaussian noise \({\dot{W}}\) has the following correlation structure
$$\begin{aligned} {\mathbb {E}}\big [ {\dot{W}}(t,x) {\dot{W}}(s,y) \big ] = \gamma _0(t-s) \, \gamma (x-y), \end{aligned}$$
with the following standing assumptions:
-
(i)
\(\gamma _0:{\mathbb {R}}\rightarrow [0,\infty ]\) is locally integrable and non-negative definite;
-
(ii)
\(\gamma \) is a non-negative and non-negative definite measure on \({\mathbb {R}}^d\) whose spectral measure \(\mu \) satisfies Dalang’s condition:
$$\begin{aligned} \qquad \qquad \quad \int _{{\mathbb {R}}^d}\frac{1}{1+|\xi |^2}\mu (d\xi )<\infty , \end{aligned}$$(1.2)
where \(|\xi |\) denotes the Euclidean norm of \(\xi \in {\mathbb {R}}^d\).
An important example of the temporal correlation is the Riesz kernel \(\gamma _0(t)=|t|^{-\alpha _0}\) for some \(\alpha _0\in (0,1)\) (with \(\gamma _0(0)=\infty \)).
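As a quick numerical illustration (ours, not part of the paper's argument), one can check the two standing assumptions in the Riesz-kernel example: the kernel \(\gamma _0(t)=|t|^{-\alpha _0}\) with \(\alpha _0\in (0,1)\) is locally integrable, since \(\int _0^1 t^{-\alpha _0}dt = 1/(1-\alpha _0)\), and Dalang's condition (1.2) holds in \(d=1\) for the spectral density \(|\xi |^{\beta -1}\) (the spectral measure of the spatial Riesz kernel \(|x|^{-\beta }\), up to a multiplicative constant). A midpoint rule avoids evaluating the singular integrands at the origin.

```python
# Numerical sanity checks (illustrative only) for the Riesz-kernel example.
import math

def midpoint_integral(f, a, b, n):
    """Midpoint rule for int_a^b f; never evaluates f at the endpoint a."""
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

# Local integrability of gamma_0(t) = |t|^(-alpha_0), alpha_0 in (0,1):
alpha0 = 0.5
local_mass = midpoint_integral(lambda t: t ** (-alpha0), 0.0, 1.0, 500_000)
print(local_mass, 1.0 / (1.0 - alpha0))      # approx 2 vs exactly 2

# Dalang's condition in d = 1 with spectral density |xi|^(beta - 1):
# the exact value of int_R |xi|^(beta-1)/(1+xi^2) dxi for beta = 1/2 is pi*sqrt(2).
beta = 0.5
dalang = 2 * midpoint_integral(lambda x: x ** (beta - 1.0) / (1.0 + x * x),
                               0.0, 100.0, 1_000_000)
print(dalang, math.pi * math.sqrt(2.0))      # both approx 4.44
```

Both integrals are finite, in line with assumptions (i) and (ii); the truncation of the Dalang integral at 100 costs less than \(10^{-3}\).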
Equation (1.1) is also known in the literature as the hyperbolic Anderson model, by analogy with the parabolic Anderson model in which the wave operator is replaced by the heat operator. The noise \({\dot{W}}\) can be formally realized as an isonormal Gaussian process \(W=\{W(\phi ): \phi \in {\mathcal {H}}\}\) and here \({\mathcal {H}}\) is a Hilbert space that is the completion of the set \(C^\infty _c\big ({\mathbb {R}}_+\times {\mathbb {R}}^d)\) of infinitely differentiable functions with compact support under the inner product
$$\begin{aligned} \langle \phi , \psi \rangle _{{\mathcal {H}}} = \int _{{\mathbb {R}}_+^2} \int _{{\mathbb {R}}^{2d}} \gamma _0(t-s) \gamma (x-y) \phi (t,x) \psi (s,y) \, dx dy dt ds, \end{aligned}$$(1.3)
$$\begin{aligned} \langle \phi , \psi \rangle _{{\mathcal {H}}} = \int _{{\mathbb {R}}_+^2} \gamma _0(t-s) \int _{{\mathbb {R}}^d} \big ( \phi (t,\cdot ) * {\widetilde{\psi }}(s,\cdot ) \big )(z) \, \gamma (dz) \, dt ds, \quad \text {with } {\widetilde{\psi }}(s,y) := \psi (s,-y), \end{aligned}$$(1.4)
where we write \(\gamma (x)\) for the density of \(\gamma \) if it exists and we shall use the definition (1.4) instead of (1.3) when \(\gamma \) is a measure. In (1.4), \(*\) denotes the convolution in the space variable and \(\gamma _0(t)= \gamma _0(-t)\) for \(t<0\). We denote by \({\mathcal {H}}^{\otimes p}\) the pth tensor product of \({\mathcal {H}}\) for \(p\in {\mathbb {N}}^*\), see Sect. 2 for more details.
As mentioned before, the existence of a temporal correlation \(\gamma _0\) prevents us from defining equation (1.1) in the Itô sense due to a lack of the martingale structure. In the recent work [3] by Balan and Song, the following results are established using Malliavin calculus. Let \(G_t\) denote the fundamental solution to the corresponding deterministic wave equation, that is, for \((t,z)\in (0,\infty )\times {\mathbb {R}}^d\),
$$\begin{aligned} G_t(z) = \frac{1}{2} {\mathbf {1}}_{\{ |z| < t \}} \quad \text {if } d=1, \qquad G_t(z) = \frac{1}{2\pi } \frac{ {\mathbf {1}}_{\{ |z| < t \}} }{ \sqrt{ t^2 - |z|^2 } } \quad \text {if } d=2. \end{aligned}$$(1.5)
To ease the notation, we will stick to the convention that
$$\begin{aligned} G_t(z) = 0 \quad \text {whenever } t \le 0. \end{aligned}$$(1.6)
Definition 1.1
Fix \(d\in \{1,2\}\). We say that a square-integrable process \(u = \{ u(t,x): (t,x)\in {\mathbb {R}}_+\times {\mathbb {R}}^d\}\) is a mild Skorokhod solution to the hyperbolic Anderson model (1.1) if u has a jointly measurable modification (still denoted by \(u\)) such that \(\sup \{ {\mathbb {E}}[u(t,x)^2 ]: (t,x)\in [0,T]\times {\mathbb {R}}^d\} < \infty \) for any finite T; and for any \(t>0\) and \(x\in {\mathbb {R}}^d\), the following equality holds in \(L^2(\Omega )\):
$$\begin{aligned} u(t,x) = 1 + \int _0^t \int _{{\mathbb {R}}^d} G_{t-s}(x-y) u(s,y) \, W(\delta s, \delta y), \end{aligned}$$
where the above stochastic integral is understood in the Skorokhod sense and the process \((s,y)\in {\mathbb {R}}_+\times {\mathbb {R}}^d\longmapsto {\mathbf {1}}_{(0,t)}(s) G_{t-s}(x-y)u(s,y)\) is Skorokhod integrable. See Definition 5.1 in [3] and Definition 1.1 in [2].
It has been proved in [3, Section 5] that equation (1.1) admits a unique mild Skorokhod solution u with the following Wiener chaos expansion:
$$\begin{aligned} u(t,x) = 1 + \sum _{n\ge 1} I_n\big ( {\widetilde{f}}_{t,x,n} \big ), \end{aligned}$$(1.7)
where \(I_n\) denotes the nth multiple Wiener integral associated to the isonormal Gaussian process W (see Sect. 2 for more details), \(f_{t,x,n}\in {\mathcal {H}}^{\otimes n}\) is defined by (with the convention (1.6) in mind)
$$\begin{aligned} f_{t,x,n}(t_1,x_1, \dots , t_n, x_n) = G_{t-t_n}(x-x_n) G_{t_n-t_{n-1}}(x_n-x_{n-1}) \cdots G_{t_2-t_1}(x_2-x_1), \end{aligned}$$(1.8)
and \({\widetilde{f}}_{t,x,n}\) is the canonical symmetrization of \(f_{t,x,n}\in {\mathcal {H}}^{\otimes n}\) given by
$$\begin{aligned} {\widetilde{f}}_{t,x,n}(t_1,x_1, \dots , t_n, x_n) = \frac{1}{n!} \sum _{\sigma \in {\mathfrak {S}}_n} f_{t,x,n}(t_{\sigma (1)}, x_{\sigma (1)}, \dots , t_{\sigma (n)}, x_{\sigma (n)}), \end{aligned}$$(1.9)
where the sum in (1.9) runs over \({\mathfrak {S}}_n\), the set of permutations on \(\{1,2,\dots , n\}\). For example, \(f_{t,x,1}(t_1,x_1) =G_{t-t_1}(x-x_1)\) and
$$\begin{aligned} f_{t,x,2}(t_1,x_1,t_2,x_2) = G_{t-t_2}(x-x_2) \, G_{t_2-t_1}(x_2-x_1). \end{aligned}$$
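The kernels (1.8) and their symmetrization (1.9) can be illustrated concretely in dimension \(d=1\), where \(G_t(z)=\frac{1}{2}{\mathbf {1}}_{\{|z|<t\}}\) and the convention (1.6) makes \(f_{t,x,n}\) vanish off the ordered simplex \(0<t_1<\cdots <t_n<t\). The following sketch is ours (the function names are not the paper's notation):

```python
# Chaos kernels f_{t,x,n} of (1.8) in d = 1, and their canonical
# symmetrization (1.9) as an average over permutations.
from itertools import permutations

def G(t, z):
    """1-d wave kernel G_t(z) = (1/2) 1_{|z| < t}, with G_t = 0 for t <= 0."""
    return 0.5 if (t > 0 and abs(z) < t) else 0.0

def f(t, x, pts):
    """f_{t,x,n} at pts = [(t_1, x_1), ..., (t_n, x_n)]."""
    chain = [(t, x)] + list(reversed(pts))   # G(t - t_n) G(t_n - t_{n-1}) ...
    val = 1.0
    for (ta, xa), (tb, xb) in zip(chain, chain[1:]):
        val *= G(ta - tb, xa - xb)
    return val

def f_sym(t, x, pts):
    """Canonical symmetrization: average of f over all orderings of pts."""
    perms = list(permutations(pts))
    return sum(f(t, x, list(p)) for p in perms) / len(perms)

# f_{t,x,2} is nonzero only on the ordered simplex 0 < t_1 < t_2 < t;
# the symmetrization is invariant under permuting its arguments.
print(f(1.0, 0.0, [(0.2, 0.1), (0.6, 0.3)]))     # 0.25
print(f_sym(1.0, 0.0, [(0.6, 0.3), (0.2, 0.1)])) # 0.125
```

Only one of the two orderings contributes for \(n=2\), so the symmetrized kernel is half the ordered one, matching the factor \(1/n!\) in (1.9).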
We would like to point out that in the presence of temporal correlation, there is no developed solution theory for the nonlinear wave equation (replacing \(u {\dot{W}}\) in (1.1) by \(\sigma (u) {\dot{W}}\) for some deterministic Lipschitz function \(\sigma :{\mathbb {R}}\rightarrow {\mathbb {R}}\)). We regard this as a totally different problem.
Now let us introduce the following hypothesis when \(d=2\):
Remark 1.2
-
(i)
Note that condition (a) for \(d=2\) is slightly stronger than Dalang’s condition (1.2). In fact, when \(d=2\), the paper [18] pointed out that Dalang’s condition (1.2) is equivalent to
$$\begin{aligned} \int _{|x|\le 1} \ln ( |x|^{-1} ) \gamma (x)dx < \infty . \end{aligned}$$(1.10)
Let \(\ell ^\star = \frac{\ell }{\ell -1}\) and \(0< \varepsilon < 1/\ell ^\star \); then there is some \(\delta \in (0,1)\) and a constant \(C_\varepsilon \) such that \(\ln ( |x|^{-1}) \le C_\varepsilon |x|^{-\varepsilon }\) for any \(|x|\le \delta \), from which we deduce that
$$\begin{aligned} \int _{|x|\le 1} \ln ( |x|^{-1} ) \gamma (x)dx&\le \ln (\delta ^{-1}) \int _{ \delta< |x|\le 1} \gamma (x)dx + C_\varepsilon \int _{|x|\le \delta } |x|^{-\varepsilon } \gamma (x)dx \\&\le \ln (\delta ^{-1}) \int _{ \delta< |x|\le 1} \gamma (x)dx \\&\quad + C_\varepsilon \Vert \gamma \Vert _{L^\ell ({\mathbb {R}}^2)}\left( \int _{|x|\le \delta } |x|^{-\varepsilon \ell ^\star }dx \right) ^{1/\ell ^\star }<\infty . \end{aligned}$$
-
(ii)
The case (c) in Hypothesis \(\mathbf{(H1)}\) is a mixture of cases (a) and (b). Accordingly, more examples of the noise \({\dot{W}}\) arise. In the space variables, W can behave like a fractional Brownian sheet with Hurst indices greater than 1/2 in both directions, i.e. \(\gamma (x_1,x_2)=|x_1|^{2H_1-2}|x_2|^{2H_2-2}\) for some \(H_1,H_2 \in (1/2,1)\).
-
(iii)
For \(d=1\) we just assume that \(\gamma \) is a non-negative and non-negative definite measure on \({\mathbb {R}}\). In this case (see, for instance, Remark 10 of [11]) Dalang’s condition is always satisfied.
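The elementary bound \(\ln (|x|^{-1}) \le C_\varepsilon |x|^{-\varepsilon }\) used in part (i) above admits an explicit constant: one may take \(\delta =1\) and \(C_\varepsilon = 1/(e\varepsilon )\), since \(\sup _{0<r\le 1} r^\varepsilon \ln (1/r)\) is attained at \(r=e^{-1/\varepsilon }\) with value \(1/(e\varepsilon )\). This particular choice is ours (the remark only needs some \(C_\varepsilon \)); a quick grid check:

```python
# Check of ln(1/r) <= C_eps * r^(-eps) on (0, 1] with C_eps = 1/(e*eps):
# equivalently, r^eps * ln(1/r) <= 1/(e*eps), with equality at r = exp(-1/eps).
import math

eps = 0.1
C_eps = 1.0 / (math.e * eps)
worst = max(r ** eps * math.log(1.0 / r)
            for r in (k / 100000 for k in range(1, 100001)))
print(worst, C_eps)   # the grid maximum stays below C_eps
```

The grid maximum is attained near \(r = e^{-10} \approx 4.5\cdot 10^{-5}\) and sits just below \(1/(e\varepsilon ) \approx 3.679\).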
Under Hypothesis \(\mathbf{(H1)}\), we will state our first main result — the \(L^p(\Omega )\) estimates of the Malliavin derivatives of u(t, x). The first Malliavin derivative Du(t, x) is a random element in the Hilbert space \({\mathcal {H}}\), the completion of \(C^\infty _c\big ({\mathbb {R}}_+\times {\mathbb {R}}^d)\) under the inner product (1.3); as the space \({\mathcal {H}}\) contains generalized functions, it is not clear at first sight whether \((s,y) \longmapsto D_{s,y}u(t,x)\) is a (random) function. The higher-order Malliavin derivative \(D^{m} u(t,x)\) is a random element in \({\mathcal {H}}^{\otimes m}\) for \(m\ge 1\), see Sect. 2 for more details.
Let us first fix some notation.
Notation A (1) We write \(a\lesssim b\) to mean \(a\le Kb\) for some immaterial constant \(K>0\).
(2) We write \(\Vert X\Vert _p = \big ({\mathbb {E}}[ |X | ^p ]\big )^{1/p}\) to denote the \(L^p(\Omega )\)-norm of X for \(p\in [1,\infty )\).
(3) When p is a positive integer, we often write \(\pmb {z_p} = (z_1, \dots , z_p)\) for points in \({\mathbb {R}}_+^p\) or \({\mathbb {R}}^{dp}\), and \(d\pmb {z_p}=dz_1 \cdots dz_p\), \(\mu (d\pmb {z_p}) = \mu (dz_1)\cdots \mu (dz_p)\). For a function \(h: ({\mathbb {R}}_+\times {\mathbb {R}}^d)^p\rightarrow {\mathbb {R}}\) with \(p\ge 2\), we often write
which shall not cause any confusion. For \(m\in \{1,\dots , p-1\}\) and \((\pmb {s_m}, \pmb {y_m})\in {\mathbb {R}}_+^m\times {\mathbb {R}}^{dm}\), the expression \(h(\pmb {s_m}, \pmb {y_m};\bullet )\) stands for the function
Now, with the above notation in mind, we are in a position to state the first main result.
Theorem 1.3
Let \(d\in \{1,2\}\) and suppose that Hypothesis \(\mathbf{(H1)}\) holds if \(d=2\). Then, for any \((t,x) \in {\mathbb {R}}_+\times {\mathbb {R}}^d\), the random variable u(t, x) belongs to \({\mathbb {D}}^{\infty }\) (see Sect. 2.1). Moreover, for any integer \(m\ge 1\), the mth Malliavin derivative \(D^mu(t,x)\) is a random symmetric function denoted by
$$\begin{aligned} (\pmb {s_m}, \pmb {y_m}) \in {\mathbb {R}}_+^m \times {\mathbb {R}}^{md} \longmapsto D^m_{\pmb {s_m}, \pmb {y_m}} u(t,x), \end{aligned}$$
and for any \(p\in [2,\infty )\), we have, for almost all \((\pmb {s_m}, \pmb {y_m}) \in [0,t]^m \times {\mathbb {R}}^{md}\),
$$\begin{aligned} \big \Vert D^m_{\pmb {s_m}, \pmb {y_m}} u(t,x) \big \Vert _p \le C \, {\widetilde{f}}_{t,x,m}(\pmb {s_m}, \pmb {y_m}), \end{aligned}$$(1.11)
where the constant in the upper bound only depends on \((p,t,\gamma _0, \gamma ,m)\) and is increasing in t. Moreover, \(D^m u(t,x)\) has a measurable modification.
Throughout this paper, we will work with the measurable modifications of Du(t, x) and \(D^2 u(t,x)\) given by Theorem 1.3, which are still denoted by \(D u(t,x), D^2 u(t,x)\) respectively.
In this paper, we will present two applications of Theorem 1.3. Our first application consists of quantitative central limit theorems (CLTs) for the spatial averages of the solution to (1.1), which have been elusive so far because the temporal correlation of the noise prevents the use of the Itô calculus approach. A novel ingredient to overcome this difficulty is the so-called second-order Gaussian Poincaré inequality in an improved form. We will address these CLT results in Sect. 1.1. In Sect. 1.2, as the second application, we establish the absolute continuity of the law of the solution to equation (1.1) using the \(L^p\)-estimates of Malliavin derivatives, which are crucial to establish a local version of the Bouleau-Hirsch criterion [5].
1.1 Gaussian fluctuation of spatial averages
Spatial averages of SPDEs have recently attracted considerable interest. It was Huang, Nualart and Viitasaari who first studied the fluctuation of spatial statistics and established a central limit theorem for a nonlinear SPDE in [15]. More precisely, they considered the following one-dimensional stochastic heat equation
$$\begin{aligned} \frac{\partial u}{\partial t} = \frac{1}{2} \frac{\partial ^2 u}{\partial x^2} + \sigma (u) {\dot{W}} \end{aligned}$$(1.12)
on \({\mathbb {R}}_+\times {\mathbb {R}}\), where \({\dot{W}}\) is a space-time Gaussian white noise, with constant initial condition \(u(0,\bullet )=1\) and the nonlinearity \(\sigma :{\mathbb {R}}\rightarrow {\mathbb {R}}\) is a Lipschitz function. In view of the localization property of its mild formulation (in the Walsh sense [43]),
$$\begin{aligned} u(t,x) = 1 + \int _0^t \int _{{\mathbb {R}}} p_{t-s}(x-y) \sigma \big ( u(s,y) \big ) W(ds, dy), \end{aligned}$$(1.13)
with \(p_t\) denoting the heat kernel, one can regard u(t, x) and u(t, y) as weakly dependent random variables for x, y far apart, so that the integral
$$\begin{aligned} \int _{|x| \le R} \big ( u(t,x) - 1 \big ) \, dx \end{aligned}$$
can be roughly understood as a sum of weakly dependent random variables. Therefore, it is very natural to expect Gaussian fluctuations when R tends to infinity.
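The heuristic above can be made concrete with a toy model (ours, not from the paper): averaging a stationary, weakly dependent field over a growing window produces a variance that grows linearly in the window size, as in \(\sigma ^2_R(t)\sim R\). Here the field is a simple moving average \(Y_k = (Z_k + Z_{k+1})/\sqrt{2}\) of i.i.d. standard Gaussians, for which \(\text {Var}\big (\sum _{k=1}^R Y_k\big ) = 2R-1\) exactly.

```python
# Monte Carlo illustration: variance of a window average of a weakly
# dependent stationary Gaussian sequence grows linearly in the window size.
import random
random.seed(0)

def sample_sum(R):
    """One sample of sum_{k=1}^R Y_k with Y_k = (Z_k + Z_{k+1}) / sqrt(2)."""
    Z = [random.gauss(0.0, 1.0) for _ in range(R + 1)]
    return sum((Z[k] + Z[k + 1]) / 2 ** 0.5 for k in range(R))

def mc_variance(R, n=20000):
    xs = [sample_sum(R) for _ in range(n)]
    m = sum(xs) / n
    return sum((x - m) ** 2 for x in xs) / (n - 1)

v100 = mc_variance(100)
print(v100)   # close to the exact value 2*100 - 1 = 199
```

The short-range dependence only changes the constant in front of R, not the linear growth, which is the mechanism behind the Gaussian fluctuations discussed above.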
Let us stop now to briefly fix some notation to facilitate our discussion.
Notation B. (1) For \(t>0\), we define, with \(B_R:=\{ x\in {\mathbb {R}}^d: |x| \le R\}\),
$$\begin{aligned} F_R(t) := \int _{B_R} \big ( u(t,x) - 1 \big ) \, dx \quad \text {and} \quad \sigma _R(t) := \sqrt{ \text {Var}\big ( F_R(t) \big ) }. \end{aligned}$$(1.14)
(2) We write \(f(R)\sim g(R)\) to mean that f(R)/g(R) converges to some positive constant as \(R\rightarrow \infty \).
(3) For two real random variables X, Y with distribution measures \(\mu , \nu \) respectively, the total variation distance between X, Y (or \(\mu ,\nu \)) is defined to be
$$\begin{aligned} d_{\mathrm{TV}}(X, Y) := \sup _{B} \big | \mu (B) - \nu (B) \big |, \end{aligned}$$
where the supremum runs over all Borel sets \(B\subset {\mathbb {R}}\). The total variation distance is well known to induce a stronger topology than that of convergence in distribution, see [25, Appendix C].
(4) We define the following quantities for future reference:
(5) For an integer \(m\ge 1\) and \(p\in [1,\infty )\), we say \(F\in {\mathbb {D}}^{m,p}\) if F is m-times Malliavin differentiable random variable in \(L^p(\Omega )\) and \({\mathbb {E}}\big [ \Vert D^j F\Vert _{{\mathcal {H}}^{\otimes j}}^p \big ] <\infty \) for every \(j=1,\dots , m\); see Sect. 2.1 for more details.
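For real random variables with densities p, q, the total variation distance of Notation B(3) equals \(\frac{1}{2}\int _{{\mathbb {R}}} |p-q|\), the supremum being attained on the set where \(p>q\). As a numerical illustration (ours), for \(X\sim N(0,1)\) and \(Y\sim N(1,1)\) the optimal Borel set is \(\{x< 1/2\}\) and \(d_{\mathrm{TV}} = 2\Phi (1/2)-1\):

```python
# Total variation distance between N(0,1) and N(1,1):
# d_TV = sup_B |mu(B) - nu(B)| = (1/2) * int |p - q| = 2*Phi(1/2) - 1.
import math

def phi(x):   # standard normal density
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Phi(x):   # standard normal cdf via the error function
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

# (1/2) * int |p - q| by a Riemann sum on [-10, 11] (tails are negligible)
n = 200000
a, b = -10.0, 11.0
h = (b - a) / n
l1 = h * sum(abs(phi(a + k * h) - phi(a + k * h - 1.0)) for k in range(n + 1))
d_tv_numeric = 0.5 * l1
d_tv_exact = 2 * Phi(0.5) - 1
print(d_tv_numeric, d_tv_exact)   # both close to 0.3829
```

The agreement of the two expressions is exactly the \(L^1\)-characterization of the supremum in Notation B(3).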
Now let us illustrate the strategy in [15] (in that reference, \(d=1\)):
-
The authors first rewrite \(F_R(t) = \delta (V_{t, R})\) with the random kernel
$$\begin{aligned} V_{t,R}(s,y) = \sigma (u(s,y) ) \int _{B_R} p_{t-s}(x-y) dx, \end{aligned}$$where \(\delta \) denotes the Skorokhod integral, the adjoint of the Malliavin derivative D.
-
By standard computations, they obtained \(\sigma ^2_R(t)\sim R\).
-
If \(F=\delta (v)\in {\mathbb {D}}^{1,2}\) is a centered random variable with variance one, for some v in the domain of \(\delta \), the (univariate) Malliavin-Stein bound (see [15, Proposition 2.2]) ensures that \(d_{\mathrm{TV}}( F, Z )\le 2\sqrt{ \text {Var}( \langle DF, v\rangle _{\mathcal {H}})}\) for \(Z\sim N(0,1)\).
-
Combining the above points, one can see that obtaining a quantitative CLT reduces to computing \( \text {Var}( \langle DF_R(t), V_{t,R} \rangle _{\mathcal {H}})\).
Because the driving noise considered in [15] is white in time, tools from Itô calculus (Clark-Ocone formula, Burkholder’s inequality, etc.) are used to estimate the above variance term. It is proved in [15] that \(d_{\mathrm{TV}}( F_R(t) /\sigma _R(t), Z ) \lesssim R^{-1/2}\). Meanwhile, a multivariate Malliavin-Stein bound and similar computations lead to the convergence of the finite-dimensional distributions, which, coupled with the tightness property, gives a functional CLT for \(\{ R^{-1/2}F_R(t): t\in {\mathbb {R}}_+\}\).
The above general strategy has been adapted to various settings: see [9, 10, 16, 19, 20, 38] for the study of stochastic heat equations and [4, 12, 35] for the study of stochastic wave equations. All these references consider a Gaussian noise that is white in time. Nevertheless, when the Gaussian noise is colored in time, the mild formulation (1.13) cannot be interpreted in the Walsh-Itô sense. In this situation, only in the case \(\sigma (u)=u\) can the stochastic heat equation (1.12) (also known as the parabolic Anderson model) be properly solved using Wiener chaos expansions, so that \(F_R(t)\), defined in (1.14), can be expressed as an infinite sum of multiple Wiener integrals. With this well-known fact in mind, Nualart and Zheng [33] considered the parabolic Anderson model (i.e. (1.12) with \(\sigma (u)=u\)) on \({\mathbb {R}}_+\times {\mathbb {R}}^d\) with \(d\ge 1\), constant initial condition and the assumptions (i)–(ii) (see page 2). The main result of [33] is a chaotic CLT based on the fourth moment theorems [30, 37]. When, additionally, \(\gamma \) is a finite measure, the authors of [33] established \(\sigma _R(t)\sim R^{d/2}\) and a functional CLT for the process \(R^{-d/2} F_R\); they also considered the case where \(\gamma (x)=|x|^{-\beta }\), for some \(\beta \in (0,2\wedge d)\), is the Riesz kernel, and obtained the corresponding CLT results. As pointed out in [33], due to the homogeneity of the underlying Gaussian noise, the solution u to (1.12) can be regarded as a functional of a stationary Gaussian random field, so that, with the Breuer-Major theorem [6] in mind, it is natural to study Gaussian fluctuations for the problems (1.12) and (1.1). Note that the constant initial condition makes the solution stationary in space and, in fact, spatially ergodic (see [10, 36]).
Finally, let us mention the paper [32], in which a chaotic CLT was used to study the parabolic Anderson model driven by a colored Gaussian noise that is rough in space. However, let us point out that the aforementioned methods fail to provide the rate of convergence when the noise is colored in time.
In this paper, we bring in a novel ingredient—the second-order Gaussian Poincaré inequality—to reach quantitative CLT results for the hyperbolic Anderson model (1.1). Let us first state our main result.
Theorem 1.4
Let u denote the solution to the hyperbolic Anderson model (1.1) and recall the definition of \(F_R(t)\) and \( \sigma _R(t)\) from (1.14). Let \(Z\sim N(0,1)\) be a standard normal random variable. We assume that \(\gamma _0\) is not identically zero, meaning
$$\begin{aligned} \int _0^t \int _0^t \gamma _0(r-s) \, dr ds > 0 \quad \text {for all } t>0. \end{aligned}$$(1.17)
Then the following statements hold true:
-
(1)
Suppose that \(0<\gamma ({\mathbb {R}}^d) <\infty \) if \(d=1\) and \(\gamma \in L^1({\mathbb {R}}^d) \cap L^\ell ({\mathbb {R}}^d)\) for some \(\ell >1\) if \(d=2\). Then,
$$\begin{aligned} \sigma _R(t) \sim R^{d/2}\text { and } d_{\mathrm{TV}}\big ( F_R(t) / \sigma _R(t) , Z \big ) \lesssim R^{-d/2}. \end{aligned}$$
Moreover, as \(R\rightarrow \infty \), the process \(\big \{ R^{-d/2} F_R(t): t\in {\mathbb {R}}_+\big \}\) converges weakly in the space of continuous functions \(C({\mathbb {R}}_+)\) to a centered Gaussian process \({\mathcal {G}}\) with covariance structure
$$\begin{aligned} {\mathbb {E}}\big [ {\mathcal {G}}(t) {\mathcal {G}}(s) \big ] = \omega _d \sum _{p\ge 1} p! \int _{{\mathbb {R}}^d} \big \langle {\widetilde{f}}_{t,x,p}, {\widetilde{f}}_{s,0,p} \big \rangle _{{\mathcal {H}}^{\otimes p}}dx, \end{aligned}$$(1.18)
for \(t,s\in {\mathbb {R}}_+\). Here \(\omega _1=2\), \(\omega _2=\pi \) and \({\widetilde{f}}_{t,x,p}\) are introduced in (1.16) and (1.9), respectively. The convergence of the series in (1.18) is part of the conclusion.
-
(2)
Suppose \(d\in \{1,2\}\) and \(\gamma (x) = | x|^{-\beta }\) for some \(\beta \in (0,2\wedge d)\). Then,
$$\begin{aligned} \sigma _R(t) \sim R^{d-\frac{\beta }{2}}\text { and } d_{\mathrm{TV}}\big ( F_R(t) / \sigma _R(t) , Z \big ) \lesssim R^{-\beta /2}. \end{aligned}$$
Moreover, as \(R\rightarrow \infty \), the process \(\big \{ R^{-d+\frac{\beta }{2}} F_R(t): t\in {\mathbb {R}}_+\big \}\) converges weakly in the space \(C({\mathbb {R}}_+)\) to a centered Gaussian process \({\mathcal {G}}_{\beta }\) with the covariance structure
$$\begin{aligned} {\mathbb {E}}\big [ {\mathcal {G}}_\beta (t) {\mathcal {G}}_\beta (s) \big ] = \kappa _{\beta , d} \int _0^t dr\int _0^s dr' \gamma _0(r-r') (t-r)(s-r'), \end{aligned}$$(1.19)
for \(t,s\in {\mathbb {R}}_+\). Here the quantity \(\kappa _{\beta , d}\) is introduced in (1.16).
-
(3)
Suppose \(d=2\) and \(\gamma (x_1,x_2) = \gamma _1(x_1) \gamma _2(x_2)\) such that one of the following two conditions holds:
$$\begin{aligned} {\left\{ \begin{array}{ll} &{}\mathrm{(a')} ~\gamma _i(x_i) =|x_i|^{-\beta _i}~\text {for some }\beta _i\!\in (0,1), i=1,2; \\ &{}\mathrm{(b')} ~\gamma _1\!\in L^{\ell }({\mathbb {R}}) \cap L^1({\mathbb {R}}) ~\text {and }\gamma _2(x_2)=|x_2|^{-\beta }\text { for some }0\!<\! \beta< 1 \!<\! \ell <\infty . \end{array}\right. } \end{aligned}$$(1.20)
Then,
$$\begin{aligned} {\left\{ \begin{array}{ll} \sigma _R(t) \sim R^{2 - \frac{1}{2}( \beta _1 + \beta _2)} \quad \text {and} \quad d_{\mathrm{TV}}\big ( F_R(t) / \sigma _R(t) , Z \big ) \!\lesssim \! R^{-(\beta _1\!+\!\beta _2)/2} ~ &{} \text {in case }\mathrm{(a')}, \\ \sigma _R(t) \!\sim R^{(3-\beta )/2 } \quad \text {and} \quad d_{\mathrm{TV}}\big ( F_R(t) / \sigma _R(t) , Z \big ) \!\lesssim \! R^{-(\beta +1)/2} ~ &{} \text {in case }\mathrm{(b')}. \end{array}\right. } \end{aligned}$$
Moreover, as \(R\rightarrow \infty \), in case \((a')\), the process \(\big \{ R^{-2+\frac{\beta _1+\beta _2}{2}} F_R(t): t\in {\mathbb {R}}_+\big \}\) converges weakly in the space \(C({\mathbb {R}}_+)\) to a centered Gaussian process \({\mathcal {G}}_{\beta _1, \beta _2}\) with the covariance structure
$$\begin{aligned} {\mathbb {E}}\big [ {\mathcal {G}}_{\beta _1, \beta _2}(t) {\mathcal {G}}_{\beta _1, \beta _2 } (s) \big ] = K_{\beta _1, \beta _2} \int _0^t dr\int _0^s dr' \gamma _0(r-r') (t-r)(s-r'), \end{aligned}$$(1.21)
for \(t,s\in {\mathbb {R}}_+\), where
$$\begin{aligned} K_{\beta _1, \beta _2} :&= \int _{{\mathbb {R}}^4} {\mathbf {1}}_{\{ x_1^2+x_2^2\le 1 \}} {\mathbf {1}}_{\{ y_1^2+y_2^2\le 1 \}} |x_1 - y_1|^{-\beta _1} |x_2 - y_2|^{-\beta _2} dx_1dx_2dy_1dy_2; \end{aligned}$$(1.22)
and in case \((b')\), the process \(\big \{ R^{\frac{\beta -3}{2}} F_R(t): t\in {\mathbb {R}}_+\big \}\) converges weakly in the space \(C({\mathbb {R}}_+)\) to a centered Gaussian process \(\widehat{{\mathcal {G}}}_{\beta }\) with the covariance structure
$$\begin{aligned} {\mathbb {E}}\big [ \widehat{{\mathcal {G}}}_{\beta }(t) \widehat{{\mathcal {G}}}_{\beta } (s) \big ] = \gamma _1({\mathbb {R}}) {\mathcal {L}}_\beta \int _0^t dr\int _0^s dr' \gamma _0(r-r') (t-r)(s-r') \end{aligned}$$(1.23)
for \(t,s\in {\mathbb {R}}_+\), where
$$\begin{aligned} {\mathcal {L}}_\beta : = \int _{{\mathbb {R}}^3} dx_1dx_2 dx_3 {\mathbf {1}}_{\{ x_1^2 + x_2^2\le 1 \}} {\mathbf {1}}_{\{ x_1^2 + x_3^2\le 1 \}} |x_2-x_3|^{-\beta }. \end{aligned}$$(1.24)
For the above functional convergences, we specify that the space \(C({\mathbb {R}}_+)\) is equipped with the topology of uniform convergence on compact sets.
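The constant \(K_{\beta _1,\beta _2}\) in (1.22) is a finite, explicit integral over two copies of the unit disk, since \(|x|^{-\beta }\) with \(\beta \in (0,1)\) is locally integrable in each coordinate. A crude Monte Carlo sketch (ours, purely illustrative) estimates it by sampling the two points uniformly on the disk and reweighting by the measure \(\pi ^2\) of the product domain:

```python
# Monte Carlo estimate of K_{beta1, beta2} from (1.22) with beta1 = beta2 = 1/2.
import random
random.seed(1)

def unit_disk_point():
    """Rejection-sample a uniform point of {x1^2 + x2^2 <= 1}."""
    while True:
        x1, x2 = random.uniform(-1, 1), random.uniform(-1, 1)
        if x1 * x1 + x2 * x2 <= 1:
            return x1, x2

def K_estimate(beta1, beta2, n=200000):
    pi = 3.141592653589793
    total = 0.0
    for _ in range(n):
        x1, x2 = unit_disk_point()
        y1, y2 = unit_disk_point()
        total += abs(x1 - y1) ** (-beta1) * abs(x2 - y2) ** (-beta2)
    return (pi ** 2) * total / n   # E[kernel] times the domain's measure

k_est = K_estimate(0.5, 0.5)
print(k_est)
```

Since the kernel is bounded below by \(2^{-1/2}\cdot 2^{-1/2}=1/2\) on the domain, the constant is at least \(\pi ^2/2\approx 4.93\); the estimator returns a finite value above this bound, as expected from the integrability of the Riesz kernels.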
Remark 1.5
-
(i)
Note that the case when \(\gamma (x) =\gamma _1(x_1)\gamma _2(x_2)\) with \(\gamma _i\in L^{\ell _i}({\mathbb {R}})\cap L^1({\mathbb {R}})\) for some \(\ell _i>1\), \(i=1,2\), is covered in part (1). Indeed, suppose that \(\ell _1\ge \ell _2\), then by Hölder’s inequality, \(\gamma _1\in L^{\ell _1}({\mathbb {R}})\cap L^{1}({\mathbb {R}})\) implies \(\gamma _1\in L^{\ell _2}({\mathbb {R}})\cap L^{1}({\mathbb {R}}) \) and hence \(\gamma \in L^{\ell _2}({\mathbb {R}}^2) \cap L^1({\mathbb {R}}^2)\).
-
(ii)
The rate of convergence can also be described using other common distances such as the Wasserstein distance and the Kolmogorov distance; see [25, Appendix C].
-
(iii)
The variance orders and the rates in parts (1) and (2) of Theorem 1.4 are consistent with previous work on stochastic wave equations, see [4, 12, 35]. The setting in part (3) is new. As we will see shortly, our strategy is quite different from that in these papers.
Now, let us briefly explain our strategy and begin with the Gaussian Poincaré inequality. For \(F\in {\mathbb {D}}^{1,2}\), the Gaussian Poincaré inequality (see e.g. [14] or (2.12)) ensures that
$$\begin{aligned} \text {Var}(F) \le {\mathbb {E}}\big [ \Vert DF \Vert _{{\mathcal {H}}}^2 \big ], \end{aligned}$$
that is, if DF is small, then the random variable F has necessarily small fluctuations. In the paper [8], Chatterjee pointed out that for \(F=f(X_1, \dots , X_d)\) with \(X_1, \dots , X_d\) i.i.d. N(0, 1) and f twice differentiable, F is close in total variation distance to a normal distribution with matched mean and variance if the Hessian matrix \(\text {Hess}f(X_1, \dots , X_d)\) is negligible, roughly speaking. This is known as the second-order Gaussian Poincaré inequality. In what follows, we state the infinite-dimensional version of this inequality due to Nourdin, Peccati and Reinert; see the paper [26] as well as the book [25].
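Before stating it, here is a quick Monte Carlo illustration (ours) of the first-order Gaussian Poincaré inequality in its simplest finite-dimensional form: for \(F = f(X)\) with \(X\sim N(0,1)\), \(\text {Var}\, f(X) \le {\mathbb {E}}[f'(X)^2]\). With \(f=\sin \), both sides are explicit: \(\text {Var}\sin (X) = (1-e^{-2})/2\) and \({\mathbb {E}}[\cos ^2(X)] = (1+e^{-2})/2\).

```python
# Gaussian Poincare inequality, one-dimensional check:
# Var(sin(X)) <= E[cos(X)^2] for X ~ N(0,1), using E[cos(2X)] = exp(-2).
import math, random
random.seed(2)

n = 200000
xs = [random.gauss(0.0, 1.0) for _ in range(n)]
m = sum(math.sin(x) for x in xs) / n
var_f = sum((math.sin(x) - m) ** 2 for x in xs) / (n - 1)   # Var f(X)
grad2 = sum(math.cos(x) ** 2 for x in xs) / n               # E[f'(X)^2]

print(var_f, grad2)   # approx 0.432 and 0.568; the first is smaller
```

The gap between the two sides is what the second-order inequality below exploits: it quantifies how far F is from Gaussian in terms of the second derivative.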
Proposition 1.6
Let F be a centered element of \({\mathbb {D}}^{2,4}\) such that \({\mathbb {E}}[ F^2] = \sigma ^2 > 0\) and let \(Z\sim N(0,\sigma ^2)\). Then,
$$\begin{aligned} d_{\mathrm{TV}}(F, Z) \le \frac{3}{\sigma ^2} \Big ( {\mathbb {E}}\big [ \Vert DF \Vert _{{\mathcal {H}}}^4 \big ] \Big )^{1/4} \Big ( {\mathbb {E}}\big [ \Vert D^2 F \otimes _1 D^2F \Vert _{{\mathcal {H}}^{\otimes 2}}^2 \big ] \Big )^{1/4}, \end{aligned}$$
where \(D^2 F \otimes _1 D^2F\) denotes the 1-contraction between \(D^2F\) and itself (see (2.10)).
It is known that this inequality usually gives a sub-optimal rate. In the recent work [42], Vidotto provided an improved version of the above inequality, in the setting of an \(L^2\)-based Hilbert space \({\mathcal {H}}= L^2(A, \nu )\) with \(\nu \) a diffusive measure (nonnegative, \(\sigma \)-finite and non-atomic) on some measurable space A. Let us state this result for the convenience of the readers.
Theorem 1.7
(Theorem 2.1 in [42]) Let \(F\in {\mathbb {D}}^{2,4}\) with mean zero and variance \(\sigma ^2>0\) and let \(Z\sim N(0,\sigma ^2)\). Suppose \({\mathcal {H}}= L^2(A,\nu )\) with \(\nu \) a diffusive measure on some measurable space A. Then,
The proof of the above inequality follows from the general Malliavin-Stein bound
$$\begin{aligned} d_{\mathrm{TV}}(F, Z) \le \frac{2}{\sigma ^2} \sqrt{ \text {Var}\big ( \langle DF, -DL^{-1}F \rangle _{{\mathcal {H}}} \big ) } \end{aligned}$$(1.26)
(see [25, equation (5.1.4)]) and Vidotto’s new bound of
(see [42, Proposition 3.2]), where \(L^{-1}\) is the pseudo-inverse of the Ornstein-Uhlenbeck operator L; see Sect. 2.1 for the definitions.
Recall that our Hilbert space \({\mathcal {H}}\) is the completion of \(C^\infty _c({\mathbb {R}}_+\times {\mathbb {R}}^d)\) under the inner product (1.3). The Hilbert space \({\mathcal {H}}\) contains generalized functions, but fortunately the objects \(D^2u(t,x)\), Du(t, x) are random functions in view of Theorem 1.3. By adapting Vidotto’s proof to our setting, we obtain the following version of the second-order Gaussian Poincaré inequality. Note that we write \(f\in | {\mathcal {H}}^{\otimes p }|\) to mean that f is a real-valued function and \(\bullet \mapsto | f(\bullet ) |\) belongs to \({\mathcal {H}}^{\otimes p }\).
Proposition 1.8
If \(F\in {\mathbb {D}}^{2,4}\) has mean zero and variance \(\sigma ^2\in (0,\infty )\) such that with probability 1, \(DF\in | {\mathcal {H}}|\) and \(D^2F\in |{\mathcal {H}}^{\otimes 2}|\), then
where \(Z\sim N(0,\sigma ^2)\) and
As mentioned before, Proposition 1.8 will follow from the Malliavin-Stein bound (1.26) and the Cauchy-Schwarz inequality, taking into account that, by the duality relation (2.5), we have \({\mathbb {E}}\left( \langle DF, - DL^{-1}F \rangle _{{\mathcal {H}}} \right) = {\mathbb {E}}[ F^2]=\sigma ^2\). Indeed, we can write
Proposition 1.9
If \(F, G\in {\mathbb {D}}^{2,4}\) have mean zero such that with probability one, \(DF, DG\in | {\mathcal {H}}|\) and \(D^2F, D^2G\in | {\mathcal {H}}^{\otimes 2}|\), then
where
and \(A_2\) is defined by switching the positions of F, G in the definition of \(A_1\).
For the sake of completeness, we sketch the proof of Proposition 1.9 in Appendix A.2. Once we have the information on the growth order of \(\sigma _R(t)\), we can apply Theorem 1.3 and Proposition 1.9 to obtain the error bounds in Theorem 1.4. The proof of Theorem 1.4 will be given in Sect. 4: In Sect. 4.1, we will establish the limiting covariance structure, which will be used to obtain the quantitative CLTs in Sect. 4.2; Proposition 1.9, combined with a multivariate Malliavin-Stein bound (see e.g. [25, Theorem 6.1.2]), also gives us easy access to the convergence of finite-dimensional distributions (f.d.d. convergence) for part (1), while in the other parts, the f.d.d. convergence follows easily from the dominance of the first chaotic component of \(F_R(t)\); finally in Sect. 4.3, we establish the functional CLT by showing the required tightness, which will follow by verifying the well-known criterion of Kolmogorov-Chentsov (see e.g. [17, Corollary 16.9]).
1.2 Absolute continuity of the law of the solution to Eq. (1.1)
In this part, we fix the following extra hypothesis on the correlation kernels \(\gamma _0,\gamma \).
The following is the main result of this section.
Theorem 1.10
Let \(d\in \{1,2\}\) and assume that Hypothesis \(\mathbf{(H2)}\) holds. In addition, assume that Hypothesis \(\mathbf{(H1)}\) holds if \(d=2\). Let u be the solution to (1.1). For any \(t>0\) and \(x \in {\mathbb {R}}^d\), the law of u(t, x) restricted to the set \({\mathbb {R}}\backslash \{0\}\) is absolutely continuous with respect to the Lebesgue measure on \({\mathbb {R}}\backslash \{0\}\).
Let us sketch the proof of Theorem 1.10. In view of the Bouleau-Hirsch criterion for absolute continuity (see [5]), it suffices to prove that for each \(m\ge 1\),
$$\begin{aligned} \Vert D u(t,x) \Vert _{{\mathcal {H}}} > 0 \quad \text {a.s. on } \Omega _m, \end{aligned}$$
where \(\Omega _m =\{ |u(t,x) | \ge 1/m\}\). Notice that
$$\begin{aligned} \Vert D u(t,x) \Vert _{{\mathcal {H}}}^2 = \int _0^t \int _0^t \gamma _0(r-s) \big \langle D_{r,\bullet } u(t,x), D_{s,\bullet } u(t,x) \big \rangle _0 \, dr ds, \end{aligned}$$
where \({\mathcal {P}}_0\) is the completion of \(C^\infty _c({\mathbb {R}}^d)\) with respect to the inner product \(\langle \cdot , \cdot \rangle _0 \) introduced in (2.1). The usual approach to show the positivity of this norm is to get a lower bound for this integral by integrating on a small square \([ t-\delta , t]^2\) and to use that, for r close to t, \(D_{r,y}u(t,x)\) behaves as \(G_{t-r}(x-y) u(r,y)\) (see, e.g., [31]). However, for \(r\not =s\), the inner product \( \langle D_{r,\bullet }u(t,x) , D_{s,\bullet }u(t,x) \rangle _{0} \) is not necessarily non-negative. Our strategy to overcome this difficulty consists in making use of Hypothesis \(\mathbf{(H2)}\) in order to show that
This allows us to reduce the problem to the non-degeneracy of \(\int _{t-\delta } ^t \Vert D_{r,\bullet }u(t,x) \Vert _{0} ^2 dr\) for \(\delta \) small enough, which can be handled by the usual arguments. At this point, we will make use of the estimates provided in Theorem 1.3.
For \(d=1\), Theorem 1.10 was proved in [2] under stronger assumptions on the covariance structure. The result in Theorem 1.10 for \(d=2\) is new. The study of the existence (and smoothness) of the density for the stochastic wave equation has been extensively developed over the last three decades; we refer the readers to [7, 22, 23, 31, 39, 40, 41]. In all these articles, the authors considered a stochastic wave equation of the form
$$\begin{aligned} \frac{\partial ^2 u}{\partial t^2} = \Delta u + b(u) + \sigma (u) \dot{{\mathfrak {X}}} \end{aligned}$$
on \({\mathbb {R}}_+\times {\mathbb {R}}^d\), with \(d\ge 1\). Here, \(\dot{{\mathfrak {X}}}\) denotes a space-time white noise in the case \(d=1\), or a Gaussian noise that is white in time and has a spatially homogeneous correlation (slightly more general than that of W) in the case \(d\ge 2\). The functions \(b,\sigma \) are usually assumed to be globally Lipschitz and to satisfy the non-degeneracy condition \(|\sigma (z)|\ge C>0\) for all \(z\in {\mathbb {R}}\). The temporal structure of the noise \(\dot{{\mathfrak {X}}}\) made it possible to interpret the solution in the classical Dalang-Walsh sense, making use of all the needed martingale techniques. The first attempt to consider a Gaussian noise that is colored in time was the paper [2], which studied the hyperbolic Anderson model in spatial dimension one. As mentioned above, in that paper the existence of the density was proved under a slightly stronger assumption than Hypothesis \(\mathbf{(H2)}\).
The rest of this paper is organized as follows. Section 2 contains preliminary results and the proofs of our main results—Theorems 1.3, 1.4 and 1.10—are given in Sects. 3, 4 and 5, respectively.
2 Preliminary results
This section is devoted to presenting some basic elements of the Malliavin calculus and collecting some preliminary results that will be needed in the sequel.
2.1 Basic Malliavin calculus
Recall that the Hilbert space \({\mathcal {H}}\) is the completion of \(C^\infty _c({\mathbb {R}}_+\times {\mathbb {R}}^d)\) under the inner product (1.3) that can be written as
where
As defined in Sect. 1.2, we denote by \({\mathcal {P}}_0\) the completion of \(C^\infty _c({\mathbb {R}}^d)\) with respect to the inner product \(\langle h, g\rangle _0\). Let \( | {\mathcal {P}}_0|\) be the set of measurable functions \(h:{\mathbb {R}}^d \rightarrow {\mathbb {R}}\) such that
Then \(|{\mathcal {P}}_0| \subset {\mathcal {P}}_0\) and for \(h\in | {\mathcal {P}}_0|\), \(\Vert h \Vert ^2_0= \int _{{\mathbb {R}}^{2d}}dzdz' \gamma (z-z') h(z) h(z')\). We define the space \(|{\mathcal {H}}|\) in a similar way. For \(h,g\in C^\infty _c({\mathbb {R}}^d)\) we can express (2.1) using the Fourier transform:
The Parseval-type relation (2.3) also holds for functions \(h,g \in L^1({\mathbb {R}}^d) \cap |{\mathcal {P}}_0|\).
For every integer \(p\ge 1\), \({\mathcal {H}}^{\otimes p}\) and \({\mathcal {H}}^{\odot p}\) denote the pth tensor product of \({\mathcal {H}}\) and its symmetric subspace, respectively. For example, \(f_{t,x,n}\) in (1.8) belongs to \({\mathcal {H}}^{\otimes n}\) and \({\widetilde{f}}_{t,x,n}\in {\mathcal {H}}^{\odot n}\); we also have \(f\otimes g\in {\mathcal {H}}^{\otimes (n+m)}\), provided \(f\in {\mathcal {H}}^{\otimes m}\) and \(g\in {\mathcal {H}}^{\otimes n}\); see [25, Appendix B] for more details.
Fix a probability space \((\Omega , {\mathcal {B}}, {\mathbb {P}})\), on which we can construct the isonormal Gaussian process associated to the Gaussian noise \({\dot{W}}\) in (1.1) that we denote by \(\{W(\phi ): \phi \in {\mathcal {H}}\}\). That is, \(\{W(\phi ): \phi \in {\mathcal {H}}\}\) is a centered Gaussian family of real-valued random variables defined on \((\Omega , {\mathcal {B}}, {\mathbb {P}})\) such that \({\mathbb {E}}[ W(\psi ) W(\phi ) ] = \langle \psi , \phi \rangle _{{\mathcal {H}}}\) for any \(\psi , \phi \in {\mathcal {H}}\). We will take \({\mathcal {B}}\) to be the \(\sigma \)-algebra \(\sigma \{W\}\) generated by the family of random variables \(\{ W(h): h\in C^\infty _c({\mathbb {R}}_+\times {\mathbb {R}}^d)\}\).
In the sequel, we recall some basics on Malliavin calculus from the books [25, 27].
Let \(C^\infty _\text {poly}({\mathbb {R}}^n)\) denote the space of smooth functions with all their partial derivatives having at most polynomial growth at infinity and let \({\mathcal {S}}\) denote the set of simple smooth functionals of the form
For such a random variable F, its Malliavin derivative DF is the \({\mathcal {H}}\)-valued random variable given by
Similarly, its mth Malliavin derivative \(D^mF\) is the \({\mathcal {H}}^{\otimes m}\)-valued random variable given by
which is an element in \(L^p(\Omega ; {\mathcal {H}}^{\odot m})\) for any \(p\in [1,\infty )\). It is known that the space \({\mathcal {S}}\) is dense in \(L^p(\Omega , \sigma \{W\}, {\mathbb {P}})\) and
is closable for any \(p\in [1,\infty )\); see e.g. Lemma 2.3.1 and Proposition 2.3.4 in [25]. Let \({\mathbb {D}}^{m,p}\) be the closure of \({\mathcal {S}}\) under the norm
Now, let us introduce the adjoint of the derivative operator \(D^m\). Let \(\text {Dom}(\delta ^m)\) be the set of random variables \(v\in L^2 ( \Omega ; {\mathcal {H}}^{\otimes m} )\) such that there is a constant \(C_v>0\) for which
By the Riesz representation theorem, there is a unique random variable, denoted by \(\delta ^m(v)\), such that the following duality relationship holds:
Equality (2.5) holds for all \(v\in \text {Dom}(\delta ^m)\) and all \(F\in {\mathbb {D}}^{m,2}\). In the simplest case when \(F = f( W(h))\) with \(h\in {\mathcal {H}}\) and \(f\in C^1_\text {poly}({\mathbb {R}})\), we have \(\delta (h) = W(h)\sim N(0, \Vert h\Vert _{\mathcal {H}}^2)\) and equality (2.5) reduces to
which is exactly part of Stein's lemma, recalled below: For \(\sigma \in (0,\infty )\) and an integrable random variable Z, Stein's lemma (see e.g. [25, Lemma 3.1.2]) asserts that
for any differentiable function \(f:{\mathbb {R}}\rightarrow {\mathbb {R}}\) such that the above expectations are finite. The operator \(\delta \) is often called the Skorokhod integral since, in the case of Brownian motion, it coincides with an extension of the Itô integral introduced by Skorokhod; see e.g. [29]. Accordingly, \(\text {Dom}(\delta ^m)\) is the space of Skorokhod-integrable random variables with values in \({\mathcal {H}}^{\otimes m}\).
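As a quick numerical illustration of Stein's lemma, i.e. the identity \({\mathbb {E}}[Zf(Z)]=\sigma ^2{\mathbb {E}}[f'(Z)]\) for \(Z\sim N(0,\sigma ^2)\), both expectations can be computed by Gauss-Hermite quadrature; the test function below is our own choice (a sketch, not part of the paper's arguments):

```python
import numpy as np

def gauss_expect(g, sigma, n=40):
    """E[g(Z)] for Z ~ N(0, sigma^2), via Gauss-Hermite quadrature."""
    nodes, weights = np.polynomial.hermite.hermgauss(n)
    return np.sum(weights * g(np.sqrt(2.0) * sigma * nodes)) / np.sqrt(np.pi)

sigma = 1.7
f  = lambda z: z ** 3 + z          # test function (polynomials are integrated exactly)
df = lambda z: 3 * z ** 2 + 1

lhs = gauss_expect(lambda z: z * f(z), sigma)   # E[Z f(Z)] = 3 sigma^4 + sigma^2
rhs = sigma ** 2 * gauss_expect(df, sigma)      # sigma^2 E[f'(Z)], same value
assert abs(lhs - rhs) < 1e-8
```

For polynomial f both sides equal \(3\sigma ^4+\sigma ^2\) exactly, so the quadrature agreement is to machine precision.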
The Wiener-Itô chaos decomposition theorem asserts that \(L^2(\Omega , \sigma \{W\}, {\mathbb {P}})\) can be written as a direct sum of mutually orthogonal subspaces:
where \({\mathbb {C}}_0^W\), identified as \({\mathbb {R}}\), is the space of constant random variables and \({\mathbb {C}}_n^W = \{ \delta ^n( h): h \in {\mathcal {H}}^{\otimes n} ~\text {is deterministic}\}\), for \(n\ge 1\), is called the nth Wiener chaos associated to W. Note that the first Wiener chaos consists of centered Gaussian random variables. When \(h \in {\mathcal {H}}^{\otimes n}\) is deterministic, we write \(I_n(h) = \delta ^n( h )\) and we call it the nth multiple integral of h with respect to W. By the symmetry in (2.4) and the duality relation (2.5), \(\delta ^n( h ) = \delta ^n( {\widetilde{h}} ) \) with \( {\widetilde{h}}\) the canonical symmetrization of h, so that we have \(I_n(h) = I_n({\widetilde{h}})\) for any \(h \in {\mathcal {H}}^{\otimes n}\). The above decomposition can be rephrased as follows. For any \(F\in L^2(\Omega , \sigma \{W\}, {\mathbb {P}})\),
with \(f_n \in {\mathcal {H}}^{\odot n}\) uniquely determined for each \(n\ge 1\). Moreover, the (modified) isometry property holds
for any \(f\in {\mathcal {H}}^{\otimes p}\) and \(g\in {\mathcal {H}}^{\otimes q}\). We have the following product formula: For \(f\in {\mathcal {H}}^{\odot p}\) and \(g\in {\mathcal {H}}^{\odot q}\),
where \(f\otimes _r g\) is the r-contraction between f and g, which is an element in \({\mathcal {H}}^{\otimes (p+q-2r)}\) defined as follows. Fix an orthonormal basis \(\{e_i, i\in {\mathcal {O}}\}\) of \({\mathcal {H}}\). Then, for \(1\le r \le p\wedge q\),
In the particular case when f, g are real-valued functions, we can write
provided the above integral exists. For \(F\in {\mathbb {D}}^{m,2}\) with the representation (2.7) and \(m\ge 1\), we have
where \(I_{n-m}\big ( f_n(\bullet ,*)\big )\) is understood as the \((n-m)\)th multiple integral of \( f_n(\bullet ,*)\in {\mathcal {H}}^{\otimes (n-m)}\) for fixed \(\bullet \). We can write
whenever the above series makes sense and converges in \(L^2(\Omega )\). With the decomposition (2.11) in mind, we have the following Gaussian Poincaré inequality: For \(F\in {\mathbb {D}}^{1,2}\), it holds that
In fact, if F has the representation (2.7), then
which gives us (2.12) and, moreover, indicates that the equality in (2.12) holds only when \(F\in {\mathbb {C}}^W_0 \oplus {\mathbb {C}}^W_1\), that is, only when F is a real Gaussian random variable.
Now let us mention the particular case when the Gaussian noise is white in time, which is used in the reduction step in Sect. 3.2. First, let us denote
and point out that the following inequality reduces many calculations to the case of the white noise in time. For any nonnegative function \(f\in {\mathcal {H}}_0^{\otimes n}\) that vanishes outside \(([0,t] \times {\mathbb {R}}^d)^n\),
where
Whenever no ambiguity arises, we write \(\Vert f\Vert _0:=\Vert f\Vert _{{\mathcal {P}}_0^{\otimes n}}\) so that \( \Vert f\Vert _{{\mathcal {H}}_0^{\otimes n}}^2=\int _{[0,t]^n}\Vert f( \pmb {t_n} ,\bullet )\Vert _{0}^2 d\pmb {t_n}. \)
Let \(\dot{{\mathfrak {X}}}\) denote the Gaussian noise that is white in time and has the same spatial correlation as W. More precisely, \(\{{\mathfrak {X}}(f): f\in {\mathcal {H}}_0\}\) is a centered Gaussian family with covariance
Denote by \(I^{{\mathfrak {X}}}_p\) the p-th multiple stochastic integral with respect to \({\mathfrak {X}}\). The product formula (2.9) still holds with W replaced by the noise \({\mathfrak {X}}\). Moreover, if \(f\in {\mathcal {H}}^{\otimes p}\) and \(g\in {\mathcal {H}}^{\otimes q}\) have disjoint temporal supports, then we have \(f\otimes _r g =0\) for \(r=1,\dots , p\wedge q\) and the product formula (2.9) reduces to
In this case, the random variables \(I^{{\mathfrak {X}}}_p(f)\) and \(I^{{\mathfrak {X}}}_q(g)\) are independent by the Üstünel-Zakai-Kallenberg criterion (see Exercise 5.4.8 of [25]); note that we do not need to assume f, g to be symmetric in (2.14).
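On a one-dimensional slice, the product formula (2.9) reduces to a classical Hermite identity. Indeed, for \(\Vert h\Vert _{{\mathcal {H}}}=1\) one has \(I_n(h^{\otimes n}) = \mathrm{He}_n(W(h))\), where \(\mathrm{He}_n\) is the nth probabilists' Hermite polynomial, and \(h^{\otimes p}\otimes _r h^{\otimes q} = h^{\otimes (p+q-2r)}\), so (2.9) becomes the polynomial identity \(\mathrm{He}_p\,\mathrm{He}_q = \sum _{r=0}^{p\wedge q} r!\binom{p}{r}\binom{q}{r} \mathrm{He}_{p+q-2r}\). A short numpy verification (an illustration, not used in the proofs):

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import comb, factorial

def product_formula_coeffs(p, q):
    """HermiteE coefficients of sum_r r! C(p,r) C(q,r) He_{p+q-2r}."""
    c = np.zeros(p + q + 1)
    for r in range(min(p, q) + 1):
        c[p + q - 2 * r] += factorial(r) * comb(p, r) * comb(q, r)
    return c

for p in range(1, 5):
    for q in range(1, 5):
        ep = np.zeros(p + 1); ep[p] = 1.0   # coefficient vector of He_p
        eq = np.zeros(q + 1); eq[q] = 1.0
        lhs = He.hermemul(ep, eq)           # He_p * He_q, expanded in HermiteE basis
        assert np.allclose(lhs, product_formula_coeffs(p, q))
```

For instance \(p=q=2\) gives \(\mathrm{He}_2^2 = \mathrm{He}_4 + 4\mathrm{He}_2 + 2\), matching the coefficients \(r!\binom{2}{r}^2\).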
Now let us introduce the Ornstein-Uhlenbeck operator L that can be defined as follows. We say that F belongs to \(\text {Dom}(L)\) if \(F\in {\mathbb {D}}^{1,2}\) and \(DF\in \text {Dom}(\delta )\); in this case, we let \(LF = -\delta DF\). For \(F\in L^2(\Omega )\) of the form (2.7), \(F\in \text {Dom}(L)\) if and only if \( \sum _{n\ge 1} n^2 n! \Vert f_n \Vert _{{\mathcal {H}}^{\otimes n}}^2 <\infty . \) In this case, we have \(LF = \sum _{n\ge 1} -n I_n(f_n)\). Using the chaos expansion, we can also define the Ornstein-Uhlenbeck semigroup \(\{P_t = e^{tL}, t\in {\mathbb {R}}_+\}\) and the pseudo-inverse \(L^{-1}\) of the Ornstein-Uhlenbeck operator L as follows. For \(F\in L^2(\Omega )\) having the chaos expansion (2.7),
Observe that for any centered random variable \(F\in L^2(\Omega , \sigma \{W\}, {\mathbb {P}})\), \(LL^{-1}F = F\) and for any \(G\in \text {Dom}(L)\), \(L^{-1} LG = G - {\mathbb {E}}[G].\) The above expression and the modified isometry property (2.8) give us the contraction property of \(P_t\) on \(L^2(\Omega )\), that is, for \(F\in L^2(\Omega , \sigma \{W\}, {\mathbb {P}})\), \(\Vert P_t F\Vert _2 \le \Vert F\Vert _2\). Moreover, \(P_t\) is a contraction operator on \(L^q(\Omega )\) for any \(q\in [1,\infty )\); see [25, Proposition 2.8.6].
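The actions of \(P_t\), L and \(L^{-1}\) on the chaos expansion (2.7) are diagonal: \(P_t I_n(f_n) = e^{-nt} I_n(f_n)\), \(L I_n(f_n) = -n I_n(f_n)\) and \(L^{-1} I_n(f_n) = -n^{-1} I_n(f_n)\) for \(n\ge 1\). A toy coefficient-level sketch of these relations (assuming, as a simplification of ours, that each \(f_n\) is a scalar multiple of a fixed unit vector):

```python
import math
import numpy as np

# Toy model: F = sum_n I_n(f_n) with f_n = a_n e^{(n)}, ||e^{(n)}|| = 1, so F is
# encoded by the scalar sequence a = (a_0, a_1, ...).
def P(t, a):     # P_t: I_n(f_n) -> e^{-nt} I_n(f_n)
    return [an * math.exp(-n * t) for n, an in enumerate(a)]

def L(a):        # L: I_n(f_n) -> -n I_n(f_n)
    return [-n * an for n, an in enumerate(a)]

def Linv(a):     # pseudo-inverse: kills the 0th chaos, divides by -n otherwise
    return [0.0] + [-an / n for n, an in enumerate(a) if n >= 1]

def l2_norm(a):  # ||F||_2 from the isometry (2.8)
    return math.sqrt(sum(math.factorial(n) * an ** 2 for n, an in enumerate(a)))

a = [0.0, 1.0, -0.5, 0.2]                 # centered: a_0 = 0
assert np.allclose(L(Linv(a)), a)         # L L^{-1} F = F for centered F
assert l2_norm(P(0.7, a)) <= l2_norm(a)   # P_t is a contraction on L^2
```

The contraction on \(L^2\) is immediate from the factors \(e^{-nt}\le 1\); the extension to \(L^q\) is the nontrivial statement of [25, Proposition 2.8.6].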
Finally, let us recall Nelson's hypercontractivity property of the Ornstein-Uhlenbeck semigroup: For \(F\in L^q(\Omega , \sigma \{W\}, {\mathbb {P}})\) with \(q\in (1,\infty )\), it holds for each \(t\ge 0\) that \(\Vert P_t F \Vert _{q_t} \le \Vert F \Vert _q\) with \(q_t = 1 + (q-1)e^{2t}\). In this paper, we need one of its consequences, namely a moment inequality comparing \(L^q(\Omega )\)-norms on a fixed chaos:
see e.g. [25, Corollary 2.8.14].
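The consequence in question is the estimate \(\Vert F\Vert _{L^q(\Omega )} \le (q-1)^{n/2}\Vert F\Vert _{L^2(\Omega )}\) for F in the nth Wiener chaos and \(q\in [2,\infty )\) (the display above, which we do not reproduce, is [25, Corollary 2.8.14]). Since \(I_2(h^{\otimes 2}) = \mathrm{He}_2(W(h))\) for a unit vector h, this can be checked numerically on the second chaos (an illustration under the stated identification):

```python
import numpy as np

def norm_q(coef_e, q, n_nodes=60):
    """L^q(Omega)-norm of p(Z), Z ~ N(0,1), with p given in the HermiteE basis."""
    x, w = np.polynomial.hermite.hermgauss(n_nodes)
    z = np.sqrt(2.0) * x                      # standard normal quadrature nodes
    vals = np.polynomial.hermite_e.hermeval(z, coef_e)
    return (np.sum(w * np.abs(vals) ** q) / np.sqrt(np.pi)) ** (1.0 / q)

n = 2
F = [0.0, 0.0, 1.0]   # F = He_2(Z): an element of the 2nd Wiener chaos
for q in (3, 4, 6):
    assert norm_q(F, q) <= (q - 1) ** (n / 2.0) * norm_q(F, 2) + 1e-9
```

For example \(\Vert \mathrm{He}_2(Z)\Vert _4 = 60^{1/4}\approx 2.78\), comfortably below the bound \(3\sqrt{2}\approx 4.24\).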
2.2 Inequalities
Let us first present a few inequalities, which will be used in Sect. 3.
Lemma 2.1
Fix an integer \(d\ge 1\). Suppose that one of the following conditions holds:
Define
Then, for any \(f,g \in L^{2q}({\mathbb {R}}^{d})\),
where \(C_\gamma =\Vert \gamma \Vert _{L^{\ell }({\mathbb {R}}^d)}\) in case (a), and \(C_\gamma =C_{d,\beta }\) is the constant (depending on \(d,\beta )\) that appears in the Hardy–Littlewood–Sobolev inequality (2.16) below, in case (b).
Proof
In the case \(d=2\), this result was essentially proved on page 15 of [35] in case (a), and on page 6 of [4] in case (b). We reproduce the arguments here for the sake of completeness.
In case (a), we apply Hölder’s inequality and Young’s convolution inequality:
In case (b), we apply Hölder’s inequality and Hardy-Littlewood-Sobolev inequality:
This concludes the proof. \(\square \)
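For concreteness, in case (a) with \(d=1\) and \(\ell =1\) (so that \(q=\ell /(2\ell -1)=1\)), the Hölder-Young chain yields \(\int \int f(x)g(y)\gamma (x-y)\,dxdy \le \Vert \gamma \Vert _{L^1}\Vert f\Vert _{L^2}\Vert g\Vert _{L^2}\), which can be checked on a discretized example (illustration only; the grid, the kernel \(\gamma (x)=e^{-|x|}\) and the test functions are our own choices):

```python
import numpy as np

# d = 1, case (a) with l = 1 (so q = l/(2l-1) = 1): the bound reads
#   int int f(x) g(y) gamma(x-y) dx dy <= ||gamma||_{L^1} ||f||_{L^2} ||g||_{L^2}.
x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]
f = np.exp(-x ** 2)                  # nonnegative test functions (our choice)
g = np.exp(-(x - 1.0) ** 2 / 2.0)
gamma = np.exp(-np.abs(x[:, None] - x[None, :]))   # gamma(x - y) = e^{-|x-y|}

lhs = f @ gamma @ g * dx ** 2                      # discretized double integral
rhs = 2.0 * np.sqrt(np.sum(f ** 2) * dx) * np.sqrt(np.sum(g ** 2) * dx)
assert lhs <= rhs                                  # ||gamma||_{L^1} = 2
```

The decay of \(\gamma \) off the diagonal is what produces the gap between the two sides.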
To deal with case (c) in \(\mathbf{(H1)}\), we need the following modification of Lemma 2.1.
Lemma 2.2
Suppose that \(\gamma (x_1,\ldots ,x_d)=\prod _{i=1}^{d}\gamma _i(x_i)\), where for each \(i\in \{1,\ldots ,d\}\),
Let \(q_i=\ell _i/(2\ell _i-1)\) in case (M1) and \(q_{i}=1/(2-\beta _i)\) in case (M2). Let \(q=\max \{q_i: i=1,\dots ,d\}\).
If \(f, g \in L^{2q}({\mathbb {R}}^d)\) satisfy \(f(x)=g(x)=0\) for \(x \not \in \prod _{i=1}^d[a_i,b_i]\) for some real numbers \(a_i<b_i\), then
with \(\Lambda =\max \{b_i-a_i;i=1,\ldots ,d\}\), \(C_\gamma = \prod _{i=1}^{d}C_{\gamma _i}\) and \(\nu = \sum _{i=1}^{d} (q_i^{-1} - q^{-1})\). In particular, when \(q_i=q\) for all \(i\in \{1,\ldots ,d\}\), we have
The constants \(C_{\gamma _i}\) are defined as in Lemma 2.1.
Proof
By Lemma 2.1, inequality (2.17) holds for \(d=1\) with \(\nu =0\). Now let us consider \(d\ge 2\) and prove inequality (2.17) by induction. Suppose (2.17) holds for \(d\le k-1\) \((k\ge 2)\). We use the notation \(x=(x_1,\ldots ,x_k)=:\pmb {x_k}.\)
Without loss of generality, we assume \(q_1\ge q_2\ge \cdots \ge q_k\), so that \(q=q_1\). Applying the initial step \((d=1)\) yields
By the induction hypothesis, we can bound the right-hand side of (2.18) by
with \(\nu ^*= \sum _{i=1}^{k-1}( q_i^{-1} - q^{-1})\). By Hölder’s inequality,
A similar inequality holds for g. Since \(\nu ^*+ ( q_k^{-1} - q^{-1}) = \sum _{i=1}^{k} ( q_i^{-1} - q^{-1})\), inequality (2.17) holds for \(d=k\). \(\square \)
We will need the following generalization of Lemmas 2.1 and 2.2.
Lemma 2.3
-
(1)
Under the conditions of Lemma 2.1, for any \(f,g \in L^{2q}({\mathbb {R}}^{md})\)
$$\begin{aligned} \int _{{\mathbb {R}}^{2md}} f(\pmb {x_m}) g(\pmb {y_m}) \prod _{j=1}^m \gamma (x_j-y_j) d\pmb {x_m} d\pmb {y_m} \le C_\gamma ^m \Vert f \Vert _{L^{2q}({\mathbb {R}}^{md})} \Vert g \Vert _{L^{2q}({\mathbb {R}}^{md})}, \end{aligned}$$(2.19)where \(C_\gamma \) is the same constant as in Lemma 2.1. Here \(\pmb {x_m} = (x_1, \dots , x_m)\) with \(x_i\in {\mathbb {R}}^d\).
-
(2)
Let \(\gamma , C_{\gamma }\) and q be given as in Lemma 2.2. If \(f,g \in L^{2q}({\mathbb {R}}^{md})\) satisfy \(f(\pmb {x_{md}}) = g(\pmb {x_{md}})=0\) for \(\pmb {x_{md}} \notin \prod _{i=1}^{md} [a_i, b_i]\) for some real numbers \(a_i<b_i\), then inequality (2.19) holds with \(C_\gamma \) replaced by \(\Lambda ^\nu C_\gamma \), where \(\Lambda =\max \{ b_i -a_i : i=1,\dots , md\}\) and \(\nu = \sum _{i=1}^d ( q_i^{-1} - q^{-1})\). Here \(\pmb {x_{md}} = (x_1, \dots , x_{md})\) with \(x_i\in {\mathbb {R}}\).
Proof
The proof will be done by induction on m simultaneously for both cases (1) and (2). Let \(C=C_{\gamma }\) in case (1) and \(C=\Lambda ^{\nu }C_{\gamma }\) in case (2). The results are true for \(m=1\) by Lemmas 2.1 and 2.2. Assume that the results hold for \(m-1\). Applying the inequality for \(m=1\) yields
By the induction hypothesis, the latter term can be bounded by
which completes the proof. \(\square \)
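The same kind of discretized sanity check works for the tensorized bound (2.19); here with \(d=1\), \(m=2\) and \(\ell =1\) (so \(q=1\)), again with our own choices of \(\gamma \) and test functions:

```python
import numpy as np

# Lemma 2.3(1) with d = 1, m = 2, l = 1 (so q = 1): check
#   int f(x1,x2) g(y1,y2) gamma(x1-y1) gamma(x2-y2) dx dy
#     <= ||gamma||_{L^1}^2 ||f||_{L^2(R^2)} ||g||_{L^2(R^2)}.
x = np.linspace(-8, 8, 161)
dx = x[1] - x[0]
f = np.exp(-np.add.outer(x ** 2, x ** 2))            # f(x1,x2) = e^{-x1^2 - x2^2}
g = np.exp(-np.add.outer((x - 1) ** 2, x ** 2) / 2)  # shifted bump
gam = np.exp(-np.abs(np.subtract.outer(x, x)))       # gamma(x-y) = e^{-|x-y|}

lhs = np.einsum('ij,ik,jl,kl->', f, gam, gam, g, optimize=True) * dx ** 4
rhs = 2.0 ** 2 * np.sqrt(np.sum(f ** 2)) * np.sqrt(np.sum(g ** 2)) * dx ** 2
assert lhs <= rhs
```

The `einsum` subscript pattern pairs \(x_1\) with \(y_1\) and \(x_2\) with \(y_2\), mirroring the product \(\prod _j \gamma (x_j-y_j)\) in (2.19).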
Let us return to the three cases of Hypothesis \(\mathbf{(H1)}\). Lemma 2.1 indicates that \(L^{2q}({\mathbb {R}}^{2})\) is continuously embedded into \({\mathcal {P}}_{0}\), with \(q\in (1/2, 1)\) given by
Recall that \({\mathcal {P}}_{0}\) has been defined at the beginning of Sect. 2.1. Moreover, for any \(f, g\in L^{2q}({\mathbb {R}}^2)\),
where
For case (c) of Hypothesis \(\mathbf{(H1)}\), we consider three sub-cases:
Lemma 2.2 implies that, for any \(f, g\in L^{2q}({\mathbb {R}}^2)\) with
such that f, g vanish outside a box with side lengths bounded by \(\Lambda \), inequality (2.21) still holds with
where the constants \(C_{1,\beta _i}\) are given as in Lemma 2.1.
From Lemma 2.3, we deduce that in cases (a) and (b),
for any measurable function \(f:( {\mathbb {R}}_+ \times {\mathbb {R}}^2)^n \rightarrow {\mathbb {R}}\) such that f vanishes outside \(([0,t] \times {\mathbb {R}}^2)^n\); in case (c), inequality (2.25) holds true for any measurable function \(f: ({\mathbb {R}}_+ \times {\mathbb {R}}^{2})^n \rightarrow {\mathbb {R}}\) such that
with \(\Lambda :=\max \{ b_i-a_i : i=1,\dots , 2n\}<\infty \).
Let us present a few facts on the fundamental solution G. When \(d=2\),
and
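Assuming the standard expressions (the displays above, not reproduced in this version): \(G_t(x) = \frac{1}{2\pi }(t^2-|x|^2)^{-1/2}{\mathbf {1}}_{\{|x|<t\}}\) and \(\int _{{\mathbb {R}}^2} G_t(x)\,dx = t\), the total-mass identity can be confirmed numerically in polar coordinates:

```python
import numpy as np

def G2_total_mass(t, n=200_000):
    # In polar coordinates, int_{R^2} G_t = int_0^t r / sqrt(t^2 - r^2) dr;
    # the substitution r = t*sin(theta) removes the edge singularity and the
    # integrand becomes t*sin(theta) on [0, pi/2] (midpoint rule below).
    theta = (np.arange(n) + 0.5) * (np.pi / 2) / n
    return np.sum(t * np.sin(theta)) * (np.pi / 2) / n

for t in (0.5, 1.0, 3.0):
    assert abs(G2_total_mass(t) - t) < 1e-6
```

The inverse-square-root singularity at \(|x|=t\) is integrable, which is why the substitution yields a smooth integrand.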
We will also use the following estimate.
Lemma 2.4
(Lemma 4.3 of [4]) For any \(q \in (1/2,1)\) and \(d=2\),
where \(A_{q}>0\) is a constant depending on q.
Finally, we record the expression of the Fourier transform of \(G_t\) for \(d\in \{1,2\}\):
Note that (see e.g. (3.4) of [3])
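For \(d=1\) the Fourier transform reads \({\mathcal {F}}G_t(\xi ) = \sin (t|\xi |)/|\xi |\) (the same formula holds for \(d=2\)), and the bound alluded to in (3.4) of [3] is, presumably, \(\sin ^2(t|\xi |)/|\xi |^2 \le 2(t^2+1)(1+|\xi |^2)^{-1}\), matching the constant \(2(t^2+1)\int (1+|\xi |^2)^{-1}\mu (d\xi )\) appearing in Sect. 3.2. Both claims can be checked numerically (a sketch under these stated assumptions):

```python
import numpy as np

# 1) d = 1: F G_t(xi) = sin(t|xi|)/|xi| for G_t(x) = (1/2) 1_{|x|<t}.
t, xi = 1.5, 2.3
x = np.linspace(-t, t, 200_001)
vals = 0.5 * np.exp(-1j * xi * x)
ft = (np.sum(vals) - 0.5 * (vals[0] + vals[-1])) * (x[1] - x[0])  # trapezoid rule
assert abs(ft - np.sin(t * xi) / xi) < 1e-8

# 2) the bound: sin^2(t|xi|)/|xi|^2 <= 2(t^2+1)/(1+|xi|^2), on a grid in r = |xi|.
r = np.linspace(1e-6, 200.0, 400_000)
for tt in (0.1, 1.0, 2.5, 10.0):
    assert np.all(np.sin(tt * r) ** 2 / r ** 2 <= 2 * (tt ** 2 + 1) / (1 + r ** 2))
```

The bound follows from \(\sin ^2(tr)\le \min (1, t^2r^2)\), considering the regimes \(tr\le 1\) and \(tr\ge 1\) separately.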
In Sect. 4, we need the following two results.
Lemma 2.5
For \(d\in \{ 1,2\}\), let \(\gamma _0\) satisfy the assumption (i) on page 2 and let \(\mu _p\) be a symmetric measure on \(({\mathbb {R}}^{d})^p\), for some integer \(p\ge 1\). Then, with \(0< s\le t\) and \(\Delta _p(t)= \{ \pmb {s_p}\in {\mathbb {R}}_+^p: t =s_0> s_1> \cdots> s_p > 0 \}\),
for any measurable function \(g: ({\mathbb {R}}_+\times {\mathbb {R}}^d)^p\rightarrow {\mathbb {R}}_+\) for which the above integral is finite.
Proof
After applying \(|ab|\le \frac{a^2+b^2}{2}\) and using the symmetry of \( \mu _p\), we have that the left-hand side quantity is bounded by
with
Putting \({\mathcal {I}}_s(s_1, \dots , s_p) := {\mathbf {1}}_{\{ s> s_1> \cdots> s_p>0 \}} \) and letting \(\widetilde{{\mathcal {I}}}_s(s_1, \dots , s_p)\) be its canonical symmetrization (so that \(\big \vert \widetilde{{\mathcal {I}}}_s\big \vert \le (p!)^{-1}\)), we can rewrite the term in (2.31) as
using also the bound \( \sup \{ \int _0^s \gamma _0(r-r') dr' : r\in [0,t] \}\le \Gamma _t \). For the other term (2.32), we argue in the same way: With \(({\mathcal {I}}_s \cdot h)(s_1, \dots , s_p) ={\mathcal {I}}_s(s_1, \dots , s_p) h(s_1, \dots , s_p) \), we rewrite the term (2.32) as
since \(h\ge 0\) and \(\big \vert \widetilde{{\mathcal {I}}}_t\big \vert \le (p!)^{-1}\). This concludes the proof. \(\square \)
Lemma 2.6
For \(d\in \{ 1,2\}\) let \(\gamma , \mu \) satisfy the assumption (ii) on page 2. Then, for any nonnegative function \(h\in {\mathcal {P}}_0\cap L^1({\mathbb {R}}^d)\),
As a consequence, for any integer \(p\ge 1\) and \(w_1, \dots , w_p\in [0,t]\),
Proof
Since \(h \ge 0\), using the fact that \({\mathcal {F}}h(\xi +z) = {\mathcal {F}}(e^{-iz \cdot } h)(\xi )\) together with \(|e^{-iz (x +y)}|=1\), we get
which is exactly \( \int _{{\mathbb {R}}^d} \mu (d\xi ) \big \vert {\mathcal {F}}h(\xi ) \big \vert ^2.\) In particular, by (2.30),
which is finite due to Dalang’s condition (1.2). Applying this inequality several times yields
which is a uniform bound over \((\pmb {z_p}, \pmb {w_p})\in {\mathbb {R}}^{dp}\times [0,t]^p\). \(\square \)
3 \(L^p\) estimates for Malliavin derivatives
This section is mainly devoted to the proof of Theorem 1.3. The proof will be done in several steps organized in Sects. 3.1, 3.2, 3.3, 3.4 and 3.5. In Sect. 3.6, we record a few consequences of Theorem 1.3 that will be used in the proof of Theorem 1.10 in Sect. 5.
3.1 Step 1: Preliminaries
Let us first introduce some handy notation. Recall that for \( \pmb {t_n}:=(t_1,\ldots ,t_n) \) and \( \pmb {x_n}:=(x_1,\ldots ,x_n) \), we defined in (1.8)
with the convention (1.6), and we denote by \({\widetilde{f}}_{t,x,n}\) the symmetrization of \(f_{t,x,n}\); see (1.9). We treat the time-space variables \((t_i,x_i)\) as one coordinate and we write
as in Notation A-(3). Recall that the solution u(t, x) has the Wiener chaos expansion
where the kernel \(f_{t,x,n}\) is not symmetric and in this case, by definition, \(I_n(f_{t,x,n})= I_n\big ({\widetilde{f}}_{t,x,n} \big )\).
Our first goal is to show that, for any fixed \((r,z) \in [0,t] \times {\mathbb {R}}^d\) and for any \(p\in [2,\infty )\), the series
converges in \(L^p(\Omega )\), and the sum, denoted by \(D_{r,z}u(t,x) \), satisfies the \(L^p\) estimates (1.11).
The first term of the series (3.1) is \({\widetilde{f}}_{t,x,1}(r,z)=G_{t-r}(x-z)\). In general, for any \(n\ge 1\),
where \(h^{(j)}_{t,x,n}(r,z;\bullet )\) is the symmetrization of the function \((\pmb {t_{n-1}},\pmb {x_{n-1}})\rightarrow f^{(j)}_{t,x,n}(r,z; \pmb {t_{n-1}},\pmb {x_{n-1}})\), which is obtained from \(f_{t,x,n}\) by placing r in position j among the time instants and z in position j among the space points: With the convention (1.6),
That is,
with \( f_{r,z,1}=1\). For example, \(f^{(1)}_{t,x,1}(r,z; \bullet ) = G_{t-r}(x-z)\) and \(f^{(1)}_{t,x,n}(r,z; \pmb {t_{n-1}}, \pmb {x_{n-1}} )= G_{t-r}(x-z) f_{r,z,n-1}( \pmb {t_{n-1}}, \pmb {x_{n-1}} )\). By the definition of the symmetrization,
Similarly, for \(\pmb {s_m}\in [0,t]^m\) and \(\pmb {y_m}\in {\mathbb {R}}^{dm}\), and for any \(p\in [2,\infty )\), we will show that
converges in \(L^p(\Omega )\). Note that if the series (3.6) converges in \(L^p(\Omega )\), we can see that almost surely, the function
is symmetric, meaning that for any \(\sigma \in {\mathfrak {S}}_m\),
From now on, we assume \(t>s_1> \cdots> s_m>0\) without loss of generality. Note that, as in (3.2), we can write
where \(\pmb {i_m}\in \Delta _{n,m}\) means \(1 \le i_1< i_2< \cdots < i_m \le n\) and \(h^{(\pmb {i_m})}_{t,x,n}(\pmb {s_m}, \pmb {y_m}; \bullet )\) is the symmetrization of the function \(f^{(\pmb {i_m})}_{t,x,n}(\pmb {s_m}, \pmb {y_m}; \bullet )\) that is defined by
which is a generalization of (3.4).
3.2 Step 2: Reduction to white noise in time
Let \(\dot{{\mathfrak {X}}}\) denote the Gaussian noise that is white in time and has the same spatial correlation as W and let \(\{{\mathfrak {X}}(f): f\in {\mathcal {H}}_0\}\) denote the resulting isonormal Gaussian process; see Sect. 2.1.
For any \(p\in [2,\infty )\), we deduce from (3.6) and (3.7) that
The function \(\sum _{\pmb {i_m}\in \Delta _{n,m}} h^{(\pmb {i_m})}_{t,x,n}(\pmb {s_m}, \pmb {y_m}; \bullet )\) vanishes outside \(\big ([0,t]\times {\mathbb {R}}^d\big )^{n-m}\), thus we deduce from (2.13) that
Therefore, we get
This leads to
with
The product formula (2.14) and the decomposition (3.8) yield, with \((i_0, s_0, y_0)=(0, t,x)\),
where the last equality is obtained by using the independence among the random variables inside the expectation. It remains to estimate two typical terms:
The first term in (3.13) can be estimated as follows. Using Fourier transform in space (see (2.29)), we have, with \(t_0=r\),
By Lemma 2.6,
where \(C= 2(t^2+1) \int _{{\mathbb {R}}^d} ( 1+ |\xi |^2)^{-1}\mu (d\xi )\).
Remark 3.1
By the arguments that lead to (3.9), we can also get, for any \(p\in [2,\infty )\),
and then the estimate (3.15) implies \(u(t,x)\in L^p(\Omega )\). Moreover,
This is done under Dalang's condition (1.2) only, and the case \(p=2\) provides another proof of [3, Theorem 4.4] when \(d=1,2\).
In what follows, we estimate the second term in (3.13) separately for the cases \(d=1\) and \(d=2\). As usual, we will use C to denote an immaterial constant that may vary from line to line.
3.2.1 Estimation of \(\Big \Vert I^{{\mathfrak {X}}}_{j-1}\big (f^{(j)}_{t,x,j}(r,z;\bullet ) \big ) \Big \Vert _2^2\) when \(d=1\)
When \(d=1\), \(G_t(x) = \frac{1}{2} {\mathbf {1}}_{\{|x| <t \}}\). For \(j=1\), \(I^{{\mathfrak {X}}}_{j-1}\big (f^{(j)}_{t,x,j}(r,z;\bullet ) \big )=G_{t-r}(x-z)\) with the convention (1.6). For \(j\ge 2\), it follows from the (modified) isometry property (2.8) that
where we recall that \(h^{(j)}_{t,x,j}(r,z;\bullet ) \) is the symmetrization of \(f^{(j)}_{t,x,j}(r,z;\bullet ) \); see (3.5). Then, taking advantage of the simple form of \(G_t(x)\) for \(d=1\), we get
from which we further get
where the last inequality follows from (3.15) and (3.14).
3.2.2 Estimation of \(\Big \Vert I^{{\mathfrak {X}}}_{j-1}\big (f^{(j)}_{t,x,j}(r,z;\bullet ) \big ) \Big \Vert _2^2\) when \(d=2\)
Let q be defined as in (2.20) and (2.23); we fix such a q throughout this subsection. For \(j=1\), \(I^{{\mathfrak {X}}}_{j-1}\big (f^{(j)}_{t,x,j}(r,z;\bullet ) \big )=G_{t-r}(x-z)\) with the convention (1.6). For \(j\ge 2\), we begin with
where we applied Lemma 2.3 for the inequality above, and we denote
Note that we can choose C to depend only on \((t,\gamma , q)\) and be increasing in t.
Case \(j=2\). In this case, we deduce from Lemma 2.4 and (2.27) that
Case \(j\ge 3\). In this case, we use Minkowski inequality with respect to the norm in \(L^{1/q}( [t_2,t] ,dt_1)\) in order to get
Applying Lemma 2.4 yields
If \(j=3\), we have
Owing to (2.27), we can bound \(G^{2q-1} _{t-t_2}(x-x_2) \) by \((2\pi ) (t-t_2)G^{2q} _{t-t_2}(x-x_2) \), and then we apply again Lemma 2.4 and (2.27) to conclude that
For \(j\ge 4\), we continue with the estimate (3.20). We can first apply Minkowski inequality with respect to the norm \(L^{1/q}\big ( [t_4, t_2], dt_3\big )\) and then apply Lemma 2.4 to obtain
Note that
Then, by Cauchy-Schwarz inequality and (2.26), we can infer that
where \(c_1= \frac{(2\pi )^{3-4q}}{4-4q}\). Thus, substituting this estimate into (3.22), we end up with
Focusing on the indicators, the right-hand side of this estimate can be bounded by
For \(j=4\), using (2.28), we have
Now for \(j\ge 5\), we just integrate in each of the variables \(x_4, \dots , x_{j-1}\) (with this order) so that, thanks to (2.26), we end up with
where we used the rough estimate \(a^\nu \le (b+1)^{\nu }\) for \(0<a\le b\) and \(\nu >0\). Thus, using (2.28) we obtain:
Hence, combining the estimates (3.19), (3.21), (3.23) and (3.24) and taking into account that \( I^{{\mathfrak {X}}}_{0}\big (f^{(1)}_{t,x,1}(r,z;\bullet ) \big )=G_{t-r}(x-z)\), we can write
where the constant \(C > 1\) depends on \((t,\gamma , q)\) and is increasing in t. For \(1\le j \le n\), we obtain the following bound
3.3 Step 3: Proof of (1.11)
Let us first consider the lower bound in (1.11) for \(d\in \{1,2\}\). For \(p\in [2,\infty )\), we deduce from the modified isometry (2.8) that
Now let us establish the upper bound in (1.11). By symmetry, we can assume \(t>s_1> \cdots> s_m >0\). First we consider the case where \(d=2\). Recall the definition of \({\mathcal {Q}}_{m,n}\) from (3.11), and then plugging the estimates (3.15) and (3.25) into (3.12) yields, with \((i_0, s_0, y_0) = (0, t,x)\),
where we used the rough bound \(\left( {\begin{array}{c}n\\ m\end{array}}\right) \le 2^n\). The sum in the above display is equal to
by the multinomial formula. That is, we can get
which, together with the estimate (3.10), implies the upper bound in (1.11), when \(d=2\).
The case \(d=1\) can be done in the same way by noticing that the bound in (3.17) can be replaced by \(n\frac{C^j}{j!} G_{t-r}^2(x-z)\) for \(1\le j\le n\). Then, like the estimate for \(d=2\), we can get, for \(t>s_1> \cdots>s_m>0\),
which together with the estimate (3.10) implies the upper bound in (1.11), when \(d=1\). This completes the proof of the estimate (1.11).
Notice that the upper bound also shows that the series (3.6) converges in \(L^p\) for any \(p\in [2,\infty )\), for any fixed \(\pmb {s_m}\in [0,t]^m\) and \(\pmb {y_m}\in {\mathbb {R}}^{dm}\).
3.4 Step 4: Existence of a measurable version
We claim that there is a random field Y such that \(Y(\pmb {s_m}, \pmb {y_m}) = D^m_{\pmb {s_m}, \pmb {y_m}}u(t,x)\) almost surely for almost all \((\pmb {s_m}, \pmb {y_m})\in [0,t]^m\times {\mathbb {R}}^{md}\) and the mapping
is jointly measurable. This fact is rather standard and we will sketch the proof only in the case \(d=2\). From the explicit form of the kernels \(f_{t,x,n}\) given in (1.8), it follows that the mapping
is measurable from \([0,t]^m\times {\mathbb {R}}^{2m}\) to \(L^2([0,t]^{n-m} ; L^{2q}({\mathbb {R}}^{2(n-m)}))\). Because
we deduce that the map (3.26) is measurable from \([0,t]^m\times {\mathbb {R}}^{2m}\) into \({\mathcal {H}}^{\otimes (n-m)}\). This implies that the mapping
is measurable from \([0,t]^m\times {\mathbb {R}}^{2m}\) to \(L^2(\Omega )\). The upper bound in (1.11) implies that the mapping (3.27) belongs to the space
From this, it follows that we can find a measurable modification of the process
Finally, by standard arguments we deduce the existence of a measurable modification of the series (3.6).
3.5 Step 5: Proof of \(u(t,x)\in {\mathbb {D}}^{\infty }\)
We have already seen in Remark 3.1 that \(u(t,x)\in L^p(\Omega )\) for any \(p\in [2,\infty )\). Then, it remains to show that the function \(D^m_{\pmb {s_m}, \pmb {y_m}}u(t,x)\) defined as the limit of the series (3.6) coincides with the mth Malliavin derivative of u(t, x). To do this, it suffices to show that \({\mathbb {E}}\big [ \Vert D^mu(t,x)\Vert _{{\mathcal {H}}^{\otimes m} }^p \big ] <\infty \) for any \(m\ge 1\). By Fubini's theorem and using the upper bound (1.11), we write
This shows \(u(t,x)\in {\mathbb {D}}^{\infty }\) and completes the proof of Theorem 1.3. \(\square \)
Remark 3.2
When \(d=2,p=2,m=1\) and for the cases (a), (b) in Hypothesis \(\mathbf{(H1)}\), the upper bound in (1.11) can be proved in a much simpler way for almost all \((r,z)\in [0,t]\times {\mathbb {R}}^2\). Let \(v_{\lambda }\) be the solution to the stochastic wave equation
where \(\lambda >0\) and \(\dot{{\mathfrak {X}}}\) is given as before. This solution has the chaos expansion \(v_{\lambda }(t,x)=\sum _{n\ge 0} \lambda ^{n}I_{n}^{{\mathfrak {X}}}(f_{t,x,n})\) and its Malliavin derivative has the chaos expansion
see (3.1) and (3.2). From this, we infer that for any \((\lambda , t, x)\in (0,\infty )^2\times {\mathbb {R}}^2\) and for almost every \((r,z)\in [0,t]\times {\mathbb {R}}^2\),
where \(C_{\lambda ,t,\gamma }>0\) is a constant depending on \((\lambda , t, \gamma )\) and increasing in t. The inequality above is due to Theorem 1.3 of [35] in case (a) and to Theorem 1.2 of [4] in case (b). Therefore,
Thus, using (3.28) with \(\lambda =\sqrt{\Gamma _t}\), we get \(\big \Vert D_{r,z}u(t,x) \big \Vert _2^2 \le C_{\Gamma _t,t,\gamma }G_{t-r}^2(x-z)\).
3.6 Consequences of Theorem 1.3
We will establish two estimates that will be useful in Sect. 5.
Corollary 3.3
Let \(d=1,2\). Then, for any finite \(T>0\),
In particular, \(D_{r,\bullet }u(t,x)(\omega ) \in |{\mathcal {P}}_{0}|\) for almost every \((\omega ,r) \in \Omega \times [0,t]\), where \(|{\mathcal {P}}_0|\) is defined in (2.2).
Proof
We work with a version of \(\{ D_{r,z}u(t,x): (r,z)\in [0,t]\times {\mathbb {R}}^2\}\) that is jointly measurable. By Fubini’s theorem and Cauchy-Schwarz inequality, we have
where C is a constant depending on \(\gamma _0,\gamma ,t\) and is increasing in t. The above (uniform) bound implies (3.29). Hence, \(D_{r,\bullet }u(t,x)(\omega ) \in |{\mathcal {P}}_{0}|\) for almost all \((\omega ,r) \in \Omega \times [0,t]\).
\(\square \)
The space \( |{\mathcal {H}}\otimes {\mathcal {P}}_0|\) appearing in the next corollary is defined as the set of measurable functions \(h:{\mathbb {R}}_+\times {\mathbb {R}}^{2d} \rightarrow {\mathbb {R}}\) such that
Then, \( |{\mathcal {H}}\otimes {\mathcal {P}}_0| \subset {\mathcal {H}}\otimes {\mathcal {P}}_0\).
Corollary 3.4
Let \(d=1,2\). For almost all \((\omega ,r) \in \Omega \times [0,t]\), \(D D_{r,\bullet } u(t,x)(\omega ) \in |{\mathcal {H}}\otimes {\mathcal {P}}_0|\) and for any finite \(T>0\),
Proof
Using Theorem 1.3, Cauchy-Schwarz inequality and the estimate (1.11), we can write
As a consequence,
By the arguments used in the proof of Theorem 1.3, it follows that
Therefore,
and the same argument as in the proof of Corollary 3.3 ends our proof. \(\square \)
Remark 3.5
Note that for any finite \(T>0\), \({\mathbb {E}}\big (\big \Vert \vert D^2 u(t,x)\vert \big \Vert _{{\mathcal {H}}^{\otimes 2}}^2\big ) < \infty \) for any \((t,x) \in [0,T] \times {\mathbb {R}}^d\).
4 Gaussian fluctuation: Proof of Theorem 1.4
Recall that
and \(\sigma _R(t) = \sqrt{\text {Var}\big ( F_R(t) \big ) }\). First, we need to obtain the limiting covariance structure, which is the content of Proposition 4.1. It will give us the growth order of \(\sigma _R(t)\). Then, in Sect. 4.2, we apply the second-order Gaussian Poincaré inequality to establish the quantitative CLT for \(F_R(t)/\sigma _R(t)\). Finally, we will prove the functional CLT by showing the convergence of the finite-dimensional distributions and the tightness.
4.1 Limiting covariance
Proposition 4.1
Let u denote the solution to the hyperbolic Anderson model (1.1) and assume that the non-degeneracy condition (1.17) holds. Then, the following results hold true:
-
(1)
Suppose \(d \in \{1,2\}\) and \(\gamma ({\mathbb {R}}^d) \in (0, \infty )\). Then, for any \(t,s\in (0,\infty )\),
$$\begin{aligned} \lim _{R\rightarrow \infty } R^{-d} {\mathbb {E}}\big [ F_R(t) F_R(s) \big ] = \omega _d \sum _{p\ge 1} p! \int _{{\mathbb {R}}^d} \big \langle {\widetilde{f}}_{t,x,p}, {\widetilde{f}}_{s,0,p} \big \rangle _{{\mathcal {H}}^{\otimes p}}dx, \end{aligned}$$(4.1)see also (1.18). In particular, \(\sigma _R(t) \sim R^{d/2}\).
-
(2)
Suppose \(d \in \{1,2\}\) and \(\gamma (x) = |x|^{-\beta }\) for some \(\beta \in (0, 2\wedge d)\). Then, for any \(t,s\in (0,\infty )\),
$$\begin{aligned} \lim _{R\rightarrow \infty } R^{\beta -2d} {\mathbb {E}}\big [ F_R(t) F_R(s) \big ] = \kappa _{\beta , d} \int _0^t dr\int _0^s dr' \gamma _0(r-r') (t-r)(s-r'), \end{aligned}$$(4.2)
where \(\kappa _{\beta , d} = \int _{B_1^2} dxdy | x- y |^{-\beta }\) is introduced in (1.16). In particular, \(\sigma _R(t) \sim R^{d- \frac{\beta }{2}}\).
-
(3)
Suppose \(d=2\) and \(\gamma (x_1,x_2) =\gamma _1(x_1)\gamma _2(x_2)\) satisfies one of the following conditions:
$$\begin{aligned} {\left\{ \begin{array}{ll} (c_1) &{} \gamma _i(x_i) = |x_i|^{-\beta _i}~\text {for some }\beta _i\in (0,1), i=1,2; \\ \quad \\ (c_2) &{} \gamma _1\in L^1({\mathbb {R}}) ~\mathrm{and}~ \gamma _2(x) = |x|^{-\beta }~\text {for some }\beta \in (0,1) \end{array}\right. }\,\, . \end{aligned}$$(4.3)
For any \(s,t\in (0,\infty )\), the following results hold true:
-
\((r_1)\) In \((c_1)\), we have
$$\begin{aligned}&\lim _{R\rightarrow \infty } R^{\beta _1-\beta _2-4} {\mathbb {E}}\big [ F_R(t) F_R(s) \big ] \nonumber \\&\quad = K_{\beta _1, \beta _2} \int _0^t dr\int _0^s dr' \gamma _0(r-r') (t-r)(s-r'), \end{aligned}$$(4.4)
where \(K_{\beta _1, \beta _2}\) is defined in (1.22).
-
\((r_2)\) In \((c_2)\), we have
$$\begin{aligned} \lim _{R\rightarrow \infty } R^{\beta -3} \!{\mathbb {E}}\big [ F_R(t) F_R(s) \big ] \!=\! \gamma _1({\mathbb {R}}) {\mathcal {L}}_\beta \!\! \int _0^t dr\!\int _0^s dr' \gamma _0(r-r') (t-r)(s-r'), \end{aligned}$$(4.5)
where \({\mathcal {L}}_\beta \) is defined in (1.24).
-
4.1.1 Proof of part (1) in Proposition 4.1
Preparation. In the following, we will denote by \(\varphi \) the density of \(\mu \). For \(0< s\le t < \infty \) and \(x,y\in {\mathbb {R}}^d\), we have
where \( {\widetilde{f}}_{t,x,p}\in {\mathcal {H}}^{\otimes p}\) is defined as in (1.8)–(1.9) and \(\Phi _p(t,s; x-y)\), defined in the obvious manner, depends only on the difference \(x-y\). To see this dependence and to prepare for future computations, we rewrite \(\Phi _p(t,s; x-y)\) using the Fourier transform in space:
where \(\Delta _p(t) =\{ \pmb {s_p}: t> s_1> \cdots> s_p>0\}\), \((s_0, y_0, {\tilde{s}}_{\sigma (0)}, {\tilde{y}}_{\sigma (0)}) = (t,x,s,y)\), \({\widehat{G}}_t(\xi ) = \frac{\sin (t |\xi | )}{| \xi |}\) is introduced in (2.29) and we have used again the convention \(G_t(z)=0\) for \(t\le 0\).
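The closed form \({\widehat{G}}_t(\xi ) = \sin (t|\xi |)/|\xi |\) can be sanity-checked numerically. A minimal sketch for \(d=1\) (where the fundamental solution is the classical kernel \(G_t(x)=\tfrac12 {\mathbf {1}}_{\{|x|<t\}}\); the parameter values below are illustrative choices, not from the text):

```python
import numpy as np

# Sanity check (d = 1): the Fourier transform of the wave kernel
# G_t(x) = (1/2) 1_{|x| < t} is  G_hat_t(xi) = sin(t|xi|)/|xi|,
# under the convention  G_hat_t(xi) = \int G_t(x) e^{-i xi x} dx.
def G_hat_numeric(t, xi, n=1_000_000):
    h = 2 * t / n
    x = -t + (np.arange(n) + 0.5) * h        # midpoints of the support [-t, t]
    return h * np.sum(0.5 * np.cos(xi * x))  # imaginary part cancels by symmetry

def G_hat_closed(t, xi):
    return np.sin(t * abs(xi)) / abs(xi)

t, xi = 2.0, 1.7
assert abs(G_hat_numeric(t, xi) - G_hat_closed(t, xi)) < 1e-8
assert abs(G_hat_closed(t, xi)) <= t  # the uniform bound |G_hat_t| <= t used below
```

The last assertion records the elementary bound \(|{\widehat{G}}_t(\xi )|\le t\), which is used repeatedly in the estimates of this section.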
Relation (4.6) shows that \( \Phi _p(t,s; x-y)\) is always nonnegative and equality (4.7) indicates that \(\Phi _p(t,s; x-y)\) indeed depends only on the difference \(x-y\), so that we can write
Note that \(\Phi _p(t,t; 0)\) coincides with \(\alpha _p(t)\) given in [3, Equation (4.11)]. Moreover, applying Lemma 2.5 with \(\mu _p(d\pmb {\xi _p}) = \varphi (\xi _1) \cdots \varphi (\xi _p) d\xi _1 \cdots d\xi _p\) and \( g(s_1,\xi _1, \dots , s_p,\xi _p) = \prod _{j=0}^{p-1} \vert {\widehat{G}}_{s_{j} - s_{j+1}}( \xi _p+\cdots + \xi _{j+1} ) \vert , \) we get (with \(s\le t\))
where we recall that \(\Gamma _t = \int _{-t}^t \gamma _0(a)da\) and point out that the right-hand side of (4.9) is finite by applying Lemma 2.6 with \(z_j = \xi _{j+1}+\cdots +\xi _p\) and \(z_p=0\).
Now we are ready to show (4.1).
Proof of (4.1)
Let us begin with
where \(\omega _1=2\), \(\omega _2=\pi \) and \(\text {Leb}(A)\) stands for the Lebesgue measure of \(A\subset {\mathbb {R}}^d\). We claim that
from which, together with the dominated convergence theorem, we can deduce that
We remark that, by the monotone convergence theorem and the fact that \(\Phi _p(t,s;z)\ge 0\) for all \(z\in {\mathbb {R}}^d\), the claim (4.10) is equivalent to
Let us show the claim (4.12).
For \(p=1\), by direct computations, we can perform integration with respect to \(z, y, {\tilde{y}} \) (one by one in this order) to obtain
where \( \int _{{\mathbb {R}}^d} \Phi _1(t,s;z)dz >0\) due to the non-degeneracy assumption (1.17) on \(\gamma _0\). This implies in particular that \(\sigma _R(t) > 0\) for large enough R.
Next we consider \(p\ge 2\). Using the expression (4.7) and applying Fubini’s theorem with the dominance condition (4.9), we can write
where \(p_\varepsilon (\xi ) =(2\pi \varepsilon )^{-d/2} e^{-|\xi |^2/(2\varepsilon )} \) for \(\xi \in {\mathbb {R}}^d\) and we applied Lemma 2.5 with \( \mu _p(d\pmb {\xi _p}) =\varphi (\xi _1)\cdots \varphi (\xi _p) p_\varepsilon (\xi _1+\cdots +\xi _p) d\xi _1 \cdots d\xi _p\).
Next, we make the change of variables
and the bound (4.14) becomes
where we used \(| {\widehat{G}}_{t-s_1} (\xi ) | \le t\), and \(\varphi (\eta _1-\eta _2)\le \Vert \varphi \Vert _\infty \) (which is finite because \(\gamma ({\mathbb {R}}^d)<\infty \)) to obtain (4.15), and
Observe that \(Q_{p-1}\) does not depend on \(\eta _1\), thus for any \(p\ge 2\)
By Lemma 2.6, we have for any \(p\ge 2\)
Now, plugging the above estimate and (4.17) into (4.12), and using (4.13) for \(p=1\), we have
This shows the claim (4.12) and the claim (4.10), which confirm the limiting covariance structure (4.11). Hence the proof of (4.1) is completed.
\(\square \)
4.1.2 Proof of part (2) in Proposition 4.1
In this case, the corresponding spectral density is given by \( \varphi (\xi ) = c_{d,\beta } | \xi |^{\beta - d} \), for some constant \(c_{d,\beta }\) that only depends on d and \(\beta \).
Now, let us recall the chaos expansion (1.7) of u(t, x), from which we can obtain the following chaos expansion of \(F_R(t)\):
where \({\mathbf {J}}_{p,R}(t):= I_p\left( \int _{|x| \le R} {\widetilde{f}}_{t,x,p} dx \right) \) is the projection of \(F_R(t)\) onto the pth Wiener chaos, with \( {\widetilde{f}}_{t,x,p} \) given as in (1.9).
Using the orthogonality of Wiener chaoses of different orders, we have
Let us first consider the variance of \( {\mathbf {J}}_{1,R}(t)\). With \(B_R=\{ x\in {\mathbb {R}}^d: |x | \le R\}\), we can write
Then, making the change of variables \((x, x', \xi )\rightarrow (Rx, Rx', \xi /R)\), we get
Note that \({\widehat{G}}_{t}(\xi /R)\) is uniformly bounded and convergent to t as \(R\rightarrow \infty \); observe also that
Thus we deduce from the dominated convergence theorem that, with \(\kappa _{\beta ,d} :=\int _{B_1^2} dxdx' | x- x' |^{-\beta }\),
In the same way, we can get
In what follows, we will show that as \(R\rightarrow \infty \),
In view of the orthogonality again, the above claim (4.22) and the results (4.20)–(4.21) imply that the first chaos of \(F_R(t)\) is dominant and
which gives us the desired limiting covariance structure. Moreover, we obtain immediately that the process \(\big \{R^{-d+\frac{\beta }{2} } F_R(t): t\in {\mathbb {R}}_+\big \}\) converges in finite-dimensional distributions to the centered Gaussian process \({\mathcal {G}}_\beta \), whose covariance structure is given by (1.19).
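When \(d=1\), the constant \(\kappa _{\beta ,d}\) admits the closed form \(\kappa _{\beta ,1}=2^{3-\beta }/((1-\beta )(2-\beta ))\), obtained by reducing to the difference variable \(u=x-x'\). A Monte Carlo sketch confirming this (the value \(\beta =0.3\) is a hypothetical choice for illustration):

```python
import numpy as np

# Monte Carlo check (d = 1, illustrative beta = 0.3) of
#   kappa_{beta,1} = \int_{B_1^2} |x - y|^{-beta} dx dy,   B_1 = [-1, 1],
# against the closed form 2^{3-beta} / ((1 - beta)(2 - beta)).
rng = np.random.default_rng(seed=0)
beta, n = 0.3, 2_000_000

x = rng.uniform(-1.0, 1.0, n)
y = rng.uniform(-1.0, 1.0, n)
mc = 4.0 * np.mean(np.abs(x - y) ** (-beta))  # 4 = Leb(B_1)^2 in d = 1

closed = 2.0 ** (3 - beta) / ((1 - beta) * (2 - beta))
assert abs(mc - closed) / closed < 0.01
```

The restriction \(\beta <2\wedge d\) guarantees that the integrand \(|x-y|^{-\beta }\) is integrable on \(B_1^2\), which is what makes both the constant and the Monte Carlo estimate finite.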
The rest of Sect. 4.1.2 is then devoted to proving (4.22). We point out that the strategy in Sect. 4.1.1 cannot be used directly here, because \(\varphi \) is not uniformly bounded.
Proof of Claim (4.22)
We begin by writing (with \(s_0 ={\tilde{s}}_{\sigma (0)} = t\) and \(B_R=\{ x: |x| \le R\}\))
where we recall the convention that \(G_t(z)=0\) for \(t\le 0\).
Then, recalling definition (4.19) of \(\ell _{R}(\xi )\), we can apply Lemma 2.5 with
to get \( \text {Var}\big ( {\mathbf {J}}_{p,R}(t) \big )\) bounded by
Making change of variables
we obtain
where in the last inequality we used \(| {\widehat{G}}_t | \le t\) and the following Fourier transform:
Note that the integral \(\int _{B_1^2}dxdx' |x-x'|^{-\beta } e^{-i (x-x')\cdot \eta _2 R}\) is uniformly bounded by \(\kappa _{\beta ,d}\) and converges to zero as \(R\rightarrow \infty \) for \(\eta _2\ne 0\), as a consequence of the Riemann-Lebesgue lemma. Taking into account the definition (4.16) of \(Q_{p-1}\), we then have
which is summable over \(p\ge 2\) by the arguments in the previous section. Hence by the dominated convergence theorem, we get
This proves the claim (4.22). \(\square \)
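The Riemann-Lebesgue step in the proof above can be illustrated numerically. A sketch for \(d=1\) (with the hypothetical values \(\beta =0.3\) and \(\eta _2=1\)): passing to the difference variable \(u=x-x'\), whose density on \([-2,2]\) is \(2-|u|\), the oscillatory factor drives the integral to zero as \(R\) grows.

```python
import numpy as np

# Riemann-Lebesgue illustration (d = 1, illustrative beta = 0.3):
#   F(R) = \int_{B_1^2} |x - x'|^{-beta} cos((x - x') R) dx dx'
#        = 2 \int_0^2 (2 - u) u^{-beta} cos(u R) du
# is maximal at R = 0 (where it equals kappa_{beta,1}) and tends to 0.
beta, n = 0.3, 2_000_000
h = 2.0 / n
u = (np.arange(n) + 0.5) * h  # midpoint rule copes with the integrable singularity

def F(R):
    return 2.0 * h * np.sum((2.0 - u) * u ** (-beta) * np.cos(u * R))

assert abs(F(0.0) - 2.0 ** (3 - beta) / ((1 - beta) * (2 - beta))) < 1e-3
assert abs(F(200.0)) < 0.05 * F(0.0)  # oscillation makes the integral small
```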
4.1.3 Proof of part (3) in Proposition 4.1
Recall the two cases from (4.3):
In \((c_1)\), the spectral density is \(\varphi (\xi _1, \xi _2) =c_{1,\beta _1}c_{1,\beta _2} | \xi _1|^{\beta _1-1} | \xi _2|^{\beta _2-1} \) for \((\xi _1,\xi _2)\in {\mathbb {R}}^2\), where \(c_{1,\beta }\) is a constant that only depends on \(\beta \). Now, using the notation from Sect. 4.1.2, we write
where the last equality is obtained by the change of variables \((x,x', \xi _1, \xi _2)\) to \((Rx,Rx', \xi _1/R, \xi _2/R)\). Thus, by exactly the same arguments that lead to (4.20), we can get
with \(K_{\beta _1, \beta _2} \) introduced in (1.22). Similar to (4.21), we also have
To obtain the result \((r_1)\), it remains to show
Its proof proceeds verbatim as that of (4.22), so we omit the details here.
Finally, let us look at the more interesting case \((c_2)\) where \(\gamma _1\in L^1({\mathbb {R}})\) and \(\gamma _2(x) =|x|^{-\beta }\) for some fixed \(\beta \in (0,1)\). In this case, the corresponding spectral density is \(\varphi (\xi _1,\xi _2) = \varphi _1(\xi _1) \varphi _2(\xi _2)\), where
Let us begin with (4.18) and make the usual change of variables \((x,x', \xi )\rightarrow (Rx,Rx', \xi /R)\) to obtain
Recall that \(\varphi _1 \), \( {\widehat{G}}_{t-s} \) and \( {\widehat{G}}_{t-s'} \) are uniformly bounded and continuous. Note that, applying Plancherel’s theorem and the Parseval-type relation (2.3), we have
Therefore, by the dominated convergence theorem and the fact that \(\varphi _1(0) = \frac{1}{2\pi }\gamma _1({\mathbb {R}})\), we get
where \({\mathcal {L}}_\beta \) is defined in (1.24). In the same way, we get for \(s,t\in (0,\infty )\),
Now we claim that the other chaoses are negligible, that is, as \(R\rightarrow \infty \),
Note that the desired limiting covariance structure follows from (4.27) and the above claim (4.28). The rest of this section is devoted to proving claim (4.28).
Proof of Claim (4.28)
By the same arguments that lead to the estimate (4.23), we can obtain
where \(\varphi _p(\pmb {\xi _p}) = \varphi (\xi _1)\cdots \varphi (\xi _p) \ell _R(\xi _1+ \cdots + \xi _p)\) for \(\xi _j = (\xi _{j}^{(1)}, \xi _{j}^{(2)})\in {\mathbb {R}}^2\), \(j=1,\dots ,p\) and \(\ell _R\) is defined in (4.19). Recall that in the current case, \(\varphi (\xi ) = \varphi _1(\xi ^{(1)}) \varphi _2( \xi ^{(2)} )\) for \(\xi =(\xi ^{(1)},\xi ^{(2)} )\in {\mathbb {R}}^2\) and \(\varphi _1, \varphi _2\) satisfy the conditions in (4.26). Then, the following change of variables
yields
In view of (4.19), we have \(\ell _R(\eta _1/R) = R^4 \ell _1(\eta _1)\). Thus, by changing \(\eta _1\) to \(\eta _1/R\), we write
where we used \( \vert {\widehat{G}}_{t - s_{1}}(\eta _{1}/R ) \vert ^2 \le t^2\). Observe that with \(\eta = (\eta ^{(1)}, \eta ^{(2)})\), we deduce from the fact \(\ell _1(\eta ) = \big \vert {\mathcal {F}}{\mathbf {1}}_{B_1}\big \vert ^2(\eta ^{(1)}, \eta ^{(2)})\) that
by inverting the Fourier transform. The above quantity is uniformly bounded by \(2\pi {\mathcal {L}}_\beta \) with \({\mathcal {L}}_\beta \) given in (1.24), and converges to zero as \(R\rightarrow \infty \) for every \(x\ne 0\) in view of the Riemann-Lebesgue lemma. Thus, \(R^{\beta -3} \text {Var}\big ( {\mathbf {J}}_{p,R}(t) \big )\) is uniformly bounded by \(2\pi {\mathcal {L}}_\beta \Gamma _t^p \Vert \varphi _1 \Vert _\infty t^2 Q_{p-1}\), with \(Q_{p-1}\) given by (4.16), and converges to zero as \(R\rightarrow \infty \). Since \(Q_{p} \le C^p/p!\), we have
and the dominated convergence theorem implies (4.28). \(\square \)
Remark 4.2
Under the assumptions of Proposition 4.1, we point out that \(\sigma _R(t) > 0\) for large enough R so that the renormalized random variable \(F_R(t)/ \sigma _R(t)\) is well-defined for large R.
4.2 Quantitative central limit theorems (QCLT) and f.d.d. convergence
In this section, we prove the quantitative CLTs stated in Theorem 1.4 and, as an easy consequence, we also show the convergence of finite-dimensional distributions in Theorem 1.4. We first consider part (1) and then treat parts (2) and (3).
4.2.1 Part (1)
We will first show the estimate
where \(Z \sim N(0,1)\). By Proposition 1.8 applied to \(\frac{1}{ \sigma _R(t)} F_R(t)\), we have
where
Recall from Sect. 4.1.1 that \(\sigma ^2_R(t) \sim R^d\). Therefore, in order to show (4.29) it suffices to prove the estimate
Using Minkowski’s inequality, we can write
Then, it follows from our fundamental estimates in Theorem 1.3 that
with
and, in the same way, we have
where the implicit constants in (4.32)–(4.33) do not depend on \((R, r,z,\theta ,w)\) and are increasing in t. Now, plugging (4.32)–(4.33) into the expression of \({\mathcal {A}}_R\), we get
The four terms \( {\mathcal {A}}_{R,1}, \dots , {\mathcal {A}}_{R,4}\) are defined according to whether \(r>\theta \) or \(r<\theta \), and whether \( s>\theta '\) or \(s<\theta '\). For example, the term \({\mathcal {A}}_{R,1}\) corresponds to \(r>\theta \) and \(s>\theta '\):
The term \({\mathcal {A}}_{R,2}\) corresponds to \(r>\theta \) and \(s<\theta '\), the term \({\mathcal {A}}_{R,3}\) corresponds to \(r<\theta \) and \(s>\theta '\) and the term \({\mathcal {A}}_{R,4}\) corresponds to \(r< \theta \) and \(s<\theta '\). In the following, we estimate \({\mathcal {A}}_{R,j}\) for \(j=1,2,3,4\) by a constant times \(R^{d}\), which yields (4.31).
To get the bound for \({\mathcal {A}}_{R,1}\), it suffices to perform the integration with respect to \(dx_1, dx_2, dx_4\), \(dy', dy, dw', dw\), \(dz, dz', dx_3\) one by one, by taking into account the following facts:
To get the bound for \({\mathcal {A}}_{R,2}\), it suffices to perform the integration with respect to \(dx_1, dx_3,dz', dz\), \(dx_2, dw, dw', dy, dy', dx_4\) one by one. To get the bound for \({\mathcal {A}}_{R,3}\), it suffices to perform the integration with respect to \(dx_4, dy', dx_2, dy, dw', dx_1, dw, dz, dz', dx_3\) one by one. To get the bound for \({\mathcal {A}}_{R,4}\), it suffices to perform the integration with respect to \(dx_1, dx_3, dx_2, dz', dz, dw, dw', dy, dy', dx_4\) one by one. This completes the proof of (4.29).
In the second part of this subsection, we show the f.d.d. convergence in Theorem 1.4-(1).
Fix an integer \(m\ge 1\) and choose \(t_1, \dots , t_m\in (0,\infty )\). Put \({\mathbf {F}}_R = \big ( F_R(t_1), \dots , F_R(t_m) \big )\). Then, by the result on limiting covariance structure from Sect. 4.1.1, we have that the covariance matrix of \(R^{-d/2}{\mathbf {F}}_R\), denoted by \({\mathcal {C}}_R\), converges to the matrix \({\mathcal {C}} = ({\mathcal {C}}_{ij}: 1\le i,j \le m)\), with
Since \(F_R(t)=\delta (-DL^{-1}F_R(t))\), according to [25, Theorem 6.1.2],Footnote 11 for any twice differentiable function \(h: {\mathbb {R}}^m \rightarrow {\mathbb {R}}\) with bounded second partial derivatives,
with \({\mathbf {Z}}_R\sim N\big (0, {\mathcal {C}}_R \big )\), \({\mathbf {Z}}\sim N\big (0, {\mathcal {C}} \big )\) and \(\Vert h'' \Vert _\infty = \sup \big \{ \big \vert \frac{\partial ^2}{\partial x_i \partial x_j} h(x) \big \vert : x\in {\mathbb {R}}^m, i,j=1, \dots , m\big \}\). It is clear that the second term in (4.35) tends to zero as \(R\rightarrow \infty \). For the variance term in (4.35), taking advantage of Proposition 1.9 applied to \(F=F_R(t_i)\) and \(G=F_R(t_j)\) and using arguments analogous to those employed to derive (4.31), we obtain
Thus, the first term in (4.35) is \(O(R^{-d/2})\), implying that \( {\mathbb {E}}\big [ h(R^{-d/2}{\mathbf {F}}_R) - h({\mathbf {Z}}) \big ]\) converges to zero as \(R\rightarrow \infty \). This shows the convergence of the finite-dimensional distributions of \(\{ R^{-d/2} F_R(t): t\in {\mathbb {R}}_+ \}\) to those of the centered Gaussian process \({\mathcal {G}}\), whose covariance structure is given by
This concludes the proof of part (1) in Theorem 1.4. \(\square \)
4.2.2 Proofs in parts (2) and (3)
In part (2), in view of the dominance of the first chaos, we have already obtained in Sect. 4.1.2 that the finite-dimensional distributions of the process \(\big \{R^{-d+\frac{\beta }{2} } F_R(t): t\in {\mathbb {R}}_+\big \}\) converge to those of a centered Gaussian process \(\{{\mathcal {G}}_\beta (t) \}_{ t\in {\mathbb {R}}_+}\), whose covariance structure is given by (1.19). For the same reason, the convergence of the finite-dimensional distributions in part (3) follows from (4.24), (4.25), (4.27) and (4.28).
In this section, we show that:
where \(Z\sim N(0,1)\). Taking into account (4.30) and the variance estimates in Sects. 4.1.2 and 4.1.3, in order to get (4.36) it suffices to show that, for \(j\in \{1,2,3,4\}\) and for \(R\ge t\),
Since the total-variation distance is always bounded by one, the bound (4.36) still holds for \(R<t\) by choosing the implicit constant large enough.
The rest of this section is then devoted to proving (4.37) for \(R\ge t\) and for \(j\in \{1,2,3,4\}\).
Proof of (4.37)
Let us first consider the term \({\mathcal {A}}_{R,1}\), which can be expressed as
with
From now on, when \(d=2\), we write \((w, w', y, y', z, z') =(w_1, w_2, w'_1, w'_2, y_1, y_2, y'_1, y'_2,z_1, z_2,z'_1, z'_2) \) and then \(dy = dy_1 dy_2\); note also that \(x_1,\dots , x_4\) denote the dummy variables in \({\mathbb {R}}^d\). By making the following change of variables
and using the scaling property \(G_{t}(Rz) = R^{1-d} G_{tR^{-1}}(z)\) for \(d\in \{1,2\}\), we get
Note that we have replaced the integral domain \({\mathbb {R}}^{6d}\) by \([-2,2]^{6d}\) in (4.39) without changing the value of \({\mathbf {S}}_{1,R}\), because, for example, \(x_1\in B_1\) and \(|x_1-z| \le (t-r)/R\) implies \(|z|\le 1 + tR^{-1}\le 2\) while \(|z-w| \le (r-\theta )/R\) and \(|x_1-z| \le (t-r)/R\) imply \(|w|\le (t-\theta )R^{-1}+1 \le 2\).
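For completeness, the scaling property \(G_{t}(Rz) = R^{1-d} G_{tR^{-1}}(z)\) used above can be read off from the classical closed forms of the wave fundamental solution in \(d=1,2\) (these formulas are standard and not restated in this section):

```latex
% d = 1:  G_t(x) = (1/2) \mathbf 1_{\{|x| < t\}}
G_t(Rz) \;=\; \tfrac12\,\mathbf 1_{\{R|z|<t\}}
        \;=\; \tfrac12\,\mathbf 1_{\{|z|<t/R\}}
        \;=\; R^{0}\,G_{tR^{-1}}(z).

% d = 2:  G_t(x) = (2\pi)^{-1} (t^2 - |x|^2)^{-1/2} \mathbf 1_{\{|x| < t\}}
G_t(Rz) \;=\; \frac{\mathbf 1_{\{R|z|<t\}}}{2\pi\sqrt{t^2 - R^2|z|^2}}
        \;=\; \frac1R\cdot\frac{\mathbf 1_{\{|z|<t/R\}}}{2\pi\sqrt{(t/R)^2 - |z|^2}}
        \;=\; R^{-1}\,G_{tR^{-1}}(z).
```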
In view of the expression of \(\gamma \) in part (2) and part (3), we write, for \(z\in {\mathbb {R}}^d\) (\(z=(z_1,z_2)\in {\mathbb {R}}^2\) when \(d=2\)),
and it is easy to see that
To ease the notation, we just rewrite the above estimates as
with \(\alpha = \beta \) in part (2), \(\alpha =\beta _1+\beta _2 \) in case \((a')\) of part (3), and \(\alpha =1+\beta \) in case \((b')\) of part (3).
To estimate \({\mathcal {A}}_{R,1}\), we can use (4.40) to perform integration with respect to \(dx_1, dx_2, dx_4\), \(dy', dy, dw', dw\), \(dz, dz', dx_3\) successively. More precisely, performing the integration with respect to \(dx_1, dx_2, dx_4\) and using the fact
gives us
by integrating out \(dw'\) and using (4.40); then, using (4.41) to integrate out dw
where the last inequality is obtained by integrating out \(dz, dz'\), \(dx_3\) one by one and using (4.40) and (4.41). The bound
is uniform over \((r,r',s,s' ,\theta ,\theta ')\in [0,t]^6\), and hence we obtain (4.37) for \(j=1\). For the other terms \({\mathcal {A}}_{R,2}, {\mathcal {A}}_{R,3}\) and \({\mathcal {A}}_{R,4}\), the arguments are the same: We first go through the same change of variables (4.38) to obtain terms \({\mathbf {S}}_{j, R}\) similar to \({\mathbf {S}}_{1, R}\) in (4.39), and then use the facts (4.40) and (4.41) to perform one-by-one integration with respect to the variables
This concludes the proof of (4.37) and hence completes the proof of (4.36). \(\square \)
4.3 Tightness
This section is devoted to establishing the tightness in Theorem 1.4. This, together with the results in Sects. 4.1 and 4.2, will conclude the proof of Theorem 1.4. To get the tightness, we appeal to the criterion of Kolmogorov-Chentsov (see e.g. [17, Corollary 16.9]). Put
and we will show, for any fixed \(T>0\), that the following inequality holds for any integer \(k\ge 2\) and any \(0< s < t \le T\le R\):
where the implicit constant does not depend on R, s or t. This moment estimate (4.43) ensures the tightness of \(\big \{ \sigma _R^{-1} F_R(t): t\in [0,T]\big \}\) for any fixed \(T>0\) and, therefore, the desired tightness on \({\mathbb {R}}_+\) holds.
To show the above moment estimate (4.43) for the increment \(F_R(t)- F_R(s)\), we begin with the chaos expansion
where s, t are fixed, so we leave them out of the subscript of the kernel \(g_{n,R}\) and
with \(\prod _{j=1}^0 =1\) and \(\varphi _{t,R}(r,y) := \int _{B_R} G_{t-r}(x-y) dx\). The rest of this section is then devoted to proving (4.43).
Proof of (4.43)
By the triangle inequality and using the moment estimate (2.15), we get, for any \(k\in [2,\infty )\),
Note that the kernel \(g_{n,R} =0\) outside \([0,t]^n \times {\mathbb {R}}^{dn}\). Then, using (2.8) and (2.13), we can write
where \( {\widetilde{g}}_{n,R}\) is the canonical symmetrization of \(g_{n,R}\):
With the convention (1.6) in mind, we can write
Then, using Fourier transform, we can rewrite \(n! \Vert {\widetilde{g}}_{n,R} \Vert _{{\mathcal {H}}_0^{\otimes n}}^2\) as follows:
Recall the expression (2.29) \({\widehat{G}}_t(\xi )= \frac{\sin (t|\xi |)}{|\xi |}\) and note that it is a 1-Lipschitz function in the variable t, uniformly over \(\xi \in {\mathbb {R}}^d\). Then
Therefore, plugging this inequality into (4.45) and then applying Lemma 2.6 yields
which is finite since \( {\mathbf {1}}_{B_R}\in {\mathcal {P}}_0 \). Using Fourier transform, we can write
Now let us consider the cases in (4.42).
In part (1) where \(\gamma \in L^1({\mathbb {R}}^d)\),
In the other cases, we can make the change of variables \((x,y)\rightarrow R(x,y)\) to obtain
using (4.40) with \(\alpha = \beta \) in part (2), \(\alpha =\beta _1+\beta _2 \) in case \((a')\), and \(\alpha =1+\beta \) in case \((b')\).
As a consequence, we get
and therefore,
which leads to (4.43). \(\square \)
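The 1-Lipschitz property of \(t\mapsto {\widehat{G}}_t(\xi )\) invoked at the start of this proof follows from the elementary bound \(|\sin a - \sin b| \le |a-b|\):

```latex
\big|\widehat G_t(\xi)-\widehat G_s(\xi)\big|
  \;=\; \frac{\big|\sin(t|\xi|)-\sin(s|\xi|)\big|}{|\xi|}
  \;\le\; \frac{|t-s|\,|\xi|}{|\xi|}
  \;=\; |t-s|,
\qquad \xi\in\mathbb R^d\setminus\{0\}.
% Taking s = 0 also recovers the uniform bound |\widehat G_t(\xi)| \le t.
```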
5 Proof of Theorem 1.10
We argue as in the proof of Theorem 1.2 of [2]. As we explained in the introduction, it suffices to show that for each \(m\ge 1\),
where \(\Omega _m =\{ |u(t,x) | \ge 1/m\}\).
We claim that, almost surely, the function \((s,y) \mapsto D_{s,y}u(t,x)\) satisfies the assumptions of Lemma A.1. Indeed, for \(d=2\), by Minkowski’s inequality and the estimate (1.11), we have
For \(d=1\), again by the estimate (1.11),
Moreover, \((s,y) \mapsto D_{s,y}u(t,x)\) has compact support on \([0,t]\times B_M\) for some \(M>0\). As a consequence, by Lemma A.1, it suffices to prove that
As in the proof of Lemma 5.1 of [2], Corollaries 3.3 and 3.4 allow us to infer that the \({\mathcal {H}}\otimes {\mathcal {P}}_0\)-valued process \(K^{(r)}\) defined by
belongs to the space \({\mathbb {D}}^{1,2}({\mathcal {H}}\otimes {\mathcal {P}}_0)\). This is because, using Corollary 3.3, we can write
and in the same way, using Corollary 3.4 we can show that \({\mathbb {E}}\big ( \Vert DK^{(r)}\Vert _{ {\mathcal {H}}\otimes {\mathcal {H}}\otimes {\mathcal {P}}_0}^2 \big ) <\infty \). Therefore, the process \(K^{(r)}\) belongs to the domain of the \({\mathcal {P}}_0\)-valued Skorokhod integral, denoted by \(\overline{\delta }\). Then, using the same arguments as in the proof of Proposition 5.2 of [2], replacing \(L^2({\mathbb {R}})\) by \({\mathcal {P}}_0\), we can show that for any \(r \in [0,t]\), the following equation holds in \(L^2(\Omega ;{\mathcal {P}}_0)\):
Let \(\delta \in (0,t \wedge 1)\) be arbitrary. Due to relation (5.2) we have, almost surely,
where
On the event \(\Omega _m=\{ | u(t,x) | \ge 1/m\}\), we have
where
and
Coming back to (5.3), we can write
We now give upper bounds for the first moments of \(J(\delta )\) and \(I(\delta )\). We will use the following facts, which were proved in [3]:
We first treat \(J(\delta )\). By the Cauchy-Schwarz inequality, for any \(r\in [0,t]\) and \(z,z' \in {\mathbb {R}}^2\),
Since \(G_{t-r}(x-z)\) contains the indicator of the set \(\{|x-z|<t-r\}\), we obtain:
It follows that
Next, we treat \(I(\delta )\). Applying Proposition 6.2 of [1] to the \({\mathcal {P}}_0\)-valued process
we obtain
We have,
and
Hence, \({\mathbb {E}}(I(\delta )) \le I_1(\delta )+I_2(\delta )\), where
and
Using the Cauchy-Schwarz inequality and Corollaries 3.3 and 3.4, we obtain:
Hence,
where
Using (5.4), (5.5) and (5.6), we conclude the proof as follows. For any \(n\ge 1\),
Letting \(n\rightarrow \infty \), we obtain:
Note that using Fourier transform and the expression (2.29), we can rewrite (5.7) as
where \(\Gamma _{\delta }=2\int _0^{\delta }\gamma _0(s)ds\). That is, we have \(\phi (\delta ) \le \Gamma _{\delta } \psi _0(\delta )\). Finally taking \(\delta \rightarrow 0\) proves (5.1), since \(g_{t,x}(\delta )\rightarrow 0\) and \(\delta \frac{\phi (\delta )}{\psi _0(\delta )} \le \delta \Gamma _\delta \rightarrow 0\) as \(\delta \rightarrow 0\). \(\square \)
Notes
The spectral measure \(\mu \) of \(\gamma \) is a tempered measure on \({\mathbb {R}}^d\) such that \(\gamma ={\mathcal {F}}\mu \), that is, \(\gamma \) is the Fourier transform of \(\mu \), and its existence is guaranteed by the Bochner-Schwarz theorem.
In higher dimensions \((d\ge 3)\), the fundamental solution of the wave equation is a uniform measure supported on certain surfaces, so the Malliavin derivative Du(t, x) is expected to be merely a random measure instead of a random function. In this case, the expression \(D_{s,y}u(t,x)\) does not make sense; see also the recent article [34] for related discussions.
\(p_t(x) =(2\pi t)^{-d/2} e^{-|x|^2/(2t)} \) for \(t>0\) and \(x\in {\mathbb {R}}^d\); in (1.13), \(d=1\).
The use of the second-order Gaussian Poincaré inequality for obtaining CLTs on a Gaussian space is one of the central techniques in the Malliavin-Stein approach; for example, in the recent paper [13], Dunlap et al. have used this Poincaré inequality to investigate the Gaussian fluctuation of the KPZ equation in dimensions three and higher. We remark here that we cannot directly apply this inequality because of the complicated correlation structure of the underlying Gaussian homogeneous noise, whereas the underlying Gaussian noise in [13] is white in time and smooth in space, so that the authors can directly apply the version from [26]. In this article, we have established a rather involved variant of the second-order Poincaré inequality, which is tailor-made for our applications.
Note that there is a typo in equation (5.3.2) of [25]: We have \(E[\Vert DF\Vert _{{\mathcal {H}}}^4]^{1/4}\) instead of \(E[\Vert D^2F\Vert _{{\mathcal {H}}}^4]^{1/4}\).
For the sake of completeness, we sketch a proof of (2.13) here: Given such a function \(f \in {\mathcal {H}}_0^{\otimes n}\),
$$\begin{aligned} \Vert f\Vert _{{\mathcal {H}}^{\otimes n}}^2&= \int _{[0,t]^{2n}} d\pmb {s_n} d\pmb {t_n} \big \langle f(\pmb {s_n}, \bullet ), f(\pmb {t_n}, \bullet ) \big \rangle _{{\mathcal {P}}_0^{\otimes n}} \prod _{j=1}^n \gamma _0(s_j-t_j) \\&\le \int _{[0,t]^{2n}} d\pmb {s_n} d\pmb {t_n} \frac{1}{2} \Big ( \big \Vert f(\pmb {s_n}, \bullet )\big \Vert _{{\mathcal {P}}_0^{\otimes n} }^2 + \big \Vert f(\pmb {t_n}, \bullet ) \big \Vert _{{\mathcal {P}}_0^{\otimes n} } ^2 \Big )\prod _{j=1}^n \gamma _0(s_j-t_j) \le \Gamma _t^n \Vert f\Vert _{{\mathcal {H}}_0^{\otimes n}}^2. \end{aligned}$$

This means \(f = 0\) outside \((J \times {\mathbb {R}}^{d})^p\) and \(g= 0\) outside \((J^c\times {\mathbb {R}}^{d})^q\) for some set \(J\subset {\mathbb {R}}_+\). We will apply this formula to functions \(f=f_{t,x,j}^{(j)}(r,z;\bullet )\) and \(g=f_{r,z,n-j}\) given in Sect. 3.1, in which case \(J=(r,t)\).
We can apply this lemma to the function \(y\in {\mathbb {R}}^2\mapsto G_{t-s}(x-y)\) whose support is contained in \(\{y \in {\mathbb {R}}^2; |x-y|<t-s\}\), so we can choose \(\Lambda = 2t-2s\).
The function \(\pmb {x_{j-1}} \rightarrow f_{t,x,j}^{(j)}(\pmb {t_{j-1}},\pmb {x_{j-1}})=G_{t-t_1}(x-x_1)G_{t_1-t_2}(x_1-x_2)\ldots G_{t_{j-1}-r}(x_{j-1}-z)\) has support contained in \(\{\pmb {x_{j-1}}\in {\mathbb {R}}^{2(j-1)};|x_i-x|<t-t_i, \ \text{ for } \text{ all } \ i=1,\ldots ,j-1\}\).
References
Balan, R.M.: The stochastic wave equation with multiplicative fractional noise: a Malliavin calculus approach. Potential Anal. 36, 1–34 (2012)
Balan, R.M., Quer-Sardanyons, L., Song, J.: Existence of density for the stochastic wave equation with space-time homogeneous Gaussian noise. Electron. J. Probab. 24(106), 1–43 (2019)
Balan, R.M., Song, J.: Hyperbolic Anderson Model with space-time homogeneous Gaussian noise. ALEA Lat. Am. J. Probab. Math. Stat. 14, 799–849 (2017)
Bolaños-Guerrero, R., Nualart, D., Zheng, G.: Averaging 2D stochastic wave equation. Electron. J. Probab. 26(102), 1–32 (2021)
Bouleau N., Hirsch, F.: Propriété d’absolue continuité dans les espaces de Dirichlet et applications aux équations différentielles stochastiques. Séminaire de Probabilités XX: 12, 131-161, LNM 1204 (1986)
Breuer, P., Major, P.: Central limit theorems for non-linear functionals of Gaussian fields. J. Multivar. Anal. 13, 425–441 (1983)
Carmona, R., Nualart, D.: Random non-linear wave equations: smoothness of the solutions. Probab. Theory Related Fields 79, 469–508 (1988)
Chatterjee, S.: Fluctuations of eigenvalues and second order Poincaré inequalities. Probab. Theory Related Fields 143, 1–40 (2009)
Chen L., Khoshnevisan D., Nualart, D., Pu, F.: Poincaré inequality, and central limit theorems for parabolic stochastic partial differential equations. To appear in: Ann. Inst. Henri Poincaré Probab. Stat. (2021). arXiv:1912.01482
Chen L., Khoshnevisan D., Nualart, D., Pu, F.: Central limit theorems for spatial averages of the stochastic heat equation via Malliavin-Stein's method. To appear in: Stoch. Partial Differ. Equ. Anal. Comput. (2020). arXiv:2008.02408
Dalang, R.C.: Extending the martingale measure stochastic integral with applications to spatially homogeneous s.p.d.e.'s. Electron. J. Probab. 4(6), 1–29 (1999)
Delgado-Vences, F., Nualart, D., Zheng, G.: A central limit theorem for the stochastic wave equation with fractional noise. Ann. Inst. Henri Poincaré Probab. Stat. 56(4), 3020–3042 (2020)
Dunlap, A., Gu, Y., Ryzhik, L., Zeitouni, O.: Fluctuations of the solutions to the KPZ equation in dimensions three and higher. Probab. Theory Related Fields 176 (2020)
Houdré, C., Pérez-Abreu, V.: Covariance identities and inequalities for functionals on Wiener and Poisson spaces. Ann. Probab. 23, 400–419 (1995)
Huang, J., Nualart, D., Viitasaari, L.: A central limit theorem for the stochastic heat equation. Stoch. Process. Appl. 130(12), 7170–7184 (2020)
Huang, J., Nualart, D., Viitasaari, L., Zheng, G.: Gaussian fluctuations for the stochastic heat equation with colored noise. Stoch. Partial Differ. Equ. Anal. Comput. 8, 402–421 (2020)
Kallenberg, O.: Foundations of Modern Probability, Probability and Its Applications, 2nd edn. Springer, New York (2002)
Karczewska, A., Zabczyk, J.: Stochastic PDE's with function-valued solutions. In: Infinite-dimensional stochastic analysis (Clément Ph., den Hollander F., van Neerven J. & de Pagter B., eds), pp. 197–216, Proceedings of the Colloquium of the Royal Netherlands Academy of Arts and Sciences, Amsterdam (1999)
Khoshnevisan, D., Nualart, D., Pu, F.: Spatial stationarity, ergodicity and CLT for parabolic Anderson model with delta initial condition in dimension \(d\ge 1\). SIAM J. Math. Anal. 53(2), 2084–2133 (2021)
Kim, K., Yi, J.: Limit theorems for time-dependent averages of nonlinear stochastic heat equations. To appear in: Bernoulli (2021+). arXiv:2009.09658
Malliavin, P.: Stochastic calculus of variations and hypoelliptic operators. Proceedings of the International Symposium on Stochastic Differential Equations (Res. Inst. Math. Sci., Kyoto Univ., Kyoto, 1976). New York: Wiley. pp. 195–263 (1978)
Márquez-Carreras, D., Mellouk, M., Sarrà, M.: On stochastic partial differential equations with spatially correlated noise: smoothness of the law. Stoch. Process. Appl. 93, 269–284 (2001)
Millet, A., Sanz-Solé, M.: A stochastic wave equation in two space dimensions: smoothness of the law. Ann. Probab. 27, 803–844 (1999)
Nourdin, I., Peccati, G.: Stein’s method on Wiener chaos. Probab. Theory Related Fields 145(1), 75–118 (2009)
Nourdin, I., Peccati, G.: Normal approximations with Malliavin calculus: from Stein’s method to universality. Cambridge Tracts in Mathematics 192, Cambridge University Press (2012)
Nourdin, I., Peccati, G., Reinert, G.: Second order Poincaré inequalities and CLTs on Wiener space. J. Funct. Anal. 257, 593–609 (2009)
Nualart, D.: The Malliavin Calculus and Related Topics, Probability and Its Applications, 2nd edn. Springer, Berlin (2006)
Nualart, D., Ortiz-Latorre, S.: Central limit theorems for multiple stochastic integrals and Malliavin calculus. Stoch. Process. Appl. 118(4), 614–628 (2008)
Nualart, D., Pardoux, É.: Stochastic calculus with anticipating integrands. Probab. Theory Related Fields 78, 535–581 (1988)
Nualart, D., Peccati, G.: Central limit theorems for sequences of multiple stochastic integrals. Ann. Probab. 33(1), 177–193 (2005)
Nualart, D., Quer-Sardanyons, L.: Existence and smoothness of the density for spatially homogeneous SPDEs. Potential Anal. 27, 281–299 (2007)
Nualart, D., Song, X.M., Zheng, G.: Spatial averages for the Parabolic Anderson model driven by rough noise. ALEA Lat. Am. J. Probab. Math. Stat. 18, 907–943 (2021)
Nualart, D., Zheng, G.: Averaging Gaussian functionals. Electron. J. Probab. 25(48), 1–54 (2020)
Nualart, D., Xia, P., Zheng, G.: Quantitative central limit theorems for the parabolic Anderson model driven by colored noises. (2021) arXiv:2109.03875
Nualart, D., Zheng, G.: Central limit theorems for stochastic wave equations in dimensions one and two. To appear in Stoch. Partial Differ. Equ. Anal. Comput. (2021)
Nualart, D., Zheng, G.: Spatial ergodicity of stochastic wave equations in dimensions 1, 2 and 3. Electron. Commun. Probab. 25(80), 1–11 (2020)
Peccati, G., Tudor, C.A.: Gaussian limits for vector-valued multiple stochastic integrals. Séminaire de Probabilités XXXVIII, 247–262 (2005)
Pu, F.: Gaussian fluctuation for spatial average of parabolic Anderson model with Neumann/Dirichlet/periodic boundary conditions. Trans. Amer. Math. Soc. (2021). https://doi.org/10.1090/tran/8565
Quer-Sardanyons, L., Sanz-Solé, M.: Absolute continuity of the law of the solution to the 3-dimensional stochastic wave equation. J. Funct. Anal. 206(1), 1–32 (2004)
Quer-Sardanyons, L., Sanz-Solé, M.: A stochastic wave equation in dimension 3: Smoothness of the law. Bernoulli 10(1), 165–186 (2004)
Sanz-Solé, M., Süss, A.: The stochastic wave equation in high dimensions: Malliavin differentiability and absolute continuity. Electron. J. Probab. 18(64), 1–28 (2013)
Vidotto, A.: An improved second-order Poincaré inequality for functionals of Gaussian fields. J. Theoret. Probab. 33, 396–427 (2020)
Walsh, J.B.: An Introduction to Stochastic Partial Differential Equations. In: École d’été de probabilités de Saint-Flour, XIV—1984, 265–439. Lecture Notes in Math. 1180, Springer, Berlin (1986)
Zheng, G.: Recent developments around the Malliavin-Stein approach—fourth moment phenomena via exchangeable pairs. Ph.D. thesis, Université du Luxembourg (2018). Available at http://hdl.handle.net/10993/35536
Acknowledgements
The authors would like to thank Wangjun Yuan for carefully proofreading the manuscript and providing a list of typos.
Additional information
This article is dedicated to István Gyöngy on the occasion of his 70th birthday.
Research supported by a grant from the Natural Sciences and Engineering Research Council of Canada.
Supported by NSF Grant DMS 1811181.
Supported by the Grant PGC2018-097848-B-I00 (Ministerio de Economía y Competitividad).
Appendix
1.1 Auxiliary Results
Let \(d=2\) and assume Hypothesis \(\mathbf{(H1)}\). Suppose that \(S: {\mathbb {R}}_+\times {\mathbb {R}}^2 \rightarrow {\mathbb {R}}\) is a measurable function such that \(S\in L^{2}( {\mathbb {R}}_+; L^{2q} ({\mathbb {R}}^2))\), where q is given in (2.20) in cases (a) and (b) and it is given in (2.23) in case (c). We assume also that S has support in \([0,T]\times B_M\) for some \(M>0\). We claim that S belongs to \({\mathcal {H}}\) and the following estimates hold true:
Indeed, the first inequality is due to (2.13) and the second one follows from (2.25).
For \(d=1\), if \(S\in L^{2}( {\mathbb {R}}_+ \times {\mathbb {R}})\) has support in \([0,T]\times B_M\) for some \(M>0\), then \(S\in {\mathcal {H}}\) and the following estimates hold true:
Indeed, the first inequality is due to (2.13) and the second one follows from
and
Let us recall the Hypothesis \(\mathbf{(H2)}\): The measures \(\mu _0\) and \(\mu \) such that \(\gamma _0 ={\mathcal {F}}\mu _0\) and \(\gamma = {\mathcal {F}} \mu \) are absolutely continuous with respect to the Lebesgue measures with strictly positive densities.
Lemma A.1
Fix \(d\in \{1,2\}\) and assume that Hypothesis \(\mathbf{(H2)}\) holds; if \(d=2\), assume in addition that Hypothesis \(\mathbf{(H1)}\) holds. Suppose that the function \(S: {\mathbb {R}}_{+} \times {\mathbb {R}}^d\rightarrow {\mathbb {R}}\) has support in \([0,T]\times B_M\) for some \(M>0\) and \(S\in L^{2}\big ( {\mathbb {R}}_+; L^{2q}({\mathbb {R}}^d) \big )\), where
If
then \(\Vert S\Vert _{{\mathcal {H}}}>0\).
Proof
Suppose, by way of contradiction, that \(\Vert S\Vert _{{\mathcal {H}}}=0\). There exists a sequence of smooth functions \((\psi _k)_{k\ge 1}\) in \(C^\infty ( {\mathbb {R}}_+\times {\mathbb {R}}^d)\), with support in \([0,T]\times B_M\), which converges to S in \(L^2({\mathbb {R}}_+; L^{2q}({\mathbb {R}}^d))\). Then,
where \(\gamma _0 ={\mathcal {F}}\mu _0\), \(\gamma = {\mathcal {F}} \mu \) and \({\mathcal {F}} \psi _k\) stands for the Fourier transform of \(\psi _k\) in space-time variables in this proof. By choosing a subsequence \((k_j)_{j\ge 1}\) we have that
for \(\mu _0 \otimes \mu \)-almost all \((\tau , \xi )\). On the other hand, keeping in mind that the supports of \(S\) and \(\psi _k\) are contained in \([0,T]\times B_M\), we have
from which we deduce that \((\psi _k)_{k\ge 1}\) converges to S in \(L^1([0,T] \times B_M)\). Thus \({\mathcal {F}} \psi _k(\tau , \xi )\) converges to \({\mathcal {F}} S(\tau ,\xi )\) for all \((\tau ,\xi )\), and the convergence is uniform. As a consequence, \({\mathcal {F}}S(\tau ,\xi )=0\) for \(\mu _0 \otimes \mu \)-almost all \((\tau ,\xi ) \in {\mathbb {R}}_+ \times {\mathbb {R}}^d\) and, by Hypothesis \(\mathbf{(H2)}\), we obtain \({\mathcal {F}}S(\tau ,\xi )=0\) for almost all \((\tau ,\xi ) \in {\mathbb {R}}_+ \times {\mathbb {R}}^d\) with respect to the Lebesgue measure.
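The uniform convergence asserted in the last step admits a one-line justification. With the space-time Fourier transform normalized, say, as \({\mathcal {F}}g(\tau ,\xi )=\int e^{-\mathrm {i}(\tau t+\xi \cdot x)}g(t,x)\,dx\,dt\) (any standard normalization gives the same bound), the complex exponential has modulus one, so

```latex
\[
\big | {\mathcal {F}}\psi _k(\tau ,\xi ) - {\mathcal {F}}S(\tau ,\xi ) \big |
\le \int _0^T \int _{B_M} \big | \psi _k(t,x) - S(t,x) \big | \, dx \, dt
= \Vert \psi _k - S\Vert _{L^1([0,T]\times B_M)},
\]
```

and the right-hand side is independent of \((\tau ,\xi )\), which yields the uniform convergence of \({\mathcal {F}}\psi _k\) to \({\mathcal {F}}S\).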
Hence \(S(t,x)=0\) for almost all \(t>0\) and \(x \in {\mathbb {R}}^d\), i.e. there exists a Borel set \(N \subset {\mathbb {R}}_{+} \times {\mathbb {R}}^d\) with \(\lambda _{d+1}(N)=0\) such that \(S(t,x)=0\) for all \((t,x) \not \in N\). Here \(\lambda _{k}\) denotes the Lebesgue measure on \({\mathbb {R}}^{k}\). Therefore,
where \(A:=\{(t,x,y) \in {\mathbb {R}}_{+} \times {\mathbb {R}}^d \times {\mathbb {R}}^d; (t,x) \in N,(t,y) \in N\}\).
Let \(N_t=\{x \in {\mathbb {R}}^d; (t,x) \in N\}\) be the section of the set N at point \(t>0\). By Fubini’s theorem, \(\lambda _{d+1}(N)=\int _{0}^{\infty }\lambda _d(N_t)dt\). Since \(\lambda _{d+1}(N)=0\), we infer that \(\lambda _d(N_t)=0\) for almost all \(t>0\). Note that the section of the set A at point t is \(A_t=\{(x,y) \in {\mathbb {R}}^d \times {\mathbb {R}}^d; (t,x,y) \in A\}=N_t \times N_t\), and its Lebesgue measure is \(\lambda _{2d}(A_t)=\lambda _{d}^2(N_t)=0\) for almost all \(t>0\). By applying Fubini’s theorem again, we infer that \(\lambda _{2d+1}(A)=\int _0^{\infty }\lambda _{2d}(A_t)dt=0\). This shows \(I=0\), which contradicts (A.1). \(\square \)
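In display form, the chain of identities concluding the proof reads:

```latex
\[
\lambda _{2d+1}(A)
= \int _0^{\infty } \lambda _{2d}(A_t)\, dt
= \int _0^{\infty } \lambda _{2d}(N_t \times N_t)\, dt
= \int _0^{\infty } \lambda _{d}(N_t)^2\, dt
= 0.
\]
```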
1.2 Proof of Proposition 1.9
In this section, we only sketch the proof of Proposition 1.9 as the main body of the proof is almost identical to that in [42, Proposition 3.2].
Proof of (1.27)
Using the duality relation (2.5) and the identity \(L = -\delta D\), we have
which shows the equality in (1.27). Then, applying the Gaussian Poincaré inequality (2.12) and using Lemma 3.2 of [26], we can bound the variance appearing in the left-hand side of (1.27) by
We will show that the first expectation term is bounded by \(A_1\); the second one can be estimated in the same way and is bounded by \(A_2\). Using the representation (see e.g. [25, Proposition 2.9.3])
with \(\{P_t, t\ge 0\}\) the Ornstein-Uhlenbeck semigroup, we can write
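The representation in question is standard; for a centered functional \(G\) it takes the form

```latex
\[
-\, D L^{-1} G = \int _0^{\infty } e^{-t}\, P_t\, D G \, dt,
\]
```

which follows from \(L^{-1}G = -\int _0^{\infty } P_t G \, dt\) combined with the commutation relation \(D P_t = e^{-t} P_t D\).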
Note that if \(({\mathcal {M}}, {\mathfrak {M}}, \nu )\) is a probability space and \(s\in {\mathcal {M}}\longmapsto V_s\in |{\mathcal {H}}|\) is \({\mathfrak {M}}\)-measurable with \(\int _{{\mathcal {M}}} \big \Vert |V_s|\big \Vert _{{\mathcal {H}}}^2 \nu (ds)<\infty \), then by Fubini’s theorem and the Cauchy-Schwarz inequality,
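A sketch of the asserted inequality: expanding the squared \({\mathcal {H}}\)-norm as a double integral via Fubini's theorem and applying the Cauchy-Schwarz inequality twice (the last step uses that \(\nu \) is a probability measure) gives

```latex
\[
\Big \Vert \int _{{\mathcal {M}}} V_s \, \nu (ds) \Big \Vert _{{\mathcal {H}}}^2
= \int _{{\mathcal {M}}} \int _{{\mathcal {M}}}
    \langle V_s, V_u \rangle _{{\mathcal {H}}} \, \nu (ds)\, \nu (du)
\le \Big ( \int _{{\mathcal {M}}} \big \Vert |V_s| \big \Vert _{{\mathcal {H}}}
    \, \nu (ds) \Big )^2
\le \int _{{\mathcal {M}}} \big \Vert |V_s| \big \Vert _{{\mathcal {H}}}^2
    \, \nu (ds).
\]
```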
Applying the above inequality with \(({\mathcal {M}}, \nu ) = ({\mathbb {R}}_+, e^{-t}dt)\), we deduce from (6.2) that
Observe that \( \langle D^2F, P_t DG \rangle _{\mathcal {H}}\) is nothing but the one-contraction \(D^2F\otimes _1 P_tDG\), so that
where the last equality follows from the definition of contractions. Therefore, we have
and thus we conclude the estimation of \( {\mathbb {E}}[ \Vert \langle D^2F, - DL^{-1}G \rangle _{\mathcal {H}} \Vert _{\mathcal {H}}^2 ]\) by using Hölder's inequality and the contraction property of \(P_t\) on \(L^4(\Omega )\), that is, \(\Vert P_t (D_{r',z'}G) \Vert _4 \le \Vert D_{r',z'}G \Vert _4\).
To estimate the other expectation term \( {\mathbb {E}}[ \Vert \langle DF, - D^2L^{-1}G \rangle _{\mathcal {H}} \Vert _{\mathcal {H}}^2 ]\), one can begin with
and then follow the same arguments.
\(\square \)
Balan, R.M., Nualart, D., Quer-Sardanyons, L. et al. The hyperbolic Anderson model: moment estimates of the Malliavin derivatives and applications. Stoch PDE: Anal Comp 10, 757–827 (2022). https://doi.org/10.1007/s40072-021-00227-5
Keywords
- Hyperbolic Anderson model
- Wiener chaos expansion
- Malliavin calculus
- Second-order Poincaré inequality
- Quantitative central limit theorem
- Riesz kernel
- Dalang’s condition