1 Introduction

We consider the following stochastic heat equation

$$\begin{aligned} \left[ \begin{aligned}&\partial _t u(t,x) = \tfrac{1}{2} \partial ^2_x u(t,x) + b(u(t,x)) + \sigma (u(t,x)) {\dot{W}}(t,x)&\hbox { for }\ (t,x)\in (0,\infty )\times {\mathbb {R}},\\&\hbox { subject to }u(0,x) = u_0(x)&\text { for all }\ x\in {\mathbb {R}}. \end{aligned}\right. \end{aligned}$$
(1.1)

The noise term is space-time white noise; that is, \({\dot{W}}\) is a centered, generalized Gaussian random field with

$$\begin{aligned} {{\,\mathrm{\textrm{Cov}}\,}}[ {\dot{W}}(t,x) , {\dot{W}}(s,y) ] = \delta _0(t-s) \delta _0(x-y) \quad \hbox { for all } t,s\ge 0 \hbox { and } x,y\in {\mathbb {R}}. \end{aligned}$$

Throughout, we assume that \(u_0\), \(\sigma \) and b satisfy the following hypotheses:

Assumption 1.1

The initial profile \(u_0\) is a non-random bounded function.

Assumption 1.2

\(\sigma :{\mathbb {R}}\rightarrow (0, \infty )\) is Lipschitz continuous, and satisfies \(0<\inf _{\mathbb {R}}\sigma \le \sup _{\mathbb {R}}\sigma <\infty .\)

Assumption 1.3

\(b:{\mathbb {R}}\rightarrow (0,\,\infty )\) is locally Lipschitz continuous, as well as nondecreasing.

We recall that a random field solution to (1.1) is a predictable random field \(u=\{u(t,x)\}_{t \ge 0, x \in {\mathbb {R}}}\) that satisfies the following integral equation:

$$\begin{aligned} u(t,x) = (G_t*u_0)(x) + \int _{(0,t)\times {\mathbb {R}}} G_{t-s}(y-x) b(u(s,y))\,\textrm{d}s\,\textrm{d}y + {\mathcal {I}}(t,x), \end{aligned}$$
(1.2)

where

$$\begin{aligned} {\mathcal {I}}(t,x) = \int _{(0,t)\times {\mathbb {R}}} G_{t-s}(y-x) \sigma (u(s,y)) \, W(\textrm{d}s\,\textrm{d}y), \end{aligned}$$

the symbol \(*\) denotes convolution, and

$$\begin{aligned} G_r(z) = \frac{\exp \{-z^2/(2r)\}}{\sqrt{2\pi r}}\qquad \hbox { for all } r>0 \hbox { and } z\in {\mathbb {R}}. \end{aligned}$$

When b and \(\sigma \) are Lipschitz continuous, general theory ensures that the SPDE (1.2) is well posed; see Dalang [5] and Walsh [20]. However, general theory fails to be applicable when b and/or \(\sigma \) are assumed to be only locally Lipschitz continuous. Here, we can exploit the fact that b is nondecreasing in order to ensure the existence of a “minimal solution” u under Assumptions 1.2 and 1.3. The beginning of §4 contains the details of the construction of the minimal solution. But we can summarize that effort succinctly as follows: Consider (1.1) with b replaced by \(b\wedge n\) and denote the solution by \(u_n\). Because \(b\wedge n\uparrow b\) and \(b\wedge n\) is globally Lipschitz continuous, \(u_n\) is a classical solution and can be shown to increase pointwise to a random field u. Moreover, u is a mild solution to (1.1) whenever the latter makes sense; see §4. The random field u is the minimal solution in the sense that any solution theory that agrees with the general theory when b is Lipschitz continuous and has a comparison theorem must yield a solution v that satisfies \(v\ge u\). We can now turn to the main objective of this paper, which is to prove that, under Assumptions 1.2 and 1.3, the classical Osgood condition (1.3) of ODEs ensures that u, and hence v, blows up everywhere and instantaneously.

There is a large and distinguished literature in PDEs that focuses on these types of questions; see for example Cabré and Martel [2], Peral and Vázquez [17], and Vázquez [19]. To the best of our knowledge, the present paper contains the first instantaneous blowup result for SPDEs of the type given by (1.1). For PDEs, various definitions of instantaneous blowup are in use, but they all essentially mean that the solution blows up for every \(t>0\). We provide a different definition that is particularly well suited for our purposes.

Definition 1.4

Let \(u=\{u(t,x)\}_{t \ge 0, x \in {\mathbb {R}}}\) denote a space-time random field with values in \([-\infty ,\infty ]\). We say that u blows up everywhere and instantaneously when

$$\begin{aligned} \textrm{P}\left\{ u(t,x)=\infty \hbox { for every } t>0 \hbox { and } x\in {\mathbb {R}}\right\} =1. \end{aligned}$$

Our notion of instantaneous, everywhere blowup is sometimes referred to as instantaneous and complete blowup.

We are not aware of any prior results on instantaneous or everywhere blowup in the SPDE literature. However, broader questions of blowup for SPDEs have received recent attention; see for example [6, 9,10,11,12], where criteria for blowup in finite time, with positive probability or almost surely, are studied. De Bouard and Debussche [8] investigate blowup in \(H^1({\mathbb {R}}^d)\) for the stochastic nonlinear Schrödinger equation, valid in arbitrarily small time and with positive probability; see also the references in [8].

In order to state our result precisely, we need the well-known Osgood condition from the classical theory of ODEs.

Condition 1.5

A function \(b:{\mathbb {R}}\rightarrow (0,\infty )\) is said to satisfy the Osgood condition if

$$\begin{aligned} \int _1^\infty \frac{\textrm{d}y}{b(y)}<\infty , \end{aligned}$$
(1.3)

where \(1/0=\infty \).
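For instance, \(b(y)=1+y^2\) satisfies Condition 1.5, whereas \(b(y)=1+|y|\) does not:

$$\begin{aligned} \int _1^\infty \frac{\textrm{d}y}{1+y^2}=\frac{\pi }{4}<\infty \qquad \hbox { whereas }\qquad \int _1^\infty \frac{\textrm{d}y}{1+y}=\infty . \end{aligned}$$

The relevance of (1.3) comes from the classical theory of ODEs: separation of variables shows that the solution to \(v'(t)=b(v(t))\) satisfies \(t=\int _{v(0)}^{v(t)}\textrm{d}y/b(y)\), and hence v reaches \(\infty \) at the finite time \(\int _{v(0)}^\infty \textrm{d}y/b(y)\) precisely when the Osgood condition holds.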

It was proved in Foondun and Nualart [11] that, when \(\sigma \) is a positive constant, the Osgood condition implies that the solution to (1.1) blows up almost surely. This fact was proved earlier by Bonder and Groisman [10] for SPDEs on a finite interval. In the reverse direction, and for the same equations on finite intervals, Foondun and Nualart [11] have shown that if \(\sigma \) is locally Lipschitz continuous and bounded, then the Osgood condition is necessary for the solution to blow up somewhere with positive probability.

Recall Assumptions 1.2 and 1.3. The aim of the present paper is to show that the Osgood condition in fact implies that, almost surely, the solution to Eq. (1.1) blows up everywhere and instantaneously.

Theorem 1.6

If b satisfies the Osgood Condition 1.5, then the minimal solution to (1.1) blows up everywhere and instantaneously almost surely.

A few years ago, Professor Alison Etheridge asked one of us a number of questions about the time to blowup and the nature of blowup for stochastic reaction–diffusion equations of the general type studied here. This paper provides an answer to Professor Etheridge’s questions in the case that \(\sigma \) and b satisfy Assumptions 1.2 and 1.3.

Remark 1.7

(On Assumption 1.2) Whereas Assumption 1.2 is likely not necessary for instantaneous everywhere blowup, something like this assumption is clearly needed. In fact, there is good reason to believe that the blowup phenomenon for (1.1) changes completely when \(\sigma \) deviates sharply from Assumption 1.2; see for example Dozzi and López-Mimbela [9] for this phenomenon in the context of a related SPDE in which \(\sigma (u)=u\).

Remark 1.8

(On Assumption 1.3) It is easy to use Theorem 1.6 to improve itself beyond the monotonicity constraint of Assumption 1.3. For example, consider (1.1) when the reaction term is \(b(x)=1+x^2\). Clearly, b fails to verify Assumption 1.3. However, \(b(x)\ge {\tilde{b}}(x) = 1+[\max (x\,,0)]^2\), and the function \({\tilde{b}}\) does satisfy Assumption 1.3. Thus, it is possible to use a comparison argument to show that Theorem 1.6 applies and implies the instantaneous, everywhere blowup of (1.1) when \(b(x)=1+x^2\). We do not know if the strict positivity part of Assumption 1.3 can be replaced with non-negativity.

Let us now describe the main strategy behind the proof of Theorem 1.6. We may recast (1.2) as

$$\begin{aligned} u = \text { Term A }+\text { Term B }+\text { Term C }, \end{aligned}$$

notation being clear. Term A is deterministic, involves the initial condition, and plays no role in the blowup phenomenon because the initial condition is a nice function. In the PDE literature, there are many results about blowup that hold because the initial condition is assumed to be singular. Here, the initial data is a very nice function with no singularities. In our setting, blowup occurs for very different reasons, and is caused by the interplay between the stochastic Term B, which is the highly non-linear term, and the other stochastic Term C, which is regarded as a Walsh stochastic integral. More precisely: (i) A spatial ergodicity argument ensures that at any time \(t>0\) there will be spatial intervals over which Term C reaches an arbitrary (fixed) height; (ii) The explosive drift ensures that the solution rapidly reaches infinity in those spatial intervals; and (iii) The instantaneous propagation of the heat equation will ensure the everywhere blowup of the solution.

As part of our analysis, we prove that, when b is in fact a Lipschitz continuous function that satisfies the Osgood condition (1.3), the process \(x\mapsto u(t,x)\) is almost surely unbounded for every \(t>0\). The proof of this fact makes use of ideas from the Malliavin calculus and Poincaré inequalities developed in a recent paper by Chen et al. [4]. The limiting procedure used to define the solution then allows us to use the growth property of b to show blowup of the solution and thus complete the proof of the main result.
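Although no simulations are used anywhere in this paper, the mechanism just described can be glimpsed numerically. The following minimal sketch – our own illustration, not part of the paper's argument – discretizes (1.1) by an explicit finite-difference scheme on a periodic grid, with the hypothetical choices \(b(u)=1+u^2\) [which satisfies (1.3)] and \(\sigma \equiv 1\); space-time white noise is approximated by independent centered Gaussian variables with variance \(1/(\Delta t\,\Delta x)\), one per space-time cell.

```python
import numpy as np

rng = np.random.default_rng(0)

# Explicit finite-difference sketch of (1.1) with b(u) = 1 + u^2 and sigma = 1.
n, length = 512, 10.0
dx = length / n
dt = 0.1 * dx**2            # explicit heat step requires dt << dx^2
u = np.zeros(n)             # bounded initial profile u_0 = 0
t, cap = 0.0, 1e6           # stop once sup_x u exceeds `cap`

while u.max() < cap and t < 2.0:
    lap = (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2
    xi = rng.standard_normal(n) / np.sqrt(dt * dx)   # white-noise approximation
    u = u + dt * (0.5 * lap + 1.0 + u**2 + xi)
    t += dt

print(f"t = {t:.4f}, sup_x u = {u.max():.3g}")
```

On a fixed grid one can of course only observe fast blowup, never instantaneous blowup; distinguishing the two is precisely what the analysis below accomplishes.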

We end this introduction with a plan of the paper. In §2 we study ergodicity and growth properties for a family of stochastic convolutions, and we use some of these results to show that, when b is Lipschitz and the initial condition is a constant, the solution to (1.1) is spatially stationary and ergodic. In §3 we develop a hitting-time estimate for a family of differential inequalities and subsequently use that estimate in order to obtain a lower bound for u. The remaining details of the proof of Theorem 1.6 are gathered in §4, using the earlier results of the paper.

Throughout this paper, we write

$$\begin{aligned} \Vert X\Vert _p = \left\{ \textrm{E}(|X|^p)\right\} ^{1/p}\qquad \hbox { for all } p\ge 1 \hbox { and } X\in L^p(\Omega ). \end{aligned}$$

For every function \(f:{\mathbb {R}}\rightarrow {\mathbb {R}}\), \(\textrm{Lip}(f)\) denotes the optimal Lipschitz constant of f; that is,

$$\begin{aligned} \textrm{Lip}(f) = \sup _{-\infty<a<b<\infty }\frac{|f(b)-f(a)|}{b-a}. \end{aligned}$$

In particular, f is Lipschitz continuous iff \(\textrm{Lip}(f)<\infty \).

2 Spatial growth of stochastic convolutions

2.1 Spatial ergodicity via the Malliavin calculus

Following Nualart [16], we introduce some elements of the Malliavin calculus that we will need. Let \({\mathcal {H}}=L^2({\mathbb {R}}_+ \times {\mathbb {R}})\). For every Malliavin-differentiable random variable F, we let DF denote the Malliavin derivative of F, and observe that \(DF=\{ D_{r,z}F\}_{r>0,z\in {\mathbb {R}}}\) is a random field indexed by \((r, z)\in {\mathbb {R}}_+\times {\mathbb {R}}\).

For every \(p \ge 2\), let \({\mathbb {D}}^{1,p}\) denote the usual Gaussian Sobolev space endowed with the semi-norm

$$\begin{aligned} \Vert F\Vert _{1,p}^p:=\textrm{E}(|F|^p)+\textrm{E}(\Vert DF\Vert ^p_{\mathcal {H}}). \end{aligned}$$

We will need the following version of the Poincaré inequality due to Chen et al. [4, (2.1)]:

$$\begin{aligned} \vert \text { Cov } (F_1,F_2) \vert \le \int _0^{\infty } \textrm{d}r \int _{-\infty }^{\infty } \textrm{d}z\ \Vert D_{r,z} F_1 \Vert _2 \Vert D_{r,z} F_2 \Vert _2 \quad \hbox { for every } F_1,F_2 \hbox { in } {\mathbb {D}}^{1,2}.\nonumber \\ \end{aligned}$$
(2.1)

Next, let us recall some notions from the ergodic theory of multiparameter processes (see for example Chen et al. [3]): We say that a predictable random field \(Z=\{Z(t,x)\}_{(t,x)\in (0,\infty )\times {\mathbb {R}}}\) is spatially mixing when the random field \(x \rightarrow Z(t,x)\) is weakly mixing in the usual sense for every \(t>0\). This property can be stated as follows: For all \(k\in {\mathbb {N}}\), \(t>0\), \(\xi ^1,...,\xi ^k\in {\mathbb {R}}\), and Lipschitz-continuous functions \(g_1,...,g_k:{\mathbb {R}}\rightarrow {\mathbb {R}}\) that satisfy \(g_j(0)=0\) and \(\textrm{Lip}(g_j)=1\) for every \(j=1,...,k\),

$$\begin{aligned} \lim _{\vert x \vert \rightarrow \infty } \text { Cov } [{\mathcal {G}}(x), {\mathcal {G}}(0)]=0, \end{aligned}$$
(2.2)

where

$$\begin{aligned} {\mathcal {G}}(x)=\prod _{j=1}^k g_j(Z(t,x+\xi ^j)), \quad x \in {\mathbb {R}}. \end{aligned}$$
(2.3)

Whenever the process \(x \rightarrow Z(t,x)\) is stationary and weakly mixing for all \(t>0\), it is ergodic.

Finally, we will require the following elementary identity for products of the heat kernel

$$\begin{aligned} \int _{-\infty }^{\infty } \left[ G_{t-s}(x-y)\right] ^2 \left[ G_{s-r}(y-z)\right] ^2\, \textrm{d}y= \sqrt{\frac{t-r}{4\pi (t-s) (s-r)}}\left[ G_{t-r}(x-z)\right] ^2.\nonumber \\ \end{aligned}$$
(2.4)

See Chen et al. [4, below (2.7)].
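In fact, (2.4) is a quick consequence of the pointwise identity \([G_r(z)]^2=G_{r/2}(z)/\sqrt{4\pi r}\) and the semigroup property of the heat kernel:

$$\begin{aligned} \int _{-\infty }^{\infty } \left[ G_{t-s}(x-y)\right] ^2 \left[ G_{s-r}(y-z)\right] ^2\,\textrm{d}y = \frac{\int _{-\infty }^{\infty } G_{(t-s)/2}(x-y)\,G_{(s-r)/2}(y-z)\,\textrm{d}y}{4\pi \sqrt{(t-s)(s-r)}} = \frac{G_{(t-r)/2}(x-z)}{4\pi \sqrt{(t-s)(s-r)}}, \end{aligned}$$

and one more application of the same pointwise identity rewrites the right-hand side as \(\sqrt{(t-r)/[4\pi (t-s)(s-r)]}\,\left[ G_{t-r}(x-z)\right] ^2\).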

2.2 Ergodicity of stochastic convolutions

Let \(Z=\{Z(t,x)\}_{(t,x)\in (0,\infty )\times {\mathbb {R}}}\) be a predictable random field that satisfies

$$\begin{aligned} c_1 \le \inf _{(t,x)\in (0,\infty )\times {\mathbb {R}}} Z(t,x) \le \sup _{(t,x)\in (0,\infty )\times {\mathbb {R}}} Z(t,x) \le c_2, \end{aligned}$$
(2.5)

for two positive and finite constants \(c_1\) and \(c_2\) that are fixed throughout. Set \(I_Z(0,x)=0\), and consider the associated stochastic convolution

$$\begin{aligned} I_Z(t,x) = \int _{(0,t)\times {\mathbb {R}}} G_{t-s}(y-x) Z(s,y)\, W(\textrm{d}s\,\textrm{d}y)\qquad \hbox { for every } t>0 \hbox { and } x\in {\mathbb {R}}.\nonumber \\ \end{aligned}$$
(2.6)

The main aim of this section is to study the growth properties of the random field \(x \rightarrow I_Z(t\,,x)\). First, however, we develop natural conditions under which the random field \(x \rightarrow I_Z(t\,,x)\) is stationary and ergodic at all times \(t>0\).

Proposition 2.1

Assume that \(x \rightarrow Z(t, x)\) is stationary for all \(t>0\). Assume also that \(Z(t,x) \in {\mathbb {D}}^{1,p}\) for all \(p \ge 2\), \(t>0\) and \(x \in {\mathbb {R}}\), and that its Malliavin derivative \(DZ(t,x)\) has the following property: For every \(T>0\) and \(p \ge 2\) there exists a number \(C_{T,p}>0\) such that

$$\begin{aligned} \Vert D_{r,z} Z(t,x) \Vert _p \le C_{T,p}\, G_{t-r}(x-z), \end{aligned}$$
(2.7)

for every \(t \in (0\,,T)\) and \(x \in {\mathbb {R}}\) and for almost every \((r,z) \in (0,t) \times {\mathbb {R}}\). Then the process \(x \rightarrow Z(t\,, x)\) is ergodic for every \(t>0\), and \(x \rightarrow I_Z(t\,, x)\) is stationary and ergodic for every \(t>0.\)

Proof

Thanks to the Poincaré inequality (2.1), the proof of ergodicity follows the same pattern as [3, Proof of Theorem 1.3]. Therefore, we describe the argument only briefly, focusing on the places where adjustments are needed.

We start with the process Z and use an argument similar to that of Chen et al. [3, Proof of Corollary 9.1]; see also Chen et al. [4, Theorem 1.1]. Define \({\mathcal {G}}(x)\) as was done in (2.3). It then follows from (2.7) that there exists a constant \(c_{T,k}>0\) such that

$$\begin{aligned}\begin{aligned} \Vert D_{r,z} {\mathcal {G}}(x) \Vert _2&\le \sum _{j_0=1}^k \left( \prod _{j=1, j \ne j_0}^k \Vert g'_j(Z(t, x+\xi ^j)) \Vert _{2k} \right) \Vert D_{r,z} Z(t,x+\xi ^{j_0}) \Vert _{2k} \\&\le c_{T,k} \sum _{j=1}^k G_{t-r}(x+\xi ^j-z), \end{aligned}\end{aligned}$$

valid for all \(0<r<t\le T\) and \(x,z \in {\mathbb {R}}\).

We can combine the Poincaré inequality (2.1) and the semigroup property of the heat kernel to find that

$$\begin{aligned} \vert \text { Cov } [{\mathcal {G}}(x), {\mathcal {G}}(0)] \vert \le c_{T,k} \sum _{j,\ell =1}^k \int _0^t G_{2(t-r)}(x+\xi ^j-\xi ^{\ell })\, \textrm{d}r. \end{aligned}$$

This yields (2.2), whence follows the ergodicity of \(x \rightarrow Z(t\,,x)\) for every \(t>0\).

Next, we show that the process \(x \rightarrow I_Z(t,x)\) is stationary for all \(t>0\). The proof of this fact follows the proof of Lemma 7.1 in [3] closely. First, let us choose and fix some \(y \in {\mathbb {R}}\) and apply (7.2) in [3] as follows:

$$\begin{aligned} \begin{aligned} (I_Z\circ \theta _y)(t,x)= I_Z(t,x+y)&= \int _{(0,t)\times {\mathbb {R}}} G_{t-s}(x+y-z) Z(s,z-y+y)\, W(\textrm{d}s\,\textrm{d}z) \\&= \int _{(0,t)\times {\mathbb {R}}} G_{t-s}(x-z) Z(s,z+y)\, W_y(\textrm{d}s\,\textrm{d}z)\\&= \int _{(0,t)\times {\mathbb {R}}} G_{t-s}(x-z) (Z\circ \theta _y)(s,z)\, W_y(\textrm{d}s\,\textrm{d}z), \end{aligned} \end{aligned}$$

where \(\theta _y\) denotes the shift operator (see Chen et al. [3]), and \(W_y\) is the associated shifted Gaussian noise [3, (7.1)]. The spatial stationarity of \(I_Z\) follows from the facts that W and \(W_y\) have the same law and the random field \(Z\circ \theta _y\) has the same finite-dimensional distributions as Z because Z is assumed to be spatially stationary.

We now turn to the spatial ergodicity of the process \(I_Z\). By the properties of the divergence operator [16, Proposition 1.3.8], \(I_Z(t,x) \in {\mathbb {D}}^{1,k}\) for all \(k \ge 2\), \(t>0\), and \(x \in {\mathbb {R}}\). Moreover, the Malliavin derivative \(DI_Z(t,x)\) a.s. satisfies

$$\begin{aligned} D_{r,z} I_Z(t,x) = G_{t-r}(x-z) Z(r,z) + \int _{(r,t)\times {\mathbb {R}}} G_{t-s}(y-x) D_{r,z} Z(s,y) \, W(\textrm{d}s\,\textrm{d}y). \end{aligned}$$

In principle, the above is valid for a.e. \((r,z)\), but in fact the right-hand side can be used to define the Malliavin derivative everywhere a.s., and that is what we do here. In particular, for any integer \(k \ge 2\), the Burkholder-Davis-Gundy inequality and the estimate (2.7) together imply that

$$\begin{aligned} \begin{aligned} \Vert D_{r,z} I_Z(t,x) \Vert _{2k}&\le c G_{t-r}(x-z) + c_k\left( \int _r^t \textrm{d}s \int _{{\mathbb {R}}} \textrm{d}y \left[ G_{t-s}(x-y)\right] ^2 \Vert D_{r,z} Z(s,y) \Vert ^2_{2k}\right) ^{1/2} \\&\le c G_{t-r}(x-z) + c_{T,k}\left( \int _r^t \textrm{d}s \int _{{\mathbb {R}}} \textrm{d}y \left[ G_{t-s}(x-y)\right] ^2 \left[ G_{s-r}(y-z)\right] ^2 \right) ^{1/2}. \end{aligned} \end{aligned}$$

Thanks to (2.4), this yields

$$\begin{aligned} \begin{aligned} \Vert D_{r,z} I_Z(t,x) \Vert _{2k}&\le c G_{t-r}(x-z) + c_{T,k} G_{t-r}(x-z) \left( \int _r^t \sqrt{\frac{t-r}{4\pi (t-s)(s-r)}}\ \textrm{d}s\right) ^{1/2} \\&\le c_{T,k} G_{t-r}(x-z) (1+ (t-r)^{1/4}). \end{aligned}\end{aligned}$$
(2.8)

Define

$$\begin{aligned} {\mathcal {J}}(x)=\prod _{j=1}^k g_j(I_Z(t,x+\xi ^j)) \qquad \hbox { for }\ x\in {\mathbb {R}}, \end{aligned}$$

using the same \(g_1,\ldots ,g_k\) and \(\xi ^1,\ldots ,\xi ^k\) that were introduced earlier. In this way we can conclude from (2.8) and elementary properties of the Malliavin derivative that

$$\begin{aligned}\begin{aligned} \Vert D_{r,z} {\mathcal {J}}(x) \Vert _2&\le \sum _{j_0=1}^k \left( \prod _{j=1, j \ne j_0}^k \Vert g'_j(I_Z(t, x+\xi ^j)) \Vert _{2k} \right) \Vert D_{r,z} I_Z(t,x+\xi ^{j_0}) \Vert _{2k} \\&\le c_{T,k} \sum _{j=1}^k G_{t-r}(x+\xi ^j-z) (1+ (t-r)^{1/4}) \end{aligned}\end{aligned}$$

valid for all \(0<r<t\le T\) and \(x,z \in {\mathbb {R}}\).

Now we apply (2.1) together with the semigroup property of the heat kernel to see that

$$\begin{aligned} \begin{aligned} \vert \text { Cov } [{\mathcal {J}}(x), {\mathcal {J}}(0)] \vert&\le c_{T,k} \sum _{j,\ell =1}^k \int _0^t G_{2(t-r)}(x+\xi ^j-\xi ^{\ell })(1+(t-r)^{1/4})^2\, \textrm{d}r. \end{aligned}\end{aligned}$$

Therefore, \(\lim _{\vert x \vert \rightarrow \infty } \text { Cov } [{\mathcal {J}}(x)\,, {\mathcal {J}}(0)]=0\), and hence follows the ergodicity of \(x \rightarrow I_Z(t,x)\) for every \(t>0\). This concludes the proof. \(\square \)

2.3 Ergodicity of the solution

In this section, we consider Eq. (1.1) with constant initial condition \(\rho \in {\mathbb {R}}\). That is,

$$\begin{aligned} u(t,x) = \rho + \int _{(0,t)\times {\mathbb {R}}} G_{t-s}(y-x) b(u(s,y))\,\textrm{d}s\,\textrm{d}y + {\mathcal {I}}(t,x), \end{aligned}$$
(2.9)

where

$$\begin{aligned} {\mathcal {I}}(t,x) = \int _{(0,t)\times {\mathbb {R}}} G_{t-s}(y-x) \sigma (u(s,y)) \, W(\textrm{d}s\,\textrm{d}y). \end{aligned}$$

The aim of this section is to show that when \(\sigma \) and b are Lipschitz continuous the solution to (2.9) is spatially ergodic. This follows from an application of Proposition 2.1. Note that because we are discussing Lipschitz continuous b, there is no need to describe what we mean by solution; that is done already in Walsh [20].

According to Bally and Pardoux [1] (see also Nualart [16, Proposition 1.2.4]), under these conditions \(u(t\,,x) \in {\mathbb {D}}^{1,p}\) for all \(p \ge 2\), \(t>0\), and \(x \in {\mathbb {R}}\), and the Malliavin derivative \(Du(t,x)\) satisfies

$$\begin{aligned} \begin{aligned} D_{r,z} u(t,x) = G_{t-r}(x-z) \sigma (u(r,z))&+ \int _{(r,t)\times {\mathbb {R}}} G_{t-s}(y-x) B_{s,y} D_{r,z}u(s,y) \,\textrm{d}s\,\textrm{d}y\\&+ \int _{(r,t)\times {\mathbb {R}}} G_{t-s}(y-x) \Sigma _{s,y} D_{r,z} u(s,y) \, W(\textrm{d}s\,\textrm{d}y) \qquad \text { a.s }, \end{aligned}\end{aligned}$$

for a.e. \((r,z)\in (0,t)\times {\mathbb {R}}\), where B and \(\Sigma \) are a.s. bounded random fields. We have the following estimate on the Malliavin derivative.

Lemma 2.2

If \(\sigma \) and b are Lipschitz continuous, then for every \(T>0\) and \(p \ge 2\) there exists \(C_{T,p}>0\) such that

$$\begin{aligned} \Vert D_{r,z} u(t,x) \Vert _p \le C_{T,p}G_{t-r}(x-z) \end{aligned}$$

for all \(t \in (0\,,T)\) and \(x \in {\mathbb {R}}\), and for almost every \((r,z) \in (0,t) \times {\mathbb {R}}\).

Proof

The proof follows closely the proof of Lemma 2.1 in Chen et al. [4] but we must account for a few of the changes that are caused by the drift b: By Minkowski’s inequality,

$$\begin{aligned} \bigg \Vert \int _{(r,t)\times {\mathbb {R}}} G_{t-s}(y-x) B_{s,y} D_{r,z}u(s,y) \,\textrm{d}s\,\textrm{d}y\bigg \Vert _p^2 \le c \int _r^t \textrm{d}s\int _{-\infty }^\infty \textrm{d}y \left[ G_{t-s}(x-y)\right] ^2 \Vert D_{r,z} u(s,y) \Vert ^2_p. \end{aligned}$$

This is the same expression that appears in the right-hand side of (2.6) in [4]. Therefore, the rest of the proof follows the analogous argument in [4, Proof of Lemma 2.1]. \(\square \)

We are now ready to state the main result of this section.

Corollary 2.3

If \(\sigma \) and b are Lipschitz continuous, then the random fields \(x \rightarrow u(t\,,x)\) and \(x \rightarrow {\mathcal {I}}(t\,,x)\) are stationary and ergodic for every \(t>0\).

Proof

Stationarity follows from Chen et al. [3, Lemma 7.1], and ergodicity is a direct consequence of Lemma 2.2 and Proposition 2.1. \(\square \)

2.4 Spatial growth of stochastic convolutions

We are ready to state the main result of this section.

Theorem 2.4

For every predictable random field Z that satisfies the boundedness condition (2.5) and for which \(x\mapsto I_Z(t\,,x)\) is stationary and ergodic for all \(t>0\), there exists \(\eta =\eta (c_1,c_2)>0\) such that

$$\begin{aligned} \textrm{P}\left\{ \limsup _{c\rightarrow \infty }\inf _{t\in (a,a+(\eta a)^2)}\inf _{x\in (0,\eta a)} I_Z(t,c+x)=\infty \right\} =1, \end{aligned}$$

valid for every non-random number \(a>0\).

Remark 2.5

A crucial part of the message of Theorem 2.4 is that \(\eta \) depends only on \(c_1,c_2\) from (2.5) and is, in particular, independent of the choice of Z.

The proof of Theorem 2.4 requires a few prefatory steps, which we present as a series of lemmas. Once those lemmas are established, we are able to prove Theorem 2.4 promptly.

Lemma 2.6

For every \(c_2>c_1>0\) there exist \(C_2,C_1>0\) such that

$$\begin{aligned} \frac{C_1}{1+\lambda } \exp \left( - \frac{\lambda ^2}{2c_1^2}\right) \le \textrm{P}\left\{ I_Z(t,x) \ge (t/\pi )^{1/4}\lambda \right\} \le \frac{C_2}{1+\lambda } \exp \left( - \frac{\lambda ^2}{2c_2^2}\right) , \end{aligned}$$

uniformly for all \(t,\lambda \ge 0\) and \(x\in {\mathbb {R}}\), and for every predictable random field Z that satisfies (2.5).

Proof

Choose and fix \(t>0\) and consider

$$\begin{aligned} M_0=0 \quad \text { and }\quad M_r = \int _{(0,r)\times {\mathbb {R}}} G_{t-s}(y-x) Z(s,y)\,W(\textrm{d}s\,\textrm{d}y) \qquad \hbox { for }\ 0<r\le t. \end{aligned}$$

Because Z is uniformly bounded, the above is a continuous, \(L^2\)-martingale with quadratic variation

$$\begin{aligned} \langle M\rangle _r = \int _0^r\textrm{d}s\int _{-\infty }^\infty \textrm{d}y\ [ G_{t-s}(y-x)]^2 |Z(s,y)|^2 \qquad \hbox { for }\ 0\le r\le t. \end{aligned}$$

Because

$$\begin{aligned} \int _0^r\textrm{d}s\int _{-\infty }^\infty \textrm{d}y\ [ G_{t-s}(y-x)]^2 = \int _0^r \frac{\textrm{d}s}{\sqrt{4\pi (t-s)}} = \sqrt{\frac{t}{\pi }} - \sqrt{\frac{t-r}{\pi }} \qquad \hbox { for }\ 0\le r\le t, \end{aligned}$$

the inequalities (2.5) yield

$$\begin{aligned} \frac{c_1^2}{\sqrt{\pi }}\left[ \sqrt{t} - \sqrt{t-r}\right] \le \langle M\rangle _r \le \frac{c_2^2}{\sqrt{\pi }}\left[ \sqrt{t} - \sqrt{t-r}\right] \qquad \hbox { for }\ 0\le r\le t. \end{aligned}$$
(2.10)

The Dambis–Dubins–Schwarz theorem (see [18]) ensures that \(M_r = B(\langle M\rangle _r)\) for a standard, linear Brownian motion B. Since \(I_Z(t,x)=M_t\) is the terminal point of our martingale M, and because (2.10) implies that \(\langle M\rangle _t\le c_2^2\sqrt{t/\pi }\), we learn from the reflection principle and the scaling property that

$$\begin{aligned} \textrm{P}\left\{ I_Z(t,x) \ge c_2(t/\pi )^{1/4}\lambda \right\} \le \textrm{P}\left\{ \sup _{0\le r\le c_2^2\sqrt{t/\pi }} B(r) \ge c_2(t/\pi )^{1/4}\lambda \right\} =\sqrt{2/\pi }\int _{\lambda }^\infty \textrm{e}^{-z^2/2}\,\textrm{d}z. \end{aligned}$$

A standard Gaussian tail estimate yields the upper bound. For the lower bound, we observe in a similar manner that

$$\begin{aligned}&\textrm{P}\left\{ I_Z(t\,,x) \ge c_1(t/\pi )^{1/4}\lambda \right\} \\&\ge \textrm{P}\left\{ B\left( c_1^2\sqrt{t/\pi } \right) \ge 2 c_1(t/\pi )^{1/4}\lambda \right\} \\&\times \textrm{P}\left\{ \sup _{\nu \in [c_1^2,c_2^2]} \left| B\left( \nu \sqrt{t/\pi }\right) - B\left( c_1^2\sqrt{t/\pi }\right) \right| \le c_1 (t/\pi )^{1/4}\right\} \\&= \frac{\varpi }{\sqrt{2\pi }} \int _{2\lambda }^\infty \textrm{e}^{-z^2/2}\,\textrm{d}z, \end{aligned}$$

where \(\varpi = \textrm{P}\{ \sup _{\nu \in [1,(c_2/c_1)^2]} | B(\nu )-B(1)| \le 1\}\in (0\,,1).\) This proves that

$$\begin{aligned} \textrm{P}\left\{ I_Z(t,x) \ge c_1(t/\pi )^{1/4}\lambda \right\} \gtrsim \lambda ^{-1}\exp (-\lambda ^2/2)\qquad \hbox { for all }\ \lambda \ge 1, \end{aligned}$$

where the implied constant depends only on \(c_1\) and \(c_2\). When \(\lambda \in (0,1)\), it suffices to lower bound the integral by a constant. \(\square \)
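Although we make no use of it in the sequel, the reflection-principle step in the preceding proof is easy to test numerically. The following Monte Carlo sketch – our own illustration, with arbitrarily chosen parameters tau and lam – checks that \(\textrm{P}\{\sup _{0\le r\le \tau }B(r)\ge \lambda \}=2\,\textrm{P}\{B(\tau )\ge \lambda \}\) for a standard Brownian motion B:

```python
import numpy as np

rng = np.random.default_rng(1)

# Monte Carlo check of the reflection principle used in the proof of Lemma 2.6:
# P{ sup_{0 <= r <= tau} B(r) >= lam } = 2 * P{ B(tau) >= lam }.
tau, lam = 1.0, 1.5
n_paths, n_steps = 50_000, 1_000
dt = tau / n_steps

pos = np.zeros(n_paths)        # current value of each simulated path
run_max = np.zeros(n_paths)    # running maximum of each path
for _ in range(n_steps):
    pos += np.sqrt(dt) * rng.standard_normal(n_paths)
    np.maximum(run_max, pos, out=run_max)

print(f"P(sup B >= lam)   ~ {np.mean(run_max >= lam):.4f}")
print(f"2 P(B_tau >= lam) ~ {2.0 * np.mean(pos >= lam):.4f}")
```

Both estimates hover near \(2[1-\Phi (1.5)]\approx 0.134\); the small downward bias of the first estimate comes from sampling the supremum on a finite grid.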

Lemma 2.7

Choose and fix a non-random number \(c_0>0\). Then,

$$\begin{aligned} \sup _{t\ge 0}\sup _{-\infty<x\ne z<\infty }\textrm{E}\left( \left| \frac{I_Z(t,x) - I_Z(t,z)}{|x-z|^{1/2}}\right| ^k \right) \le (2 c_0^2k)^{k/2}, \end{aligned}$$

for every \(k\in [2,\infty )\) and for all predictable random fields Z that satisfy \(\sup _{p\in {\mathbb {R}}_+\times {\mathbb {R}}}|Z(p)|\le c_0\).

Remark 2.8

We emphasize that Lemma 2.7 assumes only that Z is bounded. This is a much weaker condition than (2.5), as the latter also implies that, among other things, \(\inf _{p\in {\mathbb {R}}_+\times {\mathbb {R}}}Z(p)\) is a.s. bounded from below by a strictly positive, deterministic number. In fact, the next lemmas also require only this weaker boundedness condition.

Proof

Choose and fix \(t\ge 0\) and \(x\ne z\in {\mathbb {R}}\), and let Z be as described. By the Burkholder-Davis-Gundy inequality in the form [7], for every real number \(k\ge 2\),

$$\begin{aligned} \Vert I_Z(t\,,x) - I_Z(t\,,z) \Vert _k^2&\le 4k\int _0^t\textrm{d}s\int _{-\infty }^\infty \textrm{d}y\ [ G_{t-s}(y-x) - G_{t-s}(y-z)]^2\Vert Z(s\,,y)\Vert _k^2\\&\le 4c_0^2k\int _0^\infty \textrm{d}s\int _{-\infty }^\infty \textrm{d}y\ [ G_s(y-x+z) - G_s(y)]^2\\&=\frac{2c_0^2k}{\pi }\int _0^\infty \textrm{d}s\int _{-\infty }^\infty \textrm{d}\xi \ \textrm{e}^{-s\xi ^2}\left| 1 - \textrm{e}^{-i\xi (x-z)/2}\right| ^2\\&\qquad \qquad \text { [Plancherel's theorem] }\\&=\frac{8c_0^2k}{\pi }\int _0^\infty \frac{1-\cos (|x-z|\xi /2)}{\xi ^2}\,\textrm{d}\xi =2c_0^2k|x-z|. \end{aligned}$$
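The final equality uses the classical integral \(\int _0^\infty \xi ^{-2}\left[ 1-\cos (a\xi )\right] \textrm{d}\xi =\pi a/2\) for \(a>0\), applied here with \(a=|x-z|/2\).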

This proves the lemma. \(\square \)

Lemma 2.9

Choose and fix a non-random number \(c_0>0\). Then,

$$\begin{aligned} \sup _{t,h>0}\sup _{x\in {\mathbb {R}}}\textrm{E}\left( \left| \frac{I_Z(t+h,x) - I_Z(t,x)}{h^{1/4}}\right| ^k \right) \le (5 c_0^2k)^{k/2}, \end{aligned}$$

for every \(k\in [2,\infty )\) and for all predictable random fields Z that satisfy \(\sup _{p\in {\mathbb {R}}_+\times {\mathbb {R}}}|Z(p)|\le c_0\).

Proof

Choose and fix \(t,h>0\) and \(x\in {\mathbb {R}}\), and a predictable random field Z as above, and then write

$$\begin{aligned} \Vert I_Z(t+h,x) - I_Z(t,x)\Vert _k \le T_1 + T_2, \end{aligned}$$

where

$$\begin{aligned} T_1&= \left\| \int _{(0,t)\times {\mathbb {R}}} \left[ G_{t+h-s}(y-x) - G_{t-s}(y-x) \right] Z(s\,,y)\,W(\textrm{d}s\,\textrm{d}y) \right\| _k,\\ T_2&= \left\| \int _{(t,t+h)\times {\mathbb {R}}} G_{t+h-s}(y-x)Z(s\,,y)\,W(\textrm{d}s\,\textrm{d}y)\right\| _k. \end{aligned}$$

By the Burkholder-Davis-Gundy inequality in the form [7], for every real number \(k\ge 2\),

$$\begin{aligned} T_1^2&\le 4k\int _0^t\textrm{d}s\int _{-\infty }^\infty \textrm{d}y \left[ G_{t+h-s}(y-x) - G_{t-s}(y-x) \right] ^2\Vert Z(s\,,y)\Vert _k^2\\&\le 4c_0^2k\int _0^\infty \textrm{d}s\int _{-\infty }^\infty \textrm{d}y \left[ G_{s+h}(y) - G_s(y) \right] ^2\\&=\frac{2c_0^2k}{\pi }\int _0^\infty \textrm{d}s\int _{-\infty }^\infty \textrm{d}\xi \ \textrm{e}^{-s\xi ^2} \left| 1 - \textrm{e}^{-h\xi ^2/2}\right| ^2 \qquad \qquad \qquad \text { [Plancherel's theorem] }\\&= \frac{2\sqrt{2}\,c_0^2k}{\pi }\int _0^\infty \frac{|1-\exp (-y^2)|^2}{y^2}\,\textrm{d}y\,\sqrt{h}\\&\le \frac{2\sqrt{2}\,c_0^2k}{\pi }\left( \frac{1}{3} + \int _1^\infty \frac{\textrm{d}y}{y^2}\right) \sqrt{h} = \frac{8\sqrt{2}\,c_0^2k}{3\pi }\,\sqrt{h}, \end{aligned}$$

where we have used the bound \(1-\exp (-y^2)\le y^2\wedge 1\) in order to obtain the last concrete numerical estimate. Similarly, we obtain

$$\begin{aligned} T_2^2&\le 4k\int _t^{t+h}\textrm{d}s\int _{-\infty }^\infty \textrm{d}y\ [ G_{t+h-s}(y-x)]^2\Vert Z(s\,,y)\Vert _k^2\\&\le 4c_0^2k\int _0^h\textrm{d}s\int _{-\infty }^\infty \textrm{d}y\ [ G_{s+h}(y)]^2 = \frac{2 c_0^2k}{\pi }\int _h^{2h}\textrm{d}s\int _{-\infty }^\infty \textrm{d}\xi \ \textrm{e}^{-s\xi ^2} \\&=\frac{2c_0^2k}{\sqrt{\pi }}\int _h^{2h}\frac{\textrm{d}s}{\sqrt{s}} =\frac{4(\sqrt{2}-1)c_0^2k}{\sqrt{\pi }}\sqrt{h}. \end{aligned}$$

We finally obtain

$$\begin{aligned} \Vert I_Z(t+h,x) - I_Z(t,x)\Vert _k \le c_0\sqrt{k} \left[ \sqrt{\frac{8\sqrt{2}}{3\pi }} + \sqrt{\frac{4(\sqrt{2}-1)}{\sqrt{\pi }}} \right] h^{1/4}. \end{aligned}$$

This completes the proof. \(\square \)

Define

$$\begin{aligned} \varrho (p) = |p_1|^{1/4} + |p_2|^{1/2}\qquad \hbox { for all }\ p=(p_1, p_2)\in {\mathbb {R}}^2, \end{aligned}$$

and, for convenience, we write \(I_Z(p):=I_Z(p_1,p_2)\).

Lemma 2.10

For all non-random numbers \(c_0,m>0\) and \(\delta \in (0\,,1)\),

$$\begin{aligned} \sup _{Z,{\mathbb {I}}} \textrm{E}\exp \left( \alpha \sup _{\begin{array}{c} p,q\in [0,1]\times {\mathbb {I}}\\ 0< \varrho (p-q)\le 1 \end{array}}\left| \frac{I_Z(p) - I_Z(q)}{[\varrho (p-q)]^{1-\delta }}\right| ^2 \right) <\infty , \end{aligned}$$

where \(\sup _{Z,{\mathbb {I}}}\) denotes the supremum over all predictable random fields Z that satisfy \(\sup _{p\in {\mathbb {R}}_+\times {\mathbb {R}}} |Z(p)|\le c_0\) and over all intervals \({\mathbb {I}}\subset {\mathbb {R}}\) that have length \(\le m\), and \(\alpha \) is any positive number that satisfies

$$\begin{aligned} \alpha < \frac{(1-2^{-\delta /2})^2}{2^{25}\textrm{e}c_0^2}. \end{aligned}$$

Proof

Since \((a+b)^k\le 2^k(a^k+b^k)\) for all \(k\ge 1\) and \(a,b\ge 0\), Lemmas 2.7 and 2.9 and Jensen’s inequality together imply that

$$\begin{aligned} \begin{aligned} \textrm{E}\left( \left| \frac{I_Z(p)-I_Z(q)}{\varrho (p-q)}\right| ^k\right)&\le \left\{ \textrm{E}\left( \left| \frac{I_Z(p)-I_Z(q)}{\varrho (p-q)}\right| ^{2k}\right) \right\} ^{1/2} \\&\le c_0^k 2^k(4^{k/2}+10^{k/2})k^{k/2}\le (13 c_0)^k k^{k/2}, \end{aligned} \end{aligned}$$
(2.11)

valid for all real numbers \(k\ge 1\), distinct \(p,q\in {\mathbb {R}}_+\times {\mathbb {R}}\), and predictable Z that satisfy \(\sup _{p\in {\mathbb {R}}_+\times {\mathbb {R}}}|Z(p)|\le c_0\).

We are going to use a suitable form of Garsia’s lemma [14, Appendix C], and will begin by verifying the conditions that can be found in that reference. Note that \(\varrho (0)=0\) and \(\varrho \) is subadditive: \(\varrho (p+q)\le \varrho (p)+\varrho (q)\) for all \(p,q\in {\mathbb {R}}^2\). We use the notation of [14, Appendix C] and let

$$\begin{aligned} \textrm{B}_\varrho (s) =\left\{ y\in {\mathbb {R}}^2:\, \varrho (y) \le s\right\} \qquad \hbox { for all }\ s\ge 0, \end{aligned}$$

and for all real numbers \(k\ge 1\),

$$\begin{aligned} {\mathcal {I}}_k = \int _{[0,1]\times {\mathbb {I}}}\textrm{d}p\int _{[0,1]\times {\mathbb {I}}}\textrm{d}q\ \left| \frac{I_Z(p)-I_Z(q)}{\varrho (p-q)}\right| ^k. \end{aligned}$$

We know that \({\mathcal {I}}_k<\infty \) a.s. for every \(k\ge 1\). In fact, (2.11) ensures that

$$\begin{aligned} \textrm{E}({\mathcal {I}}_k) \le m^2(13 c_0)^kk^{k/2}, \end{aligned}$$
(2.12)

for all real numbers \(k\ge 1\) and all predictable Z that satisfy \(\sup _{p\in {\mathbb {R}}_+\times {\mathbb {R}}}|Z(p)|\le c_0\). If \(s\ge 0\) and \(y\in {\mathbb {R}}^2\) satisfies \(|y_1|\le (s/2)^4\) and \(|y_2|\le (s/2)^2\), then certainly \(y\in \textrm{B}_\varrho (s)\). Similarly, if \(y\in \textrm{B}_\varrho (s)\), then certainly \(|y_1|\le s^4\) and \(|y_2|\le s^2\). This argument shows that \((s/2)^6\le |\textrm{B}_\varrho (s)|\le 4s^6\) for all \(s\ge 0\), where \(|\,\cdots |\) denotes the Lebesgue measure on \({\mathbb {R}}^2\). Consequently, \(\int _0^{r_0} |\textrm{B}_\varrho (s)|^{-2/k}\,\textrm{d}s <\infty \) for one, hence all, \(r_0>0\), if and only if \(k>12\), and

$$\begin{aligned} \int _0^{r_0}\frac{\textrm{d}s}{|B_\varrho (s)|^{2/k}}&\le 2^{12/k}\int _0^{r_0} s^{-12/k}\,\textrm{d}s \le \frac{2kr_0^{(k-12)/k}}{k-12}&\hbox { for every } r_0>0 \hbox { and } k>12\\&\le 4r_0^{(k-12)/k}&\hbox { for every } r_0>0 \hbox { and } k\ge 24. \end{aligned}$$

Apply Theorem C.4 of [14] with \(\mu (z)=z\) – so that \(C_\mu =2\) there – in order to see that

$$\begin{aligned} \sup _{\begin{array}{c} p,q\in [0,r]\times {\mathbb {I}}\\ \varrho (p-q)\le r_0 \end{array}} |I_Z(p) - I_Z(q)| \le 32{\mathcal {I}}_k^{1/k}\int _0^{r_0}\frac{\textrm{d}s}{|B_\varrho (s)|^{2/k}} \le 128{\mathcal {I}}_k^{1/k}r_0^{(k-12)/k} \qquad \text { a.s. }, \end{aligned}$$

for every non-random \(k\ge 24\) and \(r_0>0\). In particular, we learn from (2.12) that

$$\begin{aligned} \textrm{E}\left( \sup _{\begin{array}{c} p,q\in [0,1]\times {\mathbb {I}}\\ \varrho (p-q)\le r_0 \end{array}} |I_Z(p) - I_Z(q)|^k \right) \le 128^k r_0^{k-12}\textrm{E}({\mathcal {I}}_k) \le m^2(1664 c_0)^k r_0^{k-12}k^{k/2}, \end{aligned}$$

for every \(k\ge 24\) and \(r_0>0\), and all intervals \({\mathbb {I}}\) of length m, and all predictable fields Z that satisfy \(\sup _{p\in {\mathbb {R}}_+\times {\mathbb {R}}}|Z(p)|\le c_0\). We freeze all variables and define for every \(\delta \in (0\,,1)\) and \(n\in {\mathbb {Z}}_+\),

$$\begin{aligned} S_{n,\delta } = \left\{ \textrm{E}\left( \sup _{\begin{array}{c} p,q\in [0,1]\times {\mathbb {I}}\\ 2^{-n-1}< \varrho (p-q)\le 2^{-n} \end{array}} \left| \frac{I_Z(p) - I_Z(q)}{[\varrho (p-q)]^{1-\delta }} \right| ^k \right) \right\} ^{1/k}. \end{aligned}$$

It follows that as long as \(k\ge 24\),

$$\begin{aligned} S_{n,\delta } \le 2^{(1-\delta )(n+1)} \left\{ \textrm{E}\left( \sup _{\begin{array}{c} p,q\in [0,1]\times {\mathbb {I}}\\ \varrho (p-q)\le 2^{-n} \end{array}} |I_Z(p) - I_Z(q)|^k \right) \right\} ^{1/k} \le 2^{12-\delta } c_0 m^{2/k} 2^{-n[\delta -(12/k)]} \sqrt{k}. \end{aligned}$$

Sum the preceding over all \(n\in {\mathbb {Z}}_+\) to see that, as long as \(k\ge (24/\delta )>(12/\delta )\vee 24\),

$$\begin{aligned} \left\{ \textrm{E}\left( \sup _{\begin{array}{c} p,q\in [0,1]\times {\mathbb {I}}\\ \varrho (p-q)\le 1 \end{array}} \left| \frac{I_Z(p) - I_Z(q)}{[\varrho (p-q)]^{1-\delta }} \right| ^k \right) \right\} ^{1/k} \le \frac{2^{12-\delta } c_0 m^{2/k} \sqrt{k}}{1 - 2^{-[\delta -(12/k)]}} \le \frac{2^{12}}{1-2^{-\delta /2}}c_0 m^{2/k} \sqrt{k}. \end{aligned}$$

Replace k by 2k and restrict attention to integral choices of k in order to see that

$$\begin{aligned} \textrm{E}\left( \sup _{\begin{array}{c} p,q\in [0,1]\times \mathbb {I}\\ \varrho (p-q)\le 1 \end{array}} \left| \frac{I_{Z}(p) - I_{Z}(q)}{[\varrho (p-q)]^{1-\delta }} \right| ^{2k} \right) \le m^{2} \left( \frac{2^{25/2}\sqrt{e} c_0}{1-2^{-\delta /2}}\right) ^{2k}k! =: m^{2} Q^{k} k!, \end{aligned}$$

for every integer \(k \ge 12/\delta \), all intervals \({\mathbb {I}}\) of length m, and all predictable fields Z that satisfy \(\sup _{p\in {\mathbb {R}}_+\times {\mathbb {R}}}|Z(p)|\le c_0\); here we have used the inequality \(k^k\le \textrm{e}^k k!\), valid for all positive integers k. An appeal to the Taylor series expansion of the exponential function \(v\mapsto \exp (\alpha v^2)\) yields

$$\begin{aligned} \textrm{E}\exp \left( \alpha \sup _{\begin{array}{c} p,q\in [0,1]\times {\mathbb {I}}\\ \varrho (p-q)\le 1 \end{array}} \left| \frac{I_Z(p) - I_Z(q)}{[\varrho (p-q)]^{1-\delta }} \right| ^2\right) \le \frac{m^2}{1-\alpha Q}<\infty , \end{aligned}$$

for every \(\alpha \) that satisfies \(\alpha <Q^{-1}\). This proves the lemma. \(\square \)

We are ready to conclude this section.

Proof of Theorem 2.4

Lemma 2.6 ensures that

$$\begin{aligned} \textrm{P}\left\{ I_Z(a,c) > M\left( \frac{a}{\pi }\right) ^{1/4}\right\} \ge \frac{C_1\textrm{e}^{-M^2/(2c_1^2)}}{1+M}, \end{aligned}$$

for all \(a>0\), \(c\in {\mathbb {R}}\), and \(M\ge 1\). In particular,

$$\begin{aligned}&\textrm{P}\left\{ \inf _{t\in (a,a+\varepsilon ^4)}\inf _{x\in (c,c+\varepsilon ^2)} I_Z(t\,,x) \le M\left( \frac{a}{\pi }\right) ^{1/4}\right\} \\&\qquad \le 1 - \frac{C_1\textrm{e}^{-(2M)^2/(2c_1^2)}}{1+2M}\\&\qquad + \textrm{P}\left\{ \sup _{t\in (a,a+\varepsilon ^4)}\sup _{x\in (c,c+\varepsilon ^2)} | I_Z(t\,,x) - I_Z(a\,,c)| \ge M\left( \frac{a}{\pi }\right) ^{1/4}\right\} . \end{aligned}$$

Chebyshev’s inequality yields the following:

$$\begin{aligned}&\textrm{P}\left\{ \sup _{t\in (a,a+\varepsilon ^4)}\sup _{x\in (c,c+\varepsilon ^2)} | I_Z(t\,,x) - I_Z(a\,,c)| \ge M\left( \frac{a}{\pi }\right) ^{1/4}\right\} \\&\le \textrm{P}\left\{ \sup _{t\in (a,a+\varepsilon ^4)}\sup _{x\in (c,c+\varepsilon ^2)} \left| \frac{I_Z(t\,,x) - I_Z(a\,,c)}{\sqrt{\varrho \left( (t\,,x) - (a\,,c)\right) }} \right| \ge \frac{M (a/\pi )^{1/4}}{\sqrt{2\varepsilon }}\right\} \\&\le \textrm{E}\exp \left( \alpha \sup _{t\in (a,a+\varepsilon ^4)}\sup _{x\in (c,c+\varepsilon ^2)} \left| \frac{ I_Z(t\,,x) - I_Z(a\,,c)}{\sqrt{\varrho ((t\,,x)-(a\,,c))}}\right| ^2\right) \times \exp \left( - \frac{\alpha M^2 \sqrt{a/\pi }}{2\varepsilon }\right) , \end{aligned}$$

for all \(M\ge 1\) and \(a,c,\varepsilon ,\alpha >0\). Choose and fix

$$\begin{aligned} \alpha = \frac{(1-2^{-1/4})^2}{2^{26}\textrm{e}c_2^2} \quad \text { and }\quad \varepsilon =\frac{c_1^2\alpha }{8}\sqrt{\frac{a}{\pi }}. \end{aligned}$$
(2.13)

and apply Lemma 2.10 [with \(\delta =\frac{1}{2}\) and \(c_0=c_2\)] in order to see that there exists \(K = K(c_1,c_2)>1\) such that

$$\begin{aligned} \textrm{P}\left\{ \inf _{t\in (a,a+\varepsilon ^4)}\inf _{x\in (c,c+\varepsilon ^2)} I_Z(t\,,x) \le M\left( \frac{a}{\pi }\right) ^{1/4}\right\}&\le 1 - \frac{C_1\textrm{e}^{-(2M)^2/(2c_1^2)}}{1+2M} + K\textrm{e}^{-(2M)^2/c_1^2}\\&\le 1- \textrm{e}^{-(2M)^2/(2c_1^2)}\left[ \frac{C_1}{3M} -K\textrm{e}^{-(2M)^2/(2c_1^2)}\right] , \end{aligned}$$

for all \(M\ge 1\) and \(a>0\). In particular, there exists \(M_0=M_0(c_1,c_2)>1\) such that

$$\begin{aligned} \sup _{a,c>0} \textrm{P}\left\{ \inf _{t\in (a,a+\varepsilon ^4)}\inf _{x\in (c,c+\varepsilon ^2)} I_Z(t,x) \le M\left( \frac{a}{\pi }\right) ^{1/4}\right\} \le 1 - \frac{C_1\textrm{e}^{-(2M)^2/(2c_1^2)}}{6M} \end{aligned}$$

for all \(M\ge M_0\). We recall that \(\varepsilon =\varepsilon (a,c_1,c_2)\) is defined in (2.13). In any case, this readily yields

$$\begin{aligned} \inf _{a>0}\textrm{P}\left\{ \limsup _{c\rightarrow \infty } \inf _{t\in (a,a+\varepsilon ^4)}\inf _{x\in (c,c+\varepsilon ^2)}I_Z(t,x)> M\left( \frac{a}{\pi }\right) ^{1/4}\right\} \ge \frac{C_1\textrm{e}^{-(2M)^2/(2c_1^2)}}{6M}>0,\nonumber \\ \end{aligned}$$
(2.14)

for all \(M\ge M_0\). The event in (2.14) is invariant under spatial shifts; therefore, since we are assuming that the infinite-dimensional process \(x\mapsto I_Z(\cdot ,x)\) is ergodic, we can improve (2.14) to the following without additional work:

$$\begin{aligned} \textrm{P}\left\{ \limsup _{c\rightarrow \infty } \inf _{t\in (a,a+\varepsilon ^4)}\inf _{x\in (c,c+\varepsilon ^2)} I_Z(t,x) > M\left( \frac{a}{\pi }\right) ^{1/4}\right\} =1, \end{aligned}$$

for all \(M\ge M_0\) and \(a>0\). We can now send \(M\rightarrow \infty \) to deduce the theorem from the particular form of \(\varepsilon \) that is given in (2.13). \(\square \)

3 A lower bound via differential inequalities

In this section, we continue to assume that b is Lipschitz continuous and nondecreasing. Our aim is to prove the following key result.

Theorem 3.1

If \(b:{\mathbb {R}}\rightarrow (0,\infty )\) is Lipschitz continuous and nondecreasing, then for every non-random number \(a>0\) there exists a non-random number \(\varepsilon = \varepsilon (a)>0\) – not depending on the choice of b – that satisfies the following for every \(M>\Vert u_0\Vert _{L^\infty ({\mathbb {R}})}\): \(\lim _{a\rightarrow 0^+} \varepsilon (a)=0\), and there exists an a.s.-finite random variable \(c = c(a,M)>0\), independent of b, such that

$$\begin{aligned} \inf _{t\in [a+\varepsilon ,a+2\varepsilon ]}\inf _{x\in (c,c+\sqrt{\varepsilon })} u(t,x) \ge \sup \left\{ N>M:\, \int _{M+\rho }^{N+\rho } \frac{\textrm{d}y}{b(y)}<\varepsilon \right\} \qquad \text { a.s. } \quad [\sup \varnothing =0], \end{aligned}$$

where \(\rho : = \inf _{x\in {\mathbb {R}}} u_0(x)\).

The following result will be useful for the proof of the above theorem.

Lemma 3.2

Fix two numbers \(N>A>0\) and suppose \(B:{\mathbb {R}}_+\rightarrow (0,\infty )\) is Lipschitz continuous and nondecreasing. Let \(T=\int _A^N\textrm{d}s/B(s)\), and suppose that \(F:{\mathbb {R}}_+\rightarrow {\mathbb {R}}_+\) solves

$$\begin{aligned} F(t) \ge A+\int _0^t B(F(s))\,\textrm{d}s\qquad \hbox { for all }\ t\in [0,2T]. \end{aligned}$$

Then \(\inf _{t\in [T,2T]}F(t)\ge N\).

Remark 3.3

Lemma 3.2 can be recast in slightly weaker terms as a statement about the differential inequality,

$$\begin{aligned}\left[ \begin{aligned}&F'\ge B\circ F\qquad \text { on } {\mathbb {R}}_+,\\&\hbox { subject to }\ F(0)\ge A. \end{aligned}\right. \end{aligned}$$

In this case, \(F(t)\ge N\) for all times t between \(T=\int _A^N\textrm{d}s/B(s)\) and time 2T.
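The hitting-time computation in Lemma 3.2 is easy to check numerically at the level of Remark 3.3. In the following sketch – our own illustration, with the hypothetical choices \(B(s)=1+s^2\), \(A=1\), and \(N=50\) – the Euler scheme for \(G'=B(G)\), \(G(0)=A\), reaches level N at time \(\approx \int _A^N \textrm{d}s/B(s)\):

```python
import numpy as np

# For B(s) = 1 + s^2 and G(0) = A, separation of variables predicts that G
# reaches the level N at time T = int_A^N ds / B(s) = arctan(N) - arctan(A).
A, N = 1.0, 50.0
T_predicted = np.arctan(N) - np.arctan(A)

dt, t, G = 1e-6, 0.0, A
while G < N:                 # forward-Euler integration of G' = 1 + G^2
    G += dt * (1.0 + G * G)
    t += dt

print(f"predicted T = {T_predicted:.6f}, numerical hitting time = {t:.6f}")
```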

Proof

Choose and fix an \(A>0\). The ordinary differential equation \(G(t)=A+\int _0^t B(G(s))\,\textrm{d}s\) has a unique, strictly increasing, continuous solution up to its blowup time. Separating variables in \(G'(t)=B(G(t))\) yields \(t=\int _A^{G(t)}\textrm{d}s/B(s)\) for every t prior to the blowup time. Therefore, for every \(N>A\), the time \(T = \sup \{ t>0:\, G(t)\le N\}=\int _A^N\textrm{d}s/B(s)\) is finite, and \(G(T)=\lim _{s\uparrow T}G(s)=N\). Because G is increasing, we also have \(G(t) \ge N\) for all \(t \in [T, 2T]\). A comparison theorem yields \(F\ge G\) on [0, 2T], and completes the proof. \(\square \)

Proof of Theorem 3.1

We first assume that the initial data is equal to a constant \(\rho \in {\mathbb {R}}\). Choose and fix \(a>0\). According to Corollary 2.3 and Theorem 2.4, we can find a non-random number \(\varepsilon =\varepsilon (a)>0\) such that

$$\begin{aligned} \lim _{a\rightarrow 0+}\varepsilon =0 \quad \text { and }\quad \limsup _{c\rightarrow \infty }\inf _{t\in (a+\varepsilon ,a+2\varepsilon )} \inf _{x\in (0,\sqrt{\varepsilon })} {\mathcal {I}}(t,c+x)=\infty , \quad \text { a.s. } \end{aligned}$$
(3.1)

Also choose and fix a number \(M>0\). Thanks to (3.1), we can find a random number \(c>0\) such that

$$\begin{aligned} \inf _{t\in (a+\varepsilon ,a+2\varepsilon )}\inf _{x\in (0,\sqrt{\varepsilon })} {\mathcal {I}}(t,c+x) > M \quad \text { a.s. } \end{aligned}$$
(3.2)

Because \(b\ge 0\) and b is nondecreasing,

$$\begin{aligned} u(a+t\,,c+x)&\ge \rho + \int _{(0,t+a)\times {\mathbb {R}}} G_{a+t-s}(y-x-c) b(u(s\,,y))\,\textrm{d}s\,\textrm{d}y + {\mathcal {I}}(a+t\,,c+x)\\&\ge \rho + \int _{(0,t)\times {\mathbb {R}}} G_{t-s}(y-x) b(u(a+s\,,c+y))\,\textrm{d}s\,\textrm{d}y + {\mathcal {I}}(a+t\,,c+x)\\&\ge \rho + \int _0^t \textrm{d}s\ b\left( \inf _{z\in (0,\sqrt{\varepsilon })}u(a+s\,,c+z)\right) \int _0^{\sqrt{\varepsilon }}\textrm{d}y\ G_{t-s}(y-x) + {\mathcal {I}}(a+t\,,c+x), \end{aligned}$$

a.s., for every \(t,c>0\) and \(x\in {\mathbb {R}}\). If in addition \(x\in (0\,,\sqrt{\varepsilon })\) and \(t\in (0\,,2\varepsilon )\), then

$$\begin{aligned} \int _0^{\sqrt{\varepsilon }} G_{t-s}(y-x)\,\textrm{d}y = \int _{-x}^{-x+\sqrt{\varepsilon }} G_{t-s}(y)\,\textrm{d}y \ge \int _{-\sqrt{\varepsilon }}^0 G_{t-s}(y)\,\textrm{d}y \ge \int _{-1/2}^0 G_1(y)\,\textrm{d}y=:\ell \in (0, 1), \end{aligned}$$

for all \(s\in (0,t)\). Therefore, (3.2) tells us that, for all \(x\in (0\,,\sqrt{\varepsilon })\) and \(t\in (0\,,2\varepsilon )\),

$$\begin{aligned} u(a+t,c+x) \ge \ell \int _0^tb\left( \inf _{z\in (0,\sqrt{\varepsilon })}u(a+s,c+z)\right) \textrm{d}s + M+\rho . \end{aligned}$$

In other words, we have shown that the function

$$\begin{aligned} f(t) = \inf _{x\in (0,\sqrt{\varepsilon })} u(a+t,c+x) \qquad [t>0] \end{aligned}$$

satisfies

$$\begin{aligned} f(t) \ge M +\rho + \ell \int _0^t b(f(s))\,\textrm{d}s \quad \hbox { uniformly for all }\ t\in (0,2\varepsilon ). \end{aligned}$$

Suppose \(N>M\) satisfies \(\int _{M+\rho }^{N+\rho } [b(y)]^{-1}\,\textrm{d}y<\varepsilon \), as is arranged in (4.2) below; then \(\int _{M+\rho }^{N+\rho } [\ell b(y)]^{-1}\,\textrm{d}y<\varepsilon /\ell \). Therefore, Lemma 3.2 assures us that \(\inf _{t\in [\varepsilon /\ell ,2\varepsilon /\ell ]}f(t) \ge N\) and hence

$$\begin{aligned} \inf _{s\in [a+(\varepsilon /\ell ),a+(2\varepsilon /\ell )]} \inf _{y\in (c,c+\sqrt{\varepsilon /\ell })} u(s,y) \ge N \quad \text { a.s. } \end{aligned}$$

Because \(\lim _{a\rightarrow 0+}\varepsilon =0\) [see (3.1)], this yields the theorem in the case that the initial data is constant.

The general case of bounded initial data follows from a standard comparison theorem, since \(u_0\ge \rho =\inf _{x\in {\mathbb {R}}}u_0(x)\). \(\square \)

4 Minimal solutions, and proof of Theorem 1.6

We begin by revisiting the well posedness of (1.1) under Assumptions 1.2 and 1.3. After that, we prove Theorem 1.6 and conclude the paper.

4.1 Minimal solutions

Let \({\mathscr {L}}_{ loc }\) denote the collection of all functions \(f:{\mathbb {R}}\rightarrow (0,\infty )\) that are nondecreasing and locally Lipschitz continuous. In particular, Assumption 1.3 is shortened to the assertion that \(b\in {\mathscr {L}}_{ loc }\). We also define \({\mathscr {L}}\) to be the collection of all elements of \({\mathscr {L}}_{ loc }\) that are [globally] Lipschitz continuous.

Throughout this subsection, we write the solution to (1.1) as \(u_b\), provided that (1.1) is well posed for a given \(b\in {\mathscr {L}}_{ loc }.\) As a consequence of the theory of Walsh [20], (1.1) is well posed, for example, when \(b\in {\mathscr {L}}\); see also Dalang [5]. Moreover, \(u_b\) is the unique solution that additionally satisfies \(\sup _{t\in (0,T)}\sup _{x\in {\mathbb {R}}}\Vert u_b(t,x)\Vert _2<\infty \) for all \(T>0\). Finally,

$$\begin{aligned} \textrm{P}\{ u_b\le u_c\}=1\qquad \hbox { for all } b,c\in {\mathscr {L}}\hbox { that satisfy } b\le c; \end{aligned}$$

see Mueller [15] and [13].

Now suppose that \(b\in {\mathscr {L}}_{ loc }\), as is the case in the Introduction. Let \(b^{(n)}=b\wedge n\) for every \(n\in {\mathbb {N}}\). The monotonicity of b implies that \(b^{(n)}\in {\mathscr {L}}\) for every \(n\in {\mathbb {N}}\), and \(b^{(n)} \le b^{(m)}\) when \(n \le m\). Since \(u_{b^{(n)}} \le u_{b^{(m)}}\) whenever \(n\le m\), it follows that the random field

$$\begin{aligned} u(t,x) = \lim _{n\rightarrow \infty } u_{b^{(n)}}(t,x) \end{aligned}$$

exists and has lower-semicontinuous sample functions. Note also that if \(c\in {\mathscr {L}}\) satisfies \(c\le b\), then \(u_c\le u\). This proves that

$$\begin{aligned} u = \sup _{c\in {\mathscr {L}}} u_c. \end{aligned}$$

Therefore, we refer to u as the minimal solution to (1.1) when b satisfies Assumption 1.3.
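At the level of ODEs, the construction above is easy to visualize. In the following sketch – our own illustration, with the hypothetical Osgood choice \(b(y)=1+y^2\) – the solutions of the truncated equations \(u'=b^{(n)}(u)\), \(u(0)=0\), increase with n toward \(\tan t\), which blows up at \(t=\pi /2\):

```python
import numpy as np

# Euler scheme for the truncated ODE u' = min(1 + u^2, n), u(0) = 0. The
# solutions increase with n; their limit is tan(t), which explodes at pi/2.
# This mirrors, in spirit only, the construction u = lim_n u_{b^(n)}.
def truncated_solution(n, t_end=1.4, dt=1e-5):
    u, t = 0.0, 0.0
    while t < t_end:
        u += dt * min(1.0 + u * u, float(n))
        t += dt
    return u

for n in (10, 100, 1_000, 10_000):
    print(f"n = {n:6d}:  u_n(1.4) = {truncated_solution(n):8.3f}")
print(f"limit: tan(1.4) = {np.tan(1.4):.3f}")
```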

Next we describe why u can justifiably be called the minimal “solution” to (1.1). Minimality is clear from context. However, “solution” deserves some words.

If b is in addition Lipschitz continuous, then u is the solution to (1.1) that the Walsh theory yields and there is nothing to discuss. Now suppose \(b\in {\mathscr {L}}_{ loc }\) and recall \(b^{(n)}\in {\mathscr {L}}\). We may observe that

$$\begin{aligned} b^{(n)}\left( u_{b^{(n)}}(t,x)\right) \le b^{(m)}\left( u_{b^{(m)}}(t,x)\right) \qquad \hbox { whenever }\ n\le m, \end{aligned}$$

off a single null set that does not depend on \((n,m)\). Since

$$\begin{aligned} b^{(n)}(x)=\frac{b(x)+n-|b(x)-n|}{2}, \end{aligned}$$

it follows that

$$\begin{aligned} \lim _{n\rightarrow \infty } {b^{(n)}}\left( u_{b^{(n)}}(t,x)\right) = b(u(t,x)) \qquad \hbox { for all } t>0 \hbox { and } x\in {\mathbb {R}}, \end{aligned}$$
(4.1)

again off a single null set. Therefore, the monotone convergence theorem yields

$$\begin{aligned} \lim _{n \rightarrow \infty } \int _{(0,t)\times {\mathbb {R}}} G_{t-s}(y-x) b^{(n)}(u_{b^{(n)}}(s,y))\,\textrm{d}s\,\textrm{d}y= \int _{(0,t)\times {\mathbb {R}}} G_{t-s}(y-x) b(u(s,y))\,\textrm{d}s\,\textrm{d}y, \end{aligned}$$

where \(b(\infty )=\sup b\).

Next, let us consider the \([0,\infty ]\)-valued random variable

$$\begin{aligned} \tau = \inf \left\{ t>0:\, u(t,y)=\infty \quad \hbox { for some }\ y\in {\mathbb {R}}\right\} , \end{aligned}$$

where \(\inf \varnothing =\infty \). Because u is lower semicontinuous, one can show that \(\tau \) is a stopping time with respect to the filtration of the noise, which we assume, without loss of generality, satisfies the usual conditions of martingale theory. Of course, \(\tau \) is the first blowup time of u. Since \(\sigma \) is a bounded and continuous function,

$$\begin{aligned}&\lim _{n\rightarrow \infty }\left\| \int _{(0,t\wedge \tau )\times {\mathbb {R}}} G_{t-s}(y-x) [\sigma (u_{b^{(n)}}(s\,,y))-\sigma (u(s\,,y))]\,W(\textrm{d}s\,\textrm{d}y)\right\| _2^2\\&=\textrm{E}\left( \int _{(0,t\wedge \tau )\times {\mathbb {R}}} \left[ G_{(t\wedge \tau )-s}(y-x)\right] ^2 \lim _{n\rightarrow \infty }[\sigma (u_{b^{(n)}}(s\,,y))-\sigma (u(s\,,y))]^2 \textrm{d}s\, \textrm{d}y\right) =0, \end{aligned}$$

where \(\int _\varnothing (\,\cdots )=0\). Taken together, these comments prove that if \(\tau >0\) – that is, if the solution to (1.1) does not instantly blow up – then u satisfies (1.2) for all \(x\in {\mathbb {R}}\) and all times \(t<\tau \). In this sense, our extension of the solution theory of Walsh [20] indeed produces solutions for \(b\in {\mathscr {L}}_{ loc }\) whenever there is a chance of non-instantaneous blowup, and the smallest such solution is u.

Theorem 1.6 says that if \(b\in {\mathscr {L}}_{ loc }\) satisfies the Osgood condition (1.3), then the minimal solution satisfies \(u(t)\equiv \infty \) for all \(t>0\).

Now suppose the Osgood condition holds, and consider any solution theory that extends the Walsh theory and has a comparison theorem. The preceding comments prove that if that solution theory produces a solution v, then that solution satisfies \(u\le v\) and hence \(v(t)\equiv \infty \) for all \(t>0\) by Theorem 1.6. This is a precise sense in which Theorem 1.6 says that “the solution” to (1.1) blows up instantaneously and everywhere.

We can now conclude the paper with the following.

4.2 Proof of Theorem 1.6

We now prove the everywhere and instantaneous blowup of u under (1.3), where the symbol u denotes the minimal solution to (1.1). Recall the process \(u^{(n)} = u_{b^{(n)}}\) from the previous subsection. Choose and fix an arbitrary number \(a>0\), as small as we like, and let \(\varepsilon =\varepsilon (a)>0\) be chosen according to Theorem 3.1. Recall, in particular, the following relationship between a and \(\varepsilon =\varepsilon (a)\):

$$\begin{aligned} \lim _{a\rightarrow 0^+}\varepsilon =0. \end{aligned}$$

In light of (1.3), we may choose and fix \(M>\Vert u_0\Vert _{L^\infty ({\mathbb {R}})}\) such that

$$\begin{aligned} \int _{M+\rho }^\infty \frac{\textrm{d}y}{b(y)}<\varepsilon , \end{aligned}$$
(4.2)

where we recall that \(\rho =\inf _{x\in {\mathbb {R}}}u_0(x).\)

The construction of u and Theorem 3.1 together yield a random constant \(c=c(a, M)>0\) – independent of b – such that the following holds for every \(n\in {\mathbb {N}}\):

$$\begin{aligned} \inf _{t\in [a+\varepsilon ,a+2\varepsilon ]}\inf _{x\in (c,c+\sqrt{\varepsilon })} u(t\,,x)&\ge \inf _{t\in (a+\varepsilon ,a+2\varepsilon )}\inf _{x\in (c,c+\sqrt{\varepsilon })} u^{(n)}(t\,,x)\\&\ge \sup \left\{ N>M:\, \int _{M+\rho }^{N+\rho } \frac{\textrm{d}y}{b^{(n)}(y)}<\varepsilon \right\} \qquad \text { a.s. } \end{aligned}$$

Let \(n\uparrow \infty \) to see from the monotone convergence theorem that

$$\begin{aligned} \inf _{t\in [a+\varepsilon ,a+2\varepsilon ]}\inf _{x\in (c,c+\sqrt{\varepsilon })} u(t,x) \ge \sup \left\{ N>M:\, \int _{M+\rho }^{N+\rho } \frac{\textrm{d}y}{b(y)}<\varepsilon \right\} =\infty \qquad \text { a.s. } \end{aligned}$$

This proves that the blowup time is a.s. \(\le a+2\varepsilon \) and that the solution blows up everywhere in a random interval of the type \((c,c+\sqrt{\varepsilon })\). Consequently, for every non-random \(t \ge a+2\varepsilon \), there a.s. exist a random closed interval \(I(t) \subset (0, \infty )\) and a non-random closed interval \({\tilde{I}}(t)=[a+\varepsilon , a+2\varepsilon ]\subset (0,t)\) such that

$$\begin{aligned} \inf _{(s,x)\in {\tilde{I}}(t) \times I(t)} u(s,x)=\infty \qquad \text { a.s. } \end{aligned}$$
(4.3)

Since a can be as small as we would like, and because \(\lim _{a\rightarrow 0}\varepsilon =0 \), we have shown instantaneous blowup. We now show that the blowup happens everywhere. For every \(n\in {\mathbb {N}}\), the random field \(u^{(n)}\) solves

$$\begin{aligned} \begin{aligned} u^{(n)}(t,x) = ( G_t*u_0)(x)&+ \int _{(0,t)\times {\mathbb {R}}} G_{t-s}(y-x) b^{(n)}(u^{(n)}(s,y))\,\textrm{d}s\,\textrm{d}y\\&+ \int _{(0,t)\times {\mathbb {R}}} G_{t-s}(y-x) \sigma (u^{(n)}(s,y)) \, W(\textrm{d}s\,\textrm{d}y). \end{aligned} \end{aligned}$$

By the monotone convergence theorem, for \(t \ge a+2\varepsilon \) and \(x\in {\mathbb {R}}\),

$$\begin{aligned} \int _{(0,t)\times {\mathbb {R}}} G_{t-s}(y-x) b^{(n)}(u^{(n)}(s,y))\, \textrm{d}s\,\textrm{d}y \ge \int _{{\tilde{I}}(t)\times I(t)} G_{t-s}(y-x) b^{(n)}(u^{(n)}(s,y))\, \textrm{d}s\,\textrm{d}y \uparrow \infty , \end{aligned}$$

as \(n\rightarrow \infty \); see (4.1) and (4.3). At the same time, standard estimates such as those in §2 show that

$$\begin{aligned} \sup _{n\in {\mathbb {N}}} \textrm{E}\left( \sup _{(t,x)\in K}\left| \int _{(0,t)\times {\mathbb {R}}} G_{t-s}(y-x)\sigma (u^{(n)}(s,y))\, W(\textrm{d}s\,\textrm{d}y)\right| ^2\right) <\infty , \end{aligned}$$

for every compact set \(K\subset {\mathbb {R}}_+\times {\mathbb {R}}\). Therefore, Fatou’s lemma ensures that a.s.,

$$\begin{aligned} \liminf _{n\rightarrow \infty }\sup _{(t,x)\in K} \int _{(0,t)\times {\mathbb {R}}} G_{t-s}(y-x) \sigma (u^{(n)}(s,y))\, W(\textrm{d}s\,\textrm{d}y)<\infty . \end{aligned}$$

It follows that \(\inf _K u=\infty \) a.s. for all compact sets \(K\subset (0,\infty )\times {\mathbb {R}}\). This concludes the proof. \(\square \)