1 Introduction

Excursion theory plays a fundamental role in the study of \({\mathbb {R}}_+\)–indexed Markov processes, dating back to Itô’s work [17]. The purpose of this theory is to describe the evolution of a Markov process between visits to a fixed point in the state space. To be more precise, consider a Polish space E, an E-valued continuous strong Markov process \(\xi \), and fix a point \(x\in E\), regular and instantaneous for \(\xi \). The paths of \(\xi \) can be decomposed into excursions away from x, where an excursion is a piece of path of random length, starting and ending at x, such that in between \(\xi \) stays away from x. Formally, the excursions are the restrictions of \(\xi \) to the connected components of \({\mathbb {R}}_+ {\setminus } \{ t \in {\mathbb {R}}_+: \xi _t = x \}\). In order to keep track of the ordering induced by time, the family of excursions is indexed by means of a remarkable additive functional of \(\xi \), called its local time at x and denoted throughout this work by \({\mathcal {L}}\). It is well known that \({\mathcal {L}}\) is a continuous process whose Lebesgue-Stieltjes measure is supported on the random set:

$$\begin{aligned} \big \{ t \in {\mathbb {R}}_+: \xi _t = x \big \}, \end{aligned}$$
(1.1)

and that the trajectories of \(\xi \) can be recovered from the family of indexed excursions by gluing them together, taking into account the time spent by \(\xi \) at x. For technical reasons, we will also assume that the point x is recurrent for \(\xi \). We stress that excursion theory holds under broader assumptions on the Markov process \(\xi \), and we refer to e.g. [4, Chapter VI] and [6] for a complete account.

The purpose of this work is to set the first milestone towards introducing an excursion theory for Markov processes indexed by random trees. The random trees that we consider are the so-called Lévy trees. This family is canonical, in the sense that Lévy trees are scaling limits of Galton-Watson trees [11, Chapter 2] and are characterized by a branching property in the same vein as their discrete counterparts [24, 37]. At this point, let us mention that Markov processes indexed by Lévy trees are fundamental objects in probability theory – for instance, they are intimately linked to the theory of superprocesses [11, 22]. More recently, Brownian motion indexed by the Brownian tree has been used as the essential building block in the construction of the universal model of random geometry called the Brownian map [23, 32], as well as in the construction of other related random surfaces [3, 30]. We also stress that Brownian motion indexed by a stable tree is a universal object as well, since it arises as the scaling limit of discrete models [31]. For the sake of completeness, we shall start with a brief and informal account of our objects of interest.

A Lévy tree can be encoded by a continuous \({\mathbb {R}}_+\)-valued process \(H = (H_t)\) called its height process and, for this reason, we denote the associated tree by \({\mathcal {T}}_H\). Roughly speaking, the tree \({\mathcal {T}}_H\) has a root and H encodes the distances to it when the tree is explored in “clockwise order”. Under appropriate assumptions, we consider the pair consisting of the Markov process \(\xi \) and its local time \({\mathcal {L}}\), indexed by a Lévy tree \({\mathcal {T}}_H\). With a slight abuse of notation, this process will be denoted in the rest of this work by:

$$\begin{aligned} \big ( (\xi _{\upsilon }, {\mathcal {L}}_\upsilon ): \upsilon \in {\mathcal {T}}_H \big ). \end{aligned}$$
(1.2)

In short, this process can be thought of as a random motion defined on top of \({\mathcal {T}}_H\), following the law of \(((\xi _t, {\mathcal {L}}_t): t \in {\mathbb {R}}_+)\) but splitting at every branching point of \({\mathcal {T}}_H\) into independent copies. The role played by \(\{ t \in {\mathbb {R}}_+: \xi _t =x \}\) is taken over in this setting by the following random subset of \({\mathcal {T}}_H\):

$$\begin{aligned} {\mathscr {Z}}:= \{ \upsilon \in {\mathcal {T}}_H:\, \xi _\upsilon = x \}. \end{aligned}$$
(1.3)

The definition of the excursions of \(( \xi _\upsilon )_{\upsilon \in {\mathcal {T}}_H}\) away from x should then be clear at an intuitive level, since it suffices to consider the restrictions of \(( \xi _\upsilon )_{\upsilon \in {\mathcal {T}}_H}\) to the connected components of \({\mathcal {T}}_H \setminus {\mathscr {Z}}\). Notice however that we lack a proper indexing for this family of excursions that would allow one to recover the whole path, as in classical excursion theory. Moreover, one can expect the gluing of these excursions to be more delicate in our setting: in the time-indexed case the extremities of an excursion consist of only two points, while in the present case the extremities are subsets of \({\mathcal {T}}_H\) of a significantly more intricate nature. In the same vein, since the set \({\mathscr {Z}}\) is a subset of \({\mathcal {T}}_H\), it inherits its tree structure and therefore possesses richer geometric properties than the subset (1.1) of the real line. More precisely, we consider the equivalence relation \(\sim _{\mathcal {L}}\) on \({\mathcal {T}}_H\) which identifies the components of \({\mathcal {T}}_H\) where \(({\mathcal {L}}_\upsilon )_{\upsilon \in {\mathcal {T}}_H}\) stays constant. The resulting quotient space \({\mathcal {T}}^{{\mathcal {L}}}_H:= {\mathcal {T}}_H / \sim _{\mathcal {L}}\) is also a tree, encoding the set \({\mathscr {Z}}\) and endowing it with an additional tree structure. In the terminology of [24], the tree \({\mathcal {T}}^{\mathcal {L}}_H\) is the so-called subordinate tree of \({\mathcal {T}}_H\) by \({\mathcal {L}}\). Since each component of \({\mathcal {T}}_H\) where \({\mathcal {L}}\) stays constant is naturally identified with an excursion of \(\xi \) away from x, a proper understanding of \({\mathcal {T}}_H^{{\mathcal {L}}}\) is crucial to develop an excursion theory for \((\xi _\upsilon )_{\upsilon \in {\mathcal {T}}_H}\). This work is devoted to both:

  1.

    Introducing a notion of local time at x suitable to index the excursions of \((\xi _\upsilon )_{\upsilon \in {\mathcal {T}}_H}\) away from x;

  2.

    Studying the structure of the random set \({\mathscr {Z}}\).

As we shall explain, both questions are intimately related and, as mentioned above, they lay the foundations for the development of an excursion theory for \((\xi _\upsilon )_{\upsilon \in {\mathcal {T}}_H}\). In the case of Brownian motion indexed by the Brownian tree, an excursion theory has already been developed in [1] and has turned out to have multiple applications in Brownian geometry, see e.g. [25, 29]. However, we stress that in [1] the excursions are not indexed and, in particular, a reconstruction of the Brownian motion indexed by the Brownian tree in terms of its excursions is out of reach with the methods of [1]. The concept of local time at x that we introduce allows for an appropriate indexing of the family of excursions, thereby enabling the development of an indexed excursion theory. This theory will be studied in the companion paper [35]. Let us now present the general framework of this work.

In order to formally define the tree indexed process (1.2), we rely on the theory of Lévy snakes and we shall now give a brief account. The theory of Lévy snakes has mainly been developed in the monograph of Duquesne and Le Gall [11], and a detailed presentation of the results that we need is given in Sect. 2. The process (1.2) is built from two layers of randomness. First, as we already mentioned, the family of random trees that we work with are called Lévy trees. If \(\psi \) is the Laplace exponent of a spectrally positive Lévy process X, under appropriate assumptions on \(\psi \), one can define the height process H as a functional of X. In order to explain how \({\mathcal {T}}_H\) is encoded by H, we work under the excursion measure of X above its running infimum and we write \(\sigma \) for the duration of an excursion. The relation:

$$\begin{aligned} d_{H}(s,t):=H_s+H_t - 2\cdot \inf _{s\wedge t \leqslant u \leqslant s\vee t} H_u, \quad \text { for all } (s,t)\in [0,\sigma ]^{2}, \end{aligned}$$

defines a pseudo-distance on \([0,\sigma ]\), and the associated equivalence relation \(\sim _H\) is defined by setting \(s\sim _{H} t\) if and only if \(d_{H}(s,t)=0\). The pointed metric space \({\mathcal {T}}_{H}:=([0,\sigma ]/\sim _{H},d_{H},0)\) is a Lévy tree, where for simplicity we keep the notation 0 for the equivalence class of 0. We also write \(p_H: [0,\sigma ] \rightarrow {\mathcal {T}}_H\) for the canonical projection on \({\mathcal {T}}_H\) and we refer to Sect. 2.2 for more details about this encoding. The point 0 is called the root of \({\mathcal {T}}_H\) and, by construction, the height process encodes the distances to it. We stress that the distribution of \({\mathcal {T}}_H\) is characterized by the exponent \(\psi \), and we say that \({\mathcal {T}}_H\) is a \(\psi \)-Lévy tree. One of the main technical difficulties of this work is that, except when X is a Brownian motion with drift, the process H is not Markovian, and we will need to introduce a measure-valued process – called the exploration process – which, heuristically, carries the information needed to make H Markovian. This process will be denoted throughout this work by \(\rho = (\rho _t: \, t \geqslant 0)\) and its nature has a crucial impact on the geometry of \({\mathcal {T}}_H\). For instance, \(\rho \) allows one to characterize the multiplicity and genealogy of points of \({\mathcal {T}}_H\). More precisely, recall that the multiplicity of a point \(\upsilon \) in \({\mathcal {T}}_H\) is defined as the number of connected components of \({\mathcal {T}}_H \setminus \{ \upsilon \}\). For \(i \in {\mathbb {N}}^{*} \cup \{ \infty \}\), we write \(\text {Multi}_i({\mathcal {T}}_H)\) for the set of points of \({\mathcal {T}}_H\) of multiplicity i, and the points of multiplicity strictly larger than 2 are called branching points. For instance, if X does not have jumps, the measures \((\rho _t:~t\geqslant 0)\) are atomless and all branching points have multiplicity 3. In contrast, as soon as the Lévy measure of X is non-null, the measures \((\rho _t:~t\geqslant 0)\) have atoms and the set \(\text {Multi}_\infty ({\mathcal {T}}_H)\) is non-empty. We also refer to [26] for the construction of the exploration process. The second layer of randomness consists in defining, given \({\mathcal {T}}_H\), a spatial motion indexed by \({\mathcal {T}}_H\) that, roughly speaking, behaves like the Markov process \((\xi _t)_{t \in {\mathbb {R}}_+}\) when restricted to an injective path connecting the root of \({\mathcal {T}}_H\) to a leaf. This informal description can be formalized by making use of the theory of random snakes [11, Section 5]. More precisely, one can define a process \((( W_t, \Lambda _t ): t \in [0,\sigma ])\) taking values in the collection of finite \(E\times {\mathbb {R}}_+\)–valued continuous paths, each \((W_t, \Lambda _t)\) having lifetime \(H_t\) and such that, for each \(t\in {\mathbb {R}}_+\) and conditionally on \(H_t\), the path \((W_t, \Lambda _t)\) has the same distribution as \(((\xi _r, {\mathcal {L}}_r): r \in [0, H_t] )\). The second main property of \((W, \Lambda )\) is that it satisfies the snake property, viz.

$$\begin{aligned} \big (W_t(H_t), \Lambda _t(H_t)\big )=\big (W_s(H_s), \Lambda _s(H_s)\big ),\quad \text {for every }t\sim _H s. \end{aligned}$$

For simplicity, from now on, we will write \(({\widehat{W}}_t, {\widehat{\Lambda }}_t):=(W_t(H_t), \Lambda _t(H_t))\) for the tip of \((W_t, \Lambda _t)\). By the snake property, it follows that the process \((({\widehat{W}}_t, {\widehat{\Lambda }}_t): t \in [0,\sigma ])\) is well defined on the quotient space \({\mathcal {T}}_H\), and hence it defines a random function indexed by \({\mathcal {T}}_H\) which will be denoted by (1.2). The triplet \((\rho , W, \Lambda )\) is the so-called \(\psi \)-Lévy snake with spatial motion \((\xi , {\mathcal {L}})\), a Markov process that will be extensively studied throughout this work.

Let us now present the statements of our main results. These are stated under the excursion measure of \((\rho , W, \Lambda )\), but let us mention that we will obtain similar results under the underlying probability measure. By construction, the study of \({\mathscr {Z}}\) is closely related to the understanding of the random set:

$$\begin{aligned} \{ t \in [0,\sigma ]: {\widehat{W}}_t = x \}, \end{aligned}$$
(1.4)

since \({\mathscr {Z}}\) is precisely its image under the canonical projection \(p_H\) on \({\mathcal {T}}_H\). However, note that these two sets are of radically different natures. As in classical excursion theory for Markov processes, we shall start by constructing an additive functional \(A = (A_t)_{t \in [0,\sigma ]}\) of the Lévy snake \((\rho , W, \Lambda )\) with suitable properties and Lebesgue-Stieltjes measure \({\textrm{d}}A\) supported on (1.4). The first main result of this work is obtained in Sect. 4 and is divided into two parts:

  1. (i)

    The construction of the additive functional A [Proposition 4.10];

  2. (ii)

    The characterization of the support of \({\textrm{d}}A\) [Theorem 4.20].

See also Theorem 4.3 for an equivalent formulation of (ii) in the terminology of the tree indexed process \((\xi _\upsilon )_{\upsilon \in {\mathcal {T}}_H}\). Recalling our initial discussion, the process \((A_t)_{t \in {\mathbb {R}}_+}\) is the natural candidate to index the excursions away from x of \((\xi _\upsilon )_{ \upsilon \in {\mathcal {T}}_H}\). We are not yet in a position in this introduction to formally state the content of (i) and (ii), but we can give a general description. Our construction of \((A_t)_{t \in {\mathbb {R}}_+}\) relies on the so-called exit local times of the Lévy snake \((\rho , W, \Lambda )\). More precisely, if we consider the family of domains \(\{E \times [0, r): \, r \in (0,\infty ) \}\), for each fixed \(r>0\), there exists an additive functional of \((\rho , W, \Lambda )\) that heuristically measures, at every \(t \geqslant 0\), the number of connected components of \({\mathcal {T}}_H {\setminus } \{ \upsilon \in {\mathcal {T}}_H: {\mathcal {L}}_\upsilon \leqslant r \}\) visited up to time t. This description is informal and we refer to Sect. 3 for details. We establish in Sect. 4.1 that the corresponding family of exit local times possesses a jointly measurable version \(({\mathscr {L}}_t^{r}: \, t \geqslant 0, r >0 )\), and in Sect. 4.2 we define our continuous additive functional A by setting:

$$\begin{aligned} A_t:= \int _0^\infty {\textrm{d}}r \, {\mathscr {L}}_t^{r}, \quad t \geqslant 0. \end{aligned}$$
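Schematically, once a jointly measurable grid version of the exit local times is available, A is obtained by integrating over the level r. The following sketch is purely illustrative: the array standing in for \(({\mathscr {L}}^r_t)\) is a placeholder (nondecreasing in t, as an exit local time is, and decaying in r), since the actual functionals are only constructed in Sect. 4.1.

```python
import numpy as np

rng = np.random.default_rng(4)

# Placeholder grid playing the role of the exit local times L^r_t: each row
# (fixed level r) is nondecreasing in t; rows decay in r so that the
# integral over r converges.  The true functionals are built in Sect. 4.1.
r_grid = np.linspace(0.0, 5.0, 501)
t_grid = np.linspace(0.0, 1.0, 201)
scrL = 0.01 * rng.random((r_grid.size, t_grid.size)).cumsum(axis=1)
scrL *= np.exp(-r_grid)[:, None]

# A_t = int_0^infty dr L^r_t, by a trapezoidal rule over the levels r
dr = np.diff(r_grid)[:, None]
A = (0.5 * (scrL[1:] + scrL[:-1]) * dr).sum(axis=0)
assert np.all(np.diff(A) >= 0.0)  # A is nondecreasing, as an additive functional
```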

After establishing that there is no branching point with label x, we give in Sect. 4.3 a precise characterization of the support of the measure \({\textrm{d}}A\). Formally, we prove that:

$$\begin{aligned} \text {supp} ~{\textrm{d}}A= \overline{ \big \{ t \in [0,\sigma ]: \xi _{p_H(t)} = x, \, p_H(t) \in \text {Multi}_2 ({\mathcal {T}}_H) \cup \{ 0 \} \big \}}. \end{aligned}$$

We also show in Theorem 4.20 that, equivalently, the support of \({\textrm{d}}A\) is the complement of the constancy intervals of \(({\widehat{\Lambda }}_t: \, t \geqslant 0)\). In particular, if we denote the right inverse of A by \((A^{-1}_t: t \geqslant 0)\), the relation:

$$\begin{aligned} H^A_t:= {\widehat{\Lambda }}_{A^{-1}_t}, \quad \quad t \geqslant 0, \end{aligned}$$

defines a continuous non-negative process that plays a crucial role in the second part of our work.
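In practice, on a grid, the right inverse (with the usual convention \(A^{-1}_t = \inf \{ s \geqslant 0: A_s > t \}\), which we assume here) and the time change defining \(H^A\) can be computed as in the following toy sketch, where both A and \({\widehat{\Lambda }}\) are placeholder arrays.

```python
import numpy as np

# Right inverse A^{-1}_t = inf{ s >= 0 : A_s > t } on a grid, and the
# time-changed process H^A_t = hatLam at A^{-1}_t (placeholder data).
s_grid = np.linspace(0.0, 1.0, 1001)
A = np.sqrt(s_grid)                              # toy nondecreasing A
hatLam = np.abs(np.sin(3.0 * np.pi * s_grid))    # toy hat-Lambda

def A_inv_idx(t):
    # index of the first grid time s with A_s > t
    return min(np.searchsorted(A, t, side="right"), s_grid.size - 1)

HA = np.array([hatLam[A_inv_idx(t)] for t in np.linspace(0.0, 0.99, 10)])
print(np.round(HA, 3))
```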

In Sect. 5, we turn our attention to the study of \({\mathscr {Z}}\) or, equivalently, to the structure of the subordinate tree \({\mathcal {T}}_H^{{\mathcal {L}}}\). Even though this is an object of a very different nature, our analysis relies deeply on the results and the machinery developed in Sect. 4. The second main result of this work consists in showing that the process \(H^A\) satisfies the following properties:

  1. (i’)

    It encodes the subordinate tree \({\mathcal {T}}_H^{{\mathcal {L}}}\) [Theorem  5.1 (i)];

  2. (ii’)

    It is the height function of a Lévy tree, with an exponent \({\widetilde{\psi }}\) that we identify [Theorem 5.1 (ii)].

In particular, this shows that \({\mathcal {T}}_H^{\mathcal {L}}\) is a Lévy tree with exponent \({\widetilde{\psi }}\). We stress that a continuous function can fulfill (i’) without satisfying (ii’), and it is remarkable that \(H^A\) follows the exploration order of a Lévy tree. We also mention that the previous two points were established – although with a different construction of the height process \(H^A\) – in [24, Theorem 1] for the subordination of the Brownian tree by the running maximum of the Brownian motion indexed by the Brownian tree. These approaches are complementary, since the techniques employed in [24] rely on a discrete approximation of the height function, while we shall argue directly in the continuum. We also mention that one of the strengths of our method is that it gives an explicit definition of \(H^A\) which is suitable for computations. This point is crucial in order to study the excursions of \((\xi _{\upsilon })_{\upsilon \in {\mathcal {T}}_H}\) away from x. Our result shows that the height function of the subordinate tree \({\mathcal {T}}^{\mathcal {L}}_H\) can be constructed in terms of functionals of \((\rho , W,\Lambda )\), and that \(A^{-1}\) defines an exploration of \({\mathcal {T}}_H^{{\mathcal {L}}}\) compatible with the order induced by H. Property (i’) will be a consequence of our previous results (i) and (ii), and Sect. 5 is mainly devoted to the proof of (ii’). The main difficulty in establishing (ii’) comes from the fact that, as we already mentioned, the height process of a Lévy tree is not always Markovian. To circumvent this difficulty, the proof of (ii’) relies on the computation of the so-called marginals of the tree associated with \(H^A\). In particular, it makes use of all the machinery developed in the previous sections as well as standard properties of Poisson random measures.

Let us now close the presentation of our work with a result of independent interest which is used extensively throughout this paper. In Sect. 3, we state and prove the so-called special Markov property of the Lévy snake. This section is independent of the setting of Sects. 4 and 5, and we work with an arbitrary \((\psi , \xi )\)-Lévy snake under general assumptions on the pair \((\psi , \xi )\). Roughly speaking, the special Markov property is a spatial version of the classical Markov property for time-indexed Markov processes. The precise statement is the content of Theorem 3.7, see also Corollary 3.9. This result was established in [24, Theorem 20] for continuous Markov processes indexed by the Brownian tree, and a particular case was proved for the first time in [21]. Our result is a generalisation of [24, Theorem 20] holding in the broader setting of continuous Markov processes indexed by \(\psi \)-Lévy trees. The special Markov property of the Brownian motion indexed by the Brownian tree has already played a crucial role in multiple contexts; for a few applications, see for instance [9, 21, 29, 30], as well as [16, 22, 27] in the setting of superprocesses. We expect this result to be useful outside the scope of this work. We also mention that the special Markov property of the Lévy snake is closely related to the one established by Dynkin in the context of superprocesses, see [14, Theorem 1.6]. However, we stress that the formulation in terms of the Lévy snake, although less general, gives additional and crucial information for our purposes. In particular, it takes into account the genealogy induced by the Lévy tree, and hence it carries geometrical information.

We conclude this introduction with a non-exhaustive summary of related works. First, as we already mentioned, we extend the work of Le Gall on subordination in the case of the Brownian motion indexed by the Brownian tree [24] to the general framework of Markov processes indexed by Lévy trees. Moreover, our results on subordination of trees with respect to the local time are closely related, in the terminology of Lévy snakes, to Theorem 4 in [5], stated in the setting of superprocesses – the main difference being that in our work we encode the associated genealogy. For instance, we recover [5, Theorem 4] in a more precise form in our case of interest. We also expect our results to be useful beyond the scope of this work, for instance in Brownian geometry. Finally, in the case of Brownian motion indexed by the Brownian tree and when \(x=0\), our functional A is closely related to the so-called integrated super-Brownian excursion [2] – a random measure arising in multiple limit theorems for discrete probability models, but also in the theory of interacting particle systems [7, 8] and in a variety of models of statistical physics [10, 15]. More precisely, the total mass \(A_\infty \) is the density of the integrated super-Brownian excursion at 0, see [28, Proposition 3]. In particular, we hope that our construction of the functional A will be useful to obtain new explicit computations regarding the integrated super-Brownian excursion and to generalize these computations to related models. For a connection with local times of super-Brownian motion, we refer to Remark 4.14 at the end of Sect. 4.2.

The work is organized as follows: Sect. 2 gives an overview of the theory of Lévy trees and snakes. In Sect. 3, we state and prove the special Markov property for Lévy snakes and we explore some of its consequences. This section is independent of the rest of the work but is key for the development of Sects. 4 and 5. The preliminary results needed for its proof are covered in Sect. 3.1, and mainly concern approximation results for exit local times. Section 4 is devoted, first, to constructing the additive functional A in Sect. 4.2 [Proposition 4.10] and, afterward, to characterizing the support of the measure \({\textrm{d}}A\) [Theorem 4.20] in Sect. 4.3. We shall give two equivalent descriptions of the support of \({\textrm{d}}A\): one in terms of the pair (H, W), and a second one depending only on \(\Lambda \). The latter will be needed in Sect. 5 and we expect the former to be useful to develop an excursion theory – we plan to pursue this goal in future works. The preliminary results needed for our constructions are covered in Sect. 4.1. Finally, in Sect. 5, after recalling preliminary results on subordination of trees by continuous functions, we explore the tree structure of the set \(\{ \upsilon \in {\mathcal {T}}_H:\, \xi _\upsilon = x \}\) by considering the subordinate tree of \({\mathcal {T}}_H\) with respect to the local time \({\mathcal {L}}\). The main result of the section is Theorem 5.1, which establishes (i’) and (ii’). We provide an index of the main notation used in this work at the end of the manuscript.

2 Preliminaries

2.1 The height process and the exploration process

Let us start by introducing the class of Lévy processes that we will consider throughout this work. Let X be a Lévy process indexed by \({\mathbb {R}}_+\), and denote its law started from 0 by P. It will be convenient to assume that X is the canonical process on the Skorokhod space \(D({\mathbb {R}}_+,{\mathbb {R}})\) of càdlàg (right-continuous with left limits) real-valued paths equipped with the probability measure P. We denote the canonical filtration by \(({\mathcal {G}}_t:t\geqslant 0)\), completed as usual by the class of P-negligible sets of \({\mathcal {G}}_\infty =\bigvee _{t\geqslant 0} {\mathcal {G}}_t\). We henceforth assume that X satisfies P-a.s. the following properties:

  • (A1) X does not have negative jumps;

  • (A2) The paths of X are of infinite variation;

  • (A3) X does not drift to \(+\infty \).

Since X has no negative jumps, the mapping \(\lambda \mapsto {\mathbb {E}}[\exp (-\lambda X_{1})]\) is well defined on \({\mathbb {R}}_{+}\) and we denote the Laplace exponent of X by \(\psi \), viz. the function defined by:

$$\begin{aligned} {\mathbb {E}}[\exp (-\lambda X_{1})]=\exp (\psi (\lambda )),\quad \text { for all } \lambda \geqslant 0. \end{aligned}$$

The function \(\psi \) can be written in the Lévy-Khintchine form:

$$\begin{aligned} \psi (\lambda ) = \alpha _0 \lambda + {\beta } \lambda ^2 + \int _{(0,\infty )} \pi ({\textrm{d}} x)~(\exp (-\lambda x) - 1 + \lambda x \mathbb {1}_{\{ x \leqslant 1 \}}), \end{aligned}$$

where \(\alpha _0 \in {\mathbb {R}},\, \beta \in {\mathbb {R}}_{+}\) and \(\pi \) is a sigma-finite measure on \({\mathbb {R}}_{+}^*\) satisfying \(\int _{(0,\infty )}\pi ({\textrm{d}}x)(1\wedge x^{2})<\infty \). Moreover, it is well known that condition (A2) holds if and only if we have:

$$\begin{aligned} \beta \ne 0 \quad \quad \text { or } \quad \quad \int _{(0,1)}\pi ({\textrm{d}}x) ~ x = \infty . \end{aligned}$$

The Laplace exponent \(\psi \) is infinitely differentiable and strictly convex on \((0,\infty )\) (see e.g. Chapter 8 in [19]). Since X does not drift towards \(+\infty \), one has \(-\psi '(0+) = {\mathbb {E}}[X_1] \leqslant 0\) which, in turn, implies that X oscillates or drifts towards \(-\infty \), and that \(X_t\) has a finite first moment for every t. In terms of the Lévy measure, this ensures that the additional integrability condition \(\int _{(1,\infty )} \pi ({\textrm{d}}x) ~ x < \infty \) holds. Consequently, \(\psi \) can and will be supposed to be of the following form:

$$\begin{aligned} \psi (\lambda )=\alpha \lambda +\beta \lambda ^{2}+\int _{(0,\infty )}\pi ({\textrm{d}}x)(\exp (-\lambda x)-1+\lambda x), \end{aligned}$$
(2.1)

where now \(\pi \) satisfies \(\int _{(0,\infty )}\pi ({\textrm{d}}x)(x\wedge x^{2})<\infty \) and \(\alpha , \beta \in {\mathbb {R}}_+\) since \(\alpha = \psi '(0+)\). From now on, we denote the running infimum of X by I and remark that, under our current hypotheses, 0 is regular and instantaneous for the Markov process \(X-I = (X_t - \inf _{s \in [0,t]}X_s: t \geqslant 0)\). Moreover, it is standard that, P-a.s., the Lebesgue measure of \(\{ t \in {\mathbb {R}}_+: X_t = I_t \}\) is null – see e.g. Theorems 6.5 and 6.7 in [19] for a proof. The process \(-I\) is a local time of \(X-I\) and we denote the associated excursion measure away from 0 by N. To simplify notation, we write \(\sigma _e\) for the lifetime of an excursion e. Finally, we impose the following additional assumption on \(\psi \):

$$\begin{aligned} \int _{1}^{\infty }\frac{{\textrm{d}}\lambda }{\psi (\lambda )}<\infty . \end{aligned}$$
(A4)

From now on, we will be working under (A1)–(A4).
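As a concrete example, the stable branching mechanisms \(\psi (\lambda ) = \lambda ^{\gamma }\) with \(\gamma \in (1,2]\) (the case \(\gamma = 2\) being the Brownian one) satisfy (A1)–(A4); indeed, in this case:

$$\begin{aligned} \int _{1}^{\infty }\frac{{\textrm{d}}\lambda }{\psi (\lambda )}=\int _{1}^{\infty }\frac{{\textrm{d}}\lambda }{\lambda ^{\gamma }}=\frac{1}{\gamma -1}<\infty . \end{aligned}$$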

Let us now briefly discuss the main implications of our assumptions. Condition (A4) plays a twofold role: on the one hand, it ensures that \(\lim _{\lambda \rightarrow \infty } \lambda ^{-1} \psi (\lambda ) = \infty \), which implies that X has paths of infinite variation [4, VII-5] (the redundancy in our hypotheses is on purpose, for ease of reading). On the other hand, under our hypotheses (A1)–(A3), it is well known that there exists a continuous-state branching process with branching mechanism \(\psi \) (abbreviated \(\psi \)-CSBP) and that (A4) is equivalent to its a.s. extinction – we refer to Section II.1 of [22] for a detailed account. The \(\psi \)-Lévy tree can be interpreted as the genealogical tree of this branching process and is defined in terms of a fundamental functional of X, called the height process, that we now introduce.

The height and exploration processes. Let us turn our attention to the so-called height process—the main ingredient needed to define Lévy trees. Our presentation follows [11, Chapter 1] and we refer to [22, Section VIII-1] for heuristics stemming from the discrete setting. Let us start by introducing some standard notation. For every \(0<s\leqslant t\), we set

$$\begin{aligned} I_{s,t}:=\inf _{s \leqslant u \leqslant t} X_u, \end{aligned}$$

the infimum of X over \([s,t]\), and remark that when \(s = 0\) we have \(I_{t}=I_{0,t}\). Moreover, since X drifts towards \(-\infty \) or oscillates, we must have \(I_{t}\rightarrow -\infty \) as \(t \uparrow \infty \). By [11, Lemma 1.2.1], for every fixed \(t\geqslant 0\), the limit:

$$\begin{aligned} H_t:= \lim \limits _{\varepsilon \rightarrow 0}\frac{1}{\varepsilon }\int _{[0,t]}{\textrm{d}}r~ \mathbb {1}_{\{ X_{r}<I_{r,t}+\varepsilon \}} \end{aligned}$$
(2.2)

exists in probability. Roughly speaking, for every fixed \(t\geqslant 0\), the quantity \(H_t\) measures the size of the set:

$$\begin{aligned} \{ r \leqslant t:~ X_{r-} \leqslant I_{r,t} \}, \end{aligned}$$

and we refer to \(H = (H_t: t \geqslant 0)\) as the height process of X. By [11, Theorem 1.4.3], condition (A4) ensures that H possesses a continuous modification that we consider from now on and that we still denote by H.
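To make the approximation (2.2) concrete, here is a minimal numerical sketch (not part of the formal development) in the Brownian case \(\psi (\lambda ) = \beta \lambda ^{2}\), the one case where H is explicit: there, (2.4)–(2.5) below give \(\beta H_t = X_t - I_t\). The grid sizes and the value of \(\varepsilon \) are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Brownian case psi(lam) = beta*lam^2 (so Var(X_1) = 2*beta), where the
# height process is explicit: H_t = (X_t - I_t)/beta.  We test the
# epsilon-approximation (2.2) at the single time t = T.
beta, n, T = 0.5, 200_000, 1.0
dt = T / n
X = np.concatenate([[0.0], np.cumsum(np.sqrt(2 * beta * dt) * rng.standard_normal(n))])

# I_{r,t} = inf over [r,t] of X, computed as a suffix minimum up to t = T
I_rt = np.minimum.accumulate(X[::-1])[::-1]

eps = 0.01
H_eps = dt * np.count_nonzero(X[:-1] < I_rt[:-1] + eps) / eps
H_exact = (X[-1] - X.min()) / beta
print(H_eps, H_exact)  # the two values should be close for small eps
```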

The process H will be the building block used to define Lévy trees. However, H is not Markovian as soon as \(\pi \ne 0\), and we will need to introduce a process – called the exploration process – which, roughly speaking, carries the information needed to make H Markovian. More precisely, the exploration process is a Markov process and we will write H as a functional of it. In this direction, we write \({\mathcal {M}}_{f}({\mathbb {R}}_{+})\) for the set of finite measures on \({\mathbb {R}}_{+}\) equipped with the topology of weak convergence and, with a slight abuse of notation, we write 0 for the null measure on \({\mathbb {R}}_+\). For every \(t\geqslant 0\), the exploration process at time t, denoted by \(\rho _t\), is the random measure on \({\mathbb {R}}_+\) defined as:

$$\begin{aligned} \langle \rho _{t},f \rangle :=\int _{[0,t]}{\textrm{d}}_{s}I_{s,t}\,f(H_{s}), \end{aligned}$$
(2.3)

where \({\textrm{d}}_{s} I_{s,t}\) stands for the measure associated with the non-decreasing function \(s \mapsto I_{s,t}\). Equivalently, \(\rho =(\rho _t:~t\geqslant 0)\) can be defined as:

$$\begin{aligned} \rho _{t}({\textrm{d}}r):= \beta \mathbb {1}_{[0,H_{t}]}(r){\textrm{d}}r+\mathop {\sum \limits _{0<s\leqslant t}}_{X_{s-}<I_{s,t}}(I_{s,t}-X_{s-})\,\delta _{H_{s}}({\textrm{d}}r),\quad t\geqslant 0, \end{aligned}$$
(2.4)

and remark that (2.3) implies that

$$\begin{aligned} \langle \rho _{t},1 \rangle =I_{t,t}-I_{0,t}=X_{t}-I_{t},\quad t\geqslant 0. \end{aligned}$$
(2.5)
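The atomic decomposition (2.4) and the mass identity (2.5) can be checked by hand on a toy path. The sketch below is a schematic illustration only: the path has finite variation, hence violates (A2), we take \(\beta = 0\), and we do not compute the atom locations \(H_s\); the point is that the total mass of the atoms \(I_{s,t} - X_{s-}\) equals \(X_t - I_t\).

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy spectrally positive path: unit negative drift plus finitely many
# positive jumps.  The atom of rho_t sitting at height H_s (not computed
# here) has mass I_{s,t} - X_{s-}, cf. (2.4); we verify (2.5).
T, n_jumps = 10.0, 25
S = np.sort(rng.uniform(0.0, T, n_jumps))   # jump times
J = rng.exponential(0.4, n_jumps)           # jump sizes

X_left = lambda s: -s + J[S < s].sum()      # X_{s-}
X = lambda s: -s + J[S <= s].sum()          # X_s

def I(s, t):
    # I_{s,t} = inf of X over [s,t]; between jumps X decreases linearly, so
    # the infimum is attained at t or just before a jump time in (s, t]
    return min([X(t)] + [X_left(u) for u in S if s < u <= t])

t = 7.0
masses = [I(s, t) - X_left(s) for s in S if s <= t and X_left(s) < I(s, t)]
print(sum(masses), X(t) - I(0.0, t))        # the two values coincide
```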

In particular, \(\rho _t\) takes values in \({\mathcal {M}}_{f}({\mathbb {R}}_{+})\). By [11, Proposition 1.2.3], the process \((\rho _t: t \geqslant 0)\) is an \({\mathcal {M}}_{f}({\mathbb {R}}_{+})\)-valued càdlàg strong Markov process, and we briefly recall some of its main properties for later use. For every \(\mu \in {\mathcal {M}}_{f}({\mathbb {R}}_{+})\), we write \(\text {supp} (\mu )\) for the topological support of \(\mu \) and we set \(H(\mu ):=\sup \text {supp} (\mu )\) with the convention \(H(0) = 0\). The following properties hold:

  1. (i)

    Almost surely, for every \(t \geqslant 0\), we have \(\text {supp } \rho _t = [0, H_t]\) if \(\rho _t \ne 0\).

  2. (ii)

    The process \(t \mapsto \rho _t\) is càdlàg with respect to the total variation distance.

  3. (iii)

    Almost surely, the following sets are equal:

    $$\begin{aligned} \{ t \geqslant 0: \rho _t = 0 \} = \{ t \geqslant 0: X_t - I_t = 0 \} = \{ t \geqslant 0: H_t = 0 \}. \end{aligned}$$
    (2.6)

Indeed, point (ii) was proved in [11, Proposition 1.2.3], while points (i) and (iii) are a direct consequence of [11, Lemma 1.2.2] and (2.5). In particular, note that we have \((H(\rho _t))_{t \geqslant 0} = (H_t)_{t \geqslant 0}\) and that point (iii) implies that the excursion intervals away from 0 of \(X-I\), H and \(\rho \) coincide. Moreover, since \(I_t \rightarrow -\infty \) as \(t \uparrow \infty \), the excursion intervals have finite length and, by [11, Lemma 1.3.2] and the monotonicity of \(t\mapsto I_{t}\), we have:

$$\begin{aligned} \lim \limits _{\varepsilon \rightarrow 0}{\mathbb {E}}\bigg [\sup \limits _{s\in [0,t]}\big | \frac{1}{\varepsilon } \int _{0}^{s}{\textrm{d}}u \, \mathbb {1}_{\{ H_u<\varepsilon \}}+I_s\big |\bigg ]=0,\quad \text { for every }t\geqslant 0. \end{aligned}$$
(2.7)

By the previous display, \(-I\) can be thought of as the local time of H at 0.
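As an illustrative check of (2.7), again in the Brownian case \(\psi (\lambda )=\beta \lambda ^{2}\) where \(H = (X-I)/\beta \) is explicit, one may compare \(\varepsilon ^{-1}\int _0^s {\textrm{d}}u\, \mathbb {1}_{\{H_u < \varepsilon \}}\) with \(-I_s\) numerically; the grid and the value of \(\varepsilon \) below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(5)

# Check of (2.7) in the Brownian case psi(lam) = beta*lam^2, where
# H = (X - I)/beta: the rescaled occupation (1/eps)*Leb{u <= s : H_u < eps}
# should approximate -I_s, the "local time of H at 0".
beta, n, dt = 0.5, 400_000, 1e-5
X = np.concatenate([[0.0], np.cumsum(np.sqrt(2 * beta * dt) * rng.standard_normal(n))])
I = np.minimum.accumulate(X)
H = (X - I) / beta

eps = 0.02
approx = dt * np.count_nonzero(H[:-1] < eps) / eps
print(approx, -I[-1])   # close for small eps and fine grids
```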

The Markov process \(\rho \) in our previous definition starts at \(\rho _0 = 0\) and, in order to make use of the Markov property, we have to recall how to define its distribution starting from an arbitrary measure \(\mu \in {\mathcal {M}}_f({\mathbb {R}}_+)\). In this direction, we will need to introduce the following two operations:

Pruning. For every \(\mu \in {\mathcal {M}}_{f}({\mathbb {R}}_{+})\) and \( 0\leqslant a < \langle \mu , 1 \rangle \), we let \(\kappa _{a}\mu \) be the unique measure on \({\mathbb {R}}_+\) such that, for every \(r \geqslant 0\):

$$\begin{aligned} \kappa _{a}\mu ([0,r]):=\mu ([0,r])\wedge (\langle \mu ,1 \rangle -a). \end{aligned}$$

If \(a \geqslant \langle \mu , 1 \rangle \) we simply set \(\kappa _a\mu := 0\). The operation \(\mu \mapsto \kappa _a \mu \) corresponds to a pruning operation “from the right” and note that, for every \(a> 0\) and \(\mu \in {\mathcal {M}}_{f}({\mathbb {R}}_{+})\), the measure \(\kappa _{a}\mu \) has compact support. In particular, one has \(H(\kappa _{a}\mu )<\infty \) for every \(a>0\), even for \(\mu \) with unbounded support.

Concatenation. Consider \(\mu ,\nu \in {\mathcal {M}}_{f}({\mathbb {R}}_{+})\) such that \(H(\mu )<\infty \). The concatenation of the measure \(\mu \) with \(\nu \) is again an element of \({\mathcal {M}}_f({\mathbb {R}}_+)\), denoted by \([\mu ,\nu ]\) and defined by the relation:

$$\begin{aligned} \langle [\mu ,\nu ],f \rangle :=\int \mu ({\textrm{d}}r) f(r)+\int \nu ({\textrm{d}}r) f(H(\mu )+r). \end{aligned}$$

Finally, for every \(\mu \in {\mathcal {M}}_{f}({\mathbb {R}}_{+})\), the exploration process started from \(\mu \) is denoted by \(\rho ^{\mu }\) and defined as:

$$\begin{aligned} \rho ^{\mu }_{t}:=[\kappa _{-I_{t}}\mu , \rho _{t}],\quad t> 0, \end{aligned}$$
(2.8)

with the convention \(\rho _{0}^{\mu }:=\mu \). In this definition we used the fact that, P-a.s., \(I_{t}<0\) for every \(t>0\), since we are not imposing the condition \(H(\mu ) < \infty \) on \(\mu \). Remark that by (2.5), the process \(\langle \rho ^\mu , 1 \rangle := (\langle \rho ^\mu _t, 1 \rangle : t \geqslant 0)\) has the same distribution as the Markov process \(X-I\) started from \(\langle \mu , 1 \rangle \); this fact will be used frequently. For every \(\mu \in {\mathcal {M}}_{f}({\mathbb {R}}_+)\), we write \({{\textbf {P}}}_{\mu }\) for the distribution of the exploration process started from \(\mu \) in \({\mathbb {D}}({\mathbb {R}}_+, {\mathcal {M}}_f({\mathbb {R}}_+))\) – the space of càdlàg \({\mathcal {M}}_f({\mathbb {R}}_+)\)-valued paths.
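For concreteness, here is a minimal sketch of the pruning and concatenation operations for a purely atomic measure, represented as a list of (position, mass) pairs; in general \(\mu \) may also carry a density part, which this toy representation ignores. In this representation, definition (2.8) amounts to concat(prune(mu, -I_t), rho_t).

```python
def prune(atoms, a):
    """kappa_a(mu) for a purely atomic mu = [(position, mass), ...]:
    removes total mass a 'from the right' (largest positions first)."""
    out = sorted(atoms)
    while a > 0 and out:
        pos, m = out.pop()
        if m > a:
            out.append((pos, m - a))
            break
        a -= m
    return out

def H_of(atoms):
    """H(mu) = sup supp(mu), with the convention H(0) = 0."""
    return max((p for p, _ in atoms), default=0.0)

def concat(mu, nu):
    """[mu, nu]: shift nu by H(mu) and append it to mu."""
    h = H_of(mu)
    return mu + [(h + p, m) for p, m in nu]

mu = [(0.5, 1.0), (1.2, 0.7), (2.0, 0.3)]
nu = [(0.1, 0.4)]
print(prune(mu, 0.8))              # [(0.5, 1.0), (1.2, 0.2)]
print(concat(prune(mu, 0.8), nu))  # the atom of nu reattached at 1.2 + 0.1
```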

For later use we also need to introduce, under P, the dual process of \(\rho \), that is, the \({\mathcal {M}}_f({\mathbb {R}}_+)\)-valued process \((\eta _t: t \geqslant 0)\) defined by the formula

$$\begin{aligned} \eta _t ({\textrm{d}}r):= \beta \mathbb {1}_{[0,H_{t}]}(r){\textrm{d}}r+\mathop {\sum \limits _{0<s\leqslant t}}_{X_{s-}<I_{s,t}}(X_{s} - I_{s,t} )\,\delta _{H_{s}}({\textrm{d}}r),\quad t\geqslant 0. \end{aligned}$$
(2.9)

We refer to [26] for a heuristic description of the process \((\rho , \eta )\) in terms of queuing systems. The process \(\eta \) will only be needed for some computations, and the terminology will be justified by the identity (2.11) below. Moreover, \(\eta \) is càdlàg with respect to the total variation distance and the pair \((\rho , \eta )\) is a Markov process. We refer to [11, Section 3.1] for a complete account of \((\eta _t: t \geqslant 0 )\).

Before concluding this section, it will be crucial for our purposes to define the height process and the exploration process under the excursion measure N of \(X-I\). In this direction, if for an arbitrary fixed r we set \(g = \sup \{ s \leqslant r: X_s-I_s = 0 \}\) and \(d = \inf \{ s \geqslant r: X_s-I_s = 0 \}\), it is straightforward to see that \((H_t: t \in [g,d])\) can be written in terms of a functional of the excursion of \(X-I\) that straddles r, say \(e_j =( X_{(g + t) \wedge d} - I_{g}: t \geqslant 0)\), and this functional does not depend on the choice of r. Informally, in view of the initial definition (2.2) this should not come as a surprise, since for \(t \in [g,d]\) the integrand in (2.2) vanishes on [0, g]; we refer to the discussion appearing before Lemma 1.2.4 in [11] for more details. We denote this functional by \(H(e_j)\) and it satisfies, P-a.s., \(H_t = H_{t-g}(e_j)\) for every \(t \in [g,d]\). Furthermore, if we denote the connected components of \(\{ t \geqslant 0: X_t - I_t > 0 \}\) by \(\big ((a_i, b_i): i \in {\mathbb {N}} \big )\) and the corresponding excursions by \((e_i: i \in {\mathbb {N}})\), then we have \(H_{(a_i + t) \wedge b_i } = H_t(e_i)\), for all \(t \geqslant 0\). By considering, for every \(\varepsilon > 0\), the first excursion e of \(X-I\) with duration \(\sigma _e > \varepsilon \), it follows that the functional H(e) in \(D({\mathbb {R}}_+, {\mathbb {R}})\) under \(N( {\textrm{d}}e \,| \sigma _e > \varepsilon )\) is well defined, and hence it is also well defined under the excursion measure N.
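The decomposition of \(X-I\) into excursions indexed by the local time \(-I\), which underlies the Poisson random measure (2.10) below, is easy to emulate on a discretised path. The following sketch (with a Brownian stand-in for X and illustrative grid sizes) extracts the excursion intervals and attaches to each the value of \(-I\) at its beginning.

```python
import numpy as np

rng = np.random.default_rng(2)

# Discretised Brownian path as a stand-in for X; the reflected process is
# X - I with I the running infimum, and -I serves as its local time at 0.
n, dt = 100_000, 1e-4
X = np.concatenate([[0.0], np.cumsum(np.sqrt(dt) * rng.standard_normal(n))])
I = np.minimum.accumulate(X)
R = X - I

# excursion intervals of X - I: maximal grid intervals where R > 0
pos = R > 0
starts = np.flatnonzero(pos & ~np.roll(pos, 1))
ends = np.flatnonzero(pos & ~np.roll(pos, -1))
# index each excursion by the local time -I at its beginning, as in (2.10)
excursions = [(-I[a], R[a - 1 : b + 2]) for a, b in zip(starts, ends)]
print(len(excursions), max(len(exc) for _, exc in excursions))
```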

Turning now our attention to the exploration process and its dual, observe that for \(t \in [a_i, b_i]\) the masses of the atoms in (2.4) and (2.9) only depend on the corresponding excursion \(e_i\). We deduce from our previous considerations on H that we can also write \(\rho _{(a_i+t) \wedge b_i} = \rho _t (e_i)\) and \(\eta _{(a_i+t) \wedge b_i} = \eta _t (e_i)\), for all \(t \geqslant 0\), where the functionals \(\rho (e)\), \(\eta (e)\) are still defined by (2.4) and (2.9) respectively, but replacing X by \(e_i\) and H by \(H(e_i)\) – translated in time appropriately. By the same arguments as before, we deduce that \(\rho (e)\) and \(\eta (e)\) under \(N({\textrm{d}}e)\) are well-defined \({\mathcal {M}}_{f}({\mathbb {R}}_{+})\)-valued functionals. From now on, when working under N, the dependency on e is omitted from H, \(\rho \) and \(\eta \). Remark that under N, we still have \(H(\rho _t) = H_t\) and \(\langle \rho _t, 1 \rangle = X_t\), for every \(t \geqslant 0\), where now X is an excursion of the reflected process. By excursion theory for the reflected Lévy process \(X-I\), we deduce that the random measure on \({\mathbb {R}}_+ \times {\mathbb {D}}({\mathbb {R}}_+, {\mathcal {M}}_f({\mathbb {R}}_+))^2\) defined as

$$\begin{aligned} \sum _{i \in {\mathbb {N}}}\delta _{(-I_{a_i}, \rho _{(a_i + \cdot )\wedge {b_i}}, \eta _{(a_i + \cdot )\wedge {b_i}} )} \end{aligned}$$
(2.10)

is a Poisson point measure with intensity \(\mathbb {1}_{\ell \geqslant 0}{\textrm{d}}\ell \, N( {\textrm{d}}\rho , {\textrm{d}}\eta )\). Finally, we recall for later use the equality in distribution under N:

$$\begin{aligned} \big ( (\rho _t, \eta _t): t \geqslant 0 \big ) \overset{(d)}{=}\ \big ( (\eta _{(\sigma -t )-}, \rho _{(\sigma -t )-} ): t \geqslant 0 \big ), \end{aligned}$$
(2.11)

and we refer to [11, Corollary 3.1.6] for a proof. This identity is the reason why \(\eta \) is called the dual process of \(\rho \).

2.2 Trees coded by excursions and Lévy trees

The height process H under N is the main ingredient needed to define Lévy trees, one of the central objects studied in this work. Before giving a formal definition, we shall briefly recall some standard notation and notions related to (deterministic) pointed \({\mathbb {R}}\)-trees.

Real trees. In the same vein as the construction of planar (discrete) trees in terms of their contour functions, there exists a canonical construction of pointed \({\mathbb {R}}\)-trees in terms of positive continuous functions. In order to be more precise, we introduce some notation. Let \(e:{\mathbb {R}}_+ \rightarrow {\mathbb {R}}_{+}\) be a continuous function and set \(\sigma _e:= \sup \{ t > 0: e(t) \ne 0 \}\), with the convention \(\sup \varnothing := 0\). In particular, when \(e(0) = 0\), \(\sigma _{e} < \infty \) and \(e(s)> 0\) for all \(s \in (0,\sigma _e)\), the function e is called an excursion with lifetime \(\sigma _e\). Note that these notations are compatible with the ones introduced in the previous section. For convenience, we take \([0,\sigma _e]:= [0,\infty )\) if \(\sigma _e = \infty \). For every \(s, t \in [0,\sigma _e]\) with \(s \leqslant t\) set

$$\begin{aligned} \displaystyle m_{e}(s,t):= \inf _{s \leqslant u \leqslant t}e(u), \end{aligned}$$

and consider the pseudo-distance on \([0,\sigma _e]\) defined by:

$$\begin{aligned} d_{e}(s,t):=e(s)+e(t)-2\cdot m_{e}(s\wedge t,s\vee t), \quad \text { for all } (s,t)\in [0,\sigma _{e}]^{2}. \end{aligned}$$

The pseudo-distance \(d_{e}\) induces an equivalence relation \(\sim _{e}\) on \([0,\sigma _e]\) according to the following simple rule: for every \((s,t)\in [0,\sigma _{e}]^{2}\), we write \(s\sim _{e} t\) if and only if \(d_{e}(s,t)=0\), and we keep the notation 0 for the equivalence class of the real number 0. The pointed metric space \({\mathcal {T}}_{e}:=([0,\sigma _e]/\sim _{e},d_{e},0)\) is an \({\mathbb {R}}\)-tree, called the tree encoded by e, and we denote its canonical projection by \(p_{e}:[0,\sigma _{e}]\rightarrow {\mathcal {T}}_{e}\). We stress that if \(\sigma _e<\infty \), then \({\mathcal {T}}_e\) is a compact \({\mathbb {R}}\)-tree.
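The following toy sketch illustrates the encoding on a discretised excursion (the particular excursion and the grid are arbitrary choices): it evaluates the pseudo-distance \(d_e\), checks that 0 and \(\sigma _e\) are identified in \({\mathcal {T}}_e\), and tests the ancestor criterion \(e(s)=m_{e}(s\wedge t,s\vee t)\) discussed in the next paragraph.

```python
import numpy as np

# A discretised excursion e on [0,1] and the tree pseudo-distance
# d_e(s,t) = e(s) + e(t) - 2 m_e(s^t, s v t) of this section.
tt = np.linspace(0.0, 1.0, 1001)
e = np.sin(np.pi * tt) * (1.2 + np.cos(7 * np.pi * tt))  # e(0)=e(1)=0, e>0 inside

def m(s, t):                                   # m_e(s^t, s v t)
    i, j = np.searchsorted(tt, [min(s, t), max(s, t)])
    return e[i : j + 1].min()

def d(s, t):                                   # d_e(s, t)
    return np.interp(s, tt, e) + np.interp(t, tt, e) - 2.0 * m(s, t)

print(d(0.2, 0.8))   # distance in T_e between p_e(0.2) and p_e(0.8)
print(d(0.0, 1.0))   # = 0: the times 0 and sigma_e are identified
# ancestor criterion: p_e(s) lies on [0, p_e(t)] iff e(s) = m_e(s^t, s v t)
s, t = 0.1, 0.5
print(np.isclose(np.interp(s, tt, e), m(s, t)))
```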

Let us now give some standard properties and notations. We recall that in an \({\mathbb {R}}\)-tree there is only one continuous injective path connecting any two points \(u,v \in {\mathcal {T}}_e\), and we denote its image in \({\mathcal {T}}_e\) by \([u,v]_{{\mathcal {T}}_e}\). We say that u is an ancestor of v if \(u \in [0,v]_{{\mathcal {T}}_e}\) and we write \(u\preceq _{{\mathcal {T}}_e} v\). One can check directly from the definition that we have \(u\preceq _{{\mathcal {T}}_e} v\) if and only if there exists \((s,t)\in [0,\sigma _{e}]^{2}\) such that \((p_{e}(s),p_{e}(t))=(u,v)\) and \(e(s)=m_{e}(s\wedge t,s\vee t)\). In other words, we have:

$$\begin{aligned} [0,v]_{{\mathcal {T}}_e}=p_{e}\big (\big \{s\in [0,\sigma _{e}]:~ e(s)=m_{e}(s\wedge t,s\vee t)\big \}\big ), \end{aligned}$$

where t is any preimage of v by \(p_{e}\). To simplify notation, we write \(u\curlywedge _{{\mathcal {T}}_e} v\) for the unique element on the tree verifying \([0,u\curlywedge _{{\mathcal {T}}_e} v]_{{\mathcal {T}}_e}=[0,u]_{{\mathcal {T}}_e}\cap [0,v]_{{\mathcal {T}}_e}\). The element \(u\curlywedge _{{\mathcal {T}}_e} v\) is known as the common ancestor of u and v. Finally, if \(u\in {\mathcal {T}}_e\), the number of connected components of \({\mathcal {T}}_e\setminus \{u\}\) is called the multiplicity of u. For every \(i\in {\mathbb {N}}^{*}\cup \{\infty \}\), we will denote the set of points \(u\in {\mathcal {T}}_e\) of multiplicity equal to i by \(\text {Mult}_{i}({\mathcal {T}}_e)\). The points of multiplicity larger than 2 are called branching points, and the points of multiplicity 1 are called leaves.

Lévy trees. We are now in position to introduce:

Definition 2.1

The random metric space \({\mathcal {T}}_{H}\) under the excursion measure N is the (free) \(\psi \)-Lévy tree.

The term free refers to the fact that the lifetime of H is not fixed under N, and this qualifier will be omitted from now on. Note that the metric space \({\mathcal {T}}_{H}\) can be considered under P without any modifications. Since, under P, we have \(\sigma _H = \infty \), the tree \({\mathcal {T}}_{H}\) stands for the space \(({\mathbb {R}}_+/\sim _{H},d_{H},0)\) and, in particular, it is no longer a compact space. The rest of the properties however remain valid and we will use the same notations indifferently under P and N. Moreover, since the point 0 is recurrent for the process \(X-I\), it is also recurrent for H by point (iii) of the previous section. This gives a natural interpretation of \({\mathcal {T}}_{H}\) as the concatenation at the root of infinitely many trees \({\mathcal {T}}_{H^{i}}\), where \((H^{i})_{i\in {\mathbb {N}}} = (H(e_i))_{i \in {\mathbb {N}}}\) are the excursions of H away from 0, and where the concatenation follows the order induced by the local time \(-I\). For this reason, we will say that \({\mathcal {T}}_{H}\) under P is a \(\psi \)-forest (made of \(\psi \)-Lévy trees). In particular, remark that under P (resp. N), the root \(p_{H}(0)\) is a branching point of multiplicity \(\infty \) (resp. a leaf).

Before concluding the discussion on \({\mathbb {R}}\)-trees, we recall from [12, Theorem 4.6] that, under P or N, \(\text {Mult}_{i}({\mathcal {T}}_{H})=\varnothing \) for every \(i\notin \{1,2,3,\infty \}\). Moreover, we have \(\text {Mult}_{\infty }({\mathcal {T}}_{H})\setminus \{p_H(0)\}=\varnothing \) if and only if \(\pi =0\) or, equivalently, if X does not have jumps. More precisely, \(p_{H}\) realizes a bijection between \(\{t\geqslant 0: \, \Delta X_{t}>0\}\) and \(\text {Mult}_{\infty }({\mathcal {T}}_{H})\setminus \{p_{H}(0)\}\).

2.3 The Lévy snake

In this section, we give a short introduction to the so-called Lévy snake, a path-valued Markov process that allows one to formalize the notion of a “Markov process indexed by a Lévy tree”. We follow the presentation of [11, Chapter 4]. However, beware that in this work we consider continuous paths defined on closed intervals, and hence our framework differs slightly from the one considered in [11, Chapter 4].

Snakes driven by continuous functions. Fix a Polish space E equipped with a distance \(d_{E}\) inducing its topology, and let \({\mathcal {W}}_{E}\) be the set of E-valued killed continuous functions. Each \(\text {w} \in {\mathcal {W}}_E\) is a continuous path \(\text {w}:[0,\zeta _{\text {w}}]\rightarrow E\), defined on a compact interval \([0,\zeta _{\text {w}}]\). The quantity \(\zeta _{\text {w}} \in [0,\infty )\) is called the lifetime of \(\text {w}\) and it will be convenient to denote the endpoint of \(\text {w}\) by \(\widehat{\text {w}}:=\text {w}(\zeta _{\text {w}})\). Further, we write \({\mathcal {W}}_{E,x}:=\{\text {w} \in {\mathcal {W}}_E:~ \text {w}(0)=x\}\) for the subcollection of paths in \({\mathcal {W}}_E\) starting at x, and we identify the trivial element of \({\mathcal {W}}_{E,x}\) with zero lifetime with the point x. We equip \({\mathcal {W}}_{E}\) with the distance

$$\begin{aligned} d_{{\mathcal {W}}_{E}}(\text {w},\text {w}^{\prime }):=|\zeta _{\text {w}}-\zeta _{\text {w}^{\prime }}|+\sup \limits _{r\geqslant 0}d_{E}\big (\text {w}(r\wedge \zeta _{\text {w}}),\text {w}^{\prime }(r\wedge \zeta _{\text {w}^{\prime }})\big ), \end{aligned}$$

and it is straightforward to check that \(({\mathcal {W}}_{E},d_{{\mathcal {W}}_{E}})\) is a Polish space. Let us insist that the notation e is exclusively used for continuous \({\mathbb {R}}_+\)-valued functions defined on \({\mathbb {R}}_+\), and \({\text {w}}\) is reserved for E-valued continuous paths defined on compact intervals \([0, \zeta _{\text {w}}]\), viz. for the elements of \({\mathcal {W}}_E\).
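As a small sketch, the distance \(d_{{\mathcal {W}}_{E}}\) can be evaluated on discretised killed paths as follows; here \(E = {\mathbb {R}}\) with \(d_E(x,y)=|x-y|\), a path is stored as a pair of grids, and the example paths are arbitrary.

```python
import numpy as np

# d_{W_E} for E = R, d_E(x,y) = |x - y|; a killed path w is stored as
# (grid of times on [0, zeta_w], values on that grid).
def d_W(w, wp):
    (tw, xw), (tp, xp) = w, wp
    zeta, zetap = tw[-1], tp[-1]
    r = np.linspace(0.0, max(zeta, zetap), 2000)
    a = np.interp(np.minimum(r, zeta), tw, xw)    # w(r ^ zeta_w)
    b = np.interp(np.minimum(r, zetap), tp, xp)   # w'(r ^ zeta_{w'})
    return abs(zeta - zetap) + np.abs(a - b).max()

w = (np.linspace(0.0, 1.0, 101), np.linspace(0.0, 1.0, 101) ** 2)
wp = (np.linspace(0.0, 1.5, 151), np.linspace(0.0, 1.5, 151) ** 2)
print(d_W(w, wp))   # |zeta_w - zeta_w'| + sup_r d_E(w(r^zeta), w'(r^zeta'))
```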

We will now endow \({\mathcal {W}}_{E}^{{\mathbb {R}}_+}\) with a probability measure. In this direction, consider an E-valued Markov process \(\xi = (\xi _t: t \geqslant 0) \) with continuous sample paths. For every \(x\in E\), let \(\Pi _{x}\) denote the distribution of \(\xi \) started at x, and assume that \(\xi \) is time-homogeneous (it is implicitly assumed in our definition that the mapping \(x \mapsto \Pi _x\) is measurable). Now, fix a deterministic continuous function \(h:{\mathbb {R}}_{+}\rightarrow {\mathbb {R}}_{+}\). The first step towards defining the Lévy snake consists in introducing a \({\mathcal {W}}_E\)-valued process referred to as the snake driven by h with spatial motion \(\xi \). In this direction, we also fix a point \(x\in E\) and a path \({\text {w}}\in {\mathcal {W}}_{E,x}\). For every a, b such that \(0\leqslant a\leqslant \zeta _{\text {w}}\) and \(b \geqslant a\), there exists a unique probability measure \(R_{a,b}(\text {w},{\textrm{d}}\text {w}^{\prime })\) on \({\mathcal {W}}_{E,x}\) satisfying the following properties:

  1. (i)

    \(R_{a,b}(\text {w},{\textrm{d}}\text {w}^{\prime })\)-a.s., \(\text {w}^{\prime }(s)=\text {w}(s)\) for every \(s\in [0,a]\).

  2. (ii)

    \(R_{a,b}(\text {w},{\textrm{d}}\text {w}^{\prime })\)-a.s., \(\zeta _{\text {w}^{\prime }}=b\).

  3. (iii)

    Under \(R_{a,b}(\text {w},{\textrm{d}}\text {w}^{\prime })\), \((\text {w}^{\prime }(s+a))_{s\in [0,b-a]}\) is distributed as \((\xi _{s})_{s\in [0,b-a]}\) under \(\Pi _{\text {w}(a)}\).

Denoting the canonical process on \({\mathcal {W}}^{{\mathbb {R}}_{+}}_{E}\) by \((W_{s})_{s\geqslant 0}\), it is easy to see by Kolmogorov’s extension theorem that, for every \(\text {w}_{0} \in {\mathcal {W}}_{E,x}\) with \(\zeta _{\text {w}_{0}}=h(0)\), there exists a unique probability measure \(Q^{h}_{\text {w}_{0}}\) on \({\mathcal {W}}_E^{{\mathbb {R}}_+}\) satisfying that

$$\begin{aligned}&Q^{h}_{\text {w}_{0}}\big (W_{s_{0}}\in A_{0}, W_{s_{1}}\in A_{1},...,W_{s_{n}}\in A_{n}\big )\\&\hspace{0.2cm}=\mathbb {1}_{\{\text {w}_{0}\in A_{0}\}}\!\int _{A_{1}\times A_{2}\times \cdots \times A_{n}}\!\!\!\! R_{m_{h}(s_{0},s_{1}),h(s_{1})}(\text {w}_{0}, {\textrm{d}}\text {w}_{1})\dots R_{m_{h}(s_{n-1},s_{n}),h(s_{n})}(\text {w}_{n-1}, {\textrm{d}}\text {w}_{n}), \end{aligned}$$

for every \(0=s_{0}\leqslant s_{1}\leqslant ...\leqslant s_{n}\) and \(A_{0},..., A_{n}\) Borel sets of \({\mathcal {W}}_{E}\). The canonical process W on \({\mathcal {W}}^{{\mathbb {R}}_+}_E\) under \(Q^{h}_{\text {w}_{0}}\) is called the snake driven by h with spatial motion \(\xi \) started from \({\text {w}}_0\). The value \(W_s = (W_s(t): t \in [0,h(s)])\) of the snake at time s coincides with \({\text {w}}_0\) for \(0 \leqslant t \leqslant m_h(0,s)\) while, for \(m_{h}(0,s)\leqslant t \leqslant h(s)\), it is distributed as the Markov process \(\xi \) started at \({\text {w}}_0(m_h(0,s))\) and stopped at time \(h(s)-m_h(0,s)\). Furthermore, informally, when h decreases, the path is erased from its tip and, when h increases, the path is extended by adding “little pieces” of trajectories of \(\xi \) at the tip. The term snake refers to the fact that the definition of \(Q_{\text {w}_0}^h\) entails that, for every \(s<s^\prime \), we have:

$$\begin{aligned} W_{s}(r)=W_{s^\prime }(r),\quad r\in [0,m_{h}(s,s^\prime )],\quad Q_{\text {w}_0}^h\text {-a.s.} \end{aligned}$$
(2.12)
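The kernels \(R_{a,b}\) and the finite-dimensional formula above translate directly into a sequential simulation: to move from \(W_{s}\) to \(W_{s'}\), keep the path on \([0, m_h(s,s')]\) and regrow it with fresh \(\xi \)-increments up to lifetime \(h(s')\). The sketch below takes \(\xi \) to be a standard Brownian motion and an arbitrary driving function h (both illustrative choices) and, by construction, satisfies the discrete analogue of (2.12).

```python
import numpy as np

rng = np.random.default_rng(3)

dt = 1e-3   # spatial-path discretisation step

def R_kernel(w, a, b):
    # R_{a,b}(w, .): keep w on [0, a], then extend from w(a) with an
    # independent Brownian path (our choice of xi) up to lifetime b
    kept = w[: int(a / dt) + 1]
    steps = np.sqrt(dt) * rng.standard_normal(int(b / dt) + 1 - len(kept))
    return np.concatenate([kept, kept[-1] + np.cumsum(steps)])

# the snake sampled along s_0 < s_1 < ..., as in the display defining Q^h_w
h = lambda s: 1.0 + 0.5 * np.sin(2.0 * np.pi * s)    # arbitrary driving h
s_grid = np.linspace(0.0, 1.0, 201)
W = [np.zeros(int(h(0.0) / dt) + 1)]                 # started from w_0 = x = 0
for s_prev, s in zip(s_grid[:-1], s_grid[1:]):
    m = min(h(u) for u in np.linspace(s_prev, s, 50))   # m_h(s_prev, s)
    W.append(R_kernel(W[-1], m, h(s)))
# consecutive paths agree on [0, m_h(s, s')] by construction, cf. (2.12)
```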

Note however that this property only holds for fixed \(s,s^\prime \), \(Q_{{\text {w}}_0}^h\)-a.s. A priori, under \(Q^{h}_{\text {w}_{0}}\), the process W does not have a continuous modification with respect to the metric \(d_{{\mathcal {W}}_E}\), but it will be crucial for our work to find suitable conditions guaranteeing the existence of such a modification. This question is addressed in the following proposition. We start by introducing some notation. First recall the convention \([a,\infty ]:=[a,\infty )\) for \(a < \infty \). Next, consider a \({\mathcal {J}}\)-indexed family \(a_i, b_i \in {\mathbb {R}}_+\cup \{\infty \}\), \({\mathcal {J}} \subset {\mathbb {N}}\), with \(a_i < b_i\), and suppose that the intervals \(([a_i, b_i], i \in {\mathcal {J}})\) are disjoint. A continuous function \(h:{\mathbb {R}}_+ \rightarrow {\mathbb {R}}_+\) is said to be locally r-Hölder continuous in \(([a_i, b_i], i \in {\mathcal {J}})\) if, for every \(n\in {\mathbb {N}}\), there exists a constant \(C_n\) satisfying \(|h(s) - h(t)| \leqslant C_n|s-t|^r\), for every \(i \in {\mathcal {J}}\) and \(s,t \in [a_i,b_i]\cap [0,n]\). We insist on the fact that the constant \(C_n\) does not depend on the index i.

Proposition 2.2

Suppose that there exists a constant \(C_\Pi > 0\) and two positive numbers \(p,q > 0\) such that, for every \(x \in E\) and \(t\geqslant 0\), we have:

$$\begin{aligned} \Pi _{x}\big ( \sup _{0 \leqslant u \leqslant t } d_E(\xi _u, x)^p \big ) \leqslant C_\Pi \cdot t^{q}. \end{aligned}$$
(2.13)

Further, consider a continuous function \(h: {\mathbb {R}}_+ \rightarrow {\mathbb {R}}_+\) and denote by \(((a_i, b_i):~ i \in {\mathcal {J}})\) the excursion intervals of h above its running infimum. If h is locally r-Hölder continuous in \(([a_i, b_i]: i \in {\mathcal {J}})\) with \(qr > 1\) then, for every \({\text {w}} \in {\mathcal {W}}_E\) with \(\zeta _{\text {w}} = h(0)\), the process W has a continuous modification under \(Q_{\text {w}}^h\).

Proof

With the notation introduced in the statement of the proposition, we fix a continuous driving function \(h: {\mathbb {R}}_+ \rightarrow {\mathbb {R}}_+\) locally r-Hölder continuous in \(([a_i, b_i]:~ i \in {\mathcal {J}})\), an initial condition \({\text {w}}\in {\mathcal {W}}_E\) with \(\zeta _{\text {w}}= h(0)\), and we consider an arbitrary \(n \in {\mathbb {N}}\). By definition, for any \(s,t \in [a_i,b_i] \cap [0,n]\), we have \(|h(s) - h(t)| \leqslant C_n\cdot |s-t|^r\) for a constant \(C_n\) that does not depend on i. Next, we consider W, the snake driven by h, under \(Q^h_{{\text {w}}}({\textrm{d}}W)\). The first step of the proof consists in showing that the process \((W_s: s \in \bigcup _{i\in {\mathcal {J}}}[a_i,b_i])\) has a locally Hölder-continuous modification on \(\big ([a_i,b_i]:~ i \in {\mathcal {J}} )\). In this direction, we remark that the definition of \(d_{{\mathcal {W}}_E}\) gives:

$$\begin{aligned} Q^h_{\text {w}}\big ( d_{{\mathcal {W}}_E}(W_s, W_t)^p \big )&\leqslant 2^{p} \cdot Q^h_{\text {w}}\Big ( \sup _{m_h(s,t) \leqslant u}d_{E} \big (W_s(u \wedge h(s) ), W_t(u \wedge h(t) )\big )^p \Big ) \\&\quad + 2^{p}\cdot |h(s) - h(t)|^p, \end{aligned}$$

for every \(s,t \in [a_i, b_i] \cap [0,n]\). Next, note that the first term on the right hand side can be bounded above by:

$$\begin{aligned}&Q^h_{\text {w}}\Big ( \sup _{m_h(s,t) \leqslant u} d_{E}\big (W_s(u \wedge h(s) ), W_t(u \wedge h(t))\big )^p \Big ) \\&\quad \leqslant 2^{p}\cdot Q^h_{\text {w}}\Big ( \sup _{m_h(s,t) \leqslant u} d_{E} \big ( W_s(u \wedge h(s) ), W_s( m_h(s,t)) \big )^p \Big )\\&\qquad + 2^p\cdot Q^h_{\text {w}}\Big ( \sup _{m_h(s,t) \leqslant u} d_{E}\big (W_t( m_h(s,t) ), W_t(u \wedge h(t))\big )^p \Big ) \\&\quad \leqslant 2^{p}\cdot Q^h_{\text {w}}\Big ( \Pi _{W_s(m_h(s,t))} \big ( \sup _{ u \leqslant h(s) - m_h(s,t) } d_{E}( \xi _u , \xi _0 )^p \big ) \Big )\\&\qquad + 2^{p}\cdot Q^h_{\text {w}}\Big ( \Pi _{W_t(m_h(s,t))} \big ( \sup _{ u \leqslant h(t) - m_h(s,t) } d_{E}( \xi _0 , \xi _u )^p \big ) \Big ) \\&\quad \leqslant 2^{p} C_{\Pi }\cdot \Big ( \big |h(s)-m_h(s,t)\big |^{q}+\big |h(t)-m_h(s,t)\big |^{q}\Big ), \end{aligned}$$

where in the second inequality we applied the Markov property at time \(m_h(s,t)\), and in the last one we used the upper bound (2.13). By our assumptions on h we derive that, for every \(n>0\), there exists a constant \(C'_n\) such that:

$$\begin{aligned} Q^h_{\text {w}}\left( d_{{\mathcal {W}}_E}(W_s, W_t)^p \right) \leqslant C'_n \cdot \big ( | t-s |^{ q r }+| t-s |^{ p r }\big ), \quad \quad \text { for any } s,t \in [a_i, b_i] \cap [0,n], \end{aligned}$$

and we stress that the constant \(C'_n\) does not depend on i. Recall that \(qr>1\). Moreover, we can also assume that \(pr> qr > 1\) since, by equipping the space \({\mathcal {W}}_E\) with the distance \(1\wedge d_{{\mathcal {W}}_E}\) in place of \(d_{{\mathcal {W}}_E}\), we can take p as large as we wish. Now, fix \(r_0 \in (0, (qr-1)/p )\). We deduce by a standard Borel-Cantelli argument, similar to the proof of Kolmogorov’s lemma, that there exists a modification of \((W_s: s \in [0,n] \cap \bigcup _{i\in {\mathcal {J}}}[a_i,b_i] )\), say \((W_s^*: s \in [0,n] \cap \bigcup _{i\in {\mathcal {J}}}[a_i,b_i] )\), satisfying that \(Q_{\text {w}}^h\)-a.s., for every \(i \in {\mathcal {J}}\)

$$\begin{aligned} d_{{\mathcal {W}}_E}( W_s^*, W_t^* ) \leqslant K_n |s-t|^{r_0}, \quad \quad \text { for every } s,t \in [a_i, b_i] \cap [0,n], \end{aligned}$$
(2.14)

where the (random) quantity \(K_n\) does not depend on i. To simplify notation, set \({\mathcal {V}}:={\mathbb {R}}_+{\setminus } \bigcup _{i\in {\mathcal {J}}}[a_i,b_i]\) and remark that if \(t\in {\mathcal {V}}\), then \(h(t)=\inf \{h(u):~u\in [0,t]\}\). For every \(t \in {\mathcal {V}}\), we set \(W^*_t:= ( {\text {w}}(u): u \in [0,h(t)] )\) and we consider the process \((W^*_t: t\in [0,n])\). Notice that by the very construction of \(W^*\), we have \(Q_{\text {w}}^h(W_t = W^*_t)=1\) for every \(t \in [0,n]\), which shows that \(W^*\) is a modification of W in [0, n].

Let us now show that \(W^*\) is continuous on [0, n]. The continuity for \(t \in [0,n]\cap \bigcup _{i\in {\mathcal {J}}}(a_i,b_i) \) follows by (2.14) and we henceforth fix \(t\in {\mathcal {V}}_n:=[0,n]{\setminus } \bigcup _{i\in {\mathcal {J}}}(a_i,b_i) \). In particular, we have \(h(t)=\inf \{h(u):~u\in [0,t]\}\). On the one hand, for every sequence \((s_k:~k\in {\mathbb {N}})\) in \({\mathcal {V}}_n\) converging to t, the continuity of \({\text {w}}\) and h ensures that \(({\text {w}}( u): u \in [0, h(s_k)] ) \rightarrow W^*_t\) with respect to \(d_{{\mathcal {W}}_E}\) as \(k\uparrow \infty \). Therefore, we have:

$$\begin{aligned} \mathop {\lim \limits _{s\rightarrow t }}_{s\in {\mathcal {V}}_n}d_{{\mathcal {W}}_E}(W_{s}^*, W_t^*)= \mathop {\lim \limits _{s\rightarrow t }}_{s\in {\mathcal {V}}_n}d_{{\mathcal {W}}_E}\Big ( \big ({\text {w}}(u): u \in [0, h(s)] \big ), W^*_{t} \Big ) =0. \end{aligned}$$
(2.15)

On the other hand, for every \(s \in [a_j,b_j] \cap [0,n]\) for some \(j \in {\mathcal {J}}\) with \(s \leqslant t\), we have

$$\begin{aligned} d_{{\mathcal {W}}_E}(W_s^*, W_t^*)&\leqslant d_{{\mathcal {W}}_E}(W_s^*, W_{b_j}^*)+ d_{{\mathcal {W}}_E}\big ( W^*_{b_j} , W^*_{t} \big )\\&\leqslant K_n|s-b_j|^{r_0} + d_{{\mathcal {W}}_E}\Big ( \big ({\text {w}}(u) : u \in [0,h(b_j)] \big ) , W^*_{t} \Big ), \end{aligned}$$

which goes to 0 as \(s \uparrow t\) by (2.15) since \(W^*_t = \big ( {\text {w}}(u): u \in [0,h(t)] \big )\). The case \( s\geqslant t\) can be treated similarly, replacing \(b_j\) with \(a_j\), and it follows that \(d_{{\mathcal {W}}_E}(W_{s}^*, W_t^*) \rightarrow 0\) as \(s\rightarrow t\). Consequently, \(W^*\) is continuous on [0, n]. Since this holds for any n, we can define a continuous modification of W on \({\mathbb {R}}_+\). \(\square \)

Under the conditions of Proposition 2.2, the measure \(Q^h_{{\textrm{w}}}\) can be defined on the Skorokhod space \({\mathbb {D}}({\mathbb {R}}_+, {\mathcal {W}}_E)\) of \({\mathcal {W}}_E\)-valued càdlàg functions, viz. \({\mathcal {W}}_E\)-valued right-continuous paths possessing left limits at every time \(t>0\), and with a slight abuse of notation we still denote it by \(Q^h_{{\textrm{w}}}\). From now on, we shall work under these conditions, and \(Q^h_{\text {w}}\) will always be considered as a measure on \({\mathbb {D}}({\mathbb {R}}_+, {\mathcal {W}}_E)\). In particular, remark that if we write W for the canonical process in \({\mathbb {D}}({\mathbb {R}}_+, {\mathcal {W}}_E)\), then W is \(Q^h_{\text {w}}\)–a.s. continuous. Finally, we point out that the regularity of W was partially addressed in the proof of [11, Proposition 4.4.1], for initial conditions of the form x with \(x \in E\), when working with paths \({\text {w}}\) defined on the half-open interval \([0,\zeta _{\text {w}})\).

The Lévy snake with spatial motion \(\xi \). The driving function h of the random snake that we have considered so far was deterministic, and the next step consists in randomising h. We write \({\mathcal {M}}_f^0\) for the subset of \({\mathcal {M}}_f({\mathbb {R}}_+)\) defined as

$$\begin{aligned} {\mathcal {M}}^{0}_{f}:=\big \{\mu \in {\mathcal {M}}_{f}({\mathbb {R}}_{+}):\,H(\mu )<\infty \ \text { and } \text {supp } \mu = [0,H(\mu )]\big \}\cup \{0\}, \end{aligned}$$

and we introduce

$$\begin{aligned} \Theta :=\big \{(\mu , {\text {w}}) \in {\mathcal {M}}_f^0 \times {\mathcal {W}}_{E}:~H(\mu )=\zeta _{\text {w}}\big \}. \end{aligned}$$
(2.16)

Fix a Laplace exponent \(\psi \) satisfying (A1)–(A4), and set

$$\begin{aligned} \Upsilon :=\sup \big \{ r \geqslant 0: \lim _{\lambda \rightarrow \infty } \lambda ^{-r}\psi (\lambda ) = \infty \big \}. \end{aligned}$$
(2.17)

In particular, by the convexity of \(\psi \) we must have \(\Upsilon \geqslant 1\). For every \(\mu \in {\mathcal {M}}^{0}_{f}\), recall that we write \({{\textbf {P}}}_{\mu }\) for the distribution of the exploration process started from \(\mu \) in \({\mathbb {D}}({\mathbb {R}}_+, {\mathcal {M}}_f({\mathbb {R}}_+))\). With a slight abuse of notation we denote the canonical process in \({\mathbb {D}}({\mathbb {R}}_+, {\mathcal {M}}_f({\mathbb {R}}_+))\) by \(\rho \) and observe that, by property (i) in Sect. 2.1 and (2.8), the process \(\rho \) under \({{\textbf {P}}}_\mu \) takes values in \({\mathcal {M}}^{0}_{f}\). Notice that \(H(\rho )\) under \({{\textbf {P}}}_{\mu }\) is continuous since \(\mu \in {\mathcal {M}}_f^0\). We can now state the hypothesis we will be working with.
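
For instance, the exponent \(\Upsilon \) can be computed explicitly in the stable case – a direct check from (2.17), which we record for later reference:

$$\begin{aligned} \psi (\lambda ) = \lambda ^{\alpha } \text { with } \alpha \in (1,2] \quad \Longrightarrow \quad \lambda ^{-r}\psi (\lambda ) = \lambda ^{\alpha - r} \rightarrow \infty \text { as } \lambda \rightarrow \infty \iff r < \alpha , \quad \text { and hence } \Upsilon = \alpha . \end{aligned}$$

Similarly, as soon as the Brownian coefficient in (2.1) is non-zero, we have \(\psi (\lambda ) \asymp \lambda ^2\) as \(\lambda \rightarrow \infty \), and therefore \(\Upsilon = 2\).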

In the rest of this work, we will always assume that:

  • Hypothesis \(({{\textbf {H}}}_{0})\). There exist a constant \(C_\Pi > 0\) and two positive numbers p, q such that, for every \(x \in E\) and \(t\geqslant 0\), we have:

    $$\begin{aligned} \Pi _{x}\Big ( \sup _{0 \leqslant u \leqslant t} d_{E}\big ( \xi _0, \xi _u \big )^{p} \Big ) \leqslant C_{\Pi } \cdot t^{q}, \quad \quad \text {and} \quad \quad q \cdot \big ( 1 - \Upsilon ^{-1} \big ) > 1. \end{aligned}$$

For instance, it can be checked that condition (\(\hbox {H}_{0}\)) is fulfilled if the Lévy tree has exponent \(\psi (\lambda )= \lambda ^\alpha \) for \(\alpha \in (1,2]\) and \(\xi \) is a Brownian motion. Let us discuss the implications of (\(\hbox {H}_{0}\)). Under \({{\textbf {P}}}_{\mu }\), denote the excursion intervals of H above its running infimum by \((\alpha _i, \beta _i)\). Recall from (2.8) that \((\rho ^\mu _t:= [k_{-I_t}\mu , \rho _t ]:~t \geqslant 0)\), under \({{\textbf {P}}}_0\), is distributed according to \({{\textbf {P}}}_\mu \), and note that \(H_t(\rho ^\mu ) = H(k_{-I_t}\mu ) + H(\rho _t)\), for \(t \geqslant 0\). By [11, Theorem 1.4.4], under \({{\textbf {P}}}_0\) the process \(H(\rho )\) is locally Hölder continuous with exponent m for any \(m \in (0, 1 - \Upsilon ^{-1})\). In particular, this holds for some \(m:=r\) verifying \(qr > 1\), by the second condition in (\(\hbox {H}_{0}\)). Since \(\big (H(k_{-I_t}\mu ): t \geqslant 0 \big )\) is constant on each excursion interval \((\alpha _i, \beta _i)\) and \((H(\rho _t): t \geqslant 0)\) is locally r-Hölder continuous, we deduce that \(H(\rho ^\mu )\) is locally r-Hölder continuous on each interval \([\alpha _i, \beta _i]\), \(i \in {\mathbb {N}}\). Said otherwise, \({{\textbf {P}}}_\mu \)-a.s., the paths of \(H(\rho )\) satisfy the conditions of Proposition 2.2, and we will henceforth assume that these conditions are satisfied for every path, and not only outside of a negligible set.
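
Let us briefly justify the example mentioned at the beginning of the previous paragraph – a standard scaling computation, included only for the reader's convenience. If \(\xi \) is a Brownian motion and \(d_E\) the Euclidean distance, then for every \(p>0\), Brownian scaling gives:

$$\begin{aligned} \Pi _x \Big ( \sup _{0 \leqslant u \leqslant t} d_E(\xi _0, \xi _u)^{p} \Big ) = t^{p/2} \cdot \Pi _0 \Big ( \sup _{0 \leqslant u \leqslant 1} |\xi _u|^{p} \Big ), \end{aligned}$$

so that the first condition in (\(\hbox {H}_{0}\)) holds with \(q = p/2\). Since \(\Upsilon = \alpha \) when \(\psi (\lambda ) = \lambda ^{\alpha }\), the second condition reads \((p/2)\cdot (1 - \alpha ^{-1}) > 1\), which holds as soon as \(p > 2\alpha /(\alpha -1)\).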

Finally, consider the canonical process \((\rho , W)\) in \({\mathbb {D}}({\mathbb {R}}_+, {\mathcal {M}}_f({\mathbb {R}}_+)\times {\mathcal {W}}_E )\), the space of \({\mathcal {M}}_f({\mathbb {R}}_+)\times {\mathcal {W}}_E\)-valued càdlàg paths. By our previous discussion, we deduce that we can define a probability measure on \({\mathbb {D}}({\mathbb {R}}_+, {\mathcal {M}}_f({\mathbb {R}}_+)\times {\mathcal {W}}_E )\) by setting

$$\begin{aligned} {\mathbb {P}}_{\mu ,\text {w}}({\textrm{d}}\rho ,\, {\textrm{d}}W):={{\textbf {P}}}_\mu ({\textrm{d}}\rho )\,Q^{H(\rho )}_{\text {w}}({\textrm{d}}W), \end{aligned}$$

for every \((\mu ,\text {w})\in \Theta \). The process \((\rho , W)\) under \({\mathbb {P}}_{\mu , {\text {w}}}\) is called the \(\psi \)-Lévy snake with spatial motion \(\xi \) started from \((\mu , {\text {w}})\). We denote its canonical filtration by \(({\mathcal {F}}_t:~t\geqslant 0)\) and observe that by construction, \({\mathbb {P}}_{\mu , {\text {w}}}\)–a.s., W has continuous paths. Now, the proof of [11, Theorem 4.1.2] applies without any change to our framework and gives that the process \(((\rho , W), ({\mathbb {P}}_{\mu , {\text {w}}}: (\mu , {\text {w}}) \in \Theta ))\) is a strong Markov process with respect to the filtration \(({\mathcal {F}}_{t+})\). It should be noted that assumption (\(\hbox {H}_{0}\)) is the same as the one appearing in [11, Proposition 4.4.1], for paths defined on \([0,\zeta _{\text {w}})\) and started from \(x\in E\). In the particular case \(\psi (\lambda ) = \lambda ^2/2\), the path regularity of W was already addressed in [20, Theorem 1.1].

Let us conclude our discussion of regularity issues by introducing the notion of snake paths, which summarises the regularity properties of \((\rho ,W)\), together with some related notation that will be used throughout this work. Recall that \({\mathcal {M}}_f({\mathbb {R}}_+)\), equipped with the topology of weak convergence, is a Polish space [18, Lemma 4.5]. We systematically denote the elements of the path space \({\mathbb {D}}({\mathbb {R}}_+, {\mathcal {M}}_f({\mathbb {R}}_+) \times {\mathcal {W}}_E)\) by:

$$\begin{aligned} (\uprho , \omega ) = \big ( (\uprho _s, \omega _s): \, s \in {\mathbb {R}}_+ \big ), \end{aligned}$$

and by definition, we have \((\rho _s(\uprho ), W_s(\omega )) = (\uprho _s, \omega _s )\) for \(s \in {\mathbb {R}}_+\). For each fixed \(s\geqslant 0\), \(\omega _s\) is an element of \({\mathcal {W}}_{E}\) with lifetime \(\zeta _{\omega _s}\), and the \({\mathbb {R}}_+\)-valued process \(\zeta (\omega ):= (\zeta _{\omega _s}: \, s \geqslant 0)\) is called the lifetime process of \(\omega \). We will occasionally use the notation \(\zeta _s(\omega )\) instead of \(\zeta _{\omega _s}\), and in such cases we will drop the dependence on \(\omega \) if there is no risk of confusion.

Definition 2.3

A snake path started from \((\mu , {\text {w}}) \in \Theta \) is an element \((\uprho , \omega ) \in {\mathbb {D}}({\mathbb {R}}_+, {\mathcal {M}}_f({\mathbb {R}}_+) \times {\mathcal {W}}_E)\) such that the mapping \(s \mapsto \omega _s\) is continuous, and satisfying the following properties:

  1. (i)

    \((\uprho _0, \omega _0 ) = (\mu ,\text {w})\).

  2. (ii)

    \((\uprho _s, \omega _s) \in \Theta \), and in particular \(H(\rho _s) = \zeta (\omega _s)\), for all \(s \geqslant 0\).

  3. (iii)

    \(\omega \) satisfies the snake property: for any \(0\leqslant s \leqslant s'\),

    $$\begin{aligned} \omega _s(t) = \omega _{s'}(t) \, \, \text { for all } \, \, 0 \leqslant t \leqslant \inf _{[s,s']} \zeta (\omega ). \end{aligned}$$

A continuous \({\mathcal {W}}_E\)-valued path \(\omega \) satisfying (iii) is called a snake trajectory. We point out that this notion had already been introduced in the context of the Brownian snake [1, Definition 6]. However, in the Brownian case the process W is Markovian and there is no need to work with pairs \((\uprho , \omega )\) – this is the reason why we have to introduce the notion of snake paths. We denote the collection of snake paths started from \((\mu , {\text {w}}) \in \Theta \) by \({\mathcal {S}}_{\mu , {\text {w}}}\) and simply write \({\mathcal {S}}_x\) instead of \({\mathcal {S}}_{0,x}\). Finally, we set:

$$\begin{aligned} {\mathcal {S}}:= \bigcup _{(\mu , {\text {w}}) \, \in \, \Theta } {\mathcal {S}}_{\mu , {\text {w}}}. \end{aligned}$$

For any given \((\uprho , \omega ) \in {\mathcal {S}}\), we denote its duration indifferently by

$$\begin{aligned} \sigma _{H(\uprho )} = \sigma (\omega ) = \sup \{t \geqslant 0: \, \zeta _{\omega _t} \ne 0 \}. \end{aligned}$$
(2.18)

Remark that, by continuity and the definition of \(Q^{h}_{\text {w}}\), the process \(((\rho , W), ({\mathbb {P}}_{\mu , {\text {w}}}: (\mu , {\text {w}}) \in \Theta ))\) takes values in \({\mathcal {S}}\) – it satisfies the snake property by (2.12) and the continuity of W. Said otherwise, \({\mathbb {P}}_{\mu ,\text {w}}\text {-a.s.}\), we have

$$\begin{aligned} \zeta _{s}=H(\rho _{s}), \,\, \text { for every } s \geqslant 0, \end{aligned}$$

and for any \(t \leqslant t'\)

$$\begin{aligned} W_{t}(s)=W_{t^{\prime }}(s), \,\, \text { for all } s\leqslant m_{H}(t,t^{\prime }). \end{aligned}$$

We stress that when working on \({\mathcal {S}}\) the equivalent notations \(\zeta _s\), \(H(\rho _s)\) and \(H_s\) will be used indifferently. The snake property implies that, for every \(t,t^{\prime } \geqslant 0\) such that \(p_{H}(t)=p_{H}(t^{\prime })\), we have \(W_{t}=W_{t^{\prime }}\). In particular, for such times it holds that \({\widehat{W}}_{t}={\widehat{W}}_{t^{\prime }}\) and hence \(({\widehat{W}}_t:~t\geqslant 0)\) can be defined on the quotient space \({\mathcal {T}}_H\). More precisely, under \({\mathbb {P}}_{\mu ,\text {w}}\), the function defined with a slight abuse of notation for all \(\upsilon \in {\mathcal {T}}_H\) as

$$\begin{aligned} \xi _{\upsilon }:={\widehat{W}}_{t}, \quad \text {where }t\text { is any element of } p_{H}^{-1}(\upsilon ), \end{aligned}$$

is well defined and leads us to the notion of tree-indexed processes. When \((\mu , {\text {w}}) = (0,x)\), the process \((\xi _\upsilon )_{\upsilon \in {\mathcal {T}}_H}\) is known as the Markov process \(\xi \) indexed by the tree \({\mathcal {T}}_H\) and started from x. In this work, we will need to consider the restriction of \((\rho ,W)\) to different intervals and therefore, it will be convenient to introduce a formal notion of subtrajectories.

Subtrajectories. Fix \(s<t\) such that \(H_s = H_t\) and \(H_r > H_s\) for all \(r \in (s,t)\). The subtrajectory of \((\rho ,W)\) on [s, t] is the process taking values in \({\mathbb {D}}({\mathbb {R}}_+,{\mathcal {M}}_{f}({\mathbb {R}}_+) \times {\mathcal {W}}_E)\), denoted by \((\rho ^{\prime }_{r},W^{\prime }_{r})_{r\geqslant 0}\) and defined as follows: for every \(r\geqslant 0\), set

$$\begin{aligned} \langle \rho ^{\prime }_{r} , f \rangle&:=\int \rho _{(r+s)\wedge t}({\textrm{d}}h)f(h \!- \!H_s)\mathbb {1}_{\{ h > H_s \}} \quad \text { and } \quad W^{\prime }_{r}(\cdot ) :=W_{(r+s)\wedge t}(H_{s}+\cdot \,). \end{aligned}$$

In particular, we have

$$\begin{aligned} \zeta (W^{\prime }_{r})=H_{(r+s)\wedge t}-H_{s}=H(\rho ^{\prime }_{r}), \quad \text { for all } r\geqslant 0. \end{aligned}$$

Remark that if \((\rho ,W)\) is a snake path, then the subtrajectory \((\rho ^{\prime }, W^{\prime })\) is also in \({\mathcal {S}}\). Informally, \(W^{\prime }\) encodes the labels \((\xi _{v}:~v\in p_{H}([s,t]))\).
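
Indeed, the snake property is inherited from W: a routine verification from the definitions shows that, for every \(0 \leqslant r \leqslant r'\),

$$\begin{aligned} \inf _{[r,r']} \zeta (W^{\prime }) = m_{H}\big ( (r+s)\wedge t, \, (r'+s)\wedge t \big ) - H_{s}, \end{aligned}$$

so that, by the snake property of W, the paths \(W^{\prime }_r\) and \(W^{\prime }_{r'}\) coincide on \([0, \inf _{[r,r']} \zeta (W^{\prime })]\).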

2.4 Excursion measures of the Lévy snake

Fix \(x\in E\) and consider the Lévy snake \((\rho ,W)\) under \({\mathbb {P}}_{0,x}\). By (2.6) and the fact that 0 is a regular instantaneous point for the reflected process \(X-I\), it follows that the null measure 0 is a regular instantaneous point for the Markov process \(\rho \). This yields that (0, x) is regular and instantaneous for the Markov process \((\rho ,W)\). Moreover, \((-I_{t}: t \geqslant 0)\) is a local time at 0 for \(\rho \) and hence, it is a local time at (0, x) for \((\rho ,W)\). We let \({\mathbb {N}}_{x}\) denote the excursion measure of \((\rho ,W)\) away from (0, x) associated with the local time \(-I\), and note that the duration \(\sigma \) of \((\rho , W)\) coincides with the stopping time \(\inf \{t > 0: \rho _t =0 \}\). We stress that \({\mathbb {N}}_{x}\) is a measure on the canonical space \({\mathbb {D}}({\mathbb {R}}_+, {\mathcal {M}}_f({\mathbb {R}}_+) \times {\mathcal {W}}_E)\). By excursion theory for the Markov process \((\rho , W)\), if \(\{(\alpha _i, \beta _i):~i \in {\mathbb {N}}\}\) stands for the excursion intervals of \((\rho , W)\) away from (0, x) and \((\rho ^i, W^i)\) are the corresponding subtrajectories then, under \({\mathbb {P}}_{0,x}\), the measure

$$\begin{aligned} \sum \limits _{i\in {\mathbb {N}}}\delta _{(-I_{\alpha _i},\rho ^{i},W^{i})}, \end{aligned}$$
(2.19)

is a Poisson point measure with intensity \(\mathbb {1}_{[0,\infty )}(\ell ){\textrm{d}}\ell \,{\mathbb {N}}_{x}({\textrm{d}}\rho , \,{\textrm{d}}\omega ).\) Let us mention that the enumeration can be chosen measurable with respect to \((\rho , W)\). For instance, for every \(k\in {\mathbb {Z}}\), one can consider the temporal enumeration of the atoms \((-I_{\alpha _i}, \rho ^i, W^i)\) with lifetime \(\sigma (W^i) \in (2^{-(k+1)}, 2^{-k} ]\), and then re-rank all the atoms according to \({\mathbb {N}}\); this is always feasible since a countable union of countable sets is countable. More generally, for the point measures built in terms of \((\rho , W)\) that we consider in this work, one can always assume that the enumeration of the atoms is measurable with respect to \((\rho , W)\). This can always be achieved by considering variations of the enumeration we just described; the specific details will be systematically omitted unless a new technical difficulty arises. Recalling the interpretation of the restrictions \({\mathbb {N}}_x( \, \cdot \, | \sigma > \varepsilon )\) as the law of the first excursion with duration greater than \(\varepsilon \), it follows that under \({\mathbb {N}}_{x}\), W satisfies the snake property and \((\rho , W) \in {\mathcal {S}}\). In particular, we can still make use of the definition of subtrajectories and of \((\xi _{\upsilon })_{\upsilon \in {\mathcal {T}}_{H}}\) under the excursion measure \({\mathbb {N}}_{x}\), and for simplicity we will use the same notation.
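
Let us also record a standard consequence of (2.19) – the exponential formula for Poisson point measures – which illustrates how \({\mathbb {N}}_x\) encodes the law of the indexed family of excursions: for every non-negative measurable functional F,

$$\begin{aligned} {\mathbb {E}}_{0,x}\Big [ \exp \Big ( - \sum _{i\in {\mathbb {N}}} F\big ( -I_{\alpha _i}, \rho ^{i}, W^{i} \big ) \Big ) \Big ] = \exp \Big ( - \int _0^\infty {\textrm{d}}\ell \, {\mathbb {N}}_{x}\big ( 1 - e^{-F(\ell , \, \cdot \, )} \big ) \Big ). \end{aligned}$$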

By the discussion at the end of Sect. 2.1, it is straightforward to verify that

$$\begin{aligned} {\mathbb {N}}_{x}({\textrm{d}}\rho ,\, {\textrm{d}}\eta , \, {\textrm{d}}W)=N({\textrm{d}}\rho , \, {\textrm{d}}\eta )\, Q^{H(\rho )}_{x}({\textrm{d}}W). \end{aligned}$$
(2.20)

Said otherwise, under \({\mathbb {N}}_{x}\):

  • The distribution of \((\rho , \eta )\) is \(N({\textrm{d}}\rho , \, {\textrm{d}}\eta )\);

  • The conditional distribution of W knowing \((\rho , \eta )\) is \(Q_{x}^{H(\rho )}\).

Remark that by construction and (2.11), under \({\mathbb {N}}_x\) we have

$$\begin{aligned} \big ( (\rho _t, \eta _t, W_t): t \in [0,\sigma ] \big ) \overset{(d)}{=}\ \big ( (\eta _{(\sigma -t )-}, \rho _{(\sigma -t )-}, W_{\sigma -t } ):t \in [0,\sigma ] \big ), \end{aligned}$$
(2.21)

where we used that by continuity, we have \(W_{\sigma -t }=W_{(\sigma -t )-}\) for every \(t\in [0,\sigma ]\).
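
In particular, projecting (2.21) onto its last coordinate shows that, under \({\mathbb {N}}_x\), the endpoint process is invariant under time reversal:

$$\begin{aligned} \big ( {\widehat{W}}_{t}:~t \in [0,\sigma ] \big ) \overset{(d)}{=}\ \big ( {\widehat{W}}_{\sigma - t}:~t \in [0,\sigma ] \big ), \end{aligned}$$

which, informally, corresponds to exploring the underlying tree in counterclockwise order.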

Let us now discuss a variant of (2.19) that holds when starting from an arbitrary \((\mu , {\text {w}}) \in \Theta \), and which will be used frequently in our computations. In this direction, let \({\mathbb {P}}_{\mu , {\text {w}}}^{\dag }\) be the distribution of \((\rho , W)\) under \({\mathbb {P}}_{\mu , {\text {w}}}\) killed at time \(\inf \{ t > 0: \rho _t = 0 \}\). In particular, observe that the lifetime of both \(\rho \) and H under \({\mathbb {P}}_{\mu , {\text {w}}}^\dag \) is \(\sigma := \sup \{ t \geqslant 0: \rho _t \ne 0 \}\). We stress that this is consistent with (2.18) and that \(\sigma \) coincides with \(\inf \{ t > 0: \rho _t = 0 \}\). By the discussion following (2.8), under \({\mathbb {P}}_{\mu , {\text {w}}}^\dag \) the process \(\langle \rho , 1 \rangle \) is a Lévy process started from \(\langle \mu ,1 \rangle \) and stopped when reaching 0. Now assume that \(\mu \ne 0\), write \(\big ( (\alpha _i, \beta _i): \, i \in {\mathbb {N}} \big )\) for the excursion intervals of \(\langle \rho ,1 \rangle \) over its running infimum under \({\mathbb {P}}^\dag _{\mu ,\text {w}}\), and denote the subtrajectory associated with \([\alpha _i, \beta _i]\) by \((\rho ^i, W^i)\). If for \(t \geqslant 0\) we write \(I_t:= \inf _{s \leqslant t}\langle \rho _s, 1 \rangle - \langle \mu , 1\rangle \), the measure

$$\begin{aligned} \sum _{i \in {\mathbb {N}}}\delta _{(-I_{\alpha _i}, \rho ^i, W^i)}, \end{aligned}$$
(2.22)

is a Poisson point measure with intensity \(\mathbb {1}_{[0, \langle \mu ,1 \rangle ]} (u) \, {\textrm{d}}u \, {\mathbb {N}}_{\text {w}( H( \kappa _{u} \mu ) )}({\textrm{d}}\rho , {\textrm{d}}W)\). Moreover, writing \(h_i:= H_{\alpha _i} = H_{\beta _i}\), we infer by (2.8) that \(h_i = H(\kappa _{-I_{\alpha _i}} \mu )\) and since the image measure of \(\mathbb {1}_{[0,\langle \mu , 1 \rangle ]}(u) \, {\textrm{d}}u\) under the map** \(u \mapsto H(\kappa _u \mu )\) is precisely \(\mu \), we deduce that under \({\mathbb {P}}^\dag _{\mu ,\text {w}}\) the measure

$$\begin{aligned} \sum _{i \in {\mathbb {N}}}\delta _{(h_i, \rho ^i, W^i)} \end{aligned}$$
(2.23)

is a Poisson point measure with intensity \(\mu ({\textrm{d}}h){\mathbb {N}}_{\text {w}(h)}({\textrm{d}}\rho , {\textrm{d}}W)\). We refer to [11, Lemma 4.2.4] for additional details.
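
In particular, an application of Campbell's formula to the Poisson point measure (2.23) gives, for every non-negative measurable functional G:

$$\begin{aligned} {\mathbb {E}}^{\dag }_{\mu , {\text {w}}}\Big [ \sum _{i \in {\mathbb {N}}} G\big ( h_i, \rho ^{i}, W^{i} \big ) \Big ] = \int \mu ({\textrm{d}}h) \, {\mathbb {N}}_{{\text {w}}(h)}\big ( G(h, \rho , W) \big ). \end{aligned}$$

This first-moment identity will be applied several times in the next section.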

We close this section by recalling a many-to-one formula that will be used frequently to obtain explicit computations. We start with some preliminary notation: consider a 2-dimensional subordinator \((U^{(1)}, U^{(2)})\) defined on some auxiliary probability space \((\Omega _0, {\mathcal {F}}_0, P^0 )\) with Laplace exponent given by

$$\begin{aligned} - \log E^0 \Big [ \exp \big ( - \lambda _1 U^{(1)}_1 - \lambda _2 U^{(2)}_1 \big ) \Big ]:= {\left\{ \begin{array}{ll} \dfrac{\psi (\lambda _1) - \psi (\lambda _2)}{\lambda _1 - \lambda _2} -\alpha , &{} \text {if } \lambda _1 \ne \lambda _2, \\[4pt] \psi '(\lambda _1) - \alpha , &{} \text {if } \lambda _1 = \lambda _2, \end{array}\right. } \end{aligned}$$
(2.24)

where \(E^0\) stands for the expectation taken with respect to \(P^0\) and \(\alpha \) is the drift coefficient in (2.1). Notice that in particular \(U^{(1)}\) and \(U^{(2)}\) are subordinators with Laplace exponent \(\lambda \mapsto \psi (\lambda )/\lambda - \alpha \). Let \((J_a, \check{J}_a)\), for \(a \in [0,\infty ]\), be the pair of random measures defined by

$$\begin{aligned} J_a({\textrm{d}}r):= \mathbb {1}_{[0,a]}(r)\, {\textrm{d}}U^{(1)}_r \quad \text { and } \quad \check{J}_a({\textrm{d}}r):= \mathbb {1}_{[0,a]}(r)\, {\textrm{d}}U^{(2)}_r, \end{aligned}$$

with the convention \(J_{\infty }({\textrm{d}}r):= {\textrm{d}}U^{(1)}_r\), so that \(\langle J_\infty , 1 \rangle = \infty \). The following many-to-one equation will play a central role throughout this work:

Lemma 2.4

For every \(x \in E\) and every non-negative measurable function \(\Phi \) on \({\mathcal {M}}_f({\mathbb {R}}_+)^2 \times {\mathcal {W}}_E\), we have:

$$\begin{aligned} {\mathbb {N}}_{x} \Big ( \int _0^{\sigma } {\textrm{d}}s \, \Phi \big (\rho _s, \eta _s, W_s \big ) \Big ) = \int _0^{\infty } {\textrm{d}}a \, \exp (-\alpha a) \, E^0 \otimes \Pi _x \Big [ \Phi \big ( J_a, \check{J}_a, (\xi _r:~r\leqslant a) \big ) \Big ], \end{aligned}$$
(2.25)

where \(\alpha \) is the drift term appearing in (2.1).

Proof

First, remark that we have

$$\begin{aligned} {\mathbb {N}}_{x} \Big ( \int _0^{\sigma } {\textrm{d}}s \, \Phi \big (\rho _s, \eta _s, W_s \big ) \Big )=\int _{0}^{\infty } {\textrm{d}}s \, {\mathbb {N}}_{x} \Big ( \mathbb {1}_{\{s< \sigma _H \}} \, \Phi \big (\rho _s, \eta _s, W_s \big ) \Big ). \end{aligned}$$

Next, we use (2.20) to write the previous display in the form:

$$\begin{aligned}{} & {} \int _{0}^{\infty } {\textrm{d}}s \, {\mathbb {N}}_{x} \Big ( \mathbb {1}_{\{s < \sigma _H\}} \,\Pi _{x}\Big [ \Phi \Big (\rho _s, \eta _s, \big (\xi _r:~r\leqslant H(\rho _s)\big ) \Big )\Big ] \Big ) \\{} & {} \quad = N \Big (\int _{0}^{\sigma } {\textrm{d}}s \,\Pi _{x}\Big [ \Phi \Big (\rho _s, \eta _s, \big (\xi _r:~r\leqslant H(\rho _s)\big ) \Big )\Big ] \Big ). \end{aligned}$$

Since now \(\Pi _x \big [ \Phi \big (\rho _s, \eta _s, (\xi _r: r \leqslant H(\rho _s))\big ) \big ]\) is a functional of \((\rho _s, \eta _s)\), it suffices to establish (2.25) for a functional only depending on the pair \((\rho _s, \eta _s)\). However, this is precisely formula (18) in [12]. \(\square \)
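
As a simple illustration of (2.25), taking \(\Phi (\mu , \nu , {\text {w}}):= f(\widehat{{\text {w}}})\) for a non-negative measurable function f on E – so that on the right-hand side \(\Phi \) evaluates f at the endpoint \(\xi _a\) – yields the occupation-type identity:

$$\begin{aligned} {\mathbb {N}}_{x} \Big ( \int _0^{\sigma } {\textrm{d}}s \, f\big ( {\widehat{W}}_s \big ) \Big ) = \int _0^\infty {\textrm{d}}a \, \exp (-\alpha a) \, \Pi _x \big [ f(\xi _a) \big ]. \end{aligned}$$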

For later use, observe that by an application of the many-to-one formula (2.25), combined with the identity \(H(J_a) = a\) (which is justified in the proof of Proposition 3.3 below), for every \(\varepsilon >0\) we have

$$\begin{aligned} N\Big (\int _{0}^{\sigma }{\textrm{d}}s \, \mathbb {1}_{\{0\leqslant H(\rho _s)<\varepsilon \}}\Big )=\int _0^\varepsilon {\textrm{d}}a \, \exp \big (- \alpha a\big )\leqslant \varepsilon . \end{aligned}$$
(2.26)

3 Special Markov property

In this section we state and prove the (strong) special Markov property for the Lévy snake. This result was originally established in [21, Section 2] in the special case of Brownian motion indexed by the Brownian tree, viz. when the Lévy exponent of the tree is of the form \(\psi (\lambda )=\beta \lambda ^{2}\) and the spatial motion \(\xi \) is a Brownian motion. This result plays a fundamental role in the study of Brownian motion indexed by the Brownian tree, see for example [21, 24, 29, 30]. More recently, a stronger version was proved in [24], still for \(\psi (\lambda )=\beta \lambda ^{2}\) but holding for more general spatial motions \(\xi \). In this section we extend this result to an arbitrary exponent \(\psi \) of a Lévy tree. Even though we follow a strategy similar to the one introduced in [24], general Lévy trees are significantly less regular than the Brownian tree – in particular, the height process H is not Markovian. The arguments need to be carefully reworked; for instance, the existence of points with infinite multiplicity complicates the proof considerably.

We start by introducing some standard notation that will be used in the rest of the section, and by recalling the preliminaries needed for our purposes. Fix \(x \in E\) and, for an arbitrary open subset \(D \subset E\) containing x and \(\text {w}\in {\mathcal {W}}_{E,x}\), set

$$\begin{aligned} \,\tau _{D}(\text {w}):=\inf \big \{t\in [0,\zeta _{\text {w}}]: ~ \text {w}(t)\notin D\big \}, \end{aligned}$$

with the usual convention \(\inf \varnothing =\infty \). Similarly, we will write \(\tau _{D}(\xi ):=\inf \{t\geqslant 0: ~ \xi _{t}\notin D\}\) for the exit time from D of the spatial motion \(\xi \). When considering the latter, the dependency on \(\xi \) is usually dropped when there is no risk of confusion. In the rest of the section, we will always assume that:

$$\begin{aligned} \Pi _{x}\big ( \tau _{D}(\xi ) < \infty \big ) > 0. \end{aligned}$$

The special Markov property is, roughly speaking, a spatial version of the Markov property. In order to state it, we need to properly define the notion of paths “inside D” and “excursions outside D”, as well as a notion of measurability with respect to the information generated by the trajectories staying inside D. Section 3.1 is devoted to the study of paths inside D and to a fundamental functional of the Lévy snake, called the exit local time. The study of the excursions outside D is postponed to Sect. 3.2.

3.1 The exit local time

Let us begin by introducing some useful operations and notation.

Truncation. We start by defining the truncation of a path \((\uprho ,\omega ) \in {\mathbb {D}}({\mathbb {R}}_+, {\mathcal {M}}_f({\mathbb {R}}_+) \times {\mathcal {W}}_{E,x} )\) to D – we stress that we have \(\omega _s(0) = x\) for every \(s \geqslant 0\). In this direction, define the functional

$$\begin{aligned} V^D_t(\uprho , \omega ):= \int _0^t {\textrm{d}}s \, \mathbb {1}_{\{ \zeta _{\omega _s} \leqslant \tau _D(\omega _s) \}}, \quad \quad t \geqslant 0, \end{aligned}$$
(3.1)

measuring the amount of time spent by \(\omega \) without leaving D up to time t. Let us be more precise: at time s, we will say that \(\omega _s\) does not leave D (or stays in D) if \(\omega _s ( [0,\zeta _s) ) \subset {D}\) (notice that \({\widehat{\omega }}_s\) might be in \(\partial D\)) and, on the other hand, we say that the trajectory exits D if \(\omega _s([0,\zeta _s)) \cap D^c \ne \varnothing \). Observe that a trajectory \((\omega _s(t): t \in [0,\zeta _s] )\) might exit the domain D and return to it before the lifetime \(\zeta _s\), but such a trajectory will not be accounted for by \(V^D\). Write \({\mathcal {Y}}_{D}(\uprho , \omega ):= V^D_{\sigma (\omega )} (\uprho , \omega )\) for the total amount of time spent in D, and for every \(s\in [0, {\mathcal {Y}}_{D}(\uprho , \omega ))\) set

$$\begin{aligned} \Gamma _s^{D}(\uprho , \omega ):=\inf \big \{t\geqslant 0: V_t^D(\uprho , \omega ) > s\big \},\, \end{aligned}$$

with the convention \(\Gamma ^D_s (\uprho ,\omega ):= \sigma (\omega )\), if \(s \geqslant {\mathcal {Y}}_D(\uprho ,\omega )\). The truncation of \((\uprho ,\omega )\) to D is the element of \({\mathbb {D}}({\mathbb {R}}_+, {\mathcal {M}}_f({\mathbb {R}}_+) \times {\mathcal {W}}_{E,x} )\) with lifetime \({\mathcal {Y}}_{D}(\uprho , \omega )\) defined as follows:

$$\begin{aligned} \text {tr}_{D}\big (\uprho ,\omega \big ):=(\uprho _{\Gamma _s^{D}(\uprho , \omega )},\omega _{\Gamma _s^{D}(\uprho , \omega )})_{s\in {\mathbb {R}}_+}. \end{aligned}$$

Indeed, observe that the trajectory \((\uprho _{\Gamma ^D},\omega _{\Gamma ^D})\) is càdlàg since \(\uprho \), \(\omega \) and \(\Gamma ^{D}\) are càdlàg. For simplicity, we set \(\text {tr}_{D} (\omega ) = (\omega _{\Gamma _s^{D}(\omega )})_{s\in {\mathbb {R}}_+ }\) and we write \( \text {tr}_D ({\widehat{\omega }})\) for \( {\widehat{\omega }}_{\Gamma ^D}\). Roughly speaking, \(\text {tr}_{D}(\omega )\) removes from \(\omega \) the trajectories \(\omega _s\) that leave D, glues the remaining endpoints, and hence encodes the trajectories \(\omega _s\) that stay in D. Let us stress that when \((\uprho , \omega )\) is an element of \({\mathcal {S}}_x\), the truncation \(\text {tr}_D(\uprho , \omega )\) is still in \({\mathcal {S}}_x\) since \(\text {tr}_{D} (\omega )\) is a snake trajectory taking values in \(D\cup \partial D\) by [1, Proposition 10], and condition (ii) in Definition 2.3 is clearly satisfied. Recall that \((\rho , W)\) stands for the canonical process in \({\mathbb {D}}({\mathbb {R}}_+, {\mathcal {M}}_f({\mathbb {R}}_+)\times {\mathcal {W}}_{E,x} )\), and that it takes values in \({\mathcal {S}}_{\mu , {\text {w}}}\) under \({\mathbb {P}}_{\mu , {\text {w}}}\) for \((\mu , {\text {w}}) \in \Theta \) and in \({\mathcal {S}}_{y}\) under \({\mathbb {N}}_{y}\) for \(y \in E\). We will also need to introduce the sigma field

$$\begin{aligned} {\mathcal {F}}^D:= \sigma \big ( \text {tr}_D( \rho , W)_s: s \geqslant 0 \big ) \end{aligned}$$
(3.2)

in \({\mathbb {D}}({\mathbb {R}}_+, {\mathcal {M}}_f({\mathbb {R}}_+) \times {\mathcal {W}}_E )\), which, roughly speaking, contains the information generated by the trajectories that stay in D. The following technical lemma will often be useful. It states that, under \({\mathbb {N}}_x\), when a trajectory \(W_s\) exits the domain D, the measure \(\rho _s\) does not have an atom at level \(\tau _D(W_s)\). More precisely:

Lemma 3.1

Let \(D\subset E\) be an arbitrary open subset containing x. Then, \({\mathbb {N}}_{x}\)–a.e.

$$\begin{aligned} \rho _s(\{ \tau _D(W_s)\}) = 0,\quad \text { for all } s \geqslant 0. \end{aligned}$$

Proof

First, remark that the many-to-one formula (2.25) gives:

$$\begin{aligned}{} & {} {\mathbb {N}}_{x}\Big ( \int _0^\sigma {\textrm{d}}s \, \mathbb {1}_{\{ \tau _D(W_s)< \infty \}} \rho _s(\{\tau _D(W_s)\}) \Big ) \\{} & {} \quad = \int _0^\infty {\textrm{d}}a \, \exp (-\alpha a) E^0\otimes \Pi _x \Big ( \mathbb {1}_{\{ \tau _D((\xi _u: u \leqslant a)) < \infty \}} J_a(\{\tau _D(\xi _u: u \leqslant a)\}) \Big ), \end{aligned}$$

which vanishes by the independence between \(\xi \) and \(J_a\), since \(J_a\) has a.s. no atom at any fixed level. This shows that, \({\mathbb {N}}_x\)–a.e., the Lebesgue measure of the set \(\{ s\in [0,\sigma ]: \rho _s(\{\tau _D(W_s)\}) \ne 0 \}\) is null, and we now claim that this implies that \({\mathbb {N}}_{x}\)–a.e. \(\rho _s(\{ \tau _D(W_s)\}) = 0\) for all \(s \geqslant 0\). We argue by contradiction. Suppose that for some \(s>0\), we have \(\rho _s(\{ \tau _D(W_s)\}) > 0\). In this case, recalling that the exploration process \(\rho \) is càdlàg with respect to the total variation distance, we must have

$$\begin{aligned} \lim _{\varepsilon \downarrow 0}\big |\rho _s(\{ \tau _D(W_s)\}) - \rho _{s+\varepsilon }(\{ \tau _D(W_s)\})\big | \leqslant \lim _{\varepsilon \downarrow 0} \sup _{A \in {\mathcal {B}}({\mathbb {R}}) } \big |\rho _s(A) - \rho _{s+\varepsilon }(A)\big | = 0. \end{aligned}$$

We infer that for some \(\delta > 0\), it holds that \(\rho _u(\{ \tau _D(W_s)\})> 0\) for all \(u \in [s,s+\delta )\). In particular, we have \(H_u \geqslant H_s\) for all \(u \in [s,s+\delta )\). By the snake property, we deduce that, for every \(u \in [s,s+\delta )\), \(\tau _D(W_s) = \tau _D(W_u)\) and consequently:

$$\begin{aligned} \rho _{u}(\{ \tau _D(W_{u})\})=\rho _u(\{ \tau _D(W_s)\})> 0. \end{aligned}$$

However, this is in contradiction with the first part of the proof and the desired result follows. \(\square \)

Exit local time. As in classical excursion theory, we will need to properly index the excursions outside D, but we will also ask the indexing to be compatible with the order induced by H. To achieve this, we will make use of the exit local time from D. We briefly recall its definition and main properties, and we refer to [11, Section 4.3] for a more detailed account. By Propositions 4.3.1 and 4.3.2 in [11], under \({\mathbb {N}}_{x}\) and \({\mathbb {P}}_{0,x}\), the limit

$$\begin{aligned} L_{s}^{D}(\rho , W):=\lim \limits _{\varepsilon \rightarrow 0} \frac{1}{\varepsilon } \int _{0}^{s} {\textrm{d}}r \mathbb {1}_{\{\tau _{D}(W_{r})<H(\rho _r)<\tau _{D}(W_{r})+\varepsilon \}}, \end{aligned}$$
(3.3)

exists for every \(s\geqslant 0\), where the convergence holds uniformly on compact intervals in \(L_1({\mathbb {P}}_{0,x})\) and \(L_1({\mathbb {N}}_{x})\). As usual, when there is no risk of confusion, the dependence on \((\rho , W)\) is omitted in \(L^D(\rho , W)\). This defines a continuous non-decreasing process \(L^{D}\), called the exit local time from D of \((\rho ,W)\). We stress that, under \({\mathbb {N}}_x\) and \({\mathbb {P}}_{0,x}\), the process \((\rho , W)\) takes values in \({\mathcal {S}}_x\), which yields that \(H_s = \zeta _s\) for every \(s \geqslant 0\). The following first-moment formula will often be used in our computations.

Lemma 3.2

For every non-negative measurable functional \(\Phi \) on \({\mathcal {M}}_f({\mathbb {R}}_+)^2 \times {\mathcal {W}}_E\), we have:

$$\begin{aligned} {\mathbb {N}}_{x} \left( \int _0^\sigma {\textrm{d}}L_s^D~ \Phi (\rho _s, \eta _s, W_s ) \right) = E^0 \otimes \Pi _x \Big ( \mathbb {1}_{\{ \tau _D < \infty \}} \exp (-\alpha \tau _D) \, \Phi \big ( J_{\tau _D}, \check{J}_{\tau _D}, (\xi _r:~r\leqslant \tau _D) \big ) \Big ). \end{aligned}$$
(3.4)

When the function \(\Phi \) only depends on \((\rho , W)\), this result was established in [11, Proposition 4.3.2] and the same argument can be employed to establish (3.4). However, the proof of [11, Proposition 4.3.2] is rather technical and for completeness we provide a shorter argument.

Proof

First, observe that by the approximation (3.3), up to considering a subsequence, the measures \(\varepsilon ^{-1}\mathbb {1}_{\{\tau _{D}(W_{r})<H_{r}<\tau _{D}(W_{r})+\varepsilon \}} {\textrm{d}}r\), for \(\varepsilon > 0\), converge weakly as \(\varepsilon \downarrow 0\) towards \({\textrm{d}}L_r^D\). We now claim that, \({\mathbb {N}}_{x}\)-a.e., for every non-negative continuous function \(\Phi \) on \({\mathcal {M}}_f({\mathbb {R}}_+)^2\times {\mathcal {W}}_E\) bounded above by 1 we have:

$$\begin{aligned} \int _{0}^{\sigma }{\textrm{d}}L_s^D~ \Phi (\rho _s,\eta _s, W_s)= \lim \limits _{\varepsilon \rightarrow 0}\varepsilon ^{-1}\int _{0}^{\sigma } {\textrm{d}}r~ \Phi (\rho _r,\eta _r, W_r) \mathbb {1}_{\{\tau _{D}(W_{r})<H_{r}<\tau _{D}(W_{r})+\varepsilon \}}. \end{aligned}$$

To see this, we make a couple of observations. On the one hand, the approximation (3.3) yields that the measure \({\textrm{d}}L_s^D\) is \({\mathbb {N}}_x\)–a.e. supported on the set \(\{ s \in {\mathbb {R}}_+: H_s = \tau _D(W_s) \}\). On the other hand, by Lemma 3.1 and the duality (2.21), it holds that \({\mathbb {N}}_x\)–a.e., for every \(s \geqslant 0\), we have \(\rho _s(\{ \tau _D(W_s) \}) = \eta _s(\{ \tau _D(W_s) \})=0\). Putting these two facts together yields that, under \({\mathbb {N}}_x\), the function \(s\mapsto \Phi (\rho _s,\eta _s,W_s)\) is continuous at \({\textrm{d}}L^D\)-almost every \(s\geqslant 0\). The previous display now follows from the portmanteau theorem. Let us now deduce the statement of the lemma. In this direction, notice that an application of Fatou's lemma yields:

$$\begin{aligned}{} & {} {\mathbb {N}}_{x} \left( \int _0^\sigma {\textrm{d}}L_s^D~ \Phi (\rho _s, \eta _s, W_s ) \right) \\{} & {} \quad \leqslant \liminf \limits _{\varepsilon \rightarrow 0} \varepsilon ^{-1}\cdot {\mathbb {N}}_{x} \left( \int _{0}^{\sigma } {\textrm{d}}r~ \Phi (\rho _r,\eta _r, W_r) \mathbb {1}_{\{\tau _{D}(W_{r})<H_{r}<\tau _{D}(W_{r})+\varepsilon \}} \right) . \end{aligned}$$

By (2.25) and the dominated convergence theorem, the right-hand side of the previous display can be written as:

$$\begin{aligned}&\lim \limits _{\varepsilon \rightarrow 0} \varepsilon ^{-1} \int _0^\infty {\textrm{d}}a \, \exp (-\alpha a) \, E^0 \otimes \Pi _x \Big [ \mathbb {1}_{\{ \tau _D< a < \tau _D + \varepsilon \}} \, \Phi \big (J_a, \check{J}_a, (\xi _r:~r\leqslant a)\big ) \Big ] \\&\quad = E^0 \otimes \Pi _x \Big ( \mathbb {1}_{\{ \tau _D < \infty \}} \exp (-\alpha \tau _D) \, \Phi \big (J_{\tau _D}, \check{J}_{\tau _D}, (\xi _r:~r\leqslant \tau _D)\big ) \Big ), \end{aligned}$$
which gives the inequality:

$$\begin{aligned} {\mathbb {N}}_{x} \left( \int _0^\sigma {\textrm{d}}L_s^D~ \Phi (\rho _s, \eta _s, W_s ) \right) \leqslant E^0 \otimes \Pi _x \Big ( \mathbb {1}_{\{ \tau _D < \infty \}} \exp (-\alpha \tau _D) \, \Phi \big (J_{\tau _D}, \check{J}_{\tau _D}, (\xi _r:~r\leqslant \tau _D)\big ) \Big ). \end{aligned}$$
(3.5)

Moreover, as we mentioned before starting the proof, by [11, Proposition 4.3.2] we have an equality in (3.5) when the functional \(\Phi \) does not depend on \(\eta \), and in particular in the case \(\Phi =1\). Therefore, if we combine the bound (3.5) with the same bound obtained by considering \(1-\Phi \) instead of \(\Phi \), we obtain the desired equality (3.4). \(\square \)
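
For instance, taking \(\Phi = 1\) in (3.4) identifies the first moment of the total exit local time:

$$\begin{aligned} {\mathbb {N}}_{x}\big ( L^{D}_{\sigma } \big ) = E^0 \otimes \Pi _x \big ( \mathbb {1}_{\{\tau _D < \infty \}} \exp (-\alpha \tau _D) \big ), \end{aligned}$$

which is, in particular, bounded above by 1.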

Observe that as a straightforward consequence of (3.3) or (3.4), we have

$$\begin{aligned} \text {supp } {\textrm{d}}L_s^D \subseteq \{ s\geqslant 0:~ \tau _D(W_s) = H_s \}, \quad {\mathbb {N}}_{x} \text {-a.e.} \end{aligned}$$

We stress that \(L^D\) is constant on every interval on which \(W_s\) stays in D, as well as on each connected component of

$$\begin{aligned} \{s\geqslant 0:~\tau _D(W_s)< H_s\}. \end{aligned}$$

We call such a connected component an excursion interval from D. This family of intervals will be studied in detail in the next section. The process \(L^D\) is not measurable with respect to \({\mathcal {F}}^{D}\), the informal reason being that, as a process in the original time scale, it carries the information about the lengths of the excursions from D. However, as we are going to show in Proposition 3.4, the time-changed process

$$\begin{aligned} {\widetilde{L}}^D:= \big (L_{\Gamma ^D_{s}}^{D} \big )_{s\in {\mathbb {R}}_+} \end{aligned}$$

is \({\mathcal {F}}^{D}\)-measurable – notice that, by means of the time change, we have removed from \(L^D\) precisely the constancy intervals generated by the excursions from D. This measurability property will be crucial for the proof of the special Markov property, and the rest of this section is devoted to establishing it.

First, remark that we have so far only defined the exit local time under the measures \({\mathbb {P}}_{0,x}\) and \({\mathbb {N}}_x\) for \(x \in D\). In order to be able to apply the Markov property, we need to extend the definition to more general initial conditions \((\mu , {\text {w}}) \in \Theta \). This construction will also be essential for the results of Sect. 4. The precise statement is given in the following proposition:

Proposition 3.3

Fix \((\mu ,\textrm{w})\in \Theta \) such that \(\textrm{w}(0)\) \(\in D\) and suppose that \(\mu (\{\tau _D(\textrm{w})\})=0\). Then, under \({\mathbb {P}}_{\mu , \textrm{w}}\) there exists a continuous, non-decreasing process \(L^D\) with associated Lebesgue-Stieltjes measure \({\textrm{d}}L^D\) supported on \(\{ t \in {\mathbb {R}}_+: {\widehat{W}}_t \in \partial D \}\), such that, for every \(t \geqslant 0\)

$$\begin{aligned} L^D_t(\rho , W)= \lim \limits _{\varepsilon \rightarrow 0} \frac{1}{\varepsilon } \int _0^t {\textrm{d}}s \, \mathbb {1}_{\{ \tau _D(W_s)< H(\rho _s) < \tau _D(W_s) + \varepsilon \}}, \end{aligned}$$
(3.6)

where the convergence holds uniformly on compact intervals in \(L^{1}({\mathbb {P}}_{\mu ,\textrm{w}})\). Moreover:

  1. (i)

    Under \({\mathbb {P}}_{\mu , \text {w} }\), if \(\tau _D(\text {w} ) < \infty \), we have \(L^D_t(\rho ,W) = 0\) for every \(t \leqslant \inf \{s \geqslant 0: H(\rho _s) < \tau _D(\text {w} ) \}\).

  2. (ii)

    Under \({\mathbb {P}}^{\dag }_{\mu ,\text {w} }\), with \(\mu \ne 0\), recall the random point measure \(\sum _{i \in {\mathbb {N}}}\delta _{(h_i, \rho ^i, W^i)}\) from (2.23). Then we have:

    $$\begin{aligned} L^D_\infty (\rho ,W) = \sum \limits _{h_i<\tau _D(\text {w} )}L^D_\infty (\rho ^i,W^i), \quad {\mathbb {P}}^{\dag }_{\mu ,\text {w} }\text {-a.s.} \end{aligned}$$
    (3.7)

Proof

Let us start with some preliminary remarks and introduce some notation. Fix \((\mu ,{\text {w}})\in \Theta \) with \({\text {w}}(0) \in D\) satisfying \(\mu (\{\tau _D({\text {w}})\}) = 0\). We write

$$\begin{aligned} T_r:=\inf \{t\geqslant 0:~H_t=r\}, \text { for every } r \geqslant 0, \quad \text {and} \quad T_0^{+}:=\inf \{t\geqslant 0:~\langle \rho _t, 1\rangle =0\}. \end{aligned}$$

By (3.3) and the strong Markov property, we already know that \(\varepsilon ^{-1} \int _{T_0^+}^{T_0^++t} {\textrm{d}}s \, \mathbb {1}_{\{ \tau _D(W_s)< H_s < \tau _D(W_s) + \varepsilon \}}\) converges as \(\varepsilon \downarrow 0\), uniformly on compact intervals in \(L^{1}({\mathbb {P}}_{\mu ,\text {w}})\), towards a non-decreasing continuous process whose Lebesgue-Stieltjes measure is supported on \(\{t\geqslant 0: {\widehat{W}}_{T_0^+ + t} \in \partial D\}\). Consequently, it suffices to prove the proposition under \({\mathbb {P}}_{\mu ,{\text {w}}}^{\dag }\) with \(\mu \ne 0\). In this direction, we set

$$\begin{aligned} I(t,\varepsilon ) := \frac{1}{\varepsilon } \int _0^t {\textrm{d}}s \, \mathbb {1}_{\{ \tau _D(W_s)< H_s < \tau _D(W_s) + \varepsilon \}}, \end{aligned}$$

for every \(\varepsilon > 0\). Recall now that under \({\mathbb {P}}^\dag _{\mu , {\text {w}}}\), the process \(\langle \rho ,1\rangle \) is a Lévy process started at \(\langle \mu ,1\rangle \) and killed at its first hitting time of 0. Write \(((\alpha _i, \beta _i):~i \in {\mathbb {N}})\) for the excursion intervals of \(\langle \rho ,1 \rangle \) over its running infimum, and let \((\rho ^i, W^i)\) be the subtrajectory associated with the excursion interval \([\alpha _i, \beta _i]\). To simplify notation, we also set \(h_i:=H_{\alpha _i}\) and recall from (2.23) that the measure \({\mathcal {M}}:= \sum _{i \in {\mathbb {N}}}\delta _{(h_i, \rho ^i, W^i)}\) is a Poisson point measure with intensity \(\mu ({\textrm{d}}h){\mathbb {N}}_{{\text {w}}(h)}({\textrm{d}}\rho , {\textrm{d}}W)\).

We suppose first that \(\tau _D({\text {w}}) \geqslant \zeta _{\text {w}}\). We shall prove that the collection \(\big (I(t,\varepsilon ), t \geqslant 0\big )\) for \(\varepsilon > 0\) is Cauchy in \(L_1({\mathbb {P}}^\dag _{\mu , {\text {w}}})\) uniformly on compact intervals as \(\varepsilon \downarrow 0\), viz.

$$\begin{aligned} \lim _{\delta , \varepsilon \rightarrow 0}{\mathbb {E}}_{\mu , {\text {w}}}^{\dag } \big [ \sup _{s \leqslant t} |I(s,\varepsilon ) - I(s,\delta )| \big ] = 0. \end{aligned}$$
(3.8)

This directly implies the existence of \(L^D\), defined as in (3.6), as well as point (i). We shall then deduce (ii), and the remaining case \(\tau _D({\text {w}}) < \zeta _{\text {w}}\) is treated afterwards. Let us proceed with the proof of (3.8). Since the Lebesgue measure of \(\{ t \in [0,\sigma ]: \langle \rho _t,1 \rangle = \inf _{s \leqslant t} \langle \rho _s, 1 \rangle \}\) is null, we can write

$$\begin{aligned} I(t,\varepsilon ) = \frac{1}{\varepsilon } \sum _{i \in {\mathbb {N}}} \int _{\alpha _i \wedge t }^{\beta _i \wedge t} {\textrm{d}}s \, \mathbb {1}_{\{ \tau _D(W_s)< H_s < \tau _D(W_s) + \varepsilon \}}, \end{aligned}$$

which yields that \(\mathbb {E}_{\mu , {\text {w}}}^{\dag } \big [ \sup _{s \leqslant t} |I(s,\varepsilon ) - I(s,\delta )| \big ]\) is bounded above by:

$$\begin{aligned}&{\mathbb {E}}_{\mu , {\text {w}}}^{\dag } \Big [ \sum _{i \in {\mathbb {N}}} \sup _{s \leqslant t } \big |\frac{1}{\varepsilon } \int _{\alpha _i \wedge s}^{\beta _i \wedge s} {\textrm{d}}u \, \mathbb {1}_{\{ \tau _D(W_u)< H_u< \tau _D(W_u) + \varepsilon \}} - \frac{1}{\delta } \int _{\alpha _i \wedge s}^{\beta _i \wedge s} {\textrm{d}}u \, \mathbb {1}_{\{ \tau _D(W_u)< H_u< \tau _D(W_u) + \delta \}} \big | \Big ] \\&\quad \leqslant {\mathbb {E}}_{\mu , {\text {w}}}^{\dag } \Big [ \sum _{i \in {\mathbb {N}}} \sup _{s \leqslant \sigma (W^i) } \big |\frac{1}{\varepsilon } \int _0^{s \wedge t} {\textrm{d}}u \, \mathbb {1}_{\{ \tau _D(W^i_u)< H(\rho _u^i)< \tau _D(W^i_u) + \varepsilon \}} - \frac{1}{\delta } \int _0^{s \wedge t} {\textrm{d}}u \, \mathbb {1}_{\{ \tau _D(W^i_u)< H(\rho _u^i) < \tau _D(W^i_u) + \delta \}} \big | \Big ]. \end{aligned}$$

Since \(\mu (\{ \tau _D({\text {w}}) \}) = 0\), an application of Campbell's formula to the Poisson point measure \({\mathcal {M}}\) shows that the last display is given by

$$\begin{aligned} \int _{[0,\tau _D({\text {w}}))} \mu ({\textrm{d}}h) \, {\mathbb {N}}_{{\text {w}}(h)}\Big ( \sup _{s \leqslant t} | I(s,\varepsilon ) - I(s,\delta ) | \Big ). \end{aligned}$$
(3.9)

Let us now show that (3.9) converges towards 0 as \(\varepsilon , \delta \downarrow 0\). Since for every \(h \in [0,\tau _D({\text {w}}))\) we have \({\text {w}}(h) \in D\), the term inside the integral in (3.9) converges towards 0 as \(\varepsilon , \delta \downarrow 0\) by the approximation of exit local times under the excursion measure given in (3.3). Since \(\mu \) is a finite measure, by dominated convergence it suffices to show that the term

$$\begin{aligned} {\mathbb {N}}_{{\text {w}}(h)}\Big ( \sup _{s \leqslant t} | I(s,\varepsilon ) - I(s,\delta ) | \Big ), \end{aligned}$$

can be bounded uniformly in \(\varepsilon , \delta \). However, still under \({\mathbb {N}}_{{\text {w}}(h)}\), we have the simple upper bound:

$$\begin{aligned} \sup _{s \leqslant t} | I(s,\varepsilon ) - I(s,\delta ) | \leqslant I(\sigma ,\varepsilon ) + I(\sigma ,\delta ), \end{aligned}$$

and by the many-to-one formula (2.25), we deduce that

$$\begin{aligned} {\mathbb {N}}_{{\text {w}}(h)}\big ( I(\sigma ,\varepsilon ) \big ) =\varepsilon ^{-1} E^0 \otimes \Pi _{{\text {w}}(h)}\Big [ \int _0^\infty {\textrm{d}}a \, \exp (-\alpha a) \mathbb {1}_{\{ \tau _D(\xi )< H(J_a) < \tau _D(\xi ) + \varepsilon \}} \Big ] \leqslant 1, \end{aligned}$$

for every \(\varepsilon >0\), where to obtain the previous inequality we used that \(H(J_a)=a\) (this follows from the fact that \(U^{(1)}\) has a dense set of jump times). In particular, we have \({\mathbb {N}}_{{\text {w}}(h)}\big ( I(\sigma ,\varepsilon ) + I(\sigma ,\delta ) \big ) \leqslant 2\) and (3.8) follows. Still under our assumption \(\tau _D({\text {w}}) \geqslant \zeta _{\text {w}}\), we now turn our attention to (3.7). We know that for any atom \((h_i, \rho ^i, W^i)\) of \({\mathcal {M}}\) we have the limit in probability:

$$\begin{aligned} L^{D}_{\sigma _i} ( \rho ^i, W^i) = \lim _{\varepsilon \rightarrow 0} \varepsilon ^{-1} \int _{\alpha _i}^{\beta _i} {\textrm{d}}s \, \mathbb {1}_{\{\tau _D(W_s)< H_s < \tau _D(W_s)+\varepsilon \}}. \end{aligned}$$

where \(\sigma _i:= \sigma (W^i)\). It then follows from our definitions that, for every \(r> 0\),

$$\begin{aligned} L_\sigma ^{D} - L^{D}_{T_{\zeta _{{\text {w}}}-r}} = \sum _{h_i \leqslant \zeta _{\text {w}}-r}L^{D}_{\sigma _i}(\rho ^i, W^i), \end{aligned}$$

observing that the number of non-zero terms on the right-hand side is finite. By taking the limit as \(r \downarrow 0,\) we deduce (3.7) by monotonicity.

Let us now assume that \(\tau _D({\text {w}}) < \zeta _{\text {w}}\). To simplify notation, set \(a:=\tau _D({\text {w}})\) and notice that

$$\begin{aligned} (\rho _{T_a}, W_{T_a}) = \big (\mu \mathbb {1}_{[0,\tau _D({\text {w}})]}, ({\text {w}}(h): h \in [0,\tau _D({\text {w}})])\big ), \end{aligned}$$

where we recall that \(\mu (\{ \tau _D({\text {w}}) \}) = 0\). By our previous discussion and the strong Markov property, we deduce that \((I(t,\varepsilon ) -I(T_a,\varepsilon ):t \geqslant T_a)\) converges as \(\varepsilon \downarrow 0\) uniformly on compact intervals in \(L^{1}({\mathbb {P}}_{\mu ,{\text {w}}})\) towards a continuous process. To conclude our proof, it suffices to show that:

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0} \frac{1}{\varepsilon } {\mathbb {E}}_{\mu , {\text {w}}}^{\dag } \Big [ \int _0^{T_a} {\textrm{d}}s \, \mathbb {1}_{\{ \tau _D(W_s)< H_s < \tau _D(W_s) + \varepsilon \}} \Big ] = 0. \end{aligned}$$

To obtain the previous display, write

$$\begin{aligned} \int _{0}^{T_{a}} {\textrm{d}}s \, \mathbb {1}_{\{ \tau _D(W_s)< H_s< \tau _D(W_s) + \varepsilon \}}=\sum \limits _{ h_i\geqslant a}\int _{\alpha _i}^{\beta _i}{\textrm{d}}s \, \mathbb {1}_{\{ \tau _D(W_s)< H_s < \tau _D(W_s) + \varepsilon \}}, \end{aligned}$$

where we have \(h_i\ne a\) for every \(i\in {\mathbb {N}}\), since \(\mu (\{a\})=0\). Moreover, notice that for every i with \(h_i > a\) and every \(s \in (\alpha _i, \beta _i)\), we have \(\tau _{D}(W_s)=a\). This implies:

$$\begin{aligned} \int _{0}^{T_{a}} {\textrm{d}}s \, \mathbb {1}_{\{ \tau _D(W_s)< H_s< \tau _D(W_s) + \varepsilon \}}\leqslant \sum \limits _{a\leqslant h_i\leqslant a+\varepsilon }\int _{0}^{\sigma (W^i)} {\textrm{d}}s \, \mathbb {1}_{\{ 0< H(\rho _s^i) < \varepsilon \}}, \end{aligned}$$

and we can now use that \(\mathcal {M}\) is a Poisson point measure with intensity given by \(\mu ({\textrm{d}}h){\mathbb {N}}_{{\text {w}}(h)}({\textrm{d}}\rho , {\textrm{d}}W)\) to obtain:

$$\begin{aligned} {\mathbb {E}}^{\dag }_{\mu ,{\text {w}}}\big [\int _{0}^{T_{a}} {\textrm{d}}s \, \mathbb {1}_{\{ \tau _D(W_s)< H_s< \tau _D(W_s) + \varepsilon \}} \big ] \leqslant \mu ([a,a+\varepsilon ])N(\int _{0}^{\sigma }{\textrm{d}}s \mathbb {1}_{\{0\leqslant H(\rho _s)<\varepsilon \}}). \end{aligned}$$
(3.10)

Finally, by (2.26), the previous display is bounded above by \(\varepsilon \cdot \mu ([a,a+\varepsilon ])\), giving:

$$\begin{aligned} \limsup \limits _{\varepsilon \rightarrow 0}\frac{1}{\varepsilon }{\mathbb {E}}^{\dag }_{\mu ,{\text {w}}}\Big [ \int _{0}^{T_{a}} {\textrm{d}}s \, \mathbb {1}_{\{ \tau _D(W_s)< H_s < \tau _D(W_s) + \varepsilon \}}\Big ]\leqslant \mu (\{a\})=0, \end{aligned}$$

where in the last equality we used that \(\mu (\{a\} )=0\) by assumption. \(\square \)

Now that we have defined the exit local time under more general initial conditions, let us turn our attention to the measurability properties of \({\widetilde{L}}^D\). From now on, when working under \({\mathbb {P}}_{0,x}\) or \({\mathbb {N}}_{x}\), the sigma field \({\mathcal {F}}^D\) should be completed with the \({\mathbb {P}}_{0,x}\)-negligible and \({\mathbb {N}}_{x}\)-negligible sets respectively – for simplicity, we use the same notation.

Proposition 3.4

Under \({\mathbb {P}}_{0,x}\) and \({\mathbb {N}}_x\), the process \({\widetilde{L}}^D\) is \({\mathcal {F}}^{D}\)-measurable.

In particular, the proposition implies that, under \({\mathbb {N}}_x\), the total mass \(L^D_\sigma = {\widetilde{L}}^D_\infty \) is \({\mathcal {F}}^D\)-measurable. The proof will mainly rely on the following two technical lemmas.

Lemma 3.5

Consider an open subset \(D \subset E\) containing x. Fix an arbitrary element \((\mu , \text {w} )\in \Theta \) with \(\text {w} (0) = x\) and satisfying \(\mu ( \{ \tau _D(\text {w} ) \}) = 0\) if \(\tau _D(\text {w} )<\infty \). Then, for every \(K > 0\), we have:

$$\begin{aligned}&{\mathbb {E}}_{\mu , \text {w} }^{\dag }\Big [ \int _0^\sigma {\textrm{d}}L_s^D \mathbb {1}_{\{ \langle \rho _s, 1 \rangle \leqslant K \}} \Big ]\\&\quad = \int _0^{ \mu \left( [0,\tau _D(\text {w} ))\right) } {\textrm{d}}u ~E^{0} \otimes \Pi _{\text {w} ( H(\kappa _{\langle \mu , 1 \rangle -u} \mu ))} \left( \mathbb {1}_{\{ \tau _D < \infty \}} \exp (- \alpha \tau _D) \mathbb {1}_{\{ \langle J_{\tau _D} , 1 \rangle \leqslant K -u \}} \right) . \end{aligned}$$

Proof

Recall that, under \({\mathbb {P}}^\dag _{\mu , {\text {w}}}\), the process \(\langle \rho ,1\rangle \) is a Lévy process started at \(\langle \mu , 1 \rangle \) and stopped at its first hitting time of 0. As usual, write \(\{(\alpha _i, \beta _i): \, i \in {\mathbb {N}}\}\) for the excursion intervals of \(\langle \rho ,1\rangle -\langle \mu ,1\rangle \) over its running infimum, which we still denote by I. We write \((\rho ^i,W^i)\) for the subtrajectory associated with \([\alpha _i,\beta _i]\). As explained in (2.22), the measure:

$$\begin{aligned} \sum _{i \in {\mathbb {N}}}\delta _{(-I_{\alpha _i}, \rho ^i, W^i)}, \end{aligned}$$

is a Poisson point measure with intensity \(\mathbb {1}_{[0,\langle \mu , 1 \rangle ]}(u) {\textrm{d}}u \, {\mathbb {N}}_{\text {w}(H( \kappa _{u}\mu ))}({\textrm{d}}\rho , {\textrm{d}}W).\) Furthermore, for every \(i\in {\mathbb {N}}\), we have \(H(\kappa _{-I_{\alpha _i}} \mu ) =H_{\alpha _i} = H_{\beta _i}\) and to simplify notation we denote this quantity by \(h_i\). Next, we notice that, by Proposition 3.3, we have \(\int _0^\sigma {\textrm{d}}L_s^D \mathbb {1}_{\{ \langle \rho _s,1\rangle -\langle \mu ,1 \rangle = I_s \}} = 0\) and \(L^D_{t} = 0\), for every \(t\leqslant \inf \{s \geqslant 0: H_s < \tau _D({\text {w}}) \}\). From our previous observations, we get:

$$\begin{aligned} \int _0^\sigma {\textrm{d}}L_s^D \mathbb {1}_{\{ \langle \rho _s, 1 \rangle \leqslant K \}}&=\sum \limits _{h_i<\tau _{D}({\text {w}})} \int _{\alpha _i}^{\beta _i} {\textrm{d}}L_s^D \mathbb {1}_{\{ \langle \rho _s, 1 \rangle \leqslant K \}}\\&=\sum \limits _{H(\kappa _{-I_{\alpha _i}}\mu )<\tau _{D}({\text {w}})} \int _{0}^{\beta _i-\alpha _i} {\textrm{d}}L_s^D(\rho ^i, W^i) \mathbb {1}_{\{ \langle \rho _s^i, 1 \rangle \leqslant K- \langle \mu ,1 \rangle - I_{\alpha _i} \}}, \end{aligned}$$

where we used in the second identity that \(\langle \rho _{s+\alpha _i}, 1 \rangle =\langle \rho _s^i, 1 \rangle +\langle \rho _{\alpha _i}, 1 \rangle =\langle \rho _s^i, 1 \rangle +I_{\alpha _i} +\langle \mu ,1 \rangle \), for every \(s\in [0,\beta _i-\alpha _i]\). This implies that:

$$\begin{aligned}&{\mathbb {E}}_{\mu , \text {w}}^{\dag }\Big [\sum \limits _{H(\kappa _{-I_{\alpha _i}}\mu )<\tau _{D}({\text {w}})} \int _{0}^{\beta _i-\alpha _i} {\textrm{d}}L_s^D(\rho ^i, W^i) \mathbb {1}_{\{ \langle \rho _s^i, 1 \rangle \leqslant K- \langle \mu ,1 \rangle -I_{\alpha _i} \}}\Big ] \\&\quad =\int _{ \mu ([\tau _{D}({\text {w}}),\infty ))}^{\langle \mu ,1\rangle } {\textrm{d}}u \, {\mathbb {N}}_{\text {w}( H(\kappa _{u} \mu ))} \Big ( \int _0^\sigma {\textrm{d}}L_s^D \mathbb {1}_{\{ \langle \rho _s , 1 \rangle \leqslant K - \langle \mu ,1 \rangle +u\}} \Big ), \end{aligned}$$

and the desired result now follows by performing the change of variable \(u \mapsto \langle \mu , 1 \rangle - u\) and applying the many-to-one formula (3.4) under each \({\mathbb {N}}_{\text {w}( H(\kappa _{u} \mu ))}\). \(\square \)

Lemma 3.6

Consider an increasing sequence of open subsets \((D_n:~n\geqslant 1)\) containing x, such that \(\cup _n D_n = D\) and \(\overline{D_n} \subset D\) for every \(n \geqslant 1\). There exists a subsequence \((n_k:~k\geqslant 0)\) converging towards infinity, such that

$$\begin{aligned} \lim \limits _{k\rightarrow \infty }\sup \limits _{s\in [0,\sigma ]}|L_{s}^{D_{n_k}}-L_{s}^{D}|=0,\,\, \quad {\mathbb {N}}_x \text {-a.e. } \end{aligned}$$
(3.11)

Proof

The proof of this lemma relies on techniques similar to those used in [21, Proposition 2.3] in the Brownian setting. We start by showing that, for a suitable subsequence, the total mass \(L_\sigma ^{D_n}\) converges towards \(L_\sigma ^{D}\), \({\mathbb {N}}_x\)-a.e. The uniform convergence will then be deduced by standard techniques. Notice however that in [21] this is mainly done by establishing an \(L_2({\mathbb {N}}_x)\) convergence of \(L^{D_n}_\sigma \) towards \(L^{D}_\sigma \), and that we do not have a priori moments of order 2 in our setting. In order to overcome this difficulty, we need to localize the tree by the use of a truncation argument. We first show that, for any fixed \(K>0\), we have:

$$\begin{aligned} \lim _{n \rightarrow \infty } \int _0^\sigma {\textrm{d}}L_s^{D_n} \mathbb {1}_{\{\langle \rho _s,1 \rangle \leqslant K\}} = \int _0^\sigma {\textrm{d}}L_s^{D} \mathbb {1}_{\{\langle \rho _s,1 \rangle \leqslant K\}}, \quad \text { in } L_2({\mathbb {N}}_x). \end{aligned}$$
(3.12)

In this direction, we write \({\mathbb {N}}_x\Big ( \big |\int _0^\sigma {\textrm{d}}L_s^D \mathbb {1}_{\{ \langle \rho _s,1 \rangle \leqslant K \}} - \int _0^\sigma {\textrm{d}}L_s^{D_n} \mathbb {1}_{\{ \langle \rho _s,1 \rangle \leqslant K \}} \big | ^2 \Big )\) in the following form

$$\begin{aligned}&{\mathbb {N}}_x\Big ( \big (\int _0^\sigma {\textrm{d}}L_s^D \mathbb {1}_{\{ \langle \rho _s,1 \rangle \leqslant K \}} \big )^2 \Big ) + {\mathbb {N}}_x\Big (\big ( \int _0^\sigma {\textrm{d}}L_s^{D_n} \mathbb {1}_{\{ \langle \rho _s,1 \rangle \leqslant K \}} \big )^2\Big )\nonumber \\&\quad - 2 {\mathbb {N}}_x \Big ( \big ( \int _0^\sigma {\textrm{d}}L_s^{D_n} \mathbb {1}_{\{ \langle \rho _s,1 \rangle \leqslant K \}} \big )\cdot \big ( \int _0^\sigma {\textrm{d}}L_s^{D} \mathbb {1}_{\{ \langle \rho _s,1 \rangle \leqslant K \}} \big ) \Big ), \end{aligned}$$
(3.13)

and the proof of (3.12) will follow by computing each term separately and by taking the limit as \(n \uparrow \infty \). First, we remark that

$$\begin{aligned} \big ( \int _0^\sigma {\textrm{d}}L_s^D \mathbb {1}_{\{ \langle \rho _s,1 \rangle \leqslant K \}} \big )^2&= 2 \int _0^\sigma {\textrm{d}}L_s^{D} \mathbb {1}_{\{ \langle \rho _s,1 \rangle \leqslant K \}} \int _s^\sigma {\textrm{d}}L_u^D~ \mathbb {1}_{\{ \langle \rho _u,1 \rangle \leqslant K\}}, \end{aligned}$$

and the idea now is to apply the Markov property. For convenience, we let \(\Theta _D\) be the subset of \(\Theta \) consisting of all pairs \((\mu , {\text {w}})\) satisfying the condition \(\mu (\{\tau _D({\text {w}})\}) = 0\) when \(\tau _D({\text {w}}) < \infty \), and we define \(\Theta _{D_n}\) similarly, replacing D by \(D_n\). Notice that by Lemma 3.1, we have, \({\mathbb {N}}_{x}\)–a.e., \((\rho _{t},W_t)\in \Theta _{D}\cap (\cap _{n\geqslant 1} \Theta _{D_n})\) for every \(t\geqslant 0\). For \((\mu , {\text {w}})\in \Theta _D\), we set

$$\begin{aligned} \phi _{D}(\mu , \text {w})&:= {\mathbb {E}}_{\mu , \text {w}}^{\dag }\Big [ \int _0^\sigma {\textrm{d}}L_s^D \mathbb {1}_{\{ \langle \rho _s, 1 \rangle \leqslant K \}} \Big ] \nonumber \\&= \int _0^{ \mu \left( [0,\tau _D({\text {w}}))\right) } {\textrm{d}}u ~E^{0} \otimes \Pi _{{\text {w}}( H(\kappa _{\langle \mu ,1\rangle -u} \mu ))} \left( \mathbb {1}_{\{ \tau _D < \infty \}} \exp (- \alpha \tau _D) \mathbb {1}_{\{ \langle J_{\tau _D} , 1 \rangle \leqslant K -u \}} \right) , \end{aligned}$$
(3.14)

where in the second equality we used Lemma 3.5. Note that the dependence of \(\phi _D\) on K is omitted to simplify the notation. By our previous discussion, the Markov property followed by an application of (3.4) gives:

$$\begin{aligned} {\mathbb {N}}_x \left( \left( \int _0^\sigma {\textrm{d}}L_s^D \mathbb {1}_{\{ \langle \rho _s ,1 \rangle \leqslant K \}} \right) ^2 \right)&= 2 {\mathbb {N}}_x \left( \int _0^\sigma {\textrm{d}}L_s^{D} \mathbb {1}_{\{ \langle \rho _s, 1 \rangle \leqslant K \}} \phi _{D}(\rho _s,W_s) \right) \nonumber \\&= 2 E^{0} \otimes \Pi _{x} \left( \mathbb {1}_{\{ \tau _{D} < \infty \}} \exp (-\alpha \tau _{D}) \mathbb {1}_{\{ \langle J_{\tau _{D}}, 1 \rangle \leqslant K \}} \phi _{{D}}(J_{\tau _{D}}, \xi ^{\tau _{D}}) \right) , \end{aligned}$$
(3.15)

where to simplify notation, we write \(\xi ^{\tau _{D}}:= (\xi _{t}: 0 \leqslant t \leqslant \tau _{D})\). Observe that \((J_{\tau _{D}}, \xi ^{\tau _D}) \in \Theta _D\) since by independence, it holds that \(\mathbb {1}_{\{ \tau _D < \infty \}}J_{\tau _D}(\{ \tau _D \}) = 0\), \(P^0\otimes \Pi _x\)–a.s. Replacing D by \(D_n\), we also have \((J_{\tau _{D_n}}, \xi ^{\tau _{D_n}}) \in \Theta _{D_n}\) and we obtain

$$\begin{aligned}&{\mathbb {N}}_x \Big ( \big (\int _0^\sigma {\textrm{d}}L_s^{D_n} \mathbb {1}_{\{ \langle \rho _s ,1 \rangle \leqslant K \}} \big )^2 \Big ) \nonumber \\&\quad = 2 E^{0} \otimes \Pi _{x} \left( \mathbb {1}_{\{ \tau _{D_n} < \infty \}} \exp (-\alpha \tau _{D_n}) \mathbb {1}_{\{ \langle J_{\tau _{D_n}}, 1 \rangle \leqslant K \}} \phi _{{D_n}}(J_{\tau _{D_n}}, \xi ^{\tau _{D_n}}) \right) , \end{aligned}$$
(3.16)

where for \((\mu , {\text {w}}) \in \Theta _{D_n}\), we write

$$\begin{aligned} \phi _{D_n}(\mu , \text {w}) \!=\! \int _0^{ \mu \left( [0,\tau _{D_n}({\text {w}}))\right) }\!\! {\textrm{d}}u ~E^{0}\otimes \Pi _{{\text {w}}( H(\kappa _{\langle \mu ,1\rangle -u} \mu ))} \left( \mathbb {1}_{\{ \tau _{D_n} < \infty \}} \exp (- \alpha \tau _{D_n}) \mathbb {1}_{\{ \langle J_{\tau _{D_n}}, 1 \rangle \leqslant K -u \}} \right) .\nonumber \\ \end{aligned}$$
(3.17)

Our goal now is to take the limit in (3.16) as \(n \uparrow \infty \) and to show that this limit is precisely (3.15). In this direction, we remark that by definition of \(\phi _{D_n}(\mu , {\text {w}})\) we always have the bound \(\phi _{D_n}(\mu , \text {w})\leqslant \mu \left( [0,\tau _{D_n}({\text {w}}))\right) \), and therefore on \(\{ \langle J_{\tau _{D_n}}, 1 \rangle \leqslant K \}\), we have \(\phi _{D_n}(J_{\tau _{D_n}} , \xi ^{\tau _{D_n}})\leqslant K\). Thanks to the dominated convergence theorem, it is then enough to show that, \(P^0\otimes \Pi _x\)-a.s., the following convergence holds:

$$\begin{aligned}{} & {} \lim \limits _{n\rightarrow \infty }\mathbb {1}_{\{ \tau _{D_n}< \infty \}} \exp (- \alpha \tau _{D_n}) \mathbb {1}_{\{ \langle J_{\tau _{D_n}}, 1 \rangle \leqslant K \}}\phi _{{D_n}}(J_{\tau _{D_n}}, \xi ^{\tau _{D_n}})\\{} & {} \quad =\mathbb {1}_{\{ \tau _{D} < \infty \}} \exp (- \alpha \tau _{D}) \mathbb {1}_{\{ \langle J_{\tau _{D}}, 1 \rangle \leqslant K \}}\phi _{{D}}(J_{\tau _{D}}, \xi ^{\tau _{D}}). \end{aligned}$$

In order to prove it, we start by noticing that we always have \(\tau _{D_n}\uparrow \tau _{D}\) as \(n\rightarrow \infty \). In particular, since \(\langle J_{\infty }, 1 \rangle =\infty \), we see that the limit in the previous display is 0 on the event \(\{\tau _D=\infty \}\). Let us focus now on the event \(\{\tau _D<\infty \}\). First remark that for every \(u \leqslant \langle J_{\tau _{D_n}}, 1 \rangle \) we have

$$\begin{aligned} \kappa _{\langle J_{\tau _{D_n}},1\rangle -u }J_{\tau _{D_n}} = \kappa _{\langle J_{\tau _{D}},1\rangle - u }J_{\tau _{D}} \end{aligned}$$

and recall the definitions of \(\phi _{D}(\mu , {\text {w}})\) and \(\phi _{D_n}(\mu , {\text {w}})\) given in (3.14) and (3.17) respectively. This, combined with the independence between J and \(\xi \), ensures that, on \(\{\tau _D<\infty \}\), the quantities \(\langle J_{\tau _{D_n}}, 1 \rangle \) and \(\phi _{D_n}(J_{\tau _{D_n}}, \xi ^{\tau _{D_n}})\) converge respectively to \(\langle J_{\tau _{D}}, 1 \rangle \) and \(\phi _{D}(J_{\tau _{D}}, \xi ^{\tau _{D}})\), giving the desired convergence on \(\{\tau _D<\infty \}\). Consequently, we get:

$$\begin{aligned} \lim _{n \rightarrow \infty } {\mathbb {N}}_x \Big ( \big (\int _0^\sigma {\textrm{d}}L_s^{D_n} \mathbb {1}_{\{ \langle \rho _s,1 \rangle \leqslant K \}} \big )^2 \Big ) = {\mathbb {N}}_x \Big ( \big (\int _0^\sigma {\textrm{d}}L_s^D \mathbb {1}_{\{ \langle \rho _s,1 \rangle \leqslant K \}} \big )^2 \Big ). \end{aligned}$$

Turning our attention to the cross-term, we can apply the same steps as before, together with the Markov property, to obtain

$$\begin{aligned}&{\mathbb {N}}_x \Big ( \big ( \int _0^\sigma {\textrm{d}}L_s^{D_n} \mathbb {1}_{\{ \langle \rho _s ,1 \rangle \leqslant K \}} \big )\cdot \big ( \int _0^\sigma {\textrm{d}}L_s^{D} \mathbb {1}_{\{ \langle \rho _s ,1 \rangle \leqslant K \}} \big ) \Big )\\&\quad = {\mathbb {N}}_x \Big ( \int _0^\sigma {\textrm{d}}L_s^{D_n} \mathbb {1}_{\{ \langle \rho _s ,1 \rangle \leqslant K\}} \int _s^\sigma {\textrm{d}}L_u^D \mathbb {1}_{\{\langle \rho _u ,1 \rangle \leqslant K \}} \Big )\\&\qquad + {\mathbb {N}}_x \Big ( \int _0^\sigma {\textrm{d}}L_s^{D} \mathbb {1}_{\{ \langle \rho _s ,1 \rangle \leqslant K\}} \int _s^\sigma {\textrm{d}}L_u^{D_n} \mathbb {1}_{\{\langle \rho _u ,1 \rangle \leqslant K \}} \Big ) \\&\quad = E^0 \otimes \Pi _x \left( \mathbb {1}_{\{ \tau _{D_n}< \infty \}} \exp (- \alpha \tau _{D_n}) \mathbb {1}_{\{ \langle J_{\tau _{D_n}},1 \rangle \leqslant K \}} \phi _{D}(J_{\tau _{D_n}} , \xi ^{\tau _{D_n}}) \right) \\&\qquad + E^0 \otimes \Pi _x \left( \mathbb {1}_{\{ \tau _D < \infty \}} \exp (- \alpha \tau _D) \mathbb {1}_{\{ \langle J_{\tau _D},1 \rangle \leqslant K \}} \phi _{D_n}(J_{\tau _D} , \xi ^{\tau _D}) \right) , \end{aligned}$$

and using the same method as before we get:

$$\begin{aligned} \lim _{n \rightarrow \infty }{\mathbb {N}}_x \Big ( \big ( \int _0^\sigma {\textrm{d}}L_s^{D_n} \mathbb {1}_{\{\langle \rho _s,1 \rangle \leqslant K \}} \big )\cdot \big ( \int _0^\sigma {\textrm{d}}L_s^{D} \mathbb {1}_{\{ \langle \rho _s,1 \rangle \leqslant K \}} \big ) \Big )= {\mathbb {N}}_x \Big ( \big (\int _0^\sigma {\textrm{d}}L_s^D \mathbb {1}_{\{ \langle \rho _s,1 \rangle \leqslant K \}} \big )^2 \Big ). \end{aligned}$$
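Explicitly, combining the two limits above with the decomposition (3.13), whose three terms are finite (by (3.15), (3.16) and the analogous expression for the cross-term, each one is bounded above by \(2K\), since \(\phi _{D},\phi _{D_n}\leqslant K\)), we obtain:

$$\begin{aligned} \lim _{n \rightarrow \infty } {\mathbb {N}}_x\Big ( \big |\int _0^\sigma {\textrm{d}}L_s^D \mathbb {1}_{\{ \langle \rho _s,1 \rangle \leqslant K \}} - \int _0^\sigma {\textrm{d}}L_s^{D_n} \mathbb {1}_{\{ \langle \rho _s,1 \rangle \leqslant K \}} \big |^2 \Big ) = (1+1-2) \, {\mathbb {N}}_x\Big ( \big (\int _0^\sigma {\textrm{d}}L_s^D \mathbb {1}_{\{ \langle \rho _s,1 \rangle \leqslant K \}} \big )^2 \Big ) = 0. \end{aligned}$$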

This is precisely the claimed \(L^2({\mathbb {N}}_x)\) convergence (3.12). Now that the convergence of the truncated total mass has been established, to derive the statement of the proposition we proceed as follows. First, we introduce the processes

$$\begin{aligned} A^n_t:= \int _0^{t} {\textrm{d}}L_s^{D_n} \mathbb {1}_{\{\langle \rho _s, 1 \rangle \leqslant K \}} \,\,\,\text { and }\,\,\, A_t:= \int _0^{t} {\textrm{d}}L_s^{D} \mathbb {1}_{\{\langle \rho _s, 1 \rangle \leqslant K \}}, \end{aligned}$$

which are continuous additive functionals of the Markov process \((\rho , W)\). Then using the Markov property, we get

$$\begin{aligned}&{\mathbb {N}}_x \left( A^n_\infty | {\mathcal {F}}_s \right) = A_{s \wedge \sigma }^{n} + \phi _{D_n}(\rho _{s \wedge \sigma } , W_{s \wedge \sigma }) \,\,\,\text { and }\,\,\, \nonumber \\&{\mathbb {N}}_x \left( A_\infty | {\mathcal {F}}_s \right) = A_{s \wedge \sigma } + \phi _D(\rho _{s \wedge \sigma } , W_{s \wedge \sigma }), \end{aligned}$$
(3.18)

since \(\phi _{D_n}(\mu ,\text {w}) = \mathbb {E}^\dag _{\mu , \text {w}}[A^n_\infty ]\), \(\phi _D(\mu , \text {w}) = \mathbb {E}^\dag _{\mu , \text {w}}[A_\infty ]\) and \(\phi _{D_n}(\rho _\sigma , W_\sigma ) = \phi _{D}(\rho _\sigma , W_\sigma ) =0\), \({\mathbb {N}}_x\)-a.e. To simplify notation, we denote respectively by \(M^n_s= {\mathbb {N}}_x(A^n_\infty | {\mathcal {F}}_s)\) and \(M_s= {\mathbb {N}}_x(A_\infty | {\mathcal {F}}_s)\) for \(s\geqslant 0\) the martingales in (3.18). Next, we apply Doob’s inequality to derive:

$$\begin{aligned} {\mathbb {N}}_{x}\big (\sup \limits _{s>0}|M_{s}^{n}-M_{s}|>\delta \big )\leqslant \delta ^{-2} {\mathbb {N}}_{x}\big (| A^n_\sigma -A_\sigma |^2\big ). \end{aligned}$$
(3.19)

Indeed, even if \({\mathbb {N}}_x\) is not a finite measure, we can argue as follows: fix \(a>0\) and observe that \((M_{a+ t})_{t \geqslant 0}\), \((M^n_{a+ t})_{t \geqslant 0}\) under \({\mathbb {N}}_x( \, \cdot \, |\sigma > a)\) are uniformly integrable martingales, from which we obtain

$$\begin{aligned} {\mathbb {N}}_{x}\big (\sup \limits _{s\geqslant a}|M_{s}^{n}-M_{s}|>\delta ~ \big |~ \sigma> a\big )\leqslant \delta ^{-2} {\mathbb {N}}_{x}\big (| A^n_\sigma -A_\sigma |^2~ \big |~ \sigma > a\big ), \end{aligned}$$

and we deduce (3.19) by multiplying both sides by \({\mathbb {N}}_x(\sigma > a)\) and by taking the limit as \(a \downarrow 0\) – using monotone convergence.
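Let us spell out this last step. Multiplying both sides of the previous display by \({\mathbb {N}}_x(\sigma > a)\) gives

$$\begin{aligned} {\mathbb {N}}_{x}\big (\sup \limits _{s\geqslant a}|M_{s}^{n}-M_{s}|>\delta , \, \sigma> a\big )\leqslant \delta ^{-2}\, {\mathbb {N}}_{x}\big (| A^n_\sigma -A_\sigma |^2 \, \mathbb {1}_{\{ \sigma > a \}}\big ), \end{aligned}$$

and, as \(a \downarrow 0\), the event on the left-hand side increases to \(\{ \sup _{s>0}|M^n_s - M_s| > \delta \}\) (recall that \(\sigma > 0\), \({\mathbb {N}}_x\)–a.e.) while the indicator on the right-hand side increases to 1, so that (3.19) follows by monotone convergence.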

By (3.12), the right-hand side of (3.19) converges towards 0 as \(n\uparrow \infty \) and we deduce that

$$\begin{aligned} \lim \limits _{k\rightarrow \infty } \sup \limits _{s >0}|M_{s}^{n_k}-M_{s}| =0, \quad {\mathbb {N}}_{x}\text { -a.e. } \end{aligned}$$

for a suitable subsequence \((n_k:~k\geqslant 1)\) increasing towards infinity. Since \(\lim \limits _{n\rightarrow \infty }\phi _{D_n}(\rho _{s},W_{s})=\phi _{D}(\rho _{s},W_{s})\), we obtain that \({\mathbb {N}}_x\)-a.e., for every \(t\geqslant 0\), \(\int _0^t {\textrm{d}}L_s^{D_{n_k}} \mathbb {1}_{\{\langle \rho _s, 1 \rangle \leqslant K \}} \rightarrow \int _0^{t}{\textrm{d}}L^{D}_s \mathbb {1}_{\{ \langle \rho _s, 1 \rangle \leqslant K \}}\) as \(k \rightarrow \infty \). By continuity, monotonicity and the fact that \(\sigma <\infty \) \({\mathbb {N}}_x\)–a.e., we can apply Dini’s theorem to get:

$$\begin{aligned} \lim _{k \rightarrow \infty } \sup _{t > 0} \big |\int _0^t {\textrm{d}}L_s^{D_{n_k}} \mathbb {1}_{\{ \langle \rho _s, 1 \rangle \leqslant K\}} - \int _0^t {\textrm{d}}L_s^D \mathbb {1}_{\{ \langle \rho _s, 1 \rangle \leqslant K \}} \big | = 0, \quad \quad {\mathbb {N}}_x \text {- a.e.} \end{aligned}$$

Consequently, we deduce that on the event \(\{ \sup _{s\geqslant 0} \langle \rho _s, 1 \rangle \leqslant K \}= \{\sup X\leqslant K\}\), the \({\mathbb {N}}_x\)-a.e. uniform convergence (3.11) holds along a subsequence \((n_k)\), which may depend on K. Since this holds for arbitrary K, we can use a diagonal argument to find a deterministic subsequence, still denoted by \((n_k:k\geqslant 1)\) and increasing towards infinity, such that

$$\begin{aligned} \lim _{k \rightarrow \infty } \sup _{t\in [0,\sigma ]}|L_{t}^{D_{n_k}}-L_{t}^{D}|=0, \quad \quad {\mathbb {N}}_x \text {- a.e.} \end{aligned}$$

\(\square \)

We are now in position to prove that the process \({\widetilde{L}}^D\) is \({\mathcal {F}}^D\)-measurable.

Proof of Proposition 3.4

Until further notice, we argue under \({\mathbb {P}}_{0,x}\). By (3.3) and monotonicity, a diagonal argument gives that we can find a subsequence \((\varepsilon _k:~k\geqslant 1)\), with \(\varepsilon _k \downarrow 0\) as \(k \rightarrow \infty \), such that:

$$\begin{aligned} L_{\Gamma _s^{D}}^{D_{n}}=\lim \limits _{k \rightarrow \infty }\frac{1}{\varepsilon _k}\int _{0}^{\Gamma _s^{D}} {\textrm{d}}r \mathbb {1}_{\{\tau _{D_{n}}(W_{r})< H_{r}<\tau _{D_{n}}(W_{r})+\varepsilon _k \}}, \end{aligned}$$

for every \(n\geqslant 1\) and \(s\geqslant 0\). Our goal is now to show that:

$$\begin{aligned} L_{\Gamma _s^{D}}^{D_{n}}= \lim \limits _{k \rightarrow \infty }\frac{1}{\varepsilon _k}\int _{0}^{s} {\textrm{d}}r \mathbb {1}_{\{\tau _{D_{n}}(W_{\Gamma _r^D})< H_{\Gamma ^D_r}<\tau _{D_{n}}(W_{\Gamma ^D_r})+\varepsilon _k \}}, \end{aligned}$$
(3.20)

which will imply that \((L_{\Gamma _{s}^D}^{D_{n}})_{s\geqslant 0}\) is \({\mathcal {F}}^{D}\)-measurable for every \(n \in {\mathbb {N}}\). In order to establish (3.20) we argue for \(\omega \) fixed and observe that for k large enough, we have:

$$\begin{aligned} \mathbb {1}_{\{\tau _{D_{n}}(W_{r})< H_{r}<\tau _{D_{n}}(W_{r})+\varepsilon _k \}} = \mathbb {1}_{\{\tau _{D_{n}}(W_{r})< H_{r}<\tau _{D_{n}}(W_{r})+\varepsilon _k \}} \mathbb {1}_{\{ H_r \leqslant \tau _D(W_r) \}}, \\ \quad \text { for all } r \in [0,\Gamma _s^D]. \end{aligned}$$

To see this, remark that if the previous display did not hold, by a compactness argument and continuity we would have \(\tau _{D_n}(W_{r_0}) = \tau _D(W_{r_0}) \leqslant H_{r_0}\) for some \(r_0\) in \([0,\Gamma _s^D]\). This gives a contradiction since \({\overline{D}}_n \subset D\) and \((W_{r_0}(t))_{t \in [0, H_{r_0}]}\) is continuous. Recalling the notation \(V^D\) given in (3.1), we deduce that

$$\begin{aligned} L_{\Gamma _s^{D}}^{D_{n}}&=\lim \limits _{k \rightarrow \infty }\frac{1}{\varepsilon _k}\int _{0}^{\Gamma _s^{D}} {\textrm{d}}r \mathbb {1}_{\{\tau _{D_{n}}(W_{r})< H_{r}<\tau _{D_{n}}(W_{r})+\varepsilon _k \}} \\&=\lim \limits _{k \rightarrow \infty }\frac{1}{\varepsilon _k}\int _{0}^{\Gamma _s^{D}} {\textrm{d}}V^D_r \mathbb {1}_{\{\tau _{D_{n}}(W_{r})< H_{r}<\tau _{D_{n}}(W_{r})+\varepsilon _k \}} \\&=\lim \limits _{k \rightarrow \infty }\frac{1}{\varepsilon _k}\int _{0}^{s} {\textrm{d}}r \mathbb {1}_{\{\tau _{D_{n}}(W_{\Gamma _r^D})< H_{{\Gamma _r^D}}<\tau _{D_{n}}(W_{\Gamma _r^D})+\varepsilon _k \}}, \end{aligned}$$

giving us (3.20). The same arguments can be applied under \({\mathbb {N}}_x\) and, to complete the proof of the proposition, it suffices to show that for every \(t\geqslant 0\)

$$\begin{aligned} \lim \limits _{n\rightarrow \infty }\sup \limits _{s\in [0,t]}|L_{\Gamma _s^{D}}^{D_{n}}-L_{\Gamma _s^{D}}^{D}|=0, \quad \quad \text { under }{\mathbb {P}}_{0,x} \text { and } {\mathbb {N}}_x, \end{aligned}$$
(3.21)

at least along a suitable subsequence. However, note that when working under \({\mathbb {N}}_x\), this convergence follows from Lemma 3.6. The result under \({\mathbb {P}}_{0,x}\) is then a standard consequence of excursion theory. More precisely, recall that \(-I\) is the local time of \((\rho ,W)\) at (0, x) and, for fixed \(r > 0\), set \(T_r:= \inf \{ t \geqslant 0: ~-I_t > r \}\). If we let \(T_D:= \inf \{ t \geqslant 0: \tau _D(W_t) < \infty \}\), by continuity there exists a finite number of excursions \((\rho ^i, W^i)\) of \((\rho , W)\) in \([0,T_r]\) satisfying \(T_{D}(W^i) < \infty \), and their common distribution is \({\mathbb {N}}_{x}(\, \cdot \, | T_D < \infty )\). Since \(T_r \uparrow \infty \) as \(r \uparrow \infty \), the approximation (3.21) under \({\mathbb {P}}_{0,x}\) now follows from the result under \({\mathbb {N}}_{x}\). This completes the proof of Proposition 3.4. \(\square \)

3.2 Proof of special Markov property

Now that we have studied the trajectories staying in D, we turn our attention to the complementary side of the picture, and we start by formally introducing the notion of excursions from D.

Excursions from D. Observe that (2.25) and assumption (\(\hbox {H}_{1}\)) imply that

$$\begin{aligned} {\mathbb {N}}_{x}\Big (\int _{0}^{\sigma }{\textrm{d}}s~\mathbb {1}_{\{\tau _{D}(W_{s})< \zeta _{s} \}}>0\Big )>0. \end{aligned}$$

Hence, the set \(\big \{s\in [0,\sigma ]:\,\tau _{D}(W_{s})<\zeta _{s}\big \}\) is non-empty with non-null Lebesgue measure under \({\mathbb {N}}_x\) and \({\mathbb {P}}_{0,x}\). If we define

$$\begin{aligned} \gamma ^D_s:= \big ( \zeta _s - \tau _D(W_s) \big )_+, \quad \quad s \geqslant 0, \end{aligned}$$

it is straightforward to show by the snake property and the continuity of \(\zeta \) that \(\gamma ^D\) is continuous. Set

$$\begin{aligned} \sigma ^{D}_{t}:=\inf \big \{s\geqslant 0:\,\int _{0}^{s} {\textrm{d}}r\mathbb {1}_{\{\gamma ^D_{r}>0\}} > t\big \}, \end{aligned}$$

and consider the process \((\rho ^{D}_{t})_{t\geqslant 0}\) taking values in \({\mathcal {M}}_{f}({\mathbb {R}}_{+})\) defined, for any bounded measurable function \(f: {\mathbb {R}}_+ \rightarrow {\mathbb {R}}_+\), by the relation:

$$\begin{aligned} \langle \rho ^{D}_{t}, f \rangle :=\int \rho _{\sigma ^{D}_{t}}({\textrm{d}}h)f\big (h-\tau _{D}(W_{\sigma ^{D}_{t}})\big ) \mathbb {1}_{\{ h>\tau _D(W_{\sigma _t^D}) \}}. \end{aligned}$$
(3.22)

Then, by Proposition 4.3.1 in [11], \(\rho ^D\) and \(\rho \) have the same distribution under \({\mathbb {P}}_{0,x}\). In particular, \(\langle \rho ^D, 1 \rangle \) has the same law as the reflected Lévy process \(X-I\) and we denote its local time at 0 by \((\ell ^D(s): s \geqslant 0)\). Moreover, it is shown in [11, Section 4.3] that the process \(L^D\) is related to the local time \(\ell ^D\) by the identity:

$$\begin{aligned} L^D_t = \ell ^D \left( \int _0^t {\textrm{d}}s \, \mathbb {1}_{\{ \gamma ^D_s > 0 \}} \right) . \end{aligned}$$
(3.23)

The proof of Proposition 4.3.1 in [11] shows that \(\rho ^{D}\) can be obtained as a limit of functions which are independent of \({\mathcal {F}}^{D}\), implying that \(\rho ^{D}\) is in turn independent of \({\mathcal {F}}^{D}\). Now, denote the connected components of the open set

$$\begin{aligned} \big \{t\geqslant 0:\,\tau _{D}(W_{t})< \zeta _t \big \} = \big \{ t \geqslant 0: \gamma ^D_t > 0 \big \}, \end{aligned}$$

by \(\big ((a_{i},b_{i}): i\in {\mathcal {I}} \big )\), where \({\mathcal {I}}\) is an indexing set that might be empty. By construction, for any \(s \in (a_i,b_i)\), the path \(W_s\) is a trajectory leaving D. Remark that \(H_{a_{i}}=H_{b_{i}}<H_r\) for every \(r\in (a_i,b_i)\), and let \((\rho ^{i},W^{i})\) be the subtrajectory of \((\rho ,W)\) associated with \([a_{i},b_{i}]\) as defined in Sect. 2.3. Observe that in our setting, \((\rho ^i, W^i)\) is defined for each \(s \geqslant 0\) and for any measurable function \(f:{\mathbb {R}}_+ \mapsto {\mathbb {R}}_+\) as

$$\begin{aligned} \langle \rho ^{i}_{s} , f \rangle =\int \rho _{(a_i+s)\wedge b_i }({\textrm{d}}h)f(h - \tau _D(W_{a_i}))\mathbb {1}_{\{ h > \tau _D(W_{a_i}) \}} \end{aligned}$$

and

$$\begin{aligned} W^i_s(t) = W_{(a_i+s) \wedge b_i }( t + \tau _D(W_{a_i})), \quad \text { for } t \in [ 0, \zeta _{ (a_i+ s)\wedge b_i } - \tau _D(W_{a_i}) ], \end{aligned}$$

with respective lifetime process given by

$$\begin{aligned} \zeta _s^i = \zeta _{(a_i + s) \wedge b_i} - \tau _D(W_{a_i}), \end{aligned}$$

where \(\tau _D(W_s)= \tau _D(W_{a_i}) = \zeta _{a_i}\) for every \(s \in (a_i, b_i)\). We say that \((\rho ^i, W^i)\) is an excursion of \((\rho ,W)\) from D. Observe that, by the snake property, \(W_s^i(0) = {\widehat{W}}_{a_i}\) for all \(s \geqslant 0\), and that \({\widehat{W}}_{a_i} \in \partial D\). This is the point of \(\partial D\) used by the subtrajectory \(W^i\) to escape from D.

In order to state the special Markov property we need to introduce one last notation. Let \(\theta \) be the right inverse of \({\widetilde{L}}^D\), viz. the \({\mathcal {F}}^D\)-measurable function defined as

$$\begin{aligned} \theta _{r}:=\inf \big \{s \geqslant 0 \,: L_{\Gamma ^D_{s}}^{D}> r\big \}, \quad \text { for all } r\in [0,L_{\sigma }^{D}).\, \end{aligned}$$
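Note that, by (3.3), \(L^D\) is constant on the jump intervals of \(\Gamma ^D\), so that \({\widetilde{L}}^D = (L^D_{\Gamma ^D_s})_{s \geqslant 0}\) is continuous and non-decreasing; in particular,

$$\begin{aligned} {\widetilde{L}}^{D}_{\theta _r} = L^{D}_{\Gamma ^D_{\theta _r}}= r, \quad \text { for all } r\in [0,L_{\sigma }^{D}), \end{aligned}$$

and \(\textrm{tr}_{D}({\widehat{W}})_{\theta _r}\) should be interpreted as the point of \(\partial D\) visited by the truncated snake when the exit local time reaches level r.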

Recall that we are considering some fixed \(x\in D\), the notation \(( (\rho ^i, W^i): i \in {\mathcal {I}} )\) for the excursions outside D, and that we are working under the hypothesis (\(\hbox {H}_{1}\)). We are now going to state and prove the special Markov property under \({\mathbb {P}}_{0,x}\), and we will deduce by standard arguments a version under the excursion measure \({\mathbb {N}}_{x}\). Under \({\mathbb {P}}_{0,x}\) we use the same notation as under \({\mathbb {N}}_x\), but observing that \(\sigma _H = \infty \) and noticing that \({\mathbb {P}}_{0,x}\)-a.s., we have \({\mathcal {Y}}_{D}=\int _{0}^{\infty } {\textrm{d}}s \mathbb {1}_{\{H_{s} \leqslant \tau _{D}(W_s)\}}=\infty \) and \(L^D_\infty = \infty \). In particular, this implies that \(\Gamma ^D_s\) and \(\theta _s\) are finite for every \(s < \infty \).

Theorem 3.7

(Special Markov property) Under \({\mathbb {P}}_{0,x}\), conditionally on \({\mathcal {F}}^{D}\), the point measure

$$\begin{aligned} \sum \limits _{i\in {\mathcal {I}}} \delta _{(L_{a_{i}}^{D},\rho ^{i},W^{i})}({\textrm{d}}\ell , \,{\textrm{d}}\rho , \,{\textrm{d}}\omega ) \end{aligned}$$

is a Poisson point process with intensity

$$\begin{aligned} \mathbb {1}_{[0,\infty )}(\ell )\,{\textrm{d}}\ell \, {\mathbb {N}}_{\textrm{tr}_{D}({\widehat{W}})_{\theta _\ell }}({\textrm{d}}\rho , \,{\textrm{d}}\omega ). \end{aligned}$$

Recall that we have established in Proposition 3.4 that \({\widetilde{L}}^D\) is \({\mathcal {F}}^D\)-measurable. It might also be worth observing that, for a measurable function \(F = F(\uprho , \omega )\), when integrating with respect to the intensity measure \( \mathbb {1}_{[0,\infty )}(\ell )\, {\textrm{d}}\ell \, {\mathbb {N}}_{\textrm{tr}_{D}({\widehat{W}})_{\theta _\ell }}({\textrm{d}}\rho , \,{\textrm{d}}\omega )\) we can rewrite the expression in the following more tractable form:

$$\begin{aligned} \int _0^\infty {\textrm{d}}\ell \, {\mathbb {N}}_{\textrm{tr}_{D}({\widehat{W}})_{\theta _\ell }} (F) = \int _0^\infty {\textrm{d}}{\widetilde{L}}_s^D \, {\mathbb {N}}_{\textrm{tr}_{D}({\widehat{W}})_{s}} (F) = \int _0^\infty {\textrm{d}}{L}_s^D \, {\mathbb {N}}_{{\widehat{W}}_{s}} (F), \end{aligned}$$

where in the last equality, we applied a change of variable for Lebesgue-Stieltjes integrals using the fact that \(L^D\) is constant on the excursion intervals \([\Gamma ^D_{s-}, \Gamma ^D_{s} ]\) when \(\Gamma ^D_{s-} < \Gamma ^D_s\). Let us now prove Theorem 3.7.

Proof

In this proof, we work with \((\rho ,W)\) under \({\mathbb {P}}_{0,x}\). Let us start with some preliminary constructions and remarks. First, we introduce the \({\mathcal {S}}_x\)-valued process \((\rho , W^*)\) defined at each \(t \geqslant 0\) as

$$\begin{aligned} \big (\rho _{t},W^{*}_{t}(s)\big )=\Big (\rho _{t}({\textrm{d}}h),W_{t}\big (s\wedge \tau _{D}(W_{t})\big )\Big ), \quad \text { for } s \in [0,\zeta _{W_t}], \end{aligned}$$

and let \({\mathcal {F}}^D_*\) be its generated sigma-field on \({\mathbb {D}}({\mathbb {R}}_+, {\mathcal {M}}_f({\mathbb {R}}_+) \times {\mathcal {W}}_{E,x})\). The snake \((\rho , W^*)\) can be interpreted as the Lévy snake associated with \((\psi ,\xi ^*)\), where \(\xi ^{*}\) is the stopped Markov process \((\xi ^*_t:~t\geqslant 0)=(\xi _{t\wedge \tau _D(\xi )}:~t\geqslant 0)\). Since, for every \(t\geqslant 0\),

$$\begin{aligned} \big (\zeta _{W_{t}}-\tau _{D}(W_{t})\big )_{+}=\big (\zeta _{W_{t}^{*}}-\tau _{D}(W_{t}^{*})\big )_{+}, \end{aligned}$$

we derive that the process \(\gamma _{t}^D = (\zeta _{W_{t}}-\tau _{D}(W_{t}))_{+}\) is \({\mathcal {F}}_*^D\)-measurable. Consequently, we have \({\mathcal {F}}^D\subset {\mathcal {F}}^D_*\), since \(V^{D}\) – the functional measuring the time spent in D defined in (3.1) – is \({\mathcal {F}}^D_*\)-measurable and by definition \(\textrm{tr}_D (\rho ,W)=\textrm{tr}_D (\rho ,W^*)\). Recalling that \(\big ( (a_{i},b_{i}): i \in {\mathcal {I}}\big )\) stands for the connected components of the open set

$$\begin{aligned} \{t\geqslant 0:\,\tau _{D}(W_{t})< \zeta _{W_t} \} = \{ t \geqslant 0: \gamma ^D_t > 0 \}, \end{aligned}$$

we deduce by the previous discussion and the identity \(\tau _D(W_{a_i}) = \zeta _{a_i}\), that the variables

$$\begin{aligned} {\widehat{W}}_{a_i} = {\widehat{W}}^*_{a_i},\quad \zeta ^{i} =\zeta _{(a_{i}+\cdot \, )\wedge b_{i}}-\zeta _{a_{i}}, \quad \text { and a fortiori } \, \rho ^{i}, \, \text { are } {\mathcal {F}}^D_*\text {-measurable}. \end{aligned}$$

Informally, \({\mathcal {F}}_*^D\) encodes the information of the trajectories staying in D and the tree structure. We claim that conditionally on \({\mathcal {F}}^D_*\), the excursions \(( W^i:i \in {\mathcal {I}} )\) are independent, and that the conditional distribution of \(W^{i}\) is \({\mathbb {Q}}_{{\widehat{W}}_{a_{i}}^*}^{\zeta ^{i}}\), where we recall from Sect. 2.3 that we denote the distribution of the snake driven by h started at x by \({\mathbb {Q}}^h_x\).

In order to prove this claim, consider a collection of snake trajectories \(\big ( W^{i,\prime }:i\in {\mathcal {I}} \big )\) such that, conditionally on \((\rho , W^{*})\), they are independent and \(W^{i,\prime }\) is distributed according to the measure \({\mathbb {Q}}_{{\widehat{W}}_{a_{i}}^{*}}^{\zeta ^{i}}\). Next, let \(W^{\prime }\) be the process defined as follows: for every t such that \(\gamma _{t}^D=0\) set \(W^{\prime }_{t}=W^{*}_{t}\), and if \(\gamma _{t}^D>0\) we set:

$$\begin{aligned} W^{\prime }_{t}(s)= {\left\{ \begin{array}{ll} W_{t}^{*}(s) \hspace{24mm} \, \, \text {if}\,\,s\in [0,\tau _{D}(W_{t}^{*})]\\ W_{t-a_{i}}^{i,\prime }\big (s-\tau _D(W_{t}^{*})\big )\,\, \hspace{4mm} \text {if}\,\,s\in [\tau _{D}(W_{t}^{*}), \zeta _{W_t^*}], \end{array}\right. } \end{aligned}$$

where i is the unique index such that \(t\in (a_{i},b_{i})\). By construction, \((\rho , W^{\prime })\) is in \({\mathbb {D}}({\mathbb {R}}_+, {\mathcal {M}}_f({\mathbb {R}}_+) \times {\mathcal {W}}_{E,x})\), and a straightforward computation of its finite-dimensional marginals shows that its distribution is \({\mathbb {P}}_{0,x}\), proving our claim.

Notice that (3.3) implies that \(L^D\) is constant on the intervals \([\Gamma ^D_{s-}, \Gamma ^D_{s}]\) when \(\Gamma ^D_{s-} < \Gamma ^D_s\). Hence, \(L^D_s = {\widetilde{L}}^D_{V^D_s}\) for all \(s \geqslant 0\) and in particular \(L^D_{a_i} = {\widetilde{L}}^D_{V^D_{a_i}}\), the latter being \({\mathcal {F}}^D_*\)-measurable. Consider now U a bounded \({\mathcal {F}}^{D}\)-measurable random variable, and remark that to obtain the desired result, it is enough to show that:

$$\begin{aligned}{} & {} {\mathbb {E}}_{0,x}\Big [U\exp \Big (-\sum \limits _{i\in {\mathcal {I}}} F\big (L_{a_{i}}^{D},\rho ^{i},W^{i}\big )\Big )\Big ]\\{} & {} \quad = {\mathbb {E}}_{0,x}\Big [U\exp \Big (-\int _{0}^{\infty }{\textrm{d}}\ell \, {\mathbb {N}}_{\textrm{tr}_{D}({\widehat{W}})_{\theta _\ell }}\big (1-\exp (-F(\ell ,\rho ,W))\big )\Big )\Big ], \end{aligned}$$

for every non-negative measurable function F on \({\mathbb {R}}_+ \times {\mathbb {D}}({\mathbb {R}}_+, {\mathcal {M}}_f({\mathbb {R}}_+) \times {\mathcal {W}}_E)\). In order to prove this identity, we start by projecting the left-hand side on \({\mathcal {F}}^{D}_*\): by the previous discussion, and recalling that \({\mathcal {F}}^{D}\subset {\mathcal {F}}^{D}_*\), we get

$$\begin{aligned} {\mathbb {E}}_{0,x}\Big [U\exp \Big (-\sum \limits _{i\in {\mathcal {I}}} F\big (L_{a_{i}}^{D},\rho ^{i},W^{i}\big )\Big )\Big ]={\mathbb {E}}_{0,x}\Big [U\prod \limits _{i\in {\mathcal {I}}}{\mathbb {Q}}_{{\widehat{W}}_{a_{i}}^{*}}^{\zeta ^i}\big (\exp (-F(L_{a_{i}}^{D},\rho ^i,W))\big )\Big ]. \end{aligned}$$

Moreover, it is straightforward to see that

$$\begin{aligned} {\widehat{W}}^*_{a_i} = {\widehat{W}}_{a_{i}}=\textrm{tr}_{D}({\widehat{W}})_{\theta _{L_{a_{i}}^{D}}}; \end{aligned}$$

we omit the details of this identity, since the argument used for (23) in [24, Theorem 20] for the Brownian snake applies directly to our framework. Consequently, we have:

$$\begin{aligned}{} & {} {\mathbb {E}}_{0,x}\Big [U\prod \limits _{i\in {\mathcal {I}}}{\mathbb {Q}}_{{\widehat{W}}_{a_{i}}^{*}}^{\zeta ^{i}} \big (\exp (-F(L_{a_{i}}^{D},\rho ^i,W))\big )\Big ]\\{} & {} \quad ={\mathbb {E}}_{0,x}\Big [U\prod \limits _{i\in {\mathcal {I}}}{\mathbb {Q}}_{\textrm{tr}_{D}({\widehat{W}})_{\theta _{L_{a_{i}}^{D}}}}^{\zeta ^{i}}\big (\exp (-F(L_{a_{i}}^{D},\rho ^i,W))\big )\Big ]. \end{aligned}$$

Now, we need to take the projection on \({\mathcal {F}}^D\). Recalling that \(H(\rho ^i)=\zeta ^i\), observe that for every \(i\in {\mathcal {I}}\),

$$\begin{aligned} {\mathbb {Q}}_{\textrm{tr}_{D}({\widehat{W}})_{\theta _{L_{a_{i}}^{D}}}}^{\zeta ^{i}}\big (\exp (-F(L_{a_{i}}^{D},\rho ^i,W))\big ) \end{aligned}$$

is a measurable function of the pair \((L_{a_i}^D,\rho ^i)\) and the process \((\text {tr}_{D}(W)_{\theta _r}:~r\geqslant 0)\), the latter being \({\mathcal {F}}^D\)-measurable. We are going to conclude by showing that the point measure

$$\begin{aligned} \sum \limits _{i\in {\mathcal {I}}}\delta _{(L_{a_{i}}^{D},\rho ^{i})} \end{aligned}$$

is a Poisson point measure with intensity \(\mathbb {1}_{ [0,\infty )}(\ell ){\textrm{d}}\ell \, N({\textrm{d}}\rho )\) independent of \({\mathcal {F}}^D\). Remark that once this has been established, an application of the exponential formula for functionals of Poisson random measures yields

$$\begin{aligned}{} & {} {\mathbb {E}}_{0,x}\Big [U\prod \limits _{i\in {\mathcal {I}}}{\mathbb {Q}}_{\textrm{tr}_{D}({\widehat{W}})_{\theta _{L_{a_{i}}^{D}}}}^{\zeta ^{i}}\big (\exp (-F(L_{a_{i}}^{D},\rho ^i,W))\big )\Big ]\\{} & {} \quad ={\mathbb {E}}_{0,x}\Big [U\exp \Big (-\int _{0}^{\infty } {\textrm{d}}\ell \, {\mathbb {N}}_{\textrm{tr}_{D}({\widehat{W}})_{\theta _\ell }}\big (1-\exp (-F(\ell ,\rho ,W))\big )\Big )\Big ], \end{aligned}$$

giving the desired result. In this direction, recall the definition of \(\rho ^{D}\) given in (3.22), and that \(\ell ^D\) stands for the local time of \(\rho ^D\) at 0. We denote the connected components of the open set \(\{t\geqslant 0: \, \langle \rho ^{D}_{t}, 1 \rangle \ne 0\} = \{ t \geqslant 0: H(\rho ^D_t) > 0 \}\) by \(\big ((c_{j},d_{j}): j \in {\mathbb {N}} \big )\) – the latter equality holding since \(\rho ^D_t(\{ 0 \}) =0\) – and observe that these are precisely the excursion intervals of \(\langle \rho ^D, 1 \rangle \) from 0. It follows by (2.10) and the discussion before the proof that

$$\begin{aligned} \sum _{j\in {\mathbb {N}}}\delta _{ (\ell ^D(c_j), \, \rho ^{D}_{(c_{j}+\cdot )\wedge d_{j}}) } \end{aligned}$$

is a Poisson point measure with intensity \(\mathbb {1}_{ [0,\infty )}(\ell ){\textrm{d}}\ell \, N({\textrm{d}}\rho )\) and observe that this measure is independent of \({\mathcal {F}}^D\) – since \(\rho ^D\) is independent of \({\mathcal {F}}^D\). Furthermore, by (3.23) we have:

$$\begin{aligned} L_{\sigma _{s}^{D}}^{D}= \ell ^D \Big ( \int _0^{\sigma _{s}^{D}} {\textrm{d}}r \, \mathbb {1}_{\{ \gamma ^D_r > 0 \}} \Big )=\ell ^D (s), \end{aligned}$$

for every \(s\geqslant 0\). It is now straightforward to deduce from our last observations that:

$$\begin{aligned} \big \{ ( L_{a_i}^D, \rho ^i ): i \in {\mathcal {I}} \big \} = \big \{ ( \ell ^D(c_j), \rho ^{D}_{(c_{j}+\cdot )\wedge d_{j}} ): j \in {\mathbb {N}} \big \}, \end{aligned}$$

concluding the proof. \(\square \)

Setting \(T_D = \inf \{ t \geqslant 0: \tau _D(W_t) < \infty \}\), we infer from our previous result a version of the special Markov property holding under the probability measure

$$\begin{aligned} {\mathbb {N}}^D_{x}:= {\mathbb {N}}_{x}( \, \cdot \, | T_D < \infty ). \end{aligned}$$

Observe that \({\mathbb {N}}_x(T_D < \infty )\) is finite: if this quantity were infinite, by excursion theory, the process \((\rho , W)\) under \({\mathbb {P}}_{0,x}\) would have infinitely many excursions exiting D on compact time intervals, contradicting the continuity of its paths. Finally, note that \((\rho , W)\) under \({\mathbb {N}}^D_x\) has the distribution of the first excursion exiting the domain D. As a straightforward consequence of Theorem 3.7, this observation allows us to deduce:

Theorem 3.8

Under \({\mathbb {N}}^D_{x}\) and conditionally on \({\mathcal {F}}^{D}\), the point measure:

$$\begin{aligned} \sum \limits _{i\in {\mathcal {I}}} \delta _{(L_{a_{i}}^{D},\rho ^{i},W^{i})}({\textrm{d}}\ell , \, {\textrm{d}}\rho , \, {\textrm{d}}\omega ) \end{aligned}$$

is a Poisson point process with intensity

$$\begin{aligned} \mathbb {1}_{ [0, L_{\sigma }^{D}]}(\ell )\, {\textrm{d}}\ell \, {\mathbb {N}}_{\textrm{tr}_{D}({\widehat{W}})_{\theta _\ell }}({\textrm{d}}\rho , \, {\textrm{d}}\omega ). \end{aligned}$$

Recall that the measure \({\textrm{d}}L_s^D\) is supported on \(\{ s\geqslant 0: {\widehat{W}}_s \in \partial D \}\) and consider a measurable function \(g:\partial D\rightarrow {\mathbb {R}}_{+}\). Under \({\mathbb {N}}_x\), we define the exit measure from D, denoted by \({\mathcal {Z}}^D\), as:

$$\begin{aligned} \langle {\mathcal {Z}}^D, g \rangle := \int _0^\sigma {\textrm{d}}L^D_s g({\widehat{W}}_s). \end{aligned}$$

The total mass of \({\mathcal {Z}}^D\) is \(L_\sigma ^D\) and, in particular, \({\mathcal {Z}}^D\) is non-null only on the event \(\{ T_D < \infty \}\). Again by a standard change of variables, we get

$$\begin{aligned} \langle {\mathcal {Z}}^D, g \rangle = \int _0^\sigma {\textrm{d}}{\widetilde{L}}^D_s \, g( \textrm{tr}_D ({\widehat{W}})_s) = \int _0^{L^D_\sigma } {\textrm{d}}\ell \, g( \textrm{tr}_D ({\widehat{W}})_{\theta _\ell }), \quad \quad {\mathbb {N}}_x \text {-a.e.} \end{aligned}$$

and this implies that \({\mathcal {Z}}^D\) is \({\mathcal {F}}^D\)-measurable since \(L^D_\sigma \in {\mathcal {F}}^D\) by Proposition 3.4. In this work, we shall frequently make use of the following simpler version of the special Markov property. By Theorem 3.8, we have

Corollary 3.9

Under \({\mathbb {N}}_x^D\) and conditionally on \({\mathcal {F}}^D\), the point measure

$$\begin{aligned} \sum _{i \in {\mathcal {I}}} \delta _{(\rho ^i, W^i)} ({\textrm{d}}\rho , \, {\textrm{d}}\omega ) \end{aligned}$$
(3.24)

is a Poisson random measure with intensity \(\int {\mathcal {Z}}^D({\textrm{d}}y) {\mathbb {N}}_{y}({\textrm{d}}\rho , \, {\textrm{d}}\omega )\).
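Indeed, Corollary 3.9 follows from Theorem 3.8 by forgetting the first coordinate of the point measure (3.24): by the change of variables already used after Theorem 3.7 and the definition of \({\mathcal {Z}}^D\), the resulting intensity takes the form

$$\begin{aligned} \int _0^{L^D_\sigma } {\textrm{d}}\ell \, {\mathbb {N}}_{\textrm{tr}_{D}({\widehat{W}})_{\theta _\ell }}({\textrm{d}}\rho , \, {\textrm{d}}\omega ) = \int _0^\sigma {\textrm{d}}L^D_s \, {\mathbb {N}}_{{\widehat{W}}_s}({\textrm{d}}\rho , \, {\textrm{d}}\omega ) = \int {\mathcal {Z}}^D({\textrm{d}}y) \, {\mathbb {N}}_{y}({\textrm{d}}\rho , \, {\textrm{d}}\omega ). \end{aligned}$$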

Let us close this section by recalling some well-known properties of \({\mathcal {Z}}^D\) that will be needed, and by introducing some useful notations. Remark by (3.4) that, for any measurable \(g: \partial D \mapsto {\mathbb {R}}_+\) and for every \(y \in D\), we have

$$\begin{aligned} \,{\mathbb {N}}_{y}\big (\langle {\mathcal {Z}}^{D},g \rangle \big )=\Pi _{y}\Big (\mathbb {1}_{\{\tau _{D}<\infty \}}\exp (-\alpha \tau _{D}) g(\xi _{\tau _{D}})\Big ), \end{aligned}$$

and for such g, we set:

$$\begin{aligned} \,u_{g}^{D}(y):={\mathbb {N}}_{y}\big (1-\exp (-\langle {\mathcal {Z}}^{D},g\rangle )\big ), \quad \text { for all } y\in D. \end{aligned}$$
(3.25)

Theorem 4.3.3 in [11] states that, for every bounded measurable function \(g:\partial D\rightarrow {\mathbb {R}}_{+}\), the function \(u_{g}^{D}\) solves the integral equation:

$$\begin{aligned} u_{g}^{D}(y)+\Pi _{y}\Big (\int _{0}^{\tau _{D}}{\textrm{d}}t \, \psi (u_{g}^{D}(\xi _{t}))\Big )=\Pi _{y}\big (\mathbb {1}_{\{\tau _{D}<\infty \}}g(\xi _{\tau _{D}})\big ). \end{aligned}$$
(3.26)

By convention, we set \(u_{g}^{D}(y):=g(y)\) for every \(y\in \partial D\), and we stress that this convention is compatible with (3.26).
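To illustrate (3.26) with a classical example (included only as an illustration, and not needed in the sequel), suppose that \(\xi \) is a Brownian motion in \({\mathbb {R}}^d\) and that D and g are sufficiently regular. In this case, \(y \mapsto \Pi _y\big (\int _0^{\tau _D}{\textrm{d}}t \, \varphi (\xi _t)\big )\) solves \(\frac{1}{2}\Delta v=-\varphi \) in D with null boundary values, while \(y \mapsto \Pi _y\big (\mathbb {1}_{\{\tau _D<\infty \}}g(\xi _{\tau _D})\big )\) is harmonic in D with boundary value g. Applying \(\frac{1}{2}\Delta \) to (3.26) then shows that, at least formally, \(u_{g}^{D}\) solves the semilinear Dirichlet problem:

$$\begin{aligned} \frac{1}{2}\Delta u_{g}^{D}=\psi (u_{g}^{D}) \quad \text { in } D, \quad \quad u_{g}^{D}=g \quad \text { on } \partial D, \end{aligned}$$

recovering the well-known connection between exit measures and semilinear partial differential equations, see e.g. [11, 22].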

4 Construction of a measure supported on \(\{ t \in {\mathbb {R}}_+: {\widehat{W}}_t = x \}\)

From now on, we fix \(x\in E\) and we consider the random set:

$$\begin{aligned} \{ t \in {\mathbb {R}}_+: {\widehat{W}}_t = x \}, \quad \text {as well as its image on the tree }{\mathcal {T}}_H\text {, viz.}\quad \{ \upsilon \in {\mathcal {T}}_H: \xi _\upsilon = x \}.\nonumber \\ \end{aligned}$$
(4.1)

In order to study the latter, we shall construct an additive functional \(A:= (A_t)_{t \in {\mathbb {R}}_+}\) of the Lévy snake supported on \(\{ t \in {\mathbb {R}}_+: {\widehat{W}}_t = x \}\). The present section is devoted to the construction of A and to developing the machinery needed for our analysis. The study of \(\{ \upsilon \in {\mathcal {T}}_H: \xi _\upsilon = x \}\) is deferred to Sect. 5 and will heavily rely on the results of this section. Let us now discuss in detail the framework that we will consider in the rest of this work.

Framework of Sections 4 and 5: With the same notations as in previous sections, consider a strong Markov process \(\xi \) taking values in E with a.s. continuous sample paths, and make the following assumptions:

  • Hypothesis \(({{\textbf {H}}}_{2})\). [Statement displayed as an image in the source and not recovered here; by the discussion below, it guarantees in particular that x is regular and recurrent for \(\xi \), so that the local time of \(\xi \) at x is well defined.]

and

  • Hypothesis \(({{\textbf {H}}}_{3})\). [Statement displayed as an image in the source and not recovered here.]

Under (\(\hbox {H}_{2}\)) the local time of \(\xi \) at x is well defined up to a multiplicative constant (that we fix arbitrarily) and we denote it by \({\mathcal {L}}\). If we denote by \({\textrm{d}}{\mathcal {L}}\) the Stieltjes measure of \({\mathcal {L}}\), the support of the measure \({\textrm{d}}{\mathcal {L}}\) is almost surely \(\{ t \geqslant 0: \xi _t = x\}\), see e.g. [4, Chapter 4]. The recurrence hypothesis is assumed for convenience and we expect our results to hold with minor modifications without it. Set \(E_*:= E {\setminus } \{ x\}\) and for \({\text {w}}\in {\mathcal {W}}_E\), with the notation of Sect. 3 write

$$\begin{aligned} \tau _{E_*}({\text {w}}) = \inf \{ h \in [0,\zeta _{\text {w}}]: {\text {w}}(h) = x \}, \end{aligned}$$

for the exit time of \({\text {w}}\) from the open set \(E_*\). Observe that since x is recurrent for \(\xi \), we have

$$\begin{aligned} \Pi _y(\tau _{E_*}< \infty ) = 1 \end{aligned}$$
(4.2)

for every \(y \in E_*\), and in particular (\(\hbox {H}_{1}\)) holds. This will allow us to make use of the special Markov property established in the previous section. Assumption (\(\hbox {H}_{3}\)) might seem a technicality, but it plays a crucial role in our study: it will ensure, under \({\mathbb {N}}_{y}\) and \({\mathbb {P}}_{0,y}\), that the set of branching points of \({\mathcal {T}}_H\) and the set \(\{ \upsilon \in {\mathcal {T}}_H {\setminus } \{ 0 \}: \xi _\upsilon = x \}\) are disjoint. We will explain this point properly after concluding the presentation of the section.
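To keep a concrete example in mind (this is only an illustration, and it is not used in the sequel), one can take for \(\xi \) a linear Brownian motion and x = 0: the point 0 is then regular, instantaneous and recurrent for \(\xi \), the time spent at 0 has a.s. null Lebesgue measure, and the local time \({\mathcal {L}}\) admits the classical approximation

$$\begin{aligned} {\mathcal {L}}_t = \lim \limits _{\varepsilon \downarrow 0} \frac{1}{2\varepsilon } \int _0^t {\textrm{d}}s \, \mathbb {1}_{\{ |\xi _s| \leqslant \varepsilon \}}, \quad \quad t \geqslant 0, \end{aligned}$$

an approximation of the same nature as the one appearing in (4.5) below.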

Let \({\mathcal {N}}\) be the excursion measure of \(\xi \) at x associated with \({\mathcal {L}}\) and, with a slight abuse of notation, still write \(\sigma _\xi \) for the lifetime of \(\xi \) under \({\mathcal {N}}\). The pair

$$\begin{aligned} {\overline{\xi }}_s = ( \xi _s, {\mathcal {L}}_s ), \quad s \geqslant 0, \end{aligned}$$

is a strong Markov process taking values in the Polish space \({\overline{E}}:=E\times {\mathbb {R}}_+\) equipped with the product metric \(d_{{\overline{E}}}\). We write \(\Pi _{y,r}\) for its law started from an arbitrary point \((y,r)\in {\overline{E}}\). Recall that we always work under assumption (\(\hbox {H}_{0}\)), which for \((\psi , {\overline{\xi }})\) takes the following form:

  • Hypothesis \(({{\textbf {H}}}'_{0})\). There exist a constant \(C_\Pi > 0\) and two positive numbers \(p,q > 0\) such that, for every \(y \in E\) and \(t\geqslant 0\), we have:

    [Displayed inequality not recovered from the source.]

where we recall the definition of \(\Upsilon \) from (2.17). We will use respectively the notation \({\overline{\Theta }}\), \(\overline{{\mathcal {S}}}\) for the sets defined as \(\Theta \), \({\mathcal {S}}\) in Sect. 2.3, but replacing the Polish space E by \({\overline{E}}\). It will be convenient to write the elements of \({\mathcal {W}}_{{\overline{E}}}\) as pairs \(\overline{\text {w}} = (\text {w}, \ell )\), where \({\text {w}}\in {\mathcal {W}}_E\) and \(\ell : [0,\zeta _{\text {w}}]\mapsto {\mathbb {R}}_+\) is a continuous function. Recall that under (\(\hbox {H}_{0}^\prime \)), the family of measures \(({\mathbb {P}}_{\mu , {\overline{{\text {w}}}}}: (\mu , {\overline{{\text {w}}}}) \in {\overline{\Theta }} )\) is defined on the canonical space \({\mathbb {D}}({\mathbb {R}}_+, {\mathcal {M}}_f({\mathbb {R}}_+) \times {\mathcal {W}}_{{\overline{E}}} )\), and we denote the canonical process by \((\rho , W, \Lambda )\), where \(W_{s}:[0,\zeta _{s}({\overline{W}}_{s})]\mapsto E\) and \(\Lambda _{s}:[0,\zeta _{s}({\overline{W}}_{s})]\mapsto {\mathbb {R}}_+\). In other words, for each \((\mu , {\overline{{\text {w}}}})\in {\overline{\Theta }}\), under \({\mathbb {P}}_{\mu , {\overline{{\text {w}}}}}\) the process

$$\begin{aligned} (\rho _s, W_{s},\Lambda _{s} ), \quad s \geqslant 0, \end{aligned}$$

is the \(\psi \)-Lévy snake with spatial motion \({\overline{\xi }}\) started from \((\mu , {\overline{{\text {w}}}})\) and we simply write \({\overline{W}}_s:= (W_{s},\Lambda _{s})\). For every \((y,r_0) \in {\overline{E}}\), we denote the excursion measure of \((\rho , {\overline{W}})\) starting from \((0,y,r_0)\) by \({{\mathbb {N}}}_{y,r_0}\).

Recall that under \({\mathbb {P}}_{0,y,r_0}\) or \({\mathbb {N}}_{y,r_0}\), for each \(s\geqslant 0\) and conditionally on \(\zeta _s\), the pair

$$\begin{aligned} (W_s,\Lambda _s) = \big ( (W_s(h), \Lambda _s(h)\big ): h \in [0,\zeta _s]\big ) \end{aligned}$$

has the distribution of \((\xi , {\mathcal {L}})\) under \(\Pi _{y,r_0}\) killed at \(\zeta _s\). In particular, the Lebesgue-Stieltjes measure of \(\Lambda _s\) is supported on the closure of \(\{ h \in [0,\zeta _s): W_s(h) = x \}\), \({\mathbb {P}}_{0,y,r_0}\) and \({\mathbb {N}}_{y,r_0}\)–a.e. We will restrict our analysis to the collection of initial conditions \((\mu , {\overline{{\text {w}}}}):=(\mu ,{\text {w}},\ell ) \in {\overline{\Theta }}\) satisfying the following two conditions:

  (i) \(\ell \) is a non-decreasing continuous function and the support of its Lebesgue-Stieltjes measure is

$$\begin{aligned} \overline{\big \{ h \in [0,\zeta _{\text {w}}): {\text {w}}(h) = x \big \}}. \end{aligned}$$

  (ii) The measure \(\mu \) does not charge the set \(\{ h \in [0,\zeta _{\text {w}}]: {\text {w}}(h) = x\}\), viz.

$$\begin{aligned} \int _{[0, \zeta _{{\text {w}}}]} \mu ({\textrm{d}}h) \, \mathbb {1}_{\{ {\text {w}}(h) = x \}} = 0. \end{aligned}$$

This subcollection of \({\overline{\Theta }}\) is denoted by \({\overline{\Theta }}_x\) and we will work with the process \(\big ( (\rho , {\overline{W}}), ({\mathbb {P}}_{\mu , {\overline{{\text {w}}}}}: (\mu , {\overline{{\text {w}}}}) \in {\overline{\Theta }}_x) \big )\). Conditions (i) and (ii) are natural, since as a particular consequence of the next lemma, under \({\mathbb {P}}_{0,y,r_0}\) and \({\mathbb {N}}_{y,r_0}\) the Lévy snake \((\rho , {\overline{W}})\) takes values in \({\overline{\Theta }}_x\).

Lemma 4.1

For every \((\mu , \overline{\text {w} })\in {\overline{\Theta }}_x\) and \((y,r_0) \in {\overline{E}}\), the process \((\rho ,{\overline{W}})\) under \({\mathbb {P}}_{\mu ,\overline{\text {w} } }\) and \({\mathbb {N}}_{y,r_0}\) takes values in \({\overline{\Theta }}_x\).

Proof

First, we argue that \({\mathbb {N}}_{y,r_0}\)–a.e. the pair \((\rho _t, {\overline{W}}_t)\) satisfies (i) and (ii) for each \(t \in [0, \sigma ]\). On the one hand, by formula (2.25), for every \((y,r_0) \in {\overline{E}}\) we have:

$$\begin{aligned}&{\mathbb {N}}_{y,r_0}\left( \int _{0}^{\sigma _{H}} {\textrm{d}}t \, \langle \rho _{t},\{h \in [0,H_t]: \,W_{t}(h)=x\} \rangle \right) \\&\quad =\int _{0}^{\infty }{\textrm{d}}a\,\exp (-\alpha a)\, E^0 \otimes \Pi _{y,r_0}\big [\int _{0}^{a} J_a({\textrm{d}}h) \, \mathbb {1}_{\{\xi _{h}=x\}}\big ] = 0. \end{aligned}$$

In the last equality we used that, by (\(\hbox {H}_{3}\)) and the independence between \(\xi \) and \(J_\infty \), Campbell's formula yields \(E^0 \otimes \Pi _{y,r_0}\big [\int _{0}^{\infty } J_{\infty }({\textrm{d}}h) \, \mathbb {1}_{\{\xi _{h}=x\}}\, \big ]=0\). On the other hand, by construction of the Lévy snake, for each fixed \(t \geqslant 0\) the support of \(\Lambda _t({\textrm{d}}h)\) is the closure of \(\{ h \in [0,H_t): W_t(h) = x \}\) in \([0,H_t]\), \({\mathbb {N}}_{y,r_0}\)–a.e. Consequently, \({\mathbb {N}}_{y,r_0}\)–a.e., we can find a countable dense set \({\mathcal {D}} \subset [0,\sigma ]\) such that we have

$$\begin{aligned}{} & {} \langle \rho _{t},\{h \in [0,H_t]: \,W_{t}(h)=x\} \rangle = 0 \hspace{0.5cm}\text { and}\hspace{0.5cm} \\{} & {} \text {supp } \Lambda _t({\textrm{d}}h)\text { is the closure of }\{ h \in [0,H_t): W_t(h) = x \}, \end{aligned}$$

for every \(t \in {\mathcal {D}}\). For instance, one can construct the set \({\mathcal {D}}\) by taking an infinite sequence of independent uniform points in \([0,\sigma ]\). We now claim that \(\rho \) satisfies that \({\mathbb {N}}_{y,r_0}\)–a.e., for every \(s<t\), we have \(\rho _{s}\mathbb {1}_{[0,m_H(s,t))} = \rho _{t}\mathbb {1}_{[0,m_H(s,t))}\), where we recall the notation \(m_{H}(s,t)=\min _{[s,t]}H\). Indeed, remark that for fixed \(s<t\), this holds by the Markov property and we can extend this property to all \(0\leqslant s<t\leqslant \sigma \) since \(\rho \) is càdlàg with respect to the total variation distance. Now, by the snake property we deduce that \({\mathbb {N}}_{y,r_0}\)–a.e, for every \(t \in [0,\sigma ]\), we have

$$\begin{aligned}{} & {} \langle \rho _{t},\{h \in [0,H_t): \,W_{t}(h)=x\} \rangle = 0 \quad \quad \text { and }\quad \quad \nonumber \\{} & {} \{ h \in [0,H_t): {W}_t(h) = x \} = \text {supp } \Lambda _t({\textrm{d}}h) \cap [0,H_t). \end{aligned}$$
(4.3)

Taking the closure in the second equality we deduce that the closure of \( \{ h \in [0,H_t): {W}_t(h) = x \}\) is exactly \(\text {supp } \Lambda _t({\textrm{d}}h)\). However, to conclude that \({\mathbb {N}}_{y,r_0}\)-a.e.

$$\begin{aligned} \langle \rho _{t},\{h \in [0,H_t]: \,W_{t}(h)=x\} \rangle =0, \quad \quad \text { for all } \, t\in [0, \sigma ], \end{aligned}$$
(4.4)

we still need an additional step. Arguing by contradiction, suppose that for some \(t>0\) the quantity in (4.4) is non-null. Then, by (4.3) we must have \(\rho _t(\{ H_t \})>0\) and \(W_t(H_t)=x\). By right-continuity of \(\rho \) with respect to the total variation metric, we get

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0}|\rho _t(\{ H_t \}) - \rho _{t+\varepsilon }(\{ H_t \})| = 0, \end{aligned}$$

and we deduce that for \(\varepsilon \) small enough, \(\rho _u(\{ H_t\})> 0\) for all \(u \in [t,t+\varepsilon )\); in particular \(H(\rho _{u})\geqslant H(\rho _t)\) for all \(u \in [t,t+\varepsilon )\). Since \(W_t(H_t) = x\), the snake property ensures that \({W}_{u}(H_t) = x\) for all \(u \in [t,t+\varepsilon )\) and, since \(\rho _u(\{ H_t\})> 0\) for every \(u \in [t,t+\varepsilon )\), we obtain a contradiction with the fact that \(\langle \rho _{u},\{h \in [0,H_u]: \,W_{u}(h)=x\} \rangle = 0\) for every \(u \in {\mathcal {D}}\).

Let us now deduce this result under \({\mathbb {P}}_{\mu , {\overline{{\text {w}}}}}\). First, observe that the statement of the lemma follows directly under \({\mathbb {P}}_{0,y,r_0}\) by excursion theory. Next, fix \((\mu ,\overline{\text {w}})\in {\overline{\Theta }}_{x}\) with \(\mu \ne 0\) and \({\overline{{\text {w}}}}(0) = (y,r_0)\), consider \((\rho ,{\overline{W}})\) under \({\mathbb {P}}_{\mu ,{\overline{{\text {w}}}}}\) and set \(T_{0}^+:=\inf \{t\geqslant 0:\, \langle \rho _{t},1 \rangle =0\}\). The strong Markov property gives that \(((\rho _{T_{0}^+ +t},{\overline{W}}_{T_{0}^++t}):~t\geqslant 0)\) is distributed according to \({\mathbb {P}}_{0,y,r_0}\), and consequently \((\rho _{T_{0}^+ + t},{\overline{W}}_{T_{0}^++t})_{t\geqslant 0}\) takes values in \({\overline{\Theta }}_x\). To conclude, it remains to prove the statement of the lemma under \({\mathbb {P}}^{\dag }_{\mu , {\overline{{\text {w}}}}}\). In this direction, under \({\mathbb {P}}^{\dag }_{\mu , {\overline{{\text {w}}}}}\), consider the excursion intervals \(\big ((\alpha _i,\beta _i):~i\geqslant 0\big )\) of \(\langle \rho ,1\rangle \) above its running infimum. Then, write \((\rho ^i, {\overline{W}}^i)\) for the subtrajectories of \((\rho , {\overline{W}})\) associated with \([\alpha _i,\beta _i]\) and set \(h_i:=H_{\alpha _i}\). We recall from (2.23) that the measure:

$$\begin{aligned} \sum \limits _{i\in {\mathbb {N}}}\delta _{(h_{i},\rho ^{i},{\overline{W}}^{i})}, \end{aligned}$$

is a Poisson point measure with intensity \(\mu ({\textrm{d}}h)\,{\mathbb {N}}_{{\overline{{\text {w}}}}(h)}({\textrm{d}}\rho ,\,{\textrm{d}}{\overline{W}}).\) Since \((\mu ,{\overline{{\text {w}}}})\in {\overline{\Theta }}_x\), it follows by the result under the excursion measures \(({\mathbb {N}}_{y,r_0}: (y,r_0) \in {\overline{E}} )\) that \({\mathbb {P}}^\dag _{\mu , {\overline{{\text {w}}}}}\)–a.s. the pair \((\rho _{t},{\overline{W}}_{t})\) belongs to \({\overline{\Theta }}_x\) for every \(t\in [0,T_{0}^+]\), as wanted. \(\square \)

Finally, recall that the snake property ensures that the function \(({\widehat{W}}_s, {\widehat{\Lambda }}_s )_{s \geqslant 0}\) is well defined on the quotient space \({\mathcal {T}}_H\). Hence, we can think of \({\overline{W}}\) as a tree-indexed process, that we write with the usual abuse of notation as

$$\begin{aligned} (\xi _\upsilon , {\mathcal {L}}_\upsilon )_{\upsilon \in {\mathcal {T}}_H}. \end{aligned}$$
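Concretely, if \(\upsilon = p_H(s)\) for some \(s \geqslant 0\), this notation stands for

$$\begin{aligned} (\xi _\upsilon , {\mathcal {L}}_\upsilon ) = ({\widehat{W}}_s, {\widehat{\Lambda }}_s), \end{aligned}$$

so that \({\mathcal {L}}_\upsilon \) is the local time at x accumulated by the spatial motion along the ancestral line going from the root of \({\mathcal {T}}_H\) to \(\upsilon \).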

Main results of Section 4: Now that we have introduced our framework, we can state the main results of this section. Much of our effort is devoted to the construction of a measure supported on the set \(\{ t\in {\mathbb {R}}_+: {\widehat{W}}_t = x \}\) and satisfying suitable properties. In this direction, for every \(r\geqslant 0\), we set \(\tau _r({\overline{{\text {w}}}}):= \inf \{ h \geqslant 0:~{\overline{{\text {w}}}}(h)=(x, r) \}\) and remark that, for every \((\mu , {\overline{{\text {w}}}}) \in {\overline{\Theta }}_x\), it holds that \(\mu (\{\tau _r({\overline{{\text {w}}}})\}) = 0\), with the convention \(\mu (\{ \infty \}) = 0\). We can now state the main result of this section:

Theorem 4.2

Fix \((y,r_0)\in {\overline{E}}\) and \((\mu ,\overline{\text {w} })\in {\overline{\Theta }}_x\). The convergence

$$\begin{aligned} A_t = \lim _{\varepsilon \downarrow 0} \frac{1}{\varepsilon } \int _0^t {\textrm{d}}u \int _{{\mathbb {R}}_+} {\textrm{d}}r \, \mathbb {1}_{\{ \tau _r({\overline{W}}_u )< H_u < \tau _r({\overline{W}}_u) + \varepsilon \}}, \end{aligned}$$
(4.5)

holds uniformly on compact intervals, in measure, under \({\mathbb {P}}_{\mu , \overline{\text {w} } }\) and under \({\mathbb {N}}_{y,r_0}( \, \cdot \, \cap \{ \sigma > z \} )\) for every \(z >0\). Moreover, (4.5) defines a continuous additive functional \(A = (A_t)\) of the Lévy snake \((\rho , {\overline{W}})\), whose Lebesgue-Stieltjes measure \({\textrm{d}}A\) is supported on \(\{ t \in {\mathbb {R}}_+: {\widehat{W}}_t = x \}\).
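At a purely heuristic level, exchanging the limit in \(\varepsilon \) with the integral in r in (4.5) suggests the identity

$$\begin{aligned} A_t = \int _{{\mathbb {R}}_+} {\textrm{d}}r \, L^{D_r}_t, \quad \quad t \geqslant 0, \end{aligned}$$

where, informally, \(L^{D_r}\) denotes the exit local time of \((\rho , {\overline{W}})\) from the open set \(D_r = {\overline{E}} {\setminus } \{ (x,r) \}\) that will be studied in Sect. 4.1.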

We will give another equivalent construction of the additive functional A in Proposition 4.10 but we are not yet in position to formulate the precise statement. Both constructions will be needed for our work. Next, the second main result of the section characterizes the support of the measure \({\textrm{d}}A\) as follows:

Theorem 4.3

Fix \((y,r_0)\in {\overline{E}}\), \((\mu ,\overline{\text {w} })\in {\overline{\Theta }}_x\) and denote the support of the Lebesgue-Stieltjes measure of A by \(\text {supp } {\textrm{d}}A\). Under \({\mathbb {N}}_{y,r_0}\) and \({\mathbb {P}}_{\mu ,\overline{\text {w} }}\), we have:

$$\begin{aligned} \text {supp } \, {\textrm{d}}A = \overline{\big \{ t \in [0,\sigma ]: \, \xi _{p_H(t)} = x \text { and } p_H(t) \in \text {Multi} _2({\mathcal {T}}_H) \cup \{ 0 \} \big \}}. \end{aligned}$$
(4.6)

Observe that under \({\mathbb {P}}_{\mu , {\overline{{\text {w}}}}}\) with \({\text {w}}(0) = x\), the root of \({\mathcal {T}}_H\) has infinite multiplicity, and this is why we had to consider it separately in the previous display. This result is stated in a slightly different but equivalent form in Theorem 4.20. The identity (4.6) can also be formulated in terms of constancy intervals of \({\widehat{\Lambda }}\). More precisely, we will also establish in Theorem 4.20 that, under \({\mathbb {N}}_{y,r_0}\) and \({\mathbb {P}}_{\mu ,{\overline{{\text {w}}}}}\), we have:

$$\begin{aligned} \text {supp } {\textrm{d}}A = [0,\sigma ] \setminus \big \{ t \in [0,\sigma ]: ~ \sup _{(t-\varepsilon , t+\varepsilon )} {\widehat{\Lambda }}_{s} = \inf _{(t-\varepsilon , t+\varepsilon )} {\widehat{\Lambda }}_{s}, \quad \text { for some } \varepsilon > 0 \big \}.\nonumber \\ \end{aligned}$$
(4.7)

We conclude the presentation of our framework with a consequence of Lemma 4.1. Roughly speaking, it states that, with the exception of the root under \({\mathbb {P}}_{0,x,0}\), the process \((\xi _\upsilon )_{\upsilon \in {\mathcal {T}}_H}\) cannot take the value x at the branching points of \({\mathcal {T}}_H\). The precise statement is the following:

Proposition 4.4

For every \((y,r_0) \in {\overline{E}}\) and \((\mu ,\overline{\textrm{w}})\in {\overline{\Theta }}_x\), we have:

$$\begin{aligned} \big \{ t \in [0,\sigma ] : {\widehat{W}}_t = x \text { and } p_H(t) \in \big (\text {Multi}_3 ({\mathcal {T}}_H) \cup \text {Multi}_\infty ({\mathcal {T}}_H)\big ) \setminus \{0\} \big \} = \varnothing , \end{aligned}$$

under \({\mathbb {N}}_{y,r_0}\) and \({\mathbb {P}}_{\mu ,\overline{\text {w} }}\).

Proof

We start by proving our result under \({\mathbb {N}}_{y,0}\). First, introduce the measure \({\mathbb {N}}^\bullet _{y,0}({\textrm{d}}\rho , {\textrm{d}}{\overline{W}}, \, {\textrm{d}}s )\) supported on \({\mathbb {D}}({\mathbb {R}}_+, {\mathcal {M}}_f({\mathbb {R}}_+) \times {\mathcal {W}}_{{\overline{E}}}) \times {\mathbb {R}}_+\) defined by \({\mathbb {N}}^\bullet _{y,0} = {\mathbb {N}}_{y,0}({\textrm{d}}\rho , {\textrm{d}}{\overline{W}}) \, {\textrm{d}}s \mathbb {1}_{\{ s \leqslant \sigma \}}\) and write \(U: {\mathbb {R}}_+ \mapsto {\mathbb {R}}_+\) for the identity function \(U(s) = s\). The law under \({\mathbb {N}}^\bullet _{y,0}\) of \((\rho , {\overline{W}}, U)\) is therefore given by

$$\begin{aligned} {\mathbb {N}}^{\bullet }_{y,0}\Big ( \Phi (\rho , {\overline{W}}, U) \Big ) = {\mathbb {N}}_{y,0}\Big ( \int _0^\sigma {\textrm{d}}s \, \Phi ( \rho , {\overline{W}}, s) \Big ). \end{aligned}$$

The measure \({\mathbb {N}}^{\bullet }_{y,0}\) can be seen as a pointed version of \({\mathbb {N}}_{y,0}\). In particular, conditionally on \((\rho , {\overline{W}})\), the random variable U is a uniform point in \([0,\sigma ]\). Under \({\mathbb {N}}_{y,0}^\bullet \) we still write \(X_t:= \langle \rho _t,1 \rangle \) and \(H_t:=H(\rho _t)\). Furthermore, we set \(X^{\bullet }_t:= X_{U+ t} - X_U\) and \(I^{\bullet }_t:= \inf _{s \leqslant t } X^{\bullet }_s\), for every \(t \geqslant 0\), and we denote the excursion intervals over the running infimum of \(X^{\bullet }\) by \(\big ( (\alpha _i, \beta _i): \, i \in {\mathbb {N}} \big )\). The dependence on U is dropped to simplify notation. Finally, set

$$\begin{aligned} h_i^{\bullet }:= H\big ( \kappa _{-I^{\bullet }_{\alpha _i}} \rho _U \big ) , \end{aligned}$$

and write \((\rho ^{\bullet , i}, {\overline{W}}^{\bullet , i})\) for the corresponding subtrajectory associated with \((\alpha _i, \beta _i)\), occurring at height \(h_i^{\bullet }\). Under \({\mathbb {N}}^\bullet _{y,0}\), the Markov property applied at time U together with (2.23) gives that, conditionally on \((\rho _U, W_U)\), the random measure

$$\begin{aligned} {{\mathcal {M}}}^\bullet := \sum _{i\in {\mathbb {N}}} \delta _{ ({ h_i^{\bullet }, \, \rho ^{\bullet , i},\, {\overline{W}}^{\bullet , i}})}, \end{aligned}$$

is a Poisson point measure with intensity \( \rho _U({\textrm{d}}h) \, {\mathbb {N}}_{{\overline{W}}_U(h)} ({\textrm{d}}\rho , {\textrm{d}}{\overline{W}} ). \) In particular, the functional

$$\begin{aligned} F({\mathcal {M}}^\bullet ) = \# \Big \{ (h_i^{\bullet },\rho ^{\bullet ,i}, {\overline{W}}^{\bullet ,i} ) \in {\mathcal {M}}^\bullet : W^{\bullet ,i}(0) = x \Big \}, \end{aligned}$$

conditionally on \((\rho _U, W_U)\), is a Poisson random variable with parameter \(\int \rho _U({\textrm{d}}h) \mathbb {1}_{\{ W_U(h) = x \}}\). However, by Lemma 4.1, we have \(\int \rho _U(\textrm{d}h) \mathbb {1}_{\{ W_U(h) = x \}} =0\) and we derive that, \({\mathbb {N}}^{\bullet }_{y,0}\) –a.e., \(F({\mathcal {M}}^\bullet )\) is null. Heuristically, the previous argument shows that if we take – conditionally on \(\sigma \) – a point u uniformly at random in \({\mathcal {T}}_H\), there is no branching point \(\upsilon \) with label x on the branch connecting the root to u. Let us now show that this ensures that

$$\begin{aligned} \big \{ t \!\in \! [0,\sigma ]: {\widehat{W}}_t \!=\! x \big \} \cap \big \{ t \!\in \! [0,\sigma ]: p_H(t) \!\in \! \text {Multi}_3 ({\mathcal {T}}_H) \cup \text {Multi}_\infty ({\mathcal {T}}_H) \big \} \!=\! \varnothing , \quad {\mathbb {N}}_{y,0} \text {-a.e.} \end{aligned}$$

Since \({\mathbb {N}}_{y,0}^{\bullet }(\Phi (\rho ,{\overline{W}})) = {\mathbb {N}}_{y,0}(\sigma \cdot \Phi (\rho ,{\overline{W}}))\), it suffices to prove the previous display under \({\mathbb {N}}^\bullet _{y,0}\). Let \((\upsilon _i:i\in {\mathbb {N}})\) be an indexation of the branching points of \({\mathcal {T}}_H\), measurable with respect to (H, X). Pick a branching point \(\upsilon _i \in \text {Multi}_3 ({\mathcal {T}}_H) \cup \text {Multi}_\infty ({\mathcal {T}}_H)\) and let \(t_i\) be the smallest element of \(p_H^{-1}(\upsilon _i)\). Arguing by contradiction, suppose that \({\widehat{W}}_{t_i} = x\). Still under \({\mathbb {N}}^{\bullet }_{y,0}\), since \(\upsilon _i\) is a branching point, we can find \(0 \leqslant s_* < t_*\leqslant \sigma \) in \(p_H^{-1}(\{\upsilon _i\})\) such that the event \(\{ {\widehat{W}}_{t_i} = x \} \cap \{ s_*< U <t_* \}\) is included in \(\{ F({\mathcal {M}}^\bullet ) > 0 \}\). However \(F({\mathcal {M}}^\bullet )= 0\), \({\mathbb {N}}^{\bullet }_{y,0}\)–a.e., and we deduce that \({\mathbb {N}}^{\bullet }_{y,0}\big ( {\widehat{W}}_{t_i} = x, s_*< U <t_* \big ) = 0\). Next, since conditionally on \((\rho ,{\overline{W}})\) the variable U is uniformly distributed on \([0,\sigma ]\), we conclude that \({\mathbb {N}}^{\bullet }_{y,0}\big ( {\widehat{W}}_{t_i} = x \big ) = 0\). The desired result now follows, since the collection of branching points \((\upsilon _i:i\in {\mathbb {N}})\) is countable. Finally, we deduce the statement under \({\mathbb {N}}_{y,r_0}\) by the translation invariance of the local time, and under \({\mathbb {P}}_{\mu ,{\overline{{\text {w}}}}}\) by excursion theory – we omit the details since this is standard and one can apply the method described in Lemma 4.1. \(\square \)

The section is organised as follows: in Sect. 4.1 we establish several preliminary results needed to prove Theorems 4.2 and 4.3, as well as our results of Sect. 5. More precisely, Sect. 4.1 is essentially devoted to the study of a family of exit local times of \((\rho , {\overline{W}})\) that will be used as the building block for our second construction of A. Then, in Sect. 4.2 we shall prove Theorem 4.2 and establish our second construction of A in terms of the family of exit times studied in Sect. 4.1. The rest of the section is dedicated to the study of basic properties of A that we will frequently use in our computations. Finally, in Sect. 4.3 we turn our attention to the study of the support of the measure \({\textrm{d}}A\), which will lead us to the proof of Theorem 4.3 and the characterization (4.7).

4.1 Special Markov property of the local time

The first step towards constructing our additive functional A, with associated Lebesgue-Stieltjes measure \({\textrm{d}}A\) supported in \(\{ t\in {\mathbb {R}}_+: {\widehat{W}}_t = x \}\), consists in the study of a particular family of \([0,\infty )\)-indexed exit local times of \((\rho ,{\overline{W}})\). More precisely, for each \(r \geqslant 0\), let \(D_r \subset {\overline{E}}:= E \times {\mathbb {R}}_+\) be the open domain

$$\begin{aligned} D_r:= {\overline{E}} \setminus \{(x,r)\} \quad \text { and recall the notation } \quad \tau _r({\overline{{\text {w}}}}):= \inf \{ h \geqslant 0:~{\overline{{\text {w}}}}(h)=(x,r)\}, \end{aligned}$$

for every \({\overline{{\text {w}}}}\in {\mathcal {W}}_{{\overline{E}}}\). Notice that \(\tau _r({\overline{{\text {w}}}})\) is the exit time of \({\overline{{\text {w}}}}\) from \(D_r\), and we write \(\tau _r\) instead of the more cumbersome notation \(\tau _{D_r}\). We also recall that, since \(\tau _{r}({\overline{{\text {w}}}}) \in \{ h \in [0,\zeta _{\text {w}}]: {\text {w}}(h) = x \}\) as soon as \(\tau _r({\overline{{\text {w}}}}) < \infty \), for \((\mu , {\overline{{\text {w}}}})\in {\overline{\Theta }}_x\) we have \(\mu (\{ \tau _{r}({\overline{{\text {w}}}}) \}) = 0\). Proposition 3.3 now yields that for any \((\mu , {\overline{{\text {w}}}}) \in {\overline{\Theta }}_x\) with \({\overline{{\text {w}}}}(0) \ne (x,r)\) we have

$$\begin{aligned} L^{D_r}_t(\rho ,{\overline{W}}):=\lim \limits _{\varepsilon \rightarrow 0} \frac{1}{\varepsilon } \int _0^t {\textrm{d}}s \, \mathbb {1}_{\{ \tau _{r}({\overline{W}}_s)< H_s < \tau _{r}({\overline{W}}_s) + \varepsilon \}}, \end{aligned}$$
(4.8)

where the convergence holds uniformly on compact intervals in \(L^1({\mathbb {P}}_{\mu , {\overline{{\text {w}}}}})\) and \(L^1({\mathbb {N}}_{{\overline{{\text {w}}}}(0)})\). Let us be more precise: recalling the notation \(E_* = E \setminus \{x \}\) as well as \({\overline{{\text {w}}}} = ({\text {w}}, \ell )\), first remark that if \(\ell (0) < r\), then for any \({\text {w}}(0) \in E\) we have \(\Pi _{{\text {w}}(0), \ell (0)}(\tau _r < \infty ) = 1\) and consequently (\(\hbox {H}_{1}\)) holds. On the other hand, if \(r<\ell (0)\), we simply have \(L^{D_r} = 0\) since \(\tau _{r}({\overline{W}}_s)=\infty \) for every \(s \geqslant 0\). Finally, if \( {\text {w}}(0) \ne x\) and \(\ell (0) = r \geqslant 0\), we have \(\tau _{D_{r}}({\overline{W}}_s)=\tau _{E_*}( W_s)\) for every \(s\geqslant 0\), and recalling (4.2) it follows that \(L^{D_r}(\rho ,{\overline{W}}) = L^{E_*}(\rho ,W)\). It will be useful for our purposes to extend the definition to the remaining case \({\overline{{\text {w}}}}(0) = (x,r)\); this is precisely the content of Lemma 4.5 below.
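Before turning to it, let us record the trichotomy above schematically; the following display is only a restatement of the preceding discussion, for \((\mu , {\overline{{\text {w}}}}) \in {\overline{\Theta }}_x\) with \({\overline{{\text {w}}}} = ({\text {w}}, \ell )\) and \({\overline{{\text {w}}}}(0) \ne (x,r)\):

$$\begin{aligned} \begin{array}{ll} \ell (0) < r: & L^{D_r}(\rho ,{\overline{W}}) \text { is given by the limit (4.8), which holds under } (\hbox {H}_{1});\\ \ell (0) > r: & L^{D_r}(\rho ,{\overline{W}}) = 0;\\ \ell (0) = r \text { and } {\text {w}}(0) \ne x: & L^{D_r}(\rho ,{\overline{W}}) = L^{E_*}(\rho ,W). \end{array} \end{aligned}$$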

Lemma 4.5

For \(r \geqslant 0\), fix \((\mu ,\overline{\textrm{w}})=(\mu ,\textrm{w},\ell )\) \(\in {\overline{\Theta }}_x\) with \(\overline{\textrm{w}}(0) = (x,r)\). Then, the limit (4.8) exists for every \(t\geqslant 0\), where the convergence holds uniformly on compact intervals in \(L^1({\mathbb {P}}_{\mu ,\overline{\textrm{w}}})\) and \(L^1({\mathbb {N}}_{x,r})\), and defines a continuous non-decreasing process that we still denote by \(L^{D_r}\). Moreover, under \({\mathbb {P}}^\dag _{\mu , \overline{\textrm{w}}}\) and \({\mathbb {N}}_{x,r}\), we have \(L^{D_r}_\sigma = 0\).

Proof

We work under \({\mathbb {P}}_{\mu ,\overline{\textrm{w}}}\) since the result under \({\mathbb {N}}_{x,r}\) follows directly by excursion theory. For every \(a \geqslant 0\) we set \(T_a:=\inf \{t\geqslant 0:~H_t=a\}\) and let \(T_0^{+}:=\inf \{t\geqslant 0:~\langle \rho _t, 1\rangle =0\}\). Since \(\tau _{r}({\overline{W}}_s) = 0\) for every \(s \geqslant 0\), we have

$$\begin{aligned} \int _0^t {\textrm{d}}s \, \mathbb {1}_{\{ \tau _{r}(\overline{W}_s)< H_s< \tau _{r}(\overline{W}_s) + \varepsilon \}}&=\int _0^t {\textrm{d}}s \, \mathbb {1}_{\{0< H_s< \varepsilon \}} \\ {}&=\int _{T_\varepsilon \wedge t}^{T_0^+ \wedge t} {\textrm{d}}s \, \mathbb {1}_{\{ 0< H_s< \varepsilon \}}+\int _{T_0^+ \wedge t}^{t} {\textrm{d}}s \, \mathbb {1}_{\{0< H_s < \varepsilon \}}. \end{aligned}$$

Furthermore, by the strong Markov property and (2.7), we already know that \(\varepsilon ^{-1} \int _{T_0^+}^{T_0^++t} {\textrm{d}}s \, \mathbb {1}_{\{ 0< H_s < \varepsilon \}}\) converges as \(\varepsilon \downarrow 0\), uniformly on compact intervals in \(L^{1}({\mathbb {P}}_{\mu ,{\overline{{\text {w}}}}})\). To conclude, it suffices to show that:

$$\begin{aligned} \lim \limits _{\varepsilon \rightarrow 0}\varepsilon ^{-1}\cdot {\mathbb {E}}_{\mu ,{\overline{{\text {w}}}}}\bigg [\int _{T_\varepsilon }^{T_0^+} {\textrm{d}}s \, \mathbb {1}_{\{ 0< H_s < \varepsilon \}}\bigg ]=0. \end{aligned}$$
(4.9)

If \(\mu =0\) there is nothing to prove, thus from now on assume that \(\mu \ne 0\). Write \((\alpha _i, \beta _i)\), \(i \in {\mathbb {N}}\), for the excursion intervals of the killed process \((\langle \rho _t,1 \rangle :~t\in [0,T_0^{+}])\) over its running infimum, and let \((\rho ^i, {\overline{W}}^i)\) be the subtrajectory associated with the excursion interval \([\alpha _i, \beta _i]\). To simplify notation, we also set \(h_i=H(\alpha _i)\) and recall from (2.23) that the measure \({\mathcal {M}}:= \sum _{i \in {\mathbb {N}}}\delta _{(h_i, \rho ^i, {\overline{W}}^i)}\) is a Poisson point measure with intensity \(\mu ({\textrm{d}}h){\mathbb {N}}_{{\overline{{\text {w}}}}(h)}({\textrm{d}}\rho , {\textrm{d}}{\overline{W}})\). Next, notice that:

$$\begin{aligned} \int _{T_\varepsilon }^{T_{0}^+} {\textrm{d}}s \, \mathbb {1}_{\{ 0< H_s< \varepsilon \}}\leqslant \sum \limits _{0\leqslant h_i\leqslant \varepsilon }\int _{0}^{\sigma ({\overline{W}}^i)} {\textrm{d}}s \, \mathbb {1}_{\{ 0< H(\rho _s^i) < \varepsilon \}}, \end{aligned}$$

and we can now use that \({\mathcal {M}}\) is a Poisson point measure with intensity \(\mu ({\textrm{d}}h){\mathbb {N}}_{{\overline{{\text {w}}}}(h)}({\textrm{d}}\rho , {\textrm{d}}{\overline{W}})\) to get that:

$$\begin{aligned} {\mathbb {E}}_{\mu ,{\overline{{\text {w}}}}}\bigg [\int _{T_\varepsilon }^{T_{0}^+} {\textrm{d}}s \, \mathbb {1}_{\{ 0< H_s< \varepsilon \}} \bigg ] \leqslant \mu ([0,\varepsilon ])N\bigg (\int _{0}^{\sigma }{\textrm{d}}s \mathbb {1}_{\{0< H(\rho _s)<\varepsilon \}}\bigg ). \end{aligned}$$

Finally, by (2.26), the previous display is bounded above by \(\varepsilon \cdot \mu ([0,\varepsilon ])\), which gives:

$$\begin{aligned} \limsup \limits _{\varepsilon \rightarrow 0}\varepsilon ^{-1}\cdot {\mathbb {E}}_{\mu ,{\overline{{\text {w}}}}}\Big [ \int _{T_\varepsilon }^{T_{0}^+} {\textrm{d}}s \, \mathbb {1}_{\{ 0< H_s < \varepsilon \}}\Big ] \leqslant \mu (\{0\}). \end{aligned}$$

Now (4.9) follows since \(\mu (\{0\})=0\), which holds because \({\text {w}}(0)=x\) and \((\mu ,{\overline{{\text {w}}}})\in {\overline{\Theta }}_x\). \(\square \)
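In the above argument, the key estimate is an instance of the first-moment (Campbell) formula for Poisson point measures; we record it here for the reader's convenience, as a restatement of the step leading to the bound \(\varepsilon \cdot \mu ([0,\varepsilon ])\): for every non-negative measurable \(F\),

$$\begin{aligned} {\mathbb {E}}_{\mu ,{\overline{{\text {w}}}}}\Big [ \sum _{i \in {\mathbb {N}}} F\big ( h_i, \rho ^i, {\overline{W}}^i \big ) \Big ] = \int \mu ({\textrm{d}}h)~ {\mathbb {N}}_{{\overline{{\text {w}}}}(h)}\big ( F(h, \rho , {\overline{W}}) \big ), \end{aligned}$$

applied with \(F(h,\rho ,{\overline{W}}) = \mathbb {1}_{\{ h \leqslant \varepsilon \}} \int _0^{\sigma } {\textrm{d}}s \, \mathbb {1}_{\{ 0< H(\rho _s) < \varepsilon \}}\); since this integrand only involves the \(\rho \)-component, whose law under \({\mathbb {N}}_{{\overline{{\text {w}}}}(h)}\) does not depend on \(h\), the right-hand side factorises as \(\mu ([0,\varepsilon ]) \, N\big ( \int _0^\sigma {\textrm{d}}s \, \mathbb {1}_{\{ 0< H(\rho _s) < \varepsilon \}} \big )\).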

Now, we give a regularity result for the double-indexed family \(({L}_{s}^{D_r}: (s,r)\in {\mathbb {R}}_+^2)\) that will be needed in the next section.

Lemma 4.6

Let \((\mu ,\overline{\textrm{w}})\in {\overline{\Theta }}_x\) with \(\overline{\textrm{w}}=(\textrm{w},\ell )\). There exists a \({\mathcal {B}}({\mathbb {R}}_+)\otimes {\mathcal {B}}({\mathbb {R}}_+) \otimes {\mathcal {F}}\)-measurable function \(({\mathscr {L}}_t^r: \, (r,t) \in {\mathbb {R}}_+^2 )\) satisfying the following properties:

  1. (i)

For every \(r\geqslant 0\), the processes \({L}^{D_r}\) and \({\mathscr {L}}^r\) are indistinguishable under \({\mathbb {P}}_{\mu ,\overline{\textrm{w}}}\).

  2. (ii)

\({\mathbb {P}}_{\mu ,\overline{\textrm{w}}}\) almost surely, the mapping \(t \mapsto {\mathscr {L}}^r_t\) is continuous for every \(r \geqslant 0\).

The result also holds under the measure \({\mathbb {N}}_{y,r_0}\), for every \((y,r_0)\in {\overline{E}}\), by the same type of arguments; we omit the details.

Proof

Fix an initial condition \((\mu ,\overline{\textrm{w}})=(\mu , {\text {w}}, \ell )\in {\overline{\Theta }}_x\). Since under \({\mathbb {P}}_{\mu ,{\text {w}},\ell }\) the distribution of \((\rho ,W,\Lambda -\ell (0))\) is exactly \({\mathbb {P}}_{\mu ,{\text {w}},\ell -\ell (0)}\), we may assume without loss of generality that \(\ell (0)=0\). Next, by (4.8) and Lemma 4.5, for every \(r\geqslant 0\) we have

$$\begin{aligned} \lim _{\varepsilon \downarrow 0} {\mathbb {E}}_{\mu ,\overline{\textrm{w}}}\Big [ \sup _{s \leqslant t} |{L}^{D_r}_s - \varepsilon ^{-1}\int _0^s {\textrm{d}}u \mathbb {1}_{\{ \tau _r({\overline{W}}_u )< H_u < \tau _r({\overline{W}}_u) + \varepsilon \}} | \Big ] = 0, \end{aligned}$$

and hence for any subsequence \((\varepsilon _n)\) converging to 0, the sequence of processes

$$\begin{aligned} Y_n(r, t):= \varepsilon _n^{-1}\int _0^t {\textrm{d}}u~ \mathbb {1}_{\{ \tau _r({\overline{W}}_u )< H_u < \tau _r({\overline{W}}_u) + \varepsilon _n \}}, \quad \quad t \geqslant 0, \end{aligned}$$

converges uniformly on compact intervals in probability towards \({L}^{D_r}\). Now, to simplify notation, write \({\varvec{\omega }}:= (\uprho , \omega )\) for the elements of \({\mathbb {D}}({\mathbb {R}}_+, {\mathcal {M}}_f({\mathbb {R}}_+)\times {\mathcal {W}}_{{\overline{E}}} )\). Remark now that the mapping \((u,r,{\varvec{\omega }})\mapsto \tau _r({\overline{W}}_u(\omega ))\) is jointly measurable, since for each \((u,{\varvec{\omega }})\) it is càdlàg in r, while for each fixed r the mapping \( (u,{\varvec{\omega }})\mapsto \tau _r( {\overline{W}} _u(\omega ))\) is measurable. Consequently, by Fubini, for every fixed t the mapping

$$\begin{aligned} (r,{\varvec{\omega }}) \mapsto \int _0^t {\textrm{d}}u~ \mathbb {1}_{\{ \tau _r({\overline{W}}_u )< H_u < \tau _r({\overline{W}}_u) + \varepsilon _n \}} (\omega ) \end{aligned}$$

is measurable while for fixed \((r,{\varvec{\omega }})\) it is continuous in t, and we deduce that \(Y_n\) is jointly measurable in \((r,t,{\varvec{\omega }})\). It is now standard – see e.g. [34, Theorem 62] and its proof – to deduce that there exists a jointly measurable process \((r,t,{\varvec{\omega }}) \mapsto Y(r,t, {\varvec{\omega }})\) such that for every \((r, {\varvec{\omega }}) \in {\mathbb {R}}_+ \times {\mathbb {D}}({\mathbb {R}}_+, {\mathcal {M}}_f({\mathbb {R}}_+)\times {\mathcal {W}}_{{\overline{E}}})\), the mapping \(t \mapsto Y(r,t, {\varvec{\omega }} )\) is continuous and, for each fixed \(r\geqslant 0\), \(Y_n(r,\cdot ) \rightarrow Y(r, \cdot )\) as \(n \uparrow \infty \) uniformly on compact intervals in probability. In particular, for each \(r\geqslant 0\), the process \((Y(r,t): t \geqslant 0 )\) is indistinguishable from \(({L}^{D_r}_t:~ t \geqslant 0)\) and we shall write \(({\mathscr {L}}^{\,r}_t:~ t \geqslant 0, \, r \geqslant 0)\) instead of \((Y(r,t): \, t \geqslant 0, \, r \geqslant 0 )\). \(\square \)

We now turn our attention to the Markovian properties of \(({\mathscr {L}}^{\,r}_\sigma : r \geqslant 0)\) under the excursion measure \({\mathbb {N}}_{x,0}\). To simplify notation, for every \(y\ne x\), we set:

$$\begin{aligned} u_\lambda (y):={\mathbb {N}}_{y,0} \left( 1-\exp (- \lambda {\mathscr {L}}^{\,0}_\sigma ) \right) , \quad \text { for } y \in E_*, \end{aligned}$$
(4.10)

and remark that with the notation of (3.25) we have \(u_{\lambda }(y)=u_{\lambda }^{E_*}(y)\). We shall use the usual convention \(u_\lambda (x) = \lambda \).

Before stating our next result, we briefly recall from [22, Chapter II-1] that an \({\mathbb {R}}_+\)–valued Markov process with semigroup \((P_t(y,{\textrm{d}}z):~ t, y \in {\mathbb {R}}_+ )\) is called a branching process if its semigroup satisfies the branching property, viz. if for any \(y,y' \in {\mathbb {R}}_+\), we have \(P_t(y,\cdot )*P_t(y',\cdot ) = P_t(y+y', \cdot )\). In order to fall within the framework of [22, Chapter II, Theorem 1], we also assume that \(\int _{{\mathbb {R}}_+} P_t(y,{\textrm{d}}z) z \leqslant y\) for every \(t,y \in {\mathbb {R}}_+\). By the branching property, for any \(t,y\in {\mathbb {R}}_+\) the distribution \(P_t(y,{\textrm{d}}z)\) is infinitely divisible and supported on \({\mathbb {R}}_+\), and consequently, for every \(t,y \in {\mathbb {R}}_+\), the Laplace transform of \(P_t(y,{\textrm{d}}z)\) takes the Lévy-Khintchine form:

$$\begin{aligned} \int _{{\mathbb {R}}_+} \, P_t(y, {\textrm{d}}z)\exp (-\lambda z)= \exp \big (-y a_t(\lambda ) \big ), \quad \quad \text { for } \lambda \geqslant 0, \end{aligned}$$

for some function \((a_t(\lambda ): t, \lambda \geqslant 0)\). By [22, Chapter II, Theorem 1], the function \((a_t(\lambda ): t, \lambda \geqslant 0)\) is the unique non-negative solution of the integral equation

$$\begin{aligned} a_t(\lambda ) + \int _0^t {\textrm{d}}u \, \Psi ( a_u(\lambda ) ) = \lambda , \end{aligned}$$
(4.11)

for a function \((\Psi (\lambda ): \lambda \geqslant 0)\) of the form,

$$\begin{aligned} \Psi (\lambda ) = c_1 \lambda + c_2 \lambda ^2 + \int _{{\mathbb {R}}_+} \nu ({\textrm{d}}y) \, (\exp (-\lambda y) -1 + \lambda y ), \quad \quad \text { for } \lambda \geqslant 0, \end{aligned}$$

where \(c_1, c_2 \in {\mathbb {R}}_+\) and \(\nu \) is a Lévy measure supported on \((0,\infty )\) satisfying \(\int _{(0,\infty )} \nu ({\textrm{d}}y) (y \wedge y^2) < \infty \). By (4.11), it follows that \(a_t(\lambda )\) is the unique function that satisfies

$$\begin{aligned} \int _{a_t(\lambda )}^\lambda \frac{{\textrm{d}}s}{\Psi (s)} = t,\quad \quad t,\lambda \geqslant 0. \end{aligned}$$

The Markov process with semigroup \((P_t)\) is then called a CSBP with branching mechanism \(\Psi \), or in short a \(\Psi \)-CSBP. A Lévy process with exponent \(\Psi \) clearly fulfils (A1) as well as (A3) by [4, Corollary 2]. So, to fall within our framework, it only needs to satisfy (A4), since, as explained in the preliminaries, (A4) implies (A2).
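As a simple sanity check, not needed in the sequel, consider the quadratic branching mechanism \(\Psi (\lambda ) = \lambda ^2\): in this case (4.11) is solved explicitly by

$$\begin{aligned} a_t(\lambda ) = \frac{\lambda }{1+\lambda t}, \qquad \text { since } \qquad \frac{\lambda }{1+\lambda t} + \int _0^t {\textrm{d}}u \, \Big ( \frac{\lambda }{1+\lambda u} \Big )^2 = \frac{\lambda }{1+\lambda t} + \frac{\lambda ^2 t}{1+\lambda t} = \lambda , \end{aligned}$$

in agreement with the characterization \(\int _{a_t(\lambda )}^{\lambda } s^{-2} \, {\textrm{d}}s = a_t(\lambda )^{-1} - \lambda ^{-1} = t\).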

Proposition 4.7

Under \({\mathbb {N}}_{x,0}\), the process \(({\mathscr {L}}^{\,r}_\sigma : r > 0)\) is a continuous state branching process with entrance measure \(\nu _r( {\textrm{d}}u) = {\mathbb {N}}_{x,0}({\mathscr {L}}^{\,r}_\sigma \in {\textrm{d}}u)\), for \(r > 0\), and branching mechanism

$$\begin{aligned} {\widetilde{\psi }}(\lambda ) = {\mathcal {N}}\Big ( \int _0^\sigma {\textrm{d}}h ~ \psi \big (u_\lambda (\xi _h)\big )\Big ), \quad \text { for } \, \lambda \geqslant 0, \end{aligned}$$
(4.12)

where we recall the definition of \(u_\lambda \) from (4.10) and that we write \({\mathcal {N}}\) for the excursion measure of \(\xi \) away from x. Moreover, a \({\widetilde{\psi }}\)-Lévy process satisfies the assumptions (A1)–(A4) introduced in Sect. 2.1 and consequently we can associate to it a Lévy tree.

Our result is a particular case, in the terminology of Lévy snakes, of Theorem 4 in [5], which is stated in the setting of superprocesses. Theorem 4 in [5] is more general, and the family \(({\mathscr {L}}^r_\sigma )_{r > 0}\) in our result corresponds precisely to the total mass process of the superprocess considered in [5], for the same branching mechanism \({\widetilde{\psi }}\).

Proof

The proof is structured as follows: we start by introducing a family of probability kernels \((P_t)\) and by showing that they form a semigroup associated with a branching process. We then establish that \(({\mathscr {L}}^{\,r}_\sigma :~ r > 0)\) is a Markov process associated with the semigroup \((P_t)\), with entrance measures \((\nu _r: \, r >0)\). Finally, we conclude the proof by establishing that its branching mechanism is \({\widetilde{\psi }}\) and that it fulfils (A4).

We stress that we are only interested in the finite-dimensional distributions of \(({\mathscr {L}}^{r}_\sigma : r > 0)\). Recalling the notation (3.25), for any \(r > 0\) and \(\lambda \geqslant 0\), we write

$$\begin{aligned} u_\lambda ^{D_r}{(x,0)} = {\mathbb {N}}_{x,0}\big (1-\exp (-\lambda {\mathscr {L}}^{\,r}_\sigma ) \big ) = \int \nu _r({\textrm{d}}y) \, ( 1-\exp (-\lambda y) ). \end{aligned}$$
(4.13)

Note that the function \(u^{D_r}_\lambda (y,s)\) is defined for every pair \((y,s)\) with \(s\leqslant r\) in the Polish space \(E \times {\mathbb {R}}_+\); here we simply replace in (3.25) the space E by \(E \times {\mathbb {R}}_+\). Moreover, since \({\mathbb {N}}_{x,0}({\mathscr {L}}^{\,r}_\sigma > 0) \leqslant {\mathbb {N}}_{x,0}(\sup {\widehat{\Lambda }} \geqslant r ) < \infty \), we have \(\int _{(0,\infty )}\nu _r({\textrm{d}}y) (1 \wedge y) < \infty \), and we deduce that the function \(\lambda \mapsto u_\lambda ^{D_r}(x,0)\) is the Laplace exponent of an infinitely divisible random variable with Lévy measure \(\nu _r(\cdot \, \cap (0,\infty ))\). Indeed, observe that (4.13) is of the Lévy-Khintchine form. For each \(t>0\) and \(y \in {\mathbb {R}}_+\), denote by \(P_t(y,{\textrm{d}}z)\) the probability measure with Laplace transform

$$\begin{aligned} \int P_t(y,{\textrm{d}}z ) \, \exp (-\lambda z )= \exp \big (-y\cdot u_\lambda ^{D_t}(x,0)\big ), \quad \quad \lambda \geqslant 0. \end{aligned}$$
(4.14)

Remark now that the translation invariance of the local time of \(\xi \) implies that, under \({\mathbb {P}}_{0,x,r}\) (resp. \({\mathbb {N}}_{x,r}\)) for \(r\geqslant 0\), the distribution of \((W,\Lambda -r)\) is \({\mathbb {P}}_{0,x,0}\) (resp. \({\mathbb {N}}_{x,0}\)). In particular, for every \(s,t\geqslant 0\) and \(y\in E\), we have

$$\begin{aligned} u_\lambda ^{D_{t+s}}(y,s) = u_\lambda ^{D_t}(y,0). \end{aligned}$$
(4.15)

We deduce that the family \((P_t(y,{\textrm{d}}z), t > 0, y \in {\mathbb {R}}_+)\) is a semigroup since, by the special Markov property applied at the domain \(D_s\), it holds that

$$\begin{aligned} \int P_{t+s}(y,{\textrm{d}}z) \exp (- \lambda z)&= \exp \Big ( -y \, {\mathbb {N}}_{x,0}\Big (1-\exp \big (-\lambda {\mathscr {L}}^{\,t+s}_\sigma \big ) \Big ) \Big ) \\&= \exp \Big ( -y \, {\mathbb {N}}_{x,0}\Big (1-\exp \big (- {\mathscr {L}}^{\,s}_\sigma \cdot u_\lambda ^{D_{t+s}}(x,s) \big ) \Big ) \Big ) \\&= \exp \Big ( -y \, {\mathbb {N}}_{x,0}\Big (1-\exp \big (- {\mathscr {L}}^{\,s}_\sigma \cdot u_\lambda ^{D_t}(x,0) \big ) \Big ) \Big ) \\&= \exp \Big ( -y \cdot u^{D_s}_{ u_\lambda ^{D_t}(x,0)}\big (x,0\big ) \Big ), \end{aligned}$$

which coincides with the Laplace transform of the measure \( \int _{u \in {\mathbb {R}}_+} P_s(y,{\textrm{d}}u)P_t(u,{\textrm{d}}z)\). Since \({\mathbb {N}}_{x,0}({\mathscr {L}}^{\,t}_\sigma ) \leqslant 1\) by (3.4) and \(1-\exp (-\lambda {\mathscr {L}}^{\,t}_\sigma ) \leqslant \lambda {\mathscr {L}}^{\,t}_\sigma \), we deduce by dominated convergence and (4.14) that \( \int _{{\mathbb {R}}_+} P_t(y,{\textrm{d}}z) \, z = y \cdot {\mathbb {N}}_{x,0}({\mathscr {L}}^{\,t}_\sigma ) \leqslant y. \) Since the semigroup clearly fulfils the branching property, it follows that there exists a CSBP associated with the semigroup \((P_t)\).

Recall the notation \(T_{D_\varepsilon }:= \inf \{ t \geqslant 0: \tau _{\varepsilon }({\overline{W}}_t) < \infty \}\) as well as the definition of the sigma field \({\mathcal {F}}^{D_\varepsilon }\) from (3.2). We will now show that for any \(\varepsilon >0\), the process \(({\mathscr {L}}^{\,\varepsilon +r}_\sigma : r \geqslant 0 )\) under the probability measure \({\mathbb {N}}_{x,0}^{D_\varepsilon }:= {\mathbb {N}}_{x,0}(\, \cdot \, | T_{D_\varepsilon } < \infty )\) has transition kernel \((P_t)\). Fix \(\varepsilon< a < b\); by considering the point process of excursions (3.24) outside \(D_a\), we deduce by an application of the special Markov property that

$$\begin{aligned} {\mathbb {N}}_{x,0}^{D_\varepsilon } \left( \exp \left( - \lambda {\mathscr {L}}^{\,b}_\sigma \right) \big | {\mathcal {F}}^{D_a} \right)&= \exp \left( - {\mathscr {L}}^{\,a}_\sigma {\mathbb {N}}_{x,a}\left( 1-\exp (-\lambda {\mathscr {L}}^{\,b}_\sigma ) \right) \right) \\&= \exp \left( - {\mathscr {L}}^{\,a}_\sigma \cdot u_\lambda ^{D_{b-a}}(x,0) \right) , \quad {\mathbb {N}}_{x,0}^{D_\varepsilon } \text { -a.s.}, \end{aligned}$$

where in the last equality we used (4.15). We have obtained that, for every \(\varepsilon > 0\), \(({\mathscr {L}}^{\,r+\varepsilon }_\sigma :~ r \geqslant 0)\) under \({\mathbb {N}}^{D_\varepsilon }_{x,0}\) is a CSBP with Laplace functional \((u_\lambda ^{D_r}(x,0):~r> 0)\) and initial distribution \({\mathbb {N}}^{D_\varepsilon }_{x,0}({\mathscr {L}}^{\,\varepsilon }_\sigma \in {\textrm{d}}z )\), with respect to the filtration \(({\mathcal {F}}^{D_{\varepsilon + r}}: r \geqslant 0)\) (recall that \({\mathscr {L}}^{\,r}_\sigma \) is \({\mathcal {F}}^{D_r}\)-measurable by Proposition 3.4 and Lemma 4.6). Now, we claim that for any \(0<r_1< \dots < r_k\) and any collection of non-negative measurable functions \(f_i: {\mathbb {R}}_+ \rightarrow {\mathbb {R}}_+\),

$$\begin{aligned} {\mathbb {N}}_{x,0}\left( \prod _{i=1}^k f_i({\mathscr {L}}^{\,r_i}_\sigma ) \right) = \int _{{\mathbb {R}}_+} \nu _{r_1}({\textrm{d}}z_1) f_{1}(z_1) \int _{{\mathbb {R}}_+} P_{r_2-r_1}(z_1,{\textrm{d}}z_2)f_2(z_2) \cdots \int _{{\mathbb {R}}_+} P_{r_k-r_{k-1}}(z_{k-1},{\textrm{d}}z_k)f_k(z_k). \end{aligned}$$
(4.16)

This follows from the previous result, by observing that for any \(\varepsilon < r_1\) we have

$$\begin{aligned}&{\mathbb {N}}_{x,0}\left( \prod _{i=1}^k f_i({\mathscr {L}}^{\,r_i}_\sigma ) \mathbb {1}_{\{ T_{D_\varepsilon }< \infty \}} \right) \\&\quad = {\mathbb {N}}_{x,0} \Bigg ( \mathbb {1}_{\{ T_{D_\varepsilon } < \infty \}} f_1({\mathscr {L}}^{\,r_1}_\sigma ) \int _{{\mathbb {R}}_+} P_{r_2-r_1}({\mathscr {L}}^{\,r_1}_\sigma ,{\textrm{d}}z_2)f_2(z_2)\dots \int _{{\mathbb {R}}_+} P_{r_k-r_{k-1}}(z_{k-1},{\textrm{d}}z_k)f_k(z_k) \Bigg ), \end{aligned}$$

and we conclude by taking the limit as \(\varepsilon \downarrow 0\). The fact that the family \((\nu _t: t > 0)\) satisfies \(\nu _{t+s} = \nu _s P_t\) for \(t,s > 0\) now follows from (4.16). Let us now identify \({\widetilde{\psi }}\). Recall from the discussion around (4.11) that the Laplace exponent \((u_\lambda ^{D_r}(x,0): r,\lambda \geqslant 0 )\) is the unique solution to the equation

$$\begin{aligned} u_\lambda ^{D_r}(x,0) + \int _0^r {\textrm{d}} u \, \Psi \big ( u_{\lambda }^{D_u}(x,0) \big ) = \lambda , \end{aligned}$$
(4.17)

where \(\Psi \) is the branching mechanism associated with \((P_t)\); moreover, \((u_\lambda ^{D_r}(x,0): r,\lambda \geqslant 0 )\) is uniquely determined by (4.17). In particular, \(\Psi \) completely characterizes the semigroup \((P_t)\). To identify the branching mechanism we argue as follows: first, observe that the identity (3.26) applied at the domain \(D_r\) yields

$$\begin{aligned} u_\lambda ^{D_r}(x, 0) + \Pi _{x,0}\left( \int _0^{\tau _{D_r}} {\textrm{d}}t ~\psi ( u^{D_r}_\lambda (\xi _t, {\mathcal {L}}_t) ) \right) = \lambda , \end{aligned}$$
(4.18)

for every \(\lambda \geqslant 0\) and \(r>0\). Next, by excursion theory and (\(\hbox {H}_{3}\)) we get:

$$\begin{aligned} \Pi _{x,0}\left( \int _0^{\tau _{D_r}} {\textrm{d}}t~\psi ( u^{D_r}_\lambda (\xi _t, {\mathcal {L}}_t) ) \right)&= \int _{0}^r {\textrm{d}}u~ {\mathcal {N}}\left( \int _0^\sigma {\textrm{d}}t~\psi \left( u_\lambda ^{D_r} (\xi _t, u)\right) \right) \\&= \int _0^r {\textrm{d}}u~ {\mathcal {N}}\left( \int _0^\sigma {\textrm{d}}t ~\psi \left( u_\lambda ^{D_{r-u}} (\xi _t, 0)\right) \right) , \end{aligned}$$

where in the last equality we used (4.15). Moreover, the special Markov property applied at the domain \(D_0\) gives

$$\begin{aligned} u_\lambda ^{D_{r}}(y,0) = u_{u_{\lambda }^{D_r}(x,0)}(y), \end{aligned}$$

for every \(y\in E {\setminus } \{ x \}\) and \(\lambda \geqslant 0\), and the identity also holds for \(y=x\). Putting everything together, by definition of \({\widetilde{\psi }}\), the identity (4.18) can be rewritten as follows:

$$\begin{aligned} u_\lambda ^{D_r}(x,0) + \int _0^r {\textrm{d}} u \, {\widetilde{\psi }}( u_{\lambda }^{D_u}(x,0) ) = \lambda . \end{aligned}$$
(4.19)

Consequently, we deduce that the branching mechanism associated with the Laplace functional \(u_\lambda ^{D_r}(x,0)\) is \({\widetilde{\psi }}\). It remains to show that the conditions stated in Sect. 2.1 are satisfied by \({\widetilde{\psi }}\); as we already mentioned, it only remains to verify (A4). In this direction, recalling the notation \(T_{D_r} = \inf \{ t \geqslant 0: {\widehat{\Lambda }}_t \geqslant r \}\), we obtain from (4.19) that \(f(\lambda , r):= u_\lambda ^{D_r}(x,0)\) satisfies, for every \(r\),

$$\begin{aligned} \int _{f(\lambda , r)}^\lambda \frac{\textrm{d} s}{ {\widetilde{\psi }}(s) } = r, \end{aligned}$$
(4.20)

where the limit \(f(\infty , r) = {\mathbb {N}}_{x,0}(L_\sigma ^{D_r} > 0)\) is finite, since \(\{ L_\sigma ^{D_r} > 0 \} \subset \{T_{D_r} < \infty \}\) and \({\mathbb {N}}_{x,0}(T_{D_r}< \infty ) < \infty \) by the same argument used before Theorem 3.8. Hence, taking the limit as \(\lambda \uparrow \infty \) in (4.20), we infer that the following conditions are fulfilled:

$$\begin{aligned} {\widetilde{\psi }}(\infty ) = \infty \quad \quad \text { and }\quad \quad \int ^\infty _{\cdot } \frac{{\textrm{d}}s }{{\widetilde{\psi }}(s)} < \infty . \end{aligned}$$
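Indeed, letting \(\lambda \uparrow \infty \) in (4.20) yields, for every \(r > 0\),

$$\begin{aligned} \int _{f(\infty , r)}^{\infty } \frac{{\textrm{d}}s}{{\widetilde{\psi }}(s)} = r < \infty , \end{aligned}$$

and since \(f(\infty , r)\) is finite, this identity is only possible if \({\widetilde{\psi }}(s)\) tends to infinity and \(1/{\widetilde{\psi }}\) is integrable at infinity; these are precisely the two conditions displayed above.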

To derive the exact form of (A4), recall that \({\widetilde{\psi }}\) is convex and that we have \({\widetilde{\psi }}(0) = 0\) and \({\widetilde{\psi }}'(0+) \geqslant 0\). \(\square \)

Now that we have established that \({\widetilde{\psi }}\) is the Laplace exponent of a Lévy tree, let us briefly introduce some related notation and a few facts that will be used frequently in the upcoming sections. From now on, we let \({\widetilde{X}}\) be a \({\widetilde{\psi }}\)-Lévy process and we write \({\widetilde{I}}\) for the running infimum of \({\widetilde{X}}\). We also denote the excursion measure of the reflected process \({\widetilde{X}}- {\widetilde{I}}\) by \({\widetilde{N}}\), where the associated local time is \(-{\widetilde{I}}\). The notation introduced in Sect. 2.1, applied to \({\widetilde{X}}\), is indicated with a \(\sim \). For instance, we denote the height process and the exploration process issued from \({\widetilde{X}}\) respectively by \({\widetilde{H}}\) and \({\widetilde{\rho }}\).

By convexity and the fact that \({\widetilde{\psi }}'(0+) \geqslant 0\), the only solution to \({\widetilde{\psi }}(\lambda ) = 0\) is \(\lambda = 0\). This implies that the mapping \(\lambda \mapsto {\widetilde{\psi }}(\lambda )\) is invertible on \([0,\infty )\). By classical results in the theory of Lévy processes, \({\widetilde{\psi }}^{-1}\) is the Laplace exponent of the right-inverse of \(-{\widetilde{I}}\) and, since \({\widetilde{X}} -{\widetilde{I}}\) does not spend time at 0, the latter is a subordinator with no drift. So, recalling the relation between excursion lengths and jumps of the right-inverse of \(-{\widetilde{I}}\), we derive that:

$$\begin{aligned} {\widetilde{\psi }}^{-1}(\lambda ) = {\widetilde{N}} (1-\exp (-\lambda \sigma )), \quad \lambda \geqslant 0. \end{aligned}$$
(4.21)

For a more detailed discussion, we refer to Chapters IV and VII of [4].
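As an illustration, and only as a classical example, suppose that \({\widetilde{\psi }}(\lambda ) = \lambda ^2\), so that \({\widetilde{\psi }}^{-1}(\lambda ) = \sqrt{\lambda }\). Then (4.21) pins down the law of \(\sigma \) under \({\widetilde{N}}\):

$$\begin{aligned} {\widetilde{N}}\big ( \sigma \in {\textrm{d}}s \big ) = \frac{{\textrm{d}}s}{2 \sqrt{\pi } \, s^{3/2}}, \qquad \text { since } \qquad \int _0^\infty \frac{{\textrm{d}}s}{2\sqrt{\pi } \, s^{3/2}} \, \big ( 1- \exp (-\lambda s) \big ) = \sqrt{\lambda }, \end{aligned}$$

as one checks by integration by parts together with the identity \(\int _0^\infty {\textrm{d}}s \, s^{-1/2} \exp (-\lambda s) = \sqrt{\pi /\lambda }\).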

We close this section with some useful identities in the same vein as (4.12), which will be used frequently in our computations. These identities allow us to express some Laplace-like transforms of the process \(\big (\psi (u_{\lambda }(\xi _t)):~t\geqslant 0\big )\), under the excursion measure \({\mathcal {N}}\), in terms of \({\widetilde{\psi }}\). As an application of these computations, we will identify the drift and Brownian coefficients of \({\widetilde{\psi }}\). We summarise these identities in the following lemma.

Lemma 4.8

For every \(\lambda _1,\lambda _2 \in {\mathbb {R}}_+\) with \(\lambda _1\ne \lambda _2\), we have

$$\begin{aligned} {\mathcal {N}} \left( 1- \exp \Big (- \int _0^\sigma {\textrm{d}}s ~ \frac{\psi \big (u_{\lambda _1}(\xi _s) \big ) -\psi \big ( u_{\lambda _2}(\xi _s) \big ) }{u_{\lambda _1}(\xi _s) - u_{\lambda _2}(\xi _s) } \Big ) \right) = \frac{{\widetilde{\psi }}\big (\lambda _1\big ) - {\widetilde{\psi }}\big ( \lambda _2\big ) }{ \lambda _1-\lambda _2 }. \end{aligned}$$
(4.22)

Recalling the identities (2.24), remark that Lemma 4.8 allows us to express the Laplace exponent of \(({\widetilde{U}}^{(1)}, {\widetilde{U}}^{(2)})\) in terms of \({\mathcal {N}}\) and \(\psi \).

Proof

First note that the functions \(\lambda \mapsto u_\lambda (y)\) and \(\lambda \mapsto \psi (u_\lambda (y))\) are non-decreasing. So without loss of generality we can and will assume that \(\lambda _1>\lambda _2\). We set \(T_x:= \inf \{ t \geqslant 0: \xi _t = x \}\) and we write

$$\begin{aligned}&{\mathcal {N}} \Big ( 1- \exp \Big (- \int _0^\sigma {\textrm{d}}s ~ \frac{\psi \big (u_{\lambda _1}(\xi _s) \big ) -\psi \big ( u_{\lambda _2}(\xi _s) \big ) }{u_{\lambda _1}(\xi _s) - u_{\lambda _2}(\xi _s) } \Big ) \Big ) \\&\quad = {\mathcal {N}} \left( \int _0^\sigma {\textrm{d}}s~ \frac{\psi \big (u_{\lambda _1}(\xi _s) \big ) -\psi \big ( u_{\lambda _2}(\xi _s) \big ) }{u_{\lambda _1}(\xi _s) - u_{\lambda _2}(\xi _s) } \cdot \exp \Big (- \int _s^\sigma {\textrm{d}}t~\frac{\psi \big (u_{\lambda _1}(\xi _t) \big ) -\psi \big ( u_{\lambda _2}(\xi _t) \big ) }{u_{\lambda _1}(\xi _t) - u_{\lambda _2}(\xi _t) } \Big ) \right) \\&\quad = {\mathcal {N}} \left( \int _0^\sigma {\textrm{d}}s~\frac{\psi \big (u_{\lambda _1}(\xi _s) \big ) -\psi \big ( u_{\lambda _2}(\xi _s) \big ) }{u_{\lambda _1}(\xi _s) - u_{\lambda _2}(\xi _s) } \cdot \Pi _{\xi _s} \left( \exp \Big (- \int _0^{T_x} {\textrm{d}}t~ \frac{\psi \big (u_{\lambda _1}(\xi _t) \big ) -\psi \big ( u_{\lambda _2}(\xi _t)\big ) }{u_{\lambda _1}(\xi _t) - u_{\lambda _2}(\xi _t) } \Big ) \right) \right) \end{aligned}$$

where in the last equality we applied the Markov property. On the other hand, the definition of \({\widetilde{\psi }}\) given in (4.12) yields

$$\begin{aligned} \frac{{\widetilde{\psi }}\big (\lambda _1\big ) - {\widetilde{\psi }}\big ( \lambda _2\big ) }{ \lambda _1-\lambda _2 }&= {\mathcal {N}}\left( \int _0^\sigma {\textrm{d}}s~ \frac{\psi \big (u_{\lambda _1}(\xi _s)\big ) - \psi \big (u_{\lambda _2}(\xi _s)\big ) }{ \lambda _1-\lambda _2} \right) \\&= {\mathcal {N}}\left( \int _0^\sigma {\textrm{d}}s~ \frac{\psi \big (u_{\lambda _1}(\xi _s)\big ) - \psi \big (u_{\lambda _2}(\xi _s)\big ) }{ u_{\lambda _1}(\xi _s) - u_{\lambda _2}(\xi _s) } \cdot \frac{ u_{\lambda _1}(\xi _s) - u_{\lambda _2}(\xi _s) }{\lambda _1-\lambda _2} \right) . \end{aligned}$$

Consequently, the lemma will follow as soon as we establish the identity:

$$\begin{aligned} \frac{u_{\lambda _1}(y) - u_{\lambda _2}(y) }{\lambda _1-\lambda _2} = \Pi _{y} \Big ( \exp \Big (- \int _0^{T_x} {\textrm{d}}t~ \frac{\psi \big (u_{\lambda _1}(\xi _t)\big ) - \psi \big (u_{\lambda _2}(\xi _t)\big ) }{ u_{\lambda _1}(\xi _t) - u_{\lambda _2}(\xi _t) } \Big ) \Big ). \end{aligned}$$

In this direction, recall that under \({\mathbb {N}}_{y,0}\) with \(y\ne x\) the processes \({\mathscr {L}}^{\,0}(\rho ,{\overline{W}})\) and \(L^{E_*}(\rho ,W)\) are well defined and indistinguishable, and remark that

$$\begin{aligned} u_{\lambda _1}(y) - u_{\lambda _2}(y)&= {\mathbb {N}}_{y,0} \Big ( \exp \big (- \lambda _2 \int _{0}^{\sigma }{\textrm{d}}{\mathscr {L}}^{\,0}_u\big ) - \exp \big (- \lambda _1 \int _{0}^{\sigma }{\textrm{d}}{\mathscr {L}}^{\,0}_u\big ) \Big ) \\&=(\lambda _1 - \lambda _2)\cdot {\mathbb {N}}_{y,0} \Big ( \exp (- \lambda _1 \int _0^\sigma {\textrm{d}}{\mathscr {L}}^0_u ) \cdot \int _0^\sigma {\textrm{d}}{\mathscr {L}}^0_s \exp \big ( (\lambda _1 - \lambda _2)\int _0^s {\textrm{d}}{\mathscr {L}}_u^0 \big ) \Big )\\&=(\lambda _1-\lambda _2) \cdot {\mathbb {N}}_{y,0} \Big ( \int _0^\sigma {\textrm{d}}{\mathscr {L}}^{\,0}_s \exp \big (- \lambda _1 \int _0^s {\textrm{d}}{\mathscr {L}}^{\,0}_u \big ) \cdot \exp \big ( -\lambda _2 \int _s^\sigma {\textrm{d}}{\mathscr {L}}^{\,0}_u \big ) \Big ). \end{aligned}$$

Then, an application of the Markov property gives:

$$\begin{aligned} u_{\lambda _1}(y) - u_{\lambda _2}(y) = (\lambda _1-\lambda _2) \cdot {\mathbb {N}}_{y,0} \Big ( \int _0^\sigma {\textrm{d}}{\mathscr {L}}^{\,0}_s \exp \big (- \lambda _1 {\mathscr {L}}^{\,0}_s\big )\cdot {\mathbb {E}}^{\dag }_{\rho _s, {\overline{W}}_s} \big [ \exp \big ( -\lambda _2 {\mathscr {L}}^{\,0}_\sigma \big ) \big ] \Big ). \end{aligned}$$

We can now apply the duality identity \(\big ((\rho _{(\sigma -t)-},\eta _{(\sigma -t)-},{\overline{W}}_{\sigma -t}):~t\in [0,\sigma ]\big ) \overset{(d)}{=} \big ((\eta _{t},\rho _{t},{\overline{W}}_{t}):~t\in [0,\sigma ]\big )\) under \({\mathbb {N}}_{y,0}\), to get that the previous display is equal to

$$\begin{aligned}&(\lambda _1-\lambda _2)\cdot {\mathbb {N}}_{y,0} \Big ( \int _0^\sigma {\textrm{d}}{\mathscr {L}}^{\,0}_s \exp \big (-\lambda _1 \int _s^{\sigma } {\textrm{d}}{\mathscr {L}}^{\,0}_t\big ) \cdot {\mathbb {E}}^{\dag }_{\eta _s , {\overline{W}}_s} \big [ \exp \big ( -\lambda _2 {\mathscr {L}}^{\,0}_\sigma \big ) \big ] \Big )\\&\quad =(\lambda _1-\lambda _2)\cdot {\mathbb {N}}_{y,0} \Big ( \int _0^\sigma {\textrm{d}}{\mathscr {L}}^{\,0}_s ~ {\mathbb {E}}^{\dag }_{\rho _s , {\overline{W}}_s} \big [ \exp \big (-\lambda _1 {\mathscr {L}}^{\,0}_\sigma \big ) \big ] \cdot {\mathbb {E}}^{\dag }_{\eta _s , {\overline{W}}_s} \big [ \exp \big ( -\lambda _2 {\mathscr {L}}^{\,0}_\sigma \big ) \big ] \Big ). \end{aligned}$$

Remark that \((\eta , {\overline{W}})\) takes values in \({\overline{\Theta }}_x\) by duality and right-continuity of \(\eta \) with respect to the total variation distance. We are now in a position to apply the many-to-one equation (2.25). In this direction, for \((\mu , {\overline{{\text {w}}}}) \in {\overline{\Theta }}_x\) with \({\overline{{\text {w}}}}(0) = (y,0)\) and \(y \ne x\), we notice that

$$\begin{aligned} {\mathbb {E}}_{\mu , {\overline{{\text {w}}}}}^{\dag } \Big [ \exp \big (-\lambda {\mathscr {L}}^{\,0}_\sigma \big )\Big ]&= \exp \Big (-\int _0^{\tau _{D_0}({\overline{{\text {w}}}})} \mu ({\textrm{d}}h)~ {\mathbb {N}}_{{\overline{{\text {w}}}}(h)} \big ( 1-\exp (- \lambda {\mathscr {L}}^{\,0}_\sigma ) \big ) \Big ) \\&= \exp \Big (- \int _0^{\tau _{D_0}({\overline{{\text {w}}}})} \mu ({\textrm{d}}h)~ u_\lambda ({\text {w}}(h)) \Big ), \end{aligned}$$

for every \(\lambda >0\). Consequently, (2.25) gives:

Finally, an application of (2.24) yields exactly the desired result (4.22). \(\square \)

As an immediate consequence, we obtain two other useful identities, by taking \(\lambda _2 = 0\) and by letting \(\lambda _2\) tend to \(\lambda _1\), respectively. For every \(\lambda >0\), we have

$$\begin{aligned} {\mathcal {N}} \Big ( 1- \exp \Big (- \int _0^\sigma {\textrm{d}}h~ \psi \big (u_\lambda (\xi _h)\big )/u_\lambda (\xi _h) \Big ) \Big ) = {\widetilde{\psi }}(\lambda )/\lambda \quad \text { and }\quad {\mathcal {N}} \Big ( 1- \exp \Big (- \int _0^\sigma {\textrm{d}}h~ \psi ^{\prime }(u_\lambda (\xi _h)) \Big ) \Big ) = {\widetilde{\psi }}^{\prime }(\lambda ), \end{aligned}$$
(4.23)

where for the first one we used that \(u_0(y) = 0\), since \({\mathbb {N}}_y(L^{E_*}_\sigma = \infty ) = 0\). We also stress that (4.23) can be proved directly by the same arguments as the ones applied in the proof of (4.22).
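Concerning the second identity in (4.23), the pointwise limits involved are, leaving aside the routine justification of the interchange of limit and integration:

$$\begin{aligned} \frac{\psi \big (u_{\lambda _1}(\xi _h) \big ) -\psi \big ( u_{\lambda _2}(\xi _h) \big ) }{u_{\lambda _1}(\xi _h) - u_{\lambda _2}(\xi _h) } \longrightarrow \psi ^{\prime }\big (u_{\lambda _1}(\xi _h)\big ) \qquad \text { and } \qquad \frac{{\widetilde{\psi }}(\lambda _1) - {\widetilde{\psi }}(\lambda _2)}{\lambda _1 - \lambda _2} \longrightarrow {\widetilde{\psi }}^{\prime }(\lambda _1), \end{aligned}$$

as \(\lambda _2 \rightarrow \lambda _1\), where we use that \(u_{\lambda _2}(\xi _h) \rightarrow u_{\lambda _1}(\xi _h)\), by monotonicity and continuity of \(\lambda \mapsto u_\lambda (y)\).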

Since by Proposition 4.7 the exponent \({\widetilde{\psi }}\) satisfies (A1)–(A4), it can be written in the following form

$$\begin{aligned} {\widetilde{\psi }}(\lambda ) = {\widetilde{\alpha }}\lambda + {\widetilde{\beta }} \lambda ^2 +\int _{{\mathbb {R}}_+} {\widetilde{\pi }}({\textrm{d}}x) \, (\exp (-\lambda x)-1+\lambda x), \end{aligned}$$

where \({\widetilde{\alpha }}, {\widetilde{\beta }} \geqslant 0\) and \({\widetilde{\pi }}\) is a measure on \({\mathbb {R}}_+\) satisfying \(\int {\widetilde{\pi }}({\textrm{d}}x) (x \wedge x^2) < \infty \). In the following corollary, we identify the coefficients \({\widetilde{\alpha }}\) and \({\widetilde{\beta }}\).

Corollary 4.9

We have \({\widetilde{\alpha }} = {\mathcal {N}}\big (1-\exp (-\alpha \sigma )\big )\) and \({\widetilde{\beta }} = 0\).

Proof

To simplify notation, for \(\lambda \geqslant 0\) set \({\psi }^*(\lambda ):= {\psi }(\lambda )/ \lambda \) and \({\widetilde{\psi }}^*(\lambda ):= {\widetilde{\psi }}(\lambda )/ \lambda \). Since \({\widetilde{\psi }}\) satisfies (A1)–(A4), by Fubini we derive that \({\widetilde{\psi }}^*\) is the Laplace exponent of a subordinator, namely:

$$\begin{aligned} {\widetilde{\alpha }} + {\widetilde{\beta }} \lambda + \int _{{\mathbb {R}}_+} {\textrm{d}}r \, {\widetilde{\pi }}([r,\infty )) \big ( 1-\exp (-\lambda r ) \big ). \end{aligned}$$
(4.24)
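The Fubini step rests on the elementary identity \((\exp (-\lambda x) - 1 + \lambda x)/\lambda = \int _0^x {\textrm{d}}r \, (1-\exp (-\lambda r))\), valid for every \(\lambda > 0\) and \(x \geqslant 0\); integrating it against \({\widetilde{\pi }}\) gives

$$\begin{aligned} \int _{{\mathbb {R}}_+} {\widetilde{\pi }}({\textrm{d}}x) \, \frac{\exp (-\lambda x) - 1 + \lambda x}{\lambda } = \int _{{\mathbb {R}}_+} {\textrm{d}}r \, {\widetilde{\pi }}([r,\infty )) \, \big ( 1-\exp (-\lambda r ) \big ), \end{aligned}$$

which is the integral term in the display above.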

Next, introduce the measure \({\mathcal {N}}^*({\textrm{d}}\xi ):= {\mathcal {N}}(\exp (-\alpha \sigma ) {\textrm{d}}\xi )\) and observe that by (4.23), \({\widetilde{\psi }}^*(\lambda )\) can also be written in the form

$$\begin{aligned} {\mathcal {N}} \Big ( 1- \exp \Big (- \int _0^\sigma {\textrm{d}}h~ \psi ^*\big (u_\lambda (\xi _h)\big ) \Big ) \Big ) = {\mathcal {N}} \big ( 1- \exp (- \alpha \sigma ) \big ) + {\mathcal {N}}^* \Big ( 1- \exp \Big (- \int _0^\sigma {\textrm{d}}h~ \big (\psi ^*\big (u_\lambda (\xi _h)\big ) - \alpha \big ) \Big ) \Big ). \end{aligned}$$
(4.25)

Comparing with (4.24), our result will follow by showing that the second term on the right-hand side of (4.25) is the Laplace exponent of some pure-jump subordinator. In this direction, introduce under \(E^0 \otimes {\mathcal {N}}^*\) and conditionally on \((J_\infty , \xi )\), a Poisson point measure

$$\begin{aligned} {\mathcal {M}}({\textrm{d}}h, {\textrm{d}}\rho , {\textrm{d}}{\overline{W}}) = \sum _{i \in {\mathbb {N}}} \delta _{(h_i, \rho ^i,{\overline{W}}^i )}, \end{aligned}$$

with intensity \(J_\sigma ({\textrm{d}}h) {\mathbb {N}}_{\xi (h),0}\big ( {\textrm{d}}\rho ,{\textrm{d}}{\overline{W}} \big )\). This is always possible up to enlarging the measure space, and for simplicity we still denote the underlying measure by \(E^0\otimes {\mathcal {N}}^*\). Next, consider the random variable \(\sum _{i \in {\mathbb {N}}}{\mathscr {L}}^{\,0}_\sigma ( \rho ^i, {\overline{W}}^i)\) and denote its distribution by \(\nu ({\textrm{d}}x)\). By definition, we have:

$$\begin{aligned}&E^0\otimes {\mathcal {N}}^*\Big ( 1-\exp \Big ( -\lambda \sum _{i \in {\mathbb {N}}}{\mathscr {L}}^{\,0}_\sigma ( \rho ^i, W^i) \Big ) \Big ) \\&\quad = E^0\otimes {\mathcal {N}}^*\Big ( 1-\exp \Big ( - \int _0^\sigma J_\sigma ({\textrm{d}}h) u_\lambda \big (\xi (h)\big ) \Big ) \Big ) \\&\quad = {\mathcal {N}}^* \Big ( 1- \exp \Big (- \int _0^\sigma {\textrm{d}}h~ \big (\psi ^*\big (u_\lambda (\xi _h)\big ) - \alpha \big ) \Big ) \Big ), \end{aligned}$$

where in the last equality we used that \(J_\infty \) is the Lebesgue-Stieltjes measure of a subordinator with exponent \(\psi ^*(\lambda ) -\alpha \). Since the latter expression is finite, we deduce that \(\nu \) is a Lévy measure satisfying \(\int \nu ({\textrm{d}}r )\, (1 \wedge r)< \infty \), and that the second term on the right-hand side of (4.25) is the Laplace exponent of a driftless subordinator with Lévy measure given by \(\nu \). \(\square \)

4.2 Construction of the additive functional \((A_t)\)

We are finally in a position to introduce our additive functional:

Proposition 4.10

Fix \((y,r_0)\in {\overline{E}}\) and \((\mu ,\overline{\text {w} })\in {\overline{\Theta }}_x\). Under \({\mathbb {N}}_{y,r_0}\) and \({\mathbb {P}}_{\mu ,\overline{\text {w} }}\), the process defined as

$$\begin{aligned} A_t = \int _{{\mathbb {R}}_+} {\textrm{d}}r {\mathscr {L}}^{\,r}_t, \quad \quad \text { for } t \geqslant 0, \end{aligned}$$

is a continuous additive functional of the Lévy snake taking finite values. Furthermore, we have

$$\begin{aligned} A_t = \lim _{\varepsilon \downarrow 0} \frac{1}{\varepsilon } \int _0^t {\textrm{d}}u \int _{{\mathbb {R}}_+} {\textrm{d}}r \, \mathbb {1}_{\{ \tau _r({\overline{W}}_u )< H_u < \tau _r({\overline{W}}_u) + \varepsilon \}}, \end{aligned}$$
(4.26)

where the convergence holds uniformly on compact intervals in measure under \({\mathbb {P}}_{\mu , \overline{\text {w} } }\) and \({\mathbb {N}}_{y,r_0}( \, \cdot \, \cap \{ \sigma > z \} )\) for every \(z >0\).

Proof

We start by proving the proposition under \({\mathbb {P}}_{\mu ,{\overline{{\text {w}}}}}\), where \((\mu ,{\overline{{\text {w}}}}):=(\mu ,{\text {w}},\ell )\in {\overline{\Theta }}_x\). Remark that by the translation invariance of the local time we may assume that \(\ell (0)=0\) without loss of generality. For simplicity, we set \(y:={\text {w}}(0)\). Next, we write \({\widehat{\Lambda }}^*_t:= \sup _{s \leqslant t} {\widehat{\Lambda }}_s\) and we note that it suffices to show that for any \(t, K > 0\)

$$\begin{aligned} {\mathbb {E}}_{\mu ,{\overline{{\text {w}}}}} \Big [\sup _{s \leqslant t} | \int _{{\mathbb {R}}_+} {\textrm{d}}r \, \frac{1}{\varepsilon } \int _0^s {\textrm{d}}u ~\mathbb {1}_{\{ \tau _r({\overline{W}}_u )< H_u< \tau _r({\overline{W}}_u) + \varepsilon \}} - \int _\mathbb {R_+} {\textrm{d}}r {\mathscr {L}}^{\,r}_s | \, \cdot \mathbb {1}_{\{ {\widehat{\Lambda }}^*_t < K \}} \Big ] \rightarrow 0, \end{aligned}$$

as \(\varepsilon \downarrow 0\). In this direction, we remark that the previous expression is bounded above by

$$\begin{aligned}&\int _{{\mathbb {R}}_+} {\textrm{d}}r \, {\mathbb {E}}_{\mu ,{\overline{{\text {w}}}}} \Big [ \sup _{s \leqslant t} | \frac{1}{\varepsilon } \int _0^s {\textrm{d}}u~ \mathbb {1}_{\{ \tau _r({\overline{W}}_u)< H_u< \tau _r({\overline{W}}_u) + \varepsilon \}} - {\mathscr {L}}^{\,r}_s | \, \cdot \mathbb {1}_{\{ {\widehat{\Lambda }}^*_t< K \}} \Big ] \\&\quad \leqslant \int _{(0,K]} {\textrm{d}}r\, {\mathbb {E}}_{\mu ,{\overline{{\text {w}}}}} \Big [ \sup _{s \leqslant t} | \frac{1}{\varepsilon } \int _0^s {\textrm{d}}u~ \mathbb {1}_{\{ \tau _r({\overline{W}}_u)< H_u < \tau _r({\overline{W}}_u) + \varepsilon \}} - {\mathscr {L}}^{\,r}_s | \Big ], \end{aligned}$$

since on the event \(\{ {\widehat{\Lambda }}^*_t < K \}\) we have \({\mathscr {L}}^{r} = 0\) for every \(r>K\). Now, by Lemma 4.6, it suffices to show that the expectation under \({\mathbb {P}}_{\mu ,{\overline{{\text {w}}}}}\) in the previous display is uniformly bounded in \(\varepsilon , r > 0\), since the desired result then follows by dominated convergence. To do so, we set \(T_0^+:= \inf \big \{ t \geqslant 0: \langle \rho _t,1\rangle = 0 \big \}\) and we notice that by the strong Markov property, under \({\mathbb {P}}_{\mu ,{\overline{{\text {w}}}}}\), the distribution of \((\rho _{T_0^++s},{\overline{W}}_{T_0^++s}:~s\geqslant 0)\) is \({\mathbb {P}}_{0,y,0}({\textrm{d}}\rho ,{\textrm{d}}{\overline{W}})\). In particular, we have the upper bound:

$$\begin{aligned}&{\mathbb {E}}_{\mu ,{\overline{{\text {w}}}}} \Big [ \sup _{s \leqslant t} | \frac{1}{\varepsilon } \int _0^s {\textrm{d}}u~ \mathbb {1}_{\{ \tau _r({\overline{W}}_u)< H_u< \tau _r({\overline{W}}_u) + \varepsilon \}} - {\mathscr {L}}^{\,r}_s | \Big ]\\&\quad \leqslant {\mathbb {E}}_{\mu ,{\overline{{\text {w}}}}}^\dag \Big [ \frac{1}{\varepsilon } \int _0^{\sigma } {\textrm{d}}u~ \mathbb {1}_{\{ \tau _r({\overline{W}}_u)< H_u< \tau _r({\overline{W}}_u) + \varepsilon \}}+ {\mathscr {L}}^{\,r}_{\sigma } \Big ]\\&\qquad +{\mathbb {E}}_{0,y,0} \Big [ \sup _{s \leqslant t} | \frac{1}{\varepsilon } \int _0^s {\textrm{d}}u~ \mathbb {1}_{\{ \tau _r({\overline{W}}_u)< H_u < \tau _r({\overline{W}}_u) + \varepsilon \}} - {\mathscr {L}}^{\,r}_s | \Big ]. \end{aligned}$$

So to conclude we need to prove both:

$$\begin{aligned}&\text {(i)} \quad \sup _{\varepsilon>0} \sup _{r> 0} {\mathbb {E}}_{0,y,0} \Big [ \sup _{s \leqslant t} | \frac{1}{\varepsilon } \int _0^s {\textrm{d}}u~ \mathbb {1}_{\{ \tau _r({\overline{W}}_u)< H_u< \tau _r({\overline{W}}_u) + \varepsilon \}} - {\mathscr {L}}^{\,r}_s | \Big ]< \infty ; \\&\text {(ii)}\quad \sup _{\varepsilon>0} \sup _{r > 0} {\mathbb {E}}_{\mu ,{\overline{{\text {w}}}}}^\dag \Big [ \frac{1}{\varepsilon } \int _0^{\sigma } {\textrm{d}}u~ \mathbb {1}_{\{ \tau _r({\overline{W}}_u)< H_u< \tau _r({\overline{W}}_u) + \varepsilon \}}+ {\mathscr {L}}^{\,r}_{\sigma } \Big ]<\infty . \end{aligned}$$

Let us start by showing (i). We are going to apply techniques similar to the ones used in the proof of Theorem 3.7. In this direction, we work under \({\mathbb {P}}_{0,y,0}\) and we fix \(r,\varepsilon >0\). Now, recall the definition of \(\gamma ^{D_r}\), \(\sigma ^{D_r}\) and \(\rho ^{D_r}\) introduced in Sect. 3.2 (keeping in mind that here we work with \((\rho ,{\overline{W}})\)) and set

$$\begin{aligned} R^{D_r}_t:= \int _0^t {\textrm{d}}s \mathbb {1}_{\{ \gamma ^{D_r} _s > 0 \}}, \quad \quad \text { for }t \geqslant 0, \end{aligned}$$

which is the right inverse of \(\sigma ^{D_r}\). Next, for every \(r > 0\), by definition we have \(\tau _r({\overline{W}}_t)=\tau _{D_r}({\overline{W}}_t)\) and we derive that

$$\begin{aligned} \int _0^s {\textrm{d}}u~ \mathbb {1}_{\{ \tau _r({\overline{W}}_u )< H_u< \tau _r({\overline{W}}_u) + \varepsilon \}}&= \int _0^{R_s^{D_r}} {\textrm{d}}u~ \mathbb {1}_{\{ 0< H(\rho ^{D_r}_u) < \varepsilon \}} , \end{aligned}$$

since on \(\{ u\geqslant 0: \, H(\rho _{\sigma ^{D_r}_u}) > \tau _{r}({\overline{W}}_{\sigma ^{D_r}_u}) \}\), we have \(H(\rho _u^{D_r}) = H ( \rho _{\sigma _u^{D_r}}) - \tau _{r}({\overline{W}}_{\sigma _u^{D_r}}).\) Recall from (3.22) that \(\langle \rho ^{D_r},1 \rangle \) is distributed as \(\langle \rho ,1 \rangle \) under \({\mathbb {P}}_{0,y,0}\), which is a reflected \(\psi \)-Lévy process, and that we denote its local time at 0 by \(\ell ^{D_r}\). In particular, the distribution of \((\langle \rho ^{D_r},1\rangle , \ell ^{D_r})\) is the same as \(\big ((X_t-I_{t}, -I_{t}):~t\geqslant 0\big )\). Recalling from (3.23) that \({\mathscr {L}}^{\,r}_t = \ell ^{D_r} ( R^{D_r}_t )\) and noticing that \(R^{D_r}_s \leqslant s\), we derive the following inequality:

$$\begin{aligned}&{\mathbb {E}}_{0,y,0}\Big [ \sup _{s \leqslant t} | \frac{1}{\varepsilon } \int _0^s {\textrm{d}}u~ \mathbb {1}_{\{ \tau _r({\overline{W}}_u )< H_u< \tau _r({\overline{W}}_u) + \varepsilon \}} - {\mathscr {L}}^{\,r}_s | \Big ]\\&\quad = {\mathbb {E}}_{0,y,0} \Big [ \sup _{s \leqslant t} | \frac{1}{\varepsilon } \int _0^{R^{D_r}_s} {\textrm{d}}u~ \mathbb {1}_{\{ 0< H(\rho ^{D_r}_u)< \varepsilon \}} - \ell ^{D_r}(R^{D_r}_s) | \Big ] \\&\quad \leqslant {\mathbb {E}}_{0,y,0}\Big [\sup _{s \leqslant t} | \frac{1}{\varepsilon } \int _0^{s} {\textrm{d}}u~ \mathbb {1}_{\{ 0< H(\rho ^{D_r}_u)< \varepsilon \}} - \ell ^{D_r} (s) | \Big ]\\&\quad ={\mathbb {E}}_{0,y,0}\Big [\sup _{s \leqslant t} | \frac{1}{\varepsilon } \int _0^{s} {\textrm{d}}u~ \mathbb {1}_{\{ 0< H(\rho _u) < \varepsilon \}} + I_{s} | \Big ], \end{aligned}$$

where in the first line we used that for each fixed \(r>0\), the processes \({\mathscr {L}}^{\,r}\) and \(L^{D_r}\) are indistinguishable. The latter quantity does not depend on r and by (2.7) it converges to 0 as \(\varepsilon \downarrow 0\), giving (i).

We now turn our attention to the proof of (ii). On the one hand, by Proposition 3.3 (ii) and (3.4), for every \(r>0\) we have

$$\begin{aligned} {\mathbb {E}}^{\dag }_{\mu ,{\overline{{\text {w}}}}} \big [ {\mathscr {L}}^{\,r}_\sigma \big ]&= \int _{(0,\tau _r({\overline{{\text {w}}}}))} \mu ({\textrm{d}}h)~{\mathbb {N}}_{{\overline{{\text {w}}}}(h)}\big ( {\mathscr {L}}^{\,r}_\sigma \big ) \\&= \int _{(0,\tau _r({\overline{{\text {w}}}}))} \mu ({\textrm{d}}h)~ E^0 \otimes \Pi _{{\overline{{\text {w}}}}(h)}\big [ \mathbb {1}_{\{ \tau _r(\xi ,{\mathcal {L}}) < \infty \}}\exp \big (-\alpha \tau _r(\xi ,{\mathcal {L}})\big )\big ] \leqslant \langle \mu , 1 \rangle . \end{aligned}$$

On the other hand, the remaining term

$$\begin{aligned} {\mathbb {E}}_{\mu ,{\overline{{\text {w}}}}}^\dag \Big [ \frac{1}{\varepsilon } \int _0^{\sigma } {\textrm{d}}u~ \mathbb {1}_{\{ \tau _r({\overline{W}}_u)< H_u < \tau _r({\overline{W}}_u) + \varepsilon \}}\Big ] \end{aligned}$$

can be bounded similarly to what we did in (3.10). To this end, notice that if \(\mu =0\) there is nothing to prove and therefore, from now on, we assume that \(\mu \ne 0\). Then consider, under \({\mathbb {P}}_{\mu , {\overline{{\text {w}}}}}^{\dag }\), the random measure \(\sum _{i\in {\mathbb {N}}}\delta _{(h_i, \rho ^i, {\overline{W}}^i)}\) defined in (2.23), set \(T:= \inf \{ t > 0: H_t = \tau _r({\overline{{\text {w}}}}) \}\), with the convention \(T=0\) if \(\tau _r({\overline{{\text {w}}}})=\infty \), and remark that for every \(s\in [0,T]\) we have \(\tau _r({\overline{W}}_s) = \tau _r({\overline{{\text {w}}}})\). Recalling that \(\mu (\{\tau _r({\overline{{\text {w}}}})\}) = 0\), it follows, by considering the excursion intervals of \(H\) over its running infimum and using our previous remark, that the integral \(\int _0^\sigma {\textrm{d}}u~ \mathbb {1}_{\{ \tau _r({\overline{W}}_u)< H_u < \tau _r({\overline{W}}_u) +\varepsilon \}}\) can be written as

$$\begin{aligned} \sum _{h_i > \tau _r({\overline{{\text {w}}}})} \int _0^{\sigma ({\overline{W}}^i)} {\textrm{d}}u~ \mathbb {1}_{\{ \tau _r({\overline{{\text {w}}}})< h_i + H(\rho ^i_u)< \tau _r({\overline{{\text {w}}}}) + \varepsilon \}} + \sum _{h_i< \tau _r({\overline{{\text {w}}}})} \int _0^{\sigma ({\overline{W}}^i)} {\textrm{d}}u~ \mathbb {1}_{\{ \tau _r({\overline{W}}^i_u)< H(\rho ^i_u) < \tau _r({\overline{W}}^i_u) + \varepsilon \}}, \end{aligned}$$

where the first term is now bounded above by \( \sum _{h_i > \tau _r({\overline{{\text {w}}}})} \int _0^{\sigma ({\overline{W}}^i)} {\textrm{d}}u~ \mathbb {1}_{\{ 0< H(\rho ^i_u) < \varepsilon \}}. \) Consequently, by (2.25) we have

$$\begin{aligned}&{\mathbb {E}}^{\dag }_{\mu ,{\overline{{\text {w}}}}} \Big [ \int _0^\sigma {\textrm{d}}u~ \mathbb {1}_{\{ \tau _r({\overline{W}}_u)< H_u< \tau _r({\overline{W}}_u) + \varepsilon \}} \Big ] \\&\quad \leqslant \mu ((\tau _r({\overline{{\text {w}}}}), \infty ))N(\int _{0}^{\sigma }{\textrm{d}}s \, \mathbb {1}_{\{0< H(\rho _s)<\varepsilon \}})\\&\qquad +\int _{(0,\, \tau _r({\overline{{\text {w}}}}))}\mu ({\textrm{d}}h){\mathbb {N}}_{{\overline{{\text {w}}}}(h)}\big (\int _{0}^{\sigma } {\textrm{d}}s \, \mathbb {1}_{\{ \tau _{r}({\overline{W}}_s)< H_s <\tau _{r}({\overline{W}}_s)+ \varepsilon \}}\big ), \end{aligned}$$

and by the many-to-one formula (2.25), the previous display is bounded by \(\varepsilon \cdot \langle \mu , 1 \rangle \). Putting everything together we deduce the upper bound

$$\begin{aligned} {\mathbb {E}}^{\dag }_{\mu ,{\overline{{\text {w}}}}} \Big [ \frac{1}{\varepsilon } \int _0^\sigma {\textrm{d}}u~ \mathbb {1}_{\{ \tau _r({\overline{W}}_u)< H_u < \tau _r({\overline{W}}_u) + \varepsilon \}} + {\mathscr {L}}^{\,r}_\sigma \Big ] \leqslant 2\cdot \, \langle \mu , 1 \rangle , \end{aligned}$$

which does not depend on the pair \(r, \varepsilon >0\) and concludes the proof of (ii).

Finally, we extend the result under the excursion measure \({\mathbb {N}}_{y,r_0}\). Working under \({\mathbb {P}}_{0,y,r_0}\), fix \(z > 0\) and denote by \( (\rho ^\prime , {\overline{W}}^\prime ) = (\rho _{(g+\cdot )\wedge d}, {\overline{W}}_{(g+\cdot )\wedge d})\) the first excursion with length \(\sigma > z\). By the previous result, the quantity

$$\begin{aligned}&\sup _{s \leqslant t} \Big |~\varepsilon ^{-1} \int _{0}^{s} {\textrm{d}}u \int _{{\mathbb {R}}_+} {\textrm{d}}r \mathbb {1}_{\{ \tau _r({\overline{W}}^{\prime }_u)< H(\rho _u^{\prime })< \tau _r({\overline{W}}^{\prime }_u) + \varepsilon \}} - \int _{{\mathbb {R}}_+} {\textrm{d}}r {\mathscr {L}}^{\,r}_s (\rho ^\prime , {\overline{W}}^\prime ) ~\Big | \\&\quad = \sup _{s \leqslant t \wedge (d-g)} \Big |~ \varepsilon ^{-1} \int _{g}^{g+s}{\textrm{d}}u \int _{{\mathbb {R}}_+} {\textrm{d}}r \mathbb {1}_{\{ \tau _r({\overline{W}}_u)< H_u < \tau _r({\overline{W}}_u) + \varepsilon \}} - \int _{{\mathbb {R}}_+} {\textrm{d}}r ({\mathscr {L}}^{\,r}_{g+s} -{\mathscr {L}}^{\,r}_g) ~\Big | \end{aligned}$$

converges in probability to 0, and it then follows that (4.26) holds in measure under \({\mathbb {N}}_{y,r_0}( \, \cdot \, \cap \{ \sigma > z\} )\). \(\square \)

As a straightforward consequence of the definition of A we deduce the following many-to-one formula:

Lemma 4.11

For any non-negative measurable function \(\Phi \) on \(M_f({\mathbb {R}}_+)\times M_f({\mathbb {R}}_+) \times {\mathcal {W}}_{{\overline{E}}}\) and \((y,r_0) \in {\overline{E}}\), we have

(4.27)

Proof

By the translation invariance of the local time, it is enough to prove the lemma for \(r_0=0\). Now recall that, under \({\mathbb {N}}_{y,0}\), for every fixed \(r\geqslant 0\) the processes \({\mathscr {L}}^{\,r}\) and \(L^{D_r}\) are indistinguishable. Consequently, the left-hand side of (4.27) can be written in the form:

$$\begin{aligned} \int _0^\infty {\textrm{d}}r \, {\mathbb {N}}_{y,0} \left( \int _0^\sigma {\textrm{d}}{L}_s^{D_r}~ \Phi \left( \rho _s, \eta _s,{\overline{W}}_s \right) \right) , \end{aligned}$$

and hence we arrive at (4.27) by applying (3.4). \(\square \)
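In the proof above, the first rewriting is justified by Fubini's theorem and a monotone class argument: since \(A_t = \int _{{\mathbb {R}}_+} {\textrm{d}}r \, {\mathscr {L}}^{\,r}_t\) with each \({\mathscr {L}}^{\,r}\) non-decreasing, for every non-negative measurable \(g\) we have

$$\begin{aligned} \int _0^\sigma {\textrm{d}}A_s \, g(s) = \int _0^\infty {\textrm{d}}r \int _0^\sigma {\textrm{d}}{\mathscr {L}}^{\,r}_s \, g(s), \end{aligned}$$

applied here with \(g(s) = \Phi (\rho _s, \eta _s, {\overline{W}}_s)\).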

A first consequence of Lemma 4.11 is that for any \((y,r_0) \in {\overline{E}}\), we have

$$\begin{aligned} \text {supp} \, {\textrm{d}}A \subset \{ t \in {\mathbb {R}}_+: {\widehat{W}}_t = x \}, \quad {\mathbb {N}}_{y,r_0} \text {- a.e.} \end{aligned}$$
(4.28)

Indeed, it suffices to observe that, by (4.27), for any \(\varepsilon >0\) it holds that \( {\mathbb {N}}_{y,r_0} \left( \int _0^\sigma {\textrm{d}}A_s \mathbb {1}_{\{ d_E ( {\widehat{W}}_s,x ) > \varepsilon \}} \right) = 0\), where we recall that \(d_E\) stands for the metric of E, and to let \(\varepsilon \downarrow 0\). Let us now comment on a few useful identities that will be used frequently in our computations:

Remark 4.12

Fix \((y,r_0) \in {\overline{E}}\) with \(y \ne x\). Under \({\mathbb {N}}_{y,r_0}\) or \({\mathbb {P}}_{0,y,r_0}\), let \((g,d)\) be an interval such that \(H_s>H_g = H_d\), for every \(s\in (g,d)\), and \({\widehat{\Lambda }}_g = r_0\) – remark that in particular we have \(p_{H}(g)=p_{H}(d)\). We denote the corresponding subtrajectory, in the sense of Sect. 2.3, by \((\rho ^\prime , {\overline{W}}^\prime )\) and its duration by \(\sigma ^\prime = \sigma (W^\prime )\). Since for any \(q\geqslant r_0\) and \(s \geqslant 0\):

$$\begin{aligned} H_{(g + s)\wedge d}= H_g + H(\rho ^\prime _{s \wedge \sigma ^\prime }) ~~~\text { and }~~~ \tau _q({\overline{W}}_{(g + s)\wedge d}) = H_g + \tau _q({\overline{W}}_{s \wedge \sigma ^\prime }^{\prime }), \end{aligned}$$

we deduce by the approximation (4.26) that the process \((A_{(g + t)\wedge d} - A_g: t \geqslant 0)\) only depends on \((\rho ^\prime , {\overline{W}}^\prime )\); it will be denoted by \((A_{s}(\rho ^\prime , {\overline{W}}^\prime ): s \geqslant 0)\). Now we make the following observations:

(i) Working under \({\mathbb {N}}_{y,r_0}\) or \({\mathbb {P}}_{0,y,r_0}\), we denote the connected components of the open set \(\{( H_s - \tau _{r_0}({\overline{W}}_s) )_+ > 0\}\) by \(((\alpha _i, \beta _i): i \in {\mathcal {I}} )\) and we write \(\sigma _i:=\beta _i-\alpha _i\) for their durations. We also write \(( \rho ^{i}, {\overline{W}}^{i})\) for the excursions from \(D_{r_0}\) corresponding to the intervals \((\alpha _i,\beta _i)\). By (4.26) in Proposition 4.10, the measure \({\textrm{d}}A\) does not charge the set \(\{ s\geqslant 0: H_s \leqslant \tau _{r_0}({\overline{W}}_s) \}\) and we derive that:

$$\begin{aligned} A_{\sigma } = \sum _{i \in {\mathcal {I}}} \int _{(\alpha _i, \beta _i]}{\textrm{d}}A_s = \sum _{i\in {\mathcal {I}}} A_{\sigma _i}(\rho ^{i}, {\overline{W}}^{i}), \quad \quad {\mathbb {N}}_{y,r_0}\text {-a.e.} \text { and } {\mathbb {P}}_{0,y,r_0} \text {-a.s.} \end{aligned}$$
(4.29)

(ii) We will now make similar remarks under \({\mathbb {P}}_{\mu , {\overline{{\text {w}}}}}^\dag \), for \((\mu ,{\overline{{\text {w}}}})\in {\overline{\Theta }}_x\) with \(\mu \ne 0\). Under \({\mathbb {P}}^{\dag }_{\mu , {\overline{{\text {w}}}}}\), denote the connected components of \(\{s\geqslant 0:~H_s>\inf _{[0,s]} H \}\) by \(((a_i,b_i):i\in {\mathcal {I}})\) and write \((\rho ^i,{\overline{W}}^i)\) for the subtrajectory associated with \([a_i,b_i]\). We also set \(h_i=H_{a_i}\), \(\sigma _i=b_i-a_i\), and recall that the measure \({\mathcal {M}} = \sum _{i \in {\mathcal {I}}}\delta _{(h_i, \rho ^i, {\overline{W}}^{i})}\) is the Poisson point measure (2.23) associated with \((\rho , {\overline{W}})\). Moreover, we have:

$$\begin{aligned} {\mathbb {E}}^{\dag }_{\mu , {\overline{{\text {w}}}}}\big [| A_\sigma -\sum _{i\in {\mathcal {I}}} A_{\sigma _i}(\rho ^{i}, {\overline{W}}^{i})|\big ]&\leqslant \int _{{\mathbb {R}}_+} {\textrm{d}}r~ {\mathbb {E}}^{\dag }_{\mu , {\overline{{\text {w}}}}}\big [| {\mathscr {L}}_\sigma ^{\,r} -\sum _{i\in {\mathcal {I}}} {\mathscr {L}}_{\sigma _i}^{\,r}(\rho ^{i}, {\overline{W}}^{i})|\big ]. \end{aligned}$$

Consequently, by Proposition 3.3 (ii), the previous quantity vanishes, and it follows that we still have

$$\begin{aligned} A_\sigma = \sum _{i\in {\mathcal {I}}} A_{\sigma _i}(\rho ^{i}, {\overline{W}}^{i}),\quad \quad {\mathbb {P}}^{\dag }_{\mu ,{\overline{{\text {w}}}}} \text { - a.s.} \end{aligned}$$
(4.30)

Recall now the definition (4.12) of \({\widetilde{\psi }}\) and the notation \(u_\lambda \) introduced in (4.10). The following proposition relates the Laplace transform of the total mass \(A_\sigma \) under \({\mathbb {N}}_{y,r_0}\) and the Laplace exponent \({\widetilde{\psi }}\). This identity will be needed to characterize the support of \({\textrm{d}}A\) and will also play a central role in Sect. 5.

Proposition 4.13

For every \(r_0,\lambda \geqslant 0\) and \(y \in E\), we have

$$\begin{aligned} {\mathbb {N}}_{y,r_0}\Big ( 1-\exp \big (- \lambda A_\infty \big ) \Big ) = u_{{\widetilde{\psi }}^{-1}(\lambda )}(y), \end{aligned}$$

where we recall the convention \(u_\lambda (x)=\lambda \), for every \(\lambda \geqslant 0\). Moreover, for every \((\mu ,\overline{\textrm{w}})\in {\overline{\Theta }}_{x}\), we have:

$$\begin{aligned} {\mathbb {E}}^{\dag }_{\mu , \overline{\textrm{w}}} \Big [ \exp \big (- \lambda A_\infty \big ) \Big ] = \exp \Big (- \int \mu ({\textrm{d}}h)~ u_{{\widetilde{\psi }}^{-1}(\lambda )}(\text {w} (h)) \Big ). \end{aligned}$$

The proposition has the following consequence: since \({\widetilde{\psi }}^{-1}(\lambda ) = {\widetilde{N}}(1-\exp (-\lambda \sigma ))\), the total mass \(A_\infty \) under \({\mathbb {N}}_{x,0}\) and \(\sigma \) under \({\widetilde{N}}\) have the same distribution. This connection is only the tip of the iceberg: in the upcoming section, we establish that the tree structure of the set \(\{\upsilon \in {\mathcal {T}}_H:~\xi _\upsilon =x\}\) is encoded by a \({\widetilde{\psi }}\)–Lévy tree.
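Let us spell out this identification. Applying the first identity of Proposition 4.13 with \((y,r_0)=(x,0)\) and using the convention \(u_\lambda (x)=\lambda \), we get, for every \(\lambda \geqslant 0\):

$$\begin{aligned} {\mathbb {N}}_{x,0}\Big ( 1-\exp \big (- \lambda A_\infty \big ) \Big ) = u_{{\widetilde{\psi }}^{-1}(\lambda )}(x) = {\widetilde{\psi }}^{-1}(\lambda ) = {\widetilde{N}}\Big ( 1-\exp \big (-\lambda \sigma \big ) \Big ), \end{aligned}$$

and the claimed equality in distribution follows by injectivity of the Laplace transform.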

Proof

Under \({\mathbb {N}}_{y,r_0}\) with \(y \ne x\) and \(r_0\geqslant 0\), set

$$\begin{aligned} T^*:= \inf \{t \geqslant 0: \tau _{r_0}({\overline{W}}_t) < \infty \}, \end{aligned}$$

which is just the first hitting time of x by \(({\widehat{W}}_t)_{t \in [0,\sigma ]}\). Notice that by (4.28), \(A_\infty \) vanishes on \(\{ T^* = \infty \}\), \({\mathbb {N}}_{y,r_0}\)-a.e. We set \(G_\lambda := {\mathbb {N}}_{x,0}(1-\exp (-\lambda A_\infty ))\), and remark that the identity (4.29) and the special Markov property applied to the domain \(D_{r_0}\) yield:

$$\begin{aligned} {\mathbb {N}}_{y,r_0}\Big ( 1- \exp \big (-\lambda A_\infty \big ) \Big ) = {\mathbb {N}}_{y,r_0}\Big ( 1- \exp \Big ( - {\mathscr {L}}^{r_0}_\sigma \cdot {\mathbb {N}}_{x, r_0} \big ( 1-\exp \big (-\lambda A_\infty \big )\big ) \Big )\Big ). \end{aligned}$$

Next, by the translation invariance of the local time \({\mathcal {L}}\), we derive that the previous display is equal to:

$$\begin{aligned} {\mathbb {N}}_{y,0}\Big ( 1- \exp \Big ( - {\mathscr {L}}^{0}_\sigma \cdot {\mathbb {N}}_{x, 0} \big ( 1-\exp \big (-\lambda A_\infty \big )\big ) \Big )\Big )=u_{G_\lambda }(y). \end{aligned}$$

Moreover, for \((\mu , {\overline{{\text {w}}}} ) \in {\overline{\Theta }}_x\), if we denote under \({\mathbb {P}}^{\dag }_{\mu , {\overline{{\text {w}}}}}\) the Poisson point measure introduced in (2.23) by \(\sum _{i \in {\mathcal {I}}}\delta _{(h_i, \rho ^i, {\overline{W}}^{i})}\), we get:

$$\begin{aligned} {\mathbb {E}}^{\dag }_{\mu , {\overline{{\text {w}}}}} \Big [ \exp \big (- \lambda A_\infty \big ) \Big ]&= {\mathbb {E}}^{\dag }_{\mu , {\overline{{\text {w}}}}} \Big [ \exp \big (- \lambda \sum _{i \in {\mathcal {I}}} A_\infty (\rho ^i , {\overline{W}}^{i})\big ) \Big ] \\&= \exp \Big ( -\int \mu ({\textrm{d}}h) ~{\mathbb {N}}_{{\overline{{\text {w}}}}(h)}\big ( 1-\exp \big (-\lambda A_\infty \big ) \big ) \Big ) \\&= \exp \Big (- \int \mu ({\textrm{d}}h)~ u_{G_\lambda }({\text {w}}(h)) \Big ), \end{aligned}$$

where in the first equality we applied (4.30), and in the second we used that \(\sum _{i \in {\mathcal {I}}}\delta _{(h_i, \rho ^i, {\overline{W}}^{i})}\) is a Poisson point measure with intensity \(\mu ({\textrm{d}}h) {\mathbb {N}}_{{\overline{{\text {w}}}}(h)}({\textrm{d}}\rho ,{\textrm{d}}W)\). Consequently, the statement of the proposition will now follow if we establish that \(G_\lambda ={\widetilde{\psi }}^{-1}(\lambda )\). In this direction, for \(\lambda > 0\), notice that the Markov property implies that

$$\begin{aligned} G_\lambda&= \lambda \cdot {\mathbb {N}}_{x,0} \Big ( \int _0^\sigma {\textrm{d}}A_s ~ \exp \big (- \lambda \int _s^\sigma {\textrm{d}}A_u \big ) \Big )\\&=\lambda \cdot {\mathbb {N}}_{x,0} \Big ( \int _0^\sigma {\textrm{d}}A_s ~ {\mathbb {E}}^{\dag }_{\rho _s, {\overline{W}}_s} \Big [ \exp \big (- \lambda \int _0^\sigma {\textrm{d}}A_u\big ) \Big ] \Big ). \end{aligned}$$

By the previous discussion under \({\mathbb {P}}^{\dag }_{\mu , {\overline{{\text {w}}}}}\) and the many-to-one formula of A given in Lemma 4.11, we get:

$$\begin{aligned} G_\lambda&= \lambda \int _0^\infty {\textrm{d}}r ~ E^{0} \otimes \Pi _{x,0} \Big ( \exp \big (- \alpha \tau _r\big ) \exp \Big (- \int _0^{\tau _r} J_{\tau _r}({\textrm{d}}h) ~u_{G_\lambda }\big (\xi (h)\big ) \Big ) \Big )\\&= \lambda \int _0^\infty {\textrm{d}}r ~ \Pi _{x,0} \Big ( \exp \Big (- \int _0^{\tau _r} {\textrm{d}}h~ \frac{\psi \big ( u_{G_\lambda }(\xi (h))\big )}{u_{G_\lambda }\big (\xi (h)\big )} \Big ) \Big ), \end{aligned}$$

where we recall that \(\tau _r(\xi ,{\mathcal {L}}):=\inf \{s\geqslant 0:~{\mathcal {L}}_s\geqslant r\}\) and in the second equality we used that \(J_\infty ({\textrm{d}}h)\) is the Lebesgue-Stieltjes measure of a subordinator with exponent \(\psi (\lambda )/\lambda - \alpha \). Next, under \(\Pi _{x,0}\), we denote by \((s_i,t_i)_{i\geqslant 1}\) the connected components of \(\{s\geqslant 0:~\xi _s\ne x\}\) and by \((\xi ^i)_{i \geqslant 1}\) the corresponding excursions, and we remark that:

$$\begin{aligned} \int _0^{\tau _r} {\textrm{d}}h~ \frac{\psi \big ( u_{G_\lambda }(\xi (h))\big )}{u_{G_\lambda }\big (\xi (h)\big )}= \sum \limits _{i\geqslant 1, {\mathcal {L}}_{s_i}< r} \int _{s_i}^{t_i} {\textrm{d}}h~\frac{\psi \big ( u_{G_\lambda }(\xi (h))\big )}{u_{G_\lambda }\big (\xi (h)\big )}, \end{aligned}$$

since \(\int _0^\infty {\textrm{d}}h \mathbb {1}_{\{ \xi _h = x \}} = 0\) by assumption (\(\hbox {H}_{3}\)). Consequently, if we denote by \(\sigma _{\xi ^i}\) the lifetime of the excursion \(\xi ^i\), by Campbell’s formula we get:

$$\begin{aligned}&\Pi _{x,0} \Big ( \exp \Big (- \int _0^{\tau _r} {\textrm{d}}h~ \frac{\psi \big ( u_{G_\lambda }(\xi (h))\big )}{u_{G_\lambda }\big (\xi (h)\big )} \Big ) \Big )\\&\quad =\Pi _{x,0} \Big ( \exp \Big (- \sum \limits _{i\geqslant 1, {\mathcal {L}}_{s_i} < r} \int _{0}^{\sigma _{\xi ^i}} {\textrm{d}}h~\frac{\psi \big ( u_{G_\lambda }(\xi ^i(h))\big )}{u_{G_\lambda }\big (\xi ^i(h)\big )} \Big ) \Big ) \\&\quad = \exp \Big (- r\cdot {\mathcal {N}} \Big ( 1-\exp \big (- \int _0^\sigma {\textrm{d}}h ~\frac{\psi \big (u_{G_\lambda }(\xi _h)\big )}{u_{G_\lambda }(\xi _h)} \big ) \Big )\Big ), \end{aligned}$$

and hence

$$\begin{aligned} G_\lambda&= \lambda \cdot {\mathcal {N}} \Big ( 1-\exp \Big (- \int _0^\sigma {\textrm{d}}h ~ \frac{\psi \big (u_{G_\lambda }(\xi _h)\big )}{u_{G_\lambda }(\xi _h)} \Big ) \Big ) ^{-1}. \end{aligned}$$

However, by the first identity in (4.23), we have

$$\begin{aligned} {\mathcal {N}} \Big ( 1-\exp \Big (- \int _0^\sigma {\textrm{d}}h \frac{\psi (u_{G_\lambda }(\xi _h))}{u_{G_\lambda }(\xi _h)} \Big ) \Big )&= \frac{{\widetilde{\psi }}(G_\lambda )}{G_\lambda }, \end{aligned}$$

and we derive that \({\widetilde{\psi }}(G_\lambda ) = \lambda \) for \(\lambda > 0\) and equivalently \(G_\lambda ={\widetilde{\psi }}^{-1}(\lambda )\). Finally, since \(G_0 = 0\) the identity also holds for \(\lambda = 0\). \(\square \)

Remark 4.14

We conclude this section with an informal discussion relating our additive functional A with the so-called family of local times at \(y \in {\mathbb {R}}\) of a one-dimensional super-Brownian motion \({{\textbf {X}}} = ({{\textbf {X}}}_t: t \geqslant 0)\) with branching mechanism \(\psi (\lambda ) = 2\lambda ^2\); this remark was pointed out by one of the anonymous referees, to whom we are thankful. To illustrate this, let \({\mathbb {N}}_{0,0}\) be the excursion measure away from (0, 0) of the \(2 \lambda ^2 \)-Lévy snake with spatial motion the pair formed by a one-dimensional Brownian motion and its local time at 0. For some arbitrary fixed \(r>0\), consider a Poisson measure \(\sum _{i \in {\mathbb {N}}}\delta _{(\rho ^i, {\overline{W}}^i)}\) with intensity \(r\cdot {\mathbb {N}}_{0,0}\) where, as usual, we use the notation \({\overline{W}}^i=(W^i,\Lambda ^i)\). Then, if for every \(t >0\), we write \((L^t_s: s \geqslant 0 )\) for the local time at level t of the height process, in the sense of [11, Definition 1.3.1], the measure-valued process defined by the relation

$$\begin{aligned} \langle {{\textbf {X}}}_t, f \rangle =\sum \limits _{i\in {\mathbb {N}}} \int _0^{\sigma _i} {\textrm{d}}L^t_s(\rho ^i, {\overline{W}}^i) f({\widehat{W}}^i_s), \quad \text { for }t >0, \end{aligned}$$

for every non-negative measurable function \(f: {\mathbb {R}} \rightarrow {\mathbb {R}}_+\), and \({{\textbf {X}}}_0=r\cdot \delta _0\), is a one-dimensional super-Brownian motion with branching mechanism \(\psi (\lambda ) = 2\lambda ^2\) started at \(r \cdot \delta _0\). We denote its law by \(P_{r \delta _0}\) and we refer to [11, Theorem 4.2.1] for background on this statement. It was proved in [36, Theorem 2] that there exists a process \((Y(t,y): 0 \leqslant t \leqslant \infty , y \in {\mathbb {R}})\) defined under \(P_{r\delta _0}\) and characterized, for every non-negative measurable function \(f: {\mathbb {R}} \rightarrow {\mathbb {R}}_+\), by the following occupation formula

$$\begin{aligned} \int _0^t {\textrm{d}}s \, \langle {{\textbf {X}}}_s, f \rangle = \int _{\mathbb {R}} {\textrm{d}}y \, f(y) Y(t,y). \end{aligned}$$

When \(t = \infty \) and \(y = 0\), using equation (18) in [28], we infer that \(Y(\infty ,0) = \sum _{i\in {\mathbb {N}}} A_{\infty }(\rho ^i, {\overline{W}}^i)\), where A stands for the additive functional defined for the point 0. Furthermore, the special Markov property leads us to conjecture that we have \(Y(t,0) = \sum _{i\in {\mathbb {N}}}\int _0^\infty {\textrm{d}}A_s (\rho ^i, {\overline{W}}^i) \mathbb {1}_{{\{ H_s(\rho ^i) \leqslant t\} }}\), for \(0< t < \infty \), and that this relation also holds for \(Y(t,y)\) for an arbitrary fixed \(y \in {\mathbb {R}}\), making use now of our additive functional associated with the point \(y \in {\mathbb {R}}\). It is worth noting that these relations should also hold for more general branching mechanisms and spatial motions. Specifically, when the branching mechanism is stable and the spatial motion is a Brownian motion, the local time process Y has already been defined and studied. For more details, we direct the reader to [33] and the references therein. The aforementioned relations should extend to this case.

4.3 Characterization of the support of \({\textrm{d}}A\)

The rest of the section is devoted to the characterization, under \({\mathbb {N}}_{y,r_0}\) and \({\mathbb {P}}_{\mu ,{\overline{{\text {w}}}}}\), of the support of the measure \({\textrm{d}}A\). Our characterization is given in terms of the constancy intervals of \({\widehat{\Lambda }}\), and of a family of special times for the Lévy snake that will be named exit times from x. Before giving a precise statement we will need several preliminary results under \({\mathbb {N}}_{x,0}\). First recall that under \({\mathbb {N}}_{x,0}\), for every \(r>0\) the processes \({\mathscr {L}}^{\,r }\) and \(L^{D_r}\) are indistinguishable – and in particular, by Proposition 3.4, \({\mathscr {L}}^{\,r }_\sigma \) is \({\mathcal {F}}^{D_r}\)-measurable. Fix \(r>0\), recall the notation \(\tau _r(\rho _t,{\overline{W}}_t)=\tau _{D_r}(\rho _t,{\overline{W}}_t)\) for \(t\geqslant 0\), and denote the connected components of the open set \(\{ t \in [0,\sigma ]: \tau _{r}({\overline{W}}_t) < H_t \}\) by \(\{ (a^r_i, b^r_i): \, i \in {\mathcal {I}}_r \}\). We write \(\{(\rho ^{i,r},{\overline{W}}^{i,r} ): \, i \in {\mathcal {I}}_r \}\) for the corresponding subtrajectories, where as usual \({\overline{W}}^{i,r}=(W^{i,r},\Lambda ^{i,r})\). Next, recall the notation \(\Gamma _s^{D_r}:=\inf \big \{t\geqslant 0: V_t^{D_r} > s\big \}\) for \(V^{D_r}\) defined by (3.1), and set:

$$\begin{aligned} \theta _{u}^{r}:=\inf \big \{s \geqslant 0 \,: {\mathscr {L}}^{\,r}_{\Gamma ^{D_r}_{s}}> u \big \}, \quad \text { for all } u \in [0,{\mathscr {L}}^{\,r}_\sigma ). \end{aligned}$$

Remark that \(\text {tr}_{D_r} \widehat{( {W}, {\Lambda })}_{\theta ^r_u}=(x,r)\), for every \(u\in [0,{\mathscr {L}}_\sigma ^{\,r})\). An application of the special Markov property to the domain \(D_r\) gives that, conditionally on \({\mathcal {F}}^{D_r}\), the point measure of the excursions from \(D_r\)

$$\begin{aligned} {\mathcal {M}}^{(r)}:= \sum _{i \in {\mathcal {I}}_r} \delta _{({\mathscr {L}}^{\,r}_{a_i^r}, \rho ^{i,r}, {\overline{W}}^{i,r} )} \end{aligned}$$

is a Poisson point measure with intensity \( \mathbb {1}_{[0, {\mathscr {L}}^{\,r}_\sigma ]}(u ) {\textrm{d}}u~ {\mathbb {N}}_{x,r}\left( {\textrm{d}}\rho , {\textrm{d}}{\overline{W}} \right) \). For the sake of clarity, let us outline the structure of this section. The first step for characterizing the support of the measure \({\textrm{d}}A\) consists in establishing Lemma 4.15, where we prove, under \({\mathbb {N}}_{x,r}\) for \(r \geqslant 0\), that the points \(\{ 0,\sigma \}\) belong to \(\text {supp }{\textrm{d}}A\). This result, coupled with the special Markov property applied to the domains \(D_r\) for \(r > 0\) and an approximation argument, will yield the characterization of the support of the measure \({\textrm{d}}A\). This characterization is stated in Theorem 4.20 and is the main result of the section. The approximation argument relies on topological considerations and an explicit description – of independent interest – of the sets \(\text {supp }{\textrm{d}}{\mathscr {L}}^r\) for \(r > 0\), which is given in Lemma 4.18.

Lemma 4.15

\({\mathbb {N}}_{x,0}\)–a.e., we have \(\{0,\sigma \}\subset \textrm{supp } ~ {\textrm{d}}A\).

Proof

We are going to show that for any \(\varepsilon >0\), we have \({\mathbb {N}}_{x,0}(A_{\varepsilon \wedge \sigma } =0 ) =0\) – the lemma will follow since the symmetric statement \({\mathbb {N}}_{x,0}(A_{\sigma } -A_{ (\sigma - \varepsilon )\vee 0} = 0) = 0\) will then hold by the duality identity (2.21). As previously, we write

$$\begin{aligned} G_\lambda := {\mathbb {N}}_{x,0}\big (1-\exp \big (-\lambda A_\infty )\big )={\widetilde{\psi }}^{-1}(\lambda ), \end{aligned}$$

where the second equality holds by Proposition 4.13 taking \((y,r_0)=(x,0)\). For every positive rational numbers r and q, we introduce the stopping time \( T_{q}^r: = \inf \big \{ s \geqslant 0: {\mathscr {L}}^{\,r}_s > q \big \}, \) with the convention \(T_{q}^r = \infty \), if \({\mathscr {L}}^{\,r}_\sigma \leqslant q\). Let us prove that

$$\begin{aligned} {\mathbb {N}}_{x,0}\big (A_{T_q^{r}} = 0, {\mathscr {L}}^{\,r}_\sigma > 0 \big ) = 0. \end{aligned}$$
(4.31)

In this direction, set \({\mathbb {N}}_{x,0}^r:= {\mathbb {N}}_{x,0}( \, \cdot \, | {\mathscr {L}}_\sigma ^{\,r} > 0)\) and using the fact that, conditionally on \({\mathcal {F}}^{D_r}\), the measure \({\mathcal {M}}^{(r)}\) is a Poisson point measure with intensity \(\mathbb {1}_{[0, {\mathscr {L}}^{\,r}_\sigma ]}( u ) {\textrm{d}}u~ {\mathbb {N}}_{x,r}\left( {\textrm{d}}\rho , {\textrm{d}}{\overline{W}} \right) \), remark that

$$\begin{aligned} {\mathbb {N}}^{ r}_{x,0}\Big ( \exp \big (-\lambda A_{T_q^{r}}\big ) \Big )&\leqslant {\mathbb {N}}^{ r}_{x,0}\Big ( \exp \big ( {-\lambda \sum _{i \in {\mathcal {I}}_r} A_\sigma (\rho ^{i,r} , {\overline{W}}^{i,r}) \mathbb {1}_{\{ {\mathscr {L}}^{\,r}_{a_i^r} \leqslant q \}}} \big ) \Big ) \\&= {\mathbb {N}}^{ r}_{x,0} \Big ( \exp \Big ( - (q \wedge {\mathscr {L}}^{\,r}_\sigma ) {\mathbb {N}}_{x,0}\big ( 1-\exp (-\lambda A_\infty ) \big ) \Big ) \Big ) \\&= {\mathbb {N}}^{ r}_{x,0} \Big ( \exp \big (-(q \wedge {\mathscr {L}}^{\,r}_\sigma ) G_\lambda \big ) \Big ) \end{aligned}$$

and hence:

$$\begin{aligned} {\mathbb {N}}_{x,0}^{ r} ( A_{T_q^r} = 0 ) + {\mathbb {N}}^{ r}_{x,0}\big ( \exp \big (-\lambda A_{T_q^{r}}\big ) \mathbb {1}_{\{ A_{T_q^r} >0 \}} \big ) \leqslant {\mathbb {N}}^{ r}_{x,0} \left( \exp \big (-(q \wedge {\mathscr {L}}^{\,r}_\sigma ) \cdot G_\lambda \big ) \right) . \end{aligned}$$

Now (4.31) follows by taking the limit as \(\lambda \uparrow \infty \): indeed, we are working under \(\{ {\mathscr {L}}^{\,r}_\sigma > 0 \}\) and, by Proposition 4.7, the function \({\widetilde{\psi }}\) satisfies (A4), which gives that \(G_\lambda \uparrow \infty \) as \(\lambda \uparrow \infty \). We stress that (4.31) holds for any positive rational numbers r and q. Now fix \(\varepsilon >0\) and notice that, by the monotonicity of A, we have

$$\begin{aligned} \big \{ A_{\varepsilon \wedge \sigma } = 0 ~;~ T_{q}^r< \varepsilon \big \} \subset \big \{ A_{T_q^r} = 0 ~;~ T_{q}^r < \varepsilon ~;~ {\mathscr {L}}^{\,r}_\sigma >0 \big \}, \end{aligned}$$

where the last set has null \({\mathbb {N}}_{x,0}\)-measure by (4.31). The identity \({\mathbb {N}}_{x,0} (A_{\varepsilon \wedge \sigma } = 0) = 0\) will now follow as soon as we show that, \({\mathbb {N}}_{x,0}\)-a.e., there exist two positive rational numbers r and q such that \(T_{q}^r < \varepsilon \). Said otherwise, we need to establish that the origin is an accumulation point of \(\{ T_{q}^r:~ r,q \in {\mathbb {Q}}_+^* \}\). Arguing by contradiction, write

$$\begin{aligned} \Omega _0 = \bigcap _{r,q \in {\mathbb {Q}}_+^* } \big \{ T_{q}^r \geqslant \varepsilon \big \} = \bigcap _{r \in {\mathbb {Q}}_+^*} \bigcap _{q > 0} \big \{ T_q^r \geqslant \varepsilon \big \} = \bigcap _{r\in {\mathbb {Q}}_+^*} \big \{ {\mathscr {L}}^{\,r}_\varepsilon = 0 \big \} \end{aligned}$$

where in the last equality we used (4.31), and suppose that \({\mathbb {N}}_{x,0}( \Omega _0) >0\). To simplify notation, set \(C(r):= \inf \{ s \geqslant 0: {\widehat{\Lambda }}_s > r \}\), and remark that the special Markov property, as stated in Theorem 3.8, applied to the domain \(D_r\) gives \(\{ {\mathscr {L}}^{\,r}_\varepsilon = 0 \} = \{ C(r) \geqslant \varepsilon \}\). We then derive that

$$\begin{aligned} 0 < {\mathbb {N}}_{x,0}\Big ( \bigcap \limits _{r\in {\mathbb {Q}}_+^*} \{ C(r) \geqslant \varepsilon \} \Big ) = {\mathbb {N}}_{x,0}\Big ( {\widehat{\Lambda }}_s = 0, \, \forall s \in [0,\varepsilon \wedge \sigma ] \Big ). \end{aligned}$$

However, recalling the definition (2.20) of the excursion measure \({\mathbb {N}}_{x,0}\), this contradicts the fact that, for every \(s\in (0,\sigma )\), \({\mathbb {N}}_{x,0}\)-a.e., \({\widehat{\Lambda }}_{s} > 0\). Indeed, by definition of the Lévy snake under \({\mathbb {N}}_{x,0}\), for any fixed \(s\geqslant 0\), conditionally on \(\zeta _s\), the process \(((W_s(t), \Lambda _s(t)): t \leqslant \zeta _s)\) has the distribution of a trajectory of the Markov process \(((\xi _t, {\mathcal {L}}_t): t \geqslant 0 )\) under \(\Pi _{x,0}\) killed at \(\zeta _s\). We then have \(\Lambda _s(t)>0\), for every \(t>0\), since \(\zeta _s = H(\rho _s)\) does not vanish on \((0,\sigma )\) and, \(\Pi _{x,0}\)–a.s., 0 is in the support of \({\textrm{d}}{\mathcal {L}}\); for a justification of the latter fact, we refer to our discussion at the beginning of Sect. 4. \(\square \)

Define:

$$\begin{aligned} {\mathcal {C}}^*:= \Big \{ t \in [0,\sigma ]: ~ \sup _{(t-\varepsilon , t+\varepsilon )\cap [0,\sigma ]} {\widehat{\Lambda }} = \inf _{(t-\varepsilon , t+\varepsilon )\cap [0,\sigma ]} {\widehat{\Lambda }}~, \quad \text { for some } \varepsilon > 0 \Big \}, \end{aligned}$$

and remark that the closures of the connected components of \({\mathcal {C}}^*\) are exactly the constancy intervals of \({\widehat{\Lambda }}\). We will show that the support of \({\textrm{d}}A\) is precisely the closure of the complement of \({\mathcal {C}}^*\). In this direction, our goal now is to give an equivalent definition of \({\mathcal {C}}^*\) in terms of H and W, and for this purpose we introduce the notion of exit times.

Definition 4.16

(Exit times from x) A non-negative real number t is said to be an exit time from the point x for the process \((\rho ,W)\) if \({\widehat{W}}_{t}=x\) and there exists \(s>0\) such that

$$\begin{aligned} H_t < H_{t+u}, \quad \text { for all } \, u \in (0,s). \end{aligned}$$

The collection of exit times from x is denoted by \(\textrm{Exit}(x)\).

Remark 4.17

Note that, for every \(t\in \textrm{Exit}(x) \), the point \(p_H(t)\) corresponds by definition to a point of the Lévy tree with multiplicity bigger than 1 and in fact, recalling the result of Proposition 4.4, \(p_H(t)\) is a point of multiplicity 2 in \({\mathcal {T}}_H\). In particular, for every \(t\in \textrm{Exit}(x)\), there exists a unique \(s>t\) such that \(p_{H}(t)=p_{H}(s)\) and satisfying that:

$$\begin{aligned} {\widehat{W}}_s = x\quad \text { and } \quad H_{s-u} > H_t=H_s \quad \text { for all } \, u \in (0,v), \end{aligned}$$

for some \(v>0\) – in this case, we can take \(v:= s-t\). By analogy, we write \(\textrm{Exit}'(x)\) for the collection of times in \([0,\sigma ]\) satisfying the previous display. Remark that the correspondence described above between \(\text {Exit}(x)\) and \(\textrm{Exit}'(x)\) defines a bijection. We also stress that the inclusion \(\text {Exit}(x)\cup \textrm{Exit}'(x)\subset \{t\in [0,\sigma ]:~{\widehat{W}}_t=x\}\) is a priori strict, since we are excluding in our definition potential times that will be mapped by \(p_H\) into leaves with label x.

Let us now prove the following technical lemma:

Lemma 4.18

For every fixed \(r>0\), under \({\mathbb {N}}_{x,0}\), we have:

$$\begin{aligned} \textrm{supp } ~{\textrm{d}}{\mathscr {L}}^{\,r} = \overline{\{ a_i^r, b_i^r: \, i \in {\mathcal {I}}_r \} } = \overline{\textrm{Exit}(x)\cap \big \{s\in [0,\sigma ]:~{\widehat{\Lambda }}_{s}=r\big \}}, \end{aligned}$$
(4.32)

and the same identity holds if we replace \(\textrm{Exit}(x)\) by \(\textrm{Exit}'(x)\). In particular, the measure \({\textrm{d}}A\) gives no mass to the complement of \(\overline{\textrm{Exit}(x)}\) (or of \(\overline{\textrm{Exit}'(x)}\)).

Proof

First remark that if \({\mathscr {L}}^{\,r}_\sigma =0\), by the special Markov property applied to the domain \(D_r\), all the sets appearing in (4.32) are empty. Hence, it suffices to show (4.32) under \({\mathbb {N}}_{x,0}^{r}:= {\mathbb {N}}_{x,0}( \cdot \, | {\mathscr {L}}^{\,r}_\sigma >0)\). Moreover, notice that by definition we have:

$$\begin{aligned} \textrm{Exit}(x)\cap \big \{s\in [0,\sigma ]:~{\widehat{\Lambda }}_{s}=r\big \} = \{ a_i^r: \, i \in {\mathcal {I}}_r \} \end{aligned}$$

and

$$\begin{aligned} \textrm{Exit}'(x)\cap \big \{s\in [0,\sigma ]:~{\widehat{\Lambda }}_{s}=r\big \} = \{ b_i^r: \, i \in {\mathcal {I}}_r \}. \end{aligned}$$

To deduce (4.32), it is then enough to show that:

$$\begin{aligned} \textrm{supp } ~{\textrm{d}}{\mathscr {L}}^{\,r} = \overline{\{ a_i^r: \, i \in {\mathcal {I}}_r \} }, \end{aligned}$$

since the same equality will hold for \(\{ a_i^r: i \in {\mathcal {I}}_r \}\) replaced by \(\{ b_i^r: i \in {\mathcal {I}}_r \}\), using the duality identity (2.21) under \({\mathbb {N}}_{x,0}\).

Let us prove the previous display, starting with the inclusion \( \textrm{supp } ~{\textrm{d}}{\mathscr {L}}^{\,r} \subset \overline{\{ a_i^r: \, i \in {\mathcal {I}}_r \} }\). In this direction, consider \(s\in \textrm{supp } ~{\textrm{d}}{\mathscr {L}}^{\,r}\). By the special Markov property, the set \(\{{\mathscr {L}}_{a_i^r}^{\,r}:~i\in {\mathcal {I}}_r\}\) is dense in \([0,{\mathscr {L}}^{\,r}_\sigma ]\), which gives that for every \(\varepsilon >0\) there exists \(i\in {\mathcal {I}}_r\) such that \({\mathscr {L}}^{\,r}_{(s-\varepsilon )+}<{\mathscr {L}}^{\,r}_{a^r_i}<{\mathscr {L}}^{\,r}_{s+\varepsilon }\). This ensures that \(a^r_i\in (s-\varepsilon , s+\varepsilon )\), due to the monotonicity of \({\mathscr {L}}^{\,r}\). As a result, the set \(\text {supp } ~{\textrm{d}}{\mathscr {L}}^{\,r}\) is contained in the closure of \(\{a^r_i:~i\in {\mathcal {I}}_r\}\). We will now establish the reverse inclusion by proving that for every \(j\in {\mathcal {I}}_r\), we have \(a_j^r\in \textrm{supp } ~{\textrm{d}}{\mathscr {L}}^{\,r}\). To this end, fix \(j\in {\mathcal {I}}_r\) and note that, given that the special Markov property ensures that all the values \(\{\mathscr {L}_{a_i^r}^{\,r}:~i\in \mathcal {I}_r\}\) are distinct, it is enough to show that for every \(\varepsilon > 0\), we can find some \(k \in {\mathcal {I}}_r\) such that \(0< a_j^r - a_k^r < \varepsilon \). To prove this, define \(R_t:= \sum _{{\mathscr {L}}_{a^r_i}^r \leqslant t }\sigma ({\overline{W}}^{i,r})\) for \(t \geqslant 0\) and observe that it is a càdlàg process since \(R_\infty \leqslant \sigma <\infty \). Now, note that by definition, for every \(k\in {\mathcal {I}}_r\) with \(a_k^r<a_j^r\), we have:

$$\begin{aligned} a_j^{r}-a_k^r \leqslant R_{{\mathscr {L}}_{a_j^r}^r -} - R_{{\mathscr {L}}_{a_k^r}^r -} + \theta ^r_{{\mathscr {L}}_{a_j^r}^r} - \theta ^r_{{\mathscr {L}}_{a_k^r}^r - }. \end{aligned}$$

Since \(\theta ^{r}\) is monotone, it has a countable number of discontinuities and it follows by the special Markov property – using that \(\theta ^r\) is \({\mathcal {F}}^{D_r}\)-measurable – that all the points \(\{{\mathscr {L}}_{a^r_i}^{\,r}:~i\in {\mathcal {I}}_r\}\) are continuity points of \(\theta ^{r}\). Using once again the fact that the set \(\{\mathscr {L}_{a_i^r}^{\,r}:~i\in \mathcal {I}_r\}\) is dense in \([0, {\mathscr {L}}_\sigma ^r ]\), it follows from our previous remark – coupled with the fact that R is a càdlàg process – that for any \(\varepsilon > 0\), we can find some \(k \in {\mathcal {I}}_r\) such that the right-hand side in the last display is bounded above by \(\varepsilon \). This implies that for every \(\varepsilon >0\) there exists \(k\in {\mathcal {I}}_r\) such that \(a_j^{r}-\varepsilon<a_k^{r}<a_j^{r}\) and we derive that \(a_j^{r}\in \textrm{supp } ~{\textrm{d}}{\mathscr {L}}^{\,r} \), as wanted. As a consequence of (4.32), it follows that:

$$\begin{aligned} {\mathbb {N}}_{x,0} \left( \int _0^\sigma {\textrm{d}}A_s \mathbb {1}_{s\notin \overline{\textrm{Exit}(x)}} \right) = \int _0^\infty {\textrm{d}}r \, {\mathbb {N}}_{x,0} \left( \int _0^\sigma {\textrm{d}}{\mathscr {L}}^{\,r}_s \mathbb {1}_{s\notin \overline{\textrm{Exit}(x)}} \right) =0, \end{aligned}$$

and we deduce that \({\textrm{d}}A\) gives no mass to the complement of \(\overline{\textrm{Exit}(x)}\) – by duality, the same result holds for \(\overline{\textrm{Exit}'(x)}\). \(\square \)

The next proposition establishes the connection between the constancy intervals of \({\widehat{\Lambda }}\), the exit times from x and the excursion intervals from \(D_r\). This is the last result needed to characterize the support of \({\textrm{d}}A\).

Proposition 4.19

\({\mathbb {N}}_{x,0}\)–a.e., we have:

$$\begin{aligned} \overline{\textrm{Exit}(x)} = \overline{\textrm{Exit}'(x)} = \overline{\{ a_i^r, b_i^r: r \in {\mathbb {Q}}_+^*\text { and }i\in {\mathcal {I}}_r \} } = \overline{[0,\sigma ] {\setminus } {\mathcal {C}}^*}. \end{aligned}$$
(4.33)

Proof

The first step consists in showing

$$\begin{aligned} \overline{\textrm{Exit}(x)} \subset \overline{\{ a_i^r, b_i^r: r \in {\mathbb {Q}}_+^*\text { and }i\in {\mathcal {I}}_r \} }. \end{aligned}$$
(4.34)

Remark that by Lemma 4.18 the other inclusion is satisfied and still holds if we replace \(\overline{\textrm{Exit}(x)}\) by \(\overline{\textrm{Exit}'(x)}\). In this direction, recall that by Lemma 4.1 the process \((\rho ,{\overline{W}})\) takes values in \({\overline{\Theta }}_{x}\). In particular, we have

$$\begin{aligned} \big \{ h \in [0,\zeta _q]: \, W_q(h) = x \big \} \subset \text {supp } \Lambda _q({\textrm{d}}h), \quad \text { for every } q \geqslant 0, \qquad \qquad (*) \end{aligned}$$

where we recall that \(\text {supp } \Lambda _q({\textrm{d}}h)\) is precisely the set

$$\begin{aligned}{} & {} \Big \{ t \in [0,\zeta _q]: \, \Lambda _q(t + h)> \Lambda _q(t) \text { for any } 0< h< (H_q - t) \text { or } \\{} & {} \quad \Lambda _q(t) > \Lambda _q(t - h) \text { for any } 0< h < t \Big \}. \end{aligned}$$

In what follows, we shall use implicitly, at multiple instances, the fact that a time of right-increase (resp. left-increase) for \({\widehat{\Lambda }}\) must be a time of right-increase (resp. left-increase) for H. We let \(\Omega _0 \subset {\mathbb {D}}({\mathbb {R}}_+, {\mathcal {M}}_f({\mathbb {R}}_+) \times {\mathcal {W}}_{{\overline{E}}})\) be a measurable subset with \({\mathbb {N}}_{x,0}(\Omega _0^c) = 0\) such that property \((*)\) holds for every \((\uprho , \omega ) \in \Omega _0\), and we argue for a fixed \((\uprho , \omega ) \in \Omega _0\). Fix \(t\in \text {Exit}(x)\); by definition, for any \(\varepsilon > 0\) we can find \(t<q < t+\varepsilon \) such that \(H_t<H_{u}\) for every \(u\in (t,q]\). Observe in particular that, by the snake property, \(W_q(H_t) = x\) and therefore, by \((*)\), \(H_t\) belongs to \(\text {supp } \Lambda _q({\textrm{d}}h)\). By our choice of \(\Omega _0\) and the identity in the previous display, it must hold either that:

  1. (i)

    \(H_t\) is a time of right-increase for \(\Lambda _q\) (and in particular \({\widehat{\Lambda }}_q > {\Lambda }_q(H_t) = {\widehat{\Lambda }}_t\)), or

  2. (ii)

    \(H_t\) is not a time of right-increase for \(\Lambda _q\), (hence \({\widehat{\Lambda }}_t={\Lambda }_q(H_t) > {\Lambda }_q(H_t-s), \text { for } \, 0< s < H_t\)).

If (i) holds, set \(s_{k}:=\sup \{s\in [t,q]:~{\widehat{\Lambda }}_{s}\leqslant 2^{-k}\lfloor 2^{k}{\widehat{\Lambda }}_{t}\rfloor +2^{-k}\}\) and remark that we have \(s_{k}\in \bigcup _{r\in {\mathbb {Q}}_+^*} \{ a_i^r, b_i^r:i\in {\mathcal {I}}_r \} \), as soon as \({\widehat{\Lambda }}_{s_k}<{\widehat{\Lambda }}_{q}\), and this is satisfied for k large enough. On the other hand, if (ii) holds we must have \(\inf _{[t-\varepsilon , t]}H < H_t\), since t cannot be a local infimum for H (otherwise, \(p_H(t)\) would be a branching point with label \({\widehat{W}}_t = x\), in contradiction with Proposition 4.4). Now, the argument of case (i) applies by working with \(s_k^\prime :=\sup \{s\in [0, t]:~{\widehat{\Lambda }}_{s}\leqslant 2^{-k}\lfloor 2^{k}{\widehat{\Lambda }}_{t}\rfloor \}\). This implies that t belongs to the closure of \(\bigcup _{r\in {\mathbb {Q}}_+^*} \{ a_i^r, b_i^r:i\in {\mathcal {I}}_r \} \), giving (4.34). Moreover, by duality, the inclusion (4.34) holds when replacing \(\text {Exit}(x)\) by \(\textrm{Exit}'(x)\), proving the first two equalities in (4.33). Consequently, to conclude it is enough to show that:

$$\begin{aligned} \overline{\{ a_i^r, b_i^r: r \in {\mathbb {Q}}_+^*\text { and }i\in {\mathcal {I}}_r \} } = \overline{[0,\sigma ] {\setminus } {\mathcal {C}}^*}. \end{aligned}$$
(4.35)

In this direction, notice that for every \(r\in {\mathbb {Q}}_+^*\), under \({\mathbb {N}}_{x,r}\), we have \({\widehat{\Lambda }}_t>r\) for every \(t\in (0,\sigma )\). Now, an application of the special Markov property to the domain \(D_r\) gives that:

$$\begin{aligned} \{ a_i^r, b_i^r: ~i\in {\mathcal {I}}_r \}\subset [0,\sigma ] \setminus {\mathcal {C}}^*,\quad {\mathbb {N}}_{x,0}-\text {a.e.}, \end{aligned}$$

for every \(r\in {\mathbb {Q}}_+^*\), and the first inclusion \(\subset \) in (4.35) follows. In order to obtain the remaining inclusion, let \(t\in [0,\sigma ]{\setminus } {\mathcal {C}}^*\). By definition, for every \(\varepsilon > 0\) there exist \(t-\varepsilon<t_1< t_2 < t + \varepsilon \) such that \({\widehat{\Lambda }}_{t_1} < {\widehat{\Lambda }}_{t_2}\) or \({\widehat{\Lambda }}_{t_1} > {\widehat{\Lambda }}_{t_2}\). If the first holds, then \(\sup \{ s \in [t-\varepsilon , t_2]: \, {\widehat{\Lambda }}_{s} \leqslant {\widehat{\Lambda }}_{t_1} \}\) is an exit time, and in the other case \(\inf \{ s \in [t_1,t_2]: {\widehat{\Lambda }}_s \leqslant {\widehat{\Lambda }}_{t_2} \}\) belongs to \(\textrm{Exit}'(x)\). This ensures that t is in the closure of \(\text {Exit}(x)\cup \textrm{Exit}'(x)\) which, by the first two equalities in (4.33), is contained in the left-hand side of (4.35), concluding our proof. \(\square \)

Now, we are in position to state and prove the main result of the section:

Theorem 4.20

Fix \((y,r_0)\in {\overline{E}}\) and \((\mu ,\overline{\text {w} })\in {\overline{\Theta }}_x\). Under \({\mathbb {P}}_{\mu ,\overline{\text {w} }}\) and \({\mathbb {N}}_{y,r_0}\), we have

$$\begin{aligned} \textrm{supp } ~{\textrm{d}}A = \overline{\textrm{Exit}(x)} = \overline{\textrm{Exit}'(x)} = \overline{[0,\sigma ] {\setminus } {\mathcal {C}}^*}, \end{aligned}$$
(4.36)

where we recall the convention \([0,\infty ]=[0,\infty )\).

Proof

Let us start with some simplifications that will allow us to reduce the proof of the theorem to establishing the result under \({\mathbb {N}}_{x,0}\) and \({\mathbb {P}}_{0,x,0}\). First, notice that for every \(r_0>0\) and \(y\ne x\), an application of the special Markov property to the domain \(D_{r_0}\), paired with the identity (4.29), entails that the desired result under \({\mathbb {P}}_{0,y,r_0}\) or \({\mathbb {N}}_{y,r_0}\) can be deduced directly from the same result under \({\mathbb {N}}_{x,r_0}\) or, equivalently, under \({\mathbb {N}}_{x,0}\). Next, consider an arbitrary \((\mu , {\overline{{\text {w}}}}) \in {\overline{\Theta }}_x\) with \(\mu \ne 0\) and set \(T_0:= \inf \{ t > 0: \rho _t = 0 \}\). By the strong Markov property, the process \(( (\rho _{T_0 + t}, {\overline{W}}_{T_0+t}): t \geqslant 0)\) is distributed according to \({\mathbb {P}}_{0,{\overline{{\text {w}}}}(0)}\). Therefore, the support of \({\textrm{d}}A\) in \([T_0, \infty )\) can be identified using the characterization under \({\mathbb {P}}_{0,y,r_0}\) for \((y,r_0) \in {\overline{E}}\). To study the support of \({\textrm{d}}A\) on \([0,T_0]\), recall that under \({\mathbb {P}}_{\mu , {\overline{{\text {w}}}}}^\dag \) the measure (2.23) is a Poisson measure with intensity \(\mu ({\textrm{d}}h){\mathbb {N}}_{\text {w}(h)}({\textrm{d}}\rho , {\textrm{d}}W)\). Now, the support of \({\textrm{d}}A\) in \([0,T_0]\) can be identified from our result under \({\mathbb {N}}_{y,r_0}\) for \(y \in E\), by making use of (4.30). Therefore, it suffices to prove (4.36) under \({\mathbb {N}}_{x,0}\) and \({\mathbb {P}}_{0,x,0}\).

To this end, we start by proving the theorem under \({\mathbb {N}}_{x,0}\) and remark that by Proposition 4.19 we only have to establish the first equality in (4.36). Moreover, by Lemma 4.18 it only remains to show that under \( {\mathbb {N}}_{x,0}\):

$$\begin{aligned} \text {supp } {\textrm{d}}A \supset \overline{\text {Exit}(x)}. \end{aligned}$$
(4.37)

However, by Lemma  4.15 we know that \({\mathbb {N}}_{x,0}( \{0,\sigma \}\cap \text {supp } {\textrm{d}}A=\varnothing ) = 0\), and then using that conditionally on \({\mathcal {F}}^{D_r}\) the measure \({\mathcal {M}}^{(r)}\) is a Poisson point measure with intensity \( \mathbb {1}_{[0, {\mathscr {L}}^{\,r}_\sigma ]}(\ell ) {\textrm{d}}\ell ~ {\mathbb {N}}_{x,r}\left( {\textrm{d}}\rho , {\textrm{d}}{\overline{W}} \right) \), we derive that:

$$\begin{aligned} {\mathbb {N}}_{x,0}-\text {a.e., } \text { for all } r \in {\mathbb {Q}}_+^*, \, \, \{a_i^r,b_i^r: i \in {\mathcal {I}}_r \} \subset \text {supp } {\textrm{d}}A. \end{aligned}$$

Consequently, Proposition 4.19 implies (4.37). Finally, let us briefly explain how to obtain the result under \({\mathbb {P}}_{0,x,0}\). In this direction, under \({\mathbb {P}}_{0,x,0}\), denote the connected components of \(\{ s \in {\mathbb {R}}_+: X_s - I_s \ne 0 \}\) by \(\big ((\alpha _i, \beta _i): i \in {\mathcal {I}} \big )\) and recall that the point measure (2.22) is a Poisson point measure with intensity \(\mathbb {1}_{[0, \langle \mu ,1 \rangle ]} (u) \, {\textrm{d}}u \, {\mathbb {N}}_{\text {w}( H( \kappa _{u} \mu ) )}({\textrm{d}}\rho , {\textrm{d}}W)\). Excursion theory and our results under \({\mathbb {N}}_{x,0}\) give that, under \({\mathbb {P}}_{0,x,0}\), we have:

\(\alpha _i\in \text {supp } {\textrm{d}}A\cap \overline{\text {Exit}(x)}\cap \big ([0,\infty ){\setminus } {\mathcal {C}}^*\big )\) and \(\beta _i\in \text {supp } {\textrm{d}}A\cap \overline{\textrm{Exit}'(x)}\cap \big ([0,\infty ){\setminus } {\mathcal {C}}^*\big )\), for every \(i\in {\mathcal {I}}\). The desired result now follows since the sets \(\{\alpha _i:~i\in {\mathcal {I}}\}\) and \(\{\beta _i:~i\in {\mathcal {I}}\}\) are dense in \(\{ s \in {\mathbb {R}}_+: X_s - I_s = 0 \}\). \(\square \)

5 The tree structure of \(\{\upsilon \in {\mathcal {T}}_H:~\xi _\upsilon =x\}\)

In this section, we work under the framework introduced at the beginning of Sect. 4. Our goal now is to study the structure of the set \(\{ \upsilon \in {\mathcal {T}}_H: \xi _\upsilon = x \}\) and to do so, we encode it by the subordinate tree of \({\mathcal {T}}_H\) with respect to the local time \(({\mathcal {L}}_\upsilon : \upsilon \in {\mathcal {T}}_H)\). In this direction, we need to briefly recall the notion of subordination of trees defined in [24].

Subordination of trees by increasing functions. Let \(({\mathcal {T}},d_{{\mathcal {T}}}, \upsilon _0)\) be an \({\mathbb {R}}\)-tree and recall the standard notation \(\preceq _{\mathcal {T}}\) and \(\curlywedge _{\mathcal {T}}\) for the ancestor order and the first common ancestor. Next, consider a non-negative continuous function \(g:{\mathcal {T}}\rightarrow {\mathbb {R}}_{+}\). We say that g is non-decreasing if, for every \(u,v\in {\mathcal {T}}\):

$$\begin{aligned} u\preceq _{{\mathcal {T}}} v \text { implies that } g(u)\leqslant g(v). \end{aligned}$$

When the latter holds, we can define a pseudo-distance on \({\mathcal {T}}\) by setting

$$\begin{aligned} d_{{\mathcal {T}}}^{g}(u,v):=g(u)+g(v)-2 \cdot g(u\curlywedge _{{\mathcal {T}}} v), \quad \quad (u,v)\in {\mathcal {T}}\times {\mathcal {T}}. \end{aligned}$$
(5.1)

The pseudo-distance \(d_{\mathcal {T}}^g\) induces the following equivalence relation on \({\mathcal {T}}\): for \(u, v \in {\mathcal {T}}\) we write

$$\begin{aligned} u\sim _{{\mathcal {T}}}^{g} v \iff d_{{\mathcal {T}}}^{g}(u,v)=0, \end{aligned}$$

and it was shown in [24] that \({\mathcal {T}}^{g}:=({\mathcal {T}}/\sim _{{\mathcal {T}}}^{g},d_{{\mathcal {T}}}^{g},\upsilon _0)\) is a compact pointed \({\mathbb {R}}\)-tree, where we still denote by \(\upsilon _0\) the equivalence class of the root. The tree \({\mathcal {T}}^{g}\) is called the subordinate tree of \({\mathcal {T}}\) with respect to g and we write \(p_{g}^{{\mathcal {T}}}:{\mathcal {T}}\rightarrow {\mathcal {T}}^{g}\) for the canonical projection which associates every \(u\in {\mathcal {T}}\) with its \(\sim _{{\mathcal {T}}}^{g}\)–equivalence class. Observe that any two points \(u,v \in {\mathcal {T}}\) are identified if and only if g stays constant on \([u,v]_{\mathcal {T}}\), and consequently the subordinate tree is obtained from \({\mathcal {T}}\) by identifying into a single point each component of \({\mathcal {T}}\) on which g is constant.
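For instance – this elementary sanity check is ours – if we take \(g(u):= d_{{\mathcal {T}}}(\upsilon _0,u)\), which is non-decreasing, then for every \(u,v\in {\mathcal {T}}\):

$$\begin{aligned} d_{{\mathcal {T}}}^{g}(u,v)= d_{{\mathcal {T}}}(\upsilon _0,u)+d_{{\mathcal {T}}}(\upsilon _0,v)-2 \cdot d_{{\mathcal {T}}}(\upsilon _0,u\curlywedge _{{\mathcal {T}}} v) = d_{{\mathcal {T}}}(u,v), \end{aligned}$$

so that \({\mathcal {T}}^{g}\) is isometric to \({\mathcal {T}}\); at the other extreme, a constant function g collapses the whole tree into the single point \(\upsilon _0\).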

Getting back to our setting, recall that under \({\mathbb {N}}_{x,0}\), \(({\mathcal {L}}_\upsilon : \upsilon \in {\mathcal {T}}_H)\) corresponds to \(({\widehat{\Lambda }}_t: t \geqslant 0)\) in the quotient space \({\mathcal {T}}_H = [0,\sigma ]/ \sim _H\). This entails that the local time \(({\mathcal {L}}_\upsilon : \upsilon \in {\mathcal {T}}_H)\) is a non-decreasing function on \({\mathcal {T}}_H\) and we denote the induced subordinate tree by \({\mathcal {T}}_H^{ {\mathcal {L}} }\). Recall that the exponent

$$\begin{aligned} {\widetilde{\psi }}(\lambda ) = {\mathcal {N}}\left( \int _0^\sigma {\textrm{d}}h \, \psi (u_\lambda (\xi _h)) \right) , \quad \text { for } \, \lambda \geqslant 0, \end{aligned}$$

is the exponent of a Lévy tree by Proposition 4.7. Hence, a \({\widetilde{\psi }}\)-Lévy process satisfies (A1)–(A4) and by Corollary 4.9, the exponent \({\widetilde{\psi }}\) can be written in the following form:

$$\begin{aligned} {\widetilde{\psi }}(\lambda ):={\widetilde{\alpha }}\lambda +\int _{(0,\infty )}{\widetilde{\pi }}({\textrm{d}}x)(\exp (-\lambda x)-1+\lambda x), \end{aligned}$$

where \({\widetilde{\alpha }}={\mathcal {N}}\big (1-\exp (-\alpha \sigma )\big )\) and \({\widetilde{\pi }}\) is a sigma-finite measure on \({\mathbb {R}}_+\setminus \{0\}\) satisfying \(\int _{(0,\infty )}{\widetilde{\pi }}({\textrm{d}}x)(x\wedge x^{2})<\infty \). We will also use the notation \({\widetilde{H}}\) and \({\widetilde{N}}\) introduced prior to (4.21) for the height process and the excursion measure of a \({\widetilde{\psi }}\)–Lévy tree. Finally, we recall that A stands for the additive functional introduced in Proposition 4.10 and we denote its right inverse by \(A_t^{-1}:=\inf \{s\geqslant 0:A_{s}>t\}\), with the convention \(A^{-1}_{t}=\sigma \) for every \(t\geqslant A_\infty = A_\sigma \). Remark that the constancy intervals of A in \([0,\sigma ]\) are the connected components of \([0,\sigma ] {\setminus } \text {supp } {\textrm{d}}A\), which by Theorem 4.20 are precisely the connected components of \({\mathcal {C}}^*\) – the constancy intervals of the process \(({\widehat{\Lambda }}_{t}:~t\in [0,\sigma ])\). In particular, \(({\widehat{\Lambda }}_{A^{-1}_t}: t \geqslant 0)\) is a continuous non-negative process, with lifetime \(A_\infty \). We can now state the main result of this section:

Theorem 5.1

The following properties hold:

  1. (i)

    Under \({\mathbb {N}}_{x,0}\), the subordinate tree of \({\mathcal {T}}_H\) with respect to the local time \({\mathcal {L}}\), that we denote by \({\mathcal {T}}_H^{ {\mathcal {L}}}\), is isometric to the tree coded by the continuous function \(({\widehat{\Lambda }}_{A^{-1}_t}\!: t \!\geqslant \! 0).\)

  2. (ii)

    Moreover, we have the equality in distribution

    $$\begin{aligned} \Big ( ({\widetilde{H}}_t: \, t \geqslant 0), \text { under }{\widetilde{N}} \Big )\overset{(d)}{=}\Big ( \big ( {\widehat{\Lambda }}_{A^{-1}_t}:\, t \geqslant 0\big ), \text { under } {\mathbb {N}}_{x,0}\Big ). \end{aligned}$$
    (5.2)

    In particular, \({\mathcal {T}}_H^{ {\mathcal {L}}}\) is a Lévy tree with exponent \({\widetilde{\psi }}\).

Remark 5.2

Let us mention that when \(\psi (\lambda )=\lambda ^{2}/2\) and the underlying spatial motion \(\xi \) is a Brownian motion in \({\mathbb {R}}\), the previous theorem implies that under \({\mathbb {N}}_{0,0}\) the subordinate tree of \({\mathcal {T}}_H\) with respect to the local time \({\mathcal {L}}\) at 0 is a Lévy tree and – as a direct consequence of the scaling invariance of the Brownian motion – its exponent is of the form \({\widetilde{\psi }}(\lambda )=c\lambda ^{3/2}\), for some constant \(c>0\). This result was already obtained by other methods in [24, Theorem 2].

We stress that the key result in (ii) is the identity in distribution (5.2): it entails that the function \(({\widehat{\Lambda }}_{A^{-1}_t}: t \geqslant 0)\) not only encodes the subordinate tree, but is also the height process of a Lévy tree. The fact that \({\mathcal {T}}_{H}^{{\mathcal {L}}}\) is a \({\widetilde{\psi }}\)-Lévy tree is then a direct consequence of (i) and (5.2). By a straightforward application of excursion theory, one can deduce a version of Theorem 5.1 under \({\mathbb {P}}_{0,x,0}\), where now \({\mathcal {T}}_H^{ {\mathcal {L}}}\) is a Lévy forest with exponent \({\widetilde{\psi }}\). The details are left to the reader.

The rest of the section is devoted to the proof of Theorem 5.1 and is organised as follows. In Sect. 5.1, we start by showing (i) and we present the strategy that we follow to prove (ii). The proof of (ii) relies on all the machinery developed in the previous sections combined with standard properties of Poisson point measures, and is the content of Sect. 5.2.

5.1 The height process of the subordinate tree

In this brief section, we establish the first claim of Theorem 5.1 and address some essential aspects needed for the proof of its second part. For every \(u\in {\mathcal {T}}_{H}\), recall that \({\mathcal {L}}_{u}:={\widehat{\Lambda }}_{s}\), where s is any element of \(p_{H}^{-1}(\{u\})\) (note that the definition is non-ambiguous by the snake property), and that \({\mathcal {L}}\) is non-decreasing on \({\mathcal {T}}_H\). To simplify notation, we set:

$$\begin{aligned} {H}^A_t:={\widehat{\Lambda }}_{A^{-1}_t},\quad t\geqslant 0, \end{aligned}$$

which is a continuous process – as was already mentioned in the discussion before Theorem 5.1.
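For intuition, the time change is easy to carry out on discretized paths. The following minimal numerical sketch – our illustration, with hypothetical names, and not part of the formal development – evaluates \(H^A\) from samples of A and \({\widehat{\Lambda }}\) on a common time grid:

```python
import numpy as np

def time_changed_height(A: np.ndarray, lam_hat: np.ndarray, t_grid: np.ndarray) -> np.ndarray:
    """Evaluate H^A_t = Lambda_hat(A^{-1}_t) on t_grid, where A and lam_hat hold
    the sampled values of the (non-decreasing) additive functional A and of
    Lambda_hat on a common time grid for s in [0, sigma]."""
    # A^{-1}_t = inf{s >= 0 : A_s > t}: first grid index at which A exceeds t
    idx = np.searchsorted(A, t_grid, side="right")
    # convention A^{-1}_t = sigma for t >= A_sigma: clamp to the last grid index
    idx = np.minimum(idx, len(A) - 1)
    return lam_hat[idx]
```

Since \({\widehat{\Lambda }}\) stays constant on every interval of the form \([A_{t-}^{-1},A_{t}^{-1}]\), the output is insensitive to tie-breaking at the jumps of \(A^{-1}\). Let us start with the proof of Theorem 5.1-(i).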

Proof of Theorem 5.1-(i)

Our goal is to show that, under \({\mathbb {N}}_{x,0}\), the trees \({\mathcal {T}}_{H^A}\) and \({\mathcal {T}}_H^{ {\mathcal {L}}}\) are isometric. In this direction, we start by introducing the pseudo-distance:

$$\begin{aligned} {\widetilde{d}}(s,t):={{\widehat{\Lambda }}}_{t}+{{\widehat{\Lambda }}}_s-2\cdot \min \limits _{[s\wedge t,s\vee t]} {{\widehat{\Lambda }}} ~,\quad s,t\in [0,\sigma ], \end{aligned}$$

and we write \(s\approx t\) if and only if \({\widetilde{d}}(s,t)=0\). By the snake property, we have \(s\approx t\) for every \(s\sim _H t\). Moreover, since \({\mathcal {L}}\) is increasing on \({\mathcal {T}}_H\), we get

$$\begin{aligned} {\widetilde{d}}(s,t)&= {\mathcal {L}}_{p_H(t)}+{\mathcal {L}}_{p_H(s)}-2\cdot {\mathcal {L}}_{p_H(s)\curlywedge _{{\mathcal {T}}_H} p_H(t)}, \end{aligned}$$

for every \(s,t\in [0,\sigma ]\). The right-hand side of the previous display is exactly the definition of the pseudo-distance associated with the subordinate tree \({\mathcal {T}}_H^{ {\mathcal {L}}}\) between \(p_{H}(s)\) and \(p_{H}(t)\) given in (5.1). We deduce that \(([0,\sigma ]/\approx , {\widetilde{d}},0)\) is isometric to \({\mathcal {T}}^{{\mathcal {L}}}_H\). It remains to show that \(([0,\sigma ]/\approx , {\widetilde{d}},0)\) is also isometric to \(({\mathcal {T}}_{H^A},d_{H^A},0)\). In order to prove it, we notice that:

$$\begin{aligned} {\widetilde{d}}(A_{r_1}^{-1}, A_{r_2}^{-1})=d_{H^A}(r_1,r_2), \end{aligned}$$

for every \(r_1,r_2\in [0,A_\sigma ]\). Furthermore, for every \(t\in [0,\sigma ]\) there exists \(r\in [0,A_\sigma ]\) such that \(A_{r-}^{-1}\leqslant t \leqslant A_{r}^{-1}\) since by Lemma 4.15 the points 0 and \(\sigma \) are in the support of \({\textrm{d}}A\). Moreover we have \({\widetilde{d}}(A_r^{-1},t)=0\), since by Theorem 4.20 the process \({\widehat{\Lambda }}\) stays constant on every interval of the form \([A_{r-}^{-1},A_{r}^{-1}]\). This implies that \([0,\sigma ]/\approx ~=\{A_{r}^{-1}:~r\in [0,A_\infty ]\}/\approx \) and we deduce by the previous display that \(([0,\sigma ]/\approx , {\widetilde{d}},0)\) and \(({\mathcal {T}}_{H^A},d_{H^A},0)\) are isometric giving the desired result. \(\square \)

The main difficulty in establishing Theorem 5.1 (ii) comes from the fact that \({\widetilde{H}}\) is not a Markov process. To circumvent this, we are going to use the notion of marked trees embedded in a function.

Marked trees embedded in a function. A marked tree is a pair \({{\textbf {T}}}:=(\text {T}, \{ h_v:~ v \in \text {T}\})\), where \(\text {T}\) is a finite rooted ordered tree and \(h_v \geqslant 0\) for every \(v \in \text {T}\) – the number \(h_v\) is called the label of the individual v. For completeness let us give the formal definition of a rooted ordered tree. First, introduce Ulam’s tree:

$$\begin{aligned} {\mathcal {U}}:= \bigcup _{n=0}^{\infty } \{1,2,...\}^n \end{aligned}$$

where by convention \(\{1,2,...\}^0 = {\varnothing }\). If \(u = (u_1,...,u_m)\) and \(v = (v_1,...,v_n)\) belong to \({\mathcal {U}}\), we write uv for the concatenation of u and v, viz. \((u_1,...,u_m,v_1,...,v_n)\). In particular, we have \(u\varnothing =\varnothing u=u\). A (finite) rooted ordered tree \(\text {T}\) is a finite subset of \({\mathcal {U}}\) such that:

  1. (i)

    \(\varnothing \in \text {T}\);

  2. (ii)

    If \(v\in \text {T}\) and \(v=uj\) for some \(u\in {\mathcal {U}}\) and \(j\in \{1,2,...\}\), then \(u\in \text {T}\);

  3. (iii)

    For every \(u \in \text {T}\), there exists a number \(k_u(\text {T})\geqslant 0\) such that \(uj\in \text {T}\) if and only if \(1 \leqslant j \leqslant k_u(\text {T})\).

If \(u \in \text {T}\) can be written as \(u=v j\) for some \(v \in \text {T}\), \(1 \leqslant j \leqslant k_v(\text {T})\), we say that v is the parent of u. More generally, if \(u=v y\) for some \(v \in \text {T}\) and \(y \in {\mathcal {U}}\) with \(y \ne \varnothing \), we say that v is an ancestor of u or equivalently that u is a descendant of v. On the other hand, if \(u \in \text {T}\) satisfies that \(k_u(\text {T}) = 0\), u is called a leaf. The element \(\varnothing \) is interpreted as the root of the tree and if v is a vertex of \(\text {T}\), the branch connecting the root and v is the set of prefixes of v – considered with its corresponding family of labels.
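For instance, the subset \(\text {T}=\{\varnothing , 1, 2, 11, 12\}\) of \({\mathcal {U}}\) is a rooted ordered tree with \(k_{\varnothing }(\text {T})=2\), \(k_{1}(\text {T})=2\) and \(k_{2}(\text {T})=k_{11}(\text {T})=k_{12}(\text {T})=0\): the root \(\varnothing \) is the parent of the vertices 1 and 2, the vertex 1 is the parent of 11 and 12, and the leaves are 2, 11 and 12.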

Let us also introduce the concatenation of marked trees. If \({{\textbf {T}}}_1,...,{{\textbf {T}}}_k\) are k marked trees and h is a non-negative real number, we write \([{{\textbf {T}}}_1,...,{{\textbf {T}}}_k]_h \) for the marked tree defined as follows. The label of \(\varnothing \) is h, \(k_{\varnothing }=k\), and for \(1\leqslant j\leqslant k\) the point ju belongs to the tree structure of \([{{\textbf {T}}}_1,...,{{\textbf {T}}}_k]_h\) if and only if \(u\in \text {T}_j\), and its label is the label of u in \({{\textbf {T}}}_j\). For convenience, we will identify a marked tree \({{\textbf {T}}}:=(\text {T}, \{ h_v:~ v \in \text {T}\})\) with the set \(\{(v,h_v):~v\in \text {T}\}\).

We are now in position to define the embedded marked tree associated with a continuous function \((e(t))_{t \in [a, b]}\) and a given finite collection of times. We fix a finite sequence of times \(a \leqslant t_1 \leqslant \dots \leqslant t_n \leqslant b\) and we recall the notation \(m_e(s,t)=\inf _{[s\wedge t,s\vee t]}e\). The embedded tree associated with the marks \(t_1, \dots , t_n\) and the function e, \(\theta (e,t_1, \dots , t_n)\), is defined inductively, according to the following steps:

  • If \(n = 1\), set \(\theta (e,t_1) = ( \varnothing ,\{e(t_1)\} )\).

  • If \(n \geqslant 2\), suppose that we know how to construct marked trees with less than n marks. Let \(i_1, \dots , i_k\) be the distinct indices satisfying that \(m_e(t_{i_q},t_{i_q +1}) = m_e(t_{1},t_{n})\), and define the following restrictions for \(1 \leqslant q \leqslant k-1\)

    $$\begin{aligned} e^{(0)}(t)&:= (e(t) : \, t \in [t_1,t_{i_1}]), \, \, e^{(q)}(t) := (e(t) : \, t \in [t_{i_q+1},t_{i_{q+1}}]), \, \,\\ e^{(k)}(t)&:= (e(t) : \, t \in [t_{i_{k}+1},t_{n}]). \end{aligned}$$

    Next, consider the associated finite labelled trees,

    $$\begin{aligned} \theta (e^{(0)},t_1, \dots , t_{i_1}), \, \theta (e^{(q)},t_{i_q+1}, \dots , t_{i_{q+1}}), \, \theta (e^{(k)},t_{i_k+1}, \dots , t_{n}), \quad \\ \text { for } 1 \leqslant q \leqslant k-1, \end{aligned}$$

    and finally, concatenate them with a common ancestor with label \(m_e(t_1, t_n)\), by setting

    $$\begin{aligned} \theta (e,t_1, \dots , t_n):= [ \theta (e^{(0)},t_1, \dots , t_{i_1}), \dots , \theta (e^{(k)},t_{i_k+1}, \dots , t_{n})]_{m_e(t_1, t_n)}, \end{aligned}$$

    which completes the recursion.

We say that the label \(h_v\) is the height of v in \( \theta (e,t_1, \dots , t_n)=(\text {T}, \{h_v:~v\in \text {T}\})\). Let us justify this terminology. First assume that \(e(0)=0\) and consider the compact \({\mathbb {R}}\)–tree \({\mathcal {T}}_e\) induced by e. Then if \(v_{1}, \dots , v_{n}\) are the leaves of \({{\textbf {T}}}\) in lexicographic order, we have \((h_{v_{1}}, \dots , h_{v_{n}}) = (e(t_1), \dots , e(t_n))\). Moreover, if we write \(v_i\curlywedge _{\text {T}} v_j\) for the common ancestor of \(v_i\) and \(v_j\) in \({{\textbf {T}}}\), it holds that \(h_{v_j \curlywedge _{\text {T}} v_i} = \inf _{[t_i \wedge t_j, t_i \vee t_j]} e\).
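Since the construction of \(\theta (e,t_1, \dots , t_n)\) is entirely algorithmic, it may help to see it implemented. The following short sketch – our illustration, with hypothetical names, working with a sampled path rather than a genuine continuous function – follows the recursion above step by step:

```python
from typing import List, Tuple

# A marked tree is encoded as a nested pair (h_root, [subtrees]); the
# concatenation [T_1, ..., T_k]_h of the text then reads (h, [T_1, ..., T_k]).
Tree = Tuple[float, list]

def theta(path: List[float], marks: List[int]) -> Tree:
    """Embedded marked tree theta(e, t_1, ..., t_n) for a sampled function e.

    `path` holds the values of e on a fine time grid and `marks` the grid
    indices of the marks t_1 <= ... <= t_n, so that m_e(t_i, t_j) is exactly
    min(path[marks[i] : marks[j] + 1]).
    """
    if len(marks) == 1:
        return (path[marks[0]], [])               # a single mark: one leaf labelled e(t_1)
    overall = min(path[marks[0]: marks[-1] + 1])  # m_e(t_1, t_n)
    # the distinct indices i_q with m_e(t_{i_q}, t_{i_q + 1}) = m_e(t_1, t_n)
    cuts = [i for i in range(len(marks) - 1)
            if min(path[marks[i]: marks[i + 1] + 1]) == overall]
    pieces, start = [], 0
    for i in cuts:                                # split the marks at the cut indices
        pieces.append(marks[start: i + 1])
        start = i + 1
    pieces.append(marks[start:])
    # graft the recursively built subtrees on a common ancestor labelled m_e(t_1, t_n)
    return (overall, [theta(path, p) for p in pieces])
```

For example, theta([0.0, 1.0, 0.5, 2.0, 0.0], [1, 2, 3]) returns (0.5, [(1.0, []), (0.5, []), (2.0, [])]): the three marks become leaves carrying the values of e at the marked times, grafted on a root labelled \(m_e(t_1,t_3)=0.5\).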

Statements and main steps for the proof of Theorem 5.1(ii). Our argument relies on identifying the distribution of the discrete embedded tree associated with \(({\widehat{\Lambda }}_{{A}^{-1}_t}:{0 \leqslant t \leqslant A_\infty })\) when the collection of marks is Poissonian. In this direction, we denote the law of a Poisson process \(({\mathcal {P}}_t:~t\geqslant 0)\) with intensity \(\lambda \) by \(Q^\lambda \) and we work with the pair \((H^A_t, {\mathcal {P}}_t )_{t \leqslant A_\infty }\), under the product measure \( {\mathbb {N}}_{x,0} \otimes \, Q^\lambda \). For convenience, we denote the law of \((\rho , {\overline{W}}, {\mathcal {P}}_{\cdot \wedge A_\infty })\) under \({\mathbb {N}}_{x,0} \otimes \, Q^\lambda \) by \({\mathbb {N}}_{x,0}^\lambda \) and we let \(0 \leqslant {\mathfrak {t}}_1< \dots < {\mathfrak {t}}_M \leqslant A_\infty \) be the jumping times of \(({\mathcal {P}}_t)\) falling in the excursion interval \([0,A_\infty ]\), where \(M:= {\mathcal {P}}_{A_\infty }\). Finally, consider the associated embedded tree

$$\begin{aligned} {{\textbf {T}}}^A:= \theta \big ( H^A, {\mathfrak {t}}_1, \dots , {\mathfrak {t}}_M \big ), \quad \text { under } {\mathbb {N}}^\lambda _{x,0}( \, \cdot \, | M \geqslant 1). \end{aligned}$$

Remark that the probability measure \({\mathbb {N}}^\lambda _{x,0}( \, \cdot \, | M \geqslant 1)\) is well defined since, conditionally on \(A_\infty \), M is a Poisson random variable with mean \(\lambda A_\infty \) and, by Proposition 4.13, we have

$$\begin{aligned} {\mathbb {N}}^\lambda _{x,0}\left( M \geqslant 1 \right) = {\mathbb {N}}_{x,0}\left( 1- \exp (-\lambda A_\infty ) \right) = {\widetilde{\psi }}^{-1}(\lambda ). \end{aligned}$$
(5.3)

Our goal is to show that \({{\textbf {T}}}^A\) is distributed as the discrete embedded tree of a \({\widetilde{\psi }}\)-Lévy tree associated with Poissonian marks with intensity \(\lambda \). To state this formally, recall the notation \({\widetilde{N}}\) for the excursion measure of a \({\widetilde{\psi }}\)-Lévy process, and that \({\widetilde{H}}\) stands for the associated height process. We write \({\widetilde{N}}^\lambda \) for the law of \(({\widetilde{\rho }}, {\mathcal {P}}_{\cdot \wedge \sigma _{{\widetilde{H}}}})\) under \({\widetilde{N}} \otimes Q^\lambda \) and remark that \({\widetilde{M}}:= {\mathcal {P}}_{\sigma _{{\widetilde{H}}}}\) is the number of Poissonian marks in \([0,\sigma _{{\widetilde{H}}}]\). For simplicity, we denote the jumping times of \({\mathcal {P}}\) under \({\widetilde{N}}^\lambda \) by \({\mathfrak {t}}_1,\dots , {\mathfrak {t}}_{{\widetilde{M}}}\).

Proposition 5.3

The discrete tree \({{\textbf {T}}} ^A\) under \({\mathbb {N}}^{\lambda }_{x,0}( \, \cdot \, | M \geqslant 1)\) has the same distribution as

$$\begin{aligned} \widetilde{{{\textbf {T}}} }:= \theta \big ( {\widetilde{H}}, {\mathfrak {t}}_1, \dots , {\mathfrak {t}}_{{\widetilde{M}}}\big ) \quad \quad \text {under } {\widetilde{N}}^\lambda ( \, \cdot \, | {\widetilde{M}} \geqslant 1). \end{aligned}$$

The proof of Proposition 5.3 is rather technical and will be postponed to Sect. 5.2. The reason behind considering Poissonian marks to identify the distribution of \(H^A\) is to take advantage of their memorylessness; this flexibility will allow us to make extensive use of the Markov property and excursion theory. Let us now explain how to deduce Theorem 5.1 (ii) from Proposition 5.3.

Proof of Theorem 5.1 (ii)

First remark that the fact that \({\mathcal {T}}_{H}^{{\mathcal {L}}}\) is a \({\widetilde{\psi }}\)-Lévy tree is a direct consequence of Theorem 5.1 (i) and (5.2). To conclude, it remains to prove (5.2). In this direction, recall from Proposition 4.13 and the discussion after it, that \(A_\infty \) under \({\mathbb {N}}_{x,0}\) and \(\sigma _{{\widetilde{H}}}\) under \({\widetilde{N}}\) have the same distribution. This ensures that, up to enlarging the measure space, we can define the height process \({\widetilde{H}}\) under the measure \({\mathbb {N}}_{x,0}\) in such a way that its lifetime is precisely \(A_\infty \), viz. \(\sigma _{{\widetilde{H}}} = A_\infty \). Then, for every \(\lambda > 0\) and under \({\mathbb {N}}_{x,0}^\lambda \), we may and will consider the same Poisson point process \(({\mathcal {P}}_{t}:~t\geqslant 0)\) to mark \({\widetilde{H}}\) and \(H^A\). Since M coincides with \({\widetilde{M}}\), we will no longer make use of the latter notation. We stress that the marks \({\mathfrak {t}}_1, \dots , {\mathfrak {t}}_M\) are now being used to mark both processes \(H^A\) and \({\widetilde{H}}\). In the rest of the proof, we work with this coupling. Our goal now is to establish that for every measurable bounded function \(F: {\mathbb {C}}({\mathbb {R}}_+, {\mathbb {R}}) \mapsto {\mathbb {R}}_+\) we have

$$\begin{aligned} {\mathbb {N}}_{x,0}\big (F({\widetilde{H}}) \big ) = {\mathbb {N}}_{x,0} \big (F({H}^A) \big ) \end{aligned}$$
(5.4)

where \({\mathbb {C}}({\mathbb {R}}_+, {\mathbb {R}})\) stands for the space of continuous functions from \({\mathbb {R}}_+\) into \({\mathbb {R}}\) endowed with the topology of uniform convergence on compact sets. For every \(h\in {\mathbb {C}}({\mathbb {R}}_+, {\mathbb {R}})\), we use the standard notation \(\sigma _h:=\sup \{t\geqslant 0:~h(t)\ne 0\}\). Since Lemma 4.15 entails that \({\mathbb {N}}_{x,0}(A_\infty = 0) = 0\), the usual approximation arguments and an application of the monotone convergence theorem yield that it suffices to prove (5.4) for an arbitrary continuous function F vanishing in the complement of \(\{ h \in {\mathbb {C}}({\mathbb {R}}_+, {\mathbb {R}}): \sigma _h > \varepsilon \}\), for some arbitrary \(\varepsilon >0\). Let us now proceed with the proof of (5.4) under our standing assumptions on F. To this end, fix an arbitrary \(\lambda >0\) and notice that, under \({\mathbb {N}}^{\lambda }_{x,0}( \, \cdot \, | M \geqslant 1)\), the marked trees \({{{\textbf {T}}}}^A\), \(\widetilde{{{{\textbf {T}}}}}\) are ordered trees – the order of the vertices being the one induced by the marks. Recall that for every \(1\leqslant i\leqslant M\), the variables \(H^A_{{\mathfrak {t}}_i}\), \({\widetilde{H}}_{{\mathfrak {t}}_i}\) are the respective labels of the i-th leaf, with respect to the lexicographical order, in \( {{\textbf {T}}}^A\) and \(\widetilde{{{\textbf {T}}}}\). Consequently, the identity \({{\textbf {T}}}^A \overset{(d)}{=}\widetilde{{{\textbf {T}}}}\) under \({\mathbb {N}}^{\lambda }_{x,0}( \, \cdot \, | M \geqslant 1)\) of Proposition 5.3 yields the following equality in distribution under \({\mathbb {N}}^{\lambda }_{x,0}( \, \cdot \, | M \geqslant 1)\),

$$\begin{aligned} \left( M, {\widetilde{H}}_{{\mathfrak {t}}_1}, \dots , {\widetilde{H}}_{{\mathfrak {t}}_M} \right) \overset{(d)}{=}\ \left( M, H^A_{{\mathfrak {t}}_1}, \dots , H^A_{{\mathfrak {t}}_M} \right) . \end{aligned}$$

Remark that, conditionally on \(A_\infty \), \(({\mathcal {P}}_t: t \leqslant A_\infty )\) is independent of \({\widetilde{H}}\) and \(H^A\), and that the random variable M is Poisson with intensity \((\lambda A_\infty )\). Recall also the classical conditional uniformity of Poissonian marks: conditionally on \(A_\infty \) and on \(\{M = m\}\), the marks \(({\mathfrak {t}}_1, \dots , {\mathfrak {t}}_m)\) are distributed as the order statistics of m i.i.d. uniform random variables on \([0,A_\infty ]\). Let \(( U_i:~i\geqslant 1 )\) be a collection of independent identically distributed random variables uniformly distributed in \([0,A_\infty ]\), conditionally on \((\rho , {\overline{W}}, {\widetilde{H}})\). By conditioning on \(A_\infty \), we deduce that for any \(m \geqslant 1\) and any measurable function \(f:{\mathbb {R}}^{m}\mapsto {\mathbb {R}}_+\), we have

$$\begin{aligned}&{\mathbb {N}}^\lambda _{x,0}\left( f( {\widetilde{H}}_{U^m_{(1)}}, \dots , {\widetilde{H}}_{U^m_{(m)}} ) \frac{( \lambda A_\infty )^{m}}{m!}\exp \big (-\lambda A_\infty \big ) \right) \\&\quad = {\mathbb {N}}_{x,0}^\lambda \left( f( H^A_{U^m_{(1)}}, \dots , H^A_{U^m_{(m)}} ) \frac{( \lambda A_\infty )^{m}}{m!}\exp \big (-\lambda A_\infty \big ) \right) , \end{aligned}$$

where \(( U^m_{(1)}, \dots , U^m_{(m)} )\) stands for the order statistics of \(\{ U_1, \dots , U_m \}\). By considering a proper extension of the measure \({\mathbb {N}}_{x,0}\) – to ensure the existence of the sequence \((U_i: i \geqslant 1)\) – the identity in the previous display still holds if we replace \({\mathbb {N}}_{x,0}^\lambda \) by \({\mathbb {N}}_{x,0}\). Moreover, since this identity is satisfied for every \(\lambda > 0\), it readily follows by injectivity of the Laplace transform that

$$\begin{aligned} \left( A_\infty , {\widetilde{H}}_{U^m_{(1)}}, \dots , {\widetilde{H}}_{U^m_{(m)}} \right) \overset{(d)}{=}\ \left( A_\infty , H^A_{U^m_{(1)}}, \dots , H^A_{U^m_{(m)}} \right) , \end{aligned}$$

under \({\mathbb {N}}_{x,0}\), for every \(m\geqslant 1\). Denote by \(({\widetilde{H}}^m_t:~ t \geqslant 0)\) the unique continuous function vanishing on \({\mathbb {R}}_+ {\setminus } (0,A_\infty )\) and interpolating linearly between the points \(\{ ( A_\infty \cdot im^{-1}, {\widetilde{H}}_{U^m_{(i)}}): i \in \{ 1, \dots , m \}\}\cup \{ (0,0), (A_\infty , 0) \} \). Similarly, let \({H}^{A,m}\) be the analogous function defined by replacing \({\widetilde{H}}\) by \(H^A\). The identity in distribution in the last display ensures that

$$\begin{aligned} {\mathbb {N}}_{x,0} \big ( F ({\widetilde{H}}^m ) \big ) = {\mathbb {N}}_{x,0} \big ( F ( H^{A,m} ) \big ). \end{aligned}$$

Furthermore, since for every fixed rational \(t\in [0,1]\) we have the a.e. pointwise convergence \(U^m_{(\lfloor tm \rfloor )} \rightarrow tA_\infty \), we infer by Dini's theorem that \(\sup \{|U^{m}_{(\lfloor t m \rfloor )}- A_\infty \cdot t|:~t\in [0,1]\}\rightarrow 0\) a.e. It is now straightforward to derive, by uniform continuity of \(H^A\) and \({\widetilde{H}}\), the a.e. convergences \({{\widetilde{H}}}^m \rightarrow {\widetilde{H}}\) and \({H}^{A,m} \rightarrow {H}^A\) as \(m \uparrow \infty \) in \({\mathbb {C}}({\mathbb {R}}_+, {\mathbb {R}})\). Finally, since F vanishes in the complement of the finite measure event \(\{ A_\infty > \varepsilon \}\), an application of the dominated convergence theorem gives (5.4) by taking the limit as \(m \uparrow \infty \) in the previous display. \(\square \)

5.2 Trees embedded in the subordinate tree

This section is devoted to the proof of Proposition 5.3. In short, the idea is to decompose \(\widetilde{{{\textbf {T}}}}\) and \({{\textbf {T}}}^A\) inductively, starting from their respective “left-most branches” – viz. the path connecting the root \(\varnothing \) and the first leaf with the corresponding labels – and to show that these have the same law. Next, if we remove the left-most branch of \(\widetilde{{{\textbf {T}}}}\) and \({{\textbf {T}}}^A\), we are left with two ordered collections of independent subtrees and we shall establish that they have respectively the same law as \(\widetilde{{{\textbf {T}}}}\) and \({{\textbf {T}}}^A\). This will allow us to iterate this left-most branch decomposition in such a way that the branches discovered at step n in \(\widetilde{{{\textbf {T}}}}\) and \({{\textbf {T}}}^A\) have the same law. Proposition 5.3 will follow since this procedure eventually discovers all of \(\widetilde{{{\textbf {T}}}}\) and \({{\textbf {T}}}^A\). In order to state this formally, let us introduce some notation.

If \({{\textbf {T}}}:=(\text {T},(h_v:~v\in \text {T}))\) is a discrete labelled tree and \(n\geqslant 0\), we let \({{\textbf {T}}}(n)\) be the set of all couples \((u,h_u)\in {{\textbf {T}}}\) such that u has at most n entries in \(\{2,3,...\}\). In particular \({{\textbf {T}}}(0)\) is the branch connecting the root and the first leaf. Next, we introduce the collection

$$\begin{aligned} {{\mathbb {S}}}({{\textbf {T}}}):=\big ( (h_v, k_v(\text {T})-1):~v\text { is a vertex of } {{\textbf {T}}}(0)\big ), \end{aligned}$$

where the elements are listed in increasing order with respect to the height and we recall that \(k_v(\text {T})\) stands for the number of children of v. For simplicity, set \(R:=\#{{\textbf {T}}}(0)-1\), write \(v_1,...,v_{R+1}\) for the vertices of \({{\textbf {T}}}(0)\) in lexicographic order and observe that \(v_1\) is the root while \(v_{R+1}\) is the first leaf – in particular \(k_{v_{R+1}}(\text {T}) =0\). Heuristically, \({{\mathbb {S}}}({{\textbf {T}}})\) – or more precisely the measure \(\sum _i (k_{v_i}(\text {T})-1)\delta _{h_{v_i}}\) – is a discrete version of the exploration process when visiting the first leaf of \(\text {T}\) and for this reason \({{\mathbb {S}}}({{\textbf {T}}})\) will be called the left-most spine of \({{\textbf {T}}}\). Now, for every \(1 \leqslant j \leqslant R\), set

$$\begin{aligned} K_j({{\textbf {T}}}):= \sum _{i=1}^{j} (k_{v_i}(\text {T})-1), \end{aligned}$$

with the convention \(K_0({{\textbf {T}}})=0\) and remark that \(K({{\textbf {T}}}):=K_{R}({{\textbf {T}}})\) stands for the number of subtrees attached “to the right” of \({{\textbf {T}}}(0)\) in \({{\textbf {T}}}\). To define these subtrees when \(K({{\textbf {T}}}) \geqslant 1\), we need to introduce the following: for every \(1\leqslant i\leqslant K_{R}({{\textbf {T}}}) = K({{\textbf {T}}})\), let a(i) be the unique index such that \(K_{a(i)-1}({{\textbf {T}}}) < i \leqslant K_{a(i)} ({{\textbf {T}}})\). Then, we introduce the marked tree

$$\begin{aligned} {{\textbf {T}}}_{i}:=\big \{(u,h'_u):~\big (v_{a(i)}(K_{a(i)} + 2 -i )u,h_{v_{a(i)}} + h'_u \big )\in {{\textbf {T}}}\big \}. \end{aligned}$$
(5.5)

Remark that the labels in each subtree \({{\textbf {T}}}_i\) have been shifted by their relative height in \({\mathbb {S}}({{\textbf {T}}})\) and that the collection \(({{\textbf {T}}}_i: 1 \leqslant i \leqslant K({{\textbf {T}}}))\) is listed in counterclockwise order.
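To fix ideas, here is a minimal example of this decomposition – it is not used elsewhere. Suppose that \(\text {T}=\{\varnothing , 1, 2, 3, 21\}\), so that the root has three children, the vertex 1 is the first leaf, 3 is a leaf, and 21 is the only child of 2. Then \({{\textbf {T}}}(0)=\{(\varnothing , h_\varnothing ), (1,h_1)\}\), \(R=1\), \({{\mathbb {S}}}({{\textbf {T}}})=\big ((h_\varnothing , 2), (h_1,-1)\big )\) (assuming \(h_{\varnothing }<h_{1}\)) and \(K({{\textbf {T}}})=K_1({{\textbf {T}}})=2\). Definition (5.5) then yields

$$\begin{aligned} {{\textbf {T}}}_{1}=\big \{(\varnothing , h_{3}-h_{\varnothing })\big \}, \qquad {{\textbf {T}}}_{2}=\big \{(\varnothing , h_{2}-h_{\varnothing }), (1, h_{21}-h_{\varnothing })\big \}, \end{aligned}$$

so the subtrees attached to the right of \({{\textbf {T}}}(0)\) are visited from the last child of \(\varnothing \) to the second one – the counterclockwise order – with labels shifted by \(h_{\varnothing }\).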

We now apply this decomposition to \(\widetilde{{{\textbf {T}}}}\) and \({{\textbf {T}}}^A\). For simplicity, we write \({\widetilde{K}}:= K(\widetilde{{{\textbf {T}}}})\) (resp. \(K:= K({{{\textbf {T}}}}^A)\)) for the number of subtrees attached to the right of \(\widetilde{{{\textbf {T}}}}(0)\) (resp. \({{\textbf {T}}}^A(0)\)). When \({\widetilde{K}}\geqslant 1\) (resp. \(K\geqslant 1\)), we let \(\widetilde{{{\textbf {T}}}}_i\) (resp. \({{\textbf {T}}}_i^A\)) be the marked trees defined by (5.5) using \(\widetilde{{{\textbf {T}}}}\) (resp. \({{\textbf {T}}}^A\)). Proposition 5.3 can now be reduced to the following result:

Proposition 5.4

(i) We have

$$\begin{aligned} \Big ( {\mathbb {S}}(\widetilde{{{\textbf {T}}} })~:~{\widetilde{N}}^\lambda (\cdot |{\widetilde{M}}\geqslant 1) \Big ) \overset{(d)}{=}\ \Big ({\mathbb {S}}({{\textbf {T}}} ^A) ~:~{\mathbb {N}}_{x,0}^\lambda (\cdot |M\geqslant 1)\Big ). \end{aligned}$$

(ii) Under \({\widetilde{N}}^\lambda ( \cdot \, |{\widetilde{M}} \geqslant 1, {\widetilde{K}})\) and conditionally on \({\mathbb {S}}( \widetilde{{{\textbf {T}}} })\), the subtrees \(\widetilde{{{\textbf {T}}} }_{1}, \dots \widetilde{{{\textbf {T}}} }_{{\widetilde{K}}}\) are distributed as \({\widetilde{K}}\) independent copies distributed as \(\widetilde{{{\textbf {T}}} }\) under \({\widetilde{N}}^\lambda ( \cdot \, |{\widetilde{M}} \geqslant 1)\). Similarly, under \({\mathbb {N}}_{x,0}^\lambda ( \, \cdot \, |M \geqslant 1, K)\) and conditionally on \({\mathbb {S}}({{\textbf {T}}} ^A)\), the subtrees \({{{\textbf {T}}} }^{A}_1, \dots , {{\textbf {T}}} ^{A}_K\) are distributed as K independent copies distributed as \({{\textbf {T}}} ^A\) under \({\mathbb {N}}_{x,0}^\lambda (\, \cdot \, |M \geqslant 1)\).

We stress that the notations \({\widetilde{N}}^\lambda ( \cdot \, |{\widetilde{M}} \geqslant 1, {\widetilde{K}})\), \({\mathbb {N}}_{x,0}^\lambda ( \, \cdot \, |M \geqslant 1, K)\) stand for the conditional distributions given \({\widetilde{K}}\), resp. K, under the probability measures \({\widetilde{N}}^\lambda (\, \cdot \, | {\widetilde{M}} \geqslant 1)\), resp. \({\mathbb {N}}_{x,0}^\lambda ( \, \cdot \, |M \geqslant 1)\). Let us explain why Proposition 5.3 is a consequence of the previous result.

Proof of Proposition 5.3

We are going to show by induction that for every \(n\geqslant 0\):

$$\begin{aligned} \widetilde{{{\textbf {T}}}}(n)\text { under } {\widetilde{N}}^\lambda ( \cdot \, |{\widetilde{M}} \geqslant 1) ~~~~\text { is distributed as }~~~~ {{\textbf {T}}}^A(n) \text { under }{\mathbb {N}}_{x,0}^\lambda (\cdot |M\geqslant 1).\nonumber \\ \end{aligned}$$
(5.6)

First notice that Proposition 5.4 (i) gives the previous identity in the case \(n=0\). Assume now that (5.6) holds for \(n\geqslant 0\) and let us prove the identity for \(n+1\). First, remark that it is enough to argue with \(\widetilde{{{\textbf {T}}}}(n+1)\) under \({\widetilde{N}}^\lambda ( \cdot \, |{\widetilde{M}} \geqslant 1, {\widetilde{K}})\) and \({{\textbf {T}}}^A(n+1)\) under \({\mathbb {N}}_{x,0}^\lambda (\cdot |M\geqslant 1, K)\) – since by Proposition 5.4, the variable \({\widetilde{K}}\) under \({\widetilde{N}}^\lambda ( \cdot \, |{\widetilde{M}} \geqslant 1)\) is distributed as K under \({\mathbb {N}}_{x,0}^\lambda (\cdot |M\geqslant 1)\). Next, we see that \(\widetilde{{{\textbf {T}}}}(n+1)\) can be obtained by gluing the trees \(\widetilde{{{\textbf {T}}}}_i(n)\) to \(\widetilde{{{\textbf {T}}}}(0)\) at their respective positions after translating the labels by the associated heights. Moreover, these positions and heights are precisely the entries of \({\mathbb {S}}(\widetilde{{{\textbf {T}}}})\). Since the same discussion holds when replacing \(\widetilde{{{\textbf {T}}}}\) by \({{\textbf {T}}}^A\), the case \(n+1\) follows by Proposition 5.4 and the case n. Finally, since the trees \(\widetilde{{{\textbf {T}}}}\) and \({{\textbf {T}}}^A\) are finite, (5.6) implies the desired result. \(\square \)

Our goal now is to prove Proposition 5.4. In this direction, we will first encode the spines \({{\mathbb {S}}}(\widetilde{{{\textbf {T}}}}), {{\mathbb {S}}}({{\textbf {T}}}^A)\) as well as the corresponding subtrees \(\widetilde{{{\textbf {T}}}}_i\), \({{\textbf {T}}}^A_i\) in terms of \({\widetilde{\rho }}\), \((\rho , {\overline{W}})\) and \({\mathcal {P}}\). This will allow us to identify their law by making use of the machinery developed in previous sections. While \({\mathbb {S}}(\widetilde{{{\textbf {T}}}})\) can be constructed directly in terms of \(({\widetilde{\rho }}_{{\mathfrak {t}}_1 + t}: t \geqslant 0 )\) and the Poisson marks, the construction of \({\mathbb {S}}({{\textbf {T}}}^A)\) is more technical. Roughly speaking, the strategy consists in defining in terms of \((\rho , {\overline{W}})\) the exploration process for the subordinate tree at time \({\mathfrak {t}}_1\), say \(\rho ^*_{{\mathfrak {t}}_1}\), and then showing – see Lemma 5.8 below – that \({\widetilde{\rho }}_{{\mathfrak {t}}_1}\) and \(\rho ^*_{{\mathfrak {t}}_1}\) have the same distribution. Needless to say, this statement is informal, since we have not yet shown that the subordinate tree is a Lévy tree. We will then deduce (i) by considering \({\mathbb {S}}(\widetilde{{{\textbf {T}}}})\), \({\mathbb {S}}({{\textbf {T}}}^A)\) and conditioning respectively on \({\widetilde{\rho }}_{{\mathfrak {t}}_1}\) and \(\rho ^*_{{\mathfrak {t}}_1}\); point (ii) will then follow easily by construction. For simplicity, from now on we write \({\mathfrak {t}}:= {\mathfrak {t}}_1\).

We start working under \({\widetilde{N}}^\lambda (\cdot \, | {\widetilde{M}} \geqslant 1)\) and we introduce the following notation: let \(\big (({\widetilde{\alpha }}_i, {\widetilde{\beta }}_i):~i\in {\mathbb {N}}\big )\) be the connected components of the open set

$$\begin{aligned} \big \{s\geqslant {\mathfrak {t}}:~{\widetilde{H}}_s>\inf \limits _{[{\mathfrak {t}},s]} {\widetilde{H}} \big \}. \end{aligned}$$

As usual, we write \({\widetilde{\rho }}^{\,i}\) for the associated subtrajectory of the exploration process in the interval \([{\widetilde{\alpha }}_i, {\widetilde{\beta }}_i ]\). We also consider \({\widetilde{H}}^{i}:=({\widetilde{H}}_{({\widetilde{\alpha }}_i+s)\wedge {\widetilde{\beta }}_i}-{\widetilde{H}}_{{\widetilde{\alpha }}_i}:~s\geqslant 0)\), \(\widetilde{{\mathcal {P}}}^{i}:=(\widetilde{{\mathcal {P}}}_{({\widetilde{\alpha }}_i+t)\wedge {\widetilde{\beta }}_i}-\widetilde{{\mathcal {P}}}_{{\widetilde{\alpha }}_i}:~t\geqslant 0)\) and note that in particular we have \(H({\widetilde{\rho }}^{\,i}) = {\widetilde{H}}^i\). Write \({\widetilde{h}}_i:={\widetilde{H}}({\widetilde{\alpha }}_i)\), and consider the marked measure:

$$\begin{aligned} \widetilde{{\mathcal {M}}}:=\sum \limits _{i\in {\mathbb {N}}}\delta _{({\widetilde{h}}_i, {\widetilde{\rho }}^{\,i},\widetilde{{\mathcal {P}}}^i)}. \end{aligned}$$

By the Markov property and (2.23), conditionally on \({\mathcal {F}}_{{\mathfrak {t}}}\), the measure \(\widetilde{{\mathcal {M}}}\) is a Poisson point measure with intensity \({\widetilde{\rho }}_{{\mathfrak {t}}}({\textrm{d}}h){\widetilde{N}}^\lambda ({\textrm{d}}\rho , \, {\textrm{d}}{\mathcal {P}}).\) Now we can identify \({\mathbb {S}}(\widetilde{{{\textbf {T}}}})\) in terms of functionals of \(\widetilde{{\mathcal {M}}}\) and \({\widetilde{H}}_{{\mathfrak {t}}}\). First, set \(({\widetilde{h}}^\circ _p:~ 1\leqslant p \leqslant {\widetilde{R}})\) the collection of the different heights – in increasing order – among \(({\widetilde{h}}_i:~ i \in {\mathbb {N}})\) at which \(\widetilde{{\mathcal {P}}}^{i}_{\sigma ({\widetilde{\rho }}^{\,i})} \geqslant 1\). In particular, \({\widetilde{R}}\) gives the number of different heights \({\widetilde{h}}_j\) at which we can find at least one marked excursion above the running infimum of \(({\widetilde{H}}_{{\mathfrak {t}}+ t}: t \geqslant 0)\). Next, we write \({\widetilde{M}}^\circ _p\) for the number of atoms at level \({\widetilde{h}}^\circ _p\) in \(\widetilde{{\mathcal {M}}}\) with at least one Poissonian mark. Now, remark that by construction we have:

$$\begin{aligned} {\mathbb {S}}(\widetilde{{{\textbf {T}}}}) = \big ( ({\widetilde{h}}^\circ _1, {\widetilde{M}}^\circ _1), \dots , ({\widetilde{h}}^\circ _{{\widetilde{R}}},{\widetilde{M}}^\circ _{{\widetilde{R}}}), ( {\widetilde{H}}_{{\mathfrak {t}}}, -1)\big ), \end{aligned}$$
(5.7)

and in particular \({\widetilde{K}} = \sum _{i=1}^{{\widetilde{R}}} {\widetilde{M}}^\circ _i\). Finally, for later use denote the corresponding marked excursions arranged in counterclockwise order by \( \widetilde{{\mathscr {E}}}:= ( ({\widetilde{\rho }}^q_\circ ,{\widetilde{H}}^q_\circ , \widetilde{{\mathcal {P}}}^q_\circ ): 1 \leqslant q \leqslant {\widetilde{K}})\). Notice that the subtrees \((\widetilde{{{\textbf {T}}}}_i: 1 \leqslant i \leqslant {\widetilde{K}})\) are precisely the respective embedded marked trees associated with \((({\widetilde{H}}^q_\circ , \widetilde{{\mathcal {P}}}^q_\circ ): 1 \leqslant q \leqslant {\widetilde{K}})\).

The main step remaining in our analysis under \({\widetilde{N}}^\lambda (\cdot \, | {\widetilde{M}} \geqslant 1)\) consists in characterizing the law of \(({\widetilde{H}}_{{\mathfrak {t}}},{\widetilde{\rho }}_{{\mathfrak {t}}})\), and this is the content of the following lemma. Since \(\widetilde{{\mathcal {M}}}\) conditionally on \({\mathcal {F}}_{{\mathfrak {t}}}\) is a Poisson point measure with intensity \({\widetilde{\rho }}_{{\mathfrak {t}}}({\textrm{d}}h){\widetilde{N}}^\lambda ({\textrm{d}}\rho , \, {\textrm{d}}{\mathcal {P}})\), this will suffice to identify the distribution of \({\mathbb {S}}(\widetilde{{{\textbf {T}}}})\). In this direction, Corollary 4.9 ensures that the measure \({\widetilde{\rho }}_{{\mathfrak {t}}}\) is purely atomic and consequently by (2.4) it is of the form:

$$\begin{aligned} {\widetilde{\rho }}_{{\mathfrak {t}}}:=\sum _{i \in {\mathbb {N}}} {\widetilde{\Delta }}_i \cdot \delta _{{\widetilde{h}}^i} . \end{aligned}$$

We stress that we have \(\{{\widetilde{h}}^i:i\in {\mathbb {N}}\}=\{{\widetilde{h}}_i:i\in {\mathbb {N}}\}\) – even though the latter family may contain repeated elements.

Lemma 5.5

Under \({\widetilde{N}}^\lambda (\cdot |~{\widetilde{M}}\geqslant 1)\), the random variable \({\widetilde{H}}_{{\mathfrak {t}}}\) is exponentially distributed with parameter \(\lambda /{\widetilde{\psi }}^{-1}(\lambda )\). Moreover, conditionally on \({\widetilde{H}}_{{\mathfrak {t}}}\), the measure \(\sum \delta _{({\widetilde{h}}^i,{\widetilde{\Delta }}_i)}\) is a Poisson point measure with intensity \(\mathbb {1}_{[0,{\widetilde{H}}_{{\mathfrak {t}}}]}({\textrm{d}}h) {\widetilde{\nu }}({\textrm{d}}z)\), where \({\widetilde{\nu }}({\textrm{d}}z)\) is the measure supported on \({\mathbb {R}}_+\) characterized by:

$$\begin{aligned} \int {\widetilde{\nu }}({\textrm{d}}z)\big (1-\exp (-p z)\big )=\frac{{\widetilde{\psi }}(p)-\lambda }{p-{\widetilde{\psi }}^{-1}(\lambda )} - \frac{\lambda }{{\widetilde{\psi }}^{-1}(\lambda )}, \qquad p\geqslant 0. \end{aligned}$$
(5.8)
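Before turning to the proof, let us point out an elementary consequence of (5.8), not needed in the sequel. Since \({\widetilde{\psi }}({\widetilde{\psi }}^{-1}(\lambda ))=\lambda \), the first term on the right-hand side of (5.8) is the slope of a chord of \({\widetilde{\psi }}\); as Laplace exponents are smooth on \((0,\infty )\), letting \(p \rightarrow {\widetilde{\psi }}^{-1}(\lambda )\) in (5.8) gives

$$\begin{aligned} \int {\widetilde{\nu }}({\textrm{d}}z)\big (1-\exp (-{\widetilde{\psi }}^{-1}(\lambda ) z)\big ) = {\widetilde{\psi }}^{\prime }\big ({\widetilde{\psi }}^{-1}(\lambda )\big )-\frac{\lambda }{{\widetilde{\psi }}^{-1}(\lambda )}, \end{aligned}$$

so the right-hand side of (5.8) extends continuously at \(p={\widetilde{\psi }}^{-1}(\lambda )\).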

Proof

Recall that by Proposition 4.13, we have \({\widetilde{\psi }}^{-1}(\lambda ) = {\widetilde{N}}(1-\exp (-\lambda \sigma ))= {\widetilde{N}}^\lambda ({\widetilde{M}}\geqslant 1)\). Consider two measurable functions \(g: {\mathbb {R}}_+ \mapsto {\mathbb {R}}_+\), \(F: {\mathcal {M}}_f({\mathbb {R}}_+) \mapsto {\mathbb {R}}_+\) and remark that

$$\begin{aligned} {\widetilde{N}}^{\lambda }\big ( g({\widetilde{H}}_{{\mathfrak {t}}})F({\widetilde{\rho }}_{{\mathfrak {t}}})\mathbb {1}_{\{{\widetilde{M}}\geqslant 1 \}}\big )= \lambda \cdot {\widetilde{N}}\big (\int _{0}^\sigma {\textrm{d}}s \exp (-\lambda s) g({\widetilde{H}}_{s})F({\widetilde{\rho }}_{s})\big ). \end{aligned}$$

By duality (2.21) and the Markov property, the previous expression can be written in the form:

$$\begin{aligned}&\lambda \cdot {\widetilde{N}} \big (\int _{0}^{\sigma } {\textrm{d}}s~ g({\widetilde{H}}_{s})F({\widetilde{\eta }}_{s}) \exp (-\lambda (\sigma -s)) \big ) \\&\quad = \lambda \cdot {\widetilde{N}} \big (\int _{0}^{\sigma } {\textrm{d}}s~ g({\widetilde{H}}_{s})F({\widetilde{\eta }}_{s}){\widetilde{E}}_{{\widetilde{\rho }}_s}[ \exp (-\lambda \sigma )]\big )\\&\quad =\lambda \cdot {\widetilde{N}}\big (\int _{0}^{\sigma } {\textrm{d}}s~ g({\widetilde{H}}_{s})F({\widetilde{\eta }}_{s})\exp (-{\widetilde{\psi }}^{-1}(\lambda )\langle {\widetilde{\rho }}_{s}, 1\rangle )\big ), \end{aligned}$$

where in the last line we use the identity \({\widetilde{\psi }}^{-1}(\lambda )={\widetilde{N}}(1-\exp (-\lambda \sigma )) \). Consider under \(P^0\) the pair of subordinators \(({\widetilde{U}}^{(1)}, {\widetilde{U}}^{(2)})\) with Laplace exponent (2.24), defined replacing \(\psi \) by \({\widetilde{\psi }}\), and denote its Lévy measure by \({\widetilde{\gamma }}({\textrm{d}}u_1, {\textrm{d}}u_2 )\). We stress that since \({\widetilde{\psi }}\) has no Brownian part, the subordinators \(({\widetilde{U}}^{(1)}, {\widetilde{U}}^{(2)})\) do not have drift. The many-to-one formula (2.25) applied to \({\widetilde{\psi }}\) gives:

$$\begin{aligned}&{\widetilde{N}}^{\lambda }\big (g({\widetilde{H}}_{{\mathfrak {t}}})F({\widetilde{\rho }}_{{\mathfrak {t}}})\big |~{\widetilde{M}}\geqslant 1\big )\nonumber \\&=\frac{\lambda }{{\widetilde{\psi }}^{-1}(\lambda )}\int _{0}^{\infty }{\textrm{d}}a \, \exp (-{\widetilde{\alpha }} a) g(a) E^{0} \big [F(\mathbb {1}_{[0,a]} {\textrm{d}}{\widetilde{U}}^{(1)}) \exp (-{\widetilde{\psi }}^{-1}(\lambda ) {\widetilde{U}}^{(2)}_a )\big ]. \end{aligned}$$
(5.9)

We shall now deduce from the latter identity that the pair \(\big ({\widetilde{H}}_{{\mathfrak {t}}}, \sum \delta _{({\widetilde{h}}^i,{\widetilde{\Delta }}_i)}\big )\) has the desired distribution. In this direction, observe that since \(\rho \) takes values in \(M_p({\mathbb {R}}_+) \subset {\mathcal {M}}_f({\mathbb {R}}_+) \), the subspace of finite atomic measures on \({\mathbb {R}}_+\), we can and will consider in the last display functionals F vanishing in the complement of \(M_p({\mathbb {R}}_+)\). Now let \(f:{\mathbb {R}}_+^{2}\rightarrow {\mathbb {R}}_+\) be a measurable function satisfying \(f(h,0)=0\), for every \(h\geqslant 0\). By (5.9) and our previous discussion, we derive that

$$\begin{aligned}&{\widetilde{N}}^{\lambda } \big (g({\widetilde{H}}_{{\mathfrak {t}}})\exp \big (-\sum _{i \in {\mathbb {N}}} f({\widetilde{h}}_i,{\widetilde{\Delta }}_i )\big )|{\widetilde{M}}\geqslant 1\big ) \nonumber \\&\quad =\frac{\lambda }{{\widetilde{\psi }}^{-1}(\lambda )}\int _{0}^{\infty }{\textrm{d}}a \, g(a)\exp (-{\widetilde{\alpha }} a) E^{0}\Big [ \exp \Big (-\sum _{h \leqslant a} \big (f(h,\Delta {\widetilde{U}}^{(1)}_{h} )+{\widetilde{\psi }}^{-1}(\lambda )\Delta {\widetilde{U}}^{(2)}_{h} \big )\Big )\Big ]. \end{aligned}$$
(5.10)

Moreover, by the exponential formula it follows that the expectation under \(E^0\) in the previous display equals

$$\begin{aligned} \exp \Big (-\int _0^a {\textrm{d}}h \int {\widetilde{\gamma }}({\textrm{d}}u_1,{\textrm{d}}u_2) \big (1-\exp (-f(h,u_1)-{\widetilde{\psi }}^{-1}(\lambda ) u_2)\big )\Big ), \end{aligned}$$

and notice that we can write:

$$\begin{aligned}&\int {\widetilde{\gamma }}({\textrm{d}}u_1,{\textrm{d}}u_2)\big (1-\exp (-f(h,u_1)-{\widetilde{\psi }}^{-1}(\lambda ) u_2)\big )\\&\quad = \int {\widetilde{\gamma }}({\textrm{d}}u_1,{\textrm{d}}u_2)\exp (-{\widetilde{\psi }}^{-1}(\lambda ) u_2)\big (1-\exp (-f(h,u_1))\big )\\&\qquad + \int {\widetilde{\gamma }}({\textrm{d}}u_1,{\textrm{d}}u_2)\big (1-\exp (-{\widetilde{\psi }}^{-1}(\lambda ) u_2)\big ). \end{aligned}$$

To simplify this expression, introduce the measure \({\widetilde{\gamma }}^{\prime }({\textrm{d}}u_1):=\int _{u_2 \in {\mathbb {R}}_+}{\widetilde{\gamma }}({\textrm{d}}u_1,{\textrm{d}}u_2)\exp (-{\widetilde{\psi }}^{-1}(\lambda ) u_2)\) and observe that (2.24) entails

$$\begin{aligned} \int \widetilde{\gamma }({\textrm{d}}u_1,{\textrm{d}}u_2)(1-\exp (-\widetilde{\psi }^{-1}(\lambda ) u_2))=\frac{\lambda }{\widetilde{\psi }^{-1}(\lambda )}-\widetilde{\alpha }.\end{aligned}$$

We deduce that (5.10) can be written in the following form:

$$\begin{aligned} \frac{\lambda }{{\widetilde{\psi }}^{-1}(\lambda )}\int _{0}^{\infty }{\textrm{d}}a \, g(a) \exp (-\frac{\lambda }{{\widetilde{\psi }}^{-1}(\lambda )} a) \exp \Big ( - \int _0^a{\textrm{d}}h\int {\widetilde{\gamma }}'({\textrm{d}}u_1) \Big (1- \exp \big ( -f(h,u_1) \big ) \Big ) \Big ), \end{aligned}$$

and to conclude it suffices to remark that \({\widetilde{\gamma }}^\prime ={\widetilde{\nu }}\), since by (2.24) we have

$$\begin{aligned}&\int {\widetilde{\gamma }}^\prime ({\textrm{d}}u_1)\big (1-\exp (-p u_1)\big )\\ {}&=\int {\widetilde{\gamma }}({\textrm{d}}u_1,{\textrm{d}}u_2) \big (\exp (-{\widetilde{\psi }}^{-1}(\lambda ) u_2)-\exp (-pu_1-{\widetilde{\psi }}^{-1}(\lambda ) u_2)\big )\\&=\frac{{\widetilde{\psi }}(p)-\lambda }{p-{\widetilde{\psi }}^{-1}(\lambda )} - \frac{\lambda }{{\widetilde{\psi }}^{-1}(\lambda )}, \end{aligned}$$

for every \(p\geqslant 0\). \(\square \)
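As a quick sanity check of (5.10) – not needed in what follows – take \(f \equiv 0\) and \(g \equiv 1\): by the identity \(\int {\widetilde{\gamma }}({\textrm{d}}u_1,{\textrm{d}}u_2)(1-\exp (-{\widetilde{\psi }}^{-1}(\lambda ) u_2))=\lambda /{\widetilde{\psi }}^{-1}(\lambda )-{\widetilde{\alpha }}\) displayed in the course of the proof, the right-hand side of (5.10) then reduces to

$$\begin{aligned} \frac{\lambda }{{\widetilde{\psi }}^{-1}(\lambda )}\int _{0}^{\infty }{\textrm{d}}a \, \exp (-{\widetilde{\alpha }} a) \exp \Big (-a \Big (\frac{\lambda }{{\widetilde{\psi }}^{-1}(\lambda )}-{\widetilde{\alpha }}\Big )\Big ) = \frac{\lambda }{{\widetilde{\psi }}^{-1}(\lambda )}\int _{0}^{\infty }{\textrm{d}}a \, \exp \Big (-\frac{\lambda }{{\widetilde{\psi }}^{-1}(\lambda )}\,a\Big ) = 1, \end{aligned}$$

which is consistent, since the left-hand side of (5.10) is then \({\widetilde{N}}^{\lambda }(1 \,|\, {\widetilde{M}}\geqslant 1)=1\).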

We now turn our attention to the other side of the picture and work under \({\mathbb {N}}^{\lambda }_{x,0}(\cdot |M\geqslant 1)\). The objective is to obtain analogous results for the spine \({\mathbb {S}}({{\textbf {T}}}^A)\). In this direction, recall the notation \(G_\lambda := {\widetilde{\psi }}^{-1}(\lambda )\); we start with the following technical lemma characterizing the law of \((\rho ,{\overline{W}})\) at time \(A^{-1}_{\mathfrak {t}}\).

Lemma 5.6

For any non-negative measurable function \(f\) on \({\mathcal {M}}_f({\mathbb {R}}_+) \times {\mathcal {W}}_{{\overline{E}}}\), we have:

Proof

Since \(\{ M \geqslant 1 \} = \{ {\mathfrak {t}} \leqslant A_\infty \}\), we have:

$$\begin{aligned} \mathbb {N}_{x,0}^\lambda \left( f ( \rho _{A^{-1}_{\mathfrak {t}}} , \overline{W}_{A^{-1}_{\mathfrak {t}}}) \mathbb {1}_{\{ M\geqslant 1 \}} \right)&= \lambda \cdot \mathbb {N}_{x,0} \left( \int _0^{A_{\infty }} {\textrm{d}}s \,f(\rho _{A^{-1}_s} , \overline{W}_{A^{-1}_s} ) \exp (-\lambda s) \right) \\ {}&=\lambda \cdot \mathbb {N}_{x,0} \left( \int _0^{\sigma } {\textrm{d}}A_s \, f(\rho _{s} , \overline{W}_{s} ) \exp (- \lambda A_s )\right) , \end{aligned}$$

and by a change of variable, the previous display equals

$$\begin{aligned} -\lambda \cdot \mathbb {N}_{x,0} \Big ( \int _0^{\sigma } {\textrm{d}}A_{\sigma - s} f(\rho _{\sigma - s} ,\overline{W}_{\sigma - s} ) \exp (- \lambda A_{\sigma - s} ) \Big ). \end{aligned}$$

Moreover, by time reversal (2.21) and the additivity of A, we know that:

$$\begin{aligned} (\rho _{(\sigma - s)-}, {\overline{W}}_{\sigma - s }, A_{\sigma - s}: 0 \leqslant s \leqslant \sigma ) \overset{(d)}{=}\ (\eta _{s}, {\overline{W}}_{s }, A_\sigma - A_{s}: 0 \leqslant s \leqslant \sigma ), \end{aligned}$$

and we remark that \(\{ s \in [0,\sigma ]: \rho _{s} \ne \rho _{s-} \} \subset \{ s \in [0,\sigma ]: \rho _{s}(\{ H_s \}) > 0 \}\), which has null \({\textrm{d}}A\) measure, \({\mathbb {N}}_{x,0}\)-a.e., by the many-to-one formula of Lemma 4.11. This implies:

$$\begin{aligned}{} & {} - {\mathbb {N}}_{x,0} \left( \int _0^{\sigma } {\textrm{d}}A_{\sigma - s} \, f(\rho _{\sigma - s}, {\overline{W}}_{\sigma - s} ) \exp (- \lambda A_{\sigma - s} ) \right) \\{} & {} \quad = {\mathbb {N}}_{x,0} \left( \int _0^{\sigma } {\textrm{d}}A_{s} \, f(\eta _{ s}, {\overline{W}}_{ s} ) \exp \big (- \lambda \int _{s}^{\sigma } {\textrm{d}}A_{u} \big ) \right) . \end{aligned}$$

Next, by making use of the strong Markov property, we derive that

$$\begin{aligned}&{\mathbb {N}}_{x,0}^\lambda \left( f ( \rho _{A^{-1}_{{\mathfrak {t}}}} , {\overline{W}}_{A^{-1}_{{\mathfrak {t}}}}) \mathbb {1}_{\{ M\geqslant 1 \}} \right) \\&\quad = \lambda \cdot {\mathbb {N}}_{x,0} \left( \int _0^{\sigma } {\textrm{d}}A_{s} \, f(\eta _{s} , {\overline{W}}_{s} ) \exp \big (- \lambda \int _s^\sigma {\textrm{d}}A_u\big ) \right) \\&\quad = \lambda \cdot {\mathbb {N}}_{x,0} \left( \int _0^{\sigma } {\textrm{d}}A_{s} \, f(\eta _{s} , {\overline{W}}_{s} ) {\mathbb {E}}^{\dag }_{\rho _s , {\overline{W}}_s} \Big [ \exp \big (- \lambda \int _0^\sigma {\textrm{d}}A_u\big ) \Big ] \right) \\&\quad = \lambda \cdot {\mathbb {N}}_{x,0} \left( \int _0^{\sigma } {\textrm{d}}A_{s} \, f(\eta _{s} , {\overline{W}}_{s} ) \exp \Big ( {-\int \rho _s({\textrm{d}}h) \, u_{G_\lambda } (W_s(h)) } \Big ) \right) , \end{aligned}$$

where in the last line we used Proposition 4.13. The statement of the lemma now follows by applying (4.27) and recalling that , under \(P^0\). \(\square \)

For simplicity, in the rest of the section we write:

$$\begin{aligned} (\rho ^A_{\mathfrak {t}}, {\overline{W}}_{{\mathfrak {t}}}^A):= (\rho _{A^{-1}_{{\mathfrak {t}}}}, {\overline{W}}_{A^{-1}_{{\mathfrak {t}}}}), \end{aligned}$$

and \( {\overline{W}}_{{\mathfrak {t}}}^A:=( W_{{\mathfrak {t}}}^A, \Lambda _{{\mathfrak {t}}}^A)\) – remark that in particular we have \(H^A_{\mathfrak {t}} = {\widehat{\Lambda }}^A_{{\mathfrak {t}}}\). Let us now decompose \({\overline{W}}_{{\mathfrak {t}}}^A\) in terms of its excursion intervals away from x. To be more precise, we need to introduce some notation. For every \(r>0\) and \({\overline{{\text {w}}}}:=({\text {w}},\ell )\in {\mathcal {W}}_{{\overline{E}}}\), we set:

$$\begin{aligned} \tau _r^{+}({\overline{{\text {w}}}}):= \inf \big \{ h \geqslant 0: \ell (h) > r \big \}. \end{aligned}$$

Remark that since \(\ell \) is continuous, \(r \mapsto \tau _r^{+}({\overline{{\text {w}}}})\) is càdlàg in \([0, {\widehat{\ell }})\) and we write \(\tau _{r-}^{+}({\overline{{\text {w}}}})\) for the left limit of \(\tau ^{+}({\overline{{\text {w}}}})\) at \(r \in [0, {\widehat{\ell }}]\), with the convention \(\tau _{0-}^{+}({\overline{{\text {w}}}})=\tau _0^{+}({\overline{{\text {w}}}})\). Moreover, \(\tau ({\overline{{\text {w}}}})\) and \(\tau ^+({\overline{{\text {w}}}})\) are related by \(\tau _r({\overline{{\text {w}}}})=\tau _{r-}^{+}({\overline{{\text {w}}}})\). Similarly and with analogous conventions, under \(\Pi _{y,0}\) for \(y\in E\) we will write \(\tau _{r}^+(\xi ):=\inf \{t\geqslant 0: ~ {\mathcal {L}}_t> r\}\) for every \(r\geqslant 0\), and observe that a.s. for every \(r \geqslant 0\) we have \(\tau _{r-}^+(\xi ) = \tau _{r}(\xi )\). The advantage of working with \(\tau ^{+}(\xi )\) instead of \(\tau (\xi )\) is that, under \(\Pi _{x,0}\), the process \(\tau ^{+}(\xi )\) is a subordinator. Moreover, by Theorem 8 in [4, Chapter IV], its Lévy-Itô decomposition is given by

$$\begin{aligned} \tau _r^+(\xi ) = \sum _{s \leqslant r} \Delta \tau _s^+(\xi ), \quad \quad r \geqslant 0, \end{aligned}$$

since (\(\hbox {H}_{3}\)) ensures that the process \(\tau ^+(\xi )\) does not have a drift part – equivalently, \(\tau ^{+}(\xi )\) is purely discontinuous. For simplicity, when there is no risk of confusion the dependency on \(\xi \) is dropped. For background on the Lévy-Itô decomposition we refer to Section 1 in [4, Chapter I].
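In analytic terms, and writing \(\varpi \) for the Lévy measure of \(\tau ^{+}(\xi )\) under \(\Pi _{x,0}\) – a notation used only in this remark – the absence of drift means that for every \(p, r \geqslant 0\),

$$\begin{aligned} \Pi _{x,0}\big [\exp \big (-p\, \tau _r^{+}(\xi )\big )\big ] = \exp \Big (-r\int _{(0,\infty )}\varpi ({\textrm{d}}u)\big (1-\exp (-pu)\big )\Big ), \end{aligned}$$

without the extra factor \(\exp (-rbp)\) that a drift coefficient \(b>0\) would contribute.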

Getting back to our discussion, under \({\mathbb {N}}^\lambda _{x,0}(\cdot \, | M \geqslant 1)\), let \((r_{j}:~j\in {\mathcal {J}})\) be an enumeration of the jump times of the càdlàg process \((\tau ^{+}_r({\overline{W}}^A_{{\mathfrak {t}}}): 0 \leqslant r < H^A_{\mathfrak {t}} )\) – for technical reasons the indexing is assumed to be measurable with respect to \({\overline{W}}^A_{{\mathfrak {t}}}\). For each \(j \in {\mathcal {J}}\), set

$$\begin{aligned} {\overline{W}}^{A,j}_{{\mathfrak {t}}}:= & {} \Big ( \big ( W^A_{{\mathfrak {t}}}\big (h+\tau _{r_j}({\overline{W}}^A_{{\mathfrak {t}}})\big ), \Lambda ^A_{{\mathfrak {t}}}\big (h+\tau _{r_j}({\overline{W}}^A_{{\mathfrak {t}}})\big )-\Lambda ^A_{{\mathfrak {t}}}\big (\tau _{r_j}({\overline{W}}^A_{{\mathfrak {t}}})\big ) \big ):~h\in [0, \tau ^{+}_{r_j}({\overline{W}}^{A}_{{\mathfrak {t}}}) - \tau _{r_j}({\overline{W}}^{A}_{{\mathfrak {t}}}) ] \Big ), \end{aligned}$$

and

$$\begin{aligned} \langle \rho ^{A,j}_{{\mathfrak {t}}}, f \rangle :=\int \rho _{{\mathfrak {t}}}^A({\textrm{d}}h)f(h -\tau _{r_j}({\overline{W}}^A_{{\mathfrak {t}}}) )\mathbb {1}_{\{\tau _{r_j}({\overline{W}}^A_{{\mathfrak {t}}})<h< \tau ^{+}_{r_j}({\overline{W}}^A_{{\mathfrak {t}}}) \}}. \end{aligned}$$

The first coordinates of the family \(({\overline{W}}^{A,j}_{{\mathfrak {t}}}: j \in {\mathcal {J}})\) correspond to the excursions of \(W^A_{\mathfrak {t}}\) away from x, while the second coordinates are identically zero. We also stress that since \((x,0)\in {\overline{\Theta }}_x\), by Lemma 4.1 the support of \(\rho ^A_{{\mathfrak {t}}}\) is included in \(\bigcup _{j\in {\mathcal {J}}}(\tau _{r_j}({\overline{W}}^A_{{\mathfrak {t}}}),\tau _{r_j}^+({\overline{W}}^A_{{\mathfrak {t}}}))\). Our goal now is to identify the law of \(\sum _{j\in {\mathcal {J}}}\delta _{(r_j, \rho _{{\mathfrak {t}}}^{A,j}, {W}^{A,j}_{{\mathfrak {t}}} )}\). As we shall see, the restriction to the first and last coordinates of this measure is, roughly speaking, a biased version of the excursion point measure of \(\xi \) under \(\Pi _{x,0}\). More precisely, let \(( E^{0}\otimes {\mathcal {N}})_*({\textrm{d}}J,{\textrm{d}}\xi )\) be the measure on \({\mathcal {M}}_f({\mathbb {R}}_+)\otimes {\mathbb {D}}({\mathbb {R}}_+, E)\) defined by

Lemma 5.7

Under \({\mathbb {N}}_{x,0}^{\lambda }(\cdot |M\geqslant 1)\), the random variable \(H_{{\mathfrak {t}}}^A\) is exponentially distributed with parameter \(\lambda /{\widetilde{\psi }}^{-1}(\lambda )\). Moreover, conditionally on \(H_{{\mathfrak {t}}}^A\), the measure:

$$\begin{aligned} \sum \limits _{j\in {\mathcal {J}}}\delta _{(r_j, \rho _{{\mathfrak {t}}}^{A,j}, {W}^{A,j}_{{\mathfrak {t}}})}, \end{aligned}$$

is a Poisson point measure with intensity \(\mathbb {1}_{[0,H_{{\mathfrak {t}}}^A]}(r){\textrm{d}}r ( E^{0}\otimes {\mathcal {N}})_*( {\textrm{d}}J,{\textrm{d}}\xi )\).

Proof

First, we fix two measurable functions \(g: {\mathbb {R}}_+ \mapsto {\mathbb {R}}_+\) and \(f:{\mathbb {R}}_+ \times {\mathcal {M}}_f({\mathbb {R}}_+)\times {\mathbb {D}}({\mathbb {R}}_+, E) \mapsto {\mathbb {R}}_+\). The statement of the lemma will follow by establishing that:

$$\begin{aligned}&{\mathbb {N}}_{x,0}^{\lambda }\big (g(H^{A}_{{\mathfrak {t}}})\exp (-\sum _{j \in {\mathcal {J}}} f(r_j,\rho _{\mathfrak {t}}^{A,j},{W}_{{\mathfrak {t}}}^{A,j} ))\,|\,M\geqslant 1\big ) \nonumber \\&\quad = \frac{\lambda }{{\widetilde{\psi }}^{-1}(\lambda )}\int _{0}^\infty {\textrm{d}}r \, \exp \big ( - r \cdot \frac{\lambda }{{\widetilde{\psi }}^{-1}(\lambda )} \big ) g(r) \nonumber \\&\quad \cdot \exp \Big ( -\int _0^r {\textrm{d}}s \, ( E^{0}\otimes {\mathcal {N}})_*\big [ 1- \exp \big (- f(s, J,\xi ) \big ) \big ] \Big ). \end{aligned}$$
(5.11)

In this direction we recall from (5.3) the identity \({\widetilde{\psi }}^{-1}(\lambda )={\mathbb {N}}_{x,0}^{\lambda }\big (M\geqslant 1\big )\) and, to simplify notation, for every \(\mu \in {\mathcal {M}}({\mathbb {R}}_+)\) and \(a,b\geqslant 0\), we write \(\phi (\mu ,a,b)\) for the measure \(\nu \) defined by:

$$\begin{aligned} \int \nu ({\textrm{d}}h)F(h)=\int _{(a,b)}\mu ({\textrm{d}}h)F(h-a). \end{aligned}$$

Next, under \(\Pi _{x,0}\), denote the excursion point measure of \(\xi \) by \(\sum _{j}\delta _{(r_j, \xi ^{j})}\). Now an application of Lemma 5.6 gives

where in the first equality we used that for every fixed \( r \geqslant 0\), \(\Pi _{x,0}\)–a.e. we have \(\tau ^+_{r-} = \tau ^+_{r}\), and the last equality follows from the fact that \(\tau ^+\) is purely discontinuous and that, thanks to (\(\hbox {H}_{3}\)), under \(P^0\otimes \Pi _{x,0}\), we can write .

We are going to conclude using standard techniques of excursion theory. First remark that if we introduce an i.i.d. collection of measures distributed as under \(P^0\), the previous display can be written in the form:

(5.12)

Since by excursion theory is a Poisson point measure with intensity , we deduce that the expectation under \(E^0\otimes \Pi _{x,0}\) in (5.12) can be written as:

Next, we remark that the previous display equals:

Moreover, by (2.24) the measure is the Lebesgue-Stieltjes measure of a subordinator with Laplace exponent \(p\mapsto \psi (p)/p - \alpha \), which yields

where in the first equality we applied (2.24) and in the last one we used (4.23). Putting everything together we obtain the desired identity (5.11). \(\square \)

To identify the law of \({\mathbb {S}}({{\textbf {T}}}^A)\), we now define the natural candidate for the exploration process of the subordinate tree at time \({\mathfrak {t}}\) – as we already mentioned, this statement is purely heuristic. Let us start by introducing some notation. Still under \({\mathbb {N}}^{\lambda }_{x,0}( \cdot |M \geqslant 1)\), denote the connected components of the open set

$$\begin{aligned} \big \{ s \geqslant A^{-1}_{\mathfrak {t}} :~ {H}_s>\inf \limits _{[A^{-1}_{\mathfrak {t}},s]} {H} \big \} \end{aligned}$$

by \(((\alpha _i, \beta _i):i \in {\mathbb {N}})\), and as usual write \((\rho ^i,{\overline{W}}^{i}):=(\rho ^i,W^i,\Lambda ^i)\) for the subtrajectory associated with the excursion interval \([\alpha _i, \beta _i]\). Further, set \(h_i:= H_{\alpha _i}\) and consider the measure:

$$\begin{aligned} \sum \limits _{i\in {\mathbb {N}}}\delta _{(h_i, \rho ^{i},{\overline{W}}^{i} ) }\,. \end{aligned}$$
(5.13)

By the strong Markov property and (2.23), conditionally on \(({\rho }^A_{{\mathfrak {t}}}, {\overline{W}}^A_{{\mathfrak {t}}})\), the measure (5.13) is a Poisson point measure with intensity \( \rho _{{\mathfrak {t}}}^A({\textrm{d}}h){\mathbb {N}}_{{\overline{W}}_{{\mathfrak {t}}}^A(h)}({\textrm{d}}\rho , {\textrm{d}}{\overline{W}}). \) Next, for every \(j\in {\mathcal {J}}\) we set:

$$\begin{aligned} L^{j}:=\sum \limits _{\tau _{r_j}({\overline{W}}^A_{{\mathfrak {t}}})<h_i<\tau ^{+}_{r_j}({\overline{W}}^A_{{\mathfrak {t}}})}{\mathscr {L}}^{\,r_j}_\sigma (\rho ^{i},{\overline{W}}^i), \end{aligned}$$
(5.14)

which is the total amount of exit local time from the domain \(D_{r_j}\) generated by the excursions glued on the right-spine of \({\overline{W}}^A_{{\mathfrak {t}}}\) at the interval \(\big (\tau _{r_j}({\overline{W}}^A_{\mathfrak {t}}), \tau _{r_j}^{+}({\overline{W}}^A_{\mathfrak {t}})\big )\). Finally, we introduce the measure \(\rho ^{*}_{{\mathfrak {t}}}:=\sum \limits _{j\in {\mathcal {J}}} L^j\cdot \delta _{r_j}\).

Lemma 5.8

We have the following identity in distribution:

$$\begin{aligned} \big (({\widetilde{H}}_{{\mathfrak {t}}},{\widetilde{\rho }}_{{\mathfrak {t}}}):{\widetilde{N}}^{\lambda }(\cdot |{\widetilde{M}}\geqslant 1)\big ) \overset{(d)}{=} \big ((H^{A}_{{\mathfrak {t}}},\rho ^{*}_{{\mathfrak {t}}}):~{\mathbb {N}}^{\lambda }_{x,0}(\cdot |M\geqslant 1)\big ). \end{aligned}$$

In particular, Lemma 5.8 implies that \(H(\rho ^*_{{\mathfrak {t}}})=H^{A}_{{\mathfrak {t}}}\).

Proof

We start by noticing that, by Lemmas 5.5 and 5.7, we already have:

$$\begin{aligned} \big ({\widetilde{H}}_{{\mathfrak {t}}}:{\widetilde{N}}^{\lambda }(\cdot |{\widetilde{M}}\geqslant 1)\big ) \overset{(d)}{=} \big (H^{A}_{{\mathfrak {t}}}:~{\mathbb {N}}^{\lambda }_{x,0}(\cdot |M\geqslant 1)\big ). \end{aligned}$$

Consequently, again by Lemma 5.5 the desired result will follow by showing that, under \({\mathbb {N}}^{\lambda }_{x,0}(\cdot |M\geqslant 1)\) and conditionally on \(H^A_{\mathfrak {t}}\), the measure

$$\begin{aligned} \sum \limits _{j\in {\mathcal {J}}}\delta _{(r_j,L^j)} \end{aligned}$$

is a Poisson point measure with intensity \(\mathbb {1}_{[0,H^A_{\mathfrak {t}}]}({\textrm{d}}h) {\widetilde{\nu }}({\textrm{d}}z)\), where the measure \({\widetilde{\nu }}\) is characterized by (5.8). Observe that since \(\rho ^*_{{\mathfrak {t}}}\) takes values in \(M_p({\mathbb {R}}_+)\), the same reasoning employed in the proof of Lemma 5.5 allows us to conclude that characterizing the law of the measure in the previous display also characterizes the law of \(\rho ^*_{{\mathfrak {t}}}\) in \({\mathcal {M}}_f({\mathbb {R}}_+)\). In this direction, we work in the rest of the proof under \({\mathbb {N}}^{\lambda }_{x,0}(\cdot | M\geqslant 1)\) and recall that, conditionally on \((\rho ^A_{{{\mathfrak {t}}}}, {\overline{W}}^A_{{\mathfrak {t}}})\), the measure (5.13) is a Poisson point measure with intensity \( \rho ^A_{{{\mathfrak {t}}}}({\textrm{d}}h){\mathbb {N}}_{{\overline{W}}^A_{{{\mathfrak {t}}}}(h)}({\textrm{d}}\rho , {\textrm{d}}{\overline{W}})\). In particular, (5.14) entails that conditionally on \((\rho ^A_{{{\mathfrak {t}}}}, {\overline{W}}^A_{{\mathfrak {t}}})\), the random variables \((L^{j}: j \in {\mathcal {J}})\) are independent. Moreover, since by definition \(u_p(y)={\mathbb {N}}_{y,0}(1-\exp (-p {\mathscr {L}}^{\,0}_\sigma ))\), the translation invariance of the local time \({\mathcal {L}}\) gives

$$\begin{aligned} {\mathbb {N}}^{\lambda }_{x,0}\big (\exp (-pL^j)\,|\,\rho ^A_{{{\mathfrak {t}}}}, {\overline{W}}^A_{{\mathfrak {t}}})= & {} \exp \Big (-\int _{\tau _{r_j}({\overline{W}}^A_{{\mathfrak {t}}})}^{\tau _{r_j}^+({\overline{W}}^A_{{\mathfrak {t}}})}\rho ^A_{{\mathfrak {t}}}({\textrm{d}}h)u_{p}\big (W^A_{{\mathfrak {t}}}(h)\big )\Big ) \\= & {} \exp \Big (-\int \rho ^{A,j}_{{\mathfrak {t}}}({\textrm{d}}h)u_{p}\big (W^{A,j}_{{\mathfrak {t}}}(h)\big )\Big ), \end{aligned}$$

for every \(j\in {\mathcal {J}}\). It will then be convenient to introduce, for \((\mu , {\text {w}})\in {\mathcal {M}}_f({\mathbb {R}}_+)\times {\mathcal {W}}_E\), the measure \(\textrm{m}_{\mu ,{\text {w}}}\) on \({\mathbb {R}}_+\) defined through its Laplace transform:

$$\begin{aligned} \int {\textrm{m}}_{\mu ,{\text {w}}}({\textrm{d}}z)\exp (-p z) =\exp \Big (-\int \mu ({\textrm{d}}h) u_{p}\big ({\text {w}}(h\wedge \zeta _{\text {w}})\big )\Big ). \end{aligned}$$

Notice that since \(u_0(y)=0\) for every \(y\in E\), the measure \({\textrm{m}}_{\mu ,{\text {w}}}\) is a probability measure (take \(p=0\) in the previous display). The map \((\mu ,{\text {w}})\mapsto \textrm{m}_{\mu ,{\text {w}}}\) takes values in \({\mathcal {M}}_f({\mathbb {R}}_+)\) and it is straightforward to see that it is measurable. Let us mention that only the case \(H(\mu )=\zeta ({\text {w}})\) will be of use, and in this case we have \({\text {w}}(h\wedge \zeta _{\text {w}})={\text {w}}(h)\) in the previous equation. Next, remark that by our previous discussion, for all bounded measurable functions \(G: {\mathbb {R}}_+ \rightarrow {\mathbb {R}}_+\) and \(f: {\mathbb {R}}^2 \rightarrow {\mathbb {R}}_+\) we have:

$$\begin{aligned}&{\mathbb {N}}^{ \lambda }_{x,0}\Big ( G(H^{A}_{{\mathfrak {t}}}) \exp (-\sum _{j \in {\mathcal {J}}} f(r_j,L^{j}))\,|\,M\geqslant 1\Big )\\&\quad ={\mathbb {N}}^{\lambda }_{x,0}\Big ( G(H^{A}_{{\mathfrak {t}}}) \prod _{j \in {\mathcal {J}}} \int {\textrm{m}}_{\rho ^{A,j}_{{\mathfrak {t}}},W^{A,j}_{{\mathfrak {t}}}}({\textrm{d}}z)\exp (- f(r_j,z)) \,\Big |\,M\geqslant 1\Big )\\&\quad ={\mathbb {N}}^{\lambda }_{x,0}\Big ( G(H^A_{\mathfrak {t}}) \exp (-\sum _{j \in {\mathcal {J}}} f^*(r_j,\rho ^{A,j}_{\mathfrak {t}}, W_{{\mathfrak {t}}}^{A,j}))\,\Big |\,M\geqslant 1\Big ), \end{aligned}$$

where \(f^{*}(r,\mu ,{\text {w}}):=-\log \big (\int {\textrm{m}}_{\mu , {\text {w}}}({\textrm{d}}z) \exp (-f(r,z))\big ).\) Now, we can apply Lemma 5.7 to get:

$$\begin{aligned}&{\mathbb {N}}^{ \lambda }_{x,0}\Big ( G(H^{A}_{{\mathfrak {t}}}) \exp (-\sum _{j \in {\mathcal {J}}} f(r_j,L^{j}))\,\Big |\,M\geqslant 1\Big ) \\&\quad = {\mathbb {N}}^{\lambda }_{x,0}\Bigg ( G(H^{A}_{{\mathfrak {t}}}) \exp \Bigg (-\int _0^{H^A_{{\mathfrak {t}}}} {\textrm{d}}r \, ( E^{0}\otimes {\mathcal {N}})_* \Big [ \int \textrm{m}_{ J, \xi }({\textrm{d}}z) \Big (1-\exp \big (-f(r, z)\big ) \Big ) \Big ] \Bigg ) \Bigg ), \end{aligned}$$

and it follows that conditionally on \(H^A_{{\mathfrak {t}}}\) the measure \(\sum _{j \in {\mathcal {J}}} \delta _{(r_j,L^j)}\) is a Poisson point measure with intensity:

$$\begin{aligned} \mathbb {1}_{[0,H^A_{{\mathfrak {t}}}]}(r) {\textrm{d}}r \, ( E^{0}\otimes {\mathcal {N}})_*\big [ \textrm{m}_{ J, \xi } ({\textrm{d}}z) ]. \end{aligned}$$

To conclude, we need to show that the measure \(( E^{0}\otimes {\mathcal {N}})_*\big [ \textrm{m}_{ J, \xi } ({\textrm{d}}z) ]\) is precisely \({\widetilde{\nu }}( {\textrm{d}}z)\). In this direction, remark that:

Then, (2.24) entails that the previous display is equal to

$$\begin{aligned}&{\mathcal {N}}\Big (1-\exp \Big (-\int _0^\sigma {\textrm{d}}h\,\frac{\psi (u_p(\xi (h)))-\psi (u_{G_\lambda }(\xi (h)))}{u_p(\xi (h))-u_{G_\lambda }(\xi (h))}\Big )\Big )\\&\quad -{\mathcal {N}}\Big (1-\exp \Big ( -\int _0^\sigma {\textrm{d}}h\,\frac{\psi (u_{G_\lambda }(\xi (h)))}{u_{G_\lambda }(\xi (h))}\Big )\Big ). \end{aligned}$$

Finally, by Lemma 4.8, the previous display is precisely the right-hand side of (5.8), which completes the proof. \(\square \)

We can now identify \({\mathbb {S}}({{\textbf {T}}}^A)\) in terms of our functionals. In this direction, for every \(i \in {\mathbb {N}}\), we introduce \((\rho ^{i,k},W^{i,k},\Lambda ^{i,k})_{k\in {\mathcal {K}}_{i}}\), the excursions of \((\rho ^i,W^{i},\Lambda ^i-\Lambda ^i_0)\) outside \(D_0={\overline{E}}{\setminus } \{(x,0)\}\). In particular, the family \((\rho ^{i,k}, {\overline{W}}^{i,k})_{k\in {\mathcal {K}}_i}\) is in one-to-one correspondence with the connected components \([a_{i,k},b_{i,k}]\), \(k\in {\mathcal {K}}_i\), of the open set \(\{s\in [0,\sigma ({\overline{W}}^i)]:\tau _{\Lambda ^i_0}({\overline{W}}^{i}_s)<\zeta _s({\overline{W}}_s^{i})\}\), in such a way that \((\rho ^{i,k}, W^{i,k},\Lambda ^{i,k}+\Lambda _0^i)\) is the subtrajectory of \((\rho ^{i},{\overline{W}}^{i})\) associated with the interval \([a_{i,k},b_{i,k}]\). In the time scale of \(((\rho _{s},{\overline{W}}_{s}):s\geqslant 0)\), the excursion \((\rho ^{i,k},W^{i,k},\Lambda ^{i,k} + \Lambda ^i_0 )\) corresponds to the subtrajectory associated with \([\alpha _{i,k},\beta _{i,k}]\), where \(\alpha _{i,k}:=\alpha _i+a_{i,k}\) and \(\beta _{i,k}:=\alpha _i + b_{i,k}\). Next, for each \(k \in {\mathcal {K}}_i\), we introduce the point process \({\mathcal {P}}^{i,k}_{t}:={\mathcal {P}}_{(A_{\alpha _{i,k}}+t)\wedge A_{\beta _{i,k}}}-{\mathcal {P}}_{A_{\alpha _{i,k}}}\) and we set:

$$\begin{aligned} {\mathcal {M}}:=\sum \limits _{i\in {\mathbb {N}}}\sum \limits _{k\in {\mathcal {K}}_{i}}\delta _{(\Lambda _{0}^i(0), \rho ^{i,k}, {\overline{W}}^{i,k}, {\mathcal {P}}^{i,k})}. \end{aligned}$$

An application of the Markov property at time \(A^{-1}_{{\mathfrak {t}}}\) and the special Markov property applied to the domain \(D_0\) shows that, conditionally on \(\rho ^*_{{\mathfrak {t}}}\), the measure \({\mathcal {M}}\) is a Poisson point measure with intensity \(\rho ^*_{{\mathfrak {t}}}({\textrm{d}}r) {\mathbb {N}}_{x,0}^\lambda ({\textrm{d}}\rho , {\textrm{d}}{\overline{W}}, {\textrm{d}}{\mathcal {P}})\). For every \(j\in {\mathcal {J}}\), consider

$$\begin{aligned} M_j:=\#\Big \{ \big (\Lambda _{0}^i(0), \rho ^{i,k}, {\overline{W}}^{i,k}, {\mathcal {P}}^{i,k}\big )\in {\mathcal {M}}:~\Lambda _0^{i} (0) =r_j\text { and }{\mathcal {P}}^{i,k}_{A_\sigma ({\overline{W}}^{i,k})} \geqslant 1\Big \}, \end{aligned}$$

and denote the elements of \(\{ (r_j, M_j), j\in {\mathcal {J}}:~M_{j} \geqslant 1\}\) arranged in increasing order with respect to \(r_j\) by \(\big ( (r^\circ _1, M^\circ _1), \dots , (r^\circ _{R},M^\circ _{R}) \big ). \) We now remark that by construction we have:

$$\begin{aligned} {\mathbb {S}}({{\textbf {T}}}^A) =\big ((r^\circ _1, M^\circ _1), \dots , (r^\circ _{R},M^\circ _{R}), (H^A_{{\mathfrak {t}}}, -1) \big ), \end{aligned}$$
(5.15)

and, in particular, \(K = \sum _{p=1}^R M^\circ _{p}\) which is the number of atoms \((\Lambda _{0}^i(0),\rho ^{i,k},{\overline{W}}^{i,k},{\mathcal {P}}^{i,k}) \in {\mathcal {M}}\) with at least one Poissonian mark. Finally, we write

$$\begin{aligned} \mathscr {E} := ((\rho ^{q}_\circ , \overline{W}^q_\circ , \mathcal {P}^q_\circ ):~ 1 \leqslant q \leqslant K)\end{aligned}$$

for the collection of these marked excursions enumerated in counterclockwise order. Remark that, for every \(1 \leqslant q \leqslant K\), \({{{\textbf {T}}}}^A_q\) is the embedded tree associated with \({\widehat{\Lambda }}^q_\circ \) – time changed by \(A(\rho ^q_\circ , {\overline{W}}_\circ ^q)\) – and marked by \({\mathcal {P}}^q_\circ \). We are now in a position to prove Proposition 5.4.

Proof of Proposition 5.4

For every \(h \geqslant 0\) with \({\widetilde{\rho }}_{{\mathfrak {t}}}(\{ h\}) > 0\), we define the restricted measure \(\widetilde{{\mathcal {M}}}^{(h)}:= \widetilde{{\mathcal {M}}}\mathbb {1}_{\{{\widetilde{h}}^i = h\}}\). Similarly, for every \(r \geqslant 0\) satisfying \({\rho }^*_{{\mathfrak {t}}}(\{ r \}) > 0\), we set \({\mathcal {M}}^{(r)}:={\mathcal {M}}\mathbb {1}_{\{ \Lambda _0^{i} (0) = r \}}\). We shall write \(\widetilde{{\mathcal {M}}}^{(h)}({\widetilde{M}} \geqslant 1)\) and \({\mathcal {M}}^{(r)}(M \geqslant 1)\) respectively for the number of atoms in \(\widetilde{{\mathcal {M}}}^{(h)}\) and \({\mathcal {M}}^{(r)}\) with at least one Poissonian mark. Next, we introduce the following families respectively under \({\widetilde{N}}^\lambda (\cdot |{\widetilde{M}} \geqslant 1)\) and \({\mathbb {N}}^{\lambda }_{x,0}(\cdot | M \geqslant 1)\):

$$\begin{aligned} \Big \{ \big (h \mathbb {1}_{\{ \widetilde{{\mathcal {M}}}^{(h)}({\widetilde{M}} \geqslant 1) \geqslant 1 \}}, \, \widetilde{{\mathcal {M}}}^{(h)}({\widetilde{M}} \geqslant 1) \big ): h \geqslant 0, \, {\widetilde{\rho }}_{{\mathfrak {t}}}(\{ h \})> 0 \Big \} \cup \Big \{ ({\widetilde{H}}_{{\mathfrak {t}}},-1 ) \Big \},\nonumber \\ \end{aligned}$$
(5.16)

and

$$\begin{aligned} \Big \{ \big (r \mathbb {1}_{\{ {\mathcal {M}}^{(r)}(M \geqslant 1) \geqslant 1 \}}, \, {\mathcal {M}}^{(r)}(M \geqslant 1) \big ): r \geqslant 0, \, \rho ^*_{{\mathfrak {t}}}(\{ r \})> 0 \Big \} \cup \Big \{ ({H}^A_{{\mathfrak {t}}},-1 ) \Big \},\nonumber \\ \end{aligned}$$
(5.17)

where by Lemma 5.8, we have respectively that \(H(\rho ^*_{{\mathfrak {t}}}) = H^A_{\mathfrak {t}}\), \(H({\widetilde{\rho }}_{{\mathfrak {t}}}) = {\widetilde{H}}_{\mathfrak {t}}\). Recall that, under \({\widetilde{N}}^\lambda (\cdot |{\widetilde{M}} \geqslant 1,{\widetilde{\rho }}_{{\mathfrak {t}}})\), the measure \(\widetilde{{\mathcal {M}}}\) is a Poisson point measure with intensity \({\widetilde{\rho }}_{{\mathfrak {t}}}({\textrm{d}}h){\widetilde{N}}^\lambda ({\textrm{d}}\rho , \, {\textrm{d}}{\mathcal {P}})\) and similarly, under \({\mathbb {N}}^{\lambda }_{x,0}(\cdot | M \geqslant 1, \rho _{{\mathfrak {t}}}^*)\), the measure \({\mathcal {M}}\) is a Poisson point measure with intensity \(\rho ^*_{{\mathfrak {t}}}({\textrm{d}}r) {\mathbb {N}}_{x,0}^\lambda ({\textrm{d}}\rho , {\textrm{d}}{\overline{W}}, {\textrm{d}}{\mathcal {P}})\). Consequently, by restriction properties of Poisson measures, under \({\widetilde{N}}^\lambda (\cdot |{\widetilde{M}} \geqslant 1,{\widetilde{\rho }}_{{\mathfrak {t}}})\), the variables \((\widetilde{{\mathcal {M}}}^{(h)}({\widetilde{M}} \geqslant 1): \, {\widetilde{\rho }}_{{\mathfrak {t}}}(\{h\}) > 0)\) are independent Poisson random variables with intensity \({\widetilde{\rho }}_{{\mathfrak {t}}}(\{ h \}){\widetilde{N}}^{\lambda }({\widetilde{M}} \geqslant 1)\) and, under \({\mathbb {N}}^{\lambda }_{x,0}(\cdot | M \geqslant 1, \rho _{{\mathfrak {t}}}^*)\), the variables \(({\mathcal {M}}^{(r)}(M \geqslant 1): \, \rho ^*_{{\mathfrak {t}}}(\{r\}) > 0)\) are also independent Poisson random variables, this time with intensity \(\rho ^*_{{\mathfrak {t}}}(\{ r \}) {\mathbb {N}}_{x,0}^{\lambda }(M \geqslant 1)\). Now, recall from Lemma 5.8 the identity

$$\begin{aligned} \big ({\widetilde{\rho }}_{{\mathfrak {t}}}:{\widetilde{N}}^{\lambda }(\cdot |{\widetilde{M}}\geqslant 1)\big ) \overset{(d)}{=} \big (\rho ^{*}_{{\mathfrak {t}}}:~{\mathbb {N}}^{\lambda }_{x,0}(\cdot |M\geqslant 1)\big ). \end{aligned}$$

Since \({\widetilde{N}}^\lambda ({\widetilde{M}} \geqslant 1) = {\mathbb {N}}_{x,0}^\lambda (M \geqslant 1)\), this ensures that the families (5.16) and (5.17) have the same distribution. Moreover, the measures \({\widetilde{\rho }}_{{\mathfrak {t}}}\) and \(\rho _{{\mathfrak {t}}}^*\) being atomic, the families (5.7), (5.15) correspond respectively to the subset of elements of (5.16) and (5.17) with non-null entries. This gives the first statement of the proposition.

To establish (ii), it suffices to show that conditionally on \({\mathbb {S}}(\widetilde{{{\textbf {T}}}})\), the marked excursions \(\widetilde{{\mathscr {E}}}\) are distributed as \({\widetilde{K}}\) independent copies with law \({\widetilde{N}}^\lambda ( {\textrm{d}}H, {\textrm{d}}{\mathcal {P}} |{\widetilde{M}} \geqslant 1)\) and that, conditionally on \({\mathbb {S}}({{\textbf {T}}}^A)\), the marked excursions \({\mathscr {E}}\) are distributed as K independent copies with law \({\mathbb {N}}_{x,0}^\lambda ({\textrm{d}}{\overline{W}}, {\textrm{d}}{\mathcal {P}}|M \geqslant 1)\). Remark that our previous reasoning already implies that \(\widetilde{{\mathscr {E}}}\) and \({\mathscr {E}}\) satisfy the desired property if we do not take into account the ordering. However, this is not enough and to keep track of the ordering we proceed as follows:

We start by studying \(\widetilde{{\mathscr {E}}}\) under \(\widetilde{{N}}^\lambda (\cdot | {\widetilde{M}} \geqslant 1)\) and we introduce \((\widetilde{{\mathscr {I}}}_s:s \geqslant {\mathfrak {t}})\), the running infimum of \((\langle {\widetilde{\rho }}_{s}, 1 \rangle - \langle {\widetilde{\rho }}_{{\mathfrak {t}}}, 1 \rangle : s \geqslant {\mathfrak {t}})\). Next, we consider the measure

$$\begin{aligned} \sum _{i \in {\mathbb {N}}}\delta _{(-\widetilde{{\mathscr {I}}}_{{\widetilde{\alpha }}_i},{\widetilde{\rho }}^{i}, \widetilde{{\mathcal {P}}}^i) }, \end{aligned}$$
(5.18)

and we stress that, by the strong Markov property and the discussion below (2.22), conditionally on \({\mathcal {F}}_{{\mathfrak {t}}}\) this measure is a Poisson point measure with intensity \(\mathbb {1}_{[0, \langle {\widetilde{\rho }}_{{\mathfrak {t}}},1 \rangle ]}(u) {\textrm{d}}u~ {\widetilde{N}}^\lambda ( {\textrm{d}}\rho ,{\textrm{d}}{\mathcal {P}})\). Moreover, its image under the transformation \(s \mapsto H(\kappa _{s} {\widetilde{\rho }}_{{\mathfrak {t}}} )\), applied to the first coordinate, is precisely \(\widetilde{{\mathcal {M}}}\). In particular, the collection \(\big (({\widetilde{h}}^\circ _1, {\widetilde{M}}^\circ _1), \dots , ({\widetilde{h}}^\circ _{{\widetilde{R}}},{\widetilde{M}}^\circ _{{\widetilde{R}}}), ({\widetilde{H}}_{{\mathfrak {t}}},-1 )\big )\) only depends on \({\widetilde{\rho }}_{{\mathfrak {t}}}\) and \(\big (\widetilde{{\mathscr {I}}}_{{\widetilde{\alpha }}_i}:~i\geqslant 0 \text { with } \widetilde{{\mathcal {P}}}^{i}_{\sigma ({\widetilde{\rho }}^{\,i})}\geqslant 1\big )\). Remark that the ordered marked excursions \(\widetilde{{\mathscr {E}}}\) correspond precisely to the atoms \(H({\widetilde{\rho }}^{\,i})\) of (5.18) with \(\widetilde{{\mathcal {P}}}^{i}_{\sigma ({\widetilde{\rho }}^{\,i})}\geqslant 1\), when considered in decreasing order with respect to \(-\widetilde{{\mathscr {I}}}_{{\widetilde{\alpha }}_i}\). Since \(H({\widetilde{\rho }}_{{\mathfrak {t}}}) = {\widetilde{H}}_{\mathfrak {t}}\), we deduce by restriction properties of Poisson measures that, conditionally on \(({\widetilde{\rho }}_{\mathfrak {t}},{\widetilde{K}})\), the collection \(\widetilde{{\mathscr {E}}}\) is independent of \({\mathbb {S}}(\widetilde{{{\textbf {T}}}})\) and formed by \({\widetilde{K}}\) i.i.d. variables with distribution \({\widetilde{N}}^\lambda ({\textrm{d}}\rho ,{\textrm{d}}{\mathcal {P}} | {\widetilde{M}} \geqslant 1)\), as wanted.

Let us now turn our attention to the distribution of \({\mathscr {E}}\) under \({\mathbb {N}}_{x,0}^\lambda (\cdot |M\geqslant 1)\). Similarly, under \({\mathbb {N}}_{x,0}^\lambda (\cdot |M\geqslant 1)\) we consider \(({\mathscr {I}}_s:~s \geqslant A_{{\mathfrak {t}}}^{-1})\), the running infimum of \((\langle \rho _{s}, 1 \rangle - \langle \rho _{A_{\mathfrak {t}}^{-1}}, 1 \rangle : s \geqslant A_{\mathfrak {t}}^{-1} )\) as well as the measure

$$\begin{aligned} \sum _{i \in {\mathbb {N}}}\delta _{(-{\mathscr {I}}_{\alpha _i},\rho ^{i}, W^i) }. \end{aligned}$$
(5.19)

Once again, by the strong Markov property and (2.22), conditionally on \({\mathcal {F}}_{A_{\mathfrak {t}}^{-1}}\), the measure (5.19) is a Poisson point measure with intensity \(\mathbb {1}_{[0, \langle \rho _{{\mathfrak {t}}}^A,1 \rangle ]}(u) {\textrm{d}}u~ {\mathbb {N}}^\lambda _{{\overline{W}}^A_{{\mathfrak {t}}}(H( \kappa _{u}\rho ^A_{{\mathfrak {t}}} ))}( {\textrm{d}}\rho ,{\textrm{d}}{\overline{W}})\). We now introduce the process:

$$\begin{aligned} V_t:=\sum \limits _{i\in {\mathbb {N}}} {\mathscr {L}}^{\,\Lambda ^i_0}_{t\wedge \beta _i-t\wedge \alpha _i}(\rho ^{i}, {\overline{W}}^i),\quad t\geqslant 0, \end{aligned}$$

where \(V_\infty =\langle \rho _{{\mathfrak {t}}}^*,1\rangle <\infty \) by Lemma 5.8. Recall that \((\rho ^{i,k}, {\overline{W}}^{i,k})_{k\in {\mathcal {K}}_i}\) stands for the excursions of \((\rho ^{i}, W^{i},\Lambda ^{i}-\Lambda ^i_0)\) outside \(D_0\) and we stress that in the time scale of \(((\rho _{s},{\overline{W}}_{s}):s\geqslant 0)\), the excursion \((\rho ^{i,k}, W^{i,k},\Lambda ^{i,k}+\Lambda _0^i)\) corresponds to the subtrajectory associated with \([\alpha _{i,k},\beta _{i,k}]\), where \(\alpha _{i,k}:=\alpha _i+a_{i,k}\) and \(\beta _{i,k}:=\alpha _i + b_{i,k}\). To simplify notation, write \(\text {Tr}(\rho ^i,{\overline{W}}^{i})\) for the truncation of \((\rho ^i,{\overline{W}}^{i})\) to the domain \(D_{\Lambda ^i_0}\). An application of the strong Markov property combined with the special Markov property in the form given in Theorem 3.8 implies that, conditionally on \(\sum \limits _i \delta _{(-{\mathscr {I}}_{\alpha _i},\text {Tr}(\rho ^i, {\overline{W}}^i))}\), the measure:

$$\begin{aligned} \sum \limits _{i\in {\mathbb {N}},k\in {\mathcal {K}}_i} \delta _{(V_{\alpha _{i,k}},\rho ^{i,k},{\overline{W}}^{i,k},{\mathcal {P}}^{i,k})} \end{aligned}$$
(5.20)

is a Poisson point measure with intensity \(\mathbb {1}_{[0,\langle \rho _{{\mathfrak {t}}}^*,1\rangle ]}(p){\textrm{d}}p~{\mathbb {N}}_{x,0}^{\lambda }({\textrm{d}}\rho ,{\textrm{d}}{\overline{W}},{\textrm{d}}{\mathcal {P}})\). The conclusion is now similar to the previous discussion on \(\widetilde{{\mathscr {E}}}\), and therefore we will only provide a condensed exposition. In this direction, we claim that the collection \(((r^\circ _1, M^\circ _1), \dots , (r^\circ _{R},M^\circ _{R}), ( H^A_{\mathfrak {t}}, -1) )\) can be recovered from the pair

$$\begin{aligned} \sum \limits _{i \in {\mathbb {N}}} \delta _{(-{\mathscr {I}}_{\alpha _i},\text {Tr}(\rho ^i, {\overline{W}}^i))} \quad \text { and } \quad \Big (V_{\alpha _{i,k}}: i \in {\mathbb {N}}, k \in {\mathcal {K}}_i \text { with }{\mathcal {P}}^{i,k}_{A_\sigma ({\overline{W}}^{i,k})}\geqslant 1\Big ), \end{aligned}$$

by making use of the mapping \(r \mapsto \sum _{(-{\mathscr {I}}_{\alpha _i}) \leqslant r}{\mathscr {L}}_\sigma ^{\Lambda ^i_0}(\rho ^i, {\overline{W}}^i)\) and the fact that \(\Lambda _0^i(0)\) can be read from \(\text {Tr}(\rho ^i,{\overline{W}}^i)\). This claim follows from two key observations. Firstly, \({\mathscr {L}}^{\, \Lambda _0^i}_\sigma (\rho ^i, {\overline{W}}^i)\) is measurable with respect to \(\text {Tr}(\rho ^i, {\overline{W}}^i)\), as stated in Proposition 3.4. Secondly, we have the equality \(H^A_{{\mathfrak {t}}} = \sup _{i\in {\mathbb {N}}} \Lambda ^i_0(0)\), which holds since the measure \({\mathcal {M}}\), conditionally on \(\rho ^*_{\mathfrak {t}}\), is a Poisson measure with intensity \(\rho ^*_{\mathfrak {t}}({\textrm{d}}r) {\mathbb {N}}^\lambda _{x,0}\), and \(H(\rho ^*_{{\mathfrak {t}}})=H^A_{\mathfrak {t}}\) by Lemma 5.8. In the interest of brevity, we leave some of the details to the reader. Now notice that the ordered marked excursions \({\mathscr {E}}\) correspond precisely to the atoms of (5.20) with \({\mathcal {P}}^{i,k}_{A_\sigma ({\overline{W}}^{i,k})}\geqslant 1\), in decreasing order with respect to the process V, since V is non-decreasing and all the values \(\{V_{\alpha _{i,k}}:i\in {\mathbb {N}}, k\in {\mathcal {K}}_i\}\) are distinct. Putting everything together, we deduce by restriction properties of Poisson measures that, conditionally on \( \sum \limits _{i\in {\mathbb {N}}} \delta _{(-{\mathscr {I}}_{\alpha _i},\text {Tr}(\rho ^i, {\overline{W}}^i))}\) and K, the collection \({\mathscr {E}}\) is independent of \({\mathbb {S}}({{\textbf {T}}}^A)\) and composed of K i.i.d. random variables with distribution \({\mathbb {N}}^\lambda _{x,0}({\textrm{d}}\rho ,{\textrm{d}}{\overline{W}},{\textrm{d}}{\mathcal {P}} | M \geqslant 1)\). This completes the proof of Proposition 5.4. \(\square \)
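
As a reading aid, we make explicit the elementary identity behind the recovery of the order of the excursions in the last step; it follows by unpacking the definition of \(V\), using that the intervals \([\alpha _j, \beta _j]\) are pairwise disjoint, that \(-{\mathscr {I}}\) is non-decreasing and constant on each of them, and that \(\beta _j - \alpha _j\) is the lifetime of \((\rho ^j, {\overline{W}}^j)\). Namely, for every \(i \in {\mathbb {N}}\) and \(k \in {\mathcal {K}}_i\),

$$\begin{aligned} V_{\alpha _{i,k}} = \sum _{j:\, -{\mathscr {I}}_{\alpha _j} < -{\mathscr {I}}_{\alpha _i}} {\mathscr {L}}^{\,\Lambda ^j_0}_{\sigma }(\rho ^{j}, {\overline{W}}^{j}) + {\mathscr {L}}^{\,\Lambda ^i_0}_{a_{i,k}}(\rho ^{i}, {\overline{W}}^{i}), \end{aligned}$$

where the sum accounts for the excursions occurring before the interval \([\alpha _i, \beta _i]\).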

Notation index

  • \({\mathbb {D}}({\mathbb {R}}_+, M)\), for an arbitrary Polish space M, stands for the space of M-valued càdlàg paths indexed by \({\mathbb {R}}_+\), endowed with the Skorokhod topology

  • X canonical process in \({\mathbb {D}}({\mathbb {R}}_+, {\mathbb {R}})\) (Sect. 2.1)

  • \(\psi \) Laplace exponent of a Lévy process with Lévy-Khintchine triplet \((\alpha , \beta , \pi )\) (Sect. 2.1)

  • H height process (Sect. 2.1)

  • \({\mathcal {M}}_f({\mathbb {R}}_+)\) set of finite measures on \({\mathbb {R}}_+\) (Sect. 2.1)

  • \(H(\mu ):= \sup \text {supp } \mu \) for \(\mu \in {\mathcal {M}}_f({\mathbb {R}}_+)\) (Sect. 2.1)

  • \(\kappa _a\mu \) pruning operation for \(\mu \in {\mathcal {M}}_f({\mathbb {R}}_+)\) and \(a \geqslant 0\) (Sect. 2.1)

  • \([\mu , \nu ]\) concatenation of \(\mu , \nu \in {\mathcal {M}}_f({\mathbb {R}}_+)\) with \(H(\mu ) < \infty \) (Sect. 2.1)

  • \(\langle \mu , f \rangle \) integral of a measurable \(f: {\mathbb {R}}_+ \rightarrow {\mathbb {R}}\) with respect to \(\mu \) (Sect. 2.1)

  • \(\rho ^\mu \) exploration process started from \(\mu \in {\mathcal {M}}_f({\mathbb {R}}_+)\) (Sect. 2.1)

  • \({{\textbf {P}}}_\mu \) law of \(\rho ^\mu \) for \(\mu \in {\mathcal {M}}_f({\mathbb {R}}_+)\) (Sect. 2.1)

  • \(\eta \) dual of the exploration process (Sect. 2.1)

  • N excursion measure at 0 of the reflected Lévy process \(X-I\) (Sect. 2.1)

  • \(\sigma _e = \sup \{t \geqslant 0: e(t) \ne 0 \}\) lifetime of \(e \in {\mathbb {D}}({\mathbb {R}}_+, {\mathbb {R}})\) (Sect. 2.1)

  • \({\mathcal {T}}_e\) tree coded by the continuous non-negative function \(e:{\mathbb {R}}_+ \rightarrow {\mathbb {R}}_+\) (Sect. 2.2)

  • \(d_e\) metric on \({\mathcal {T}}_e\) (Sect. 2.2)

  • \(m_e(s,t)\) infimum of e in the interval \([s,t]\), for \(0 \leqslant s \leqslant t < \infty \) (Sect. 2.2)

  • \(p_e\) canonical projection from \({\mathbb {R}}_+\) to \({\mathcal {T}}_e\) (Sect. 2.2)

  • \(\text {Mult}_{i}({\mathcal {T}}_e)\) points of multiplicity \(i \in {\mathbb {N}}\) in \({\mathcal {T}}_e\) (Sect. 2.2)

  • E a Polish space with metric \(d_E\) (Sect. 2.3)

  • \(\xi \) canonical process in \({\mathbb {C}}({\mathbb {R}}_+, E)\), the space of continuous functions indexed by \({\mathbb {R}}_+\) taking values in E (Sect. 2.3)

  • \(\Pi _y\) law of an E-valued continuous Markov process started from \(y\in E\) (Sect. 2.3)

  • \({\mathcal {W}}_E\) space of finite E-valued paths (Sect. 2.3)

  • \(\zeta _{\text {w}}\) lifetime of \({\text {w}}\in {\mathcal {W}}_E\) (Sect. 2.3)

  • \({\widehat{{\text {w}}}}:= {\text {w}}(\zeta _{\text {w}})\) for \({\text {w}}\in {\mathcal {W}}_E\) (Sect. 2.3)

  • \((\rho , W)\) canonical process in \({\mathbb {D}}({\mathbb {R}}_+, {\mathcal {M}}_f({\mathbb {R}}_+) \times {\mathcal {W}}_E )\) (Sect. 2.3)

  • \({\mathcal {M}}^{0}_{f}:=\big \{\mu \in {\mathcal {M}}_{f}({\mathbb {R}}_{+}):\,H(\mu )<\infty \ \text { and } \text {supp } \mu = [0,H(\mu )]\big \}\cup \{0\}\) (Sect. 2.3)

  • \(\Theta :=\big \{(\mu , {\text {w}}) \in {\mathcal {M}}_f^0 \times {\mathcal {W}}_{E}:~H(\mu )=\zeta _{\text {w}}\big \}\) subset of initial conditions for the Lévy snake (Sect. 2.3)

  • \(\zeta (\omega ) = (\zeta _{\omega _s}: s \geqslant 0)\) lifetime process of a continuous \({\mathcal {W}}_E\)-valued path \(\omega \) (Sect. 2.3)

  • \({\mathbb {P}}_{\mu , {\text {w}}}\) law in \({\mathbb {D}}({\mathbb {R}}_+, {\mathcal {M}}_f({\mathbb {R}}_+)\times {\mathcal {W}}_E )\) of the Lévy snake started from \((\mu , {\text {w}}) \in \Theta \) (Sect. 2.3)

  • \({\mathbb {N}}_y\) excursion measure away from \((0,y) \in {\mathcal {M}}_f({\mathbb {R}}_+) \times {\mathcal {W}}_E\) of the Lévy snake (Sect. 2.3)

  • \({\mathcal {S}}_{\mu , {\text {w}}}\) subset of \({\mathbb {D}}({\mathbb {R}}_+, {\mathcal {M}}_f({\mathbb {R}}_+) \times {\mathcal {W}}_E )\) of snake paths started from \({(\mu , {\text {w}}) \, \in \, \Theta }\) (Sect. 2.3)

  • \({\mathcal {S}}:= \bigcup _{(\mu , {\text {w}}) \, \in \, \Theta } {\mathcal {S}}_{\mu , {\text {w}}}\) set of snake paths

  • \((U^{(1)}, U^{(2)})\) a two-dimensional subordinator with exponent (2.24) (Sect. 2.3)

  • Lebesgue-Stieltjes measure of \((U^{(1)}, U^{(2)})\) restricted to [0, a], for \(a \geqslant 0\) (Sect. 2.3)

  • \(\tau _{D}(\text {w}):=\inf \big \{t\in [0,\zeta _{\text {w}}]: ~ \text {w}(t)\notin D\big \}\) exit time of \({\text {w}}\in {\mathcal {W}}_E\) from the open set D (Sect. 3)

  • \(V^D_t(\uprho , \omega ):= \int _0^t {\textrm{d}}s \, \mathbb {1}_{\{ \zeta _{\omega _s} \leqslant \tau _D(\omega _s) \}}\) time spent by a path \((\uprho , \omega ) \in {\mathbb {D}}({\mathbb {R}}_+, {\mathcal {M}}_f({\mathbb {R}}_+) \times {\mathcal {W}}_E )\) in the open domain \(D\subset E\) up to time \(t \geqslant 0\) (Sect. 3.1)

  • \(\Gamma _s^{D}(\uprho , \omega ):=\inf \big \{t\geqslant 0: V_t^D(\uprho , \omega ) > s\big \}\), \(s \geqslant 0\), right-inverse of \(V^D(\uprho , \omega )\) (Sect. 3.1)

  • \(\text {tr}_{D}\big (\uprho ,\omega \big ):=(\uprho _{\Gamma _s^{D}(\uprho , \omega )},\omega _{\Gamma _s^{D}(\uprho , \omega )})_{s\in {\mathbb {R}}_+}\) truncation of \((\uprho , \omega ) \in {\mathbb {D}}({\mathbb {R}}_+, {\mathcal {M}}_f({\mathbb {R}}_+) \times {\mathcal {W}}_E )\) to D (Sect. 3.1)

  • \( {\mathcal {F}}_D = \sigma ( \text {tr}_D(\rho , W)_s: s \geqslant 0 )\) sigma-field generated by the paths of the Lévy snake before they exit the open domain D (Sect. 3.1)

  • \(L^D = (L^D_t: t \geqslant 0)\) exit local time from D (Sect. 3.1)

  • \(u_{g}^{D}(y):={\mathbb {N}}_{y}\big (1-\exp (-\langle {\mathcal {Z}}^{D},g\rangle )\big )\), for \(y\in D\) (Sect. 3.2)

  • \(E_*:= E {\setminus } \{ x \}\) and \({\overline{E}}:= E \times {\mathbb {R}}_+\) (Sect. 4)

  • \({\overline{{\text {w}}}} = ({\text {w}}, \ell )\) elements of \({\mathcal {W}}_{{\overline{E}}}\) (Sect. 4)

  • \({\overline{\Theta }}\) set of pairs \((\mu , {\overline{{\text {w}}}})\in {\mathcal {M}}_f^0 \times {\mathcal {W}}_{{\overline{E}}}\) (Sect. 4)

  • \({\overline{\Theta }}_x\) subset of \({\overline{\Theta }}\) satisfying conditions (i) and (ii) from Sect. 4 (Sect. 4)

  • \(\tau _r({\overline{{\text {w}}}}):= \inf \{ h \geqslant 0:~{\overline{{\text {w}}}}(h)=(x,r)\}\) for \({\overline{{\text {w}}}} = ({\text {w}}, \ell ) \in {\mathcal {W}}_{{\overline{E}}}\) (Sect. 4)

  • \({\mathcal {N}}\) excursion measure of \(\xi \) away from x (Sect. 4)

  • \(D_r:= {\overline{E}} \setminus \{(x,r)\}\) for \(r \geqslant 0\) (Sect. 4.1)

  • \({\widetilde{\psi }}\) Laplace exponent of a Lévy process defined by the relation (4.12), and with Lévy-Khintchine triplet \(({\widetilde{\alpha }}, {\widetilde{\beta }}, {\widetilde{\pi }})\) (Sect. 4.1)