1 Introduction

The p-spin interaction version of the Sherrington–Kirkpatrick model [13] is a spin system defined on the hypercube \(S_N\equiv \{-1,+1\}^N\) whose random Hamiltonian is given in terms of a Gaussian process \(X_{\cdot }: S_N \rightarrow {\mathbb {R}}\) defined by

$$\begin{aligned} X_\sigma = \left( {\begin{array}{c}N\\ p\end{array}}\right) ^{-\frac{1}{2}} \sum _{1 \le i_1<i_2<\dots <i_p \le N} J_{i_1,i_2,\dots ,i_p} \sigma _{i_1} \sigma _{i_2}\cdots \sigma _{i_p}, \end{aligned}$$
(1.1)

where \(\{J_{i_1,\dots ,i_p}\}_{i_1,\dots , i_p=1}^\infty \) is a family of independent, standard normal random variables defined on some probability space \((\Omega , {\mathcal {F}}, {\mathbb {P}})\). Alternatively, X is characterised uniquely as the Gaussian field on \(S_N\) with mean zero and covariance

$$\begin{aligned} {\mathbb {E}}\left( X_\sigma X_{\sigma ^{\prime }}\right) \equiv f_{p,N} \left( R_N(\sigma ,\sigma ^{\prime })\right) , \end{aligned}$$
(1.2)

where

$$\begin{aligned} R_N(\sigma ,\sigma ^{\prime })\equiv \frac{1}{N}(\sigma ,\sigma ^{\prime })\equiv \frac{1}{N}\sum _{i=1}^N \sigma _i\sigma ^{\prime }_i \end{aligned}$$
(1.3)

is the overlap between the configurations \(\sigma , \sigma ^{\prime }\), and \(f_{p,N}\) is of the form (see [5])

$$\begin{aligned} f_{p,N}\left( x\right) =\sum _{k=0}^{[p/2]} d_{p-2k}N^{-k}x^{p-2k}(1+O(1/N)), \end{aligned}$$
(1.4)

where

$$\begin{aligned} d_{p-2k} \equiv (-1)^k \left( {\begin{array}{c}p\\ 2k\end{array}}\right) (2k-1)!!. \end{aligned}$$
(1.5)

In particular,

$$\begin{aligned} f_{p,N} \left( x\right) = x^p \left( 1+O(1/N) \right) , \quad \text {uniformly for} \; x\in [-1, 1]. \end{aligned}$$
(1.6)
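This covariance structure is easy to check numerically. The following is a minimal sketch (not part of the original analysis; all names and parameter values are ours) that samples the disorder, builds \(X_\sigma \) directly from (1.1) for two fixed configurations, and compares the empirical covariance with \(R_N(\sigma ,\sigma ^{\prime })^p\), in line with (1.6):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
N, p, n_samples = 12, 3, 20000

idx = list(combinations(range(N), p))        # the set of increasing p-tuples
sigma = rng.choice([-1, 1], size=N)          # two fixed configurations
sigma_prime = rng.choice([-1, 1], size=N)

# sigma_A = product of spins over each p-tuple A
sA = np.array([np.prod(sigma[list(A)]) for A in idx])
sAp = np.array([np.prod(sigma_prime[list(A)]) for A in idx])

# draw the disorder many times and form X_sigma per (1.1)
J = rng.standard_normal((n_samples, len(idx)))
X = J @ sA / np.sqrt(len(idx))
Xp = J @ sAp / np.sqrt(len(idx))

R = sigma @ sigma_prime / N                  # overlap, cf. (1.3)
print("empirical cov:", np.mean(X * Xp))
print("R_N^p        :", R**p)
```

The two printed numbers agree up to the Monte Carlo error and the \(O(1/N)\) discrepancy between the exact covariance \(f_{p,N}(R_N)\) and \(R_N^p\).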

The model with \(p=2\) is the classical SK model, introduced in [13]; the general version with \(p\ge 3\) was introduced by Gardner [10]. The Hamiltonian is given by

$$\begin{aligned} H_N(\sigma ) \equiv -\sqrt{N} X_\sigma , \end{aligned}$$
(1.7)

and the partition function is

$$\begin{aligned} Z_{N}(\beta ) \equiv {\mathbb {E}}_\sigma \left[ {\mathrm e}^{-{\beta }H_N(\sigma )}\right] \equiv 2^{-N}\sum _{\sigma \in S_N} {\mathrm e}^{ {\beta }\sqrt{N} X_\sigma }. \end{aligned}$$
(1.8)

Finally, minus the free energy is

$$\begin{aligned} F_{N}({\beta }) \equiv \frac{1}{N} \ln Z_{N}({\beta }). \end{aligned}$$
(1.9)

For \(m \in (0,1)\), let

$$\begin{aligned} \phi (m)\equiv \frac{1-m}{2}\ln (1-m) +\frac{1+m}{2}\ln (1+m), \end{aligned}$$
(1.10)

and

$$\begin{aligned} \beta ^{2}_p\equiv \inf _{0<m<1}(1+m^{-p})\phi (m), \end{aligned}$$
(1.11)

for \(p\ge 3\), and \({\beta }_2=1\). It is a well-known consequence of Gaussian concentration of measure that the free energy is self-averaging, in the sense that

$$\begin{aligned} \lim _{N\uparrow \infty } F_N({\beta })= \lim _{N\uparrow \infty } {\mathbb {E}}\left[ F_N({\beta })\right] , \hbox {a.s.}\end{aligned}$$
(1.12)

The existence of the limit on the right-hand side was established in a celebrated paper by Guerra and Toninelli [11]. For \({\beta }<{\beta }_p\), it is even true that the so-called quenched free energy on the right-hand side is equal to the so-called annealed free energy, that is

$$\begin{aligned} \lim _{N\uparrow \infty } {\mathbb {E}}\left[ F_N({\beta })\right] =\lim _{N \rightarrow \infty } \frac{1}{N} \ln {\mathbb {E}}[Z_N({\beta })] = \frac{{\beta }^2}{2}. \end{aligned}$$
(1.13)

This fact was first proven for \(p=2\) by Aizenman, Lebowitz, and Ruelle [1] and a very simple proof was given later by Talagrand [14]. The proof in the case \(p\ge 3\) is also due to Talagrand [15]. Note that

$$\begin{aligned} \lim _{p\uparrow +\infty } \beta _p=\sqrt{2\ln 2}, \end{aligned}$$
(1.14)

which is the well-known critical temperature of the REM [8]. It is, however, not known whether \({\beta }_p\) is the true critical value in general. It is natural to ask about fluctuations around this limit. This was first done in [1] using a cluster expansion, and then by Comets and Neveu [7], who used the martingale central limit theorem (CLT) to analyse the fluctuations of the free energy in the case \(p=2\), for all \({\beta }<1\). Note that \({\beta }=1\) corresponds to the inverse critical temperature for \(p=2\). The case \(p\ge 3\) was analysed by Bovier, Kurkova, and Löwe [5], also using martingale methods combined with truncation techniques. They established a CLT in a range \({\beta }<{\tilde{{\beta }}}_p\), for some \({\tilde{{\beta }}}_p<{\beta }_p\). Our first result extends this to the entire range \({\beta }<{\beta }_p\).
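Before stating the result, here is a quick numerical sanity check of the definition (1.11) and the limit (1.14). This sketch is ours and plays no role in the proofs; the grid-based minimisation is a crude stand-in for the infimum:

```python
import numpy as np

def phi(m):
    # phi from (1.10)
    return 0.5 * ((1 - m) * np.log(1 - m) + (1 + m) * np.log(1 + m))

def beta_p(p, n_grid=1_000_000):
    # beta_p^2 = inf_{0<m<1} (1 + m^{-p}) phi(m), cf. (1.11)
    m = np.linspace(1e-3, 1 - 1e-9, n_grid)
    return np.sqrt(np.min((1 + m ** (-p)) * phi(m)))

for p in (3, 4, 10, 100):
    print(f"beta_{p} ~ {beta_p(p):.6f}")
print("sqrt(2 ln 2) =", np.sqrt(2 * np.log(2)))  # REM value, cf. (1.14)
```

The printed values increase towards \(\sqrt{2\ln 2}\approx 1.1774\) as p grows, consistent with (1.14).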

Theorem 1.1

For all \(p\ge 3\) and \(\beta < \beta _p\),

$$\begin{aligned} N^{\frac{p}{2}} \left( F_{N}(\beta ) -\frac{\beta ^2}{2}\right) \buildrel {\mathcal D}\over \rightarrow \mathcal {N}\left( 0, \frac{\beta ^4p!}{2} \right) , \hbox {as} \; N\uparrow \infty . \end{aligned}$$
(1.15)

The proof of Theorem 1.1 is very different from that in [5] and in a sense closer to that of Aizenman et al. [1] in the case \(p=2\). In fact, we show that the limiting Gaussian comes from a very explicit term

$$\begin{aligned} J_N(\beta ) \equiv \frac{1}{2N}{\mathbb {E}}_\sigma \left( {\beta }^2 H_N(\sigma )^2\right) =\frac{\beta ^2}{2 \left( {\begin{array}{c}N\\ p\end{array}}\right) } \sum _{1\le i_1<\ldots <i_p \le N} J_{i_1, i_2,\dots ,i_p}^2. \end{aligned}$$
(1.16)
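Behind the second equality in (1.16) lies only the orthogonality of the spin monomials under \({\mathbb {E}}_\sigma \): writing \(\sigma _A\equiv \sigma _{i_1}\cdots \sigma _{i_p}\) and \(J_A\equiv J_{i_1,\dots ,i_p}\) for an increasing p-tuple \(A=(i_1,\dots ,i_p)\) (a shorthand introduced formally in (2.10)),

$$\begin{aligned} {\mathbb {E}}_\sigma \left( H_N(\sigma )^2\right) =\frac{N}{\left( {\begin{array}{c}N\\ p\end{array}}\right) }\sum _{A,B}J_A J_B\,{\mathbb {E}}_\sigma \left( \sigma _A\sigma _B\right) =\frac{N}{\left( {\begin{array}{c}N\\ p\end{array}}\right) }\sum _{A}J_A^2, \end{aligned}$$

since \({\mathbb {E}}_\sigma (\sigma _A\sigma _B)=\mathbbm {1}_{\{A=B\}}\) for increasing p-tuples A, B.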

\(J_N({\beta })\) is a sum of independent, square integrable random variables. Note that in the case \(p=2\), the fluctuations emerge from a sum over suitably weighted loops. By contrast, for \(p \ge 3\), by the law of large numbers, we have, for all \({\beta }\),

$$\begin{aligned} \lim _{N \rightarrow \infty } J_N(\beta ) = \frac{{\beta }^2}{2}\,, \hbox {a.s.}, \end{aligned}$$
(1.17)

and by the central limit theorem,

$$\begin{aligned} N^{\frac{p}{2}} \left( J_N(\beta ) - \frac{{\beta }^2}{2} \right) \buildrel {\mathcal D}\over \rightarrow \mathcal {N}\left( 0, \frac{\beta ^4p!}{2} \right) , \hbox {as} \,N\uparrow \infty . \end{aligned}$$
(1.18)
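For the reader's convenience, we record the short computation behind (1.17) and (1.18): the \(J_{i_1,\dots ,i_p}^2\) are i.i.d. with mean 1 and variance 2, so that

$$\begin{aligned} {\mathbb {E}}\left[ J_N({\beta })\right] =\frac{{\beta }^2}{2},\qquad \mathrm {Var}\left( J_N({\beta })\right) =\frac{{\beta }^4}{4\left( {\begin{array}{c}N\\ p\end{array}}\right) ^2}\sum _{1\le i_1<\dots <i_p\le N}\mathrm {Var}\left( J_{i_1,\dots ,i_p}^2\right) =\frac{{\beta }^4}{2\left( {\begin{array}{c}N\\ p\end{array}}\right) }, \end{aligned}$$

and since \(N^p\left( {\begin{array}{c}N\\ p\end{array}}\right) ^{-1}\rightarrow p!\), the variance of \(N^{p/2}\left( J_N({\beta })-{\beta }^2/2\right) \) converges to \({\beta }^4p!/2\), as claimed.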

That \(J_N\) and \(F_N\) have the same limits is not a coincidence. In fact, we prove Theorem 1.1 by showing that

$$\begin{aligned} \lim _{N\uparrow \infty } N^{\frac{p}{2}} \left( F_N({\beta }) -J_N(\beta ) \right) =0, \end{aligned}$$
(1.19)

in probability. This naturally leads to the question of whether, upon proper rescaling, the quantity \(F_N({\beta })-J_N({\beta })\) converges to a random variable. The positive answer is the main result of this paper and is given by the following theorem.

Theorem 1.2

For \(p>2\) and for all \(\beta < \beta _p\), we have

$$\begin{aligned} A_N(p) \Big (F_{N}(\beta )-J_N(\beta ) \Big )\buildrel {\mathcal D}\over \rightarrow \mathcal {N}\left( \mu ({\beta },p), {\sigma \left( \beta ,p\right) }^2\right) , \end{aligned}$$
(1.20)

where

  1. (i)

    For p even,

    $$\begin{aligned} A_N(p)= N^{\left( \frac{3p}{4}-\frac{1}{2}\right) },\quad \mu (\beta ,p)=0, \end{aligned}$$
    (1.21)

    and

    $$\begin{aligned} \sigma \left( \beta ,p\right) ^2= \frac{\beta ^6}{3}{\mathbb {E}}\left[ \left( \sum _{k=0}^{p/2} d_{p-2k}{X}^{p-2k}\right) ^3\right] . \end{aligned}$$
    (1.22)
  2. (ii)

    For p odd,

    $$\begin{aligned} A_N(p)= N^{p-1},\quad \mu (\beta ,p)= \frac{-\beta ^4p!}{4}, \end{aligned}$$
    (1.23)

    and

    $$\begin{aligned} \sigma \left( \beta ,p\right) ^2= \frac{\beta ^8}{12}{\mathbb {E}}\left[ \left( \sum _{k=0}^{[p/2]} d_{p-2k}{X}^{p-2k}\right) ^4 \right] -\frac{\beta ^8 p!^2}{8}. \end{aligned}$$
    (1.24)

Here X is a standard normal random variable and \(d_{p-2k}=(-1)^k \frac{p(p-1)\dots (p-2k+1)}{2^kk!}\), in accordance with (1.5). Note that \(\sum _{k=0}^{[p/2]} d_{p-2k}X^{p-2k}=\mathrm {He}_p(X)\) is the p-th (probabilist's) Hermite polynomial evaluated at X.

Compared to (1.15), Theorem 1.2 resolves the limiting picture at a finer scale. In fact, in the course of the proof we also identify exactly the terms arising in the expansion of the partition function that converge to the Gaussian in (1.20). Thus, one might envision that, once these terms are again subtracted, on a smaller scale there appears yet another limit theorem. This might even continue ad infinitum. Proving such a result appears, however, rather formidable and is left to future research.

It is interesting to compare this picture with the \(p=2\) case. There, the variance of the limiting Gaussian distribution blows up at the critical temperature, and thus detects the phase transition. For \(p>2\), this is not the case for the Gaussian from Theorem 1.1, nor for the corrections given by Theorem 1.2. This is of course completely in line with the predictions of theoretical physics pertaining to the so-called Gardner transition [10].

Results similar to Theorem 1.1 have been obtained for several related models. Chen et al. [6] obtained results analogous to [5] for mixed p-spin SK models, i.e. models whose Hamiltonian is a linear combination of terms of the type (1.1) with different values of p, where only even p appear; recently, this was extended to the general case by Banerjee and Belius [3]. For spherical SK models, related results were obtained by Baik and Lee [2]. We are not aware of any results like Theorem 1.2. After the prepublication of this article, the preprint of Dey and Wu [9] appeared, in which similar results are shown via a hypergraph counting approach. The paper is organised as follows. In the next section, we present the proof of Theorem 1.1. Many of the results obtained in the course of the proof are re-used in Sect. 3, where Theorem 1.2 is proven. In the appendix we state two frequently used facts about Gaussian random variables for quick reference.

2 Proof of Theorem 1.1

In view of (1.18), to prove Theorem 1.1, it is enough to establish that (1.19) holds for all \({\beta }< {\beta }_p\). Setting

$$\begin{aligned} \mathcal {Z}_N({\beta }) \equiv Z_{N}({\beta }){\mathrm e}^{-NJ_N({\beta })}, \end{aligned}$$
(2.1)

this amounts to showing that, for \({\beta }< {\beta }_p\),

$$\begin{aligned} \lim _{N \rightarrow \infty }N^{\frac{p-2}{2}}\ln \mathcal {Z}_N({\beta }) =0, \ \text {in probability}. \end{aligned}$$
(2.2)

The proof of (2.2) turns out to be remarkably difficult if the entire range \({\beta }<{\beta }_p\) is to be covered. This will require a truncation. For \(\epsilon >0\), we set

$$\begin{aligned} {\mathcal {Z}}_N({\beta }) = Z_\epsilon ^{\le } + Z_\epsilon ^{>}, \end{aligned}$$
(2.3)

where

$$\begin{aligned}{} & {} Z_\epsilon ^{\le } \equiv {\mathbb {E}}_\sigma \left( {\mathrm e}^{-{\beta }H_N(\sigma )} \mathbbm {1}_{\{\left| -H_N(\sigma )-{\beta }N\right| \le \epsilon {\beta }N\}}\right) {\mathrm e}^{-N J_N({\beta })}, \nonumber \\{} & {} \quad Z_\epsilon ^{>} \equiv {\mathbb {E}}_\sigma \left( {\mathrm e}^{-{\beta }H_N(\sigma )} \mathbbm {1}_{\{\left| -H_N(\sigma )-{\beta }N\right| > \epsilon {\beta }N\}}\right) {\mathrm e}^{-N J_N({\beta })}, \end{aligned}$$
(2.4)

where we dropped obvious dependencies on the parameters \({\beta }, N\) to lighten the notation.

We decompose

$$\begin{aligned} N^{\frac{p-2}{2}} \ln \mathcal {Z}_N({\beta }) = N^{\frac{p-2}{2}} \ln \left( \frac{\mathcal {Z}_N({\beta }) }{{Z_{\epsilon }^{\le }}} \right) +N^{\frac{p-2}{2}} \ln \left( \frac{{Z_{\epsilon }^{\le }}}{{\mathbb {E}}[Z_{\epsilon }^{\le }]}\right) + N^{\frac{p-2}{2}} \ln {\mathbb {E}}[Z_{\epsilon }^{\le } ] . \end{aligned}$$
(2.5)

The assertion of the theorem then follows from the fact that all three terms on the right-hand side of (2.5) converge to zero in probability.

Proposition 2.1

  1. (i)

    For any \(q \in {\mathbb {N}}\), \({\beta }< {\beta }_p \) and small enough \(\epsilon = \epsilon ({\beta }, p) >0\),

    $$\begin{aligned} \lim _{N\uparrow +\infty } N^{q} \ln \left( \frac{\mathcal {Z}_N({\beta }) }{{Z_{\epsilon }^{\le }}}\right) =0,\; \text {in probability}. \end{aligned}$$
    (2.6)
  2. (ii)

    For \({\beta }< {\beta }_p \) and small enough \(\epsilon = \epsilon ({\beta }, p) >0,\)

    $$\begin{aligned} \lim _{N\uparrow +\infty } N^{\frac{p-2}{2}} \ln \left( \frac{{Z_{\epsilon }^{\le }}}{{\mathbb {E}}{Z_{\epsilon }^{\le }}}\right) =0,\; \text {in probability}, \end{aligned}$$
    (2.7)
  3. (iii)

    For any \({\beta }\) and any \(0<\epsilon <1\),

    $$\begin{aligned} \lim _{N\uparrow +\infty } N^{\frac{p-2}{2}} \ln {\mathbb {E}}[Z_{\epsilon }^{\le }] =0\,. \end{aligned}$$
    (2.8)

Remark

The fact that (2.6) holds for all \(q\in {\mathbb {N}}\) is not needed here, but will be used in the proof of Theorem 1.2.

The proof of Proposition 2.1 relies on computations of moments that are combinatorially rather complex.

We introduce some convenient notation. First, we denote by \(I_N\) the set of all strictly increasing p-tuples in \(\{1,\dots , N\}\),

$$\begin{aligned} I_N \equiv \left\{ (i_1, i_2, \dots , i_p) \in \{1,\dots , N\}^p,\ i_1< i_2<\dots < i_p \right\} . \end{aligned}$$
(2.9)

For \(A = (i_1, \dots , i_p) \in I_N\) we write

$$\begin{aligned} \sigma _A \equiv \sigma _{i_1} \sigma _{i_2}\cdots \sigma _{i_p},\quad \text {and}\quad J_A \equiv J_{i_1, \dots , i_p}. \end{aligned}$$
(2.10)

We abbreviate

$$\begin{aligned} a_N \equiv \sqrt{N} \left( {\begin{array}{c}N\\ p\end{array}}\right) ^{-1/2}. \end{aligned}$$
(2.11)

We can thus write

$$\begin{aligned} H_N(\sigma ) =- a_N \sum _{A\in I_N} J_A \sigma _A, \qquad \text {and}\quad J_N({\beta }) = \frac{{\beta }^2}{2N} a_N^2 \sum _{A \in I_N} J_A^2. \end{aligned}$$
(2.12)
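Since \(\sum _{A\in I_N}J_A^2\) is a \(\chi ^2\) random variable with \(\left( {\begin{array}{c}N\\ p\end{array}}\right) \) degrees of freedom, the representation (2.12) of \(J_N({\beta })\) also makes the CLT (1.18) easy to check by simulation. A minimal numpy sketch (ours, with illustrative parameter choices):

```python
import numpy as np
from math import comb, factorial

rng = np.random.default_rng(1)
N, p, beta, n_rep = 200, 3, 0.5, 5000

M = comb(N, p)                                   # |I_N|
# J_N(beta) = beta^2/(2 |I_N|) * sum_A J_A^2, cf. (2.12); the sum is chi^2_M
J_N = beta**2 / (2 * M) * rng.chisquare(df=M, size=n_rep)

fluct = N ** (p / 2) * (J_N - beta**2 / 2)
print("empirical variance:", fluct.var())
print("beta^4 p!/2       :", beta**4 * factorial(p) / 2)
```

The two printed variances agree up to sampling error, in line with (1.18).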

For two sequences \(x_N, y_N \ge 0\) we write \(x_N \lesssim y_N\) if \(x_N \le C y_N\) for some numerical constant \(C>0\).

Finally, we will denote by \({\mathfrak {c}}>0\) a numerical constant, not necessarily the same at different occurrences.

2.1 First moments of \({\mathcal {Z}}_N({\beta })\) and \(Z_\epsilon ^\le \), and proof of part (iii) of Proposition 2.1

We first prove the following lemma.

Lemma 2.2

With the notation above,

$$\begin{aligned} {\mathbb {E}}\left[ {\mathcal {Z}}_N({\beta })\right] = 1- \frac{{\beta }^4}{4} N a_N^2 +\frac{{\beta }^8}{32} N^2 a_N^4 +O\left( N^{3-2p}\right) . \end{aligned}$$
(2.13)

Proof

Interchanging the order of integration, we have

$$\begin{aligned} {\mathbb {E}}\left[ {\mathcal {Z}}_N({\beta })\right] = {\mathbb {E}}\left[ {\mathbb {E}}_\sigma \left( {\mathrm e}^{-{\beta }H_N(\sigma )-NJ_N({\beta })} \right) \right] ={\mathbb {E}}_\sigma \left( {\mathbb {E}}\left[ {\mathrm e}^{-{\beta }H_N(\sigma )-NJ_N({\beta })} \right] \right) . \end{aligned}$$
(2.14)

Using (2.12) and the independence of the \(J_A\),

$$\begin{aligned} {\mathbb {E}}\left[ {\mathrm e}^{-{\beta }H_N(\sigma )-NJ_N({\beta })} \right] = \prod _{A \in I_N} {\mathbb {E}}\left[ {\mathrm e}^{{\beta }a_N J_A \sigma _A -\frac{{\beta }^2}{2} a_N^2 J_A^2}\right] . \end{aligned}$$
(2.15)
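The one-dimensional Gaussian integral used here, and again in (2.22) and (2.52) below, is the elementary completing-the-square identity: for a standard normal J,

$$\begin{aligned} {\mathbb {E}}\left[ {\mathrm e}^{bJ-cJ^2}\right] =\int _{{\mathbb {R}}}{\mathrm e}^{bx-cx^2}\,\frac{{\mathrm e}^{-x^2/2}}{\sqrt{2\pi }}\,\textrm{d}x =\frac{1}{\sqrt{1+2c}}\,{\mathrm e}^{\frac{b^2}{2(1+2c)}},\qquad c>-\tfrac{1}{2}. \end{aligned}$$

Here it is applied with \(b={\beta }a_N\sigma _A\) and \(c={\beta }^2a_N^2/2\).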

Computing the Gaussian integral, we get

$$\begin{aligned} {\mathbb {E}}\left[ {\mathrm e}^{{\beta }a_N J_A \sigma _A -\frac{{\beta }^2}{2} a_N^2 J_A^2}\right] = {\mathrm e}^{\left( {\frac{{\beta }^2 a_N^2\sigma _{A}^2}{2\left( 1+{\beta }^2 a_N^2\right) }} \right) } \frac{1}{\sqrt{1+{\beta }^2 a_N^2}}. \end{aligned}$$
(2.16)

Since \(\sigma _{A}^2=1\), this implies that

$$\begin{aligned} {\mathbb {E}}\left[ {\mathcal {Z}}_N({\beta }) \right] = \exp \left( {|I_N|\left( \frac{{\beta }^2 a_N^2}{2\left( 1+{\beta }^2 a_N^2\right) } - \frac{1}{2} \ln \left( 1+{\beta }^2 a_N^2\right) \right) }\right) . \end{aligned}$$
(2.17)

Moreover, using that \(\left| I_N\right| =\left( {\begin{array}{c}N\\ p\end{array}}\right) \) and Taylor expanding, we obtain

$$\begin{aligned} {\mathbb {E}}\left[ {\mathcal {Z}}_N({\beta }) \right]= & {} \exp \left( \frac{\left( {\begin{array}{c}N\\ p\end{array}}\right) }{2} \left( {\beta }^2 a_N^2- {\beta }^4 a_N^4+O( a_N^6)-{\beta }^2 a_N^2+ \frac{ {\beta }^4 a_N^4}{2}\right) \right) \nonumber \\= & {} 1- \frac{{\beta }^4}{4} N a_N^2 +\frac{{\beta }^8}{32} N^2 a_N^4 +O\left( N^{3-2p}\right) , \end{aligned}$$
(2.18)

which is (2.13). \(\square \)

From (2.13) it follows that \(\ln {\mathbb {E}}\left[ \mathcal Z_N({\beta })\right] =O(N a_N^2)\). Since \(Na_N^2 =O(N^{2-p})\), we get \( N^{(p-2)/2 }\ln {\mathbb {E}}\left[ \mathcal Z_N({\beta })\right] =O(N^{1-p/2})\), which tends to zero for \(p\ge 3\). The next lemma states that \(Z_\epsilon ^\le \) and \(\mathcal Z_N({\beta })\) are exponentially close, which will imply (2.8).

Lemma 2.3

For any \(1>\epsilon >0\),

$$\begin{aligned} {\mathbb {E}}\left[ \left| Z_\epsilon ^\le -{\mathcal {Z}}_N({\beta })\right| \right] \le \exp {\left( -{\beta }^2 N \epsilon ^2/2(1+o_{\epsilon }(1)) +O_{\epsilon }(N^{2-p}) \right) }. \end{aligned}$$
(2.19)

Proof

Since \({\mathcal {Z}}_N({\beta })-Z_\epsilon ^\le =Z_\epsilon ^>\), we just have to control the expectation of the latter. Interchanging the order of integration, we obtain, using the Hölder inequality,

$$\begin{aligned} {\mathbb {E}}\left( Z_\epsilon ^>\right)= & {} {\mathbb {E}}_\sigma \left( {\mathbb {E}}\left( {\mathrm e}^{-{\beta }H_N(\sigma )} \mathbbm {1}_{\{\left| -H_N(\sigma )-{\beta }N\right|> \epsilon {\beta }N\}}{\mathrm e}^{-N J_N({\beta })}\right) \right) \nonumber \\\le & {} {\mathbb {E}}_\sigma \left( {\mathbb {E}}\left( {\mathrm e}^{ -q_1 {\beta }H_N(\sigma )} \mathbbm {1}_{\{\left| -H_N(\sigma )-{\beta }N\right|> \epsilon {\beta }N\}}\right) ^{1/ q_1} {\mathbb {E}}\left( {\mathrm e}^{-q_2 N J_N({\beta })} \right) ^{1/ q_2}\right) \nonumber \\= & {} {\mathbb {E}}_\sigma \left( {\mathbb {E}}\left( {\mathrm e}^{ q_1{\beta }\sqrt{N} X_\sigma } \mathbbm {1}_{\{\left| X_\sigma -{\beta }\sqrt{N}\right| > \epsilon {\beta }\sqrt{N}\}}\right) ^{1/ q_1} {\mathbb {E}}\left( {\mathrm e}^{-q_2 N J_N({\beta })} \right) ^{1/ q_2}\right) ,\nonumber \\ \end{aligned}$$
(2.20)

for \(1/ q_1+1/ q_2=1\).

Classical Gaussian estimates (see Fact I in the Appendix) yield that

$$\begin{aligned}{} & {} {\mathbb {E}}\left( {\mathrm e}^{ q_1{\beta }\sqrt{N} X_\sigma } \mathbbm {1}_{\{\left| X_\sigma -{\beta }\sqrt{N}\right|> \epsilon {\beta }\sqrt{N}\}}\right) = {\mathbb {E}}\left( {\mathrm e}^{ q_1{\beta }\sqrt{N} X_\sigma } \mathbbm {1}_{\{X_\sigma > (1+\epsilon ) {\beta }\sqrt{N} \}}\right) \nonumber \\{} & {} \qquad +{\mathbb {E}}\left( {\mathrm e}^{ q_1{\beta }\sqrt{N} X_\sigma } \mathbbm {1}_{\{X_\sigma < (1-\epsilon ) {\beta }\sqrt{N} \}}\right) \nonumber \\{} & {} \quad \le \max _{z\in \{-1,1\}} {\mathrm e}^{ -\frac{(1+z\epsilon )^2 {\beta }^2 N}{2}+q_1(1+z\epsilon ) {\beta }^2 N } \nonumber \\{} & {} \quad = {\mathrm e}^{- \frac{\beta ^2 N}{2}\left( -q_1+\epsilon ^2+\left( 1-q_1\right) \left( 1+2\epsilon \right) \right) } \end{aligned}$$
(2.21)

for N large enough and \(q_1<1+\epsilon \). Note that this bound is independent of \(\sigma \). It remains to calculate the second term on the r.h.s. of (2.20). By independence of the J’s,

$$\begin{aligned} {\mathbb {E}}\left( {\mathrm e}^{-q_2N J_N({\beta })}\right)= & {} \left[ {\mathbb {E}}\left( {\mathrm e}^{ -\frac{q_2{\beta }^2}{2} a_N^2 J_{A}^2}\right) \right] ^{\left( {\begin{array}{c}N\\ p\end{array}}\right) } =\left( 1+q_2{\beta }^2a_N^2\right) ^{-\frac{1}{2} \left( {\begin{array}{c}N\\ p\end{array}}\right) } \nonumber \\= & {} \exp \left( {- {\textstyle {\left( {\begin{array}{c}N\\ p\end{array}}\right) \over 2}} \ln \left( 1+{\textstyle {N{\beta }^2q_2\over \left( {\begin{array}{c}N\\ p\end{array}}\right) }}\right) }\right) =\exp \left( {- {\textstyle {N{\beta }^2 q_2\over 2}} +O(N^{2-p})}\right) .\nonumber \\ \end{aligned}$$
(2.22)

Combining (2.21) and (2.22), we obtain, for any \(1+\epsilon>q_1>1\),

$$\begin{aligned} {\mathbb {E}}\left( Z_\epsilon ^>\right)\le & {} \exp \left( -{\textstyle {{\beta }^2 N\over 2q_1}}\left( \epsilon ^2 +(1-q_1)(1+2\epsilon ) -O(N^{1-p}) \right) \right) . \end{aligned}$$
(2.23)

But this implies the assertion of the lemma by taking \(q_1=1+\epsilon ^3\). \(\square \)

Note that Lemma 2.3 and (2.18) imply that

$$\begin{aligned} {\mathbb {E}}\left[ Z_\epsilon ^\le \right] = 1- \frac{{\beta }^4}{4} N a_N^2 +\frac{{\beta }^8}{32} N^2 a_N^4 +O\left( N^{3-2p}\right) . \end{aligned}$$
(2.24)

Combining Lemma 2.2 and Lemma 2.3 proves (2.8).

2.2 The second moment of \({Z_\epsilon ^{\le }}\), and proof of part (ii) of Proposition 2.1

We set

$$\begin{aligned} \Xi _\epsilon \equiv \frac{{Z_{\epsilon }^{\le }}- {\mathbb {E}}{[Z_{\epsilon }^{\le }]}}{{\mathbb {E}}[{Z_{\epsilon }^{\le }}]} \,. \end{aligned}$$
(2.25)

(2.7) is then equivalent to the following lemma.

Lemma 2.4

For any \(\varepsilon >0\) and \({\beta }<{\beta }_p\),

$$\begin{aligned} \lim _{N\uparrow \infty } {\mathbb {P}}\left( \left| N^{\frac{p-2}{2}} \ln \left( 1+ \Xi _\epsilon \right) \right| \ge \varepsilon \right) = 0. \end{aligned}$$
(2.26)

Proof

Using the Chebyshev inequality and the fact that \(({\mathrm e}^x- 1)^2\ge x^2/2\) for \(|x|\le 1/10\), we obtain, for N large enough,

$$\begin{aligned} {\mathbb {P}}\left( \left| N^{\frac{p-2}{2}} \ln \left( 1+ \Xi _\epsilon \right) \right| \ge \varepsilon \right)\le & {} \frac{{\mathbb {E}}\left[ \Xi _\epsilon ^2\right] }{\left( {\mathrm e}^{\varepsilon N^{1-p/2}}-1\right) ^2} + \frac{{\mathbb {E}}\left[ \Xi _\epsilon ^2\right] }{\left( {\mathrm e}^{-\varepsilon N^{1-p/2}}-1\right) ^2}\nonumber \\\le & {} 8\varepsilon ^{-2} N^{p-2}{{\mathbb {E}}\left[ \Xi _\epsilon ^2\right] }. \end{aligned}$$
(2.27)

Since \({\mathbb {E}}\left[ \Xi _\epsilon ^2\right] =\frac{{\mathbb {E}}\left[ (Z^\le _\epsilon )^2\right] -{\mathbb {E}}\left[ Z^\le _\epsilon \right] ^2}{\left( {\mathbb {E}}\left[ Z^\le _\epsilon \right] \right) ^2}\), and \({\mathbb {E}}\left[ Z^\le _\epsilon \right] \) has already been computed, we only need a precise bound on the second moment of \(Z_\epsilon ^\le \).

We write \({\mathbb {E}}_{\sigma ,\sigma ^{\prime }}={\mathbb {E}}_\sigma {\mathbb {E}}_{\sigma ^{\prime }}\) and set \(\Gamma _N \equiv \{ -1,-1+\frac{2}{N}, \dots ,1\}\). Then, for any function \(G: {\mathbb {R}}\rightarrow {\mathbb {R}}\), one has

$$\begin{aligned} {\mathbb {E}}_{\sigma ,\sigma ^{\prime }}\left[ G\left( {\left( {\begin{array}{c}N\\ p\end{array}}\right) }^{-1}\sum _{i_1<i_2<\dots <i_p}\sigma _{i_1}\sigma _{i_1}^{\prime } \dots \sigma _{i_p}\sigma _{i_p}^{\prime } \right) \right]= & {} {\mathbb {E}}_{\sigma ,\sigma ^{\prime }}\left[ G\left( \hbox {Cov}(X_\sigma ,X_{\sigma ^{\prime }})\right) \right] \nonumber \\= & {} \sum _{m \in \Gamma _N} G[f_{p,N}\left( m\right) ] p_N(m),\nonumber \\ \end{aligned}$$
(2.28)

where \(p_N(m) \equiv {\mathbb {P}}_{\sigma ,\sigma ^{\prime }}( R_N(\sigma ,\sigma ^{\prime })=m)\).

With this in mind, we split the second moment according to the value of the overlap

$$\begin{aligned}{} & {} {\mathbb {E}}\left[ \left( Z_{\epsilon }^{\le }\right) ^2\right] ={\mathbb {E}}_{\sigma ,\sigma ^{\prime }} {\mathbb {E}}\left( {\mathrm e}^{{\beta }\sqrt{N} \left( X_\sigma +X_{\sigma ^{\prime }}\right) } \mathbbm {1}_{\{\left| -H_N(\sigma )-{\beta }N\right| \le \epsilon {\beta }N\}} \mathbbm {1}_{\{\left| -H_N(\sigma ^{\prime })-{\beta }N\right| \le \epsilon {\beta }N\}} {\mathrm e}^{-2NJ_N({\beta })} \right) \nonumber \\{} & {} \quad = {\mathbb {E}}_{\sigma ,\sigma ^{\prime }} {\mathbb {E}}\left( {\mathrm e}^{{\beta }\sqrt{N} \left( X_\sigma +X_{\sigma ^{\prime }}\right) } \mathbbm {1}_{\{\left| -H_N(\sigma )-{\beta }N\right| \le \epsilon {\beta }N\}} \mathbbm {1}_{\{\left| -H_N(\sigma ^{\prime })-{\beta }N\right| \le \epsilon {\beta }N\}} {\mathrm e}^{-2NJ_N({\beta })} \mathbbm {1}_{\{ |R_N(\sigma ,\sigma ^{\prime }) |<\delta \}} \right) \nonumber \\{} & {} \qquad +{\mathbb {E}}_{\sigma ,\sigma ^{\prime }} {\mathbb {E}}\left( {\mathrm e}^{{\beta }\sqrt{N} \left( X_\sigma +X_{\sigma ^{\prime }}\right) } \mathbbm {1}_{\{\left| X_\sigma -{\beta }\sqrt{N}\right| \le \epsilon {\beta }\sqrt{N}\}}\mathbbm {1}_{\{\left| X_{\sigma ^{\prime }}-{\beta }\sqrt{N}\right| \le \epsilon {\beta }\sqrt{N}\}} {\mathrm e}^{-2NJ_N({\beta })} \mathbbm {1}_{\{ |R_N(\sigma ,\sigma ^{\prime }) |\ge \delta \}}\right) \nonumber \\{} & {} \quad \equiv A+B, \end{aligned}$$
(2.29)

where \(2 \epsilon <\delta ^{p}\) and \(\delta ^{p-2}<\frac{1}{2 \beta _p^2}\). We will now prove that the B-term (large overlap) is exponentially small and compute the leading orders of the A-term.

Lemma 2.5

For all \({\beta }<{\beta }_p\), there exists \(\epsilon _0>0\) and a constant \({\mathfrak {c}}\) such that, for all \(0\le \epsilon <\epsilon _0\),

$$\begin{aligned} B \le \exp \left( - {\mathfrak {c}} N \right) . \end{aligned}$$
(2.30)

Lemma 2.6

For any \({\beta }\),

$$\begin{aligned} A = 1-\frac{{\beta }^4N a_N^2}{2} +O \left( N^{3-3p/2}\right) . \end{aligned}$$
(2.31)

Proof of Lemma 2.5

To simplify the notation, set \(B_N \equiv \{ |R_N(\sigma ,\sigma ^{\prime }) |\ge \delta \} \). We relax the constraints by using that

$$\begin{aligned} \mathbbm {1}_{\{\left| X_\sigma -{\beta }\sqrt{N}\right| \le \epsilon {\beta }\sqrt{N}\}}\mathbbm {1}_{\{\left| X_{\sigma ^{\prime }}-{\beta }\sqrt{N}\right| \le \epsilon {\beta }\sqrt{N}\}} \le \mathbbm {1}_{\{\left| X_\sigma +X_{\sigma ^{\prime }}-2{\beta }\sqrt{N}\right| \le 2\epsilon {\beta }\sqrt{N}\}}. \end{aligned}$$
(2.32)

By Hölder’s inequality, we then get

$$\begin{aligned} B \le&\,{\mathbb {E}}_{\sigma ,\sigma ^{\prime }} \bigg ( {\mathbb {E}}\left( {\mathrm e}^{q_1 {\beta }\sqrt{N} \left( X_\sigma +X_{\sigma ^{\prime }}\right) } \mathbbm {1}_{\{\left| X_\sigma +X_{\sigma ^{\prime }}-2{\beta }\sqrt{N}\right| \le 2\epsilon {\beta }\sqrt{N}\}} \right) ^{\frac{1}{q_1}}\nonumber \\&\,{\mathbb {E}}\left( {\mathrm e}^{-2q_2NJ_N({\beta })} \right) ^{\frac{1}{q_2}} \mathbbm {1}_{B_N} \bigg ), \end{aligned}$$
(2.33)

with \(q_1, q_2 > 1\) satisfying \(1/ q_1+1/ q_2=1\). Since \(X_\sigma +X_{\sigma ^{\prime }}\) is a Gaussian random variable with mean zero and variance \(2(1+f_{p,N}(R_N(\sigma ,\sigma ^{\prime })))\), the right-hand side can be written as

$$\begin{aligned}{} & {} {\mathbb {E}}_{\sigma ,\sigma ^{\prime }} \left( {\mathbb {E}}\left( {\mathrm e}^{q_1 {\beta }\sqrt{N\left( 2+2 f_{p,N} \left( R_N(\sigma ,\sigma ^{\prime })\right) \right) } \xi } \mathbbm {1}_{\left\{ \left| \xi \sqrt{2\left( 1+f_{p,N} \left( R_N(\sigma ,\sigma ^{\prime })\right) \right) }-2\beta \sqrt{N} \right| \le 2 \epsilon \beta \sqrt{N} \right\} } \right) ^{\frac{1}{q_1}} \right. \nonumber \\{} & {} \qquad \left. {\mathbb {E}}\left( {\mathrm e}^{-2q_2 NJ_N({\beta })} \right) ^{\frac{1}{q_2}} \mathbbm {1}_{B_N} \right) \end{aligned}$$
(2.34)

where \(\xi \) is a standard Gaussian. As in (2.22),

$$\begin{aligned} {\mathbb {E}}\left( {\mathrm e}^{-2q_2 NJ_N({\beta })} \right) ^{\frac{1}{q_2}} \le {\mathrm e}^{-{\beta }^2 N+ O\left( N^{2-p}\right) }, \end{aligned}$$
(2.35)

and for the first term, we use the following decomposition

$$\begin{aligned}{} & {} {\mathbb {E}}\left( {\mathrm e}^{q_1 {\beta }\sqrt{N\left( 2+2 f_{p,N} \left( R_N(\sigma ,\sigma ^{\prime })\right) \right) } \xi } \mathbbm {1}_{\left\{ \left| \xi \sqrt{2\left( 1+f_{p,N} \left( R_N(\sigma ,\sigma ^{\prime })\right) \right) }-2\beta \sqrt{N} \right| \le 2 \epsilon \beta \sqrt{N} \right\} } \right) \nonumber \\{} & {} \qquad \left( \mathbbm {1}_{f_{p,N} \left( R_N(\sigma ,\sigma ^{\prime })\right) \le 0}+\mathbbm {1}_{f_{p,N}\left( R_N(\sigma ,\sigma ^{\prime })\right)>0} \right) \nonumber \\{} & {} \quad \le {\mathbb {E}}\left( {\mathrm e}^{q_1 {\beta }\sqrt{N\left( 2+2 f_{p,N} \left( R_N(\sigma ,\sigma ^{\prime })\right) \right) } \xi } \mathbbm {1}_{\left\{ \frac{2 \beta \sqrt{N} \left( 1-\epsilon \right) }{\sqrt{2\left( 1+f_{p,N} \left( R_N(\sigma ,\sigma ^{\prime })\right) \right) }} \le \xi \right\} } \right) \mathbbm {1}_{f_{p,N} \left( R_N(\sigma ,\sigma ^{\prime })\right) \le 0} \nonumber \\{} & {} \quad +{\mathbb {E}}\left( {\mathrm e}^{q_1 {\beta }\sqrt{N\left( 2+2 f_{p,N} \left( R_N(\sigma ,\sigma ^{\prime })\right) \right) } \xi } \mathbbm {1}_{\left\{ \xi \le \frac{2 \beta \sqrt{N} \left( 1+\epsilon \right) }{\sqrt{2\left( 1+f_{p,N} \left( R_N(\sigma ,\sigma ^{\prime })\right) \right) }} \right\} } \right) \mathbbm {1}_{f_{p,N}\left( R_N(\sigma ,\sigma ^{\prime })\right) >0},\nonumber \\ \end{aligned}$$
(2.36)

where we use the fact that

$$\begin{aligned}{} & {} \mathbbm {1}_{\left\{ \left| \xi \sqrt{2\left( 1+f_{p,N} \left( R_N(\sigma ,\sigma ^{\prime })\right) \right) }-2\beta \sqrt{N} \right| \le 2 \epsilon \beta \sqrt{N} \right\} }=\mathbbm {1}_{\left\{ \frac{2 \beta \sqrt{N} \left( 1-\epsilon \right) }{\sqrt{2\left( 1+f_{p,N} \left( R_N(\sigma ,\sigma ^{\prime })\right) \right) }} \le \xi \right\} }\\{} & {} \qquad \mathbbm {1}_{\left\{ \xi \le \frac{2 \beta \sqrt{N} \left( 1+\epsilon \right) }{\sqrt{2\left( 1+f_{p,N} \left( R_N(\sigma ,\sigma ^{\prime })\right) \right) }} \right\} } \end{aligned}$$

for the second line, and by estimating one of the indicator functions by 1 in both cases. On \(B_N\), we can now use the first classical Gaussian estimate of Fact I in the Appendix for the term in the second line of (2.36), because

$$\begin{aligned} \frac{2 \beta \sqrt{N} (1-\epsilon )}{\sqrt{2\left( 1+f_{p,N} \left( R_N(\sigma ,\sigma ^{\prime })\right) \right) }}> q_1 {\beta }\sqrt{N\left( 2+2 f_{p,N} \left( R_N(\sigma ,\sigma ^{\prime })\right) \right) }, \end{aligned}$$

for \(q_1>1\) close enough to 1. On \(B_N\), we can use the second classical Gaussian estimate of Fact I in the Appendix for the term in the third line of (2.36). The two Gaussian estimates yield

$$\begin{aligned}{} & {} {\mathbb {E}}\left( {\mathrm e}^{q_1 {\beta }\sqrt{N\left( 2+2 f_{p,N} \left( R_N(\sigma ,\sigma ^{\prime })\right) \right) } \xi } \mathbbm {1}_{\left\{ \left| \xi \sqrt{2\left( 1+f_{p,N} \left( R_N(\sigma ,\sigma ^{\prime })\right) \right) }-2\beta \sqrt{N} \right| \le 2 \epsilon \beta \sqrt{N} \right\} } \right) ^{\frac{1}{q_1}}\mathbbm {1}_{B_N} \nonumber \\\end{aligned}$$
(2.37)
$$\begin{aligned}{} & {} \le 2^{\frac{1}{q_1}} \max _{z\in \{-1,1\}}\exp \left( {-\frac{(1+z\epsilon )^2 {\beta }^2 N}{q_1\left( 1+ f_{p,N} \left( R_N(\sigma ,\sigma ^{\prime })\right) \right) }+(1+z\epsilon ) 2 {\beta }^2 N}\right) \mathbbm {1}_{B_N}. \end{aligned}$$
(2.38)

Combining these two steps, we obtain

$$\begin{aligned}{} & {} B \le 2^{\frac{1}{q_1}} {\mathbb {E}}_{\sigma ,\sigma ^{\prime }} \bigg ( \max _{z\in \{-1,1\}} \mathbbm {1}_{B_N}\exp \left( -\frac{(1+z\epsilon )^2 {\beta }^2 N}{q_1\left( 1+f_{p,N} \left( R_N(\sigma ,\sigma ^{\prime })\right) \right) }+(1+z\epsilon ) 2 {\beta }^2 N\right. \nonumber \\{} & {} \qquad \left. -{\beta }^2 N+ O\left( { N^{2-p}} \right) \right) \bigg ). \end{aligned}$$
(2.39)

By setting \(q_1=1+\epsilon ^3\), one sees that the exponential term in (2.39) is bounded by

$$\begin{aligned} \max _{z\in \{-1,1\}} \exp {\left( -{\beta }^2 N \left( \frac{\epsilon ^2(1+o_{\epsilon }(1))-(1+2\epsilon z)f_{p,N} \left( R_N(\sigma ,\sigma ^{\prime })\right) }{\left( 1+ f_{p,N} \left( R_N(\sigma ,\sigma ^{\prime })\right) \right) }+ O\left( N^{1-p} \right) \right) \right) }. \end{aligned}$$
(2.40)

By Stirling's estimate, we have

$$\begin{aligned} p_N(m)= \left( {\begin{array}{c}N\\ N\frac{(1+m)}{2}\end{array}}\right) 2^{-N} \le \exp {\left( -N\phi (m)\right) }. \end{aligned}$$
(2.41)
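The bound (2.41) is elementary to verify numerically; the following sketch (ours, not from the paper) compares \(p_N(m)\) with \({\mathrm e}^{-N\phi (m)}\) on a small instance:

```python
import numpy as np
from math import comb

def phi(m):
    # phi from (1.10), with the convention 0 * log 0 = 0
    s = 0.0
    for t in ((1 - m) / 2, (1 + m) / 2):
        if t > 0:
            s += t * np.log(2 * t)
    return s

N = 40
for k in (0, 5, 10, 20):
    m = 2 * k / N - 1                  # overlap value in Gamma_N
    p_N = comb(N, k) / 2.0**N          # p_N(m), cf. (2.41)
    print(f"m={m:+.2f}  p_N={p_N:.3e}  exp(-N phi)={np.exp(-N * phi(m)):.3e}")
```

For \(m=\pm 1\) the bound holds with equality; otherwise there is slack of polynomial order, which is immaterial on the exponential scale used here.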

Using (2.28) and plugging (2.40) and (2.41) into (2.39) gives

$$\begin{aligned} B \lesssim \sum _{\begin{array}{c} m \in \Gamma _N, \\ |m |\ge \delta \end{array}} \max _{z\in \{-1,1\}} \exp { \left( N\left( -{\beta }^2 \frac{\epsilon ^2(1+o_{\epsilon }(1))-(1+2 z \epsilon )f_{p,N} \left( m \right) }{\left( 1+ f_{p,N} \left( m\right) \right) } -\phi (m)\right) \right) }. \end{aligned}$$
(2.42)

We write

$$\begin{aligned} \textrm{d}_N\equiv & {} -{\beta }^2 N \frac{\epsilon ^2(1+o_\epsilon (1))-(1+2 z \epsilon )f_{p,N} \left( m \right) }{\left( 1+ f_{p,N} \left( m\right) \right) } -\phi (m) N \nonumber \\= & {} -{\beta }^2 N \frac{\epsilon ^2(1+o_\epsilon (1))}{1+f_{p,N}(m)} + \frac{f_{p,N}(m) N}{1+ f_{p,N}(m)} \left[ {\beta }^2(2 z \epsilon +1) - \left( 1+ f_{p,N}(m)^{-1}\right) \phi (m) \right] \nonumber \\\le & {} \frac{f_{p,N}(m) N}{1+ f_{p,N}(m)} \left[ {\beta }^2(2 z \epsilon +1) - \left( 1+ f_{p,N}(m)^{-1}\right) \phi (m) \right] . \end{aligned}$$
(2.43)

If p is even or \(m\ge 0\), recalling that \({\beta }_p^2\equiv \inf _{0<m<1}(1+m^{-p})\phi (m)\), the last line in (2.43) is

$$\begin{aligned} \le \frac{f_{p,N}(m) N}{1+ f_{p,N}(m)} \left[ {\beta }^2(2\epsilon +1) - {\beta }_p^2 \right] . \end{aligned}$$

Since \(f_{p,N}(m) =m^p+O(1/N)\), for \(|m|\ge \delta \),

$$\begin{aligned} \textrm{d}_N\le -\frac{N \delta ^p+O(1)}{2} ({\beta }_p^2-{\beta }^2(1+2\epsilon )). \end{aligned}$$
(2.44)

This gives

$$\begin{aligned} B\lesssim N {\mathrm e}^{ -\frac{N \delta ^p+O(1)}{2} ({\beta }_p^2-{\beta }^2(1+2\epsilon ))}. \end{aligned}$$
(2.45)

If p is odd and \(m<0\), \(1+f_{p,N}(m)^{-1}\le 0\), and we immediately obtain

$$\begin{aligned} \textrm{d}_N \le -\frac{N\delta ^p {\beta }^2}{2}, \end{aligned}$$
(2.46)

which is even better. This proves Lemma 2.5. \(\square \)

Next we prove Lemma 2.6.

Proof of Lemma 2.6

We have to decompose the term A further according to the value of the overlap. For \(\alpha \) satisfying

$$\begin{aligned} \frac{1}{p}<\alpha <\frac{1}{2}, \end{aligned}$$
(2.47)

we set \(A=A_1+A_2\), where

$$\begin{aligned}{} & {} A_1 \equiv {\mathbb {E}}_{\sigma ,\sigma ^{\prime }} {\mathbb {E}}\left( {\mathrm e}^{{\beta }\sqrt{N} \left( X_\sigma +X_{\sigma ^{\prime }}\right) } \mathbbm {1}_{\{\left| X_\sigma -{\beta }\sqrt{N}\right| \le \epsilon {\beta }\sqrt{N}\}}\mathbbm {1}_{\{\left| X_{\sigma ^{\prime }}-{\beta }\sqrt{N}\right| \le \epsilon {\beta }\sqrt{N}\}} \right. \nonumber \\{} & {} \qquad \left. {\mathrm e}^{-2NJ_N({\beta })} \mathbbm {1}_{N^{-\alpha }\le |R_N(\sigma ,\sigma ^{\prime })|<\delta } \right) , \end{aligned}$$
(2.48)

and

$$\begin{aligned}{} & {} A_2 \equiv {\mathbb {E}}_{\sigma ,\sigma ^{\prime }} \Bigg ( {\mathbb {E}}\left( {\mathrm e}^{{\beta }\sqrt{N} \left( X_\sigma +X_{\sigma ^{\prime }}\right) } \mathbbm {1}_{\{\left| X_\sigma -{\beta }\sqrt{N}\right| \le \epsilon {\beta }\sqrt{N}\}}\mathbbm {1}_{\{\left| X_{\sigma ^{\prime }}-{\beta }\sqrt{N}\right| \le \epsilon {\beta }\sqrt{N}\}} {\mathrm e}^{-2NJ_N({\beta })} \right) \nonumber \\{} & {} \qquad \mathbbm {1}_{|R_N(\sigma ,\sigma ^{\prime })|<N^{-\alpha }} \Bigg ). \end{aligned}$$
(2.49)

The point is that \(A_1\) is very small, even if we drop the constraints on \(X_\sigma \) and \(X_{\sigma ^{\prime }}\), whereas \(A_2\) has to be computed precisely.

Thus, we bound \(A_1\) by

$$\begin{aligned} 0\le A_1\le {\mathbb {E}}_{\sigma ,\sigma ^{\prime }} {\mathbb {E}}\left( {\mathrm e}^{{\beta }\sqrt{N} \left( X_\sigma +X_{\sigma ^{\prime }}\right) } {\mathrm e}^{-2NJ_N({\beta })} \mathbbm {1}_{\{N^{-\alpha }\le |R_N(\sigma ,\sigma ^{\prime })|<\delta \}}\right) . \end{aligned}$$
(2.50)

Using the independence of the Gaussian variables,

$$\begin{aligned} {\mathbb {E}}\left( {\mathrm e}^{{\beta }\sqrt{N} \left( X_\sigma +X_{\sigma ^{\prime }}\right) } {\mathrm e}^{-2NJ_N({\beta })} \right) =\prod _{K \in I_N} {\mathbb {E}}\left( {\mathrm e}^{ {\beta }a_N J_{K}\left( \sigma _{K}+\sigma _{K}^{\prime }\right) -{\beta }^2 a_N^2 J_{K}^2} \right) . \end{aligned}$$
(2.51)

Computing the Gaussian integrals,

$$\begin{aligned} {\mathbb {E}}\left( {\mathrm e}^{ {\beta }a_N J_{K}\left( \sigma _{K}+\sigma _{K}^{\prime }\right) -{\beta }^2 a_N^2 J_{K}^2} \right) ={\mathrm e}^{\left( 1+\sigma _{K} \sigma _{K}^{\prime }\right) \left( \frac{{\beta }^2a_N^2}{2{\beta }^2 a_N^2+1}\right) -\frac{\ln (1+2 {\beta }^2 a_N^2)}{2}}, \end{aligned}$$
(2.52)

and so

$$\begin{aligned} {\mathbb {E}}\left( {\mathrm e}^{{\beta }\sqrt{N} \left( X_\sigma +X_{\sigma ^{\prime }}\right) } {\mathrm e}^{-2NJ_N({\beta })} \right) = {\mathrm e}^{\left( \sum _{K \in I_N} \sigma _{K}\sigma _{K}^{\prime }\right) \left( \frac{{\beta }^2a_N^2}{2 {\beta }^2 a_N^2+1}\right) }{\mathrm e}^{\left( {\begin{array}{c}N\\ p\end{array}}\right) \left( \frac{{\beta }^2 a_N^2}{2{\beta }^2 a_N^2+1} -\frac{\ln (1+2 {\beta }^2 a_N^2)}{2}\right) }. \end{aligned}$$
(2.53)

As in (2.22), we have

$$\begin{aligned} \exp \left( {\left( {\begin{array}{c}N\\ p\end{array}}\right) \left( \frac{{\beta }^2 a_N^2}{2{\beta }^2 a_N^2+1} -\frac{\ln (1+2 {\beta }^2 a_N^2)}{2}\right) }\right) =\exp {\left( - {\beta }^4 N a_N^2+O\left( {N^{3-2p}}\right) \right) }. \end{aligned}$$
(2.54)

Thus

$$\begin{aligned} A_1 \le&\sum _{\begin{array}{c} m \in \Gamma _N \\ N^{-\alpha } \le |m |\le \delta \end{array}} \exp \left( \frac{{\beta }^2 N f_{p,N} \left( m \right) }{2{\beta }^2 a_N^2+1} \right) p_N(m) \nonumber \\ \le&\sum _{\begin{array}{c} m \in \Gamma _N \\ N^{-\alpha } \le |m |\le \delta \end{array}} \exp \left( N\left( {\textstyle {{\beta }^2 f_{p,N} \left( m \right) \over 2{\beta }^2 a_N^2+1}}-{\textstyle {m^2\over 2}}\right) \right) , \end{aligned}$$
(2.55)

where the last inequality uses (2.41). Using the asymptotics for \(f_{p,N}\), we get that

$$\begin{aligned} A_1 \le \sum _{\begin{array}{c} m \in \Gamma _N \\ N^{-\alpha } \le |m |\le \delta \end{array}} \exp { \left( N \frac{m^2}{2} \left( \frac{2 {\beta }^2 m^{p-2}\left( 1+o_N(1)\right) }{2 {\beta }^2 a_N^2+1}-1 \right) \right) }. \end{aligned}$$
(2.56)

In the range of summation,

$$\begin{aligned} \left| \frac{2 {\beta }^2 m^{p-2}\left( 1+o_N(1)\right) }{2 {\beta }^2 a_N^2+1}\right| \lesssim 2{\beta }^2 \delta ^{p-2} < 1, \end{aligned}$$
(2.57)

by assumption on \(\delta \), and thus, using also the lower bound on |m|,

$$\begin{aligned} A_1 \lesssim N \exp \left( -c N^{1-2\alpha }/2\right) , \end{aligned}$$
(2.58)

where \(c>0\).

For \(A_2\), the constraints on the \(X_\sigma ,X_{\sigma ^{\prime }}\) can also be dropped, but this is more subtle. We write \(A_2=A_{21}+R_2\), where

$$\begin{aligned} A_{21} \equiv {\mathbb {E}}_{\sigma ,\sigma ^{\prime }} \left( {\mathbb {E}}\left( {\mathrm e}^{{\beta }\sqrt{N} \left( X_\sigma +X_{\sigma ^{\prime }}\right) } {\mathrm e}^{-2NJ_N({\beta })} \right) \mathbbm {1}_{|R_N(\sigma ,\sigma ^{\prime })|<N^{-\alpha }} \right) . \end{aligned}$$
(2.59)

We first compute \(A_{21}\). We set \(\Gamma _N^{\alpha } \equiv \{m \in \Gamma _N, |m |\le N^{-\alpha }\}\). Using (2.28), we have

$$\begin{aligned} A_{21}\exp {\left( + {\beta }^4 N a_N^2+O\left( {N^{3-2p}}\right) \right) }\equiv \widetilde{A}_{21}= \sum _{m \in \Gamma ^\alpha _N} \exp {\left( \frac{{\beta }^2 N f_{p,N} \left( m \right) }{2{\beta }^2 a_N^2+1}\right) } p_N\left( m\right) . \end{aligned}$$
(2.60)

To deal with this term, we use the following standard bound for the exponential,

$$\begin{aligned} \left| \exp ( \xi ) - 1 - \xi - \frac{1}{2}\xi ^2-\frac{1}{3!}\xi ^3 \right| \le \frac{1}{4!}\xi ^4 \exp |\xi |, \end{aligned}$$
(2.61)

with \(\xi =\frac{{\beta }^2 N f_{p,N} \left( m \right) }{2{\beta }^2 a_N^2+1}\). Notice that on \(\Gamma ^\alpha _N\), \(N |f_{p,N}(m)|\lesssim N^{1-p\alpha }\), which tends to zero as \(N\uparrow \infty \), since \(\alpha >1/p\). Hence, on the domain of summation of (2.60), \(\exp (|\xi |)\le {\mathrm e}^{\mathfrak {c}}\). This allows us to bound \(\widetilde{A}_{21}\) as

$$\begin{aligned} \left| \widetilde{A}_{21}-\sum _{m \in \Gamma ^\alpha _N} \left( 1+\xi + \frac{1}{2}\xi ^2+\frac{1}{3!}\xi ^3 \right) p_N(m)\right| \le \frac{1}{4!}\sum _{m \in \Gamma ^\alpha _N}\xi ^4 {\mathrm e}^{{\mathfrak {c}}}p_N(m). \end{aligned}$$
(2.62)

Moreover, the sums on the left-hand side can be extended to sums over all of \(\Gamma _N\) at the cost of an exponentially small error:

$$\begin{aligned} \left| \widetilde{A}_{21}\!-\!\sum _{m \in \Gamma _N} \left( 1+\xi \!+\! \frac{1}{2}\xi ^2\!+\!\frac{1}{3!}\xi ^3 \right) p_N(m)\right| \!\le \! \frac{1}{4!}\sum _{m \in \Gamma _N}\xi ^4 {\mathrm e}^{{\mathfrak {c}}}p_N(m) +O\left( {\mathrm e}^{-N^{1-2\alpha }}\right) . \end{aligned}$$
(2.63)

The sums over the \(\xi ^k\) can be computed fairly well by re-expressing them in terms of expectations over the \(\sigma \). Namely

$$\begin{aligned}{} & {} \sum _{m \in \Gamma _N} f_{p,N}(m)p_N(m)=\left( {\begin{array}{c}N\\ p\end{array}}\right) ^{-1}{\mathbb {E}}_{\sigma ,\sigma ^{\prime }}\left( \sum _{A\in I_N} \sigma _A\sigma ^{\prime }_A\right) =0, \end{aligned}$$
(2.64)
$$\begin{aligned}{} & {} \sum _{m \in \Gamma _N} f_{p,N}(m)^2p_N(m)=\left( {\begin{array}{c}N\\ p\end{array}}\right) ^{-2}{\mathbb {E}}_{\sigma ,\sigma ^{\prime }}\left( \sum _{A\in I_N} \sigma _A\sigma ^{\prime }_A\right) ^2 =\left( {\begin{array}{c}N\\ p\end{array}}\right) ^{-1}, \end{aligned}$$
(2.65)

and, for \(k\ge 3\), with \(C_k\) a constant independent of N,

$$\begin{aligned} \sum _{m \in \Gamma _N} f_{p,N}(m)^kp_N(m)=\left( {\begin{array}{c}N\\ p\end{array}}\right) ^{-k}{\mathbb {E}}_{\sigma ,\sigma ^{\prime }}\left( \sum _{A\in I_N} \sigma _A\sigma ^{\prime }_A\right) ^k \le C_k \left( {\begin{array}{c}N\\ p\end{array}}\right) ^{-k}N^{pk/2}, \end{aligned}$$
(2.66)

since all indices must occur at least twice. From this we obtain

$$\begin{aligned} \widetilde{A}_{21}= 1+ \frac{{\beta }^4Na_N^2}{2(1+2{\beta }^2a_N^2)^2} +O\left( N^{3(1-p/2)}\right) =1+ \frac{{\beta }^4Na_N^2}{2} +O\left( N^{3(1-p/2)}\right) . \end{aligned}$$
(2.67)

Finally, we bound \(R_2\). Note that

$$\begin{aligned} |R_2|\le 2 {\mathbb {E}}_{\sigma ,\sigma ^{\prime }} \left( {\mathbb {E}}\left( {\mathrm e}^{{\beta }\sqrt{N} \left( X_\sigma +X_{\sigma ^{\prime }}\right) } \mathbbm {1}_{\{\left| X_\sigma -{\beta }\sqrt{N} \right| >\epsilon {\beta }\sqrt{N}\}} {\mathrm e}^{-2NJ_N({\beta })} \right) \mathbbm {1}_{|R_N(\sigma ,\sigma ^{\prime })|<N^{-\alpha }} \right) . \end{aligned}$$
(2.68)

The idea here is that under the constraint on \(R_N(\sigma ,\sigma ^{\prime })\), \(X_\sigma \) and \(X_{\sigma ^{\prime }}\) are almost independent. Using Hölder’s inequality as before,

$$\begin{aligned}{} & {} {\mathbb {E}}\left( {\mathrm e}^{{\beta }\sqrt{N} \left( X_\sigma +X_{\sigma ^{\prime }}\right) } \mathbbm {1}_{\{\left| X_\sigma -{\beta }\sqrt{N} \right|>\epsilon {\beta }\sqrt{N}\}} {\mathrm e}^{-2NJ_N({\beta })} \right) \nonumber \\{} & {} \quad \le \left( {\mathbb {E}}\left( {\mathrm e}^{q_1{\beta }\sqrt{N} \left( X_\sigma +X_{\sigma ^{\prime }}\right) } \mathbbm {1}_{\{\left| X_\sigma -{\beta }\sqrt{N} \right| >\epsilon {\beta }\sqrt{N}\}} \right) \right) ^{\frac{1}{q_1}}\nonumber \\{} & {} \qquad \times \left( {\mathbb {E}}\left( {\mathrm e}^{-2q_2NJ_N({\beta })} \right) \right) ^{\frac{1}{q_2}}. \end{aligned}$$
(2.69)

As in (2.22), we get for the second factor

$$\begin{aligned} \left( {\mathbb {E}}\left( {\mathrm e}^{-2q_2NJ_N({\beta })} \right) \right) ^{\frac{1}{q_2}} \le {\mathrm e}^{-N{\beta }^2 +O(N^{2-p})}. \end{aligned}$$
(2.70)

To deal with the first factor, we notice that \(X_{\sigma ^{\prime }}\) can be written as

$$\begin{aligned} X_{\sigma ^{\prime }}= \gamma X_\sigma +\sqrt{1-\gamma ^2} \xi , \end{aligned}$$
(2.71)

where \(\xi \) is a standard normal random variable independent of \(X_\sigma \) and \(\gamma = f_{p,N}(R_N(\sigma ,\sigma ^{\prime }))\). Hence

$$\begin{aligned} {\mathbb {E}}\left( {\mathrm e}^{q_1{\beta }\sqrt{N} \left( X_\sigma +X_{\sigma ^{\prime }}\right) } \mathbbm {1}_{\{\left| X_\sigma -{\beta }\sqrt{N} \right|>\epsilon {\beta }\sqrt{N}\}} \right)= & {} {\mathbb {E}}\left( {\mathrm e}^{q_1{\beta }\sqrt{N} X_\sigma (1+\gamma )} \mathbbm {1}_{\{\left| X_\sigma -{\beta }\sqrt{N} \right| >\epsilon {\beta }\sqrt{N}\}} \right) \nonumber \\{} & {} {\mathbb {E}}\left( {\mathrm e}^{q_1{\beta }\sqrt{N} \sqrt{1-\gamma ^2}\xi }\right) . \end{aligned}$$
(2.72)

Using again Fact I and since \(|R_N(\sigma ,\sigma ^{\prime })|\le N^{-\alpha }\), it follows that

$$\begin{aligned} |R_2|\le {\mathrm e}^{-{\beta }^2 N \epsilon ^2(1+o_\epsilon (1))/2 + o(N)}. \end{aligned}$$
(2.73)

With the bound (2.58) on \(A_1\), the bound (2.73) on \(R_2\), the computation (2.67) of \(\widetilde{A}_{21}\), and the bound (2.54),

$$\begin{aligned} A= & {} \left( 1+ \frac{{\beta }^4Na_N^2}{2(1+2{\beta }^2a_N^2)^2} +O\left( N^{3(1-p/2)}\right) \right) \exp {\left( - {\beta }^4 N a_N^2+O\left( {N^{3-2p}}\right) \right) }\nonumber \\= & {} 1-\frac{{\beta }^4Na_N^2}{2} +O\left( N^{3(1-p/2)}\right) . \end{aligned}$$
(2.74)

This implies (2.31) and concludes the proof of Lemma 2.6. \(\square \)

We now conclude the proof of Lemma 2.4. Combining (2.30) and (2.31) yields

$$\begin{aligned} {\mathbb {E}}\left[ (Z_{\epsilon }^{\le })^2\right] = 1-\frac{{\beta }^4N a_N^2}{2}+O\left( N^{3-3p/2}\right) . \end{aligned}$$
(2.75)

Furthermore, using (2.24) we have that

$$\begin{aligned} \left( {\mathbb {E}}(Z_{\epsilon }^{\le })\right) ^2=\left( 1- \frac{{\beta }^4}{4} N a_N^2 +O\left( N^{4-2p}\right) \right) ^2=1- \frac{{\beta }^4 N a_N^2}{2}+O\left( N^{4-2p}\right) , \end{aligned}$$
(2.76)

hence combining (2.75) and (2.76) leads to

$$\begin{aligned} {\mathbb {E}}\left( \Xi _\epsilon ^2\right) =\frac{{\mathbb {E}}({Z_{\epsilon }^{\le }}^2)- {\mathbb {E}}({Z_{\epsilon }^{\le }})^2}{{\mathbb {E}}({Z_{\epsilon }^{\le }})^2}=\frac{O\left( N^{3-3p/2}\right) }{{\mathbb {E}}({Z_{\epsilon }^{\le }})^2} . \end{aligned}$$
(2.77)

Inserting this into (2.27), we get

$$\begin{aligned} {\mathbb {P}}\left( \left| N^{\frac{p-2}{2}} \ln \left( 1+ \Xi _\epsilon \right) \right| >\varepsilon \right) \le 8\varepsilon ^{-2} O\left( N^{1-p/2}\right) , \end{aligned}$$
(2.78)

which proves Lemma 2.4. \(\square \)

This also concludes the proof of part (ii) of Proposition 2.1.

2.3 Exponential concentration: proof of (i) of Proposition 2.1

Since

$$\begin{aligned} N^{q} \ln \left( \frac{\mathcal {Z}_N({\beta }) }{{Z_{\epsilon }^{\le }}}\right) =N^{q} \ln \left( \frac{\mathcal {Z}_N({\beta }) }{\mathcal {Z}_N({\beta }) -Z_{\epsilon }^{>}}\right) =-N^{q} \ln \left( 1-\frac{Z_{\epsilon }^{>}}{\mathcal {Z}_N({\beta }) }\right) , \end{aligned}$$
(2.79)

the assertion (2.6) of Proposition 2.1 follows from the following lemma.

Lemma 2.7

Assume that \({\beta }<{\beta }_p\). Then for all \( \varepsilon >0\) and \(a \in {\mathbb {N}}\) there exists \({\mathfrak {c}} > 0\) such that

$$\begin{aligned} {\mathbb {P}}\left( N^{a} \frac{Z_{\epsilon }^{>}}{ \mathcal {Z}_N({\beta }) } \ge \varepsilon \right) \le \exp ( -{\mathfrak {c}} N). \end{aligned}$$
(2.80)

Proof

$$\begin{aligned} {\mathbb {P}}\left( N^{a} \frac{Z_{\epsilon }^{>}}{\mathcal {Z}_N({\beta }) }\ge \varepsilon \right)= & {} {\mathbb {P}}\left( \frac{{\mathbb {E}}_\sigma \left( {\mathrm e}^{-{\beta }H_N(\sigma )} \mathbbm {1}_{\{\left| -H_N(\sigma )-{\beta }N\right|> \epsilon \beta N \}}\right) }{{\mathbb {E}}_\sigma \left( {\mathrm e}^{-{\beta }H_N(\sigma )} \right) } \ge \frac{\varepsilon }{N^{a}} \right) \nonumber \\\le & {} \frac{N^{a}}{\varepsilon } {\mathbb {E}}\left( \frac{{\mathbb {E}}_\sigma \left( {\mathrm e}^{-{\beta }H_N(\sigma )} \mathbbm {1}_{\{\left| -H_N(\sigma )-{\beta }N\right| > \epsilon \beta N \}}\right) }{{\mathbb {E}}_\sigma \left( {\mathrm e}^{-{\beta }H_N(\sigma )} \right) } \right) . \end{aligned}$$
(2.81)

By Gaussian concentration of measure, it follows that

$$\begin{aligned} {\mathbb {P}}\left( \left| \ln {\mathbb {E}}_{\sigma }{\mathrm e}^{-{\beta }H_N(\sigma )}-{\mathbb {E}}\left( \ln {\mathbb {E}}_{\sigma }{\mathrm e}^{-{\beta }H_N(\sigma )}\right) \right| >N{\beta }^2 \frac{\epsilon ^2}{4} \right) \le \exp \left( -N{\beta }^2 \frac{\epsilon ^4}{32}\right) . \end{aligned}$$
(2.82)

(See e.g. [5, (2.56)]).

We introduce the events

$$\begin{aligned} O_{N, {\beta }, \epsilon } \equiv \left\{ \left| \ln {\mathbb {E}}_{\sigma }{\mathrm e}^{-{\beta }H_N(\sigma )}-{\mathbb {E}}\left( \ln {\mathbb {E}}_{\sigma }{\mathrm e}^{-{\beta }H_N(\sigma )}\right) \right| >N{\beta }^2{\textstyle {\epsilon ^2\over 4}}\right\} \,, \end{aligned}$$
(2.83)

and split the r.h.s. of (2.81) as

$$\begin{aligned}{} & {} {\mathbb {E}}\left( \frac{{\mathbb {E}}_\sigma \left( {\mathrm e}^{-{\beta }H_N(\sigma )} \mathbbm {1}_{\{\left| -H_N(\sigma )-{\beta }N\right|> \epsilon \beta N \}}\right) }{{\mathbb {E}}_\sigma \left( {\mathrm e}^{-{\beta }H_N(\sigma )} \right) } \right) \nonumber \\{} & {} \quad \le {\mathbb {E}}\left( \mathbbm {1}_{ O_{N, {\beta }, \epsilon }^c} \frac{{\mathbb {E}}_\sigma \left( {\mathrm e}^{-{\beta }H_N(\sigma )} \mathbbm {1}_{\{\left| -H_N(\sigma )-{\beta }N\right|> \epsilon \beta N \}} \right) }{{\mathbb {E}}_\sigma \left( {\mathrm e}^{-{\beta }H_N(\sigma )} \right) } \right) +{\mathbb {P}}(O_{N, {\beta }, \epsilon }) \nonumber \\{} & {} \quad \le \, {\mathbb {E}}\left( \mathbbm {1}_{ O_{N, {\beta }, \epsilon }^c } \frac{{\mathbb {E}}_\sigma \left( {\mathrm e}^{-{\beta }H_N(\sigma )} \mathbbm {1}_{\{\left| -H_N(\sigma )-{\beta }N\right| > \epsilon \beta N \}}\right) }{{\mathbb {E}}_\sigma \left( {\mathrm e}^{-{\beta }H_N(\sigma )} \right) } \right) + \exp \left( {-N{\beta }^2 \frac{\epsilon ^4}{32}}\right) \,,\qquad \end{aligned}$$
(2.84)

where for the first inequality we use that the quotient of the \({\mathbb {E}}_\sigma \)-terms is smaller than one, and (2.82) is used in the last step. On the event \(O_{N, {\beta }, \epsilon }^c\), we have that

$$\begin{aligned} {\mathbb {E}}_\sigma \left( {\mathrm e}^{-{\beta }H_N(\sigma )}\right)= & {} \exp \left( \ln {\mathbb {E}}_\sigma \left( {\mathrm e}^{-{\beta }H_N(\sigma )}\right) -{\mathbb {E}}\left( \ln {\mathbb {E}}_\sigma \left( {\mathrm e}^{-{\beta }H_N(\sigma )}\right) \right) \right. \nonumber \\{} & {} \left. +{\mathbb {E}}\left( \ln {\mathbb {E}}_\sigma \left( {\mathrm e}^{-{\beta }H_N(\sigma )}\right) \right) \right) \nonumber \\\ge & {} \exp \left( {\mathbb {E}}\left( \ln {\mathbb {E}}_\sigma \left( {\mathrm e}^{-{\beta }H_N(\sigma )}\right) \right) -N{\beta }^2\epsilon ^2/4\right) . \end{aligned}$$
(2.85)

Using this inequality,

$$\begin{aligned}{} & {} {\mathbb {E}}\left( \mathbbm {1}_{ O_{N, {\beta }, \epsilon }^c} \frac{{\mathbb {E}}_\sigma \left( {\mathrm e}^{-{\beta }H_N(\sigma )} \mathbbm {1}_{\{\left| -H_N(\sigma )-{\beta }N\right|> \epsilon \beta N \}} \right) }{{\mathbb {E}}_\sigma \left( {\mathrm e}^{-{\beta }H_N(\sigma )} \right) } \right) \nonumber \\{} & {} \quad \le {\mathrm e}^{N{\beta }^2 \frac{\epsilon ^2}{4}} \frac{ {\mathbb {E}}\left( {\mathbb {E}}_{\sigma }\left( {\mathrm e}^{-{\beta }H_N(\sigma )-N\frac{{\beta }^2}{2}}\mathbbm {1}_{\{\left| -H_N(\sigma )-{\beta }N\right| > \epsilon \beta N \}} \right) \right) }{\exp \left( {\mathbb {E}}\ln {\mathbb {E}}_{\sigma }{\mathrm e}^{-{\beta }H_N(\sigma )-N\frac{{\beta }^2}{2}}\right) }. \end{aligned}$$
(2.86)

By classical Gaussian estimates (Fact I in the Appendix), the numerator on the r.h.s. above satisfies

$$\begin{aligned} {\mathbb {E}}\left( {\mathbb {E}}_\sigma \left( {\mathrm e}^{-{\beta }H_N(\sigma )-N\frac{{\beta }^2}{2} } \mathbbm {1}_{\{\left| -H_N(\sigma )-{\beta }N\right| > \epsilon \beta N \}}\right) \right) \le \exp \left( {-N {\beta }^2 \frac{\epsilon ^2}{2}}\right) . \end{aligned}$$
(2.87)

Combining (2.84), (2.86) and (2.87), we obtain

$$\begin{aligned}{} & {} {\mathbb {E}}\left( \frac{{\mathbb {E}}_\sigma \left( {\mathrm e}^{-{\beta }H_N(\sigma )} \mathbbm {1}_{\{\left| -H_N(\sigma )-{\beta }N\right| > \epsilon \beta N \}}\right) }{{\mathbb {E}}_\sigma \left( {\mathrm e}^{-{\beta }H_N(\sigma )} \right) } \right) \nonumber \\{} & {} \qquad \le \frac{\exp \left( {N{\beta }^2 \frac{\epsilon ^2}{4}}\right) \exp \left( -{N{\beta }^2 \frac{\epsilon ^2}{2}}\right) }{\exp \left( {\mathbb {E}}\ln {\mathbb {E}}_{\sigma }{\mathrm e}^{-{\beta }H_N(\sigma )-N\frac{{\beta }^2}{2}}\right) }+ \exp \left( {-N{\beta }^2 \frac{\epsilon ^4}{32}}\right) . \end{aligned}$$
(2.88)

It remains to bound the denominator. Note that

$$\begin{aligned} {\mathbb {E}}\ln {\mathbb {E}}_{\sigma }{\mathrm e}^{-{\beta }H_N(\sigma )-N\frac{{\beta }^2}{2}}={\mathbb {E}}\ln {\mathbb {E}}_{\sigma }{\mathrm e}^{-{\beta }H_N(\sigma )}-\ln {\mathbb {E}}{\mathbb {E}}_{\sigma }{\mathrm e}^{-{\beta }H_N(\sigma )}, \end{aligned}$$
(2.89)

so this is just the difference between the quenched and the annealed free energy. In the course of the proof that these are asymptotically equal for \({\beta }<{\beta }_p\), it is actually shown that, for any \({\beta }< {\beta }_p\), there exists \(K>0\) such that

$$\begin{aligned} -K\sqrt{N}<{\mathbb {E}}\ln {\mathbb {E}}_{\sigma }{\mathrm e}^{-{\beta }H_N(\sigma )}-\ln {\mathbb {E}}{\mathbb {E}}_{\sigma }{\mathrm e}^{-{\beta }H_N(\sigma )}\le 0. \end{aligned}$$
(2.90)

(see e.g. Sect. 11.2 in [4]). Inserting this estimate into (2.88), it follows that

$$\begin{aligned}{} & {} {\mathbb {E}}\left( \frac{{\mathbb {E}}_\sigma \left( {\mathrm e}^{-{\beta }H_N(\sigma )} \mathbbm {1}_{\{\left| -H_N(\sigma )-{\beta }N\right| > \epsilon \beta N \}}\right) }{{\mathbb {E}}_\sigma \left( {\mathrm e}^{-{\beta }H_N(\sigma )} \right) } \right) \nonumber \\{} & {} \qquad \le \exp \left( -{N{\beta }^2 \frac{\epsilon ^2}{4}+K\sqrt{N}}\right) + \exp \left( {-N{\beta }^2 \frac{\epsilon ^4}{32}}\right) . \end{aligned}$$
(2.91)

This together with the Markov inequality implies (2.80) and ends the proof of the lemma. \(\square \)

Thus the proof of Proposition 2.1 is complete, and this also concludes the proof of Theorem 1.1.

3 Proof of Theorem 1.2

The quantity we need to control can be expressed as

$$\begin{aligned} F_{N}({\beta })-J_N({\beta }) =\frac{1}{N} \ln \left( {{\mathcal {Z}_N({\beta })}}\right) . \end{aligned}$$
(3.1)

The proof of Theorem 1.2 relies essentially on a Taylor expansion of the exponential function in \({\mathcal Z}_N({\beta })\). Recalling the definition of \(J_N({\beta })\), see (1.16),

$$\begin{aligned} \mathcal {Z}_N({\beta })={\mathbb {E}}_\sigma \left( {\mathrm e}^{-{\beta }H_N(\sigma )-{{\mathbb {E}}_\sigma \left( {\beta }^2 H_N(\sigma )^2\right) }/{2}}\right) . \end{aligned}$$
(3.2)

Expanding the exponential and ordering terms in powers of \({\beta }\), we see that

$$\begin{aligned} \mathcal {Z}_N({\beta })= T_N({\beta })+O_N({\beta }^5), \end{aligned}$$
(3.3)

where

$$\begin{aligned} T_N({\beta }) \equiv 1-{\beta }^4\frac{\left( {\mathbb {E}}_{\sigma }H_N(\sigma )^2\right) ^2}{8}-{\beta }^3\frac{{\mathbb {E}}_{\sigma } \left( H_N(\sigma )^3\right) }{3!}+{\beta }^4\frac{{\mathbb {E}}_{\sigma } \left( H_N(\sigma )^4\right) }{4!}\,. \end{aligned}$$
(3.4)
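Indeed, the \(\sigma \)-independent factor \({\mathrm e}^{-{\beta }^2{\mathbb {E}}_\sigma \left( H_N(\sigma )^2\right) /2}\) can be pulled out of \({\mathbb {E}}_\sigma \) in (3.2), and since \({\mathbb {E}}_\sigma H_N(\sigma )=0\),

$$\begin{aligned} {\mathbb {E}}_\sigma \left( {\mathrm e}^{-{\beta }H_N(\sigma )}\right)&= 1+\frac{{\beta }^2}{2}{\mathbb {E}}_{\sigma }\left( H_N(\sigma )^2\right) -\frac{{\beta }^3}{3!}{\mathbb {E}}_{\sigma }\left( H_N(\sigma )^3\right) +\frac{{\beta }^4}{4!}{\mathbb {E}}_{\sigma }\left( H_N(\sigma )^4\right) +O_N({\beta }^5),\\ {\mathrm e}^{-{\beta }^2{\mathbb {E}}_\sigma \left( H_N(\sigma )^2\right) /2}&= 1-\frac{{\beta }^2}{2}{\mathbb {E}}_{\sigma }\left( H_N(\sigma )^2\right) +\frac{{\beta }^4}{8}\left( {\mathbb {E}}_{\sigma }\left( H_N(\sigma )^2\right) \right) ^2+O_N({\beta }^6). \end{aligned}$$

Multiplying the two expansions, the \({\beta }^2\) terms cancel, and collecting the \({\beta }^4\) terms gives \(\frac{1}{4!}{\mathbb {E}}_{\sigma }\left( H_N(\sigma )^4\right) +\frac{1}{8}\left( {\mathbb {E}}_{\sigma }H_N(\sigma )^2\right) ^2-\frac{1}{4}\left( {\mathbb {E}}_{\sigma }H_N(\sigma )^2\right) ^2\), which is precisely (3.4).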

Writing

$$\begin{aligned} A_N(p)\left( F_{N}({\beta })-J_N({\beta })\right) =\alpha _N(p) \ln {\mathcal Z}_N({\beta }) =\alpha _N(p) \ln \left( 1+\left( {\mathcal Z}_N({\beta })-1\right) \right) , \end{aligned}$$
(3.5)

with \(\alpha _N(p)\equiv A_N(p)/N\). Since \(\ln (1+x)=x+O(x^2)\) as \(x\rightarrow 0\) and \(\alpha _N(p)\uparrow \infty \), the assertion of the theorem is equivalent to

$$\begin{aligned} \alpha _N(p) \left( {\mathcal Z}_N({\beta })-1\right) \buildrel {\mathcal D}\over \rightarrow {\mathcal N}\left( \mu ({\beta },p),\sigma ({\beta },p)^2\right) . \end{aligned}$$
(3.6)

The proof of Theorem 1.2 will therefore follow from the following two propositions.

Proposition 3.1

With the notation above, for \(p>2\) and any \({\beta }>0\),

$$\begin{aligned} \alpha _N(p) \left( T_N({\beta })-1\right) \buildrel {\mathcal D}\over \rightarrow \mathcal {N}\left( \mu ({\beta },p), \sigma ({\beta },p)^2\right) , \end{aligned}$$
(3.7)

as \(N\uparrow \infty \).

Proposition 3.2

For \(p>2\) and for all \({\beta }<{\beta }_p\),

$$\begin{aligned} \lim _{N \uparrow \infty }\alpha _N(p)\left| {\mathcal Z}_N({\beta })- T_N({\beta }) \right| =0, \text { in probability}. \end{aligned}$$
(3.8)

Remark

In view of the fact that, by Lemma 2.3, \({\mathcal Z}_N({\beta })\) and \(Z^\le _\epsilon \) differ only by an exponentially small quantity, Proposition 3.2 is immediate once we show that

$$\begin{aligned} \lim _{N \uparrow \infty }\alpha _N(p)\left| Z^\le _\epsilon - T_N({\beta }) \right| =0, \text { in probability}. \end{aligned}$$
(3.9)

The proof of these two claims is given in the next subsections. Before that, we emphasise that the different limiting pictures depending on the parity of \(p> 2\) stem, in fact, from the \(T_N\)-term:

  • p odd. In this case \({\mathbb {E}}_{\sigma }\left( H_N(\sigma )^3\right) =0\) by antisymmetry (see (3.19) below), so that

    $$\begin{aligned} T_N({\beta }) = 1-{\beta }^4\frac{{\mathbb {E}}_{\sigma } \left( H_N(\sigma )^2\right) ^2}{8}+{\beta }^4\frac{{\mathbb {E}}_{\sigma } \left( H_N(\sigma )^4\right) }{4!}. \end{aligned}$$
    (3.10)

    This should be contrasted to

  • p even. We will see in the course of the proof that the only relevant term is, as a matter of fact, the third moment, with the second and fourth moments contributing nothing because they blow up on the wrong scale. In other words, it will become clear that

    $$\begin{aligned} T_N({\beta }) = 1+{\beta }^3\frac{{\mathbb {E}}_{\sigma }\left( -H_N(\sigma )^3\right) }{3!} + \text {"vanishing corrections"}. \end{aligned}$$
    (3.11)

We prove Propositions 3.1 and 3.2 in the remainder of this paper. As a first step, in Sect. 3.1 below we provide explicit formulas for the moments \({\mathbb {E}}_\sigma H_N(\sigma )^k\), \(k=2,3,4\), which appear in the definition of \(T_N({\beta })\). Proposition 3.1 for odd p is then proven in Sect. 3.2 below, and the case of even p in Sect. 3.3; the proof of Proposition 3.2 for even p is given in Sect. 3.4, and the proof for odd p in Sect. 3.5.

3.1 Explicit representations of quenched moments

In the sequel we use the following abbreviation when summing over multi-indices \(A,B\in I_N\).

$$\begin{aligned} \sum _{(\ne )} J_A J_B {\mathbb {E}}_\sigma (\sigma _A \sigma _B) \equiv \sum _{A, B \in I_N: A\ne B} J_A J_B {\mathbb {E}}_\sigma (\sigma _A \sigma _B), \end{aligned}$$
(3.12)

and similarly for sums involving a higher number of multi-indices, in which case we mean that all multi-indices involved must be different.

For the different terms appearing in \(T_N({\beta })\), taking into account cancellations due to the averages over \(\sigma \), we have the following representations.

Lemma 3.3

We have

$$\begin{aligned} {{\mathbb {E}}_{\sigma } \left( -H_N(\sigma )^3\right) }= & {} { a_N^3}\sum _{A,B,C \in I_N}J_{A}J_{B}J_{C} {\mathbb {E}}_{\sigma }\left( \sigma _{A}\sigma _{B}\sigma _{C}\right) \nonumber \\= & {} {a_N^3}\sum _{(\ne )} J_{A}J_{B}J_{C} {\mathbb {E}}_{\sigma }\left( \sigma _{A}\sigma _{B}\sigma _{C}\right) \,. \end{aligned}$$
(3.13)

and

$$\begin{aligned} -\frac{1}{8}{{\mathbb {E}}_{\sigma } \left( H_N(\sigma )^2\right) ^2}+\frac{1}{4!}{{\mathbb {E}}_{\sigma } \left( H_N(\sigma )^4\right) }=-\frac{ a_N^4}{12}\sum _{A \in I_N}J_{A}^4+{\mathcal H}_4, \end{aligned}$$
(3.14)

where

$$\begin{aligned} {{\mathcal {H}}}_4 \equiv \frac{a_N^4}{4!}\sum _{(\ne )} J_{A}J_{B}J_{C}J_{D} {\mathbb {E}}_{\sigma }\left( \sigma _{A}\sigma _{B}\sigma _{C}\sigma _{D}\right) . \end{aligned}$$
(3.15)

Proof

Equation (3.13) is straightforward. An elementary computation shows that

$$\begin{aligned} -{{\mathbb {E}}_{\sigma } \left( H_N(\sigma )^2\right) ^2}=-{ a_N^4}\sum _{A, B \in I_N}J_{A}^2 J_{B}^2 = -{a_N^4}\sum _{(\ne )}J_{A}^2 J_{B}^2-{ a_N^4}\sum _{A \in I_N}J_{A}^4. \end{aligned}$$
(3.16)

The fourth moment gives

$$\begin{aligned} {{\mathbb {E}}_{\sigma } \left( H_N(\sigma )^4\right) }={a_N^4}\sum _{A, B, C, D \in I_N} J_{A}J_{B}J_{C}J_{D} {\mathbb {E}}_{\sigma }\left( \sigma _{A}\sigma _{B}\sigma _{C}\sigma _{D}\right) . \end{aligned}$$
(3.17)

We now rearrange the summation according to the possible sub-cases: i) four multi-indices come in two distinct pairs (say \(A=B\) and \(C=D\) but \(A\ne C\)): in this case \({\mathbb {E}}_\sigma \sigma _A \sigma _B \sigma _C \sigma _D = {\mathbb {E}}_\sigma \sigma _A^2 \sigma _C^2 = 1\); ii) all four multi-indices coincide, in which case \({\mathbb {E}}_\sigma \sigma _A \sigma _B \sigma _C \sigma _D = {\mathbb {E}}_\sigma \sigma _A^4=1\); iii) at least one multi-index is different from all the others. In this case the only non-vanishing contribution arises when all four multi-indices are different. Hence

$$\begin{aligned} {{\mathbb {E}}_{\sigma } \left( H_N(\sigma )^4\right) } = {3 a_N^4}\sum _{( \ne ) } J_{A}^2J_{C}^2 +{a_N^4}\sum _{A \in I_N} J_{A}^4 +{ a_N^4}\sum _{(\ne )} J_{A}J_{B}J_{C}J_{D} {\mathbb {E}}_{\sigma }\left( \sigma _{A}\sigma _{B}\sigma _{C}\sigma _{D}\right) , \end{aligned}$$
(3.18)

where for the first term on the right we use that there are \(\frac{1}{2}\left( {\begin{array}{c}4\\ 2\end{array}}\right) =3\) ways to split the four multi-indices into two pairs. Combining (3.16) and (3.18) yields the claim of the lemma. \(\square \)
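As a sanity check, the identity (3.14) can be verified numerically for a small system. The following sketch (ours, with ad-hoc names; it takes a second or two) compares both sides for \(N=6\), \(p=3\), computing the \(\sigma \)-averages by exact enumeration and using that \({\mathbb {E}}_\sigma \left( \sigma _A\sigma _B\sigma _C\sigma _D\right) \in \{0,1\}\) equals 1 precisely when every site occurs an even number of times:

```python
import itertools
import math
import random

N, p = 6, 3
rng = random.Random(1)
idx = list(itertools.combinations(range(N), p))
J = [rng.gauss(0.0, 1.0) for _ in idx]
a = math.sqrt(N / math.comb(N, p))

def sigma_average(*sets):
    """E_sigma of a product of sigma_A's: 1 iff every site occurs evenly."""
    parity = {}
    for s in sets:
        for i in s:
            parity[i] = parity.get(i, 0) ^ 1
    return 1.0 if all(v == 0 for v in parity.values()) else 0.0

# Left-hand side of (3.14): quenched moments by exact enumeration over sigma.
m2 = m4 = 0.0
for sig in itertools.product((-1, 1), repeat=N):
    H = -a * sum(j * math.prod(sig[i] for i in A) for A, j in zip(idx, J))
    m2 += H ** 2
    m4 += H ** 4
m2 /= 2 ** N
m4 /= 2 ** N
lhs = -m2 ** 2 / 8 + m4 / 24

# Right-hand side: -a_N^4/12 sum_A J_A^4 + H_4, with H_4 as in (3.15)
# (sum over ordered quadruples of pairwise distinct multi-indices).
H4 = 0.0
r = range(len(idx))
for A, B, C, D in itertools.product(r, repeat=4):
    if len({A, B, C, D}) == 4:
        H4 += J[A] * J[B] * J[C] * J[D] * sigma_average(idx[A], idx[B], idx[C], idx[D])
H4 *= a ** 4 / 24
rhs = -(a ** 4 / 12) * sum(j ** 4 for j in J) + H4
print(lhs, rhs)  # the two values agree up to rounding
```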

3.2 Proof of Proposition 3.1: p odd

We first observe that

$$\begin{aligned} {\mathbb {E}}_{\sigma }\left( -H_N(\sigma )^3\right) = a_N^3\sum _{A,B,C \in I_N}J_{A}J_{B}J_{C} {\mathbb {E}}_{\sigma }\left( \sigma _{A}\sigma _{B}\sigma _{C}\right) =0, \end{aligned}$$
(3.19)

since \(\sigma _A\sigma _B \sigma _C\) is a product of \(3p\) spins, an odd number when p is odd, and hence its expectation vanishes. Combining Lemma 3.3 and (3.19), it follows that

$$\begin{aligned} \alpha _N(p)\left( T_N({\beta })-1\right) =N^{p-2}\left( -\frac{{\beta }^4 a_N^4}{12}\sum _{A \in I_N}J_{A}^4+{\beta }^4\mathcal {H}_4\right) . \end{aligned}$$
(3.20)

First note that

$$\begin{aligned} N^{p-2}\left( -\frac{{\beta }^4 a_N^4}{12}\sum _{A \in I_N}J_{A}^4\right) =-N^{p-2}\frac{{\beta }^4 N^2}{12\left( {\begin{array}{c}N\\ p\end{array}}\right) }\frac{1}{\left( {\begin{array}{c}N\\ p\end{array}}\right) }\sum _{A \in I_N}J_{A}^4 \rightarrow -\frac{{\beta }^4 p!}{4}, \hbox {a.s.}, \end{aligned}$$
(3.21)

as \(N\uparrow \infty \) by the strong law of large numbers. It remains to prove that \(N^{p-2} {\mathcal {H}}_4\) converges to a Gaussian with mean zero and variance \(\sigma ({\beta },p)^2\). This will be done by proving that the moments of \(N^{p-2} {\mathcal {H}}_4\) converge to those of the Gaussian. We break this up into a series of lemmata.
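Before turning to the lemmata, we note that the convergence in (3.21) is easy to observe numerically. The following Monte Carlo sketch (ours, purely illustrative) evaluates the left-hand side of (3.21) for increasing N and compares it with the limit \(-{\beta }^4 p!/4\):

```python
import math
import random

p, beta = 3, 1.0
rng = random.Random(2)
for N in (20, 40, 80):
    M = math.comb(N, p)                                            # |I_N|
    mean_J4 = sum(rng.gauss(0.0, 1.0) ** 4 for _ in range(M)) / M  # -> E(J^4) = 3
    value = -N ** (p - 2) * beta ** 4 * N ** 2 / (12 * M) * mean_J4
    print(N, value)                   # approaches the limit below as N grows
print(-beta ** 4 * math.factorial(p) / 4)   # limiting value -p!/4 = -1.5
```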

Lemma 3.4

(Second moment/variance). For any \({\beta }\ge 0\) and any \(p\ge 3\),

$$\begin{aligned} \lim _{N \rightarrow +\infty } {\beta }^8{\mathbb {E}}\left( \left( N^{p-2} \mathcal {H}_4\right) ^2\right) = \sigma ({\beta },p)^2. \end{aligned}$$
(3.22)

Lemma 3.5

(Even moments). For any \({\beta }\ge 0\), and p odd, and for all \(k\in {\mathbb {N}}\),

$$\begin{aligned} \lim _{N \rightarrow +\infty } {\beta }^{8k} E\left( \left( N^{p-2} \mathcal {H}_4\right) ^{2k}\right) =\frac{(2k)!}{2^kk!}\sigma ({\beta },p)^{2k}. \end{aligned}$$
(3.23)

Lemma 3.6

(Vanishing of odd moments). For any \({\beta }\ge 0\), p odd and for all \(k\in {\mathbb {N}}\),

$$\begin{aligned} \lim _{N \rightarrow +\infty } {\mathbb {E}}\left( \left( N^{p-2} \mathcal {H}_4\right) ^{2k+1}\right) =0. \end{aligned}$$
(3.24)

The remainder of this subsection is devoted to the proofs of these lemmata, which combined imply Proposition 3.1 for p odd.

Proof of Lemma 3.4

We have that

$$\begin{aligned} {\mathbb {E}}\left( \mathcal {H}_4^2\right)= & {} \frac{a_N^8}{4!^2}\sum _{\begin{array}{c} A,B,C,D \in I_N \\ (\ne ) \end{array}} \sum _{\begin{array}{c} E,F,G,H \in I_N \\ (\ne ) \end{array}} {\mathbb {E}}\left( J_{A}J_{B} \dots J_{H}\right) \nonumber \\{} & {} {\mathbb {E}}_{\sigma }\left( \sigma _{A}\sigma _{B}\sigma _{C}\sigma _{D}\right) {\mathbb {E}}_{\sigma ^{\prime }}\left( \sigma _{E}^{\prime }\sigma _{F}^{\prime }\sigma _{G}^{\prime }\sigma _{H}^{\prime }\right) \nonumber \\= & {} 4!\frac{ a_N^8}{4!^2}\sum _{(\ne )} {\mathbb {E}}\left( J_{A}^2J_{B}^2J_{C}^2J_{D}^2\right) {\mathbb {E}}_{\sigma }\left( \sigma _{A}\sigma _{B}\sigma _{C}\sigma _{D}\right) {\mathbb {E}}_{\sigma ^{\prime }}\left( \sigma _{A}^{\prime }\sigma _{B}^{\prime }\sigma _{C}^{\prime }\sigma _{D}^{\prime }\right) \nonumber \\= & {} \frac{ a_N^8}{4!}\sum _{(\ne )} {\mathbb {E}}_{\sigma ,\sigma ^{\prime }}\left( \sigma _{A}\sigma _{B}\sigma _{C}\sigma _{D}\sigma _{A}^{\prime }\sigma _{B}^{\prime }\sigma _{C}^{\prime }\sigma _{D}^{\prime }\right) . \end{aligned}$$
(3.25)

Here we used that, in order to get a non-vanishing contribution, each multi-index in the first sum must be paired with one in the second sum. The number of such pairings is 4!.

Next we express \({\mathbb {E}}\left( \mathcal {H}_4^2\right) \) as a function of the overlaps.

$$\begin{aligned} {\mathbb {E}}\left( \mathcal {H}_4^2\right)= & {} \frac{ a_N^8}{4!}\bigg [\sum _{\begin{array}{c} A,B,C,D \in I_N \end{array}} {\mathbb {E}}_{\sigma ,\sigma '}\left( \sigma _{A}\sigma _{B}\sigma _{C}\sigma _{D}\sigma _{A}'\sigma _{B}'\sigma _{C}'\sigma _{D}'\right) \nonumber \\{} & {} -3\sum _{\begin{array}{c} A,B \in I_N\\ (\ne ) \end{array}} {\mathbb {E}}_{\sigma ,\sigma '}\left( \sigma _{A}^2\sigma _{B}^2\sigma _{A}'^2\sigma _{B}'^2\right) -\sum _{A\in I_N} {\mathbb {E}}_{\sigma ,\sigma '}\left( \sigma _{A}^4\sigma _{A}'^4\right) \bigg ]\nonumber \\= & {} \frac{ a_N^8}{4!}\left[ \sum _{\begin{array}{c} A,B,C,D \in I_N \end{array}} {\mathbb {E}}_{\sigma ,\sigma ^{\prime }}\left( \sigma _{A}\sigma _{B}\sigma _{C}\sigma _{D}\sigma _{A}'\sigma _{B}'\sigma _{C}'\sigma _{D}'\right) \right. \nonumber \\{} & {} \left. -3\left( {\left( {\begin{array}{c}N\\ p\end{array}}\right) }^2-{\left( {\begin{array}{c}N\\ p\end{array}}\right) }\right) - \left( {\begin{array}{c}N\\ p\end{array}}\right) \right] , \end{aligned}$$
(3.26)

and therefore

$$\begin{aligned} {\mathbb {E}}\left( \mathcal {H}_4^2\right)= & {} \frac{a_N^8}{4!} {\mathbb {E}}_{\sigma ,\sigma ^{\prime }}\left( \left( \sum _{A \in I_N} \sigma _{A}\sigma _{A}^{\prime } \right) ^4 \right) -\frac{3 a_N^8}{4!} {\left( {\begin{array}{c}N\\ p\end{array}}\right) }^2+\frac{2 a_N^8}{4!} \left( {\begin{array}{c}N\\ p\end{array}}\right) \nonumber \\= & {} \frac{1}{4!}\sum _{m \in \Gamma _N} \left( N f_N^p\left( m \right) \right) ^4p_N(m)-\frac{N^4}{8\left( {\begin{array}{c}N\\ p\end{array}}\right) ^2}+O(N^{4-3p}), \end{aligned}$$
(3.27)

where we used (2.28). Collecting the leading terms, we see that

$$\begin{aligned} {\mathbb {E}}\left( \left( N^{p-2} \mathcal {H}_4\right) ^2\right) =\frac{1}{4!}\sum _{m \in \Gamma _N} \left( N^{\frac{p}{2}} f_N^p\left( m \right) \right) ^4p_N(m)-\frac{ p!^2}{8} +o(1). \end{aligned}$$
(3.28)

Furthermore, by (1.4), we have that

$$\begin{aligned} N^{\frac{p}{2}}f_N^p\left( m\right) =\sum _{k=0}^{[p/2]} d_{p-2k}{\left( \sqrt{N}m\right) }^{p-2k}(1+O(1/N)), \end{aligned}$$
(3.29)

and using this in the sum on the r.h.s. of (3.28) yields

$$\begin{aligned}{} & {} {\mathbb {E}}\left( \left( N^{p-2} \mathcal {H}_4\right) ^2\right) =\frac{1}{4!}\sum _{m \in \Gamma _N} \left( \sum _{k=0}^{[p/2]} d_{p-2k}{\left( \sqrt{N}m\right) }^{p-2k}\right) ^4p_N(m) \left( 1+O\left( \frac{1}{N}\right) \right) \nonumber \\{} & {} \qquad -\frac{ p!^2}{8}+o_N(1). \end{aligned}$$
(3.30)

By Stirling's formula and a Taylor expansion around \(m=0\), it can be checked that

$$\begin{aligned} p_N(m) = \frac{2}{\sqrt{2\pi N}} {\mathrm e}^{- N m^2/2}[1+o_N(1)]. \end{aligned}$$
(3.31)
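This asymptotic can be checked directly. The sketch below (ours) compares the exact weights \(p_N(m)=2^{-N}\left( {\begin{array}{c}N\\ N(1+m)/2\end{array}}\right) \), where we take \(p_N\) to be the distribution of the overlap \(R_N(\sigma ,\sigma ^{\prime })\) of two independent uniform configurations, with the right-hand side of (3.31):

```python
import math

N = 200
for k in (100, 104, 110, 120):          # k = number of agreeing coordinates
    m = 2 * k / N - 1                   # overlap value on the grid Gamma_N
    exact = math.comb(N, k) / 2 ** N    # p_N(m)
    approx = 2 / math.sqrt(2 * math.pi * N) * math.exp(-N * m ** 2 / 2)
    print(f"m = {m:+.2f}   p_N(m) = {exact:.4e}   local CLT = {approx:.4e}")
```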

It follows that the sum in (3.30) converges to an integral, namely,

$$\begin{aligned} \lim _{N \uparrow \infty } {\mathbb {E}}\left( \left( N^{p-2} \mathcal {H}_4\right) ^2\right)= & {} \frac{1}{12\sqrt{2 \pi }}\int _{-\infty }^{+\infty } \left( \sum _{k=0}^{[p/2]} d_{p-2k}{m}^{p-2k}\right) ^4 {\mathrm e}^{-\frac{m^2}{2}}dm-\frac{ p!^2}{8}\nonumber \\= & {} {\beta }^{-8}\sigma ({\beta },p)^2. \end{aligned}$$
(3.32)

This proves the lemma. \(\square \)
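The limiting integral can also be evaluated in closed form by expanding the fourth power of the polynomial from (1.4)–(1.5) and using the Gaussian moments \({\mathbb {E}}\left( Z^{2n}\right) =(2n-1)!!\). The sketch below (ours; sigma2_over_beta8 is an ad-hoc name for \({\beta }^{-8}\sigma ({\beta },p)^2\)) evaluates the right-hand side of (3.32) exactly:

```python
import math

def dfact(n):
    """Double factorial; dfact(n) = 1 for n <= 0."""
    return math.prod(range(n, 0, -2)) if n > 0 else 1

def poly(p):
    """Coefficients d_{p-2k} of the polynomial in (3.32), cf. (1.5)."""
    return {p - 2 * k: (-1) ** k * math.comb(p, 2 * k) * dfact(k)
            for k in range(p // 2 + 1)}

def sigma2_over_beta8(p):
    """Evaluate the right-hand side of (3.32), divided by beta^8."""
    q = poly(p)
    power = {0: 1.0}                         # expand (sum_e c_e m^e)^4
    for _ in range(4):
        new = {}
        for e1, c1 in power.items():
            for e2, c2 in q.items():
                new[e1 + e2] = new.get(e1 + e2, 0.0) + c1 * c2
        power = new
    # Gaussian moments: E(Z^e) = (e-1)!! for even e, 0 for odd e
    integral = sum(c * dfact(e - 1) for e, c in power.items() if e % 2 == 0)
    return integral / 12 - math.factorial(p) ** 2 / 8

print(sigma2_over_beta8(3))   # 274.5 for p = 3
```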

Proof of Lemma 3.5

The 2k-th moment of \({\mathcal H}_4\) can be written as

$$\begin{aligned} {\mathbb {E}}\left( \mathcal {H}_4^{2k}\right) = \frac{ a_N^{8k}}{4!^{2k}}{\mathbb {E}}\left( \left( \sum _{(\ne )} J_{A}J_{B}J_{C}J_{D} {\mathbb {E}}_{\sigma }\left( \sigma _{A}\sigma _{B}\sigma _{C}\sigma _{D}\right) \right) ^{2k}\right) , \end{aligned}$$
(3.33)

and

$$\begin{aligned}{} & {} {\mathbb {E}}\left( \left( \sum _{\begin{array}{c} A,B,C,D \in I_N \\ (\ne ) \end{array}} J_{A}J_{B}J_{C}J_{D}{\mathbb {E}}_{\sigma }\left( \sigma _{A}\sigma _{B}\sigma _{C}\sigma _{D}\right) \right) ^{2k}\right) \nonumber \\{} & {} \qquad =\prod _{i=1}^{2k}\sum _{\begin{array}{c} A_i,B_i,C_i,D_i \in I_N \\ (\ne ) \end{array}} {\mathbb {E}}\left( \prod _{i=1}^{2k} J_{A_i}J_{B_i}J_{C_i}J_{D_i} \right) \prod _{i=1}^{2k} {\mathbb {E}}_{\sigma }\left( \sigma _{A_i}\sigma _{B_i} \sigma _{C_i} \sigma _{D_i}\right) . \end{aligned}$$
(3.34)

Since the averages of odd powers of the random variables J vanish, only those terms in the sums over the multi-indices in (3.34) in which each multi-index occurs at least twice give a non-zero contribution. Moreover, the leading order contribution comes from terms where each multi-index occurs exactly twice and where these pairings take place between the multi-index blocks of two indices i and j. We say a pairing between the sums i and j takes place as soon as \((A_i,B_i,C_i,D_i)=(\pi [A_j],\pi [B_j],\pi [C_j],\pi [D_j])\), where \(\pi \) is any permutation of \((A_j,B_j,C_j,D_j)\). Since there are \(\frac{(2k)!}{k!2^k}\) different ways to construct such sum-pairings, we re-write the right-hand side of (3.34) as

$$\begin{aligned}{} & {} \frac{4!^{k}(2k)!}{k!2^k}\sum _{ (\ne )} \prod _{i=1}^{k} {\mathbb {E}}\left( J_{A_i}^2 J_{B_i}^2 J_{C_i}^2 J_{D_i}^2\right) \left( {\mathbb {E}}_{\sigma } \left( \sigma _{A_i}\sigma _{B_i} \sigma _{C_i} \sigma _{D_i} \right) \right) ^2+R_N(2k)\\{} & {} \qquad \quad \equiv P_N(2k)+R_N(2k). \end{aligned}$$

The first term can be written as

$$\begin{aligned} P_N(2k)= \frac{4!^k(2k)!}{k!2^k}\sum _{ (\ne )} \prod _{i=1}^k\left( {\mathbb {E}}_{\sigma }\left( \sigma _{A_i}\sigma _{B_i}\sigma _{C_i}\sigma _{D_i}\right) \right) ^2. \end{aligned}$$
(3.35)

This term will converge to the appropriate moment of the Gaussian, whereas the \(R_N\)-term tends to zero.
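The count \(\frac{(2k)!}{k!2^k}\) is simply the number of perfect matchings of the 2k blocks, i.e. \((2k-1)!!\); the following quick brute-force check (ours) confirms it:

```python
import math

def matchings(elems):
    """Count perfect matchings of a tuple of elements by recursion."""
    if not elems:
        return 1
    rest = elems[1:]
    # pair the first element with each remaining one, then recurse
    return sum(matchings(rest[:i] + rest[i + 1:]) for i in range(len(rest)))

for k in (1, 2, 3, 4, 5):
    counted = matchings(tuple(range(2 * k)))
    formula = math.factorial(2 * k) // (math.factorial(k) * 2 ** k)
    print(k, counted, formula)   # the two columns agree: (2k-1)!!
```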

Lemma 3.7

With the notation above,

$$\begin{aligned} \lim _{N\uparrow \infty }\frac{N^{\left( 2pk-4k\right) } a_N^{8k}{\beta }^{8k}}{4!^{2k} } P_N(2k) = \frac{(2k)!}{k!2^k} \sigma ({\beta },p)^{2k}. \end{aligned}$$
(3.36)

Proof

It is elementary to see that

$$\begin{aligned} \sum _{ (\ne )} \prod _{i=1}^{k} {\mathbb {E}}_{\sigma } \left( \sigma _{A_i}\sigma _{B_i} \sigma _{C_i} \sigma _{D_i} \right) ^2= \left( \sum _{ (\ne )} \left( {\mathbb {E}}_{\sigma }\left( \sigma _{A}\sigma _{B}\sigma _{C}\sigma _{D} \right) \right) ^2\right) ^k\left( 1+O(N^{-p})\right) . \end{aligned}$$
(3.37)

Recalling (3.25),

$$\begin{aligned} \sum _{ (\ne )} \left( {\mathbb {E}}_{\sigma }\left( \sigma _{A}\sigma _{B}\sigma _{C}\sigma _{D} \right) \right) ^2 =\frac{4!}{a_N^8} {\mathbb {E}}\left( {\mathcal H}_4^2\right) . \end{aligned}$$
(3.38)

Putting these observations together and using (3.32), we arrive at the assertion of the lemma. \(\square \)

We now turn to the remainder term.

Lemma 3.8

$$\begin{aligned} \lim _{N\uparrow \infty } \frac{N^{\left( 2pk-4k\right) } a_N^{8k}}{4!^{2k} }R_N(2k)=0. \end{aligned}$$
(3.39)

Proof

Recall that the sums in (3.34) run over 8k multi-indices, which the pairing condition imposed by the J's reduces to 4k multi-indices. In \(P_N(2k)\), there are indeed that many sums. We must show that in what is left, i.e. if pairings occur that involve more than two blocks, the effective number of summations is further reduced. This means that there are terms where (double) products of the following type appear:

  (1)
    $$\begin{aligned} {\mathbb {E}}_{\sigma }\left( \sigma _A\sigma _B\sigma _C\sigma _D\right) {\mathbb {E}}_{\sigma }\left( \sigma _A\sigma _E\sigma _F\sigma _G\right) , \end{aligned}$$

    where (EFG) do not coincide with any of the multi-indices (ABCD), or

  (2)
    $$\begin{aligned} {\mathbb {E}}_{\sigma }\left( \sigma _A\sigma _B\sigma _C\sigma _D\right) {\mathbb {E}}_{\sigma }\left( \sigma _A\sigma _B\sigma _E\sigma _F\right) , \end{aligned}$$

    where (EF) do not coincide with any of the multi-indices (ABCD), or

  (3)
    $$\begin{aligned} \textit{sums which appear in pairs but where at least one of the pairs coincides.} \end{aligned}$$

The last case is trivially of lower order.

We first look at the terms of type (1). They are of the form

$$\begin{aligned} {\widetilde{\sum }}_{(1)} {\mathbb {E}}_{\sigma }\left( \sigma _A\sigma _B\sigma _C\sigma _D\right) {\mathbb {E}}_{\sigma }\left( \sigma _A\sigma _E\sigma _F\sigma _G\right) \prod _{i=1}^{2k-2} {\mathbb {E}}_{\sigma }\left( \sigma _{A_i}\sigma _{B_i} \sigma _{C_i} \sigma _{D_i}\right) , \end{aligned}$$
(3.40)

where the sum is over at most 4k different multi-indices, where moreover ABCDEFG respect the condition stated under (1), and where the multi-indices with the same index i are all different. We first note that

$$\begin{aligned} \sum _{\begin{array}{c} A,B,C,D \in I_N \\ (\ne ) \end{array}}{\mathbb {E}}_{\sigma }\left( \sigma _A\sigma _B\sigma _C\sigma _D\right) \lesssim N^{2p}, \end{aligned}$$
(3.41)

since the expectation over \(\sigma \) vanishes unless all \(\sigma _i\) appearing in the product come in pairs. Thus, we may run A over all \(N^p\) values. Then BCD may each match \(k_B,k_C\) and \(k_D\) with \(k_B+k_C+k_D=p\) of the indices of A. Further, C may in addition match \(\ell _C\) of the \(p-k_B\) free indices of B. Then D must match the remaining \(p-k_B-k_C\) unmatched indices of A, the \(p-k_B-\ell _C\) unmatched indices of B and the \(p-k_C-\ell _C\) free indices of C. This leaves \(N^{p-k_B}\) choices for B, \(N^{p-k_C-\ell _C}\) choices for C, and just one for D. Clearly, \(\ell _C=k_D\), since D must match the \(p-k_B-k_C\) unmatched indices of A. Thus, the number of choices for the four multi-indices is \(N^{p+p-k_B+p-k_C-\ell _C}=N^{2p}\). If in addition one of the multi-indices is fixed, we are left with

$$\begin{aligned} \sum _{\begin{array}{c} B,C,D \in I_N \\ (\ne ) \end{array}}{\mathbb {E}}_{\sigma }\left( \sigma _A\sigma _B\sigma _C\sigma _D\right) \lesssim N^{p}, \end{aligned}$$
(3.42)

where the BCD must also be different from A. If two multi-indices are fixed,

$$\begin{aligned} \sum _{\begin{array}{c} C,D \in I_N \\ (\ne ) \end{array}}{\mathbb {E}}_{\sigma }\left( \sigma _A\sigma _B\sigma _C\sigma _D\right) \lesssim N^{p-1}. \end{aligned}$$
(3.43)

This bound comes from the case when B matches the largest possible number of the indices of A, namely \(p-1\). In that case, C has to match just the one remaining index of A, leaving \(N^{p-1}\) choices that then have to be matched by D. Finally, if all four multi-indices are fixed, there is only one contribution.
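For very small systems the bound (3.41) can be checked by exhaustive counting. The sketch below (ours; feasible only for tiny N) counts the ordered quadruples of pairwise distinct multi-indices with non-vanishing \(\sigma \)-average and compares the count with \(N^{2p}\); the ratio stays bounded:

```python
import itertools

def count_quadruples(N, p):
    """Count ordered distinct quadruples (A,B,C,D) with E_sigma(...) = 1."""
    idx = list(itertools.combinations(range(N), p))
    count = 0
    for A, B, C, D in itertools.product(idx, repeat=4):
        if len({A, B, C, D}) == 4:
            parity = {}
            for S in (A, B, C, D):
                for i in S:
                    parity[i] = parity.get(i, 0) ^ 1
            if all(v == 0 for v in parity.values()):
                count += 1
    return count

for N in (5, 6, 7):
    c = count_quadruples(N, 3)
    print(N, c, c / N ** 6)      # p = 3, so N^{2p} = N^6
```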

Let us now return to the sum (3.40),

$$\begin{aligned} {\widetilde{\sum }}_{(1)} {\mathbb {E}}_{\sigma }\left( \sigma _A\sigma _B\sigma _C\sigma _D\right) {\mathbb {E}}_{\sigma }\left( \sigma _A\sigma _E\sigma _F\sigma _G\right) \prod _{i=1}^{2k-2} {\mathbb {E}}_{\sigma }\left( \sigma _{A_i}\sigma _{B_i} \sigma _{C_i} \sigma _{D_i}\right) . \end{aligned}$$
(3.44)

The sum over the seven multi-indices ABCDEFG gives at most \(N^{3p}\) terms: the sum over A gives \(N^p\), and then, according to the discussion above, the BCD and the EFG give \(N^p\) each. The remaining sum is over \(4(2k-2)\) multi-indices, of which 6 have to be matched to BCDEFG, and all others must be paired. This leaves \(4k-7\) sums over multi-indices, which, due to the constraints created by the \(\sigma \)-sums, give at most \(N^{p(2k-7/2)}\) terms. So overall, (3.44) is bounded by a constant times \(N^{p(2k-1/2)}\ll N^{2kp}\).

Terms of Type (2) are of the form

$$\begin{aligned} {\widetilde{\sum }}_{(2)} {\mathbb {E}}_{\sigma }\left( \sigma _A\sigma _B\sigma _C\sigma _D\right) {\mathbb {E}}_{\sigma }\left( \sigma _A\sigma _B\sigma _E\sigma _F\right) \prod _{i=1}^{2k-2} {\mathbb {E}}_{\sigma }\left( \sigma _{A_i}\sigma _{B_i} \sigma _{C_i} \sigma _{D_i}\right) . \end{aligned}$$
(3.45)

To bound the sum over the first six multi-indices, we have to be more careful. First, there are \(N^p\) choices for A. Then, if we choose B such that \(k_B\) of its indices match those of A, there are \(N^{p-k_B}\) choices for B. Next, we must choose \(k_C\) and \(\ell _C\) as in the discussion above, such that \(k_B+k_C+\ell _C=p\), and equally \(k_E\) and \(\ell _E\) with the same property. This allows \(N^{p-k_C-\ell _C}\) choices for C and \(N^{p-k_E-\ell _E}\) choices for E. Finally, D and F are determined. Altogether, this leaves \(N^{4p-k_B-k_C-\ell _C-k_E-\ell _E}=N^{2p+k_B}\) terms, for \(k_B\) given. But since \(B\ne A\), \(k_B\le p-1\), so that the sum over these six multi-indices contributes at most \(O(N^{3p-1})\) terms. From the remaining \(4(2k-2)\) multi-indices, four are fixed to match CDEF, and all others must be paired. This leaves \(2(2k-3)\) free multi-indices, which can contribute at most \(N^{p(2k-3)}\) terms. So in all, the sum in (3.45) is bounded by a constant times \(N^{2kp-1}\), which is again of lower order than \(N^{2kp}\).

Finally, if any multi-index occurs four times, we lose a factor of \(N^{2p}\), and these terms are negligible as well. Combining these observations, we have proven the lemma. \(\square \)

The assertion of Lemma 3.5 follows immediately. \(\square \)

Proof of Lemma 3.6

In the case of odd moments, a pairing of the multi-indices that always involves just two blocks at a time is obviously impossible, so that the terms that would contribute a leading term \(P_N(2k+1)\) do not exist. Thus

$$\begin{aligned} {\mathbb {E}}\left( \left( N^{p-2} \mathcal {H}_4\right) ^{2k+1}\right) = \frac{N^{(p-2)(2k+1)} a_N^{4(2k+1)}}{4!^{2k+1} } R_N(2k+1) \lesssim \frac{1}{N^{(2k+1)p}} R_N(2k+1). \end{aligned}$$
(3.46)

By the same arguments as in the proof of Lemma 3.8, \(R_N(2k+1)\) is of smaller order than \(N^{(2k+1)p}\) and hence the right-hand side of (3.46) tends to zero. This proves Lemma 3.6. \(\square \)

This also concludes the proof of Proposition 3.1 for p odd.

3.3 Proof of Proposition 3.1: p even

Recall that for p even,

$$\begin{aligned} \alpha _N(p) \left( T_N({\beta })-1\right)= & {} {\beta }^4 N^{\left( \frac{3p}{4}-\frac{3}{2}\right) }\left( \frac{-{\mathbb {E}}_{\sigma } \left( H_N(\sigma )^2\right) ^2}{8}+\frac{{\mathbb {E}}_{\sigma } \left( H_N(\sigma )^4\right) }{4!} \right) \nonumber \\{} & {} +{\beta }^3N^{\left( \frac{3p}{4}-\frac{3}{2}\right) }\frac{{\mathbb {E}}_{\sigma } \left( -H_N(\sigma )^3\right) }{3!}. \end{aligned}$$
(3.47)

We first show that only the last term is relevant.

Lemma 3.9

$$\begin{aligned} \lim _{N\uparrow \infty }N^{\left( \frac{3p}{4}-\frac{3}{2}\right) }\left( -\frac{{\mathbb {E}}_{\sigma } \left( H_N(\sigma )^2\right) ^2}{8}+\frac{{\mathbb {E}}_{\sigma } \left( H_N(\sigma )^4\right) }{4!} \right) =0, \text { in probability}. \end{aligned}$$
(3.48)

Proof

By Lemma 3.3,

$$\begin{aligned}{} & {} N^{\left( \frac{3p}{4}-\frac{3}{2}\right) }\left( -\frac{{\mathbb {E}}_{\sigma } \left( H_N(\sigma )^2\right) ^2}{8}+\frac{{\mathbb {E}}_{\sigma } \left( H_N(\sigma )^4\right) }{4!}\right) \nonumber \\{} & {} \qquad =-N^{\left( \frac{3p}{4}-\frac{3}{2}\right) }\frac{ a_N^4}{12}\sum _{A \in I}J_{A}^4+N^{\left( \frac{3p}{4}-\frac{3}{2}\right) }\mathcal {H}_4. \end{aligned}$$
(3.49)

By the law of large numbers (see (3.21)), the first term on the right converges to zero in probability. By Lemma 3.4, \(N^{p-2}{\mathcal H}_4\) is bounded in \(L^2\). Since \(\frac{3p}{4}-\frac{3}{2}< p-2\) if \(p>2\), this implies that the last term in (3.49) also converges to zero in probability. This proves the lemma. \(\square \)

Thus, it only remains to prove that

$$\begin{aligned} {\beta }^3 N^{\left( \frac{3p}{4}-\frac{3}{2}\right) }\frac{{\mathbb {E}}_{\sigma } \left( - H_N(\sigma )^3\right) }{3!}\buildrel {\mathcal D}\over \rightarrow \mathcal {N}(0,\sigma ({\beta },p)^2), \end{aligned}$$
(3.50)

to conclude the proof of Proposition 3.1. We break this up into three lemmata as in the odd case.

Lemma 3.10

(Second moment). For any \({\beta }\ge 0\),

$$\begin{aligned} \lim _{N \rightarrow +\infty } {\beta }^6{\mathbb {E}}\left( \left( N^{\left( \frac{3p}{4}-\frac{3}{2}\right) } {\mathbb {E}}_{\sigma }\left( \frac{-H_N(\sigma )^3}{3!}\right) \right) ^2\right) = \sigma ({\beta },p)^2. \end{aligned}$$
(3.51)

Lemma 3.11

(Even moments). For any \({\beta }\ge 0\) and all \(k\in {\mathbb {N}}\),

$$\begin{aligned} \lim _{N \rightarrow +\infty } {\beta }^{6k}{\mathbb {E}}\left( \left( N^{\left( \frac{3p}{4}-\frac{3}{2}\right) } {\mathbb {E}}_{\sigma }\left( \frac{-H_N(\sigma )^3}{3!}\right) \right) ^{2k}\right) =\frac{(2k)!}{2^kk!}\sigma ({\beta },p)^{2k}. \end{aligned}$$
(3.52)

Lemma 3.12

(Odd moments). For any \({\beta }\ge 0\) and all \(k\in {\mathbb {N}}\),

$$\begin{aligned} \lim _{N \rightarrow +\infty } {\mathbb {E}}\left( \left( N^{\left( \frac{3p}{4}-\frac{3}{2}\right) } {\mathbb {E}}_{\sigma }\left( \frac{-H_N(\sigma )^3}{3!}\right) \right) ^{2k+1}\right) =0. \end{aligned}$$
(3.53)

Proof of Lemma 3.10

We have that

$$\begin{aligned}{} & {} {\mathbb {E}}\left( {\mathbb {E}}_{\sigma } \left( H_N(\sigma )^3\right) ^2\right) \nonumber \\{} & {} \quad ={a_N^6} \sum _{\begin{array}{c} A,B,C \in I_N (\ne ) \\ D,E,F \in I_N (\ne ) \end{array}} {\mathbb {E}}\left( J_AJ_BJ_CJ_DJ_EJ_F\right) {\mathbb {E}}_{\sigma ,\sigma ^{\prime }}\left( \sigma _{A}\sigma _{B}\sigma _{C}\sigma _{D}^{\prime }\sigma _{E}^{\prime }\sigma _{F}^{\prime }\right) . \end{aligned}$$
(3.54)

We rearrange the summation according to the possible sub-cases: i) all six multi-indices coincide; ii) four multi-indices coincide and the remaining two form a pair; iii) the six multi-indices come in three different pairs. Thus the right-hand side of (3.54) equals

$$\begin{aligned}{} & {} a_N^6{\mathbb {E}}\left( J^6\right) \sum _{A \in I_N} {\mathbb {E}}_{\sigma ,\sigma ^{\prime }}\left( \sigma _{A}\sigma _{A}^{\prime }\right) +a_N^6\left( {\begin{array}{c}6\\ 2\end{array}}\right) {\mathbb {E}}\left( J^4\right) {\mathbb {E}}\left( J^2\right) \sum _{A \ne B \in I_N }{\mathbb {E}}_{\sigma ,\sigma ^{\prime }}\left( \sigma _{A}\sigma _{B}^{\prime }\right) \nonumber \\{} & {} \quad \quad +6a_N^6{\mathbb {E}}\left( J^2\right) ^3 \sum _{{A,B,C \in I_N (\ne ) }}{\mathbb {E}}_{\sigma ,\sigma ^{\prime }} \left( \sigma _{A}\sigma _{B}\sigma _{C}\sigma _{A}^{\prime }\sigma _{B}^{\prime }\sigma _{C}^{\prime }\right) \nonumber \\{} & {} \quad =6{a_N^6} \sum _{A,B,C\in I_N } {\mathbb {E}}_{\sigma ,\sigma ^{\prime }}\left( \sigma _{A}\sigma _{B}\sigma _{C} \sigma _{A}^{\prime }\sigma _{B}^{\prime }\sigma _{C}^{\prime }\right) , \end{aligned}$$
(3.55)

where the factor 6 accounts for the \(3!\) possible pairings, which all give the same contribution. The first two terms vanish, since \({\mathbb {E}}_{\sigma }\left( \sigma _{A}\right) =0\) for any multi-index A, and in the last line we also dropped the condition \((\ne )\), since all terms where it fails vanish. We conclude that

$$\begin{aligned} {\mathbb {E}}\left( {{\mathbb {E}}_{\sigma } \left( H_N(\sigma )^3\right) ^2}\right) =6{ a_N^6} {\mathbb {E}}_{\sigma ,\sigma ^{\prime }}\left[ \left( \sum _{A \in I_N}\sigma _{A}\sigma _{A}^{\prime }\right) ^3\right] =6 \sum _{m \in \Gamma _N} \left( Nf_N^p\left( m \right) \right) ^3 p_N(m). \end{aligned}$$
(3.56)

From here we get

$$\begin{aligned} {\mathbb {E}}\left( N^{2\left( \frac{3p}{4}-\frac{3}{2}\right) }{{\mathbb {E}}_{\sigma } \left( H_N(\sigma )^3\right) ^2}\right) ={3!} \sum _{m \in \Gamma _N}\left( N^{\frac{p}{2}} f_N^p\left( m \right) \right) ^3p_N(m). \end{aligned}$$
(3.57)

Exactly as in the proof of Lemma 3.4 it now follows that

$$\begin{aligned} \lim _{N \uparrow \infty } {\mathbb {E}}\left( N^{2\left( \frac{3p}{4}-\frac{3}{2}\right) }\frac{{\mathbb {E}}_{\sigma } \left( H_N(\sigma )^3\right) ^2}{3!^2}\right) = \frac{1}{3\sqrt{2\pi }} \int _{-\infty }^{+\infty }\left( \sum _{k=0}^{[p/2]} d_{p-2k}{m}^{p-2k}\right) ^3{\mathrm e}^{-\frac{m^2}{2}}dm, \end{aligned}$$
(3.58)

which proves the lemma. \(\square \)
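As in the odd case, the limit can be evaluated in closed form; the sketch below (ours) expands the cube of the polynomial from (1.5), which for p = 4 is \(m^4-6m^2+2\), and uses the Gaussian moments \({\mathbb {E}}\left( Z^{2n}\right) =(2n-1)!!\) to evaluate the right-hand side of (3.58):

```python
import math

def dfact(n):
    """Double factorial; dfact(n) = 1 for n <= 0."""
    return math.prod(range(n, 0, -2)) if n > 0 else 1

p = 4
q = {p - 2 * k: (-1) ** k * math.comb(p, 2 * k) * dfact(k)
     for k in range(p // 2 + 1)}        # {4: 1, 2: -6, 0: 2}
cube = {0: 1.0}
for _ in range(3):                      # expand (m^4 - 6 m^2 + 2)^3
    new = {}
    for e1, c1 in cube.items():
        for e2, c2 in q.items():
            new[e1 + e2] = new.get(e1 + e2, 0.0) + c1 * c2
    cube = new
# E(Z^e) = (e-1)!! for even e; prefactor 1/(3 sqrt(2 pi)) gives 1/3 of it
value = sum(c * dfact(e - 1) for e, c in cube.items() if e % 2 == 0) / 3
print(value)   # 1655/3 = 551.666..., the limit in (3.58) for p = 4
```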

Proof of Lemma 3.11

For \(k>1\), we consider

$$\begin{aligned}{} & {} {\mathbb {E}}\left( \left( N^{\left( \frac{3p}{4}-\frac{3}{2}\right) } {\mathbb {E}}_{\sigma }\left( \frac{H_N(\sigma )^3}{3!}\right) \right) ^{2k}\right) \nonumber \\{} & {} \qquad =\frac{N^{\left( \frac{3pk}{2}-3k\right) } a_N^{6k}}{3!^{2k} }{\mathbb {E}}\left( \left( \sum _{(\ne )} J_AJ_BJ_C {\mathbb {E}}_{\sigma }\left( \sigma _{A}\sigma _{B}\sigma _{C}\right) \right) ^{2k}\right) . \end{aligned}$$
(3.59)

Expanding the 2k-th moment inside the expectation yields

$$\begin{aligned}{} & {} {\mathbb {E}}\left( \left( \sum _{ (\ne )} J_{A}J_{B}J_{C}{\mathbb {E}}_{\sigma }\left( \sigma _{A}\sigma _{B}\sigma _{C}\right) \right) ^{2k}\right) \nonumber \\{} & {} \qquad = \prod _{i=1}^{2k} \sum _{ (\ne )} {\mathbb {E}}\left( \prod _{i=1}^{2k} J_{A_i} J_{B_i} J_{C_i} \right) \prod _{i=1}^{2k} {\mathbb {E}}_{\sigma }\left( \sigma _{A_i}\sigma _{B_i}\sigma _{C_i}\right) . \end{aligned}$$
(3.60)

We now proceed as in the case of p odd. The principal term in the sum comes from terms in which the multi-indices within two blocks i, j are matched, i.e. \((A_i,B_i,C_i)=(\pi [A_j],\pi [B_j],\pi [C_j])\) for some permutation \(\pi \). Since there are \(\frac{(2k)!}{k!2^k}\) different ways to construct such sum-pairings, we re-write the right-hand side of (3.60) as

$$\begin{aligned}{} & {} \frac{3!^k(2k)!}{k!2^k}\sum _{\begin{array}{c} A_1,B_1,C_1 \dots A_{k},B_{k},C_{k} \in I_N \\ (\ne ) \end{array}} \prod _{i=1}^{k} {\mathbb {E}}\left( J_{A_i}^2 J_{B_i}^2 J_{C_i}^2\right) \prod _{i=1}^{k} {\mathbb {E}}_{\sigma }\left( \sigma _{A_i}\sigma _{B_i} \sigma _{C_i}\right) ^2+R_N(2k) \nonumber \\{} & {} \quad \equiv P_N(2k)+R_N(2k). \end{aligned}$$
(3.61)

As in the odd case, we have the following results.

Lemma 3.13

With the notation above,

$$\begin{aligned} \lim _{N\uparrow \infty } N^{\left( \frac{3pk}{2}-3k\right) } \frac{{\beta }^{6k} a_N^{6k}}{3!^{2k}}P_N(2k)= \frac{(2k)!}{k!2^k}\sigma ({\beta },p)^{2k}. \end{aligned}$$
(3.62)

Lemma 3.14

$$\begin{aligned} \lim _{N\uparrow \infty } N^{\left( \frac{3pk}{2}-3k\right) } \frac{a_N^{6k}}{3!^{2k}}R_N(2k)=0. \end{aligned}$$
(3.63)

Proof of Lemma 3.13

The proof is completely analogous to that of Lemma 3.7 and will be omitted. \(\square \)

Proof of Lemma 3.14

The non-trivial terms that appear in the expression for \(R_N(2k)\) must contain a term of the form

$$\begin{aligned} {\mathbb {E}}_{\sigma }\left( \sigma _A\sigma _B\sigma _C\right) {\mathbb {E}}_{\sigma }\left( \sigma _A\sigma _D\sigma _E\right) , \end{aligned}$$
(3.64)

where (DE) do not coincide with any of the multi-indices (ABC). That is, we have to control sums of the form

$$\begin{aligned} {\widetilde{\sum }}_{(1)} {\mathbb {E}}_{\sigma }\left( \sigma _A\sigma _B\sigma _C\right) {\mathbb {E}}_{\sigma }\left( \sigma _A\sigma _D\sigma _E\right) \prod _{i=1}^{2k-2} {\mathbb {E}}_{\sigma }\left( \sigma _{A_i}\sigma _{B_i} \sigma _{C_i}\right) , \end{aligned}$$
(3.65)

where ABCDE are as above and all remaining multi-indices must be paired. By a computation analogous to that in the proof of Lemma 3.8, we get that

$$\begin{aligned} \sum _{\begin{array}{c} A,B,C \in I_N \\ (\ne ) \end{array}}{\mathbb {E}}_{\sigma }\left( \sigma _A\sigma _B\sigma _C\right) \lesssim N^{\frac{3p}{2}}. \end{aligned}$$
(3.66)

Looking at (3.64), we see that the sum over the (ABCDE) produces \(O(N^{2p})\) terms. Of the remaining \(3(2k-2)\) multi-indices, four must match BCDE while the remaining ones must be paired. This leaves \((3k-5)\) free multi-indices to sum over. This yields at most \(N^{p(3k-5)/2}\) terms, so that altogether the sum in (3.65) is of order at most \(N^{p(3k-1)/2}\). Inserting this into (3.63) shows that the left-hand side is of order \(N^{-p/2}\) and converges to zero as claimed. \(\square \)

Lemma 3.13 and Lemma 3.14 yield the assertion of Lemma 3.11. \(\square \)

Proof of Lemma 3.12

\({\mathbb {E}}_{\sigma }\left( H_N(\sigma )^3\right) ^{2k+1}\) is a sum of products of \(6k+3\) standard normal random variables. Since \(6k+3\) is odd, in each such product at least one of the J's appears with an odd power. The expectation of \({\mathbb {E}}_{\sigma }\left( H_N(\sigma )^3\right) ^{2k+1}\) with respect to \({\mathbb {E}}\) is thus equal to 0. \(\square \)

3.4 Proof of Proposition 3.2: p even

We want to show that

$$\begin{aligned} \lim _{N\uparrow \infty } N^{3p/4-3/2} \left| {\mathcal Z}_N({\beta })- T_N({\beta })\right| =0, \text { in probability}. \end{aligned}$$
(3.67)

Using the definition of \(T_N({\beta })\), we get

$$\begin{aligned}{} & {} \left| {\mathcal Z}_N({\beta })- T_N({\beta })\right| \le \left| {\mathcal Z}_N({\beta })-1- \frac{{\beta }^3}{3!}{{\mathbb {E}}_{\sigma }\left( -H_N(\sigma )^3\right) }\right| + \left| \frac{{\beta }^4}{8}{{\mathbb {E}}_{\sigma } \left( H_N(\sigma )^2\right) ^2}\right. \nonumber \\{} & {} \qquad \left. -\frac{{\beta }^4}{4!}{{\mathbb {E}}_{\sigma } \left( H_N(\sigma )^4\right) }\right| \nonumber \\{} & {} \quad \le \left| Z_\epsilon ^\le -{\mathcal Z}_N({\beta })\right| + \left| {\mathbb {E}}\left( Z_\epsilon ^\le \right) -1\right| + \left| Z_\epsilon ^\le -{\mathbb {E}}\left( Z_\epsilon ^\le \right) -\frac{{\beta }^3}{3!}{{\mathbb {E}}_{\sigma } \left( -H_N(\sigma )^3\right) }\right| \nonumber \\{} & {} \qquad + \left| \frac{{\beta }^4}{8}{{\mathbb {E}}_{\sigma } \left( H_N(\sigma )^2\right) ^2}-\frac{{\beta }^4}{4!}{{\mathbb {E}}_{\sigma } \left( H_N(\sigma )^4\right) }\right| . \end{aligned}$$
(3.68)

The first term in the second line is negligible by Lemma 2.3, and the second by (2.18) together with Lemma 2.3. By Lemma 3.9, the last term on the right of (3.68) vanishes when inserted into (3.67). To control the remaining third term, we bound its second moment,

$$\begin{aligned}{} & {} {\mathbb {E}}\left( \left| Z_\epsilon ^\le -{\mathbb {E}}(Z_\epsilon ^\le )-\frac{{\beta }^3}{3!}{{\mathbb {E}}_{\sigma } \left( -H_N(\sigma )^3\right) }\right| ^2 \right) ={\mathbb {E}}\left( \left( Z_\epsilon ^\le \right) ^2-\left( {\mathbb {E}}(Z^\le _\epsilon )\right) ^2\right) \nonumber \\{} & {} \quad +\frac{{\beta }^6}{3!^2}{\mathbb {E}}\left( {{\mathbb {E}}_{\sigma } \left( H_N(\sigma )^3\right) ^2}\right) -\frac{2{\beta }^3}{3!}{\mathbb {E}}\left( {\mathbb {E}}_{\sigma } \left( -H_N(\sigma )^3\right) \left( Z^{\le }_\epsilon -{\mathbb {E}}\left( Z_\epsilon ^\le \right) \right) \right) .\qquad \quad \end{aligned}$$
(3.69)

\({\mathbb {E}}\left( {\mathbb {E}}_{\sigma } \left( -H_N(\sigma )^3\right) \right) =0\) by symmetry. Therefore, the right-hand side of (3.69) is equal to

$$\begin{aligned}{} & {} {\mathbb {E}}\left( (Z_\epsilon ^\le )^2\right) - \left( {\mathbb {E}}\left( Z_\epsilon ^\le \right) \right) ^2 -\frac{{\beta }^6}{3!^2}{\mathbb {E}}\left( {{\mathbb {E}}_{\sigma } \left( H_N(\sigma )^3\right) ^2}\right) \nonumber \\{} & {} \quad -\frac{2{\beta }^3}{3!}{\mathbb {E}}\left( {{\mathbb {E}}_{\sigma } \left( -H_N(\sigma )^3\right) } \left( {Z_\epsilon ^\le }-\frac{{\beta }^3}{3!}{{\mathbb {E}}_{\sigma } \left( -H_N(\sigma )^3\right) }\right) \right) . \end{aligned}$$
(3.70)

In order to prove that the first term on the r.h.s. of the first inequality in (3.68) vanishes, it thus remains to prove the following two lemmata.

Lemma 3.15

For all \({\beta }<{\beta }_p\),

$$\begin{aligned} \lim _{N \uparrow \infty }N^{\left( \frac{3p}{2}-3\right) }\left| {\mathbb {E}}\left( (Z_\epsilon ^\le )^2\right) - \left( {\mathbb {E}}\left( Z_\epsilon ^\le \right) \right) ^2 -\frac{{\beta }^6}{3!^2}{\mathbb {E}}\left( {{\mathbb {E}}_{\sigma } \left( H_N(\sigma )^3\right) ^2}\right) \right| =0 \end{aligned}$$
(3.71)

and

Lemma 3.16

For all \({\beta }\in {\mathbb {R}}_+\),

$$\begin{aligned} \lim _{N \uparrow \infty }N^{\left( \frac{3p}{2}-3\right) }\left| {\mathbb {E}}\left( {{\mathbb {E}}_{\sigma } \left( -H_N(\sigma )^3\right) }\left( Z_\epsilon ^\le -\frac{{\beta }^3}{3!}{{\mathbb {E}}_{\sigma } \left( -H_N(\sigma )^3\right) }\right) \right) \right| =0. \end{aligned}$$
(3.72)

Lemmas 3.15 and 3.16 clearly imply Proposition 3.2 for p even.

Proof of Lemma 3.15

We will now improve the estimate of the second moment of \({Z_\epsilon ^\le }\) started with Eq. (2.29). We write

$$\begin{aligned} {\mathbb {E}}\left[ (Z_\epsilon ^\le )^2\right] = A+B, \end{aligned}$$
(3.73)

where A and B are given in (2.29) and \(A\le A_1+A_2\), with \(A_1,A_2\) defined in (2.48) and (2.49). The estimates obtained in Sect. 2 for B (Lemma 2.5) and \(A_1\) (Eq. 2.58) are good enough, but we need to improve the bound on \(A_2\). Recall that in the final bound (2.67) for \(A_2\) there was an error term of order \(N^{3-3p/2}\), which would not vanish when multiplied by \(N^{3p/2-3}\). This term is due to the cubic term in the expansion (2.62), and it reads

$$\begin{aligned} \frac{1}{3!} \sum _{m \in \Gamma _N}{\left( \frac{{\beta }^2 N f_N^p\left( m \right) }{2 {\beta }^2 a_N^2+1}\right) ^3} p_N\left( m\right) . \end{aligned}$$
(3.74)

But recall that

$$\begin{aligned} {\mathbb {E}}\left( \frac{{\beta }^6}{3!^2}{{\mathbb {E}}_{\sigma } \left( H_N(\sigma )^3\right) ^2}\right) = \sum _{m \in \Gamma _N} \frac{ \left( {\beta }^2 Nf_N^p\left( m \right) \right) ^3 }{3!}p_N(m). \end{aligned}$$
(3.75)

Therefore, in the expression in (3.71), this term exactly cancels the unpleasant cubic term in the expansion of \(A_2\).

Recalling (2.63),

$$\begin{aligned} A_2 = \sum _{m \in \Gamma _N} \left( 1 +\frac{1}{2}{\left( \frac{{\beta }^2 N f_N^p\left( m \right) }{2 {\beta }^2 a_N^2+1}\right) ^2} +\frac{1}{3!}{\left( \frac{{\beta }^2 N f_N^p\left( m \right) }{2 {\beta }^2 a_N^2+1}\right) ^3}\right) p_N\left( m\right) + O\left( N^{4-2p}\right) . \end{aligned}$$
(3.76)

Hence, we arrive at

$$\begin{aligned} {\mathbb {E}}\left( {Z_{\epsilon }^{\le }}^2\right)= & {} \left( 1+\frac{{\beta }^4Na_N^2}{2}+\frac{1}{3!}\sum _{m \in \Gamma _N} {\left( {\beta }^2 N f_N^p\left( m \right) \right) ^3} p_N(m) \right) \nonumber \\{} & {} \left( 1-{\beta }^4N a_N^2+O(N^{4-2p})\right) \nonumber \\= & {} 1-\frac{{\beta }^4Na_N^2}{2}+\frac{1}{3!}\sum _{m \in \Gamma _N} {\left( {\beta }^2 N f_N^p\left( m \right) \right) ^3} p_N(m) +O(N^{4-2p}).\qquad \quad \end{aligned}$$
(3.77)

By (2.76),

$$\begin{aligned} \left( {\mathbb {E}}\left( Z_{\epsilon }^{\le }\right) \right) ^2=1- \frac{{\beta }^4 N a_N^2}{2}+O\left( N^{4-2p}\right) , \end{aligned}$$
(3.78)

and finally, using (3.75), the claim of Lemma 3.15 follows. \(\square \)

Proof of Lemma 3.16

By definition of \(Z_{\epsilon }^{\le }\), we can re-write

$$\begin{aligned}{} & {} \left| {\mathbb {E}}\left( {{\mathbb {E}}_{\sigma } \left( -H_N(\sigma )^3\right) }\left( {Z_{\epsilon }^{\le }}-\frac{{\beta }^3}{3!}{{\mathbb {E}}_{\sigma } \left( -H_N(\sigma )^3\right) }\right) \right) \right| \nonumber \\{} & {} \quad =\left| {\mathbb {E}}\left( {{\mathbb {E}}_{\sigma } \left( -H_N(\sigma )^3\right) } \left( \mathcal {Z}_N({\beta })-Z_{\epsilon }^{>}\right) \right) -\frac{{\beta }^3}{3!}{\mathbb {E}}\left( \left( {{\mathbb {E}}_{\sigma }\left( H_N(\sigma )^3\right) }\right) ^2\right) \right| \nonumber \\{} & {} \quad \le \left| {\mathbb {E}}\left( {{\mathbb {E}}_{\sigma } \left( -H_N(\sigma )^3\right) }\mathcal {Z}_N({\beta })\right) -\frac{{\beta }^3}{3!}{\mathbb {E}}\left( \left( {{\mathbb {E}}_{\sigma } \left( H_N(\sigma )^3\right) }\right) ^2\right) \right| \nonumber \\{} & {} \qquad +\left| {\mathbb {E}}\left( {{\mathbb {E}}_{\sigma } \left( H_N(\sigma )^3\right) }Z_{\epsilon }^{>}\right) \right| . \end{aligned}$$
(3.79)

The first term of the last line can be calculated explicitly. By (3.13),

$$\begin{aligned}{} & {} {\mathbb {E}}\left( {{\mathbb {E}}_{\sigma } \left( -H_N(\sigma )^3\right) }\mathcal {Z}_N({\beta })\right) ={ a_N^3} \sum _{(\ne )}{\mathbb {E}}_{\sigma }\left( \sigma _{A}\sigma _{B}\sigma _{C}\right) {\mathbb {E}}\left( J_A J_B J_C \mathcal {Z}_N({\beta })\right) \nonumber \\{} & {} \quad ={ a_N^3} \sum _{(\ne )}{\mathbb {E}}_{\sigma }\left( \sigma _{A}\sigma _{B}\sigma _{C}\right) {\mathbb {E}}_{\sigma ^{\prime }} {\mathbb {E}}\left( J_A J_B J_C {\mathrm e}^{-{\beta }H_N(\sigma ^{\prime })-NJ_N({\beta })}\right) . \end{aligned}$$
(3.80)

Now,

$$\begin{aligned}{} & {} {\mathbb {E}}\left( J_A J_B J_C {\mathrm e}^{-{\beta }H_N(\sigma )-NJ_N({\beta })}\right) = {\mathbb {E}}\left( J_A J_B J_C {\mathrm e}^{\sum _{D\in I_N} \left( {\beta }a_N\sigma _DJ_D-\frac{{\beta }^2a_N^2}{2}J_D^2\right) }\right) \nonumber \\{} & {} \quad =\prod _{D \ \in I \setminus \{A,B,C\}}{\mathbb {E}}\left( {\mathrm e}^{ {\beta }a_N \sigma _{D}J_{D}- \frac{{\beta }^2 a_N^2}{2}J_{D}^2}\right) \prod _{D \ \in \{A,B,C\}}{\mathbb {E}}\left( J_D {\mathrm e}^{a_N {\beta }\sigma _{D}J_{D}- \frac{a_N^2 {\beta }^2}{2}J_{D}^2}\right) .\nonumber \\ \end{aligned}$$
(3.81)

We have already computed the terms in the first product, see (2.16). For the second, we get by an elementary integration,

$$\begin{aligned} {\mathbb {E}}\left( J_{D}{\mathrm e}^{ {\beta }a_N \sigma _{D}J_{D}- \frac{1}{2} {\beta }^2 a_N^2J_{D}^2}\right) ={\mathrm e}^{\left( \frac{{\beta }^2 a_N^2}{2(1+2 {\beta }^2 a_N^2)}-\frac{1}{2}\ln {(1+{\beta }^2 a_N^2)}\right) }\frac{ {\beta }a_N \sigma _{D}}{\left( 1+ {\beta }^2 a_N^2\right) }. \end{aligned}$$
(3.82)
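As an aside, the integral behind (3.82) is an instance of the generic identity \({\mathbb {E}}\left( J{\mathrm e}^{bJ-cJ^2/2}\right) =b(1+c)^{-3/2}{\mathrm e}^{b^2/(2(1+c))}\) for a standard Gaussian J; the precise constants appearing in (3.82) come from the definition of \(J_N({\beta })\) in Sect. 2. A quadrature sketch (ours, with generic parameters) of the generic identity:

```python
import math

b, c = 0.7, 0.3                     # generic parameters (illustration only)
step = 1e-3
grid = (i * step for i in range(-12000, 12001))
numeric = sum(x * math.exp(b * x - (1 + c) * x * x / 2) for x in grid)
numeric *= step / math.sqrt(2 * math.pi)   # Riemann sum of the Gaussian integral
closed = b * (1 + c) ** -1.5 * math.exp(b * b / (2 * (1 + c)))
print(numeric, closed)              # agree to quadrature accuracy
```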

Therefore,

$$\begin{aligned} {\mathbb {E}}\left( J_A J_B J_C {\mathrm e}^{-{\beta }H_N(\sigma )-NJ_N({\beta })}\right) = {\mathrm e}^{\left( {\begin{array}{c}N\\ p\end{array}}\right) \left( \frac{{\beta }^2 a_N^2}{2(1+2 {\beta }^2 a_N^2)}-\frac{1}{2}\ln {(1+{\beta }^2 a_N^2)}\right) }\frac{ {\beta }^3 a_N^3 \sigma _{A}\sigma _{B}\sigma _{C}}{\left( 1+{\beta }^2 a_N^2\right) ^3}. \end{aligned}$$
(3.83)

Using (3.83) in (3.80) gives that

$$\begin{aligned}{} & {} {\mathbb {E}}\left( {{\mathbb {E}}_{\sigma } \left( -H_N(\sigma )^3\right) }\mathcal {Z}_N({\beta })\right) \nonumber \\{} & {} \quad =\frac{{\beta }^3a_N^6}{\left( 1+ {\beta }^2 a_N^2\right) ^3} {\mathrm e}^{\left( {\begin{array}{c}N\\ p\end{array}}\right) \left( \frac{{\beta }^2 a_N^2}{2(1+2 {\beta }^2 a_N^2)}-\frac{1}{2}\ln {(1+{\beta }^2 a_N^2)}\right) } \sum _{(\ne )}{\mathbb {E}}_{\sigma }\left( \sigma _{A}\sigma _{B}\sigma _{C}\right) ^2\nonumber \\{} & {} \quad =\frac{{\beta }^3{\mathbb {E}}\left( \left( {{\mathbb {E}}_{\sigma } \left( -H_N(\sigma )^3\right) }\right) ^2\right) }{3!\left( 1+ {\beta }^2 a_N^2\right) ^3} {\mathrm e}^{\left( {\begin{array}{c}N\\ p\end{array}}\right) \left( \frac{{\beta }^2 a_N^2}{2(1+2 {\beta }^2 a_N^2)}-\frac{1}{2}\ln {(1+{\beta }^2 a_N^2)}\right) }. \end{aligned}$$
(3.84)

Using (3.84), we get

$$\begin{aligned}{} & {} \left| {\mathbb {E}}\left( {{\mathbb {E}}_{\sigma } \left( -H_N(\sigma )^3\right) }\mathcal {Z}_N({\beta })\right) -\frac{{\beta }^3}{3!} {\mathbb {E}}\left( \left( {{\mathbb {E}}_{\sigma } \left( -H_N(\sigma )^3\right) }\right) ^2\right) \right| \nonumber \\{} & {} \quad =\frac{{\beta }^3}{3!}{\mathbb {E}}\left( \left( {\mathbb {E}}_{\sigma } \left( -H_N(\sigma )^3\right) \right) ^2\right) \left( \frac{{\mathrm e}^{\left( {\begin{array}{c}N\\ p\end{array}}\right) \left( \frac{{\beta }^2 a_N^2}{2(1+2 {\beta }^2 a_N^2)}-\frac{1}{2}\ln {(1+ {\beta }^2 a_N^2)}\right) }}{\left( 1+{\beta }^2 a_N^2\right) ^3}-1\right) .\qquad \end{aligned}$$
(3.85)

A simple expansion shows that

$$\begin{aligned} \frac{{\mathrm e}^{\left( {\begin{array}{c}N\\ p\end{array}}\right) \left( \frac{{\beta }^2 a_N^2}{2(1+2 {\beta }^2 a_N^2)}-\frac{1}{2}\ln {(1+ {\beta }^2 a_N^2)}\right) }}{\left( 1+{\beta }^2 a_N^2\right) ^3}-1=O(N^{2-p}). \end{aligned}$$
(3.86)

Since

$$\begin{aligned} N^{\frac{3p}{2}-3}\frac{{\beta }^6}{3!^2}{\mathbb {E}}\left( \left( {\mathbb {E}}_{\sigma } \left( -H_N(\sigma )^3\right) \right) ^2\right) \rightarrow \sigma ({\beta }, p)^2, \end{aligned}$$
(3.87)

we conclude that

$$\begin{aligned} \lim _{N \rightarrow \infty }N^{\left( \frac{3p}{2}-3\right) }\left| {\mathbb {E}}\left( {{\mathbb {E}}_{\sigma } \left( -H_N(\sigma )^3\right) } \mathcal {Z}_N({\beta })\right) -\frac{{\beta }^3}{3!}{\mathbb {E}}\left( \left( {{\mathbb {E}}_{\sigma } \left( -H_N(\sigma )^3\right) }\right) ^2\right) \right| =0. \end{aligned}$$
(3.88)

It remains to prove that \(\left| {\mathbb {E}}\left( \frac{{\mathbb {E}}_{\sigma } \left( -H_N(\sigma )^3\right) }{3!}Z_{\epsilon }^{>}\right) \right| \) tends to 0. To see this, we use Hölder's inequality.

$$\begin{aligned}{} & {} \left| {\mathbb {E}}\left( {{\mathbb {E}}_{\sigma } \left( -H_N(\sigma )^3\right) }Z_{\epsilon }^{>}\right) \right| \le {\mathbb {E}}\left( \left| {{\mathbb {E}}_{\sigma ^{\prime }} \left( -H_N(\sigma ^{\prime })^3\right) }\right| {\mathbb {E}}_\sigma \left( {\mathrm e}^{-{\beta }H_N(\sigma )} \right. \right. \nonumber \\{} & {} \qquad \left. \left. \mathbbm {1}_{\{\left| -H_N(\sigma )-{\beta }N\right|> \epsilon \beta N \}} {\mathrm e}^{-NJ_N({\beta })}\right) \right) \nonumber \\{} & {} \quad = {\mathbb {E}}_\sigma \left( {\mathbb {E}}\left( \left| {{\mathbb {E}}_{\sigma ^{\prime }} \left( -H_N(\sigma ^{\prime })^3\right) }\right| {\mathrm e}^{-{\beta }H_N(\sigma )} \mathbbm {1}_{\{\left| -H_N(\sigma )-{\beta }N\right|> \epsilon \beta N \}} {\mathrm e}^{-NJ_N({\beta })}\right) \right) \nonumber \\{} & {} \quad \le {\mathbb {E}}_\sigma \bigg ({\mathbb {E}}\left( {\mathrm e}^{ q_1 {\beta }\sqrt{N} X_\sigma } \mathbbm {1}_{\{\left| X_\sigma -{\beta }\sqrt{N}\right| > \epsilon \beta \sqrt{N} \}}\right) ^{\frac{1}{q_1}} \nonumber \\{} & {} \qquad {\mathbb {E}}\left( \left| {{\mathbb {E}}_{\sigma ^{\prime }} \left( H_N(\sigma ^{\prime })^3\right) }\right| ^{q_2} {\mathrm e}^{-q_2NJ_N({\beta })} \right) ^{\frac{1}{q_2}}\bigg ), \end{aligned}$$
(3.89)

for \(\frac{1}{q_1}+\frac{1}{q_2}=1\). For the last factor, the Cauchy-Schwarz inequality gives

$$\begin{aligned}{} & {} {\mathbb {E}}\left( \left| {{\mathbb {E}}_{\sigma ^{\prime }} \left( H_N(\sigma ^{\prime })^3\right) }\right| ^{q_2} {\mathrm e}^{-q_2NJ_N({\beta })} \right) ^{\frac{1}{q_2}} \nonumber \\{} & {} \qquad \le {\mathbb {E}}\left( \left| {{\mathbb {E}}_{\sigma ^{\prime }} \left( -H_N(\sigma ^{\prime })^3\right) }\right| ^{2q_2} \right) ^{\frac{1}{2q_2}}{\mathbb {E}}\left( {\mathrm e}^{-2q_2NJ_N({\beta })} \right) ^{\frac{1}{2q_2}}. \end{aligned}$$
(3.90)

Again by Fact I in the appendix, the last line in (3.89) is bounded from above by

$$\begin{aligned} {\mathrm e}^{\left( -\frac{(1+\epsilon )^2 {\beta }^2 N}{2 q_1}+(1+\epsilon ) {\beta }^2 N \right) } \left( {\mathbb {E}}\left( \left| {\mathbb {E}}_{\sigma } \left( -H_N(\sigma )^3\right) \right| ^{2q_2} \right) ^{\frac{1}{2q_2}} {\mathbb {E}}\left( {\mathrm e}^{-2q_2NJ_N({\beta })} \right) ^{\frac{1}{ 2q_2}}\right) . \end{aligned}$$
(3.91)

Finally, by explicit computation,

$$\begin{aligned} {\mathbb {E}}\left( {\mathrm e}^{-2q_2NJ_N({\beta })} \right) ^{1/ 2q_2}= {\mathrm e}^{-N{\beta }^2/2 + O\left( N^{2-p}\right) } . \end{aligned}$$
(3.92)

Combining (3.91) and (3.92), we obtain

$$\begin{aligned}{} & {} \left| {\mathbb {E}}\left( {{\mathbb {E}}_{\sigma } \left( H_N(\sigma )^3\right) }Z_{\epsilon }^{>}\right) \right| \nonumber \\{} & {} \quad \le \exp {\left( -{\beta }^2 N \left( \frac{\epsilon ^2}{2} + O\left( q_1-1\right) +O\left( N^{2-p} \right) \right) \right) }\nonumber \\{} & {} \qquad \quad {\mathbb {E}}\left( \left| {{\mathbb {E}}_{\sigma } \left( -H_N(\sigma )^3\right) }\right| ^{2q_2} \right) ^{\frac{1}{2q_2}}. \end{aligned}$$
(3.93)

For every \(\epsilon >0\), we can choose \(q_1\) close enough to 1 that the first factor on the r.h.s. of (3.93) is exponentially small. The second factor, however, grows at most polynomially. This concludes the proof of Lemma 3.16. \(\square \)

This concludes the proof of Proposition 3.2 in the case of p even. \(\square \)

3.5 Proof of Proposition 3.2: p odd

The proof in the odd case is in principle similar to that of the even case. It is enough to show that

$$\begin{aligned} \lim _{N \uparrow \infty }N^{p-2}\left( {Z_{\epsilon }^{\le }}- T_N({\beta })\right) =0, \end{aligned}$$
(3.94)

in probability. Using (3.19) we decompose

$$\begin{aligned} \left| Z_{\epsilon }^{\le }- T_N({\beta })\right| \le \left| Z_{\epsilon }^{\le }-{\beta }^4\mathcal {H}_4-{\mathbb {E}}\left( Z_{\epsilon }^{\le }\right) \right| +\left| {\mathbb {E}}\left( Z_{\epsilon }^{\le }\right) -1+\frac{2{\beta }^4 a_N^4}{4!}\sum _{A \in I_N}J_{A}^4\right| . \end{aligned}$$
(3.95)

The second term is harmless: using (2.24) and the law of large numbers from (3.21), we see that it is \(o(N^{2-p})\) and hence gives a vanishing contribution to (3.94). For the first term in (3.95) we control its second moment. We write

$$\begin{aligned}{} & {} {\mathbb {E}}\left( \left( Z_{\epsilon }^{\le }-{\beta }^4\mathcal {H}_4-{\mathbb {E}}\left( Z_{\epsilon }^{\le }\right) \right) ^2\right) =2{\beta }^4{\mathbb {E}}\left( \mathcal {H}_4{\mathbb {E}}\left( Z_{\epsilon }^{\le }\right) \right) -2{\beta }^4{\mathbb {E}}\left( \mathcal {H}_4 Z_{\epsilon }^{\le }\right) \nonumber \\{} & {} \qquad +{\mathbb {E}}\left( {Z_{\epsilon }^{\le }}^2\right) -{\mathbb {E}}\left( {Z_{\epsilon }^{\le }}\right) ^2+{\beta }^8{\mathbb {E}}\left( {\mathcal {H}_4}^2\right) \nonumber \\{} & {} \quad =2{\beta }^4{\mathbb {E}}\left( \mathcal {H}_4 \left( {\beta }^4{\mathcal H}_4-Z_{\epsilon }^{\le }\right) \right) +{\mathbb {E}}\left( {Z_{\epsilon }^{\le }}^2\right) -{\mathbb {E}}\left( Z_{\epsilon }^{\le }\right) ^2-{\beta }^8{\mathbb {E}}\left( \mathcal {H}_4^2\right) , \end{aligned}$$
(3.96)

where we used that \({\mathcal H}_4\) has zero mean.

We will prove the following two lemmata.

Lemma 3.17

For all \({\beta }\),

$$\begin{aligned} \lim _{N \rightarrow +\infty }N^{2p-4}\left| {\mathbb {E}}\left( \mathcal {H}_4 (Z_{\epsilon }^{\le }-{\beta }^4\mathcal {H}_4)\right) \right| =0. \end{aligned}$$
(3.97)

and

Lemma 3.18

For all \({\beta }<{\beta }_p\),

$$\begin{aligned} \lim _{N \rightarrow +\infty }N^{2p-4}\left| {\mathbb {E}}\left( {Z_{\epsilon }^{\le }}^2\right) -{\mathbb {E}}\left( {Z_{\epsilon }^{\le }}\right) ^2-{\beta }^8{\mathbb {E}}\left( \mathcal {H}_4^2\right) \right| =0. \end{aligned}$$
(3.98)

We will first prove Lemma 3.17 by following exactly the same strategy as for the case p even.

Proof of Lemma 3.17

The proof of this lemma is very similar to that of Lemma 3.16 and we omit many details. As in (3.79), we start with

$$\begin{aligned} \left| {\mathbb {E}}\left( \mathcal {H}_4 (Z_{\epsilon }^{\le }-{\beta }^4\mathcal {H}_4)\right) \right| \le \left| {\mathbb {E}}\left( \mathcal {H}_4 \mathcal {Z}_N({\beta })\right) -{\beta }^4{\mathbb {E}}\left( \mathcal {H}_4^2\right) \right| + \left| {\mathbb {E}}\left( \mathcal {H}_4 Z_{\epsilon }^{>}\right) \right| . \end{aligned}$$
(3.99)

For the first term on the r.h.s. of (3.99), we have

$$\begin{aligned} {\mathbb {E}}\left( \mathcal {H}_4 \mathcal {Z}_N({\beta })\right) =\frac{a_N^4}{4!}\sum _{(\ne )} {\mathbb {E}}_{\sigma }\left( \sigma _{A}\sigma _{B}\sigma _{C}\sigma _{D}\right) {\mathbb {E}}_{\sigma ^{\prime }}{\mathbb {E}}\left( J_{A}J_{B}J_{C}J_{D}{\mathrm e}^{-{\beta }H_N(\sigma ^{\prime })-NJ_N({\beta })}\right) . \end{aligned}$$
(3.100)

Following now the exact same steps as in the proof of Lemma 3.16, we arrive at the analog of (3.84),

$$\begin{aligned} {\mathbb {E}}\left( \mathcal {H}_4 \mathcal {Z}_N({\beta })\right) = \frac{{\beta }^4{\mathbb {E}}\left( \mathcal {H}_4^2\right) }{\left( 1+ {\beta }^2 a_N^2\right) ^4} {\mathrm e}^{\left( {\begin{array}{c}N\\ p\end{array}}\right) \left( \frac{{\beta }^2 a_N^2}{2(1+2 {\beta }^2 a_N^2)}-\frac{1}{2}\ln {(1+{\beta }^2 a_N^2)}\right) }. \end{aligned}$$
(3.101)

From here one concludes that

$$\begin{aligned} \lim _{N\uparrow \infty }N^{2p-4}\left| {\mathbb {E}}\left( \mathcal {H}_4 \mathcal {Z}_N({\beta })\right) -{\beta }^4{\mathbb {E}}\left( \mathcal {H}_4^2\right) \right| =0. \end{aligned}$$
(3.102)

The second term on the right of (3.99) is shown to be exponentially small exactly as the second term in (3.79). This concludes the proof of Lemma 3.17. \(\square \)

Proof of Lemma 3.18

It remains to prove that

$$\begin{aligned} \lim _{N \rightarrow +\infty }N^{2p-4}\left| {\mathbb {E}}\left( {Z_{\epsilon }^{\le }}^2\right) -{\mathbb {E}}\left( {Z_{\epsilon }^{\le }}\right) ^2-{\beta }^8{\mathbb {E}}\left( \mathcal {H}_4^2\right) \right| =0 . \end{aligned}$$
(3.103)

As in the proof of Lemma 3.15, we improve the estimate on \({\mathbb {E}}\left( (Z_\epsilon ^\le )^2\right) \) by retaining an additional term in the expansion of the exponential, which is then cancelled by the \({\beta }^8{\mathbb {E}}\left( {\mathcal H}_4^2\right) \) term. Again this involves only the term \(A_2\). This time we need to push the expansion further and to use that

$$\begin{aligned} \left| \exp ( \xi ) - 1 - \xi - \frac{1}{2}\xi ^2- \frac{1}{3!}\xi ^3 - \frac{1}{4!}\xi ^4-\frac{1}{5!}\xi ^5\right| \le \frac{1}{6!}\xi ^6 \exp |\xi |. \end{aligned}$$
(3.104)
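This is the standard Lagrange remainder bound; a quick grid check (ours):

```python
import math

for i in range(-30, 31):
    xi = i / 10
    taylor = sum(xi ** n / math.factorial(n) for n in range(6))
    lhs = abs(math.exp(xi) - taylor)
    rhs = xi ** 6 / math.factorial(6) * math.exp(abs(xi))   # xi^6 is even, so >= 0
    assert lhs <= rhs + 1e-12, xi
print("remainder bound (3.104) verified on the grid")
```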

This leads to the estimate

$$\begin{aligned} {\mathbb {E}}\left( {Z_{\epsilon }^{\le }}^2\right)= & {} \sum _{m \in \Gamma _N} \left( 1 +\frac{1}{2}\left( \frac{{\beta }^2 N f_N^p\left( m \right) }{2{\beta }^2a_N^2+1}\right) ^2 +\frac{1}{4!}\left( \frac{{\beta }^2 N f_N^p\left( m \right) }{2{\beta }^2a_N^2+1}\right) ^4\right) p_N\left( m\right) \nonumber \\ {}{} & {} {\mathrm e}^{ - {{\beta }^4Na_N^2}+O\left( N^{3-2p}\right) }+o(N^{4-2p}), \end{aligned}$$
(3.105)

where we used that the terms of odd order vanish by symmetry. The quadratic term equals \(\frac{{\beta }^4N^2}{2\left( {\begin{array}{c}N\\ p\end{array}}\right) (2 {\beta }^2 a_N^2+1)^2}\). Moreover, the quartic term gives

$$\begin{aligned} \frac{1}{4!}\sum _{m\in \Gamma _N}p_N(m)\left( \frac{{\beta }^2 N f_N^p\left( m \right) }{2{\beta }^2a_N^2+1}\right) ^4 ={\beta }^8{\mathbb {E}}\left( {\mathcal H}_4^2\right) +\frac{{\beta }^8 N^2a_N^4}{8} +O\left( N^{4-3p}\right) . \end{aligned}$$
(3.106)

Furthermore, using (2.13) we have that

$$\begin{aligned} \left( {\mathbb {E}}\left( Z_{\epsilon }^{\le }\right) \right) ^2= & {} \left( 1- \frac{{\beta }^4}{4} N a_N^2 +\frac{{\beta }^8}{32} N^2 a_N^4 +O\left( N^{3-2p}\right) \right) ^2\nonumber \\= & {} 1- \frac{{\beta }^4 Na_N^2}{2}+\frac{{\beta }^8 N^2a_N^4}{8}+O\left( N^{3-2p}\right) . \end{aligned}$$
(3.107)

Combining these observations, the assertion of Lemma 3.18 follows. \(\square \)

This concludes the proof of Proposition 3.2 and hence of Theorem 1.2.