1 Introduction and statement of results

1.1 Introduction

Let \((x_n)_{n \in \mathbb {N}} \subseteq [0,1]\) be a sequence of points in the unit interval. For \(s > 0\) and for any positive integer \(N \geqslant 1,\) we define the pair correlation function of the sequence \((x_n)_{n \in \mathbb {N}}\) to be

$$\begin{aligned} R_2(s,N) = \frac{1}{N}\#\Big \{m,n \leqslant N, m\ne n : \Vert x_m-x_n\Vert \leqslant \frac{s}{N} \Big \}, \end{aligned}$$

where \(\Vert x\Vert\) denotes the distance of \(x\in \mathbb {R}\) to the nearest integer (see Sect. 1.4 for a proper definition). We say that the sequence \((x_n)_{n \in \mathbb {N}}\) has Poissonian pair correlations (from now on abbreviated as PPC) if

$$\begin{aligned} \lim _{N\rightarrow \infty } R_2(s,N) = 2s \qquad \text { for all } s>0. \end{aligned}$$
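For illustration, here is a minimal numerical sketch (assuming, purely for illustration, an i.i.d. uniform sample; such a sample is well known to have PPC almost surely, see Lemma 11 below) of the pair correlation function just defined.

```python
# Illustrative sketch (not from the paper): R_2(s, N) for an i.i.d. uniform
# sample should be close to 2s for large N.
import numpy as np

def R2(x, s):
    """Pair correlation function R_2(s, N) of the points x[0], ..., x[N-1]."""
    N = len(x)
    d = np.abs(x[:, None] - x[None, :])
    dist = np.minimum(d, 1.0 - d)                  # ||x_m - x_n||: distance mod 1
    count = np.count_nonzero(dist <= s / N) - N    # subtract the diagonal m = n
    return count / N

rng = np.random.default_rng(0)
x = rng.random(3000)
for s in (0.5, 1.0, 2.0):
    print(s, R2(x, s))    # each value should be close to 2s
```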

The notion of pair correlations of sequences has been studied in various contexts. Its natural connection with mathematical physics is exhibited by the famous Berry–Tabor conjecture [3]. A series of more recent papers have studied pair correlations from a purely theoretical point of view. To mention an example, it was an open problem within the theoretical setup to determine the relation of Poissonian pair correlations with uniform distribution. It has been recently shown that any sequence \((x_n)_{n \in \mathbb {N}}\) with PPC is also uniformly distributed mod 1,  that is, we have

$$\begin{aligned} \lim _{N\rightarrow \infty } \frac{1}{N}\# \left\{i\leqslant N : x_i \in [a,b] \right\} = b-a \quad \text { for all }\, 0\leqslant a < b \leqslant 1. \end{aligned}$$

This result was established by Aistleitner, Larcher and Lewko [2] and independently by Grepstad and Larcher [8], while a subsequent proof was also given by Steinerberger [24] and, in a much more general setup, by Marklof [16].

In the present paper, we focus our attention on the notion of weak Poissonian pair correlations with parameter \(0\leqslant \beta \leqslant 1\). As we shall soon explain, this notion forms a weaker variant of the classical property of PPC, a fact which explains the term “weak”. Our main purpose is to compare it with the standard property of Poissonian pair correlations and demonstrate several differences. To the best of our knowledge, the notion of weak Poissonian pair correlations was first introduced by Nair and Pollicott in [18].

Let \(N \geqslant 1, 0 \leqslant \beta \leqslant 1\) and \(s>0\). We define the pair correlation function with parameter \(\beta\) of a sequence \((x_n)_{n \in \mathbb {N}}\subseteq [0,1]\) to be

$$\begin{aligned} R_2(\beta ;s,N) = \frac{1}{N^{2 - \beta }} \#\Big \{m,n \leqslant N, m\ne n : \Vert x_m-x_n\Vert \leqslant \frac{s}{N^{\beta }} \Big \}. \end{aligned}$$

From a statistical perspective, the variants of the pair correlation function (with a different scaling factor) which are considered in the present paper fall into the framework of Ripley’s K-function [19].

We say that the sequence \((x_n)_{n \in \mathbb {N}}\) has weak Poissonian pair correlations with parameter \(0< \beta \leqslant 1\) (also called Poissonian \(\beta\)-pair correlations, or, for abbreviation, \(\beta\)-PPC) if

$$\begin{aligned} \lim _{N \rightarrow \infty } R_2(\beta ;s,N) = 2s \qquad \text { for all }\, s> 0. \end{aligned}$$
(1)

For the value \(\beta =0\), we say that the sequence \((x_n)_{n \in \mathbb {N}}\) has 0-PPC if

$$\begin{aligned} \lim _{N \rightarrow \infty } R_2(\beta ;s,N) = 2s \qquad \text { for all }\, 0 < s \leqslant \frac{1}{2}\cdot \end{aligned}$$

The reason why in the definition of 0-PPC the values of the scale s are restricted to the range \(0< s\leqslant \frac{1}{2}\) is quite simple: when \(\beta = 0\) we trivially have \(R_2(0;s,N) \leqslant 1\) and thus (1) cannot hold for \(s > \frac{1}{2}\).
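The same sketch adapts to the parameter \(\beta\) (again an illustration under the i.i.d. assumption above, with hypothetical parameter choices); for \(\beta =0\) the trivial bound \(R_2(0;s,N)\leqslant 1\) can be observed directly.

```python
# Illustrative sketch of the weak pair correlation function R_2(beta; s, N).
import numpy as np

def R2_beta(x, beta, s):
    N = len(x)
    d = np.abs(x[:, None] - x[None, :])
    dist = np.minimum(d, 1.0 - d)                  # ||x_m - x_n||
    count = np.count_nonzero(dist <= s / N**beta) - N
    return count / N**(2 - beta)

rng = np.random.default_rng(0)
x = rng.random(3000)
print(R2_beta(x, 0.5, 1.0))    # close to 2s = 2 for a uniform sample
print(R2_beta(x, 0.0, 0.4))    # close to 2s = 0.8; never exceeds 1 when beta = 0
```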

It is clear that for \(\beta =1\) the notion of \(\beta\)-PPC coincides with the classical Poissonian pair correlations property. For \(\beta =0\), the property of 0-PPC appears to be known to be equivalent to uniform distribution, see e.g. [24, Section 2.2]. Since we have not managed to find a rigorous proof of this fact in the literature, we provide one here.

Theorem 1

Let \((x_n)_{n \in \mathbb {N}} \subseteq [0,1]\) be a sequence. The following are equivalent.

  1. (i)

    The sequence \((x_n)_{n \in \mathbb {N}}\) is uniformly distributed mod 1.

  2. (ii)

    The sequence \((x_n)_{n \in \mathbb {N}}\) has 0-PPC.

We now turn our attention to a rather striking aspect of weak Poissonian correlations which, in our opinion, constitutes the most remarkable difference between \(\beta\)-PPC on the one hand and the classical property of PPC on the other. We show that in order for a given sequence \((x_n)_{n \in \mathbb {N}}\) to have \(\beta\)-PPC for some \(0\leqslant \beta <1\), it is sufficient that the defining condition \(\lim \limits _{N \rightarrow \infty } R_2(\beta ;s,N) = 2s\) holds for all values of s that lie within an interval of the form \((0,s_0),\) where \(s_0>0\) can be chosen to be arbitrarily small.

Theorem 2

Let \(0 \leqslant \beta < 1\). Assume there exists some constant \(s_0>0\) such that

$$\begin{aligned} \lim _{N \rightarrow \infty }R_2(\beta ;s,N) = 2s \qquad \text { for all } s < s_0.\end{aligned}$$

Then the sequence \((x_n)_{n \in \mathbb {N}}\) has \(\beta\)-PPC.

The hint for this surprising fact came from the proof of Theorem 1. As the reader will be able to see, in order to deduce uniform distribution from the hypothesis of 0-PPC in the proof of Theorem 1, we only employ the fact that \(R_2(0;s,N)\rightarrow 2s\) for all values of \(s>0\) that are sufficiently small.

As already alluded to, the assumption that \(\beta\) is strictly less than 1 in Theorem 2 turns out to be essential. In other words, it is not possible to extend Theorem 2 to the setup of PPC. We actually prove a rather stronger statement: for any choice of the number \(S>0\), we can find a sequence \((x_n)_{n \in \mathbb {N}}\) with \(R_2(s,N)\rightarrow 2s\) for all \(s<S\) which is not even uniformly distributed, whence it does not have PPC.

Theorem 3

For every \(S > 0\), there exists a sequence \((x_n)_{n \in \mathbb {N}}\) such that

$$\begin{aligned} \lim _{N\rightarrow \infty } R_2(s,N) = 2s \qquad \text { for all } s<S \end{aligned}$$

but the sequence \((x_n)_{n \in \mathbb {N}}\) is not uniformly distributed, and hence does not have PPC.

The statement of Theorem 3 is in agreement with a phenomenon that enthusiasts of the theory of Poissonian correlations might have observed: all existing proofs of the fact that PPC implies uniform distribution [1, 8, 24] at some point use the fact that the convergence \(R_2(s,N) \rightarrow 2s\) holds for arbitrarily large values of \(s>0.\)

In a remark after the proof of Theorem 1, we give a short, elementary proof of the fact that sequences with \(R_2(s,N)\rightarrow 2s\) for all \(s>s_0\) are uniformly distributed modulo 1. This assumption can actually be relaxed: in order for a sequence to be uniformly distributed mod 1, it suffices that \(R_2(s,N)\rightarrow 2s\) holds for all positive integers \(s\in \mathbb {N}\). This was proved in [11] in a multidimensional setup, and later in [24], where it was further pointed out that a sequence is equidistributed as long as \(R_2(s,N)\rightarrow 2s\) holds for all s in a discrete set satisfying some “maximum gap” condition. The fact that sequences with \(R_2(s,N)\rightarrow 2s\) for all \(s>s_0\) are uniformly distributed follows directly from this observation in [24], but we nevertheless decided to include one more proof because of its simplicity and its underlying connection with the arguments in [1].

Concerning the relation of weak Poissonian pair correlations with uniform distribution, Steinerberger [23, 24] proved that if the sequence \((x_n)_{n \in \mathbb {N}}\) has \(\beta\)-PPC for some \(0<\beta \leqslant 1\) then it is uniformly distributed. This showed that the property of \(\beta\)-PPC is stronger than 0-PPC for any \(0<\beta \leqslant 1,\) extending the already known fact that 1-PPC is a stronger property than 0-PPC (uniform distribution).

Furthermore, Steinerberger refers in [24] to \(\beta\)-PPC (for a given \(0< \beta < 1)\) as a property that interpolates between uniform distribution and PPC. The following result describes this phenomenon in a more precise fashion: whenever \(0\leqslant \alpha < \beta \leqslant 1,\) the property of \(\beta\)-PPC is stronger than \(\alpha\)-PPC.

Theorem 4

Let \(0 \leqslant \alpha < \beta \leqslant 1.\) If a sequence \((x_n)_{n \in \mathbb {N}} \subseteq [0,1]\) has weak Poissonian pair correlations with parameter \(\beta\), then it also has weak Poissonian pair correlations with parameter \(\alpha\).

In turn, Theorem 4 leads naturally to the following question: are the notions of \(\beta\)-PPC for the various values of \(0<\beta <1\) essentially different or do they actually coincide, possibly for certain values of \(\beta\)? We prove that the former is the case.

Theorem 5

For any \(0< \beta < 1\), there exists a sequence \((y_n)_{n\in {\mathbb {N}}}\) that has Poissonian \(\alpha\)-pair correlations for any \(0\leqslant \alpha < \beta\) but does not have Poissonian \(\alpha\)-pair correlations for any \(\beta \leqslant \alpha \leqslant 1\).

Therefore for the different values of the parameter \(0\leqslant \beta \leqslant 1\) the property of \(\beta \text {-PPC}\) can be seen as covering a spectrum ranging from PPC to uniform distribution, with \(\beta _1\)-PPC being genuinely stronger than \(\beta _2\)-PPC whenever \(\beta _1 > \beta _2.\) We note that an analogue of Theorem 5 is already known for the value \(\beta =1\): the van der Corput sequence has \(\alpha\)-PPC for any \(\alpha <1\) but does not have PPC, see e.g. [27]. The same paper [27] also contains an alternative proof of Theorem 4; in our opinion, though, this proof contains an oversight, since the range of s is allowed to vary with N.

1.2 A metric consideration

Another difference of the weak Poissonian correlations compared to the standard PPC occurs in the metric setup. From the metric point of view, an increasing sequence \((a_n)_{n\in \mathbb {N}}\) of positive integers is considered fixed, and we examine the Lebesgue measure of the set of those \(x\in [0,1]\) for which the sequence \((a_n x)_{n\in \mathbb {N}}\) has Poissonian pair correlations. In this direction, several results have been proved for specific choices of the sequence \((a_n)_{n\in \mathbb {N}}\). To name a few examples, we mention that for any exponent \(k\geqslant 2\) the sequence \((n^k x)_{n\in \mathbb {N}}\) has Poissonian pair correlations for almost all \(x\in [0,1]\), see [10, 20]. On the other hand, writing \(p_n\) for the n-th prime number, the sequence \((p_n x)_{n\in \mathbb {N}}\) does not have Poissonian pair correlations for almost all \(x\in [0,1]\), see [26]. For more such results we refer to [2, 21, 22].

A fundamental question in the theory of metric Poissonian pair correlations is whether a zero-one law holds in the setup described above. That is, given an increasing sequence \((a_n)_{n\in \mathbb {N}}\subseteq {\mathbb {N}}\), does the set of \(x\in [0,1]\) such that \((a_n x)_{n\in \mathbb {N}}\) has PPC have Lebesgue measure either 0 or 1? Although all results so far suggest that the answer is positive, this question still remains unanswered.

We now briefly examine \(\beta\)-pair correlations from the metric point of view. Since 0-PPC is equivalent to uniform distribution, a zero-one law holds for 0-PPC in a trivial sense: for any choice of the sequence \((a_n)_{n\in \mathbb {N}}\), the sequence \((a_n x)_{n\in \mathbb {N}}\) is uniformly distributed, and hence has 0-PPC, for almost all x (see [12, Theorem 4.1]). We show that this is also the case with weak pair correlations for all parameters \(0<\beta <1.\)

In view of the multiple examples of sequences \((a_n)_{n\in \mathbb {N}}\subseteq {\mathbb {N}}\) for which \((a_n x)_{n\in \mathbb {N}}\) fails to have PPC almost surely (see references above), the following theorem exhibits another difference between the properties of weak Poissonian correlations and the standard notion of PPC.

Theorem 6

Let \((a_n)_{n\in {\mathbb {N}}}\) be an increasing sequence of positive integers and \(0< \beta <1.\) Then for almost all \(x\in [0,1]\) the sequence \((a_n x)_{n\in {\mathbb {N}}}\) has Poissonian \(\beta\)-pair correlations.

1.3 Weak correlations of higher orders

As a last part of this paper, we extend the previous discussion on weaker variants of Poissonian correlations to orders greater than 2. We shall study the k-th order correlations of sequences rescaled by a factor of \(N^\beta ,\) which for convenience we name \((k,\beta )\)-correlations. Given any integer \(k \geqslant 2\), a parameter \(0\leqslant \beta \leqslant 1\) and a closed rectangle

$$\begin{aligned} \mathcal {R} = [a_1,b_1]\times [a_2,b_2] \times \ldots \times [a_{k-1},b_{k-1}] \subseteq \mathbb {R}^{k-1},\end{aligned}$$
(2)

we define the \((k,\beta )\)-correlation function of a sequence \((x_n)_{n \in \mathbb {N}}\) as

$$\begin{aligned} R_{k}(\beta ;\mathcal {R},N) = \frac{1}{N^{k - (k-1)\beta }} \,\#\Big \{ i_1,\ldots ,i_k \leqslant N \text { distinct} : N^{\beta } \big ( (\!(x_{i_1}- x_{i_2})\!),\ldots , (\!(x_{i_1} - x_{i_{k}})\!) \big ) \in \mathcal {R} \Big \}. \end{aligned}$$

(Here and in what follows, we say that the indices \(i_1,\ldots ,i_k\) are distinct when \(i_m\ne i_n\) for \(m\ne n.\)) As one might expect, when \(0< \beta \leqslant 1\) we define a sequence \((x_n)_{n \in \mathbb {N}}\) to have Poissonian \((k,\beta )\)-correlations if for all rectangles \(\mathcal {R}\subseteq \mathbb {R}^{k-1}\) as in (2) it holds that

$$\begin{aligned} \lim _{N \rightarrow \infty } R_{k}(\beta ; \mathcal {R},N) = \lambda (\mathcal {R}), \end{aligned}$$

where \(\lambda\) denotes the \((k-1)\)-dimensional Lebesgue measure. Evidently, when \(\beta =1\) in the preceding definition, one obtains the usual definition of Poissonian k-th order correlations [13].

For the specific value \(\beta =0\), we say that a sequence has Poissonian (k, 0)-correlations if \(\lim _{N\rightarrow \infty }R_k(0;\mathcal {R},N) =\lambda (\mathcal {R})\) holds for any rectangle \(\mathcal {R} \subseteq [-\frac{1}{2}, \frac{1}{2}]^{k-1}.\)

We note that the previous definitions are generalizations of just one of the equivalent ways to define Poissonian k-th order correlations. We refer the reader to [9, Appendix A] for a relevant discussion, which can be easily adapted to the context of weak correlations.
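As an illustration of the definition (with the convention of the display above, where all differences are taken against \(x_{i_1}\)), the following brute-force sketch computes the \((3,\beta )\)-correlation function of a finite point set; the i.i.d. sample used in the demo is a hypothetical choice.

```python
# Illustrative brute-force sketch of R_3(beta; R, N) for R = [a1,b1] x [a2,b2].
import numpy as np

def signed(d):                    # the map ((.)) of Sect. 1.4, applied entrywise
    f = d % 1.0
    return np.where(f <= 0.5, f, f - 1.0)

def R3_beta(x, beta, rect):
    (a1, b1), (a2, b2) = rect
    N = len(x)
    s = N**beta * signed(x[:, None] - x[None, :])   # s[i, j] = N^beta ((x_i - x_j))
    count = 0
    for i in range(N):                              # i plays the role of i_1
        c1 = (a1 <= s[i]) & (s[i] <= b1); c1[i] = False
        c2 = (a2 <= s[i]) & (s[i] <= b2); c2[i] = False
        # ordered pairs (i_2, i_3), distinct and both different from i_1
        count += c1.sum() * c2.sum() - (c1 & c2).sum()
    return count / N**(3 - 2 * beta)

rng = np.random.default_rng(0)
x = rng.random(1500)
print(R3_beta(x, 0.5, ((-1.0, 1.0), (0.0, 0.5))))  # close to lambda(R) = 1.0
```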

As a first result in this section, we prove that for any order \(k\geqslant 2\), Poissonian (k, 0)-correlations are equivalent to the property of uniform distribution, thus generalising Theorem 1.

Theorem 7

Let \(k \geqslant 2\) be an integer. Then a sequence \((x_n)_{n \in \mathbb {N}} \subseteq [0,1]\) is uniformly distributed if and only if it has Poissonian (k, 0)-correlations.

As a corollary, Theorem 7 implies that Poissonian \((k+1,0)\)-correlations are equivalent to Poissonian (k, 0)-correlations for any \(k \geqslant 2\). This brings us to the main motivating point for studying weak correlations of higher orders.

It is an interesting open problem to determine whether the property of Poissonian correlations of some given order \(k\geqslant 3\) is a property stronger than Poissonian correlations of lower orders. In the opposite direction, Lutsko, Sourmelidis and Technau recently [15] gave explicit examples of sequences with Poissonian pair correlations, but without Poissonian triple correlations. Their argument can be easily modified to yield the existence of sequences with Poissonian pair correlations but without Poissonian k-th order correlations for \(k>3.\) In addition, the sequence \((\sqrt{n})_{\sqrt{n} \notin \mathbb {N}}\) is known to have Poissonian pair correlations [5] but does not have Poissonian correlations of k-th order for at least one value of \(k\geqslant 3\); if the correlations of all orders followed the Poisson model, the gap distribution of this sequence would be the Poisson distribution [13, Appendix]. However, this is not the case for \((\sqrt{n})_{\sqrt{n} \notin \mathbb {N}}\), as was proved by Elkies and McMullen in [6].

It still remains unknown whether the reverse implication holds, i.e. whether Poissonian \((k+1)\)-th order correlations imply Poissonian k-th order correlations. In the setup of weak correlations, it turns out that we are able to answer this question positively.

Theorem 8

Let \(k > 2\) and \(0 \leqslant \beta < 1\). If a sequence \((x_n)_{n \in \mathbb {N}}\) has Poissonian \((k,\beta )\)-correlations, then it also has Poissonian \((k-1,\beta )\)-correlations. In particular, Poissonian (\(k,\beta\))-correlations imply weak Poissonian pair correlations with any parameter \(\alpha \leqslant \beta\) and therefore, uniform distribution.

We end the introductory part of the paper with some directions for further research. Concerning the relation between the properties of Poissonian correlations of different orders, in view of Theorem 8 one might expect that any sequence with Poissonian \((k+1)\)-order correlations also has Poissonian correlations of order k. However, we are hesitant to conjecture that this is indeed the case. Attempting to adapt the proof of Theorem 8 to the case \(\beta =1,\) one can notice that the impasse arises from inequality (26) appearing therein. More precisely, when \(\beta =1\) the terms in (26) that involve correlation functions of orders \(2\leqslant m \leqslant k-1\) are not negligible when \(N\rightarrow \infty ,\) and this in turn does not allow for a characterisation of Poissonian correlations of order k in terms of the functions \(R_k^*\) introduced later in the paper.

Further, we are confident that Poissonian \((k,\beta )\)-correlations of any order \(k\geqslant 3\) form a property that can be detected at small scales; in other words, an analogue of Theorem 2 holds for weak correlations of higher orders. We believe that in order to prove this fact it would suffice to establish a condition equivalent to Poissonian \((k,\beta )\)-correlations in terms of the function \(F_\beta\) defined later in (3).

1.4 Notation

Given two functions \(f,g\!:\!(0,\infty )\rightarrow \mathbb {R},\) we shall write \(f(t) = \mathcal {O}(g(t))\), \(t\rightarrow \infty\) and \(f(t)= o(g(t)),\, t\rightarrow \infty\) when

$$\begin{aligned}\limsup _{t\rightarrow \infty } \frac{|f(t)|}{|g(t)|} < \infty \quad \text { or } \quad \lim _{t\rightarrow \infty } \frac{f(t)}{g(t)} =0 \end{aligned}$$

respectively. For a real number \(x\in {\mathbb {R}},\) we write \(\{x\}\) for the fractional part of x, \(\Vert x\Vert =\min \{|x-k|: k\in \mathbb {Z}\}\) for the distance of x from its nearest integer, and

$$\begin{aligned} (\!(x)\!)={\left\{ \begin{array}{ll} \{x\}, &{}\text { if } 0\leqslant \{x\} \leqslant \tfrac{1}{2} \\ \{x\}-1, &{}\text { if } \tfrac{1}{2}< \{x\} < 1 \end{array}\right. } \end{aligned}$$

for the signed distance of x from the origin modulo 1. Further, we use the symbol \(\{\,\cdot \,\}^+\) for the function

$$\begin{aligned} \{x\}^+ = {\left\{ \begin{array}{ll}x, &{}\text { if } x\geqslant 0 \\ 0, &{}\text { if } x<0 . \end{array}\right. } \end{aligned}$$

We use the standard notation \(e(x)=e^{2 \pi ix}\) and also write \(B(x_0,r)= \{x\in [0,1] : \Vert x- x_0\Vert \leqslant r\}\) for the closed interval with center \(x_0\) and length 2r modulo 1. Given a positive integer \(m\geqslant 1\) we write \([m]=\{1,\ldots , m\}\) for the set of positive integers which are less than or equal to m.
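For concreteness, the following plain-Python helpers (an illustration, not part of the results) implement the maps just defined.

```python
# Illustrative implementations of the notation of Sect. 1.4.
def frac(x):              # {x}: the fractional part of x
    return x % 1.0

def signed_dist(x):       # ((x)): signed distance of x from 0 modulo 1
    f = x % 1.0
    return f if f <= 0.5 else f - 1.0

def dist_nearest_int(x):  # ||x||: distance of x to the nearest integer
    return abs(signed_dist(x))

def pos_part(x):          # {x}^+: the positive part of x
    return x if x >= 0 else 0.0
```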

2 Preliminary results

For any value of \(0\leqslant \beta \leqslant 1\) we define the function

$$\begin{aligned} F_{\beta }(t,s,N) = \frac{1}{N^{1-\beta } }\#\Big \{1 \leqslant n \leqslant N: \left| \left| x_n - t\right| \right| \leqslant \frac{s}{2N^{\beta }}\Big \}, \quad 0\leqslant t \leqslant 1. \end{aligned}$$
(3)

Heuristically, \(F_\beta\) can be thought of as counting (after normalisation by \(N^{1-\beta }\)) the number of points among the first N terms of \((x_n)_{n \in \mathbb {N}}\) that lie within an interval of length \(s/N^{\beta }\) whose center \(0\leqslant t\leqslant 1\) is uniformly distributed in [0, 1]. We also define the integral

$$\begin{aligned} I_\beta (s,N) = \int _{0}^1 F_\beta (t,s,N)^2 \,\textrm{d}t. \end{aligned}$$
(4)
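Anticipating the computation carried out in the proof of Lemma 9 below, the integral \(I_\beta (s,N)\) admits the closed form \(N^{2\beta -2}\sum _{m,n\leqslant N}\{s/N^{\beta } - \Vert x_m - x_n\Vert \}^+\) once \(s/N^{\beta }\leqslant 1\); the sketch below (illustrative, with a hypothetical i.i.d. sample) evaluates it this way.

```python
# Illustrative sketch evaluating I_beta(s, N) via the overlap formula
#   lambda(B(x_m, s/2N^b) cap B(x_n, s/2N^b)) = { s/N^b - ||x_m - x_n|| }^+ .
import numpy as np

def I_beta(x, beta, s):
    N = len(x)
    h = s / N**beta
    d = np.abs(x[:, None] - x[None, :])
    dist = np.minimum(d, 1.0 - d)            # ||x_m - x_n||
    overlaps = np.maximum(h - dist, 0.0)     # diagonal terms contribute h each
    return overlaps.sum() / N**(2 - 2 * beta)

rng = np.random.default_rng(0)
x = rng.random(2000)
print(I_beta(x, 0.5, 1.0))   # uniform sample: close to s^2 = 1      (beta < 1)
print(I_beta(x, 1.0, 1.0))   # uniform sample: close to s^2 + s = 2  (beta = 1)
```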

The importance of \(I_\beta (s,N)\) in the proofs of our main results is clear from the following lemma, which is an analogue of [9, Proposition 9] in the context of weak correlations.

Lemma 9

Let \((x_n)_{n \in \mathbb {N}} \subseteq [0,1]\) be a sequence, \(0\leqslant \beta \leqslant 1\) and \(s_0>0\) be a constant. The following are equivalent.

  1. (i)

    The \(\beta\)-pair correlation function satisfies

    $$\begin{aligned} \lim _{N\rightarrow \infty } R_2(\beta ;s,N) = 2s \qquad \text { for all } s<s_0. \end{aligned}$$
  2. (ii)

    We have

    $$\begin{aligned} \lim _{N \rightarrow \infty } \frac{1}{s}\int _{0}^s \!R_2(\beta ;\sigma ,N) \,\textrm{d}\sigma = s \qquad \text { for all } s < s_0. \end{aligned}$$
  3. (iii)

    The integral \(I_\beta (s,N)\) defined in (4) satisfies

    $$\begin{aligned}\lim _{N \rightarrow \infty } I_{\beta }(s,N) = {\left\{ \begin{array}{ll} s^2, &{}\text { if } \beta< 1\\ s^2 +s, &{}\text { if } \beta = 1\end{array}\right. } \qquad \text { for all } s<s_0.\end{aligned}$$

Proof

We first prove the equivalence of (i) with (ii). Assuming (i) is true, let \(s<s_0.\) Then by the monotonicity of \(R_2(\beta ;\sigma ,N)\) as a function of \(\sigma\), for any integer \(M\geqslant 1\) and for any \(0\leqslant j \leqslant M-1\) we get

$$\begin{aligned} \frac{s}{M}R_2\Big (\beta ; \frac{js}{M}, N\Big ) \leqslant \int _{[\frac{js}{M},\frac{(j+1)s}{M} ]}R_2\big (\beta ;\sigma , N\big ) \,\textrm{d}\sigma \leqslant \frac{s}{M}R_2\Big ( \beta ; \frac{(j+1)s}{M}, N\Big ). \end{aligned}$$

Summing over all \(0\leqslant j \leqslant M-1\) we get

$$\begin{aligned} \frac{s}{M}\sum _{0\leqslant j \leqslant M-1}\! R_2\Big (\beta ; \frac{js}{M}, N\Big ) \leqslant \int _{0}^s R_2(\beta ;\sigma , N ) \,\textrm{d}\sigma \leqslant \frac{s}{M}\sum _{1\leqslant j \leqslant M} R_2\Big ( \beta ; \frac{js}{M}, N\Big ). \end{aligned}$$

Letting \(N\rightarrow \infty\) and using the hypothesis in (i), the leftmost and rightmost sums converge to \(\frac{s}{M}\sum _{0\leqslant j \leqslant M-1} \frac{2js}{M}\) and \(\frac{s}{M}\sum _{1\leqslant j \leqslant M} \frac{2js}{M}\) respectively; these are Riemann sums of \(\int _0^s 2\sigma \,\textrm{d}\sigma = s^2,\) so letting \(M\rightarrow \infty\) we deduce that (ii) holds.

Conversely, when (ii) is true, using again the monotonicity of \(R_2(\beta ;\sigma ,N)\) in \(\sigma\) gives that

$$\begin{aligned}\frac{1}{\varepsilon }\int _{s-\varepsilon }^s R_2(\beta ;\sigma ,N)\,\textrm{d}\sigma \leqslant R_2(\beta ; s,N) \leqslant \frac{1}{\varepsilon }\int _{s}^{s+\varepsilon } R_2(\beta ;\sigma ,N)\,\textrm{d}\sigma \end{aligned}$$

for \(s<s_0\) fixed and for all \(\varepsilon >0\) sufficiently small. Letting first \(N\rightarrow \infty\) and then \(\varepsilon \rightarrow 0\) we conclude that \(\lim \limits _{N\rightarrow \infty }R_2(\beta ;s,N)=2s.\) We now proceed to prove that (ii) is equivalent to (iii). Observe that

$$\begin{aligned} F_\beta (t,s,N)^2&= \frac{1}{N^{2 - 2\beta }}\sum _{m,n \leqslant N} \mathbbm {1}_{B\left( x_m,\frac{s}{2N^{\beta }}\right) \cap B\left( x_n,\frac{s}{2N^{\beta }}\right) }(t) . \end{aligned}$$

Denoting by \(\lambda\) the 1-dimensional Lebesgue measure, we can write

$$\begin{aligned} I_\beta (s,N)&= \frac{1}{N^{2 - 2\beta }}\sum _{m,n \leqslant N} \lambda \Big (B\Big (x_m,\frac{s}{2N^{\beta }}\Big )\cap B\Big (x_n,\frac{s}{2N^{\beta }}\Big )\Big ) \\ {}&= \frac{1}{N^{2 - 2\beta }}\sum _{\begin{array}{c} m,n \leqslant N\\ m \ne n \end{array}} \lambda \Big (B\Big (x_m,\frac{s}{2N^{\beta }}\Big )\cap B\Big (x_n,\frac{s}{2N^{\beta }}\Big )\Big ) + \frac{1}{N^{2 - 2\beta }}\cdot \frac{Ns}{N^{\beta }} \\&= \frac{1}{N^{2 - 2\beta }}\sum _{\begin{array}{c} m,n \leqslant N \\ m \ne n \end{array}} \left\{ \frac{s}{N^{\beta }} - \left| \left| x_n-x_m\right| \right| \right\} ^+ + \frac{s}{N^{1-\beta }} \\ {}&= \frac{s}{N^{2-\beta }}\sum _{\begin{array}{c} m,n \leqslant N\\ m \ne n \end{array}} \left\{ 1 - \frac{\left| \left| x_n-x_m\right| \right| }{s/N^{\beta }}\right\} ^+ + \frac{s}{N^{1-\beta }} \, \cdot \end{aligned}$$

On the other hand,

$$\begin{aligned} \frac{1}{s}\int _{0}^s R_2(\beta ;\sigma ,N) \,\textrm{d}\sigma&= \frac{1}{s}\int _{0}^s \frac{1}{N^{2 - \beta }}\sum _{\begin{array}{c} m,n \leqslant N\\ m \ne n \end{array}} \mathbbm {1}_{B(0,\frac{\sigma }{N^{\beta }})}\left( x_m-x_n\right) \,\textrm{d}\sigma \\&= \frac{1}{sN^{2-\beta }}\sum _{\begin{array}{c} m,n\leqslant N\\ m \ne n \end{array}} \int _{0}^s \mathbbm {1}_{[\, \left| \left| x_m-x_n\right| \right| N^{\beta },\infty )}(\sigma ) \,\textrm{d}\sigma \\ {}&= \frac{1}{sN^{2-\beta }}\sum _{\begin{array}{c} m,n \leqslant N\\ m \ne n \end{array}} \left\{ s - \frac{\left| \left| x_n-x_m\right| \right| }{1/N^{\beta }}\right\} ^+\\ {}&= \frac{1}{N^{2-\beta }}\sum _{\begin{array}{c} m,n \leqslant N\\ m \ne n \end{array}} \left\{ 1 - \frac{\left| \left| x_n-x_m\right| \right| }{s/N^{\beta }}\right\} ^+, \end{aligned}$$

Comparing the two expressions, we see that \(I_\beta (s,N) = \int _{0}^s R_2(\beta ;\sigma ,N) \,\textrm{d}\sigma + \dfrac{s}{N^{1-\beta }}\); since the term \(s/N^{1-\beta }\) tends to 0 as \(N\rightarrow \infty\) when \(\beta <1\) and is identically equal to s when \(\beta =1\), the equivalence of (ii) and (iii) follows. \(\square\)

Finally, we mention the following lemma on the size of the integral \(I_\beta (s,N)\) defined in (4) that will be used in the proofs of many of the main results.

Lemma 10

Let \(0\leqslant \beta \leqslant 1\). For any \(s>0\) and \(N\geqslant 1\), we have \(I_\beta (s,N) \geqslant s^2.\)

Proof

By the Cauchy–Schwarz inequality,

$$\begin{aligned} I_\beta (s,N) = \int _{0}^1 F_\beta (t,s,N)^2 \,\textrm{d}t \geqslant \left( \int _{0}^1 \!\!F_\beta (t,s,N)\,\textrm{d}t \right) ^2 = s^2. \end{aligned}$$

\(\square\)

3 Proof of Theorem 1

We first show that uniform distribution implies 0-PPC. If \((x_n)_{n \in \mathbb {N}}\) is uniformly distributed, then for every fixed \(0\leqslant t\leqslant 1\) and every \(0<s\leqslant \frac{1}{2}\) the function \(F_0(t,s,N)\) defined in (3) satisfies \(F_0(t,s,N) \rightarrow \lambda (B(t,\frac{s}{2})) = s\) as \(N\rightarrow \infty .\) Since \(0\leqslant F_0(t,s,N)\leqslant 1,\) the dominated convergence theorem gives \(I_0(s,N)\rightarrow s^2,\) and Lemma 9 implies that \((x_n)_{n \in \mathbb {N}}\) has 0-PPC (the endpoint \(s=\frac{1}{2}\) being trivial, since \(\Vert x\Vert \leqslant \frac{1}{2}\) always).

We now prove that 0-PPC implies uniform distribution, arguing by contradiction: suppose that \((x_n)_{n \in \mathbb {N}}\) has 0-PPC but is not uniformly distributed. Then there exist numbers \(0<a<1\) and \(b\ne a,\) together with an increasing sequence of integers \((N_k)_{k\in \mathbb {N}},\) such that

$$\begin{aligned} \lim _{k\rightarrow \infty } \frac{1}{N_k}\#\big \{n \leqslant N_k: x_n \in [0,a]\big \} = b. \end{aligned}$$

We now establish a relation between the correlation function \(R_2(0;s,N_k)\) and the function \(F_0(t,s,N_k)\) defined in (3). A counting argument gives

$$\begin{aligned} \frac{1}{N_k^2}&\#\Big \{1 \leqslant n \ne m \leqslant N_k: \Vert x_n-x_m \Vert \leqslant s\Big \} \nonumber \\&\quad \geqslant \frac{1}{N_k^2}\#\Big \{n\leqslant N_k: \Vert x_n-t\Vert \leqslant \frac{s}{2} \Big \}^2 - \frac{1}{N_k^2}\#\Big \{n\leqslant N_k: \Vert x_n-t\Vert \leqslant \frac{s}{2}\Big \} \\&\quad \geqslant \Big (\frac{1}{N_k} \#\Big \{n\leqslant N_k: \Vert x_n-t\Vert \leqslant \frac{s}{2}\Big \}\Big )^2 - \frac{1}{N_k},\quad \text { for any } 0\leqslant t \leqslant 1. \nonumber \end{aligned}$$
(5)

By the assumption that \((x_n)_{n \in \mathbb {N}}\) has 0-PPC, letting \(k\rightarrow \infty\) in (5) we get

$$\begin{aligned} 2 s&= \limsup _{k \rightarrow \infty } \frac{1}{N_k^2} \#\Big \{1 \leqslant n \ne m \leqslant N_k: \Vert x_n-x_m \Vert \leqslant s \Big \} \\ {}&\geqslant \limsup _{k \rightarrow \infty } \Big (\frac{1}{N_k} \#\Big \{n\leqslant N_k: \Vert x_n-t\Vert \leqslant \frac{s}{2}\Big \}\Big )^2 \end{aligned}$$

for all \(0\leqslant t \leqslant 1\) and all \(0<s\leqslant \frac{1}{2}\). Thus, if we fix some \(\varepsilon >0\) and choose \(s>0\) small enough that \(\sqrt{2s} < \varepsilon /3,\) there exists \(K\in \mathbb {N}\) such that for all \(k \geqslant K,\)

$$\begin{aligned}\frac{1}{N_k}\#\Big \{n\leqslant N_k: \Vert x_n\Vert \leqslant \frac{s}{2}\Big \}\, < \, \frac{\varepsilon }{3} \end{aligned}$$

and

$$\begin{aligned}\frac{1}{N_k}\#\Big \{n\leqslant N_k: \Vert x_n-a\Vert \leqslant \frac{s}{2}\Big \}\, < \, \frac{\varepsilon }{3} \cdot \end{aligned}$$

In view of the choice of the subsequence \((N_k)_{k \in \mathbb {N}},\) we can additionally assume that for all \(k\geqslant K\) we have

$$\begin{aligned}\frac{1}{N_k}\#\big \{n \leqslant N_k: x_n \in [0,a]\big \} > b -\frac{\varepsilon }{3} \cdot \end{aligned}$$

Observe that the function \(F_0(t,s,N)\) defined in (3) satisfies

$$\begin{aligned} \int _0^a F_0(t,s,N_k)\,\textrm{d}t= & {} \frac{1}{N_k}\sum _{n\leqslant N_k}\int _0^1 \mathbbm {1}_{B\left( x_n,\frac{s}{2}\right) }(t)\mathbbm {1}_{[0,a]}(t)\,\textrm{d}t \\= & {} \frac{1}{N_k}\sum _{n\leqslant N_k}\lambda \Big (B(x_n, \tfrac{s}{2}) \cap [0,a] \Big ) \\\geqslant & {} \frac{1}{N_k}\hspace{-2mm}\sum _{\begin{array}{c} n\leqslant N_k\\ \frac{s}{2}\leqslant x_n \leqslant a-\frac{s}{2} \end{array}}\hspace{-3mm} \lambda \Big (B\big (x_n, \tfrac{s}{2}\big ) \Big ) \\= & {} \frac{s}{N_k} \#\Big \{n\leqslant N_k: \frac{s}{2}\leqslant x_n \leqslant a- \frac{s}{2} \Big \}. \end{aligned}$$

Thus for all \(k \geqslant K\),

$$\begin{aligned}\frac{1}{s}\int _0^a F_0(t,s,N_k)\,\textrm{d}t&\geqslant \frac{1}{N_k} \#\Big \{n\leqslant N_k: \frac{s}{2}\leqslant x_n \leqslant a- \frac{s}{2} \Big \} \\ {}&\geqslant \frac{1}{N_k}\#\{n \leqslant N_k: x_n \in [0,a]\} \\ {}&\hspace{4mm} - \frac{1}{N_k}\#\Big \{n\leqslant N_k: \Vert x_n\Vert \leqslant \frac{s}{2}\Big \} - \frac{1}{N_k}\#\Big \{n\leqslant N_k: \Vert x_n-a\Vert \leqslant \frac{s}{2}\Big \} \\ {}&\geqslant b- \varepsilon \end{aligned}$$

and similarly, we can show that

$$\begin{aligned}\frac{1}{s}\int _a^1 F_0(t,s,N_k)\,\textrm{d}t \geqslant 1 - b - \varepsilon . \end{aligned}$$

By applying the Cauchy-Schwarz inequality, we obtain

$$\begin{aligned} \int _0^1 F_0(t,s,N_k)^2 \,\textrm{d}t= & {} \int _0^a F_0(t,s,N_k)^2\,\textrm{d}t + \int _a^1 F_0(t,s,N_k)^2 \,\textrm{d}t \\\geqslant & {} \frac{1}{a}\left( \int _0^a F_0(t,s,N_k)\,\textrm{d}t\right) ^2 + \frac{1}{1-a}\left( \int _a^1 F_0(t,s,N_k)\,\textrm{d}t\right) ^2 \\\geqslant & {} \frac{s^2}{a}(b-\varepsilon )^2 + \frac{s^2}{1-a}(1-b-\varepsilon )^2. \end{aligned}$$

Letting \(k\rightarrow \infty ,\) the left hand side converges to \(s^2\) by the assumption of 0-PPC (this follows by Lemma 9), therefore

$$\begin{aligned}1 \geqslant \frac{(b-\varepsilon )^2}{a} + \frac{(1-b-\varepsilon )^2}{1-a} \cdot \end{aligned}$$

Since \(b\ne a,\) the Cauchy–Schwarz inequality gives \(\frac{b^2}{a} + \frac{(1-b)^2}{1-a} > 1,\) and so this is a contradiction for \(\varepsilon\) sufficiently small.

Remark

The arguments of the second part of the previous proof, i.e. the proof of the fact that 0-PPC implies uniform distribution, can be used to deduce that \(\beta\)-PPC implies uniform distribution for any \(\beta <1.\) When \(\beta =1\), the same method can actually yield something more: any sequence for which

$$\begin{aligned}\lim _{N\rightarrow \infty }R_2(s,N) = 2s \qquad \text { for all } s\geqslant s_0 \end{aligned}$$

is uniformly distributed. Indeed, under this hypothesis a modification of the proof of Lemma 9 gives that for all \(s\geqslant s_0,\)

$$\begin{aligned}\limsup _{N \rightarrow \infty } \int _0^1 F_1(t,s,N)^2\, \textrm{d}t \leqslant s^2 + s + s_0^2.\end{aligned}$$

We stress that the term \(s_0^2\) in this bound does not depend on the value of s. Assuming such a sequence is not uniformly distributed, we obtain numbers a, b and a subsequence \((N_k)_{k\in \mathbb {N}}\) exactly as in the proof of Theorem 1, and thus for all k sufficiently large we get

$$\begin{aligned}\int _0^a F_1(t,s,N_k)\,\textrm{d}t \geqslant (b-\varepsilon )s\quad \text { and }\quad \int _a^1 F_1(t,s,N_k)\,\textrm{d}t \geqslant (1-b-\varepsilon )s. \end{aligned}$$

With an application of the Cauchy–Schwarz inequality this gives

$$\begin{aligned} \int _0^1 F_1(t,s,N_k)^2 \,\textrm{d}t \geqslant \frac{s^2}{a}(b-\varepsilon )^2 + \frac{s^2}{1-a}(1-b-\varepsilon )^2. \end{aligned}$$

Letting \(k\rightarrow \infty\) we obtain

$$\begin{aligned} \frac{s^2}{a}(b-\varepsilon )^2 + \frac{s^2}{1-a}(1-b-\varepsilon )^2 \leqslant s^2 + s + s_0^2 \quad \text {for all } s\geqslant s_0\end{aligned}$$

and if we choose \(s>0\) sufficiently large, we arrive at a contradiction. We note that this proof can be seen as a “continuous analogue” of the proof of Aistleitner, Lachmann and Pausinger [1] that avoids the need to use the positivity of the Fejér kernel.

4 Proof of Theorem 2

In this section we shall prove that whenever \(0 \leqslant \beta <1\) and

$$\begin{aligned} \lim _{N \rightarrow \infty }R_2(\beta ;s,N) = 2s \qquad \text { for all } s < s_0\end{aligned}$$
(6)

holds for some value of \(s_0>0,\) the sequence \((x_n)_{n \in \mathbb {N}}\) has \(\beta\)-PPC. In view of Lemmas 9 and 10, it suffices to show that for all \(s> 0\) we have

$$\begin{aligned}\limsup _{N \rightarrow \infty } I_{\beta }(s,N) \leqslant s^2.\end{aligned}$$

To do so, note that for any fixed \(s>0\), there exists an \(N_0= N_0(s,s_0)\) such that for any even integer \(K \geqslant N_0\) we have \(s/K < s_0\). Note that when N is sufficiently large, for any \(t\in [0,1]\) we have

$$\begin{aligned} B\Big (t, \frac{s}{2N^{\beta }}\Big ) \subseteq \bigcup _{|\ell | \leqslant \frac{K}{2} }\hspace{-1mm} B\Big (t + \frac{\ell s }{KN^\beta }, \frac{s}{2KN^\beta }\Big ) \end{aligned}$$

and therefore,

$$\begin{aligned} F_\beta (t,s,N) \leqslant \sum _{|\ell |\leqslant \frac{K}{2}} F_\beta \Big (t + \frac{ \ell s}{KN^{\beta }},\frac{s}{K},N\Big ).\end{aligned}$$

The Cauchy–Schwarz inequality gives

$$\begin{aligned} \int _{0}^{1} F_{\beta }(t,s,N)^2 \,\textrm{d}t&\leqslant \int _{0}^{1} \Big (\sum _{ |\ell |\leqslant \frac{K}{2} } F_\beta \Big (t + \frac{ \ell s }{KN^{\beta }},\frac{s}{K},N\Big )\Big )^2 \,\textrm{d}t \\ {}&\leqslant \int _{0}^{1} (K +1) \sum _{|\ell |\leqslant \frac{K}{2} }F_\beta \Big (t + \frac{ \ell s}{KN^{\beta }},\frac{s}{K},N\Big )^2 \, \textrm{d}t \\ {}&= (K +1)^2 \int _{0}^{1} F_{\beta }\Big (t,\frac{s}{K},N\Big )^2 \, \textrm{d}t. \end{aligned}$$

By the assumption of (6) and using Lemma 9,

$$\begin{aligned}\lim _{N \rightarrow \infty }\int _{0}^{1} F_\beta \Big (t,\frac{s}{K},N\Big )^2 \,\textrm{d}t = \frac{s^2}{K^2},\end{aligned}$$

so we obtain

$$\begin{aligned} \limsup _{N \rightarrow \infty } \int _{0}^{1} F_{\beta }(t,s,N)^2 \,\textrm{d}t \leqslant \frac{(K+1)^2}{K^2}s^2. \end{aligned}$$

Letting \(K \rightarrow \infty\), the result follows.

5 Proof of Theorem 3

We prove that given any \(S>0,\) we can find a sequence \((x_n)_{n \in \mathbb {N}} \subseteq [0,1]\) such that on the one hand \(R_2(s,N)\rightarrow 2s\) for all \(s<S\) but on the other hand, \((x_n)_{n \in \mathbb {N}}\) is not uniformly distributed in [0, 1]. During the course of the proof we shall deal with the correlation functions of different sequences at the same time, and thus for convenience we make explicit the dependence of the correlation function on the underlying sequence.

We begin with a brief description of the main heuristic idea of the proof. Given the value \(S>0\), we first want to define a sequence \((x_n)_{n \in \mathbb {N}}\) whose pair correlation function is asymptotically smaller than the value that corresponds to the Poisson model for all \(s<S.\) More specifically, we want to find a sequence such that \(R_2(s,N) \rightarrow 2cs\) for all \(s<\frac{1}{c}S\), where \(0<c<1\) is some constant. We then expect that if we contract the sequence by a factor c,  on the one hand the “contracted” sequence will have pair correlations asymptotically equal to 2s for all \(s<S,\) on the other hand it cannot be uniformly distributed modulo 1,  since it will be contained within the interval [0, c].
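Before making this precise, the heuristic can be checked numerically; in the sketch below (illustrative: the i.i.d. uniform sample and the contraction factor \(c = 3/4\) are hypothetical choices), contracting by c indeed turns \(R_2(s,N)\) into approximately \(R_2(s/c,N)\).

```python
# Illustrative numerical check of the contraction heuristic.
import numpy as np

def R2(x, s):
    N = len(x)
    d = np.abs(x[:, None] - x[None, :])
    dist = np.minimum(d, 1.0 - d)
    return (np.count_nonzero(dist <= s / N) - N) / N

rng = np.random.default_rng(0)
x = rng.random(3000)
c = 0.75
for s in (0.5, 1.0):
    print(R2(c * x, s), R2(x, s / c))   # the two values should nearly agree
```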

We thus need to take a careful look at how contracting a sequence affects its pair correlation function. Let \((x_n)_{n \in \mathbb {N}}\) be a sequence in [0, 1] and \(0< c < 1\) be an arbitrary constant. For any two distinct terms \(x_n,x_m \in [0,1]\), we have

$$\begin{aligned} \begin{aligned} \Vert c(x_n - x_m) \Vert \leqslant \frac{s}{N}\quad&\Leftrightarrow \quad \exists k \in \mathbb {Z}: |c(x_n - x_m) - k|\leqslant \frac{s}{N} \\ {}&\Leftrightarrow \quad \exists k \in \mathbb {Z}: \big |x_n - x_m - k/c\big |\leqslant \frac{s/c}{N} \cdot \end{aligned} \end{aligned}$$
(7)

We now observe that for all \(N\geqslant 1\) sufficiently large, the only candidate for k is 0: If \(k \ne 0\), then \(|k/c |\geqslant 1 + \delta\) for some fixed \(\delta > 0\). Now if \(N\geqslant 1\) is large enough such that \(\dfrac{s/c}{N} < \delta\), since \(x_n - x_m \in [-1,1]\) we see that \(|x_n - x_m - \frac{k}{c}|> \dfrac{s/c}{N}\) and the inequalities in (7) will have to fail. Hence,

$$\begin{aligned}\Vert c(x_n - x_m) \Vert \leqslant \frac{s}{N} \quad&\Leftrightarrow \quad |x_n - x_m|\leqslant \frac{s/c}{N} \\ {}&\Leftrightarrow \quad \Vert x_n - x_m \Vert \leqslant \frac{s/c}{N}\hspace{3mm} \text { and }\hspace{3mm} |x_n - x_m |< 1 - \frac{s/c}{N} \cdot \end{aligned}$$

Therefore writing \(R_2^{\mathfrak {X}}(s,N)\) and \(R_2^{c\mathfrak {X}}(s,N)\) for the correlation functions of the sequences \((x_n)_{n \in \mathbb {N}}\) and \((cx_n)_{n\in {\mathbb {N}}},\) respectively, we have

$$\begin{aligned} R_2^{c\mathfrak {X}}(s,N)&= \frac{1}{N}\#\Big \{n \ne m \leqslant N: \Vert x_n - x_m \Vert \leqslant \frac{s/c}{N}\, \, \& \, \, |x_n - x_m |< 1 - \frac{s/c}{N}\Big \} \\ {}&= \frac{1}{N}\#\Big \{ n\ne m \leqslant N: \Vert x_n - x_m \Vert \leqslant \frac{s/c}{N}\Big \} \\ {}&\quad - \frac{1}{N}\#\Big \{ n \ne m \leqslant N: \Vert x_n - x_m \Vert \leqslant \frac{s/c}{N} \, \, \& \, \, |x_n - x_m |\geqslant 1 - \frac{s/c}{N} \Big \} \\ {}&= R_2^{\mathfrak {X}}(s/c,N) - E^{\mathfrak {X}}(s/c,N), \end{aligned}$$

where \(E^{\mathfrak {X}}(s,N)\) is an error term, by definition equal to

$$\begin{aligned} E^{\mathfrak {X}}(s,N) = \frac{1}{N}\#\Big \{ n \ne m \leqslant N: \Vert x_n - x_m \Vert \leqslant \frac{s}{N}\, \& \,|x_n - x_m |\geqslant 1 - \frac{s}{N}\Big \}. \end{aligned}$$
(8)

The upshot is that for sequences \((x_n)_{n \in \mathbb {N}}\) for which the error term \(E^{\mathfrak {X}}(s/c,N)\) tends to 0 as \(N\rightarrow \infty ,\) we will have

$$\begin{aligned} \lim _{N \rightarrow \infty } R_2^{c\mathfrak {X}}(s,N) = \lim _{N \rightarrow \infty } R_2^{\mathfrak {X}}(s/c,N),\end{aligned}$$

provided, of course, that the limit on the right-hand side exists. So as long as we find a sequence \((x_n)_{n \in \mathbb {N}}\) with \(E^{\mathfrak {X}}(s,N)\rightarrow 0\) for any \(s>0\) and a number \(0<c<1\) such that the correlation function \(R_2^{\mathfrak {X}}(s,N)\) of \((x_n)_{n \in \mathbb {N}}\) satisfies

$$\begin{aligned} \lim _{N \rightarrow \infty } R_2^{\mathfrak {X}}(s,N) = 2cs \qquad \text {for any } s < \frac{1}{c}S,\end{aligned}$$
(9)

the proof of Theorem 3 can be finished as already mentioned: the sequence \((cx_n)_{n\in {\mathbb {N}}}\) satisfies \(\lim _{N \rightarrow \infty } R_2^{c\mathfrak {X} }(s,N) = 2s\) for every \(s< S\), but is only supported in [0, c], which makes it impossible to be uniformly distributed.

We now move our discussion to how we can construct a sequence that satisfies (9). The idea is to start with some sequence \((y_n)_{n \in \mathbb {N}}\) that has PPC and “dilute” its pair correlations by periodically inserting terms from some other sequence \((z_n)_{n \in \mathbb {N}}\), whose pair correlation function tends to 0 at all sufficiently small scales \(s>0.\)

More precisely, we consider a positive integer \(M \geqslant 2S\), a sequence \((y_n)_{n \in \mathbb {N}}\) that will be specified later, as well as the binary van der Corput sequence \((z_n)_{n \in \mathbb {N}}\). By the definition of \((z_n)_{n \in \mathbb {N}}\) (see e.g. [12, p. 127]) one can deduce that for any \(N \geqslant 1,\) all gaps between elements of the set \(\{z_1,\ldots ,z_N\}\) are at least 1/(2N). It follows immediately that the correlation function \(R_2^{\mathcal {Z}}(s,N)\) of \((z_n)_{n \in \mathbb {N}}\) satisfies

$$\begin{aligned} \lim _{N \rightarrow \infty }R_2^{\mathcal {Z}}(s,N) = 0 \qquad \text { for any }\, 0< s < \frac{1}{2} \cdot \end{aligned}$$
(10)
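This gap property is easy to verify numerically; the sketch below (illustrative) generates the binary van der Corput sequence by reversing binary digits and checks that all gaps, including the wrap-around gap, are at least 1/(2N).

```python
# Illustrative check of the gap property of the binary van der Corput sequence.
def van_der_corput(n):
    """n-th term (n >= 1): reverse the binary digits of n across the radix point."""
    v, denom = 0.0, 1.0
    while n:
        denom *= 2
        v += (n % 2) / denom
        n //= 2
    return v

for N in (10, 100, 1000):
    z = sorted(van_der_corput(n) for n in range(1, N + 1))
    gaps = [b - a for a, b in zip(z, z[1:])] + [z[0] + 1 - z[-1]]  # incl. wrap-around
    print(N, min(gaps) >= 1 / (2 * N))   # True, hence R_2^Z(s, N) = 0 for s < 1/2
```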

We now build \((x_n)_{n \in \mathbb {N}}\) by inserting an element of \((z_n)_{n \in \mathbb {N}}\) after every \(M-1\) elements of \((y_n)_{n \in \mathbb {N}}\); that is, writing \(n = kM + r\) where \(k \in \mathbb {N}\) and \(0 \leqslant r < M\), we define

$$\begin{aligned} x_n = {\left\{ \begin{array}{ll} z_k, &{}\text { if } r = 0 \\ y_{k(M-1)+r}, &{} \text { if } r \ne 0. \end{array}\right. } \end{aligned}$$
(11)
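The indexing in (11) can be made concrete as follows (an illustrative sketch; the helper van_der_corput is as in the previous sketch, and the choice of \((y_n)_{n \in \mathbb {N}}\) is left open).

```python
# Illustrative sketch of the interleaving (11): after every M-1 terms of (y_n),
# one term of the van der Corput sequence (z_n) is inserted.
def interleave(y, z, M, length):
    x = []
    for n in range(1, length + 1):
        k, r = divmod(n, M)     # write n = kM + r with 0 <= r < M
        x.append(z[k - 1] if r == 0 else y[k * (M - 1) + r - 1])
    return x

# With M = 4 the pattern reads: y_1, y_2, y_3, z_1, y_4, y_5, y_6, z_2, ...
```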

We then compute

$$\begin{aligned} R_2^{ \mathfrak {X}}(s,MN)&= \frac{1}{MN}\#\Big \{n \ne m \leqslant (M-1)N: \Vert y_n - y_m\Vert \leqslant \frac{s/M}{N}\Big \} \nonumber \\ {}&\quad + \frac{1}{MN}\#\Big \{n \ne m \leqslant N: \Vert z_n - z_m\Vert \leqslant \frac{s/M}{N}\Big \} \nonumber \\ {}&\quad + \frac{2}{MN}\#\Big \{n \leqslant (M-1)N, m \leqslant N: \Vert y_n - z_m\Vert \leqslant \frac{s/M}{N}\Big \}\nonumber \\&= \frac{M-1}{M}R_2^{\Upsilon }\left( s(M-1)/M,(M-1)N\right) + \frac{1}{M}R_2^{\mathcal {Z}}(s/M,N) \nonumber \\ {}&\quad + \frac{2}{MN}\#\Big \{n \leqslant (M-1)N, m \leqslant N: \Vert y_n - z_m\Vert \leqslant \frac{s/M}{N}\Big \}. \end{aligned}$$
(12)

In view of (10) and the assumption that \(M \geqslant 2S\), we see that \(\lim _{N \rightarrow \infty } R_2^{\mathcal {Z}}(s/M,N) = 0\) for any \(s < S\). So we are close to the end of the proof as long as we find a sequence \((y_n)_{n \in \mathbb {N}}\) fulfilling the following properties:

(i):

\((y_n)_{n \in \mathbb {N}}\) has PPC,

(ii):

for every \(s > 0,\)

$$\begin{aligned} \lim _{N \rightarrow \infty } \frac{1}{N}\#\Big \{n \leqslant (M-1)N, m \leqslant N: \Vert y_n - z_m\Vert \leqslant \frac{s/M}{N}\Big \} = \frac{2s(M-1)}{M}, \end{aligned}$$
(13)
(iii):

for every \(s>0,\) the error term \(E^{\Upsilon }(s,N)\) defined in (8) corresponding to the sequence \((y_n)_{n \in \mathbb {N}}\) satisfies \(\lim \limits _{N\rightarrow \infty }E^{\Upsilon }(s,N)=0\).

The existence of such a sequence \((y_n)_{n \in \mathbb {N}}\) is guaranteed by the following lemma.

Lemma 11

Let \(M\geqslant 1\) be an integer and let \((z_n)_{n \in \mathbb {N}}\) denote the binary van der Corput sequence. Furthermore let \((Y_n)_{n\in \mathbb {N}}\) be a sequence of independent, uniformly distributed random variables in [0, 1]. Then almost surely, the sequence \((Y_n(\omega ))_{n\in \mathbb {N}}\) has PPC, satisfies property (13) and satisfies \(\lim \limits _{N\rightarrow \infty }E(s,N)=0,\) where \(E(s,N)\) denotes the error term (8) associated with \((Y_n(\omega ))_{n\in \mathbb {N}}\).

Proof

It is a well-known fact that any sequence \((Y_n)_{n\in \mathbb {N}}\) of independent, uniformly distributed random variables in [0, 1] has PPC almost surely. For a proof, we refer to [9, Appendix B] in the setup of higher order correlations, or also to [11] for sequences in higher dimensions.

We now prove the second property, namely that \((Y_n)_{n\in \mathbb {N}}\) satisfies (13) almost surely. Choose an arbitrary \(s > 0\). Writing

$$\begin{aligned}Y_{n,N}(s) = \#\Big \{1\leqslant m \leqslant N: \Vert Y_n - z_m\Vert \leqslant \frac{s}{MN}\Big \}, \quad n = 1, \ldots , (M-1)N,\end{aligned}$$

the quantity in question is equal to

$$\begin{aligned} I_{N}(s) = \frac{1}{N} \sum _{n \leqslant (M-1)N} Y_{n,N}(s).\end{aligned}$$

It is easy to see that

$$\begin{aligned}\mathbb {E}[ I_N(s) ] = \frac{2s(M-1)}{M}, \qquad N\geqslant 1 \end{aligned}$$

and we proceed to compute the variance of \(I_N.\) We first observe that for any \(N\geqslant 1\), the functions \((Y_{n,N})_{n=1}^{N(M-1)}\) form a family of independent random variables. Furthermore, for \(2^k \leqslant N < 2^{k+1}\) we have \((z_m)_{m=1}^{N} \subseteq \{\frac{\ell }{2^{k+1}}, \ell = 1,\ldots , 2^{k+1}\}\), so we can bound

$$\begin{aligned}Y_{n,N}(s) \leqslant \#\Big \{1 \leqslant m \leqslant 2^{k+1}: \frac{m}{2^{k+1}} \in \Big [Y_n - \frac{2s/M}{2^{k+1}}, Y_n + \frac{2s/M}{2^{k+1}}\Big ]\Big \} \leqslant \frac{4s}{M} + 2.\end{aligned}$$

This implies that

$$\begin{aligned} \text {Var}[I_N]\! =\!\frac{1}{N^2}\hspace{-3mm}\sum _{n=1}^{N(M-1)}\hspace{-3mm}\text {Var}[Y_{n,N}] = \frac{1}{N^2}\hspace{-3mm}\sum _{n=1}^{N(M-1)}\hspace{-3mm}\Big ( \mathbb {E}[Y_{n,N}^2] -\mathbb {E}[Y_{n,N}]^2\Big ) \!=\! \mathcal {O}_{M,s}\Big (\frac{1}{N}\Big ). \end{aligned}$$
(14)

To finish the proof we use a standard approximation argument (for more details see e.g. [22]). Let \(\gamma > 1\) and consider the subsequence

$$\begin{aligned} B_N := \left\lceil {N^{\gamma }}\right\rceil ,\qquad N\geqslant 1. \end{aligned}$$

For \(s>0\) fixed, we use Chebyshev’s inequality, the first Borel–Cantelli Lemma and the variance estimate from (14) to see that

$$\begin{aligned} \lim _{N \rightarrow \infty } I_{B_N}(s) = \frac{2s(M-1)}{M} \qquad \text { almost surely } \end{aligned}$$
(15)

(where the zero-measure set depends on s). Repeating the argument for all s lying in a dense, countable subset of \(\mathbb {R}_+\) and employing the monotonicity of \(I_N(s)\) as a function of s, we see that (15) actually holds for all \(s>0\). Next, if \(N\geqslant 1\) is an arbitrary integer, we let \(K\geqslant 1\) be such that \(B_K \leqslant N < B_{K+1}\) and observe that for any \(s>0,\)

$$\begin{aligned} \frac{B_K}{B_{K+1}} I_{B_K}\Big (\frac{B_Ks}{B_{K+1}}\Big ) \leqslant I_N(s) \leqslant \frac{B_{K+1}}{B_K}I_{B_{K+1}}\Big ( \frac{B_{K+1}s}{B_K}\Big ). \end{aligned}$$
(16)

Since \(\lim \limits _{N \rightarrow \infty } \dfrac{B_N}{B_{N+1}} = 1\), we deduce from (15) that (13) holds almost surely for any fixed \(s > 0\).

Finally, it remains to prove that the third property, namely \(\lim _{N\rightarrow \infty } E(s,N)=0,\) is satisfied almost surely. For the error term \(E^{\Upsilon }(s,N)\) corresponding to \((y_n)_{n \in \mathbb {N}}\), we provide the upper bound

$$\begin{aligned} \begin{aligned} E^{\Upsilon }(s,N)&\leqslant \frac{1}{N}\#\Big \{ n \ne m \leqslant N: |y_n - y_m |\geqslant 1 - \frac{s}{N}\Big \}\\&\leqslant \frac{2}{N} \#\Big \{ n \leqslant N: y_n \geqslant 1 - \frac{s}{N}\Big \}\cdot \#\Big \{ n \leqslant N: y_n \leqslant \frac{s}{N} \Big \}. \end{aligned} \end{aligned}$$
(17)

It is therefore sufficient to prove that almost surely, we have

$$\begin{aligned} \lim _{N \rightarrow \infty } \frac{1}{N}\#\Big \{ n \leqslant N: Y_n \geqslant 1 - \frac{s}{N}\Big \}\cdot \#\Big \{ n \leqslant N: Y_n \leqslant \frac{s}{N} \Big \} =0. \end{aligned}$$

This can be proved using a mean-variance argument, completely analogous to the one used to prove (13). We leave the details to the interested reader. \(\square\)

Having proved the existence of a sequence \((y_n)_{n \in \mathbb {N}}\) with the three desired properties as in Lemma 11, we define \((x_n)_{n \in \mathbb {N}}\) as in (11). In view of these properties, (12) implies that for any \(s>0,\) the pair correlation \(R_2^{\mathfrak {X}}(s,N)\) of \((x_n)_{n \in \mathbb {N}}\) satisfies

$$\begin{aligned} R_2^{\mathfrak {X}}(s, MN) = \Big (1 - \frac{1}{M^2}\Big )\,2s + \frac{1}{M} R_2^{\mathcal {Z}}(s/M,N) + o(1), \quad N\rightarrow \infty . \end{aligned}$$

From now on, we focus our attention on values \(s<M/2.\) By the assumption that \(M\geqslant 2S,\) these values include all scales \(s<S.\) Since for \(s < M/2\) we have \(R_2^{\mathcal {Z}}(s/M, N) \rightarrow 0\), we deduce that for \(s< M/2,\)

$$\begin{aligned} R_2^{\mathfrak {X}}(s, MN) = \Big (1 - \frac{1}{M^2}\Big )\,2s + o(1),\qquad N\rightarrow \infty .\end{aligned}$$

Then a standard approximation argument, similar to the one we used to derive (16), gives

$$\begin{aligned} \lim _{N \rightarrow \infty }R_2^{\mathfrak {X}}(s, N) = \Big (1 - \frac{1}{M^2}\Big )\,2s \qquad \text { for all } s< \frac{M}{2}\cdot \end{aligned}$$
(18)

Finally, we set \(c = 1 - \dfrac{1}{M^2}\in (0,1)\) and consider the sequence \((cx_n)_{n\in {\mathbb {N}}}.\) In order to prove that this sequence satisfies the statement of Theorem 3, it remains to verify that the error term \(E^{\mathfrak {X}}(s,N)\) corresponding to \((x_n)_{n\in {\mathbb {N}}}\) tends to 0 for any \(s>0.\) As in (17), we find

$$\begin{aligned} E^{\mathfrak {X}}(s,MN)&= \frac{1}{MN}\#\Big \{ n \ne m \leqslant MN: \Vert x_n - x_m \Vert \leqslant \frac{s}{MN}\, \& \,|x_n - x_m |\geqslant 1 - \frac{s}{MN}\Big \}\\&\leqslant \frac{M-1}{M}E^{\Upsilon }(s(M-1)/M,(M-1)N) + \frac{1}{M}E^{\mathcal {Z}}(s/M,N) \\ {}&\,\,+ \frac{1}{MN}\#\Big \{n \leqslant (M-1)N: y_n \leqslant \frac{s/M}{N}\Big \}\cdot \#\Big \{n \leqslant N: z_n \geqslant 1-\frac{s/M}{N}\Big \} \\ {}&\,\,+ \frac{1}{MN}\#\Big \{n \leqslant (M-1)N: y_n \geqslant 1-\frac{s/M}{N}\Big \}\cdot \#\Big \{n \leqslant N: z_n \leqslant \frac{s/M}{N}\Big \}. \end{aligned}$$

Using that \(s/M < 1/2\) and the property \(\{z_1,\ldots , z_N\} \subseteq [1/(2N),1-1/(2N)]\) that follows straightforwardly from the definition of the binary van der Corput sequence, we see that

$$\begin{aligned}\#\Big \{n \leqslant N: z_n \geqslant 1-\frac{s/M}{N}\Big \} = \#\Big \{n \leqslant N: z_n \leqslant \frac{s/M}{N}\Big \} = 0,\end{aligned}$$

whence \(E^{\mathcal {Z}}(s/M,N)=0\) and

$$\begin{aligned}E^{\mathfrak {X}}(s,MN) \leqslant \frac{M-1}{M}E^{\Upsilon }(s(M-1)/M,(M-1)N).\end{aligned}$$

By the construction of \((y_n)_{n \in \mathbb {N}}\), \(E^{\Upsilon }(s(M-1)/M,(M-1)N) \rightarrow 0\) as \(N\rightarrow \infty\), so \(\lim \limits _{N\rightarrow \infty }E^{\mathfrak {X}}(s,MN)=0\) and subsequently \(\lim \limits _{N\rightarrow \infty }E^{\mathfrak {X}}(s,N)=0\) follows. According to the discussion in the beginning of the section, this allows us to deduce that for the rescaled sequence \((cx_n)_{n\in {\mathbb {N}}}\) we have

$$\begin{aligned} R_2^{c\mathfrak {X}}(s,N) = R_2^{\mathfrak {X}}(s/c,N) + o(1), \quad N\rightarrow \infty . \end{aligned}$$

Combined with (18), this implies that

$$\begin{aligned} \lim _{N\rightarrow \infty } R_2^{c\mathfrak {X}}(s,N) = 2s \quad \text { for all } s<S \end{aligned}$$

which concludes the proof of Theorem 3 by the discussion above.

6 Proof of Theorem 4

We now proceed to the proof of Theorem 4: we prove that when \(\alpha <\beta\), the property of \(\beta\)-PPC is stronger than that of \(\alpha\)-PPC. We shall distinguish two different cases according to whether \(\beta =1\) or \(\beta <1.\) In both cases, by Lemma 9 and Lemma 10, it suffices to show that for all \(s > 0\)

$$\begin{aligned} \limsup _{N \rightarrow \infty } I_\alpha (s,N) \leqslant s^2. \end{aligned}$$

6.1 The case \(0<\beta <1\)

Let \(0\leqslant \alpha< \beta < 1.\) We bound the function \(F_\alpha (t,s,N)\) from above in terms of \(F_\beta (t,s,N)\). Writing

$$\begin{aligned} M = \Big \lceil \frac{ N^{\beta -\alpha }}{2} \Big \rceil , \end{aligned}$$

we use the same reasoning as in the proof of Theorem 2: we note that

$$\begin{aligned} B\Big (t, \frac{s}{2N^{\alpha }}\Big ) \subseteq \bigcup _{|\ell | \leqslant M }\hspace{-1mm} B\Big (t + \frac{\ell s}{N^\beta }, \frac{s}{2N^\beta }\Big ),\end{aligned}$$

whence

$$\begin{aligned} F_\alpha (t,s,N) \leqslant \frac{1}{ N^{\beta - \alpha }}\sum _{|\ell |\leqslant M} F_\beta \Big (t + \frac{ \ell s}{N^{\beta }},s,N\Big ). \end{aligned}$$

Another application of the Cauchy-Schwarz inequality gives

$$\begin{aligned} \int _{0}^{1} F_{\alpha }(t,s,N)^2 \,\textrm{d}t&\leqslant \int _{0}^{1} \frac{1}{N^{2(\beta - \alpha )}}\Bigg (\sum _{ |\ell |\leqslant M } F_\beta \Big (t + \frac{ \ell s}{N^{\beta }},s,N\Big )\Bigg )^2 \,\textrm{d}t \\ {}&\leqslant \int _{0}^{1} \frac{ N^{\beta - \alpha }+1 }{N^{2(\beta - \alpha )}} \sum _{|\ell |\leqslant M }F_\beta \Big (t + \frac{ \ell s}{N^{\beta }},s,N\Big )^2 \,\textrm{d}t \\ {}&= \frac{N^{\beta - \alpha }+1 }{N^{2(\beta - \alpha )}} \sum _{|\ell |\leqslant M} \int _{0}^{1} F_{\beta }(t,s,N)^2 \,\textrm{d}t \\ {}&\leqslant \Big (1 + \frac{3 }{ N^{\beta -\alpha }} \Big ) \int _{0}^{1} F_{\beta }(t,s,N)^2 \,\textrm{d}t. \end{aligned}$$

By the assumption of \(\beta\)-PPC and Lemma 9, we have

$$\begin{aligned}\lim _{N \rightarrow \infty }\int _{0}^{1} F_\beta (t,s,N)^2 \textrm{d}t = s^2,\end{aligned}$$

so the result follows.

6.2 The case \(\beta =1\)

Assume the sequence \((x_n)_{n \in \mathbb {N}}\) has PPC and let \(s > 0\). For all integers \(K, N \geqslant 1\) we set

$$\begin{aligned}M = M(N,K) = \Bigg \lceil \frac{N^{1-\alpha }}{2K}\Bigg \rceil . \end{aligned}$$

As in the previous case, we have

$$\begin{aligned} B\Big (t, \frac{s}{2N^{\alpha }}\Big ) \subseteq \bigcup _{|\ell | \leqslant M }\hspace{-1mm} B\Big (t + \frac{\ell sK}{N}, \frac{sK}{2N}\Big ) \end{aligned}$$

and therefore

$$\begin{aligned} F_\alpha (t,s,N) \leqslant \frac{1}{ N^{1- \alpha }}\sum _{|\ell |\leqslant M} F_1\Big (t + \frac{ \ell sK}{N},sK,N\Big ). \end{aligned}$$

By the Cauchy-Schwarz inequality and argumentations as previously, it follows that

$$\begin{aligned} \int _{0}^{1} F_{\alpha }(t,s,N)^2 \,\textrm{d}t&\leqslant \int _{0}^{1} \frac{1}{N^{2(1-\alpha )}}\Big (\sum _{ |\ell |\leqslant M } F_1\Big (t + \frac{ \ell sK}{N},sK,N\Big )\Big )^2 \,\textrm{d}t \\ {}&\leqslant \int _{0}^{1} \frac{2M+1}{N^{2(1-\alpha )}} \sum _{|\ell |\leqslant M }F_1\Big (t + \frac{ \ell sK}{N},sK,N\Big )^2 \,\textrm{d}t \\ {}&= \frac{(2M+1)^2}{N^{2(1-\alpha )}} \int _{0}^{1} F_{1}(t,sK,N)^2 \,\textrm{d}t. \end{aligned}$$

Clearly,

$$\begin{aligned} \lim _{N\rightarrow \infty } \frac{(2M+1)^2}{N^{2(1-\alpha )}} = \frac{1}{K^2} \end{aligned}$$

and by the assumption of PPC we have

$$\begin{aligned}\lim _{N \rightarrow \infty }\int _{0}^{1} F_1(t,sK,N)^2\, \textrm{d}t = (sK)^2 + sK.\end{aligned}$$

Therefore, we can deduce that

$$\begin{aligned}\limsup _{N \rightarrow \infty } \int _{0}^{1} F_{\alpha }(t,s,N)^2 \,\textrm{d}t \leqslant \frac{s^2K^2 + sK}{K^2}\end{aligned}$$

for any \(K \in \mathbb {N}\). Letting \(K \rightarrow \infty\), the statement follows.

7 Proof of Theorem 5

In the current section we fix a value of \(0< \beta <1\) and construct a sequence \((x_n)_{n \in \mathbb {N}}\) that has \(\alpha\)-PPC for all \(\alpha <\beta\) but does not have \(\alpha\)-PPC for any \(\alpha \geqslant \beta .\) The construction is based on the following idea: we start with an arbitrary sequence \((y_n)_{n\in {\mathbb {N}}}\) that has PPC. The required sequence \((x_n)_{n \in \mathbb {N}}\) is then defined in a way that for each \(1 \leqslant n \leqslant N\), the number \(y_n\) occurs asymptotically \(N^{\tfrac{1}{\beta }-1}\) times in the first \(N^{\tfrac{1}{\beta }}\) elements of \((x_n)_{n \in \mathbb {N}}\). In that way, the contribution to \(R_2(\alpha ; s,N)\) of terms \(x_i,x_j\) with \(1\leqslant i\ne j \leqslant N\) and \(x_i = x_j\) will be of order \(N^{2-\beta }\). Looking at the factor \(\frac{1}{N^{2-\alpha }}\) in the definition of \(R_2(\alpha ;s,N)\), we see that the contribution of these pairs is negligible when \(\alpha < \beta\), whereas for \(\alpha > \beta\), it is impossible to have the Poissonian correlation property since the correlation function will diverge.

To achieve the aforementioned construction, we employ an “expansion” of positive integers with respect to a sequence that grows like \(\frac{1}{\beta }\)-th powers of integers.

Fix \(0< \beta <1\) and let \((y_n)_{n\in {\mathbb {N}}}\) be a sequence with PPC such that \(y_m \ne y_n\) whenever \(m\ne n\). (The existence of such a sequence is justified, for example, by [20, Theorem 1].) Also, consider the sequence of indices

$$\begin{aligned} A_N = N \lfloor N^{\frac{1}{\beta }-1 } \rfloor , \qquad N\geqslant 1. \end{aligned}$$

Every \(N\geqslant 1\) can be written uniquely as

$$\begin{aligned} N = A_M + \varepsilon _N\lfloor M^{\frac{1}{\beta }-1}\rfloor + q_N(M+1) + r_N, \end{aligned}$$
(19)

where

  1. (i)

    \(M = M_N \geqslant 1\) is the unique integer such that \(A_M < N \leqslant A_{M+1},\)

  2. (ii)

    \(\varepsilon _N \in \{ 0, 1\}, \quad q_N \geqslant 0, \quad 1 \leqslant r_N \leqslant M+1, \quad\) and

  3. (iii)

    \(\varepsilon _N = 0\) if and only if \(A_M < N \leqslant A_M + \lfloor M^{\frac{1}{\beta } - 1}\rfloor .\)

We now define a new sequence \((x_n)_{n \in \mathbb {N}}\) by letting \(x_1 = y_1\) and

$$\begin{aligned} x_N = {\left\{ \begin{array}{ll} y_{M+1}, &{}\text { if } \varepsilon _N =0, \\ y_{r_N }, &{}\text { otherwise.} \end{array}\right. }\end{aligned}$$

Here \(M\geqslant 1, \varepsilon _N \in \{0,1\}\) and \(r_N\geqslant 1\) are as in (19).
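To make the decomposition (19) and the resulting definition concrete, here is a short illustrative sketch (the function names are ours) computing, for given N and \(\beta\), the index k with \(x_N = y_k\); it also checks numerically the counting identity of Lemma 12 below.

```python
# Illustrative sketch of the decomposition (19) and the definition of (x_n).
import math

def A(N, beta):                          # A_N = N * floor(N^{1/beta - 1})
    return N * math.floor(N ** (1 / beta - 1))

def x_index(N, beta):
    """Return the index k such that x_N = y_k."""
    if N == 1:
        return 1
    M = 1
    while A(M + 1, beta) < N:            # the unique M with A_M < N <= A_{M+1}
        M += 1
    D = N - A(M, beta)
    block = math.floor(M ** (1 / beta - 1))
    if D <= block:                        # epsilon_N = 0, so x_N = y_{M+1}
        return M + 1
    q, r = divmod(D - block - 1, M + 1)   # D - block = q_N (M+1) + r_N
    return r + 1                          # r_N = r + 1 lies in {1, ..., M+1}

# Check of Lemma 12 for beta = 1/2 and M = 6: each y_k with k <= M occurs
# floor(M^{1/beta - 1}) = 6 times among x_1, ..., x_{A_M}.
beta, M = 0.5, 6
counts = {}
for N in range(1, A(M, beta) + 1):
    k = x_index(N, beta)
    counts[k] = counts.get(k, 0) + 1
print(counts)   # expected: {k: 6 for each k = 1, ..., 6}
```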

We aim to show that \((x_n)_{n \in \mathbb {N}}\) has the required property, that is, it does not have \(\beta\)-PPC but it has \(\alpha\)-PPC for any \(\alpha <\beta .\) We first need the following lemma.

Lemma 12

For any \(M\geqslant 1,\) we have

$$\begin{aligned} \#\{1\leqslant n \leqslant A_M : x_n = y_k\} = \lfloor M^{\frac{1}{\beta } -1}\rfloor , \quad k=1,2,\ldots , M. \end{aligned}$$
(20)

Proof

We prove this statement by induction on M. Trivially, the statement holds for \(M=1\), so we assume (20) is true for some fixed \(M \geqslant 1\). By construction of the sequence, \(x_N = y_{M+1}\) for \(A_M < N \leqslant A_M + \lfloor M^{\frac{1}{\beta } - 1}\rfloor = (M+1)\lfloor M^{\frac{1}{\beta } - 1}\rfloor .\) This implies that for all \(\ell ,k \in [M+1]\),

$$\begin{aligned} \#\{ n \leqslant (M+1)\lfloor M^{\frac{1}{\beta } - 1}\rfloor : x_n = y_k\} = \#\{ n \leqslant (M+1)\lfloor M^{\frac{1}{\beta } - 1}\rfloor : x_n = y_{\ell }\}.\end{aligned}$$
(21)

Since both \(A_{M+1}\) and \(A_M +\lfloor M^{\frac{1}{\beta } - 1}\rfloor = (M+1)\lfloor M^{\frac{1}{\beta } - 1}\rfloor\) are divisible by \(M+1\), so is their difference. Thus, as N runs through the values \((M+1)\lfloor M^{\frac{1}{\beta } - 1}\rfloor < N \leqslant A_{M+1}\), it attains each residue class mod \({M+1}\) the same number of times. This means that \(r_N\) attains each value between 1 and \(M+1\) the same number of times, and finally \(x_N\) assumes the values \(y_1,\ldots , y_{M+1}\) equally often. Hence, we can extend (21) to \(1 \leqslant n \leqslant A_{M+1}\). Note that no element \(y_k\) with \(k \geqslant M+2\) appears in \(\{x_1,\ldots , x_{A_{M+1}}\}\), so the statement follows. \(\square\)

We can now show that \((x_n)_{n \in \mathbb {N}}\) does not have \(\beta\)-PPC. Let \(s>0\). Whenever \(m,n \leqslant A_M,\) the inequality \(\Vert x_m - x_n\Vert \leqslant s/A_M^{\, \beta }\) is satisfied:

  • either when \(x_m=y_{k}\) and \(x_n=y_\ell\) for some indices \(1\leqslant k,\ell \leqslant M\) with \(k\ne \ell\) such that \(\Vert y_k - y_\ell \Vert \leqslant s/A_M^{\, \beta },\) or

  • when \(x_n=x_m=y_k\) for some \(1\leqslant k\leqslant M.\)

Therefore by Lemma 12, as \(M\rightarrow \infty\) we have

$$\begin{aligned} R_2(\beta ; s, A_M)&= \frac{1}{A_M^{2-\beta }}\#\Big \{ 1\leqslant m \ne n \leqslant A_M : \Vert x_m - x_n \Vert \leqslant \frac{s}{A_M^{\, \beta }} \Big \} \\&= \frac{M\lfloor M^{\frac{1}{\beta }-1}\rfloor ^2}{A_M^{2-\beta }} \cdot \frac{1}{M}\#\Big \{1\leqslant k \ne \ell \leqslant M : \Vert y_k - y_\ell \Vert \leqslant \frac{s}{A_M^{\beta }} \Big \} \\&\qquad + \frac{1}{A_M^{2-\beta }} \lfloor M^{\frac{1}{\beta }-1} \rfloor \big (\lfloor M^{\frac{1}{\beta }-1} \rfloor -1 \big ) M \\&= \big (1 + o(1) \big )\cdot \frac{1}{M}\#\Big \{1\leqslant k \ne \ell \leqslant M : \Vert y_k - y_\ell \Vert \leqslant \frac{s}{A_M^{\beta }} \Big \} + 1 +o(1). \end{aligned}$$

Let \(\varepsilon >0.\) Since \(\lim \limits _{M\rightarrow \infty } \dfrac{A_M^{\beta }}{M} =1,\) for all \(M\geqslant 1\) large enough we have

$$\begin{aligned} \begin{aligned} R_2^{\Upsilon }((1-\varepsilon )s,M)&= \frac{1}{M}\#\Big \{1\leqslant k \ne \ell \leqslant M : \Vert y_k - y_\ell \Vert \leqslant (1-\varepsilon )\frac{s}{M} \Big \} \\&\leqslant \frac{1}{M}\#\Big \{1\leqslant k \ne \ell \leqslant M : \Vert y_k - y_\ell \Vert \leqslant \frac{s}{A_M^{\beta }} \Big \} \\&\leqslant \frac{1}{M}\#\Big \{1\leqslant k \ne \ell \leqslant M : \Vert y_k - y_\ell \Vert \leqslant (1+\varepsilon )\frac{s}{M} \Big \} \\&= R_2^{\Upsilon }((1+\varepsilon )s,M),\end{aligned} \end{aligned}$$
(22)

where we write \(R_2^{\Upsilon }(s,N)\) for the pair correlation function of the sequence \((y_n)_{n \in \mathbb {N}}\). By the hypothesis that \((y_n)_{n \in \mathbb {N}}\) has PPC, the leftmost and rightmost sides of (22) converge to \(2(1-\varepsilon )s\) and \(2(1+\varepsilon )s\), respectively; since \(\varepsilon >0\) was arbitrary, we deduce that

$$\begin{aligned}\lim _{M\rightarrow \infty } \frac{1}{M}\#\Big \{1\leqslant k \ne \ell \leqslant M : \Vert y_k - y_\ell \Vert \leqslant \frac{s}{A_M^{\beta }} \Big \} = 2s. \end{aligned}$$

Therefore,

$$\begin{aligned} \lim _{M\rightarrow \infty } R_2(\beta ; s, A_M) = 2s + 1, \end{aligned}$$

which implies that \((x_n)_{n \in \mathbb {N}}\) does not have \(\beta\)-PPC. By Theorem 4, it does not have \(\alpha\)-PPC for any \(\alpha >\beta .\) (We leave it to the interested reader to verify that actually for \(\alpha >\beta ,\) \(\lim \limits _{N\rightarrow \infty }R_2(\alpha ;s,N)=\infty\) for any \(s>0.)\)

We now fix \(0 \leqslant \alpha <\beta .\) Arguing as previously we obtain

$$\begin{aligned} R_2(\alpha ;s,A_M)&= \frac{1}{A_M^{2-\alpha }} \#\Big \{ 1\leqslant m \ne n \leqslant A_M : \Vert x_m - x_n \Vert \leqslant \frac{s}{A_M^\alpha } \Big \} \\&= \frac{ \lfloor M^{\frac{1}{\beta }-1}\rfloor ^2}{A_M^{2-\alpha }} \cdot \#\Big \{1\leqslant k \ne \ell \leqslant M : \Vert y_k - y_\ell \Vert \leqslant \frac{s}{A_M^{\alpha }} \Big \} \\&\qquad + \frac{1}{A_M^{2-\alpha }} \lfloor M^{\frac{1}{\beta }-1} \rfloor \big (\lfloor M^{\frac{1}{\beta }-1} \rfloor -1 \big ) M. \end{aligned}$$

We first notice that since \(\alpha < \beta\) and \(A_M = \big (1+o(1)\big )M^{\frac{1}{\beta }},\)

$$\begin{aligned}\frac{M}{A_M^{2-\alpha }} \lfloor M^{\frac{1}{\beta }-1} \rfloor \big (\lfloor M^{\frac{1}{\beta }-1} \rfloor -1 \big ) = \big (1+o(1)\big )\frac{M\cdot M^{\frac{2}{\beta }-2}}{M^{\frac{2-\alpha }{\beta }}} = \frac{1+o(1)}{M^{1 - \frac{\alpha }{\beta }}} = o(1),\qquad M\rightarrow \infty . \end{aligned}$$

It remains to understand the asymptotic behaviour of the factor

$$\begin{aligned} \#\Big \{1\leqslant k \ne \ell \leqslant M : \Vert y_k - y_\ell \Vert \leqslant \frac{s}{A_M^{\alpha }} \Big \}.\end{aligned}$$

By the definition of \((A_M)_{M \in \mathbb {N}},\) we have \(\lim \limits _{M\rightarrow \infty }\dfrac{A_M^{\alpha } }{ M^{\frac{\alpha }{\beta }}} =1.\) The sequence \((y_n)_{n \in \mathbb {N}}\) has PPC, so by Theorem 4 it also has \((\alpha /\beta )\)-PPC. Thus arguing as in (22) we deduce that

$$\begin{aligned} \lim _{M\rightarrow \infty } \frac{1}{M^{2-\frac{\alpha }{\beta }}}\#\Big \{1\leqslant k \ne \ell \leqslant M : \Vert y_k - y_\ell \Vert \leqslant \frac{s}{A_M^{\alpha }} \Big \} = 2s. \end{aligned}$$

Therefore,

$$\begin{aligned} R_2(\alpha ;s,A_M)&= \frac{\lfloor M^{\frac{1}{\beta }-1}\rfloor ^2}{A_M^{2-\alpha }}\#\Big \{1\leqslant k \ne \ell \leqslant M : \Vert y_k - y_\ell \Vert \leqslant \frac{s}{A_M^{\alpha }} \Big \} + o(1) \\&= \frac{\lfloor M^{\frac{1}{\beta }-1}\rfloor ^2}{A_M^{2-\alpha }} M^{2-\frac{\alpha }{\beta }} \big ( 2s + o(1) \big ) + o(1)\\&= 2s + o(1), \qquad M\rightarrow \infty . \end{aligned}$$

To complete the proof that \((x_n)_{n \in \mathbb {N}}\) has \(\alpha\)-PPC, we observe that \(\lim \limits _{M\rightarrow \infty }\dfrac{A_{M+1}}{A_M}=1\) and by an approximation argument analogous to (16), we deduce that

$$\begin{aligned} \lim \limits _{N\rightarrow \infty } R_2(\alpha ; s, N) = 2s.\end{aligned}$$
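The dichotomy of this section can also be observed numerically by implementing the construction directly. The following self-contained Python sketch (an illustration only; all parameter choices are ours) builds \((x_n)\) up to \(N = A_{M_{\max }}\) from i.i.d. uniform points \((y_n)\) — which have PPC almost surely — checks the conclusion of Lemma 12, and evaluates \(R_2(\alpha ;s,A_{M_{\max }})\) for \(\alpha\) below, at, and above \(\beta\). At these modest ranges the convergence is slow, but the qualitative picture (values near \(2s\) for \(\alpha <\beta\), near \(2s+1\) at \(\alpha =\beta\), and inflated for \(\alpha >\beta\)) is already visible.

    import numpy as np

    rng = np.random.default_rng(0)
    beta = 0.5
    Mmax = 40
    A = lambda M: M * int(M ** (1 / beta - 1))      # A_M = M * floor(M^(1/beta-1))
    N = A(Mmax)                                     # with beta = 1/2, A_M = M^2

    y = rng.random(Mmax + 1)                        # i.i.d. uniforms: PPC almost surely
    x = np.empty(N + 1)                             # 1-based indexing; x[0] unused
    x[1] = y[1]
    for M in range(1, Mmax):
        block = int(M ** (1 / beta - 1))
        lo, hi = A(M), A(M + 1)
        x[lo + 1: lo + block + 1] = y[M + 1]        # the eps_N = 0 part of (19)
        tail = np.arange(hi - lo - block)           # eps_N = 1 part: r_N cycles 1..M+1
        x[lo + block + 1: hi + 1] = y[1 + tail % (M + 1)]

    # Lemma 12: up to A_M, every y_k (k <= M) occurs exactly floor(M^(1/beta-1)) times
    _, counts = np.unique(x[1:N + 1], return_counts=True)
    assert set(counts) == {int(Mmax ** (1 / beta - 1))}

    def R2(alpha, s, n):
        z = x[1:n + 1]
        d = np.abs(z[:, None] - z[None, :])
        d = np.minimum(d, 1.0 - d)                  # distance to the nearest integer
        pairs = np.count_nonzero(d <= s / n ** alpha) - n   # drop the diagonal m = n
        return pairs / n ** (2 - alpha)

    for alpha in (0.3, 0.5, 0.7):
        print(alpha, R2(alpha, 1.0, N))             # s = 1: near 2, near 3, well above 3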

8 Proof of Theorem 6

Theorem 6 will follow from a standard argument that, to the best of our knowledge, was first employed in the context of pair correlations in [20] and has since been used quite often in the relevant literature, see e.g. [2] and the references therein. We therefore provide only the basic elements of the proof.

We make explicit the dependence of the pair correlation function with parameter \(\beta\) of the sequence \((a_nx)_{n\in {\mathbb {N}}}\) on \(x\in [0,1)\) by writing

$$\begin{aligned}R_2(\beta ;s,N,x) := \frac{1}{N^{2-\beta }}\#\left\{ 1 \leqslant i \ne j \leqslant N: \left| \left| a_ix-a_jx\right| \right| \leqslant \frac{s}{N^{\beta }}\right\} .\end{aligned}$$

It then suffices to show that for Lebesgue-almost all \(x \in [0,1),\) we have

$$\begin{aligned}\lim _{N \rightarrow \infty } R_2(\beta ;s,N,x) = 2s\qquad \text { for any } s>0. \end{aligned}$$

First we fix a value of \(s>0\). Since the \(a_n\) are distinct integers, for each pair \(i \ne j\) the set \(\{x \in [0,1) : \Vert a_ix - a_jx\Vert \leqslant s/N^{\beta }\}\) has Lebesgue measure \(2s/N^{\beta }\) (as soon as \(N^{\beta } \geqslant 2s\)). Counting the \(N(N-1)\) ordered pairs, it follows by the definition of \(R_2(\beta ;s,N,x)\) that

$$\begin{aligned}\int _{0}^{1}R_2(\beta ;s,N,x)\,\textrm{d}x = \frac{2(N-1)s}{N} \cdot \end{aligned}$$

A sufficient upper bound on the variance is given by the following lemma.

Lemma 13

For \(\varepsilon := \frac{1}{2}(1-\beta ) > 0\), we have

$$\begin{aligned}\int _{0}^{1}\left( R_2(\beta ;s,N,x) - \frac{2(N-1)s}{N}\right) ^2 \textrm{d}x \ll N^{-\varepsilon }, \quad N\rightarrow \infty .\end{aligned}$$

Proof

The proof goes along the lines of [2]. We use the Fourier series expansion

$$\begin{aligned} \mathbbm {1}_{[-\tfrac{s}{N^{\beta }},\tfrac{s}{N^{\beta }}]}(x) \sim \sum _{n=-\infty }^{+\infty }c_n e(nx), \end{aligned}$$

where

$$\begin{aligned} c_0 = \frac{2s}{N^{\beta }} \qquad \text { and } \qquad c_n = \frac{1}{\pi n}\sin \Big (\frac{2\pi n s}{N^{\beta }}\Big ), \quad n\ne 0. \end{aligned}$$
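As a quick sanity check of these coefficients (again purely illustrative; the grid size is an arbitrary choice of ours), one can compare the closed form with a numerical Riemann sum:

    import numpy as np

    s, N, beta = 0.3, 1000, 0.5
    delta = s / N ** beta                       # half-length of the indicator interval
    x = np.linspace(-0.5, 0.5, 1_000_001)
    dx = x[1] - x[0]
    f = (np.abs(x) <= delta).astype(float)

    print(np.sum(f) * dx, 2 * s / N ** beta)    # c_0
    for n in (1, 2, 5):
        c_num = np.sum(f * np.exp(-2j * np.pi * n * x)) * dx   # c_n = int f(x) e(-nx) dx
        c_formula = np.sin(2 * np.pi * n * s / N ** beta) / (np.pi * n)
        print(n, c_num.real, c_formula)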

Using Parseval’s identity we get

$$\begin{aligned} \int _{0}^{1}\!\left( R_2(\beta ;s,N,x) - \frac{2(N-1)s}{N}\right) ^2\!\!\textrm{d}x = \frac{1}{N^{4-2\beta }}\hspace{-4mm}\sum _{m,n \in \mathbb {Z} \setminus \{0\}}\hspace{-2mm}r_N(m)r_N(n)\hspace{-2mm}\sum _{\begin{array}{c} i,j \in \mathbb {Z} \setminus \{0\}\\ im = jn \end{array}}\hspace{-4mm}c_{i}c_{j} \end{aligned}$$

where

$$\begin{aligned}r_N(m) = \# \{1 \leqslant i, j \leqslant N : a_i - a_j = m\}, \quad m\geqslant 1, \end{aligned}$$

denotes the number of representations of \(m\geqslant 1\) as a difference of two elements of \((a_i)_{i\leqslant N}\); for negative \(m\) we set \(r_N(m) := r_N(-m)\), by symmetry. It can now be shown as in [2] that for some absolute constant \(C>0,\)

$$\begin{aligned} \sum _{\begin{array}{c} i,j \in \mathbb {Z} \setminus \{0\}\\ im = jn \end{array}} |c_{i}c_{j}|\leqslant C (\log N) \frac{s}{N^{\beta }} \frac{\gcd (m,n)}{\sqrt{|mn |}}, \quad N\geqslant 1. \end{aligned}$$

Since

$$\begin{aligned}\sum _{m \geqslant 1} r_N(m)^2 = \#\{i,j,k,\ell \leqslant N: a_i - a_j = a_k - a_\ell \} \leqslant N^3,\end{aligned}$$

we can use the gcd-sum estimates from [2, Lemma 1] (in fact, the result from [4] suffices) to show that, for all \(N\) large enough,

$$\begin{aligned}\frac{1}{N^3}\sum _{m,n\geqslant 1} r_N(m) r_N(n)\frac{\gcd (m,n)}{\sqrt{m n}} \ll \exp \left( \frac{c \sqrt{(\log N) \log \log \log N}}{\sqrt{\log \log N}}\right) \leqslant N^{\varepsilon /2}.\end{aligned}$$

Combining the above estimates, the left-hand side of the statement is \(\ll s(\log N)\, N^{\beta -1+\varepsilon /2} \ll N^{-\varepsilon }\), since \(\beta - 1 = -2\varepsilon\). This proves the desired statement. \(\square\)

The proof of Theorem 6 can be completed by arguments identical to the ones used at the end of the proof of Theorem 3. We let \(\gamma >1/\varepsilon\) and consider the sequence \(B_M := \left\lceil {M^{\gamma }}\right\rceil,\, M\geqslant 1.\) Since \(\gamma \varepsilon > 1\), Lemma 13 gives \(\sum _{M\geqslant 1} B_M^{-\varepsilon } < \infty\), so we may use Chebyshev's inequality and the first Borel–Cantelli lemma to deduce that

$$\begin{aligned}\lim _{M \rightarrow \infty } R_2(\beta ;s,B_M,x) = 2s\end{aligned}$$

for almost all x (where the zero-measure set depends on s). For an arbitrary integer \(N\geqslant 1\), let \(M=M_N\) be the unique index such that \(B_M \leqslant N < B_{M+1}.\) Since

$$\begin{aligned} \frac{B_M}{B_{M+1}}R_2\Big (\beta ;\frac{B_Ms}{B_{M+1}} , B_M,x\Big ) \leqslant R_2(\beta ;s,N,x) \leqslant \frac{B_{M+1}}{B_M} R_2\Big (\beta ;\frac{B_{M+1}s}{B_M} , B_{M+1}, x \Big ) \end{aligned}$$

holds for any value of x and also \(\lim \limits _{M \rightarrow \infty } \dfrac{B_M}{B_{M+1}} = 1,\) we get

$$\begin{aligned}\lim _{N \rightarrow \infty } R_2(\beta ;s,N,x) = 2s \end{aligned}$$

for all x in a set of full Lebesgue measure. It remains to upgrade this to almost sure convergence simultaneously for all \(s > 0\); this follows by considering a countable dense set of scales in \(\mathbb {R}_+\) and employing the monotonicity of \(R_2(\beta ;s,N,x)\) as a function of \(s>0\).
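For the reader who wishes to see the almost-sure statement in action, here is a minimal Monte Carlo sketch in Python. The choice \(a_n = n^2\) is purely illustrative (we make no claim here that it satisfies the hypotheses of Theorem 6), and the sorted sweep is simply an \(O(N\log N)\) way of counting close pairs on the torus.

    import numpy as np

    rng = np.random.default_rng(1)
    beta, s, N = 0.5, 1.0, 4000
    a = np.arange(1, N + 1) ** 2                # illustrative integer sequence a_n = n^2

    def R2(x):
        z = np.sort((a * x) % 1.0)
        t = s / N ** beta                       # correlation scale s / N^beta
        zz = np.concatenate([z, z + 1.0, [np.inf]])   # unwrap the circle once
        cnt, j = 0, 0
        for i in range(N):
            while zz[j] <= z[i] + t:            # points within distance t, clockwise
                j += 1
            cnt += j - i - 1                    # unordered pairs starting at i
        return 2 * cnt / N ** (2 - beta)        # ordered pairs, scaled as in R_2

    for _ in range(3):
        print(R2(rng.random()))                 # typically close to 2s = 2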

9 Proof of Theorems 7 and 8

In this last section we prove the results relevant to weak correlations of higher orders. We start with the proof of Theorem 8 and then use some of the arguments involved to deduce Theorem 7.

Throughout this section, we shall use the following notation. Given \(k \geqslant 2\), a rectangle \(\mathcal {R}\subseteq {\mathbb {R}}^{k-1}\) and \(0\leqslant \beta \leqslant 1\), we define a new correlation counting function by

$$\begin{aligned} R_{k}^{*}(\beta ;\mathcal {R},N) = \frac{1}{N^{k - (k-1)\beta }} \#\left\{ i_1,\ldots ,i_k\leqslant N : N^{\beta } \big ( (\!(x_{i_1}- x_{i_2})\!),\ldots , (\!(x_{i_1} - x_{i_{k}})\!) \big )\in \mathcal {R} \right\} . \end{aligned}$$

That is, in the definition of \(R_k^*\) we allow indices to be equal, in contrast to the definition of \(R_k\), where all indices have to be pairwise distinct. Also, whenever \(a<b\), we write

$$\begin{aligned}y_i(a,b) = y_i(a,b,N,\beta ) = \#\big \{ j\leqslant N : a\leqslant N^{\beta }(\!( x_i - x_j)\!) \leqslant b \big \}, \quad i \leqslant N.\end{aligned}$$

Under this notation, we observe that for the rectangle \(\mathcal {R}= [a_1, b_1]\times \ldots \times [a_{k-1},b_{k-1}]\) we have

$$\begin{aligned} R_k^*(\beta ;\mathcal {R},N) = \frac{1}{N^{k-(k-1)\beta }}\sum _{i\leqslant N} y_i(a_1,b_1)\cdot \ldots \cdot y_i(a_{k-1},b_{k-1}).\end{aligned}$$
(23)

As we shall soon see, this form makes \(R_k^*\) much easier to handle than \(R_k\).
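Identity (23) also gives a straightforward way to evaluate \(R_k^*\) in practice. The following Python sketch (our own illustration; parameter values are arbitrary) computes \(R_k^*(\beta ;\mathcal {R},N)\) exactly as in (23); note that the counts \(y_i(a,b)\) include the index \(j=i\), in accordance with the definition of \(R_k^*\). For i.i.d. uniform points the output should be close to \(\lambda (\mathcal {R})\).

    import numpy as np

    def Rk_star(x, beta, rect):
        # R_k^*(beta; R, N) via (23); rect = [(a_1,b_1), ..., (a_{k-1},b_{k-1})]
        x = np.asarray(x)
        N, k = len(x), len(rect) + 1
        d = x[:, None] - x[None, :]
        d = (d + 0.5) % 1.0 - 0.5               # signed distance ((x_i - x_j))
        prod = np.ones(N)
        for (a, b) in rect:   # y_i(a,b) = #{j <= N : a <= N^beta ((x_i - x_j)) <= b}
            y = np.count_nonzero((a <= N ** beta * d) & (N ** beta * d <= b), axis=1)
            prod = prod * y
        return prod.sum() / N ** (k - (k - 1) * beta)

    rng = np.random.default_rng(2)
    x = rng.random(2000)                        # i.i.d. uniform points
    print(Rk_star(x, 0.5, [(-1.0, 1.0), (-0.5, 0.5)]))   # k = 3; lambda(R) = 2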

The following lemma states that, for \(\beta < 1\), under the assumption of Poissonian \((k,\beta )\)-correlations the asymptotic size of \(R_k\) is the same as that of \(R_k^{*}\) as \(N \rightarrow \infty\). In the proof of the lemma we make use of certain facts about \(R_k\) and \(R_k^*\) that appear in [9]. Only minor modifications are needed, and these are due to the fact that here we deal with weak correlations; we refer the interested reader to the proofs in [9] and explain briefly where the minor differences come from.

Lemma 14

Let \((x_n)_{n \in \mathbb {N}}\) be a sequence and \(k \geqslant 2\). The following are equivalent.

  1. (i)

    \((x_n)_{n \in \mathbb {N}}\) has Poissonian \((k,\beta )\)-correlations.

  2. (ii)

    For all closed rectangles \(\mathcal {R} \subseteq \mathbb {R}^{k-1}\) of the form \([a_1,b_1]\times \ldots \times [a_{k-1},b_{k-1}]\) we have

    $$\begin{aligned} \lim _{N \rightarrow \infty } R_k^{*}(\beta ;\mathcal {R},N) = \lambda (\mathcal {R}).\end{aligned}$$
    (24)
  3. (iii)

    For all cubes \(\mathcal {C} := [a,b]^{k-1}, a,b \in \mathbb {R}\) we have

    $$\begin{aligned} \limsup _{N \rightarrow \infty } R_{k}^{*}(\beta ;\mathcal {C},N) \leqslant \lambda (\mathcal {C}).\end{aligned}$$
    (25)

Proof

We start by proving the equivalence of (i) and (ii). We observe that for any rectangle \(\mathcal {R}\subseteq \mathbb {R}^{k-1},\) we have

$$\begin{aligned} R_{k}(\beta ;\mathcal {R},N) \leqslant R_{k}^*(\beta ;\mathcal {R},N) \leqslant R_{k}(\beta ;\mathcal {R},N) + \sum _{m= 1}^{k-1}\Big (\frac{1}{N^{1-\beta }}\Big )^{k-m}b_mR_{m}(\beta ;\mathcal {R}_m,N) \end{aligned}$$
(26)

where \(R_1(s;N) := 1\); for each \(1\leqslant m \leqslant k-1\), \(\mathcal {R}_{m}\subseteq \mathbb {R}^{m-1}\) is an \((m-1)\)-dimensional rectangle depending on \(\mathcal {R},\) and \(b_1,\ldots , b_{k-1} \in \mathbb {N}\) are constants depending only on k. The proof of (26) is essentially the same as that of [9, Proposition 7], which deals with standard correlation functions (i.e. when \(\beta = 1\)). The only differences are the appearance of the coefficients \((N^{-(1-\beta )})^{k-m}\), which come from the different scaling factor for weak correlations, and the fact that (26) concerns arbitrary rectangles \(\mathcal {R}\) rather than rectangles symmetric with respect to the axes.

Next, we observe that whenever \(\mathrm{(i)}\) or \(\mathrm{(ii)}\) holds, arguing analogously to [9, Theorem 3], we can show that for \(m \leqslant k-1\) we have

$$\begin{aligned} \limsup _{N\rightarrow \infty }R_{m}(\beta ;\mathcal {S},N) < \infty \quad \text { for any rectangle } \mathcal {S}\subset \mathbb {R}^{m-1}. \end{aligned}$$
(27)

Combining (26) and (27) completes the proof of this equivalence.

Since (ii) \(\Rightarrow\) (iii) is trivial, it remains to show (iii) \(\Rightarrow\) (ii). So let us assume that (25) holds and let \(\mathcal {R}= [a_1,b_1]\times \ldots \times [a_{k-1},b_{k-1}]\) be an arbitrary rectangle in \(\mathbb {R}^{k-1}\). Applying the generalised Hölder inequality with exponents \(p_i = k-1\) \(( 1\leqslant i \leqslant k-1)\) to (23) we obtain

$$\begin{aligned} R_{k}^{*}(\beta ;\mathcal {R},N)&= \frac{1}{N^{k-(k-1)\beta }}\sum _{i\leqslant N} y_i(a_1,b_1)\cdot \ldots \cdot y_i(a_{k-1},b_{k-1})\\&\leqslant \frac{1}{N^{k-(k-1)\beta }}\Bigg (\sum _{i\leqslant N} y_i(a_1,b_1)^{k-1}\Bigg )^{\frac{1}{k-1}}\hspace{-2mm}\cdot \ldots \cdot \Bigg (\sum _{i\leqslant N} y_i(a_{k-1},b_{k-1})^{k-1}\Bigg )^{\frac{1}{k-1}} \\&= R_{k}^{*}(\beta ;\mathcal {C}_{a_1,b_1},N)^{\frac{1}{k-1}} \cdot \ldots \cdot R_{k}^{*}(\beta ;\mathcal {C}_{a_{k-1},b_{k-1}},N)^{\frac{1}{k-1}}, \end{aligned}$$

where we set \(\mathcal {C}_{a_i,b_i}=[a_i, b_i]^{k-1}\) for each \(1\leqslant i \leqslant k-1.\) Using (25) we are able to deduce that

$$\begin{aligned} \limsup _{N \rightarrow \infty } R_{k}^{*}(\beta ;\mathcal {R},N) \leqslant \lambda (\mathcal {R})\quad \text { for any rectangle } \mathcal {R}\subset \mathbb {R}^{k-1}.\end{aligned}$$
(28)

For the remaining part of the proof, it will be convenient to adopt the following notation: given any \({\textbf {t}}= (t_1,\ldots , t_{k-1})\in \mathbb {R}^{k-1}\) with \(t_i > 0, i=1,\ldots , k-1\) we shall write

$$\begin{aligned}\mathcal {R}_{{\textbf {t}}} = [-t_1, t_1] \times \ldots \times [-t_{k-1}, t_{k-1}] \subseteq \mathbb {R}^{k-1} \end{aligned}$$

for the rectangle in \(\mathbb {R}^{k-1}\) that is symmetric with respect to each of the axes and has a vertex at the point \({\textbf {t}}.\)

Continuing with the proof of the implication (iii) \(\Rightarrow\) (ii), we assume for contradiction that (24) does not hold. Then by (28) there must exist \({\textbf {a}}= (a_1,\ldots ,a_{k-1}),\) \({\textbf {b}}=(b_1,\ldots ,b_{k-1}) \in \mathbb {R}^{k-1}\) such that for the rectangle \(\mathcal {R}_{[{\textbf {a}},{\textbf {b}}]}:=[a_1,b_1]\times \ldots \times [a_{k-1},b_{k-1}]\) we have

$$\begin{aligned} \liminf _{N \rightarrow \infty }R_{k}^{*}(\beta ;\mathcal {R}_{[{\textbf {a}},{\textbf {b}}]},N) < \lambda (\mathcal {R}_{[{\textbf {a}},{\textbf {b}}]}).\end{aligned}$$
(29)

Setting \(s = \max \{\vert a_i\vert , \vert b_i\vert : i=1,\ldots , k-1 \}\) and \({\textbf {s}}= (s, \ldots , s)\), we can find \(n \in \mathbb {N}\) and rectangles \(\mathcal {R}_{j}, 1\leqslant j\leqslant n\) such that the cube \(\mathcal {C}_s := [-s, s]^{k-1}\) satisfies

$$\begin{aligned} \mathcal {C}_s = \mathcal {R}_{[{\textbf {a}},{\textbf {b}}]} \cup \bigcup _{j = 1}^n \mathcal {R}_{j} \end{aligned}$$

and all rectangles in the right-hand side have disjoint interiors. Using (28) on every \(\mathcal {R}_j\) and (29), we obtain

$$\begin{aligned} \liminf _{N \rightarrow \infty }R_k^{*}(\beta ; \mathcal {C}_s ,N)&\leqslant \liminf _{N \rightarrow \infty }R_{k}^{*}(\beta ;\mathcal {R}_{[{\textbf {a}},{\textbf {b}}]},N)+ \sum _{j =1}^n \limsup _{N \rightarrow \infty } R_{k}^{*}(\beta ;\mathcal {R}_{j},N) \\ {}&< (2s)^{k-1}. \end{aligned}$$

Consequently, there exist an \(\eta >0\) and a sequence \((N_r)_{r \in \mathbb {N}}\subseteq \mathbb {N}\) such that

$$\begin{aligned} R_k^{*}(\beta ; \mathcal {C}_s ,N_r) < (2s)^{k-1} - \eta \qquad \text {for all } r\geqslant 1. \end{aligned}$$

Observe that, by the monotonicity of \(R_k^{*}\) with respect to inclusion of rectangles, there exist \(\delta > 0\) and a cube \(\mathcal {C}_0 = [s-\delta , s]\times \ldots \times [s- \delta , s] \subseteq \mathbb {R}^{k-1}\) with positive \((k-1)\)-dimensional Lebesgue measure such that for any \(r\geqslant 1,\)

$$\begin{aligned} R_{k}^{*}(\beta ;\mathcal {R}_{\textbf {t}},N_r) < (2t_1)(2t_2)\cdots (2t_{k-1})- \frac{\eta }{2}\end{aligned}$$
(30)

for any \({\textbf {t}}= (t_1,\ldots , t_{k-1}) \in \mathcal {C}_0\). Indeed, for such \({\textbf {t}}\) we have \(\mathcal {R}_{{\textbf {t}}} \subseteq \mathcal {C}_s\), whence \(R_{k}^{*}(\beta ;\mathcal {R}_{\textbf {t}},N_r) < (2s)^{k-1} - \eta\), while \((2t_1)(2t_2)\cdots (2t_{k-1}) \geqslant (2(s-\delta ))^{k-1} > (2s)^{k-1} - \frac{\eta }{2}\) provided \(\delta >0\) is small enough.

We shall obtain the desired contradiction by deriving a lower and an upper bound for the integral

$$\begin{aligned} \int _{[0,s]^{k-1}} R_k^{*}(\beta ,\mathcal {R}_{\textbf {t}}\,,N) \,\textrm{d}{\textbf {t}}. \end{aligned}$$

Arguing as in [9, Proposition 11], we can show that

$$\begin{aligned} \int _{[0,s]^{k-1}} R_k^{*}(\beta ,\mathcal {R}_{\textbf {t}}\,,N) \,\textrm{d}{\textbf {t}}\geqslant s^{2(k-1)} \quad \text { for any } N \geqslant 1.\end{aligned}$$
(31)

Let \(\varepsilon > 0\) be arbitrary. For any integer \(M \in \mathbb {N},\) we can split the integration domain \([0,s]^{k-1}\) into \(M^{k-1}\) cubes of the form

$$\begin{aligned}\mathcal {C}_{{\textbf {j}}} := \left[ \frac{(j_1-1) s}{M},\frac{j_1 s}{M}\right] \times \ldots \times \left[ \frac{(j_{k-1}-1) s}{M},\frac{j_{k-1}s}{M}\right] , \end{aligned}$$

where \({\textbf {j}}= (j_1, \ldots , j_{k-1}) \in [M]^{k-1}\), and if \(M\geqslant 1\) is large enough, then for all indices \({\textbf {j}}\in [M]^{k-1}\) we have

$$\begin{aligned} |\lambda (\mathcal {R}_{{\textbf {t}}_1}) - \lambda (\mathcal {R}_{{\textbf {t}}_2})|\leqslant \frac{\varepsilon }{2} \qquad \text { for any } {\textbf {t}}_1, {\textbf {t}}_2 \in \mathcal {C}_{{\textbf {j}}}. \end{aligned}$$
(32)

For each of the cubes \(\mathcal {C}_{{\textbf {j}}}\) defined above, we denote its top corner by

$$\begin{aligned} {\textbf {c}}_{{\textbf {j}}} = \left( \frac{j_1s}{M},\frac{j_2s}{M},\ldots , \frac{j_{k-1}s}{M}\right) \in \mathbb {R}^{k-1}. \end{aligned}$$

In view of (28), we know that for all \(r\geqslant 1\) sufficiently large, we have

$$\begin{aligned} R_k^{*}(\beta ;\mathcal {R}_{{\textbf {c}}_{\textbf {j}}},N_r) \leqslant \lambda (\mathcal {R}_{{\textbf {c}}_{\textbf {j}}}) + \frac{\varepsilon }{2} \qquad \text {for all } {\textbf {j}}\in [M]^{k-1}. \end{aligned}$$
(33)

(Note that the values of r can be chosen independently of \({\textbf {j}}\) since the number of considered \({\textbf {j}}\)’s is finite.) Given \({\textbf {t}}\in [0,s]^{k-1}\) we write \({\textbf {c}}({\textbf {t}})\) for the top corner \({\textbf {c}}_{{\textbf {j}}}\) of the cube \(\mathcal {C}_{{\textbf {j}}}\) that contains the point \({\textbf {t}}.\) Then for all \(r\geqslant 1\) large enough, we have

$$\begin{aligned} R_k^{*}(\beta ,\mathcal {R}_{\textbf {t}},N_r)&\leqslant R_k^{*}(\beta ,\mathcal {R}_{{\textbf {c}}({\textbf {t}})},N_r) \nonumber \\ {}&\leqslant \lambda (\mathcal {R}_{{\textbf {c}}({\textbf {t}})}) + \frac{\varepsilon }{2} \quad \qquad \text { (by (33))}\nonumber \\&\leqslant \lambda (\mathcal {R}_{{\textbf {t}}}) + \varepsilon \quad \qquad \, \, \, \text { (by (32))} \end{aligned}$$
(34)

for any \({\textbf {t}}\in [0,s]^{k-1}\). Combining (30) and (34), we obtain

$$\begin{aligned} \int _{[0,s]^{k-1}} R_k^{*}(\beta ,\mathcal {R}_{\textbf {t}}\,,N_r) \,\textrm{d}{\textbf {t}}&= \int _{[0,s]^{k-1} \setminus \mathcal {C}_0} R_k^{*}(\beta ;\mathcal {R}_{\textbf {t}}\,,N_r) \,\textrm{d}{\textbf {t}}+ \int _{\mathcal {C}_0} R_k^{*}(\beta ;\mathcal {R}_{\textbf {t}}\,,N_r) \,\textrm{d}{\textbf {t}}\\ {}&\leqslant \int _{[0,s]^{k-1}} \big (\lambda (\mathcal {R}_{{\textbf {t}}}) + \varepsilon \big )\,\textrm{d}{\textbf {t}}- \frac{\eta }{2}\lambda (\mathcal {C}_0) \\ {}&= s^{2(k-1)} + \varepsilon s^{k-1} - \frac{\eta }{2} \lambda (\mathcal {C}_0), \end{aligned}$$

a contradiction to (31) when \(\varepsilon\) is sufficiently small. \(\square\)

Having established Lemma 14, we can now proceed to the proof of Theorem 8.

Proof of Theorem 8

We first prove that having Poissonian \((k,\beta )\)-correlations is a property stronger than \(\beta\)-PPC. We then use this fact to prove that it is also stronger than having Poissonian \((k-1, \beta )\)-correlations. We apply the Hölder inequality with exponents \(p = (k-1)/(k-2)\) and \(q = k-1\) to obtain

$$\begin{aligned} R_2^{*}(\beta ;[a,b],N)^{k-1}&= \frac{1}{N^{2(k-1)-(k-1)\beta }}\Big (\sum _{i\leqslant N} y_i(a,b)\Big )^{k-1} \nonumber \\&\leqslant \frac{1}{N^{k-(k-1)\beta }}\sum _{i\leqslant N} y_i(a,b)^{k-1} = R_k^{*}(\beta ;[a,b]^{k-1},N). \end{aligned}$$
(35)

Assuming Poissonian \((k,\beta )\)-correlations, the right-hand side of (35) tends to \((b-a)^{k-1}\) as \(N \rightarrow \infty\) by Lemma 14. Therefore, for all real numbers \(a < b\) we have

$$\begin{aligned}\limsup _{N \rightarrow \infty } R_2^{*}(\beta ;[a,b],N) \leqslant b-a,\end{aligned}$$

which by Lemma 14 implies \(\beta\)-PPC.

Now we proceed to prove that \((x_n)_{n \in \mathbb {N}}\) also has Poissonian \((k-1,\beta )\)-correlations. We make use of the well-known inequality (an instance of Chebyshev's sum inequality)

$$\begin{aligned} N\sum _{i\leqslant N} x_i^{k-1} \, \geqslant \, \Big (\sum _{i\leqslant N} x_i\Big )\Big (\sum _{i\leqslant N} x_i^{k-2} \Big ) \end{aligned}$$

which holds for any \(N \geqslant 1\) and \(x_1,\ldots ,x_N \geqslant 0\), to show that

$$\begin{aligned}R_k^*(\beta ;[a,b]^{k-1},N)&= \frac{1}{N^{k-(k-1)\beta }}\sum _{i\leqslant N} y_i(a,b)^{k-1} \\ {}&\geqslant \frac{1}{N^{2-\beta }} \sum _{i\leqslant N} y_i(a,b) \cdot \frac{1}{N^{k-1-(k-2)\beta }}\sum _{i\leqslant N} y_i(a,b)^{k-2} \\&= R_2^{*}(\beta ;[a,b],N)\cdot R_{k-1}^*(\beta ;[a,b]^{k-2},N). \end{aligned}$$

By Poissonian \((k,\beta )\)-correlations and \(\beta\)-PPC, we can deduce that

$$\begin{aligned}\limsup _{N \rightarrow \infty } R_{k-1}^*(\beta ;[a,b]^{k-2},N) \leqslant (b-a)^{k-2},\end{aligned}$$

and in view of Lemma 14 this completes the proof. \(\square\)
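Both inequalities used in this proof are easy to sanity-check numerically. The following sketch (illustrative only; the data are random stand-ins for the counts \(y_i(a,b)\)) verifies the Hölder bound underlying (35) and the displayed product inequality.

    import numpy as np

    rng = np.random.default_rng(3)
    yv = rng.random(50) * 3        # arbitrary non-negative data standing in for y_i(a,b)
    N, k = len(yv), 4

    # Hoelder, as in (35): (sum y)^{k-1} <= N^{k-2} * sum y^{k-1}
    assert yv.sum() ** (k - 1) <= N ** (k - 2) * (yv ** (k - 1)).sum()

    # Chebyshev's sum inequality: N * sum y^{k-1} >= (sum y) * (sum y^{k-2})
    assert N * (yv ** (k - 1)).sum() >= yv.sum() * (yv ** (k - 2)).sum()

    print("both inequalities hold on this sample")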

Remark

Note that Theorem 8 also holds for \(\beta = 0\); the proof follows the same lines as above, with the only difference being the restriction to scales \(s \leqslant \frac{1}{2}\) (cf. the definition of 0-PPC). However, for \(\beta = 1\) this argumentation fails, since in the case of Poissonian k-correlations \(R_k^*(\mathcal {R},N)\) will not tend to \(\lambda (\mathcal {R})\).

Proof of Theorem 7

One direction of the statement, namely that Poissonian (k, 0)-correlations imply uniform distribution, follows from Theorem 8: for \(\beta = 0\), we have proven that Poissonian (k, 0)-correlations imply 0-PPC, which in turn implies uniform distribution by Theorem 1.

We proceed to show that uniform distribution implies Poissonian (k, 0)-correlations. In view of Lemma 14, it suffices to prove that for any cube \(\mathcal {C} = [a,b]^{k-1} \subseteq [-\tfrac{1}{2},\tfrac{1}{2} ]^{k-1}\) we have

$$\begin{aligned} \limsup _{N \rightarrow \infty } R_k^{*}(0;\mathcal {C} ,N) \leqslant \lambda (\mathcal {C} ).\end{aligned}$$
(36)

As expected, the proof goes along the lines of the proof of Theorem 1. We observe that

$$\begin{aligned} R_k^{*}(0; \mathcal {C},N) - \lambda (\mathcal {C})&\, =\, \frac{1}{N^k}\hspace{-3mm}\sum _{i_1,\ldots , i_k \leqslant N}\hspace{-3mm}\mathbbm {1}_{[a,b]}(x_{i_1}-x_{i_2}) \cdots \mathbbm {1}_{[a,b]}(x_{i_1}-x_{i_{k}}) - \lambda (\mathcal {C}) \\&= \frac{1}{N}\sum _{n\leqslant N}\Bigg (\frac{1}{N}\sum _{m\leqslant N} \mathbbm {1}_{[x_{n}-b, x_{n}-a]} (x_{m})\Bigg )^{k-1}- (b-a)^{k-1}. \end{aligned}$$

Since

$$\begin{aligned} \frac{1}{N}\sum _{m\leqslant N} \mathbbm {1}_{[x_{n}-b, x_{n}-a]} (x_{m}) \leqslant b-a + D_N,\quad n = 1,\ldots , N, \end{aligned}$$

with \(D_N\) denoting the discrepancy of \((x_n)_{n \in \mathbb {N}}\), we deduce that

$$\begin{aligned} R_k^{*}(0;\mathcal {C} ,N) - \lambda (\mathcal {C} ) \leqslant (b-a + D_N)^{k-1} - (b-a)^{k-1} = \mathcal {O}_{a,b}( D_N ).\end{aligned}$$

Assuming that \((x_n)_{n \in \mathbb {N}}\) is uniformly distributed mod 1, we have \(D_N \rightarrow 0\) as \(N\rightarrow \infty\) and hence (36) follows. This concludes the proof. \(\square\)
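We end with a small numerical illustration of Theorem 7 (ours, and not part of the proof): the Kronecker sequence \(x_n = \{n\varphi \}\) with \(\varphi = (\sqrt{5}-1)/2\) is uniformly distributed mod 1 by Weyl's theorem, so its \((k,0)\)-correlations should approach \(\lambda (\mathcal {C})\).

    import numpy as np

    phi = (np.sqrt(5) - 1) / 2
    N, k = 2000, 3
    x = (np.arange(1, N + 1) * phi) % 1.0       # uniformly distributed mod 1

    a, b = -0.25, 0.25                          # cube C = [a,b]^{k-1} in [-1/2,1/2]^{k-1}
    d = x[:, None] - x[None, :]
    d = (d + 0.5) % 1.0 - 0.5                   # ((x_i - x_j))
    y = np.count_nonzero((a <= d) & (d <= b), axis=1).astype(float)
    print((y ** (k - 1)).sum() / N ** k, "target:", (b - a) ** (k - 1))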