Abstract
Let \(Z_{\mathbf {i}}=\left( X_{\mathbf {i}},\ Y_{\mathbf {i}}\right) _{\mathbf {i}\in \mathbb {N}^{N}}\), \(N \ge 1\), be an \( \mathbb {R}^d\times \mathbb {R}\)-valued measurable strictly stationary spatial process. We consider the problem of estimating the regression function of \(Y_{\mathbf {i}}\) given \(X_{\mathbf {i}}\). We construct an alternative kernel estimate of the regression function based on the minimization of the mean squared relative error. Under general mixing assumptions, the almost complete consistency and the asymptotic normality of this estimator are established. Its finite-sample performance is compared with that of the standard kernel regression estimator via a Monte Carlo study and a real data example.
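The estimator studied here is the ratio of two inverse-moment kernel smoothers: minimizing the mean squared relative error \(E[((Y-r(x))/Y)^2\mid X=x]\) leads to \(r(x)=E[Y^{-1}\mid X=x]/E[Y^{-2}\mid X=x]\) (cf. Jones et al. 2008). A minimal sketch in Python, with a Gaussian kernel and an illustrative bandwidth and data-generating model (assumptions of ours, not the paper's exact simulation design):

```python
import numpy as np

def rel_error_kernel_regression(x, X, Y, h):
    """Relative-error kernel regression estimate at a point x.

    Minimizing the mean squared *relative* error E[((Y - r(x))/Y)^2 | X = x]
    gives r(x) = E[Y^{-1} | X = x] / E[Y^{-2} | X = x], estimated by the
    ratio of two kernel smoothers weighted by Y^{-1} and Y^{-2}."""
    u = (x - X) / h                                 # shape (n, d)
    K = np.exp(-0.5 * np.sum(u ** 2, axis=-1))      # Gaussian kernel (an assumption)
    return np.sum(K / Y) / np.sum(K / Y ** 2)

# Illustrative data: positive responses with multiplicative noise.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(2000, 1))
m_true = 2.0 + np.sin(2.0 * np.pi * X[:, 0])        # true regression function
Y = m_true * np.exp(0.1 * rng.standard_normal(2000))
est = rel_error_kernel_regression(np.array([0.25]), X, Y, h=0.05)
# est should be close to m(0.25) = 3 (up to smoothing bias and noise)
```

Unlike the Nadaraya-Watson smoother, the ratio above downweights large responses, which is what makes the estimator robust to outliers in \(Y\).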
Notes
Let \((z_n)_{n\in \mathbbm {N}}\) be a sequence of real r.v.’s. We say that \(z_n\) converges almost completely (a.co.) toward zero if, and only if, \(\forall \epsilon > 0\), \(\sum _{n=1}^\infty P(|z_n| >\epsilon ) < \infty \). Moreover, we say that the rate of the almost complete convergence of \(z_n\) to zero is of order \(u_n\) (with \(u_n\rightarrow 0\)) and we write \(z_n = O_{a.co.}(u_n)\) if, and only if, \(\exists \epsilon > 0\) such that \(\sum _{n=1}^\infty P(|z_n| >\epsilon u_n) < \infty \). This kind of convergence implies both almost sure convergence and convergence in probability.
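The last implication in this note is a direct application of the Borel–Cantelli lemma:

```latex
\sum_{n=1}^{\infty} P(|z_n| > \epsilon) < \infty
\;\Longrightarrow\;
P\big(|z_n| > \epsilon \ \text{infinitely often}\big) = 0
\quad \text{for every } \epsilon > 0
\;\Longrightarrow\;
z_n \xrightarrow{\ \text{a.s.}\ } 0 .
```

Convergence in probability then follows, since \(P(|z_n|>\epsilon)\) is the general term of a convergent series and hence tends to zero.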
References
Bernhard FA, Stahlecker P (2003) Relative squared error prediction in the generalized linear regression model. Stat Pap 44:107–115
Biau G, Cadre B (2004) Nonparametric spatial prediction. Stat Inference Stoch Process 7:327–349
Bobbia M, Misiti M, Misiti Y, Poggi JM, Portier B (2015) Spatial outlier detection in the PM10 monitoring network of Normandy. Atmos Pollut Res 6:476–483
Carbon M, Tran LT, Wu B (1997) Kernel density estimation for random fields. Stat Probab Lett 36:115–125
Carbon M, Francq C, Tran LT (2007) Kernel regression estimation for random fields. J Stat Plan Inference 137:778–798
Cressie NA (1993) Statistics for spatial data. Wiley, New York
Dabo-Niang S, Thiam B (2010) Robust quantile estimation and prediction for spatial processes. Stat Probab Lett 80:1447–1458
Dabo-Niang S, Yao AF (2007) Kernel regression estimation for continuous spatial processes. Math Methods Stat 16:1–20
Dabo-Niang S, Ould-Abdi S, Ould-Abdi A, Diop A (2014) Consistency of a nonparametric conditional mode estimator for random fields. Stat Methods Appl 23:1–39
Dabo-Niang S, Yao A, Pischedda L, Cuny P, Gilbert F (2009) Spatial kernel mode estimation for functional random fields, with application to bioturbation problem. Stoch Environ Res Risk Assess 24:487–497
Diggle P, Ribeiro PJ (2007) Model-based geostatistics. Springer, New York
Doukhan P (1994) Mixing: properties and examples. Lecture Notes in Statistics, vol 85. Springer-Verlag, New York
El Machkouri M, Stoica R (2010) Asymptotic normality of kernel estimates in a regression model for random fields. J Nonparametric Stat 22:955–971
Filzmoser P, Ruiz-Gazen A, Thomas-Agnan C (2014) Identification of local multivariate outliers. Stat Pap 55:29–47
Gheriballah A, Laksaci A, Rouane R (2010) Robust nonparametric estimation for spatial regression. J Stat Plan Inference 140:1656–1670
Guyon X (1987) Estimation d’un champ par pseudo-vraisemblance conditionnelle: Etude asymptotique et application au cas Markovien. In: Proceedings of the sixth Franco-Belgian meeting of statisticians
Hallin M, Lu Z, Yu K (2009) Local linear spatial quantile regression. Bernoulli 15:659–686
Jones MC, Park H, Shinb K, Vines SK, Jeong SO (2008) Relative error prediction via kernel regression smoothers. J Stat Plan Inference 138:2887–2898
Li J, Tran LT (2009) Nonparametric estimation of conditional expectation. J Stat Plan Inference 139:164–175
Liu X, Lu CT, Chen F (2010) Spatial outlier detection: random walk based approaches. In: Proceedings of the 18th ACM SIGSPATIAL international conference on advances in geographic information systems (ACM GIS), San Jose, CA
Lu Z, Chen X (2004) Spatial kernel regression: weak consistency. Stat Probab Lett 68:125–136
Martínez J, Saavedra J, García-Nieto PJ, Piñeiro JI, Iglesias C, Taboada J, Sancho J, Pastor J (2014) Air quality parameters outliers detection using functional data analysis in the Langreo urban area (Northern Spain). Appl Math Comput 241:1–10
Narula SC, Wellington JF (1977) Prediction, linear regression and the minimum sum of relative errors. Technometrics 19:185–190
Omidi M, Mohammadzadeh M (2015) A new method to build spatio-temporal covariance functions: analysis of ozone data. Stat Pap. doi:10.1007/s00362-015-0674-2
Robinson PM (2011) Asymptotic theory for nonparametric regression with spatial data. J Econom 165:5–19
Shen VY, Yu T, Thebaut SM (1985) Identifying error-prone software: an empirical study. IEEE Trans Softw Eng 11:317–324
Tran LT (1990) Kernel density estimation on random fields. J Multivar Anal 34:37–53
Volker S (2014) Stochastic geometry, spatial statistics and random fields: models and algorithms. Lecture Notes in Mathematics, vol 2120. Springer, New York
Xu R, Wang J (2008) \(L_1\)-estimation for spatial nonparametric regression. J Nonparametric Stat 20:523–537
Yang Y, Ye F (2013) General relative error criterion and M-estimation. Front Math China 8:695–715
Acknowledgments
The authors warmly thank an Associate Editor and an anonymous referee for their careful reading of the paper. The authors also thank Campus France (France) and the Agence Thématique de Recherche en Sciences et Technologie for their financial support.
Appendix
Proof of Theorem 1 Let
with
Next, we use the following decomposition:
Thus, Theorem 1 is a consequence of the following intermediate results (cf. Lemmas 1 and 2).
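For the reader's convenience, when the estimator and its target are written as ratios \(\widetilde{r}(x)=\widetilde{g}_1(x)/\widetilde{g}_2(x)\) and \(r(x)=g_1(x)/g_2(x)\) (the convention suggested by the lemmas below, which treat \(l=1,2\)), decompositions of this kind rest on the algebraic identity

```latex
\widetilde{r}(x) - r(x)
  = \frac{1}{\widetilde{g}_2(x)}
    \Big[ \big(\widetilde{g}_1(x) - g_1(x)\big)
        - r(x)\big(\widetilde{g}_2(x) - g_2(x)\big) \Big],
```

so that uniform consistency of \(\widetilde{g}_1\) and \(\widetilde{g}_2\) (Lemmas 1 and 2) transfers to \(\widetilde{r}\).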
Lemma 1
Under hypotheses (H1), (H2), and (H5), we have, for \(l=1,2\), that:
Proof of Lemma 1
By a change of variables, we get, for \(l=1,2\),
Since both functions f and \(r_l\) are of class \(\mathcal{C}^2\), we use a Taylor expansion of \(g_l(\cdot )\) to write, under (H4)
This last result completes the proof of the lemma. \(\square \)
Lemma 2
Under hypotheses (H3)–(H7), we have, for \(l=1,2\), that:
Proof of Lemma 2
Consider
Therefore, it suffices to prove the following intermediate results
and
Firstly, for (9), we use the compactness of S to write
with \(d_\mathbf{n}\le \widehat{\mathbf {n}}^{\beta }\) and \(\tau _\mathbf{n}\le d_\mathbf{n}^{-1}\), where \(\beta =\frac{\delta (d+2)}{2}+\frac{1}{2}+\frac{\gamma }{2} \). So, for all \(x\in S\), we set
Thus, for \(l=1,2\),
Furthermore, for \(T_2\), for both \(l=1,2\), we have
$$\begin{aligned} \sup _{x\in S} \left| \widetilde{g_l}^*(x_{k(x)})- E\left[ \widetilde{g_l}^*(x_{k(x)})\right] \right| =\max _{k=1,\ldots , d_\mathbf{n}} \left| \widetilde{g_l}^*(x_{k})- E\left[ \widetilde{g_l}^*(x_{k})\right] \right| \end{aligned}$$Thus, it suffices to evaluate almost completely
$$\begin{aligned} \max _{k=1,\ldots d_\mathbf{n}} \left| \widetilde{g_l}^*(x_{k})- E\left[ \widetilde{g_l}^*(x_{k})\right] \right| . \end{aligned}$$To do that, we write:
$$\begin{aligned} \widetilde{g_l}^* (x_k)-E[ \widetilde{g_l}^* (x_k)]= & {} {1\over \widehat{\mathbf {n}}h^d}\sum _{{\mathbf {i}\in \mathcal {I}_\mathbf {n}}}\Delta _{\mathbf {i}} \end{aligned}$$where
$$\begin{aligned} \Delta _\mathbf{i}=Y^{-l}_\mathbf{i}K(h^{-1}(x_k-X_\mathbf{i})){\mathbbm {1}}_{(|Y_\mathbf{i}^{-1}|< \mu _\mathbf{n})}-E\left[ Y^{-l} K(h^{-1}(x_k-X)){\mathbbm {1}}_{(|Y^{-1}|< \mu _\mathbf{n})}\right] . \end{aligned}$$Now, similarly to Tran (1990), we use the classical spatial block decomposition for the sum \(\displaystyle \sum _{{\mathbf {i}\in \mathcal {I}_\mathbf {n}}}\Delta _{\mathbf {i}}\) as follows
$$\begin{aligned} U(1,\mathbf {n},\mathbf {j})=\sum _{\begin{array}{c} i_k=2j_kp_{\mathbf {n}}+1\\ k=1,...,N \end{array}}^{2j_kp_{\mathbf {n}}+p_{\mathbf {n}}}\Delta _{\mathbf {i}}, \end{aligned}$$$$\begin{aligned} U(2,\mathbf {n},\mathbf {j})=\displaystyle \sum _{\begin{array}{c} i_k=2j_kp_{\mathbf {n}}+1\\ k=1,...,N-1 \end{array}}^{2j_kp_{\mathbf {n}}+p_{\mathbf {n}}} \quad \displaystyle \sum _{i_N=2j_Np_{\mathbf {n}}+p_{\mathbf {n}}+1}^{(j_N+1)p_{\mathbf {n}}}\Delta _{\mathbf {i}}, \end{aligned}$$$$\begin{aligned} U(3,\mathbf {n},\mathbf {j})=\displaystyle \sum _{\begin{array}{c} i_k=2j_kp_{\mathbf {n}}+1\\ k=1,...,N-2 \end{array}}^{2j_kp_{\mathbf {n}}+p_{\mathbf {n}}} \quad \displaystyle \sum _{i_{N-1}=2j_{N-1}p_{\mathbf {n}}+p_{\mathbf {n}}+1}^{2(j_{N-1}+1)p_{\mathbf {n}}} \quad \displaystyle \sum _{i_N=2j_Np_{\mathbf {n}}+1}^{2j_Np_{\mathbf {n}}+p_{\mathbf {n}}}\Delta _{\mathbf {i}}, \end{aligned}$$$$\begin{aligned} U(4,\mathbf {n},\mathbf {j})=\displaystyle \sum _{\begin{array}{c} i_k=2j_kp_{\mathbf {n}}+1\\ k=1,...,N-2 \end{array}}^{2j_kp_{\mathbf {n}}} \quad \displaystyle \sum _{i_{N-1}=2j_{N-1}p_{\mathbf {n}}+p_{\mathbf {n}}+1}^{2(j_{N-1}+1)p_{\mathbf {n}}}\quad \displaystyle \sum _{i_N=2j_Np_{\mathbf {n}}+p_{\mathbf {n}}+1}^{2(j_N+1)p_{\mathbf {n}}}\Delta _{\mathbf {i}}, \end{aligned}$$and so on. Finally
$$\begin{aligned} U(2^{N-1},\mathbf {n},\mathbf {j})=\displaystyle \sum _{\begin{array}{c} i_k=2j_kp_{\mathbf {n}}+p_{\mathbf {n}}+1\\ k=1,...,N-1 \end{array}}^{2(j_k+1)p_{\mathbf {n}} }\quad \displaystyle \sum _{i_N=2j_Np_{\mathbf {n}}+1}^{2j_Np_{\mathbf {n}}+p_{\mathbf {n}}}\Delta _{\mathbf {i}}, \end{aligned}$$$$\begin{aligned} U(2^N,\mathbf {n},\mathbf {j})=\sum _{\begin{array}{c} i_k=2j_kp_{\mathbf {n}}+p_{\mathbf {n}}+1\\ k=1,...,N \end{array}}^{2(j_k+1)p_{\mathbf {n}}} \Delta _{\mathbf {i}} \end{aligned}$$(10)where \( p_{\mathbf {n}}\) is a real sequence to be specified later. Now, we put, for all \(i = 1, \ldots ,2^N\),
$$\begin{aligned} T(\mathbf {n},i)=\sum _{\mathbf {j}\in \mathcal {J}}U(i,\mathbf {n},\mathbf {j}). \end{aligned}$$(11)where \(\mathcal {J}=\{0,\ldots ,r_1-1\}\times \cdots \times \{0,\ldots ,r_N-1\}\) and \(r_l = n_l(2p_{\mathbf {n}})^{-1}\), \(l=1,\ldots , N\). Then,
$$\begin{aligned} \widetilde{g_l}^* (x_k)-E[ \widetilde{g_l}^* (x_k)] =\frac{1}{\widehat{\mathbf {n}}h^d}\displaystyle \sum _{i=1}^{2^N}T(\mathbf {n},i). \end{aligned}$$Thus, it remains to compute
$$\begin{aligned} \displaystyle {\mathbbm {P}}\left( T(\mathbf {n},i)\ge \eta \widehat{\mathbf {n}}h^d\right) ,\,\qquad \text{ for } \text{ all } i=1,\ldots , 2^N . \end{aligned}$$(12)Without loss of generality, we will only consider the case \(i=1\). For this, we enumerate the \(M=\prod _{k=1}^N r_k=2^{-N}\widehat{\mathbf {n}}p_{\mathbf {n}}^{-N}\le \widehat{\mathbf {n}}p_{\mathbf {n}}^{-N}\) random variables \(U(1,\mathbf {n},\mathbf {j})\), \(\mathbf {j}\in \mathcal {J}\), in an arbitrary order as \(Z_1,\ldots, Z_M\). The rest of the proof is very similar to Biau and Cadre (2004) and is based on Lemma 4.5 of Carbon et al. (1997). According to this lemma, we can find independent random variables \(Z_1^*,\ldots, Z_M^*\) such that each \(Z_j^*\) has the same law as \(Z_j\) and
$$\begin{aligned} \sum _{j=1}^M E|Z_{j}-Z_{j}^*|\le 2C\mu _\mathbf{n} Mp_{\mathbf {n}}^N\, s((M-1)p_{\mathbf {n}}^N,p_{\mathbf {n}}^N)\,\varphi (p_{\mathbf {n}}). \end{aligned}$$It follows that
$$\begin{aligned} \displaystyle {\mathbbm {P}}\left( T(\mathbf {n},1)\ge \eta \widehat{\mathbf {n}}h^d \right) \le {\mathbbm {P}}\left( \left| \sum _{j=1}^MZ_{\mathbf {j}}^*\right| \ge \frac{\eta \widehat{\mathbf {n}}h^d}{2}\right) +{\mathbbm {P}}\left( \sum _{j=1}^M|Z_{\mathbf {j}}-Z_{\mathbf {j}}^*|\ge \frac{\eta \widehat{\mathbf {n}}h^d}{2}\right) . \end{aligned}$$Thus, from the Bernstein and Markov inequalities we deduce that
$$\begin{aligned} \displaystyle B_1:={\mathbbm {P}}\left( \left| \sum _{j=1}^MZ_{j}^*\right| \ge \frac{\eta \widehat{\mathbf {n}}h^d}{2}\right) \le 2\exp \left( -\frac{(\eta \widehat{\mathbf {n}}h^d)^2}{ M\,\mathrm{Var}\left( Z_1^*\right) +Cp_{\mathbf {n}}^N\eta \widehat{\mathbf {n}}h^d}\right) \end{aligned}$$and
$$\begin{aligned} \displaystyle B_2:= & {} {\mathbbm {P}}\left( \sum _{j=1}^M|Z_{j}-Z_{j}^*|\ge \frac{\eta \widehat{\mathbf {n}}h^d}{2}\right) \le \frac{2}{\eta \widehat{\mathbf {n}}h^d}\sum _{j=1}^M E|Z_{j}-Z_{j}^*|. \end{aligned}$$By using Lemma 4.5 of Carbon et al. (1997), the fact that \(\displaystyle \widehat{\mathbf {n}}=2^NMp_{\mathbf {n}}^N\), and \(s((M-1)p_{\mathbf {n}}^N,p_{\mathbf {n}}^N)\le p_{\mathbf {n}}^N\), we get, for \(\eta =\displaystyle \eta _0\sqrt{\frac{\log \widehat{\mathbf {n}}}{\widehat{\mathbf {n}}\, h^d}}\),
$$\begin{aligned} B_2\le \mu _\mathbf{n} \widehat{\mathbf {n}} p_{\mathbf {n}}^N\left( \log \widehat{\mathbf {n}}\right) ^{-1/2}\left( \widehat{\mathbf {n}}h^d\right) ^{-1/2}\varphi (p_{\mathbf {n}}). \end{aligned}$$Since \(p_{\mathbf {n}}= C\left( \frac{\widehat{\mathbf {n}}h^d}{\log \widehat{\mathbf {n}} \mu _\mathbf{n}^2}\right) ^{1/2N}\), then
$$\begin{aligned} B_2 \le \widehat{\mathbf {n}}\, \left( \log \widehat{\mathbf {n}}\right) ^{-1} \,\varphi (p_{\mathbf {n}}). \end{aligned}$$Concerning the term \(B_1\), standard arguments give
$$\begin{aligned} Var\left[ Z_1^*\right] =O\left( p_{\mathbf {n}}^Nh^d\right) . \end{aligned}$$Using this last result, together with the definitions of \(p_{\mathbf {n}}\), M and \(\eta \), we get
$$\begin{aligned} B_1 \le \exp \left( -C(\eta _0)\log \widehat{\mathbf {n}} \right) \end{aligned}$$Consequently, from (H6), we have
$$\begin{aligned} \exists \eta _0 \quad \text{ such } \text{ that } \quad d_\mathbf{n}\sum _{\mathbf {n}}(B_1+B_2)<\infty , \end{aligned}$$which completes the proof of the first result of this lemma.
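The spatial block decomposition in (10)–(11) can be made concrete with a short sketch: for \(N=2\) it splits each super-block of side \(2p_{\mathbf {n}}\) into \(2^2=4\) sub-blocks of side \(p_{\mathbf {n}}\), and summing \(\Delta _{\mathbf {i}}\) over all blocks of all families recovers the full sum. The grid sizes, names, and the divisibility assumption \(2p_{\mathbf {n}}\mid n_k\) below are ours, for illustration only (indices start at 0 rather than 1):

```python
import numpy as np
from itertools import product

def block_families(n1, n2, p):
    """Enumerate the 2^N (= 4 for N = 2) families of index blocks used in a
    Tran-style spatial block decomposition.  Each super-block of side 2p is
    split on each axis into a first half and a second half of length p; a
    family fixes the half chosen on each axis (0 = first, 1 = second)."""
    r1, r2 = n1 // (2 * p), n2 // (2 * p)        # number of super-blocks per axis
    fams = {f: [] for f in product((0, 1), repeat=2)}
    for j1, j2 in product(range(r1), range(r2)):
        half1 = (range(2 * j1 * p, 2 * j1 * p + p),
                 range(2 * j1 * p + p, 2 * (j1 + 1) * p))
        half2 = (range(2 * j2 * p, 2 * j2 * p + p),
                 range(2 * j2 * p + p, 2 * (j2 + 1) * p))
        for f in fams:
            fams[f].append([(i1, i2) for i1 in half1[f[0]] for i2 in half2[f[1]]])
    return fams

# Sanity check: the 4 families together partition the whole grid, so the
# sum of Delta_i over all blocks of all families equals the total sum.
n1, n2, p = 8, 12, 2
rng = np.random.default_rng(0)
delta = rng.standard_normal((n1, n2))
fams = block_families(n1, n2, p)
total = sum(delta[i1, i2] for blocks in fams.values()
            for b in blocks for (i1, i2) in b)
```

Within one family, distinct blocks are separated by gaps of side at least \(p_{\mathbf {n}}\), which is what lets the mixing coefficient \(\varphi (p_{\mathbf {n}})\) control the coupling error in the proof above.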
Now, we evaluate the terms \(T_1\) and \(T_3\). To do that, the Lipschitz condition on the kernel K in (H4) allows us to write directly
$$\begin{aligned} \left| \widetilde{g_l}^*(x)- \widetilde{g_l}^*(x_{k(x)})\right|= & {} \frac{1}{\widehat{\mathbf {n}}h^d}\left| \sum _{{\mathbf {i}\in \mathcal {I}_\mathbf {n}}}Y_\mathbf{i}^{-l}K_\mathbf{i}(x)- \sum _{{\mathbf {i}\in \mathcal {I}_\mathbf {n}}}Y_\mathbf{i}^{-l}K_\mathbf{i}(x_{k(x)})\right| \\\le & {} \frac{C}{\widehat{\mathbf {n}}h^{d+1}}\Vert x- x_{k(x)}\Vert \sum _{{\mathbf {i}\in \mathcal {I}_\mathbf {n}}}Y_\mathbf{i}^{-l}\\\le & {} \frac{C\tau _\mathbf{n} }{\mu _\mathbf{n}^{l}\widehat{\mathbf {n}}h^{d+1}}\le \frac{C\tau _\mathbf{n} }{\mu _\mathbf{n}\widehat{\mathbf {n}}h^{d+1}}. \end{aligned}$$
By the definition of \(\tau _\mathbf{n}\) we obtain
and
Secondly, we prove (7). Indeed, we have
$$\begin{aligned} \sup _{x\in S} \left| E\left[ \widetilde{g_l} (x)\right] - E\left[ \widetilde{g_l}^* (x)\right] \right|= & {} \frac{1}{\widehat{\mathbf {n}}h^d}\left| E\left[ \displaystyle \sum _{{\mathbf {i}\in \mathcal {I}_\mathbf {n}}}Y_\mathbf{i}^{-l}{\mathbbm {1}}_{\{|Y_\mathbf{i}^{-1}|> {\mu }_\mathbf{n}\}}K_\mathbf{i}(x)\right] \right| \\\le & {} h^{-d} E\left[ |Y_\mathbf{1}^{-l}|{\mathbbm {1}}_{\{|Y_\mathbf{1}^{-1}|>{\mu }_\mathbf{n}\}}K_\mathbf{1}(x)\right] \\\le & {} h^{-d}E\left[ \exp \left( |Y_{1}^{-l}|/4\right) {\mathbbm {1}}_{\{|Y_\mathbf{1}^{-1}|>\mu _\mathbf{n}\}}K_\mathbf{1}(x)\right] . \end{aligned}$$Furthermore, Hölder's inequality shows that
$$\begin{aligned} \sup _{x\in S}\left| E\left[ \widetilde{g_l} (x)\right] - E\left[ \widetilde{g_l}^* (x)\right] \right|\le & {} h^{-d} \left( E\left[ \exp \left( |Y_{1}^{-l}|/2\right) {\mathbbm {1}}_{\{|Y_\mathbf{1}^{-1}|>{\mu }_\mathbf{n}\}}\right] \right) ^{\frac{1}{2}} \left( E(K^{2}_\mathbf{1}(x))\right) ^{\frac{1}{2}}\\\le & {} h^{-d}\exp \left( -\mu _\mathbf{n}^l/4\right) \left( E\left[ \exp \left( |Y^{-l}|\right) \right] \right) ^{\frac{1}{2}}\left( E(K^{2}_\mathbf{1}(x))\right) ^{\frac{1}{2}} \\\le & {} C h^{\frac{-d}{2}}\exp \left( -{\mu }_\mathbf{n}^l/4\right) . \end{aligned}$$Since \(\mu _\mathbf{n}=\widehat{\mathbf {n}}^{\gamma /2}\), we can write
$$\begin{aligned} \sup _{x\in S} \left| E\left[ \widetilde{g_l} (x)\right] - E\left[ \widetilde{g_l}^* (x)\right] \right| =o\left( \left( \frac{ \log \widehat{\mathbf {n}}}{\widehat{\mathbf {n}}h^d}\right) ^{1/2}\right) . \end{aligned}$$
Thirdly, the proof of the last claimed result (8) is based on Markov's inequality. Indeed, observe that, for all \( \epsilon >0\),
$$\begin{aligned} {\mathbbm {P}}\left[ \sup _{x\in S}\left| \widetilde{g_l} (x)- \widetilde{g_l}^* (x)\right| >\epsilon \right]\le & {} {\mathbbm {P}}\left( \left| \frac{1}{\widehat{\mathbf {n}}h^d}\displaystyle \sum _{{\mathbf {i}\in \mathcal {I}_\mathbf {n}}}Y_\mathbf{i}^{-l}{\mathbbm {1}}_{\{|Y_\mathbf{i}^{-l}| >{\mu }_\mathbf{n}^l\}}K_\mathbf{i}(x)\right| >\epsilon \right) \\\le & {} \widehat{\mathbf {n}}\,{\mathbbm {P}}\left( \displaystyle |Y^{-l}|>{\mu }_\mathbf{n}^l\right) \\\le & {} \widehat{\mathbf {n}}\exp \left( -{\mu }_\mathbf{n}^l\right) E\left( \exp \left( |Y^{-l}|\right) \right) \\\le & {} C\widehat{\mathbf {n}}\exp \left( -{\mu }_\mathbf{n}^l\right) . \end{aligned}$$So,
$$\begin{aligned} \displaystyle \sum _\mathbf{n}{\mathbbm {P}}\left( \sup _{x\in S}\left| \widetilde{g_l} (x)- \widetilde{g_l}^* (x)\right| >\epsilon _0\left( \sqrt{\frac{ \log \widehat{\mathbf {n}}}{\widehat{\mathbf {n}}h^d}}\right) \right) \le C\displaystyle \sum _\mathbf{n}\widehat{\mathbf {n}}\exp \left( -\mu _\mathbf{n}\right) . \end{aligned}$$(15)Using the definition of \(\mu _\mathbf{n}\) completes the proof of the lemma. \(\square \)
Corollary 2
Under the hypotheses of Theorem 1, we obtain:
Proof of Corollary 2
It is clear that
Thus,
Using the results of Lemmas 1 and 2 completes the proof of the corollary. \(\square \)
Proof of Theorem 2 We write:
where
and
Therefore,
Finally, Theorem 2 is a consequence of the following results (cf. Lemmas 3 and 4).
Lemma 3
Under the hypotheses of Theorem 2, we obtain:
Proof of Lemma 3
Consider the same notation as in Lemma 1 and write
where
Similarly to Lemma 1 we get, for fixed \(x\in {\mathbbm {R}}^d\)
Since \({\mathrm {E}}[B_\mathbf{n}]={\mathrm {E}}[B_\mathbf{n}^*]=0\), it suffices to show the asymptotic normality of
For this, we put, for \(\mathbf{i}\in \mathcal{I}_\mathbf{n}\),
So,
where \( S_{\mathbf {n}}= \displaystyle \sum _{\mathbf{i}\in \mathcal{I}_\mathbf{n}}\Lambda _\mathbf{i}\). Thus, our claimed result now reads
where \(\sigma _1^2(x)=\left( g_2(x)\right) ^2 \sigma ^2(x)\). The proof of (17) follows the same lines as that of Lemma 3.2 in Tran (1990). It is based on the spatial blocking technique for \( S_{\mathbf {n}}=\displaystyle \sum _{\mathbf{i}\in \mathcal{I}_\mathbf{n}}\Lambda _\mathbf{i}\). \(\square \)
Lemma 4
Under the hypotheses of Theorem 2, we obtain:
and
Proof of Lemma 4 For the first limit, we have, by Lemma 1
and by arguments similar to those used for the variance term in Lemma 2, we show that
hence
Next, it is clear that the second limit is a consequence of the third one, so it suffices to treat the latter. For this, we use the fact that
and
The last part of Condition (H6\(^{\prime }\)) allows us to deduce that
Attouch, M., Laksaci, A. & Messabihi, N. Nonparametric relative error regression for spatial random variables. Stat Papers 58, 987–1008 (2017). https://doi.org/10.1007/s00362-015-0735-6