Abstract
We propose an alternative to k-nearest neighbors for functional data whereby the approximating neighboring curves are piecewise functions built from a functional sample. Using a locally defined distance function that satisfies stabilization criteria, we establish pointwise and global approximation results in function spaces when the number of data curves is large. We exploit this feature to develop the asymptotic theory when a finite number of curves is observed at time-points given by an i.i.d. sample whose cardinality increases up to infinity. We use these results to investigate the problem of estimating unobserved segments of a partially observed functional data sample as well as to study the problem of functional classification and outlier detection. For such problems our methods are competitive with and sometimes superior to benchmark predictions in the field. The R package localFDA provides routines for computing the localization processes and the estimators proposed in this article.
Availability of data and materials
The data is available from the sources indicated in the main text.
Code availability
The R package localFDA (Elías et al. 2021) provides routines for computing the localization processes and for performing classification and outlier detection.
References
Arribas-Gil A, Romo J (2014) Shape outlier detection and visualization for functional data: the outliergram. Biostatistics 15(4):603–619
Biau G, Cérou F, Guyader A (2010) Rates of convergence of the functional \(k\)-nearest neighbor estimate. IEEE Trans Inf Theory 56:2034–2040
Brito MR, Chávez EL, Quiroz AJ et al (1997) Connectivity of the mutual k-nearest-neighbor graph in clustering and outlier detection. Stat Probab Lett 35:33–42
Chen Y, Carroll C, Dai X et al (2020) fdapace: functional data analysis and empirical dynamics. R package version 0.5.2. Development version at https://github.com/functionaldata/tPACE
Cuesta-Albertos JA, Febrero-Bande M, Oviedo de la Fuente M (2017) The DD\(^g\)-classifier in the functional setting. TEST 26(1):119–142
Dai W, Genton MG (2018) Multivariate functional data visualization and outlier detection. J Comput Graph Stat 27:923–934
Dai W, Genton MG (2019) Directional outlyingness for multivariate functional data. Comput Stat Data Anal 131:50–65
Elías A, Jiménez R, Yukich JE (2021) localFDA: localization processes for functional data analysis. https://CRAN.R-project.org/package=localFDA, R package version 1.0.0
Elías A, Jiménez R, Shang HL (2022) On projection methods for functional time series forecasting. J Multivar Anal 189(104):890. https://doi.org/10.1016/j.jmva.2021.104890
Febrero-Bande M, Oviedo M (2012) Statistical computing in functional data analysis: the R package fda.usc. J Stat Softw 51(4):1–28
Febrero-Bande M, Galeano P, González-Manteiga W (2019) Estimation, imputation and prediction for the functional linear model with scalar response with responses missing at random. Comput Stat Data Anal 131:91–103
Ferraty F, Vieu P (2006) Nonparametric functional data analysis. Springer, New York
Gao Y, Shang HL, Yang Y (2019) High-dimensional functional time series forecasting: an application to age-specific mortality rates. J Multivar Anal 170:232–243
Györfi L, Kohler M, Krzyzak A et al (2002) A distribution-free theory of nonparametric regression. Springer, New York
Hubert M, Rousseeuw P, Segaert P (2017) Multivariate and functional classification using depth and distance. Adv Data Anal Classif 11:445–466
Hyndman RJ, Booth H (2008) Stochastic population forecasts using functional data models for mortality, fertility and migration. Int J Forecast 24:323–342
Hyndman RJ, Shang HL (2010) Rainbow plots, bagplots and boxplots for functional data. J Comput Graph Stat 19(1):29–45
Hyndman RJ, Ullah S (2007) Robust forecasting of mortality and fertility rates: a functional data approach. Comput Stat Data Anal 51:4942–4956
Kara LZ, Laksaci A, Rachdi M et al (2017) Data-driven \(k\)NN estimation in nonparametric functional data analysis. J Multivar Anal 153:176–188
Kneip A, Liebl D (2020) On the optimal reconstruction of partially observed functional data. Ann Stat 48:1692–1717
Kraus D (2015) Components and completion of partially observed functional data. J R Stat Soc Ser B-Stat Methodol 77:777–801
Kudraszow N, Vieu P (2013) Uniform consistency of \(k\)NN regressors for functional variables. Stat Probab Lett 83:1863–1870
Li J, Cuesta-Albertos JA, Liu RY (2012) DD-classifier: nonparametric classification procedure based on DD-plot. J Am Stat Assoc Theory Methods 107:737–753
Lian H (2011) Convergence of functional k-nearest neighbor regression estimate with functional responses. Electron J Stat 5:31–40
Liebl D (2019) Nonparametric testing for differences in electricity prices: the case of the Fukushima nuclear accident. Ann Appl Stat 13:1128–1146
López-Pintado S, Romo J (2009) On the concept of depth for functional data. J Am Stat Assoc Theory Methods 104(486):718–734
Martínez F, Frías MP, Pérez MD et al (2017) A methodology for applying \(k\)-nearest neighbor to time series forecasting. Artif Intell Rev 52:2019–2037
O’Donoghue JJ (2019) Salt and inaction blamed for Aomori having the lowest life expectancy in Japan. The Japan Times. Available at https://www.japantimes.co.jp/?post_type=news&p=2340547
Penrose MD (2007) Laws of large numbers in stochastic geometry with statistical applications. Bernoulli 13(4):1124–1150
Penrose MD, Yukich JE (2003) Weak laws of large numbers in geometric probability. Ann Appl Probab 13:277–303
Ramaswamy S, Rastogi R, Shim K (2000) Efficient algorithms for mining outliers from large data sets. In: Proceedings of the ACM SIGMOD conference on management of data, pp 427–438
Ramsay J, Silverman B (2005) Functional data analysis, 2nd edn. Springer, New York
Schreiber T (2010) Limit theorems in stochastic geometry. In: New perspectives in stochastic geometry. Oxford University Press, Oxford, pp 111–144
Segaert P, Hubert M, Rousseeuw P et al (2019) mrfDepth: depth measures in multivariate, regression and functional settings. https://CRAN.R-project.org/package=mrfDepth, R package version 1.0.11
Shang HL, Hyndman RJ (2017) Grouped functional time series forecasting: an application to age-specific mortality rates. J Comput Graph Stat 26(2):330–343
Sun Y, Genton MG (2011) Functional boxplots. J Comput Graph Stat 20:316–334
Wang JL, Chiou JM, Müller HG (2016) Functional data analysis. Annu Rev Stat Appl 3:257–295
Wu X, Kumar V, Ross Quinlan J et al (2008) Top 10 algorithms in data mining. Knowl Inf Syst 14:1–37
Yao F, Müller HG, Wang JL (2005) Functional data analysis for sparse longitudinal data. J Am Stat Assoc Theory Methods 100:577–590
Zhang S, Jank W, Shmueli G (2010) Real-time forecasting of online auctions via functional k-nearest neighbors. Int J Forecast 26:666–683
Acknowledgements
Antonio Elías and Raúl Jiménez were partially supported by the Spanish Ministerio de Economía y Competitividad under grants ECO2015-66593-P and PID2019-109196GB-I00. The research of J. E. Yukich is supported in part by a Simons collaboration grant. He is also grateful for generous support from the Department of Statistics at Universidad Carlos III de Madrid, where most of this work was completed.
Funding
Not applicable.
Ethics declarations
Conflict of interest
The authors have no conflicts of interest to declare that are relevant to the content of this article.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendix A Proofs of main results and additional figures
Here we provide the proofs of the main results of Sect. 2 and of Proposition 1, together with three additional figures.
1.1 A.1 Auxiliary results
We prepare for the proofs with two lemmas.
Lemma A.1
Fix \(k \in \mathbb {N}\). For almost all \(t \in [0,1]\), there are random variables \(\{X'_j(t) \}_{j = 1}^n\), coupled to \(\{X_j(t) \}_{j = 1}^n,\) and a Cox process \(\mathcal{P}_{{\kappa }_t(X'_1(t))}\), also coupled to \(\{X_j(t) \}_{j = 1}^n\), such that as \(n \rightarrow \infty \)
Proof
The convergence may be deduced from Section 3 of Penrose and Yukich (2003); we provide the details as follows. For \(x \in \mathbb {R}^d\) and \(r > 0\), let B(x, r) denote the Euclidean ball centered at x with radius r. Note that \(L^{(k)}\) is a stabilizing score function on Poisson input \(\mathcal{P}\) having constant intensity density; that is to say, its value at the origin is determined by the local data consisting of the points of the realization of \(\mathcal{P}\) lying in the ball \(B(\mathbf{0}, R^{L^{(k)}}(\mathbf{0}, \mathcal{P}))\), where \(R^{L^{(k)}}(\mathbf{0}, \mathcal{P})\) is a radius of stabilization. For precise definitions we refer to Section 3 of Penrose and Yukich (2003) and to Penrose (2007).
The coupling of Section 3 of Penrose and Yukich (2003) shows that we may find \(\{X'_j(t) \}_{j = 2}^n\), where \(X'_j(t) {\mathop {=}\limits ^{\mathcal{D}}}X_j(t), j = 2,\ldots ,n\) and a Cox process \( \mathcal{P}_{{\kappa }_t(X'_1(t))}\) such that if we put \(\mathcal{X}'_{n - 1}(t) = \{X'_j(t) \}_{j = 2}^n\), then for all \(K > 0\)
which is a consequence of the convergence of the point process \(n(\mathcal{X}'_{n - 1}(t) - X'_1(t)) \cap B(\mathbf{0},K)\) to the point process \(\mathcal{P}_{{\kappa }_t(X'_1(t))} \cap B(\mathbf{0},K)\) as \(n \rightarrow \infty \). See Lemma 3.1 of Penrose and Yukich (2003) for details. Fix \(\epsilon > 0\). Now write for all \(\delta > 0\)
The last term in (A2) may be made less than \(\epsilon /3\) by taking \(\delta \) small. The penultimate term is bounded by \(\mathbb {P}( R^{L^{(k)}}(\mathbf{0}, \mathcal{P}_{\delta }) > K)\), which is less than \(\epsilon /3\) if K is large, since \(R^{L^{(k)}}(\mathbf{0}, \mathcal{P}_{\delta })\) is finite a.s. By (A1) the first term is less than \(\epsilon /3\) if n is large. Thus, for \(\delta \) small and K and n large, the right-hand side of (A2) is less than \(\epsilon \), which concludes the proof. \(\square \)
Lemma A.2
Assume that the data are bounded and regular from below for all \(t \in T_0 \subseteq [0,1]\) as at (6), and where \(T_0\) has Lebesgue measure 1. Then \(\sup _{n \le \infty }\sup _{t \in T_0} \mathbb {E}W_n^{(k)}(t)^2 \le C,\) where C is a finite constant.
Proof
We treat the case \(1 \le n < \infty \), as the case \(n = \infty \) follows by similar methods. Without loss of generality we may assume the data are bounded above by 1 and that \(S({\kappa }_t) = [0,1]\) for all \(t \in T_0\). We first prove the lemma for \(k = 1\) and then for general k. Let \(S_\delta \) be the subinterval of [0, 1] such that \({\kappa }_t(x) \ge {\kappa }_{\text {min}}\) for all \(x \in S_\delta \). Note that \(|S_\delta |=\delta \in (0,1]\) by assumption. We have for all \(r> 0\) and all \(t \in T_0\)
By the boundedness assumption on the data, we have \(\mathbb {P}( L^{(1)}(X_1(t), \{X_j(t) \}_{j = 2}^n ) \ge \frac{r}{2n}) = 0\) for all \(r \in (2n, \infty )\). Thus we may assume without loss of generality that \(r \in [0,2n]\). We bound the double integral from below by
where here and elsewhere \(c > 0\) is a generic constant, possibly changing from line to line. This gives
Random variables having exponentially decaying tails have finite moments of all orders and this proves the lemma for \(k = 1\).
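As an illustrative numerical companion to this step (not part of the proof), the following sketch simulates the rescaled nearest-neighbor distance \(n L^{(1)}\) for uniform data on [0, 1]; its empirical second moment stays bounded as n grows, consistent with the exponentially decaying tails. The sample sizes and replication counts are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def scaled_nn_second_moment(n, reps=4000):
    """Empirical second moment of n * L^(1), the rescaled distance
    from the first of n i.i.d. uniform points on [0, 1] to its
    nearest neighbor among the remaining n - 1 points."""
    x = rng.uniform(size=(reps, n))
    d = np.abs(x[:, 1:] - x[:, [0]]).min(axis=1)  # nearest-neighbor distance of x[:, 0]
    return float(np.mean((n * d) ** 2))

for n in (50, 200, 500):
    print(n, scaled_nn_second_moment(n))
```

For uniform data the limit law of \(nL^{(1)}\) is exponential with rate 2, so the printed second moments should stabilize near 1/2 as n grows.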
The proof for general k is similar. We show this holds for \(k = 2\) as follows. We have
On the event \(\{ L^{(2)}(X_1(t), \{X_j(t) \}_{j = 2}^n ) \ge \frac{r}{n} \}\), either the distance from \(X_1(t)\) to its first nearest neighbor exceeds \(\frac{r}{n}\), or that distance is at most \(\frac{r}{n}\) and the distance from \(X_1(t)\) to its nearest neighbor among the remaining \(n - 2\) sample points exceeds \(\frac{r}{n}\).
This gives
Since \(\mathbb {P}( |X_1(t) - X_i(t) |\le \frac{r }{n}) = O(n^{-1})\) for all \(i = 2,\ldots ,n\), we may use the bounds for the case \(k = 1\) to show that \(\mathbb {P}\left( L^{(2)}(X_1(t), \{X_j(t) \}_{j = 2}^n ) \ge \frac{r}{n}\right) \) decays exponentially fast in r. The proof for general k follows in a similar fashion; we leave the details to the reader. This proves the lemma. \(\square \)
1.2 A.2 Proof of Theorem 2.1
We first prove (3). Let \(t \in [0,1]\) be such that the marginal density \(\kappa _t\) exists. By translation invariance of \(L^{(k)}\) we have as \(n \rightarrow \infty \)
where the limit follows since convergence in probability (Lemma A.1) combined with uniform integrability (Lemma A.2) gives convergence in mean. For any constant \(\tau \) we have \(\mathbb {E}L^{(k)} (\mathbf{0}, \mathcal{P}_{\tau } ) = \tau ^{-1} \mathbb {E}L^{(k)} (\mathbf{0}, \mathcal{P}_{1} )\). Notice that \( L^{(k)} (\mathbf{0}, \mathcal{P}_{1} )\) is a Gamma \(\Gamma (k,2)\) random variable with shape parameter k and rate parameter 2, and thus \(\mathbb {E}L^{(k)} (\mathbf{0}, \mathcal{P}_{1}) = k/2\). The proof of (3) is complete.
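As a numerical sanity check of this Gamma identity (illustrative only; the window length and replication counts below are arbitrary), one can simulate the distance from the origin to its k-th nearest point in a unit-rate Poisson process on the line:

```python
import numpy as np

rng = np.random.default_rng(0)

def kth_nn_to_origin(k, window=50.0, reps=20000):
    """Sample the distance L^(k) from the origin to its k-th nearest
    point of a unit-rate homogeneous Poisson process on the real line,
    realized on [-window, window]; the truncation is harmless for
    small k since L^(k) is O(k) with overwhelming probability."""
    out = np.empty(reps)
    for r in range(reps):
        n_pts = rng.poisson(2 * window)          # E #points = |[-window, window]|
        pts = rng.uniform(-window, window, n_pts)
        out[r] = np.sort(np.abs(pts))[k - 1]     # k-th smallest distance to 0
    return out

for k in (1, 2, 5):
    d = kth_nn_to_origin(k)
    # Since |B(0, r)| = 2r, 2 L^(k) is Gamma(k, 1): mean k/2, 2nd moment k(k+1)/4
    print(k, d.mean(), (d ** 2).mean())
```

The empirical means should track k/2 and the empirical second moments k(k+1)/4, matching the shape-k, rate-2 Gamma law.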
To prove (4), we replace \(W_n^{(k)}(t)\) by its square in the above computation. This yields
For any constant \(\tau \in (0, \infty )\) we have
where we recall that the second moment of a Gamma \(\Gamma (k,2)\) random variable equals \((k + 1)k/4\). These facts yield (4). The limit (5) is a consequence of Lemma A.1. \(\square \)
1.3 A.3 Proof of Theorem 2.2
Recall that on \(T_0 \subseteq [0,1]\) we have that \(\kappa _t\) exists, the data are bounded, and the data are regular from below, as at (6). By Lemmas A.1 and A.2, the random variables \({W'_n}^{(k)}(t) = W^{(k)}(X'_1(t), \{X'_j(t) \}_{j = 1}^n ), t \in T_0,\) converge in probability and also in mean. It follows that for all \(t \in T_0\) as \(n \rightarrow \infty \)
Now \(\sup _n \sup _{t \in T_0} F_n(t) \le C\) and the bounded convergence theorem gives \(\lim _{n \rightarrow \infty } \int _0^1 F_n(t) dt = 0.\) This gives the first statement of Theorem 2.2. The identity \(\lim _{n \rightarrow \infty } \int _0^1 \mathbb {E}{W}_{\infty }^{(k)}(t) dt = 1\) follows from
and the identity \(\mathbb {E}{W}_{\infty }^{(k)}(t) = |S(\kappa _t) |\), \(t \in T_0\). \(\square \)
1.4 A.4 Proof of Theorem 2.3
Lemma A.1 assumes that k is fixed. The lemma will not always hold if k is growing arbitrarily fast with n. Thus our proof techniques and coupling arguments need to be modified. We break the proof of Theorem 2.3 into five parts.
Part (i) Coupling. We start with a general coupling fact. Given Poisson point processes \(\Sigma _1\) and \(\Sigma _2\) with densities \(f_1\) and \(f_2\), we may find coupled Poisson point processes \(\Sigma '_1\) and \(\Sigma '_2\) with \(\Sigma '_1 {\mathop {=}\limits ^{\mathcal{D}}}\Sigma _1\) and \(\Sigma '_2 {\mathop {=}\limits ^{\mathcal{D}}}\Sigma _2\) such that the probability that the two point processes are not equal on \([-A,A]\) is bounded by
Let \(t \in [0,1]\) be such that the marginal density \(\kappa _t\) exists. As in Theorem 2.3, we assume that \(\kappa _t\) is \(\alpha \)-Hölder continuous for \(\alpha \in (0,1]\). Note that the point process \(n(\mathcal{P}_{n \kappa _t} - y)\) has intensity density \({\kappa }_t( \frac{x}{n} + y), \ x \in n (S(\kappa _t) - y)\). For each \(y \in S(\kappa _t)\), we may find coupled Poisson point processes \(\mathcal{P}'_{n \kappa _t}\) and \( \mathcal{P}'_{\kappa _t(y)}\) with \( n(\mathcal{P}'_{n \kappa _t} - y) {\mathop {=}\limits ^{\mathcal{D}}}n(\mathcal{P}_{n \kappa _t} - y)\) and \(\mathcal{P}'_{\kappa _t(y)} {\mathop {=}\limits ^{\mathcal{D}}}\mathcal{P}_{\kappa _t(y)}\) such that the probability that the point processes \( n(\mathcal{P}'_{n \kappa _t} - y)\) and \(\mathcal{P}'_{\kappa _t(y)}\) are not equal on \([-A, A]\) is bounded uniformly in \(y \in S({\kappa }_t)\) by
We will need this coupling in what follows.
Part (ii) Poissonization. We will first show a Poissonized version of (9). Write k instead of k(n). We assume that we are given a Poisson number of data curves \(\{X_j(t) \}_{j = 1}^{N(n)}\), where N(n) is an independent Poisson random variable with parameter n. We aim to show for almost all \(t \in [0,1]\)
By translation invariance of \(W_n^{(k)}\) we have
We assert that as \(n \rightarrow \infty \)
Combining (A6)-(A7) and recalling that \(\frac{2}{k} \mathbb {E}L^{(k)}(\mathbf{0}, \mathcal{P}_{{\kappa }_t(X_1(y))}) = |S(\kappa _t) |,\) we obtain
which establishes (A5).
It remains to establish (A7). A Poisson point process on a set S with intensity density nh(x), where h is itself a density, may be expressed as the realization of random variables \(X_1,\ldots ,X_{N(n)}\), where N(n) is an independent Poisson random variable with parameter n and where each \(X_i\) has density h on S. Thus the point process \(\{X_j(t) \}_{j = 1}^{N(n)}\) is the Poisson point process \(\mathcal{P}_{n{\kappa }_t}\). To show the assertion (A7) we thus need to show
or equivalently,
where \(\mathcal{P}'_{n \kappa _t}\) and \(\mathcal{P}'_{\kappa _t(y)}\) are as in part (i).
Fix \(\epsilon > 0.\) As in the bound (A2), we have for all \(\delta> 0, K > 0\)
The last term in (A8) may be made less than \(\epsilon /3\) if \(\delta \) is small. The penultimate term is bounded by \(\mathbb {P}( R^{L^{(k)}}(\mathbf{0}, \mathcal{P}_{\delta k})> K) = \mathbb {P}( R^{L^{(k)}}(\mathbf{0}, \frac{1}{\delta k} \mathcal{P}_{1})> K) = \mathbb {P}( \frac{1}{\delta k} \Gamma (k,2) >K)\), which by Chebyshev’s inequality is less than \(\epsilon /3\) if K is large. The first term satisfies
where the inequality follows from (A4). By assumption, \(\lim _{n \rightarrow \infty } \frac{ k^{1 + \alpha }}{n^{\alpha }} = 0\), and it follows that the first term is less than \(\epsilon /3\) if n is large. Thus, for \(\delta \) small and K and n large, the right-hand side of (A8) is less than \(\epsilon \). Thus
The assertion (A7) follows since convergence in probability combined with uniform integrability gives convergence in mean.
Part (iii) de-Poissonization. We de-Poissonize the above equality to obtain (9). In other words we need to show that the limit does not change when N(n) is replaced by n. Put
Then \(\mathcal{Y}_n {\mathop {=}\limits ^{\mathcal{D}}}\{ X_1(t), X_2(t),\ldots ,X_{n}(t)\}.\) We use this coupling of Poisson and binomial input in all that follows.
We wish to show that \(\hat{X}_{n,1}^{(k)}(t)\) coincides with \(\hat{X}_{N(n),1}^{(k)}(t)\) on a high probability event; in other words we wish to show that the sample points with indices between \(\min (n, N(n))\) and \(\max (n, N(n))\) do not, in general, modify the value of \(\hat{X}_{n,1}^{(k)}(t)\). Consider the event that the Poisson random variable does not differ too much from its mean, i.e.,
and note that tail bounds for Poisson random variables show that there is \(c > 0\) such that \(\mathbb {P}(E_n^c) = O( n^{-2})\). Write
The last summand is o(1), which may be seen using the Cauchy-Schwarz inequality, Lemma A.2, and \(\mathbb {P}(E_n^c) = O(n^{-2})\).
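For completeness, the Poisson tail estimate \(\mathbb {P}(E_n^c) = O(n^{-2})\) follows from the standard Bernstein-type inequality for Poisson random variables; the constants below are not optimized.

```latex
\[
\mathbb{P}\bigl( |N(n) - n| > t \bigr)
  \;\le\; 2 \exp\!\Bigl( - \frac{t^{2}}{2 (n + t/3)} \Bigr),
  \qquad t > 0.
\]
Taking \(t = c \sqrt{n} \log n\) yields
\[
\mathbb{P}(E_n^c)
  \;\le\; 2 \exp\!\Bigl( - \frac{c^{2} \log^{2} n}{2 (1 + o(1))} \Bigr)
  \;=\; O(n^{-2}),
\]
since \(c^{2} \log^{2} n / 2 \ge 2 \log n\) for all \(n\) large enough, whatever the fixed value of \(c > 0\).
```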
For any \(1 \le j \le c \sqrt{n} \log n\) we define
This is the event that the data curves having index larger than \(\min (n, N(n)) \) are farther away from \(X_1(t)\) than is the data curve \(\hat{X}_{n,1}^{(k)}(t)\).
Given i.i.d. random variables \(Z_i, 1 \le i \le n\), we let \(Z_i^{(k)}\) denote the kth nearest neighbor to \(Z_i\). Given an independent random variable \(Z_0\) having the same distribution as \(Z_i\), the probability that \(Z_0\) belongs to \([Z_i, Z_i^{(k)}]\) coincides with the probability that a uniform random variable on [0, 1] belongs to \([U_i, U_i^{(k)}]\) where \(U_i, 1 \le i \le n,\) are i.i.d. uniform random variables on [0, 1]. By exchangeability this last probability is at most \(k/(n-1)\).
It follows that for any \(j = 1,2,\dots \)
whereas
We have
and thus \(\mathbb {P}(A_{n,j} \cap E_n) \ge 1 - \frac{k}{ (n - 1) - c \sqrt{n} \log n}.\) Thus
When \(\frac{k c' \log n}{ \sqrt{n} } = o(1)\) we find that \(\left( W_n^{(k)}(X_1(t), \{X_j(t) \}_{j = 1}^{N(n)} ) - W_n^{(k)}(t)\right) \mathbf{{1}}(E_n)\) converges to zero in probability as \(n \rightarrow \infty \), and thus so does \(\left( W_n^{(k)}(X_1(t), \{X_j(t) \}_{j = 1}^{N(n)} )\right. \left. - W_n^{(k)}(t)\right) .\) By uniform integrability we obtain that \((W_n^{(k)}(X_1(t), \{X_j(t) \}_{j = 1}^{N(n)} ) - W_n^{(k)}(t))\) converges to zero in mean. This completes the proof of (9).
Part (iv) Variance convergence. Replacing \(W_n^{(k)}(X_1(t), \{X_j(t) \}_{j = 1}^{N(n)})\) by its square in the above computation gives
where the last equality makes use of (A3). This gives (10), as desired.
Part (v) \(L^1\) convergence. The limit (11) follows exactly as in the proof of Theorem 2.2. \(\square \)
1.5 A.5 Proof of Theorem 2.4
This result is a straightforward consequence of the central limit theorem for M-dependent random variables. It is enough to prove the central limit theorem for the re-scaled random variables \(\{L(mT_r)\}_{r = 1}^m\). Indeed, these random variables have moments of all orders and are M-dependent, since \(\{L(mT_r)\}_{r \in A}\) and \(\{L(mT_r)\}_{r \in B}\) are independent whenever the distance between the index sets A and B exceeds 2M. The asserted asymptotic normality then follows from the classical central limit theorem for M-dependent random variables.
\(\square \)
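For reference, the classical result invoked above (due to Hoeffding and Robbins) reads as follows in its simplest stationary form; the notation here is generic, and the triangular-array version actually needed is the standard extension.

```latex
Let \((Y_r)_{r \ge 1}\) be a stationary \(M\)-dependent sequence with
\(\mathbb{E} Y_1 = \mu\) and \(\mathbb{E} Y_1^2 < \infty\). Then, as \(m \to \infty\),
\[
\frac{1}{\sqrt{m}} \sum_{r = 1}^{m} (Y_r - \mu)
  \xrightarrow{\;\mathcal{D}\;} N(0, \sigma^2),
\qquad
\sigma^2 = \operatorname{Var}(Y_1) + 2 \sum_{j = 1}^{M} \operatorname{Cov}(Y_1, Y_{1 + j}),
\]
provided \(\sigma^2 > 0\).
```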
1.6 A.6 Proof of Proposition 1
Proof
It is enough to show for any fixed j and all \(\varepsilon >0\) that
We have
Since \(\kappa _t\) is \(\alpha \)-Hölder continuous with \(\alpha = 1\), we may apply Theorem 2.3 for \(\alpha = 1\) and \(k= k(n) = o(\sqrt{n})\). Thus, as \(n \rightarrow \infty \), the right-hand side goes to 0 by Theorem 2.3, the finiteness of \(\mathbb {E}\int _0^1 W_n^{(k)}(t) dt\), and (18). \(\square \)
1.7 A.7 See Figs. 5, 6 and 7
Rights and permissions
Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Elías, A., Jiménez, R. & Yukich, J.E. Localization processes for functional data analysis. Adv Data Anal Classif 17, 485–517 (2023). https://doi.org/10.1007/s11634-022-00512-8