Abstract
This paper obtains asymptotic results for parametric inference using prediction-based estimating functions when the data are high-frequency observations of a diffusion process with an infinite time horizon. Specifically, the data are observations of a diffusion process at n equidistant time points \(\Delta _n i\), and the asymptotic scenario is \(\Delta _n \rightarrow 0\) and \(n\Delta _n \rightarrow \infty\). For useful and tractable classes of prediction-based estimating functions, existence of a consistent estimator is proved under standard weak regularity conditions on the diffusion process and the estimating function. Asymptotic normality of the estimator is established under the additional rate condition \(n\Delta _n^3 \rightarrow 0\). The prediction-based estimating functions are approximate martingale estimating functions to a smaller order than what has previously been studied, and new non-standard asymptotic theory is needed. A Monte Carlo method for calculating the asymptotic variance of the estimators is proposed.
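As a hedged illustration of the kind of quantity involved — not the paper's own Monte Carlo method — the following Python sketch evaluates the asymptotic variance \(\mathcal{V}_0(f) = \mu_0\left([\partial_x U_0(f)\, b]^2\right) = 2\mu_0\left(f\, U_0(f)\right)\) for an Ornstein–Uhlenbeck process, where the potential \(U_0(f)\) is available in closed form. The choice of process, the function \(f\), and all parameter values are our assumptions for the sake of the example.

```python
import numpy as np

# Toy illustration (our assumptions, not from the paper): for the
# Ornstein-Uhlenbeck process dX_t = -theta X_t dt + sigma dW_t the
# stationary law is mu = N(0, sigma^2 / (2 theta)), and for f(x) = x
# the potential U_0(f)(x) = x / theta is explicit, so both closed-form
# expressions for the asymptotic variance
#   V_0(f) = mu([d/dx U_0(f) * b]^2) = 2 mu(f * U_0(f))
# can be checked against each other by Monte Carlo.

def asymptotic_variance_mc(theta, sigma, n_samples=1_000_000, seed=0):
    rng = np.random.default_rng(seed)
    # draw from the stationary distribution N(0, sigma^2 / (2 theta))
    x = rng.normal(0.0, sigma / np.sqrt(2 * theta), size=n_samples)
    f = x                    # f(x) = x
    u0f = x / theta          # U_0(f)(x) = x / theta
    return 2.0 * np.mean(f * u0f)   # V_0(f) = 2 mu(f U_0(f))

theta, sigma = 1.0, 1.0
exact = sigma**2 / theta**2   # mu([(1/theta) * sigma]^2)
mc = asymptotic_variance_mc(theta, sigma)
print(exact, mc)
```

With a fixed seed the Monte Carlo estimate agrees with the closed-form value to a few decimal places, which makes this a convenient sanity check before applying a simulation-based variance estimate to a model without explicit \(U_0(f)\).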
References
Aït-Sahalia, Y. (1996). Nonparametric pricing of interest rate derivative securities. Econometrica, 64(3), 527–560.
Aït-Sahalia, Y. (2002). Maximum likelihood estimation of discretely sampled diffusions: A closed-form approximation approach. Econometrica, 70(1), 223–262.
Bandi, F., & Phillips, P. (2003). Fully nonparametric estimation of scalar diffusion models. Econometrica, 71(1), 241–283.
Beskos, A., Papaspiliopoulos, O., & Roberts, G. (2009). Monte Carlo maximum likelihood estimation for discretely observed diffusion processes. Annals of Statistics, 37(1), 223–245.
Beskos, A., Papaspiliopoulos, O., Roberts, G., & Fearnhead, P. (2006). Exact and computationally efficient likelihood-based estimation for discretely observed diffusion processes (with discussion). Journal of the Royal Statistical Society Series B, 68(3), 333–382.
Bibby, B., & Sørensen, M. (1995). Martingale estimation functions for discretely observed diffusion processes. Bernoulli, 1(1/2), 17–39.
Bladt, M., Finch, S., & Sørensen, M. (2016). Simulation of multivariate diffusion bridges. Journal of the Royal Statistical Society Series B, 78, 343–369.
Comte, F., Genon-Catalot, V., & Rozenholc, Y. (2007). Penalized nonparametric mean square estimation of the coefficients of diffusion processes. Bernoulli, 13(2), 514–543.
Dacunha-Castelle, D., & Florens-Zmirou, D. (1986). Estimation of the coefficients of a diffusion from discrete observations. Stochastics, 19, 263–284.
Ditlevsen, S., & Sørensen, M. (2004). Inference for observations of integrated diffusion processes. Scandinavian Journal of Statistics, 31, 417–429.
Doukhan, P. (1994). Mixing: Properties and examples. Lecture notes in statistics 85. Berlin: Springer.
Elerian, O., Chib, S., & Shephard, N. (2001). Likelihood inference for discretely observed nonlinear diffusions. Econometrica, 69(4), 959–993.
Eraker, B. (2001). MCMC analysis of diffusion models with application to finance. Journal of Business & Economic Statistics, 19(2), 177–191.
Fan, J. (2005). A selective overview of nonparametric methods in financial econometrics. Statistical Science, 20(4), 317–337.
Florens-Zmirou, D. (1989). Approximate discrete-time schemes for statistics of diffusion processes. Statistics, 20, 547–557.
Florens-Zmirou, D. (1993). On estimating the diffusion coefficient from discrete observations. Journal of Applied Probability, 30(4), 790–804.
Forman, J., & Sørensen, M. (2008). The Pearson diffusions: A class of statistically tractable diffusion processes. Scandinavian Journal of Statistics, 35, 438–465.
Genon-Catalot, V., & Jacod, J. (1993). On the estimation of the diffusion coefficient for multi-dimensional diffusion processes. Annales de l’Institut Henri Poincaré, 29(1), 119–151.
Genon-Catalot, V., Jeantheau, T., & Larédo, C. (2000). Stochastic volatility models as hidden Markov models and statistical applications. Bernoulli, 6(6), 1051–1079.
Genon-Catalot, V., Larédo, C., & Picard, D. (1992). Non-parametric estimation of the diffusion coefficient by wavelet methods. Scandinavian Journal of Statistics, 19(4), 317–335.
Gloter, A. (2000). Discrete sampling of an integrated diffusion process and parameter estimation of the diffusion coefficient. ESAIM: Probability and Statistics, 4, 205–227.
Gobet, E., Hoffmann, M., & Reiß, M. (2004). Nonparametric estimation of scalar diffusions based on low frequency data. Annals of Statistics, 32(5), 2223–2253.
Godambe, V., & Heyde, C. (1987). Quasi-likelihood and optimal estimation. International Statistical Review, 55(3), 231–244.
Hall, P., & Heyde, C. C. (1980). Martingale limit theory and its applications. New York: Academic Press.
Hansen, L., & Scheinkman, J. (1995). Back to the future: Generating moment implications for continuous-time Markov processes. Econometrica, 63(4), 767–804.
Hansen, L., Scheinkman, J., & Touzi, N. (1998). Spectral methods for identifying scalar diffusions. Journal of Econometrics, 86(1), 1–32.
Häusler, E., & Luschgy, H. (2015). Stable convergence and stable limit theorems. Berlin: Springer.
Hoffmann, M. (1999a). Adaptive estimation in diffusion processes. Stochastic Processes and their Applications, 79(1), 135–163.
Hoffmann, M. (1999b). \(l_p\) estimation of the diffusion coefficient. Bernoulli, 5(3), 447–481.
Jacod, J. (2000). Non-parametric kernel estimation of the coefficient of a diffusion. Scandinavian Journal of Statistics, 27, 83–96.
Jacod, J., & Sørensen, M. (2018). A review of asymptotic theory of estimating functions. Statistical Inference for Stochastic Processes, 21, 415–434.
Jakobsen, N., & Sørensen, M. (2017). Efficient estimation for diffusions sampled at high frequency over a fixed time interval. Bernoulli, 23(3), 1874–1910.
Jørgensen, E. (2017). Diffusion models observed at high frequency and applications in finance. Ph.D. thesis, Department of Mathematical Sciences, University of Copenhagen.
Kallenberg, O. (2002). Foundations of modern probability. Berlin: Springer-Verlag.
Kessler, M. (1997). Estimation of an ergodic diffusion from discrete observations. Scandinavian Journal of Statistics, 24, 211–229.
Kessler, M. (2000). Simple and explicit estimating functions for a discretely observed diffusion process. Scandinavian Journal of Statistics, 27, 65–82.
Kessler, M., & Sørensen, M. (1999). Estimating equations based on eigenfunctions for a discretely observed diffusion process. Bernoulli, 5(2), 299–314.
Li, C. (2013). Maximum-likelihood estimation for diffusion processes via closed-form density expansions. Annals of Statistics, 41(3), 1350–1380.
Pardoux, E., & Veretennikov, A. Y. (2001). On the Poisson equation and diffusion approximation. I. Annals of Probability, 29(3), 1061–1085.
Renò, R. (2008). Nonparametric estimation of the diffusion coefficient of stochastic volatility models. Econometric Theory, 24(5), 1174–1206.
Roberts, G., & Stramer, O. (2001). On inference for partially observed nonlinear diffusion models using the Metropolis-Hastings algorithm. Biometrika, 88(3), 603–621.
Rudin, W. (1987). Real and complex analysis. New York: McGraw-Hill.
Sørensen, M. (2000). Prediction-based estimating functions. Econometrics Journal, 3, 123–147.
Sørensen, M. (2011). Prediction-based estimating functions: Review and new developments. Brazilian Journal of Probability and Statistics, 25(3), 362–391.
Sørensen, M. (2012). Estimating functions for diffusion-type processes. In M. Kessler, A. Lindner, & M. Sørensen (Eds.), Statistical methods for stochastic differential equations (pp. 1–107). Boca Raton: CRC Press.
Sørensen, M. (2017). Efficient estimation for ergodic diffusions sampled at high frequency. Working paper.
Uchida, M., & Yoshida, N. (2011). Estimation for misspecified ergodic diffusion processes from discrete observations. ESAIM: Probability and Statistics, 15, 270–290.
Yoshida, N. (1992). Estimation for diffusion processes from discrete observation. Journal of Multivariate Analysis, 41(2), 220–242.
Acknowledgements
We are grateful to the reviewers for their insightful comments, which have improved the paper.
Ethics declarations
Conflicts of interest
The corresponding author declares that none of the authors have a conflict of interest.
Appendices
Appendix A: Proofs
Proof of Lemma 3.2
The diffusion process \((X_t)\) is reversible under Condition 2.2, so by Theorems 2.4 and 2.6 in Genon-Catalot et al. (2000), \(\left\Vert P_t^\theta f \right\Vert _2 \le \rho _X(t) \left\Vert f \right\Vert _2 = e^{-\lambda t} \left\Vert f \right\Vert _2\) for any \(f \in \mathscr {L}^2_0(\mu _\theta )\), where \(\lambda >0\) denotes the spectral gap of \(\mathcal {A}_\theta\). \(\square\)
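As a concrete illustration (ours, not part of the paper's argument), consider the Ornstein–Uhlenbeck process \(\mathrm{d}X_t = -\theta X_t\,\mathrm{d}t + \sigma\,\mathrm{d}W_t\), whose spectral gap is \(\lambda = \theta\); the bound of Lemma 3.2 is then attained by the centered function \(f(x)=x\):

```latex
% Hypothetical worked example: Ornstein--Uhlenbeck process,
% dX_t = -\theta X_t\,dt + \sigma\,dW_t, spectral gap \lambda = \theta.
% For the centered function f(x) = x,
\[
  P_t^\theta f(x) = \mathbb{E}_\theta\!\left( X_t \mid X_0 = x \right)
                  = x\, e^{-\theta t},
  \qquad\text{so}\qquad
  \left\Vert P_t^\theta f \right\Vert_2
  = e^{-\theta t} \left\Vert f \right\Vert_2 ,
\]
% i.e., the exponential bound of Lemma 3.2 holds with equality.
```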
Proof of Proposition 3.3
Let \(U_\theta ^{(n)}(f)=\int _0^n P_t^\theta f \,\mathrm {d}t\). By Property P4 in Hansen and Scheinkman (1995), \(U_\theta ^{(n)}(f) \in \mathcal {D}_{\mathcal {A}_\theta }\) for all \(n \in \mathbb {N}\) and:
\[
\lim_{n\rightarrow\infty} \mathcal{A}_\theta\, U_\theta^{(n)}(f) = \lim_{n\rightarrow\infty} \left( P_n^\theta f - f \right) = -f,
\]
where limits are w.r.t. \(\left\Vert \cdot \right\Vert _2\). The latter equality holds, because \(\left\Vert P_n^\theta f \right\Vert _2 \le \left\Vert f \right\Vert _2 e^{-\lambda n} \rightarrow 0\).
By Jensen’s inequality, Fubini’s theorem, and Lemma 3.2, \(U_\theta ^{(n)}(f)\) converges to \(U_\theta (f)\) in \(\mathscr {L}^2(\mu _\theta )\) as \(n \rightarrow \infty\):
\[
\left\Vert U_\theta(f) - U_\theta^{(n)}(f) \right\Vert_2
\le \int_n^\infty \left\Vert P_t^\theta f \right\Vert_2 \mathrm{d}t
\le \left\Vert f \right\Vert_2 \int_n^\infty e^{-\lambda t}\, \mathrm{d}t
= \frac{e^{-\lambda n}}{\lambda} \left\Vert f \right\Vert_2 \rightarrow 0.
\]
Taking \(n=0\), we obtain (15). Using that \(\mathcal {A}_\theta\) is closed and linear, we conclude that \(\mathcal {A}_\theta \left( U_\theta (f)\right) = \mathcal {L}_\theta \left( U_\theta (f)\right) = -f\); see, e.g., Property P7, Hansen and Scheinkman (1995). \(\square\)
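To make the potential operator concrete, here is a worked example (ours, not from the paper) in the Ornstein–Uhlenbeck setting \(\mathrm{d}X_t = -\theta X_t\,\mathrm{d}t + \sigma\,\mathrm{d}W_t\), where the generator is \(\mathcal{A}_\theta g(x) = -\theta x\, g'(x) + \tfrac12 \sigma^2 g''(x)\):

```latex
% For f(x) = x, which is centered under the stationary law,
\[
  U_\theta(f)(x) = \int_0^\infty P_t^\theta f(x)\,\mathrm{d}t
                 = \int_0^\infty x\, e^{-\theta t}\,\mathrm{d}t
                 = \frac{x}{\theta},
\]
% and applying the generator confirms Proposition 3.3:
\[
  \mathcal{A}_\theta\!\left( U_\theta(f) \right)(x)
  = -\theta x \cdot \frac{1}{\theta} + \tfrac12 \sigma^2 \cdot 0
  = -x = -f(x).
\]
```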
Proof of Proposition 3.4
The proof is an application of the central limit theorem for martingales. For completeness, and because we need to extend the result in a non-standard way later, we give the proof. First, note that:
\[
\frac{1}{\sqrt{n\Delta_n}} \sum_{i=1}^n \Delta_n f(X_{t^n_{i-1}})
= \frac{1}{\sqrt{n\Delta_n}} \int_0^{n\Delta_n} f(X_s)\, \mathrm{d}s
- \frac{1}{\sqrt{n\Delta_n}} \sum_{i=1}^n \int_{(i-1)\Delta_n}^{i\Delta_n} \left[ f(X_s) - f(X_{t^n_{i-1}}) \right] \mathrm{d}s,
\]
where we will show that:
\[
\frac{1}{\sqrt{n\Delta_n}} \sum_{i=1}^n \int_{(i-1)\Delta_n}^{i\Delta_n} \left[ f(X_s) - f(X_{t^n_{i-1}}) \right] \mathrm{d}s
\xrightarrow{\ \mathbb{P}_0\ } 0. \qquad (36)
\]
With \(A_i:=\int _{(i-1)\Delta _n}^{i\Delta _n} \left[ f(X_s) - f(X_{t^n_{i-1}})\right] \mathrm {d}s\), Fubini’s theorem combined with Lemma B.2 implies that:
\[
\mathbb{E}_0\!\left( A_i \,\middle|\, \mathcal{F}_{t^n_{i-1}} \right)
= \tfrac{1}{2}\Delta_n^2\, \mathcal{L}_0 f(X_{t^n_{i-1}})
+ \Delta_n^3\, F(X_{t^n_{i-1}};\theta_0),
\]
for a generic function \(F(x;\theta _0)\) of polynomial growth in x. Since \(n\Delta _n^3 \rightarrow 0\), it follows by Lemma 3.1 that:
\[
\frac{1}{\sqrt{n\Delta_n}} \sum_{i=1}^n \mathbb{E}_0\!\left( A_i \,\middle|\, \mathcal{F}_{t^n_{i-1}} \right)
= \sqrt{n\Delta_n^3}\; \frac{1}{n} \sum_{i=1}^n \left[ \tfrac{1}{2}\, \mathcal{L}_0 f(X_{t^n_{i-1}}) + \Delta_n F(X_{t^n_{i-1}};\theta_0) \right]
\xrightarrow{\ \mathbb{P}_0\ } 0.
\]
Moreover, for all \(k \ge 1\), Jensen’s inequality implies that:
\[
|A_i|^{k} \le \Delta_n^{k-1} \int_{(i-1)\Delta_n}^{i\Delta_n} \left| f(X_s) - f(X_{t^n_{i-1}}) \right|^{k} \mathrm{d}s
\]
and, hence, by Lemma B.1:
\[
\frac{1}{n\Delta_n} \sum_{i=1}^n \mathbb{E}_0\!\left( A_i^2 \,\middle|\, \mathcal{F}_{t^n_{i-1}} \right)
\le C\, \Delta_n^2\, \frac{1}{n} \sum_{i=1}^n \left( 1 + |X_{t^n_{i-1}}| \right)^{C}
\xrightarrow{\ \mathbb{P}_0\ } 0.
\]
The conclusion (36) now follows from Lemma 9 in Genon-Catalot and Jacod (1993).
To apply the central limit theorem for martingales, note that Proposition 3.3 and Itô’s formula applied to \(U_0(f)\) imply that:
\[
U_0(f)(X_{n\Delta_n}) = U_0(f)(X_0) - \int_0^{n\Delta_n} f(X_s)\, \mathrm{d}s
+ \int_0^{n\Delta_n} \partial_x U_0(f)(X_s)\, b_0(X_s)\, \mathrm{d}W_s,
\]
so
\[
\frac{1}{\sqrt{n\Delta_n}} \int_0^{n\Delta_n} f(X_s)\, \mathrm{d}s
= \frac{U_0(f)(X_0) - U_0(f)(X_{n\Delta_n})}{\sqrt{n\Delta_n}}
+ \frac{1}{\sqrt{n\Delta_n}} \int_0^{n\Delta_n} \partial_x U_0(f)(X_s)\, b_0(X_s)\, \mathrm{d}W_s.
\]
The stochastic integral is a true martingale under \(\mathbb {P}_0\) and by the ergodic theorem:
\[
\frac{1}{n\Delta_n} \int_0^{n\Delta_n} \left[ \partial_x U_0(f)(X_s)\, b_0(X_s) \right]^2 \mathrm{d}s
\longrightarrow \mu_0\!\left( \left[ \partial_x U_0(f)\, b_0 \right]^2 \right) = \mathcal{V}_0(f) \quad \text{a.s.}
\]
In conclusion:
\[
\frac{1}{\sqrt{n\Delta_n}} \int_0^{n\Delta_n} f(X_s)\, \mathrm{d}s
\xrightarrow{\ \mathcal{D}\ } N\!\left( 0, \mathcal{V}_0(f) \right),
\]
where convergence in law under \(\mathbb {P}_0\) follows from the continuous-time martingale central limit theorem (e.g., Theorem 6.31 in Häusler and Luschgy (2015)) or the central limit theorem for martingale arrays (e.g., Theorem 3.2 in Hall and Heyde (1980)). The conditional Lyapunov condition can be verified as in the proof of Theorem 4.5.
The alternative expression for the asymptotic variance \(\mathcal {V}_0(f)\) in (16) follows, because with \(g = U_0(f)\) and \(b_0(x) = b(x;\theta _0)\), it follows from Proposition 3.3 that:
\[
\mu_0\!\left( \left[ \partial_x g\, b_0 \right]^2 \right)
= \mu_0\!\left( \mathcal{L}_0(g^2) \right) - 2\, \mu_0\!\left( g\, \mathcal{L}_0 g \right)
= 2\, \mu_0\!\left( f\, U_0(f) \right),
\]
where we have used that \(\mu _0(\mathcal {L}_0(g^2)) = 0\), see, e.g., Hansen and Scheinkman (1995), p. 774. \(\square\)
Proof of Theorem 4.2
Under the conditions of the theorem, the function \(\kappa\) is one-to-one, and \(\kappa ^{-1}\) is continuous. By Lemma 3.1, \(V_n(f) \xrightarrow {\mathbb {P}_0}\kappa (\theta _0)\) as \(n \rightarrow \infty\). We have assumed that \(\theta _0 \in \mathrm{int} \, \Theta\), so \(\kappa (\theta _0) \in \mathrm{int} \, \kappa (\Theta )\), and hence, \(\mathbb {P}_0(V_n(f) \in \kappa (\Theta )) \rightarrow 1\) as \(n \rightarrow \infty\).
When \(V_n(f) \in \kappa (\Theta )\), \({\hat{\theta }}_n = \kappa ^{-1} (V_n(f))\) is the unique \(G_n\)-estimator. When \(V_n(f) \notin \kappa (\Theta )\), we set \({\hat{\theta }}_n := \theta ^*\) for some \(\theta ^* \in \Theta\). Then, \({\hat{\theta }}_n \xrightarrow {\mathbb {P}_0}\theta _0\) as \(n \rightarrow \infty\), and by a Taylor expansion:
\[
\sqrt{n\Delta_n}\left( \hat{\theta}_n - \theta_0 \right)
= \partial_v \kappa^{-1}\!\left( \kappa(\theta_0) \right)\, \sqrt{n\Delta_n}\left( V_n(f) - \kappa(\theta_0) \right) + o_{\mathbb{P}_0}(1),
\]
so (19) follows from Proposition 3.4. \(\square\)
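The one-dimensional estimator of Theorem 4.2 can be sketched numerically. The following is a hedged toy example under our own assumptions: an Ornstein–Uhlenbeck process with known \(\sigma\), \(f(x)=x^2\), and \(V_n(f)\) taken to be the sample average \(\frac1n\sum_{i=1}^n f(X_{t^n_{i-1}})\), so that \(\kappa(\theta) = \sigma^2/(2\theta)\) and \(\hat{\theta}_n = \kappa^{-1}(V_n(f)) = \sigma^2/(2V_n(f))\).

```python
import numpy as np

# Hypothetical illustration of Theorem 4.2 (our assumptions, not the
# paper's example): Ornstein-Uhlenbeck process
#   dX_t = -theta X_t dt + sigma dW_t, sigma known.
# With f(x) = x^2 and V_n(f) the sample average of f over the observed
# points, ergodicity gives V_n(f) -> kappa(theta0) = sigma^2/(2 theta0),
# so the estimator is theta_hat = kappa^{-1}(V_n(f)) = sigma^2/(2 V_n).

rng = np.random.default_rng(1)
theta0, sigma = 1.0, 1.0
n, delta = 200_000, 0.01

# exact OU transition: X_i = rho X_{i-1} + N(0, s2)
rho = np.exp(-theta0 * delta)
s2 = sigma**2 * (1 - rho**2) / (2 * theta0)
x = np.empty(n)
x[0] = rng.normal(0.0, sigma / np.sqrt(2 * theta0))  # stationary start
eps = rng.normal(0.0, np.sqrt(s2), size=n - 1)
for i in range(1, n):
    x[i] = rho * x[i - 1] + eps[i - 1]

v_n = np.mean(x**2)                 # V_n(f) with f(x) = x^2
theta_hat = sigma**2 / (2 * v_n)    # kappa^{-1}(V_n(f))
print(theta_hat)
```

The simulated horizon is \(n\Delta_n = 2000\), so by the theorem's asymptotics the estimate lands close to \(\theta_0 = 1\); the spread over repeated seeds is governed by the asymptotic variance in (19).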
Proof of Lemma 4.4
To simplify the presentation, we define:
where \(g=(g_1,g_2)^T\) is given by:
As a first step, we verify the expansion (21) of \(\breve{a}_n(\theta )\) in powers of \(\Delta _n\). By Lemma B.2:
![](http://media.springernature.com/lw472/springer-static/image/art%3A10.1007%2Fs42081-020-00103-x/MediaObjects/42081_2020_103_Equ112_HTML.png)
which implies that:
![](http://media.springernature.com/lw552/springer-static/image/art%3A10.1007%2Fs42081-020-00103-x/MediaObjects/42081_2020_103_Equ113_HTML.png)
where \(\left|R(\Delta _n;\theta ) \right| \le C(\theta )\) for a constant \(C(\theta )>0\). This yields the \(\Delta _n\)-expansion:
and, as a consequence:
This expansion of \(\breve{a}_n(\theta )\) together with
![](http://media.springernature.com/lw411/springer-static/image/art%3A10.1007%2Fs42081-020-00103-x/MediaObjects/42081_2020_103_Equ114_HTML.png)
implies that:
![](http://media.springernature.com/lw428/springer-static/image/art%3A10.1007%2Fs42081-020-00103-x/MediaObjects/42081_2020_103_Equ43_HTML.png)
Hence, by Lemma 3.1:
![](http://media.springernature.com/lw501/springer-static/image/art%3A10.1007%2Fs42081-020-00103-x/MediaObjects/42081_2020_103_Equ115_HTML.png)
where the contribution from the first term vanishes, because \(\mu _0(\mathcal {L}_0f)=0\); see, e.g., Hansen and Scheinkman (1995).
To apply Lemma 9 in Genon-Catalot and Jacod (1993), it remains to show that:
![](http://media.springernature.com/lw306/springer-static/image/art%3A10.1007%2Fs42081-020-00103-x/MediaObjects/42081_2020_103_Equ44_HTML.png)
From the expansions (42) and (43), it follows that:
which, in turn, yields the decomposition:
Lemma B.1 implies that:
![](http://media.springernature.com/lw468/springer-static/image/art%3A10.1007%2Fs42081-020-00103-x/MediaObjects/42081_2020_103_Equ116_HTML.png)
where we use that \(n\Delta _n \rightarrow \infty\). Similarly:
![](http://media.springernature.com/lw552/springer-static/image/art%3A10.1007%2Fs42081-020-00103-x/MediaObjects/42081_2020_103_Equ117_HTML.png)
and finally:
which together imply (45). Thus, by Lemma 9 in Genon-Catalot and Jacod (1993):
Similarly, for \(g_2(\Delta _n,X_{t^n_i},X_{t^n_{i-1}};\theta )\), it follows easily from (44) that:
![](http://media.springernature.com/lw551/springer-static/image/art%3A10.1007%2Fs42081-020-00103-x/MediaObjects/42081_2020_103_Equ118_HTML.png)
and hence:
![](http://media.springernature.com/lw542/springer-static/image/art%3A10.1007%2Fs42081-020-00103-x/MediaObjects/42081_2020_103_Equ119_HTML.png)
Moreover, since \(g^2_2(\Delta _n,X_{t^n_i},X_{t^n_{i-1}};\theta ) = f^2(X_{t^n_{i-1}})g^2_1(\Delta _n,X_{t^n_i},X_{t^n_{i-1}};\theta )\), we easily see that:
![](http://media.springernature.com/lw319/springer-static/image/art%3A10.1007%2Fs42081-020-00103-x/MediaObjects/42081_2020_103_Equ120_HTML.png)
so the first conclusion of the lemma follows from Lemma 9 in Genon-Catalot and Jacod (1993).
To establish the limit of \(\partial _{\theta ^T} H_n(\theta )\), we write:
which implies that:
where \(Z_n(f) := \frac{1}{n} \sum _{i=1}^n Z_{i-1}Z_{i-1}^T\) and \(A_n(\theta ) := -\Delta _n^{-1} \partial _{\theta ^T} \breve{a}_n(\theta )\). By Lemma 3.1:
and applying the expansion (21):
which holds under the regularity assumption (23). Collecting our observations:
To argue that the convergence is uniform over a compact subset \(\mathcal {M}\subseteq \Theta\), note that:
and in particular:
By continuity of norms, \(\left\Vert Z_n(f) \right\Vert \xrightarrow {\mathbb {P}_0}\left\Vert Z(f) \right\Vert\) and \(\left\Vert Z_n(f)-Z(f) \right\Vert = o_{\mathbb {P}_0}(1)\), so (25) follows by observing that:
and using the continuity of \(\theta \mapsto A(\theta )\). \(\square\)
Proof of Theorem 4.5
We continue with the notation (39)–(41) introduced above. Existence of a consistent sequence of \(G_n\)-estimators \((\hat{\theta }_n)\) follows from Theorem 2.5 in Jacod and Sørensen (2018), because the conclusions of Lemma 4.4 and the assumption that \(W(\theta _0)\) is non-singular imply Condition 2.2 in Jacod and Sørensen (2018). The uniqueness result follows from Theorem 2.7 in Jacod and Sørensen (2018) under the identifiability condition \(\gamma (\theta _0;\theta ) \ne 0\) for \(\theta \ne \theta _0\). The function \(\theta \mapsto \gamma (\theta _0;\theta )\) is called \(G(\theta )\) in Jacod and Sørensen (2018) and is necessarily continuous.
Asymptotic normality when \(n\Delta _n^3 \rightarrow 0\) follows from Theorem 2.11 in Jacod and Sørensen (2018). We only need to check that:
We apply the Cramér–Wold device to prove this weak convergence result, i.e., we must prove that for all \(c_1,c_2 \in \mathbb {R}\):
Reusing the expansions (42) and (43), we find that:
where \(f^*_1\) is defined in Condition 4.3. Hence:
because the first term in the expansion is a telescoping sum. Note that asymptotic normality for the first coordinate of the estimating function follows from Proposition 3.4. However, to obtain joint weak convergence, we also need to consider the second coordinate, which requires more work.
By Itô’s formula:
where
and, hence, by applying the expansions (42) and (43) as above:
A straightforward extension of the proof of (36) implies that:
since \(n\Delta _n^3 \rightarrow 0\) and as a consequence:
where \(f^* = c_1 f_1^* + c_2 f_2^*\).
To gather the non-negligible terms, we argue as in (38) that:
![](http://media.springernature.com/lw486/springer-static/image/art%3A10.1007%2Fs42081-020-00103-x/MediaObjects/42081_2020_103_Equ121_HTML.png)
which, in turn, yields the stochastic integral representation:
![](http://media.springernature.com/lw552/springer-static/image/art%3A10.1007%2Fs42081-020-00103-x/MediaObjects/42081_2020_103_Equ122_HTML.png)
At this point, we can apply the central limit theorem for martingale difference arrays; see, e.g., Hall and Heyde (1980) or Häusler and Luschgy (2015). To shorten the notation in the following, we define:
![](http://media.springernature.com/lw419/springer-static/image/art%3A10.1007%2Fs42081-020-00103-x/MediaObjects/42081_2020_103_Equ123_HTML.png)
and
First, by the conditional Itô isometry, Tonelli’s theorem, and Lemma B.2:
![](http://media.springernature.com/lw519/springer-static/image/art%3A10.1007%2Fs42081-020-00103-x/MediaObjects/42081_2020_103_Equ124_HTML.png)
Moreover, for any \(g \in \mathcal {C}^2_p(S)\) and \(k \ge 2\), the Burkholder–Davis–Gundy inequality, Jensen’s inequality, Tonelli’s theorem, and Lemma B.2, respectively, imply that:
![](http://media.springernature.com/lw552/springer-static/image/art%3A10.1007%2Fs42081-020-00103-x/MediaObjects/42081_2020_103_Equ125_HTML.png)
so based on the inequality
we conclude that:
![](http://media.springernature.com/lw542/springer-static/image/art%3A10.1007%2Fs42081-020-00103-x/MediaObjects/42081_2020_103_Equ126_HTML.png)
Now, the martingale central limit theorem for triangular arrays implies (47), so (46) follows by the Cramér–Wold device. The alternative expression for the matrix \(\mathcal {V}_0(f)\) follows because, by Proposition 3.4, \(\mu _0\left( [\partial _x U_0(g) b( \cdot ;\theta _0)]^2\right) = 2\mu _0\left( g U _0 (g)\right)\) for \(g \in \mathscr {H}^2_0\), and because, with \(g_i = U_0(f_i^*)\) and \(b_0(x) = b(x;\theta _0)\), it follows from Proposition 3.3 that:
where we have used that \(\mu _0(\mathcal {L}_0(g_1g_2)) = 0\), see, e.g., Hansen and Scheinkman (1995), p. 774. \(\square\)
Proof of Proposition 5.1
By the Cauchy–Schwarz inequality and the inequality (15):
where \(\lambda _0>0\) denotes the spectral gap of \((X_t)\) under \(\mathbb {P}_0\). Hence:
\(\square\)
Appendix B: Moment expansions
The proofs in Appendix A rely on conditional moment expansions for diffusion models, and the following results are essentially taken from Gloter (2000) and Florens-Zmirou (1989), respectively. In the sequel, \(\theta \in \Theta\) is arbitrary, and we assume for convenience that \(0<\Delta <1\).
Lemma B.1
Let \(f \in \mathcal {C}^1_p(S)\). For any \(k \ge 1\), there exists a constant \(C_{k,\theta }>0\), such that:
\[
\mathbb{E}_\theta\!\left( \left| f(X_{t+\Delta}) - f(X_t) \right|^{k} \,\middle|\, \mathcal{F}_t \right)
\le C_{k,\theta}\, \Delta^{k/2} \left( 1 + |X_t| \right)^{C_{k,\theta}}.
\]
For completeness, we give a rough proof of the following lemma.
Lemma B.2
Suppose that \(a(x;\theta ) \in \mathcal {C}_p^{2k,0}(S \times \Theta )\), \(b(x;\theta ) \in \mathcal {C}_p^{2k,0}(S \times \Theta )\) and \(f \in \mathcal {C}^{2(k+1)}_p(S)\) for some \(k \ge 0\). Then:
\[
\mathbb{E}_\theta\!\left( f(X_{t+\Delta}) \,\middle|\, \mathcal{F}_t \right)
= \sum_{i=0}^{k} \frac{\Delta^i}{i!}\, \mathcal{L}_\theta^i f(X_t)
+ \Delta^{k+1} R(\Delta, X_t; \theta),
\qquad \left| R(\Delta, x; \theta) \right| \le C_\theta \left( 1 + |x| \right)^{C_\theta}.
\]
Proof
We only consider \(k=0\); the general case may be shown by induction, see Lemma 1.10 in Sørensen (2012). By Itô’s formula:
\[
f(X_{t+\Delta}) = f(X_t) + \int_t^{t+\Delta} \mathcal{L}_\theta f(X_s)\, \mathrm{d}s
+ \int_t^{t+\Delta} \partial_x f(X_s)\, b(X_s;\theta)\, \mathrm{d}W_s,
\]
and since \(\partial _x f\) and \(b( \cdot ;\theta )\) are of polynomial and linear growth in x, respectively, the stochastic integral is a true \((\mathcal {F}_t)\)-martingale under \(\mathbb {P}_\theta\) and:
\[
\mathbb{E}_\theta\!\left( f(X_{t+\Delta}) \,\middle|\, \mathcal{F}_t \right)
= f(X_t) + \int_t^{t+\Delta} \mathbb{E}_\theta\!\left( \mathcal{L}_\theta f(X_s) \,\middle|\, \mathcal{F}_t \right) \mathrm{d}s.
\]
Moreover, since \(\mathcal {L}_\theta f\) is of polynomial growth in x:
\[
\left| \mathbb{E}_\theta\!\left( \mathcal{L}_\theta f(X_s) \,\middle|\, \mathcal{F}_t \right) \right|
\le C_\theta \left( 1 + |X_t| \right)^{C_\theta}, \qquad s \in [t, t+\Delta],
\]
and hence:
\[
\mathbb{E}_\theta\!\left( f(X_{t+\Delta}) \,\middle|\, \mathcal{F}_t \right)
= f(X_t) + \Delta\, R(\Delta, X_t; \theta),
\qquad \left| R(\Delta, x; \theta) \right| \le C_\theta \left( 1 + |x| \right)^{C_\theta},
\]
by a simple application of Lemma B.1. \(\square\)
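The expansion of Lemma B.2 can be checked numerically in a model with explicit conditional moments. The following sketch, under our own assumptions, uses the Ornstein–Uhlenbeck process with \(f(x)=x^2\), for which \(\mathcal{L}_\theta f(x) = -2\theta x^2 + \sigma^2\), and verifies that the remainder after the first-order term is \(O(\Delta^2)\) by halving \(\Delta\):

```python
import numpy as np

# Numerical check of the k = 0 / first-order case of Lemma B.2 for the
# Ornstein-Uhlenbeck process dX_t = -theta X_t dt + sigma dW_t (our toy
# model, not the paper's).  With f(x) = x^2,
#   L_theta f(x) = -2 theta x^2 + sigma^2,
# and the lemma asserts
#   E_theta(f(X_Delta) | X_0 = x) = f(x) + Delta L f(x) + O(Delta^2).

theta, sigma, x0 = 1.0, 1.0, 1.0

def cond_mean_f(delta):
    # exact conditional moment: E(X_Delta^2 | X_0 = x0)
    r = np.exp(-2 * theta * delta)
    return x0**2 * r + sigma**2 * (1 - r) / (2 * theta)

def expansion(delta):
    # f(x0) + delta * L f(x0)
    lf = -2 * theta * x0**2 + sigma**2
    return x0**2 + delta * lf

err1 = abs(cond_mean_f(0.02) - expansion(0.02))
err2 = abs(cond_mean_f(0.01) - expansion(0.01))
ratio = err1 / err2   # close to 4 for an O(Delta^2) remainder
print(ratio)
```

Halving \(\Delta\) shrinks the remainder by roughly a factor of four, consistent with the \(\Delta^{k+1}\) remainder for \(k=1\).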
Cite this article
Jørgensen, E.S., Sørensen, M. Prediction-based estimation for diffusion models with high-frequency data. Jpn J Stat Data Sci 4, 483–511 (2021). https://doi.org/10.1007/s42081-020-00103-x