Periodic Trawl Processes: Simulation, Statistical Inference and Applications in Energy Markets

A chapter in Quantitative Energy Finance.
Abstract

This article introduces the class of periodic trawl processes, which are continuous-time, infinitely divisible, stationary stochastic processes that allow for periodicity and flexible forms of serial correlation, including both short- and long-memory settings. We derive some of the key probabilistic properties of periodic trawl processes and present relevant examples. Moreover, we show how such processes can be simulated and establish the asymptotic theory for their sample mean and sample autocovariances. Consequently, we prove the asymptotic normality of a (generalised) method-of-moments estimator for the model parameters. We illustrate the new model and estimation methodology in an application to electricity prices.


References

1. Alomari, H.M., Ayache, A., Fradon, M., Olenko, A.: Estimation of cyclic long-memory parameters. Scand. J. Stat. 47(1), 104–133 (2020). https://doi.org/10.1111/sjos.12404

2. Andel, J.: Long memory time series models. Kybernetika 22, 105–123 (1986)

3. Arteche, J.: Semiparametric robust tests on seasonal or cyclical long memory time series. J. Time Ser. Anal. 23(3), 251–285 (2002). https://onlinelibrary.wiley.com/doi/abs/10.1111/1467-9892.00264

4. Arteche, J., Robinson, P.M.: Semiparametric inference in seasonal and cyclical long memory processes. J. Time Ser. Anal. 21(1), 1–25 (2000). https://onlinelibrary.wiley.com/doi/abs/10.1111/1467-9892.00170

5. Ayache, A., Fradon, M., Nanayakkara, R., Olenko, A.: Asymptotic normality of simultaneous estimators of cyclic long-memory processes. Electron. J. Stat. 16(1), 84–115 (2022). https://doi.org/10.1214/21-ejs1953

6. Bacro, J., Gaetan, C., Opitz, T., Toulemonde, G.: Hierarchical space-time modeling of asymptotically independent exceedances with an application to precipitation data. J. Am. Stat. Assoc. 115(530), 555–569 (2020). https://doi.org/10.1080/01621459.2019.1617152

7. Barndorff-Nielsen, O.E.: Stationary infinitely divisible processes. Braz. J. Probab. Stat. 25(3), 294–322 (2011). https://doi.org/10.1214/11-BJPS140

8. Barndorff-Nielsen, O.E., Benth, F.E., Veraart, A.E.D.: Modelling energy spot prices by volatility modulated Lévy-driven Volterra processes. Bernoulli 19(3), 803–845 (2013). http://www.jstor.org/stable/23525714

9. Barndorff-Nielsen, O.E., Lunde, A., Shephard, N., Veraart, A.E.D.: Integer-valued trawl processes: a class of stationary infinitely divisible processes. Scand. J. Stat. 41(3), 693–724 (2014)

10. Barndorff-Nielsen, O.E., Benth, F.E., Veraart, A.E.D.: Ambit Stochastics. Probability Theory and Stochastic Modelling, vol. 88. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-94129-5

11. Bennedsen, M., Lunde, A., Shephard, N., Veraart, A.E.D.: Inference and forecasting for continuous-time integer-valued trawl processes. J. Econom. 236(2), 105476 (2023). https://www.sciencedirect.com/science/article/pii/S0304407623001926

12. Bennett, W.R.: Statistics of regenerative digital transmission. Bell Syst. Tech. J. 37, 1501–1542 (1958). https://doi.org/10.1002/j.1538-7305.1958.tb01560.x

13. Brockwell, P.J., Davis, R.A.: Time Series: Theory and Methods. Springer Series in Statistics. Springer, New York (1987). https://doi.org/10.1007/978-1-4899-0004-3

14. Cohen, S., Lindner, A.: A central limit theorem for the sample autocorrelations of a Lévy driven continuous time moving average process. J. Stat. Plan. Inference 143(8), 1295–1306 (2013). https://www.sciencedirect.com/science/article/pii/S0378375813000670

15. Courgeau, V., Veraart, A.E.: Asymptotic theory for the inference of the latent trawl model for extreme values. Scand. J. Stat. 49(4), 1448–1495 (2022). https://onlinelibrary.wiley.com/doi/abs/10.1111/sjos.12563

16. Curato, I.V., Stelzer, R.: Weak dependence and GMM estimation of supOU and mixed moving average processes. Electron. J. Stat. 13(1), 310–360 (2019). https://doi.org/10.1214/18-EJS1523

17. Das, S., Genton, M.G.: Cyclostationary processes with evolving periods and amplitudes. IEEE Trans. Signal Process. 69, 1579–1590 (2021)

18. Doukhan, P., Jakubowski, A., Lopes, S., Surgailis, D.: Discrete-time trawl processes. Stoch. Process. Appl. 129(4), 1326–1348 (2019). https://www.sciencedirect.com/science/article/pii/S0304414918301571

19. Doukhan, P., Roueff, F., Rynkiewicz, J.: Spectral estimation for non-linear long range dependent discrete time trawl processes. Electron. J. Stat. 14(2), 3157–3191 (2020). https://doi.org/10.1214/20-EJS1742

20. Espejo, R.M., Leonenko, N.N., Olenko, A., Ruiz-Medina, M.D.: On a class of minimum contrast estimators for Gegenbauer random fields. TEST 24, 657–680 (2015). https://doi.org/10.1007/s11749-015-0428-4

21. Fasen, V.: Extremes of regularly varying Lévy-driven mixed moving average processes. Adv. Appl. Probab. 37(4), 993–1014 (2005)

22. Ferrara, L., Guégan, D.: Comparison of parameter estimation methods in cyclical long memory time series. In: Dunis, C., Moody, J., Timmermann, A. (eds.) Developments in Forecast Combination and Portfolio Choice. Wiley, New York (2001)

23. Fuchs, F., Stelzer, R.: Mixing conditions for multivariate infinitely divisible processes with an application to mixed moving averages and the supOU stochastic volatility model. ESAIM: Probab. Stat. 17, 455–471 (2013)

24. Gardner, W.A., Napolitano, A., Paura, L.: Cyclostationarity: half a century of research. Signal Process. 86(4), 639–697 (2006). https://www.sciencedirect.com/science/article/pii/S0165168405002409

25. Genton, M.G., Hall, P.: Statistical inference for evolving periodic functions. J. R. Stat. Soc.: Ser. B (Stat. Methodol.) 69(4), 643–657 (2007). https://rss.onlinelibrary.wiley.com/doi/abs/10.1111/j.1467-9868.2007.00604.x

26. Geweke, J., Porter-Hudak, S.: The estimation and application of long memory time series models. J. Time Ser. Anal. 4(4), 221–238 (1983). https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1467-9892.1983.tb00371.x

27. Giraitis, L., Hidalgo, J., Robinson, P.M.: Gaussian estimation of parametric spectral density with unknown pole. Ann. Stat. 29(4), 987–1023 (2001). http://www.jstor.org/stable/2674066

28. Gladyšev, E.G.: Periodically correlated random sequences. Dokl. Akad. Nauk SSSR 137, 1026–1029 (1961)

29. Grahovac, D., Leonenko, N.N., Taqqu, M.S.: Intermittency of trawl processes. Stat. Probab. Lett. 137, 235–242 (2018). https://www.sciencedirect.com/science/article/pii/S0167715218300415

30. Gray, H.L., Zhang, N.-F., Woodward, W.A.: On generalized fractional processes. J. Time Ser. Anal. 10(3), 233–257 (1989). https://doi.org/10.1111/j.1467-9892.1989.tb00026.x

31. Hall, P., Reimann, J., Rice, J.: Nonparametric estimation of a periodic function. Biometrika 87(3), 545–557 (2000). http://www.jstor.org/stable/2673629

32. Hidalgo, J.: Semiparametric estimation for stationary processes whose spectra have an unknown pole. Ann. Stat. 33(4), 1843–1889 (2005). https://doi.org/10.1214/009053605000000318

33. Hidalgo, J., Soulier, P.: Estimation of the location and exponent of the spectral singularity of a long memory process. J. Time Ser. Anal. 25(1), 55–81 (2004). https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1467-9892.2004.00337.x

34. Hosking, J.R.M.: Fractional differencing. Biometrika 68(1), 165–176 (1981). http://www.jstor.org/stable/2335817

35. Hurd, H.L.: An investigation of periodically correlated stochastic processes. PhD thesis, Duke University, Department of Electrical Engineering (1969)

36. Hurd, H.L., Miamee, A.: Periodically Correlated Random Sequences: Spectral Theory and Practice. Wiley Series in Probability and Statistics. Wiley-Interscience [John Wiley & Sons], Hoboken (2007). https://doi.org/10.1002/9780470182833

37. Leonte, D., Veraart, A.E.D.: Simulation methods and error analysis for trawl processes and ambit fields. Math. Comput. Simul. 215, 518–542 (2024). https://doi.org/10.1016/j.matcom.2023.07.018

38. Maddanu, F.: A harmonically weighted filter for cyclical long memory processes. AStA Adv. Stat. Anal. 106, 49–78 (2022). https://doi.org/10.1007/s10182-021-00394-9

39. Mátyás, L. (ed.): Generalized Method of Moments Estimation. Cambridge University Press, Cambridge (1999). https://doi.org/10.1017/CBO9780511625848

40. Noven, R.C.: Statistical models for spatio-temporal extrema and dependencies. PhD thesis, Imperial College London (2016). http://hdl.handle.net/10044/1/48048

41. Noven, R., Veraart, A., Gandy, A.: A latent trawl process model for extreme values. J. Energy Mark. 11(3), 1–24 (2018). https://doi.org/10.21314/JEM.2018.179

42. Pakkanen, M.S., Passeggeri, R., Sauri, O., Veraart, A.E.D.: Limit theorems for trawl processes. Electron. J. Probab. 26, 1–36 (2021). https://doi.org/10.1214/21-EJP652

43. Paulauskas, V.: A note on linear processes with tapered innovations. Lith. Math. J. 60, 64–79 (2020). https://doi.org/10.1007/s10986-019-09445-w

44. Pedersen, J.: The Lévy-Itô decomposition of an independently scattered random measure. MaPhySto Research Report 2003-2 (2003). https://www.maphysto.dk/publications/MPS-RR/2003/2.pdf

45. Quinn, B.G., Thomson, P.J.: Estimating the frequency of a periodic function. Biometrika 78(1), 65–74 (1991). http://www.jstor.org/stable/2336896

46. Rajput, B.S., Rosiński, J.: Spectral representations of infinitely divisible processes. Probab. Theory Relat. Fields 82(3), 451–487 (1989)

47. Sato, K.: Lévy Processes and Infinitely Divisible Distributions. Cambridge Studies in Advanced Mathematics, vol. 68. Cambridge University Press, Cambridge (1999). Translated from the 1990 Japanese original, revised by the author

48. Shephard, N., Yang, J.J.: Likelihood inference for exponential-trawl processes. In: Podolskij, M., Stelzer, R., Thorbjørnsen, S., Veraart, A.E.D. (eds.) The Fascination of Probability, Statistics and Their Applications: In Honour of Ole E. Barndorff-Nielsen, pp. 251–281. Springer International Publishing, Cham (2016). https://doi.org/10.1007/978-3-319-25826-3_12

49. Shephard, N., Yang, J.J.: Continuous time analysis of fleeting discrete price moves. J. Am. Stat. Assoc. 112(519), 1090–1106 (2017). https://doi.org/10.1080/01621459.2016.1192544

50. Surgailis, D., Rosinski, J., Mandrekar, V., Cambanis, S.: Stable mixed moving averages. Probab. Theory Relat. Fields 97, 543–558 (1993). https://doi.org/10.1007/BF01192963

51. Talarczyk, A., Treszczotko, L.: Limit theorems for integrated trawl processes with symmetric Lévy bases. Electron. J. Probab. 25, 1–24 (2020). https://doi.org/10.1214/20-EJP509

52. Veraart, A.E.D.: Modeling, simulation and inference for multivariate time series of counts using trawl processes. J. Multivar. Anal. 169, 110–129 (2019). https://doi.org/10.1016/j.jmva.2018.08.012

53. Veraart, A.E.D.: ambit: Simulation and Estimation of Ambit Processes. R package version 0.1.2 (2022). https://cran.r-project.org/web/packages/ambit/index.html

54. Veraart, A.E.D.: PeriodicTrawl-Energy. R code, release v1.0.0 (2023). https://doi.org/10.5281/zenodo.7706091

55. Veraart, A.E.D., Veraart, L.A.M.: Modelling electricity day-ahead prices by multivariate Lévy semistationary processes. In: Benth, F.E., Kholodnyi, V.A., Laurence, P. (eds.) Quantitative Energy Finance: Modeling, Pricing, and Hedging in Energy and Commodity Markets, pp. 157–188. Springer, New York (2014). https://doi.org/10.1007/978-1-4614-7248-3_6

56. Wolpert, R.L., Brown, L.D.: Stationary infinitely-divisible Markov processes with non-negative integer values. Working paper, April 2011 (2011). https://faculty.wharton.upenn.edu/wp-content/uploads/2011/09/2011d-Stationary-Infinitely-Divisible-Markov-Processes-with-Non-negative-Integer-Values.pdf

57. Wolpert, R.L., Taqqu, M.S.: Fractional Ornstein–Uhlenbeck Lévy processes and the Telecom process: upstairs and downstairs. Signal Process. 85, 1523–1545 (2005). https://doi.org/10.1016/j.sigpro.2004.09.016

58. Woodward, W.A., Cheng, Q.C., Gray, H.L.: A k-factor GARMA long-memory model. J. Time Ser. Anal. 19(4), 485–504 (1998). https://doi.org/10.1111/j.1467-9892.1998.00105.x

59. Yajima, Y.: Semiparametric estimation of the frequency of unbounded spectral densities. J. Stat. Stud. 26, 143–155 (2007). http://www.jstor.org/stable/27639901

Acknowledgements

I would like to thank Paul Doukhan for suggesting a study of periodic trawl processes and for helpful discussions, as well as Michele Nguyen, Fred Espen Benth and an anonymous referee for commenting on an earlier version of this article.

Correspondence to Almut E. D. Veraart.


Appendix

The appendix contains the proofs of all the technical results presented in the main paper, additional examples and a discussion of when the technical assumptions needed in our main theorems hold for periodic trawl processes.

1.1 Proof of the Second Order Properties

First, we derive the joint characteristic/cumulant function.

Proposition 8

Let \(t_1<t_2\) and \(\theta _1, \theta _2 \in \mathbb {R}\). Then

$$\displaystyle \begin{aligned} &{\mathrm{Log}}(\mathbb{E}(\exp(i (\theta_1, \theta_2)(Y_{t_1}, Y_{t_2})^{\top})))={\mathrm{Log}}(\mathbb{E}(\exp(i \theta_1 Y_{t_1} + i\theta_2 Y_{t_2}))) \\ &= \int_{(-\infty,t_1]\times \mathbb{R}}C_{L'}( \theta_1 p(t_1-s)\mathbb{I}_{(0,g(t_1-s))}(x)+\theta_2 p(t_2-s)\mathbb{I}_{(0,g(t_2-s))}(x))dx ds \\ &+ \int_{(t_1,t_2]\times \mathbb{R}}C_{L'}(\theta_2 p(t_2-s)\mathbb{I}_{(0,g(t_2-s))}(x))dx ds, \end{aligned} $$

where\(C_{L'}\)denotes the cumulant function of the Lévy seed\(L'\)associated with the Lévy basis L.

Proof

Let \(t_1<t_2\) and \(\theta _1, \theta _2 \in \mathbb {R}\). Then the joint characteristic function is given by

$$\displaystyle \begin{aligned} &\mathbb{E}(\exp(i (\theta_1, \theta_2)(Y_{t_1}, Y_{t_2})^{\top}))=\mathbb{E}(\exp(i \theta_1 Y_{t_1} + i\theta_2 Y_{t_2})) \\ &=\mathbb{E}\left[\exp\left(i \theta_1 \int_{(-\infty,t_1]\times \mathbb{R}}p(t_1-s)\mathbb{I}_{(0,g(t_1-s))}(x)L(dx,ds) \right.\right.\\ & + i\theta_2 \int_{(-\infty,t_1]\times \mathbb{R}}p(t_2-s)\mathbb{I}_{(0,g(t_2-s))}(x)L(dx,ds) \\ &\quad \left.\left. +i\theta_2\int_{(t_1,t_2]\times \mathbb{R}}p(t_2-s)\mathbb{I}_{(0,g(t_2-s))}(x)L(dx,ds)\right)\right]\\ &=\mathbb{E}\left[\exp\left( i\int_{(-\infty,t_1]\times \mathbb{R}}\{ \theta_1 p(t_1-s)\mathbb{I}_{(0,g(t_1-s))}(x)\right.\right.\\ &\quad +\theta_2 p(t_2-s)\mathbb{I}_{(0,g(t_2-s))}(x)\}L(dx,ds) \\ & \left.\left.+i\theta_2\int_{(t_1,t_2]\times \mathbb{R}}p(t_2-s)\mathbb{I}_{(0,g(t_2-s))}(x)L(dx,ds)\right)\right]\\ &=\exp\left( \int_{(-\infty,t_1]\times \mathbb{R}}C( \theta_1 p(t_1-s)\mathbb{I}_{(0,g(t_1-s))}(x) \right.\\ &\quad \left.+\theta_2 p(t_2-s)\mathbb{I}_{(0,g(t_2-s))}(x); L')dx ds \right)\\ & \quad \cdot \exp\left( \int_{(t_1,t_2]\times \mathbb{R}}C(\theta_2 p(t_2-s)\mathbb{I}_{(0,g(t_2-s))}(x); L')dx ds \right). \end{aligned} $$

That is,

$$\displaystyle \begin{aligned} &\log(\mathbb{E}(\exp(i \theta_1 Y_{t_1} + i\theta_2 Y_{t_2}))) \\ &= \int_{(-\infty,t_1]\times \mathbb{R}}C( \theta_1 p(t_1-s)\mathbb{I}_{(0,g(t_1-s))}(x)+\theta_2 p(t_2-s)\mathbb{I}_{(0,g(t_2-s))}(x); L')dx ds \\ &+ \int_{(t_1,t_2]\times \mathbb{R}}C(\theta_2 p(t_2-s)\mathbb{I}_{(0,g(t_2-s))}(x); L')dx ds. \end{aligned} $$

□

We can now easily derive the second-order properties of the periodic trawl process:

Proof (Proof of Proposition 5)

For \(t, t_1, t_2\in \mathbb {R}, t_1<t_2\), we have

$$\displaystyle \begin{aligned} \mathbb{E}(Y_t)&= \mathbb{E}(L')\int_{-\infty}^{t}p(t-s) g(t-s)ds=\mathbb{E}(L')\int_{0}^{\infty}p(u) g(u)du,\\ {\mathrm{Var}}(Y_{t})&= {\mathrm{Var}}(L')\int_{0}^{\infty}p^2(u)g(u)du,\\ {\mathrm{Cov}}(Y_{t_1},Y_{t_2})&=-\left.\frac{\partial^2}{\partial \theta_1 \partial \theta_2}\log(\mathbb{E}(\exp(i\theta_1 Y_{t_1}+i\theta_2 Y_{t_2})))\right|{}_{\theta_1=\theta_2=0} \\ &= {\mathrm{Var}}(L')\int_{-\infty}^{t_1}p(t_1-s)p(t_2-s)\min(g(t_1-s),g(t_2-s))ds,\\ {\mathrm{Cor}}(Y_{t_1},Y_{t_2}) &= \frac{\int_{-\infty}^{t_1}p(t_1-s)p(t_2-s)\min(g(t_1-s),g(t_2-s))ds}{\int_{0}^{\infty}p^2(u)g(u)du}. \end{aligned} $$

Recall that we assume that g is monotonically decreasing, i.e. if \(x\leq y\), then \(g(x)\geq g(y)\).

Since \(t_1<t_2\), we have \(t_1-s<t_2-s\) for \(s<t_1\) and

$$\displaystyle \begin{aligned} \min(g(t_1-s),g(t_2-s))=g(t_2-s), \end{aligned} $$

hence the above expressions simplify to

$$\displaystyle \begin{aligned} {\mathrm{Cov}}(Y_{t_1},Y_{t_2}) &={\mathrm{Var}}(L')\int_{-\infty}^{t_1}p(t_1-s)p(t_2-s)g(t_2-s)ds\\ &={\mathrm{Var}}(L')\int_{0}^{\infty}p(u)p(t_2-t_1+u)g(t_2-t_1+u)du,\\ {\mathrm{Cor}}(Y_{t_1},Y_{t_2}) &= \frac{\int_{0}^{\infty}p(u)p(t_2-t_1+u)g(t_2-t_1+u)du}{\int_{0}^{\infty}p^2(u)g(u)du}. \end{aligned} $$

□
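As a quick numerical sanity check of the second-order formulas above, the following sketch evaluates them for one concrete specification. The choices \(p(u)=1+0.5\cos (2\pi u)\) (so \(\tau =1\)) and \(g(u)=e^{-\lambda u}\) are illustrative assumptions of ours, not prescriptions from the paper.

```python
# Numerical check of the second-order formulas from Proposition 5 for a toy
# specification (our choice): p(u) = 1 + 0.5*cos(2*pi*u), g(u) = exp(-lam*u).
import numpy as np
from scipy.integrate import quad

lam = 0.5
def p(u): return 1.0 + 0.5 * np.cos(2.0 * np.pi * u)
def g(u): return np.exp(-lam * u)

U = 60.0  # effective upper integration limit; exp(-lam*U) is negligible
var_term, _ = quad(lambda u: p(u) ** 2 * g(u), 0.0, U, limit=200)

def acf(t):
    """Cor(Y_0, Y_t) = int_0^inf p(u)p(t+u)g(t+u)du / int_0^inf p(u)^2 g(u)du."""
    num, _ = quad(lambda u: p(u) * p(t + u) * g(t + u), 0.0, U, limit=200)
    return num / var_term

# At integer lags p(t+u) = p(u), so only the g-factor drives the decay and
# acf(n) = exp(-lam*n); at non-integer lags the periodic kernel modulates the
# autocorrelation, making its decay non-monotone.
print(acf(0.0), acf(0.5), acf(1.0))
```

In particular, the printed values illustrate that the autocorrelation function need not decrease monotonically in the lag, which is the hallmark of the periodic component.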

Proof (Proof of Proposition 6)

Recall that

$$\displaystyle \begin{aligned} {\mathrm{Cor}}(Y_{0},Y_{t})= \frac{\int_{0}^{\infty}p(u)p(t+u)g(t+u)du}{\int_{0}^{\infty}p^2(u)g(u)du}. \end{aligned} $$

We consider a constant \(M>\tau \). Since p is periodic with period \(\tau \), there exist \(\xi _1, \xi _2 \in [0,\tau ]\) such that,

$$\displaystyle \begin{aligned} \int_{0}^{M}p(u)p(t+u)g(t+u)du&=p(\xi_1)p(\xi_1+t)\int_{0}^{M}g(t+u)du,\\ \int_{0}^{M}(p(u))^2g(u)du&=(p(\xi_2))^2\int_{0}^{M}g(t+u)du, \end{aligned} $$

by the mean value theorem. We note that p is assumed to be continuous, and since it is also periodic, it is bounded. Also, the integrability conditions in (7) guarantee the existence of the integrals when taking the limit as \(M\to \infty \). Taking the limit and setting \(c(t)=p(\xi _1)p(\xi _1+t)/(p(\xi _2))^2\) leads to the result; since \(\mathrm {Cor}(Y_0,Y_0)=1\), we deduce that \(c(0)=1\). Also, we observe that c is proportional to the \(\tau \)-periodic function p and is hence \(\tau \)-periodic itself. □

Remark 11

As mentioned in Remark 2, Barndorff-Nielsen et al. [9] proposed adding a periodic function as a multiplicative factor to g rather than as a kernel function as in (8), which results in a process \((Z_t)_{t\geq 0}\) with

$$\displaystyle \begin{aligned} {} Z_t &= \int_{\mathbb{R}\times \mathbb{R}}\mathbb{I}_{(0,p(t-s)g(t-s))}(x)\mathbb{I}_{[0,\infty)}(t-s) L(dx,ds), \end{aligned} $$
(19)

compared to our earlier definition of \((Y_t)_{t\geq 0}\) with

$$\displaystyle \begin{aligned} Y_t &= \int_{\mathbb{R}\times \mathbb{R}}p(t-s)\mathbb{I}_{(0,g(t-s))}(x)\mathbb{I}_{[0,\infty)}(t-s) L(dx,ds). \end{aligned} $$

The autocorrelation function of the process Z is of the form, for \(t_1<t_2\),

$$\displaystyle \begin{aligned} {\mathrm{Cor}}(Z_{t_1},Z_{t_2}) &= \frac{\int_{-\infty}^{t_1}\min(p(t_1-s)g(t_1-s), p(t_2-s)g(t_2-s))ds}{\int_{0}^{\infty}p(u)g(u)du}, \end{aligned} $$

which is typically more difficult to handle than the autocorrelation function of our proposed periodic trawl process Y.
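Both constructions can be simulated directly from the points of the Lévy basis when the latter is a homogeneous Poisson random measure. The sketch below is a minimal illustration, assuming a unit-intensity Poisson basis, an exponential trawl \(g(u)=e^{-\lambda u}\) and the kernel \(p(u)=1+0.5\cos (2\pi u)\); all tuning constants are our own illustrative choices, and more careful simulation schemes (with error analysis) are developed in [37] and implemented in the ambit R package [53].

```python
# Minimal slice-style simulation sketch of the periodic trawl process Y_t and
# the variant Z_t from (19), assuming a unit-intensity Poisson Levy basis.
# The choices of p, g, lam and all tuning constants are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
lam, T, burn = 0.5, 100.0, 40.0                 # decay rate, horizon, burn-in
def p(u): return 1.0 + 0.5 * np.cos(2.0 * np.pi * u)   # tau = 1, p > 0
def g(u): return np.exp(-lam * u)                      # monotone, g <= 1

# Poisson points (s_i, x_i) of the basis on [-burn, T] x [0, xmax], where
# xmax bounds both g and p*g so that all indicator sets below are covered.
xmax = 1.5
n_pts = rng.poisson((T + burn) * xmax)
s = rng.uniform(-burn, T, n_pts)
x = rng.uniform(0.0, xmax, n_pts)

def Y(t):
    """Y_t = sum_i p(t - s_i) * 1{0 < x_i < g(t - s_i), s_i <= t}."""
    u = t - s
    keep = (u >= 0) & (x < g(u))
    return float(p(u[keep]).sum())

def Z(t):
    """Z_t = #{i : 0 < x_i < p(t - s_i) g(t - s_i), s_i <= t}, integer-valued."""
    u = t - s
    return int(((u >= 0) & (x < p(u) * g(u))).sum())

path = np.array([Y(t) for t in np.linspace(0.0, T, 401)])
print(path.mean())  # time average; E(Y_t) = E(L') * int_0^inf p(u)g(u)du here
```

The burn-in interval approximates the integral over \((-\infty , t]\); since \(g\) decays exponentially, the truncation error is negligible for the constants chosen above.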

1.2 Proofs of the Asymptotic Theory

The following proofs extend the ideas presented in the work by Cohen and Lindner [14]. Alternatively, we could have deduced the results from the more recent work by Curato and Stelzer [16].

Proof (Proof of Theorem 1)

The proof is a straightforward extension of the arguments provided in the proof of Theorem 2.1 in [14]. For the convenience of the reader and to keep this article self-contained, we will present the steps to extend the proof by Cohen and Lindner [14] to our more general setting of mixed moving average processes driven by homogeneous Lévy bases.

First of all, we continue the function \(F_{\Delta }\) periodically on \(\mathbb {R}\) by setting

$$\displaystyle \begin{aligned} F_{\Delta}(x,u)=\sum_{j=-\infty}^{\infty}|f(x, u+j\Delta)|, \quad u \in \mathbb{R}, \end{aligned} $$

and note that \(F_{\Delta }(x,u)=F_{\Delta }(x,u+j\Delta )\) for all \(j \in \mathbb {Z}\), \(u, x \in \mathbb {R}\), i.e. \(F_{\Delta }\) is \(\Delta \)-periodic in its second argument.

We note that the autocovariance function of Y  satisfies

$$\displaystyle \begin{aligned} |\gamma_f(j\Delta)| \leq \kappa_2\int_{\mathbb{R} \times \mathbb{R}}|f(x, -s)||f(x, j\Delta-s)|dx ds, \end{aligned} $$

for any \(j \in \mathbb {Z}\) and

$$\displaystyle \begin{aligned} \begin{array}{rcl} {} \frac{1}{\kappa_2}\sum_{j=-\infty}^{\infty}|\gamma_f(j \Delta)| & \leq&\displaystyle \sum_{j=-\infty}^{\infty}\int_{\mathbb{R} \times \mathbb{R}}|f(x, -s)||f(x, j\Delta-s)|dx ds\\ & \leq&\displaystyle \int_{\mathbb{R} \times \mathbb{R}}|f(x, -s)|\sum_{j=-\infty}^{\infty}|f(x, j\Delta-s)|dx ds\\ & =&\displaystyle \int_{\mathbb{R} \times \mathbb{R}}|f(x, -s)|F_{\Delta}(x, -s)dx ds \\ & =&\displaystyle \int_{\mathbb{R} \times \mathbb{R}}|f(x, s)|F_{\Delta}(x, s)dx ds \\ & =&\displaystyle \sum_{j=-\infty}^{\infty}\int_{\mathbb{R} \times [0,\Delta]}|f(x, j\Delta +s)|F_{\Delta}(x, s)dx ds \\ & =&\displaystyle \int_{\mathbb{R} \times [0,\Delta]} \sum_{j=-\infty}^{\infty}|f(x, j\Delta +s)|F_{\Delta}(x, s)dx ds\\ & =&\displaystyle \int_{\mathbb{R} \times [0,\Delta]} F_{\Delta}^2(x, s)dx ds< \infty. \end{array} \end{aligned} $$
(20)

The above computations can be repeated without the modulus, which implies that \(\sum _{j=-\infty }^{\infty }\gamma _f(j \Delta )=V_{\Delta }\).

To simplify the exposition, we shall now assume that \(\mu =0\). We proceed as in [14]. Define the function \(f_{m;\Delta }(x,s):=f(x,s)\mathbb {I}_{(-m \Delta , m \Delta )}(s)\), for \(m \in \mathbb {N}, x, s \in \mathbb {R}\), and set

$$\displaystyle \begin{aligned} Y_{j; \Delta}^{(m)}&:=\int_{\mathbb{R} \times \mathbb{R}}f_{m; \Delta}(x, j\Delta-s)L(dx, ds) \\ &=\int_{\mathbb{R} \times ((-m+j)\Delta, (m+j)\Delta)}f(x, j\Delta-s) L(dx, ds). \end{aligned} $$

Since L is independently scattered, we can deduce that \((Y_{j;\Delta }^{(m)})_{j \in \mathbb {Z}}\) is a \((2m-1)\)-dependent sequence, which is also strictly stationary. Hence, by Brockwell and Davis [13, Theorem 6.4.2], we know that

$$\displaystyle \begin{aligned} \sqrt{n} \; \overline{Y}_{n; \Delta}^{(m)} = n^{-1/2}\sum_{j=1}^n Y_{j;\Delta}^{(m)}\stackrel{\mathrm{d}}{\to}Z^{(m)}_{\Delta}, \quad \mathrm{as} \; n \to \infty, \end{aligned} $$

where the random variable \(Z^{(m)}_{\Delta }\) satisfies \(Z^{(m)}_{\Delta }\stackrel {\mathrm {d}}{=}\mathrm {N}(0, V^{(m)}_{\Delta })\), with

$$\displaystyle \begin{aligned} V^{(m)}_{\Delta}=\sum_{j=-2m}^{2m}\gamma_{f_m}(j\Delta), \end{aligned} $$

for

$$\displaystyle \begin{aligned} \gamma_{f_m}(j\Delta) &={\mathrm{Cov}}(Y_{0;\Delta}^{(m)}, Y_{j;\Delta}^{(m)})=\kappa_2\int_{\mathbb{R} \times \mathbb{R}}f_{m;\Delta}(x, -s)f_{m;\Delta}(x, j\Delta-s)dx ds\\ &=\int_{\mathbb{R} \times ((-m+j)\Delta, (m+j)\Delta)}f(x, -s) f(x, j\Delta-s) dx ds. \end{aligned} $$

We observe that \(\lim _{m\to \infty }\gamma _{f_m}(j\Delta )=\gamma _{f}(j\Delta )\) for all \(j \in \mathbb {Z}\); also

$$\displaystyle \begin{aligned} |\gamma_{f_m}(j\Delta)|\leq \kappa_2 \int_{\mathbb{R} \times \mathbb{R}}|f(x, -s)| |f(x, j\Delta-s)| dx ds, \end{aligned} $$

and \(\sum _{j=-\infty }^{\infty } \int _{\mathbb {R} \times \mathbb {R}}|f(x, -s)| |f(x, j\Delta -s)|dx ds<\infty \) by the computations in (20). Hence, Lebesgue’s Dominated Convergence Theorem implies that \(\lim _{m\to \infty }V^{(m)}_{\Delta }=V_{\Delta }\) and we get that \(Z_{\Delta }^{(m)}\stackrel {\mathrm {d}}{\to } Z_{\Delta }\), where \(Z_{\Delta } \stackrel {\mathrm {d}}{=} \mathrm {N}(0, V_{\Delta })\).

It remains to control the difference \(n^{1/2}(\overline {Y}_{n; \Delta }-\overline {Y}_{n; \Delta }^{(m)})\). We argue as follows. Using similar arguments as above, we note that \(\lim _{m\to \infty }\sum _{j=-\infty }^{\infty }\gamma _{f-f_{m;\Delta }}(j\Delta ) =0\). Hence, we have that

$$\displaystyle \begin{aligned} &\lim_{m\to \infty} \lim_{n\to \infty} {\mathrm{Var}}(n^{1/2}(\overline{Y}_{n; \Delta}-\overline{Y}_{n; \Delta}^{(m)})) \\ &{\,=\,}\lim_{m\to \infty} \lim_{n\to \infty} n{\mathrm{Var}}\left(n^{-1}\sum_{j=1}^n\int_{\mathbb{R} \times \mathbb{R}}(f(x, j\Delta{\,-\,}s)-f_{m;\Delta}(x, j\Delta{\,-\,}s))L(dx, ds)\right)\\ &\stackrel{(\star)}{=}\lim_{m\to \infty} \sum_{j=-\infty}^{\infty} \gamma_{f-f_{m;\Delta}}(j\Delta)=0, \end{aligned} $$

where the equality \((\star )\) follows from [13, Theorem 7.1.1]. Chebyshev’s inequality allows us to conclude that, for any \(\epsilon >0\),

$$\displaystyle \begin{aligned} \lim_{m\to \infty} \limsup_{n\to \infty} \mathbb{P}(n^{1/2}|\overline{Y}_{n; \Delta}-\overline{Y}_{n; \Delta}^{(m)}|>\epsilon)=0. \end{aligned} $$

As stated in [14], the final step of the proof consists of an application of a Slutsky-type theorem as presented in [13, Proposition 6.3.9]. □
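The variance identity used at step \((\star )\), \(\lim _{n\to \infty } n\,{\mathrm {Var}}(\overline {Y}_n)=\sum _j \gamma (j\Delta )\), can be illustrated exactly in the simplest m-dependent setting, a discrete-time MA(2). The coefficients below are arbitrary toy choices; the check is deterministic because for a finite moving average both sides are available in closed form.

```python
# Toy illustration of the variance identity behind step (*): for a
# q-dependent moving average Y_j = sum_k theta_k eps_{j-k} with iid noise,
# n * Var(Ybar_n) -> sum_j gamma(j) = sigma^2 * (sum_k theta_k)^2.
import numpy as np

theta = np.array([1.0, 0.4, -0.3])   # MA(2) kernel, so (Y_j) is 2-dependent
sigma2 = 1.7                          # noise variance

def gamma(h):
    """gamma(h) = sigma^2 * sum_k theta_k theta_{k+|h|}; zero for |h| > 2."""
    h = abs(h)
    if h >= len(theta):
        return 0.0
    return sigma2 * float(np.dot(theta[: len(theta) - h], theta[h:]))

lr_var = sum(gamma(j) for j in range(-5, 6))   # long-run variance sum_j gamma(j)

def n_var_mean(n):
    """Exact n * Var(Ybar_n) = sum_{|j|<n} (1 - |j|/n) * gamma(j)."""
    return sum((1.0 - abs(j) / n) * gamma(j) for j in range(-n + 1, n))

print(n_var_mean(10), n_var_mean(1000), lr_var)
```

As n grows, the exact finite-n variance converges to the long-run variance, mirroring the limit that identifies \(V_{\Delta }\) in the proof.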

Proof (Proof of Lemma 1)

For \(t_1, t_2, t_3, t_4 \in \mathbb {R}\), we have, for any \(a_1, a_2, a_3, a_4 \in \mathbb {R}\), the following expression for the joint characteristic function

$$\displaystyle \begin{aligned} &\psi((a_1,a_2,a_3,a_4);(Y_{t_1},Y_{t_2},Y_{t_3},Y_{t_4})):= \mathbb{E}(\exp(i(a_1Y_{t_1}+a_2Y_{t_2}+a_3Y_{t_3}+a_4Y_{t_4})))\\ &=\mathbb{E}\left[\exp\left(i \int_{\mathbb{R}\times \mathbb{R}}\left(\sum_{j=1}^4a_jf(x, t_j-s)\right)L(dx, ds)\right) \right]\\ &= \exp\left[ \int_{\mathbb{R} \times \mathbb{R}} C\left(\sum_{j=1}^4a_jf(x, t_j-s); L' \right) dx ds \right], \end{aligned} $$

where \(C(\cdot ; L')\) denotes the cumulant function of the Lévy seed \(L'\), which we will present next.

Suppose \(L'\) has characteristic triplet \((c, A, \nu )\) w.r.t. the truncation function \(\tau (y)=\mathbb {I}_{[-1,1]}(y)\). That is, we have the following representation of its characteristic function, for any \(\theta \in \mathbb {R}\),

$$\displaystyle \begin{aligned} \mathbb{E}(\exp(i \theta L'))=\exp\left(i c \theta -\frac{1}{2}A \theta^2 + \int_{\mathbb{R} }(e^{iy \theta}-1-i\theta y \tau(y))\nu(dy)\right). \end{aligned} $$

We recall that \(\mathbb {E}(L')=c +\int _{\mathbb {R}}y(1-\tau (y))\nu (dy)\). Since we are assuming that \(\mathbb {E}(L')=0\), we get that \(c=-\int _{\mathbb {R}}y(1-\tau (y))\nu (dy)=-\int _{\mathbb {R}}y\mathbb {I}_{[-1,1]^c}(y)\nu (dy)\) and, hence,

$$\displaystyle \begin{aligned} \mathbb{E}(\exp(i \theta L'))=\exp\left( -\frac{1}{2}A \theta^2 + \int_{\mathbb{R} }(e^{iy \theta}-1-i\theta y )\nu(dy)\right). \end{aligned} $$

That is, the corresponding cumulant function is given by

$$\displaystyle \begin{aligned} C(\theta; L')= -\frac{1}{2}A \theta^2 + \int_{\mathbb{R} }(e^{iy \theta}-1-i\theta y )\nu(dy). \end{aligned} $$

Moreover,

$$\displaystyle \begin{aligned} &C((a_1,a_2,a_3,a_4);(Y_{t_1},Y_{t_2},Y_{t_3},Y_{t_4})) :=\int_{\mathbb{R} \times \mathbb{R}} C\left(\sum_{j=1}^4a_jf(x, t_j-s); L' \right) dx ds\\ &= -\frac{1}{2}A \int_{\mathbb{R} \times \mathbb{R}} \left(\sum_{j=1}^4a_jf(x, t_j-s)\right)^2 dx ds \\ & \quad +\int_{\mathbb{R} \times \mathbb{R}} \int_{\mathbb{R} }\left(e^{iy \sum_{j=1}^4a_jf(x, t_j-s)}-1-i\sum_{j=1}^4a_jf(x, t_j-s) y \right)\nu(dy) dx ds\\ &= -\frac{1}{2}A \sum_{j,k=1}^4 a_j a_k \int_{\mathbb{R} \times \mathbb{R}} f(x, t_j-s) f(x, t_k-s) dx ds \\ & \quad +\int_{\mathbb{R} \times \mathbb{R}} \int_{\mathbb{R} }\left(e^{iy \sum_{j=1}^4a_jf(x, t_j-s)}-1-i\sum_{j=1}^4a_jf(x, t_j-s) y \right)\nu(dy) dx ds\\ &= -\frac{1}{2}A \sum_{j=1}^4 a_j^2 \int_{\mathbb{R} \times \mathbb{R}} f^2(x, t_j-s) dx ds \\ & \quad -\frac{1}{2}A \sum_{j,k=1, j\neq k}^4 a_j a_k \int_{\mathbb{R} \times \mathbb{R}} f(x, t_j-s) f(x, t_k-s) dx ds \\ & \quad +\int_{\mathbb{R} \times \mathbb{R}} \int_{\mathbb{R} }\left(e^{iy \sum_{j=1}^4a_jf(x, t_j-s)}-1-i\sum_{j=1}^4a_jf(x, t_j-s) y \right)\nu(dy) dx ds. \end{aligned} $$

Next, we compute the fourth moments, where we recall that

$$\displaystyle \begin{aligned} & \left.\frac{\partial^4}{\partial a_1 \partial a_2 \partial a_3 \partial a_4}\psi((a_1,a_2,a_3,a_4);(Y_{t_1},Y_{t_2},Y_{t_3},Y_{t_4}))\right|{}_{a_1=a_2=a_3=a_4=0} \\ &\quad = \mathbb{E}(Y_{t_1}Y_{t_2}Y_{t_3}Y_{t_4}). \end{aligned} $$

We now abbreviate the functions to \(\psi \) and C without stating their arguments, and a subscript denotes the corresponding partial derivative, e.g. \(C_{a_1}=\frac {\partial }{\partial a_1}C((a_1,a_2,a_3,a_4);(Y_{t_1},Y_{t_2},Y_{t_3},Y_{t_4}))\) and similarly for higher-order partial derivatives. Since \(\psi = \exp (C)\), we have

$$\displaystyle \begin{aligned} \psi_{a_1} &= \psi C_{a_1},\\ \psi_{a_1,a_2} &= \psi [ C_{a_1, a_2}+ C_{a_1} C_{a_2}],\\ \psi_{a_1,a_2,a_3} &= \psi [ (C_{a_1, a_2}+ C_{a_1} C_{a_2}) C_{a_3} +C_{a_1, a_2, a_3} +C_{a_1, a_3}C_{a_2}+C_{a_1}C_{a_2, a_3} ]\\ &= \psi [ C_{a_1} C_{a_2}C_{a_3} +C_{a_1}C_{a_2, a_3} +C_{a_2} C_{a_1, a_3} +C_{a_3} C_{a_1, a_2} +C_{a_1, a_2, a_3} ],\\ \psi_{a_1,a_2,a_3,a_4} &= \psi [ C_{a_1} C_{a_2}C_{a_3} C_{a_4} +C_{a_1}C_{a_2, a_3} C_{a_4} +C_{a_2} C_{a_1, a_3}C_{a_4} \\ & \quad +C_{a_3} C_{a_1, a_2} C_{a_4} +C_{a_1, a_2, a_3}C_{a_4} \\ & + C_{a_1, a_4} C_{a_2}C_{a_3} + C_{a_1} C_{a_2, a_4}C_{a_3} + C_{a_1} C_{a_2}C_{a_3, a_4}\\ & \quad + C_{a_1, a_4}C_{a_2, a_3} + C_{a_1}C_{a_2, a_3, a_4} \\ & + C_{a_2, a_4} C_{a_1, a_3} + C_{a_2} C_{a_1, a_3, a_4} + C_{a_3,a_4} C_{a_1, a_2} \\ &\quad + C_{a_3} C_{a_1, a_2, a_4} +C_{a_1, a_2, a_3, a_4} ]. \end{aligned} $$
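This fourth-order product-rule expansion of \(\psi =\exp (C)\) (15 terms, one per partition of \(\{a_1,a_2,a_3,a_4\}\)) can be verified symbolically; the sketch below does so with sympy for a generic smooth function C.

```python
# Symbolic check of the fourth-order product-rule expansion of psi = exp(C):
# differentiating exp(C) w.r.t. a1,...,a4 must yield the 15-term sum, one
# term per set partition of {a1, a2, a3, a4}.
import sympy as sp

a1, a2, a3, a4 = sp.symbols('a1 a2 a3 a4')
C = sp.Function('C')(a1, a2, a3, a4)
psi = sp.exp(C)

def D(*vs):  # shorthand for partial derivatives of C
    return sp.Derivative(C, *vs)

lhs = sp.diff(psi, a1, a2, a3, a4)
rhs = psi * (
    D(a1)*D(a2)*D(a3)*D(a4)                                         # 1x1x1x1
    + D(a1)*D(a2, a3)*D(a4) + D(a2)*D(a1, a3)*D(a4)                 # 2x1x1
    + D(a3)*D(a1, a2)*D(a4) + D(a1, a4)*D(a2)*D(a3)
    + D(a1)*D(a2, a4)*D(a3) + D(a1)*D(a2)*D(a3, a4)
    + D(a1, a2)*D(a3, a4) + D(a1, a3)*D(a2, a4) + D(a1, a4)*D(a2, a3)  # 2x2
    + D(a1, a2, a3)*D(a4) + D(a1, a2, a4)*D(a3)                     # 3x1
    + D(a1, a3, a4)*D(a2) + D(a2, a3, a4)*D(a1)
    + D(a1, a2, a3, a4)                                             # 4
)
diff_expr = sp.expand(lhs - rhs)
print(diff_expr)
```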

Here we have

$$\displaystyle \begin{aligned} C_{a_i}|{}_{a_1=a_2=a_3=a_4=0}&=0, \quad \mathrm{for}\; i=1, 2, 3, 4,\\ C_{a_i,a_j}|{}_{a_1=a_2=a_3=a_4=0} &=-\left(A+ \int_{\mathbb{R}} y^2 \nu(dy)\right) \int_{\mathbb{R} \times \mathbb{R}} f(x, t_i-s) f(x, t_j-s) dx ds,\\ & \qquad \mathrm{for}\, i, j = 1, 2, 3, 4,\\ C_{a_1,a_2,a_3,a_4}&=\int_{\mathbb{R}\times \mathbb{R}}\prod_{j=1}^4 f(x, t_j-s)dxds \int_{\mathbb{R}}y^4\nu(dy). \end{aligned} $$

The above results imply that

$$\displaystyle \begin{aligned} &\mathbb{E}(Y_{t_1}Y_{t_2}Y_{t_3}Y_{t_4})\\ &= \left(A+ \int_{\mathbb{R}} y^2 \nu(dy)\right)^2 \\ &\left( \int_{\mathbb{R} \times \mathbb{R}} f(x, t_1-s) f(x, t_2-s)dx ds \int_{\mathbb{R} \times \mathbb{R}} f(x, t_3-s) f(x, t_4-s)dx ds \right .\\ & + \int_{\mathbb{R} \times \mathbb{R}} f(x, t_1-s) f(x, t_3-s)dx ds \int_{\mathbb{R} \times \mathbb{R}} f(x, t_2-s) f(x, t_4-s)dx ds \\ &\left. +\int_{\mathbb{R} \times \mathbb{R}} f(x, t_1-s) f(x, t_4-s)dx ds \int_{\mathbb{R} \times \mathbb{R}} f(x, t_2-s) f(x, t_3-s)dx ds \right)\\ & + \int_{\mathbb{R}} y^4 \nu(dy) \int_{\mathbb{R} \times \mathbb{R}} f(x, t_1-s) f(x, t_2-s) f(x, t_3-s) f(x, t_4-s) dx ds. \end{aligned} $$

We note that \(\kappa _4:=\int _{\mathbb {R}}y^4\nu (dy)=(\eta -3)\kappa _2^2\) and \(\kappa _2=A+\int _{\mathbb {R}}y^2\nu (dy)\). We can further simplify the above formula as follows:

$$\displaystyle \begin{aligned} &\mathbb{E}(Y_{t_1}Y_{t_2}Y_{t_3}Y_{t_4})\\ &= \left(A+ \int_{\mathbb{R}} y^2 \nu(dy)\right)^2 \\ &\left( \int_{\mathbb{R} \times \mathbb{R}} f(x, t_1-t_2+s) f(x, s)dx ds \int_{\mathbb{R} \times \mathbb{R}} f(x, t_3-t_4+s) f(x, s)dx ds \right .\\ & + \int_{\mathbb{R} \times \mathbb{R}} f(x, t_1-t_3+s) f(x, s)dx ds \int_{\mathbb{R} \times \mathbb{R}} f(x, t_2-t_4+s) f(x, s)dx ds \\ &\left. +\int_{\mathbb{R} \times \mathbb{R}} f(x, t_1-t_4+s) f(x, s)dx ds \int_{\mathbb{R} \times \mathbb{R}} f(x, t_2-t_3+s) f(x, s)dx ds \right)\\ & + \int_{\mathbb{R}} y^4 \nu(dy) \int_{\mathbb{R} \times \mathbb{R}} f(x, t_1-t_3+s) f(x, t_2-t_3+s) f(x, s) f(x, t_4-t_3+s) dx ds\\ &= \gamma(t_1-t_2)\gamma(t_3-t_4) + \gamma(t_1-t_3) \gamma(t_2-t_4) +\gamma(t_1-t_4)\gamma(t_2-t_3) \\ & + \kappa_4 \int_{\mathbb{R} \times \mathbb{R}}f(x, t_1-t_3+s) f(x, t_2-t_3+s) f(x, s) f(x, t_4-t_3+s) dx ds. \end{aligned} $$

□
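In the Gaussian case (\(\eta = 3\), so \(\kappa_4 = 0\)), the fourth-moment formula above reduces to the classical Isserlis/Wick pairing identity. This special case can be verified symbolically; the sketch below (using sympy, with hypothetical covariance placeholders \(s_{ij}\)) differentiates the Gaussian moment generating function directly:

```python
import sympy as sp

a = sp.symbols('a1:5')
# Hypothetical symmetric covariance entries s_ij of a zero-mean Gaussian vector
s = {(i, j): sp.Symbol(f's{min(i, j)}{max(i, j)}') for i in range(4) for j in range(4)}
# Gaussian cumulant function: C(a) = a' Sigma a / 2
C = sp.Rational(1, 2) * sum(s[i, j] * a[i] * a[j] for i in range(4) for j in range(4))
psi = sp.exp(C)

# Fourth moment = fourth mixed derivative of psi at a = 0
m4 = sp.diff(psi, *a).subs({ai: 0 for ai in a})

# Isserlis/Wick pairing sum over the three pair partitions
pairings = s[0, 1] * s[2, 3] + s[0, 2] * s[1, 3] + s[0, 3] * s[1, 2]
assert sp.expand(m4 - pairings) == 0
```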

Proof (Proof of Proposition 7)

We first expand the covariance of the sample autocovariances as follows

$$\displaystyle \begin{aligned} {\mathrm{Cov}}(\widehat{\gamma}_{n;\Delta}^*(\Delta p), \widehat{\gamma}_{n;\Delta}^*(\Delta q)) =\mathbb{E}(\widehat{\gamma}_{n;\Delta}^*(\Delta p) \widehat{\gamma}_{n;\Delta}^*(\Delta q)) - \mathbb{E}(\widehat{\gamma}_{n;\Delta}^*(\Delta p)) \mathbb{E}(\widehat{\gamma}_{n;\Delta}^*(\Delta q)), \end{aligned} $$

where

$$\displaystyle \begin{aligned} \mathbb{E}(\widehat{\gamma}_{n;\Delta}^*(\Delta p)) &=\frac{1}{n}\sum_{j=1}^{n}\mathbb{E}(Y_{j\Delta}Y_{(j+p)\Delta})= \frac{1}{n}\sum_{j=1}^{n}\gamma(p\Delta)=\gamma(p\Delta),\\ \mathbb{E}(\widehat{\gamma}_{n;\Delta}^*(\Delta q)) &=\frac{1}{n}\sum_{k=1}^{n}\mathbb{E}(Y_{k\Delta}Y_{(k+q)\Delta})= \frac{1}{n}\sum_{k=1}^{n}\gamma(q\Delta)=\gamma(q\Delta). \end{aligned} $$

Also,

$$\displaystyle \begin{aligned} &\mathbb{E}(\widehat{\gamma}_{n;\Delta}^*(\Delta p) \widehat{\gamma}_{n;\Delta}^*(\Delta q)) =\frac{1}{n^2}\sum_{j=1}^{n}\sum_{k=1}^{n}\mathbb{E}( Y_{j\Delta}Y_{(j+p)\Delta} Y_{k\Delta}Y_{(k+q)\Delta} ), \end{aligned} $$

where

$$\displaystyle \begin{aligned} &\mathbb{E}( Y_{j\Delta}Y_{(j+p)\Delta} Y_{k\Delta}Y_{(k+q)\Delta} )\\ &=\gamma(p\Delta)\gamma(q\Delta) + \gamma((k-j)\Delta) \gamma((j+p-k-q)\Delta) \\ &\quad +\gamma((j-k-q)\Delta)\gamma((j+p-k)\Delta) \\ & \quad + \kappa_4 \int_{\mathbb{R} \times \mathbb{R}} f(x, (j-k)\Delta+s) f(x, (j-k+p)\Delta+s) f(x, s) \\ &\quad f(x, q\Delta+s) dx ds\\ &=\gamma(p\Delta)\gamma(q\Delta) + \gamma((j-k)\Delta) \gamma((j-k+p-q)\Delta) \\ &\quad +\gamma((j-k-q)\Delta)\gamma((j-k+p)\Delta) \\ & \quad + \kappa_4 \int_{\mathbb{R} \times \mathbb{R}} f(x, (j-k)\Delta+s) f(x, (j-k+p)\Delta+s) f(x, s)\\ &\quad f(x, q\Delta+s) dx ds\\ &\stackrel{l=j-k}{=}\gamma(p\Delta)\gamma(q\Delta) + \gamma(l\Delta) \gamma((l+p-q)\Delta) +\gamma((l-q)\Delta)\gamma((l+p)\Delta) \\ & \quad + \kappa_4 \int_{\mathbb{R} \times \mathbb{R}} f(x, l\Delta+s) f(x, (l+p)\Delta+s) f(x, s) f(x, q\Delta+s) dx ds. \end{aligned} $$

Now we subtract \(\gamma (p\Delta )\gamma (q\Delta )\), set \(l=j-k\), interchange the order of summation and use stationarity to obtain

$$\displaystyle \begin{aligned} &{\mathrm{Cov}}(\widehat{\gamma}_{n;\Delta}^*(\Delta p), \widehat{\gamma}_{n;\Delta}^*(\Delta q))\\ &=\frac{1}{n^2}\sum_{j=1}^{n}\sum_{k=1}^{n}\mathbb{E}( Y_{j\Delta}Y_{(j+p)\Delta} Y_{k\Delta}Y_{(k+q)\Delta} ) -\gamma(p\Delta)\gamma(q\Delta)\\ & =\frac{1}{n^2}\sum_{j=1}^{n}\sum_{k=1}^{n}\left( \gamma((j-k)\Delta) \gamma((j-k+p-q)\Delta) \right. \\ &\quad +\gamma((j-k-q)\Delta)\gamma((j-k+p)\Delta) \\ & \quad + \kappa_4 \int_{\mathbb{R} \times \mathbb{R}} f(x, (j-k)\Delta+s) f(x, (j-k+p)\Delta+s) \\ &\quad f(x, s) f(x, q\Delta+s) dx ds\Big)\\ & =\frac{1}{n^2}\sum_{|l|<n}\sum_{k=1}^{n-|l|}\left( \gamma(l\Delta) \gamma((l+p-q)\Delta) +\gamma((l-q)\Delta)\gamma((l+p)\Delta)\right. \\ & \quad + \left. \kappa_4 \int_{\mathbb{R} \times \mathbb{R}} f(x, l\Delta+s) f(x, (l+p)\Delta+s) f(x, s) f(x, q\Delta+s) dx ds\right)\\ & =\frac{1}{n^2}\sum_{|l|<n}(n-|l|)T_{l,p,q;\Delta} =\frac{1}{n}\sum_{|l|<n}\left(1-\frac{|l|}{n}\right)T_{l,p,q;\Delta}, \end{aligned} $$

where

$$\displaystyle \begin{aligned} T_{l,p,q;\Delta}&:= \gamma(l\Delta) \gamma((l+p-q)\Delta) +\gamma((l-q)\Delta)\gamma((l+p)\Delta) \\ & \quad + \kappa_4 \int_{\mathbb{R} \times \mathbb{R}} f(x, l\Delta+s) f(x, (l+p)\Delta+s) f(x, s) f(x, q\Delta+s) dx ds. \end{aligned} $$
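The passage from the double sum over \((j,k)\) to the single sum over \(l=j-k\) uses the elementary Toeplitz identity \(n^{-2}\sum_{j,k=1}^{n} h(j-k) = n^{-1}\sum_{|l|<n}(1-|l|/n)\,h(l)\). A quick numerical sanity check of this identity, sketched with a hypothetical summand \(h\):

```python
import math

# Hypothetical absolutely summable summand h(l)
h = lambda l: math.exp(-0.1 * abs(l)) * math.cos(0.3 * l)

n = 50
# Double sum over (j, k) with summand depending only on j - k
double = sum(h(j - k) for j in range(1, n + 1) for k in range(1, n + 1)) / n**2
# Equivalent single sum over l = j - k with triangular (Cesaro) weights
single = sum((1 - abs(l) / n) * h(l) for l in range(-(n - 1), n)) / n
assert abs(double - single) < 1e-12
```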

Hence, we have

$$\displaystyle \begin{aligned} \lim_{n\to \infty}n{\mathrm{Cov}}(\widehat{\gamma}_{n;\Delta}^*(\Delta p), \widehat{\gamma}_{n;\Delta}^*(\Delta q)) &=\lim_{n\to \infty}\sum_{|l|<n}\left(1-\frac{|l|}{n}\right)T_{l,p,q;\Delta}\\ &=\sum_{l=-\infty}^{\infty}T_{l,p,q;\Delta}, \end{aligned} $$

where

$$\displaystyle \begin{aligned} \sum_{l=-\infty}^{\infty}T_{l,p,q;\Delta} &= \sum_{l=-\infty}^{\infty}[\gamma(l\Delta) \gamma((l+p-q)\Delta) +\gamma((l-q)\Delta)\gamma((l+p)\Delta)] \\ &\quad + \kappa_4 \int_{\mathbb{R} \times \mathbb{R}}\sum_{l=-\infty}^{\infty} f(x, l\Delta+s) f(x, (l+p)\Delta+s) \\ & \qquad f(x, s) f(x, q\Delta+s) dx ds, \end{aligned} $$

by the Dominated Convergence Theorem since (13) holds. More precisely, let us justify why \(\sum _{l=-\infty }^{\infty }|T_{l,p,q;\Delta }|<\infty \). The finiteness of \(\sum _{l=-\infty }^{\infty }|\gamma (l\Delta ) \gamma ((l+p-q)\Delta ) +\gamma ((l-q)\Delta )\gamma ((l+p)\Delta )|\) follows from (13). For the second term, for \(q \in \mathbb {Z}\), define

$$\displaystyle \begin{aligned} & \Bigg(\widetilde{G}_{q;\Delta}:\mathbb{R}\times [0, \Delta] \to \mathbb{R}, (x, u) \mapsto \widetilde{G}_{q;\Delta}(x,u) \\ &\quad =\sum_{j=-\infty}^{\infty}|f(x, u+j\Delta)||f(x, u+(j+q)\Delta)|\Bigg), \end{aligned} $$

which is in \(L^2(\mathbb {R} \times [0, \Delta ])\) due to (11). We consider the periodic continuation of \(\widetilde {G}_{q;\Delta }\) and set

$$\displaystyle \begin{aligned} & \Bigg(\widetilde{G}_{q;\Delta}:\mathbb{R}\times \mathbb{R} \to \mathbb{R}, (x, u) \mapsto \widetilde{G}_{q;\Delta}(x,u)\\ &\quad =\sum_{j=-\infty}^{\infty}|f(x, u+j\Delta)||f(x, u+(j+q)\Delta)|\Bigg). \end{aligned} $$

Since \(\widetilde {G}_{q;\Delta }\) is periodic and, restricted to \(\mathbb {R}\times [0, \Delta ]\), square-integrable, we have

$$\displaystyle \begin{aligned} &\int_{\mathbb{R} \times \mathbb{R}} \sum_{l=-\infty}^{\infty} |f(x, l\Delta+s) f(x, (l+p)\Delta+s)| |f(x, s) f(x, q\Delta+s)| dx ds\\ &=\int_{\mathbb{R} \times \mathbb{R}}\widetilde{G}_{p;\Delta}(x,s) |f(x, s) f(x, q\Delta+s)| dx ds \\ &=\sum_{j=-\infty}^{\infty}\int_{\mathbb{R} \times [j\Delta, (j+1)\Delta]}\widetilde{G}_{p;\Delta}(x,s) |f(x, s) f(x, q\Delta+s)| dx ds\\ &=\sum_{j=-\infty}^{\infty}\int_{\mathbb{R} \times [0, \Delta]}\widetilde{G}_{p;\Delta}(x,s+j\Delta) |f(x, s+j\Delta) f(x, (q+j)\Delta+s)| dx ds\\ &=\sum_{j=-\infty}^{\infty}\int_{\mathbb{R} \times [0, \Delta]}\widetilde{G}_{p;\Delta}(x,s) |f(x, s+j\Delta) f(x, (q+j)\Delta+s)| dx ds\\ &=\int_{\mathbb{R} \times [0, \Delta]}\widetilde{G}_{p;\Delta}(x,s) \sum_{j=-\infty}^{\infty}|f(x, s+j\Delta) f(x, (q+j)\Delta+s)| dx ds\\ &=\int_{\mathbb{R} \times [0, \Delta]}\widetilde{G}_{p;\Delta}(x,s) \widetilde{G}_{q;\Delta}(x,s) dx ds<\infty. \end{aligned} $$

Equation (14) follows from the same calculations as above without the modulus sign in the definition of \(\widetilde {G}\). □

Proof (Proof of Theorem 2)

  1.

    For a function f with compact support, the result can be deduced as in [13, Proposition 7.3.2]. The general case can be handled as follows, where we adapt the proof of [14, Theorem 3.5] to our more general setting. As in the proof for the sample mean, define the function \(f_{m;\Delta }(x,s):=f(x,s)\mathbb {I}_{(-m \Delta , m \Delta )}(s)\), for \(m \in \mathbb {N}, x, s \in \mathbb {R}\), and set

    $$\displaystyle \begin{aligned} Y_{j; \Delta}^m &:=\int_{\mathbb{R} \times \mathbb{R}}f_{m; \Delta}(x,j\Delta-s)L(dx, ds) \\ & =\int_{\mathbb{R} \times ((-m+j)\Delta, (m+j)\Delta)}f(x, j\Delta-s) L(dx, ds). \end{aligned} $$

    We denote by \(\gamma _m\) the autocovariance function of the process \((Y_{j; \Delta }^m)_{j\in \mathbb {Z}}\). We set

    $$\displaystyle \begin{aligned} \gamma_{n;\Delta;m}^*(p\Delta)=\frac{1}{n}\sum_{j=1}^n Y_{j; \Delta}^mY_{j+p; \Delta}^m, \quad p=0, \ldots, h. \end{aligned} $$

    Then, we have

    $$\displaystyle \begin{aligned} &\sqrt{n}(\gamma_{n;\Delta;m}^*(0)-\gamma_m(0), \ldots, \gamma_{n;\Delta;m}^*(h\Delta)-\gamma_m(h\Delta))^{\top} \\ &\quad \stackrel{\mathrm{d}}{\to}Z_{\Delta;m}\sim \mathrm{N}(0,V_{\Delta;m}), \quad n \to \infty, \end{aligned} $$

    where the asymptotic covariance matrix is given by \(V_{\Delta ;m}=(v_{pq; \Delta ;m})_{p,q=0,\ldots ,h} \in \mathbb {R}^{h+1,h+1}\) with \(v_{pq;\Delta ;m}\) defined as

    $$\displaystyle \begin{aligned} v_{pq;\Delta;m}&:=(\eta-3)\kappa_2^2\int_{\mathbb{R}\times[0,\Delta]}G_{p;\Delta;m}(x,u)G_{q;\Delta;m}(x,u)dxdu \\ &+\sum_{l=-\infty}^{\infty}[\gamma_m(l\Delta)\gamma_m((l+p-q)\Delta)\\ &+ \gamma_m((l-q)\Delta)\gamma_m((l+p)\Delta)], \\ G_{q;\Delta;m}(x,u)&:=\sum_{j=-\infty}^{\infty}f_{m;\Delta}(x, u+j\Delta)f_{m;\Delta}(x, u+(j+q)\Delta), \quad u \in [0, \Delta]. \end{aligned} $$

    We would like to show that \(\lim _{m\to \infty }V_{\Delta ;m}=V_{\Delta }\). For this, we note that

    $$\displaystyle \begin{aligned} G_{q;\Delta;m}(x,u)&=\sum_{j=-\infty}^{\infty}f_{m;\Delta}(x, u+j\Delta)f_{m;\Delta}(x, u+(j+q)\Delta)\\ & \to G_{q;\Delta}(x,u)=\sum_{j=-\infty}^{\infty}f(x, u+j\Delta)f(x, u+(j+q)\Delta), \end{aligned} $$

    uniformly in \(u \in [0, \Delta ]\), as \(m\to \infty \), by Lebesgue's Dominated Convergence Theorem, since the function \((x,u)\mapsto \sum _{j=-\infty }^{\infty }|f(x, u+j\Delta )||f(x, u+(j+q)\Delta )|\) is in \(L^2(\mathbb {R} \times [0, \Delta ])\) by (11) and is therefore finite almost everywhere. Moreover, we note that

    $$\displaystyle \begin{aligned} |G_{q;\Delta;m}(x,u)|&\leq\sum_{j=-\infty}^{\infty}|f(x, u+j\Delta)||f(x, u+(j+q)\Delta)|, \end{aligned} $$

    uniformly in u and m. Hence, an application of the Dominated Convergence Theorem leads to \(G_{q;\Delta ;m}\to G_{q;\Delta }\) in \(L^2(\mathbb {R} \times [0, \Delta ])\) as \(m \to \infty \). We also note that

    $$\displaystyle \begin{aligned} |\gamma_{m}(j\Delta)|&\leq \int_{\mathbb{R}\times \mathbb{R}}|f(x, u)||f(x, u+j\Delta)|dxdu, \quad \forall m \in \mathbb{N}, \forall j \in \mathbb{Z}. \end{aligned} $$

    Moreover, \(\lim _{m\to \infty }\gamma _m(j\Delta )=\gamma (j\Delta )\) for all \(j \in \mathbb {Z}\). Assumption (15) together with the Dominated Convergence Theorem allows us to conclude that \((\gamma _m(j\Delta ))_{j \in \mathbb {Z}}\) converges in \(L^2(\mathbb {Z})\) to \((\gamma (j\Delta ))_{j \in \mathbb {Z}}\). Combining this result with our earlier finding of the convergence of \(G_{q;\Delta ;m}\) implies that \(\lim _{m\to \infty }V_{\Delta ;m}=V_{\Delta }\) and

    $$\displaystyle \begin{aligned} Z_{\Delta;m}\stackrel{\mathrm{d}}{\to} Z_{\Delta}, \quad m \to \infty, \end{aligned} $$

    where \(Z_{\Delta }\stackrel {\mathrm {d}}{=}\mathrm {N}(0, V_{\Delta })\).

    Now, using the same arguments as in the proof of [13, Equation (7.3.9)], we can show that

    $$\displaystyle \begin{aligned} \lim_{m\to \infty}\limsup_{n\to \infty}\mathbb{P}(n^{1/2}|\gamma_{n;\Delta;m}^*(q\Delta)-\gamma_m(q\Delta)-\widehat{\gamma}_{n;\Delta}^*(q \Delta)+\gamma(q\Delta)|>\epsilon)=0, \end{aligned} $$

    for all \(\epsilon >0, q\in \{0, \ldots , h\}\). An application of a variant of Slutsky's theorem (see [13, Proposition 6.3.9]) completes the proof.

  2.

    This part can be proven in a similar way to the proof of [13, Proposition 7.3.4]. Also, as in the proof of [14, Theorem 3.5 b)], we observe that \(\sqrt {n}\, \overline {Y}_{n;\Delta }\) converges to a Gaussian random variable as \(n \to \infty \) due to Theorem 1, and \(\overline {Y}_{n;\Delta }\) converges to 0 in probability as \(n \to \infty \) (since we assume here that \(\mu =0\)).

  3.

    For the final part of the theorem, we can argue as in the proof of [13, Theorem 7.2.1], where the \(w_{pq;\Delta }\) are obtained via the Bartlett formula

    $$\displaystyle \begin{aligned} w_{pq;\Delta}=(v_{pq;\Delta}-\rho(p\Delta)v_{0q;\Delta} -\rho(q\Delta)v_{p0;\Delta} +\rho(p\Delta)\rho(q\Delta)v_{00;\Delta})/\gamma^2(0). \end{aligned} $$

    We can simplify the above formula and write

    $$\displaystyle \begin{aligned} w_{pq;\Delta}=w_{pq;\Delta}^{(1)}+w_{pq;\Delta}^{(2)}, \end{aligned} $$

    where

    $$\displaystyle \begin{aligned} w_{pq;\Delta}^{(1)}&:=\frac{(\eta-3)\kappa_2^2}{\gamma^2(0)} \int_{\mathbb{R} \times [0,\Delta]} (G_{p;\Delta}(x,u)-G_{0;\Delta}(x,u)\rho(p\Delta))\\ &\quad \cdot (G_{q;\Delta}(x,u)-G_{0;\Delta}(x,u)\rho(q\Delta))dxdu, \end{aligned} $$

    and

    $$\displaystyle \begin{aligned} w_{pq;\Delta}^{(2)}&:= \sum_{l=-\infty}^{\infty} \left[ \rho(l\Delta)\rho((l+p-q)\Delta)+\rho((l-q)\Delta)\rho((l+p)\Delta) \right.\\ & -2\rho(l\Delta)\rho((l-q)\Delta)\rho(p\Delta) -2\rho(l\Delta)\rho((l+p)\Delta)\rho(q\Delta)\\ &\left. +2\rho(p\Delta)\rho(q\Delta)\rho^2(l\Delta)\right]. \end{aligned} $$

    Note that

    $$\displaystyle \begin{aligned} \sum_{l=-\infty}^{\infty} \rho(l\Delta)\rho((l+p-q)\Delta) &= \sum_{l=-\infty}^{\infty} \rho((l+q)\Delta)\rho((l+p)\Delta),\\ \sum_{l=-\infty}^{\infty} \rho(l\Delta)\rho((l-q)\Delta)\rho(p\Delta) &= \sum_{l=-\infty}^{\infty} \rho((l+q)\Delta)\rho(l\Delta)\rho(p\Delta). \end{aligned} $$

    Hence,

    $$\displaystyle \begin{aligned} w_{pq;\Delta}^{(2)}&= \sum_{l=-\infty}^{\infty} \left[ \rho((l+q)\Delta)\rho((l+p)\Delta)+\rho((l-q)\Delta)\rho((l+p)\Delta) \right.\\ & -2\rho((l+q)\Delta)\rho(l\Delta)\rho(p\Delta) -2\rho(l\Delta)\rho((l+p)\Delta)\rho(q\Delta)\\ &\left. +2\rho(p\Delta)\rho(q\Delta)\rho^2(l\Delta)\right]. \end{aligned} $$

    Therefore,

    $$\displaystyle \begin{aligned} w_{pq;\Delta}&= \frac{(\eta-3)\kappa_2^2}{\gamma^2(0)} \int_{\mathbb{R} \times [0,\Delta]} (G_{p;\Delta}(x,u)-G_{0;\Delta}(x,u)\rho(p\Delta))\\ &\quad \cdot (G_{q;\Delta}(x,u)-G_{0;\Delta}(x,u)\rho(q\Delta))dxdu\\ &+\sum_{l=-\infty}^{\infty} \left[ \rho((l+q)\Delta)\rho((l+p)\Delta)+\rho((l-q)\Delta)\rho((l+p)\Delta) \right.\\ & -2\rho((l+q)\Delta)\rho(l\Delta)\rho(p\Delta) -2\rho(l\Delta)\rho((l+p)\Delta)\rho(q\Delta)\\ &\left. +2\rho(p\Delta)\rho(q\Delta)\rho^2(l\Delta)\right]. \end{aligned} $$

□
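The index shifts \(l \mapsto l+q\) used in simplifying \(w^{(2)}_{pq;\Delta}\) above can be checked numerically, e.g. for an exponential autocorrelation function. The sketch below uses hypothetical parameter values and truncated sums; it is an illustration, not part of the proof:

```python
import math

# Hypothetical parameter values for an exponential autocorrelation function
lam, Delta, p, q = 0.4, 1.0, 1, 2
rho = lambda t: math.exp(-lam * abs(t))
L = range(-2000, 2001)

# w^(2) before the index shift
w_orig = sum(
    rho(l * Delta) * rho((l + p - q) * Delta)
    + rho((l - q) * Delta) * rho((l + p) * Delta)
    - 2 * rho(l * Delta) * rho((l - q) * Delta) * rho(p * Delta)
    - 2 * rho(l * Delta) * rho((l + p) * Delta) * rho(q * Delta)
    + 2 * rho(p * Delta) * rho(q * Delta) * rho(l * Delta) ** 2
    for l in L
)
# w^(2) after shifting l -> l + q in the first and third terms
w_shift = sum(
    rho((l + q) * Delta) * rho((l + p) * Delta)
    + rho((l - q) * Delta) * rho((l + p) * Delta)
    - 2 * rho((l + q) * Delta) * rho(l * Delta) * rho(p * Delta)
    - 2 * rho(l * Delta) * rho((l + p) * Delta) * rho(q * Delta)
    + 2 * rho(p * Delta) * rho(q * Delta) * rho(l * Delta) ** 2
    for l in L
)
assert abs(w_orig - w_shift) < 1e-10
```

The two truncated sums agree up to boundary terms of order \(e^{-\lambda(2N-q)\Delta}\), which are negligible for the chosen truncation.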

1.3 Examples

In this subsection, we show how the asymptotic variances appearing in the asymptotic theory for the sample mean can be computed for trawl processes with either an exponential or a supGamma trawl function.

Note that, in the case when \(p\equiv 1\), i.e. for the (non-periodic) trawl process, we get

$$\displaystyle \begin{aligned} \gamma(h)={\mathrm{Cov}}(Y_t, Y_{t+h})=\int_{|h|}^{\infty}g(u)du, \end{aligned} $$

for \(t, h \in \mathbb {R}\).

1.3.1 Exponential Trawl

Consider the case of an exponential trawl function with \(g(x)=\exp (-\lambda x)\), for \(x \geq 0\). The autocovariance function is given by

$$\displaystyle \begin{aligned} \gamma(t)={\mathrm{Cov}}(Y_t, Y_0)=\frac{1}{\lambda}e^{-\lambda |t|}, \end{aligned} $$

for \(t\in \mathbb {R}\), and the autocorrelation function is given by

$$\displaystyle \begin{aligned} \rho(t)={\mathrm{Cor}}(Y_t, Y_0)=e^{-\lambda |t|}, \end{aligned} $$

for \(t\in \mathbb {R}\).

For the sample mean, we have the following result. Suppose that \(\mathbb {E}(L')=0, \kappa _2={\mathrm {Var}}(L')<\infty , \mu \in \mathbb {R}\) and \(\Delta >0\). Then

$$\displaystyle \begin{aligned} & \left(F_{\Delta}:\mathbb{R}\times [0, \Delta] \to [0, \infty], (x, u) \mapsto F_{\Delta}(x,u)=\sum_{j=-\infty}^{\infty}|f(x, u+j\Delta)|\right)\\ & \quad \in L^2(\mathbb{R} \times [0, \Delta]) \end{aligned} $$

since

$$\displaystyle \begin{aligned} F_{\Delta}(x,u)=\sum_{j=-\infty}^{\infty}|f(x, u+j\Delta)| =\sum_{j=-\infty}^{\infty}\mathbb{I}_{(0, g(u+j\Delta))}(x)\mathbb{I}_{[0, \infty)}(u+j\Delta), \end{aligned} $$

and we have that

$$\displaystyle \begin{aligned} & \int_{\mathbb{R} \times [0, \Delta]}|F_{\Delta}(x,u)|{}^2dx du \\ &\quad =\int_{\mathbb{R} \times [0, \Delta]} \sum_{j=-\infty}^{\infty}\sum_{k=-\infty}^{\infty}\mathbb{I}_{(0, g(u+j\Delta))}(x) \mathbb{I}_{(0, g(u+k\Delta))}(x)dx du \\ &\quad =\sum_{j=-\infty}^{\infty}\sum_{k=-\infty}^{\infty}\int_{\mathbb{R} \times [0, \Delta]} \mathbb{I}_{(0, g(u+j\Delta))}(x) \mathbb{I}_{(0, g(u+k\Delta))}(x)dx du\\ &\quad =\sum_{j=0}^{\infty}\sum_{k=0}^{\infty}\int_{ [0, \Delta]} \min\{g(u+j\Delta),g(u+k\Delta)\} du\\ &\quad =\sum_{j=0}^{\infty}\int_{ [0, \Delta]} g(u+j\Delta) du +2 \sum_{j=0}^{\infty}\sum_{k=j+1}^{\infty}\int_{ [0, \Delta]} g(u+k\Delta) du, \end{aligned} $$

where we applied Tonelli's theorem and the monotonicity of \(g\). Hence, we deduce that \(\sum _{j=-\infty }^{\infty } |\gamma (\Delta j)| < \infty \),

$$\displaystyle \begin{aligned} {} V_{\Delta}:=\sum_{j=-\infty}^{\infty}\gamma( \Delta j) = \kappa_2 \int_{\mathbb{R} \times [0, \Delta]} \left(\sum_{j=-\infty}^{\infty}f(x, u+j\Delta)\right)^2 dx du, \end{aligned} $$
(21)

and the sample mean of \(Y_{\Delta i}\), for \(i=1, \ldots , n\), is asymptotically Gaussian as \(n \to \infty \), i.e.

$$\displaystyle \begin{aligned} \sqrt{n}\left(\overline{Y}_{n; \Delta} - \mu \right) \stackrel{\mathrm{d}}{\to} \mathrm{N}\left(0, V_{\Delta}\right), \quad \mathrm{as} \, n \to \infty. \end{aligned} $$

For the case of an exponential trawl function, we get

$$\displaystyle \begin{aligned} V_{\Delta}&=\sum_{j=-\infty}^{\infty}\gamma(\Delta j) =\sum_{j=-\infty}^{\infty}\frac{1}{\lambda}e^{-\lambda |j|\Delta}\\ & =\frac{1}{\lambda}\left(1+2\sum_{j=1}^{\infty}e^{-\lambda j \Delta}\right) =\frac{1}{\lambda}\left(1+2\frac{e^{-\lambda \Delta}}{1-e^{-\lambda \Delta}}\right)\\ &=\frac{1+e^{-\lambda \Delta}}{\lambda(1-e^{-\lambda \Delta})}=\frac{1+e^{\lambda \Delta}}{\lambda(e^{\lambda \Delta}-1)}. \end{aligned} $$

1.3.2 SupGamma Trawl

In the case when \(g(x)=(1+x/\alpha )^{-H}\), for \(x \geq 0\), with \(\alpha >0\) and \(H>2\) (i.e. we require a short-memory setting), we have

$$\displaystyle \begin{aligned} \gamma(h)=\int_{|h|}^{\infty}g(x)dx=\frac{\alpha}{H-1}\left(1+\frac{|h|}{\alpha}\right)^{1-H}. \end{aligned} $$

Then

$$\displaystyle \begin{aligned} V_{\Delta}&=\sum_{j=-\infty}^{\infty}\gamma(j \Delta) =\frac{\alpha}{H-1}\sum_{j=-\infty}^{\infty} \left(1+\frac{|j|\Delta}{\alpha}\right)^{1-H} \\ &=\frac{\alpha}{H-1}\left(\frac{\Delta}{\alpha}\right)^{1-H} \sum_{j=-\infty}^{\infty} \left(\frac{\alpha}{\Delta}+|j|\right)^{1-H} \\ &=\frac{\alpha}{H-1}\left(\frac{\Delta}{\alpha}\right)^{1-H} \left(\frac{\alpha}{\Delta} \right)^{1-H} \left[ 2 \left(\frac{\alpha}{\Delta} \right)^{H-1}\zeta(H-1, \alpha/\Delta)-1\right]\\ &=\frac{\alpha}{H-1} \left[ 2 \left(\frac{\alpha}{\Delta} \right)^{H-1}\zeta(H-1, \alpha/\Delta)-1\right], \end{aligned} $$

where \(\zeta \) denotes the Hurwitz zeta function defined by \( \zeta (s, a)=\sum _{k=0}^{\infty }\frac {1}{(k+a)^s}\), for \(\mathrm {Re}(s)>1\) and \(a>0\).
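The Hurwitz-zeta representation of \(V_\Delta\) can also be checked against the truncated series. The helper `hurwitz_zeta` below is a simple hypothetical sketch (direct summation with a Euler–Maclaurin tail correction), used with hypothetical parameter values:

```python
def hurwitz_zeta(s, a, N=10**5):
    """Approximate the Hurwitz zeta function zeta(s, a) for s > 1, a > 0
    by direct summation plus a Euler-Maclaurin tail correction."""
    head = sum((k + a) ** (-s) for k in range(N))
    tail = (N + a) ** (1 - s) / (s - 1) + 0.5 * (N + a) ** (-s)
    return head + tail

# Hypothetical parameter values (short-memory supGamma trawl, H > 2)
alpha, H, Delta = 2.0, 3.0, 1.0
gamma = lambda h: alpha / (H - 1) * (1 + abs(h) / alpha) ** (1 - H)

V_closed = alpha / (H - 1) * (
    2 * (alpha / Delta) ** (H - 1) * hurwitz_zeta(H - 1, alpha / Delta) - 1
)
V_series = gamma(0) + 2 * sum(gamma(j * Delta) for j in range(1, 10**5))
assert abs(V_closed - V_series) < 1e-3
```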

1.4 Verifying the Assumptions of Theorem 2 for Selected Periodic Trawl Processes

For the applications discussed in Sect. 5, we need to verify the condition (11) from Proposition 7 and Assumption (15) from Theorem 2 assuming that the corresponding moment assumptions for the Lévy seed hold.

For both conditions, it is sufficient to check that a (non-periodic) trawl process satisfies the stated conditions since the periodic function is bounded. Hence, in the following, we shall set \(p\equiv 1\).

1.4.1 Verifying Condition (11) from Proposition 7

We need to check that

$$\displaystyle \begin{aligned} \left(\mathbb{R}\times [0, \Delta] \to \mathbb{R}, (x, u) \mapsto \sum_{j=-\infty}^{\infty}f^2(x, u+j\Delta)\right) \in L^2(\mathbb{R} \times [0, \Delta]). \end{aligned} $$

This condition holds for trawl processes since \(f\) is an indicator function and hence \(f^2=f\): for \((x, u)\in \mathbb {R}\times [0, \Delta ]\),

$$\displaystyle \begin{aligned} & \sum_{j=-\infty}^{\infty}f^2(x, u+j\Delta) =\sum_{j=-\infty}^{\infty}f(x, u+j\Delta) \\ &\quad =\sum_{j=-\infty}^{\infty}\mathbb{I}_{(0, g(u+j\Delta))}(x) \mathbb{I}_{[0, \infty)}(u+j\Delta) =F_{\Delta}(x,u)\in L^2(\mathbb{R} \times [0, \Delta]). \end{aligned} $$

This is equivalent to checking that

$$\displaystyle \begin{aligned} {} & \int_{\mathbb{R} \times [0, \Delta]}|F_{\Delta}(x,u)|{}^2dxdu \\ &= \int_{\mathbb{R} \times [0, \Delta]} \sum_{j=-\infty}^{\infty}\sum_{k=-\infty}^{\infty}\mathbb{I}_{(0, g(u+j\Delta))}(x) \mathbb{I}_{[0, \infty)}(u+j\Delta) \mathbb{I}_{(0, g(u+k\Delta))}(x)\\ &\qquad \mathbb{I}_{[0, \infty)}(u+k\Delta) dxdu\\ &\propto \sum_{j=-\infty}^{\infty}\gamma(j \Delta) < \infty. \end{aligned} $$
(22)

This condition is satisfied both for an exponential trawl function and for a supGamma trawl function with short memory. In the latter case, we have that \(\gamma (x) \propto (1+|x|/\alpha )^{1-H}\) for \(\alpha >0, H>2\). Then, the finiteness of (22) follows using the \(\zeta \)-function representation.

1.4.2 Verifying Assumption (15) from Theorem 2

We need to verify

$$\displaystyle \begin{aligned} \sum_{j=-\infty}^{\infty}\left(\int_{\mathbb{R} \times \mathbb{R} } |f(x, u)||f(x, u+j\Delta)| dxdu\right)^2<\infty. \end{aligned} $$

Using computations very similar to those above, we find that the condition is equivalent to

$$\displaystyle \begin{aligned} &\sum_{j=-\infty}^{\infty}\left(\int_{\mathbb{R} \times \mathbb{R} } |f(x, u)||f(x, u+j\Delta)| dxdu\right)^2\\ &=\sum_{j=-\infty}^{\infty}\left(\int_{\mathbb{R} \times \mathbb{R} } \mathbb{I}_{(0, g(u))}(x) \mathbb{I}_{[0, \infty)}(u) \mathbb{I}_{(0, g(u+j\Delta))}(x) \mathbb{I}_{[0, \infty)}(u+j\Delta) dxdu\right)^2\\ & \propto\sum_{j=-\infty}^{\infty}\gamma^2(j\Delta)<\infty, \end{aligned} $$

which is satisfied by the exponential trawl function and by the supGamma trawl function with \(H>3/2\), which includes some long-memory settings.
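To illustrate numerically that \(\sum_j \gamma^2(j\Delta)\) converges even in a long-memory setting where \(\sum_j |\gamma(j\Delta)|\) diverges, one can compare partial-sum increments for a supGamma trawl with \(H \in (3/2, 2)\). A sketch with hypothetical parameter values:

```python
# Hypothetical parameter values: H in (3/2, 2) gives long memory, where
# gamma(h) ~ |h|^(1-H) is not summable but its square is
alpha, H, Delta = 1.0, 1.6, 1.0
gamma = lambda h: alpha / (H - 1) * (1 + abs(h) / alpha) ** (1 - H)

N = 10**5
inc_gamma = sum(gamma(j * Delta) for j in range(N, 2 * N))        # divergent series
inc_gamma2 = sum(gamma(j * Delta) ** 2 for j in range(N, 2 * N))  # convergent series
head_gamma2 = sum(gamma(j * Delta) ** 2 for j in range(N))

assert inc_gamma > 10                    # increments of sum gamma remain large
assert inc_gamma2 < 0.05 * head_gamma2   # increments of sum gamma^2 are negligible
```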

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

Cite this chapter

Veraart, A.E.D. (2024). Periodic Trawl Processes: Simulation, Statistical Inference and Applications in Energy Markets. In: Benth, F.E., Veraart, A.E.D. (eds) Quantitative Energy Finance. Springer, Cham. https://doi.org/10.1007/978-3-031-50597-3_3
