
A computationally efficient algorithm to estimate the parameters of a two-dimensional chirp model with the product term

Published in: Multidimensional Systems and Signal Processing

Abstract

Chirp signal models and their generalizations have been used to model many natural and man-made phenomena in the signal processing and time series literature. In recent times, several methods have been proposed for parameter estimation of these models. However, these methods are either statistically sub-optimal or computationally burdensome, especially for two-dimensional chirp models. In this paper, we consider the problem of parameter estimation of two-dimensional chirp models and propose computationally efficient estimators, establishing their asymptotic theoretical properties. The proposed estimators are observed to have the same rates of convergence as the least squares estimators. Further, the proposed estimators of the chirp rate parameters are shown to be asymptotically optimal. Extensive and detailed numerical simulations are conducted, which support the theoretical results.

Fig. 1
Fig. 2
Fig. 3

Data Availability

Not applicable.

Code Availability

The authors can share the code used for the simulations upon individual request.

References

  • Barbarossa, S., Scaglione, A., & Giannakis, G. B. (1998). Product high-order ambiguity function for multicomponent polynomial-phase signal modeling. IEEE Transactions on Signal Processing, 46(3), 691–708.


  • Barbarossa, S., Di Lorenzo, P., & Vecchiarelli, P. (2014). Parameter estimation of 2D multi-component polynomial phase signals: An application to SAR imaging of moving targets. IEEE Transactions on Signal Processing, 62(17), 4375–4389.


  • Djurović, I., Wang, P., & Ioana, C. (2010). Parameter estimation of 2-D cubic phase signal using cubic phase function with genetic algorithm. Signal Processing, 90(9), 2698–2707.


  • Djurović, I., & Stanković, L. (2014). Quasi-maximum-likelihood estimator of polynomial phase signals. IET Signal Processing, 8(4), 347–359.


  • Djurović, I. (2017). Quasi ML algorithm for 2-D PPS estimation. Multidimensional Systems and Signal Processing, 28(2), 371–387.


  • Francos, J. M., & Friedlander, B. (1995). The polynomial phase difference operator for modeling of nonhomogeneous images. In Proceedings., International Conference on Image Processing (Vol. 2, pp. 276–279). IEEE.

  • Friedlander, B., & Francos, J. M. (1996). Model-based phase unwrapping of 2-D signals. IEEE Transactions on Signal Processing, 44(12), 2999–3007.


  • Francos, J. M., & Friedlander, B. (1998). Two-dimensional polynomial phase signals: Parameter estimation and bounds. Multidimensional Systems and Signal Processing, 9(2), 173–205.


  • Francos, J. M., & Friedlander, B. (1999). Parameter estimation of 2-D random amplitude polynomial-phase signals. IEEE Transactions on Signal Processing, 47(7), 1795–1810.


  • Fuller, W. A. (1996). Introduction to Statistical Time Series (2nd ed.). John Wiley and Sons.

  • Grover, R., Kundu, D., & Mitra, A. (2018). Approximate least squares estimators of a two-dimensional Chirp model and their asymptotic properties. Journal of Multivariate Analysis, 168, 211–220.


  • Grover, R., Kundu, D., & Mitra, A. (2021). An efficient methodology to estimate the parameters of a two-dimensional Chirp signal model. Multidimensional Systems and Signal Processing, 32(1), 49–75.


  • Guo, Y., & Li, B. Z. (2018). Novel method for parameter estimation of Newton’s rings based on CFRFT and ER-WCA. Signal Processing, 144, 118–126.


  • Lahiri, A., Kundu, D., & Mitra, A. (2013). Efficient algorithm for estimating the parameters of two dimensional Chirp signal. Sankhya B, 75(1), 65–89.


  • Lahiri, A., Kundu, D., & Mitra, A. (2015). Estimating the parameters of multiple chirp signals. Journal of Multivariate Analysis, 139, 189–206.


  • Lahiri, A., & Kundu, D. (2017). On parameter estimation of two-dimensional polynomial phase signal model. Statistica Sinica, 27, 1779–1792.


  • Montgomery, H. L. (1994). Ten lectures on the interface between analytic number theory and harmonic analysis (No. 84). American Mathematical Soc.

  • Nandi, S., & Kundu, D. (2004). Asymptotic properties of the least squares estimators of the parameters of the chirp signals. Annals of the Institute of Statistical Mathematics, 56(3), 529–544.


  • O’shea, P. (2002). A new technique for instantaneous frequency rate estimation. IEEE Signal Processing Letters, 9(8), 251–252.


  • Peleg, S., & Porat, B. (1991). Estimation and classification of polynomial-phase signals. IEEE Transactions on Information Theory, 37(2), 422–430.


  • Stankovic, S., Djurovic, I., & Pitas, I. (2001). Watermarking in the space/spatial-frequency domain using two-dimensional Radon-Wigner distribution. IEEE Transactions on Image Processing, 10(4), 650–658.


  • Wu, Y., So, H. C., & Liu, H. (2008). Subspace-based algorithm for parameter estimation of polynomial phase signals. IEEE Transactions on Signal Processing, 56(10), 4977–4983.


  • Zhang, K., Wang, S., & Cao, F. (2008). Product cubic phase function algorithm for estimating the instantaneous frequency rate of multicomponent two-dimensional Chirp signals. In 2008 Congress on Image and Signal Processing (Vol. 5, pp. 498–502). IEEE.

  • Zhang, Y., Mobasseri, B. G., Dogahe, B. M., & Amin, M. G. (2010). Image-adaptive watermarking using 2D Chirps. Signal, Image and Video Processing, 4(1), 105–121.



Acknowledgements

We would like to thank the Editor for his encouragement and continuous support, and the anonymous reviewer for critical and constructive comments. Part of the work of the third author has been supported by a research grant from the Science and Engineering Research Board, Government of India.

Funding

Part of the work of the third author has been supported by a research grant from the Science and Engineering Research Board, Government of India.

Author information

Authors and Affiliations

Authors

Contributions

All authors contributed to the study conceptualization, methodology, review, and editing. The formal analysis of the proofs and the simulation analysis were performed by AS. The first draft of the manuscript was written by AS, and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript. AM and DK supervised the whole research project.

Corresponding author

Correspondence to Abhinek Shukla.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Consent for Publication

Not applicable.

Ethical Approval

Not applicable.

Consent to Participate

Not applicable.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix A

Proof of Theorem 1:

Given the data matrix \({\varvec{Y}}\), we compute the LSEs of \(\alpha ^0+n_0\mu ^0\) and \(\beta ^0\) corresponding to the \(n_0^{th}\) column vector of \({\varvec{Y}}\). We denote the obtained estimators by \({\widehat{\alpha }}_{n_0}\) and \({\widehat{\beta }}_{n_0}\) to emphasize that they depend on \(n_0\). Similarly, for the fixed \(m_0^{th}\) row of \({\varvec{Y}}\), we denote the LSEs of \(\gamma ^0+m_0\mu ^0\) and \(\delta ^0\) by \({\widehat{\gamma }}_{m_0}\) and \( {\widehat{\delta }}_{m_0}\). Under the assumption that the \(X(m_0,n_0)\) are stationary and satisfy (11) and (12) (see Nandi and Kundu, 2004), we have

$$\begin{aligned} {\widehat{\beta }}_{n_0}&= \beta ^0+o\bigg (\frac{1}{M^2}\bigg ), {\widehat{\alpha }}_{n_0}=\alpha ^0+n_0\mu ^0+o\bigg (\frac{1}{M}\bigg ),\end{aligned}$$
(15)
$$\begin{aligned} {\widehat{\gamma }}_{m_0}&= \gamma ^0+m_0\mu ^0+o\bigg (\frac{1}{N}\bigg ), {\widehat{\delta }}_{m_0}=\delta ^0+o\bigg (\frac{1}{N^2}\bigg ). \end{aligned}$$
(16)

The final estimator of \(\beta ^0\), given by

$$\begin{aligned} {\widehat{\beta }}= \frac{1}{N}\displaystyle \sum _{n_0=1}^{N}{\widehat{\beta }}_{n_0}, \end{aligned}$$

is a strongly consistent estimator of \(\beta ^0\): this follows from (15) and the fact that each \( {\widehat{\beta }}_{n_0}\) is strongly consistent for \(\beta ^0\) as \(M\rightarrow \infty \). For a proof, one may refer to Lahiri et al. (2015).

Similarly, \({\widehat{\delta }}=\displaystyle \frac{1}{M}\sum _{m_0=1}^{M}{\widehat{\delta }}_{m_0}\) is a strongly consistent estimator of \(\delta ^0\).
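The averaging step above is straightforward to sketch numerically. The following is an illustrative reconstruction (not the authors' simulation code), assuming the per-column estimates of \(\beta ^0\) and per-row estimates of \(\delta ^0\) have already been computed:

```python
import numpy as np

# Illustrative sketch: the final estimators beta-hat and delta-hat simply
# average the N per-column and M per-row least squares estimates.
def pool_estimates(beta_cols, delta_rows):
    """Average per-column estimates of beta^0 and per-row estimates of delta^0."""
    return float(np.mean(beta_cols)), float(np.mean(delta_rows))
```

Since each per-column (per-row) estimate is strongly consistent, so is the average.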

We now prove the consistency of the frequency parameter estimators \({\widehat{\alpha }}\) and \({\widehat{\gamma }}\), and of \({\widehat{\mu }}\), the estimator of the interaction term parameter. From (15) and (16), we have the following:

$$\begin{aligned} \begin{bmatrix} {\widehat{\alpha }}\\ {\widehat{\gamma }}\\ {\widehat{\mu }} \end{bmatrix} =\Big ({\varvec{\Gamma }}^\top {\varvec{\Gamma }}\Big )^{-1}{\varvec{\Gamma }}^\top {\varvec{\Lambda }}= ({\varvec{\Gamma }}^\top {\varvec{\Gamma }})^{-1}{\varvec{\Gamma }}^\top \bigg ({\varvec{\Gamma }} \begin{bmatrix} \alpha ^0\\ \gamma ^0\\ \mu ^0 \end{bmatrix}+{\varvec{\tau }}\bigg ), \end{aligned}$$
(17)

where \({\varvec{\Gamma }}^\top =\begin{bmatrix} 1&{}1&{}\cdots &{}1&{}0&{}0&{}\cdots &{}0\\ 0&{}0&{}\cdots &{}0&{}1&{}1&{}\cdots &{}1\\ 1&{}2&{}\cdots &{}N&{}1&{}2&{}\cdots &{}M \end{bmatrix}_{3\times (M+N)}\), and \({\varvec{\Lambda }}^\top =\begin{bmatrix} {\widehat{\alpha }}_1&{\widehat{\alpha }}_2&\cdots&{\widehat{\alpha }}_N&{\widehat{\gamma }}_1&{\widehat{\gamma }}_2&\cdots&{\widehat{\gamma }}_M \end{bmatrix}\).

This implies that

$$\begin{aligned}&\begin{bmatrix} {\widehat{\alpha }}\\ {\widehat{\gamma }}\\ {\widehat{\mu }} \end{bmatrix}= \begin{bmatrix} \alpha ^0\\ \gamma ^0\\ \mu ^0 \end{bmatrix}+({\varvec{\Gamma }}^\top {\varvec{\Gamma }})^{-1}\begin{bmatrix} N\times o\bigg (\frac{1}{M}\bigg )\\ M\times o\bigg (\frac{1}{N}\bigg )\\ \frac{N(N+1)}{2}\times o\bigg (\frac{1}{M}\bigg )+\frac{M(M+1)}{2}\times o\bigg (\frac{1}{N}\bigg ) \end{bmatrix}. \end{aligned}$$
(18)

Now consider the first element \(a_1+a_2-a_3\) of the following vector:

$$\begin{aligned}({\varvec{\Gamma }}^\top {\varvec{\Gamma }})^{-1}\begin{bmatrix} N\times o\bigg (\frac{1}{M}\bigg )\\ M\times o\bigg (\frac{1}{N}\bigg )\\ \frac{N(N+1)}{2}\times o\bigg (\frac{1}{M}\bigg )+\frac{M(M+1)}{2}\times o\bigg (\frac{1}{N}\bigg ) \end{bmatrix},\end{aligned}$$

where

$$\begin{aligned} a_1&=\bigg (MK-\frac{M^2(M+1)^2}{4}\bigg ) N\times o\bigg (\frac{1}{M}\bigg )\Big /\Delta ,&\\ a_2&= \frac{MN(M+1)(N+1)}{4}\, M\times o\bigg (\frac{1}{N}\bigg )\Big /\Delta ,&\\ a_3&= \frac{MN(N+1)}{2}\Bigg (\frac{N(N+1)}{2}\times o\bigg (\frac{1}{M}\bigg )+\frac{M(M+1)}{2}\times o\bigg (\frac{1}{N}\bigg )\Bigg )\Big /\Delta ,&\end{aligned}$$

and

$$\begin{aligned} \Delta = \frac{MN}{12} \bigg (N(N^2-1)+M(M^2-1)\bigg ),\quad K=\frac{N(N+1)(2N+1)}{6}+\frac{M(M+1)(2M+1)}{6}. \end{aligned}$$
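The closed form \(\big |{\varvec{\Gamma }}^\top {\varvec{\Gamma }}\big | = \frac{MN}{12}\big (N(N^2-1)+M(M^2-1)\big )\) used as the normalizing constant above can be sanity-checked numerically. This is an illustrative sketch (not the authors' code), building the \((M+N)\times 3\) design matrix defined earlier:

```python
import numpy as np

# Numerical check of the closed form |Gamma^T Gamma| = (MN/12)(N(N^2-1)+M(M^2-1)).
def gram_det(M, N):
    # Gamma: first N rows for the alpha equations, last M rows for the gamma
    # equations, third column holding the n0 (resp. m0) coefficients of mu.
    Gamma = np.zeros((M + N, 3))
    Gamma[:N, 0] = 1.0
    Gamma[N:, 1] = 1.0
    Gamma[:N, 2] = np.arange(1, N + 1)
    Gamma[N:, 2] = np.arange(1, M + 1)
    return float(np.linalg.det(Gamma.T @ Gamma))

def closed_form(M, N):
    return M * N / 12.0 * (N * (N**2 - 1) + M * (M**2 - 1))
```

The two expressions agree for any rectangular grid, e.g. \(M=5, N=7\) gives 1330.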

Now we look at \(a_1, a_2\) and \(a_3\) individually and compute their limits.

$$\begin{aligned} a_1&=\bigg (MK-\frac{M^2(M+1)^2}{4}\bigg ) N\times o\bigg (\frac{1}{M}\bigg )\Big /\Delta&\\~\\&= N\times \frac{o(1)}{\Delta }\times \bigg (\frac{N(N+1)(2N+1)}{6}+\frac{M(M+1)(M-1)}{12}\bigg )&\\~\\ {}&= \frac{o(1)}{M}\times \frac{\bigg (2N(N+1)(2N+1)+M(M^2-1)\bigg )}{\bigg (N(N^2-1)+M(M^2-1)\bigg )}.&\end{aligned}$$

\( \hbox { This implies that } a_1\xrightarrow {a.s.} 0\hbox { as } \min \{M,N\}\rightarrow \infty .\) Here,

$$\begin{aligned} a_2&= \frac{MN(M+1)(N+1)}{4} M\times o\bigg (\frac{1}{N}\bigg )\Big /\Delta = o(1)\times \frac{M(M+1)}{\bigg (N(N^2-1)+M(M^2-1)\bigg )}.&\end{aligned}$$

\(\hbox { This implies that } a_2 \xrightarrow {a.s.} 0\hbox { as } \min \{M,N\}\rightarrow \infty .\)

$$\begin{aligned} a_3&= \frac{MN(N+1)}{2}\Bigg (\frac{N(N+1)}{2}\times o\bigg (\frac{1}{M}\bigg )+\frac{M(M+1)}{2}\times o\bigg (\frac{1}{N}\bigg )\Bigg )\Big /\Delta&\\~\\ {}&= \frac{(N+1)}{2\Delta }\Bigg (N^2(N+1)\times o(1)+M^2(M+1)\times o(1)\Bigg ).&\end{aligned}$$

\(\hbox { This implies that } a_3 \xrightarrow {a.s.} 0\hbox { as } \min \{M,N\}\rightarrow \infty .\) Hence, \({\widehat{\alpha }}\) is a strongly consistent estimator of \(\alpha ^0\). The strong consistency of \({\widehat{\gamma }}\) for \(\gamma ^0\) follows similarly.
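The linear step of Eq. (17), which pools the per-column estimates \({\widehat{\alpha }}_{n_0}\) and per-row estimates \({\widehat{\gamma }}_{m_0}\) into \(({\widehat{\alpha }},{\widehat{\gamma }},{\widehat{\mu }})\), can be sketched as follows. This is an illustrative reconstruction under the stated model, not the authors' code:

```python
import numpy as np

# Recover (alpha, gamma, mu) from per-column estimates
# alpha_hat[n0] ~ alpha^0 + n0*mu^0 and per-row estimates
# gamma_hat[m0] ~ gamma^0 + m0*mu^0 by solving Lambda = Gamma @ theta.
def combine_estimates(alpha_hat, gamma_hat):
    N, M = len(alpha_hat), len(gamma_hat)
    Gamma = np.zeros((M + N, 3))
    Gamma[:N, 0] = 1.0                  # intercept for the alpha equations
    Gamma[N:, 1] = 1.0                  # intercept for the gamma equations
    Gamma[:N, 2] = np.arange(1, N + 1)  # n0 coefficients of mu
    Gamma[N:, 2] = np.arange(1, M + 1)  # m0 coefficients of mu
    Lam = np.concatenate([alpha_hat, gamma_hat])
    theta, *_ = np.linalg.lstsq(Gamma, Lam, rcond=None)
    return theta  # estimates of (alpha^0, gamma^0, mu^0)
```

With noiseless per-column and per-row estimates, the least squares solution recovers the true \((\alpha ^0,\gamma ^0,\mu ^0)\) exactly.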

Now, the third element of \(({\varvec{\Gamma }}^\top {\varvec{\Gamma }})^{-1}\begin{bmatrix} N\times o\bigg (\frac{1}{M}\bigg )\\ M\times o\bigg (\frac{1}{N}\bigg )\\ \frac{N(N+1)}{2}\times o\bigg (\frac{1}{M}\bigg )+\frac{M(M+1)}{2}\times o\bigg (\frac{1}{N}\bigg ) \end{bmatrix}\) can be written as \(-b_1-b_2+b_3\), where

$$\begin{aligned} b_1&=\frac{MN(N+1)}{2}\times N\times o\bigg (\frac{1}{M}\bigg )\Big /\Delta . \end{aligned}$$

Proceeding exactly as for \(a_1, a_2\) and \(a_3\), each of \(b_1, b_2\) and \(b_3\) converges to zero almost surely as \(\min \{M,N\}\rightarrow \infty \). Hence \({\widehat{\mu }}\) is a strongly consistent estimator of \(\mu ^0\), which completes the proof of the theorem. \(\square \)

Appendix B

Proof of Theorem 2:

Suppose \({\varvec{\kappa }}^\top =(A,B,\alpha ,\beta ) \) and \({\varvec{\kappa }}^{0^\top }= (A^0(n_0),B^0(n_0),\alpha ^0+n_0\mu ^0,\beta ^0)\). We define the sum of squares as follows:

$$\begin{aligned}Q_{n_0}({\varvec{\kappa }})= \displaystyle \sum _{m_0=1}^M\bigg (y(m_0,n_0)-A\cos (\alpha m_0+\beta m_0^2)-B\sin (\alpha m_0+\beta m_0^2)\bigg )^2.\end{aligned}$$
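The criterion \(Q_{n_0}({\varvec{\kappa }})\) is easy to evaluate directly. Below is a minimal sketch for one fixed column (illustrative; parameter names follow the text, and the column data `y_col` is assumed given):

```python
import numpy as np

# Column-wise least squares criterion Q_{n0}(kappa) from Appendix B:
# sum over m0 of (y - A cos(alpha*m0 + beta*m0^2) - B sin(...))^2.
def Q(y_col, A, B, alpha, beta):
    m = np.arange(1, len(y_col) + 1)
    phase = alpha * m + beta * m**2
    resid = y_col - A * np.cos(phase) - B * np.sin(phase)
    return float(np.sum(resid**2))
```

In the noiseless case, \(Q_{n_0}\) vanishes at the true parameter vector and is positive elsewhere, which is exactly what the minimizer \(\widehat{{\varvec{\kappa }}}\) targets.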

Let \(\widehat{{\varvec{\kappa }}}\) be the minimizer of \(Q_{n_0}({\varvec{\kappa }})\). Using a Taylor series expansion of the first derivative vector \(Q_{n_0}^{\prime }({\varvec{\kappa }})\) around the point \({\varvec{\kappa }}^{0}\), we get:

$$\begin{aligned} {\varvec{Q}}_{n_0}^{\prime }(\widehat{{\varvec{\kappa }}})-{\varvec{Q}}_{n_0}^{\prime }({\varvec{\kappa }}^0) = {\varvec{Q}}_{n_0}^{\prime \prime }(\breve{{\varvec{\kappa }}}) (\widehat{{\varvec{\kappa }}}-{\varvec{\kappa }}^0), \end{aligned}$$
(19)

where \(\breve{{\varvec{\kappa }}}\) is a point between \(\widehat{{\varvec{\kappa }}}\) and \({\varvec{\kappa }}^0\). Also \({\varvec{Q}}_{n_0}^{\prime }(\widehat{{\varvec{\kappa }}})=0\) as \(\widehat{{\varvec{\kappa }}}\) is LSE of \({\varvec{\kappa }}^0\).

Let us denote \({\varvec{D}}_1^{-1}=diag(M^{1/2},M^{1/2},M^{3/2},M^{5/2})\). Multiplying both sides of Eq. (19) by \({\varvec{D}}_1\) gives:

$$\begin{aligned}&-\big [{\varvec{D_1}}{\varvec{Q}}_{n_0}^{\prime \prime }(\breve{{\varvec{\kappa }}}){\varvec{D_1}}\big ]^{-1}{\varvec{\Sigma }}_{n_0} \displaystyle {\varvec{\Sigma }}_{n_0}^{-1}{\varvec{D_1}}{\varvec{Q}}_{n_0}^{\prime }({\varvec{\kappa }}^0) = {\varvec{D_1}}^{-1}(\widehat{{\varvec{\kappa }}}-{\varvec{\kappa }}^0)\nonumber \\&\quad \hbox {where } \lim \limits _{M\rightarrow \infty }\big [{\varvec{D_1}}{\varvec{Q}}_{n_0}^{\prime \prime }(\breve{{\varvec{\kappa }}}){\varvec{D_1}}\big ]^{-1}{\varvec{\Sigma }}_{n_0} = I_{4\times 4}, \end{aligned}$$
(20)

and

$$\begin{aligned} \displaystyle \displaystyle {\varvec{\Sigma }}_{n_0}^{-1}= \frac{2}{A^{0^2}+B^{0^2}} \begin{bmatrix} \frac{A^{0^2}(n_0)+9B^{0^{2}}(n_0)}{2}&{}-4A^0(n_0)B^0(n_0)&{}-18B^0(n_0)&{}15B^0(n_0)\\ -4A^0(n_0)B^0(n_0)&{}\frac{9A^{0^2}(n_0)+B^{0^{2}}(n_0)}{2}&{}18A^0(n_0)&{}-15A^0(n_0)\\ -18B^0(n_0)&{}18A^0(n_0)&{}96&{}-90\\ 15B^0(n_0)&{}-15A^0(n_0)&{}-90&{}90 \end{bmatrix}. \end{aligned}$$

The expression for \({\varvec{\Sigma }}_{n_0}^{-1}\) can be obtained from the proof of Theorem 2 in Lahiri et al. (2015). Using Eq. (20) and the above expression for \({\varvec{\Sigma }}_{n_0}^{-1}\), we get the following asymptotically equivalent (a.e.) expression for the third element of the vector \({\varvec{D_1}}^{-1}(\widehat{{\varvec{\kappa }}}-{\varvec{\kappa }}^0)\):

(21)

where \(\eta (m_0,n_0)=X(m_0,n_0)\big (A^0\sin \phi (m_0,n_0,{\varvec{\xi }}^0)-B^0\cos \phi (m_0,n_0,{\varvec{\xi }}^0)\big )\) and

\(\phi (m_0,n_0,{\varvec{\xi }}^0)= \alpha ^0m_0+\beta ^0m_0^2+\gamma ^0n_0+\delta ^0n_0^2+\mu ^0 m_0n_0\); we use these notations for brevity. Similarly, the expressions for the fourth, fifth and sixth elements of \({\varvec{D_1}}^{-1}(\widehat{{\varvec{\kappa }}}-{\varvec{\kappa }}^0)\) can be written as in Eqs. (22), (23) and (24), respectively:

(22)
(23)
(24)

From Eqs. (22) and (24) above, we see that the estimators \({\widehat{\beta }}\) and \({\widehat{\delta }}\) of \(\beta ^0\) and \(\delta ^0\) are asymptotically equivalent to the LSEs: they have the same asymptotic variances, by the central limit theorem for stationary linear processes (see Fuller, 1996). It remains to establish the asymptotic properties of the estimators \( ( {\widehat{\alpha }}, {\widehat{\gamma }}, {\widehat{\mu }})^\top \).

The proposed estimators of \((\alpha ^0,\gamma ^0, \mu ^0)^\top \) can be written as:

$$\begin{aligned} \begin{bmatrix} {\widehat{\alpha }}\\ {\widehat{\gamma }}\\ {\widehat{\mu }} \end{bmatrix} = \frac{1}{\big |{\varvec{\Gamma }}^\top {\varvec{\Gamma }}\big |}\begin{bmatrix} \bigg (MK-\frac{M^2\big (M+1\big )^2}{4}\bigg )c_{\alpha }+ \frac{MN(M+1)(N+1)}{4}c_{\gamma }-\frac{MN(N+1)}{2}c_{\alpha \gamma }\\ ~\\ \frac{MN(M+1)(N+1)}{4}c_{\alpha }+\bigg (KN-\frac{N^2(N+1)^2}{4}\bigg )c_{\gamma }-\frac{NM(M+1)}{2}c_{\alpha \gamma }\\ ~\\ -\frac{MN(N+1)}{2}c_{\alpha }-\frac{NM(M+1)}{2}c_{\gamma }+MNc_{\alpha \gamma } \end{bmatrix}, \end{aligned}$$

where

$$\begin{aligned} c_{\alpha }=\displaystyle \sum _{n_0=1}^{N}{\widehat{\alpha }}_{n_0},\quad c_{\gamma }=\displaystyle \sum _{m_0=1}^M{\widehat{\gamma }}_{m_0},\quad c_{\alpha \gamma } = \displaystyle \sum _{n_0=1}^{N}n_0{\widehat{\alpha }}_{n_0}+\displaystyle \sum _{m_0=1}^Mm_0{\widehat{\gamma }}_{m_0}, \end{aligned}$$

$$\begin{aligned} K=\frac{N(N+1)(2N+1)}{6}+\frac{M(M+1)(2M+1)}{6},\quad \big |{\varvec{\Gamma }}^\top {\varvec{\Gamma }}\big | = \frac{MN}{12} \bigg (N(N^2-1)+M(M^2-1)\bigg ). \end{aligned}$$
(25)

From (25), we get:

$$\begin{aligned} M^{3/2}N^{1/2}\big ({\widehat{\alpha }}-\alpha ^0\big )&= \bigg (\frac{2N(N+1)(2N+1)+M(M^2-1)}{N(N^2-1)+M(M^2-1)}\bigg )\\&\quad \; \frac{1}{\sqrt{N}}\displaystyle \sum _{n_0=1}^{N}M^{3/2}\big ({\widehat{\alpha }}_{n_0}-(\alpha ^0+n_0\mu ^0)\big )\\&\quad \;+ \bigg (\frac{3M^2(M+1)}{N(N^2-1)+M(M^2-1)}\bigg )\\&\quad \; \frac{1}{\sqrt{M}}\displaystyle \sum _{m_0=1}^MN^{3/2}\big ({\widehat{\gamma }}_{m_0}-(\gamma ^0+m_0\mu ^0)\big )\\&\quad \;- \bigg (\frac{6M^{3/2}N^{1/2}(N+1)}{N(N^2-1)+M(M^2-1)}\bigg )\bigg [\displaystyle \sum _{n_0=1}^{N}n_0\big ({\widehat{\alpha }}_{n_0}-(\alpha ^0+n_0\mu ^0)\big )\\ {}&\quad \;+\displaystyle \sum _{m_0=1}^Mm_0\big ({\widehat{\gamma }}_{m_0}-(\gamma ^0+m_0\mu ^0)\big )\bigg ]. \end{aligned}$$

For sufficiently large M and N, we have:

$$\begin{aligned} M^{3/2}N^{1/2}\big ({\widehat{\alpha }}-\alpha ^0\big )&=\bigg (\frac{4N^3+M^3}{N^3+M^3}\bigg ) \frac{1}{\sqrt{N}}\displaystyle \sum _{n_0=1}^{N}M^{3/2}\big ({\widehat{\alpha }}_{n_0}-(\alpha ^0+n_0\mu ^0)\big )\nonumber \\ {}&\quad \;+\bigg (\frac{3M^3}{N^3+M^3}\bigg )\frac{1}{\sqrt{M}}\displaystyle \sum _{m_0=1}^MN^{3/2}\big ({\widehat{\gamma }}_{m_0}-(\gamma ^0+m_0\mu ^0)\big )\nonumber \\ {}&\quad \;-\bigg (\frac{6N^3}{N^3+M^3}\bigg )\frac{1}{N^{3/2}}\displaystyle \sum _{n_0=1}^{N}n_0M^{3/2}\big ({\widehat{\alpha }}_{n_0}-(\alpha ^0+n_0\mu ^0)\big )\nonumber \\ {}&\quad \;- \bigg (\frac{6M^3}{N^3+M^3}\bigg )\frac{1}{M^{3/2}}\displaystyle \sum _{m_0=1}^Mm_0N^{3/2}\big ({\widehat{\gamma }}_{m_0}-(\gamma ^0+m_0\mu ^0)\big ). \end{aligned}$$
(26)

The asymptotic normality of the estimators as \(M=N\rightarrow \infty \), with the stated rates of convergence, follows by applying the central limit theorem for stationary processes (Fuller, 1996).
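The passage from the exact weights to the simplified large-sample weights in Eq. (26) can be checked directly. The sketch below (illustrative, using exact rational arithmetic) verifies that, for \(M=N\), the first two exact coefficients converge to \(5/2\) and \(3/2\):

```python
from fractions import Fraction

# Exact weights from the expansion of M^{3/2} N^{1/2} (alpha-hat - alpha^0):
# w1 = (2N(N+1)(2N+1) + M(M^2-1)) / D and w2 = 3M^2(M+1) / D,
# with D = N(N^2-1) + M(M^2-1); for M = N these tend to 5/2 and 3/2.
def exact_weights(M, N):
    D = N * (N**2 - 1) + M * (M**2 - 1)
    w1 = Fraction(2 * N * (N + 1) * (2 * N + 1) + M * (M**2 - 1), D)
    w2 = Fraction(3 * M**2 * (M + 1), D)
    return w1, w2
```

For \(M=N=n\), \(w_1 = (5n^3+6n^2+n)/(2n^3-2n) \rightarrow 5/2\) and \(w_2 = (3n^3+3n^2)/(2n^3-2n) \rightarrow 3/2\), matching the leading coefficients in (26).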

We now present some important results which will be used to find the asymptotic variance-covariance matrix of the proposed estimators of the non-linear parameters. Using Eqs. (21) and (23), we have the following observations:

  1. \(AsyVar \bigg (\frac{1}{2\sqrt{N}}\displaystyle \sum _{n_0=1}^{N}M^{3/2}\big ({\widehat{\alpha }}_{n_0}-(\alpha ^0+n_0\mu ^0)\big )\bigg )=\frac{96c\sigma ^2}{A^{0^2}+B^{0^2}}\),

  2. \(AsyVar \bigg (\frac{1}{2\sqrt{M}}\displaystyle \sum _{m_0=1}^MN^{3/2}\big ({\widehat{\gamma }}_{m_0}-(\gamma ^0+m_0\mu ^0)\big )\bigg )=\frac{96c\sigma ^2}{A^{0^2}+B^{0^2}}\),

  3. \(AsyVar \bigg (\frac{1}{2N^{3/2}}\displaystyle \sum _{n_0=1}^{N}n_0M^{3/2}\big ({\widehat{\alpha }}_{n_0}-(\alpha ^0+n_0\mu ^0)\big )\bigg )=\frac{32c\sigma ^2}{A^{0^2}+B^{0^2}}\),

  4. \(AsyVar \bigg (\frac{1}{2M^{3/2}}\displaystyle \sum _{m_0=1}^Mm_0N^{3/2}\big ({\widehat{\gamma }}_{m_0}-(\gamma ^0+m_0\mu ^0)\big )\bigg )=\frac{32c\sigma ^2}{A^{0^2}+B^{0^2}}\),

  5. \(AsyCovar \bigg (\frac{1}{2\sqrt{N}}\displaystyle \sum _{n_0=1}^{N}M^{3/2}\big ({\widehat{\alpha }}_{n_0}-(\alpha ^0+n_0\mu ^0)\big ),\frac{1}{2\sqrt{M}}\displaystyle \sum _{m_0=1}^MN^{3/2}\big ({\widehat{\gamma }}_{m_0}-(\gamma ^0+m_0\mu ^0)\big )\bigg )=0\),

  6. \(AsyCovar \bigg (\frac{1}{2\sqrt{N}}\displaystyle \sum _{n_0=1}^{N}M^{3/2}\big ({\widehat{\alpha }}_{n_0}-(\alpha ^0+n_0\mu ^0)\big ),\frac{1}{2N^{3/2}}\displaystyle \sum _{n_0=1}^{N}n_0M^{3/2}\big ({\widehat{\alpha }}_{n_0}-(\alpha ^0+n_0\mu ^0)\big )\bigg )=\frac{48c\sigma ^2}{A^{0^2}+B^{0^2}}\),

  7. \(AsyCovar \bigg (\frac{1}{2\sqrt{N}}\displaystyle \sum _{n_0=1}^{N}M^{3/2}\big ({\widehat{\alpha }}_{n_0}-(\alpha ^0+n_0\mu ^0)\big ),\frac{1}{2M^{3/2}}\displaystyle \sum _{m_0=1}^Mm_0N^{3/2}\big ({\widehat{\gamma }}_{m_0}-(\gamma ^0+m_0\mu ^0)\big )\bigg )=0\),

  8. \(AsyCovar \bigg (\frac{1}{2\sqrt{M}}\displaystyle \sum _{m_0=1}^MN^{3/2}\big ({\widehat{\gamma }}_{m_0}-(\gamma ^0+m_0\mu ^0)\big ),\frac{1}{2N^{3/2}}\displaystyle \sum _{n_0=1}^{N}n_0M^{3/2}\big ({\widehat{\alpha }}_{n_0}-(\alpha ^0+n_0\mu ^0)\big )\bigg )=0\),

  9. \(AsyCovar \bigg (\frac{1}{2\sqrt{M}}\displaystyle \sum _{m_0=1}^MN^{3/2}\big ({\widehat{\gamma }}_{m_0}-(\gamma ^0+m_0\mu ^0)\big ),\frac{1}{2M^{3/2}}\displaystyle \sum _{m_0=1}^Mm_0N^{3/2}\big ({\widehat{\gamma }}_{m_0}-(\gamma ^0+m_0\mu ^0)\big )\bigg )=\frac{48c\sigma ^2}{A^{0^2}+B^{0^2}}\),

  10. \(AsyCovar \bigg (\frac{1}{2N^{3/2}}\displaystyle \sum _{n_0=1}^{N}n_0M^{3/2}\big ({\widehat{\alpha }}_{n_0}-(\alpha ^0+n_0\mu ^0)\big ),\frac{1}{2M^{3/2}}\displaystyle \sum _{m_0=1}^Mm_0N^{3/2}\big ({\widehat{\gamma }}_{m_0}-(\gamma ^0+m_0\mu ^0)\big )\bigg )=\frac{c\sigma ^2}{2(A^{0^2}+B^{0^2})}\),

where \(c=\displaystyle {\sum _{i=-\infty }^{\infty }\sum _{j=-\infty }^{\infty }}a^2(i,j)\).

Using the above results in Eq. (26), we get the asymptotic variance-covariance matrix of

\(\begin{bmatrix} M^{3/2}N^{1/2}({\widehat{\alpha }}-\alpha ^0)\\ N^{3/2}M^{1/2}({\widehat{\gamma }}-\gamma ^0)\\ M^{3/2}N^{3/2}({\widehat{\mu }}-\mu ^0) \end{bmatrix}\) as: \( \frac{c\sigma ^2}{(A^{0^2}+B^{0^2})}\begin{bmatrix} 996&{}612&{}-1224\\ 612 &{}996&{}-1224\\ -1224 &{}-1224&{}2448 \end{bmatrix}. \)
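As a quick sanity check (illustrative, not part of the proof), the \(3\times 3\) matrix above, stripped of the common factor \(c\sigma ^2/(A^{0^2}+B^{0^2})\), must be symmetric positive definite to be a valid asymptotic covariance matrix:

```python
import numpy as np

# The 3x3 asymptotic variance-covariance matrix of
# (alpha-hat, gamma-hat, mu-hat), up to the common scalar factor.
V = np.array([[996.0, 612.0, -1224.0],
              [612.0, 996.0, -1224.0],
              [-1224.0, -1224.0, 2448.0]])
```

Its symmetry reflects the interchangeable roles of \({\widehat{\alpha }}\) and \({\widehat{\gamma }}\) when \(M=N\), and all of its eigenvalues are strictly positive.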

From Eqs. (21), (22) and (23), it is further observed that:

  1. \(AsyCovar\bigg ( M^{5/2}N^{1/2}\big ({\widehat{\beta }}-\beta ^0\big ), M^{1/2}N^{5/2}\big ({\widehat{\delta }}-\delta ^0\big )\bigg )=0\),

  2. \(AsyCovar\bigg ( M^{5/2}N^{1/2}\big ({\widehat{\beta }}-\beta ^0\big ),\frac{1}{\sqrt{N}}\displaystyle \sum _{n_0=1}^{N}M^{3/2}\big ({\widehat{\alpha }}_{n_0}-(\alpha ^0+n_0\mu ^0)\big )\bigg )=\frac{-360c\sigma ^2}{A^{0^2}+B^{0^2}}\),

  3. \(AsyCovar\bigg ( M^{5/2}N^{1/2}\big ({\widehat{\beta }}-\beta ^0\big ),\frac{1}{\sqrt{M}}\displaystyle \sum _{m_0=1}^MN^{3/2}\big ({\widehat{\gamma }}_{m_0}-(\gamma ^0+m_0\mu ^0)\big )\bigg )=0\),

  4. \(AsyCovar\bigg ( M^{5/2}N^{1/2}\big ({\widehat{\beta }}-\beta ^0\big ),\frac{1}{N\sqrt{N}}\displaystyle \sum _{n_0=1}^{N}M^{3/2}n_0\big ({\widehat{\alpha }}_{n_0}-(\alpha ^0+n_0\mu ^0)\big )\bigg )=\frac{-180c\sigma ^2}{A^{0^2}+B^{0^2}}\),

  5. \(AsyCovar\bigg ( M^{5/2}N^{1/2}\big ({\widehat{\beta }}-\beta ^0\big ),\frac{1}{M\sqrt{M}}\displaystyle \sum _{m_0=1}^Mm_0N^{3/2}\big ({\widehat{\gamma }}_{m_0}-(\gamma ^0+m_0\mu ^0)\big )\bigg )=0.\)

Similar results can be derived for \(M^{1/2}N^{5/2}\big ({\widehat{\delta }}-\delta ^0\big )\).

The asymptotic variance-covariance matrix of the proposed estimators of the non-linear parameters is given by:

$$\begin{aligned} \frac{c\sigma ^2}{(A^{0^2}+B^{0^2})}\begin{bmatrix} 996&{}-360&{}612&{}0&{}-1224\\ -360&{}360&{}0&{}0&{}0\\ 612&{}0&{}996&{}-360&{}-1224\\ 0&{}0&{}-360&{}360&{}0\\ -1224&{}0&{}-1224&{}0&{}2448 \end{bmatrix}. \end{aligned}$$
(27)

Next, we derive the asymptotics of the amplitude estimators. Using a Taylor series expansion of \(\cos \phi (m_0,n_0,\widehat{{\varvec{\xi }}})\) around the point \({\varvec{\xi }}^0\), we can write:

$$\begin{aligned}\cos {\widehat{\phi }}-\cos \phi ^0=-\sin \breve{\phi }\big (\widehat{{\varvec{\xi }}}-{\varvec{\xi }}^0\big )^\top \begin{bmatrix} m_0\\ m_0^2\\ n_0\\ n_0^2\\ m_0n_0 \end{bmatrix}.\end{aligned}$$

For brevity, we have denoted \(\cos \phi (m_0,n_0,\widehat{{\varvec{\xi }}})\) by \(\cos {\widehat{\phi }}\), \(\cos \phi (m_0,n_0,{\varvec{\xi }}^0)\) by \(\cos \phi ^0\), and \(\sin \phi (m_0,n_0,\breve{{\varvec{\xi }}})\) by \(\sin \breve{\phi }\), where \(\breve{{\varvec{\xi }}}\) is a point lying between \(\widehat{{\varvec{\xi }}}\) and \({\varvec{\xi }}^0\).

Now consider the first element of the following vector:

$$\begin{aligned} \sqrt{MN}\Bigg (\begin{bmatrix} \frac{2}{{MN}}\displaystyle \sum _{m_0=1}^M\sum _{n_0=1}^{N}y(m_0,n_0)\cos {\widehat{\phi }} \\ \frac{2}{{MN}}\displaystyle \sum _{m_0=1}^M\sum _{n_0=1}^{N}y(m_0,n_0)\sin {\widehat{\phi }} \end{bmatrix}-\begin{bmatrix} A^0\\ B^0 \end{bmatrix}\Bigg ), \end{aligned}$$

we get:

$$\begin{aligned} \sqrt{MN}\bigg ( \frac{2}{{MN}}\displaystyle \sum _{m_0=1}^M\sum _{n_0=1}^{N}y(m_0,n_0)\cos {\phi ^0}-\frac{2}{{MN}}\displaystyle \sum _{m_0=1}^M\sum _{n_0=1}^{N}y(m_0,n_0)\sin {\breve{\phi }}R(m_0,n_0)-A^0\bigg ), \end{aligned}$$
(28)

where \(R(m_0,n_0)=\big (\widehat{{\varvec{\xi }}}-{\varvec{\xi }}^0\big )^\top \begin{bmatrix} m_0\\ m_0^2\\ n_0\\ n_0^2\\ m_0n_0 \end{bmatrix}\), and the second element of the above amplitude vector can be written as:

$$\begin{aligned} \sqrt{MN}\bigg ( \frac{2}{{MN}}\displaystyle \sum _{m_0=1}^M\sum _{n_0=1}^{N}y(m_0,n_0)\sin {\phi ^0}+\frac{2}{{MN}}\displaystyle \sum _{m_0=1}^M\sum _{n_0=1}^{N}y(m_0,n_0)\cos {\breve{\phi }}R(m_0,n_0)-B^0\bigg ). \end{aligned}$$
(29)

Now let us look at the first and the last terms of Eq. (28), substituting the value of \(y(m_0,n_0)\) from the model (1):

(30)

The above result follows from a well-known number-theoretic conjecture discussed in Montgomery (1994).

In the second term of (28), \(R(m_0,n_0)\) is a sum of five terms; consider the first term of \(\frac{2}{{MN}}\displaystyle \sum _{m_0=1}^M\sum _{n_0=1}^{N}y(m_0,n_0)\sin {\breve{\phi }}R(m_0,n_0)\). Finding the asymptotic distribution of (28) thus boils down to finding the asymptotic distribution of

$$\begin{aligned} \frac{2}{\sqrt{MN}}\displaystyle \sum _{m_0=1}^M\sum _{n_0=1}^{N}X(m_0,n_0)\cos \phi ^0-\bigg (\frac{2}{{\sqrt{MN}}}\displaystyle \sum _{m_0=1}^M\sum _{n_0=1}^{N}B^0\sin \phi ^0\sin {\breve{\phi }}\bigg )R(m_0,n_0), \end{aligned}$$

which is further asymptotically equivalent to:

$$\begin{aligned}&\frac{2}{\sqrt{MN}}\displaystyle \sum _{m_0=1}^M\sum _{n_0=1}^{N}X(m_0,n_0)\cos \phi ^0-B^0\bigg (\frac{1}{2}M^{3/2}N^{1/2} ({\widehat{\alpha }}-\alpha ^0)+\frac{1}{3}M^{5/2}N^{1/2}({\widehat{\beta }}-\beta ^0)\nonumber \\&\quad +\frac{1}{2}N^{3/2}M^{1/2}({\widehat{\gamma }}-\gamma ^0) +\frac{1}{3}N^{5/2}M^{1/2}({\widehat{\delta }}-\delta ^0)+\frac{1}{4}M^{3/2}N^{3/2}({\widehat{\mu }}-\mu ^0)\bigg ). \end{aligned}$$
(31)

The asymptotic normality of the amplitude estimators is thus proved by Eq. (31). We now derive the expressions for their asymptotic variances.

After lengthy calculations, we get the asymptotic variance of \({\widehat{A}}\) as:

$$\begin{aligned} \frac{c\sigma ^2}{(A^{0^2}+B^{0^2})}(2A^{0^2}+187B^{0^2}). \end{aligned}$$

Similarly, by calculating the other terms, we get the complete variance-covariance matrix \({\varvec{\Sigma }}\) as stated in Theorem 2.

Hence the result. \(\square \)

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Shukla, A., Grover, R., Kundu, D. et al. A computationally efficient algorithm to estimate the parameters of a two-dimensional chirp model with the product term. Multidim Syst Sign Process 34, 633–655 (2023). https://doi.org/10.1007/s11045-023-00879-7

