Abstract
Chirp signal models and their generalizations have been used to model many natural and man-made phenomena in the signal processing and time series literature. In recent times, several methods have been proposed for the parameter estimation of these models. However, these methods are either statistically sub-optimal or computationally burdensome, especially for two-dimensional chirp models. In this paper, we consider the problem of parameter estimation of two-dimensional chirp models and propose a computationally efficient estimator, for which we establish asymptotic theoretical properties. The proposed estimators have the same rates of convergence as the least squares estimators, and the proposed estimators of the chirp rate parameters are shown to be asymptotically optimal. Extensive and detailed numerical simulations are conducted, which support the theoretical results.
![](http://media.springernature.com/m312/springer-static/image/art%3A10.1007%2Fs11045-023-00879-7/MediaObjects/11045_2023_879_Fig1_HTML.png)
![](http://media.springernature.com/m312/springer-static/image/art%3A10.1007%2Fs11045-023-00879-7/MediaObjects/11045_2023_879_Fig2_HTML.png)
![](http://media.springernature.com/m312/springer-static/image/art%3A10.1007%2Fs11045-023-00879-7/MediaObjects/11045_2023_879_Fig3_HTML.png)
Data Availability
Not applicable.
Code Availability
The code used for the simulations is available from the authors on request.
References
Barbarossa, S., Scaglione, A., & Giannakis, G. B. (1998). Product high-order ambiguity function for multicomponent polynomial-phase signal modeling. IEEE Transactions on Signal Processing, 46(3), 691–708.
Barbarossa, S., Di Lorenzo, P., & Vecchiarelli, P. (2014). Parameter estimation of 2D multi-component polynomial phase signals: An application to SAR imaging of moving targets. IEEE Transactions on Signal Processing, 62(17), 4375–4389.
Djurović, I., Wang, P., & Ioana, C. (2010). Parameter estimation of 2-D cubic phase signal using cubic phase function with genetic algorithm. Signal Processing, 90(9), 2698–2707.
Djurović, I., & Stanković, L. (2014). Quasi-maximum-likelihood estimator of polynomial phase signals. IET Signal Processing, 8(4), 347–359.
Djurović, I. (2017). Quasi ML algorithm for 2-D PPS estimation. Multidimensional Systems and Signal Processing, 28(2), 371–387.
Francos, J. M., & Friedlander, B. (1995). The polynomial phase difference operator for modeling of nonhomogeneous images. In Proceedings, International Conference on Image Processing (Vol. 2, pp. 276–279). IEEE.
Friedlander, B., & Francos, J. M. (1996). Model based phase unwrapping of 2-D signals. IEEE Transactions on Signal Processing, 44(12), 2999–3007.
Francos, J. M., & Friedlander, B. (1998). Two-dimensional polynomial phase signals: Parameter estimation and bounds. Multidimensional Systems and Signal Processing, 9(2), 173–205.
Francos, J. M., & Friedlander, B. (1999). Parameter estimation of 2-D random amplitude polynomial-phase signals. IEEE Transactions on Signal Processing, 47(7), 1795–1810.
Fuller, W. A. (1996). Introduction to Statistical Time Series (2nd ed.). John Wiley and Sons.
Grover, R., Kundu, D., & Mitra, A. (2018). Approximate least squares estimators of a two-dimensional Chirp model and their asymptotic properties. Journal of Multivariate Analysis, 168, 211–220.
Grover, R., Kundu, D., & Mitra, A. (2021). An efficient methodology to estimate the parameters of a two-dimensional Chirp signal model. Multidimensional Systems and Signal Processing, 32(1), 49–75.
Guo, Y., & Li, B. Z. (2018). Novel method for parameter estimation of Newton’s rings based on CFRFT and ER-WCA. Signal Processing, 144, 118–126.
Lahiri, A., Kundu, D., & Mitra, A. (2013). Efficient algorithm for estimating the parameters of two dimensional Chirp signal. Sankhya B, 75(1), 65–89.
Lahiri, A., Kundu, D., & Mitra, A. (2015). Estimating the parameters of multiple chirp signals. Journal of Multivariate Analysis, 139, 189–206.
Lahiri, A., & Kundu, D. (2017). On parameter estimation of two-dimensional polynomial phase signal model. Statistica Sinica, 27, 1779–1792.
Montgomery, H. L. (1994). Ten lectures on the interface between analytic number theory and harmonic analysis (No. 84). American Mathematical Society.
Nandi, S., & Kundu, D. (2004). Asymptotic properties of the least squares estimators of the parameters of the chirp signals. Annals of the Institute of Statistical Mathematics, 56(3), 529–544.
O'Shea, P. (2002). A new technique for instantaneous frequency rate estimation. IEEE Signal Processing Letters, 9(8), 251–252.
Peleg, S., & Porat, B. (1991). Estimation and classification of polynomial-phase signals. IEEE Transactions on Information Theory, 37(2), 422–430.
Stankovic, S., Djurovic, I., & Pitas, I. (2001). Watermarking in the space/spatial-frequency domain using two-dimensional Radon-Wigner distribution. IEEE Transactions on Image Processing, 10(4), 650–658.
Wu, Y., So, H. C., & Liu, H. (2008). Subspace-based algorithm for parameter estimation of polynomial phase signals. IEEE Transactions on Signal Processing, 56(10), 4977–4983.
Zhang, K., Wang, S., & Cao, F. (2008). Product cubic phase function algorithm for estimating the instantaneous frequency rate of multicomponent two-dimensional Chirp signals. In 2008 Congress on Image and Signal Processing (Vol. 5, pp. 498–502). IEEE.
Zhang, Y., Mobasseri, B. G., Dogahe, B. M., & Amin, M. G. (2010). Image-adaptive watermarking using 2D Chirps. Signal, Image and Video Processing, 4(1), 105–121.
Acknowledgements
We would like to thank the Editor for his encouragement and continuous support, and the anonymous reviewer for providing critical and constructive comments. Part of the work of the third author has been supported by a research grant from the Science and Engineering Research Board, Government of India.
Funding
Part of the work of the third author has been supported by a research grant from the Science and Engineering Research Board, Government of India.
Author information
Contributions
All authors contributed to the conceptualization, methodology, review, and editing of the study. Formal analysis of the proofs and the simulation analysis were performed by AS. The first draft of the manuscript was written by AS, and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript. AM and DK supervised the research project.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Consent for Publication
Not applicable.
Ethical Approval
Not applicable.
Consent to Participate
Not applicable.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendices
Appendix A
Proof of Theorem 1:
Given the data matrix \({\varvec{Y}}\), we compute the LSEs of \(\alpha ^0+n_0\mu ^0\) and \(\beta ^0\) corresponding to the \(n_0^{th}\) column vector of \({\varvec{Y}}\). We denote the obtained estimators by \({\widehat{\alpha }}_{n_0}\) and \({\widehat{\beta }}_{n_0}\) to emphasize that they depend on \(n_0\). Similarly, for the fixed \(m_0^{th}\) row of \({\varvec{Y}}\), we denote the LSEs of \(\gamma ^0+m_0\mu ^0\) and \(\delta ^0\) by \({\widehat{\gamma }}_{m_0}\) and \({\widehat{\delta }}_{m_0}\). Under the assumptions that \(X(m_0,n_0)\) is stationary and satisfies (11) and (12) (see Nandi and Kundu, 2004), we have
The final estimator of \(\beta ^0\), given by
is a strongly consistent estimator of \(\beta ^0\); this follows from (15) and from the strong consistency of \({\widehat{\beta }}_{n_0}\) for \(\beta ^0\) as \(M\rightarrow \infty \). For a proof, one may refer to Lahiri et al. (2015).
Similarly, \({\widehat{\delta }}=\displaystyle \frac{1}{M}\sum _{m_0=1}^{M}{\widehat{\delta }}_{m_0}\) is a strongly consistent estimator of \(\delta ^0\).
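The column-wise estimation step described above can be sketched numerically. Below is a minimal NumPy illustration of a grid-search least squares fit for one column; the grid resolution, sample size, and parameter values are hypothetical, and in practice a coarse pass would be refined around its maximizer. For fixed nonlinear parameters the amplitudes enter linearly, so they are profiled out:

```python
import numpy as np

def column_lse(y, a_grid, b_grid):
    """Grid-search least squares for one column of the 2-D chirp model:
    y(m) ~ A cos(a m + b m^2) + B sin(a m + b m^2), m = 1..M.
    For fixed (a, b) the amplitudes (A, B) are linear, so they are profiled
    out and we maximize the energy of the projection of y on span{cos, sin}."""
    m = np.arange(1, len(y) + 1)
    best_score, best_ab = -np.inf, (a_grid[0], b_grid[0])
    for a in a_grid:
        for b in b_grid:
            p = a * m + b * m**2
            X = np.column_stack([np.cos(p), np.sin(p)])
            coef, *_ = np.linalg.lstsq(X, y, rcond=None)
            score = np.dot(X @ coef, X @ coef)  # explained energy
            if score > best_score:
                best_score, best_ab = score, (a, b)
    return best_ab
```

Running `column_lse` on every column and averaging the returned chirp rates corresponds to the pooled estimator \({\widehat{\beta }}\) above; the row-wise estimator \({\widehat{\delta }}\) is obtained analogously.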
Denote
We now prove the consistency of the frequency parameter estimators \({\widehat{\alpha }}\) and \({\widehat{\gamma }}\), and of \({\widehat{\mu }}\), the estimator of the interaction-term parameter. From (15) and (16), we have the following:
where \({\varvec{\Gamma }}^\top =\begin{bmatrix} 1&{}1&{}\cdots &{}1&{}0&{}0&{}\cdots &{}0\\ 0&{}0&{}\cdots &{}0&{}1&{}1&{}\cdots &{}1\\ 1&{}2&{}\cdots &{}N&{}1&{}2&{}\cdots &{}M \end{bmatrix}_{3\times (M+N)}\), and \({\varvec{\Lambda }}^\top =\begin{bmatrix} {\widehat{\alpha }}_1&{\widehat{\alpha }}_2&\cdots&{\widehat{\alpha }}_N&{\widehat{\gamma }}_1&{\widehat{\gamma }}_2&\cdots&{\widehat{\gamma }}_M \end{bmatrix}\).
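The stacked linear system defined by \({\varvec{\Gamma }}\) and \({\varvec{\Lambda }}\) can be solved by ordinary least squares to recover \((\alpha ,\gamma ,\mu )\). A minimal NumPy sketch, with hypothetical true values and simulated per-column and per-row frequency estimates standing in for \({\widehat{\alpha }}_{n_0}\) and \({\widehat{\gamma }}_{m_0}\):

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 40, 50
alpha0, gamma0, mu0 = 0.7, 0.4, 0.002          # hypothetical true values

# Simulated per-column / per-row frequency estimates with small errors
alpha_hat = alpha0 + np.arange(1, N + 1) * mu0 + 1e-4 * rng.standard_normal(N)
gamma_hat = gamma0 + np.arange(1, M + 1) * mu0 + 1e-4 * rng.standard_normal(M)

# Design matrix Gamma, (N + M) x 3: transpose of the 3 x (M + N) matrix above
Gamma = np.zeros((N + M, 3))
Gamma[:N, 0] = 1.0                              # intercept for the alpha block
Gamma[N:, 1] = 1.0                              # intercept for the gamma block
Gamma[:N, 2] = np.arange(1, N + 1)              # n0 multiplies mu
Gamma[N:, 2] = np.arange(1, M + 1)              # m0 multiplies mu
Lam = np.concatenate([alpha_hat, gamma_hat])    # stacked vector Lambda

alpha_est, gamma_est, mu_est = np.linalg.lstsq(Gamma, Lam, rcond=None)[0]
```

The least squares solution \(({\varvec{\Gamma }}^\top {\varvec{\Gamma }})^{-1}{\varvec{\Gamma }}^\top {\varvec{\Lambda }}\) is exactly what `np.linalg.lstsq` computes here.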
This implies that
Now we look at the first element \(a_1+a_2-a_3\) of the following matrix
where,
and
Now we look at \(a_1, a_2\) and \(a_3\) individually and compute their limits.
\( \hbox { This implies that } a_1\xrightarrow {a.s.} 0\hbox { as } \min \{M,N\}\rightarrow \infty .\) Here,
\(\hbox { This implies that } a_2 \xrightarrow {a.s.} 0\hbox { as } \min \{M,N\}\rightarrow \infty .\)
\(\hbox { This implies that } a_3 \xrightarrow {a.s.} 0\hbox { as } \min \{M,N\}\rightarrow \infty .\) Hence, \({\widehat{\alpha }}\) is a strongly consistent estimator of \(\alpha ^0\). The strong consistency of \({\widehat{\gamma }}\) for \(\gamma ^0\) can be derived similarly.
Now, the third element of \(({\varvec{\Gamma }}^\top {\varvec{\Gamma }})^{-1}\begin{bmatrix} N\times o\bigg (\frac{1}{M}\bigg )\\ M\times o\bigg (\frac{1}{N}\bigg )\\ \frac{N(N+1)}{2}\times o\bigg (\frac{1}{M}\bigg )+\frac{M(M+1)}{2}\times o\bigg (\frac{1}{N}\bigg ) \end{bmatrix}\) can be written as \(-b_1-b_2+b_3\), where
Appendix B
Proof of Theorem 2:
Suppose \({\varvec{\kappa }}^\top =(A,B,\alpha ,\beta ) \) and
\({\varvec{\kappa }}^{0^\top }= (A^0(n_0),B^0(n_0),\alpha ^0+n_0\mu ^0,\beta ^0)\). We define the sum of squares as follows:
Let \(\widehat{{\varvec{\kappa }}}\) be the minimizer of \(Q_{n_0}({\varvec{\kappa }})\). Then, using a Taylor series expansion of the first derivative vector \(Q_{n_0}^{\prime }({\varvec{\kappa }})\) around the point \({\varvec{\kappa }}^{0}\), we get:
where \(\breve{{\varvec{\kappa }}}\) is a point between \(\widehat{{\varvec{\kappa }}}\) and \({\varvec{\kappa }}^0\). Also, \({\varvec{Q}}_{n_0}^{\prime }(\widehat{{\varvec{\kappa }}})=0\), as \(\widehat{{\varvec{\kappa }}}\) is the LSE of \({\varvec{\kappa }}^0\).
Let us denote \({\varvec{D}}_1^{-1}=diag(M^{1/2},M^{1/2},M^{3/2},M^{5/2})\). Multiplying both sides of Eq. (19) by \({\varvec{D}}_1^{-1}\) gives:
and
The expression for \({\varvec{\Sigma }}_{n_0}^{-1}\) can be obtained from the proof of Theorem 2 in Lahiri et al. (2015). Using Eq. (20) and this expression of \({\varvec{\Sigma }}_{n_0}^{-1}\), we get the following asymptotically equivalent (a.e.) expression for the third element of the vector \({\varvec{D_1}}^{-1}(\widehat{{\varvec{\kappa }}}-{\varvec{\kappa }}^0)\):
![](http://media.springernature.com/lw330/springer-static/image/art%3A10.1007%2Fs11045-023-00879-7/MediaObjects/11045_2023_879_Equ21_HTML.png)
where, \(\eta (m_0,n_0)=X(m_0,n_0)\big (A^0\sin \phi (m_0,n_0,{\varvec{\xi }}^0)-B^0\cos \phi (m_0,n_0,{\varvec{\xi }}^0)\big )\),
\(\phi (m_0,n_0,{\varvec{\xi }}^0)= \alpha ^0m_0+\beta ^0m_0^2+\gamma ^0n_0+\delta ^0n_0^2+\mu ^0 m_0n_0\). These notations are used for brevity. Similarly, the expression for the fourth element of \({\varvec{D_1}}^{-1}(\widehat{{\varvec{\kappa }}}-{\varvec{\kappa }}^0)\), and those for the corresponding elements of the analogous row-wise vector, can be written as in Eqs. (22), (23) and (24), respectively:
![](http://media.springernature.com/lw382/springer-static/image/art%3A10.1007%2Fs11045-023-00879-7/MediaObjects/11045_2023_879_Equ22_HTML.png)
![](http://media.springernature.com/lw351/springer-static/image/art%3A10.1007%2Fs11045-023-00879-7/MediaObjects/11045_2023_879_Equ23_HTML.png)
![](http://media.springernature.com/lw370/springer-static/image/art%3A10.1007%2Fs11045-023-00879-7/MediaObjects/11045_2023_879_Equ24_HTML.png)
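For reference, the phase \(\phi (m_0,n_0,{\varvec{\xi }})\) defined above, and the one-component 2-D chirp signal it generates, can be evaluated directly. A short sketch consistent with these definitions (the parameter values are hypothetical, and the noise field is optional):

```python
import numpy as np

def phi(m, n, xi):
    """Phase alpha*m + beta*m^2 + gamma*n + delta*n^2 + mu*m*n."""
    alpha, beta, gamma, delta, mu = xi
    return alpha * m + beta * m**2 + gamma * n + delta * n**2 + mu * m * n

def chirp2d(M, N, A, B, xi, noise_sd=0.0, rng=None):
    """One-component 2-D chirp: y(m,n) = A cos(phi) + B sin(phi) + noise."""
    rng = rng or np.random.default_rng(0)
    m = np.arange(1, M + 1)[:, None]   # row index m0 = 1..M
    n = np.arange(1, N + 1)[None, :]   # column index n0 = 1..N
    p = phi(m, n, xi)
    y = A * np.cos(p) + B * np.sin(p)
    if noise_sd > 0:
        y = y + noise_sd * rng.standard_normal((M, N))
    return y
```

For example, `chirp2d(50, 50, 2.0, 1.0, (0.5, 0.1, 0.3, 0.2, 0.05))` returns a \(50\times 50\) data matrix \({\varvec{Y}}\) of the kind analyzed in this proof.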
From Eqs. (22) and (24) above, we can see that the estimators \({\widehat{\beta }}\) and \({\widehat{\delta }}\) of \(\beta ^0\) and \(\delta ^0\) are asymptotically equivalent to the LSEs, as they have the same asymptotic variances; this follows by applying the central limit theorem for stationary linear processes (see Fuller, 1996). It remains to establish the asymptotic properties of the estimators \(( {\widehat{\alpha }}, {\widehat{\gamma }}, {\widehat{\mu }})^\top \).
The expression for the proposed estimators of \((\alpha ^0,\gamma ^0, \mu ^0)^\top \) is:
From (25), we get:
For sufficiently large M and N, we have:
Asymptotic normality of the estimators for \(M=N\rightarrow \infty \) with given rates of convergence follows by applying central limit theorem for stationary processes (Fuller, 1996).
We now present some important results that will be used to find the asymptotic variance-covariance matrix of the proposed estimators of the non-linear parameters. Using Eqs. (21) and (23), we have the following observations:
1. \(Asy Var \bigg (\frac{1}{2\sqrt{N}}\displaystyle \sum _{n_0=1}^{N}M^{3/2}\big ({\widehat{\alpha }}_{n_0}-(\alpha ^0+n_0\mu ^0)\big )\bigg )=\frac{96c\sigma ^2}{A^{0^2}+B^{0^2}}\),
2. \(Asy Var \bigg (\frac{1}{2\sqrt{M}}\displaystyle \sum _{m_0=1}^MN^{3/2}\big ({\widehat{\gamma }}_{m_0}-(\gamma ^0+m_0\mu ^0)\big )\bigg )=\frac{96c\sigma ^2}{A^{0^2}+B^{0^2}}\),
3. \(Asy Var \bigg (\frac{1}{2N^{3/2}}\displaystyle \sum _{n_0=1}^{N}n_0M^{3/2}\big ({\widehat{\alpha }}_{n_0}-(\alpha ^0+n_0\mu ^0)\big )\bigg )=\frac{32c\sigma ^2}{A^{0^2}+B^{0^2}}\),
4. \(Asy Var \bigg (\frac{1}{2M^{3/2}}\displaystyle \sum _{m_0=1}^Mm_0N^{3/2}\big ({\widehat{\gamma }}_{m_0}-(\gamma ^0+m_0\mu ^0)\big )\bigg )=\frac{32c\sigma ^2}{A^{0^2}+B^{0^2}}\),
5. \(Asy Covar \bigg (\frac{1}{2\sqrt{N}}\displaystyle \sum _{n_0=1}^{N}M^{3/2}\big ({\widehat{\alpha }}_{n_0}-(\alpha ^0+n_0\mu ^0)\big ),\frac{1}{2\sqrt{M}}\displaystyle \sum _{m_0=1}^MN^{3/2}\big ({\widehat{\gamma }}_{m_0}-(\gamma ^0+m_0\mu ^0)\big )\bigg )=0\),
6. \(Asy Covar \bigg (\frac{1}{2\sqrt{N}}\displaystyle \sum _{n_0=1}^{N}M^{3/2}\big ({\widehat{\alpha }}_{n_0}-(\alpha ^0+n_0\mu ^0)\big ),\frac{1}{2N^{3/2}}\displaystyle \sum _{n_0=1}^{N}n_0M^{3/2}\big ({\widehat{\alpha }}_{n_0}-(\alpha ^0+n_0\mu ^0)\big )\bigg )=\frac{48c\sigma ^2}{A^{0^2}+B^{0^2}}\),
7. \(Asy Covar \bigg (\frac{1}{2\sqrt{N}}\displaystyle \sum _{n_0=1}^{N}M^{3/2}\big ({\widehat{\alpha }}_{n_0}-(\alpha ^0+n_0\mu ^0)\big ),\frac{1}{2M^{3/2}}\displaystyle \sum _{m_0=1}^Mm_0N^{3/2}\big ({\widehat{\gamma }}_{m_0}-(\gamma ^0+m_0\mu ^0)\big )\bigg )=0\),
8. \(Asy Covar \bigg (\frac{1}{2\sqrt{M}}\displaystyle \sum _{m_0=1}^MN^{3/2}\big ({\widehat{\gamma }}_{m_0}-(\gamma ^0+m_0\mu ^0)\big ),\frac{1}{2N^{3/2}}\displaystyle \sum _{n_0=1}^{N}n_0M^{3/2}\big ({\widehat{\alpha }}_{n_0}-(\alpha ^0+n_0\mu ^0)\big )\bigg )=0\),
9. \(Asy Covar \bigg (\frac{1}{2\sqrt{M}}\displaystyle \sum _{m_0=1}^MN^{3/2}\big ({\widehat{\gamma }}_{m_0}-(\gamma ^0+m_0\mu ^0)\big ),\frac{1}{2M^{3/2}}\displaystyle \sum _{m_0=1}^Mm_0N^{3/2}\big ({\widehat{\gamma }}_{m_0}-(\gamma ^0+m_0\mu ^0)\big )\bigg )=\frac{48c\sigma ^2}{A^{0^2}+B^{0^2}}\),
10. \(Asy Covar \bigg (\frac{1}{2N^{3/2}}\displaystyle \sum _{n_0=1}^{N}n_0M^{3/2}\big ({\widehat{\alpha }}_{n_0}-(\alpha ^0+n_0\mu ^0)\big ),\frac{1}{2M^{3/2}}\displaystyle \sum _{m_0=1}^Mm_0N^{3/2}\big ({\widehat{\gamma }}_{m_0}-(\gamma ^0+m_0\mu ^0)\big )\bigg )=\frac{c\sigma ^2}{2(A^{0^2}+B^{0^2})}\),
where \(c=\displaystyle {\sum _{i=-\infty }^{\infty }\sum _{j=-\infty }^{\infty }}a^2(i,j)\).
Using the above results in Eq. (26), we get the asymptotic variance-covariance matrix of
\(\begin{bmatrix} M^{3/2}N^{1/2}({\widehat{\alpha }}-\alpha ^0)\\ N^{3/2}M^{1/2}({\widehat{\gamma }}-\gamma ^0)\\ M^{3/2}N^{3/2}({\widehat{\mu }}-\mu ^0) \end{bmatrix}\) as: \( \frac{c\sigma ^2}{(A^{0^2}+B^{0^2})}\begin{bmatrix} 996&{}612&{}-1224\\ 612 &{}996&{}-1224\\ -1224 &{}-1224&{}2448 \end{bmatrix}. \)
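As a sanity check, the \(3\times 3\) coefficient matrix above can be verified numerically to be symmetric and positive definite, as any variance-covariance matrix must be; the positive scale factor \(c\sigma ^2/(A^{0^2}+B^{0^2})\) does not affect definiteness:

```python
import numpy as np

# Coefficient matrix of the asymptotic variance-covariance matrix of
# (alpha-hat, gamma-hat, mu-hat), up to the factor c*sigma^2/(A0^2 + B0^2)
K = np.array([[  996.,  612., -1224.],
              [  612.,  996., -1224.],
              [-1224., -1224., 2448.]])

assert np.allclose(K, K.T)            # symmetric
assert np.linalg.eigvalsh(K).min() > 0  # all eigenvalues positive => PD
```

Positive definiteness also confirms that the three limiting estimators are not asymptotically degenerate.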
From Eqs. (21), (22) and (23), it is further observed that:
1. \(AsyCovar\bigg ( M^{5/2}N^{1/2}\big ({\widehat{\beta }}-\beta ^0\big ), M^{1/2}N^{5/2}\big ({\widehat{\delta }}-\delta ^0\big )\bigg )=0\),
2. \(AsyCovar\bigg ( M^{5/2}N^{1/2}\big ({\widehat{\beta }}-\beta ^0\big ),\frac{1}{\sqrt{N}}\displaystyle \sum _{n_0=1}^{N}M^{3/2}\big ({\widehat{\alpha }}_{n_0}-(\alpha ^0+n_0\mu ^0)\big )\bigg )=\frac{-360c\sigma ^2}{A^{0^2}+B^{0^2}}\),
3. \(AsyCovar\bigg ( M^{5/2}N^{1/2}\big ({\widehat{\beta }}-\beta ^0\big ),\frac{1}{\sqrt{M}}\displaystyle \sum _{m_0=1}^MN^{3/2}\big ({\widehat{\gamma }}_{m_0}-(\gamma ^0+m_0\mu ^0)\big )\bigg )=0\),
4. \(AsyCovar\bigg ( M^{5/2}N^{1/2}\big ({\widehat{\beta }}-\beta ^0\big ),\frac{1}{N\sqrt{N}}\displaystyle \sum _{n_0=1}^{N}M^{3/2}n_0\big ({\widehat{\alpha }}_{n_0}-(\alpha ^0+n_0\mu ^0)\big )\bigg )=\frac{-180c\sigma ^2}{A^{0^2}+B^{0^2}}\),
5. \(AsyCovar\bigg ( M^{5/2}N^{1/2}\big ({\widehat{\beta }}-\beta ^0\big ),\frac{1}{M\sqrt{M}}\displaystyle \sum _{m_0=1}^Mm_0N^{3/2}\big ({\widehat{\gamma }}_{m_0}-(\gamma ^0+m_0\mu ^0)\big )\bigg )=0.\)
Similar results can be derived for \(M^{1/2}N^{5/2}\big ({\widehat{\delta }}-\delta ^0\big )\).
Asymptotic variance-covariance matrix of the proposed estimators of non-linear parameters is given by:
Next, we derive the asymptotics of the amplitude estimators. Recall that, using a Taylor series expansion of \(\cos \phi (m_0,n_0,\widehat{{\varvec{\xi }}})\) around the point \({\varvec{\xi }}^0\), we can write:
For brevity, we have denoted \(\cos \phi (m_0,n_0,\widehat{{\varvec{\xi }}})\) by \(\cos {\widehat{\phi }}\), \(\cos \phi (m_0,n_0,{\varvec{\xi }}^0)\) by \(\cos \phi ^0\), and \(\sin \phi (m_0,n_0,\breve{{\varvec{\xi }}})\) by \(\sin \breve{\phi }\), where \(\breve{{\varvec{\xi }}}\) is a point lying between \(\widehat{{\varvec{\xi }}}\) and \({\varvec{\xi }}^0\).
Now consider the first element of the following vector,
we get:
where \(R(m_0,n_0)=\big (\widehat{{\varvec{\xi }}}-{\varvec{\xi }}^0\big )^\top \begin{bmatrix} m_0\\ m_0^2\\ n_0\\ n_0^2\\ m_0n_0 \end{bmatrix}\), and the second element of the above amplitude vector can be written as:
Now let us look at the first and last terms of Eq. (28), substituting the value of \(y(m_0,n_0)\) from model (1):
![](http://media.springernature.com/lw506/springer-static/image/art%3A10.1007%2Fs11045-023-00879-7/MediaObjects/11045_2023_879_Equ30_HTML.png)
The above result follows from a well-known conjecture in analytic number theory due to Montgomery (1994).
In the second term of (28), \(R(m_0,n_0)\) is a sum of five terms; now consider the first term of
\(\frac{2}{{MN}}\displaystyle \sum _{m_0=1}^M\sum _{n_0=1}^{N}y(m_0,n_0)\sin {\breve{\phi }}R(m_0,n_0)\),
![](http://media.springernature.com/lw588/springer-static/image/art%3A10.1007%2Fs11045-023-00879-7/MediaObjects/11045_2023_879_Equ53_HTML.png)
So, finding the asymptotic distribution of (28) boils down to finding the asymptotic distribution of
which is further asymptotically equivalent to:
The asymptotic normality of the amplitude estimators thus follows from Eq. (31). It remains to derive the expressions for their asymptotic variances.
After lengthy calculations, we get the asymptotic variance of \({\widehat{A}}\) as follows:
Similarly, by computing the remaining terms, we obtain the complete variance-covariance matrix \({\varvec{\Sigma }}\), as stated in Theorem 2.
Hence the result. \(\square \)
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Shukla, A., Grover, R., Kundu, D. et al. A computationally efficient algorithm to estimate the parameters of a two-dimensional chirp model with the product term. Multidim Syst Sign Process 34, 633–655 (2023). https://doi.org/10.1007/s11045-023-00879-7