Asymptotic Properties of the LS Estimator

Chapter in: Design of Experiments in Nonlinear Models

Part of the book series: Lecture Notes in Statistics (LNS, volume 212)

Abstract

We consider asymptotic properties, as \(N \rightarrow \infty \), of the (ordinary) LS estimator \(\hat{\theta }_{LS}^{N}\) for a model defined by the mean (or expected) response η(x, θ).
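
The setting can be illustrated with a minimal numerical sketch: ordinary least squares for a hypothetical mean response \(\eta (x,\theta ) = \theta _{1}\exp (-\theta _{2}x)\), fitted by Gauss–Newton iterations. The model, data-generating values, and starting point below are illustrative assumptions, not taken from the chapter.

```python
import numpy as np

def eta(x, theta):
    # Hypothetical mean response eta(x, theta) = theta_1 * exp(-theta_2 * x)
    return theta[0] * np.exp(-theta[1] * x)

def ls_estimate(x, y, theta0, n_iter=50):
    # Gauss-Newton iterations for theta_hat = argmin sum_k [y_k - eta(x_k, theta)]^2
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_iter):
        r = y - eta(x, theta)  # current residuals
        # Jacobian of eta with respect to theta at the current iterate
        J = np.column_stack([np.exp(-theta[1] * x),
                             -theta[0] * x * np.exp(-theta[1] * x)])
        step, *_ = np.linalg.lstsq(J, r, rcond=None)
        theta = theta + step
    return theta

rng = np.random.default_rng(0)
x = np.linspace(0.1, 5.0, 200)
theta_true = np.array([2.0, 0.7])
y = eta(x, theta_true) + 0.05 * rng.standard_normal(x.size)  # i.i.d. errors
theta_hat = ls_estimate(x, y, theta0=[1.0, 1.0])
```

With 200 observations and a small error variance, theta_hat should lie close to the generating value (2.0, 0.7).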


Notes

  1.

    In fact, they consider the more general situation where the errors \(\epsilon _{k}\) form a martingale difference sequence with respect to an increasing sequence of σ-fields \(\mathcal{F}_{k}\) such that \(\sup _{k}\mathrm{E}(\epsilon _{k}^{2}\,\vert \,\mathcal{F}_{k-1}) < \infty \). When the errors \(\epsilon _{k}\) are i.i.d. with zero mean and variance \({\sigma }^{2} > 0\), they also show that \(\lambda _{\min }(\mathbf{M}_{N}) \rightarrow \infty \) is both necessary and sufficient for the strong consistency of \(\hat{\theta }_{LS}^{N}\).

  2.

    His proof, based on properties of Hilbert space valued martingales, requires a condition that gives for linear regression \(\lambda _{\max }(\mathbf{M}_{N}) = \mathcal{O}\{{[\lambda _{\min }(\mathbf{M}_{N})]}^{\rho }\}\) for some \(\rho \in (1,2)\), to be compared with the condition \(\log \log \lambda _{\max }(\mathbf{M}_{N}) = o[\lambda _{\min }(\mathbf{M}_{N})]\).

  3.

    A sequence of random variables \(z_{n}\) is bounded in probability if for any \(\epsilon > 0\) there exist \(A\) and \(n_{0}\) such that \(\forall n > n_{0}\), \(\mathrm{Prob}\{\vert z_{n}\vert > A\} < \epsilon \).

  4.

    Taking only a finite number of observations at a place other than \(x^{\ast }\) might seem an odd strategy; note, however, that Wynn’s algorithm [Wynn, 1972] for the minimization of \([\partial h(\theta )/{\partial \theta }^{\top }\,{\mathbf{M}}^{-}(\xi )\,\partial h(\theta )/\partial \theta ]_{{\theta }^{{\ast}}}\) generates such a sequence of design points when the design space is \(\mathcal{X} = [-1, 1]\), see Pázman and Pronzato [2006b], or when \(\mathcal{X}\) is a finite set containing \(x^{\ast }\).

  5.

    However, we shall see in Remark 3.28-(iv) that two steps are enough to obtain the same asymptotic behavior as the maximum likelihood estimator for normal errors.

  6.

    See page 33 for the definition.

  7.

    The variance function λ(x, θ) may be nonlinear.

  8.

    It therefore seems more reasonable to consider β as an unknown nuisance parameter for the estimation of θ; this approach will be considered in the next section. See also Remark 3.23.

  9.

    The asymptotic normality mentioned above for \(\hat{\delta }_{1}^{N}\) extends Theorem 1 of Jobson and Fuller [1980] which concerns the case where η(x, θ) is linear in θ and the errors \(\epsilon _{k}\) are normally distributed.

  10.

    By enforcing constraints c(θ) = 0 in the estimation in a situation where \(\mathbf{c}(\bar{\theta })\neq \mathbf{0}\), we introduce a modeling error, the effect of which on the asymptotic properties of the LS estimator \(\hat{\theta }_{LS}^{N}\) could be taken into account by combining the developments below with those in Sect. 3.4.

  11.

    We only pay attention to rates slower than \(\sqrt{N}\) because \(\mathcal{X}\) is compact; notice, however, that by allowing the design points to expand to infinity, we might easily generate convergence rates faster than \(\sqrt{N}\).

  12.

    However, this is not always so: adaptive estimation is precisely concerned with efficient parameter estimation in models involving a nonparametric component; see the references in Sect. 4.4.2.
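
The eigenvalue condition of Note 1 can be explored numerically. For linear regression with regressor \(f(x) = (1,x)^{\top }\), the sketch below (an illustration with an assumed two-point design, not an example from the chapter) tracks \(\lambda _{\min }(\mathbf{M}_{N})\) as observations accumulate; it grows without bound, the situation in which strong consistency holds.

```python
import numpy as np

def lambda_min_path(xs):
    # Smallest eigenvalue of M_N = sum_k f(x_k) f(x_k)^T for f(x) = (1, x)^T
    M = np.zeros((2, 2))
    path = []
    for x in xs:
        f = np.array([1.0, x])
        M += np.outer(f, f)
        path.append(np.linalg.eigvalsh(M)[0])
    return path

# Assumed design alternating between -1 and +1: lambda_min(M_N) grows
# linearly in N, so the condition lambda_min(M_N) -> infinity is satisfied.
xs = [(-1.0) ** k for k in range(1, 201)]
lmins = lambda_min_path(xs)
```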
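
The definition of boundedness in probability in Note 3 can be checked by simulation. Here \(z_{n} = \sqrt{n}\,\bar{\epsilon }_{n}\) for i.i.d. standard normal errors, so \(z_{n}\) is exactly N(0, 1) for every n and a single threshold A works uniformly in n; the sample sizes and threshold below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def tail_prob(n, A, n_rep=20000):
    # Monte Carlo estimate of Prob{|z_n| > A}, z_n = sqrt(n) * mean of n i.i.d. N(0,1)
    samples = rng.standard_normal((n_rep, n))
    z = np.sqrt(n) * samples.mean(axis=1)
    return float(np.mean(np.abs(z) > A))

# z_n ~ N(0,1) exactly, so with A = 3 the tail probability stays below
# eps = 0.01 for every n: the sequence (z_n) is bounded in probability.
probs = [tail_prob(n, A=3.0) for n in (10, 100, 1000)]
```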
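
The remark in Note 11 on rates faster than \(\sqrt{N}\) when the design points expand to infinity can be illustrated with straight-line regression through the origin, \(y_{k} = \theta x_{k} +\epsilon _{k}\) with \(x_{k} = k\): since \(\mathrm{var}(\hat{\theta }) ={\sigma }^{2}/\sum _{k}x_{k}^{2} \approx 3{\sigma }^{2}/{N}^{3}\), the LS estimator converges at rate \({N}^{3/2}\). The model and sample sizes below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def scaled_rms_error(N, n_rep=2000, theta=1.0, sigma=1.0):
    # RMS of N^{3/2} * (theta_hat - theta) for y_k = theta * x_k + eps_k, x_k = k
    x = np.arange(1, N + 1, dtype=float)
    eps = sigma * rng.standard_normal((n_rep, N))
    theta_hat = (x * (theta * x + eps)).sum(axis=1) / (x ** 2).sum()
    return float(np.sqrt(np.mean((N ** 1.5 * (theta_hat - theta)) ** 2)))

# The scaled error stays near sqrt(3) as N grows, confirming the N^{3/2} rate,
# faster than the usual sqrt(N) rate on a compact design space.
scales = [scaled_rms_error(N) for N in (50, 200)]
```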

References

  • Atkinson, A. (2003). Transforming both sides and optimum experimental design for a nonlinear model arising from second-order chemical kinetics. Tatra Mountains Math. Pub. 26, 29–39.

  • Atkinson, A. (2004). Some Bayesian optimum designs for response transformation in nonlinear models with nonconstant variance. In A. Di Bucchianico, H. Läuter, and H. Wynn (Eds.), mODa’7 – Advances in Model-Oriented Design and Analysis, Proc. 7th Int. Workshop, Heeze (Netherlands), pp. 13–21. Heidelberg: Physica Verlag.

  • Atkinson, A. and R. Cook (1996). Designing for a response transformation parameter. J. Roy. Statist. Soc. B59, 111–124.

  • Bates, D. and D. Watts (Eds.) (1988). Nonlinear Regression Analysis and its Applications. New York: Wiley.

  • Bierens, H. (1994). Topics in Advanced Econometrics. Cambridge: Cambridge Univ. Press.

  • Box, G. and D. Cox (1964). An analysis of transformations (with discussion). J. Roy. Statist. Soc. B26, 211–252.

  • Carroll, R. and D. Ruppert (1982). A comparison between maximum likelihood and generalized least squares in a heteroscedastic linear model. J. Amer. Statist. Assoc. 77(380), 878–882.

  • del Pino, G. (1989). The unifying role of iterative generalized least squares in statistical algorithms (with discussion). Statist. Sci. 4(4), 394–408.

  • Downing, D., V. Fedorov, and S. Leonov (2001). Extracting information from the variance function: optimal design. In A. Atkinson, P. Hackl, and W. Müller (Eds.), mODa’6 – Advances in Model-Oriented Design and Analysis, Proc. 6th Int. Workshop, Puchberg/Schneeberg (Austria), pp. 45–52. Heidelberg: Physica Verlag.

  • Elfving, G. (1952). Optimum allocation in linear regression. Ann. Math. Statist. 23, 255–262.

  • Green, P. (1984). Iteratively reweighted least squares for maximum likelihood estimation, and some robust and resistant alternatives (with discussion). J. Roy. Statist. Soc. B46(2), 149–192.

  • Jennrich, R. (1969). Asymptotic properties of nonlinear least squares estimation. Ann. Math. Statist. 40, 633–643.

  • Jobson, J. and W. Fuller (1980). Least squares estimation when the covariance matrix and parameter vector are functionally related. J. Amer. Statist. Assoc. 75(369), 176–181.

  • Kim, J. and D. Pollard (1990). Cube root asymptotics. Ann. Statist. 18(1), 191–219.

  • Lai, T. (1994). Asymptotic properties of nonlinear least squares estimates in stochastic regression models. Ann. Statist. 22(4), 1917–1930.

  • Lai, T., H. Robbins, and C. Wei (1978). Strong consistency of least squares estimates in multiple regression. Proc. Nat. Acad. Sci. USA 75(7), 3034–3036.

  • Lai, T., H. Robbins, and C. Wei (1979). Strong consistency of least squares estimates in multiple regression II. J. Multivariate Anal. 9, 343–361.

  • Lai, T. and C. Wei (1982). Least squares estimates in stochastic regression models with applications to identification and control of dynamic systems. Ann. Statist. 10(1), 154–166.

  • Lehmann, E. and G. Casella (1998). Theory of Point Estimation. Heidelberg: Springer.

  • Parzen, E. (1962). On estimation of a probability density function and mode. Ann. Math. Statist. 33, 1065–1076.

  • Pázman, A. (1980). Singular experimental designs. Math. Operationsforsch. Statist. Ser. Statist. 16, 137–149.

  • Pázman, A. (1993b). Nonlinear Statistical Models. Dordrecht: Kluwer.

  • Pázman, A. (2002a). Optimal design of nonlinear experiments with parameter constraints. Metrika 56, 113–130.

  • Pázman, A. and L. Pronzato (2004). Simultaneous choice of design and estimator in nonlinear regression with parameterized variance. In A. Di Bucchianico, H. Läuter, and H. Wynn (Eds.), mODa’7 – Advances in Model-Oriented Design and Analysis, Proc. 7th Int. Workshop, Heeze (Netherlands), pp. 117–124. Heidelberg: Physica Verlag.

  • Pázman, A. and L. Pronzato (2006a). Asymptotic criteria for designs in nonlinear regression with model errors. Math. Slovaca 56(5), 543–553.

  • Pázman, A. and L. Pronzato (2006b). On the irregular behavior of LS estimators for asymptotically singular designs. Statist. Probab. Lett. 76, 1089–1096.

  • Pázman, A. and L. Pronzato (2009). Asymptotic normality of nonlinear least squares under singular experimental designs. In L. Pronzato and A. Zhigljavsky (Eds.), Optimal Design and Related Areas in Optimization and Statistics, Chapter 8, pp. 167–191. Springer.

  • Phillips, R. (2002). Least absolute deviations estimation via the EM algorithm. Stat. Comput. 12, 281–285.

  • Pronzato, L. (2009a). Asymptotic properties of nonlinear estimates in stochastic models with finite design space. Statist. Probab. Lett. 79, 2307–2313.

  • Pronzato, L. and A. Pázman (2004). Recursively re-weighted least-squares estimation in regression models with parameterized variance. In Proc. EUSIPCO’2004, Vienna, Austria, pp. 621–624.

  • Rousseeuw, P. (1984). Least median of squares regression. J. Amer. Statist. Assoc. 79, 871–880.

  • Rousseeuw, P. and A. Leroy (1987). Robust Regression and Outlier Detection. New York: Wiley.

  • Schlossmacher, E. (1973). An iterative technique for absolute deviations curve fitting. J. Amer. Statist. Assoc. 68(344), 857–859.

  • Shiryaev, A. (1996). Probability. Berlin: Springer.

  • Silvey, S. (1980). Optimal Design. London: Chapman & Hall.

  • Stoer, J. and R. Bulirsch (1993). Introduction to Numerical Analysis (2nd Edition). Heidelberg: Springer.

  • van der Vaart, A. (1998). Asymptotic Statistics. Cambridge: Cambridge Univ. Press.

  • Wald, A. (1949). Note on the consistency of the maximum likelihood estimate. Ann. Math. Statist. 20, 595–601.

  • Wu, C. (1980). Characterizing the consistent directions of least squares estimates. Ann. Statist. 8(4), 789–801.

  • Wu, C. (1981). Asymptotic theory of nonlinear least squares estimation. Ann. Statist. 9(3), 501–513.

  • Wu, C. (1983). Further results on the consistent directions of least squares estimators. Ann. Statist. 11(4), 1257–1262.

  • Wynn, H. (1972). Results in the theory and construction of D-optimum experimental designs. J. Roy. Statist. Soc. B34, 133–147.


Copyright information

© 2013 Springer Science+Business Media New York

Cite this chapter

Pronzato, L., Pázman, A. (2013). Asymptotic Properties of the LS Estimator. In: Design of Experiments in Nonlinear Models. Lecture Notes in Statistics, vol 212. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-6363-4_3
