Abstract

Mathematical modeling is an important part of many fields of engineering and science. Mathematical models are used to simulate physical systems and test their behavior under different conditions. These models depend on parameters that must be evaluated from measurements collected from physical systems operating under the conditions of validity of the model. For example, an aerospace engineer modeling a rocket collects measurements of position, velocity, and acceleration that, together with the equations of motion, allow the parameters of the rocket model to be evaluated. An electronics engineer measures voltage and current at the input and output of an electronic device and uses these measurements to obtain an input-output model of the device. A wildlife biologist collects population data for species in a particular environment to develop a predator-prey model for wildlife management.

The measurements in all of the above applications include measurement errors that must be considered when evaluating the model parameters. These errors can be deterministic, random, or both, depending on the application. The presence of errors makes it necessary to collect far more measurements than the number of parameters to be evaluated, because the redundancy in the measurements compensates for the error in each individual measurement. The two most popular approaches to parameter estimation are the minimum square error (least-squares) approach and the maximum likelihood approach; they are the subjects of later chapters.

In this chapter we discuss properties of estimators that allow us to assess their quality. Because parameter estimates are based on noisy measurements, the estimates themselves are random. Even in cases where the noise distribution is known, the distribution of the parameter estimates can be quite complicated. However, to assess the quality of an estimate it is essential to learn more about its distribution and how that distribution is influenced by the size of the data sample used in estimation. In some cases this information cannot be obtained for a finite sample size, but much can be learned by taking the limit as the sample size goes to infinity. Properties evaluated by taking this limit are known as large sample properties; they allow us to evaluate the effect of increasing sample size on the quality of the estimator. Properties that can be evaluated without taking the limit are valid for any sample size and are known as small sample properties. We discuss small sample properties next.
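To make these ideas concrete, the following Python sketch (not part of the chapter) estimates the parameters of a hypothetical straight-line model from redundant noisy measurements by least squares and repeats the experiment many times for several sample sizes. The model, the true parameter values, and the noise level are assumptions chosen purely for illustration.

```python
# Minimal Monte Carlo sketch of the ideas in the abstract: redundant noisy
# measurements are combined by least squares, and the spread of the resulting
# parameter estimates shrinks as the sample size N grows (a large sample
# property). The straight-line model y = a*t + b, the "true" parameter values,
# and the noise level are assumptions chosen only for this demonstration.
import numpy as np

rng = np.random.default_rng(0)
a_true, b_true = 2.0, -1.0   # hypothetical true parameters
noise_std = 0.5              # standard deviation of the random measurement error
n_trials = 2000              # Monte Carlo repetitions per sample size

for N in (10, 100, 1000):    # number of measurements per experiment
    estimates = np.empty((n_trials, 2))
    for k in range(n_trials):
        t = np.linspace(0.0, 1.0, N)                 # measurement instants
        y = a_true * t + b_true + noise_std * rng.standard_normal(N)
        H = np.column_stack((t, np.ones(N)))         # regression matrix
        theta_hat, *_ = np.linalg.lstsq(H, y, rcond=None)  # least-squares fit
        estimates[k] = theta_hat
    print(f"N = {N:4d}   mean of (a_hat, b_hat) = {estimates.mean(axis=0).round(3)}   "
          f"std of (a_hat, b_hat) = {estimates.std(axis=0).round(4)}")
```

Under these assumptions the estimates scatter randomly around the true values, and the printed standard deviations shrink roughly in proportion to 1/sqrt(N), which is the kind of large sample behavior that the estimator properties discussed in this chapter are meant to characterize.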


Author information

Corresponding author

Correspondence to M. Sami Fadali.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this chapter

Cite this chapter

Fadali, M.S. (2024). Estimation and Estimator Properties. In: Introduction to Random Signals, Estimation Theory, and Kalman Filtering. Springer, Singapore. https://doi.org/10.1007/978-981-99-8063-5_5
