Part of the book series: Methodology of Educational Measurement and Assessment (MEMA)

Abstract

Computational psychometrics blends stochastic process theory, computer science-based methods, and theory-based psychometric approaches to aid the analysis of complex data from performance assessments. This chapter discusses the grounds for using complex performance assessments, the design of such assessments so that useful evidence about targeted abilities will be present in the data to be analyzed, and the roles that computational psychometric ideas and methods can play. It first provides background on a situative, sociocognitive perspective on human capabilities and how we develop and use them, a perspective we believe is necessary to synthesize the methodologies. Next, it reviews the form of evidentiary argument that underlies the evidence-centered approach to the design, interpretation, and use of educational assessments. It then points out junctures in extensions of the argument form where computational psychometric methods can carry out vital roles in assessing more advanced constructs, from more complex data, in new forms and contexts of assessment. It concludes by reflecting on how the notions of validity, reliability, comparability, fairness, and generalizability can be reconceived and extended to more complex assessments and analytic methods.
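In measurement-model terms, the evidentiary argument the chapter builds on amounts to accumulating evidence about unobserved student-model variables from observable features of performance. As an illustrative formulation only (not quoted from the chapter), and assuming the observations are conditionally independent given the student-model variables, the accumulation step takes the familiar Bayesian form

p(\theta \mid x_1, \ldots, x_n) \;\propto\; p(\theta) \prod_{j=1}^{n} p(x_j \mid \theta),

where \theta collects the student-model variables and x_j is the evidence extracted from the j-th task performance. Computational psychometric methods enter both in constructing the x_j (e.g., features mined from log or process data) and in specifying tractable forms for p(x_j \mid \theta).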

The R and Python code can be found in the book's GitHub repository: https://github.com/jgbrainstorm/computational_psychometrics
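For a concrete flavor of this kind of analysis, the sketch below (illustrative only, and not taken from that repository) applies the Bayesian update above to a toy student model: a single discrete proficiency variable updated from a sequence of scored responses, in the spirit of the Bayesian-network student models discussed in the references (Conati et al., 2002). The levels, conditional probabilities, and response sequence are all hypothetical.

import numpy as np

# Hypothetical proficiency levels and a uniform prior over them.
levels = ["low", "medium", "high"]
prior = np.array([1/3, 1/3, 1/3])

# Assumed probability of a correct response at each proficiency level.
p_correct = np.array([0.2, 0.5, 0.8])

def update(belief, correct):
    """One Bayesian update of the proficiency distribution from a scored response."""
    likelihood = p_correct if correct else 1.0 - p_correct
    posterior = belief * likelihood
    return posterior / posterior.sum()

# A hypothetical scored-response sequence: right, right, wrong, right.
belief = prior
for obs in (True, True, False, True):
    belief = update(belief, obs)

for level, p in zip(levels, belief):
    print(f"P(proficiency = {level} | responses) = {p:.3f}")

The same structure scales to richer student models by replacing the single proficiency variable with a Bayesian network and the scored responses with features extracted from process data.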

Notes

  1. See Markus and Borsboom (2013) for a comprehensive discussion of the history and alternative views of validity in educational and psychological testing.

References

  • American Educational Research Association, American Psychological Association, & National Council on Measurement in Education (AERA/APA/NCME). (2014). Standards for educational and psychological testing. American Educational Research Association.

  • Behrens, J. T., Mislevy, R. J., DiCerbo, K. E., & Levy, R. (2012). An evidence centered design for learning and assessment in the digital world. In M. C. Mayrath, J. Clarke-Midura, & D. Robinson (Eds.), Technology-based assessments for 21st century skills: Theoretical and practical implications from modern research (pp. 13–54). Information Age.

  • Brennan, R. L. (2001). An essay on the history and future of reliability from the perspective of replications. Journal of Educational Measurement, 38, 295–317.

  • Conati, C., Gertner, A., & VanLehn, K. (2002). Using Bayesian networks to manage uncertainty in student modeling. User Modeling and User-Adapted Interaction, 12(4), 371–417.

  • Cronbach, L. J. (1988). Five perspectives on validity argument. In H. Wainer (Ed.), Test validity (pp. 3–17). Erlbaum.

  • Cronbach, L. J., Gleser, G. C., Nanda, H., & Rajaratnam, N. (1972). The dependability of behavioral measurements: Theory of generalizability for scores and profiles. Wiley.

  • Desmarais, M. C., & Baker, R. S. (2012). A review of recent advances in learner and skill modeling in intelligent learning environments. User Modeling and User-Adapted Interaction, 22, 9–38.

  • Ercikan, K. A., & Pellegrino, J. W. (Eds.). (2017). Validation of score meaning in the next generation of assessments. National Council on Measurement in Education.

  • Ericsson, K. A., Hoffman, R. R., Kozbelt, A., & Williams, A. M. (Eds.). (2018). The Cambridge handbook of expertise and expert performance. Cambridge University Press.

  • Greeno, J. G. (1998). The situativity of knowing, learning, and research. American Psychologist, 53(1), 5–26.

  • Greeno, J. G., Collins, A. M., & Resnick, L. B. (1997). Cognition and learning. In D. Berliner & R. Calfee (Eds.), Handbook of educational psychology (pp. 15–47). Simon & Schuster Macmillan.

  • Gumperz, J. (1982). Language and social identity. Cambridge University Press.

  • Haertel, E. H. (2006). Reliability. In R. L. Brennan (Ed.), Educational measurement (4th ed., pp. 65–110). ACE/Praeger.

  • Holland, J. H. (2006). Studying complex adaptive systems. Journal of Systems Science and Complexity, 19, 1–8.

  • Kane, M. T. (1992). An argument-based approach to validation. Psychological Bulletin, 112, 527–535.

  • Kelley, T. L. (1927). Interpretation of educational measurements. Macmillan.

  • Khan, S. M. (2017). Multimodal behavioral analytics in intelligent learning and assessment systems. In A. A. von Davier, M. Zhu, & P. C. Kyllonen (Eds.), Innovative assessment of collaboration (pp. 173–184). Springer.

  • Kintsch, W. (1998). Comprehension: A paradigm for cognition. Cambridge University Press.

  • Markus, K. A., & Borsboom, D. (2013). Frontiers of test validity theory: Measurement, causation, and meaning. Routledge.

  • Messick, S. (1989). Validity. In R. L. Linn (Ed.), Educational measurement (3rd ed., pp. 13–103). American Council on Education/Macmillan.

  • Messick, S. (1994). The interplay of evidence and consequences in the validation of performance assessments. Educational Researcher, 23(2), 13–23.

  • Mislevy, R. J. (2018). Sociocognitive foundations of educational measurement. Routledge.

  • Mislevy, R. J., Behrens, J. T., DiCerbo, K., & Levy, R. (2012). Design and discovery in educational assessment: Evidence centered design, psychometrics, and data mining. Journal of Educational Data Mining, 4, 11–48.

  • Mislevy, R. J., Haertel, G., Cheng, B. H., Ructtinger, L., DeBarger, A., Murray, E., Rose, D., Gravel, J., Colker, M., Rutstein, D., & Vendlinski, T. (2013). A “conditional” sense of fairness in assessment. Educational Research and Evaluation, 19, 121–140.

  • Mislevy, R. J., Steinberg, L. S., & Almond, R. G. (2003). On the structure of educational assessments. Measurement: Interdisciplinary Research and Perspectives, 1, 3–67.

  • NGSS Lead States. (2013). The next generation science standards. Retrieved from https://www.nextgenscience.org/next-generation-science-standards

  • Ryans, D. G., & Frederiksen, N. (1951). Performance tests of educational achievement. In E. F. Lindquist (Ed.), Educational measurement (pp. 455–494). American Council on Education.

  • Schum, D. A. (1994). The evidential foundations of probabilistic reasoning. Wiley.

  • Sperber, D. (1996). Explaining culture: A naturalistic approach. Blackwell.

  • von Davier, A. A. (2017). Computational psychometrics in support of collaborative educational assessments. Journal of Educational Measurement, 54(1), 3–11.

  • Wertsch, J. (1994). The primacy of mediated action in sociocultural studies. Mind, Culture, and Activity, 1, 202–208.

  • Zumbo, B. D., & Hubley, A. M. (Eds.). (2017). Understanding and investigating response processes in validation research. Springer.

Author information

Correspondence to Robert J. Mislevy.

Copyright information

© 2021 Springer Nature Switzerland AG

About this chapter

Cite this chapter

Mislevy, R.J. (2021). Next Generation Learning and Assessment: What, Why and How. In: von Davier, A.A., Mislevy, R.J., Hao, J. (eds) Computational Psychometrics: New Methodologies for a New Generation of Digital Learning and Assessment. Methodology of Educational Measurement and Assessment. Springer, Cham. https://doi.org/10.1007/978-3-030-74394-9_2

  • DOI: https://doi.org/10.1007/978-3-030-74394-9_2

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-74393-2

  • Online ISBN: 978-3-030-74394-9

  • eBook Packages: Education, Education (R0)
