Creating Content for Educational Testing Using a Workflow That Supports Automatic Item Generation

  • Conference paper
EAI International Conference on Technology, Innovation, Entrepreneurship and Education (TIE 2017)

Part of the book series: Lecture Notes in Electrical Engineering (LNEE, volume 532)


Abstract

Automatic item generation (AIG) is a rapidly evolving research area in which cognitive theories, computer technologies, and psychometric practices are combined to create models that produce test items with the aid of computer technology. The purpose of our study is to describe the workflow of a strategic partnership between researchers at the University of Alberta and content specialists at the testing company ACT Inc. In this workflow, technical expertise in automated item and content generation was combined with item-development and subject-matter expertise to produce large numbers of high-quality, content-specific test items. The methods and processes described in our study will also be used to help transform item and passage development at ACT Inc. from what is currently a manual, labor-intensive, non-scalable process into a specification-driven, automated, highly scalable process.
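As a rough illustration only (this sketch is not taken from the paper, and the stem, slot names, and distractor rules are all hypothetical), model-based item generation is often described as instantiating a stem template over constrained sets of element values, so that one item model yields many items:

```python
import itertools

# Hypothetical item model: a stem template with variable slots. Each
# combination of element values produces a distinct generated item.
STEM = "A train travels {speed} km/h for {hours} hours. How far does it go?"

# Allowed values for each slot (the "elements" of the item model).
ELEMENTS = {
    "speed": [60, 80, 100],
    "hours": [2, 3, 4],
}

def generate_items():
    """Instantiate the template for every combination of element values."""
    items = []
    for speed, hours in itertools.product(ELEMENTS["speed"], ELEMENTS["hours"]):
        key = speed * hours  # correct answer for this instantiation
        # Illustrative rule-based distractors modeled on common errors
        # (adding instead of multiplying, mixing up operands).
        distractors = sorted({speed + hours, key + speed, key - hours})
        items.append({
            "stem": STEM.format(speed=speed, hours=hours),
            "key": key,
            "distractors": distractors,
        })
    return items

items = generate_items()
print(len(items))  # 3 speeds x 3 durations = 9 generated items
```

The point of the sketch is the scalability argument the abstract makes: once the model (template, elements, answer rule) is specified, item production is a mechanical enumeration rather than a per-item manual effort.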


Notes

  1.

    To date, Drs. Gierl and Lai have received over $5.5 million in external grants for their AIG research program. They have published 1 book, 12 book chapters, and 19 refereed articles as well as delivered 29 invited presentations and 53 conference presentations on the topic of AIG. Drs. Gierl and Lai received the National Council on Measurement in Education Annual Award in recognition of their AIG research program in 2016.


Author information

Correspondence to Mark J. Gierl.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Gierl, M.J., Matovinovic, D., Lai, H. (2019). Creating Content for Educational Testing Using a Workflow That Supports Automatic Item Generation. In: Reyes-Munoz, A., Zheng, P., Crawford, D., Callaghan, V. (eds) EAI International Conference on Technology, Innovation, Entrepreneurship and Education. TIE 2017. Lecture Notes in Electrical Engineering, vol 532. Springer, Cham. https://doi.org/10.1007/978-3-030-02242-6_3

  • DOI: https://doi.org/10.1007/978-3-030-02242-6_3

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-02241-9

  • Online ISBN: 978-3-030-02242-6

  • eBook Packages: Engineering (R0)
