Abstract
Computational thinking (CT) is regarded as a valuable skill set for students in the 21st century, fostering problem-solving skills applicable to academic disciplines and everyday problems. Assessing CT involves evaluating the development of its concepts, practices, and perspectives. However, establishing comprehensive, validated assessments across different educational levels remains challenging. The Beginners Computational Thinking Test (BCTt) is a validated tool for assessing CT concepts among primary school students, especially in the early grades (ages 5 to 10). This paper describes the translation, cultural adaptation, and psychometric validation of the BCTt for use with Greek students. The translation process involved both forward and backward translation, while the validity assessment included content and construct validity. The psychometric properties of the adapted scale were also evaluated using the Item Difficulty Index, the Item Discrimination Index, internal consistency, and test-retest reliability. The results indicated that the Greek version of the BCTt is a reliable and valid tool for assessing CT skills among students in the three lower grades of primary school, with greater suitability for the two lowest grades. Finally, our findings contribute to improving existing assessment tools tailored to primary school students while guiding future refinement efforts to enhance overall psychometric quality.
Data availability
The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.
Notes
For more details and access to the Greek version of the BCTt scale, please refer to the designated repository: https://vourletsis.users.uth.gr/.
Acknowledgements
Ioannis Vourletsis is grateful for the opportunity to conduct this research as part of his postdoctoral studies at the Pedagogical Department of Primary Education, School of Humanities and Social Sciences, University of Thessaly.
Funding
No funding was received for conducting this study.
Author information
Contributions
Author 1: Conceptualization, Methodology, Validation, Formal analysis, Investigation, Resources, Data Curation, Writing - Original Draft, Writing - Review & Editing. Author 2: Supervision, Project administration, Methodology, Resources, Writing - Review & Editing.
Ethics declarations
Ethics approval
The research adhered to ethical principles and guidelines, ensuring the protection of participants’ privacy, confidentiality, and dignity throughout all stages of data collection and analysis.
Consent to participate
Participants were provided with comprehensive study details, assured of confidentiality, and informed of their right to withdraw without consequences.
Competing interests
The authors have no relevant financial or non-financial interests to disclose.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Vourletsis, I., Politis, P. Greek translation, cultural adaptation, and psychometric validation of beginners computational thinking test (BCTt). Educ Inf Technol (2024). https://doi.org/10.1007/s10639-024-12887-6