Abstract
Recent research has touted the benefits of learner-centered instruction, problem-based learning, and a focus on complex learning. Instructors often struggle to put these goals into practice and to measure the effectiveness of these new teaching strategies in terms of mastery of course objectives. Enter the course evaluation: often a standardized tool that yields little practical information for an instructor, yet one that is nonetheless used in making high-level career decisions, such as tenure and monetary awards to faculty. The present researchers have developed a new instrument to measure teaching and learning quality (TALQ). In the current study of 464 students in 12 courses, if students agreed that their instructors used First Principles of Instruction and also agreed that they experienced academic learning time (ALT), then students were about 5 times more likely to achieve high levels of mastery of course objectives and 26 times less likely to achieve low levels of mastery, according to independent instructor assessments. TALQ can measure improvements in the use of First Principles in teaching and course design. Feedback from this instrument can assist teachers who wish to implement the recommendation made by Kuh et al. (2007) that universities and colleges should focus their assessment efforts on factors that influence student success.
Notes
Krathwohl (2002) explained that a taxonomy of educational objectives was never produced for the psycho-motor domain. Perhaps this is a telling point. As Maccia (1987), Frick (1997), Greenspan and Benderly (1997), and Estep (2003, 2006) have argued, the mind-body distinction (i.e., cognitive vs. psycho-motor vs. affective) is fallacious. For example, try driving an automobile on a highway without being cognitively aware of one's surroundings and making adjustments accordingly. Failing to be immediately aware will threaten one's prospects for survival. This is not a rote motor skill. Driving an automobile is an example of know-how.
References
American Institutes for Research. (2006, January 19). New study of the literacy of college students finds some are graduating with only basic skills. Retrieved January 20, 2007, from http://www.air.org/news/documents/Release200601pew.htm.
Baer, J., Cook, A., & Baldi, S. (2006, January). The literacy of America’s college students. American Institutes for Research. Retrieved January 20, 2007, from http://www.air.org/news/documents/The%20Literacy%20of%20Americas%20College%20Students_final%20report.pdf.
Berliner, D. (1990). What’s all the fuss about instructional time? In M. Ben-Peretz & R. Bromme (Eds.), The nature of time in schools: Theoretical concepts, practitioner perceptions. New York: Teachers College Press.
Brown, B., & Saks, D. (1986). Measuring the effects of instructional time on student learning: Evidence from the beginning teacher evaluation study. American Journal of Education, 94(4), 480–500. doi:10.1086/443863.
Cohen, P. (1981). Student ratings of instruction and student achievement. A meta-analysis of multisection validity studies. Review of Educational Research, 51(3), 281–309.
Estep, M. (2003). A theory of immediate awareness: Self-organization and adaptation in natural intelligence. Boston: Kluwer Academic Publishers.
Estep, M. (2006). Self-organizing natural intelligence: Issues of knowing, meaning and complexity. Dordrecht, The Netherlands: Springer.
Feldman, K. A. (1989). The association between student ratings of specific instructional dimensions and student achievement: Refining and extending the synthesis of data from multisection validity studies. Research in Higher Education, 30, 583–645. doi:10.1007/BF00992392.
Fisher, C., Filby, N., Marliave, R., Cohen, L., Dishaw, M., Moore, J., et al. (1978). Teaching behaviors: Academic learning time and student achievement: Final report of phase III-B, beginning teacher evaluation study. San Francisco: Far West Laboratory for Educational Research and Development.
Frick, T. (1990). Analysis of patterns in time (APT): A method of recording and quantifying temporal relations in education. American Educational Research Journal, 27(1), 180–204.
Frick, T. (1997). Artificial tutoring systems: What computers can and can’t know. Journal of Educational Computing Research, 16(2), 107–124. doi:10.2190/4CWM-6JF2-T2DN-QG8L.
Frick, T. W., Chadha, R., Watson, C., Wang, Y., & Green, P. (2008a). College student perceptions of teaching and learning quality. Educational Technology Research and Development (in press).
Frick, T. W., Chadha, R., Watson, C., Wang, Y., & Green, P. (2008b). Theory-based course evaluation: Implications for improving student success in postsecondary education. Paper presented at the American Educational Research Association conference, New York.
Greenspan, S., & Benderly, B. (1997). The growth of the mind and the endangered origins of intelligence. Reading, MA: Addison-Wesley.
Keller, J. M. (1987). The systematic process of motivational design. Performance & Instruction, 26(9), 1–8. doi:10.1002/pfi.4160260902.
Kirkpatrick, D. (1994). Evaluating training programs: The four levels. San Francisco, CA: Berrett-Koehler.
Krathwohl, D. R. (2002). A revision of Bloom’s taxonomy: An overview. Theory into Practice, 41(4), 212–218. doi:10.1207/s15430421tip4104_2.
Kuh, G., Kinzie, J., Buckley, J., Bridges, B., & Hayek, J. (2007). Piecing together the student success puzzle: Research, propositions, and recommendations. ASHE Higher Education Report, 32(5). San Francisco: Jossey-Bass.
Kulik, J. A. (2001). Student ratings: Validity, utility and controversy. New Directions for Institutional Research, 109, 9–25. doi:10.1002/ir.1.
Maccia, G. S. (1987). Genetic epistemology of intelligent natural systems. Systems Research, 4(1), 213–281.
Merrill, M. D. (2002). First principles of instruction. Educational Technology Research and Development, 50(3), 43–59. doi:10.1007/BF02505024.
Merrill, M. D. (2008). What makes e³ (effective, efficient, engaging) instruction? Keynote address at the AECT Research Symposium, Bloomington, IN.
Merrill, M. D., Barclay, M., & van Schaak, A. (2008). Prescriptive principles for instructional design. In J. M. Spector, M. D. Merrill, J. van Merriënboer, & M. F. Driscoll (Eds.), Handbook of research on educational communications and technology (3rd ed., pp. 173–184). New York: Lawrence Erlbaum Associates.
Rangel, E., & Berliner, D. (2007). Essential information for education policy: Time to learn. Research Points: American Educational Research Association, 5(2), 1–4.
Sperber, M. (2001). Beer and circus: How big-time college sports is crippling undergraduate education. New York: Henry Holt & Co.
Tabachnick, B. G., & Fidell, L. S. (2001). Using multivariate statistics (4th ed.). Boston, MA: Allyn and Bacon.
van Merriënboer, J. J. G., Clark, R. E., & de Croock, M. B. M. (2002). Blueprints for complex learning: The 4C/ID model. Educational Technology Research and Development, 50(2), 39–64. doi:10.1007/BF02504993.
van Merriënboer, J. J. G., & Kirschner, P. A. (2007). Ten steps to complex learning: A systematic approach to four-component instructional design. Hillsdale, NJ: Lawrence Erlbaum Associates.
Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Cambridge, MA: Harvard University Press.
Yazzie-Mintz, E. (2007). Voices of students on engagement: A report on the 2006 high school survey of student engagement. Retrieved January 8, 2008, from http://ceep.indiana.edu/hssse/pdf/HSSSE_2006_Report.pdf.
Frick, T.W., Chadha, R., Watson, C. et al. Improving course evaluations to improve instruction and complex learning in higher education. Education Tech Research Dev 58, 115–136 (2010). https://doi.org/10.1007/s11423-009-9131-z