
College student perceptions of teaching and learning quality

  • Development Article
  • Published in: Educational Technology Research and Development

Abstract

Numerous instructional design models have been proposed over the past several decades. Instead of focusing on the design process (means), this study investigated how learners perceived the quality of instruction they experienced (ends). An electronic survey instrument containing nine a priori scales was developed. Students (n = 140) from 89 different undergraduate and graduate courses at multiple institutions responded. Data analysis indicated strong correlations among student self-reports on academic learning time, how much they learned, First Principles of Instruction, their satisfaction with the course, perceptions of their mastery of course objectives, and global course ratings. Most importantly, these scales measure principles with which instructional developers and teachers can evaluate their products and courses, regardless of the design processes used: provide authentic tasks for students to do; activate prior learning; demonstrate what is to be learned; provide repeated opportunities for students to successfully complete authentic tasks with coaching and feedback; and help students integrate what they have learned into their personal lives.
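As a concrete illustration of the kind of scale-level correlational analysis summarized above, the sketch below computes a Pearson correlation matrix over per-student scale scores. It is illustrative only and is not the authors' analysis code; the file name and column names are hypothetical.

    import pandas as pd

    # Hypothetical per-student TALQ scale scores: one row per student, one column per scale.
    scores = pd.read_csv("talq_scale_scores.csv")

    scales = [
        "academic_learning_time", "learning_progress", "satisfaction",
        "global_rating", "authentic_problems", "activation",
        "demonstration", "application", "integration",
    ]

    # Pearson correlations among the nine scales.
    corr = scores[scales].corr(method="pearson")
    print(corr.round(2))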

Notes

  1. It should also be noted that Kirkpatrick’s Level 3 is very similar to Merrill’s Principle 5 (integration). We did not attempt to measure Kirkpatrick’s Levels 3 and 4.

  2. It should be noted that a minimum of two items is needed to estimate the internal consistency of a scale (Cronbach’s α) from a single administration of the instrument; a computational sketch follows these notes.

  3. 2 = great, 1 = average, 0 = awful

  4. 4 = A, 3 = B, 2 = C, 1 = D, 0 = F

  5. 2 = master, 1 = partial master, 0 = nonmaster

  6. 5 = graduate, 4 = senior, 3 = junior, 2 = sophomore, 1 = freshman, 0 = other

  7. Frick (1990) used Analysis of Patterns in Time (APT). MAPSAT is a more comprehensive methodology that includes mapping and analysis of temporal patterns and structure of systems relations, previously called APT&C in Frick et al. (2006).

  8. Items marked (−) are negatively worded; their rating scores are therefore reversed for analysis, as illustrated in the sketch below. Each item is rated on a five-point Likert scale (strongly disagree, disagree, undecided, agree, strongly agree).
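The following sketch illustrates the two analysis conventions mentioned in Notes 2 and 8: reverse scoring of negatively worded items and Cronbach's α as the index of a scale's internal consistency. It is a minimal sketch, not the study's code; the file name, column names, and the 1–5 coding of the Likert ratings are assumptions made for illustration.

    import pandas as pd

    def reverse_score(item: pd.Series, low: int = 1, high: int = 5) -> pd.Series:
        """Reverse a negatively worded Likert item (Note 8): 1 <-> 5, 2 <-> 4, etc."""
        return (low + high) - item

    def cronbach_alpha(items: pd.DataFrame) -> float:
        """Cronbach's alpha for a scale built from k >= 2 items (Note 2)."""
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1)      # variance of each item
        total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale score
        return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

    # Hypothetical item-level responses: one row per student, Likert codes 1-5.
    responses = pd.read_csv("talq_item_responses.csv")

    # Learning Progress scale: reverse the two negatively worded items first.
    lp_items = ["lp1", "lp2", "lp3", "lp4_neg", "lp5_neg"]   # hypothetical column names
    for neg in ["lp4_neg", "lp5_neg"]:
        responses[neg] = reverse_score(responses[neg])

    print(f"Learning Progress alpha = {cronbach_alpha(responses[lp_items]):.2f}")

With negatively worded items reversed, a per-student scale score (for example, the mean of that scale's items) can then be correlated across scales, as in the sketch following the Abstract.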

References

  • Abrami, P. (2001). Improving judgments about teaching effectiveness using teacher rating forms. New Directions for Institutional Research, 109, 59–87.

  • Abrami, P., d’Apollonia, S., & Cohen, P. (1990). Validity of student ratings of instruction: What we know and what we do not. Journal of Educational Psychology, 82(2), 219–231.

  • Andrews, D. H., & Goodson, L. A. (1991). A comparative analysis of models of instructional design. In G. J. Anglin (Ed.), Instructional technology: Past, present and future (pp. 133–155). Englewood Cliffs: Libraries Unlimited.

  • Arthur, J., Tubré, T., Paul, D., & Edens, P. (2003). Teaching effectiveness: The relationship between reaction and learning evaluation criteria. Educational Psychology, 23(3), 275–285.

  • Baer, J., Cook, A., & Baldi, S. (2006). The literacy of America’s college students. American Institutes for Research. Retrieved January 20, 2007: http://www.air.org/news/documents/The%20Literacy%20of%20Americas%20College%20Students_final%20report.pdf.

  • Berliner, D. (1991). What’s all the fuss about instructional time? In M. Ben-Peretz & R. Bromme (Eds.), The nature of time in schools: Theoretical concepts, practitioner perceptions. New York: Teachers College Press.

  • Brown, B., & Saks, D. (1986). Measuring the effects of instructional time on student learning: Evidence from the Beginning Teacher Evaluation study. American Journal of Education, 94(4), 480–500.

  • Clayson, D., Frost, T., & Sheffet, M. (2006). Grades and the student evaluation of instruction: A test of the reciprocity effect. Academy of Management Learning and Education, 5(1), 52–65.

  • Cohen, P. (1981). Student ratings of instruction and student achievement: A meta-analysis of multisection validity studies. Review of Educational Research, 51(3), 281–309.

  • Emery, C., Kramer, T., & Tian, R. (2003). Return to academic standards: A critique of student evaluations of teaching effectiveness. Quality Assurance in Education, 11(1), 37–46.

  • Estep, M. (2003). A theory of immediate awareness: Self-organization and adaptation in natural intelligence. New York: Springer-Verlag.

  • Feldman, K. (1989). The association between student ratings of specific instructional dimensions and student achievement: Refining and extending the synthesis of data from multisection validity studies. Research in Higher Education, 30, 583–645.

  • Fisher, C., Filby, N., Marliave, R., Cohen, L., Dishaw, M., Moore, J., & Berliner, D. (1978). Teaching behaviors: Academic Learning Time and student achievement: Final report of Phase III-B, Beginning Teacher Evaluation Study. San Francisco: Far West Laboratory for Educational Research and Development.

  • Frick, T. (1990). Analysis of patterns in time (APT): A method of recording and quantifying temporal relations in education. American Educational Research Journal, 27(1), 180–204.

  • Frick, T. (1997). Artificial tutoring systems: What computers can and can’t know. Journal of Educational Computing Research, 16(2), 107–124.

  • Frick, T. (2005). Bridging qualitative and quantitative methods in educational research: Analysis of patterns in time and configuration (APT&C). Proffitt Grant Proposal. Retrieved March 4, 2007: http://education.indiana.edu/~frick/proposals/apt&c.pdf.

  • Frick, T., An, J., & Koh, J. (2006). Patterns in education: Linking theory to practice. In M. Simonson (Ed.), Proceedings of the Association for Educational Communications and Technology, Dallas, TX. Retrieved March 4, 2007: http://education.indiana.edu/~frick/aect2006/patterns.pdf.

  • Glenn, D. (2007). Method of using student evaluations to assess professors is flawed but fixable, 2 scholars say. Chronicle of Higher Education Daily. Retrieved May 29, 2007 from http://chronicle.com/daily/2007/05/2007052901n.htm.

  • Gustafson, K., & Branch, R. (2002). Survey of instructional development models (4th ed.). Syracuse: Syracuse University, ERIC Clearinghouse on Information Resources.

  • Kirk, R. (1995). Experimental design: Procedures for the behavioral sciences (3rd ed.). Pacific Grove: Brooks/Cole.

  • Kirkpatrick, D. (1994). Evaluating training programs: The four levels. San Francisco: Berrett-Koehler.

  • Koon, J., & Murray, H. (1995). Using multiple outcomes to validate student ratings of overall teacher effectiveness. The Journal of Higher Education, 66(1), 61–81.

  • Kuh, G., Kinzie, J., Buckley, J., & Hayek, J. (2006, July). What matters to student success: A review of the literature (Executive summary). Commissioned report for the National Symposium on Postsecondary Student Success. Retrieved January 20, 2007: http://nces.ed.gov/npec/pdf/Kuh_Team_ExecSumm.pdf.

  • Kulik, J. (2001). Student ratings: Validity, utility and controversy. New Directions for Institutional Research, 109, 9–25.

  • Maccia, G. S. (1987). Genetic epistemology of intelligent natural systems. Systems Research, 4(1), 213–218.

  • Marsh, H. (1984). Students’ evaluations of university teaching: Dimensionality, reliability, validity, potential biases, and utility. Journal of Educational Psychology, 76(5), 707–754.

  • Merrill, M. D. (2002). First principles of instruction. Educational Technology Research & Development, 50(3), 43–59.

  • Merrill, M. D. (2007). A task-centered instructional strategy. Journal of Research on Technology in Education, 40(1), 33–50.

  • Reigeluth, C. M. (1983). Instructional-design theories and models: An overview of their current status. Mahwah: Lawrence Erlbaum.

  • Reigeluth, C. M. (1999). Instructional-design theories and models: A new paradigm of instructional theory (Vol. II). Hillsdale: Lawrence Erlbaum.

  • Reigeluth, C. M., & Stein, F. S. (1983). The elaboration theory of instruction. In C. M. Reigeluth (Ed.), Instructional-design theories and models: An overview of their current status (pp. 335–382). Mahwah: Lawrence Erlbaum.

  • Renaud, R., & Murray, H. (2004). Factorial validity of student ratings of instruction. Research in Higher Education, 46(8), 929–953.

  • Squires, D., Huitt, W., & Segars, J. (1983). Effective schools and classrooms: A research-based perspective. Alexandria, VA: Association for Supervision and Curriculum Development.

  • Visscher-Voerman, I., & Gustafson, K. (2004). Paradigms in the theory and practice of education and training design. Educational Technology Research & Development, 52(2), 69–89.


Author information

Corresponding author

Correspondence to Theodore W. Frick.

Appendix A

The nine TALQ scales: teaching and learning quality

  1. Academic Learning Time (ALT) Scale: Cronbach α = 0.85

    • I frequently did very good work on projects, assignments, problems and/or learning activities for this course.

    • I spent a lot of time doing tasks, projects and/or assignments, and my instructor judged my work as high quality.

    • I put a great deal of effort and time into this course, and it has paid off—I believe that I have done very well overall.

  2. Learning Progress Scale (Kirkpatrick, Level 2): Cronbach α = 0.97

    • Compared to what I knew before I took this course, I learned a lot.

    • I learned a lot in this course.

    • Looking back to when this course began, I have made a big improvement in my skills and knowledge in this subject.

    • I learned very little in this course. (−) (see Note 8)

    • I did not learn much as a result of taking this course. (−)

  3. Student Satisfaction Scale (Kirkpatrick, Level 1): Cronbach α = 0.94

    • I am dissatisfied with this course. (−)

    • This course was a waste of time and money. (−)

    • I am very satisfied with this course.

  4. BEST Scale (Global IU course evaluation items): Cronbach α = 0.92

    • Overall, I would rate the quality of this course as outstanding.

    • Overall, I would rate this instructor as outstanding.

    • Overall, I would recommend this instructor to others.

  5. Authentic Problems Scale (Merrill, Principle 1): Cronbach α = 0.81

    • I performed a series of increasingly complex authentic tasks in this course.

    • I solved authentic problems or completed authentic tasks in this course.

    • In this course I solved a variety of authentic problems that were organized from simple to complex.

    • Assignments, tasks, or problems I did in this course are clearly relevant to my professional goals or field of work.

  6. Activation Scale (Merrill, Principle 2): Cronbach α = 0.91

    • I engaged in experiences that subsequently helped me learn ideas or skills that were new and unfamiliar to me.

    • In this course I was able to recall, describe or apply my past experience so that I could connect it to what I was expected to learn.

    • My instructor provided a learning structure that helped me to mentally organize new knowledge and skills.

    • In this course I was able to connect my past experience to new ideas and skills I was learning.

    • In this course I was not able to draw upon my past experience nor relate it to new things I was learning. (−)

  7. Demonstration Scale (Merrill, Principle 3): Cronbach α = 0.88

    • My instructor demonstrated skills I was expected to learn in this course.

    • My instructor gave examples and counter-examples of concepts that I was expected to learn.

    • My instructor did not demonstrate skills I was expected to learn. (−)

    • My instructor provided alternative ways of understanding the same ideas or skills.

  8. Application Scale (Merrill, Principle 4): Cronbach α = 0.74

    • My instructor detected and corrected errors I was making when solving problems, doing learning tasks or completing assignments.

    • My instructor gradually reduced coaching or feedback as my learning or performance improved during this course.

    • I had opportunities to practice or try out what I learned in this course.

    • My course instructor gave me personal feedback or appropriate coaching on what I was trying to learn.

  9. Integration Scale (Merrill, Principle 5): Cronbach α = 0.81

    • I had opportunities in this course to explore how I could personally use what I have learned.

    • I see how I can apply what I learned in this course to real life situations.

    • I was able to publicly demonstrate to others what I learned in this course.

    • In this course I was able to reflect on, discuss with others, and defend what I learned.

    • I do not expect to apply what I learned in this course to my chosen profession or field of work. (−)
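One way to make the appendix usable for automated scoring is to pair each item's text with a flag marking the negatively worded (−) items (Note 8). The structure below is a sketch under that assumption, not part of the published instrument; only two of the nine scales are spelled out.

    # Sketch: encoding TALQ scales as (item text, reverse-scored?) pairs.
    TALQ_SCALES = {
        "Student Satisfaction (Kirkpatrick, Level 1)": [
            ("I am dissatisfied with this course.", True),          # (-) reverse-scored
            ("This course was a waste of time and money.", True),   # (-) reverse-scored
            ("I am very satisfied with this course.", False),
        ],
        "BEST (Global IU course evaluation items)": [
            ("Overall, I would rate the quality of this course as outstanding.", False),
            ("Overall, I would rate this instructor as outstanding.", False),
            ("Overall, I would recommend this instructor to others.", False),
        ],
    }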

Cite this article

Frick, T.W., Chadha, R., Watson, C. et al. College student perceptions of teaching and learning quality. Education Tech Research Dev 57, 705–720 (2009). https://doi.org/10.1007/s11423-007-9079-9
