Abstract
In this chapter, seven different approaches to improving student learning through the use of data are identified: Data Driven Decision Making (DDDM); Educational Accountability; School Improvement; School Effectiveness; Program Evaluation; Teacher Effectiveness; and Formative Assessment (or Assessment for Learning). Their characteristics, assumptions, effectiveness and consequences are discussed.
Notes
- 1.
No Child Left Behind Act of 2001, P.L. 107-110, 20 U.S.C. § 6319 (2002). See Jorgensen and Hoffman (2003) for an explanation of its history.
- 2.
Strunk and McEachin (2014) found greater gains in mathematics achievement than in English language achievement, and point to this as a common finding worth further research.
- 3.
- 4.
The list of factors is: 1. achievement orientation; 2. educational leadership; 3. staff consensus and cohesion; 4. curriculum quality; 5. school climate; 6. evaluative potential; 7. parental involvement; 8. classroom climate; 9. effective learning time; 10. structured instruction; 11. independent learning; 12. adaptive instruction; 13. feedback/reinforcement. (Scheerens & Bosker, 1997, p. 100)
- 5.
The difference between top and bottom teachers on this measure is about 0.6 of a standard deviation. This is difficult to compare with the earlier ‘years of growth’ measure but in both cases the difference is substantial.
- 6.
Chetty, Friedman, and Rockoff (2011/2012) claimed to show that the effects of high-quality teaching are both substantial and long-term, and may even extend to future earnings, but Adler (2013) has shown that these conclusions are methodologically unsound and, in fact, ‘contradicted by the findings of the study itself’ (p. 7).
- 7.
Wiliam (2010a) provides a short, well-presented overview of the importance and benefits of focusing on improving teacher quality in situ, and of the limited payoffs from other approaches.
- 8.
An overview of these issues can be found in Education Policy Analysis Archives, Volume 21, 2013.
- 9.
Rowan et al. (2002) conclude that the combined effects of many small events produce learning outcomes, that few classrooms would offer optimum learning conditions, and that ‘the majority of classrooms probably present students with a mix of more and [emphasis in original] less instructionally effective practices simultaneously’ (p. 22).
- 10.
The results are not easily interpreted and classroom composition also played a role.
- 11.
‘Predicted’ here is a technical term related to the regression equation linking the composite measure and achievement gain. This is not a forward prediction in time since all the measures already exist. Whether such a ‘prediction’ would be sustained in other circumstances is a question of the external validity (or generalisability) of these findings.
- 12.
These were: ‘classroom management practices; knowledge and understanding of teachers’ main field; knowledge and understanding of instructional practices in their main subject field; a development or training plan to improve their teaching; teaching students with special needs; handling of student discipline and behaviour problems; teaching students in a multicultural setting; and the emphasis on improving student test scores’ (OECD, 2009, p. 159).
- 13.
A similar definition is provided by Moss and Brookhart (2019, p. 6): ‘Formative assessment is an active and intentional learning process that partners the teacher and the students to continuously and systematically gather evidence of learning with the express goal of improving student achievement.’
- 14.
There are some potential benefits from this unpredictability. ‘[The] various sources of indeterminacy introduce scope for creative invention and change. Learners are not simply passive recipients of information but are able to negotiate shared meaning, thus allowing the process of learning to be one of active transformation and appropriation of knowledge.’ (Ireson, 2008, p. 143)
- 15.
See Cizek (2009) for an extended discussion.
- 16.
In the US, many commercial testing companies market benchmark or interim assessments as Formative Assessment Item Banks.
- 17.
Perie et al. (2009) themselves make a distinction between interim and formative assessments in terms of their scope and duration, but see both as types of assessment.
- 18.
Shepard (2019) defines formative assessment as a form of assessment whose focus and characteristics differ from those of summative assessment, but also incorporates such assessments within a theory of classroom action.
- 19.
Wiliam (2017) makes the point that ‘[k]eeping a clear focus on assessment for learning (or formative assessment) as assessment allows much clearer analysis of the relationship between the evidence and the inferences and actions that the evidence supports’ (p. 684).
- 20.
Penuel and Shepard (2016) point out that so-called ‘interim assessments’ in the USA can be formative or summative depending on how they are used, but also warn that such assessments ‘may lack the close connection to instruction of more embedded and immediate types of formative assessment’ (p. 787), especially those assessments that are intrinsically located in the process of teaching.
- 21.
Penuel and Shepard (2016) and Shepard (2019) construct ‘clusters of interventions’ for developing formative assessment practices, based on different theories of learning and theories of action. Their categorisation identifies so-called interim assessments as a form of DBDM, though (as discussed earlier in this chapter) this is a management concept rather than a typical instance of formative assessment. The other clusters are ‘strategy-focused’, ‘sociocognitive’ and ‘sociocultural’, a categorisation similar to that adopted by Stobart and Hopfenbeck (2014).
- 22.
Three decades ago, Harry Black (1986) suggested that formative assessment is not an innovation, but has always been seen as part of appropriate teaching, except that it is generally done inadequately and unsystematically.
- 23.
Van der Kleij et al. (2015) provide an analysis of the differences between DBDM, Assessment for Learning and Diagnostic Testing in terms of their theoretical underpinnings and practical implementation. They make the useful point that whereas DBDM can typically involve decisions about student, class and school, and Assessment for Learning about student and class, Diagnostic Testing focuses on the individual student alone.
- 24.
The relationship between these two feedback loops within assessment for learning models appears to need further exploration (Wiliam, 2017).
- 25.
Cohen’s d is a commonly used measure of effect size (Cohen, 1988). It expresses the difference between two means in units of standard deviation. Typical rules of thumb are: small effect, d = .2; medium effect, d = .5; large effect, d = .8, though these thresholds are a matter of judgment.
- 26.
Popularisation of this concept is attributed to Havighurst (1952).
References
Aaronson, D., Barrow, L., & Sander, W. (2007). Teachers and student achievement in the Chicago public high schools. Journal of Labor Economics, 25(1), 95–135. https://doi.org/10.1086/508733
Ackoff, R. L. (1989). From data to wisdom. Journal of Applied Systems Analysis, 16, 3–9.
Adler, M. (2013). Findings vs. interpretation in “The long-term impacts of teachers” by Chetty et al. Education Policy Analysis Archives, 21(10), 1–10. https://doi.org/10.14507/epaa.v21n10.2013
Akyüz, G., & Berberoglu, G. (2010). Teacher and classroom characteristics and their relations to mathematics achievement of the students in the TIMSS. New Horizons in Education, 58(1), 77–95.
Allal, L. (2010). Assessment and the regulation of learning. In P. Peterson, E. Baker, & B. McGaw (Eds.), International encyclopedia of education (Vol. 3, 3rd ed., pp. 348–352). Elsevier. https://doi.org/10.1016/B978-0-08-044894-7.00362-6
Allal, L. (2016). The co-regulation of student learning in an assessment for learning culture. In D. Laveault & L. Allal (Eds.), Assessment for learning: Meeting the challenge of implementation (pp. 259–273). Springer. https://doi.org/10.1007/978-3-319-39211-0_15
Anderson, L. W., & Postlethwaite, T. N. (2007). Program evaluation: Large-scale and small-scale studies. UNESCO, International Institute for Educational Planning; International Academy of Education.
Andrade, H. L. (2009). Students as the definitive source of formative assessment: Academic self-assessment and self-regulation. In H. Andrade & G. J. Cizek (Eds.), Handbook of formative assessment (pp. 90–105). Routledge.
Andrade, H. L. (2018). Feedback in the context of self-assessment. In A. A. Lipnevich & J. K. Smith (Eds.), The Cambridge handbook of instructional feedback (pp. 376–408). Cambridge University Press. https://doi.org/10.1017/9781316832134.019
Argyris, C., & Schön, D. (1978). Organizational learning: A theory of action perspective. Addison Wesley.
Argyris, C., & Schön, D. (1996). Organizational learning II: Theory, method and practice. Addison Wesley.
Assessment Reform Group. (1999). Assessment for learning: Beyond the black box. University of Cambridge. https://www.nuffieldfoundation.org/sites/default/files/files/beyond_blackbox.pdf
Assessment Reform Group. (2002). Assessment for learning: 10 principles. University of Cambridge. http://www.hkeaa.edu.hk/DocLibrary/SBA/HKDSE/Eng_DVD/doc/Afl_principles.pdf
Bellinger, G., Castro, D., & Mills, A. (2004). Data, information, knowledge, and wisdom. Systems Thinking. http://www.systems-thinking.org/dikw/dikw.htm
Bennett, R. E. (2009). A critical look at the meaning and basis of formative assessment (Report No. RM-09-06). Educational Testing Service.
Bennett, R. E. (2011). Formative assessment: A critical review. Assessment in Education: Principles, Policy & Practice, 18(1), 5–25. https://doi.org/10.1080/0969594X.2010.513678
Betebenner, D. W., & Linn, R. L. (2010). Growth in student achievement: Issues of measurement, longitudinal data analysis, and accountability. Educational Testing Service.
Binkley, M., Erstad, O., Herman, J., Raizen, S., Ripley, M., & Rumble, M. (2012). Defining 21st century skills. In P. Griffin, B. McGaw, & E. Care (Eds.), Assessment and teaching of 21st century skills (pp. 17–66). Springer. https://doi.org/10.1007/978-94-007-2324-5_2
Birenbaum, M., DeLuca, C., Earl, L., Heritage, M., Klenowski, V., Looney, A., Smith, K., Timperley, H., Volante, L., & Wyatt-Smith, C. (2015). International trends in the implementation of assessment for learning: Implications for policy and practice. Policy Futures in Education, 13(1), 117–140. https://doi.org/10.1177/1478210314566733
Black, H. (1986). Assessment for learning. In D. L. Nuttall (Ed.), Assessing educational achievement (pp. 7–18). Falmer Press.
Black, P. (2015). Formative assessment: An optimistic but incomplete vision. Assessment in Education: Principles, Policy & Practice, 22(1), 161–177. https://doi.org/10.1080/0969594X.2014.999643
Black, P., Harrison, C., Lee, C., Marshall, B., & Wiliam, D. (2003). Assessment for learning: Putting it into practice. Open University Press.
Black, P., & Wiliam, D. (1998a). Assessment and classroom learning. Assessment in Education: Principles, Policy & Practice, 5(1), 7–74. https://doi.org/10.1080/0969595980050102
Black, P., & Wiliam, D. (1998b). Inside the black box: Raising standards through classroom assessment. Phi Delta Kappan, 80(2), 139–148.
Black, P., & Wiliam, D. (2003). In praise of educational research: Formative assessment. British Educational Research Journal, 29(5), 751–765. https://doi.org/10.1080/0141192032000133721
Black, P., & Wiliam, D. (2004). The formative purpose: Assessment must first promote learning. Yearbook of the National Society for the Study of Education, 103(2), 20–50. https://doi.org/10.1111/j.1744-7984.2004.tb00047.x
Black, P., & Wiliam, D. (2006). Developing a theory of formative assessment. In J. Gardner (Ed.), Assessment and learning (pp. 81–100). Sage.
Black, P., & Wiliam, D. (2009). Developing the theory of formative assessment. Educational Assessment, Evaluation and Accountability, 21(1), 5–31. https://doi.org/10.1007/s11092-008-9068-5
Black, P., & Wiliam, D. (2018). Classroom assessment and pedagogy. Assessment in Education: Principles, Policy & Practice, 25(6), 551–575. https://doi.org/10.1080/0969594X.2018.1441807
Bloom, B. S., Hastings, J. T., & Madaus, G. F. (Eds.). (1971). Handbook of formative and summative evaluation of student learning. McGraw-Hill.
Briggs, D. C., Ruiz-Primo, M. A., Furtak, E., Shepard, L., & Yin, Y. (2012). Meta-analytic methodology and inferences about the efficacy of formative assessment. Educational Measurement: Issues and Practice, 31, 13–17. https://doi.org/10.1111/j.1745-3992.2012.00251.x
Brophy, J. (2002). Teaching (Educational Practices Series 1). UNESCO, International Academy of Education, International Bureau of Education. http://www.ibe.unesco.org/en/document/teaching-educational-practices-1
Brown, K. M., Benkovitz, J., Muttilo, A. J., & Urban, T. (2011). Leading schools of excellence and equity: Documenting effective strategies in closing achievement gaps. Teachers College Record, 113(1), 57–96.
Brown, G. T. L., & Harris, L. R. (2013). Student self-assessment. In J. McMillan (Ed.), SAGE handbook of research on classroom assessment (pp. 367–393). Sage. https://doi.org/10.4135/9781452218649.n21
Brynjolfsson, E., Hitt, L. M., & Kim, H. H. (2011). Strength in numbers: How does data-driven decision making affect firm performance? Social Science Research Network. https://doi.org/10.2139/ssrn.1819486
Calfee, R., Wilson, K. M., Flannery, B., & Kapinus, B. (2014). Formative assessment for the common core literacy standards. Teachers College Record, 116(11), 1–32.
Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research. Rand McNally.
Cantrell, S., & Kane, T. J. (2013). Ensuring fair and reliable measures of effective teaching: Culminating findings from the MET project’s three-year study. Bill and Melinda Gates Foundation. http://k12education.gatesfoundation.org/resource/ensuring-fair-and-reliable-measures-of-effective-teaching-culminating-findings-from-the-met-projects-three-year-study/
Carnoy, M., & Loeb, S. (2004). Does external accountability affect student outcomes? A cross-state analysis. In S. H. Fuhrman & R. F. Elmore (Eds.), Redesigning accountability systems for education (pp. 189–219). Teachers College Press.
Center on Education Policy. (2007). Answering the question that matters most: Has student achievement increased since no child left behind? https://files.eric.ed.gov/fulltext/ED520272.pdf
Chetty, R., Friedman, J., & Rockoff, J. (2011). The long-term impacts of teachers: Teacher value-added and student outcomes in adulthood (NBER Working Paper No. 17699; revised January 2012). National Bureau of Economic Research. https://doi.org/10.3386/w17699
Cizek, G. J. (2009). An introduction to formative assessment: History, characteristics, and challenges. In H. Andrade & G. J. Cizek (Eds.), Handbook of formative assessment (pp. 3–17). Routledge. https://doi.org/10.4324/9781315166933-1
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Erlbaum.
Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design and analysis issues for field settings. Rand McNally.
Cooper, A. (2012, April 27). Analytics and big data: Reflections from the Teradata Universe Conference 2012. CETIS. http://blogs.cetis.org.uk/adam/page/4/
Council of Chief State School Officers (CCSSO). (2008a). Attributes of effective formative assessment: A work product coordinated by Sarah McManus, NC Department of Public Instruction, for the Formative Assessment for Students and Teachers (FAST) collaborative.
Council of Chief State School Officers (CCSSO). (2008b). Formative assessment: Examples of practice: A work product initiated and led by Caroline Wylie, ETS, for the Formative Assessment for Students and Teachers (FAST) collaborative.
Cowie, B., & Bell, B. (1999). A model of formative assessment in science education. Assessment in Education: Principles, Policy & Practice, 6(1), 32–42. https://doi.org/10.1080/09695949993026
Cronbach, L. J. (1964). Essentials of psychological testing (2nd ed.). Harper & Row.
Cuban, L. (1990). Reforming again, again, and again. Educational Researcher, 19(1), 2–13. https://doi.org/10.3102/0013189X019001003
Cumming, J., Jackson, C., Day, C., Maxwell, G., Adie, L., Lingard, B., Haynes, M., & Heck, E. (2018). Queensland NAPLAN Review: School and system perceptions report and literature review. Australian Catholic University, Institute for Learning Sciences and Teacher Education. https://qed.qld.gov.au/programsinitiatives/education/Documents/naplan-2018-school-perceptions-report.pdf
Cuttance, P. (2001). The impact of teaching on student learning. In K. J. Kennedy (Ed.), Beyond the rhetoric: Building a teaching profession to support quality teaching (pp. 35–55). Australian College of Education.
Darling-Hammond, L. (2000). Teacher quality and student achievement: A review of state policy evidence. Education Policy Analysis Archives, 8(1), 1–44. https://doi.org/10.14507/epaa.v8n1.2000
David, J. L., Shields, P. M., Humphrey, D. C., & Young, V. M. (2001). When theory hits reality: Standards-based reform in urban districts: Final narrative report. SRI International. https://files.eric.ed.gov/fulltext/ED480210.pdf
Desimone, L. (2002). How can comprehensive school reform models be successfully implemented? Review of Educational Research, 72(3), 433–479. https://doi.org/10.3102/00346543072003433
Dobbie, W. (2011). Teacher characteristics and student achievement: Evidence from Teach for America. Harvard University.
Ehren, M. C. M., & Swanborn, M. S. L. (2012). Strategic data use of schools in accountability systems. School Effectiveness and School Improvement: An International Journal of Research, Policy and Practice, 23(2), 257–280. https://doi.org/10.1080/09243453.2011.652127
Elmore, R. F. (2004). Conclusion: The problem of stakes in performance-based accountability systems. In S. H. Fuhrman & R. F. Elmore (Eds.), Redesigning accountability systems for education (pp. 274–296). Teachers College Press.
Ercikan, K., & Roth, W.-M. (2014). Limits of generalizing in educational research: Why criteria for research generalization should include population heterogeneity and uses of knowledge claims. Teachers College Record, 116(5), 1–28.
European Union. (2010). Teachers’ professional development: Europe in international comparison: An analysis of teachers’ professional development based on the OECD’s Teaching and Learning International Survey (TALIS). Office for Official Publications of the European Union.
Finnigan, K. S., & Gross, B. (2007). Do accountability policy sanctions influence teacher motivations? Lessons from Chicago’s low-performing schools. American Educational Research Journal, 44(3), 594–629. https://doi.org/10.3102/0002831207306767
Firestone, W. A., & Gonzalez, R. A. (2007). Culture and processes affecting data use in school districts. Yearbook of the National Society for the Study of Education, 106(1), 132–154. https://doi.org/10.1111/j.1744-7984.2007.00100.x
Fitz-Gibbon, C. T. (1996). Monitoring education: Indicators, quality and effectiveness. Cassell.
Fleisch, B. (2006). Education district development in South Africa: A new direction for school improvement? In A. Harris & J. H. Chrispeels (Eds.), Improving schools and educational systems: International perspectives (pp. 217–240). Routledge.
Fuhrman, S. H. (2003). Redesigning accountability systems for education. CPRE Policy Briefs, RB-38, 1–10. https://doi.org/10.1037/e382792004-001
Fuhrman, S. H. (2004). Introduction. In S. H. Fuhrman & R. F. Elmore (Eds.), Redesigning accountability systems for education (pp. 3–14). Teachers College Press.
Fuhrman, S. H., & Elmore, R. F. (Eds.). (2004). Redesigning accountability systems for education. Teachers College Press.
Galbraith, J. R. (1973). Designing complex organizations. Addison-Wesley.
Galbraith, J. R. (1974). Organization design: An information processing view. Interfaces, 4(3), 28–36. https://doi.org/10.1287/inte.4.3.28
Galbraith, J. R. (1977). Organization design. Addison-Wesley.
Galbraith, J. R. (2001). Designing organizations: An executive guide to strategy, structure and process. Jossey-Bass.
Good, R. (2011). Formative use of assessment information: It’s a process, so let’s say what we mean. Practical Assessment, Research and Evaluation, 16(3), 1–6.
Gottfried, M. A., Stecher, B. M., Hoover, M., & Cross, A. B. (2011). Federal and state roles and capacity for improving schools. The RAND Corporation.
Guth, G. J. A., Holtzman, D. J., Schneider, S. A., Carlos, L., Smith, J. R., Hayward, G. C., & Calvo, N. (1999). Evaluation of California’s standards-based accountability system: Final report. WestEd. https://www.wested.org/resources/evaluation-of-californias-standards-based-accountability-system-final-report-november-1999/
Haertel, E. H. (1999). Performance assessment and educational reform. Phi Delta Kappan, 80(9), 662–666.
Hamilton, L., Halverson, R., Jackson, S., Mandinach, E., Supovitz, J., & Wayman, J. (2009). Using student achievement data to support instructional decision making (NCEE 2009-4067). U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance. https://ies.ed.gov/ncee/wwc/Docs/PracticeGuide/dddm_pg_092909.pdf
Hansen, M., & Choi, K. (2011). Chronically low-performing schools and turnaround: Findings in three states (Calder Working Paper No. 60). Calder Center. https://caldercenter.org/sites/default/files/wp-60.pdf
Hanushek, E. A. (1971). Teacher characteristics and gains in student achievement: Estimation using micro data. American Economic Review, 61(2), 280–288.
Hanushek, E. A. (2002). Teacher quality. In L. T. Izumi & W. M. Evers (Eds.), Teacher quality (pp. 1–12). Hoover Press.
Hanushek, E. A., Kain, J. F., & Rivkin, S. G. (1998). Teachers, schools, and academic achievement (NBER Working Paper No. 6691). National Bureau of Economic Research. https://doi.org/10.3386/w6691
Hanushek, E. A., & Rivkin, S. G. (2006). Teacher quality. In E. A. Hanushek & F. Welch (Eds.), Handbook of the economics of education (pp. 1051–1078). Elsevier. https://doi.org/10.1016/S1574-0692(06)02018-6
Hanushek, E. A., & Rivkin, S. G. (2012). The distribution of teacher quality and implications for policy. Annual Review of Economics, 4, 131–157. https://doi.org/10.1146/annurev-economics-080511-111001
Harlen, W. (2005). Teachers’ summative practices and assessment for learning: Tensions and synergies. Curriculum Journal, 16, 207–223. https://doi.org/10.1080/09585170500136093
Harlen, W., & James, M. (1997). Assessment and learning: Differences and relationships between formative and summative assessment. Assessment in Education: Principles, Policy & Practice, 4(3), 365–379. https://doi.org/10.1080/0969594970040304
Harris, A. (2001). Contemporary perspectives on school effectiveness and school improvement. In A. Harris & N. Bennett (Eds.), School effectiveness and school improvement: Alternative perspectives (pp. 7–25). Continuum.
Harris, A., & Chrispeels, J. H. (2006). Introduction. In A. Harris & J. H. Chrispeels (Eds.), Improving schools and educational systems: International perspectives (pp. 3–22). Routledge.
Hattie, J. (2012). Visible learning for teachers: Maximizing impact on learning. Routledge. https://doi.org/10.4324/9780203181522
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112. https://doi.org/10.3102/003465430298487
Havighurst, R. J. (1952). Human development and education. Longmans, Green.
Heritage, M. (2013). Gathering evidence of student understanding. In J. H. McMillan (Ed.), SAGE handbook of research on classroom assessment (pp. 179–195). Sage. https://doi.org/10.4135/9781452218649.n11
Heritage, M. (2014). The place of assessment to improve learning in a context of high accountability. In C. Wyatt-Smith, V. Klenowski, & P. Colbert (Eds.), Designing assessment for quality learning (pp. 337–354). Springer. https://doi.org/10.1007/978-94-007-5902-2_21
Heritage, M., Kim, J., Vendlinski, T., & Herman, J. (2009). From evidence to action: A seamless process in formative assessment? Educational Measurement: Issues and Practice, 28(3), 24–31. https://doi.org/10.1111/j.1745-3992.2009.00151.x
Herman, J., & Gribbons, B. (2001). Lessons learned in using data to support school inquiry and continuous improvement (CSE Technical Report 535). Center for the Study of Evaluation. https://cresst.org/wp-content/uploads/TR535.pdf
Hill, P. W., & Rowe, K. J. (1996). Multilevel modelling in school effectiveness research. School Effectiveness and School Improvement, 7(1), 1–34. https://doi.org/10.1080/0924345960070101
Hill, P. W., & Rowe, K. J. (1998). Modeling student progress in studies of educational effectiveness. School Effectiveness and School Improvement, 9(3), 310–333. https://doi.org/10.1080/0924345980090303
Hopkins, D. (2001). School improvement for real. RoutledgeFalmer.
Hoy, W. K., Tarter, C. J., & Woolfolk Hoy, A. (2006). Academic optimism of schools: A force for student achievement. American Educational Research Journal, 43(3), 425–446. https://doi.org/10.3102/00028312043003425
Ikemoto, G. S., & Marsh, J. A. (2007). Cutting through the ‘data-driven’ mantra: Different conceptions of data-driven decision making. Yearbook of the National Society for the Study of Education, 106(1), 105–131. https://doi.org/10.1111/j.1744-7984.2007.00099.x
Ireson, J. (2008). Learners, learning and educational activity. Routledge. https://doi.org/10.4324/9780203929094
Jacob, B. A., Lefgren, L., & Sims, D. (2008). The persistence of teacher-induced learning gains (NBER Working Paper No. 14065). National Bureau of Economic Research. https://doi.org/10.3386/w14065
Jacob, B. A., & Sims, D. (2010). The long-term value of value-added: Examining the persistence of teacher-induced learning gains. Journal of Human Resources, 45(4), 915–943.
Jensen, B. (2011). Better teacher appraisal and feedback: Improving performance. Grattan Institute.
Jimerson, J. B., Cho, V., Scroggins, K. A., Balial, R., & Robinson, R. R. (2018). How and why teachers engage students with data. Educational Studies, 45(6), 667–691. https://doi.org/10.1080/03055698.2018.1509781
Jimerson, J. B., Cho, V., & Wayman, J. C. (2016). Student-involved data use: Teacher practices and considerations for professional learning. Teaching and Teacher Education, 60, 413–424. https://doi.org/10.1016/j.tate.2016.07.008
Jimerson, J. B., & Reames, E. B. (2015). Student-involved data use: Establishing the evidence base. Journal of Educational Change, 16(3), 281–304. https://doi.org/10.1007/s10833-015-9246-4
Jimerson, J. B., & Wayman, J. C. (2015). Professional learning for using data: Examining teacher needs and supports. Teachers College Record, 117(4), 1–36.
Jordan, H. R., Mendro, R. L., & Weerasinghe, D. (1997). Teacher effects on longitudinal student achievement: A report on research in progress. Dallas Public Schools.
Jorgensen, M. A., & Hoffman, J. (2003). History of the No Child Left Behind Act of 2001 (NCLB). Pearson Education. http://images.pearsonassessments.com/images/tmrs/tmrs_rg/HistoryofNCLB.pdf
Kane, T. J., McCaffrey, D. F., Miller, T., & Staiger, D. O. (2013). Have we identified effective teachers? Validating measures of effective teaching using random assignment. Bill and Melinda Gates Foundation. https://files.eric.ed.gov/fulltext/ED540959.pdf
Kane, T. J., & Staiger, D. O. (2008). Estimating teacher impacts on student achievement: An experimental evaluation (NBER Working Paper No. 14607). National Bureau of Economic Research. https://doi.org/10.3386/w14607
Kane, T. J., Taylor, E. S., Tyler, J. H., & Wooten, A. L. (2010). Identifying effective classroom practices using student achievement data. Journal of Human Resources, 46(3), 587–613. https://doi.org/10.3368/jhr.46.3.587
Katzenbach, J. R., & Smith, D. K. (1993). The wisdom of teams: Creating the high-performance organization. Harvard Business School Press.
Kennedy, B. L., & Datnow, A. (2011). Student involvement in data-driven decision making: Developing a new typology. Youth and Society, 43(4), 1246–1271. https://doi.org/10.1177/0044118X10388219
Kerchner, C. T., Menefee-Libey, D. J., Mulfinger, L. S., & Clayton, S. E. (2008). Learning from L.A.: Institutional change in American public education. Harvard Education Press.
Kingston, N., & Nash, B. (2011). Formative assessment: A meta-analysis and a call for research. Educational Measurement: Issues and Practice, 30(4), 28–37. https://doi.org/10.1111/j.1745-3992.2011.00220.x
Klenowski, V. (2009). Assessment for learning revisited: An Asia-Pacific perspective. Assessment in Education: Principles, Policy & Practice, 16(3), 263–268. https://doi.org/10.1080/09695940903319646
Kluger, A. N., & DeNisi, A. (1996). The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin, 119(2), 254–284. https://doi.org/10.1037/0033-2909.119.2.254
Koretz, D. (2017). The testing charade: Pretending to make schools better. University of Chicago Press. https://doi.org/10.7208/chicago/9780226408859.001.0001
Krauss, S., Brunner, M., Kunter, M., Baumert, J., Blum, W., Neubrand, M., & Jordan, A. (2008). Pedagogical content knowledge and content knowledge of secondary mathematics teachers. Journal of Educational Psychology, 100(3), 716–725. https://doi.org/10.1037/0022-0663.100.3.716
Laveault, D., & Allal, L. (2016). Implementing assessment for learning: Theoretical and practical issues. In D. Laveault & L. Allal (Eds.), Assessment for learning: Meeting the challenge of implementation (pp. 1–18). Springer. https://doi.org/10.1007/978-3-319-39211-0_1
Leahy, S., Lyon, C., Thompson, M., & Wiliam, D. (2005). Classroom assessment: Minute-by-minute and day-by-day. Educational Leadership, 63(3), 18–24.
Lee, H., Chung, H. Q., Zhang, Y., Abedi, J., & Warschauer, M. (2020). The effectiveness and features of formative assessment in U.S. K-12 education: A systematic review. Applied Measurement in Education, 33(2), 124–140. https://doi.org/10.1080/08957347.2020.1732383
Lee, M., Louis, K., & Anderson, S. (2012). Local education authorities and student learning: The effects of policies and practices. School Effectiveness and School Improvement: An International Journal of Research, Policy and Practice, 23(2), 133–158. https://doi.org/10.1080/09243453.2011.652125
Leigh, A. (2010). Estimating teacher effectiveness from two-year changes in students’ test scores. Economics of Education Review, 29(3), 480–488. https://doi.org/10.1016/j.econedurev.2009.10.010
Levin, B. (2000). Putting students at the centre in education reform. Journal of Educational Change, 1, 155–172. https://doi.org/10.1023/A:1010024225888
Levin, J. A., & Datnow, A. (2012). The principal role in data-driven decision making: Using case-study data to develop multi-mediator models of educational reform. School Effectiveness and School Improvement: An International Journal of Research, Policy and Practice, 23(2), 179–201. https://doi.org/10.1080/09243453.2011.599394
Linn, R. L. (2004). Accountability models. In S. H. Fuhrman & R. F. Elmore (Eds.), Redesigning accountability systems for education (pp. 73–95). Teachers College Press.
Linn, R. L. (2005). Issues in the design of accountability systems. Yearbook of the National Society for the Study of Education, 104(2), 78–98. https://doi.org/10.1111/j.1744-7984.2005.00026.x
Lodge, C. (2005). From hearing voices to engaging in dialogue: Problematising student engagement in school improvement. Journal of Educational Change, 6(2), 125–146. https://doi.org/10.1007/s10833-005-1299-3
Looney, J. (Ed.). (2005). Formative assessment: Improving learning in secondary classrooms (OECD Policy Brief). OECD. http://www.oecd.org/education/ceri/35661078.pdf
Looney, J. (2009). Assessment and innovation in education (OECD Education Working Paper No. 24). OECD. https://doi.org/10.1787/222814543073
Louis, K. S., Toole, J., & Hargreaves, A. (1999). Rethinking school improvement. In J. Murphy & K. S. Louis (Eds.), Handbook of research on educational administration (2nd ed., pp. 251–276). Jossey Bass.
Luyten, H., & Snijders, T. A. B. (1996). School effects and teacher effects in Dutch elementary education. Educational Research and Evaluation, 2(1), 1–24. https://doi.org/10.1080/1380361960020101
Luyten, H., Visscher, A., & Witziers, B. (2004). School effectiveness research: From a review of the criticism to recommendations for further development. School Effectiveness and School Improvement, 16(3), 249–279. https://doi.org/10.1080/09243450500114884
MacBeath, J., & Mortimer, P. (2001). School effectiveness and improvement: The story so far. In J. MacBeath & P. Mortimer (Eds.), Improving school effectiveness (pp. 1–21). Open University Press.
Mandinach, E. B., & Honey, M. (Eds.). (2008). Data-driven school improvement: Linking data and learning. Teachers College Press.
Mandinach, E. B., Honey, M., & Light, D. (2006, April 7–11). A theoretical framework for data-driven decision making (Paper presentation). American Educational Research Association Annual Meeting, San Francisco, CA, USA. https://pdfs.semanticscholar.org/70be/11b76e48eab123ef8a0d721accedb335ed5c.pdf
Mandinach, E. B., & Jackson, S. S. (2012). Transforming teaching and learning through data-driven decision making. Corwin Press. https://doi.org/10.4135/9781506335568
March, J. G., & Simon, H. A. (1993). Organizations (2nd ed.). Basil Blackwell.
Mark, M. M. (2005). Generalization. In S. Mathison (Ed.), Encyclopedia of evaluation (p. 169). Sage. https://doi.org/10.4135/9781412950558.n229
Marsh, J. A., Pane, J. F., & Hamilton, L. S. (2006). Making sense of data-driven decision making in education: Evidence from recent RAND research. RAND Corporation.
Marsh, J. A., Farrell, C. C., & Bertrand, M. (2014). Trickle-down accountability: How middle school teachers engage students in data use. Educational Policy, 30(2), 243–280. https://doi.org/10.1177/0895904814531653
Martins, P. S. (2009). Individual teacher incentives, student achievement and grade inflation (IZA Discussion Paper No. 4051). Institute for the Study of Labor (IZA).
Marzano, R. J. (2007). The art and science of teaching: A comprehensive framework for effective instruction. ASCD.
Maxwell, G. S. (2010). Moderation of student work by teachers. In P. Peterson, E. Baker, & B. McGaw (Eds.), International encyclopedia of education (Vol. 3, 3rd ed., pp. 457–463). Elsevier. https://doi.org/10.1016/B978-0-08-044894-7.00347-X
McNaughton, S., Lai, M. K., & Hsiao, S. (2012). Testing the effectiveness of an intervention model based on data use: A replication series across clusters of schools. School Effectiveness and School Improvement: An International Journal of Research, Policy and Practice, 23(2), 203–228. https://doi.org/10.1080/09243453.2011.652126
Mihaly, K., McCaffrey, D. F., Staiger, D. O., & Lockwood, J. R. (2013). A composite estimator of effective teaching. Bill and Melinda Gates Foundation. http://k12education.gatesfoundation.org/resource/a-composite-estimator-of-effective-teaching/
Moss, C. M., & Brookhart, S. M. (2019). Advancing formative assessment in every classroom: A guide for instructional leaders (2nd ed.). ASCD.
Moss, P. A. (Ed.). (2007). Evidence and decision making (Special issue). Yearbook of the National Society for the Study of Education, 106(1). https://doi.org/10.1111/j.1744-7984.2007.00095.x
Moss, P. A., & Piety, P. J. (2007). Introduction: Evidence and decision making. Yearbook of the National Society for the Study of Education, 106(1), 1–14. https://doi.org/10.1111/j.1744-7984.2007.00095.x
Murphy, J. F., & Bleiberg, J. F. (2019). School turnaround policies and practices in the US: Learning from failed school reform. Springer.
Nye, B., Konstantopoulos, S., & Hedges, L. V. (2004). How large are teacher effects? Educational Evaluation and Policy Analysis, 26(3), 237–257. https://doi.org/10.3102/01623737026003237
O’Day, J. A. (2002). Complexity, accountability, and school improvement. Harvard Educational Review, 72(3), 293–329. https://doi.org/10.17763/haer.72.3.021q742t8182h238
O’Day, J. A. (2004). Complexity, accountability, and school improvement. In S. H. Fuhrman & R. F. Elmore (Eds.), Redesigning accountability systems for education (pp. 15–43). Teachers College Press.
OECD. (2009). Creating effective teaching and learning environments: First results from TALIS. OECD. https://doi.org/10.1787/23129638
OECD. (2013a). Synergies for better learning: An international perspective on evaluation and assessment. OECD.
OECD. (2013b). Teachers for the 21st century: Using evaluation to improve teaching. OECD.
OECD/CERI. (2008, May 15–16). Assessment for learning: The case for formative assessment (Paper presentation). OECD/CERI International Conference, Learning in the 21st century: Research, innovation and policy, Paris, France. https://www.oecd.org/site/educeri21st/40600533.pdf
O’Malley, K. J., Murphy, S., McClarty, K. L., Murphy, D., & McBryde, Y. (2011). Overview of student growth models. Pearson. http://images.pearsonassessments.com/images/tmrs/Student_Growth_WP_083111_FINAL.pdf
Panadero, E., Jonsson, A., & Alqassab, M. (2018). Providing formative peer feedback. In A. A. Lipnevich & J. K. Smith (Eds.), The Cambridge handbook of instructional feedback (pp. 351–353). Cambridge University Press. https://doi.org/10.1017/9781316832134.020
Pawson, R. (2013). The science of evaluation: A realist manifesto. Sage. https://doi.org/10.4135/9781473913820
Pawson, R., & Tilley, N. (1997). Realistic evaluation. Sage.
Pawson, R., & Tilley, N. (2001). Realistic evaluation bloodlines. American Journal of Evaluation, 22(3), 317–324. https://doi.org/10.1177/109821400102200305
Penuel, W. R., & Shepard, L. A. (2016). Assessment and teaching. In D. H. Gitomer & C. A. Bell (Eds.), Handbook of research on teaching (5th ed., pp. 787–850). American Educational Research Association. https://doi.org/10.3102/978-0-935302-48-6_12
Perie, M., Marion, S., & Gong, B. (2009). Moving toward a comprehensive assessment system: A framework for considering interim assessments. Educational Measurement: Issues and Practice, 28, 5–13. https://doi.org/10.1111/j.1745-3992.2009.00149.x
Pomerol, J.-C., & Adam, F. (2004, July 1–3). Practical decision making: From the legacy of Herbert Simon to Decision Support Systems (Paper presentation). International Federation of Information Processing Working Group 8.3 Conference, Prato, Italy. https://castle.eiu.edu/~a_illia/MBA5670/Pomerol-Adam-practical-decisionmaking-from-Simon-to-DSS-2004.pdf
Ramaprasad, A. (1983). On the definition of feedback. Behavioural Science, 28(1), 4–13. https://doi.org/10.1002/bs.3830280103
Rey, O. (2010). The use of external assessments and the impact on education systems. In S. M. Stoney (Ed.), Beyond Lisbon 2010: Perspectives from research and development for education policy in Europe (CIDREE Yearbook 2010) (pp. 138–158). NFER. https://pdfs.semanticscholar.org/d2a9/181ac89e93ea2f7819b63ffaccc5cf0c060d.pdf?_ga=2.205258384.1677626141.1594253991-2147439961.1594253991
Reynolds, D. (2001). Beyond school effectiveness and school improvement? In A. Harris & N. Bennett (Eds.), School effectiveness and school improvement: Alternative perspectives (pp. 26–43). Continuum.
Reynolds, D., Sammons, P., De Fraine, B., Van Damme, J., Townsend, T., Teddlie, C., & Stringfield, S. (2014). Educational effectiveness research: A state-of-the-art review. School Effectiveness and School Improvement, 25(2), 197–230. https://doi.org/10.1080/09243453.2014.885450
Rivkin, S. G., Hanushek, E. A., & Kain, J. F. (2005). Teachers, schools, and academic achievement. Econometrica, 73(2), 417–458. https://doi.org/10.1111/j.1468-0262.2005.00584.x
Rockoff, J. E., Jacob, B. A., Kane, T. J., & Staiger, D. O. (2011). Can you recognize an effective teacher when you recruit one? Education Finance and Policy, 6(1), 43–74. https://doi.org/10.1162/EDFP_a_00022
Rothstein, J. (2010). Teacher quality in educational production: Tracking, decay, and student achievement. Quarterly Journal of Economics, 125(1), 175–214. https://doi.org/10.1162/qjec.2010.125.1.175
Rothstein, J., & Mathis, W. J. (2013). Review of two culminating reports from the MET project. National Education Policy Center.
Rowan, B., Correnti, R., & Miller, R. J. (2002). What large-scale, survey research tells us about teacher effects on student achievement: Insights from the Prospects study of elementary schools (CPRE Research Report Series RR-051). Consortium for Policy Research in Education, University of Pennsylvania. https://doi.org/10.1037/e384482004-001
Rowe, K. (2003, October 19–21). The importance of teacher quality as a key determinant of students’ experiences and outcomes of schooling (Keynote address). ACER Research Conference, Melbourne, Victoria, Australia.
Ruiz-Primo, M. A. (2011). Informal formative assessment: The role of instructional dialogues in assessing students’ learning. Studies in Educational Evaluation, 37, 15–24. https://doi.org/10.1016/j.stueduc.2011.04.003
Sadler, D. R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18, 119–144. https://doi.org/10.1007/BF00117714
Sanders, W. L., & Rivers, J. C. (1996a). Cumulative and residual effects of teachers on future student academic achievement: Research progress report. University of Tennessee Value-Added research and Assessment Center. https://www.heartland.org/publications-resources/publications/cumulative-and-residual-effects-of-teachers-on-future-student-academic-achievement
Sanders, W. L., & Rivers, J. C. (1996b). Research findings from the Tennessee value-added assessment system (TVAAS) database: Implications for educational evaluation and research. Journal of Personnel Evaluation in Education, 12(3), 247–256.
Sarason, S. B. (1990). The predictable failure of educational reform: Can we change course before it’s too late? Jossey Bass.
Saw, G., Schneider, B., Frank, K., Chen, I.-C., Keesler, V., & Martineau, J. (2017). The impact of being labeled as a persistently lowest achieving school: Regression discontinuity evidence on consequential school labeling. American Journal of Education, 123, 585–613. https://doi.org/10.1086/692665
Schacter, J., & Thum, Y. M. (2004). Paying for high- and low-quality teaching. Economics of Education Review, 23, 411–430. https://doi.org/10.1016/j.econedurev.2003.08.002
Scheerens, J., & Bosker, R. (1997). The foundations of educational effectiveness. Pergamon.
Schildkamp, K., Ehren, M., & Lai, M. K. (2012). Editorial article for the special issue on data-based decision making around the world: From policy to practice to results. School Effectiveness and School Improvement: An International Journal of Research, Policy and Practice, 23(2), 123–131. https://doi.org/10.1080/09243453.2011.652122
Schildkamp, K., & Kuiper, W. (2010). Data-informed curriculum reform: Which data, what purposes, and promoting and hindering factors. Teaching and Teacher Education, 26, 482–496. https://doi.org/10.1016/j.tate.2009.06.007
Schildkamp, K., & Visscher, A. (2010). The use of performance feedback in school improvement in Louisiana. Teaching and Teacher Education, 26, 1389–1403. https://doi.org/10.1016/j.tate.2010.04.004
Scriven, M. (1967). The methodology of evaluation. In R. Tyler, R. Gagne, & M. Scriven (Eds.), Perspectives on curriculum evaluation (AERA Monograph Series – Curriculum Evaluation). Rand McNally.
Senge, P. M. (1990). The fifth discipline: The art and practice of the learning organization. Doubleday.
Senge, P., Kleiner, A., Roberts, C., Ross, R., Roth, G., & Smith, B. (1999). The dance of change: The challenges of sustaining momentum in learning organizations. Doubleday/Currency.
Senge, P., Ross, R., Smith, B., Roberts, C., & Kleiner, A. (1994). The fifth discipline fieldbook: Strategies and tools for building a learning organization. Doubleday/Currency.
Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Houghton Mifflin.
Shepard, L. (2009). Commentary: Evaluating the validity of formative and interim assessment. Educational Measurement: Issues and Practice, 28(3), 32–37. https://doi.org/10.1111/j.1745-3992.2009.00152.x
Shepard, L. A. (2019). Classroom assessment to support teaching and learning. The Annals of the American Academy of Political and Social Science, 683(1), 183–200. https://doi.org/10.1177/0002716219843818
Shepard, L. A., Penuel, W. R., & Davidson, K. L. (2017). Design principles for new systems of assessment. Phi Delta Kappan, 98(6), 47–52. https://doi.org/10.1177/0031721717696478
Shulman, L. S. (1986). Those who understand: Knowledge growth in teaching. Educational Researcher, 15(2), 4–31. https://doi.org/10.3102/0013189X015002004
Shute, V. J. (2007). Focus on feedback. Educational Testing Service. https://doi.org/10.1002/j.2333-8504.2007.tb02053.x
Shute, V. J. (2008). Focus on formative feedback. Review of Educational Research, 78, 153–189. https://doi.org/10.3102/0034654307313795
Simon, H. A. (1972). Theories of bounded rationality. In C. B. McGuire & R. Radner (Eds.), Decision and organization (pp. 161–176). North Holland.
Simon, H. A. (1977). The new science of management decision (3rd ed.). Prentice Hall.
Simon, H. A. (1986). Report of the research briefing panel on decision making and problem solving. In National Research Council, Research briefings 1986 (pp. 17–35). The National Academies Press. https://doi.org/10.17226/911
Simon, H. A. (1997). Administrative behavior (4th ed.). The Free Press.
Stecher, B. M., Holtzman, D. J., Garet, M. S., Hamilton, L. S., Engberg, J., Steiner, E. D., Robyn, A., Baird, M. D., Gutierrez, I. A., Peet, E. D., De Los Reyes, I. B., Fronberg, K., Weinberger, G., Hunter, G. H., & Chambers, J. (2018). Improving teaching effectiveness: Final report: The intensive partnerships for effective teaching. RAND Corporation. https://doi.org/10.7249/RR2242
Stobart, G., & Hopfenbeck, T. N. (2014). Assessment for learning and formative assessment. In J.-A. Baird, T. N. Hopfenbeck, P. Newton, G. Stobart, & A. T. Steen-Utheim (Eds.), Assessment and learning: State of the field review (pp. 33–50). Oxford University Centre for Educational Assessment; Knowledge Centre for Education.
Stock, R. (2004). Drivers of team performance: What do we know and what have we still to learn? Schmalenbach Business Review, 56, 274–306. https://doi.org/10.1007/BF03396696
Stoll, L., Creemers, B. P. M., & Reezigt, G. (2006). Effective school improvement: Similarities and differences in improvement in eight European countries. In A. Harris & J. H. Chrispeels (Eds.), Improving schools and educational systems: International perspectives (pp. 90–106). Routledge.
Strunk, K. O., & McEachin, A. (2014). More than sanctions: Closing achievement gaps through California’s use of intensive technical assistance. Educational Evaluation and Policy Analysis, 36(3), 281–306. https://doi.org/10.3102/0162373713510967
Stufflebeam, D. L. (1999). Foundational models for 21st century program evaluation. The Evaluation Center, Western Michigan University.
Stuit, D. (2010). Are bad schools immortal? The scarcity of turnarounds and shutdowns in both charter and district sectors. Thomas B. Fordham Institute.
Tanenbaum, C., Boyle, A., Graczewski, C., James-Burdumy, S., Dragoset, L., Hallgren, K., et al. (2015). State capacity to support school turnaround. Mathematica Policy Research. https://files.eric.ed.gov/fulltext/ED556118.pdf
Teddlie, C., & Reynolds, D. (Eds.). (2000). International handbook on school effectiveness and improvement. Falmer Press.
Teddlie, C. B., & Stringfield, S. (2006). A brief history of school improvement research in the USA. In A. Harris & J. H. Chrispeels (Eds.), Improving schools and educational systems: International perspectives (pp. 23–38). Routledge.
Thorn, C. (2002). Data use in the classroom: The challenges of implementing data-based decision making at the school level. Wisconsin Center for Education Research. https://wcer.wisc.edu/docs/working-papers/Working_Paper_No_2002_2.pdf
Threlfall, J. (2005). The formative use of assessment information in planning: The notion of contingent planning. British Journal of Educational Studies, 53(1), 54–65. https://doi.org/10.1111/j.1467-8527.2005.00283.x
Thrupp, M., Lupton, R., & Brown, C. (2007). Pursuing the contextualization agenda: Recent progress and future prospects. In T. Townsend (Ed.), International handbook on school effectiveness and improvement (pp. 111–125). Springer. https://doi.org/10.1007/978-1-4020-5747-2_7
Thurlings, M., Vermeulen, M., Bastiaens, T., & Stijnen, S. (2013). Understanding feedback: A learning theory perspective. Educational Research Review, 9, 1–15. https://doi.org/10.1016/j.edurev.2012.11.004
Topping, K. J. (2009). Peers as a source of formative assessment. In H. Andrade & G. J. Cizek (Eds.), Handbook of formative assessment (pp. 61–74). Routledge.
Topping, K. J. (2013). Peers as a source of formative and summative assessment. In J. H. McMillan (Ed.), SAGE handbook of research on classroom assessment (pp. 395–412). Sage. https://doi.org/10.4135/9781452218649.n22
Townsend, T. (2007a). 20 years of ICSEI: The impact of school effectiveness and school improvement on school reform. In T. Townsend (Ed.), International handbook on school effectiveness and improvement (pp. 3–26). Springer. https://doi.org/10.1007/978-1-4020-5747-2_1
Townsend, T. (Ed.). (2007b). International handbook on school effectiveness and improvement. Springer. https://doi.org/10.1007/978-1-4020-5747-2
Tyack, D., & Cuban, L. (1995). Tinkering toward utopia: A century of public school reform. Harvard University Press.
van der Kleij, F., & Adie, L. (2020). Towards effective feedback: An investigation of teachers’ and students’ perceptions of oral feedback in classroom practice. Assessment in Education: Principles, Policy & Practice, 27(3), 252–270. https://doi.org/10.1080/0969594X.2020.1748871
van der Kleij, F. M., Feskens, R. C. W., & Eggen, T. J. H. M. (2015). Effects of feedback in a computer-based learning environment on students’ learning outcomes: A meta-analysis. Review of Educational Research, 85(4), 475–511. https://doi.org/10.3102/0034654314564881
van der Kleij, F. M., Vermeulen, J. A., Schildkamp, K., & Eggen, T. J. H. M. (2015). Integrating data-based decision making, assessment for learning and diagnostic testing in formative assessment. Assessment in Education: Principles, Policy & Practice, 22(3), 324–343. https://doi.org/10.1080/0969594X.2014.999024
Visscher, A. J., & Coe, R. (Eds.). (2002). School improvement through performance. Swets & Zeitlinger.
Wayman, J. C. (2005). Involving teachers in data-driven decision making: Using computer data systems to support teacher inquiry and reflection. Journal of Education for Students Placed at Risk, 10(3), 295–308. https://doi.org/10.1207/s15327671espr1003_5
Wayman, J. C., Jimerson, J. B., & Cho, V. (2012). Organizational considerations in establishing the data-informed district. School Effectiveness and School Improvement: An International Journal of Research, Policy and Practice, 23(2), 159–178. https://doi.org/10.1080/09243453.2011.652124
Wiener, N. (1948). Cybernetics, or control and communication in the animal and the machine. John Wiley and Sons.
Weinstein, T. (2011, April 8–12). Interpreting No Child Left Behind corrective action and technical assistance programs: A review of state policy (Paper presentation). American Educational Research Association Annual Meeting, New Orleans, LA, USA.
Werler, T., & Klepstad Færevaag, M. (2017). National testing in Norwegian classrooms: A tool to improve pupil performance? Nordic Journal of Studies in Education Policy, 3(1), 67–91. https://doi.org/10.1080/20020317.2017.1320188
Wiliam, D. (2005). Keeping learning on track: Formative assessment and the regulation of learning. In M. Coupland, J. Anderson, & T. Spencer (Eds.), Making mathematics vital: Proceedings of the Twentieth Biennial Conference of the Australian Association of Mathematics Teachers (pp. 20–34). AAMT.
Wiliam, D. (2007). Keeping learning on track: Classroom assessment and the regulation of learning. In F. K. Lester Jr. (Ed.), Second handbook of mathematics teaching and learning (pp. 1053–1098). Information Age Publishing.
Wiliam, D. (2009). An integrative summary of the research literature and implications for a new theory of formative assessment. In H. L. Andrade & G. J. Cizek (Eds.), Handbook of formative assessment (pp. 18–40). Taylor & Francis.
Wiliam, D. (2010a, March 4). Teacher quality: Why it matters, and how to get more of it [Paper presentation]. Spectator ‘Schools Revolution’ Conference, London, United Kingdom. https://www.dylanwiliam.org/Dylan_Wiliams_website/Papers.html
Wiliam, D. (2010b). The role of formative assessment in effective learning environments. In H. Dumont, D. Istance, & F. Benavides (Eds.), The nature of learning: Using research to inspire practice (pp. 135–159). OECD. https://doi.org/10.1787/9789264086487-8-en
Wiliam, D. (2011). What is assessment for learning? Studies in Educational Evaluation, 37, 3–14. https://doi.org/10.1016/j.stueduc.2011.03.001
Wiliam, D. (2017). Review of ‘Assessment for learning: Meeting the challenge of implementation’. Assessment in Education: Principles, Policy & Practice, 25(6), 682–685. https://doi.org/10.1080/0969594X.2017.1401526
Wiliam, D. (2018). Feedback: At the heart of – but definitely not all of – formative assessment. In A. A. Lipnevich & J. K. Smith (Eds.), The Cambridge handbook of instructional feedback (pp. 3–28). Cambridge University Press. https://doi.org/10.1017/9781316832134.003
Wiliam, D., Lee, C., Harrison, C., & Black, P. (2004). Teachers developing assessment for learning: Impact on student achievement. Assessment in Education: Principles, Policy & Practice, 11, 49–65. https://doi.org/10.1080/0969594042000208994
Wiliam, D., & Thompson, M. (2008). Integrating assessment with instruction: What will it take to make it work? In C. A. Dwyer (Ed.), The future of assessment: Shaping teaching and learning (pp. 53–82). Routledge. https://doi.org/10.4324/9781315086545-3
Wright, S. P., Horn, S. P., & Sanders, W. L. (1997). Teacher and classroom context effects on student achievement: Implications for teacher evaluation. Journal of Personnel Evaluation in Education, 11, 57–67. https://doi.org/10.1023/A:1007999204543
Wylie, E. C., & Lyon, C. J. (2015). The fidelity of formative assessment implementation: Issues of breadth and quality. Assessment in Education: Principles, Policy & Practice, 22(1), 140–160. https://doi.org/10.1080/0969594X.2014.990416
Zhao, Y. (2017). What works may hurt: Side effects in education. Journal of Educational Change, 18(1), 1–19. https://doi.org/10.1007/s10833-016-9294-4
© 2021 Springer Nature Switzerland AG
Maxwell, G.S. (2021). Different Approaches to Data Use. In: Using Data to Improve Student Learning. The Enabling Power of Assessment, vol 9. Springer, Cham. https://doi.org/10.1007/978-3-030-63539-8_2
Print ISBN: 978-3-030-63537-4
Online ISBN: 978-3-030-63539-8