Abstract
Data interpretation is seen as a process of meaning making. This requires attention to the purpose in analysing the data, the kinds of questions asked and by whom, and the kind of data that are needed or available. The relationship between questions and data can be interactive. Data can be aggregated, disaggregated, transformed and displayed in order to reveal patterns, relationships and trends. Different ways of comparing data can be identified—against peers, against standards, against self—and of delving more deeply—through protocol analysis, reason analysis, error analysis, and change analysis. Techniques for analysing group change and growth present various technical challenges and cautions. In particular, value-added measures have been shown to have serious flaws if used for teacher and school evaluation. Data literacy is being given increasing attention as a requirement for successful data interpretation and use, along with associated literacies in educational assessment, measurement, statistics and research. These literacies lack clear definition and elaboration. They also present many challenges for professional development and warrant further research.
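To make the kinds of comparison named above concrete, here is a minimal sketch in Python (the column names, subgroup labels and benchmark score are hypothetical, invented purely for illustration): it compares scores against a standard, against peers by disaggregating into subgroups, and against self by computing each student's change across occasions.

```python
import pandas as pd

# Hypothetical class records: one row per student per assessment occasion.
# All names, labels and scores are invented for illustration.
df = pd.DataFrame({
    "student":  ["A", "A", "B", "B", "C", "C"],
    "group":    ["EAL", "EAL", "non-EAL", "non-EAL", "EAL", "EAL"],
    "occasion": [1, 2, 1, 2, 1, 2],
    "score":    [48, 55, 62, 60, 35, 46],
})

STANDARD = 50  # assumed proficiency benchmark, not from the chapter

# Against standards: proportion of scores meeting the benchmark.
meets = (df["score"] >= STANDARD).mean()

# Against peers: disaggregate by subgroup to reveal patterns that
# aggregate figures can hide.
by_group = df.groupby("group")["score"].mean()

# Against self (ipsative): each student's change between occasions.
growth = df.pivot(index="student", columns="occasion", values="score")
growth = growth[2] - growth[1]

print(f"Meeting standard: {meets:.0%}")
print(by_group)
print(growth)
```

The same aggregate mean would mask the subgroup gap and the individual growth visible here, which is the point of disaggregating and of ipsative comparison.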
Notes
- 1.
Coburn and Turner (2011) noted that ‘a handful of studies do link interventions to context, data use process, and outcomes, providing insight into at least a few possible pathways from intervention to student learning’ (p. 195).
- 2.
There is a fifth question about implications of the data: how to make things better. Because this question extends beyond the meaning of the data and requires additional considerations, such as curriculum and instruction opportunities, this matter is discussed in Chap. 9.
- 3.
The characteristics of QSP, along with other programs of the time, are summarised in Wayman et al. (2004).
- 4.
The six challenges are quoted from Mason (2002, p. 6); the comments in parentheses paraphrase the discussion of each challenge.
- 5.
Lachat (2001) provides several vignettes on the ways in which data disaggregation assisted schools in revealing false assumptions about what affected low achievement, the effectiveness of special programs, equity for specific groups of students, and consistency of expectations across areas of learning.
- 6.
Protocol analysis and reason analysis were discussed in Chap. 6 as the third and fourth ways of validating reasoning processes in performance assessments.
- 7.
- 8.
In some circumstances, sophisticated statistical methods can be used to impute best estimates of the missing data (a simple illustration follows these notes).
- 9.
Braun (2005, p. 493) warns that ‘the strength of the correspondence between the evidence from one test and that from another, superficially similar, test is determined by the different aspects of knowledge and skills that the two tests tap, by the amount and quality of the information they provide, and by how well they each match the students’ instructional experiences.’
- 10.
- 11.
Amrein-Beardsley et al. (2013) introduce a Special Issue of Education Policy Analysis Archives (Volume 21, Number 4) entitled Value-added: What America’s policymakers need to know and understand.
- 12.
‘Because true teacher effects might be correlated with the characteristics of the students they teach, current VAM approaches cannot separate any existing contextual effects from these true teacher effects. Existing research is not sufficient for determining the generalizability of this finding or the severity of the actual problems associated with omitted background variables. … [O]ur analysis and simulations demonstrate that VAM based rankings of teachers are highly unstable, and that only large differences in estimated impact are likely to be detectable given the effects of sampling error and other sources of uncertainty. Interpretations of differences among teachers based on VAM estimates should be made with extreme caution’ (McCaffrey et al., 2003, p. 113). An illustrative simulation of this instability follows these notes.
- 13.
Huff (1954) provided the ultimate guide (‘How to lie with statistics’), but he was actually addressing the consumer of statistics, warning against misinterpretation.
- 14.
The extended version is: ‘Data-literate educators continuously, effectively, and ethically access, act on, and communicate multiple types of data from state, local, classroom, and other sources to improve outcomes for students in a manner appropriate to educators’ professional roles and responsibilities’ (DQC, 2014, p. 6).
- 15.
Cowie and Cooper (2017): ‘Assessment literacy, broadly defined, encompasses how to construct, administer and score reliable student assessments and communicate valid interpretations about student learning, as well as the capacity to integrate assessment into teaching and learning for formative purposes’ (p. 148).
- 16.
Honig and Venkateswaran (2012) draw attention to the differences between school and central office use of data, and the interrelationships between the two.
- 17.
A quite different way of characterising assessment literacy has been expressed in the research literature, one located in sociocultural theory. This view is less concerned with ‘skills, knowledges and cognitions’ than with social, ethical and collaborative practice. Willis, Adie, and Klenowski (2013) define ‘teacher assessment literacies as dynamic social practices which are context dependent and which involve teachers in articulating and negotiating classroom and cultural knowledges [sic] with one another and with learners, in initiation, development and practice of assessment to achieve the learning goals of students’ (p. 241). Their focus is the intersection and interconnection of assessment practice and pedagogical practice, characterised as ‘horizontal discourses’; this offers no guidance on data literacy, which is seen as a component of ‘vertical discourses’. Teacher collaboration and communities of practice are reviewed in Chap. 10.
- 18.
The fifth skill, instructional decision making, is a step beyond data interpretation per se, and is taken up in Chap. 9.
- 19.
Kippers, Poortman, Schildkamp, and Visscher (2018) also based their approach to data literacy development on the inquiry cycle. They identify five decision steps: set a purpose; collect data; analyse data; interpret data; and take instructional action. Other formulations of the decision cycle are explored in Chap. 9.
- 20.
‘Identify problems’ and ‘frame questions’ are potentially relevant, but are not elaborated in the Gummer and Mandinach (2015) model.
- 21.
These are a reinterpretation (reframed and reorganised) of Gummer and Mandinach (2015), where the elements are presented in the form of a mind map.
- 22.
This list is a paraphrase of Brookhart (2011), Table 1, p. 7.
- 23.
Looney, Cumming, van der Kleij, and Harris (2017) propose an extension of the concept of assessment literacy to encompass ‘assessment identity’, with ‘not only a range of assessment strategies and skills, and even confidence and self-efficacy in undertaking assessment, but also the beliefs and feelings about assessment’ (p. 15). They also examine assessment literacy instruments for their theoretical justification and validity (Appendix 2).
- 24.
DeLuca et al. (2016a) also developed the Approaches to Classroom Assessment Inventory, incorporating these dimensions.
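Note 8 above mentions statistical estimation of missing data without naming a method. As a deliberately simple, hypothetical illustration (occasion-mean imputation; all data invented), the gap-filling step might look like the sketch below; serious applications would prefer model-based or multiple imputation, which also carry the uncertainty of the estimates through to reported statistics.

```python
import numpy as np
import pandas as pd

# Hypothetical score matrix: rows are students, columns are test
# occasions; NaN marks a missed test. All values are invented.
scores = pd.DataFrame(
    {"term1": [52.0, 61.0, np.nan, 45.0],
     "term2": [55.0, np.nan, 49.0, 50.0]},
    index=["A", "B", "C", "D"],
)

# A deliberately simple estimate: fill each gap with that occasion's
# mean. Model-based alternatives (regression or multiple imputation)
# condition on other variables and propagate estimation uncertainty.
imputed = scores.fillna(scores.mean())
print(imputed)
```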
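Note 12’s warning about unstable value-added rankings can be demonstrated with a toy simulation (the effect and noise standard deviations and the class size below are assumptions chosen only to make the point, not estimates from the literature): ranking the same teachers from two different simulated classes yields only a weak correlation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_teachers, class_size = 50, 25  # assumed sizes, for illustration only

# Assumed true teacher effects (sd 0.1) are small relative to
# student-level noise (sd 1.0) in this toy setup.
true_effect = rng.normal(0.0, 0.1, n_teachers)

def estimated_ranks(rng):
    # Each teacher's 'value added' is the mean score gain of one class.
    gains = true_effect[:, None] + rng.normal(0.0, 1.0, (n_teachers, class_size))
    return gains.mean(axis=1).argsort().argsort()  # rank 0 = lowest

# Rank the same teachers twice, from two different simulated classes.
r1, r2 = estimated_ranks(rng), estimated_ranks(rng)
print("Rank correlation across replications:",
      round(float(np.corrcoef(r1, r2)[0, 1]), 2))
```

Because the spread of true effects is small relative to sampling error in class means, the two sets of rankings agree only weakly, which is exactly the instability the quoted passage warns about.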
References
Adelson, J. L., Dickinson, E. R., & Cunningham, B. C. (2016). A multigrade, multiyear statewide examination of reading achievement: Examining variability between districts, schools, and students. Educational Researcher, 45(4), 258–262. https://doi.org/10.3102/0013189X16649960
Allen, L. K., Likens, A. D., & McNamara, D. S. (2018). Writing flexibility in argumentative essays: A multidimensional analysis. Reading and Writing, 32, 1607–1634. https://doi.org/10.1007/s11145-018-9921-y
American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (2014). Standards for educational and psychological testing. American Educational Research Association.
American Federation of Teachers, National Council on Measurement in Education, & National Education Association. (1990). Standards for teacher competence in educational assessment of students. National Council on Measurement in Education.
Amrein-Beardsley, A. (2008). Methodological concerns about the education value-added assessment system. Educational Researcher, 37(2), 65–75. https://doi.org/10.3102/0013189X08316420
Amrein-Beardsley, A. (2014). Rethinking value-added models in education. Routledge. https://doi.org/10.4324/9780203409909
Amrein-Beardsley, A., Collins, C., Polasky, S. A., & Sloat, E. F. (2013). Value-added model (VAM) research for educational policy: Framing the issue. Education Policy Analysis Archives, 21(4). https://doi.org/10.14507/epaa.v21n4.2013
Association for Educational Assessment–Europe. (2012). European framework of standards for educational assessment 1.0. AEA Europe. https://www.aea-europe.net/wp-content/uploads/2017/07/SW_Framework_of_European_Standards.pdf
Athanases, S. Z., Bennett, L. H., & Wahleithner, J. M. (2013). Fostering data literacy through preservice teacher inquiry in English language arts. The Teacher Educator, 48(1), 8–28. https://doi.org/10.1080/08878730.2012.740151
Baker, E. (2000). Understanding educational quality: Where validity meets technology (William H. Angoff Memorial Lecture Series). Educational Testing Service. http://files.eric.ed.gov/fulltext/ED449172.pdf
Baker, E. L., Barton, P., Darling-Hammond, L., Haertel, E., Ladd, H. F., Linn, R. L., Ravitch, D., Rothstein, R., Shavelson, R. J., & Shepard, L. A. (2010). Problems with the use of student test scores to evaluate teachers. Economic Policy Institute. http://www.epi.org/publication/bp278/
Bernhardt, V. (1998). Data analysis for comprehensive schoolwide improvement. Eye on Education.
Betebenner, D. (2009). Norm- and criterion-referenced student growth. Educational Measurement: Issues and Practice, 28(4), 42–51. https://doi.org/10.1111/j.1745-3992.2009.00161.x
Betebenner, D. W., & Linn, R. L. (2010). Growth in student achievement: Issues of measurement, longitudinal data analysis and accountability. Educational Testing Service, K–12 Assessment and Performance Management Center.
Braun, H. I. (2005). Using student progress to evaluate teachers: A primer on value-added models (Policy Information Perspective). Educational Testing Service.
Braun, H., Chudowsky, N., & Koenig, J. (Eds.). (2010). Getting value out of value-added: Report of a workshop. National Academies Press.
Brookhart, S. M. (2011). Educational assessment knowledge and skills for teachers. Educational Measurement: Issues and Practice, 30(1), 3–12. https://doi.org/10.1111/j.1745-3992.2010.00195.x
Brown, J., & Duguid, P. (2000). The social life of information. Harvard Business School Press.
Cardozo-Gaibisso, L., Kim, S., & Buxton, C. (2019). Thinking beyond the score: Multidimensional analysis of student performance to inform the next generation of science assessments. Journal of Research in Science Teaching, 57(6), 1–23. https://doi.org/10.1002/tea.21611
Center for Research on Evaluation Standards and Student Testing. (2004). CRESST Quality School Portfolio System: Reporting on school goals and student achievement [PowerPoint presentation]. http://www.slideserve.com/jana/download-presentation-source
Chappuis, J., Stiggins, R. J., Chappuis, S., & Arter, J. A. (2012). Classroom assessment for student learning: Doing it right—Using it well (2nd ed.). Pearson.
Check, J., & Schutt, R. K. (2012). Research methods in education. SAGE. https://doi.org/10.4135/9781544307725
Chen, E., Heritage, M., & Lee, J. (2005). Identifying students’ learning needs with technology. Journal of Education for Students Placed at Risk, 10(3), 309–332. https://doi.org/10.1207/s15327671espr1003_6
Chick, H., & Pierce, R. (2013). The statistical literacy needed to interpret school assessment data. Mathematics Teacher Education and Development, 15(2), 5–26.
Choppin, J. (2002, April 1–5). Data use in practice: Examples from the school level [Conference presentation]. American Educational Research Association Annual Meeting, New Orleans, Louisiana, United States. http://archive.wceruw.org/mps/AERA2002/data_use_in_practice.htm
Christman, J. B., Ebby, C. B., & Edmunds, K. A. (2016). Data use practices for improved mathematics teaching and learning: The importance of productive dissonance and recurring feedback cycles. Teachers College Record, 118(11), 1–32.
Coburn, C. E., & Talbert, J. E. (2006). Conceptions of evidence use in school districts: Mapping the terrain. American Journal of Education, 112(4), 469–495. https://doi.org/10.1086/505056
Coburn, C. E., Toure, J., & Yamashita, M. (2009). Evidence, interpretation, and persuasion: Instructional decision making in the district central office. Teachers College Record, 111(4), 1115–1161.
Coburn, C. E., & Turner, E. O. (2011). Research on data use: A framework and analysis. Measurement: Interdisciplinary Research and Perspectives, 9(4), 173–206. https://doi.org/10.1080/15366367.2011.626729
Cohen, L., Manion, L., & Morrison, K. (2011). Research methods in education (7th ed.). Routledge.
Corcoran, S. P. (2010). Can teachers be evaluated by their students’ test scores? Should they be? The use of value-added measures of teacher effectiveness in policy and practice. Annenberg Institute for School Reform at Brown University. https://annenberg.brown.edu/sites/default/files/valueAddedReport.pdf
Cowie, B., & Cooper, B. (2017). Exploring the challenge of developing student teacher data literacy. Assessment in Education: Principles, Policy and Practice, 24(2), 147–163. https://doi.org/10.1080/0969594X.2016.1225668
Cumming, J., Goldstein, H., & Hand, K. (2020). Enhanced use of educational accountability data to monitor educational progress of Australian students with a focus on Indigenous students. Educational Assessment, Evaluation and Accountability, 32, 29–51. https://doi.org/10.1007/s11092-019-09310-x
Darling-Hammond, L., & Adamson, F. (Eds.). (2014). Beyond the bubble test: How performance assessments support 21st century learning. Wiley. https://doi.org/10.1002/9781119210863
Darling-Hammond, L., Amrein-Beardsley, A., Haertel, E., & Rothstein, J. (2012). Evaluating teacher evaluation. Phi Delta Kappan, 93(6), 8–15. https://doi.org/10.1177/003172171209300603
Data Quality Campaign. (2014). Teacher data literacy: It’s about time. https://dataqualitycampaign.org/resource/teacher-data-literacy-time/
Datnow, A., & Hubbard, L. (2015). Teachers’ use of assessment data to inform instruction: Lessons from the past and prospects for the future. Teachers College Record, 117(4), 1–26.
Datnow, A., & Park, V. (2009). School system strategies for supporting data use. In T. Kowalski & T. Lasley (Eds.), Handbook of data-based decision making in education (pp. 191–206). Routledge.
DeLuca, C., & Bellara, A. (2013). The current state of assessment education: Aligning policy, standards, and teacher education curriculum. Journal of Teacher Education, 64(4), 356–372. https://doi.org/10.1177/0022487113488144
DeLuca, C., & Klinger, D. A. (2010). Assessment literacy development: Identifying gaps in teacher education candidates’ learning. Assessment in Education: Principles, Policy & Practice, 17(4), 419–438. https://doi.org/10.1080/0969594X.2010.516643
DeLuca, C., LaPointe-McEwan, D., & Luhanga, U. (2016a). Approaches to classroom assessment inventory: A new instrument to support teacher assessment literacy. Educational Assessment, 21(4), 248–266. https://doi.org/10.1080/10627197.2016.1236677
DeLuca, C., LaPointe-McEwan, D., & Luhanga, U. (2016b). Teacher assessment literacy: A review of international standards and measures. Educational Assessment, Evaluation and Accountability, 28, 251–272. https://doi.org/10.1007/s11092-015-9233-6
Downes, D., & Vindurampulle, O. (2007). Value-added measures for school improvement. Department of Education and Early Childhood Development, Office of Education Policy and Innovation.
Dunlap, K., & Piro, J. S. (2016). Diving into data: Developing the capacity for data literacy in teacher education. Cogent Education, 3(1). https://doi.org/10.1080/2331186X.2015.1132526
Ericsson, K. A., & Simon, H. A. (1980). Verbal reports as data. Psychological Review, 87(3), 215–251. https://doi.org/10.1037/0033-295X.87.3.215
Ericsson, K. A., & Simon, H. A. (1984/1993). Protocol analysis: Verbal reports as data. MIT Press. https://doi.org/10.7551/mitpress/5657.001.0001
Feuer, M. J., Holland, P. W., Green, B. F., Bertenthal, M. W., & Hemphill, F. C. (Eds.). (1999). Uncommon measures: Equivalence and linking among educational tests. National Academies Press.
Fonseca, M. J., Costa, P. P., Lencastre, L., & Tavares, F. (2012). Multidimensional analysis of high-school students’ perceptions and biotechnology. Journal of Biological Education, 46(3), 129–139. https://doi.org/10.1080/00219266.2011.634019
Frederiksen, N., Glaser, R., Lesgold, A., & Shafto, M. G. (Eds.). (1990). Diagnostic monitoring of skill and knowledge acquisition. Lawrence Erlbaum Associates.
Gardner, J., Harlen, W., Hayward, L., & Stobart, G. (2008). Changing assessment practice: Process, principles and standards. Assessment Reform Group. http://www.aria.qub.ac.uk/JG%20Changing%20Assment%20Practice%20Final%20Final.pdf
Goldstein, H. (2001). Using pupil performance data for judging schools and teachers: Scope and limitations. British Educational Research Journal, 27(4), 433–442. https://doi.org/10.1080/01411920120071443
Goldstein, H. (2011). Multilevel statistical models (4th ed.). Wiley. https://doi.org/10.1002/9780470973394
Gotch, C. M., & French, B. F. (2014). A systematic review of assessment literacy measures. Educational Measurement: Issues and Practice, 33(2), 14–18. https://doi.org/10.1111/emip.12030
Graue, E., Delaney, K., & Karch, A. (2013). Ecologies of education quality. Education Policy Analysis Archives, 21(8). https://doi.org/10.14507/epaa.v21n8.2013
Great Schools Partnership. (2015). The glossary of education reform: Aggregate data. http://edglossary.org/aggregate-data/
Greenberg, J., McKee, A., & Walsh, K. (2013). Teacher prep review: A review of the nation’s teacher preparation programs. National Council on Teacher Quality. https://doi.org/10.2139/ssrn.2353894
Greenberg, J., & Walsh, K. (2012). What teacher preparation programs teach about K–12 assessment: A review. National Council on Teacher Quality. https://www.nctq.org/publications/What-Teacher-Preparation-Programs-Teach-about-K%2D%2D12-Assessment:-A-review
Griffard, P. B., & Wandersee, J. H. (2001). The two-tier instrument on photosynthesis: What does it diagnose? International Journal of Science Education, 23(10), 1039–1052. https://doi.org/10.1080/09500690110038549
Gummer, E. S., & Mandinach, E. B. (2015). Building a conceptual framework for data literacy. Teachers College Record, 117(4), 1–22.
Hamilton, L. S., Nussbaum, E. M., & Snow, R. E. (1997). Interview procedures for validating science assessments. Applied Measurement in Education, 10(2), 181–200. https://doi.org/10.1207/s15324818ame1002_5
Hanushek, E. A., Rivkin, S. G., & Taylor, L. L. (1995). Aggregation bias and the estimated effects of school resources (working paper 397). University of Rochester, Center for Economic Research. https://doi.org/10.3386/w5548
Harris, D. N. (2009). The policy issues and policy validity of value-added and other teacher quality measures. In D. H. Gitomer (Ed.), Measurement issues and assessment for teaching quality (pp. 99–130). SAGE. https://doi.org/10.4135/9781483329857.n7
Harris, D. (2011). Value-added measures in education: What every educator needs to know. Harvard Education Press.
Heritage, M., Lee, J., Chen, E., & LaTorre, D. (2005). Upgrading America’s use of information to improve student performance (CSE report 661). National Center for Research on Evaluation, Standards, and Student Testing.
Herman, J., & Gribbons, B. (2001). Lessons learned in using data to support school inquiry and continuous improvement: Final report to the Stuart Foundation (CSE technical report 535). Center for the Study of Evaluation and National Center for Research on Evaluation, Standards, and Student Testing.
Hill, H. C., Kapitula, L. R., & Umland, K. L. (2011). A validity argument approach to evaluating value-added scores. American Educational Research Journal, 48, 794–831. https://doi.org/10.3102/0002831210387916
Hill, M., Smith, L. F., Cowie, B., & Gunn, A. (2013). Preparing initial primary and early childhood teacher education students to use assessment. Teaching and Learning Research Initiative.
Holcomb, E. L. (1999). Getting excited about data: How to combine people, passion, and proof. Corwin Press.
Holloway-Libell, J., & Amrein-Beardsley, A. (2015). ‘Truths’ devoid of empirical proof: Underlying assumptions surrounding value-added models in teacher evaluation. Teachers College Record, ID Number: 18008.
Honig, M., & Coburn, C. E. (2005). When districts use evidence for instructional improvement: What do we know and where do we go from here? Urban Voices in Education, 6, 22–26.
Honig, M. I., & Venkateswaran, N. (2012). School–central office relationships in evidence use: Understanding evidence use as a systems problem. American Journal of Education, 118(2), 199–222. https://doi.org/10.1086/663282
Huff, D. (1954). How to lie with statistics. Norton.
Hughes, G. (2014). Ipsative assessment: Motivation through marking progress. Palgrave Macmillan. https://doi.org/10.1057/9781137267221
Jimerson, J. B., & Wayman, J. C. (2015). Professional learning for using data: Examining teacher needs and supports. Teachers College Record, 117(4), 1–36.
Johnson, R. S. (1996). Setting our sights: Measuring equity in school change. The Achievement Council.
Joint Committee on Standards for Educational Evaluation. (2015). Classroom assessment standards: Practices for PreK-12 teachers. http://www.jcsee.org/the-classroom-assessment-standards-new-standards
Kapler Hewitt, K., & Amrein-Beardsley, A. (Eds.). (2016). Student growth measures in policy and practice: Intended and unintended consequences of high-stakes teacher evaluations. Springer. https://doi.org/10.1057/978-1-137-53901-4
Kersting, N., Chen, M.-K., & Stigler, J. (2013). Value-added teacher estimates as part of teacher evaluations: Exploring the effects of data and model specifications on the stability of teacher value-added scores. Education Policy Analysis Archives, 21(7). https://doi.org/10.14507/epaa.v21n7.2013
Kippers, W. B., Poortman, C. L., Schildkamp, K., & Visscher, A. J. (2018). Data literacy: What do educators learn and struggle with during a data use intervention? Studies in Educational Evaluation, 56, 21–31. https://doi.org/10.1016/j.stueduc.2017.11.001
Knapp, M. S., Swinnerton, J. A., Copland, M. A., & Monpas-Huber, J. (2006). Data-informed leadership in education. University of Washington, Center for the Study of Teaching and Policy.
Lachat, M. A. (2001). Data-driven high-school reform: The Breaking Ranks model. The Northeast and Islands Regional Education Laboratory.
Lachat, M. A., & Williams, M. (1996). Learner-based accountability: Using data to support continuous school improvement. Center for Resource Management.
Lachat, M. A., Williams, M., & Smith, S. C. (2006). Making sense of all your data. Principal Leadership, 7(2), 16–21.
Leighton, J. P., & Gierl, M. J. (2007). Verbal reports as data for cognitive diagnostic assessment. In J. P. Leighton & M. J. Gierl (Eds.), Cognitive diagnostic assessment for education: Theory and applications (pp. 146–172). Cambridge University Press. https://doi.org/10.1017/CBO9780511611186
Lem, S., Onghena, P., Verschaffel, L., & Van Dooren, W. (2013). On the misinterpretation of histograms and box plots. Educational Psychology, 33(2), 155–174. https://doi.org/10.1080/01443410.2012.674006
Levine, D., & Lezotte, L. (1990). Unusually effective schools: A review and analysis of research and practice. Wisconsin Center for Education Research, National Center for Effective Schools Research and Development.
Lin, S.-W. (2004). Development and application of a two-tier diagnostic test for high school students’ understanding of flowering plant growth and development. International Journal of Science and Mathematics Education, 2, 175–199. https://doi.org/10.1007/s10763-004-6484-y
Linn, R. L. (2016). Test-based accountability. The Gordon Commission on the Future of Assessment in Education. https://www.ets.org/Media/Research/pdf/linn_test_based_accountability.pdf
Looney, A., Cumming, J., van der Kleij, F., & Harris, K. (2017). Reconceptualising the role of teachers as assessors: Teacher assessment identity. Assessment in Education: Principles, Policy & Practice, 25(5), 442–467. https://doi.org/10.1080/0969594X.2016.1268090
Love, N. (2000). Using data, getting results: Collaborative inquiry for school-based mathematics and science reform. Regional Alliance at TERC.
Magone, M. E., Cai, J., Silver, E. A., & Wang, N. (1994). Validating the cognitive complexity and content quality of a mathematics performance assessment. International Journal of Educational Research, 21(3), 317–340.
Mandinach, E. B., Friedman, J., & Gummer, E. S. (2014). How can schools of education help to build educators’ capacity to use data? A systematic view of the issue. Teachers College Record, 117(4), 1–50.
Mandinach, E. B., & Gummer, E. S. (2012). Navigating the landscape of data literacy: It IS complex. WestEd. https://www.wested.org/online_pubs/resource1304.pdf
Mandinach, E. B., & Gummer, E. S. (2013). A systematic view of implementing data literacy in educator preparation. Educational Researcher, 42(1), 30–37. https://doi.org/10.3102/0013189X12459803
Mandinach, E. B., & Gummer, E. S. (2015). Building a conceptual framework for data literacy. Teachers College Record, 117(4), 1–22.
Mandinach, E. B., & Gummer, E. S. (2016). Every teacher should succeed with data literacy. Phi Delta Kappan, 97(8), 43–46. https://doi.org/10.1177/0031721716647018
Mandinach, E. B., & Honey, M. (2008). Data-driven decision making: An introduction. In E. B. Mandinach & M. Honey (Eds.), Data-driven school improvement: Linking data and learning (pp. 1–9). Teachers College Press.
Mason, S. (2002, April 1–5). Turning data into knowledge: Lessons from six Milwaukee Public Schools [Conference presentation]. American Educational Research Association Annual Meeting, New Orleans, Louisiana, United States. http://archive.wceruw.org/mps/AERA2002/Mason%20AERA%202002%20QSP%20Symposium%20Paper.pdf
McCaffrey, D. F., Lockwood, J. R., Koretz, D. M., & Hamilton, L. (2003). Evaluating value-added models for teacher accountability. Rand. https://doi.org/10.1037/e658712010-001
McCaffrey, D. F., Lockwood, J. R., Koretz, D., Louis, T. A., & Hamilton, L. (2004). Let’s see more empirical studies on value-added modeling of teacher effects: A reply to Raudenbush, Rubin, Stuart and Zanutto, and Reckase. Journal of Educational and Behavioral Statistics, 29(1), 139–143. https://doi.org/10.3102/10769986029001139
Means, B., Chen, E., DeBarger, A., & Padilla, C. (2011). Teachers’ ability to use data to inform instructional challenges and supports. U.S. Department of Education, Office of Planning, Evaluation and Policy Development.
Means, B., Padilla, C., DeBarger, A., & Bakia, M. (2009). Implementing data-informed decision making in schools: Teacher access, supports and use. U.S. Department of Education.
Mertler, C. A. (2018). Norm-referenced interpretation. In B. B. Frey (Ed.), The SAGE encyclopedia of educational research, measurement, and evaluation (pp. 1161–1163). SAGE. https://doi.org/10.4135/9781506326139
Messick, S. (1989). Validity. In R. L. Linn (Ed.), Educational measurement (3rd ed., pp. 13–103). Macmillan.
Messick, S. (1996). Validity of performance assessment. In G. Philips (Ed.), Technical issues in large-scale performance assessment (pp. 1–18). National Center for Educational Statistics.
Michael and Susan Dell Foundation. (2016a). Boston teacher residency. https://www.msdf.org/wp-content/uploads/2019/06/MSDF_teacherprep_BTR-1.pdf
Michael and Susan Dell Foundation. (2016b). Data-literate teachers: Insights from pioneer programs. https://www.msdf.org/wp-content/uploads/2019/06/MSDF_teacherprep.pdf
Michaelides, M. P., & Haertel, E. H. (2004). Sampling of common items: An unrecognized source of error in test equating. Center for Research on Evaluation Standards and Student Testing.
Michaelides, M. P., & Haertel, E. H. (2014). Sampling of common items as an unrecognized source of error in test equating: A bootstrap approximation assuming random sampling of common items. Applied Measurement in Education, 27(1), 46–57. https://doi.org/10.1080/08957347.2013.853069
Michigan Assessment Consortium. (2020). Assessment literacy standards. https://www.michiganassessmentconsortium.org/assessment-literacy-standards/
Moss, P. A. (2012). Exploring the macro-micro dynamic in data use practice. American Journal of Education, 118(2), 223–232. https://doi.org/10.1086/663274
Muijs, D. (2006). Measuring teacher effectiveness: Some methodological reflection. Educational Research and Evaluation, 12(1), 53–74. https://doi.org/10.1080/13803610500392236
Newton, X., Darling-Hammond, L., Haertel, E., & Thomas, E. (2010). Value-added modeling of teacher effectiveness: An exploration of stability across models and contexts. Education Policy Analysis Archives, 18(23). https://doi.org/10.14507/epaa.v18n23.2010
Nickodem, K., & Rodriguez, M. C. (2018). Criterion-referenced interpretation. In B. B. Frey (Ed.), The SAGE encyclopedia of educational research, measurement, and evaluation (pp. 426–428). SAGE. https://doi.org/10.4135/9781506326139
O’Day, J. (2002). Complexity, accountability, and school improvement. Harvard Educational Review, 72(3), 293–329. https://doi.org/10.17763/haer.72.3.021q742t8182h238
Organisation for Economic Co-operation and Development. (2008). Measuring improvement in learning outcomes: Best practices to assess the value-added of schools. https://doi.org/10.1787/9789264050259-en
Papay, J. P. (2011). Different tests, different answers. American Educational Research Journal, 48(1), 163–193. https://doi.org/10.3102/0002831210362589
Pellegrino, J. W., Chudowsky, N., & Glaser, R. (Eds.). (2001). Knowing what students know: The science and design of educational assessment. National Academies Press.
Phillips, D. C. (2007). Adding complexity: Philosophical perspectives on the relationship between evidence and policy. Yearbook of the National Society for the Study of Education, 106(1), 376–402. https://doi.org/10.1111/j.1744-7984.2007.00110.x
Pierce, R., & Chick, H. (2013). Workplace statistical literacy for teachers: Interpreting box plots. Mathematics Education Research Journal, 25(1), 189–205. https://doi.org/10.1007/s13394-012-0046-3
Pierce, R., Chick, H., & Gordon, I. (2013). Teachers’ perceptions of factors influencing their engagement with statistical reports on student data. Australian Journal of Education, 57(3), 237–255. https://doi.org/10.1177/0004944113496176
Pierce, R., Chick, H., Watson, J., Magdalena, L., & Dalton, M. (2014). A statistical literacy hierarchy for interpreting educational system data. Australian Journal of Education, 58(2), 195–217. https://doi.org/10.1177/0004944114530067
Piro, J. S., & Hutchinson, C. J. (2014). Using data chat to teach instructional interventions: Student perceptions of data literacy in an assessment course. The New Educator, 10(2), 95–111. https://doi.org/10.1080/1547688X.2014.898479
Popham, W. J. (2009). Assessment literacy for teachers: Faddish or fundamental? Theory Into Practice, 48(1), 4–11. https://doi.org/10.1080/00405840802577536
Popham, W. J. (2014). Classroom assessment: What teachers need to know (7th ed.). Pearson.
Raudenbush, S. W. (2004). What are value-added models estimating and what does this imply for statistical practice? Journal of Educational and Behavioral Statistics, 29(1), 121–129. https://doi.org/10.3102/10769986029001121
Reckase, M. D. (2004). The real world is more complicated than we would like. Journal of Educational and Behavioral Statistics, 29(1), 117–120. https://doi.org/10.3102/10769986029001117
Reeves, T. D., & Honig, S. L. (2015). A classroom data literacy intervention for pre-service teachers. Teaching and Teacher Education, 50, 90–101. https://doi.org/10.1016/j.tate.2015.05.007
Rubin, D. B., Stuart, E. A., & Zanutto, E. L. (2004). A potential outcomes view of value-added assessment in education. Journal of Educational and Behavioral Statistics, 29(1), 103–116. https://doi.org/10.3102/10769986029001103
Schmidt, W. H., Houang, R. T., & McKnight, C. C. (2005). Value-added research: Right idea but wrong solution? In R. Lissitz (Ed.), Value-added models in education: Theory and applications (pp. 145–165). JAM Press.
Schochet, P. Z., & Chiang, H. S. (2010). Error rates in measuring teacher and school performance based on student test score gains. U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance.
Shepard, L. A. (2013). Validity for what purpose? Teachers College Record, 115(9), 1–15.
Sloane, F. C., Oloff-Lewis, J., & Kim, S. H. (2013). Value-added models of teacher and school effectiveness in Ireland: Wise or otherwise? Irish Educational Studies, 32(1), 37–67. https://doi.org/10.1080/03323315.2013.773233
Spillane, J. P., & Miele, D. B. (2007). Evidence in practice: A framing of the terrain. Yearbook of the National Society for the Study of Education, 106(1), 46–73. https://doi.org/10.1111/j.1744-7984.2007.00097.x
Tan, D. K.-C., Treagust, D. F., Goh, N.-K., & Chia, L.-S. (2002). Development and application of a two-tier multiple-choice diagnostic instrument to assess high school students’ understanding of inorganic qualitative analysis. Journal of Research in Science Teaching, 39(4), 283–301. https://doi.org/10.1002/tea.10023
Treagust, D. F. (1995). Diagnostic assessment of students’ science concepts. In S. Glynn & R. Duit (Eds.), Learning science in the schools: Research reforming practice (pp. 327–346). Lawrence Erlbaum Associates.
Treagust, D. F. (2006). Diagnostic assessment in science as a means to improving teaching, learning and retention. In Uniserve Science Assessment Symposium Proceedings. https://core.ac.uk/download/pdf/229410386.pdf
Tufte, E. R. (1983). The visual display of quantitative information. Graphics Press.
Tufte, E. R. (1990). Envisioning information. Graphics Press.
Tufte, E. R. (1997). Visual explanations. Graphics Press.
Tversky, B. (1997). Cognitive principles of graphic displays. AAAI Technical Report FS-97-03, pp. 116–124. https://www.aaai.org/Papers/Symposia/Fall/1997/FS-97-03/FS97-03-015.pdf
United Kingdom Department for Education. (2016). Eliminating unnecessary workload associated with data management: Report of the Independent Teacher Workload Review Group. Government Publications.
van Barneveld, C. (2008). Using data to improve student achievement (What Works: Research into Practice, Research Monograph #15). http://www.edu.gov.on.ca/eng/literacynumeracy/inspire/research/Using_Data.pdf
Volante, L., & Fazio, X. (2007). Exploring teacher candidates’ assessment literacy: Implications for teacher education reform and professional development. Canadian Journal of Education, 30(3), 749–770. https://doi.org/10.2307/20466661
Wayman, J. C., & Jimerson, J. B. (2014). Teacher needs for data-related professional learning. Studies in Educational Evaluation, 42, 25–34. https://doi.org/10.1016/j.stueduc.2013.11.001
Wayman, J. C., Stringfield, S., & Yakimowski, M. (2004). Software enabling school improvement through analysis of student data (report no. 67). CRESPAR/Johns Hopkins University.
Wiggins, G., & McTighe, J. (1998). Understanding by design. Association for Supervision and Curriculum Development.
Willis, J., Adie, L., & Klenowski, V. (2013). Conceptualising teachers’ assessment literacies in an era of curriculum and assessment reform. Australian Educational Researcher, 40, 241–256. https://doi.org/10.1007/s13384-013-0089-9
Yen, W. M. (2007). Vertical scaling and No Child Left Behind. In N. J. Dorans, M. Pommerich, & P. W. Holland (Eds.), Linking and aligning scores and scales (pp. 273–283). Springer. https://doi.org/10.1007/978-0-387-49771-6_15