Interpreting Data: Creating Meaning

Using Data to Improve Student Learning

Part of the book series: The Enabling Power of Assessment ((EPAS,volume 9))

Abstract

Data interpretation is seen as a process of meaning making. This requires attention to the purpose in analysing the data, the kinds of questions asked and by whom, and the kind of data that are needed or available. The relationship between questions and data can be interactive. Data can be aggregated, disaggregated, transformed and displayed in order to reveal patterns, relationships and trends. Different ways of comparing data can be identified—against peers, against standards, against self—and of delving more deeply—through protocol analysis, reason analysis, error analysis, and change analysis. Techniques for analysing group change and growth present various technical challenges and cautions. In particular, value-added measures have been shown to have serious flaws if used for teacher and school evaluation. Data literacy is being given increasing attention as a requirement for successful data interpretation and use, along with associated literacies in educational assessment, measurement, statistics and research. These literacies lack clear definition and elaboration. They also present many challenges for professional development and warrant further research.
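To make the idea of aggregation and disaggregation concrete, the following is a minimal sketch in Python (pandas). All column names and values are invented for illustration only; they are not data from the chapter.

```python
import pandas as pd

# Hypothetical assessment records (all names and values invented for illustration)
records = pd.DataFrame({
    "cohort": ["2023", "2023", "2023", "2024", "2024", "2024"],
    "subgroup": ["X", "Y", "X", "Y", "X", "Y"],
    "score": [62, 48, 71, 55, 67, 44],
})

# Aggregated view: a single overall mean can conceal differences between groups
print(records["score"].mean())

# Disaggregated view: the same data broken down by cohort and subgroup,
# which can reveal patterns and trends the aggregate hides
print(records.groupby(["cohort", "subgroup"])["score"].agg(["mean", "count"]))
```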


Notes

1. Coburn and Turner (2011) noted that ‘a handful of studies do link interventions to context, data use process, and outcomes, providing insight into at least a few possible pathways from intervention to student learning’ (p. 195).

2. There is a fifth question about implications of the data: how to make things better. Because this question extends beyond the meaning of the data and requires additional considerations, such as curriculum and instruction opportunities, this matter is discussed in Chap. 9.

3. The characteristics of QSP, along with those of other programs of the time, are summarised in Wayman et al. (2004).

4. The six challenges are quoted from Mason (2002, p. 6); the comments in parentheses paraphrase the discussion on each of these challenges.

5. Lachat (2001) provides several vignettes on the ways in which data disaggregation assisted schools in revealing false assumptions about what affected low achievement, the effectiveness of special programs, equity for specific groups of students, and consistency of expectations across areas of learning.

6. Protocol analysis and reason analysis were discussed in Chap. 6 as the third and fourth ways of validating reasoning processes in performance assessments.

7. In the case of multiple-choice tests, this could be accomplished by two-tier items that require a defence of the chosen answer (Griffard & Wandersee, 2001; Lin, 2004; Tan, Treagust, Goh, & Chia, 2002; Treagust, 1995, 2006; Wiggins & McTighe, 1998).

8. In some circumstances, sophisticated statistical methods might be applicable for inserting best estimates of the missing data.
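One common family of such methods is imputation. The following is a minimal sketch of a simple group-mean imputation, assuming the pandas library; the grouping variable and score values are hypothetical and not drawn from the chapter.

```python
import pandas as pd

# Hypothetical results with some scores missing (values invented for illustration)
scores = pd.DataFrame({
    "class_group": ["P", "P", "P", "Q", "Q", "Q"],
    "score": [55.0, None, 73.0, 60.0, 68.0, None],
})

# A simple imputation strategy: replace each missing score with the mean
# score of that student's own class group, rather than the overall mean
scores["score_imputed"] = scores.groupby("class_group")["score"].transform(
    lambda s: s.fillna(s.mean())
)
print(scores)
```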

9. Braun (2005, p. 493) warns that ‘the strength of the correspondence between the evidence from one test and that from another, superficially similar, test is determined by the different aspects of knowledge and skills that the two tests tap, by the amount and quality of the information they provide, and by how well they each match the students’ instructional experiences.’

10. The choice of common items for linking tests can be problematic, as discussed by Michaelides and Haertel (2004, 2014). Feuer, Holland, Green, Bertenthal, & Cadell Hemphill (1999) caution that there are serious technical problems to be addressed in linking and equating.

11. Amrein-Beardsley et al. (2013) introduce a Special Issue of Education Policy Analysis Archives (Volume 21, Number 4) entitled Value-added: What America’s policymakers need to know and understand.

12. ‘Because true teacher effects might be correlated with the characteristics of the students they teach, current VAM approaches cannot separate any existing contextual effects from these true teacher effects. Existing research is not sufficient for determining the generalizability of this finding or the severity of the actual problems associated with omitted background variables. … [O]ur analysis and simulations demonstrate that VAM based rankings of teachers are highly unstable, and that only large differences in estimated impact are likely to be detectable given the effects of sampling error and other sources of uncertainty. Interpretations of differences among teachers based on VAM estimates should be made with extreme caution’ (McCaffrey et al., 2003, p. 113).
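Much of the instability described here stems from sampling error in small classes. The toy simulation below (not the McCaffrey et al. model; all parameter values are assumptions chosen for illustration) shows how rankings based on single-class means can change markedly from one year to the next even when the true teacher effects are held fixed.

```python
import numpy as np

rng = np.random.default_rng(0)

n_teachers = 50   # assumed number of teachers
class_size = 25   # assumed students per class per year
effect_sd = 0.10  # assumed spread of true teacher effects (score SD units)
noise_sd = 1.0    # assumed student-level score variation

# Fixed 'true' teacher effects, identical in both simulated years
true_effects = rng.normal(0.0, effect_sd, n_teachers)

def rank_teachers_one_year():
    """Rank teachers by the mean score of one randomly drawn class each."""
    class_means = np.array(
        [rng.normal(effect, noise_sd, class_size).mean() for effect in true_effects]
    )
    return class_means.argsort().argsort()  # rank position of each teacher

# Two independent 'years': same true effects, different samples of students
ranks_year1 = rank_teachers_one_year()
ranks_year2 = rank_teachers_one_year()

# Pearson correlation of ranks equals the Spearman rank correlation; values
# well below 1 show how unstable rankings are under sampling error alone
print(round(np.corrcoef(ranks_year1, ranks_year2)[0, 1], 2))
```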

13. Huff (1954) provided the ultimate guide (‘How to lie with statistics’), but he was actually directing his comments at the consumer of statistics, thereby warning against misinterpretation.

14. The extended version is: ‘Data-literate educators continuously, effectively, and ethically access, act on, and communicate multiple types of data from state, local, classroom, and other sources to improve outcomes for students in a manner appropriate to educators’ professional roles and responsibilities’ (DQC, 2014, p. 6).

15. Cowie and Cooper (2017): ‘Assessment literacy, broadly defined, encompasses how to construct, administer and score reliable student assessments and communicate valid interpretations about student learning, as well as the capacity to integrate assessment into teaching and learning for formative purposes’ (p. 148).

16. Honig and Venkateswaran (2012) draw attention to the differences between school and central office use of data, and the interrelationships between the two.

17. A quite different way of characterising assessment literacy has been expressed in the research literature, one that is located in sociocultural theory. This is less concerned with ‘skills, knowledges and cognitions’ than with social, ethical and collaborative practice. Willis, Adie, & Klenowski (2013) define ‘teacher assessment literacies as dynamic social practices which are context dependent and which involve teachers in articulating and negotiating classroom and cultural knowledges [sic] with one another and with learners, in initiation, development and practice of assessment to achieve the learning goals of students’ (p. 241). Their focus is the intersection and interconnection of assessment practice and pedagogical practice, characterised as ‘horizontal discourses,’ which offer no guidance on data literacy, seen as a component of ‘vertical discourses’. Teacher collaboration and communities of practice are reviewed in Chap. 10.

18. The fifth skill, instructional decision making, is a step beyond data interpretation per se, and is taken up in Chap. 9.

19. Kippers, Poortman, Schildkamp, & Visscher (2018) also based their approach to data literacy development on the inquiry cycle. They identify five decision steps: set a purpose; collect data; analyse data; interpret data; and take instructional action. Other formulations of the decision cycle are explored in Chap. 9.

20. ‘Identify problems’ and ‘frame questions’ are potentially relevant, but are not elaborated in the Gummer and Mandinach (2015) model.

21. These are a reinterpretation (reframed and reorganised) of Gummer and Mandinach (2015), where the elements are presented in the form of a mind map.

22. This list is a paraphrase of Brookhart (2011), Table 1, p. 7.

23. Looney, Cumming, van der Kleij, & Harris (2017) propose an extension of the concept of assessment literacy to encompass ‘assessment identity’ with ‘not only a range of assessment strategies and skills, and even confidence and self-efficacy in undertaking assessment, but also the beliefs and feelings about assessment’ (p. 15). They also examine assessment literacy instruments for their theoretical justification and validity (Appendix 2).

24. DeLuca et al. (2016a) also developed a Classroom Assessment Inventory incorporating these dimensions.



Copyright information

© 2021 Springer Nature Switzerland AG

About this chapter

Cite this chapter

Maxwell, G.S. (2021). Interpreting Data: Creating Meaning. In: Using Data to Improve Student Learning. The Enabling Power of Assessment, vol 9. Springer, Cham. https://doi.org/10.1007/978-3-030-63539-8_8

  • DOI: https://doi.org/10.1007/978-3-030-63539-8_8

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-63537-4

  • Online ISBN: 978-3-030-63539-8

  • eBook Packages: Education (R0)
