Different Approaches to Data Use

Chapter in: Using Data to Improve Student Learning

Part of the book series: The Enabling Power of Assessment (EPAS, volume 9)

Abstract

In this chapter, seven different approaches to improving student learning through the use of data are identified: Data Driven Decision Making (DDDM); Educational Accountability; School Improvement; School Effectiveness; Program Evaluation; Teacher Effectiveness; and Formative Assessment (or Assessment for Learning). Their characteristics, assumptions, effectiveness and consequences are discussed.


Notes

  1.

    No Child Left Behind Act of 2001, P.L. 107-110, 20 U.S.C. § 6319 (2002). See Jorgensen and Hoffman (2003) for an explanation of its history.

  2.

    Strunk and McEachin (2014) found greater gains with mathematics achievement than with English language achievement, and point to this as a common finding worth further research.

  3.

    The articles in this journal issue are: Ehren and Swanborn (2012); Lee et al. (2012); Levin and Datnow (2012); McNaughton, Lai, and Hsiao (2012); Wayman, Jimerson, and Cho (2012).

  4.

    The list of factors is: 1. achievement orientation; 2. educational leadership; 3. staff consensus and cohesion; 4. curriculum quality; 5. school climate; 6. evaluative potential; 7. parental involvement; 8. classroom climate; 9. effective learning time; 10. structured instruction; 11. independent learning; 12. adaptive instruction; 13. feedback/reinforcement. (Scheerens & Bosker, 1997, p. 100)

  5.

    The difference between top and bottom teachers on this measure is about 0.6 of a standard deviation. This is difficult to compare with the earlier ‘years of growth’ measure but in both cases the difference is substantial.

  6.

    Chetty, Friedman, and Rockoff (2011–2012) claimed to show that the effects of high-quality teaching are both substantial and long-term, and may even extend to future earnings, but Adler (2013) has shown that these conclusions are methodologically unsound and, in fact, ‘contradicted by the findings of the study itself’ (p. 7).

  7.

    Wiliam (2010a) provides a short, well-presented overview of the importance and benefits of focusing on improving teacher quality in situ, and of the limited payoffs from other approaches.

  8.

    An overview of these issues can be found in Education Policy Analysis Archives, Volume 21, 2013.

  9.

    Rowan et al. (2002) conclude that the combined effects of many small events produce learning outcomes, that few classrooms would offer optimum learning conditions, and that ‘the majority of classrooms probably present students with a mix of more and [emphasis in original] less instructionally effective practices simultaneously’ (p. 22).

  10.

    The results are not easily interpreted and classroom composition also played a role.

  11.

    ‘Predicted’ here is a technical term related to the regression equation linking the composite measure and achievement gain. This is not a forward prediction in time since all the measures already exist. Whether such a ‘prediction’ would be sustained in other circumstances is a question of the external validity (or generalisability) of these findings.
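The sense of ‘prediction’ described here can be illustrated with a minimal sketch (all numbers hypothetical, not drawn from the study): a least-squares line is fitted linking a composite measure to achievement gain, and the ‘predicted’ values are simply the fitted values for cases already in the data.

```python
# Minimal sketch of regression 'prediction' (hypothetical data).
# The 'predicted' gains are in-sample fitted values, not forecasts in time.
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = intercept + slope * x."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

composite = [1.0, 2.0, 3.0, 4.0]   # hypothetical composite measure
gain = [0.2, 0.5, 0.55, 0.9]       # hypothetical achievement gain
slope, intercept = fit_line(composite, gain)
# 'Predicted' gain for each existing case, read off the fitted line:
predicted = [intercept + slope * x for x in composite]
```

Whether the same fitted relationship would hold for new schools or new cohorts is exactly the external-validity question the note raises.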

  12.

    These were: ‘classroom management practices; knowledge and understanding of teachers’ main field; knowledge and understanding of instructional practices in their main subject field; a development or training plan to improve their teaching; teaching students with special needs; handling of student discipline and behaviour problems; teaching students in a multicultural setting; and the emphasis on improving student test scores’ (OECD, 2009, p. 159).

  13.

    A similar definition is provided by Moss and Brookhart (2019, p. 6): ‘Formative assessment is an active and intentional learning process that partners the teacher and the students to continuously and systematically gather evidence of learning with the express goal of improving student achievement.’

  14.

    There are some potential benefits from this unpredictability. ‘[The] various sources of indeterminacy introduce scope for creative invention and change. Learners are not simply passive recipients of information but are able to negotiate shared meaning, thus allowing the process of learning to be one of active transformation and appropriation of knowledge.’ (Ireson, 2008, p. 143)

  15.

    See Cizek (2009) for an extended discussion.

  16.

    In the US, many commercial testing companies market benchmark or interim assessments as Formative Assessment Item Banks.

  17.

    Perie et al. (2009) themselves make a distinction between interim and formative assessments in terms of their scope and duration, but see both as types of assessment.

  18.

    Shepard (2019) defines formative assessment as a form of assessment whose focus and characteristics differ from those of summative assessment, but also incorporates such assessments within a theory of classroom action.

  19.

    Wiliam (2017) makes the point that ‘[k]eeping a clear focus on assessment for learning (or formative assessment) as assessment allows much clearer analysis of the relationship between the evidence and the inferences and actions that the evidence supports’ (p. 684).

  20.

    Penuel and Shepard (2016) point out that so-called ‘interim assessments’ in the USA can be formative or summative depending how they are used, but also warn that such assessments ‘may lack the close connection to instruction of more embedded and immediate types of formative assessment’ (p. 787), especially those assessments that are intrinsically located in the process of teaching.

  21.

    Penuel and Shepard (2016) and Shepard (2019) construct ‘clusters of interventions’ for developing formative assessment practices, based on different theories of learning and theories of action. Their categorisation identifies so-called interim assessments as a form of DBDM, though (as discussed earlier in this chapter) this is a management concept rather than a typical instance of formative assessment. The other clusters are ‘strategy-focused’, ‘sociocognitive’ and ‘sociocultural’, which is similar to the categories adopted by Stobart and Hopfenbeck (2014).

  22.

    Three decades ago, Harry Black (1986) suggested that formative assessment is not an innovation, but has always been seen as part of appropriate teaching, except that it is generally done inadequately and unsystematically.

  23.

    van der Kleij et al. (2015) provide an analysis of the differences between DBDM, Assessment for Learning and Diagnostic Testing in terms of their theoretical underpinnings and practical implementation. They make the useful point that whereas typically DBDM can involve decisions about student, class and school, and Assessment for Learning about student and class, Diagnostic Testing focuses just on the individual student.

  24.

    The relationship between these two feedback loops within assessment for learning models appears to need further exploration (Wiliam, 2017).

  25.

    Cohen’s d is a commonly used measure of effect size (Cohen, 1988). It represents a difference between means in units of standard deviation. Typical rules of thumb are: small effect, d = .2; medium effect, d = .5; large effect, d = .8; but these thresholds are a judgment call.
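The calculation behind Cohen’s d can be sketched as follows (illustrative data; the pooled standard deviation shown is one common variant, weighting each group’s variance by its degrees of freedom):

```python
import math

def cohens_d(group_a, group_b):
    """Difference between two group means in units of pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    mean_a = sum(group_a) / na
    mean_b = sum(group_b) / nb
    # Sample variances (n - 1 denominator).
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (nb - 1)
    # Pooled SD weights each variance by its degrees of freedom.
    pooled_sd = math.sqrt(((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2))
    return (mean_a - mean_b) / pooled_sd

print(cohens_d([2, 4, 6], [1, 3, 5]))  # → 0.5, a 'medium' effect by the rule of thumb
```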

  26.

    Popularisation of this concept is attributed to Havighurst (1952).


Copyright information

© 2021 Springer Nature Switzerland AG

About this chapter


Cite this chapter

Maxwell, G.S. (2021). Different Approaches to Data Use. In: Using Data to Improve Student Learning. The Enabling Power of Assessment, vol 9. Springer, Cham. https://doi.org/10.1007/978-3-030-63539-8_2


  • DOI: https://doi.org/10.1007/978-3-030-63539-8_2


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-63537-4

  • Online ISBN: 978-3-030-63539-8

  • eBook Packages: Education, Education (R0)
