Common Methodological Problems in Randomized Controlled Trials of Preventive Interventions


Abstract

Randomized controlled trials (RCTs) are often considered the gold standard for evaluating whether intervention results support causal claims of beneficial effects. However, because poor design and incorrect analysis can bias results, simply employing an RCT is not enough to say an intervention “works.” This paper applies a subset of the Society for Prevention Research (SPR) Standards of Evidence for Efficacy, Effectiveness, and Scale-up Research, with a focus on internal validity (making causal inferences), to determine the degree to which RCTs of preventive interventions are well designed and analyzed, and whether authors clearly describe the methods used and report their study findings. We conducted a descriptive analysis of 851 RCTs published from 2010 to 2020 and reviewed by the Blueprints for Healthy Youth Development web-based registry of scientifically proven and scalable interventions. We used Blueprints’ evaluation criteria, which correspond to a subset of SPR’s standards of evidence. Only 22% of the sample satisfied important criteria for minimizing biases that threaten internal validity. Overall, we identified an average of 1–2 methodological weaknesses per RCT. The most frequent sources of bias were problems related to baseline non-equivalence (i.e., differences between conditions at randomization) or differential attrition (i.e., differences between completers and attritors, or between study conditions, that may compromise the randomization). Additionally, over half the sample (51%) had missing or incomplete tests to rule out these potential sources of bias. Most preventive intervention RCTs need greater rigor to support causal claims that an intervention is effective. Researchers must also improve the reporting of methods and results so that methodological quality can be fully assessed. These advancements will increase the usefulness of preventive interventions by ensuring the credibility and usability of RCT findings.
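To make the two bias checks named above concrete, the following Python sketch shows, on simulated data, how a trialist might test baseline equivalence between conditions and differential attrition. This is a minimal illustration under assumed data structures, not the analysis code used in this study; the simulated dataset and all variable names are hypothetical.

```python
# Illustrative sketch only -- NOT this study's analysis code.
# The simulated dataset and all variable names are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)
n = 400

# Simulated individual-level RCT: treatment assignment, one baseline
# covariate, and an indicator for completing the follow-up assessment.
condition = rng.integers(0, 2, size=n)           # 1 = intervention, 0 = control
baseline = rng.normal(loc=50, scale=10, size=n)  # pre-randomization score
completed = rng.random(n) < np.where(condition == 1, 0.85, 0.70)

# (1) Baseline equivalence: compare the two conditions at randomization.
# A standardized mean difference is often more informative than a p-value.
treat, ctrl = baseline[condition == 1], baseline[condition == 0]
pooled_sd = np.sqrt((treat.var(ddof=1) + ctrl.var(ddof=1)) / 2)
smd = (treat.mean() - ctrl.mean()) / pooled_sd
t_base = stats.ttest_ind(treat, ctrl)
print(f"Baseline: SMD = {smd:.3f}, t = {t_base.statistic:.2f}, p = {t_base.pvalue:.3f}")

# (2a) Overall attrition: do completers differ from attritors at baseline?
t_attr = stats.ttest_ind(baseline[completed], baseline[~completed])
print(f"Completers vs. attritors: t = {t_attr.statistic:.2f}, p = {t_attr.pvalue:.3f}")

# (2b) Differential attrition: does the dropout rate differ by condition?
crosstab = np.array(
    [[np.sum(completed & (condition == 1)), np.sum(~completed & (condition == 1))],
     [np.sum(completed & (condition == 0)), np.sum(~completed & (condition == 0))]]
)
chi2, p, dof, expected = stats.chi2_contingency(crosstab)
print(f"Differential attrition: chi2 = {chi2:.2f}, p = {p:.3f}")
```

Note that a non-significant test cannot by itself establish equivalence; in practice, the magnitude of standardized baseline differences and of overall and differential attrition rates matters as much as statistical significance.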



Data Availability

Data are available from the authors upon request.


Acknowledgements

The authors thank Abigail Fagan, Delbert Elliott, Denise Gottfredson, and Amanda Ladika for their comments and critical reading of the manuscript; Sharon Mihalic for contributions to the paper's concepts; and Jennifer Balliet for assistance with data entry and data coding.

Funding

This study was conducted with support from Arnold Ventures.

Author information


Contributions

Concepts and design (CS; PB; FP); data entry, coding, management, and analysis (CS; PB; FP; CG); drafting of manuscript (CS; PB; FP); intellectual contributions, reviewing, and critical editing of manuscript content (CS; PB; FP; CG; KH). All authors have read and approved the final manuscript.

Corresponding author

Correspondence to Christine M. Steeger.

Ethics declarations

Ethics Approval and Consent to Participate

This paper does not contain research with human participants or animals.

Conflict of Interest

The authors declare that they are members of the Blueprints for Healthy Youth Development staff and that they have no financial or other conflict of interest with respect to any of the specific interventions, policies, or procedures discussed in this article.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary file 1 (DOCX 14 KB)


About this article


Cite this article

Steeger, C.M., Buckley, P.R., Pampel, F.C. et al. Common Methodological Problems in Randomized Controlled Trials of Preventive Interventions. Prev Sci 22, 1159–1172 (2021). https://doi.org/10.1007/s11121-021-01263-2

