Abstract
Both within and outside of sociology, there are conversations about methods to reduce error and improve research quality—one such method is preregistration and its counterpart, registered reports. Preregistration is the process of detailing research questions, variables, analysis plans, etc. before conducting research. Registered reports take this one step further, with a paper being reviewed on the merit of these plans, not its findings. In this manuscript, I detail preregistration’s and registered reports’ strengths and weaknesses for improving the quality of sociological research. I conclude by considering the implications of a structural-level adoption of preregistration and registered reports. Importantly, I do not recommend that all sociologists use preregistration and registered reports for all studies. Rather, I discuss the potential benefits and genuine limitations of preregistration and registered reports for the individual sociologist and the discipline.
Introduction
Science is powerful, in part, because it is self-correcting. Specifically, due to replication and the cumulative nature of scientific inquiry, over time, errors are exposed. For errors to be exposed, however, research must be transparent, i.e., research methods (e.g., data cleaning processes, questionnaires, research protocol, etc.) must be made explicit. Without transparency, the replication and comprehensive evaluation of prior research is difficult, and scientific progress may be inhibited (Freese, 2007).
Because accuracy is integral to science, it is perhaps unsurprising that researchers in social and natural sciences are vexed by errors which may result in an inability to verify or replicate research findings. For example, psychologists’ inability to replicate important findings has garnered considerable attention in and outside of academia (Moody et al., 2022). In response, some have claimed that science is in the midst of a replication crisis (Jamieson, 2018). The discussion of this “crisis” generally blames researchers whose findings are non-replicable, portraying them as sloppy, incompetent, or dishonest (Jamieson, 2018). Ironically, due to fear of public shaming, researchers may hesitate to admit to mistakes, thereby making the identification of non-replicability more difficult (Moody et al., 2022).
Like these other social and natural sciences, there is reason to believe sociology may also suffer from non-replicability. Specifically, Gerber and Malhotra (2008) reviewed papers published in three top sociology journals: American Sociological Review, American Journal of Sociology, and The Sociological Quarterly. The authors compared the distribution of z-scores in these published papers to what would be expected by chance alone. Absent bias, z-scores should be distributed roughly evenly across significance levels. That is, a paper with a z-score of 1.96 (and corresponding p-value of < 0.05) should be no more likely to be published than a paper with a z-score of 2.58 (and a corresponding p-value of < 0.01). In fact, if the effect and statistical power are large enough, we might even expect a higher rate of papers to be published at the p < 0.01 level than at the p < 0.05 level (Simonsohn et al., 2014). Instead, z-scores that are barely past the widely accepted critical value (1.96) and associated alpha level (0.05) are published at a (much) higher rate than would be expected by chance alone. This abundance of barely significant results suggests that, for many claims in sociological research, the strength of evidence may be overstated.
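The correspondence between the critical z-values and alpha levels discussed above can be verified numerically. The following is a minimal stdlib-only sketch; the function name `two_tailed_p` is illustrative, not from the cited studies.

```python
from math import erf, sqrt

def two_tailed_p(z: float) -> float:
    """Two-tailed p-value for a standard-normal test statistic.

    Uses Phi(z) = 0.5 * (1 + erf(z / sqrt(2))); p = 2 * (1 - Phi(|z|)).
    """
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# The two critical values discussed in the text:
print(round(two_tailed_p(1.96), 4))  # ≈ 0.05
print(round(two_tailed_p(2.58), 4))  # ≈ 0.0099, i.e., just under 0.01
```

A caliper test in the spirit of Gerber and Malhotra (2008) compares how many published z-scores fall just above versus just below 1.96; an even distribution is expected absent publication bias.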
To some extent, a lack of reproducibility is normal and to be expected in research (Shiffrin et al., 2018). It is important, however, that non-reproducible findings are identified so that they do not become the foundation for future research. Additionally, retractions, corrections, and public debate over research findings may damage sociologists’ credibility among the general public (Anvari and Lakens 2018; Hendriks et al., 2020; Wingen et al., 2020). This reputational damage is unfortunate because sociologists have the theoretical and methodological training to weigh in on conversations that are relevant for public policy, workplace practices, and educational guidelines, among others. Furthermore, sociologists’ conclusions are often unpopular, especially to those who have historically held power. For sociological research to have the maximum possible impact, researchers must maximize the actual and perceived quality of their research.
Below, I begin by describing two related practices, preregistration and registered reports, which have been shown to: (1) enhance research quality and replication (Chambers and Tzavella 2022; Scheel 2021; Soderberg et al., 2021; Wicherts et al. 2011) and (2) increase the public’s trust in research (Chambers and Tzavella 2022; Christensen and Miguel 2016; Nosek and Lakens 2014; Parker et al., 2019; Scheel 2021). Next, I discuss the strengths and weaknesses of these practices. Finally, I conclude by discussing the potential implications of adopting preregistration and registered reports for the field more broadly.
Background
Pre-registration
Preregistration is the process of carefully considering and stating research plans and rationale in a repository. When and if the researcher is ready to share them, these plans can be made public. Depending on the type of research, preregistration may include details such as hypotheses, sampling strategy, interview guides, exclusion criteria, study design, and/or analysis plans (Kavanagh & Kapitány, 2019). Because inductive/abductive and descriptive research designs may change in response to findings, researchers are encouraged to update their study design throughout the research process. In updating the study design, the preregistration becomes a living document for researchers to track their study’s evolution.
Deductive Research
Deductive research involves the testing of hypotheses. Like inductive/abductive and descriptive research, deductive research may take many forms, including experiments, textual analysis, analysis of interview data, secondary analysis of survey data, etc. To reduce the risk of undisclosed multiple comparisons (see discussion below), researchers preregistering deductive research report detailed plans: research question, hypotheses, sampling strategy, sample size, independent variables, covariates, dependent variables, operationalization of variables, planned analyses, exclusion criteria, etc. Changes to these plans may still be made and documented, but researchers engaged in deductive research are encouraged to stay close to original research plans, justify deviations from those plans, and address statistical concerns associated with such deviations.
Strengths of Preregistration
Scholars from other social science disciplines, including psychology, economics, and political science, have discussed the benefits of preregistration (e.g., see DeHaven 2017; Haven & Van Grootel, 2019; Kavanagh & Kapitány, 2019; Timmermans and Tavory 2012). Far from a liability, prior theoretical knowledge is often considered to be beneficial for theory construction by helping researchers identify gaps in existing theories (Burawoy 1991; James, 1907; Katz, 2015; Timmermans and Tavory 2012). Preregistration encourages researchers to consider prior theoretical knowledge and maintain a contemporaneous record of changes in theoretical dispositions throughout the development of the research project (Haven et al., 2020; Haven & Van Grootel, 2019).
Multiple Comparisons: Over-reliance on p-values
Above, I discussed how multiple comparisons increase the risk of Type 1 error. Although preregistration and registered reports can verify a priori hypotheses and prevent motivated reasoning for multiple comparisons, they cannot change the fact that multiple comparisons may exist (Rubin, 2017). Specifically, each potential decision about how to treat outliers, missing data, etc. can affect the number of potential comparisons. Even if the results of these decisions do not affect which decisions are made, the mere existence of multiple tests belies the logic of the Fisher/Neyman-Pearson/NHST approach. Said otherwise, although preregistration and registered reports may prevent analysis decisions from being influenced by the implications for results, preregistration and registered reports do not prevent analysis decisions “from being influenced by the idiosyncrasies of the data” (Rubin, 2017, p. 8).
In addition to preregistration and registered reports, there are a few ways to address the issue of multiple comparisons, including: (1) adopting a higher threshold for statistical significance, (2) abandoning the Neyman-Pearson approach (p-values) for Bayesian methods, and (3) the liberal use of sensitivity analyses (Moody et al., 2022; Rubin, 2017). No one of these methods necessarily addresses all concerns about multiple comparisons. Additionally, of the above-listed solutions, preregistration alone is perhaps the least effective in reducing Type 1 error and ensuring the strength of published research (Moody et al., 2022; Rubin, 2017). Rather, research suggests that the liberal use of sensitivity analyses may be the best approach (Frank et al., 2013; Moody et al., 2022; Rubin, 2017; Young, 2018). That is, rather than relying on a single test, researchers can conduct multiple iterations of that test, report them all, and come to a consensus based on this larger body of analyses. Thus, while preregistration can ensure p-values are used as intended, it cannot fix the errors inherent in p-values.
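A sensitivity analysis of the kind described above can be sketched in a few lines: run the same test under several defensible specifications and report every result, rather than choosing one. The data, cutoffs, and function names below are illustrative assumptions, not a procedure from the cited papers.

```python
import random
from statistics import mean, stdev

# Simulated data with a true group difference; any real analysis would
# substitute its own variables here.
random.seed(1)
treatment = [random.gauss(0.4, 1.0) for _ in range(200)]
control = [random.gauss(0.0, 1.0) for _ in range(200)]

def z_statistic(a, b):
    """Large-sample z statistic for a difference in means."""
    se = (stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b)) ** 0.5
    return (mean(a) - mean(b)) / se

def trim(xs, cutoff):
    """Drop observations more than `cutoff` SDs from the sample mean."""
    m, s = mean(xs), stdev(xs)
    return [x for x in xs if abs(x - m) <= cutoff * s]

# Report the statistic under every specification, not just a favorable one.
results = {"no trimming": z_statistic(treatment, control)}
for cutoff in (3.0, 2.5, 2.0):
    results[f"trim at {cutoff} SD"] = z_statistic(
        trim(treatment, cutoff), trim(control, cutoff)
    )

for spec, z in results.items():
    print(f"{spec}: z = {z:.2f}")
```

If the conclusion survives across all specifications, the finding is robust to these analytic choices; if it flips, that fragility is itself the result worth reporting.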
Errors: Coding & Transcription
It is difficult to avoid error. Although some errors occur due to important researcher oversights, many others occur due to everyday human fallibilities, such as coding and transcription mistakes. Indeed, Nuijten et al. (2016) examined papers published in eight major psychology journals between 1985 and 2013 and found that nearly half contained at least one p-value that was inconsistent with its test statistic and degrees of freedom. The researchers attributed many of these inconsistencies to transcription errors, which aligns with other research (Eubank, 2016; Ferguson and Heene 2012; Gerber and Malhotra 2008; Wetzels et al., 2011). Although this kind of error is pervasive and important, it cannot be resolved by preregistration. No matter how thoughtful a preregistration, it cannot prevent typos. To reduce the number of such typos, researchers should embrace tools for automation (Long 2009).
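A toy consistency check in the spirit of the Nuijten et al. (2016) approach: recompute the p-value implied by a reported test statistic and flag reports whose stated p-value disagrees. For simplicity this sketch assumes z statistics (so the normal distribution applies); the reported results are hypothetical, and real tools such as statcheck handle t, F, and other statistics.

```python
from math import erf, sqrt

def implied_p(z: float) -> float:
    """Two-tailed p-value implied by a standard-normal test statistic."""
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

def flag_inconsistencies(reports, tolerance=0.005):
    """Return reports whose stated p differs from the recomputed p."""
    return [r for r in reports if abs(implied_p(r["z"]) - r["p"]) > tolerance]

# Hypothetical reported results, e.g., scraped from published papers.
reports = [
    {"id": "study A", "z": 2.10, "p": 0.036},  # consistent with z = 2.10
    {"id": "study B", "z": 1.50, "p": 0.04},   # implied p is ~0.13: a likely typo
]
flagged = flag_inconsistencies(reports)
print([r["id"] for r in flagged])  # only the inconsistent report
```

Because the check is mechanical, it can run over an entire literature, which is how reporting errors at this scale were detected in the first place.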
Additionally, rather than exposing potential errors in research, preregistration could be used as armor against criticism. As it stands, when errors are identified in research papers (and/or non-replication occurs), the papers' authors face serious consequences and a damaged reputation (Shamoo & Resnik, 2015). Often, this treatment of errors does not distinguish between honest error and misconduct (Resnik and Stewart 2012). As a result, individuals who make honest mistakes may be hesitant to acknowledge their mistakes for fear of the criticism that may befall them (Moody et al., 2022). To defend against such criticism, researchers may use preregistration as a signal of quality, rather than soberly addressing honest mistakes.
This hypothetical scenario is less of a criticism of preregistration than it is a criticism of the treatment of honest mistakes and the researchers who make them. When encountering scientific errors, sociologists must eschew shaming tactics. Instead, we should establish norms of scientific humility, understanding that we are all prone to making similar errors, and treating others’ research (and mistakes) as we would want them to treat ours (Janz and Freese 2021; Moody et al., 2022).
Because error-prevention is equally desirable for journals and authors, journals may consider embracing some of the responsibility for high-quality research by, for example, providing pre-publication code review (Colaresi, 2016; Maner, 2014; Moody et al., 2022). This would involve individuals employed by the journal examining code in detail. The proposed benefit of this method is that it would reduce error before publication, thereby aligning journal and researcher interests. Although it would require considerable investment, Moody et al. (2022, pp. 78–79) note that "as a discipline, we have decided that other publication processes—copyediting, layout, and bibliometrics, for example—are acceptable and worthwhile. It may be time to include data and methods editing in this process." To effectively implement these procedures, however, journals would need support and incentives (as opposed to mandates and punishment) from other entities, e.g., governmental agencies, funding organizations, academic associations, etc.
Conclusion
In summary, preregistration has promise, but, like any solution, is unable to fully resolve some of the most serious problems that hinder scientific progress. Preregistration may help improve the quality of research by increasing transparency, reducing some forms of error (through planning), preventing questionable research practices, and reducing the file-drawer problem (Chambers and Tzavella 2022; Nuijten et al., 2016; Scheel, 2021; Soderberg et al., 2021). Furthermore, preregistered papers and registered reports have higher rates of replicability and quality than traditional research (Chambers and Tzavella 2022; Nuijten et al., 2016; Scheel, 2021; Soderberg et al., 2021).
Due to its benefits, preregistration may provide an air of legitimacy. Specifically, the use of preregistration has been found to increase the public's perception of research credibility (Chambers and Tzavella 2022; Christensen and Miguel 2016; Nosek and Lakens 2014; Parker et al., 2019; Scheel 2021). Reviewers and journals must nonetheless be careful when assuming that a preregistered paper is inherently more rigorous or accurate than a non-preregistered paper. Preregistration does not reduce Type 1 error caused by data-related idiosyncrasies, does not reduce errors from coding or transcription, and cannot prevent all questionable research practices (Ikeda et al., 2019). If preregistration is adopted instead of other tools, then researchers, reviewers, and editors may feel better about scientific integrity without substantially improving it. If, however, preregistration is adopted along with other tools or strategies for improving research quality (e.g., pre-publication code review, posting of all materials, more in-depth methods sections), we are likely to see the greatest improvement in research quality (Moody et al., 2022). Thus, preregistration's true promise lies in the adoption of its goals (i.e., improved transparency, reduced error, comprehensive evaluation of the strength of findings, and a shift in incentive structure), as opposed to an uncritical adoption of its methods.
Data Availability
N/A.
Code Availability
N/A.
Notes
Although preregistration is not widely adopted by sociologists in the United States, the flagship journal for the European Sociological Association, European Societies, recently encouraged scholars to preregister quantitative research (Präg et al., 2022).
I chose this example for a few reasons. First, it appears to be an honest mistake caused by an understandable oversight. Second, the researchers are well-established scholars whose work is highly respected in the field. Although errors in research should not be considered shameful, but rather, human, scholars are often ridiculed for research errors. Due to the high potential for such ridicule, I did not want to choose an example from an early career scholar whose reputation was not yet established in the field.
This is not to say that interviews are only used for inductive research.
Preregistration cannot prevent all questionable research practices and misconduct. For example, to ensure there is something publishable that is also preregistered, researchers could preregister multiple versions of studies and only selectively report these in published papers. Additionally, researchers could preregister findings after results are known, a practice known as PARKing (Yamada, 2018). These practices would, of course, belie the spirit of preregistration, rendering it ineffective (Ikeda et al., 2019; Pham and Oh 2021).
But also see Field et al. (2020).
Fortunately, virtually all journals that adopt registered reports include them as an alternative type of submission, not one that replaces traditional articles (Chambers and Tzavella 2022).
References
Allen, C., & Mehler, D. M. A. (2019). Open Science Challenges, benefits and Tips in Early Career and Beyond. PLOS Biology, 17(5), e3000246. doi: https://doi.org/10.1371/journal.pbio.3000246.
Anvari, F., & Lakens, D. (2018). The replicability crisis and public trust in psychological science. Comprehensive Results in Social Psychology, 3(3), 266–286. doi: https://doi.org/10.1080/23743603.2019.1684822.
Atkinson, J. (2001). “Privileging Indigenous Research Methodologies.” National Indigenous Researchers Forum, University of Melbourne
Barbour, R. S. (2003). The Newfound credibility of qualitative research? Tales of Technical Essentialism and Co-Option. Qualitative Health Research, 13(7), 1019–1027. doi: https://doi.org/10.1177/1049732303253331.
Brewer, J. D. (2000). Ethnography. Philadelphia, PA: Open University Press: Buckingham.
Burawoy, M. (Ed.). (1991). Ethnography unbound: power and resistance in the Modern Metropolis. Berkeley: University of California Press.
Chambers, C. D., & Tzavella, L. (2022). The past, present and future of registered reports. Nature Human Behaviour, 6(1), 29–42. doi: https://doi.org/10.1038/s41562-021-01193-7.
Charmaz, K. (2014). Constructing Grounded Theory.
Charmaz, K., & Thornberg, R. (2021). The pursuit of quality in grounded theory. Qualitative Research in Psychology, 18(3), 305–327. doi: https://doi.org/10.1080/14780887.2020.1780357.
Christensen, G. S., & Miguel, E. (2016). Transparency, reproducibility, and the credibility of economics research. National Bureau of Economic Research Working Paper Series 94.
Colaresi, M. (2016). Preplication, replication: a proposal to efficiently upgrade Journal Replication Standards. International Studies Perspectives, 17, 367–378. doi: https://doi.org/10.1093/isp/ekv016.
DeHaven, A. C. (2017). Preregistration: A plan, not a prison. Retrieved from https://www.cos.io/blog/preregistration-plan-not-prison.
Desmond, M., Papachristos, A. V., & Kirk, D. S. (2016). Police violence and citizen crime reporting in the Black community. American Sociological Review, 81(5), 857–876. doi: https://doi.org/10.1177/0003122416663494.
Desmond, M., Papachristos, A. V., & Kirk, D. S. (2020). Evidence of the Effect of Police Violence on Citizen Crime Reporting. American Sociological Review, 85(1), 184–190. doi: https://doi.org/10.1177/0003122419895979.
Dey, I. (1999). Grounding grounded theory: guidelines for qualitative Inquiry. San Diego: Academic Press.
Dickersin, K. (1990). The existence of publication Bias and Risk factors for its occurrence. Journal Of The American Medical Association, 263(10), 1385–1389.
Emerson, R. M., Fretz, R. I., & Shaw, L. L. (2011). Writing ethnographic fieldnotes (2nd ed.). Chicago: The University of Chicago Press.
Eubank, N. (2016). Lessons from a decade of replications at the Quarterly Journal of Political Science. PS: Political Science & Politics, 49(02), 273–276. doi: https://doi.org/10.1017/S1049096516000196.
Fanelli, D. (2010). "Positive" results increase down the hierarchy of the sciences. PLoS ONE, 5(4), e10068. doi: https://doi.org/10.1371/journal.pone.0010068.
Ferguson, C. J., & Heene, M. (2012). A vast graveyard of undead theories: Publication bias and psychological science's aversion to the null. Perspectives on Psychological Science, 7(6), 555–561. doi: https://doi.org/10.1177/1745691612459059.
Field, S. M., Wagenmakers, E.-J., Kiers, H. A. L., Hoekstra, R., Ernst, A. F., & van Ravenzwaaij, D. (2020). The effect of preregistration on trust in empirical research findings: Results of a registered report. Royal Society Open Science, 7(4), 181351. doi: https://doi.org/10.1098/rsos.181351.
Franco, A., Malhotra, N., & Simonovits, G. (2014). Publication bias in the social sciences: Unlocking the file drawer. Science, 345(6203), 1502–1505. doi: https://doi.org/10.1126/science.1255484.
Frane, A. V. (2015). Planned hypothesis tests are not necessarily exempt from Multiplicity Adjustment. Journal of Research Practice, 11(1), 17.
Frank, K. A., Maroulis, S. J., Duong, M. Q., & Kelcey, B. M. (2013). What would it take to change an inference? Using Rubin's causal model to interpret the robustness of causal inferences. Educational Evaluation and Policy Analysis, 35(4), 437–460. doi: https://doi.org/10.3102/0162373713493129.
Freese, J. (2007). Overcoming objections to Open-Source Social Science. Sociological Methods & Research, 36(2), 220–226. doi: https://doi.org/10.1177/0049124107306665.
Gelman, A., & Loken, E. (2013). The garden of forking paths: Why multiple comparisons can be a problem, even when there is no "fishing expedition" or "p-hacking" and the research hypothesis was posited ahead of time. Department of Statistics, Columbia University.
Gelman, A., & Loken, E. (2014). The statistical crisis in science. American Scientist, 102(6), 460–465.
Gerber, A. S., & Malhotra, N. (2008). Publication bias in empirical sociological research: Do arbitrary significance levels distort published results? Sociological Methods & Research, 37(1), 3–30. doi: https://doi.org/10.1177/0049124108318973.
Glaser, B. G. (1978). Theoretical sensitivity: Advances in the methodology of grounded theory (2nd ed.). Mill Valley, CA: Sociology Press.
Glaser, B. G. (1998). Doing grounded theory: Issues and discussions. Mill Valley, CA: Sociology Press.
Glaser, B. G. (2001). The grounded theory perspective: Conceptualization contrasted with description. Mill Valley, CA: Sociology Press.
Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative research. London: Weidenfeld and Nicolson.
de Groot, A. D. (2014). The meaning of "significance" for different types of research [translated and annotated by Eric-Jan Wagenmakers, Denny Borsboom, Josine Verhagen, Rogier Kievit, M. Bakker, Angelique Cramer, Dora Matzke, Don Mellenbergh, and Han L. J. van der Maas]. Acta Psychologica, 148, 188–194. doi: https://doi.org/10.1016/j.actpsy.2014.02.001.
Haven, T. L., Errington, T. M., Gleditsch, K. S., van Grootel, L., Jacobs, A. M., Kern, F. G., Piñeiro, R., Rosenblatt, F., & Mokkink, L. B. (2020). Preregistering qualitative research: A Delphi study. International Journal of Qualitative Methods, 19, 1609406920976417. doi: https://doi.org/10.1177/1609406920976417.
Haven, T. L., & Van Grootel, L. (2019). Preregistering qualitative research. Accountability in Research, 26(3), 229–244. doi: https://doi.org/10.1080/08989621.2019.1580147.
Hendriks, F., Kienhues, D., & Bromme, R. (2020). Replication crisis = trust crisis? The effect of successful vs failed replications on laypeople's trust in researchers and research. Public Understanding of Science, 29(3), 270–288. doi: https://doi.org/10.1177/0963662520902383.
Ikeda, A., Xu, H., Fuji, N., Zhu, S., & Yamada, Y. (2019). Questionable research practices following pre-registration. Preprint, PsyArXiv. doi: https://doi.org/10.31234/osf.io/b8pw9.
Ioannidis, J. P. A. (2008). Why most discovered true Associations are inflated. Epidemiology (Cambridge, Mass.), 19(5), 640–648. doi: https://doi.org/10.1097/EDE.0b013e31818131e7.
Jacobs, A. (2020). “Pre-Registration and Results-Free Review in Observational and Qualitative Research.” Pp. 221–64 in The Production of Knowledge: Enhancing Progress in Social Science, edited by C. Elman, J. Gerrig, and J. Mahoney. Cambridge University Press.
James, W. (1907). Pragmatism. Cambridge, Massachusetts: Hackett.
Jamieson, K. H. (2018). “Crisis or Self-Correction: Rethinking Media Narratives about the Well-Being of Science.” Proceedings of the National Academy of Sciences 115(11):2620–27. doi: https://doi.org/10.1073/pnas.1708276114.
Janz, N., & Freese, J. (2021). Replicate others as you would like to be replicated yourself. PS: Political Science & Politics, 54(2), 305–308. doi: https://doi.org/10.1017/S1049096520000943.
Katz, J. (2015). A theory of qualitative methodology: the Social System of Analytic Fieldwork. Méthod(e)s: African Review of Social Sciences Methodology, 1(1–2), 131–146. doi: https://doi.org/10.1080/23754745.2015.1017282.
Kavanagh, C. M., & Kapitány, R. (2019). Promoting the benefits and clarifying misconceptions about preregistration, preprints, and open science for cognitive science of religion. Preprint, PsyArXiv. doi: https://doi.org/10.31234/osf.io/e9zs8.
Lakens, D. (2019). The value of preregistration for psychological science: A conceptual analysis. Preprint, PsyArXiv. doi: https://doi.org/10.31234/osf.io/jbh4w.
Layder, D. (1998). Sociological practice: linking theory and Social Research. London; Thousand Oaks, Calif: Sage.
Locascio, J. J. (2019). The impact of results Blind Science Publishing on Statistical Consultation and collaboration. The American Statistician, 73(sup1), 346–351. doi: https://doi.org/10.1080/00031305.2018.1505658.
Long, J. S. (2009). The Workflow of Data Analysis Using Stata. Stata Press Books.
Lucchesi, L. R., Kuhnert, P. M., Davis, J. L., & Xie, L. (2022). Smallset timelines: A visual representation of data preprocessing decisions. In 2022 ACM Conference on Fairness, Accountability, and Transparency (pp. 1136–1153). Seoul, Republic of Korea: ACM.
Maner, J. K. (2014). Let’s put our money where our mouth is: if authors are to Change their Ways, Reviewers (and editors) must change with them. Perspectives on Psychological Science, 9(3), 343–351. doi: https://doi.org/10.1177/1745691614528215.
Moody, J. W., Keister, L. A., & Ramos, M. C. (2022). Reproducibility in the social sciences. Annual Review of Sociology, 48, 21.
Nosek, B. A., Ebersole, C. R., DeHaven, A. C., & Mellor, D. T. (2018). “The Preregistration Revolution.” Proceedings of the National Academy of Sciences 115(11):2600–2606. doi: https://doi.org/10.1073/pnas.1708274114.
Nosek, B. A., & Lakens, D. (2014). Registered reports: A method to increase the credibility of published results. Social Psychology, 45(3), 137–141. doi: https://doi.org/10.1027/1864-9335/a000192.
Nuijten, M. B., Hartgerink, C. H. J., van Assen, M. A. L. M., Epskamp, S., & Wicherts, J. M. (2016). The prevalence of statistical reporting errors in psychology (1985–2013). Behavior Research Methods, 48(4), 1205–1226. doi: https://doi.org/10.3758/s13428-015-0664-2.
Olken, B. A. (2015). Promises and perils of Pre-Analysis Plans. Journal of Economic Perspectives, 29(3), 61–80. doi: https://doi.org/10.1257/jep.29.3.61.
Parker, T., Fraser, H., & Nakagawa, S. (2019). Making conservation science more reliable with preregistration and registered reports. Conservation Biology, 33(4), 747–750. doi: https://doi.org/10.1111/cobi.13342.
Petticrew, M., Egan, M., Thomson, H., Hamilton, V., Kunkler, R., & Roberts, H. (2008). “Publication Bias in Qualitative Research: What Becomes of Qualitative Research Presented at Conferences?” Journal of Epidemiology & Community Health 62(6):552–54. doi: https://doi.org/10.1136/jech.2006.059394.
Pham, M. T., & Oh, T. T. (2021). Preregistration is neither sufficient nor necessary for good science. Journal of Consumer Psychology, 31(1), 163–176. doi: https://doi.org/10.1002/jcpy.1209.
Präg, P., Ersanilli, E., & Gugushvili, A. (2022). An invitation to submit. European Societies, 24(1), 1–6. doi: https://doi.org/10.1080/14616696.2022.2029131.
Resnik, D. B., & Neal Stewart, C. (2012). Misconduct versus honest error and scientific disagreement. Accountability in Research, 19(1), 56–63. doi: https://doi.org/10.1080/08989621.2012.650948.
Reyes, V. (2020). Ethnographic Toolkit: Strategic Positionality and Researchers’ visible and invisible tools in Field Research. Ethnography, 21(2), 220–240. doi: https://doi.org/10.1177/1466138118805121.
Rosenthal, R. (1979). The file drawer problem and tolerance for null results. Psychological Bulletin, 86(3), 638–641. doi: https://doi.org/10.1037/0033-2909.86.3.638.
Rubin, M. (2017). An evaluation of four solutions to the forking Paths Problem: adjusted alpha, preregistration, sensitivity analyses, and abandoning the Neyman-Pearson Approach. Review of General Psychology, 21(4), 321–329. doi: https://doi.org/10.1037/gpr0000135.
Schäfer, T., & Schwarz, M. A. (2019). The meaningfulness of effect sizes in psychological research: Differences between sub-disciplines and the impact of potential biases. Frontiers in Psychology, 10, 813. doi: https://doi.org/10.3389/fpsyg.2019.00813.
Scheel, A. M. (2021). An excess of positive results: comparing the standard psychology literature with registered reports. Advances in Methods and Practices in Psychological Science, 4(2), 1–12.
Shamoo, A. E., & Resnik, D. B. (2015). Responsible conduct of research (3rd ed.). Oxford; New York: Oxford University Press.
Shiffrin, R. M., Börner, K., & Stigler, S. M. (2018). Scientific progress despite irreproducibility: A seeming paradox. Proceedings of the National Academy of Sciences, 115(11), 2632–2639. doi: https://doi.org/10.1073/pnas.1711786114.
Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11), 1359–1366. doi: https://doi.org/10.1177/0956797611417632.
Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2021). Pre-registration: Why and how. Journal of Consumer Psychology, 31(1), 151–162. doi: https://doi.org/10.1002/jcpy.1208.
Simonsohn, U., Nelson, L. D., & Simmons, J. P. (2014). P-Curve: a key to the file-drawer. Journal of Experimental Psychology: General, 143(2), 14. doi: https://doi.org/10.1037/a0033242.
Soderberg, C. K., Errington, T. M., Schiavone, S. R., Bottesini, J., Thorn, F. S., Vazire, S., Esterling, K. M., & Nosek, B. A. (2021). Initial evidence of research quality of registered reports compared with the standard publishing model. Nature Human Behaviour, 5(8), 990–997. doi: https://doi.org/10.1038/s41562-021-01142-4.
Szucs, D. (2016). A tutorial on hunting statistical significance by chasing N. Frontiers in Psychology, 7, 1444. doi: https://doi.org/10.3389/fpsyg.2016.01444.
Timmermans, S., & Tavory, I. (2012). Theory construction in qualitative research: From grounded theory to abductive analysis. Sociological Theory, 30(3), 167–186.
van 't Veer, A. E., & Giner-Sorolla, R. (2016). Pre-registration in social psychology—A discussion and suggested template. Journal of Experimental Social Psychology, 67, 2–12. doi: https://doi.org/10.1016/j.jesp.2016.03.004.
Wagenmakers, E.-J., & Dutilh, G. (2016). Seven selfish reasons for preregistration. APS Observer. Retrieved December 23, 2021, from https://www.psychologicalscience.org/observer/seven-selfish-reasons-for-preregistration.
Wasserfall, R. (1993). Reflexivity, Feminism and Difference. Qualitative Sociology, 16(1), 23–41. doi: https://doi.org/10.1007/BF00990072.
Wetzels, R., Matzke, D., Lee, M. D., Rouder, J. N., Iverson, G. J., & Wagenmakers, E.-J. (2011). Statistical evidence in experimental psychology: An empirical comparison using 855 t tests. Perspectives on Psychological Science, 6(3), 291–298. doi: https://doi.org/10.1177/1745691611406923.
Wicherts, J. M., Bakker, M., & Molenaar, D. (2011). Willingness to share research data is related to the strength of the evidence and the quality of reporting of statistical results. PLoS ONE, 6(11), e26828. doi: https://doi.org/10.1371/journal.pone.0026828.
Wicherts, J. M., Veldkamp, C. L. S., Augusteijn, H. E. M., Bakker, M., van Aert, R. C. M., & van Assen, M. A. L. M. (2016). Degrees of freedom in planning, running, analyzing, and reporting psychological studies: A checklist to avoid p-hacking. Frontiers in Psychology, 7, 1832. doi: https://doi.org/10.3389/fpsyg.2016.01832.
Wingen, T., Berkessel, J. B., & Englich, B. (2020). No replication, no trust? How low replicability influences trust in psychology. Social Psychological and Personality Science, 11(4), 454–463. doi: https://doi.org/10.1177/1948550619877412.
Yamada, Y. (2018). How to Crack Pre-Registration: toward transparent and Open Science. Frontiers in Psychology, 9, 1831. doi: https://doi.org/10.3389/fpsyg.2018.01831.
Young, C. (2018). Model uncertainty and the Crisis in Science. Socius: Sociological Research for a Dynamic World, 4, 237802311773720. doi: https://doi.org/10.1177/2378023117737206.
Zoorob, M. (2020). Do police brutality stories reduce 911 calls? Reassessing an important Criminological Finding. American Sociological Review, 85(1), 176–183. doi: https://doi.org/10.1177/0003122419895254.
Acknowledgements
This paper would not be possible without the excellent feedback from my colleagues Jane Sell, Anne Groggel, Jenny Davis, and Alex Frenette. Additional thanks to Andrew Wesolek and Elaine Li for research support.
Funding
This research was not funded.
Ethics declarations
Conflicts of Interest/Competing Interests
On behalf of all authors, the corresponding author states that there are no financial or non-financial conflicts of interest.
Ethics Approval
N/A There were no subjects, human or otherwise, used in this research.
Consent to Participate
N/A.
Consent for publication
N/A.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Manago, B. Preregistration and Registered Reports in Sociology: Strengths, Weaknesses, and Other Considerations. Am Soc 54, 193–210 (2023). https://doi.org/10.1007/s12108-023-09563-6