Being open when research goes wrong

5 March 2025

It has been well publicised by now that psychological research has a problem with replicability. That is to say, a large proportion of our research findings aren’t reliable because they can’t be independently replicated. Among other problems, a lack of methodological rigour led to ‘p-hacking’, where researchers perform many different tests and only report those that (often simply by chance) return a statistically significant result. This has the effect of increasing the number of false positives entering the literature, which has damaged the credibility of psychology as a research field. When I started my research career, this reckoning had already happened, and indeed I learnt about it as part of my undergraduate degree. I was also in a lab that took open science practices seriously, so I had an above-average exposure to discussions about the ways in which we can make psychology a more robust science.

One of the more widely adopted measures has been the preregistration of studies, meaning that before starting data analysis, you upload your research plan to an online repository. The practice is ubiquitous in clinical research because regulators require registration of clinical trials, but it has been adopted only patchily in psychology. The main benefit is that being open about the plan before starting makes it more difficult to p-hack. Readers of the finished article will expect to see the analyses that were originally planned, and will question any findings that come from later ‘fishing expeditions’.

I myself have conducted studies without preregistering them, usually out of an eagerness to get on with the data analysis. But I have come to realise that this is a false economy – putting together a good preregistration means you have already done a lot of the strategising you have to do anyway, and in a more organised way. My current project using data from the Millennium Cohort Study is the first that I have properly preregistered, and although it hasn’t really changed my approach to planning studies, it has made the process more transparent and meant that all the relevant information is available in one place. By far the most frequent visitor to the preregistration is me, checking details of the methodology before I implement it!

The level of detail in a preregistration can vary, with some only outlining the basics of a study while leaving finer details open to interpretation. My approach is rather more detailed, as I wanted to leave as few researcher degrees of freedom as possible, to increase the credibility of my findings. I also had a theoretical framework already laid out as a product of my collaboration with advisors with lived experience of mental health symptoms, which formed the basis for the variables specified in the statistical models. So I made sure to specify each individual variable ID, along with details about the modelling procedure and the handling of missing data, right from the start. All without having looked at the data. As you may imagine, this left a lot of room for mistakes, oversights and unforeseen circumstances that required a change to the plan described in the preregistration.

As others before me have laid out, preregistration is not a prison, and it is acceptable to deviate from the preregistration to deal with methodological problems that arise once the data analysis has started. The key, though, is to be transparent about these changes and why they have occurred. But what form should this transparency take? It is common for authors to have a section of their articles that details the deviations and the reasons for those deviations. In this study, I wanted to go one step further and publish the deviations as they happened, or at least not long afterwards, as updates to the preregistration. This feels like a stepping stone towards another concept that I’m curious about – that of open lab notebooks, which require scientists to publicly document every stage of the research process. Fourteen updates later, I am perhaps a little less enthusiastic about that level of transparency, but I nonetheless hope that the added transparency helps readers to evaluate the findings that we eventually publish.

There is some debate about how much deviation from the preregistration should be tolerated before the research should be considered exploratory, rather than confirmatory. There is no one-size-fits-all solution, as it depends strongly on the intention behind the deviations. In my case, though the list of deviations is growing, comparing the current version to the original shows that the research remains faithful to the plan. We have deviated mostly to replace or simplify variables that were troublesome because, for example, large amounts of data were missing or there were too few participants in some categories. On the other hand, a researcher could also use deviations to cover up changes that they made after seeing the results of their planned analysis, which would certainly need to be considered exploratory. Exploratory research can still be useful, though; all of my work with UNICEF data, for instance, falls into that category. The important thing is that this is transparently declared to readers.

In short, preregistration is a useful tool for transparency, but it is not a panacea for psychology’s credibility problem. We should not be asking those who read our research to take anything on trust; rather, we should be as open as is reasonably practicable about the process behind the research. The more detail a bona fide preregistration includes, the less scope there is for concerns about p-hacking and other questionable research practices. And this includes being honest when our plans need to be adapted. In so doing, we give readers information about the level of rigour in the methodology, and allow them to make informed decisions as to whether they should believe what they are reading.


Being open when research goes wrong by Tom Metherell is licensed under CC BY 4.0