4 Comments
Jun 7, 2023 · Liked by Mark Rubin

Hi Mark, great post! A lot of this resonates with my own experiences with preregistration.

I tend to think that the two scenarios, a deviation from a strict preregistration or an unanticipated decision after a vague preregistration, have pretty similar consequences for the credibility of the analysis subsequently presented. In both cases there is the possibility (albeit not the certainty) that knowledge of the results produced by different analysis choices affected which analysis the researcher decided to report.

Personally, I edge towards the strict preregistration route, but for pretty banal reasons. My experience has been that when a preregistration leaves a particular decision unstated, it's pretty easy for that decision to fall through the cracks in a communication sense. The researcher might not "click" that they actually had to make a decision, or they might forget to write down that this happened, and the reviewers may not realise that a decision needed making. In contrast, when a preregistration said X but the study did Y, it's a little more obvious to all concerned.

By the way, the idea of trade-offs makes me think of the one applying to data analysis plans created before versus after we've seen the data. When we create the analysis plan first (as in a preregistration), this has the advantage of ruling out the possibility that the observed values of the statistics affect which statistics we choose to report, this being a potential source of bias. However, such a plan can't use other information we obtain after collecting data (e.g., knowledge about distributional assumption violations). In contrast, a plan created or modified after collecting data can take useful information from the data itself into account, but comes with the possibility that analysis decisions are consciously or unconsciously affected by knowledge of what substantive results they produce.

I don't think one could make a principled argument that either of these options is *in general* better than the other (unless one were a rabid predictivist or something). It depends on the situation. But when peer reviewing a preregistered study I do quite like being able to read about *both*!

Jun 10, 2023 · Liked by Mark Rubin

Hello Mark, I liked your post and agree with most of it. My only disagreement is with the notion that preregistered confirmatory tests become exploratory whenever researchers deviate from their preregistration. To me, there is a difference between a planned test that deviated from its plan and an unplanned test. I agree with you that confirmation and exploration are distinguished based on whether the test was planned, but I think that a deviated plan is still a plan, and so tests that deviate from a preregistration are still confirmatory.

However, not all confirmatory tests are equal. First, a stricter preregistration prohibits more possible scenarios and thus can confirm more than a vaguer preregistration. Second, a confirmatory test's degree of confirmation decreases as the number of deviations increases. Thus, I would rephrase the preregistration trade-off you have identified as one involving confirmation: assuming that researchers' goal is to confirm their theories as much as they can, they face a trade-off between (1) strict preregistrations, which offer more potential confirmation that can decrease if deviations are made, and (2) vague preregistrations, which offer less potential confirmation that can also decrease with deviations, though such deviations are less likely. Basically, researchers can choose to make big bets (strict preregistrations) with potentially big pay-offs (strong confirmation) or small bets (vague preregistrations) with potentially small pay-offs (weak confirmation).
