In honor of International Day of Happiness, I’m posting my SPSP talk from a few weeks ago, where I discuss happiness research during the replication crisis. I talk about some good things about happiness research, and also some things that could be improved. But more importantly, I brag about my seventh grade awards, talk about the type of happiness research that I’m most skeptical about, praise data thugs, and tell the story of how we became replication bullies.
One of the most exciting things to happen during the years-long debate about the replicability of psychological research is the shift in focus from providing evidence that there is a problem to developing concrete plans for solving it. Whether it is journal badges that reward good practices, statistical software that can check for problems before papers are published, collaborative efforts to deal with limited resources and underpowered studies, proposals for new standards of evidence, or even entire societies dedicated to figuring out what we can do to make things better, many people have devoted an incredible amount of thought, time, and energy to figuring out how we can fix the problems that exist and move the field forward.
One of the most contentious issues in recent debates about replication studies concerns the importance of context in explaining failed replications. Those who question the value of direct replication often suggest that many psychological effects should be expected not to replicate because they depend so strongly on a multitude of seemingly inconsequential contextual factors. Thus, because you can’t step in the same river twice, direct replication attempts should often be expected to fail.
[Photo caption: Cece doesn’t understand the rules of the couch.]

Do replication studies need special rules? In my previous post I focused on the question of whether replicators need to work with original authors when conducting their replication studies. I argued that this rule is based on the problematic idea that original authors somehow own an effect and that their reputations will be harmed if that effect turns out to be fragile or to have been a false positive.
Recently I traveled to Vienna for APS’s International Convention of Psychological Science, where I gave a talk on “The Rules of Replication.” Thanks to the other great talks in the session, it was well attended. But as anyone who goes to academic conferences knows, “well attended” typically means that, at best, there may have been a couple hundred people in the room. And it seems like kind of a waste to prepare a talk—one that I will probably only give once—for such a limited audience.