A few weeks ago, I was listening to a bit of point/counterpoint on the Mother Jones Inquiring Minds podcast. On one episode, Brad Bushman gave an interview about the causes of gun violence, emphasizing the Weapons Priming Effect and the effects of violent video games. (Apparently he and his co-authors have a new meta-analysis of the Weapons Priming Effect; I can't read it because it's still under revision and the authors have not sent me a copy.)
On the other, Inquiring Minds invited violent-media-effects skeptic Chris Ferguson, perhaps one of Bushman's most persistent detractors. Ferguson recounted his reasons for skepticism about violent-game effects, some reasonable, some less so. One of his more reasonable criticisms is that he's concerned about publication bias and p-hacking in the literature. Perhaps researchers are running several studies and reporting only the ones that find significance, or maybe researchers take their null results and wiggle them around until they reach significance. (I think this is happening to some degree in this literature.)
Surprisingly, this was the criticism that drew the most scoffing from the hosts. University scientists don't earn anything, they argued, so who in their right mind would go into science and twist their results in the hope of grant funding? Anyone wanting to make money would have an easier time of it by staying far away from academia and going into something more lucrative, like dog walking.
Clearly, the hosts are mistaken: we know that research fraud happens, publication bias happens, and p-hacking happens. Andrew Gelman's blog today suggests that these things happen when researchers find themselves chasing null hypotheses: under publish-or-perish pressures, researchers feel they have to find statistical significance. But why does anybody bother?
If the choice is between publishing nonsense and "perishing" (e.g., leaving academia to take a significant pay raise at a real job), why don't we see more researchers choosing to perish?