Saturday, November 23, 2019

Weighing bullets, not hot sauce

It's been a rich week of readings for wondering just what the hell we're doing. Loyka et al. (2019) present a framework for considering external validity, and this framework reminds us just how poorly we are doing at considering actual real-world human behavior. Tal Yarkoni has a preprint up that describes how implausible it is that the situations and stimuli we study will generalize to other situations and stimuli. Danielle Navarro has clarified her stance on preregistration by elaborating on how misguided she perceives hypothesis testing to be. Together, these articles remind us of the importance of studying the thing we actually care about rather than what's convenient: chances are that our findings won't generalize as simply as we expect, and a significant p-value only tells us that the null is wrong, not that our alternative is correct.

These readings reminded me of some thoughts I'd jotted down following APA 2019. I'd been invited to present some of my research on violent video games. While I had a great session and had a lot of fun talking to a receptive audience about issues like measurement validity and publication bias, the overall APA experience was personally challenging. This is because one of the major themes of APA 2019 was gun violence and what the APA can do about it.

I attended a number of interesting sessions with presenters who studied actual violence by working and serving in communities, doing ride-alongs with police, interviewing people who had suffered violence and had perpetrated violence. This was draining in two ways. 

First, there's a lot of human suffering out there. One presenter had found that many felons serving prison sentences for gun violence had themselves been victims of gun violence, often as early as age 14. He further found that, when people knew who shot them, they were less likely to tell the police. They trust the police so little that they would prefer to settle the score themselves, and the police are just somebody you can dump your cold cases on as one last hail mary. A mother from Newtown was there. Both of her children had been shot in the massacre. One died. She described crying until the capillaries burst in both her eyes. One gets the feeling that tragedy cannot be prevented and that many people are doomed to poverty and violence from the moment they're born.

Second, it made me frustrated with how far removed we are from the actual societal problem we want to study. We want to prevent gang violence, child abuse, intimate partner violence, bullying, aggressive driving, and harassment. Instead of studying the community members of South- and West-Side Chicago, we study college undergraduates, a bunch of nerds who would rather read a book than fight somebody and generally have enough money and safety to be able to do just that. Instead of studying shootings or fights or abuse, we study how much hot sauce these undergrads pour for each other or whether they think a rude RA should be able to keep their job. We even use proxies of proxies -- when it's too much trouble to see how much hot sauce they'll pour for somebody, we give them KI__ and watch whether they fill it in as KILL or KISS.

One of the APA speakers closed by reference to the old joke about the drunkard looking for his keys. The drunk is looking for his keys under the streetlight. A friend joins him and helps him look for a while, with no progress. Eventually the friend, exasperated, says "Let's try something different. Where did you last see your keys?" The drunk says "I dropped my keys over there in the bushes." The bewildered friend asks "Well then, why are we searching over here by the streetlight?" To which the drunk replies "Well, the light's good over here, and I'm afraid of the dark." 

The light's good over here playing parlor tricks with college undergraduates and hot sauce. And it's certainly less scary than trying to get out in the rough parts of Chicago!

It's possible that I'm not well read and that there's a lot of great aggression research going on that studies these real problems. But mostly I see us running little experiments with just-significant results, or running survey designs that tell us something obvious and hopelessly confounded. Interviews and ethnography and field work seem to be for sociologists or criminologists, not psychologists.

What am I doing about it? Not much. For now, I'm doing my part by trying to test the convergent validity of our lab measures and see whether they actually agree with each other (preliminary answer: they don't). I often worry about my career, because I've never "discovered" some effect. You could do a decent job summarizing my last ten years as digging a deeper and deeper hole in what we think we already know, hoping to find some sort of bedrock we can build from. So far, I'm still shoveling: assessing publication bias, failing to replicate findings, criticizing too-good-to-be-true results, and trying to figure out whether our measures are at all valid and reliable.

I like the work that I do, and I think it's the best work I can do given my skills and resources and timeframe. But that work could be much more valuable if I could get out into the actual populations and environments that we're worried about. I had an RA with a connection at a maximum-security prison, but I wasn't able to pursue the lead aggressively enough and it slipped through my fingers. I'm not particularly smooth or adventurous, so I'm not enthusiastic about going into communities to understand gun violence. I'm pre-tenure, so what makes the most sense for me career-wise is to stick to doing more of the same research with college undergrads and MTurk workers. Maybe try to find some sort of eyebrow-raising lab effect that I can wildly extrapolate from.

I'm not sure what to recommend. As a field, we should probably recalibrate our expectations; we can't expect a scientist to make three or four noteworthy, generalizable discoveries a year. Getting actionable and generalizable psychological findings will probably require orders of magnitude more effort and investment. We can prepare psychological science for that investment by trying to improve the transparency and honesty of the process.

I'm gonna try to read more sociology and criminology. Maybe they know something we don't?

Monday, June 17, 2019

Comment on Chang & Bushman (2019): Effects of outlier exclusion

Recent research by Chang & Bushman (2019) reports how video games may cause children to be more likely to play with a real handgun. In this experiment, children participate in the study in pairs. They play one of three versions of Minecraft for 20 minutes. One version has no violence (control), another has monsters that they fight with swords (sword violence), and another has monsters that they fight with guns (gun violence). 

The children are then left to play in a room in which, hidden in a drawer, are two very real 9mm handguns. The handguns are disabled -- their firing mechanism has been taken out and replaced with a clicker that counts the number of trigger pulls. But these guns look and feel like the real thing, so one would hope that a child would not touch them or pull their triggers.

The authors report four study outcomes: whether the kid touches the gun, how long they hold the gun, how many times they pull the trigger, and how many times they pull the trigger while the gun is pointed at somebody (themself or the other kid).

I think it's an interesting paradigm. The scenario has a certain plausibility about it, and the outcome is certainly important. It must have been a lot of work to get the ethics board approval.


However, the obtained results depend substantially on the authors' decision to exclude two control-group participants for playing with the guns a great deal. I feel that this is an inappropriate discarding of data; without it, the results are not statistically significant.


Overinterpretation of marginal significance

The results section reports three marginally significant outcomes and one significant outcome:
  • "The difference [in handgun touching] across conditions was nonsignificant [...]" (p = .09)
  • "The gun violence condition increased time spent holding a handgun, although the effect was nonsignificant [...]" (p = .080)
  • "Participants in the gun violence condition pulled the trigger more times than participants in other conditions, although the effect was nonsignificant [...]" (p = .097)
  • "Participants in the violent game conditions pulled the trigger at themselves or their partner more than participants in the nonviolent condition." (p = .007)
These nonsignificant differences are overinterpreted in the discussion section, which begins: "In this study, playing a violent video game increased the likelihood that children would touch a real handgun, increased time spent holding a handgun, and increased pulling the trigger at oneself and others." I found this very confusing; I thought I had read the wrong results section. One has to dig into Supplement 2 to see the exact p values.

Exclusion of outliers

The distribution of the data is both zero-inflated and powerfully right-skewed. About half of the kids did not touch the gun at all, much less pull its trigger. The minority of kids who did pull the trigger tended to pull it many times. This is a noisy outcome, and difficult to model: ideally you would need a zero-inflated negative binomial regression with cluster-adjusted variances. The authors present a negative binomial regression with cluster-adjusted variances, ignoring the zero-inflation, which is fine enough by me since I can't figure out how to do all that at once either.
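To make the modeling concrete: the Code note at the end of this post mentions that I reproduced the analyses in R with MASS::glm.nb and clubSandwich. A minimal sketch of that kind of fit looks something like this; the data frame and column names are hypothetical stand-ins, not the authors' actual variables.

library(MASS)          # glm.nb: negative binomial regression
library(clubSandwich)  # cluster-robust variance estimation

# Hypothetical stand-in data: 100 pairs of children, three game conditions,
# and a zero-heavy, right-skewed count outcome (trigger pulls at a person)
set.seed(1)
n_pairs <- 100
dat <- data.frame(
  pair            = rep(seq_len(n_pairs), each = 2),
  condition       = rep(sample(c("control", "sword", "gun"), n_pairs, replace = TRUE), each = 2),
  pulls_at_person = rnbinom(2 * n_pairs, size = 0.3, mu = 2)
)

# Negative binomial model for the count outcome (zero-inflation ignored, as in the paper)
fit <- glm.nb(pulls_at_person ~ condition, data = dat)

# Cluster-robust (CR2) standard errors with small-sample corrections,
# clustering on pair because the two children in a pair are not independent
coef_test(fit, vcov = "CR2", cluster = dat$pair)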

Self-other trigger pulls outcome. The pair in red were excluded because the coders commented that they were acting unusually wild. The pair in green were excluded for having too high a score on the outcomes.

Noisy data affords many opportunities for subjectivity. The authors report: "We eliminated 1 pair who was more than 5 SDs from the mean for both time spent holding a handgun and trigger pulls [green pair].  The coders also recommended eliminating another pair because of unusual and extremely aggressive behavior [red pair]." The CONSORT flow diagram reveals that these four excluded subjects with very high scores on the dependent variables were all from the nonviolent control condition, in which participants were expected to spend the least time holding the gun and pulling its trigger. 

The authors tell me that the pair eliminated for unusual and extremely aggressive behavior was excluded on the coders' recommendation, blind to condition. That may be true, but the registration is generally rather vague and says nothing about excluding participants on coder recommendation.

The authors also tell me that the high-scoring pair was eliminated without looking at the results. That may be true as well, but I feel as though one could predict how this exclusion would affect the results.

This latter exclusion of the high-scoring pair is not acceptable to me. You can consider this decision in two ways: First, you can see that there are scores still more extreme in the other two conditions. With data this zero-inflated and skewed, it is no great feat to be more than 5 SDs from the mean. Second, you can look at the model diagnostics. The excluded outliers are not "outliers" in any model influence sense -- their Cook's distances are less than 0.2. (Thresholds of 0.5 or 1.0 are often suggested for Cook's distance.)
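As a concrete version of that diagnostic check, and continuing from the hypothetical glm.nb sketch above (same dat and fit objects), Cook's distances come straight out of R's glm machinery:

# Continuing from the hypothetical fit above (object `fit`)
cd <- cooks.distance(fit)

# The most influential observations, in Cook's-distance terms
round(sort(cd, decreasing = TRUE)[1:5], 3)

# Conventional rule-of-thumb thresholds mentioned in the text
which(cd > 0.5)   # "worth a closer look"
which(cd > 1.0)   # "clearly influential"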



Here are the nonzero values in log space, which is the scale on which the negative binomial model is fit. On a log scale, the discarded data points still do not look at all like outliers to me.

Revised results

If the high-scoring pair is retained for analysis, none of the results are statistically significant:
  • Touching the gun: omnibus F(2, 79.5) = 1.04, p = .359; gun-vs-control contrast p = .148.
  • Time holding gun: omnibus F(2, 79.5) = .688, p = .506; gun-vs-control contrast p = .278.
  • Trigger pulls at self or other: omnibus F(2, 79.4) = 1.80, p = .172; gun-vs-control contrast p = .098.
From here, adding the coder-suggested pair to the analysis moves the results further still from statistical significance.

If you're worried about the influence of the zero inflation and the long tail, a simpler way to look at the data might be to ask "is the trigger pulled at all while the gun is pointed at somebody?" After all, the difference between not being shot and being shot once is a big deal; the difference between being shot four times and being shot five times less so. Think of this as winsorizing all the values in the tail to 1. Then you could just fit a logistic regression and not have to worry about influence.
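Here is a sketch of that simpler analysis, again using hypothetical variable names and the same kind of stand-in data as in the sketch above, with clubSandwich supplying the cluster-robust standard errors:

library(clubSandwich)

# Dichotomize the count: did this child ever pull the trigger at a person?
dat$any_pull <- as.integer(dat$pulls_at_person > 0)

# Plain logistic regression for the yes/no outcome...
fit_bin <- glm(any_pull ~ condition, family = binomial, data = dat)

# ...with cluster-robust standard errors, because children come in pairs
coef_test(fit_bin, vcov = "CR2", cluster = dat$pair)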

Analyzed this way, there are 6 events in the control group, 10 in the sword-game group, and 13 in the gun-game group. The authors excluded four of these six control-group events as outliers. With these exclusions, there is a statistically significant effect, p = .029. If you return either pair to the control group, the effect is not statistically significant, p = .098. If you return both pairs to the control group, the effect is not statistically significant, p = .221.

I wish the authors and peer reviewers had considered the sensitivity of the results to the questionable exclusion of this pair. While these results are suggestive, they are much less decisive than the authors have presented them.

Journal response

I attempted to send JAMA Open a version of this comment, but their publication portal does not accept comment submissions. I asked to speak with an editor; the editor declined to discuss the article with me. The journal's stance is that, as an online-only journal, they don't consider letters to the editor. They invited me to post a comment in their YouTube-style comments field, which appears on a separate tab where it will likely go unread.

I am disturbed by the ease with which peer reviewers would accept ad hoc outlier exclusion and frustrated that the article and press release do little to present the uncertainty. It seems like one could get up to a lot of mischief at JAMA Open by excluding hypothesis-threatening datapoints.

Author response

I discussed these criticisms intensely with the authors as I prepared my concerns for JAMA Open and for this blog post. Dr. Bushman replied:

We believe that [the coder-suggested pair] was removed completely legitimately, although you are correct this was not documented ahead of time on the clinicaltrials.gov site. We believe [the high-scoring pair] should also have been excluded, but you do not. We acknowledge there may be honest differences of opinion regarding [the high-scoring pair]. 
As stated in our comment on JAMA Open, “Importantly, both pairs were eliminated before we knew how they would impact our analyses and whether their results would support our hypotheses.”
Again, I disagree that the removal of the high-scoring pair is merely a matter of honest differences of opinion. I don't see any justifiable criterion for throwing these data away, and one can anticipate how this removal would influence the analyses and results.

Conclusion

I was able to reproduce the results presented by Chang and Bushman (2019). However, those results seem to depend heavily on the exclusion of four of the six most aggressive participants in the nonviolent control group. The justification that these four participants are unusually aggressive does not seem tenable, given the low influence of these data points and the similarly aggressive participants retained in the other two conditions.

While I admire the researchers for their passion and their creative setup, I am also frustrated. I believe that researchers have an obligation to quantify uncertainty to the best of their ability. I feel that the exclusion of high-scoring participants from the control group serves to understate the uncertainty and facilitate the anticipated headlines. The sensitivity of the results to this questionable exclusion should be made clearer.



Code
See my code at https://osf.io/8jgrp/. Analyses reproduced in R using MASS::glm.nb for negative binomial regression with log link and clubSandwich for cluster-robust variance estimation. Data available upon request from the authors. Thanks to James Pustejovsky for making clubSandwich. Thanks to Jeff Rouder for talking with me about all this when I needed to know I wasn't taking crazy pills.

Sunday, April 15, 2018

Why I hate teaching the classics

I’m approaching the end of my first semester teaching Intro to Social Psychology. As someone who came of age during the peak of the replication crisis (Bem, Stapel, Reproducibility Project), studies publication bias, and has had a hard time finding statistically significant results, I generally have a dim view of big chunks of the literature. I was worried that we would have very little to talk about given all the uncertainty, but we’ve made a good semester of it by talking about the general ideas, their strengths and weaknesses, and the opportunities for a young scientist to contribute by addressing these uncertainties.

But this semester’s teaching has taught me one thing: I hate teaching the classics.
What makes the classics, and why do I hate teaching them? The studies that my textbooks present as classics tend to have a few common attributes, some desirable and others undesirable.
The desirable:

  1. They provide a useful summary of some broader theory.
  2. They are catchy or sticky in a way that makes them easy to remember and fun to talk about.
  3. The outcome is provocative and interesting.

The undesirable:
  1. The sample size is tiny.
  2. The p-values are either marginal or bizarrely good. 
  3. The outcome has little evidence of validity.
  4. Data from the classic study tend to predate strong tests of the theory by several decades. The strongest evidence tends to come later (if at all) when people have cleaned up the methods and run more studies (often in response to criticism).
My concern is that these qualities of classics give students the wrong idea about what makes for good psychological science, leading them to embrace the desirable attributes of these classics without considering the undesirable attributes.

Some classics that I’ve struggled with this semester:
Fredrickson et al. (1998): In this classic study on the harms of self-objectification, wearing a swimsuit (vs. a sweater) caused women (but not men) to do worse on a math test, N = 82, p = .051.
Pennebaker & Beall (1986): In this classic study on the benefits of self-expression, students who wrote about a traumatic experience enjoyed better health, N = 46, p = .055 for health center visits, p = .10 for sick days, p = .040 for total health problems.
Rosenthal & Jacobson (1968): In this classic study on how expectations shape outcomes, students labeled as “about to bloom” gained more IQ than other students. Unfortunately, the data are insane, with many students scoring well outside the range of the test and pre-post changes on the scale of hundreds of IQ points (see Snow, 1995; hat tip to Bob C-J).
Srull & Wyer (1979): In this classic study of how primes influence perceptions of others, primes influenced perceptions up to days later. Unfortunately, the data show an effect too insanely powerful to be true; in meta-analyzing this literature, DeCoster and Claypool (2004) estimate Srull & Wyer’s result as d = 5.7. (For reference, obvious effects like “men are taller than women” are in the range of d = 1.85; Simmons, Nelson, & Simonsohn, 2013.)
Festinger & Carlsmith (1959): In this classic study of cognitive dissonance, participants given a small bribe to say a boring task was fun changed their opinion of the boring task. Unfortunately, the published results contain a number of GRIM errors.
This isn’t to say that the classics are bad science, especially for their time. My concern is that their evidence is much weaker than one might expect given their status as classics. It sometimes makes me feel like I am teaching the history of psychology instead of the science of psychology, as though knowing about the peg-turning experiment is supposed to stand in for some greater knowledge.
Figure 1. Me and my fellow troublemakers (periphery) complaining about a classic study (center).
What’s the problem?
My concern is that these classics set a bad example for young scientists and do not prepare them to think about science according to modern standards. According to these classics, one collects a little data on a new, untested method, and so long as the p-value isn’t too far from significance, you can make an argument about how the mind works. If your idea is catchy enough, the citations will roll in forever, and few will talk about the weaknesses of the evidence. Like Daryl Bem said in his recent interview with Dan Engber, “I’m all for rigor, but I prefer other people do it. […] If you looked at all my past experiments, they were always rhetorical devices. I gathered data to show how my point would be made.” 
This isn’t to say that the theories proposed by these classics are necessarily wrong. It’s just hard to teach these originals while talking about how weak that one study is. Discrediting one operationalization may unjustifiably discredit the broader idea. Maybe the whole Festinger & Carlsmith peg-turning, subject-bribing method is bunk, but cognitive dissonance is so much stronger and broader an idea that it seems impossible to discard. In that light, is it really important how Festinger & Carlsmith did it? Couldn’t we cite instead something that demonstrates the core idea with a little more refinement or rigor?
In the “Creativity and Rigor” episode of The Black Goat, Sanjay, Simine, and Alexa talk about the problem of framing creativity and rigor as enemies. This framing sets science up as some sort of battle between the creative, idea-generating geniuses and the rigorous, pencil-pushing critics. It doesn’t have to be this way, they argue -- the goals of rigor and creativity are aligned. To test interesting ideas in useful ways will require both rigor and creativity.
It’s my concern that teaching these cool-idea, weak-evidence studies as the classics may lead students to value creativity without rigor. When we canonize these early studies, we honor them for their interesting ideas or provocative manipulations, but we overlook all their weaknesses in sample size and measurement validity.
Figure 2. A brilliant idea occurs to a psychologist in 1972. The psychologist will demonstrate its truth in a sample of 28 undergraduates with a p-value of .063, an event which will be remembered by textbooks forever.

What should we do?
I would like to see more textbooks credit both the original idea and some of the stronger methods and samples. In this way, we could teach both the origin of the theory and the best science involved in testing that theory. If newer, stronger data is not available, this should be made clear as a weakness of the literature and an opportunity for students to do their own studies.
This is probably not easy to do. The classics have a lot of momentum and citations, which makes them easy to discover. Finding these newer, more rigorous studies and writing them up for textbooks will be more work. I think it will be worth it. It will help communicate our values as a science to students, and it will give more credit and more attention to psychology as an empirical science, not just a system for the generation of cool ideas.

Wednesday, December 13, 2017

How to Play a Prediction Market

The prediction market is a way to try to assign probabilities to events. Bettors buy YES bets on things they think are likely to happen (relative to the market price) and NO bets on things they think are unlikely to happen (relative to the market price). Market dynamics lead the market price to settle on what is, across the bettors, the best subjective probability of the event. This is useful if you are trying to assign probabilities to one-off future events.

In this post, I'll teach you how to place your bets so as to get the largest possible payout. In so doing, you'll do more to calibrate the market to your predictions.

Let's get ready to corner the replication market!

How does a prediction market work?

A prediction market allows people to bet YES or NO on some outcome. As people bet that the outcome will happen, the price of YES shares increases. As people bet that the outcome won't happen, the price of YES shares falls.

The market price for a YES share is p, the probability of the outcome. The market price for a NO share is (1-p). If the event happens, all the YES shares pay out $1 each and the NO shares become worthless. If the event does not happen, all the NO shares pay out $1 each and the YES shares become worthless.

The probability of rolling a six is 1/6, so we should be willing to pay up to $1/6 for YES or $5/6 for NO.

Imagine we are betting that a roll of a six-sided die will yield a six. The probability of this is 1/6, or about 17 percent. YES shares will cost 17 cents and NO shares will cost 83 cents. With five dollars, you could buy 30 YES shares or 6 NO shares.

Your expected payout is the number of shares times the probability. In the die example, since the market price is correct, your expected value is five dollars whether you buy YES or NO. For YES shares, 30 shares * (1/6 payout chance) = $5. For NO shares, 6 shares * (5/6 payout chance) = $5.

If the market price is wrong, you have a chance to make a profit. Suppose we are still betting on the die, but for some reason the market price is set at 10 cents for a YES share. We know that the probability of the die rolling six is greater than this, so with our five dollars we can buy 50 shares with an expected value of 50 shares * (1/6 payout chance) = $8.33. This is a profit of $3.33. Another way to look at this is that it's a profit of about seven cents per share, the difference between the wrong market price (.10) and the true probability (.167).

But if the market price is wrong, and we are wrong with it, we will lose money. Buying NO shares at this price will turn our five dollars into 5.55 shares * (5/6 payout chance) = $4.62, a long-run loss of 38 cents.
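If it helps to see that arithmetic spelled out, here is the same die example as a few lines of R (nothing here is specific to any real market; the prices and probabilities are just the ones from the example above):

# Expected value of a position: the shares your budget buys, times the payout probability
expected_value <- function(budget, price, p_payout) {
  shares <- budget / price
  shares * p_payout
}

p_six <- 1 / 6

# Correctly priced market: YES and NO have the same expected value
expected_value(5, price = p_six,     p_payout = p_six)      # about $5.00
expected_value(5, price = 1 - p_six, p_payout = 1 - p_six)  # about $5.00

# Mispriced market: YES trades at $0.10 even though the true probability is 1/6
expected_value(5, price = 0.10, p_payout = p_six)           # about $8.33 -- a $3.33 profit
expected_value(5, price = 0.90, p_payout = 1 - p_six)       # about $4.63 -- a long-run loss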

The Big Picture of the Big Short


As we covered above, playing the prediction market isn't simply about buying YES on things you think will replicate and NO on things you think won't replicate. Otherwise, we would just buy NO shares on the die rolling six because we know it's unlikely relative to the die not rolling six. It's about evaluating the probability of those replications. Your strategy in a betting market should be to look for those opportunities where there is a difference between the market price and the probability that you'd assign to that event.

If the market is completely correct, it shouldn't matter what you buy -- your 50 tokens will have an expected value of $50. In our die example above, when the market price was right, YES and NO shares had the same expected value. But if the market is wrong, you have a chance to beat the market, turning your 50 tokens into several times their value.

In order to beat the market, you have to find places where the market price is miscalibrated. Maybe something is trading at 40% when it only has a 20% chance to replicate, in your view. If you are right, each NO share you buy will cost 60 cents but have an expected value of 80 cents. But if you are wrong, you will pay more for the shares than they are truly worth, getting a poorer return on your 50 tokens than had you just spread them across the market.

Below is my four-step process for turning your predictions into the largest possible payoff.

1. Evaluate your prices.

Before the market started, I wrote down my estimates of what would or wouldn't replicate. I assigned probabilities to these studies indicating what chance I thought they had of replicating.

Coming up with these estimates is the basis of the replication market. I ended up focusing on the things I thought wouldn't replicate. Some studies were a priori deeply implausible, others had weak p-values, some had previous failures to replicate, and some had a combination of factors. These were studies I felt pretty confident wouldn't replicate, and so I priced them at about 10% (2.5% chance of Type I error + 7.5% chance of true effect).

A peek at my spreadsheet, comparing my subjective probabilities to the market prices.

Some other studies seemed more likely to replicate, so I was willing to price them in the 50-80% range. I was less certain about these, so I saw these as riskier purchases, and tended to invest less in them.

It's also useful to remember the context of the last prediction market. In that market, the prices were much too high. Nothing below 40% replicated, and the highest-priced study (88%) also failed to replicate. The lowest price on a successful replication was about 42%.

2. Buy and sell to your prices.

To make profit on the replication market, you have to spend your money where you think the market price is most miscalibrated. Something that the market thinks is a sure thing (95%) that you think will flop (5%) would be a massive 90-cent profit per share. Something that seems reasonable (50%) that the market is afraid won't replicate (15%) could be a nice little profit of 35 cents per share.

I made a spreadsheet of my prices and the market's prices. I added a column representing the difference between those prices. The largest absolute difference indicates where I would expect the greatest profit per share.

If the difference is negative, then buy NO shares. Suppose something is trading at 50%, but you think it has only a 15% chance of replicating. You can buy NO shares for 50 cents that you think are worth 85 cents -- a 35 cent profit per share.

If the difference is positive, then buy YES shares. If something is trading at 50%, and you think it has a 75% chance of replicating, then every YES share costs 50 cents but is worth 75 cents.
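Here's roughly what that spreadsheet logic looks like in R. The study names and probabilities below are made up for illustration; the point is just the difference column and the YES/NO rule.

# Made-up example: my subjective replication probabilities vs. market prices
portfolio <- data.frame(
  study  = c("Study A", "Study B", "Study C"),
  mine   = c(0.15, 0.75, 0.50),   # my subjective probability of replication
  market = c(0.50, 0.50, 0.55)    # current market price of a YES share
)

# Positive difference: YES looks cheap. Negative difference: NO looks cheap.
portfolio$diff <- portfolio$mine - portfolio$market
portfolio$side <- ifelse(portfolio$diff > 0, "YES", "NO")

# If my probabilities are right, the expected profit per share is the absolute difference
portfolio$profit_per_share <- abs(portfolio$diff)

# Rank the bets by expected profit per share
portfolio[order(-portfolio$profit_per_share), ]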

Overly optimistic market prices meant that I placed most of my bets on certain studies not replicating.
Again, you only profit when you are right and the market is wrong. Look for where there is juice!

3. Diversify your portfolio

If you want to ensure a decent payout, it may make sense to spread your money around. Suppose there is a study priced at 50% chance of replicating, but you know the true chance of replication is 80%. If you're right, putting all 50 tokens on this one study has an 80% chance of earning you $100, but a 20% chance of earning you $0. Your expected value is $80, a nice $30 profit, but there's a lot of variability.

Payout:      $100     $0
Frequency:    80%    20%
EV = $80; SD = $41

By diversifying your bets, you can reduce the variability at the cost of reducing your expected value slightly. Consider if we divide your bets across two options, one with a slightly worse profit margin. Let's say Study 1 is priced at 50% but is worth 80%, and Study 2 is priced at 65% but is worth 75%. By putting half our money into Study 2, we reduce our average profit, but we also reduce the likelihood of suffering a blowout.

Payout:      $88    $50    $38     $0
Frequency:   60%    20%    15%     5%
EV = $70; SD = $26
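If you want to check those numbers, the payout distribution is easy to enumerate in R, assuming the two replication outcomes are independent:

# Study 1 is priced at 50% but worth 80%; Study 2 is priced at 65% but worth 75%.
# Put $25 on YES for each study; shares = stake / price.
shares1 <- 25 / 0.50   # 50 shares of Study 1 YES
shares2 <- 25 / 0.65   # about 38 shares of Study 2 YES

# Enumerate the four joint outcomes (replicate = 1, fail = 0)
outcomes <- expand.grid(rep1 = c(1, 0), rep2 = c(1, 0))
outcomes$payout <- outcomes$rep1 * shares1 + outcomes$rep2 * shares2
outcomes$prob   <- ifelse(outcomes$rep1 == 1, 0.80, 0.20) *
                   ifelse(outcomes$rep2 == 1, 0.75, 0.25)

ev        <- sum(outcomes$prob * outcomes$payout)                 # about $69
sd_payout <- sqrt(sum(outcomes$prob * (outcomes$payout - ev)^2))  # about $26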

In the recent market, for example, Sparrow, Liu, and Wegner tended to trade at 55%, whereas I thought it was worth about 15%. Although this 40-cent gap would have been my biggest profit-per-dollar, I felt it was too risky to put everything on this study, so I balanced it against other studies with smaller profit margins.

4. Day trading

As other people show up to the market and start twiddling their bets around, the market prices will change. The market may move towards some of your predictions and away from others. If you like to procrastinate by watching the market, you can leverage your bets for a higher potential payout.

Figure 1. You hold NO shares of Studies 1 and 2, which the market has evaluated at 35% (bars) but you think have only a 10% chance of replicating (dashed line). Each share represents 25 cents of profit to you.

Let's say you think Study 1 and Study 2 each have a 10% chance of replicating. You bought 30 shares each of Study1 NO and Study2 NO for 65 cents a share (35% chance to replicate). You see each of these shares as representing a 25-cent profit (Figure 1).

Figure 2. The market has shifted such that your Study1 NO shares are worth more and your Study2 NO shares are worth less. If you are ready to be aggressive, you can sell your Study1 NO shares to take advantage of cheaper Study2 NO shares.

Some time passes, and now the market has agreed with you on Study1, dropping its probability to 20%, but it disagrees with you on Study2, raising the probability to 45% (Figure 2). The shares of Study 1 you're holding have already realized 15 cents per share of profit. The shares of Study 2 you're holding have lost 10 cents a share, but if you are right, then you can keep buying these shares at 55 cents when you think they are worth 90 cents.

Since the Study 1 shares have already realized their value, you can sell the Study1 NO shares to buy more cheap shares of Study2 NO. If the market fluctuates again, you can sell your expensive shares to pick up more cheap shares and so on and so on.
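In R, the rebalancing logic from this example looks something like the following (the beliefs and prices are just the ones from Figures 1 and 2, nothing more):

# My belief: each study has a 10% chance of replicating, so a NO share is worth $0.90 to me
worth_no <- 1 - 0.10

# Current NO prices after the market moves (from the example above)
price_no <- c(study1 = 1 - 0.20, study2 = 1 - 0.45)   # $0.80 and $0.55

# Remaining expected profit per NO share, if my probabilities are right
worth_no - price_no   # $0.10 for Study 1, $0.35 for Study 2

# Selling 30 Study1 NO shares at $0.80 frees up $24, which buys about 43 Study2 NO
# shares, each carrying about $0.35 of expected profit instead of about $0.10
(30 * price_no["study1"]) / price_no["study2"]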

I watched the market and kept comparing the prices against my predictions. When one of my NO bets started to cap out (e.g., Gervais and Norenzayan reached 15%), I would sell my NO bets and reinvest them in another cheaper NO bet (e.g., buying NO on Kidd and Castano at 40%). Sometimes some poor credulous soul (or somebody fumbling with the GUI) would buy a bunch of YES bets on Ackerman, driving the price way up (e.g., to 45%). When this would happen, I'd sell all my current bets to take advantage of the opportunity of cheap Ackerman NO bets.

It can be tempting to try to play the market, moving your tokens around to try to catch where other people will move tokens. I don't think there's much use in that. There aren't news events to influence the prediction market prices. Just buy your positions and hold them. If the market disagrees with you, you may consider doubling down on your bets now that they are cheaper. If the market agrees with you, you can release those options to invest in places where the market disagrees with you.

Summary

To make the biggest profits, you have to beat the market. To do this, you must: (1) Make good estimates of the probability to replicate. (2) Find the places where the market price is most divergent from what probability you would assign the study. (3) Spread your bets out across a number of studies to manage your risk. (4) Use day trading to take advantage of underpriced shares and increase your total leverage.

Friday, December 1, 2017

Adventures programming a Word Pronunciation Task in PsychoPy

I'm a new assistant professor trying to set up my research laboratory. I thought I'd try making the jump to PsychoPy as a way to make my materials more shareable, since not everybody will have a $750+ E-Prime or DirectRT license or whatever. (I'm also a tightwad.)

My department has a shared research suite of cubicles. Those cubicles are equipped with Dell Optiplex 960s running Windows 7. I'm reluctant to try to upgrade them since, as shared computers, other members of the department have stuff running on them that I'm sure they don't want to set up all over again.

In this process, I ran into a couple of bugs on these machines that I hadn't encountered while developing the tasks on my Win10 Dell Optiplex 7050s. These really made life difficult. I spent a lot of time wrangling with these errors, and I experienced a lot of stress wondering whether I'd fix them in five minutes or five months.

Here for posterity are the two major bugs I'd encountered and how they were resolved. I don't know anything about Python, so I hope these are helpful to the equally clueless.

"Couldn't share context" error

Initially, PsychoPy tasks of all varieties were crashing on startup. Our group couldn't even get the demos to run. The error message said pyglet.gl.ContextException: Unable to share contexts.


Didn't fix it:

Apparently this can be an issue with graphics drivers on some machines. Updating my drivers didn't fix the problem, perhaps in part because the hardware is kind of old.

Solution:

This error was resolved by specifying an option for pyglet. I used PsychoPy's Builder View to compile the task. This made a file called Task.py. I opened up the .py file with notepad / wordpad / coder view / code writer and added the two pyglet lines shown below to the top of the script:

from __future__ import absolute_import, division
# Trying to fix pyglet 'shared environment' error
import pyglet
pyglet.options['shadow_window']=False
# script continues as normal
from psychopy import locale_setup, sound, gui, visual, core, data, event, logging
from psychopy.constants import (NOT_STARTED, STARTED, PLAYING, PAUSED,
                                STOPPED, FINISHED, PRESSED, RELEASED, FOREVER)
This fixed my "Couldn't share context" error. If you're having trouble with "couldn't share context", consider opening up your .py file and adding these two lines just underneath from __future__ import.

Portaudio not initialized error

My Word Pronunciation Task requires the use of a microphone to detect reaction time. Apparently this was a simple task for my intellectual ancestors back in the 1990s -- they were able to handle this using HyperCard, of all things! But I have lost a lot of time and sleep and hair trying to get microphones to play nice with PsychoPy. It's not a major priority for the overworked developers, and it seems to rely on some other libraries that I don't understand.

Trying to launch my Word Pronunciation Task led to the following error: "PortAudio not initialized [...] The Server must be booted! [...] Need a running pyo server."


This was fixed by changing Windows' speaker playback frequency from 48000 Hz to 44100 Hz.


Right click on the Volume icon in the taskbar and open up "Playback devices."

Right click on your playback device and click "Properties."

Under the "Advanced" tab, switch the audio quality from a 48000Hz sampling rate (which Portaudio doesn't like) to a 44100 Hz sampling rate (which Portaudio does like, apparently).

This strangely oblique tweak was enough to fix my Portaudio problems.

Now that I can use all these computers, I'm looking forward to scaling up my data collection and getting this project really purring!


Thanks to Matt Craddock and Stephen Martin for help with the "shared context" bug. Thanks to Olivier Belanger for posting how to fix the Portaudio bug.


Thursday, June 22, 2017

Overestimation of violent-game effects

At long last, our article "Overstated Evidence for Short-Term Effects of Violent Games on Affect and Behavior: A Reanalysis of Anderson et al. (2010)" is released from its embargo at Psychological Bulletin. (Paywalled version here.)

In this paper, Chris Engelhardt, Jeff Rouder, and I re-analyze the famous Anderson et al. (2010) meta-analysis on violent video game effects. At the time, this meta-analysis was hailed by some as "nailing the coffin shut on doubts that violent video games stimulate aggression" (Huesmann, 2010). It is perhaps the most comprehensive and most-cited systematic review of violent-game research.

The authors conclude that, across experimental, cross-sectional, and longitudinal research designs, the recovered literature indicates significant effects of violent games on aggressive thoughts, feelings, and behaviors. Effects are moderate in size (r = ~.2).

Our paper challenges some of the conclusions from that paper. Namely,

  • The original authors reported that there was "little evidence of selection (publication) bias." We found, among some sets of experiments, considerable evidence of selection bias.
  • The original authors reported that better experiments found larger effects. We found instead that selection bias may be stronger among the "best" experiments.
  • The original authors reported short-term effects on behavior of r = .21, a highly significant result of medium size. We estimated that effect as being r = .15 at the most and possibly as small as r = .02.

We do not challenge the results from cross-sectional or longitudinal research. The cross-sectional evidence is clear: there is a correlation between violent video games and aggressive outcomes, although this research cannot demonstrate causality. There is not enough longitudinal research to try to estimate the degree of publication bias, so we are willing to take that research at its word for now. (Besides, an effect of hundreds of hours of games over a year is more plausible than an effect of a single 15-minute game session.)

Signs of selection bias in aggressive behavior experiments

With regard to short-term effects on aggressive behavior, the funnel plot shows some worrying signs. Effect sizes seem to get smaller as the sample size gets larger. There is a cluster of studies that fall with unusual accuracy in the .01 < p < .05 region. And when filtering for the "best practices" experiments, nearly all the nonsignificant results are discarded, leaving a starkly asymmetrical funnel plot. See these funnel plots from experiments on aggressive behavior:

When filtering for what the original authors deemed "best-practices" experiments, most null results are discarded. Effect sizes are reported in Fisher's Z, with larger effects on the right side of the x-axis. The average effect size increases, but so does funnel plot asymmetry, indicating selection bias. Studies fall with unusual regularity in the .01 < p < .05 region, shaded in dark grey.
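If you want to run this kind of check on a meta-analytic dataset of your own, metafor makes the funnel plot and an Egger-type asymmetry test a few lines of work. The sketch below uses a built-in metafor example dataset converted to Fisher's z, just to show the mechanics; it is not the Anderson et al. (2010) data.

library(metafor)

# Example data shipped with metafor (dat.molloy2014: correlations and sample sizes),
# converted to Fisher's z effect sizes (yi) and sampling variances (vi)
dat <- escalc(measure = "ZCOR", ri = ri, ni = ni, data = dat.molloy2014)

res <- rma(yi, vi, data = dat)   # random-effects meta-analysis
funnel(res)                      # funnel plot: effect size against standard error
regtest(res)                     # Egger-type regression test for funnel asymmetry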


The p-curve doesn't look so hot either:
P-curve of experiments of aggressive behavior coded as "best-practices". The curve is generally flat. This suggests either (1) the null is true or (2) the null is false but there is p-hacking.


Where naive analysis suggests r = .21 and trim-and-fill suggests r = .18, p-curve estimates the effect as r = .08. Let's put that in practical terms. If Anderson and colleagues are right, a good experiment needs 140 participants for 80% power in a one-tailed test. If p-curve is right, you need 960 participants. 
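Those sample sizes come from a standard power analysis for a correlation; you can check them with the pwr package:

library(pwr)

# Total N for 80% power with a one-tailed alpha of .05
pwr.r.test(r = 0.21, power = 0.80, sig.level = 0.05, alternative = "greater")  # about 139 participants
pwr.r.test(r = 0.08, power = 0.80, sig.level = 0.05, alternative = "greater")  # about 965 participants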

Given that 4 out of 5 "best-practices" studies have fewer than 140 participants, I suspect that we know very little about short-term causal effects of violent games on behavior.

Reply from Kepes, Bushman, and Anderson

You can find a reply by Kepes, Bushman, and Anderson here. They provide sensitivity analyses by identifying and removing outliers and by applying a number of other adjustments to the data: random-effects trim-and-fill, averaging the five most precise studies, and a form of selection modeling that assumes certain publication probabilities for null results.

They admit that "selective publishing seems to have adversely affected our cumulative knowledge regarding the effects of violent video games." However, they conclude that, because many of their adjustments are not so far from the naive estimate, the true effects are probably only modestly overstated. In their view, the lab effect remains theoretically informative.

They do a fine job of it, but I must point out that several of their adjustments are unlikely to fully account for selection bias. We know that trim-and-fill doesn't get the job done. An average of the five most precise studies is also unlikely to fully eliminate bias. (In our preprint, we looked at an average of the ten most precise studies and later dropped it as uninteresting. You shed only a little bias but lose a lot of efficiency.)

I know less about the Vevea and Woods selection model they use. Still, because it uses a priori weights instead of estimating them from the data, I am concerned it may yet overestimate the true effect size if there is p-hacking or if the selection bias is very strong. But that's just my guess.

Summary

I am deeply grateful to Psychological Bulletin for publishing my criticism. It is my hope that this is the first of many similar re-analyses increasing the transparency, openness, and robustness of meta-analysis. Transparency opens the black box of meta-analysis and makes it easier to tell whether literature search, inclusion/exclusion, and analysis were performed correctly. Data sharing and archival also allows us to apply new tests as theory or methods are developed.

I am glad to see that we have made some progress as a field. Where once we might have debated whether or not there is publication bias, we can now agree that there is some publication bias. We can debate whether there is only a little bias and a medium effect, or whether there is a lot of bias and no effect. Your answer will depend somewhat on your choice of adjustment model, as Kepes et al. make clear.

To that end, I hope that we can start collecting and reporting data that does not require such adjustment. Iowa State's Douglas Gentile and I are preparing a Registered Replication Report together. If we find an effect, I'll have a lot to think about and a lot of crow to eat. If we don't find an effect, we will need to reevaluate what we know about violent-game effects on the basis of brief laboratory experiments.

Tuesday, May 30, 2017

Trim-and-fill just doesn't work

The last couple of years have seen an exciting explosion in new techniques for detecting and adjusting for publication bias. If you're on the cutting edge of meta-analysis, you can now choose between p-curve, p-uniform, PET, PEESE, PET-PEESE, Top-10, and selection-weight models. If you're not on the cutting edge, you're probably just running trim-and-fill and calling it a day.

Looking at all these methods, my colleagues and I got to wondering: Which of these methods work best? Are some always better than others, or are there certain conditions under which they work best? Should we use p-curve or PET-PEESE? Does trim-and-fill work at all?

Today Evan Carter, Felix Schonbrodt, Will Gervais, and I have finished an exciting project in which we simulated hundreds of thousands of research literatures, then held a contest between the methods to see which does the best at recovering the true effect size.

You can read the full paper here. For this blog post, I want to highlight one finding: that the widely-used trim-and-fill technique seems to be wholly inadequate for dealing with publication bias.

https://www.youtube.com/watch?v=8DlJUrPtm3I
One of the outcomes we evaluated in our simulations was mean error, or the bias. When statistically significant results are published and non-significant results are censored, doing a plain-vanilla meta-analysis is gonna give you an estimate that's much too high. To try to handle this, people use trim-and-fill, hoping that it will give a less-biased estimate.

Unfortunately, trim-and-fill is not nearly strong enough to recover an estimate of zero when the null hypothesis is true. In terms of hypothesis testing, then, meta-analysis and trim-and-fill seem hopeless -- given any amount of publication bias, you will conclude that there is a true effect.

In the figure here I've plotted the average estimate from plain-vanilla random-effects meta-analysis (reMA) and the average estimate from trim-and-fill (TF). I've limited it to meta-analyses of 100 studies with no heterogeneity or p-hacking. Each facet represents a different true effect size, marked by the horizontal line. As you go from left to right, the number of studies forced to be statistically significant ranges from 0% to 60% to 90%.

As you can see, when the null is true and there is moderate publication bias, the effect is estimated as d = 0.3. Trim-and-fill nudges that down to about d = 0.25, which is still not enough to prevent a Type I error rate of roughly 100%.

Indeed, trim-and-fill tends to nudge the estimate down by about 0.05 regardless of how big the true effect or how strong the publication bias. Null, small, and medium effects will all be estimated as medium effects, and the null hypothesis will always be rejected.
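You can see the problem for yourself by simulating a censored literature under the null and watching what trim-and-fill does with it. The sketch below uses metafor; the simulation settings are quick stand-ins of my own, not the ones from our paper.

library(metafor)
set.seed(123)

# Simulate k two-group studies with true effect d; force a proportion of them
# to come out statistically significant in the predicted direction (publication bias)
simulate_biased_meta <- function(k = 100, d = 0, n = 40, prop_forced_sig = 0.6) {
  yi <- vi <- numeric(k)
  for (i in seq_len(k)) {
    force_sig <- i <= k * prop_forced_sig
    repeat {
      x <- rnorm(n, mean = 0)
      y <- rnorm(n, mean = d)
      tt <- t.test(y, x)
      if (!force_sig || (tt$p.value < .05 && mean(y) > mean(x))) break
    }
    yi[i] <- (mean(y) - mean(x)) / sqrt((var(x) + var(y)) / 2)  # Cohen's d
    vi[i] <- 2 / n + yi[i]^2 / (4 * n)                          # approximate sampling variance of d
  }
  data.frame(yi, vi)
}

sim <- simulate_biased_meta(d = 0)   # the null is true
re  <- rma(yi, vi, data = sim)       # naive random-effects estimate (biased upward)
tf  <- trimfill(re)                  # trim-and-fill "adjustment"
round(c(naive = as.numeric(coef(re)), trimfill = as.numeric(coef(tf))), 2)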

Our report joins the chorus of similar simulations from Moreno et al. (2009) and Simonsohn, Nelson, and Simmons (2014) indicating that trim-and-fill just isn't up to the job.

I ask editors and peer reviewers everywhere to stop accepting trim-and-fill and fail-safe N as publication bias analyses. These two techniques are quite popular, but trim-and-fill is too weak to adjust for any serious amount of bias, and fail-safe N doesn't even tell you whether there is bias.

For what you should use, read our preprint!!