
Thursday, October 15, 2020

Fraud and Erroneous Judgment: Varieties of Deception in the Social Sciences (1995)

Killing time in the UChicago stacks in the summer of 2019, I found a book from 1995 called Fraud and Erroneous Judgment in the Social Sciences. It's been an interesting read, because despite having been written nearly 25 years ago, much of it reads like it was written today. Specifically, there is very little substance about actually preventing, detecting, or prosecuting fraud, presumably because all these things are very difficult to do. 

Instead, a substantial portion of the text is dedicated to the easier task of fighting the culture war. Nearly half the book consists of polemics from scientists who think their ability to speak hard truths about sexual assault or intelligence or race or whatever has been suppressed by the bleeding hearts. This is particularly depressing and unhelpful when you see that two of the thirteen chapters are written by Linda Gottfredson and J. Philippe Rushton, scientists receiving funding from the Pioneer Fund, an organization founded to study and promote eugenics.



Fraud...

For a text that is notionally about fraud, there is very little substance about actual fraud. Instead, most of the chapters are dedicated to the title's other subject, erroneous (or "fallible") judgment. Only three instances of research misconduct in psychology are discussed. Two of them appear in brief bullet points in the first chapter: in one, a psychologist fabricated data to demonstrate the efficacy of a drug for preventing self-harm in the mentally disabled; in the other, a researcher may have massaged his data to overstate the potential harms of low levels of lead exposure.

The third case consists of the allegations surrounding Cyril Burt, an early behavior geneticist who argued that intelligence was heritable and demonstrated this through studies of the similarity of identical twins raised apart.

Burt was unpopular at the time because the view that intelligence was heritable sounded to many like Nazi ideology. While he was alive, people protested him as a far-right ideologue. (Other hereditarians experienced similar treatment; Hans Eysenck reportedly needed bodyguards after arguing in 1971 that some of the Black-White intelligence gap was genetic in origin.)

Five years after his death, allegations arose that Burt had invented a number of his later samples. These allegations claimed that Burt, having found an initial sample that supported his hypothesis, and frustrated both by public resistance to his findings and by the challenge of finding more identical twins raised apart, decided to help the process along by fabricating data from additional twin pairs. The key evidence: his heritability coefficient remained exactly .77 as the sample grew from 15 twin pairs to 53 twin pairs, whereas parameter estimates usually change at least a little as new data come in. He was further alleged to have made up two research assistants, but these assistants were later found. Complicating matters further, his housekeeper burnt all his research records shortly after his death (!), purportedly on the advice of one of Burt's scientific rivals (?!?).
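
To build intuition for how suspicious that stability is, here's a minimal simulation (my own sketch, not anything from the book) of how much a correlation estimated from 15 pairs typically moves once the sample grows to 53:

import numpy as np

rng = np.random.default_rng(1995)
true_r = 0.77
cov = [[1.0, true_r], [true_r, 1.0]]

# Simulate many replications of Burt's situation: estimate the twin
# correlation at n = 15 pairs, then again after the sample grows to 53.
changes = []
for _ in range(10_000):
    pairs = rng.multivariate_normal([0.0, 0.0], cov, size=53)
    r_15 = np.corrcoef(pairs[:15, 0], pairs[:15, 1])[0, 1]
    r_53 = np.corrcoef(pairs[:, 0], pairs[:, 1])[0, 1]
    changes.append(abs(r_53 - r_15))

# The estimate typically moves by several hundredths; staying pinned
# at .77 across both samples would be a remarkably lucky draw.
print(f"median shift in r: {np.median(changes):.3f}")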

Burt sounds like a real horse's ass. In a separate book, Cyril Burt: Fraud or Framed?, Hans Eysenck reports that Burt would sometimes sock-puppet, writing articles according to his own views, then leaving his name off of the work and handing it off to a junior researcher, giving the impression that some independent scholar shared his view. Burt purportedly went one further by editing articles submitted to his journal, inserting his own stances and invective into others' work and publishing it without their approval.

Two chapters in Fraud and Erroneous Judgment are devoted to the Burt affair. The first chapter, written by Robert B. Joynson, argues that, strictly speaking, you can't prove he committed fraud. Probably we will never know. Burt is dead and his records destroyed. Even if he made up the data, the potentially made-up data are at least consistent with what we believe today, so maybe it doesn't matter.

The other, written by the late J. Philippe Rushton, one-time head of the Pioneer Fund, argues more stridently that Burt was framed. On this account, the various social justice warriors and bleeding hearts of today's -- er, the 1970s' -- hyper-liberal universities couldn't bear the uncomfortable truths Burt preached. Rather than refute Burt's ideas in the arena of logic and facts and science, they resorted to underhanded callout-culture tactics to smear him after his death and spoil his legacy.

So in the only involved discussion of an actual fraud allegation in this 181-page book, all that can be said is "maybe he did, or maybe he didn't."

Some material is useful. Chapter 3 recognizes that scientific fraud is a human behavior that is motivated by, and performed within, a social system. One author theorizes that fraud is most often committed under three conditions: 1) there is pressure to publish, whether to advance one's career or to refute critics; 2) the researcher thinks they already know the answer, so that actually doing the experiment seems unnecessary; and 3) the research area involves enough stochastic variability that a failure to replicate can be shaken off as Type I error or hidden moderators. It certainly sounds plausible, but I wonder how useful it is. Most research fulfills all three conditions: all of us are under pressure to publish, all of us have a theory or two to suggest a "right" answer, and all of us experience sampling error and meta-uncertainty.

One thing that hasn't changed one bit is that demonstrating fraud requires demonstrating intent, which is basically impossible. Then and now, people instead have to couch concerns in the language of error, presuming sloppiness instead of malfeasance. Even then, it's not clear what level of sloppiness crosses the threshold between error and misconduct.

...and Erroneous Judgment

The other cases all concern "erroneous judgment". They reflect ideologically biased interpretations of data, a lack of scientific rigor, or an excessive willingness to be fooled. These cases vary in their seriousness. At the extremely harmful end, there is a discussion of recovered-memory therapy, which involves helping patients recover memories of childhood abuse through a process indistinguishable from the one you would use to create a false memory. Chillingly, recovered memories became permissible as court evidence in 15 states and led to a number of false accusations and possibly convictions during the Satanic Panic of the 1980s. At the less harmful end, there's an argument about whether the Greeks made up their culture by copying off the Egyptians. Fun to think about maybe, but nobody is going to jail over that.

Other examples include the exaggeration of societal problems in order to drum up support for research and advocacy. Neil Gilbert illustrates how moral entrepreneurs can extrapolate from sloppy statistical work, small samples, and bad question wording to estimate that 100 billion children are abducted every 3.7 seconds. This fine example is, however, paired with a criticism of feminism and research on sexual assault that has aged poorly; the author's argument boils down to "c'mon, sexual assault can't be that common, right?" Maybe it can be, Neil.

According to the authors, these cases of fallible judgment are caused by excessive enthusiasm rather than deliberate intention to deceive. Therapists dealing in recovered memories are too excited to root out satanic child-abuse cults, too ignorant of the basic science of memory, and too dependent on the perceived efficacy of their practice to know better. Critics of the heritability of IQ are blinded by political correctness and "the egalitarian hoax" of blank-slate models of human development. Political correctness is cited as influencing "fallible judgments" as diverse as the removal of homosexuality from the DSM (and its polite replacement with other diagnoses so that homosexual patients could continue billing their insurance), the estimation of the prevalence of sexual harassment, failures to test and report racial differences in outcomes, and the attribution of the accomplishments of the Greeks to the Egyptians.

Again, it seems revealing that so little is known about actual cases of fraud that the vast majority of the volume is dedicated to cases where it is unclear who is right. Unable to discover and discuss actual frauds, the authors instead focus on ideological opponents whom they don't trust to interpret and represent data fairly.


Have we made progress?

What's changed between 1995 and now? Today we have more examples to draw upon and more forensic tools. We can use GRIM and SPRITE to catch what are either honest people making typographical mistakes or fraudsters too stupid to make up raw data (good luck telling which is which!). The Data Colada boys keep coming up with new tests for detecting suspicious patterns in data. It's become a little less weird to ask for data and a little more weird to refuse to share data. So there's progress.
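
For flavor, here's a minimal sketch of the GRIM idea (Brown & Heathers, 2016): the sum of n integer responses must itself be an integer, so a reported mean that can't be produced by any integer total is inconsistent. (A toy version; real implementations handle rounding edge cases more carefully.)

def grim_consistent(reported_mean, n, decimals=2):
    # Can `reported_mean`, rounded to `decimals`, arise as the mean of
    # n integer responses? Try the integer totals nearest mean * n.
    target = reported_mean * n
    candidates = range(int(target) - 1, int(target) + 3)
    return any(round(total / n, decimals) == round(reported_mean, decimals)
               for total in candidates)

print(grim_consistent(2.35, 17))  # True: 40/17 rounds to 2.35
print(grim_consistent(2.34, 17))  # False: no 17 integers average to 2.34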

Even so, we're still a billion miles away from being able to detect most fraud and to demonstrate intent. Demonstration of intent generally requires a confession or someone on the inside. Personally, I suspect that fraud detection at scale is probably impossible unless we ask scientists to provide receipts. I can't imagine researchers going for another layer of bureaucracy like that.

One recurring theme is the absence of an actual science police. The discussion of the Burt affair complains that the Council of the British Psychological Society did little to examine Burt's case on its own, instead accepting the conclusions of a biographer. Chapters 1 and 2 discuss the political events that put "Science under Siege" and led to the creation of the Office of Research Integrity, an institution only grudgingly accepted in Chapter 2. Huffing that every great scientist from Mendel to Millikan had to massage their data a bit from time to time to make their point, David Goodstein cautions the ORI, "I can only hope that we won't arrange things in such a way as would have inhibited Newton or Millikan from doing his thing."


Can we ever know the truth?

Earlier, I mentioned that the book contains three cases of purported fraud: the self-harm study, the 38 twin pairs Cyril Burt allegedly invented, and the researcher possibly massaging his data to overestimate the harms of lead. This last case appears to be a reference to the late Herbert Needleman, accused in 1990 of p-hacking his model, an offense Newsweek described at the time as "like bringing a felony indictment for jaywalking." Needleman was exonerated in 1992, and the New York Times ran an obituary honoring him following his death in 2017.

Would I be impressed by Needleman's work today, or would I count him out as another garden-variety noise-miner looking for evidence to support a foregone conclusion? Maybe it doesn't matter. In the Newsweek article, the EPA is quoted as saying "We don't even use Needleman's study anymore" because subsequent research recommended even lower safety thresholds than did Needleman's controversial work. The tempest has blown over. The winners write their history, and the losers get paid by the Cato Institute to go on Fox News and argue against "lead hysteria".

There's a lot that hasn't changed

We like to think that science only became subjective, partisan, and politicized in our current post-2016 "war on science" world, but the 1990s also had "science under siege" (Time, Aug 26, 1991) and intractable debates between competing groups with vested interests in there being a crisis or not being a crisis. The tobacco wars reappear in every decade.

Similarly, the froth and stupidity of daytime TV lives on in today's Daily Mail and Facebook groups. In the 90s, people with more outrage than sense believed in vast networks of underground Satanist cults that tortured children and "programmed" them to become pawns in a world domination scheme. Today, those people believe the Democratic Party runs a child trafficking ring through a pizza parlor and a furniture website, and that Donald Trump is on a one-man mission to stop them.

Regarding fraud, we find that scientific self-policing tends to emerge only in response to crisis and scandal. NIH and NSF don't seem to have had formal recommendations regarding fraud until 1988; these were apparently motivated by pressure from Congress following the 1981 case of John Darsee, a Harvard cardiologist who had been faking his data. Those who do the policing aren't welcomed with open arms -- the book briefly stops to sneer at Walter Stewart and Ned Feder as "a kind of self-appointed truth squad. According to their critics, they had not been very productive scientists and were trying to find a way of holding on to their lab space." Nobody likes having fraud oversight, and everybody does the minimum possible to maintain public respectability until the scandal blows over.

Finally, each generation seems to suspect its successors of being fatally blinded by political correctness. This is clearest in the chapter dedicated to the defense of Cyril Burt, in which Rushton complains that academia will only become more corrupted by political correctness:
Today, the campus radicals of earlier decades are the tenured radicals of the 1990s. Some are chairmen, deans, and presidents. The 1960s mentality of peace, love, and above all equality now constitutes a significant portion of the intellectual establishment in the Western world. The equalitarian dogma is more, not less, entrenched than ever before. Yet, it is based on the scientific hoax of the century.
Will every generation of academics forever consider their successors insufferably and disreputably woke? Should they? It seems that, despite Rushton's concerns, the hereditarian perspective has won out in the end. Today we have researchers who not only recognize heritability, but have given careful thought to the meaning, causality, and societal implications of the research. I see this as tremendous progress when compared to the way the book tends to frame the debate over heritability, which invites the reader to choose between two equally misguided perspectives of either ignorant blank-slate idealism or Rushton's inhumane "race realism."

Summary

Some things have changed since 1995, but much has stayed the same.

Compared to 25 years ago, I think we have a better set of tools for detecting fraud. We have new statistical tricks and stronger community norms around data sharing and editorial action. We have the Office of Research Integrity and Retraction Watch.

But some things haven't changed. Researchers checking each other's work are still, at times, regarded coldly: the "self-appointed truth squad" of 1995 is the "self-appointed data police" of 2016. Demonstrating intent to deceive remains a very high bar for those investigating misconduct; probably some number of fraudsters escape oversight by claiming mere incompetence. Because it is difficult to prove intent to deceive, it's easier to fight the culture war -- one can gesture at an opponent's political bias without getting slapped with a libel suit. And we still don't know much about who commits fraud, why they commit fraud, and how we'll ever catch them.




Thursday, January 30, 2020

Are frauds incompetent?

Nick Brown asks whether data-faking scientists are incompetent, or whether the competent ones simply never get caught.

My answer is that we are not spotting the competent frauds. This becomes obvious when we think about all the steps that are necessary to catch a fraud:
  1. The fraudulent work must be suspicious enough to get a closer look.
  2. Somebody must be motivated to take it upon themselves to take that closer look.
  3. That motivated person must have the necessary skill to detect the fraud.
  4. The research records available to that motivated and skilled person must be complete and transparent enough to detect the fraud.
  5. That motivated and skilled person must then be brave enough (foolish enough? equipped with lawyers enough?) to contact the research institution.
  6. That research institution must be motivated enough to investigate.
  7. That research institution must also be skilled enough to find and interpret the evidence for fraud.

Considering all these stages at which one could fail to detect or pursue misconduct, it seems clear to me that we are finding only the most blatant and least protected frauds.

Consider the "Boom, Headshot!" affair. I had read this paper several times and never suspected a thing; nothing in the summary statistics indicated any cause for concern. The only reason anybody discovered the deception was that Pat Markey was curious enough about the effect of skewness on the results to spend months asking the authors and journal for the data, and he happened to discover values edited by the grad student.

Are all frauds stupid?

Some of the replies to Nick's question imply that faking data convincingly is more hassle than actually collecting data -- if you know a lot about data and simulation, why would you bother faking it? This perspective assumes both that fraud is difficult and that it requires skills that could be more profitably used for good. I don't think either assumption is true.

Being good at data doesn't remove temptations for fraud

When news of the LaCour scandal hit, the first thing that struck me was how good this guy was at fancy graphics. Michael LaCour really knew his way around analyzing and presenting statistics in an exciting and accessible way.

But that's not enough to get LaCour's job offer at Princeton. You need to show that you can collect exciting data and get exciting results! When hundreds of quant-ninja, tech-savvy grad students are scrambling for a scant handful of jobs, you need a result that lands you on This American Life. And those of us on the tenure track have our own temptations: bigger grants, bigger salaries, nicer positions, and respect.

Some might even be tempted by the prospect of triumphing over their scientific rivals. Cyril Burt, once president of the British Psychological Society, was alleged to have made up extra twin pairs in order to silence critics of the link he reported between genetics and IQ. Hans Eysenck, the most-cited psychologist of his time, published and defended dozens of papers using likely-fabricated data from a collaborator that supported his views on the causes of cancer.

Skill and intellect and fame and power do not seem to be vaccines against misconduct. And it doesn't take a lot of skill to commit misconduct, either, because...

Frauds don't need to be clever

A fraud does not need a deep understanding of data to make a convincing enough forgery. A crude fake might get some of the complicated multivariate relationships wrong, sure. But will those be detected and prosecuted? Probably not.

You don't need to be the Icy Black Hand of Death to get away with data fakery.
(image source: fbi.gov)


Why not? Those complicated relationships don't need to be reported in the paper. Nobody will think to check them. If they want to check them, they'll need to send you an email requesting the raw data. You can ignore them for some months, then tell them your dog ate the raw data, then demand they sign an oath of fealty to you if they're going to look at your raw data.

Getting the complicated covariation bits a little wrong is not likely to reveal a fraud, anyway. Can a psychologist predict even the first digit of simple correlations? A complicated relationship that we know less about will be harder to predict, and it will be harder to persuade co-authors, editors, and institutions that any misspecification is evidence of wrongdoing. Maybe the weird covariation can be explained away as an unusual feature of the specific task or study population. The evidence is merely circumstantial.


...because data forensics can rarely stop them.

Direct evidence requires some manner of internal whistleblower who notices and reports research misconduct. One would need to actually see the misconduct, which is especially unlikely in today's projects, in which data and reports come from distant collaborators. Then one would need to actually blow the whistle, after which one might expect to lose one's career and get stuck in a years-long court case. Even so, most frauds in psychology are caught this way (Stroebe, Postmes, & Spears, 2012).

In data forensics, by contrast, most evidence for misconduct is merely circumstantial. Noticing very similar means and standard deviations, duplicated data points, or duplicated images might be suggestive, but it requires assumptions and is open to alternative explanations. Maybe there was an error in data preprocessing, or the research assistants managed the data wrong, or someone used file IMG4015.png instead of IMG4016.png.
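
To make "suggestive" concrete, here's a minimal sketch (with made-up numbers) of the kind of simulation Uri Simonsohn used to flag suspiciously similar standard deviations: ask how often innocent sampling error would produce SDs this close together.

import numpy as np

rng = np.random.default_rng(42)

# Hypothetical reported cell SDs -- suspiciously similar to one another.
reported_sds = np.array([1.02, 1.03, 1.02])
n_per_cell = 20

# Under an innocent model (normal data, one common true SD), how often
# do three sample SDs cluster this tightly?
observed_spread = reported_sds.std()
common_sd = reported_sds.mean()

spreads = np.array([
    np.std([rng.normal(0, common_sd, n_per_cell).std(ddof=1)
            for _ in range(3)])
    for _ in range(10_000)
])
print(f"P(spread <= observed) ~ {(spreads <= observed_spread).mean():.4f}")

Even a vanishingly small probability here only says "unlikely under one innocent model," not "fabricated."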

This circumstantial evidence means that nonspecific screw-ups are often a plausible alternative hypothesis. It seems possible to me that a just-competent fraud could falsify a bunch of reports, plead incompetence, issue corrections as necessary, and refine their approach to data falsification for quite a long time.

A play in one act:

FRAUDSTER
The means were 2.50, 2.50, 2.35, 2.15, 2.80, 2.40, and 2.67.


DATA THUG
It is exceedingly unlikely that you would receive such consistent means. I suspect you have fabricated these summary statistics.


FRAUDSTER
Oops, haha, oh shit, did I say those were the means? Major typo! The means were actually, uh, 2.53, 3.12, 2.07, 1.89...


EDITOR
Ahh, nice to see this quickly resolved with a corrigendum. Bye everyone.


UNIVERSITY
We are fully committed to upholding the highest ethical standards etc. any concerns are thoroughly etc. etc.


FRAUDSTER (sotto voce) 
That was close! Next time I fake data I will avoid this error.

The field isn't particularly trying to catch frauds, either.

Trying to prosecute fraud sounds terrible. It takes a very long time, it requires a very high standard of evidence, and lawyers get involved. It is for these reasons, I think, that the self-stated goal of many data thugs is to "correct the literature" rather than "find and punish frauds".

But I worry about this blameless approach, because there's no guarantee that the data that appears in a corrigendum is any closer to the truth. If the original data was a fabrication, chances are good the corrigendum is just a report of slightly-better-fabricated data. And even if the paper is retracted, the perpetrator may learn from the experience and find a way to refine his fabrications and enjoy a long, prosperous life of polluting the scientific literature.

In summary,

I don't think you have to be particularly clever to be a fraud. It seems to me that most discovered frauds involve either direct evidence from a whistleblower or overwhelming circumstantial evidence born of rampant sloppiness. There are probably many more frauds with just a modicum of skill who have gone undiscovered. There are probably also a number of cases that are quietly resolved without the institution announcing the discovered fraud. I spend a lot of time thinking about what it would take to change this, and what the actual prevalence of fraud would be if we could uncover it.