Psychology was already under scrutiny following a series of high-profile controversies. Now it faces fresh questions over research practices that can sometimes produce eye-catching — but irreproducible — results. Last week, Erasmus University Rotterdam in the Netherlands said that social psychologist Dirk Smeesters had resigned after an investigation found that he had massaged data to produce positive outcomes in his research, such as the effect of colour on consumer behaviour1,2. Smeesters says the practices he used are common in the field. None of his co-authors is implicated. The university was tipped off by social psychologist Uri Simonsohn at the University of Pennsylvania in Philadelphia, who spoke exclusively to Nature about his investigation.

How did your investigation begin, and how did you analyse the papers?

Somebody sent me a paper by Smeesters. I was working on another project on false positives and had become pretty good at picking up on the tricks that people pull to get a positive result3. With the Smeesters paper, I couldn't find any red flags, but the predictions seemed really far-fetched.
The basic idea is to see if the data are too close to the theoretical prediction, or if multiple estimates are too similar to each other. I looked at several papers by Smeesters and asked him for the raw data, which he sent. I did some additional analyses on those and the results looked even less plausible. I'll be submitting a paper on the method this week.
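A minimal sketch of this kind of excessive-similarity test — not Simonsohn's actual code, and with summary statistics invented purely for illustration — is to simulate many experiments under the reported means and standard deviations, and ask how often sampling error alone produces condition means as close together as the published ones:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical published summary statistics for four independent
# conditions (invented numbers, for illustration only).
reported_means = np.array([5.1, 5.2, 5.0, 5.1])
reported_sds = np.array([1.9, 2.1, 2.0, 1.8])
n_per_cell = 15

# Test statistic: spread of the condition means. Fabricated data often
# show means that are *too* similar, given the within-condition noise.
observed_spread = reported_means.std(ddof=1)

# Simulate experiments and see how often chance alone yields
# condition means this close together.
n_sims = 10_000
sim_spreads = np.empty(n_sims)
for i in range(n_sims):
    cell_means = [rng.normal(m, s, n_per_cell).mean()
                  for m, s in zip(reported_means, reported_sds)]
    sim_spreads[i] = np.std(cell_means, ddof=1)

p = (sim_spreads <= observed_spread).mean()
print(f"P(simulated means as similar as reported) = {p:.4f}")
# A very small p suggests the means are implausibly close together.
```

A very low probability does not prove fabrication on its own, which is why, as Simonsohn describes, the check must be replicated across several papers by the same author.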
I shared my analyses with Smeesters, showing him that the data didn’t look real, and I offered several times to explain my methods. He said he was going to re-run the study and retract the paper. That was all I heard until December, when Erasmus University Rotterdam contacted me and asked me to tell them why I was suspicious. They had started their own investigation.

Can we expect more cases like this?

I tried my approach with Diederik Stapel's data after he had been called out for fraud (see Nature 479, 15; 2011), and they looked fake from the very beginning. Besides him and Smeesters, there's another person. I found three suspicious papers, engaged him for several months, and eventually contacted the university. They had already started an investigation, which has ended. It's not official yet.
There’s a fourth case in which I am convinced that there’s fabrication. I’ve approached co-authors, but none of them wanted to help. If I didn’t have anything else to do, I’d do something about it, but it just became too difficult because I was handling these other cases and my own research. It’s very draining.

Is this indicative of deeper problems in the field?

I don’t know how systemic the crime is. What’s systemic is the lack of defences. Social psychology — and science in general — doesn’t have sufficient mechanisms for preventing fraud. I doubt that fabrication is any worse in psychology than in other fields. But I’m worried by how easy it was for me to come across these people.

Do you worry about other psychologists’ reactions to your investigations?

I did worry a lot. Everybody likes the fact that whistle-blowers exist, but nobody likes them. People worry about somebody engaging in a witch-hunt, but I have an accurate technique, which I used only when confronted with evidence, and which I subjected to replication by checking other papers from the same author. That's no more a witch-hunt than a neighbour calling the police when someone breaks into another person's home. I did not take justice into my own hands; I contacted the authorities and they took care of the rest. I suspect some people will be against what I've done, but there is really no personal benefit in doing what I am doing.

So what is your motivation?

Simply that it is wrong to look the other way. If there’s a tool to detect fake data, I’d like people to know about it so we can take findings that aren’t true out of our journals. And if it becomes clear that fabrication is not an unusual event, it will be easier for journals to require authors to publish all their raw data. It’s extremely hard for fabrication to go undetected if people can look at your data.
A university's reputation suffers a lot when people fake data, but universities don't have tools for preventing that — journals do. Journals should be embarrassed when they publish fake data, but there's no stigma. They're portrayed as the victims, but they're more like the facilitators, like a government that looks the other way. I'd like journals to take ownership of the problem and start working towards stopping it.

Previous challenges to data in psychology were made by internal whistle-blowers, but you are not connected to Smeesters. Does that herald an important change?

It’s a very important difference. The tool should be broadly applicable to other disciplines. I think it’ll be worthwhile to find other ways of finding fake data. We know people are really bad at emulating random data, so there should be all sorts of tests that could be developed.
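One generic example of such a test — not a method from this interview, and with fabricated-looking numbers invented for illustration — exploits the fact that genuinely noisy measurements tend to have roughly uniform trailing digits, whereas people making up numbers over-use "round" endings. A simple chi-square test can flag that:

```python
import numpy as np
from scipy.stats import chisquare

def last_digit_test(values):
    """Chi-square test that the trailing digits of integer-coded
    measurements are uniform on 0-9, as they often should be for
    data with real measurement noise."""
    digits = np.abs(np.asarray(values, dtype=int)) % 10
    counts = np.bincount(digits, minlength=10)
    return chisquare(counts)  # uniform expected frequencies by default

# Hypothetical example: a fabricator over-using round endings.
faked = [120, 150, 140, 130, 150, 110, 140, 120, 150, 130,
         140, 120, 150, 130, 140, 110, 120, 150, 140, 130]
stat, p = last_digit_test(faked)
print(f"chi-square = {stat:.1f}, p = {p:.4f}")  # tiny p: digits not uniform
```

Such a test is only appropriate where the measurement scale makes uniform last digits plausible in the first place; like the similarity test above, it is a screening tool, not proof.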

Is it possible that such methods could falsely ensnare innocent researchers?

That’s my biggest fear; it’s why I look at different papers from the same person. I wouldn’t contact anybody unless they had three suspicious papers. And before any concerns become public, a proper investigation should always take place.