Statistically "Highly Unlikely" - Social Psychologist Dirk Smeesters Resigns
By Hank
Created Jun 26 2012 - 8:00am
Erasmus University Rotterdam has announced that Dirk Smeesters, Professor of Consumer and Society at Rotterdam School of Management, has had two papers withdrawn after the Inquiry Committee on Scientific Integrity looked into suspicions that the professor had committed scientific misconduct.
From the university's statement:

Two articles were found to have irregularities with findings that, in a statistical sense, are highly unlikely. The raw data forming the basis of these articles was not available for inspection by third parties, and the professor indicated that he had selected data so that the sought-after effects were statistically significant.

The University’s Board of Directors accepted the resignation of Smeesters on June 21st. They say in their statement above that none of the co-authors have been implicated.
This adds more weight to an uncomfortable truth for the social sciences, the humanities, and anyone else who doesn't quite get what separates science from other fields: doing statistical analysis is not science. Yes, science has gotten 'bigger' in its datasets, so understanding statistics and numerical models is increasingly important, but that is not the same thing as being the science itself. Surveys of undergraduates are certainly not science.
As expected, because two papers have already been retracted, RetractionWatch is on the case. Their detailed insight into the situation (and the comments from the audience) prompted a response from one of Smeesters' co-authors, who wrote:
Some of you might wonder, how did we not know that something was up? The answer is that it’s not that easy to spot a coauthor who is doctoring data. The variety seeking paper, for instance, started in a delightful conversation that I had with Dirk when I visited Erasmus. Dirk mentioned a finding on social exclusion that he had; I had an interest in why people seek variety. We came up with what we thought was an interesting hypothesis to test that related to previous work on variety seeking, some of which is my own. Dirk is a nice, intelligent guy, and was an enthusiastic coauthor. He was a good critic of research. He was respected in the field. He also was at Erasmus, which has perhaps the best behavioral lab I had ever seen. So when the data streamed every few months, it was hardly suspicious. Unlike Stapel, Dirk actually ran studies. What he did with the data afterward is what’s in question.

The questionable nature of relying too much on statistics aside, it likely is easy to be fooled. We tend to think people are like us: basically ethical, basically honest, wanting to do good work and make a difference. In that light, it is completely okay to believe someone is ethical until shown to be otherwise.
Social psychology was once a strong field. It is not today, but, as in the case of Diederik Stapel last November, researchers are taking the discipline back from people who have been able to get away with this sort of fraud.
The data detective
Uri Simonsohn explains how he uncovered wrongdoing in psychology research.
Psychology was already under scrutiny following a series of high-profile controversies. Now it faces fresh questions over research practices that can sometimes produce eye-catching — but irreproducible — results. Last week, Erasmus University Rotterdam in the Netherlands said that social psychologist Dirk Smeesters had resigned after an investigation found that he had massaged data to produce positive outcomes in his research, such as the effect of colour on consumer behaviour [1, 2]. Smeesters says the practices he used are common in the field. None of his co-authors is implicated. The university was tipped off by social psychologist Uri Simonsohn at the University of Pennsylvania in Philadelphia, who spoke exclusively to Nature about his investigation.
How did your investigation begin, and how did you analyse the papers?
Somebody sent me a paper by Smeesters. I was working on another project on false positives and had become pretty good at picking up on the tricks that people pull to get a positive result [3]. With the Smeesters paper, I couldn’t find any red flags, but there were really far-fetched predictions.
The basic idea is to see if the data are too close to the theoretical prediction, or if multiple estimates are too similar to each other. I looked at several papers by Smeesters and asked him for the raw data, which he sent. I did some additional analyses on those and the results looked less likely. I’ll be submitting a paper on the method this week.
I shared my analyses with Smeesters, showing him that the data didn’t look real, and I offered several times to explain my methods. He said he was going to re-run the study and retract the paper. That was all I heard until December, when Erasmus University Rotterdam contacted me and asked me to tell them why I was suspicious. They had started their own investigation.
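Simonsohn doesn't give the mechanics here, but the core check he describes, whether multiple estimates are too similar to each other, is easy to sketch. The Python simulation below is a hypothetical illustration with made-up numbers, not his actual method: it estimates how much a set of condition means should vary from sampling noise alone, then asks how often noise would cluster them as tightly as the reported values.

import numpy as np

def similarity_pvalue(means, sds, ns, n_sims=10_000, seed=0):
    """Hypothetical sketch of a 'too similar' check, not Simonsohn's code.

    Under the null that every condition shares one true mean, each
    observed condition mean should wobble by about sd/sqrt(n). The
    function returns the fraction of simulated experiments whose
    means cluster at least as tightly as the reported ones.
    """
    rng = np.random.default_rng(seed)
    means, sds, ns = map(np.asarray, (means, sds, ns))
    observed_spread = np.std(means)

    grand_mean = np.average(means, weights=ns)
    sim_spreads = np.empty(n_sims)
    for i in range(n_sims):
        sim_means = rng.normal(grand_mean, sds / np.sqrt(ns))
        sim_spreads[i] = np.std(sim_means)

    # Tiny values mean the reported means sit closer together than
    # sampling error plausibly allows.
    return np.mean(sim_spreads <= observed_spread)

# Made-up example: six near-identical means despite sizeable SDs and
# modest samples come out as statistically "highly unlikely".
p = similarity_pvalue(means=[5.01, 5.02, 5.00, 5.03, 4.99, 5.01],
                      sds=[1.2] * 6, ns=[20] * 6)
print(f"chance of clustering this tight under noise: {p:.4f}")

On its own a flag like this proves nothing, which is why the request for raw data and the university investigation follow in the account above.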
Can we expect more cases like this?
I tried my approach with Diederik Stapel’s data after he had been called out for fraud (see Nature 479, 15; 2011), and they looked fake from the very beginning. Besides him and Smeesters, there’s another person. I found three suspicious papers, engaged him for several months, and eventually contacted the university. They had already started an investigation, which has ended. It’s not official yet.
There’s a fourth case in which I am convinced that there’s fabrication. I’ve approached co-authors, but none of them wanted to help. If I didn’t have anything else to do, I’d do something about it, but it just became too difficult because I was handling these other cases and my own research. It’s very draining.
Is this indicative of deeper problems in the field?
I don’t know how systemic the crime is. What’s systemic is the lack of defences. Social psychology — and science in general — doesn’t have sufficient mechanisms for preventing fraud. I doubt that fabrication is any worse in psychology than in other fields. But I’m worried by how easy it was for me to come across these people.
Do you worry about other psychologists’ reactions to your investigations?
I did worry a lot. Everybody likes the fact that whistle-blowers exist, but nobody likes them. People worry about somebody engaging in a witch-hunt, but I have a technique that is accurate, I used it when confronted with evidence, and I subjected it to replication by checking other papers from the same author. That’s no more a witch-hunt than a neighbour calling the police when someone breaks into another person’s home. I did not take justice into my own hands: I contacted the authorities and they took care of the rest. I suspect some people will be against what I’ve done, but there is really no personal benefit in doing what I am doing.
So what is your motivation?
Simply that it is wrong to look the other way. If there’s a tool to detect fake data, I’d like people to know about it so we can take findings that aren’t true out of our journals. And if it becomes clear that fabrication is not an unusual event, it will be easier for journals to require authors to publish all their raw data. It’s extremely hard for fabrication to go undetected if people can look at your data.
A university’s reputation suffers a lot when people fake data, but they don’t have tools for preventing that — journals do. Journals should be embarrassed when they publish fake data, but there’s no stigma. They’re portrayed as the victims, but they’re more like the facilitators, like a government that looks the other way. I’d like journals to take ownership of the problem and start working towards stopping it.
Previous challenges to data in psychology were made by internal whistle-blowers, but you are not connected to Smeesters. Does that herald an important change?
It’s a very important difference. The tool should be broadly applicable to other disciplines. I think it’ll be worthwhile to find other ways of finding fake data. We know people are really bad at emulating random data, so there should be all sorts of tests that could be developed.
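One concrete example of such a test, standard in data forensics though not named in the interview, is to check whether terminal digits are uniformly distributed; genuine fine-grained measurements usually pass, while invented numbers often do not. A minimal sketch, assuming values recorded to two decimal places:

import numpy as np
from scipy import stats

def last_digit_test(values):
    """Chi-square test of terminal-digit uniformity.

    A small p-value only flags non-uniform last digits; it is a
    reason to look closer, not proof of fabrication.
    """
    digits = [int(f"{abs(v):.2f}"[-1]) for v in values]
    counts = np.bincount(digits, minlength=10)
    return stats.chisquare(counts)  # null: all ten digits equally likely

# Honestly simulated measurements should pass most of the time.
rng = np.random.default_rng(42)
honest = rng.normal(loc=50.0, scale=10.0, size=500)
print(last_digit_test(honest))

The same skeleton extends to repeated-digit frequencies or Benford-style checks on leading digits; the common thread is comparing reported numbers against the randomness that real data carry.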
Is it possible that such methods could falsely ensnare innocent researchers?
That’s my biggest fear; it’s why I look at different papers from the same person. I wouldn’t contact anybody unless they had three suspicious papers. And before any concerns become public, a proper investigation should always take place.
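The three-papers threshold has a simple probabilistic rationale: if the flags were independent, false alarms would multiply away. A back-of-the-envelope calculation, where the 1% per-paper false-alarm rate is assumed purely for illustration and does not come from the interview:

# Illustrative arithmetic with an assumed false-alarm rate; the 1%
# figure is hypothetical, not from the interview.
alpha = 0.01
for k in (1, 2, 3):
    print(f"{k} flagged paper(s): ~{alpha**k:.0e} chance for an innocent author")

Independence is a strong assumption, since one author's papers share methods and labs, which is presumably why a proper investigation is still required before anything becomes public.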