‘Fabrication’ – Landmark research integrity survey finds ‘questionable scientific practices’ are surprisingly common

By Jop de Vrieze

More than half of Dutch scientists regularly engage in questionable research practices, such as hiding flaws in their research design or selectively citing literature, according to a new study. And one in 12 admitted to committing a more serious form of research misconduct within the past 3 years: the fabrication or falsification of research results.

This rate of 8% for outright fraud was more than double that reported in previous studies. Organizers of the Dutch National Survey on Research Integrity, the largest of its kind to date, took special precautions to guarantee the anonymity of respondents for these sensitive questions, says Gowri Gopalakrishna, the survey’s leader and an epidemiologist at Amsterdam University Medical Center (AUMC). “That method increases the honesty of the answers,” she says. “So we have good reason to believe that our outcome is closer to reality than that of previous studies.” The survey team published its results on 6 July in two preprint articles on MetaArXiv, which also examine factors that contribute to research misconduct.

When the survey began last year, organizers invited more than 60,000 researchers to take part—those working across all fields of research, both science and the humanities, at some 22 Dutch universities and research centers. However, many institutions refused to cooperate for fear of negative publicity, and responses fell short of expectations: Only about 6800 completed surveys were received. Still, that’s more responses than any previous research integrity survey, and the response rate at the participating universities was 21%—in line with previous surveys.

One of the preprints focuses on the prevalence of misbehavior—cases of fraud as well as a less severe category of “questionable research practices,” such as carelessly assessing the work of colleagues, poorly mentoring junior researchers, or selectively citing scientific literature. The other article focuses on responsible behavior; this includes correcting one’s own published errors, sharing research data, and “preregistering” experiments—posting hypotheses and protocols ahead of time to reduce the bias that can arise when these are released after data collection.

The survey found Ph.D. students had the hardest time meeting the standards of responsible research. Some 53% of them admitted to frequently engaging in at least one of 11 questionable research behaviors within the past 3 years, compared with 49% of associate and full professors.

To look for possible explanations of participants’ behavior, the study team also asked about their professional experiences—whether they felt workplace pressure or peer pressure, for instance. The team found that pressure to publish was most strongly correlated with questionable research behavior, and that the perceived chance of being caught by peer reviewers was the biggest factor inhibiting misconduct.

Although the 8% figure for fabrication or falsification exceeded that found by previous surveys, it could still be an underestimate, says Daniele Fanelli, a research ethicist at the London School of Economics who was not involved in the study. The survey defined scientific fabrication and falsification unambiguously, in a way that left no room for participants to think the practices might be a minor form of misconduct. “As a result, I assume participants were less likely to answer honestly,” Fanelli says.

Elisabeth Bik, a scientific integrity consultant who specializes in detecting image manipulation in biomedical research papers, is not surprised by the number. On average, she has found image manipulation in 4% of papers. “But most manipulation cannot be detected,” she says. “What we see is the tip of the iceberg. It’s probably between 5% and 10%, which is close to this number.”

Still, she says many of the questionable research practices mentioned in the survey, and even some examples of outright fraud, should not always be viewed as black and white. “Excluding an outlier from your results is falsification, but sometimes you have good reasons to do so,” she says. “And publishing your negative results is just very hard,” because many journals lack interest. “It is good to have these rules and to think about them, but not always obeying them does not make you a bad researcher.”

Fanelli, who calls the survey “one of the best in the field,” adds that he doesn’t think Dutch researchers are any less ethical than colleagues elsewhere. Since the fraudulent work of psychologist Diederik Stapel was exposed in 2011, the Netherlands has been at the forefront of promoting scientific integrity, he says. In 2018, collaborating institutions published the Netherlands Code of Conduct for Research Integrity. “So I rather expect that researchers in the Netherlands are especially aware of research integrity issues,” Fanelli says.

However, awareness is not enough to banish unwanted behavior, Gopalakrishna says. “It’s about what researchers are judged on, and currently that’s quantity over quality,” she says—a pressure that can drive cutting corners. “Instead, you want transparent, responsible research to become the norm.”

Daniël Lakens, a social psychologist at Eindhoven University of Technology who was not involved in the study, believes that research integrity will improve as certain good practices become commonplace, like the public pre-registration of studies and the posting of raw data with every manuscript. “You do indeed notice that much of the good and bad behavior in question here is norm-driven,” he says.