Thursday, August 17, 2017

Researchers evaluate scientific rigor in animal research

Washington: In recent research published in the journals PLOS Biology and PLOS ONE, researchers assessed scientific rigor in animal experimentation in Switzerland.

The study, commissioned by the Swiss Federal Food Safety and Veterinary Office (FSVO), found widespread deficiencies in the reporting of experimental methodology.

In a first step, researcher Thomas Reichlin screened all 1,277 applications for animal experiments approved in Switzerland in 2008, 2010 and 2012, as well as a random sample of 50 scientific publications resulting from studies described in the applications.

The materials were assessed to determine whether seven basic methods that can help combat experimental bias were reported (including randomization, blinding, and sample size calculation).

"Appropriate use and understanding of these methods is a prerequisite for unbiased, scientifically valid results," said lead author Hanno Würbel.

Explicit evidence that these methods were used either in the applications for animal experiments or in the subsequent publications was scarce.

For example, fewer than 20 percent of applications and publications mentioned whether a sample size calculation had been performed (8 percent in applications, 0 percent in publications), whether the animals had been assigned randomly to treatment groups (13 percent in applications, 17 percent in publications), and whether outcome assessment had been conducted blind to treatment (3 percent in applications, 11 percent in publications).

Animal experiments are authorized based on the explicit understanding that they will provide significant new knowledge and that animals will suffer no unnecessary harm.

Thus, scientific rigor is a fundamental prerequisite for the ethical justification of animal experiments.

Based on this study, the current practice of authorizing animal experiments appears to rest on an assumption of scientific rigor, rather than on evidence that it is applied.

The authors of this study recommend more education and training in good research practice and scientific integrity for all those involved in this process.

Although the initial results found that fewer than 20 percent of applications and publications mentioned methods to control for bias, this did not necessarily mean that more than 80 percent of animal studies omitted such methods and thus used animals for potentially inconclusive research.

"It is possible that the researchers did use these methods but did not mention them in their applications and publications," said Würbel, adding, "So we decided to ask the researchers."

The researchers sent an online survey to all 1,891 animal researchers registered in the central online information system of the FSVO who were involved in ongoing experiments.

Among other questions, researchers were asked what bias-reducing methods they normally use when conducting animal experiments and which of these they had explicitly reported in their latest scientific publication.

According to the researchers' responses, the use of methods against bias is considerably higher than reported in the animal research applications and publications.

Eighty-six percent of the participants claimed to assign animals randomly to treatment groups, but only 44 percent answered that they had reported this in their latest publication.

The same applies to the other measures, for example, sample size calculation (69 percent claimed to do this, but only 18 percent said they reported it in their latest publication) and blinded outcome assessment (47 percent vs. 27 percent).

Taken together, the researchers draw two conclusions from these results: on the one hand, reporting in animal research applications and publications may underestimate the use of bias-reducing methods.

On the other hand, the researchers may overestimate their use of appropriate methods.

"We found considerably fewer publications with explicit evidence of the use of measures against risks of bias than claimed by the researchers," said Würbel.

For example, 44 percent of the participants claimed to have reported randomization in their latest publication, but Würbel’s team found evidence of randomization in only 17 percent of publications. (ANI)