How to evaluate research reports

I use this checklist when I review journal papers. The emphasis here is on evaluating research reports on additives that have dominated journals in the past 20 years.


Use this checklist to ensure research reports meet certain basic standards

Abstracts (short research reports) presented at conferences should be considered only preliminary information, not final reports; many such preliminary reports are later rejected by journal editors. This blog therefore refers only to published journal papers.

  1. No negative control. It is often assumed that a long-established feed additive (for example, zinc oxide) is always effective under standard conditions. Thus, when such a product is replaced by a novel additive (for example, phytogenics) without any loss in performance, the two additives are often assumed to be equally potent. However, comparing the two products is worthless without a negative control (a diet containing neither additive): without it, there is no way to know whether either additive had any effect at all under the trial's conditions. This is a matter of basic experimental design (see the analysis sketch after this list).
  2. No positive control. On the other hand, without a positive control, it is difficult to evaluate the return on investment from a novel product that may or may not match the performance of the standard product. A missing positive control does not invalidate a trial in most cases, but including one always strengthens it.
  3. Arbitrary dosage. Without proper dose-response trials, the dosages recommended by some manufacturers are educated guesses based on theoretical calculations; a smaller or larger dose might well elicit the same response. Such recommendations should be treated only as starting points, nothing more.
  4. No statistical analysis. Numerical differences without statistical analysis are never a sufficient basis for deciding to adopt or replace an additive. A trial presented without statistical analysis remains incomplete and of little practical use.
  5. Inappropriate statistics. A probability value of 5% (P < 0.05) is widely accepted as sufficient to differentiate treatment means. Under commercial conditions, a relaxed value of 10% (P < 0.10) can be acceptable, especially in large-scale experiments, but even then it should be read as an indication (a trend), not an indisputable result. Conclusions based on larger P values are meaningless.
  6. Insufficient replicates. Experiments are often conducted with too few replicates per treatment. In such cases, small treatment differences are easily masked by inherent biological variation and random effects. Trials with fewer than six replicates per treatment are usually of limited value (see the power-calculation sketch after this list).
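
To make items 1, 4 and 5 concrete, here is a minimal sketch of how a three-treatment trial with a negative control might be analyzed, using a one-way ANOVA from Python's scipy. The feed conversion values below are invented purely for illustration, not data from any real trial, and the choice of scipy is mine, not something prescribed by any journal.

```python
# A minimal sketch: one-way ANOVA across negative control, positive control
# and a novel additive. All values below are invented for illustration only.
from scipy import stats

# Hypothetical feed conversion ratios, one value per replicate pen
negative_control = [1.72, 1.75, 1.70, 1.74, 1.73, 1.76]  # no additive
positive_control = [1.65, 1.66, 1.68, 1.64, 1.67, 1.66]  # established additive
novel_additive   = [1.66, 1.67, 1.65, 1.69, 1.66, 1.68]  # product under test

# One-way ANOVA across all three treatments
f_stat, p_value = stats.f_oneway(negative_control, positive_control, novel_additive)

if p_value < 0.05:
    print(f"Treatments differ (P = {p_value:.3f}); follow up with pairwise tests.")
elif p_value < 0.10:
    print(f"Trend only (P = {p_value:.3f}); an indication, not a result.")
else:
    print(f"No detectable difference (P = {p_value:.3f}).")
```

With the negative control in the model, a significant result can be followed by pairwise comparisons against it; without the negative control, the same data could not tell you whether either additive did anything at all.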
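For item 6, a simple a priori power calculation shows why very few replicates make small differences hard to detect. This sketch uses statsmodels for a two-treatment comparison; the effect size of 1.0 (Cohen's d) is an illustrative assumption, and real trials should use an effect size estimated from their own variability.

```python
# A minimal sketch of an a priori power calculation for two treatments.
# The effect size (Cohen's d = 1.0) is an assumption for illustration.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Replicates per treatment needed to detect the assumed effect
# with 80% power at P < 0.05
n_per_treatment = analysis.solve_power(effect_size=1.0, alpha=0.05, power=0.80)
print(f"Replicates needed per treatment: {n_per_treatment:.1f}")

# Power actually achieved with only four replicates per treatment
power_at_4 = analysis.solve_power(effect_size=1.0, nobs1=4, alpha=0.05)
print(f"Power with 4 replicates: {power_at_4:.2f}")
```

Even under this generous assumed effect size, four replicates per treatment leave the trial badly underpowered, which is why trials with fewer than six replicates per treatment deserve skepticism.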