Misleading research on feed additives

The most common mistakes in trial design are lack of negative and (or) positive control treatments.

The additives trade is a huge part of the international animal nutrition business. Yet, for most additives, animal performance and return-on-investment results under commercial conditions remain largely inconclusive. Peer-reviewed research reports on additives are scarce, technical reports abound, and comparative studies are simply unavailable.

Understandably, at the feed mill level, deciding which additive to use is often a difficult exercise, as it requires not only a sound knowledge of basic nutrition but also a good understanding of statistics. Quite often, those responsible for nutrition decisions shy away from changing additives simply because they have insufficient evidence to justify the switch.

Things get even more complicated when trying to distinguish between two similar additives based on unpublished results. Here, we should point out the lack of public funding for extension-type research programs that would help shed light on such questions. Nevertheless, with the information at hand, we must continue to evaluate the research results that suppliers provide. To this end, it is handy to have a quick checklist to ensure trials were done properly and results are thus valid. The two most common mistakes in trial design, discussed below, are the lack of negative and (or) positive control treatments.

Negative control

It is often assumed that a common additive (for example, zinc oxide) is always effective when used under commercial conditions. Thus, when a novel additive (for example, an organic acid) is shown to replace the established one without any loss in performance, it may be claimed that the two additives are equally potent.

However, there is no value in comparing the two additives without including a negative control (a diet containing neither additive) in the trial, to guard against the possibility of no response to the standard additive, which would render the whole trial invalid.

For example, pigs raised under high levels of sanitation would not benefit from the inclusion of zinc oxide in their feed. Growth rate in such a trial would be about the same for the negative control, the zinc oxide treatment and the organic acid treatment. But this equal performance does not mean the specific organic acid can replace zinc oxide, because by the same logic the plain negative-control diet could “replace” either additive just as well. In essence, only when the “old” additive gives a positive response over the negative control can we safely discuss the performance of the “new” additive; otherwise, the whole trial should be considered invalid.
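The validity check described above can be sketched in a few lines of code. This is a minimal illustration with invented average daily gain figures and an arbitrary response threshold; in a real trial, the comparison would of course rest on a proper statistical test of treatment means, not raw numbers.

```python
def trial_is_valid(neg_control_adg, standard_adg, min_response=0.02):
    """The trial is interpretable only if the standard additive
    (e.g. zinc oxide) outperforms the negative control by at least
    the minimum response margin (here, a hypothetical 2 percent)."""
    return standard_adg > neg_control_adg * (1 + min_response)

# Hypothetical trial under high sanitation: all treatments perform alike
# (average daily gain in g/day; figures are invented for illustration).
neg_control, zinc_oxide, organic_acid = 450, 452, 451

if trial_is_valid(neg_control, zinc_oxide):
    print("Standard additive responded; the comparison is meaningful.")
else:
    print("No response to the standard additive; the trial is invalid.")
```

With these numbers the standard additive shows no response, so no conclusion about the organic acid can be drawn, which is exactly the trap the negative control exposes.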

Positive control

On the other hand, without a positive control it is quite difficult to evaluate the return on investment from a novel additive that may or may not support performance equal to that of the standard additive. Let’s use the example of butyric acid in piglet diets. Say a trial is presented in which a diet without any organic acid (the negative control) is compared with the exact same diet plus butyric acid. Butyric acid can be an effective additive under certain circumstances, but it happens to be quite expensive. Let’s further assume that butyric acid enhances growth by 5 percent (not an unreasonable expectation under less-than-ideal health conditions). The logical question to ask the supplier presenting these results would then be how much of a cheaper (but equally effective) organic acid would be required to elicit a similar response.

Most likely, such a treatment is not included, yet this information is vital for the potential user of organic acids. If it costs more to use butyric acid to get a 5 percent boost in performance than to use another organic acid, then it makes no sense to switch. Of course, there is the trap of having endless candidates to compare against, but commercial people always have a short list of the most important competitive products in mind.
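The cost side of that question is simple arithmetic. The sketch below uses entirely invented prices and inclusion rates, purely to show the comparison a buyer would make if both additives delivered the same 5 percent response.

```python
def additive_cost_per_tonne(price_per_kg, inclusion_kg_per_tonne):
    """Cost of including an additive in one tonne of finished feed."""
    return price_per_kg * inclusion_kg_per_tonne

# Hypothetical figures: assume both additives yield the same growth boost.
butyric = additive_cost_per_tonne(price_per_kg=6.0, inclusion_kg_per_tonne=3.0)
cheaper = additive_cost_per_tonne(price_per_kg=1.5, inclusion_kg_per_tonne=5.0)

print(f"Butyric acid: {butyric:.2f} per tonne")  # prints 18.00
print(f"Cheaper acid: {cheaper:.2f} per tonne")  # prints 7.50
```

Under these invented assumptions, the cheaper acid delivers the same response at less than half the inclusion cost, which is precisely why the missing positive-control treatment matters to the buyer.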