The decision to address nutrition variability -- and its costs -- should be based on an operation’s specific requirements and abilities.
In the article, “The impact of nutrient variability in feedstuffs,” published in the March/April issue of Feed Management, we examined how corn with a slightly different lysine concentration than expected (based on book values) can delay growth by two days, with even worse results expected from the natural variability inherent in soybean meal, which contributes most of the lysine in animal feed.
What remains is to examine ways to manage unavoidable nutrient variability in common feedstuffs through feed formulation. To this end, we will examine three methods.
Method 1. Accept variability and do nothing
Although it appears counterintuitive, doing nothing is what most of us who formulate diets actually prefer to do. Indeed, it is a rare case where nutrient variability is taken into account, especially when generic formulas are designed. Thus, we depend on the average values found in trustworthy published tables, expecting the variability of the various feedstuffs used to cancel out around the expected mean.
In doing so, we accept that any losses from failing to reach the nutrient specification target (in terms of lost animal performance) or from exceeding it (in terms of forgone savings from avoided nutrient wastage) are far less than the cost of addressing nutrient variability by other means. It might appear a naïve way of approaching a significant issue, but currently this is the norm.
Method 2. Use actual data based on ingredient analysis
This method is based on actual wet chemistry (lab) or near-infrared (NIR) analyses. The results are fed into the feed formulation software, and diets are formulated very close to actual specifications. Several obstacles, however, make this method difficult to implement.
First is the time gap between nutrient analysis and application (here the NIR approach has certain advantages). Second is the requirement to segregate batches of the same ingredient with vastly different profiles. Third, of course, is the cost associated with this method, which is not inconsiderable, as it requires investments in time, labor (sample collection), and analysis fees.
Such an approach can be used only by sizeable enterprises that purchase single large consignments of raw materials stored in oversized silos, with a composite sample obtained from each load during unloading.
Method 3. Use actual or estimated nutrient variability in feed formulation
Regardless of how nutrient variability coefficients are obtained (from published tables or from one's own analyses, as in Method 2), these values can be fed into certain types of feed formulation software for use in feed design.
Basically, the range of expected variability, the inclusion rate of the ingredient in question and the cost of the nutrient provided by that ingredient are taken into account to adjust the dietary nutrient specification, ensuring feeds meet expected specifications within a previously established range of sensitivity. The only drawback is that such feed formulation programs are more expensive than traditional ones, and they tend to produce costlier feeds. The latter cost is supposedly recovered through improved animal performance, as animals will be consuming diets closer to their requirements.
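The adjustment logic described above can be sketched in a few lines of code. This is a minimal illustration, not the algorithm of any particular commercial program: it assumes nutrient content is normally distributed, and the function name and all figures are hypothetical.

```python
# Hypothetical sketch of a safety-margin adjustment of a nutrient
# specification, in the spirit of stochastic feed formulation.
# All numbers below are illustrative, not book values.
from statistics import NormalDist

def adjusted_spec(target, cv, inclusion_rate, assurance=0.90):
    """Raise a nutrient specification so the finished feed meets
    `target` with probability `assurance`, given an ingredient whose
    nutrient content varies with coefficient of variation `cv`
    (as a fraction) and is included at `inclusion_rate` of the diet.

    Crude assumption: only this ingredient's share of the nutrient
    is treated as variable, and that variability is normal.
    """
    z = NormalDist().inv_cdf(assurance)              # one-sided z-score
    sd_contribution = target * inclusion_rate * cv   # SD of the at-risk share
    return target + z * sd_contribution

# Example: a 1.10% dietary lysine target, with soybean meal included at
# ~25% of the diet and a hypothetical lysine CV of 5%:
spec = adjusted_spec(target=1.10, cv=0.05, inclusion_rate=0.25)
```

A higher assurance level or a more variable ingredient pushes the specification further above the nominal target, which is exactly why such formulations tend to cost more than those built on book averages alone.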
There is no way to avoid “paying” for nutrient variability. We either ignore the issue and pay through potentially lost animal performance, or we pay for measures that reduce its impact.
Each operation should adopt the approach that fits its circumstances. For example, a small farm is most likely to opt for Method 1; a large integrator is expected to follow Method 3 (or even a combination of Methods 2 and 3); and a large feed mill should certainly consider Method 2.
In the end, it comes down to a balance between cost and savings.