No. 18 Article 4/July 23, 2004

Don't Just Wonder--TEST!

It may be just a fluke that a tough soybean year in 2003, higher crop prices, and the widespread damage (real or suspected) from "new" pests such as the soybean aphid are coinciding with probably the highest-ever number of offers to sell products for application to crops, especially soybean. If the pitch were that we could "return 300% on your investment in 60 days," many of us would have some qualms. When instead the ad reads "spend $8 per acre and get 4 extra bushels of soybean," we tend to see that as reasonable.

Perhaps we read things this way because of our experience--in some cases we really have gotten such returns for investments in crop production or protection. On the other hand, such returns have often come from solving well-understood problems, using technology tested thoroughly enough that we could be fairly sure it would solve a problem we knew existed. Most such "approachable" problems have already been solved in most fields, and the new problems that appear on occasion will have reasonable solutions once we gain enough experience to know when and how to apply them. For example, soybean aphid is a problem with a solution, once we learn how to apply IPM principles to this particular pest.

Instead of offering well-defined solutions to understandable and observable problems, many of the promotions today are for things that purport to give general benefits or to offer solutions to "problems" that we don't know whether we have. Many of these are pitched toward soybean, both because of recent problems (pests, low yields) and because most fields are already sprayed with herbicide during the season, so adding things to the spray tank or spraying again is reasonable and doable for most people.

Without picking on these, let's use as examples foliar feeding and a fungicide + insecticide package for soybean, both of which are being promoted. At one time, university researchers might have been involved in putting together such input packages, or at least we would have conducted well-run tests of such packages prior to their release. That is no longer the norm, for a complex of reasons, not the least of which is the decline in numbers of applied researchers at universities. It is now more common for us to hear about such inputs at the same time that they are being sold to producers. Hence the "What do you think about this?" question that we occasionally get usually brings a cautious reply, even if we know about tests of similar inputs in the past.

What might be a reasonable response to, and approach to, such questions? We know that soybean plants need nutrients, including micronutrients, so the appeal of foliar nutrient applications is obvious; by applying a mixture, we are directly "feeding" the plant what it needs without having to worry whether it has gotten enough nutrients from the soil. In the same way, we know that there are sometimes insects and sometimes diseases in soybean that respond to control measures, and such measures can produce additional profits. Again, the sense that we "did what we could" to help the crop produce or protect yield is a powerful incentive to take direct, preventive action against all possible causes of yield reduction.

When we take such an "insurance" approach to inputs, we often do not need, and may not even want, to know whether a particular input provided a direct return on investment. Do we worry about whether or not auto or life insurance "pays off"? Even though we'd rather it not pay off (directly, at least), we still consider it to be a good investment. Should we adopt such an insurance approach to crop inputs? Fortunately, we have the tools to assess the direct effect of such inputs in individual fields and growing seasons. Results of such assessments provide us the ability to predict future returns, and hence they give us confidence about whether or not using particular inputs will provide a return on investment. Such predictions are never completely accurate, but failure to assess response to inputs will always have us wondering whether something we paid for provided, or is likely ever to provide, any response at all.

The process of making our own field comparisons of inputs has never been easier or less expensive than it is today. For those with well-adjusted combine yield monitors and their own application equipment, or a competent and cooperative hired applicator, the cost of making comparisons can approach zero. This means that for many inputs, there is essentially no reason not to assess performance, whether or not a particular input is used on most acres.

Here are some points that I hope will ease the way for many Illinois producers to do their own on-farm comparisons of crop inputs:

  1. Approach this exercise with complete neutrality. If you already "believe in" an input, to the extent that you will be certain it provided a benefit even when your results don't show that, then just use it and skip the comparison. Biased trials and "selected" results that come from non-neutral trials mislead rather than inform.
  2. You need to have treated and untreated areas next to each other in order to know that the treatment did anything. The statement "I used this material and I got better yields than I expected" is a statement of feeling about an input, but it is not a comparison. No matter how good your sense of what yield to expect from a field, a measurement against your expectation is not precise enough to tell us much.
  3. Without replication, consisting of four to eight pairs of strips, one treated and one untreated strip in each pair, you will always wonder whether any difference (or lack of difference) you found might have been due to the luck of where the treated area was placed.
  4. A single replicated trial in a field one year will describe what happened better than it will predict what will happen next time or in another field. An ideal setup would be a set of trials with three or four strips in each of three or four fields, with fields and locations in fields fairly representing the area over which we want to predict results. At the least, you need to understand that a trial done in the best, most uniform soil that you farm will predict most accurately for the best, most uniform parts of the fields that you farm.
  5. Apply treatments carefully, in strips wide enough to take full combine passes for yield, and be sure to remember where you put what. Planting a 6-foot length of PVC pipe (use a soil probe to plant it) in the middle of the end of treated strips is a good way to know where strips are. If driving over the crop is a concern, you will want to drive over the untreated strips in order to duplicate the damage on treated strips. Randomizing the order of strips (by tossing a coin, with heads meaning treat the first strip of that pair and tails meaning treat the second) is good, but alternating treated and untreated strips doesn't usually cause built-in bias, especially if the field is relatively level and uniform. A small sketch of one way to make this coin-toss layout appears just after this list.
  6. Record yield carefully in each strip. Yield monitors usually work well for such comparisons, since treatments like these seldom affect grain moisture or test weight. Some people like the direct measurements provided by weigh wagons.
  7. Don't fret too much about statistics; if the trial was done well and without bias, just averaging yields from treated and untreated strips will provide a reasonable answer. All statistics can do is put some probabilities to the results, telling us how much confidence we can have that any difference was actually caused by the treatment. A second sketch after the list shows this kind of averaging, along with a simple paired test.
  8. Use all of the results you get, unless there was an obvious mistake. Unfortunately, throwing out strips that don't agree with a bias is a primary way to "cook" results, and it is probably better not to even have replications (or a trial at all) if there's any possibility that this could happen. Why go to the trouble of doing on-farm comparisons if we already "know" what the results will be?
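
For those who want a concrete starting point, here is a minimal sketch of the coin-toss layout described in point 5. It is only an illustration; the number of pairs is a placeholder, and a real coin works just as well.

```python
# Minimal sketch: randomize which strip in each pair gets the treatment
# (the "coin toss" in point 5). The pair count is a placeholder; use
# four to eight pairs, as suggested in point 3.
import random

n_pairs = 4
for pair in range(1, n_pairs + 1):
    toss = random.choice(["heads", "tails"])
    if toss == "heads":
        first, second = "treated", "untreated"
    else:
        first, second = "untreated", "treated"
    print(f"Pair {pair} ({toss}): first strip {first}, second strip {second}")
```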

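Once yields are in hand, the arithmetic in points 6 and 7 is just as simple. The sketch below, with made-up yield numbers standing in for real strip data, averages the treated and untreated strips, reports the average difference, and optionally attaches a probability with a paired t-test; the scipy library is assumed to be available for that last step.

```python
# Minimal sketch of summarizing a strip trial (points 6 and 7).
# The yields below are invented numbers, not results from any real trial.
from statistics import mean

from scipy import stats  # only needed for the optional paired t-test

treated = [52.1, 49.8, 55.0, 51.3]    # bu/acre, one value per pair
untreated = [50.4, 49.9, 52.6, 50.1]  # bu/acre, the matching strips

diffs = [t - u for t, u in zip(treated, untreated)]
print(f"Average treated yield:   {mean(treated):.1f} bu/acre")
print(f"Average untreated yield: {mean(untreated):.1f} bu/acre")
print(f"Average difference:      {mean(diffs):.1f} bu/acre")

# The paired t-test puts a probability on the result: a small p-value
# means more confidence that the difference was caused by the treatment
# rather than by chance.
t_stat, p_value = stats.ttest_rel(treated, untreated)
print(f"Paired t-test p-value: {p_value:.2f}")
```
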
If anyone does strip trials such as those described in this article and would like to share them and have some statistical work done on them, I am willing to respond to such requests. Simply e-mail me the data on a spreadsheet (Excel), along with a description of what you did. I'll provide input to those who send such data and will share results with others only with permission.

As I see it, the emergence of a network of those willing and able to conduct such applied research comparisons will help sustain Illinois as a crop-producing power. We have the tools and abilities to make this happen. Let's get started.

Emerson Nafziger
