Issue No. 24, Article 5/November 9, 2012
Variety Trials and Choosing Seed for 2013
The last of the 2012 University of Illinois corn and soybean trial data were posted on our website this week: see and download results at vt.cropsci.illinois.edu. Results can be downloaded either as PDF files for printing or as Excel spreadsheet files that allow sorting by yield or other data.
Corn and soybean trials are conducted as regional trials, with four corn regions--North, West Central, East Central, and South--and five soybean regions, numbered from north (Region 1) to south (Region 5). Each corn region has three locations, and each soybean region has either two or three locations. We require that entries be made by region, meaning that companies entering the trials must enter the same variety in all of the locations in a region. We do this because most varieties are in the trials for only one year, and so having data across two or three locations predicts performance more reliably than data from one site. While we firmly believe that results across locations provide a good comparison of different entries in a trial, we also know that how different varieties will perform next year depends heavily on whether next year's conditions are similar to this year's. Since we never know what next year will bring, we never know how well the performance of a variety this year will repeat next year, especially in comparison with other varieties.
Though some may have a hunch that conditions will be unusually dry again next year, looking at yields over the years suggests that unusual growing seasons don’t repeat themselves very often. So our best guess is that conditions next year will be "average," meaning that it makes sense to use data from trials where conditions were more or less "average" this year. We choose our variety testing sites to represent local soils reasonably well, but we do tend to use fields that don’t have drainage or other problems. Like many "better" producer fields, our sites tend to produce yields higher than county averages, but often no better than nearby producer fields.
In years with average (good) growing conditions we use all of the variety testing data and present averages across locations within a region as well as yield for each location. We also include yields averaged across two and three years for each region, though the percentage of entries made in more than one year continues to decline. As an example, in the corn hybrid trial in the North region, only about a third of the hybrids were entered for the second year in 2012, and fewer than one in 10 were entered for the third year.
In the 2012 soybean variety trials, while yields varied considerably among locations within most of the regions, all of the data were included in the report of results. For example, yields in the MG 2 trial in Region 1 (northern Illinois) averaged 51.7, 61.6, and 74.9 bushels at Mt. Morris, DeKalb, and Erie, respectively. Such a wide range raises a question: if we are trying to predict performance for next year, why would we combine results from such different sites?
The answer is found in how well the data line up with one another. That is, when varieties "hold rank" from one site to another, then averaging over those sites still gives a useful number to use for predicting future performance. We might use a term like "compatible" to describe sites where the higher- and lower-yielding entries at one site are also the higher- and lower-yielding entries at another site. Using as many compatible sites as possible generally gives better predictions, so we use them when we can.
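This idea of entries "holding rank" across sites can be sketched with a rank correlation. The yield numbers below are hypothetical, made up purely for illustration (they are not from the trials discussed here); a Spearman correlation near 1 suggests the sites are compatible in the sense described above.

```python
# Sketch: checking whether varieties "hold rank" between two sites.
# Yields are hypothetical illustration values, not actual trial data.

def ranks(values):
    """Rank values from highest (rank 1) to lowest; assumes no ties."""
    order = sorted(range(len(values)), key=lambda i: -values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    """Spearman rank correlation for equal-length lists with no ties."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical yields (bu/acre) for five varieties at two sites
site_a = [62.0, 55.1, 58.3, 51.7, 60.2]
site_b = [74.9, 66.0, 70.1, 63.5, 72.3]

print(spearman(site_a, site_b))  # 1.0 here: perfect rank agreement
```

Even though site_b yields run well above site_a, the entries keep the same order, so an average across the two sites still ranks the entries usefully.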
The best indicator of how compatible sites are is found in a statistic called the coefficient of variation, or CV. In the results found on the variety testing website, the regional corn and soybean summaries each have at the bottom the yield average, an LSD value, and a CV value. (If the difference between two entries is greater than the LSD, then we say the two are "significantly" different; we use LSD 0.25, which means we are 75% confident that the two are actually different.) The CV, which is a percentage, is a kind of quality measure for the trial. In the case of regional averages (given in bold just to the right of the entry names), the CV is a good indicator of how consistent performance was across locations in the region. Each individual location also has an LSD and a CV; in that case the CV measures consistency of performance across the reps within that trial.
Values of CV within well-run trials tend to be in the range of 5% to 10%. Higher average yields tend to lower the CV, while low yields and non-uniform conditions tend to raise it. Values above 15% or 20% for yield often indicate that the trial had problems and that the data may not be very trustworthy--that is, some entries probably "got lucky" by being in better places in the field while others suffered the opposite. Such trials often would not produce similar results if run again under better conditions, so they are not considered to be very predictive of future performance.
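The CV and LSD ideas above can be sketched in a few lines. Note that trial reports compute the CV from the error mean square of the analysis of variance; the simple version below, using the sample standard deviation of hypothetical rep yields, just illustrates the "variation relative to the mean" idea. The yield and LSD numbers are made up for illustration.

```python
import statistics

def cv_percent(yields):
    """Coefficient of variation: std dev as a percentage of the mean.
    (Trial reports derive the std dev from the ANOVA error term;
    this simplified sketch uses the sample std dev directly.)"""
    return statistics.stdev(yields) / statistics.mean(yields) * 100

def significantly_different(mean1, mean2, lsd):
    """Two entry means are declared 'significantly' different
    when their gap exceeds the LSD for the trial."""
    return abs(mean1 - mean2) > lsd

# Hypothetical yields (bu/acre) for one entry across four reps
reps = [182.0, 176.5, 190.3, 185.2]
print(round(cv_percent(reps), 1))

# Hypothetical entry means compared against a hypothetical LSD(0.25) of 7 bu
print(significantly_different(180.0, 172.0, 7.0))  # True: gap of 8 exceeds 7
```

A low CV, as in the rep yields above, says the numbers hang together; a CV above 15% or 20% is the signal that the comparison may not be trustworthy.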
The value of a CV calculated across several trials in a region is often higher than the values of the individual trials. This is because the yield rank of entries at one location may be quite different from another location or locations. This could happen, for example, because a disease to which some entries are susceptible and some are resistant is much worse at one location than at another, or because one location was very dry and entries with drought tolerance did relatively better there than at well-watered locations.
This variability, both within and among locations, was evident in the U of I corn hybrid trials in 2012. Table 3 gives average yields, yield ranges from top to bottom, and CV values for individual trials and for regions in these trials. The Perry location, in the West Central region, was not harvested due to very dry weather and very low yields, and for the same reason none of the three trials in the southern Illinois region were harvested for yield. The two remaining locations in the West Central region yielded differently--New Berlin was under considerable stress, while Monmouth was not. But the regional CV of 10% indicates that these were reasonably compatible locations. In the North, yield levels differed somewhat among locations, but regional results are useful there as well.
Table 3. Summary of 2012 U of I corn hybrid trials.
[Table data not reproduced here: columns give average yield (bu/acre), coefficient of variation (%), and minimum and maximum yields (bu/acre) for individual locations and for the North, West Central, and East Central regions.]
The data in the East Central region don’t look that much different from those in the other regions, but we chose to publish only the Goodfield data to represent this region in 2012. Why, when the regional CV value is not high, would we not use all of the data? It’s a judgment call, based on the fact that yields at Dwight were very low and the CV was high, casting doubt on the usefulness of that data. The trial at Urbana had a moderately high CV, but more importantly, it had some unusual lodging and other problems that made us doubt the usefulness of the results.
This is not to say that university variety trials, even ours at the UI, are better or carry more weight than company trials. Breeders and company agronomists, though, face the same issues in their testing programs that I have outlined here, and they often need to make similar decisions. Company testing programs differ from university ones in that they don't make public how many trials they have, what entries they include besides their own, and how many they choose not to use after a season like 2012. What all of these testing programs have in common, though, is that they are aimed at bringing better-performing varieties to producer fields.
So how do we go about choosing hybrids and varieties after a season like 2012? First, understand that most varieties are backed by a large body of data, from trials run over many sites and several years by the originating seed company. Testing is widespread before release, while university trials typically include entries only after release. Most of the data on performance are held by companies, and company personnel use them, along with some university trial data, to decide what seed to produce and what to sell.
So while university trials are important as a means to compare entries from many different sources and companies, they are not, and should not be, the only or even the major place to look at performance data. For that, find companies you trust that sell hybrids and varieties that you know do well, and use university trials mostly as a way to backstop those decisions. At the same time, know that no testing program can ever produce exact predictions and that there can be surprises--some good but most not--when we put a hybrid or variety in an environment it has not experienced before.--Emerson Nafziger