“Every agronomic decision is a good one for someone” is a line I saw recently that reminds us that being “entrepreneurial” is highly valued in today’s business world, rewarded in some cases by large amounts of venture capital invested in startup companies. That’s as true in crop agriculture as in any other business, and it means that startups are under pressure to find or create niches and products to fill them, and to demonstrate that these products are widely sellable. The “grand prize” can be sale of the startup to a larger company, yielding a large return for investors and a chance for the entrepreneur to reap a large financial reward and perhaps move on to bigger projects.
The result is an increasing number of novel crop inputs, accompanied by creative marketing campaigns. Such campaigns often employ the trappings of science to help build trust in these inputs and in those who develop them. Photos of serious-looking people examining flasks or test tubes while dressed in white lab coats populate websites, especially for startups that are developing and selling novel inputs such as microbes, or products sold under the less specific labels “biologicals” or “biostimulants.” Companies tend to point to field trials they have in their database, and a selected set of such results may be available to potential customers. Testimonials are very common, and almost every such website includes mention of the positive ROI (return on investment) that buyers can expect from use of the product.
Unsurprisingly, company websites tend to highlight data selected for the purpose of supporting sales—it would make little sense from a marketing standpoint to show all of the data. A few decades ago, it was common for companies to engage university researchers to conduct trials on novel products, and for companies to use such results (at least the favorable ones) to help support sales. There may have been cases in which results from universities were insufficiently positive to support sales, and a product wasn’t taken to market as a result. But for the most part, university testing was used to demonstrate that the company had enough confidence in the product that it supported public research on it even without knowing what such research might show.
Novel product marketing
That approach is rare today. Instead, companies hire scientists, both their own and from third-party research companies, to test new products. If university researchers get involved, it’s often after the product is on the market, so any results tend to have little effect on marketing. Even if a neutral scientist believes that a novel product can’t possibly work as claimed and decides to do their own testing and to make the results public, such findings are easily explained away by the company or simply ignored. A trial or two at a university carries little weight compared to the hundreds of comparisons the company assembles and uses to develop the marketing message.
Many see this as no big deal, citing the fact that there is little or no evidence that any harm has come from marketing of products based on uncertain, or undisclosed, results. After all, “buyer beware” is an established principle of a market economy, and if a company’s product turns out to be ineffective, the market for it will disappear. For products such as seed, fertilizers that provide major nutrients, fungicides, insecticides, and herbicides, where failure to work as advertised is often visible, that’s a valid argument. Buyers of such products are also protected by official standards the products must meet: seed has to germinate, fertilizers have to contain nutrients in the quantities stated, and pesticides have to be proven both effective and safe to use. Sellers of such products would face major losses and possible lawsuits if they didn’t meet these standards, especially if failures are visible and represent a loss to the buyer.
Many newer products, especially those developed by startups, are not marketed to address or prevent known pest problems or nutrient deficiencies. Instead, they are promoted to provide a boost in such complex processes as plant or root growth, nutrient uptake by plants, interactions among plants, soils, and microbes, and in some cases “soil health.” A skeptic might wonder if such products aren’t “designed to fix problems that the plant (or the grower) didn’t know existed.” One near-universal characteristic of such “problems” is that they are not diagnosable or visible in the field, at least not yet. We may well see test kits at some point that purport to show some sort of imbalance in soil microbial populations (as an example), with results used to sell microbial fixes. At the moment, however, there are few if any confirmed indications that such problems exist for crops growing in well-managed fields.
This is not a new phenomenon—there have long been things sold as crop and soil additives with claims of improving plant growth and yield without any indication of the problem that they are supposed to correct, nor how they might work to fix such a problem. What is different today is the near-universal claim that these products resulted from scientific research that revealed (as an example) novel microbes that could boost growth and yield, or fix atmospheric N for the crop. In many cases, the biochemical changes such microbes bring about are also detailed, and in some cases, published in the scientific literature.
How can the grower being offered such novel products know if they actually work, especially when they’re said to fix a problem that isn’t obvious? The standard answer to this question is to “test the product in your own fields” to see if it works. That certainly seems reasonable, but there are reasons to wonder whether it’s a workable approach. That many such products address “problems” that aren’t visible means that we can generally expect no visible evidence that the product does anything.
A large number of companies are developing and offering for sale “biologicals,” a term that usually means microbial preparations. Some of these are for biological control of diseases or insects, and so fall into the category of pesticides. Here we’ll consider mainly those (mostly soil bacteria) with less well-defined activity. Soil contains a huge variety and vast numbers of microbes, which makes it impossible to work out how an individual bacterial species affects plants growing in a field soil. So discovery and initial testing of such microbes is often done on plants grown in a sterile medium such as nutrient solution or sand, to exclude other microbes. While finding out whether a bacterium fixes nitrogen (for example) is possible in such a system, that does not predict how it will work in a field soil when added to the huge population of existing soil bacteria, including species related to it.
Another difficulty in using field tests to evaluate novel products is that any plant response to such products is certain to be affected by weather, soil texture, organic matter, and soil microbial populations, all of which vary across fields, by growing season, and by factors such as tillage and rotation. Any successful (profitable) input has to produce a response that pays for the input, and has to do this consistently enough—high enough in some years to pay for its use in other years and fields when it does little or nothing—to justify routine use. That is a high bar, and with something like a microbe that, even in the best of cases, is likely to produce only a small increase in yield, it is difficult or impossible to conclude from strip trials that the input is going to be profitable.
What if we need only one added bushel of yield to pay for an input and to make a small profit? One issue, as we noted above, is that a trial or two in one year cannot predict performance in that field or in other fields the next year or the year after that. Another major issue is that no two field strips yield exactly the same, and such variability operates in every case to cast doubt on whether we actually found what we think we found. This is not easy to get a handle on, but with a strip trial we’re asking a small sample of strips in a field to represent the whole field (and maybe other fields as well), and even after we apply statistics in order to see how firm our conclusion is, we can never be certain that a different sample of strips would have given the same answer.
A strip trial example: let’s pretend
Let’s use an example of applying a microbial product (“MicX”) as a foliar spray to corn at stage V6. We find a uniform part of the field by using yield monitor data from two years ago, when the field was last in corn. The six strips we selected had the yields in the table below two years ago, and we’ll imagine that they will be exactly the same this year. [These yields are real ones from an Illinois field.] We pair the strips into three pairs and apply MicX to one strip in each pair, chosen by flipping a coin: strips 1, 4, and 5 are treated with MicX. Let’s pretend for this exercise that MicX always increases yield by 1.5 bushels per acre (if we actually knew this, there would never be a reason to test the product, but we’re pretending here).
The difference coming from treatment is 2.2 bushels per acre, more than the 1.5 bu/acre we know came from treatment because the treatment effect was added to yields that weren’t consistent: the first pair of strips shows a +4.5-bushel difference, the second pair -1.0 bu, and the third pair +2.8-bushel yield response to treatment. If we analyze these results statistically (this can be done using Excel), after deciding that we need to be 90% sure that the treatment effect was real (“significant”) and not coming from random chance, the 2.2-bushel difference turns out to be “not significant.” “Not significant” never means that the treatment did not affect yield (in this case we know it did), but it means that the variability among strips was enough to keep us from being as sure as we need to be that the treatment effect actually separated out from the (variable) yields in the strips.
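The paired comparison described above can be checked with a short calculation. The sketch below (Python, standard library only; not the article’s actual spreadsheet) runs a one-sided paired t-test on the three per-pair differences as printed (+4.5, -1.0, and +2.8 bu/acre). Note that these rounded differences average about 2.1 bu/acre, slightly off the 2.2 figure above, which presumably reflects the unrounded strip yields.

```python
from math import sqrt
from statistics import mean, stdev

# Per-pair yield differences (treated minus untreated), bu/acre,
# as printed in the text; rounded, so the mean comes out near 2.1
diffs = [4.5, -1.0, 2.8]

n = len(diffs)
d_bar = mean(diffs)            # average treatment difference
se = stdev(diffs) / sqrt(n)    # standard error of the mean difference
t = d_bar / se                 # paired t-statistic, n - 1 = 2 df

# One-sided critical t value for 90% confidence with 2 degrees of freedom
T_CRIT_90 = 1.886

print(f"mean difference = {d_bar:.1f} bu/acre, t = {t:.2f}")
print("significant at 90%?", t > T_CRIT_90)
```

Run as written, this gives a t value of about 1.29, well short of the 1.886 needed, so the observed difference is “not significant” even though we built a real 1.5-bushel effect into the data.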
There is always a temptation to look at data like this and do some “post-harvest correction” to show that a treatment really did increase yield, on the theory that we must have made some sort of mistake that we can fix once we see the numbers. Here, we notice that the middle pair of strips showed lower yield with treatment than without, so we could just ignore that pair, figuring we must have treated the wrong strip or that there must have been “damage” in the treated strip. That would increase the yield added by treatment to 3.85 bushels, and would mean that we are 90% (actually 92%) certain that the difference was real. Still, this was already a very small trial in a very uniform field, with only six strips, and now we have whittled it down to four, for no reason other than that we didn’t like the results in one pair. It’s not difficult to see how “fixing” data can turn such field studies into a useless exercise. If we “already know” a product works, we might as well skip the studies and use it, perhaps comparing the yield we got with it to the yield we imagine we would have gotten without it; in other words, take a testimonial approach.
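The effect of throwing out the unfavorable pair can be seen in the same kind of calculation. The sketch below (Python, standard library; a continuation of the same hypothetical exercise) reruns the one-sided paired t-test with the middle pair discarded. Using the rounded differences as printed (+4.5 and +2.8 bu/acre), the apparent gain comes to about 3.65 bu rather than the 3.85 quoted above, which presumably reflects unrounded yields, but the flip from “not significant” to “significant” is the same.

```python
from math import atan, pi, sqrt
from statistics import mean, stdev

# Per-pair differences with the "inconvenient" middle pair discarded
diffs = [4.5, 2.8]

n = len(diffs)
d_bar = mean(diffs)
se = stdev(diffs) / sqrt(n)
t = d_bar / se                 # now only n - 1 = 1 degree of freedom

# With 1 df, the t distribution is the Cauchy distribution, so the
# one-sided confidence level has a closed form: 0.5 + atan(t) / pi
confidence = 0.5 + atan(t) / pi

print(f"mean difference = {d_bar:.2f} bu/acre, t = {t:.2f}")
print(f"one-sided confidence = {confidence:.1%}")
```

Discarding one unfavorable pair raises the apparent confidence to roughly 93% (close to the 92% quoted above from unrounded data), which shows how selectively “fixing” data can manufacture significance.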
Strip-to-strip variability, along with small or inconsistent treatment effects, combines to make “luck of the draw” in strip assignment more important than product performance in most such studies. In fact, of the eight ways to assign treatment to one strip in each pair in this exercise, two result in “significant” differences (+3.0 and +3.4 bu/acre) and the other six do not, with the treatment differences of those six ranging from -0.9 to +2.2 and averaging +0.8 bushels per acre. If strip yields had been more variable, as they would be in nearly every field, it is even less likely that any difference would have been significant; even if by chance (or design) treatment got assigned to the highest-yielding strip in each pair, the size of the treatment advantage needs to be relatively uniform among pairs to reach significance. We never know (although we might try to guess) which strips will be the highest-yielding in each pair when we assign treatments, but if we did the trial eight times in the same uniform field, using all possible assignments of treated strips, at least one “significant difference” would be likely to appear, even if we forgot to apply the treatment or the treatment did absolutely nothing.
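The “luck of the draw” point can be illustrated with a tiny simulation. The sketch below (Python, standard library) uses hypothetical strip yields of my own invention, not the article’s data, gives the treatment zero true effect, and enumerates all eight ways of assigning “treatment” within three pairs. Even with a do-nothing treatment, one of the eight assignments comes out “significant” at the one-sided 90% level.

```python
from itertools import product
from math import sqrt
from statistics import mean, stdev

# Hypothetical no-treatment strip yields (bu/acre) for three pairs;
# made-up numbers for illustration, with modest strip-to-strip variation
pairs = [(203.0, 200.0), (202.5, 200.0), (201.3, 200.0)]

T_CRIT_90 = 1.886  # one-sided critical t, 90% confidence, 2 df
significant = 0

# Each assignment picks which strip in every pair is "treated";
# the treatment itself does nothing, so yields are unchanged
for assignment in product([0, 1], repeat=3):
    diffs = [(a - b if pick == 0 else b - a)
             for pick, (a, b) in zip(assignment, pairs)]
    t = mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))
    if t > T_CRIT_90:
        significant += 1

print(f"{significant} of 8 assignments look 'significant' at 90%")
```

With one of eight equally likely assignments declaring this inert “treatment” a winner, a single trial carries a built-in chance of a false positive, and repeating trials while keeping only the favorable ones makes false positives close to inevitable.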
If a treatment effect is larger, then finding a significant difference is more likely, but not a sure thing. Doing such a comparison in three or four different fields in the same year might be more likely to extract a significant difference, but only if the treatment actually had the same effect in the other fields as it did in this one. Increasing the number of pairs of strips in a trial might also make finding a significant difference (if there is one) more likely, but this also provides more pairs to throw out if we don’t like them, in order to end up with favorable results.
To test or not to test?
The purpose of this exercise is not to discourage on-farm testing or to dismiss novel products that might be on the market, but to illustrate how difficult it is to get treatment effects to stand out against the “background noise” of variability within and between fields, especially when any treatment effect is small, and when soils and weather are likely to affect how (and if) a treatment works. Many of the yield claims made for products might have originated with field comparisons, but the process by which those become the “company line” is not clear. If claims exceed what we would expect from a certain product, we should see results of careful (and replicated) strip trials in our own fields before we make a commitment to buy and use the product: the burden of proof should be on the seller. If there is no opportunity to do that, not using the product is a good decision. Even if we trust those who sell such products, we have seen here how difficult it is to reach firm conclusions based on unbiased testing of products that offer solutions to problems that are difficult to describe.
Higher crop prices mean that it takes less yield increase to pay for novel inputs, which seems to have triggered a lot of suggestions that 2021 is a good year to try some of these. While it may take less yield increase to pay for them, it’s worth remembering that crop price has no influence on whether or not such products actually work, and that the amount of money lost if they do nothing is the same no matter the crop price. Regardless of whether we ever see compelling evidence that they add yield consistently, it seems likely that some of them will end up being used, probably as seed treatments. Some commercial seed treatments already contain microbes, in some cases as a way to protect seed from attack by other microbes, and in other cases to produce crop growth responses. More are on their way.