
Wine competitions, judging, and blind luck

Or, as the co-author of a new study told me: “Consumers should disregard results from wine competitions, because it’s a matter of luck whether a wine gets a gold medal.”

That’s the conclusion of Robert Hodgson, a winemaker and statistician whose paper (written with SMU’s Jing Cao) is called “Criteria for Accrediting Expert Wine Judges” and appears in the current issue of The Journal of Wine Economics. It says that those of us who judge wine competitions, including some of the world’s best-known wine experts, are ordinary at best. And most of us aren’t ordinary.

Because:

… [M]any judges who fail the test have vast professional experience in the wine industry. This leads us to question the basic premise that experts are able to provide consistent evaluations in wine competitions and, hence, that wine competitions provide reliable recommendations of wine quality.

More on the study and what it means, after the jump:

The report is the culmination of research started at the California State Fair wine competition at the end of the last decade. The competition’s organizers wanted to see if judging was consistent; that is, did the same wine receive the same medal from the same judge if the judge tasted it more than once during the event? The initial results, which showed that there was little consistency, were confirmed in the current study.

More than confirmed, actually. Just two of the 37 judges who worked the competition in 2010, 2011, and 2012 met the study’s criteria to be an expert; that is, they gave the same wine the same medal (within statistical variation) each time they tasted it. Even more amazing, 17 of the 37 were so inconsistent that their ratings were statistically meaningless. In other words, presented with Picasso’s Guernica, most of the judges would have given a masterpiece of 20th-century art three different medals if they saw it three different times.
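The role of luck here is easy to demonstrate with a quick simulation. This is a minimal sketch, not the paper’s actual method: the medal cutoffs (gold at 90, silver at 85, bronze at 80 on a 100-point scale) and the Gaussian scoring noise are my assumptions, chosen only to show how a judge tasting the same wine three times can hand out three different medals.

```python
import random

def medal(score):
    # Assumed medal bands on a 100-point scale (illustrative,
    # not the California State Fair's actual cutoffs).
    if score >= 90:
        return "gold"
    if score >= 85:
        return "silver"
    if score >= 80:
        return "bronze"
    return "none"

def triplicate_consistency(true_score, sigma, trials=10000, seed=42):
    """Fraction of triplicate tastings in which a simulated judge awards
    the same medal all three times, given Gaussian scoring noise sigma."""
    rng = random.Random(seed)
    same = 0
    for _ in range(trials):
        medals = {medal(rng.gauss(true_score, sigma)) for _ in range(3)}
        same += (len(medals) == 1)
    return same / trials

# A wine sitting near a medal boundary: even modest noise scrambles the result.
for sigma in (1, 3, 5):
    print(f"noise {sigma} pts -> same medal {triplicate_consistency(89, sigma):.0%} of the time")
```

Under these assumptions, a judge whose scores wander by just a few points awards the same wine the same medal only a minority of the time, which is roughly the pattern the study reports.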

“This is not a reflection on the judges as people, and I don’t mean that kind of criticism,” says Hodgson. “But the task assigned them as wine judges was beyond their capabilities.”

Which, given the nature of wine competitions, makes more sense than many doubters want to believe. Could the problem be with the system, and not the judges? Is it possible to be consistent when judges taste 100 wines a day? Or when they taste flight after flight of something like zinfandel, which is notoriously difficult to judge under the best circumstances?

When I asked him this, Hodgson agreed, but added: “We don’t see an alternative. It is an inherent problem. You just want to see the competitions give the judges sufficient time to do it.”

Perhaps. But my experience, after a decade of judging regularly, is that the results seem better (allowing for this un-mathematical approach) when I judge fewer wines, whether because the competition is smaller or because the organizers have hired more judges. Maybe that’s where the next line of study should go: determining whether judging fewer wines leads to more consistent results.

  • Diane Teitelbaum

Judges should be selected for having a perceptive palate and the professional experience to make the judgment with no personal bias. Untrained consumers should not judge. Wineries have spent a lot of time and money making the wine. The wines deserve a careful examination.

  • http://www.blog.bayareawinesociety.org tom merle

Wrong solution. Yes, smaller would be better. But the criterion for selecting judges, their expertise, is misguided. Regular people, those who buy wine for dinner, should do the evaluating, and you should glean the wisdom of crowds, meaning that more than 14 people should judge a wine to arrive at a meaningful consensus since palates do differ. This is our approach, and the results have been replicated. http://blog.bayareawinesociety.org.

  • Apolon

    I feel like this comes down to a basic problem… wines are not as different from each other as the winemakers and the press make them out to be. For the same varietal under similar growing conditions (even if not necessarily in the same place), there will certainly be differences. I just don’t think those differences are that important, or even that noticeable.

    Judges not being able to recognize those differences, as here, corroborates this for me.

    That said, the Wine Spectator types can land on the same score repeatedly in blind tastings, so maybe I just have an undeveloped palate.

  • http://blog.bayareawinesociety.org tom merle

    Also consider how Barbara Drady/Affairs of the Vine takes this problem into consideration for her competitions.

    “A maximum of 32 wines is tasted per day: four flights of eight wines each. We have arrived at this formula and find that in this format, each wine can be tasted with clarity and concentration without palate fatigue.”

    http://affairsofthevine.com/pinot-noir-shootout-summit-process/

  • http://winecurmudgeon.com Wine Curmudgeon

    Terrific comments all. I’m glad to see someone has tried smaller flights and gotten good results.

I am usually a wisdom of crowds guy, but enough work has been done to show that it doesn’t always work with wine. See especially Robin Goldstein’s tastings for The Wine Trials, where consumers couldn’t tell the difference between a $10 and a $100 bottle of sparkling.

    And yes, given the way winemakers and their bosses focus on a certain style to chase scores, many wines do taste alike, especially when you taste 10 or 12 of them one after the other.

