Red Wine Bias among Wine Critics?


Red Wine Bias?

Yesterday, The Wine Curmudgeon (Jeff Siegel) posted an interesting article on Expert Scores and Red Wine Bias. He pointed to a study he conducted with “data scientist” Suneal Chaudhary, which analyzed magazine scores of more than 64,000 wines from the 1970s to the present. It showed that, among other things, red wines tend to score above 90 points more frequently than white wines. He concluded that there is more than chance at work and said their study “seems to show” wine critics are biased toward red wines.

I applaud the effort that went into the study. And there may in fact be red wine bias. However, the only explanations the study and article considered for the difference between red and white scores are red wine bias on the part of critics and extra effort/investment by winemakers in their red wines. There are other factors at work which the authors don’t appear to have considered.

Nothing in the study or article indicates that Messrs. Siegel and Chaudhary looked at how wines are actually scored, that is, where the points come from. I cannot say with certainty that there is no red wine bias, but I believe the discrepancy can be explained quite easily if one understands and considers both the structure of wine and the structure of wine scores. Specifically, I believe the difference may largely involve body, length, storing potential and the point scores awarded for those categories.

In 2005, wine educator Clive S. Michelsen published Tasting & Grading Wine. This book is a good, though certainly not the only, guide to how wine scores are tallied. It covers both 20-point systems and the more popular 100-point system. Attention is paid to red, white, dessert, fortified and sparkling wines, with necessary differences in scoring methods noted.

I will only address the 100-point system here, as those are the scores referenced in the article and research. In that system, as explained by Michelsen, all wines start with 50 points. That is completely standard. The goal is to arrive at a 100-point scale which people can easily understand, without giving undue importance to individual elements as might occur if starting from zero.

The other 50 potential points come from tallying a number of factors. For dry, still wines these include appearance (clarity, brightness, hue and color depth), nose (cleanliness, depth/fullness and varietal typicity), taste (alcohol balance, bitterness, body, fruit structure, flavor, length and overall balance) and storing ability. For depth/fullness on the nose, I would use the terms intensity and complexity.
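As a rough sketch of the arithmetic, the tally looks something like this. The per-category numbers in the example are illustrative placeholders, not Michelsen’s actual allocations; only the 50-point base is taken from his system as described above.

```python
# Minimal sketch of a 100-point tally: every wine starts at 50 and earns
# the remaining points category by category. The example inputs below are
# illustrative placeholders, not Michelsen's actual allocations.
def total_score(appearance, nose, taste, storing):
    base = 50  # all wines start here
    return base + appearance + nose + taste + storing

# e.g. 4 (appearance) + 10 (nose) + 22 (taste) + 6 (storing) on top of 50
print(total_score(appearance=4, nose=10, taste=22, storing=6))  # 92
```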

Let’s focus on three elements: body, length and storing ability. Those are the elements which I suspect are responsible for the majority of differences revealed by the accumulated data. (By the way, I believe further data analysis would show dry white wines also score lower than good dessert and fortified wines for the same reasons.) If I am correct, then it’s not red wine bias by critics that is in play, but a scoring system that rewards characteristics which dry white wines (and rosé) less frequently have.

Points for Body

Body can be scored from zero to two points as follows:

Thin – 0 points

Light – 0.5 points

Medium – 1.5 points

Full – 2.0 points

You may not agree that full-bodied wines should receive more weight, pardon the pun, than those with medium body. But that is the way the system is designed. Either way, you will probably agree that, on average, dry red wines have more body than dry whites. This is due to the presence of tannins and generally higher alcohol levels in reds.

On average white wines will have medium body. New World whites probably have full body more often than light, but the opposite is true in the Old World. In contrast, red wines tend to have medium or full body. Relatively few red wines of quality have light body. So, based on body alone, red wines have at least a half-point, perhaps a full-point, advantage.
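To make the arithmetic concrete, here is a minimal sketch of that body scale as a simple lookup; the point values come straight from the list above, and the comparison just restates the half-point advantage described.

```python
# Body points as listed above.
BODY_POINTS = {
    "thin": 0.0,
    "light": 0.5,
    "medium": 1.5,
    "full": 2.0,
}

# A typical dry white (medium body) vs. a typical dry red (full body):
advantage = BODY_POINTS["full"] - BODY_POINTS["medium"]
print(advantage)  # 0.5 points in the red's favor, before anything else is scored
```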

Points for Length

Length is the amount of time good flavors persist in your mouth after you have spat or swallowed the wine. Length can be expressed as short (15 seconds or less), medium, long or very long (45 seconds or more). This duration is a function of the acidity, phenolics, fruit depth and residual sugar in a wine.

When scoring, short wines get zero points. Very long wines can get as much as four points. Other wines will be given one, two or three points based on where they fall in between.

White wines may have more acidity than reds, but they have far fewer phenolics (tannins, etc.). So, the vast majority of dry white wines get their finish from acidity and fruit alone, whereas dry reds have acidity, fruit and phenolics. In my experience, this leads to more dry reds having long to very long finishes than dry whites. Length could easily give red wines a one-point, quite possibly two-point, advantage in published scores.
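A sketch of the length scoring follows. Only the two endpoints (15 and 45 seconds) come from the scale described above; the intermediate cut-offs are my own illustrative assumptions.

```python
# Length points: short (15 seconds or less) earns 0, very long (45 seconds
# or more) earns 4. The intermediate cut-offs below are illustrative
# assumptions; only the two endpoints come from the scale described above.
def length_points(seconds):
    if seconds <= 15:
        return 0  # short
    if seconds >= 45:
        return 4  # very long
    if seconds < 25:
        return 1
    if seconds < 35:
        return 2
    return 3

print(length_points(20))  # a crisp dry white with a modest finish: 1 point
print(length_points(50))  # a structured dry red with a very long finish: 4 points
```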

Points for Storing Ability

Michelsen uses the term “storing ability” purposefully. It includes not just wines that will age (improve over time), but also wines that will hold (maintain their current level of quality for a while). The longer a wine can be stored without a decline in quality, the more points it is awarded.

In this category of scoring, whites are given a little help because they tend to lack phenolics, which are a key element for aging. A red wine must be able to hang in for 25+ years from vintage to get maximum points. A white wine need only go for 15+ years to receive the same number of points.

I am quite sure that a significantly lower percentage of dry white wines than red are made with storing ability as a focus. That is due in part to the lack of phenolic potential, but also consumer tendencies and the nature of many white varietals which are simply at their best when fresh, fruity and/or floral.

This is of huge significance in scoring. Why? Because wines can be awarded up to ten points for storing ability. Therefore, an otherwise perfect wine that has no potential for storage could not be awarded more than 90 points.

Here is the breakdown for whites and reds (years of storing ability / points awarded):

White – 0/0, 1/1, 1.5/2, 2/3, 3/4, 4/5, 5/6, 8/7, 10/8, 12/9, 15/10

Red – 1/0, 2/1, 3/2, 4/3, 5/4, 6/5, 8/6, 10/7, 15/8, 20/9, 25/10

Most wines do not benefit from aging and are best consumed young. And the arrangement above acknowledges that a) red wines are usually aged longer at the winery before release and b) the best of them have better aging potential due to phenolics. Still, red wines are going to score more storing points on average than whites. This could easily make as much as a 5-point difference between a really good white and a really good red.
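Here is a small sketch of that storage lookup, using the year/point thresholds listed above; the helper simply awards the points for the highest threshold a wine clears, and the example restates the roughly five-point gap just described.

```python
# Storage thresholds as listed above: (minimum years of storing ability, points).
WHITE = [(0, 0), (1, 1), (1.5, 2), (2, 3), (3, 4), (4, 5),
         (5, 6), (8, 7), (10, 8), (12, 9), (15, 10)]
RED = [(1, 0), (2, 1), (3, 2), (4, 3), (5, 4), (6, 5),
       (8, 6), (10, 7), (15, 8), (20, 9), (25, 10)]

def storing_points(years, scale):
    """Award the points for the highest threshold the wine clears."""
    return max(points for min_years, points in scale if years >= min_years)

# A very good dry white built to hold about 4 years vs. a very good red
# built to age 25+ years: a 5-point gap from storing ability alone.
print(storing_points(4, WHITE))   # 5 points
print(storing_points(25, RED))    # 10 points
```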

The Importance of Context and Reflection

Point scores were never intended to communicate everything about a wine. They are always published by the critics with some sort of additional information, plus the variety, region, etc. And, while the method of scoring is concrete rather than relative, scores are intended to be used in a relative, but rational, way.

Properly trained critics score each wine independently and systematically, taking into account style, variety and region. Consumers compare wines against each other: 90 points for that one, 93 for this one, and higher is better. But consumers need to do this within categories, not across categories.

It’s silly to ask, “Which is better, this 91-point Sauvignon Blanc or the 93-point Syrah?” Which is better, a 91-point pickup truck or a 93-point minivan? It depends: are you hauling gravel or ferrying your kid’s basketball team?

Scores should be used for apples to apples comparisons. Which is better, the 91-point Sauvignon Blanc or the 93-point Sauvignon Blanc? Probably the latter.

Even so, consumers need to consider their intended use. A 100-point red Bordeaux is a wine with extreme aging potential. It won’t be near its prime drinking window when young. If a consumer is looking for something to drink tonight, buying such a wine young makes no sense. The consumer will pay a lot of money and not get the value of aging. That buyer will be much better off with a 93-point wine which is at its peak that night.

Red Wine Bias?

The study and article I’ve addressed here found red wines tend to score more points than white wines. The only explanations offered were red wine bias and winemaker focus. And the authors had a point of view going in.

This quote was included in the press release(!) sent out by their PR agency(!), “Wine scores have always been controversial, and there has been plenty of anecdotal evidence that they were inherently flawed,” says Siegel. “With this study, which uses one of the largest databases of wine scores ever studied, we hope that the inconsistencies that we’ve found add to the evidence that scores don’t reflect wine quality as much as they reflect the personal taste of the critics who give the scores.”

I don’t claim to have all the answers. I don’t have all the data, nor do I know if the data the authors collected is rich enough to allow conclusive statements about red wine bias. However, nowhere in the article, paper or press release did they mention having investigated the methodology used by trained, professional critics. Perhaps they are using data to support a conclusion, rather than analyzing data and all potential factors to illuminate?

Bias is defined as “prejudice in favor of or against one thing, person, or group compared with another, usually in a way considered to be unfair.” There may be red wine bias among critics. There is certainly bias indicated in this sentence, “we hope that the inconsistencies that we’ve found add to the evidence that scores don’t reflect wine quality as much as they reflect the personal taste of the critics who give the scores.”


Note: Like them or not, scores are here to stay. I’ve previously written articles on how consumers can use scores to best advantage. You can find them here.

Wine Ratings: A Practical Look at Points Scores

Wine Ratings: What’s the Point and Should You Pay Attention?

Wine Ratings: What Do They Mean and How Do I Use Them?

I plan to write at least one more article on the specifics of how all elements of a wine are tallied into scores.

Copyright Fred Swan 2016. Photo: Creative Commons-Amanda Velocet. All rights reserved.

6 Comments

    James Melendez

    Very good article. I review and rate wines that “fit” within my brand. I prefer to self-select and I have to eliminate wines that I will not review: private label, mass market, etc. I would say that for my scores, whites, reds and sparklings are consistent. And I like your closing comment that scores will be around. As long as there are a lot of wines there will be scores.

      fredswan@norcalwine.com

      Thank you, James. I do some self-selection as well. There’s only so much time to write or read, may as well focus on the best or most interesting.

    Joey

    ” I cannot say with certainty that there is no red wine bias, but I believe the discrepancy can be explained quite easily if one understands and considers both the structure of wine and the structure of wine scores.”
    Bias is inherent and universal throughout humanity, and wine reviews are no different. Clearly, there is opportunity for price bias, red wine bias, confirmation bias, and all sorts of other things. Almost surely, there is going to be bias. And study after study illuminates this. Why? Taste is totally subjective, and worse, is completely influenced by our brain’s thoughts at the moment of tasting. People are not to blame. Our evolution is to blame.

      fredswan@norcalwine.com

      Thank you for your comments, Joey. I have to disagree, though. Taste, when referring to the way in which trained professionals taste and evaluate wine, is not entirely subjective. There is a great degree of consistency. I see that consistently, and frequently, when perusing wine scores and also when reviewing the exact same set of wines, in the same place, at the same time, with a number of my colleagues. Our scores and notes will be extremely similar.

    Paul Vandenberg

    Phenolics. Sounds cool. Yet some of the longest-lived wines are vinified to minimize them. Comet Year champers anyone?
    Acid, sugar, alcohol, genes: at least as important.
    I am convinced of the bias. Look at lists and prices from eighty years ago. Riesling was much spendier than Cabernet S. Something about old, fat, white guys who smoke cigars?
    Your article on irrigation vs. dry farming was provoking. Come have a meal and let’s talk in much greater depth (pun intended) about how water makes or breaks a vintage.
