Grantham argues for quality over value

17 Jun 2010 by Jim Fickett.

Jeremy Grantham argues against buying value stocks, defined as those with a low ratio of price to book value, saying that such stocks (1) are more likely to take big losses (including outright failure) in a downturn, and (2) only outperform if bought at the right time. He prefers quality, defined mainly as stocks with high and stable profitability. These hold their value better in severe downturns and have outperformed the market since 1965.

In April Jeremy Grantham appended to his spring newsletter an essay on investing philosophy, part of a series entitled, “Letters to the Investment Committee”. This one is the lightly edited text of an October 2009 speech, so rather old now, but the material is not dated.

The first main point Grantham makes is the importance of the big trends and how they affect the major asset classes:

I believe the only things that really matter in investing are the bubbles and the busts. And here or there, in some country or in some asset class, there is usually something interesting going on in the bubble business. …

When you buy a stock, because it has surplus assets or a good yield or a great safety margin, you are really making a bet on regression to the mean. We are really counting on the fact that current unpopularity will fade, that the current problems in the industry will dissipate, and that the fortunes of war will move back to normal. Well, as a provable, statistical fact, industries are more dependably mean-reverting than stocks, for individual stocks can on rare occasion, permanently change their stripes à la Apple. (Or is that à l’Apple?) Sectors, like small caps, are more provably mean-reverting than industries. The aggregate stock market of a country is more provably mean-reverting when mispriced than sectors. And great asset classes are provably more mean-reverting than a single country. Asset classes are the most predictable of all: when a bubble occurs in a major asset class, it is a near certainty that it will go away. (A bubble for us is defined as a 2-sigma event, statistical talk for an event that would occur randomly every 40 years under normal conditions, a definition that is arbitrary but at least to us feels reasonable. And we define a “near certainty” as over 90% probable.)
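Grantham's parenthetical definition of a bubble can be checked directly: the "every 40 years" figure is just the one-sided tail probability of a normal distribution, assuming one observation per year. A quick sketch (the annual-observation assumption is mine, not stated in the essay):

```python
from statistics import NormalDist

# Probability that a normally distributed observation lands beyond +2 sigma
p = 1 - NormalDist().cdf(2.0)   # one-sided tail, about 0.023

# With one observation per year, the expected wait between such events
years_between = 1 / p           # about 44 years, close to Grantham's "every 40 years"

print(f"tail probability: {p:.4f}; roughly one event every {years_between:.0f} years")
```

Grantham's 40-year figure is thus a round-number version of the exact calculation; a two-sided definition would give roughly 22 years instead, so he is evidently counting only bubbles (the upper tail).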

So, with all the random vagaries of the real world, it is difficult to predict where an individual stock is going, or what will happen in the markets in the short term, but we can count on bubbles popping, and we can forecast how this will affect major asset classes. He gives an example:

For the record, I wrote an article for Fortune published in September of 2007 that referred to three “near certainties”: profit margins would come down, the housing market would break, and the risk-premium all over the world would widen, each with severe consequences.

So, he says, concentrate on the large asset classes.

So it’s simply illogical to give up the really high probabilities involved at the asset class level. All the data errors that frighten us all at the individual stock level are washed away at these great aggregations. It’s simply more reliable, higher-quality data.

This point is worth keeping in mind whether you agree with the rest of the argument or not.

He then considers two broad groups of stocks, “value” and “quality”, and attacks one form of the value idea – that a low price-to-book ratio means you are getting a bargain. Perhaps, he says, such stocks are cheap for a reason. And the risk is that, in a crash, they are more likely to fail.

a potential weakness of the Graham and Dodd approach, as it is usually practiced, is in its reliance on low price-to–book (P/B) ratios as one of its cornerstones. Low P/B ratios are, after all, the market’s way of saying “these are the assets in which I have the least trust.” It should not be surprising, therefore, that when you have a depression, or nearly have one, that more of these “cheap” companies go bust than is the case for the “expensive” Coca-Colas. …

Normally, a cheap company with lots of assets and a high yield outperforms in a bear market because it’s propped up by the yield that gets higher and higher as its price goes down. These companies almost always end up going down less than the average stock. When there is a really severe recession, however, the dividend starts to get cut and it becomes a little more questionable. And when there is a depression or a crash, then the companies start to get cut – to go out of business – and “value” companies get to take serious pain.

He then shows that companies that were cheap on a P/B basis were hit much harder in the Great Depression than those that were expensive on the same basis.

According to Grantham, value stocks always carry a higher risk of failure, but they do sometimes perform well – provided they are unpopular enough to be really cheap. Once they become fashionable, however, they are no longer cheap, and no longer a good investment.

This state of affairs in which simple value measures outperformed was changed by two events, perhaps forever. First, there was the massive outperformance of “value” from 1973 to 1983 when the cheapest decile of P/B outperformed the market by over 100 percentage points. Second, a few years later a newly arriving wave of statistically well-educated “quants” adopted P/B and small cap as winning factors that should be modeled. Egged on by French and Fama, et al., they tended to assume that these “risk” factors delivered an extra return by divine right, regardless of how they were priced.

These factors in the past had delivered the goods because the “spreads” – the range between large and small cap and between high and low P/B ratios – had been wide. As they became mainstream “risk factors,” and with the popularity from their huge success in the 70s, the ranges narrowed. When the range between Coca-Cola and U.S. Steel on P/B becomes narrow, it can still easily be picked up and modeled but, it will fail to deliver an excess return. Low P/B stocks, or small cap stocks, only outperform when they are priced to do so.

He gives three examples of “value traps” – times when the stocks with lowest P/B were not particularly cheap compared to the rest of the market, and subsequently did not perform well.

Grantham’s answer, of course, is quality, defined as follows:

Quality here is measured in the standard GMO way, using principally the level and stability of profitability and secondarily the level of debt. …

Quality’s close cousin, high return on equity. …

We define “quality” using primarily a high and stable return. I think you would agree that this is a workable definition of a franchise since to be both high and stable means you have the ability to set your own prices. Secondarily, we look at debt. This yields a very uncontroversial list of stocks of the Coca-Cola, Johnson & Johnson, and Microsoft ilk with not even one financial!

He shows that firms with low return on equity lost 95% of their value in the Great Depression, while those with high returns “only” lost 75%. Further, quality stocks have outperformed the market by 40% since 1965.
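The gap between a 95% and a 75% loss is larger than it first sounds. A sketch of the arithmetic (the loss percentages are from the essay; the $100 starting amount is arbitrary):

```python
# Value remaining from $100 after the Great Depression drawdowns cited in the essay
low_roe_remaining = 100 * (1 - 0.95)    # $5 left after a 95% loss
high_roe_remaining = 100 * (1 - 0.75)   # $25 left after a 75% loss

# The high-return firms kept five times as much capital...
ratio = high_roe_remaining / low_roe_remaining      # 5x

# ...and needed a far smaller rebound just to get back to even
low_recovery_multiple = 100 / low_roe_remaining     # 20x
high_recovery_multiple = 100 / high_roe_remaining   # 4x
```

In other words, the high-return firms emerged with five times the surviving capital and needed only a 4x recovery rather than a 20x one – a concrete illustration of why quality holds up better in a severe downturn.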

Overall, Grantham makes a good case (1) for buying into asset classes based on macro trends, and (2) that quality stocks are safe and still provide a good return.

One point is glossed over, however: timing matters for quality too. While Grantham points out that it was dangerous to buy value stocks at certain times, he ignores the parallel issue for quality stocks. That is, did quality stocks outperform the market after 1965 because they were quality stocks, or because 1965 happened to be a good time to buy them? The last graph in the essay shows the cumulative return of quality stocks relative to the S&P 500. Starting at 0% in 1965, the excess return is quite volatile, and has fallen back to 0% (i.e., no outperformance) several times. So, in fact, there are good and bad times to buy quality stocks as well.