We all know about the factor of seasonality, especially as it is distilled into that bit of TV news wisdom, “Sell in May and go away.” The phrase has been around for a long time, supposedly originating with traders in England who would “Sell in May and go away, and come back on St. Leger’s Day.” It referred to the custom of aristocrats, merchants, and bankers who would skip town for the country during the hot months, returning for the St. Leger Stakes, a horse race held in September (source: Investopedia).
It happens to be a fun rhyme that fits the U.S. stock market, minus the St. Leger Stakes reference, but only during the past couple of decades. Years ago, seasonality did not work that way, which is what I explore with this week’s chart.
For years I have employed a file of data I created going back to 1976, a start point chosen so as to have a decent representative sample of what seasonality looks like. I also elected to omit the enormous bear market year of 1974, and its enormous rebound year of 1975, neither of which makes for a good contribution toward what “normal” looks like.
Creating an annual seasonal pattern requires several mathematical steps. One must first chop the data into 1-year chunks of time. Next, one must equalize the years so that each contains the same number of data points. Back in the 1970s there were 253 trading days per year, and now it is 252 due to a change in holidays. And in years like 2012, when Hurricane Sandy shut down trading for a couple of days, there are days missing. All of these require adjustments.
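For readers who want to see the mechanics, here is a minimal Python sketch of that equalization step. It is an illustration of the idea rather than the exact procedure, assuming numpy and a plain list of one year's daily closes; the principle is linear interpolation onto a common grid:

```python
import numpy as np

TRADING_DAYS = 252  # the modern count; 1970s years had 253

def equalize_year(closes):
    """Stretch or shrink one year's daily closes onto a common grid."""
    closes = np.asarray(closes, dtype=float)
    old_grid = np.linspace(0.0, 1.0, num=len(closes))   # the days we actually have
    new_grid = np.linspace(0.0, 1.0, num=TRADING_DAYS)  # the target "ideal year"
    return np.interp(new_grid, old_grid, closes)
```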
I also leave out 1987 entirely, because the height of the peak that year and the depth of the October 1987 crash tend to drown out the voices of the other data. 1987 is not a good example of “normal” market behavior.
Next, each year’s data has to be reset to reflect a percentage change from the start point. Averaging together a year when the DJIA is above 20,000 with years when it is below 1,000 makes no mathematical sense, and so we have to adjust for that.
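Continuing the sketch, the rebasing step is simple arithmetic. The hypothetical rebase_to_percent function below expresses each equalized year as a percentage change from its first close:

```python
import numpy as np

def rebase_to_percent(closes):
    """Express a year's closes as percentage change from the first close."""
    closes = np.asarray(closes, dtype=float)
    return (closes / closes[0] - 1.0) * 100.0
```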
Once each year’s data is fitted in a Procrustean fashion to our ideal year, we can then average all of the years together to get an ideal average pattern of what “seasonality” looks like. But this is only the starting point for doing any meaningful analysis.
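And the averaging itself, again as a rough sketch reusing the hypothetical helpers above, with the excluded years (1974, 1975, and 1987) assumed to be dropped from the input beforehand:

```python
import numpy as np

def seasonal_pattern(years_of_closes):
    """Average equalized, rebased years into one seasonal pattern.

    `years_of_closes` is assumed to be a dict mapping year -> list of
    daily closes, with 1974, 1975, and 1987 already excluded.
    Reuses equalize_year() and rebase_to_percent() from above.
    """
    adjusted = [rebase_to_percent(equalize_year(c))
                for c in years_of_closes.values()]
    return np.mean(adjusted, axis=0)  # one value per day of the ideal year
```

With every year on the same 252-point grid and the same percentage scale, the element-wise mean becomes a meaningful “average year.”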
Throwing all possible years together into one average pattern can miss important information, such as a shift in the nature of seasonality. That is the point behind this week’s chart. In the 1970s and 1980s, there was not much difference among the months of the year. The August and September weakness we are all aware of now was simply not a relevant factor in that earlier period.
This is fascinating, and a sign that we may not understand as much as we think we do about what drives price behavior. Among the theories offered for the summer-to-autumn weakness is that financial liquidity gets tied up in the agricultural futures markets, as money needs to be available to pay farmers for their harvests each fall, taking it away from the stock market. But if that factor was absent for a 20-year period, it is probably not a relevant and enduring one.
Getting to the “why” is much less important than establishing the “is”. And the “is” always trumps the “why”, and also the “should”. The last 20 years’ data show that August and September have become pretty consistently weak months for the U.S. stock market.
Here is a current chart of the DJIA in 2018 versus the pattern of the last 20 years: