Wednesday, August 3, 2011

"Reality Retirement Planning: A New Paradigm for an Old Science"

Today's classic withdrawal rate study is Ty Bernicke's "Reality Retirement Planning: A New Paradigm for an Old Science," from the June 2005 Journal of Financial Planning.

A common assumption for retirement withdrawal rate studies, which I've used in all of my own research, is that retirees will adjust their withdrawal amounts for inflation in each year of retirement.  The assumption is that retirees will want to spend the same amount in real, inflation-adjusted terms for as long as they live.
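
To make this baseline assumption concrete, here is a minimal sketch (my own illustration, not from any of the studies discussed) of how a constant inflation-adjusted spending plan translates into growing nominal withdrawals. The 4% initial rate, $1,000,000 portfolio, and 3% inflation are assumptions chosen only for the example.

```python
# Sketch of the standard assumption: real spending stays constant, so the
# nominal withdrawal grows with inflation each year. The numbers (4% initial
# rate, $1,000,000 portfolio, 3% inflation) are purely illustrative.

initial_portfolio = 1_000_000
initial_withdrawal_rate = 0.04
inflation = 0.03

initial_withdrawal = initial_portfolio * initial_withdrawal_rate  # $40,000 in year 1

for year in (1, 10, 20, 30):
    nominal_withdrawal = initial_withdrawal * (1 + inflation) ** (year - 1)
    print(f"Year {year:2d}: nominal withdrawal ${nominal_withdrawal:,.0f} "
          f"(still ${initial_withdrawal:,.0f} in year-1 dollars)")
```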

Ty Bernicke challenges this assumption in a rather significant way.  If he is right, then we are playing a whole different ballgame and the 4% rule falls by the wayside. His argument is that as retirees get older, they voluntarily reduce their spending. They are just not as interested in or able to travel as much, eat out as often, and so on.

I'm not sure if he is right or not, but this is a matter I would like to explore some more, as it is quite important. What percent of the population maintains constant spending?  What percent voluntarily reduce their spending? What percent are forced to increase spending after entering a nursing home or facing large medical bills?  What is the appropriate default assumption? Mr. Bernicke says that reduced spending is true for his clients, which I can fully believe.  People who use financial planners are probably more on top of their finances and may find that they can voluntarily reduce spending.  But I'm not necessarily convinced that this will be true for everyone, or that do-it-yourselfers should rely on the notion that they will not need to spend as much as they get older.

Mr. Bernicke uses evidence from the Consumer Expenditure Survey (CES) to show that those aged 75+ spend less than those aged 70-74, who spend less than those aged 65-69, who spend less than those aged 60-64, who spend less than those aged 55-59.  This particular result seems hard to dispute, though like all of his results, it is based on aggregate numbers.  These are just the averages by age group, but how much variation is there within each age group?

One possible explanation for this reduced spending is the cohort effect: different age groups just happen to spend differently for reasons unrelated to age.  He checks this as well by comparing the 1984 and 2004 CES surveys, and the same pattern of reduced spending at older ages appears in both, which weighs against a purely cohort-based explanation.

In order to argue that these reductions are voluntary, he refers to data on median net worth by age and household income quintile to show that older people have more wealth than younger people within each income quintile. If older people are wealthier but are spending less, he concludes that the spending reductions must be voluntary. Again, these are all still just averages. Jonathan Clements brings up a valid criticism of this, though, in a 2006 Wall Street Journal article. These income quintiles are defined for the whole population, and a much higher percentage (43%) of those aged 75+ fall into the bottom income quintile.  This makes the comparisons somewhat meaningless. Mr. Clements also notes that the median net worth of those aged 75+ is $100,100, but after removing home equity, the median net worth is only $19,205.  This would explain the lower spending levels very well.

Beyond this, since Social Security benefits are indexed to wage growth prior to retirement but only to price inflation afterward, older retirees will naturally have lower real benefits than more recent retirees, which is another reason for lower spending. I haven't used household data very much in recent years, but a paper I wrote as part of my dissertation also shows that poverty rates are higher for the older retiree age groups than for the younger retiree age groups.
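
To see why the indexing difference matters, here is a small back-of-the-envelope sketch. The 4% wage growth and 3% inflation figures are assumptions I am using only for illustration; they are not numbers from the article.

```python
# Rough illustration of why older retirees tend to have lower real Social
# Security benefits: initial benefits grow with wages before retirement,
# but are only adjusted for price inflation afterward. The assumed rates
# (4% nominal wage growth, 3% inflation) are illustrative.

wage_growth = 0.04
inflation = 0.03
years_apart = 10  # compare someone who retired 10 years earlier

# The newer retiree's initial benefit reflects 10 extra years of wage
# indexing, while the earlier retiree's benefit only kept pace with prices.
relative_real_benefit = ((1 + inflation) / (1 + wage_growth)) ** years_apart
print(f"Earlier retiree's benefit is about {relative_real_benefit:.0%} "
      f"of the newer retiree's, in real terms")  # roughly 91%
```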

Getting back to the results of Mr. Bernicke's paper, he then explores the implications of lower spending with a Monte Carlo simulation example.  He assumes 3% inflation and lets retirees increase their spending with inflation while also reducing their real spending as they age.  Nominal spending fluctuates somewhat, but these two effects mostly cancel out so that it stays close to its initial value. This allows the failure rate in this "reality case" example to be 0%, compared to 87% for the traditional case of constant inflation-adjusted spending.
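
As a rough illustration of that contrast, here is a minimal Monte Carlo sketch comparing constant inflation-adjusted withdrawals with withdrawals held flat in nominal terms. This is not Mr. Bernicke's actual model; the return, inflation, and withdrawal parameters are purely my own assumptions for the example.

```python
import random

# Minimal Monte Carlo sketch (not Bernicke's actual model): compare failure
# rates when withdrawals keep pace with inflation versus staying flat in
# nominal terms. All parameters below are illustrative assumptions.
INITIAL_PORTFOLIO = 1_000_000
INITIAL_WITHDRAWAL = 60_000              # a 6% initial withdrawal rate
MEAN_RETURN, STDEV_RETURN = 0.07, 0.12   # nominal portfolio returns
INFLATION = 0.03
YEARS = 30
TRIALS = 10_000

def failure_rate(inflation_adjusted: bool) -> float:
    failures = 0
    for _ in range(TRIALS):
        balance = INITIAL_PORTFOLIO
        for year in range(YEARS):
            withdrawal = INITIAL_WITHDRAWAL * ((1 + INFLATION) ** year
                                               if inflation_adjusted else 1)
            balance -= withdrawal
            if balance <= 0:
                failures += 1
                break
            balance *= 1 + random.gauss(MEAN_RETURN, STDEV_RETURN)
    return failures / TRIALS

print(f"Constant real spending:    {failure_rate(True):.1%} failures")
print(f"Constant nominal spending: {failure_rate(False):.1%} failures")
```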

If we can assume that a retiree's spending stays the same in nominal terms, the initial withdrawal rate can be higher.  Here is a figure I made before with Trinity Study data comparing the inflation-adjusted case with the case of no inflation adjustments.

[Figure: maximum sustainable withdrawal rates by retirement year, with and without inflation adjustments (Trinity Study data)]

With no inflation adjustments, the SAFEMAX (the lowest sustainable withdrawal rate in history) was a little above 5.5%, as experienced by the 1929 retiree.  However, this is a bit misleading, because the Great Depression was also a time of sustained deflation, with prices falling 24 percent between the start of 1929 and the start of 1933; a fixed nominal withdrawal would therefore have bought roughly 30 percent more in real terms by 1933. The January 1929 price level was not seen again until 1943. Thus, even though nominal spending stayed the same, spending in real terms would have grown.  Aside from the deflation case of the Great Depression, we are looking at a SAFEMAX of more like 6.5 percent.  Retirees who plan to reduce their spending as they get older can withdraw more at the beginning.
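
For readers who want to see how such a SAFEMAX would be computed, here is a small sketch of the search over historical retirement years for the constant-nominal-spending rule. The `returns_by_year` data and 30-year horizon are placeholders; a real analysis would plug in actual historical stock/bond return series as in the Trinity Study.

```python
# Sketch of a SAFEMAX search for constant *nominal* withdrawals: for each
# historical retirement year, find the highest initial withdrawal rate that
# survives the full horizon, then take the minimum across years. The return
# data below is placeholder data only, not actual historical returns.

def max_sustainable_rate(returns, horizon=30, step=0.001):
    """Highest initial withdrawal rate (share of starting wealth) that
    survives, with the withdrawal held constant in nominal dollars."""
    best = 0.0
    rate = step
    while rate < 0.20:
        balance = 1.0
        withdrawal = rate  # fixed nominal amount every year
        for r in returns[:horizon]:
            balance = (balance - withdrawal) * (1 + r)
            if balance <= 0:
                break
        else:
            best = rate  # survived the full horizon
        rate += step
    return best

# Hypothetical mapping of retirement year -> subsequent annual nominal
# returns (placeholder values only).
returns_by_year = {1929: [-0.10, -0.25, -0.40, 0.50] + [0.07] * 26}

safemax = min(max_sustainable_rate(rets) for rets in returns_by_year.values())
print(f"SAFEMAX across these retirement years: {safemax:.1%}")
```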

But what is the best assumption to use: constant inflation-adjusted spending, or spending that decreases with age?  This is a big question that I think is still not fully resolved.  I'd like to find a Ph.D. student willing to dig more into the household survey data and to classify different retirees by their spending patterns over time, using surveys that track the same households over long periods.