It became clear in the discussion that the situation was not nearly as bad as the figures shown in the presentation made it seem. The implied changes in stock allocation caused by fluctuating risk tolerance would only be somewhere in the neighborhood of 0 to 4%. If this question interests you, those comments are worth visiting.
My attempt to summarize the discussion there is as follows:
I'd say that the way the results were presented at Orlando would suggest to listeners (or, at least, to me) that our ability to measure risk tolerance, or at least to distinguish risk tolerance from risk perception, is very limited. But I can see from this discussion that this isn't the case at all.
It would be very alarming if the risk tolerance measure suggested 20 percentage point changes in stock allocations over time, as that would feed the notion of buying high and selling low, but it is clear from the discussion -- now acknowledged by both sides -- that this would not be a proper way to interpret the results.
The discussion shows that the changes in stock allocation may be somewhere between 0 and 4%. Whether this is even statistically significant (Michael K. asked this, but it has not been answered) may only be of interest to academics. The important point is that these results suggest there is little practical significance to the fluctuations in the risk tolerance measure.
To conclude: the title of this blog entry asks the wrong question, as this study has nothing to do with whether we can measure risk tolerance. What we do learn, at least, is that investors are not being ill-served by fluctuations in the risk tolerance measure that happen to be correlated with the S&P 500.