The three myths of modern risk management in quant analytics, part 2
Myth 1: Volatility is a measure of risk – Volatility is in fact a “byproduct” of risk: it manifests itself only after the risk has already occurred. Volatility, like most other quantitative methods based on time-series analysis, is therefore a “measure of past risk”.
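The backward-looking nature of volatility is easy to demonstrate with a toy simulation (the numbers below are made up purely for illustration): a trailing volatility estimate stays low right up to the day a calm regime breaks, and only catches up as the turbulent days enter its window.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy return series (made up for illustration): a calm regime for
# 500 days, then volatility triples for the final 100 days.
ret = np.concatenate([rng.normal(0, 0.01, 500), rng.normal(0, 0.03, 100)])

window = 60
# Trailing realized vol: at each date t, it uses only the PAST `window` days.
trailing = np.array([ret[t - window:t].std(ddof=1)
                     for t in range(window, len(ret) + 1)])

# On the day the regime breaks, the trailing estimate still reflects
# only the calm past; it catches up gradually as turbulent days
# enter the window.
print(f"vol estimate on the break day: {trailing[500 - window]:.4f}")
print(f"vol estimate 60 days later:    {trailing[-1]:.4f}")
```

The estimate only tells you that risk has risen after the losses have already happened, which is exactly the “measure of past risk” point.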
Myth 2: Correlation is a measure of diversification – Only up to a point. Correlation has serious limitations as a measure of diversification. It measures how much the returns of different assets tend to move in the same direction, but high correlation does not imply that the asset prices themselves move together, and vice versa. One also has to be very careful about which time period is used: there are cases where daily returns exhibit low correlation while monthly returns exhibit high correlation. That is why concepts like “co-integration” and “co-movement” have been introduced.
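The daily-versus-monthly point can be reproduced with a small simulation. In this toy model (entirely made up for illustration), two assets share a slow common factor but also carry mean-reverting day-to-day noise that washes out over longer horizons; the very same return series then shows low daily correlation but much higher monthly correlation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_days = 252 * 10

# Toy model (an assumption for illustration): both assets load on the
# same slow common factor, plus mean-reverting daily noise that
# cancels out over longer horizons.
common = rng.normal(0.0, 0.02, n_days)      # shared daily increments
e_a = rng.normal(0.0, 0.05, n_days + 1)
e_b = rng.normal(0.0, 0.05, n_days + 1)
ret_a = common + np.diff(e_a)               # noise telescopes over sums
ret_b = common + np.diff(e_b)

daily_corr = np.corrcoef(ret_a, ret_b)[0, 1]

# Aggregate the very same returns into 21-day ("monthly") sums.
m = n_days // 21 * 21
monthly_a = ret_a[:m].reshape(-1, 21).sum(axis=1)
monthly_b = ret_b[:m].reshape(-1, 21).sum(axis=1)
monthly_corr = np.corrcoef(monthly_a, monthly_b)[0, 1]

print(f"daily correlation:   {daily_corr:.2f}")
print(f"monthly correlation: {monthly_corr:.2f}")
```

The daily number understates how tightly the two assets co-move over the horizons most investors actually care about, which is why a single correlation figure is a poor proxy for diversification.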
Myth 3: The underlying asset does not matter – This is a true myth, which comes mostly from CAPM. The theory behind CAPM is borderline absurd: it states that the risk of a stock is a purely statistical function of the stock’s Beta and the market risk, totally ignoring the reality of the underlying business. Furthermore, measures like Beta are unreliable, carry a lot of statistical noise, and change structurally over time. Example: IBM in the 80s was a very different company from IBM in the 2000s. The idea that IBM was “born” with its own Beta is nonsensical. And since Beta changes with time, it is hard if not impossible to measure.
The key point is that risk and value are not simple outputs of statistical analysis. They require a true understanding of economic and business fundamentals. The idea that investing can be reduced to running a piece of quantitative software is overly simplistic and certainly overrated.
This is not to say that good quantitative analysis cannot be useful.
As Ivo says, the rub is in the intuition and art required to choose the right inputs.
It’s fun to speculate: if Ben Graham were alive and working today, how might he have adapted his ideas about intrinsic value and margin of safety for individual holdings so that they could generate probabilistic real (after-inflation) outcomes at long-term (rather than just short-term) horizons for a portfolio combining many individual holdings? How would he have allowed for individual risk tolerance, or indifference between the risk-free asset used to determine the margin of safety and uncertain risky returns? And if he couldn’t, would he get published today?
Though he would unquestionably have used his judgement and knowledge of history to try to solve the overall problem, would he have advocated quant techniques to deal with the scale, complexity, and required rigour and consistency of continuously generating return probabilities and constructing and rebalancing portfolios, or would he have suggested it should all, always, be a matter of judgement?
I would consider Ben Graham’s approach quite “quantitative”. He just used indicators of risk other than the statistical definitions advocated by CAPM. He also defined “risk tolerance” as the chance of making a bad investment, not as the amount of volatility to be experienced.
On the other hand, it is also true that CAPM and MPT techniques still require much more “judgement” than commonly believed. In particular, choosing the input assumptions for these models is anything but simple. For example, it is well known that portfolio optimization based on expected returns and correlations is VERY unstable and sensitive to errors in the inputs [something that Richard Michaud has explored extensively]. Given the challenges of choosing the inputs, CAPM and MPT offer only the “illusion” of rigor, something that is easy to forget until one tries to use these methods in practice.
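Michaud’s instability point is easy to reproduce with an unconstrained mean-variance sketch (the inputs are hypothetical, illustrative numbers): with three highly correlated assets, moving half a percent of expected return from one asset to another flips the sign of their optimal weights.

```python
import numpy as np

# Hypothetical inputs for three highly correlated assets
# (illustrative numbers, not real estimates).
mu = np.array([0.06, 0.065, 0.07])      # expected returns
vol = 0.16                              # same vol for all three assets
corr = np.full((3, 3), 0.9)
np.fill_diagonal(corr, 1.0)
cov = corr * vol**2

def mv_weights(mu, cov):
    """Unconstrained mean-variance weights, normalized to sum to 1."""
    w = np.linalg.solve(cov, mu)
    return w / w.sum()

w0 = mv_weights(mu, cov)
# Shift a mere 0.5% of expected return between assets 1 and 2 --
# far smaller than any realistic estimation error.
w1 = mv_weights(mu + np.array([0.005, -0.005, 0.0]), cov)

print("weights before:", np.round(w0, 2))   # roughly [-0.38, 0.33, 1.05]
print("weights after: ", np.round(w1, 2))   # roughly [ 0.33, -0.38, 1.05]
```

The sensitivity is structural: with near-collinear assets the optimizer divides by small differences between them, so noise in the inputs is amplified rather than averaged out.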
To be sure, I think that Ben Graham’s approach also has some severe limitations because it tends to focus solely on the bottom-up view.
It is inevitable that one needs to exercise considerable judgement to assess when a given method, theory, or tool may or may not work at any given time.
The problem is that common quant tools and software simply assume that the recent past can be used as an input to estimate future probabilities. This approach is effectively blind to changes in market cycles, because academics say that in an efficient market there cannot be cycles or bubbles.
Just a comment on volatility. Historical vol is a measure of “past” risk that does not necessarily give us any clue about future risk. Implied vol is a better measure of the “future/market” risk of a particular asset. The relationship between historical and implied vol is a great indicator of what kind of risk the market is pricing into a particular asset; HV/IV is very seldom close to one. Moreover, you can pick the IV for different horizons: week, month, year…
Again, we are facing sentiment indicators here: what Mr. Market is expecting for this particular asset. Usually this is a contrarian indicator, i.e. when HV/IV is too high there is too much complacency; the reverse is also true.
As a personal note, I always run my numbers based on IV because HV alone does not tell me much.
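As a minimal sketch of the HV side of that ratio (the price path and the IV quote below are made up; in practice both come from market data):

```python
import numpy as np

def realized_vol(prices, trading_days=252):
    """Annualized historical volatility from daily close prices."""
    log_ret = np.diff(np.log(prices))
    return log_ret.std(ddof=1) * np.sqrt(trading_days)

# Illustrative inputs: a simulated price path and a hypothetical
# quoted implied vol.
rng = np.random.default_rng(42)
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.012, 252)))

hv = realized_vol(prices)
iv = 0.25        # hypothetical ATM implied vol quote for this asset

print(f"HV = {hv:.1%}, IV = {iv:.0%}, HV/IV = {hv / iv:.2f}")
```

An HV/IV ratio well below one, as here, would mean the market is pricing in considerably more future risk than the recent past has shown.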
When I refer to volatility as a concept, I refer to the whole set of indicators, including implied and realized vol. That also includes the whole vol surface, whether estimated (including all mainstream methodologies) or market-derived. Unfortunately, the whole set of these indicators moves in tandem, with a gap of a few days (and a difference in scale). While for trading purposes it is crucial to follow the vol surface and look for mispricing, in a large institutional setting implied volatilities are as misleading as realized vols. In fact, all the analysis I carry out is based on the VIX index, which is an implied volatility measure across all strikes.
Not necessarily true, and not necessarily that bad either. There are statistical models and software packages that allow you to incorporate your own views into the analysis. The problem is that it is usually very difficult to formulate your views in a manner consistent with the model used; it requires experience with the model, and skill.
The funny bit is that your own views about the future are also based on past information. This is how our brains work: your views about the future are based on your past experience and (hopefully) on the experience of others. A mathematical model does exactly the same – it uses past data to “learn”. Its forecasting power depends on the future being similar to the past and on its ability to quickly “learn” as new information becomes available. The past is never exactly the same as the future, but on average it quite frequently is; statistics is about averages. By the way, the VIX also relies on past data and models to come up with a measure of expected volatility.
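One simple way such models combine past data with your own views is precision-weighted (Bayesian) blending, the idea underlying Black-Litterman-style approaches. A minimal one-asset sketch, with all numbers hypothetical:

```python
# Precision-weighted blending: combine a model/prior estimate of
# expected return with a subjective view, each weighted by its
# confidence. All numbers below are hypothetical.

prior_mu, prior_var = 0.05, 0.02**2   # model/equilibrium estimate
view_mu, view_var = 0.08, 0.04**2     # your view, stated with uncertainty

# Posterior mean: inverse-variance weighted average of prior and view.
w_prior = (1 / prior_var) / (1 / prior_var + 1 / view_var)
post_mu = w_prior * prior_mu + (1 - w_prior) * view_mu

print(f"blended expected return: {post_mu:.3f}")   # -> 0.056
```

The hard part the comment alludes to is exactly the `view_var` input: you must state *how confident* you are in your view, in the model’s own units, before the blend means anything.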
I agree with you that volatility is a bad measure of risk, but I have a slightly different argument in addition to those already mentioned. Volatility makes sense as a measure of risk only when put in a probabilistic context. Using volatility as a risk measure is equivalent to assuming that markets follow the normal distribution, which they don’t do very often. This also means that VaR and volatility are very different concepts.
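The gap between volatility and VaR appears as soon as returns are fat-tailed. A Monte Carlo sketch (simulated data, not market data): two return distributions with the same volatility but very different 99% VaR.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
sigma = 0.01    # 1% daily volatility for BOTH distributions

# Normal returns vs fat-tailed Student-t returns, scaled to the same
# volatility (t with df=3 has variance df/(df-2) = 3, hence /sqrt(3)).
normal_ret = rng.normal(0, sigma, n)
t_ret = rng.standard_t(3, n) * sigma / np.sqrt(3)

# 99% one-day VaR = loss at the 1st percentile of returns.
var_normal = -np.percentile(normal_ret, 1)
var_t = -np.percentile(t_ret, 1)

print(f"99% VaR, normal returns:     {var_normal:.4f}")
print(f"99% VaR, fat-tailed returns: {var_t:.4f}")
```

Identical volatility, materially different tail risk: reading VaR off volatility alone silently assumes the normal distribution.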
Correlation is an average measure of co-dependency. It is strange that people put so much trust in a single number without even considering that it is an estimate, usually with quite wide confidence bounds, prone to outliers, changing over time, etc. And, being an average, it does not tell you much about any particular daily outcome. BTW, it also assumes the normal distribution…
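The width of those confidence bounds is worth seeing in numbers. Using the standard Fisher z-transform interval (which itself assumes bivariate normality, the usual caveat):

```python
import numpy as np

def corr_confidence_interval(r, n, z=1.96):
    """Approximate 95% CI for a correlation estimate via the
    Fisher z-transform (assumes bivariate normality)."""
    fz = np.arctanh(r)
    se = 1 / np.sqrt(n - 3)
    return np.tanh(fz - z * se), np.tanh(fz + z * se)

# One year of monthly returns: only n = 12 observations.
lo, hi = corr_confidence_interval(0.5, 12)
print(f"r = 0.50, n = 12  ->  95% CI ({lo:.2f}, {hi:.2f})")
```

With only a year of monthly data, a measured correlation of 0.5 is statistically compatible with anything from slightly negative to above 0.8, which makes trusting the point estimate alone rather heroic.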
An interesting thread…
Yes, there are problems with the current methods of risk management; however, until anything else comes along, it’s all we have!
Risk practitioners and investment managers reading too much into the information is the problem. VaR got very bad press, with people saying it didn’t work, but even that complaint highlights that it was already being wrongly utilised. If you were a fundamental analyst, would you buy a stock based solely on the P/E or P/Bk? Of course not. So why would you manage risk based on a single number? One can also read far too much into a multitude of numbers, which then raises the question: how is this information used?
I remember the low vol days before the crisis, we used to run stress/scenario analysis to determine possible large drawdowns, did that stop managers running portfolios how they wanted… no; did that prior notice of possible meltdown allow managers to position themselves differently… no… we were running those numbers for a long time before the problems.
The main battle of risk management is ensuring effective portfolio construction throughout a full market cycle. Covariance-based analysis works well under normal conditions, and most of the time conditions are normal and risk can be effectively budgeted; it is the times of extreme market conditions that cause issues. Trend analysis using multiple time horizons and models may offer some indication of impending problematic markets, and I believe there is a good link here to using technical indicators.
Practical lessons were learnt: if we want to manage the risk of portfolios we need to get involved; risk management is about constantly challenging the process of portfolio construction. Let’s remind ourselves that it was excessive use of leverage that caused the problems, not the misuse of linear regressions.
The usefulness of a metric can only be considered in the context of its application. I agree with you: neither VaR nor variance can be blamed for anything, since those are abstract statistical constructs and there may be many harmless ways they can be used. However, in your comment you highlighted eloquently the crux of the context-driven problem:
“I remember the low vol days before the crisis, we used to run stress/scenario analysis to determine possible large drawdowns, did that stop managers running portfolios how they wanted… no; did that prior notice of possible meltdown allow managers to position themselves differently… no… we were running those numbers for a long time before the problems.”
I suggest that as a portfolio manager I could not have reacted otherwise, since one could come up with a large drawdown scenario at any point in time. The question is: did you have any indication from the risk tools that specifically then (in a low-vol environment) the large drawdown was more likely to happen? And when the vol started to climb, was this not accompanied by the first losses (particularly in the quant funds), i.e. it was already too late? This is not a criticism of linear regressions or of VaR as a statistical measure. What we have is a hazardous application of inappropriate statistical techniques.
“Yes, there are problems with the current methods of risk management; however, until anything else comes along, it’s all we have!”
There are several alternatives, some of them already deployed at large scale with very good results.