Larry Cao, CFA Institute, Oct. 20, 2014, from Business Insider, http://www.businessinsider.com/common-mistakes-research-analysts-make-2014-10
CFA Institute: What are some of the things that fundamental analysts and quantitative analysts are each good at?
Myron Scholes: I look at three dimensions. One is forecasting the future, which is the alpha component, the abnormal return component. It takes a lot of skill to be able to forecast the future, and there are a lot of constraints on managers when they forecast that stocks are rich or cheap. How much of the future can they forecast? Is it over a short horizon or long horizon? Most active managers spend their time pursuing alpha or looking for rich and cheap securities within a sector.
The second dimension is risk management. What factors affect returns? Are those factors changing, and why? Quantitative managers do measure and monitor factor risks; they use quantitative techniques to understand factors that affect returns, such as cash flows, earnings, small stocks, or valuation. But most quantitative investors don't really spend much time on how those risks change. Most managers compare their performance to a benchmark. This is relative performance; they ignore absolute performance and risk. And many managers are constrained to stay close to the benchmark, which affects returns.
The third dimension is the intermediation business — I call this “omega.” In the omega world, investors are willing to pay others to take risks from them. Examples here include merger arbitrage or credit quality or spinoffs or corporate reorganizations. Investors pay others to take the risks, [are] willing to give up returns. Fundamental analysts or risk modelers don’t understand the omega business — they don’t have the expertise needed to delve into these opportunities.
What do you think are the common mistakes that analysts tend to make in developing models?
There are three important problems in quantitative science, which apply more generally than just to finance. And these problems apply equally to qualitative analysis and how we build our views and models of the world around us.
One is data mining: You use historical data to build your views or to build models that spot anomalies. The problem is, we build our views and test our views using historical data. What value is this? We might be lucky and take a random sample of past data. But this is unlikely. As a result, our results are biased and have little predictive power. Time series analysis, therefore, is fraught with worries. The correct data can provide a perspective of the future. But, we never know the extent of data mining.
Some claim that with long periods, such as over 50 or 60 years, we can identify risk factors that generate returns. I’m skeptical too, like you. The best defense is to use economic theory to develop a model of why something should be happening, and then garner data to test the model. Even then we must be careful about building a model, for our views are not independent of previous observations.
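The data-mining problem can be made concrete with a small simulation (all of the numbers below are invented for illustration, not taken from the interview): if we backtest enough random signals against the same history, the best one will look predictive in sample even though nothing in the data has any real signal, and that apparent edge evaporates out of sample.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 20 years of monthly "returns" that are pure noise,
# and 500 candidate signals that are also pure noise.
n = 240
returns = rng.normal(0.0, 0.04, size=n)
signals = rng.normal(0.0, 1.0, size=(500, n))

train, test = slice(0, 120), slice(120, 240)

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

# "Backtest" every signal on the first half of history, keep the winner.
in_sample = [corr(s[train], returns[train]) for s in signals]
best = int(np.argmax(np.abs(in_sample)))

# Selection over many tries makes the winner look predictive in sample,
# while its out-of-sample correlation is just another noise draw.
print(f"best in-sample correlation: {in_sample[best]:+.3f}")
print(f"same signal out of sample:  {corr(signals[best][test], returns[test]):+.3f}")
```

Nothing here has predictive power by construction; the in-sample "winner" is purely an artifact of searching 500 random series against the same history.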
Two is the cross section. Cross-sectional data, or panels, rest on assumed clusters. The clustering problem is NP-complete in the sense that once I know the clusters, I can put elements into them, but how many clusters really exist is unknown. If you don't know the clusters beforehand, then you are making an error that could lead to wrong quantitative analysis and false conclusions. It is believed to be easier to put elements into clusters, but be careful. For example, prior to the financial crisis of 2008, the rating agencies assumed that homeowners in Stockton, California, defaulted on their homes independently of those in Las Vegas or Miami. Or that Spain would be in difficulty independently of Italy. Or that diversification in the stock market would provide protection in a crisis. Well, both the time series and the cross section were wrong.
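The cost of a wrong independence assumption can be sketched with a short simulation (a hypothetical one-factor default model; the default probability, pool size, and correlation values are invented for illustration). When a common factor links defaults, the average default rate is unchanged, but the tail of the pool's loss distribution, which is what a AAA rating is supposed to protect against, gets far worse.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(1)
p = 0.05           # marginal default probability of each mortgage (assumed)
n = 1000           # mortgages per pool (assumed)
trials = 5000
thresh = NormalDist().inv_cdf(p)

def loss_fraction(rho):
    """Simulate pool default rates under a one-factor model:
    X_i = sqrt(rho)*Z + sqrt(1-rho)*e_i, default when X_i < thresh."""
    z = rng.normal(size=(trials, 1))      # common factor (the "cluster")
    e = rng.normal(size=(trials, n))      # idiosyncratic part
    x = np.sqrt(rho) * z + np.sqrt(1 - rho) * e
    return (x < thresh).mean(axis=1)      # default rate in each trial

for rho in (0.0, 0.3):
    rates = loss_fraction(rho)
    print(f"rho={rho}: mean default rate={rates.mean():.3f}, "
          f"99th percentile={np.quantile(rates, 0.99):.3f}")
```

Under independence (rho=0) the 99th-percentile default rate sits just above the 5% mean; with correlated defaults it is several times larger, even though every individual mortgage is exactly as risky as before.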
The third problem is that a model by definition is an incomplete description of reality. Every model has an error term. People will reverse-engineer the model and see whether they can use the error to make money against the modeler.
Can you give us an example?
When rating agencies started rating mortgage pools in the United States, they used the time series, the history of defaults on housing. In addition, they assumed that clusters were independent, which provided diversification against defaults. And the third component was that for a mortgage structure to achieve a AAA rating, it needed to contain a certain number of quality mortgages. As a result, mortgage structurers reverse-engineered the error of the model to deduce how to just pass to achieve an AAA rating. They started to put in more and more "stones in the wheat for delivery," putting worse and worse credits into their structures while still just passing the AAA rating. The rating agencies did not adjust their models fast enough to correct their errors.
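The reverse-engineering Scholes describes can be sketched with a toy rating rule (everything below is hypothetical: the `rates_aaa` function, the 80% prime-share threshold, and the default probabilities are invented for illustration, not real agency criteria). Two pools receive the same rating, but the one engineered to just clear the threshold carries far more expected defaults.

```python
def rates_aaa(pool, min_prime_share=0.8):
    """Toy rating model: call the pool AAA if at least 80% of its
    loans are 'prime' (annual default probability <= 2%)."""
    prime = sum(1 for loan in pool if loan["default_prob"] <= 0.02)
    return prime / len(pool) >= min_prime_share

# An honest pool vs. one engineered to *just* pass the same rule:
# exactly 80% borderline-prime loans, padded with very poor credits.
honest = [{"default_prob": 0.01}] * 95 + [{"default_prob": 0.05}] * 5
gamed  = [{"default_prob": 0.02}] * 80 + [{"default_prob": 0.30}] * 20

for name, pool in [("honest", honest), ("gamed", gamed)]:
    expected_defaults = sum(loan["default_prob"] for loan in pool)
    print(f"{name}: AAA={rates_aaa(pool)}, "
          f"expected defaults per 100 loans={expected_defaults:.1f}")
```

Both pools print `AAA=True`, yet the gamed pool carries roughly six times the expected defaults. Any fixed, published threshold invites exactly this optimization against the model's error term.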
Then the defaults occurred. First, the credits were much worse than they assumed. Second, Las Vegas, Stockton, and Miami all defaulted together, so they lacked independence, which would have reduced risk. And third, because they used a short time period to estimate the probability of defaults, there were many more defaults than they ever thought possible. Housing prices fell to a much greater extent than was ever built into their models. I keep these three dimensions of risk of both qualitative and quantitative analysis in mind in thinking and modeling. There are risks in all of the sciences including aspects of our lives generally.
One way of dealing with the time series issue: What if we start from thinking about how the world works and then, based on our experience, put together a model? We would then test it with data and try not to tweak it after the tests. Even then, I would think we may still not be in the clear, because our experience is just like the rest of the data. It's still data mining.
That is right because we all peek. [Laughter] Science is induction and then deduction. We must use induction first before we deduce anything. Our experience is induction, so that’s already data mining. That’s the problem for all science. We need to be careful.
There’s no way around that.
The only way around it is to have a first principles model. There’s no way around it. It’s just the drudgery of doing quantitative analysis. Fundamental and other forms of qualitative analysis have the same three issues. The best way around the problem is to be lucky or to build models where the assumptions are not too onerous and give insights on how to enhance our understanding.
In spirit, it is the same thing.
In spirit, it is the same thing. That’s what makes the investment world interesting.
It’s a very difficult business. Could the way to get out of this be somehow combining the fundamental and quantitative analyses?
I would think so. I think that basically every model has an error. Therefore, if I tell you my model, you can reverse-engineer the error and game against me. In some sense the market protects you. You trade at market prices thinking that the stock is cheap (by your model), but it isn’t. Unless you are perverse, you will make random returns. So if an investor has a model, which is systematic, and prices using that model, he will lose — for someone else will have better intuition or new data that will game against the model.
This is why non-market pricing is so difficult. When I play golf, I have a model of the golf game, which has a much greater error, however, than does the model that Tiger Woods [uses]. So the question is whether the convex combination is better than running the model alone. The purists say, no, you can’t do that. We could tell them that their models are also wrong. [Laughter]
You know the risk model is wrong, but it still gives you a good starting point. Whenever you look at a risk model trying to understand what’s going on, you get a better sense of why this is moving, why this is not moving, and why this probably just reflects the fact that the model is overspecified. At the same time, you still need to have a feel for what is going on.
Intuition is a model, too!
Indeed! Going back to what you said, everybody peeks. Since the global financial crisis, the world has changed in many ways. Some old models seem to have stopped working.
The herd goes on. The herd has a model, and the herd changes its location. That’s what makes science interesting. If we knew everything, [if] everything were static, we’d all give up. We’d be too bored.