
Ben Peters' Investment View - Man Vs Models

Careful now

Below I’m going to give some thoughts on mathematical models and how I use them in our investment process. I was inspired to write about this a few weeks ago when I stumbled across a public spat from 2010 between two Financial Times columnists, Gideon Rachman and Tim Harford. Gideon is a historian, while Tim is an economist. The argument can be summarized as:

  • Gideon: Mathematical models in economics are useless and dangerous!
  • Tim: No they’re not!

Gideon thinks that economic models aren’t useful, and are potentially dangerous, because there isn’t a body of scientifically tested knowledge against which to compare their results. Tim argues that economic models have improved, and will continue to improve, through trial and error, and that while there are many economists who wield models unwisely, there are numerous honorable exceptions who have successfully applied models in the right circumstances.

I side more with Harford. I’m not an economist, but there are close parallels between the use of models in economics and in investment analysis. As an ex-physicist, I like nothing better than tapping numbers into a computer and trying to draw conclusions from them. But investment is not physics, and if I can conclude almost before I begin, it is that there is one rule that should always be followed when using models in investment:

  • BE CAREFUL!

What is a model?

A model of any type is something that attempts to mimic reality. In the physical world think of a bust of a king, Rodin’s The Kiss, or an Airfix Spitfire kit. All of these try to represent their subjects in some way. Artists might distort reality in their model to get a response from the observer. An engineer wants a model to resemble reality as closely as possible, so that potential problems in a design can be spotted.

A mathematical model (which I’ll just call a ‘model’ from now on) takes inputs in the form of numbers, and gives outputs in the form of other numbers. Usually the idea is that the inputs are things you can observe and measure, and the outputs are numbers that are useful. If you want to know what time you will arrive in Chipping Norton (useful number) when it’s 10am, you’re 100 miles away and you’re driving at 50 miles an hour (things you can measure), you’ll most likely use a simple model to figure it out.
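
To make the inputs-to-outputs idea concrete, here is a minimal sketch of that journey model in code (the function name and the constant-speed assumption are mine, purely for illustration):

    def arrival_time(start_hour, distance_miles, speed_mph):
        # Time taken is simply distance divided by speed (assumes a constant speed).
        hours_taken = distance_miles / speed_mph
        return start_hour + hours_taken

    # Leaving at 10am, 100 miles from Chipping Norton, driving at 50 mph:
    print(arrival_time(10.0, 100, 50))  # 12.0 - an expected arrival at noon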

The difficulty with modelling

In order to be effective, a model needs two things. First, it must represent reality closely enough that, given accurately measured inputs, the output is reasonably close to what will actually be observed in the real world. Second, you must be able to measure those inputs to a suitable degree of accuracy, otherwise even a perfect model won’t give the right answer.

Even in the apparently simple “what time will I arrive” model above, all kinds of issues suddenly appear.

  • I don’t know how fast I’m going to drive over the next hundred miles – I might get stuck behind a tractor!
  • I haven’t got a very good map, and it might be 120 miles, or 80 miles!
  • I’m not sure I set my clock right – is it 10.00, 9.55 or 10.05?!

These are all input measurement problems. But what if the model itself is wrong? If you’ve got an extremely fast car that travels at 99% of the speed of light, you’ll need to use a different model or you’ll be horribly wrong about your arrival time, at least compared to the clock on Chipping Norton town hall*.

For physical systems, such as trains, bridges, buildings, computer processors and so on, we have pretty good models that have been refined to be fit for purpose. That is, they are known to reflect the reality of the specific things they are trying to model closely enough to be useful.

As both Gideon and Tim note, getting to these models is a process of continuous improvement. As a well-known example, Newton’s laws of motion are pretty good on Earth, but not so good for celestial bodies such as Mercury**. It took Einstein to come along and develop relativity to explain that. But even Einstein’s theories don’t describe what goes on inside atoms; for that we need Schrödinger’s quantum physics. The now famous (formerly infamous) Higgs boson is yet another development, discovered while attempting to explain why small stuff has mass (and therefore feels gravity).

Even if we have a model that can reflect a situation accurately, it still might not be that much use. That’s because the sheer number of computations required would be too much even for our now pretty powerful computers to handle. The weather is a good example – meteorological models are necessarily oversimplifications because we need an answer today as to whether it will rain tomorrow. Having the answer in 100 years’ time just won’t do.

The general point is that a model might be good in one situation, and useless in another. In the time-to-arrival example above there’s relatively little danger. You might be five or ten minutes late for your play/dinner/meeting/football match if you get it wrong, hardly life-or-death.

Engineers using models are often really dealing with life-or-death situations. If that road bridge isn’t made strong and stable enough, people could get hurt. So even if they have good models that can be relied on 99.9% of the time, engineers will build the bridge to a greater strength than the models suggest. So here’s a good rule for using models in the real world:

  • Leave a margin of safety.

The problems with modelling in investment (& economics)

To my mind there are three key issues that we need to deal with when using models in investment. The first is that the grounds for dreaming up a model are not all that solid. Newton could think: “I drop an apple and it ALWAYS falls to the ground”. That’s a pretty solid place to start (not taking anything away from his genius in realizing that the apple was also attracting the Earth). In investment, you can’t say something like: “If I invest £50,000 into this type of project, I will ALWAYS get a return of 7% per annum” (fortunately for me, or I’d probably be out of a job). An example of a poor basis for a model is the capital asset pricing model, which calculates the value of an asset from the relative volatility of the asset’s price. That figure is somewhat abstracted from the real determinants of value: the cash flows an investor can expect from the asset.
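
For reference, the capital asset pricing model takes the form sketched below: the required return it produces (which would then be used to discount or price the asset) depends on beta, a measure of the volatility of the asset’s price relative to the market, and the asset’s cash flows appear nowhere in it. The variable names and figures are mine, for illustration only.

    def capm_required_return(risk_free_rate, beta, market_return):
        # CAPM: required return = risk-free rate + beta x (market return - risk-free rate).
        # The only asset-specific input is beta, a price-volatility measure -
        # nothing about the cash the asset actually generates.
        return risk_free_rate + beta * (market_return - risk_free_rate)

    # For example, with a 2% risk-free rate, a beta of 1.2 and an 8% expected market return:
    print(round(capm_required_return(0.02, 1.2, 0.08), 4))  # 0.092, i.e. a 9.2% required return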

Second, parameters are hard to measure, particularly economic figures (look at the trouble with measuring GDP or inflation). Even at the level of an individual company, firms all apply the accounting rules slightly differently, and the rules change over time. This can be mitigated to an extent by understanding those rules, but even then the accounts are an amalgamation of thousands, millions or billions of data points depending on the company.

Third, and I think most importantly, in investment we are always dealing with the future. The future is never quite here. We can measure what happened in the past with a great degree of accuracy, but we cannot assume with any degree of certainty that it will be repeated in the future. Yet it is in the unknown future that our investment performance will be observed. Essentially investment is an experiment we will only get to do once! In statistical parlance, investment is more probabilistic than deterministic.

Why I use models in investment

So if models in investment are so tricky, why do I use them? The answer for me is because they convert observations into actions. In deciding where to invest our clients’ assets there are an almost uncountable number of factors that can be taken into account, and ultimately we need to take some of these, come up with a plan of action and execute it. The models I use facilitate this process.

By reading and digesting the investment greats, investment and economic theory, and developing knowledge in relevant areas such as accounting, I think we have developed a set of models that are useful for this purpose. It is absolutely essential that these models are used appropriately though.

I am not saying that models are necessary to be a successful investor. Warren Buffett thinks that if you need to use a spreadsheet to analyze/value a company, then it’s too complicated and you shouldn’t invest (although he has done the odd discounted cash flow calculation in his time I believe, and has an intuition for what the results might be). I personally find models provide a lot of insight that I couldn’t get from looking at the numbers or reading annual reports alone – perhaps that is to do with my science background. So I think it’s a tool that can be used or not, as suits the individual.

Some ways I reduce modelling risk

Given the difficulties I’ve outlined, I will repeat the rule I laid out at the top:

  • BE CAREFUL!

There are a number of ways of being careful with models and therefore reducing the risk involved in using them.

Compare with other models

The output of a model might be one number, for example “the value of Bencorp PLC is £10.00 per share” (I don’t really look at stocks this way, but it’s a good illustration). If I’m estimating a figure like this, I will make sure that I use more than one model, in case some peculiarity makes the figure unusually high or low. If another model estimates the value of Bencorp at £9.50 then I have more confidence that my models aren’t completely crackers; if it estimates £50.00 a share then I know that one or both of my models are rubbish and I can do something else. Another benefit of having a range of estimates is that it is possible to aggregate them and derive a better estimate, with an error range.
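
As a rough sketch of that aggregation step (the Bencorp figures and the choice of a simple mean and standard deviation are mine, purely for illustration, not a description of how our own models are weighted):

    from statistics import mean, stdev

    # Hypothetical per-share value estimates for Bencorp PLC from different models.
    estimates = [10.00, 9.50, 10.40]

    central = mean(estimates)   # a combined central estimate
    spread = stdev(estimates)   # a crude error range

    print(f"Value roughly GBP {central:.2f} +/- {spread:.2f} per share")
    # Had one model said GBP 50.00, the spread would balloon - a signal that
    # at least one of the models is rubbish and needs revisiting.

In other words…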

Estimate how wrong you might be

If your inputs are uncertain, your output will be uncertain. A scientist wouldn’t publish a figure without putting a range of error on it, but for some reason you don’t often see error estimates in finance, despite it usually being a much more uncertain environment. I look at how ‘fuzzy’ my estimates are by using multiple models (as above), and also by seeing how the output changes if I vary the inputs. One way of doing this is scenario (or “what if”) analysis. Another method I like using is a statistical technique called Monte Carlo analysis.
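
As a sketch of the Monte Carlo idea (a generic illustration, not one of our actual models): take a deliberately simple valuation – here a growing perpetuity of cash flows – draw the uncertain inputs from assumed distributions, and look at the spread of outputs rather than a single number. All of the figures and distributions below are hypothetical.

    import random

    def perpetuity_value(cash_flow, growth, discount):
        # Illustrative valuation: a growing perpetuity, value = CF x (1 + g) / (r - g).
        return cash_flow * (1 + growth) / (discount - growth)

    random.seed(1)
    values = []
    for _ in range(10_000):
        cf = random.gauss(100, 10)      # uncertain current-year cash flow
        g = random.gauss(0.03, 0.01)    # uncertain long-run growth rate
        r = random.gauss(0.09, 0.01)    # uncertain discount rate
        if r > g:                       # skip draws where the formula breaks down
            values.append(perpetuity_value(cf, g, r))

    values.sort()
    low, mid, high = (values[int(len(values) * p)] for p in (0.05, 0.5, 0.95))
    print(f"Central estimate {mid:.0f}, with a 5th-95th percentile range of {low:.0f} to {high:.0f}")

The point is not the central number but how wide the range around it is – that width is a measure of how ‘fuzzy’ the estimate really is.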

An example where we do see an error estimate in finance is the Bank of England’s much-maligned fan charts. People, particularly the media, scoff at the Bank’s lack of accuracy in forecasting, but the Bank themselves are saying that they can’t be sure what’s going to happen in the future. I find that comforting, in a way.

Don’t try to model something that’s unmodelable

If something’s way too uncertain, just don’t bother. With individual stocks, I am much more comfortable modelling what we define to be ‘quality’ companies (those that have high, stable returns on capital and good long-term reasons why those returns might stay there) than, say, startups or turnaround situations. The cash flows from the latter are highly uncertain, making modelling highly uncertain too. They might make good investments, but more weight should be given to simpler rules of thumb*** and qualitative analysis if the expertise is available.

Leave a margin of safety

So you’ve done your modelling; now to execute. Much like the engineers designing a bridge, if your desired rate of return is 8%, don’t invest at a price that gives you 8.1% with an uncertainty of 3%. Invest at 11%, 12%, higher.... and if your investment rallies to the point where the prospective return falls close to that desired rate, it’s time to think about selling.
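
In code, that rule of thumb might look something like the sketch below; the function and the numbers are mine, to illustrate the principle rather than to prescribe an exact rule.

    def sufficient_margin(modelled_return, uncertainty, required_return):
        # Only invest if the modelled return still clears the required return
        # after knocking off the estimated uncertainty.
        return modelled_return - uncertainty >= required_return

    print(sufficient_margin(0.081, 0.03, 0.08))  # False: 8.1% with 3% uncertainty is not enough
    print(sufficient_margin(0.120, 0.03, 0.08))  # True: 12% leaves a genuine margin of safety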

A final thought, on risk

The above are ways of reducing the risks involved in the act of modelling. But we try to reduce the risks involved in investing before even getting to the modelling stage, by asking questions – both numeric and qualitative in nature – of individual investments. While it is probably true that every investment has its price, there are shares that I personally wouldn’t own at any price due to the fundamental risk involved in owning them. This assessment would probably trigger the ‘unmodelable’ alarm anyway, but more generally if I wouldn’t own it, there’s no need to model it. Problem solved.

Ben Peters, 1st March 2013

*This is because of the phenomenon of time dilation.

**General relativity correctly predicts the advance of the perihelion of Mercury’s orbit, where Newton’s theory of gravity failed.

***The psychologist Gerd Gigerenzer advocates using appropriate rules of thumb as an effective alternative to complex decision-making rules.
