Optimeering has been accepted into the incubator Nyskapingsparken, located on the 2nd floor of Bergen's science center Vil Vite. Nyskapingsparken is owned by Bergen University College (HiB), Bergen Technology Transfer (BTO), the Norwegian School of Economics (NHH), Hordaland County, Bergen municipality and SIVA. Through Nyskapingsparken, Optimeering will get access to a large network of contacts and professional business developers, in addition to a creative environment. You are welcome to visit us here any time! (Note: Optimeering can also still be found in Akersgata 1 in Oslo.)
With the large figures being quoted in connection with an Olympics in Oslo, there is a real danger of going number-blind. One remedy is to view the cost relative to other quantities. The estimated public cost of the Olympics is, for example, slightly less than one month's gross salary for every employee in Oslo combined, or a little over three times the market value of Norwegian.
At Optimeering we have visualised the cost of a possible Oslo Olympics to make it easier to digest what it would actually amount to. In the graph below, the area of each rectangle is proportional to the size of the cost.
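The area-proportional idea behind the graph is simple to reproduce. Here is a minimal Python sketch; the figures in it are illustrative placeholders only, not the official estimates discussed above:

```python
import math

# Illustrative placeholder figures (billions of NOK) -- NOT the official
# estimates referred to in the post; they only demonstrate the scaling idea.
costs = {
    "Olympics (public cost)": 22.0,
    "Market value of Norwegian": 7.0,
}

def square_for(value, scale=1.0):
    """Side length of a square whose area is proportional to `value`."""
    return scale * math.sqrt(value)

for name, value in costs.items():
    side = square_for(value)
    print(f"{name}: {side:.2f} x {side:.2f} units (area {side * side:.1f})")
```

The key point is to scale the *side* by the square root of the value, so that the *area* (which is what the eye compares) stays proportional to the cost.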
Business and government buy and create huge reams of analysis day in, day out. Whilst much of this analysis is academically rigorous, it is often of very little use in making better decisions (on the other hand, it is often very useful for buck-passing – "see, not my fault, the analysis told me to do it"). In my experience, this is because we are often asking the wrong things of analysis. Instead of a crystal ball that tells us the future, useful analysis should help us to understand a process, system or market better. Useful analysis should enable us to make decisions today that make a production process cheaper, improve employee satisfaction, lower a transport company's emissions without impacting service or cost, or improve the robustness of a portfolio of power generation assets to changing market conditions.
The first step in this process is to recognise why much of what passes for analysis is actually nothing of the sort, and consequently what you should be looking for when using analytics to help you achieve better results. How often have you heard something plausible-sounding, like:
“Oil prices have jumped because of increased political uncertainty in the Middle East, with many traders stating the geopolitics of oil have rarely been more important”.
“The strength of the German economic recovery in 2010 was a key driver of the increase in gas forward prices in the second half of that year”.
“The lack of bold action by European political leaders carries a big portion of the blame for the surge in European bond yields and has significantly increased the risk of a Euro break up”.
These sound pretty reasonable – and give us confidence we know why something turned out the way it did (and that it can be easily explained). If we could only figure out the key drivers behind this market event or that political change, we could learn from it. If the events occur again, we can then use this knowledge to figure out what will happen next. This type of “analysis by explanation” has long been a cornerstone of economic and market analysis. After all, for any event, we can look back and more often than not identify those 2 or 3 key factors that are responsible, and use them to predict the future. Right?
Well, unfortunately, no. Or perhaps more precisely, even if the explanation were absolutely right, we would have no way of knowing. The event was not part of some large experiment in which we varied all the input conditions in order to figure out the true causes. History – what happened – is the result of many millions of interactions between individuals, companies and other actors, and each event is in many ways unique. The crash of '29 was very different from the events of 2008, and it is impossible to know whether it was the things they had in common or the things they didn't that were important, or whether it was some other complex combination of the two. Or even whether the things they apparently had in common were actually different in subtle but important ways – that is, whether two superficially similar events were in fact not similar at all in any way that mattered.
Analytical explanations tend instead to simply describe rather than explain. This type of analysis is, by definition, done ex-post – that is, after the event itself. It is hard to tell what is explanation, and what is just description of stuff that happened. Take the second example above. It contains two statements: that the German economy recovered in 2010, and that gas forward prices rose in the second half of 2010. Written this way, the example is clearly descriptive. But is it explanatory?
Well, it certainly seems intuitive – economic recovery leading to increased demand for gas and thus higher gas prices. But the argument is essentially circular: the recovery is invoked to explain the higher demand and prices, while those same higher prices are then read as evidence of the recovery. It tells us nothing more than the descriptive version in the previous paragraph – that is, that higher prices followed the recovery. It is filled with common sense, but it is also filled with some common fallacies.
For a while now, the fields of operations research, sociology and systems thinking have increasingly recognised this problem and our predisposition for stories that purport to explain why, even when they merely describe what. Every day, we filter millions of pieces of information through the set of unwritten "rules" that guide us in our daily lives. These rules help us tell whether someone's behavior is threatening or merely boisterous, decide what to wear to work, or predict and evaluate what will happen if we punch someone in a bar.
This "common sense" serves us well on a day-to-day level, and when it doesn't, we simply adjust our common-sense rulebook and move on. Common sense helps us to deal with the fact that we can see and know the world only through our own eyes and mind – for everyone else, we can only observe their outer reactions and interpret them. However, this can create problems – we have a tendency, for example, to explain things as though it were an individual (and, more perniciously, an individual who bears more than a passing resemblance to me) reacting to the world. We say "the market did this", or "a typical trader thinks that", and so on – rather than recognizing reality as something complex and probabilistic (and non-human). These problems lead to a number of fallacies that make us susceptible to "analytical explanations", even when in reality those explanations tell us nothing new. Some of these are:
Creeping Determinism. History is a single shot, an experiment run only once. Because we only have this single shot to go on, we have a tendency to think that what happened had to happen, because it did. Taking an example from physics, we think in a Newtonian way even though we live in a quantum world. In a Newtonian world, armed with the laws of motion and gravitation and a really powerful computer, I could calculate the position of everything at any point in time. Prediction would be easy, and in fact no different from explaining the past. Quantum mechanics (and thermodynamics, and pretty much every related development in physics over the past century), on the other hand, has replaced this Newtonian world view with something essentially probabilistic and, at the level of individual (sub-atomic) events at least, unpredictable. We tend to think in a Newtonian way at a macro level too, even when events are made up of many other component events interacting in complex, nonlinear ways. We thus conclude that what happened had to happen – that A and B had to cause C, and there was no other possibility. As stated eloquently by Watts (1): "…rather than producing doubt, the absence of 'counterfactual' versions of history tends to have the opposite effect – namely that we tend to perceive what actually happened as having been inevitable".
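The single-shot nature of history can be made concrete with a toy simulation. The sketch below is a plain random walk, not a model of any real system; re-running "history" many times shows the wide spread of outcomes that were all possible, of which we only ever observe one:

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

def simulate_history(steps=250):
    """One 'history': accumulate random +1/-1 shocks and return the end level."""
    level = 0
    for _ in range(steps):
        level += random.choice((-1, 1))
    return level

# The history we actually lived through is a single draw from this spread.
outcomes = [simulate_history() for _ in range(10_000)]
print(f"range of outcomes across 10,000 alternative histories: "
      f"{min(outcomes)} to {max(outcomes)}")
```

Whatever single path we happened to observe would feel "inevitable" in hindsight, yet the spread shows how many counterfactual histories were equally available.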
Sampling bias. We tend to notice some events – especially when they are unexpected or rare – and ignore all the times those things did not happen. We notice the trader having a successful run – and attribute the run to some easily identifiable things the trader is doing – rather than all those who may be doing pretty much the same thing as the successful trader but who are not performing so well. This sampling bias makes us think necessary conditions are sufficient – if I only do Z then X will happen – when in reality they may not provide the explanatory power we are looking for. Worse, it may make us think something noticeable, but irrelevant, is the reason why. The trader who is doing well may wear lucky red shoes, but it would be a mistake to think the lucky red shoes are the cause of her success.
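The lucky-trader effect is easy to demonstrate in a few lines (a sketch only; the numbers of traders and trades are arbitrary assumptions). Every simulated trader follows an identical coin-flip strategy, yet the best record looks impressively skilful:

```python
import random

random.seed(42)  # reproducible sketch

N_TRADERS, N_TRADES = 1000, 50

# Every trader runs the *same* strategy: each trade gains or loses 1 at random.
records = [
    sum(random.choice((1, -1)) for _ in range(N_TRADES))
    for _ in range(N_TRADERS)
]

best, average = max(records), sum(records) / N_TRADERS
print(f"best trader: {best:+d}, average trader: {average:+.1f}")
# Focusing only on the winner (and her lucky red shoes) is the sampling bias:
# by construction the best record here is pure chance, not skill.
```

Studying only the winner, we would happily "explain" her edge; studying the whole sample, there is no edge to explain.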
Imagined Cause and Effect. We tend to ascribe cause to events that happen one after the other, even when there is none. We would be unlikely to see a man yawning as causing a cyclist to run a red light 20 metres away. When it comes to market analysis, however, we have a tendency to conflate "B follows A" with "A caused B", because we are explicitly looking for such explanations.
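One reason "B follows A" is so often read as "A caused B" is that trending series correlate easily by pure chance. The sketch below uses toy data, not real market series: it measures the correlation between two completely independent random walks, which is frequently far from zero:

```python
import random

random.seed(0)  # reproducible sketch

def random_walk(n=500):
    """An independent random walk: cumulative sum of Gaussian shocks."""
    level, path = 0.0, []
    for _ in range(n):
        level += random.gauss(0, 1)
        path.append(level)
    return path

def correlation(a, b):
    """Pearson correlation of two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    var_a = sum((x - ma) ** 2 for x in a)
    var_b = sum((y - mb) ** 2 for y in b)
    return cov / (var_a * var_b) ** 0.5

# Two series with no causal link whatsoever:
a, b = random_walk(), random_walk()
print(f"correlation between two independent walks: {correlation(a, b):+.2f}")
```

A naive analyst handed these two series would have no trouble writing a plausible story about how one "drove" the other.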
The Over-Importance of Now. We look for explanations of the state of something (e.g. the oil price) today, rather than recognizing that it is part of an on-going process that is not completed (and never will be). We see an event occurring in (say) the oil market, and observe that oil price has gone up over the last week, and conclude that the event is bullish for oil. However, in a year’s time the oil price may be 50% of today’s levels, and if we were to do the analysis then we may instead conclude that the event was irrelevant, or even that it was actually bearish.
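The danger of judging an ongoing process too early can also be simulated (the parameters below are arbitrary assumptions, not calibrated to the oil market). The sketch asks how often the direction of a toy price after one week agrees with its direction after a full year:

```python
import random

random.seed(7)  # reproducible sketch

def price_path(n_weeks=52, start=100.0, weekly_vol=0.03):
    """A toy price path: multiplicative Gaussian weekly shocks, no drift."""
    price, path = start, [start]
    for _ in range(n_weeks):
        price *= 1 + random.gauss(0, weekly_vol)
        path.append(price)
    return path

# In what fraction of paths does week 1's direction match the year's direction?
trials = 10_000
agree = sum(
    (p[1] > p[0]) == (p[-1] > p[0])
    for p in (price_path() for _ in range(trials))
)
rate = agree / trials
print(f"week-1 move matches year-end direction in {100 * rate:.0f}% of paths")
```

In this driftless toy world the agreement is only slightly better than a coin flip, which is the point: "the event was bullish, prices rose last week" says very little about where the process ends up.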
In summary, there are some very good reasons to be extremely skeptical of any analysis that purports to explain why something happened. We have a tendency to be susceptible to a good story, and a nice, well-written piece of analysis that seems like common sense is nothing other than a good story. Writing as an analyst, it is tempting to create such stories as well – they seem to provide an answer that adds value and can help predict the future. However, as we have seen, the value added is limited at best, and such stories can make us think we know more than we actually do.
Stepping back, we should recognize that this type of analytical approach is based on the wrong question. It reflects a world-view that essentially accepts the possibility of "market prophecy": by explaining what happened, I can interpret today's events and know what will happen tomorrow. It assumes the future is fixed, and that by reading the tea leaves a little better I will be able to figure out that future.
Instead, we need to recognize that the future is not fixed – it has not actually happened yet. Rather than a single path, the future is a mess of unknowns. What we should be asking ourselves is: what can we say about these unknowns that can help us make better decisions? Instead of pretending that we can predict with any confidence what power prices will be in 15 years, we should be asking how we can better understand the unknowns driving the future of power, so that we make better decisions today. That should be the goal of good analysis – and it is one of the reasons why we formed Optimeering.
The team at Optimeering is pleased to welcome Erlend Torgnes, who will start with us on the 1st of September. Formerly with Poyry Management Consulting, Erlend has a strong background in optimization, modelling and analysis and will make a significant contribution to Optimeering’s services in these areas.
All at Optimeering are pleased to finally move into our new offices in Akersgata in Oslo. A great space to work and meet clients, a super location in the middle of the city, and not least a very large coffee machine. Come and visit us here!