Risk registers often focus on "event-like" risks: things that might / might not happen in the future. But the biggest areas of uncertainty are usually not events.
- Suggests and examines a three-way split: event-like risks, parameter (or estimation) uncertainty and model risk.
- Speculates that risk registers commonly omit 75% of all uncertainty and that the result is "false assurance".
A proposed three-way split
Risk is more than events, so here is a basic three-way split:
- Events: An event-like risk has a probability (less than one) of happening. The event's potential impact may be fixed or variable. Events can be "natural", e.g. the risk of the coffee machine failing over the next year, or "man-made" from non-event uncertainty, e.g. the risk that interest rates increase to 5% or more over the next year; the interest rate is a state of nature (with a probability of one of "being") rather than a natural event.
- Parameters: Important variables (expenses etc) have values which are subject to estimation error. UK supermarket Tesco does not know its revenue over the next year. Even the probability of revenues being under a certain amount has to be estimated; this is not the uncertainty of the coins and dice world.
- Models: In a model a set of inputs combines to give a set of outputs. Models come in various types: a mental model of how the world works, a cashflow model of how an insurance product works or a mathematical model of how the weather system works. Uncertainty here includes whether factors are modelled at all, their effects and the relationships between factors. This article covers only some aspects of model risk.
Risk certainly includes events: "things that might happen". Such risks can have fixed or variable impacts. Consider the example of a speeding driver:
- Fixed penalty regime: A driver caught exceeding the speed limit receives a fixed monetary fine. There is a single probability and impact. This is a very rare situation in real-world risk management.
- Variable penalty regime: This more common situation seeks to make the punishment match the crime. The driver incurs a variable fine (or points on his driving licence) depending on the degree of excess speed.
- Drivers and accidents: An accident has a probability, but the impact of the accident varies significantly, from almost negligible to writing off the car, serious injury or death of one or more people.
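The fixed versus variable impact distinction can be sketched with a small Monte Carlo simulation. All probabilities and fine amounts below are purely illustrative, not drawn from any real penalty regime:

```python
import random

random.seed(1)

def simulate_annual_loss(p_event, impact_sampler, n_years=100_000):
    """Average annual loss from an event with probability p_event per year,
    with the impact drawn from impact_sampler when the event occurs."""
    total = 0.0
    for _ in range(n_years):
        if random.random() < p_event:
            total += impact_sampler()
    return total / n_years

# Fixed penalty: every caught offence costs the same fine.
fixed = simulate_annual_loss(0.05, lambda: 100.0)

# Variable penalty: the fine varies with the (random) degree of excess speed.
variable = simulate_annual_loss(0.05, lambda: random.uniform(50.0, 500.0))

print(f"fixed-impact average loss:    {fixed:.1f}")
print(f"variable-impact average loss: {variable:.1f}")
```

With a fixed impact a single probability-impact pair captures the risk; with a variable impact only the full distribution does, which is why the accident case is harder to summarise.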
Parameter (estimation) risks
Companies' results are always subject to random variation and bad (and good) things that happen "out of the blue". But usually a much bigger factor is that we simply don't know the average or "best estimate" level for variables or parameters that are important in our business model.
What appears to be random variation is at least in part due to getting estimates wrong.
Past data can help, but may not be the complete solution; there may not be enough of it or the world may simply have changed, making the data less relevant. More subtly, even using an "accurate" and data-informed average can be dangerous; what if the mix of contributors leading to the overall average changes? It can be important to understand in detail the contributors to overall results. For more on this see the front line decision makers case study.
In all of the above examples, over the long run results will be in line with largely unobservable underlying values that we must estimate. Our estimate may be "wrong", and the gap is often not best modelled as random variation, for example:
- Optimism bias: Assessors often have a tendency to be optimistic. This can mean that results are on average worse than we thought they should be.
- Exploiting weaknesses: Competitors may tighten their acceptance criteria for offering loans, leaving us with business others no longer want.
- Winner's curse: A competitive bidding process may, by construction, tend to lead to the winner having paid too much. Reinsurers should be very aware of this.
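The difference between random variation and estimation (parameter) error can be sketched in a few lines. The claim frequencies below are hypothetical numbers chosen for illustration:

```python
import random

random.seed(42)

# Hypothetical numbers: we plan assuming an average claim frequency of 8%
# per policy, but the true (unobservable) frequency is 10%.
ASSUMED_FREQ = 0.08   # our optimistic "best estimate"
TRUE_FREQ = 0.10      # the underlying parameter we got wrong
POLICIES = 10_000

# Random variation: simulated claim counts scatter around the TRUE mean...
claims = sum(1 for _ in range(POLICIES) if random.random() < TRUE_FREQ)

# ...but the gap to our plan is estimation error, not noise: it recurs
# every year, and averaging over more years will not wash it out.
expected_under_assumption = ASSUMED_FREQ * POLICIES
print(f"claims observed: {claims}, claims planned for: {expected_under_assumption:.0f}")
```

The year-on-year scatter in `claims` is random variation; the persistent shortfall against the 8% plan is the parameter risk, and it does not diversify away over time.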
Model risks
Black swans are the most famous examples of model risk. These are factors whose effect is, by definition, almost impossible to model: either we cannot imagine them at all or it seems practically impossible to associate a probability of more than zero with them – the latter was the case for the "original" black swan. Other uncertainties might also be grouped under the general "model risk" heading.
Deliberately unmodelled factors can arise either because the implications are politically or practically unacceptable (e.g. I really don't want to think about the risks of moving house) or because the effect is deemed immaterial (which may or may not prove to be the case).
Aside: similar thinking extends the Rumsfeld view of the world e.g. What Rumsfeld doesn't know he knows about Abu Ghraib.
Inadvertently unmodelled factors can exist where the overall average value is used, without the modeller really thinking further about, e.g.:
- the effect of random variation
- non-random effects such as the mix of contributors which lead to the overall average
The "front line decision makers" case study in Risk registers: who, what, why and how? shows this.
Applications to risk management and risk registers
Author and risk expert James Lam sets out an a-b-c vision for risk management:
"There are three major business applications of risk management: loss reduction, uncertainty management and performance optimisation. The combination of all three is enterprise risk management."
– James Lam, the world's first Chief Risk Officer
Risk management which focuses (only) on the downside only covers (a). An event-like vision will not deliver the true value of ERM. With its limited aim of avoiding losses, how could it?
This article targets part (b), uncertainty management, highlighting a variety of model and parameter (estimation) risks. Encompassing strategic uncertainty, this is a good starting point for the board.
The optimising stage, (c), is crucial to exploiting uncertainty and thereby delivering value. Optimisation is beyond the scope of this article, but you can find out more in the Own Risk and value Assessment.
"The product of a fully realized ERM programme is the optimisation of enterprise risk adjusted return."
– Professor Harry Panjer
Risk registers can cover (document) all three types of risk. In practice they can be dominated by operational event-like risks. This can have dire long-term consequences, giving false assurance.
How can the situation get to this stage? Quite easily – picture this.
First, the risk register is populated via a series of risk meetings attended by staff of middle seniority. There is much brainstorming of risks, but little thought about the natural sources of uncertainty (objectives, business plans and strategic uncertainty, models and their parameters).
That's the path to false assurance. Much of the uncertainty has already been missed.
Next, the risk manager is tasked with overseeing the assessment and quantification of risks, in conjunction with "risk owners", many of whom did not attend the meetings above. Having been sent the risk register template ahead of time, the hard-pressed executives are ready (if not willing) to assign probabilities and impacts, being careful not to place too many of the risks in the red zone. The spreadsheet's reporting functionality automatically produces heatmaps.
Even the risks that are captured are now summarised in a way that makes prioritisation doubtful. For a simple but detailed example see Slicing and dicing risk.
Finally, the board reporting begins. The top 10 risks are shown, heatmaps produced, actions and controls documented. Somehow it just doesn't ring true; board members note the output (towards the end of their meeting) and move on. In due course the auditors sign off the increasingly robust risk management framework. Their job is done and the risk manager is congratulated.
Within a year the company is commercially unviable and is taken over.
What happened? Little true risk management took place. Risk management wasn't owned by the CEO or board. They didn't challenge and expected too little of their risk management (and perhaps themselves). It's a story repeated up and down the country. But obviously not for you.
How bad could it get? False assurance and the three Is
Briefly, going down the well-worn path of listing risks in a register, classifying them and "quantifying" them using probability-impact risk assessment will leave you with three problems: inconsistency, incompleteness and irreversibility.
The first of these – inconsistency – is generally the silent "killer". It leads to an inability to prioritise. Consequently there is no robust basis for risk management, action or control.
The second area – incompleteness – can be serious. It can generally be diagnosed by noting the lack of strategic risks on the risk register and that all the activity is elsewhere.
The third – irreversibility – is an irritant; not only is the scenario being assessed not evident from the risk assessment, but assessing another scenario requires much rework; probability-impact gives no means of moving from a "1-in-10" scenario to a "1-in-3" scenario, for example.
Traditional audits of risk management can miss all this. The heatmaps are pretty and your risk registers are extensive. A lot of time is spent "doing risk management". Your auditors tell you about best practice, encouraging you to do more and offering to support your risk-based control self-assessments.
Let's be honest: if you're doing this it's all false assurance.
But how bad could this really be? A "back of the envelope" approach suggests that typical risk assessments in the average risk register might miss 75% of organisational uncertainty. As one possibility, suppose that:
- Only "events" are identified (i.e. parameter and model risks are not considered) and that these comprise 50% of all risks. This is incomplete risk assessment.
- Within the events identified, the risk owners' selections only cover 50% of uncertainty. This is incomplete and inconsistent risk assessment.
You’ve considered only 50% of 50% i.e. 25%. That's a 75% miss.
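The arithmetic behind this back-of-the-envelope estimate, using the two 50% assumptions above:

```python
# Back-of-the-envelope coverage calculation from the two assumptions above.
event_share = 0.5      # events as a share of all uncertainty (assumption 1)
owner_coverage = 0.5   # share of event uncertainty actually captured (assumption 2)

covered = event_share * owner_coverage   # 0.25
missed = 1.0 - covered                   # 0.75
print(f"covered: {covered:.0%}, missed: {missed:.0%}")  # covered: 25%, missed: 75%
```

Both 50% figures are, of course, stated assumptions rather than measurements; the point is that plausible gaps compound multiplicatively.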
Where next? The risk register series
User beware. Many risk experts have warned of the common flaws in risk registers. It doesn't have to be this way. The first half of the set of articles below is generally positive, starting with how five potential audiences might make better use of risk registers. The second half warns of some really dangerous flaws.
- Risk registers: who, what, why and how? : Starting with the positive we ask the basic and practical question: how can we best use risk registers?
- Risk registers: good, bad, odd and ugly use : For most organisations risk registers work best alongside other tools. This article compares them to models.
- Risk management is more than risk registers : Why we should be asking both more and less of our risk registers. Includes a range of additional tools.
- Risk registers: the claimed flaws : A list of claimed flaws, with brief comments, plus a brief look at Matthew Leitch's critique of "Risk Listing".
- Risk registers: what your auditor probably won't tell you : Is your risk register inconsistent and incomplete by design? An accident waiting to happen?
- Risk is more than events : Risk registers often focus on future things that might / might not happen. But the biggest areas of uncertainty are usually not events.
- How to miss 75% of your risks without trying : How bad could a risk register get? Could a common approach miss 75% of all risk, for example?
- Slicing and dicing risk : Shows the flaws in probability-impact risk assessment, using a simple example. Turn on your brain and turn off probability-impact.