This is an intensely practical article, looking at my experience of risk registers. I'll start with the "who" and "what" – the audience and what they want risk management to do for them. This is the crux of the issue: don't expect risk registers (or risk management) to yield value until you know the answers.
Plan of attack
- Who? Five potential internal audiences are identified. All five need good risk management, but risk register use will vary.
- What? The central core: what is the audience need and can a good risk register deliver the goods?
- Why? I'll touch on this indirectly. The "why" of risk management is obvious and the "why" of risk registers is that they fulfil the audience needs.
- How? This will, again, follow straight from the needs. When you start with audience needs, risk registers can be useful instead of flawed.
Summary of audience use of risk registers
- Central risk managers will probably use "full blown" risk registers. I give this a very light touch.
- The board can benefit from a risk register focused on strategic risks. See "Example: a risk register that works for the board".
- Executive risk owners and internal audit will like the easy approach to bringing risks and controls together. See "Case study: risk-control matrices".
- Front line core decision makers will probably make little or no use of traditional risk registers. But see the "Better estimation" case study.
The five potential audiences
 The Board
A Board will probably make little detailed use of risk registers. I suspect this may come as a disappointment to some risk types.
- A Board will focus mainly on strategic and other significant uncertainties. A Board has lots of "non-risk" responsibility and not much time.
BUT strategic risks are usually under-managed and cause relatively more damage. See Sinking, fast and slow. Studies show the importance of strategic risk:
|Research||Coverage||Strategic (%)||Operational (%)||Financial (%)|
|IMPACT study 2009||Public companies: negative events||64||35||1|
|Oliver Wyman 2000||Biggest share price falls||61||33||6|
Though the majority of companies' efforts in enterprise risk management programs focus on financial risks, the bulk of risk exposure lies in strategic and operational risks. Impact study, 2009: first major finding
- AND a Board has increasing risk responsibilities: risk management and internal control, the LTVS and (for insurers) the ORSA all fall to the Board.
- AND FINALLY a packed risk register can trip up a Board:
When I started at the Government Actuary's Department a few years ago, we had a risk register running to 50 pages.
I looked for the top 7 risks and 3 were missing. When I mentioned this to a group of public sector risk managers I got the answer "you've got 4 out of 7 – you're lucky to have that many". Risk registers that work at board level – Trevor Llanwarne, former Government Actuary
Heatmaps don't really work at the strategic level. They try to get you to allocate a likelihood and impact to each risk. But for every risk there's a whole range of impacts... Anyone using a heat map in this way is taking a view ... and very rarely are they transparent in doing so. We need to have a simpler analysis which can be quantified. Source: Trevor Llanwarne, former Government Actuary: Risk registers that work at board level
So there you have it: heatmaps don't work for boards, whose main risk focus is strategic uncertainty. If your risk register forces you down the probability-impact route for each risk, that piece won't work for your board: you and your risk reporting will lose credibility. So here are nine alternatives to probability-impact.
Conclusion: Boards potentially need a lot of risk management support but little of this is likely to come from risk registers. And we really mustn't mislead.
 Front line core decision makers
Front line staff will probably not need risk registers for routine core decisions. Another disappointment for some?! Here's the logic:
- Assessment expertise and decision support will be there already. Generally the lack of sophistication in risk registers won't contribute much.
- BUT risk management is still of course needed. This will often be part of individuals' core expertise and responsibilities.
- SOMETIMES central support is welcomed. Perhaps something is new, or a decision support tool should be enhanced.
- SPECIAL circumstances may make a register worthwhile. A deal may demand enhanced governance. Documented risk assessments may be required.
- FINALLY day-to-day checklists are useful. But these are usually "yes/no" reminders, not risk registers.
Conclusion: Front line decision makers usually need more sophistication than risk registers can offer. Focus on different risk management tools.
 Executive risk owners
What's an executive risk owner? It's a senior executive, probably heading up a function, who has overall responsibility for a "risk type". By risk type I mean "credit risk", "insurance risk" etc. Lower level risks are often grouped in this way to improve manageability and clarify overall responsibilities. An example may help.
Four executives and the areas of risk they cover
|Senior executive||Risk types||Comment|
|Finance director||Market risk||For example uncertainty over interest rates and inflation. The FD may also oversee the management of liquidity risk.|
|Investment director||Credit risk||This is in respect of bond and other investments. The actuary will probably cover reinsurer counterparty risk.|
|Chief actuary||Insurance risk||Will cover solvency (even if not "identified" as a risk). May also cover uncertainty over pension scheme liabilities.|
|Operations director||Operational risk||Covered to the extent that they are not the direct responsibility of the areas listed above.|
There will be overlaps, perhaps including:
- Liquidity risk may be jointly managed by the FD and actuary as this is partly a function of when the insurance benefits are due to be paid.
- Investment strategy and its relationship to liabilities will be divided between the first three executives above.
There is a lot of scope for terminology confusion. What about investment risk? Asset and liability risk? This is largely a matter of terminology and of ensuring there are no gaps. More interesting is that there is no Strategy Director: strategic risk, it seems, is a part-time responsibility of the board. Is that a risk?
What those executives do risk-wise
Executives assigned risk type(s) will undertake a regular risk review, just as time management gurus suggest a monthly review. This is not a compliance exercise, but sits naturally alongside their responsibility for ensuring quality decision making and efficient operations in their areas.
A risk register is partly about not having to remember all the things you are responsible for and partly about not reinventing the wheel. But a good risk register can do much more than this, while remaining simple – see the case study.
The monthly review may incorporate:
- Assessment of whether risk (and risks) are being well managed. For more on this see the section "Case study: risk-control matrices" below.
- Whether risk in a particular area has increased or reduced. This is a big area beyond the scope of this article. As ever, avoid probability-impact risk assessment.
- Progress on actions outstanding as at the last review, including bringing particular risks back "within limits".
- Similarly progress on controls that need (or pass a cost-benefit case for) improvement.
Conclusion: These executives are reviewing risk and managing it on a monthly and day-to-day basis. They deserve the very best support possible.
 Central risk managers and the risk function (if any)
This section takes its lead from financial services, where a central risk function is a regulatory requirement. Why? Because safety of customers comes ahead of that of shareholders and regulators are not prepared to rely only on front line decision makers (or indeed boards alone) to ensure these organisations retain sufficient solvency and, especially in the case of banks, liquidity. Where it exists, what roles might a central risk function play?
Collaboration. Some of the functions below (e.g. balance sheet optimisation) may be fulfilled in conjunction with other areas.
In all private sector organisations, and potentially beyond:
- Integration: Good risk teams are deeply embedded in helping to manage and exploit uncertainty for revenue-generating units (but their role is more than this).
- Portfolio approach: Uncertainty is managed at a macro as well as a lower level, adjusting for the dependence and diversification between risks.
- Balance sheet optimisation: For private sector companies the debt-equity mix and risk-adjusted returns can be optimised, after allowing for default risk.
- Impact mitigation techniques: The balance sheet can also be protected using a variety of techniques, including (re)insurance and hedging.
- Operational controls: Support for the development and support of an intelligent internal control system, which goes beyond traditional financial controls.
- Technical support: Many risk-related areas (e.g. assessment) can be technically demanding. Boards and front line executives may therefore require support.
- Careful communication: Too often risk management starts with esoteric risks. It should instead understand key stakeholders' needs (so we started with WHO).
- Realistic champion: The central team's leader should play this role, seeking all the commercial and other benefits that superior risk management can deliver.
Especially in financial services:
- Solvency and liquidity management: As noted above, this is a particularly important area for financial services companies.
- Internal model ownership: In the insurance world of Solvency II, the risk function has responsibility for the internal model (where there is one).
Conclusion: To keep track of all of the above, to support boards, risk owners and the front line the central team will no doubt use a detailed risk register.
 Internal audit
It will probably come as little surprise to hear that internal auditors seem to like and make use of risk registers. Big four external auditors seem to like them even more (a major firm authored the original COSO framework and they are ongoing COSO sponsors; rearranging "money for old rope" is always attractive).
Auditors will probably look at the risk register to see documentary evidence that risks are being managed, the controls in place and the risk assessments.
- The typical auditor focus is still internal control. Generally the lack of sophistication in risk registers won't contribute much to managing important risks.
- Risk-based audits can be quite naive: look at the gross risk and the total mitigation claimed, audit the controls, and pass no comment on the risk assessment nonsense.
- Typical spreadsheet risk registers are auditor unfriendly. Not so much a control framework as a mass of text in the last column.
Conclusion: Both internal and external auditors will probably make a lot of use of risk registers. Using risk-control matrices could make their lives easier.
Example: a risk register that works for the board
Rightly or wrongly, some risk registers have hundreds of risks. An effective board risk register will be much smaller.
A simple starting point
The risk register diagram below comes from The Orange Book, a UK government publication offering public sector guidance on risk and uncertainty (2004, but never withdrawn). How can we take this simple starting point and turn it into something that supports useful board discussion, encouraging action where appropriate?
The good bits
- Objectives. If they are reasonably demanding, objectives can be a good starting point; "risks" are reasons the objective might not be met.
- Definition of risk. The incorporation of cause suggests controls.
- Emphasis on controls. This often seems an afterthought in registers!
The bad bits
A board strategic risk register
|Strategic risk||Indicators||Net assets||Earnings||Value||Priority/RAG||Movement||Controls||Actions||Date||Owner|
|Customers: compelling offering||#||%||%||%||#||#||#||#||#||#|
|Customers: centre of decision making||#||%||%||%||#||#||#||#||#||#|
|Transformation: scale, focus, wider impacts||#||%||%||%||#||#||#||#||#||#|
|Performance: poor implementation of strategy||#||%||%||%||#||#||#||#||#||#|
|Performance: excessive short term focus||#||%||%||%||#||#||#||#||#||#|
|Financial: current and turnaround strategy||#||%||%||%||#||#||#||#||#||#|
|Financial: returns in overseas markets||#||%||%||%||#||#||#||#||#||#|
|Competition: budget, premium, online||#||%||%||%||#||#||#||#||#||#|
|Products: supply chain and relationships||#||%||%||%||#||#||#||#||#||#|
|Brand: ethics, trust and transparency||#||%||%||%||#||#||#||#||#||#|
|Technology: deliver customer proposition||#||%||%||%||#||#||#||#||#||#|
|People: attract, develop, retain, motivate||#||%||%||%||#||#||#||#||#||#|
|Bank: capital, liquidity, interchange fee caps||#||%||%||%||#||#||#||#||#||#|
Key to table items
- Strategic risk: A brief description (abbreviated from the "causal" approach above). Items based on tweaks to the Tesco Strategic Report (2015).
- Indicators: A measurable proxy for things getting better / worse for the relevant strategic risk.
- 1-in-10 stress: The effect on each of three items, of a 1-in-10 scenario. The effect is determined by the company's internal model. The three items are:
- Net assets: i.e. the value of assets less liabilities on the balance sheet
- Earnings: the next set of annual profits (no debate on definitions here).
- Value: modelled corporate value, an internal proxy for market value.
1-in-10 stress: A big challenge for companies outside financial services. The FRC's 2014 Longer Term Viability Statement requirements may imply such models are needed. But even dropping the three stresses in favour of the board's subjective priorities would be better than probability-impact assessments.
- Priority/RAG: The current priority assigned by the board, whether based on the stresses or not. Red-amber-green status indicates satisfaction regarding the item.
- Movement: Whether priority (or some other measure) has increased or decreased since the last board review.
- Controls: These are the ongoing measures put in place to ensure a better rather than worse result. Not just operational controls.
- Actions: The "to be" measures, or other enhancements to the above.
- Date: When the actions are to be delivered by.
- Owner: Board member with oversight for delivery against objective and risk.
Using the stresses
The first thing to note is that the stresses are a big improvement on probability-impact risk analysis (in fact even dropping the stress columns and "quantifying" by the board's subjective prioritisation is likely to be better than that!). In summary, the probability-impact method misleads because:
- Most strategic uncertainty has a probability of 1 e.g. there will be a customer offering of some quality. Risk experts agree with this.
- Different people will select different probability-impact ratings: assessors may simply select different parts of the "risk curve". See Slicing and dicing risk.
Next, quantifying the 1-in-10 stresses may not be as difficult as you might think.
What does a 1-in-10 stress look like? Let's take the "compelling offer" – the first "risk" in the table above:
- Holding other items constant, as the customer offer improves revenue increases – and vice versa.
- So, what is the 1-in-10 "worst" revenue result, from the customer offering effect only?
- Yes, that's subjective, but no more so than many other areas of business. We're making our subjectivity explicit.
- The above can be easily modelled in terms of the first order effects on the current year's profits.
- Next there may be related effects; with lower volumes will suppliers' terms worsen?
- The model will show the effect on profits is more than the proportionate effect on revenue:
- Fixed expenses will have to be spread across a smaller revenue base, so profits fall more than proportionately.
- The presence of debt will similarly mean that the effect of revenue reductions on profits is exacerbated.
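The gearing effect in the last two bullets can be sketched numerically. All figures below are hypothetical (60% variable costs, fixed costs of 25 and debt interest of 5 per 100 of base revenue); the point is only the mechanism:

```python
# A minimal sketch, with hypothetical figures, of why a revenue stress hits
# profits more than proportionately: fixed costs and debt interest do not
# shrink with revenue.
def profit(revenue, variable_cost_ratio=0.60, fixed_costs=25.0, interest=5.0):
    """Profit = revenue less variable costs, fixed costs and debt interest."""
    return revenue * (1 - variable_cost_ratio) - fixed_costs - interest

base_revenue = 100.0
stressed_revenue = base_revenue * 0.90      # assumed 1-in-10 effect: revenue down 10%

base_profit = profit(base_revenue)          # 40 - 25 - 5 = 10
stressed_profit = profit(stressed_revenue)  # 36 - 25 - 5 = 6

revenue_fall = 1 - stressed_revenue / base_revenue  # a 10% fall in revenue...
profit_fall = 1 - stressed_profit / base_profit     # ...produces a 40% fall in profit
```

With these (made-up) numbers a 10% revenue fall produces a 40% profit fall; the second-order effects (worse supplier terms etc.) would then be layered on top.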
EY in Finding opportunities in the new regulatory challenge suggests that the Longer Term Viability Statement will require this sort of quantification.
Case study: better estimation for front line decision makers
A new analytics focus
Insurers typically price their business, such as life insurance, by using models known as profit tests. These make a number of assumptions typically based on averages: average commission rate, average expenses, average claim rates etc. But of course behind those averages lies variability.
Starting from about 2003 leading insurers selling life insurance through financial advisers and other agents started to build analytics capabilities to analyse business at a lower level. The idea with such projects – with names such as "risk and retail pricing" – was that the current position might not be optimal; perhaps there could be a profitable trade of quality for volume. Where a reinsurer was involved it might be possible to crystallise improvement on day 1.
A particular focus was the sales agent. It can be useful to regard the agent as the "customer" – they can certainly affect results dramatically. The careful data collection by insurers enabled databases to be populated. In turn the databases could generate summary tables such as:
|Agent #||Agent name||Sales (£)||Sales (cases)||Comm%||NPW%||CFI%||LapseYear1%||Smoker%||HealthDisc%||SEC|
|–||Average||150,000||500||#||30||#||#||#||#||#|
|#||Steady Joe||#||#||#||18||0.3||#||#||#||#|
|123456||Click to win||15,000,000||75,000||210||3.5||20||14||15||11||1.8|
When is a risk register not a risk register?
What's going on in the above table? The first row below the header – "average" – is the average result across all agents. The rows below that are results aggregated at the individual sales agent level. It is quite possible to regard these rows as "risks": the % columns can be compared to the average row to observe the departure from average. Historically sales personnel in many industries have been rewarded for the level of their sales. It is no different in the risky world of insurance.
Classifying risky agents by various metrics: Columns 6+ can all be regarded as measures of risk; they will all affect profitability and can be modelled.
Let's look at the columns:
- Agent #: is just a code to enable agent identification. Perhaps based on a code assigned by the insurer, or the agent's regulatory number.
- Agent name: is self-explanatory. Some agents have been known to trade under more than one name, sometimes not for good purposes.
- Sales £ / cases: The annual amount of sales (annual premiums paid by customers) and the number of cases. The average case size is £300 p.a. (= 150,000 / 500).
- Comm %: Agent commission as a percentage of annual premium. "Click to win" provides 100 times more business than average and gets a "commission uplift".
- NPW %: Percentage of cases applying but not going ahead for whatever reason (NPW = "not proceeded with" i.e. not paying the first premium).
- CFI %: Percentage of those paying the first premium who cancel in the month-long "cooling off" period, to get a full refund. (CFI = "cancelled from inception").
- LapseYear1 %: Percentage of cases that have stopped paying premiums ("lapsed") by the end of year 1.
- Smoker %: Percentage of applicants who declare themselves to be smokers.
- HealthDisc %: A measure of the level of health-related disclosures on the insurance application form.
- SEC: A measure of the customer's socio-economic class (SEC) or grouping. There are various standards, used beyond the insurance industry.
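The departure-from-average reading of the table can be sketched in a few lines of Python. The average figures for CFI% and LapseYear1% below are hypothetical (only the NPW% average is implied by the text), and the 50% tolerance is an arbitrary illustration:

```python
# Flag metrics that sit well away from the all-agent average, in either
# direction. Average CFI% and LapseYear1% are hypothetical illustrations.
average = {"NPW%": 30.0, "CFI%": 5.0, "LapseYear1%": 8.0}

agents = {
    "Click to win": {"NPW%": 3.5, "CFI%": 20.0, "LapseYear1%": 14.0},
    "Steady Joe":   {"NPW%": 18.0, "CFI%": 0.3, "LapseYear1%": 6.0},
}

def flags(metrics, average, tolerance=0.5):
    """Return the metrics more than `tolerance` (here 50%) away from average."""
    return {m: v for m, v in metrics.items()
            if abs(v - average[m]) / average[m] > tolerance}
```

Note that the flag is deliberately two-sided: Steady Joe's unusually low CFI% would be flagged just as Click to win's high one is. A departure from average is a prompt for investigation, not automatically bad news.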
How does the front line decision maker use this?
Let's take a look at the two agents: "Steady Joe" and "Click to win".
Steady Joe is a traditional advisory firm: relatively small and low profile. The sales numbers and average case size are somewhat better than average, though not astounding; this does not seem like an agent which is putting pressure on clients to buy. The various metrics tell a consistently "solid" story:
- NPW%: 82% of applicants go ahead, more than the average of 70%.
- CFI%: A tiny proportion (0.3%) cancel in the first month; few regret their decision to go ahead.
- LapseYear1%: A significantly lower proportion than average give up their policy in the first year.
- Smoker%: A lower smoker percentage than average, consistent with the SEC results below.
- HealthDisc%: Interestingly there are more health disclosures than average; this could be because applicants and adviser are being careful.
- SEC: Steady Joe's clients have a higher socio-economic class than average (a likely predictor of lower mortality).
Click to win is an online specialist, paying for Google and other leads. Its call centre helps its customers to fill in application forms. It has an optimised new business process which rapidly pushes through business. The volumes are stunningly high. It has a national profile and its CEO is regularly quoted in the press. But:
- NPW%: The very (suspiciously?) low rate indicates almost everyone applying goes ahead. No one else comes close to this percentage.
- CFI%: The low NPW% is more than balanced by the month 1 cancellation rate; 1-in-5 clients change their minds (over-selling?)
- LapseYear1%: Even of those remaining, a greater proportion than average drop their policies in the first year.
- Smoker%: A lower smoker percentage than average, but inconsistent with the SEC results below. Suspicious.
- HealthDisc%: Fewer health disclosures than average. Again inconsistent with the SEC results below. Again suspicious.
- SEC: This measure, driven by a combination of occupation and postcode, tells the true story: a poorer class of lives.
Decision time. The table above can make a contribution to optimisation. Do models show the business to be better off with or without "Click to win"?
Does this happen in real life?
Yes, dashboards of this type are in operation at various insurers and reinsurers, and yes, they can identify poor quality business. Around 2006 I led a team which identified, ahead of time, an agent which sank with bad debts. The agent sold under two brand names. Worse, Ageas took a £6m hit on Click life.
Why doesn't useful stuff like this appear on my risk register?
It should! But perhaps not in the above format. Both the (insurance) risk owner and central risk function might have a risk register covering:
- Pricing estimation uncertainty – general: Uncertainty over important pricing parameters, due to (e.g.) lack of data or change in mix of contributors.
- Pricing estimation uncertainty – specific: As above, but split into individual areas of uncertainty.
- Uncertainty over NPW rate due to ...
- Uncertainty over CFI rate due to ...
- Uncertainty over LapseYear1 rate due to ...
- Uncertainty over Smoker rate due to ...
- Uncertainty over HealthDisc rate due to ...
- Uncertainty over SEC mix due to ...
Case study: risk-control matrices that work for risk owners
The risk-control matrix: a popular tool
I have yet to find anyone responsible for a major area of risk who does not find risk-control matrices useful. This case study looks at risk management from the point of view of a chief actuary carrying out a monthly review.
Other risk-related work. Among many other areas of responsibility, the chief actuary oversees the calculation of the insurer's solvency position, usually with support from finance. It is evident that this is not carried out using a risk register! Nonetheless, the risk register is a helpful administrative support tool.
Notably, the risk-control matrix below cuts to the chase: what are the uncertainties and how are they managed? Quantification is done elsewhere, in the risk register and using other tools. But the risk register summarises so much information in so little space. Does your risk register produce one of these?
Here's how the risk-control matrix might be used:
- Across the top is a list of risks and uncertainty in the chief actuary's area. He knows these well, of course.
- As with other functional areas, these risks are divided according to whether they are operational or not, often reflecting an internal/external split.
- In this case study the actuary has split his operational risks further: those to do with acquisition of new business ("underwriting") and "ongoing" work.
- The wide column on the left contains a brief description of (all) the controls related to the actuary's area – see below the graphic for more.
- Like the risks, the controls are split by broad type. This sort of thinking leads to control frameworks and efficient cross-company risk management.
- The main body of the table contains a 1 if the control applies to the risk and a 0 (or empty cell) otherwise.
How can the actuary use the risk-control set up?
- Risk type versus control type. There may be some obvious consistency checks e.g. traditional controls are used mainly for operational risks (*).
- Control count for each risk. Automatically produced in row 1 – "control count". For each risk, does the absolute and relative level of control make sense?
- Risk count for each control. Automatically produced in column 1 – "row count". Is the absolute and relative use for each control sensible?
- Consistent control deployment. Could a control deployed against one risk be deployed against another? Sometimes non-deployment is an oversight.
(*) This would be even more marked if the various individual sign offs under "governance" were instead allocated to "traditional controls".
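The counting checks above fall straight out of a 0/1 matrix. A minimal sketch, with made-up risk and control names:

```python
# Rows are controls, columns are risks; 1 means the control is deployed
# against the risk. All names are purely illustrative.
risks = ["Pricing basis", "Model error", "Data quality"]
controls = {
    "Peer review":     [1, 1, 1],
    "Reconciliations": [0, 0, 1],
    "Sign-off":        [1, 1, 0],
}

# Control count per risk (the "control count" row)...
control_count = {risk: sum(row[i] for row in controls.values())
                 for i, risk in enumerate(risks)}
# ...and risk count per control (the "row count" column).
risk_count = {control: sum(row) for control, row in controls.items()}
```

In a spreadsheet these are simply SUM formulas along each row and column; the point is that a clean 0/1 body makes the consistency checks automatic.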
Can this quick analysis be taken further?
Of course! The tabular information above is clear, but limited; it's a restructured subset of a good risk register (a bad risk register probably can't produce this table). Here are some ways the presentation above could be extended without going into "full risk register mode". We could show more information:
- ... in the table body e.g. colour coding of cells, or values other than 0 and 1 to convey more information (if we change the sum to a count of positive values).
- ... on risks e.g. we could show the source (internal / external) or some measure of priority (not probability-impact!) under one of the grey shaded rows.
- ... on controls e.g. we could show the quality of design, implementation or overall effectiveness of controls to the left of the grey column.
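The first extension changes the arithmetic slightly: once cells carry an effectiveness score (say 0-3) rather than 0/1, the "control count" becomes a count of positive cells, not a sum. A sketch with hypothetical scores:

```python
# Cells now hold a hypothetical 0-3 effectiveness score rather than 0/1.
scored = {
    "Peer review": [3, 2, 0],
    "Sign-off":    [1, 0, 2],
}
n_risks = 3

# Summing would double-count strong controls; counting positive cells keeps
# the original "how many controls touch this risk?" meaning, while the total
# score adds a separate view of overall control strength per risk.
control_count = [sum(1 for row in scored.values() if row[i] > 0)
                 for i in range(n_risks)]
total_score = [sum(row[i] for row in scored.values()) for i in range(n_risks)]
```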
Where next? The risk register series
User beware. Many risk experts have warned of the common flaws in risk registers. It doesn't have to be this way. The first half of the set of articles below is generally positive, starting with how five potential audiences might make better use of risk registers. The second half warns of some really dangerous flaws.
- Risk registers: who, what, why and how? : Starting with the positive we ask the basic and practical question: how can we best use risk registers?
- Risk registers: good, bad, odd and ugly use : For most organisations risk registers work best alongside other tools. This article compares them to models.
- Risk management is more than risk registers : Why we should be asking both more and less of our risk registers. Includes a range of additional tools.
- Risk registers: the claimed flaws : A list of claimed flaws, with brief comments, plus a brief look at Matthew Leitch's critique of "Risk Listing".
- Risk registers: what your auditor probably won't tell you : Is your risk register inconsistent and incomplete by design? An accident waiting to happen?
- Risk is more than events : Risk registers often focus on future things that might / might not happen. But the biggest areas of uncertainty are not usually events.
- How to miss 75% of your risks without trying : How bad could a risk register get? Could a common approach miss 75% of all risk, for example?
- Slicing and dicing risk : Shows the flaws in probability-impact risk assessment, using a simple example. Turn on your brain and turn off probability-impact.