Can We Trust the Experts during Risk Assessments?

Being a risk manager requires us to engage extensively with subject-matter experts (SMEs) to understand concepts, processes, and risks. We use this information to see how the SMEs’ activities fit into the bigger picture of the organization.

Without their input, it’s almost impossible to determine the appropriate risk response unless you have extensive amounts of data, quantitative or qualitative, at your disposal.

A common, introductory method for soliciting input during the risk assessment and analysis processes is to survey a group of SMEs, either electronically or during a workshop, and ask them to rate the impact, likelihood, and potentially other attributes of a risk on a scale, typically 1-5. (As a side note, this is not my preferred method…)

However, you don’t need me to tell you that opinions and experiences vary widely in all areas of life, and risk assessments are no different.

Although you have a group of SMEs on a given topic providing input on a risk, each of these individuals has their own unique experiences, role, and information. Some score a risk at a 2 while others give that same risk a 3, 4, or even a 5…each of these participants is an “expert” in a given area, but their interpretations vary widely.

A situation like this brings up an important question – how do we account for these differences?

(Now I’m going to be a bit blunt for a moment; if you do the following, I apologize, but this is something that really bugs me and can be really dangerous to the organization.)

In situations where SMEs provide a range of scores on a particular risk, many organizations will take the easy road and average out the scores.

Let’s say you have 5 SMEs who provide a range of scores like 1, 2, 3, 5, and 5.

Instead of making the effort to understand and account for this disparity, the risk manager will just take the average and say the impact (…or likelihood) of risk X is a 3.2. This rating bears little to no resemblance to reality, and this [extremely flawed] information will be given to executives and other decision-makers.
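
To make the problem concrete, here is a quick Python sketch using the five illustrative scores above. It shows how reporting only the average papers over nearly two full points of disagreement:

```python
from statistics import mean, stdev

# Hypothetical impact scores (1-5 scale) from five SMEs for the same risk
scores = [1, 2, 3, 5, 5]

avg = mean(scores)      # 3.2 -- the "easy road" single number
spread = stdev(scores)  # ~1.79 -- a wide disagreement among the experts

print(f"Average impact: {avg:.1f}")
print(f"Std. deviation: {spread:.2f}")
print(f"Range: {min(scores)}-{max(scores)}")

# The 3.2 "average" matches none of the experts' actual views:
# two SMEs see a minor risk (1-2), while two see a severe one (5).
# Reporting the spread alongside (or instead of) the mean at least
# flags the disagreement for decision-makers.
```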

How should you handle this situation when you have such disparity in the risk ratings?

Calibration helps you figure out what’s driving differences in assessments and take action to address them.

Calibration is a term usually associated with instruments for measuring weight, speed, or some other metric: you refine or fine-tune a measuring instrument to ensure its accuracy.

Take this embarrassing situation: if you’ve ever been pulled over and issued a ticket for speeding, one question you may ask is whether the officer’s radar gun was properly calibrated.

To get an accurate picture of a particular risk or issue, risk managers must “calibrate” the inputs they receive, regardless of how experienced and insightful the SMEs are, because as Douglas Hubbard explains in his book The Failure of Risk Management:

“No matter how much experience we accumulate and no matter how intelligent we are, we seem to be very inconsistent in our estimates and opinions.”

One method for calibrating responses, especially for companies that are just starting out or don’t have robust data analysis capabilities, is to ask each SME whose rating diverges what information he or she used to arrive at that conclusion. Then give all SMEs in the group the same information and have them rescore, taking care to avoid groupthink, conformity, and other biases that can negatively impact decision-making.
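
As a rough illustration of what this looks like in practice, here is a minimal Python sketch of a two-round rescoring exercise. The SME names and scores are hypothetical, and the standard deviation is just one simple way to measure disagreement:

```python
from statistics import stdev

# Hypothetical two-round calibration exercise: each SME scores the risk,
# rationales and supporting information are shared, then everyone rescores.
# Names and all scores below are illustrative.
round_1 = {"sme_a": 1, "sme_b": 2, "sme_c": 3, "sme_d": 5, "sme_e": 5}
round_2 = {"sme_a": 3, "sme_b": 3, "sme_c": 3, "sme_d": 4, "sme_e": 4}

def spread(scores: dict) -> float:
    """Sample standard deviation as a simple disagreement measure."""
    return stdev(scores.values())

print(f"Round 1 spread: {spread(round_1):.2f}")  # ~1.79
print(f"Round 2 spread: {spread(round_2):.2f}")  # ~0.55
```

If the spread shrinks after the information is shared, the original disagreement was likely driven by differences in information. If it persists, the SMEs genuinely interpret the risk differently, and the follow-up discussion (not an average) is where the insight lives.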

The key to making this work is to have a team of what Professor Philip Tetlock of the Wharton School at the University of Pennsylvania calls “belief updaters,” or people willing to change their minds when given new information. Otherwise, this will make for a very difficult discussion because you will not get agreement on the risk rating.

If your organization has data and the capacity to understand it, there are other quantitative-based calibration methods you can employ.

Hubbard has written about the topic of calibration mostly from a quantitative perspective in his books and in this webinar from 2020’s Risk Awareness Week (RAW) event.

While many thought leaders in risk management, including Hubbard, extol the benefits of quantitative over qualitative assessment, no method is foolproof, including data-driven models.

The best example of this is a method we’ve discussed before called Monte Carlo simulation. The inputs for these models may themselves come from subjective estimates, but as Hubbard explains in his RAW 2020 webinar, Monte Carlo simulations appear to measurably improve SMEs’ estimates and decisions.
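
To give a flavor of what such a model involves, here is a minimal Monte Carlo sketch in Python. This is an illustration with assumed inputs, not Hubbard’s own model: we suppose an SME estimates a 20% chance of the risk event occurring in a year and gives a 90% confidence interval of $50k-$500k for the loss if it does occur.

```python
import math
import random

random.seed(42)

TRIALS = 100_000
P_EVENT = 0.20                    # assumed annual probability of the event
LOW, HIGH = 50_000.0, 500_000.0   # assumed 5th/95th percentile loss estimates

# Fit a lognormal to the 90% CI: the interval spans +/-1.645 standard
# deviations of ln(loss).
mu = (math.log(LOW) + math.log(HIGH)) / 2
sigma = (math.log(HIGH) - math.log(LOW)) / (2 * 1.645)

total = 0.0
exceed_250k = 0
for _ in range(TRIALS):
    if random.random() < P_EVENT:  # does the event occur this simulated year?
        loss = random.lognormvariate(mu, sigma)
        total += loss
        if loss > 250_000:
            exceed_250k += 1

print(f"Expected annual loss: ${total / TRIALS:,.0f}")
print(f"P(loss > $250k in a year): {exceed_250k / TRIALS:.1%}")
```

Note that the inputs here are still subjective estimates; the simulation’s value is in combining them consistently and exposing the range of outcomes, including the tail, rather than collapsing everything into a single averaged score.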

A word of caution about quantitative modeling: many risk thought leaders are very adamant about avoiding qualitative assessments at all costs. However, as I discuss in this article, most companies I’m familiar with and/or work with are not ready for this level of risk assessment and analysis.

For example, if you’re new to exercise, you wouldn’t jump right into an intense, athlete-level workout but instead ease your way up to this level.

Risk assessments are the same way: you won’t want to use quantitative methods if your company does not have the structure, processes, and skill sets to gather, maintain, and monitor the necessary data. Doing so could end up being more destructive than simply taking an average of disparate scores.

What methods do you use to account for differences in risk assessment by subject-matter experts?

If you are able, we would love to hear your thoughts on this important topic, so please feel free to leave a comment below or join the conversation on LinkedIn.

Also, if your company is struggling to get accurate information to properly understand threats and opportunities to achieving strategic objectives, please don’t hesitate to contact us to discuss how your risk management processes can be harnessed to give your company a strategic advantage.
