Several leading project risk management standards and guidelines suggest that individual risks can be ranked using a risk-scoring scheme that combines probability and impact, using the definitions shown in the table on the right.

The Risk Score for each risk is produced by multiplying P by I; this score is then used to rank the risks. A risk with medium probability and high impact has a Risk Score of 0.5 x 0.4 = 0.20, while the Risk Score for a low-probability/very-high-impact risk is 0.3 x 0.8 = 0.24. So the second risk ranks higher than the first.
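The calculation above can be sketched in a few lines of Python. The rating values are the ones quoted in the text; the rating labels and function name are my own, purely for illustration:

```python
# P and I rating values as quoted in the text (dimensionless indices, not
# probabilities or costs). Labels VLO..VHI are illustrative.
P = {"VLO": 0.1, "LO": 0.3, "MED": 0.5, "HI": 0.7, "VHI": 0.9}
I = {"VLO": 0.05, "LO": 0.1, "MED": 0.2, "HI": 0.4, "VHI": 0.8}

def risk_score(prob_rating: str, impact_rating: str) -> float:
    """P-I Score: the product of the two rating values."""
    return P[prob_rating] * I[impact_rating]

# The two example risks from the text:
risk_1 = risk_score("MED", "HI")   # 0.5 x 0.4 = 0.20
risk_2 = risk_score("LO", "VHI")   # 0.3 x 0.8 = 0.24

# Rank from highest score to lowest: risk_2 outranks risk_1.
ranked = sorted([("Risk 1", risk_1), ("Risk 2", risk_2)],
                key=lambda r: r[1], reverse=True)
```

Note that the function returns an index for ranking only; as the next paragraphs explain, the product has no units and cannot be converted back into a probability or a cost.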

But have you ever wondered where these numbers come from? This particular risk-scoring scheme was developed by a small group of risk consultants in the mid-1990s to bring some consistency to our practice. The numbers were derived empirically after some trial and error, but the thinking was as follows:

Both scales are nondimensional: the numbers have no units. In the probability scale, for example, 0.1 does not mean 10% or 1:10; it is just a numerical indicator of the VLO probability rating. Similarly, 0.8 for impact does not "mean" anything; it is just a number referring to a VHI impact rating.

This means that you can’t multiply the P score and the I score to get a P-I Score that can be converted to days or dollars or anything else. The product is simply a rating scale that takes account of two dimensions to give a single common index that allows risks to be ranked against each other.

Both scales lie between 0 and 1, which is neat. Also, no two combinations of ratings produce the same P-I Score, so ranking is unambiguous.
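The no-ties property is easy to check by brute force. This sketch computes all 25 possible P-I Scores from the scales quoted in the text and confirms they are distinct:

```python
from itertools import product

p_values = [0.1, 0.3, 0.5, 0.7, 0.9]     # linear probability scale
i_values = [0.05, 0.1, 0.2, 0.4, 0.8]    # nonlinear impact scale

# All 25 cells of the 5x5 probability/impact matrix. Rounding strips
# floating-point noise (e.g. 0.1 * 0.2 -> 0.020000000000000004).
scores = [round(p * i, 4) for p, i in product(p_values, i_values)]

# Every cell gets a unique score, so every cell gets a unique rank.
all_distinct = len(set(scores)) == len(scores)
```

Running this confirms that the 25 products range from 0.005 (VLO/VLO) to 0.72 (VHI/VHI) with no duplicates.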

The probability scale is linear (0.1/0.3/0.5/0.7/0.9) because that is how most people think about probability for risks in projects, in linear blocks such as <20%, 20-40%, 40-60%, 60-80%, >80%. (It is different in health and safety, where probability is usually logarithmic, to account for extremely unlikely events.)

The impact scale is non-linear (0.05/0.1/0.2/0.4/0.8) because when we are ranking risks, impact is more important than probability. We can demonstrate this by considering a VHI-probability/VLO-impact risk (Risk A) and a VLO-probability/VHI-impact risk (Risk B).

It is intuitively clear that Risk B (tiny chance of a disaster) is more important than Risk A (near-certainty of an insignificant impact). This is reflected when you multiply the P score (linear) by the I score (nonlinear).
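Plugging the quoted scale values into this comparison shows the effect directly (the risk names follow the text; the variable names are mine):

```python
# Risk A: VHI probability (0.9) x VLO impact (0.05) - near-certain, trivial.
# Risk B: VLO probability (0.1) x VHI impact (0.8) - tiny chance of disaster.
score_a = 0.9 * 0.05   # 0.045
score_b = 0.1 * 0.8    # 0.08

# The nonlinear impact scale pulls the high-impact risk to the top:
b_outranks_a = score_b > score_a
```

Had the impact scale been linear like the probability scale (0.1 to 0.9), the two risks would have scored identically and the intuition would be lost.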

The product is weighted by impact. The nonlinear I score means that HI and VHI impact risks are always overweighted to give a higher product, whereas LO and VLO impact risks are underweighted to give a smaller product.

The risk-scoring scheme shown above embodies these principles and works empirically, but it is only one possible example of such a scoring scheme.

For example, there is no rule that says the I score should double each time: it could equally increase in steps of x3 or x10, or any other factor. The group of risk consultants who developed these scales experimented with many alternatives and concluded that this one was workable, simple to understand, and practical to implement. What do you think?
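To see what a different step size does, here is a sketch using a hypothetical x10 impact scale. These impact values are invented for illustration; they are not part of the original scheme, which settled on x2 steps:

```python
# Hypothetical alternative impact scale with x10 steps (invented values).
p_scale = [0.1, 0.3, 0.5, 0.7, 0.9]           # probability scale as before
i_x10 = [0.0001, 0.001, 0.01, 0.1, 1.0]       # VLO .. VHI, steps of x10

# Re-score the Risk A / Risk B pair from earlier in the article:
risk_a = p_scale[-1] * i_x10[0]   # VHI probability, VLO impact
risk_b = p_scale[0] * i_x10[-1]   # VLO probability, VHI impact

# Risk B still wins, but now by a factor of over a thousand rather than
# roughly two - steeper impact steps weight the ranking even more
# heavily toward impact.
margin = risk_b / risk_a
```

The choice of step size is therefore a tuning knob for how strongly the scheme should prioritise impact over probability.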