However, no risk register protects them from another source of risk, one that is subtler, yet much more prevalent.

PMI’s own 2016 survey report shows that only **49**% of projects are *“completed on time”*, while a whopping **16**% are deemed failures. With all the risk management being done by certified professionals, you’d expect PMI’s own survey to show a better rate than **49**% for timely completion. Worse still, **16**% is a rather high number for projects deemed failures, and yet it is consistent with other studies:

- A 2011 Harvard Business Review study of 1,471 large IT projects, with an average cost of USD $167M, examined hard data (not a survey) and revealed that one in six projects (**16**%) had a cost overrun of 200% and a schedule overrun of almost 70%.

- A 2012 McKinsey-Oxford study of 5,400 IT projects (again: hard data, not a survey) found that software projects have an average schedule overrun of **33**%, with **17**% of all IT projects going so badly that *“they can threaten the very existence of the company”*.

It turns out that Black Swans are not that uncommon: One in 6 projects, in fact. PMI’s own survey shows that for those projects deemed a failure, **28**% of respondents said *“inaccurate task estimates”* was among the primary causes of those failures, while another **12**% mentioned *“task dependency”*.

But is it really? Is the inaccuracy of task estimates the reason their projects went so badly? Or is it more likely that these survey respondents are blind to the risk inherent in scheduling a chain of interdependent tasks? Perhaps the **12**% who mentioned “*task dependency*” are onto something!

- When you throw a die, there’s only a 1 in 6 chance (**16**%) of it showing a 1.
- If you throw two dice, the probability of at least one die showing a 1 rises to 11 in 36 (**30**%).
- If you throw three dice, the probability of at least one die showing a 1 rises to 91 in 216 (**42**%).
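These probabilities follow from the complement rule: the chance that none of *n* dice shows a 1 is (5/6)^n, so the chance that at least one does is 1 - (5/6)^n. A few lines of Python confirm the fractions above:

```python
from fractions import Fraction

# P(at least one of n dice shows a 1) = 1 - (5/6)^n
for n in (1, 2, 3):
    p = 1 - Fraction(5, 6) ** n
    print(f"{n} dice: {p} = {float(p):.1%}")
```

The exact fractions (1/6, 11/36, 91/216) match the list above.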

The same happens when you have parallel activities in a project: even if the chance of an individual task being late is small, the probability of at least one of them being late is larger, which means a subsequent activity is more likely to begin late, because it will not start until all the parallel dependencies have finished. This is an effect of parallelism in the schedule (not of the quality of the individual estimates), and even PERT techniques do not account for it. Some call it *Merge Bias*.
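A quick simulation makes the effect concrete. The numbers here are hypothetical: three parallel tasks, each with an 80% chance of finishing on time, feeding a single successor. The successor starts on time only when all three predecessors do, which happens about 0.8³ ≈ 51% of the time, far below the 80% of any individual task:

```python
import random

random.seed(42)  # reproducible runs

TRIALS = 100_000
# A run is "on time" only if all three parallel predecessors are on time.
on_time = sum(
    all(random.random() < 0.80 for _ in range(3))
    for _ in range(TRIALS)
)
print(f"Successor starts on time: {on_time / TRIALS:.1%}")  # near 0.8**3 = 51.2%
```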

Something similar happens to sequential activities: Two estimates with **80**% confidence for two sequential activities, when added, produce a **64**% confidence estimate for the whole. Add the fact that probability distributions for activities are more complex than for dice (usually triangular or beta distributions) and a project manager would have to be a master of probability theory to provide a “90% confidence” estimate for a “whole project”.
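The sequential case is just the multiplication rule for independent events: 0.80 × 0.80 = 0.64. A simulation with hypothetical numbers shows the same thing, counting the runs in which both activities meet their own 80%-confidence estimates:

```python
import random

random.seed(7)

TRIALS = 100_000
# Each activity independently meets its 80%-confidence estimate.
both_on_time = sum(
    random.random() < 0.80 and random.random() < 0.80
    for _ in range(TRIALS)
)
print(f"Both estimates hold: {both_on_time / TRIALS:.1%}")  # near 64%
```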

That’s where the Monte Carlo method helps: instead of becoming an expert in probability theory, a project manager can rely on a computer to apply the math of probability distributions, simulate the network of activities, and produce results that answer simple questions such as: *what is the probability the project will be done by the chosen deadline?* or *which deadline should we choose if we want a probability confidence of 90%?*
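Here is a minimal sketch of such a simulation, with a made-up network and durations (the tool itself may model things differently): each run draws triangular durations for every activity, parallel branches merge with `max`, and the sorted totals answer both questions at once:

```python
import random

random.seed(0)

# Hypothetical network: A, then B and C in parallel, then D after both.
def project_duration():
    a = random.triangular(2, 6, 3)    # args: optimistic, pessimistic, most likely (days)
    b = random.triangular(4, 12, 6)
    c = random.triangular(3, 10, 5)
    d = random.triangular(1, 4, 2)
    return a + max(b, c) + d          # D waits for the later of B and C

TRIALS = 10_000
durations = sorted(project_duration() for _ in range(TRIALS))

deadline = 18
p_on_time = sum(t <= deadline for t in durations) / TRIALS
p90 = durations[int(0.90 * TRIALS)]  # 90th percentile of total duration

print(f"P(done by day {deadline}): {p_on_time:.0%}")
print(f"Deadline for 90% confidence: day {p90:.1f}")
```

Note how the `max` in the critical path is exactly the Merge Bias discussed earlier: the simulation accounts for it automatically, run by run.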

I have built an online tool for performing simple Monte Carlo analysis of projects that can help you answer those questions. It’s available for **free** at http://montecarlo.atehortua.com, and I hope you find it useful.
