Why Monte Carlo


A lot of Project Managers believe that the only source of risk to their projects is “events” that have a chance of “materializing”. They know how to deal with those by means of the risk register, which allows them to identify, communicate, mitigate, monitor and plan responses to those events.

However, no risk register protects them from another source of risk, one that is subtler, yet much more prevalent.

PMI’s own 2016 survey report shows that only 49% of projects are “completed on time”, while a whopping 16% are deemed failures. With all the risk management being done by certified professionals, you’d think PMI’s own survey would show a better rate than 49% for timely completion. Worse still, 16% is a rather high failure rate, and it is consistent with other studies:

A 2011 Harvard Business Review study of 1,471 large IT projects, with an average cost of USD $167M, checked hard data (not a survey) and revealed that one in six projects (16%) had a cost overrun of 200% and a schedule overrun of almost 70%.

A 2012 McKinsey-Oxford study of 5,400 IT projects (again: hard data, not a survey) found that software projects have an average schedule overrun of 33%, with 17% of all IT projects going so badly that “they can threaten the very existence of the company”.

It turns out that Black Swans are not that uncommon: one in six projects, in fact. PMI’s own survey shows that, for projects deemed a failure, 28% of respondents cited “inaccurate task estimates” among the primary causes of those failures, while another 12% mentioned “task dependency”.

But is it really? Is the inaccuracy of task estimates the reason their projects went so badly? Or is it more likely that these survey respondents are unaware of the risk inherent in scheduling a chain of interdependent tasks? Perhaps the 12% who mentioned “task dependency” are onto something!

Some good project managers who want to account for risk in task estimates ask their team to provide “optimistic”, “most likely” and “pessimistic” estimates for each task. With those three numbers, you know, for a single task, how likely it is to skew towards the optimistic or pessimistic scenario. However, for a network of interdependent tasks (what we call a schedule), the math required to correctly combine the estimates’ probabilities is daunting, and quite often counter-intuitive.

  • When you roll a die, there’s only a 1 in 6 chance (about 17%) of it showing a 1.
  • If you roll two dice, the probability of at least one die showing a 1 rises to 11 in 36 (about 31%).
  • If you roll three dice, the probability of at least one die showing a 1 rises to 91 in 216 (about 42%).
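The dice numbers above follow from the complement rule: the chance that no die out of n shows a 1 is (5/6)^n, so the chance of at least one 1 is its complement. A minimal Python check (illustrative, not from the article):

```python
# Probability of at least one die showing a 1 when rolling n fair dice:
# P = 1 - (5/6)**n, the complement of "no die shows a 1".
from fractions import Fraction

def p_at_least_one_ace(n_dice: int) -> Fraction:
    """Exact probability that at least one of n fair dice shows a 1."""
    return 1 - Fraction(5, 6) ** n_dice

for n in (1, 2, 3):
    p = p_at_least_one_ace(n)
    print(f"{n} dice: {p} = {float(p):.0%}")
```

Running it reproduces the three bullets: 1/6, 11/36 and 91/216.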

The same happens when you have parallel activities in a project: even if the chance of an individual task being late is small, the probability that at least one of them is late is larger, which means the probability that a subsequent activity begins late is larger, because it will not start until all of its parallel dependencies have finished. This is an effect of parallelism in the schedule (not of the quality of the individual estimates), and even PERT techniques do not account for it. Some call it Merge Bias:

Merge Bias

“Merge Bias is the impact of parallel tasks on the probability of completing on or before a need date”
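Merge Bias can be sketched by simulation under a simplifying assumption (my own illustration, not the article’s tool): each parallel task independently finishes on time with the same probability, and the merge point starts on time only if all of them do.

```python
import random

def merge_on_time_probability(p_task: float, n_parallel: int,
                              trials: int = 100_000) -> float:
    """Estimate the probability that a merge point starts on time when it
    must wait for n_parallel independent tasks, each on time with p_task."""
    random.seed(42)  # fixed seed so the estimate is reproducible
    on_time = sum(
        all(random.random() < p_task for _ in range(n_parallel))
        for _ in range(trials)
    )
    return on_time / trials

# Each task is 90% likely to be on time, yet the merge point is not:
# the exact values are 0.9, 0.81 and 0.729.
for n in (1, 2, 3):
    print(f"{n} parallel tasks: ~{merge_on_time_probability(0.9, n):.0%}")
```

Three parallel predecessors, each 90% reliable, already drag the merge point down to roughly a 73% chance of starting on time.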

Something similar happens with sequential activities: given two estimates with 80% confidence for two sequential activities, the chance that both finish within their individual estimates is only 0.8 × 0.8 = 64%. Add the fact that probability distributions for activities are more complex than for dice (usually triangular or beta distributions), and a project manager would have to be a master of probability theory to produce a “90% confidence” estimate for a whole project.
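The 0.8 × 0.8 = 64% arithmetic can be checked by simulation. The triangular estimates below are illustrative numbers I made up, not figures from the article:

```python
import random

random.seed(7)
TRIALS = 200_000

def quantile(samples, q):
    """q-th quantile of a list of samples."""
    s = sorted(samples)
    return s[min(int(q * len(s)), len(s) - 1)]

# Two sequential tasks with triangular(optimistic, pessimistic, most-likely)
# duration estimates; random.triangular takes (low, high, mode).
a = [random.triangular(3, 10, 5) for _ in range(TRIALS)]
b = [random.triangular(4, 12, 6) for _ in range(TRIALS)]
qa = quantile(a, 0.8)  # task A's individual 80%-confidence date
qb = quantile(b, 0.8)  # task B's individual 80%-confidence date

both = sum(x <= qa and y <= qb for x, y in zip(a, b)) / TRIALS
print(f"P(both tasks meet their own 80% dates) = {both:.0%}")
```

Since the two tasks are independent, the simulated probability lands at about 64%, as the multiplication predicts.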

Even with pixel-perfect estimates for the individual activities, extracted from hard data and demonstrably accurate, a project manager will introduce a huge source of error just by arranging those activities in a schedule. It’s the nature of parallelism and sequentiality! Not accounting for this source of error would be the project manager’s fault, even if he or she later tries to blame it on “inaccurate task estimates” or “task dependency”.

That’s where the Monte Carlo method helps: instead of becoming an expert in probability theory, a project manager can rely on a computer to apply the math of probability distributions, simulate the network of activities, and produce results that answer simple questions such as: what is the probability the project will be done by the chosen deadline? Which deadline should we choose if we want 90% confidence?
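A minimal sketch of the idea (not the author’s tool): a tiny made-up network where tasks A and B run in parallel, C must wait for both, and D follows C, with triangular duration estimates. All numbers, including the deadline, are illustrative assumptions.

```python
import random

random.seed(1)
TRIALS = 100_000

def simulate_once():
    """One pass through the example network: A and B run in parallel,
    C starts only after both finish (the merge), then D follows C."""
    a = random.triangular(2, 6, 3)   # triangular(optimistic, pessimistic, likely)
    b = random.triangular(3, 9, 4)
    c_start = max(a, b)              # Merge Bias lives in this max()
    c = random.triangular(1, 4, 2)
    d = random.triangular(2, 5, 3)
    return c_start + c + d

finishes = sorted(simulate_once() for _ in range(TRIALS))

deadline = 12.0
p_on_time = sum(f <= deadline for f in finishes) / TRIALS
p90 = finishes[int(0.9 * TRIALS)]
print(f"P(finish by day {deadline}): {p_on_time:.0%}")
print(f"deadline for 90% confidence: day {p90:.1f}")
```

The sorted list of simulated finish dates answers both questions at once: counting how many runs beat the deadline gives its confidence, and reading off the 90th-percentile finish gives the deadline that carries 90% confidence.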

I have built an online tool for performing simple Monte Carlo analysis of projects that can help you answer those questions. It’s available for free at http://montecarlo.atehortua.com where I hope it can be useful.



One comment

  1. The missing question here is: what is the root cause of these overages?
    I work as a contractor for the Institute for Defense Analyses, supporting the Office of the Secretary of Defense (OSD) Performance Assessment and Root Cause Analysis (PARCA). This is the primary question in the assessments we work on.
    While Monte Carlo simulation is a powerful tool (and mandated by DI-MGMT-81861 in our domain, along with Risk Management processes), without knowing “why” the project went over budget or came in late, no corrective actions can be developed to prevent this in the future.

    Here are some starting points for establishing the credible baseline needed for any hope of success



© Copyright 2016 Vladimir Atehortúa
All Rights Reserved