
Copyright Notice:

Materials published by Intaver Institute Inc. may not be published elsewhere without prior
written consent of Intaver Institute Inc. Requests for permission to reproduce published
materials should state where and how the material will be used.

Monte Carlo Schedule Risk Analysis


Intaver Institute Inc.
303, 6707, Elbow Drive S.W.
Calgary, AB, T2V0E5, Canada
tel: +1(403)692-2252
fax: +1(403)259-4533
[email protected]
www.intaver.com

One of the fundamental questions of project management is, "What will be the duration and cost of the project given the multiple risks and uncertainties?"
Program Evaluation and Review Technique (PERT) and Monte Carlo analysis
may help to answer these and other questions. Monte Carlo analysis is a
straightforward approach to deal with complex sets of project uncertainties.
However, both Monte Carlo and PERT have a number of limitations that are
related to the manner in which we identify and interpret uncertainties.
How much will it really cost?
With the recent increase in oil prices, Calgary, the oil capital of Canada, has experienced
tremendous growth. Population has reached one million people and the cost of all major
infrastructure projects has risen dramatically. The original price tag to complete a section
of Calgary's ring road was pegged at $250 million Canadian dollars in 2004. By the middle of 2006, cost overruns were estimated to be $235 million (Braid, 2006). One third of this amount could be attributed to changes in the project scope, which included two new intersections; however, the remaining two-thirds was caused by increases in the cost of labor, material, and fuel. Due to increased costs, the project will be delayed for at least one year. Similar situations occurred with the construction of a causeway in the city's
south, where costs jumped from $100 million to $150 million, and for a wastewater plant,
which experienced an increase from $220 million to $350 million. When cost overruns
and delays of these proportions occur, two questions need to be asked:
Why did it happen?
What do we do next?
We are all blessed with remarkable hindsight; analyzing the past is always easier than
predicting the future. In the examples above, city and provincial officials, the parties
responsible for financing the projects, were trying to find a common strategy to deal with the escalating costs. One option was to delay the projects; another was to continue with the original plan, since costs on the deferred projects would only increase further, but by how much? The answer to this question would drive the decision. And what will be the final tally for these projects given all the uncertainties that surround labor, fuel, and material costs during the oil boom in Calgary? For example, will $235 million be the total cost overrun in the ring road project, or will it be even more given this environment?
Nothing is certain in our world, and this includes project durations, finish times, costs, and other parameters. Therefore, it is not possible to say that this road construction will cost exactly $250 million. What we can say is that there is a chance that the project will cost $250 million. But what is the nature of this chance? If we say that there is a 90% chance that the project will cost $250 million or less, this implies that we are very confident that the project will be completed within budget. However, if the chance is 20%, this means that we do not have a lot of confidence in the estimate and that we need either to review the project scope and resources, or accept cost overruns. By quantifying the chance for each project scenario, you can review different project alternatives and choose the one that has the highest chance of successful completion. The chance that the project will be completed on time and within budget is one of the most important indicators for decision-making.
So we need to find the answers to two very important questions, which will help us to
make our decision.
1. How much would the project cost and how long would it take given all the
risks and uncertainties associated with the project?
2. What is the chance that the project will be completed on time and within budget?
If we know the risks and uncertainties associated with activities within a project, we can
perform calculations to find an answer to these questions. The simplest way to calculate
the effect of risks and uncertainties is to create many schedules of the same project with
combinations of input parameters: risks, different estimates of activity cost and
duration, resources, and so on. We can then analyze all these scenarios together to find
the answer to these questions.
This method is called scenario analysis. It is a simple and straightforward approach you
can use without any sophisticated tools. It works very well for simple projects or for a
particular phase of a project. The PMBOK Guide recommends what-if scenario analysis as
one of the schedule network analysis techniques.
The problem with this approach is that accurately representing the combinations of risks
and uncertainties that exist in most projects will produce an unmanageably large number
of scenarios. Each project has a large number of tasks and resources, and each task and
resource can have different risks and uncertainties. To further complicate the analysis,
these risks can occur at different times, and we need to find the cumulative impact of
these risks on the project. People have developed a number of quantitative methods that
can help to overcome this problem.

PERT
Very often major technological advances are by-products of military research. Between
1956 and 1958, the consulting firm Booz Allen Hamilton assisted the U.S. Navy's
Special Project Office with the development of the Polaris Fleet Ballistic Missile
program. This project was probably one of the largest and riskiest research and
development efforts the US military had ever undertaken. Managers wanted estimates of
the probabilities of meeting important milestones, such as test-launching a missile on a
particular date. A by-product of this project was the Program Evaluation and Review
Technique, or PERT (Craven, 2001). Developed 50 years ago, classic PERT is well known today, although its applications are limited. Here is how it works:
The expected duration (t), or mean, of an activity is calculated using the following formula:

t = (Optimistic duration + 4 x Most likely duration + Pessimistic duration) / 6
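As a quick illustration, here is a minimal Python sketch of this calculation; the companion standard deviation estimate, (pessimistic minus optimistic) divided by six, is part of classic PERT although it is not shown in the formula above. The example estimates (3, 5, and 10 days) are hypothetical.

```python
def pert_expected_duration(optimistic, most_likely, pessimistic):
    """Classic PERT expected (mean) duration."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

def pert_standard_deviation(optimistic, pessimistic):
    """Classic PERT estimate of the standard deviation."""
    return (pessimistic - optimistic) / 6

# Hypothetical task: 3 days optimistic, 5 days most likely, 10 days pessimistic
t = pert_expected_duration(3, 5, 10)     # (3 + 4*5 + 10) / 6 = 5.5 days
sd = pert_standard_deviation(3, 10)      # (10 - 3) / 6 is about 1.17 days
print(f"Expected duration: {t:.2f} days, standard deviation: {sd:.2f} days")
```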
This formula allows managers to use expected durations to create project schedules. But
why not just create an optimistic or pessimistic schedule using these durations? Here is a
simple example that will help to explain why PERT uses expected durations instead of
optimistic or pessimistic durations separately.
Let's imagine a construction project that requires the installation of ten prefabricated
columns. The installation of one column takes between 2 and 4 hours, and the columns
are installed one after another. However, if there is a problem with the installation of one
column, this does not mean that there will be a problem with the other columns. If one
column can take 4 hours, is it rational to determine that the pessimistic duration for
installing all of the columns will be 4 hours x 10 columns = 40 hours? Not really; this would be an extremely pessimistic duration, which could happen only if the installation of every column experienced the same problem, which is highly unlikely.
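A quick simulation makes this concrete. The Python sketch below assumes, purely for illustration, that each column's installation time is uniformly distributed between 2 and 4 hours (the article gives only the range, not the distribution); it shows that the total for ten independent columns clusters around 30 hours and almost never approaches the 40-hour figure.

```python
import random

random.seed(42)
TRIALS = 100_000
COLUMNS = 10

totals = []
for _ in range(TRIALS):
    # Each column takes between 2 and 4 hours, independently of the others
    totals.append(sum(random.uniform(2.0, 4.0) for _ in range(COLUMNS)))

near_worst_case = sum(1 for t in totals if t >= 38.0)
print(f"Average total: {sum(totals) / TRIALS:.1f} hours")                         # close to 30 hours
print(f"Trials with a total of 38 hours or more: {near_worst_case} of {TRIALS}")  # essentially zero
```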
The column example illustrates one of the biases associated with the anchoring and adjustment heuristic. People tend to significantly overestimate the probability of conjunctive events (events where several conditions must all occur). The four-hour duration is an anchor that can lead to an incorrect judgment.
As we can see, if we use optimistic (pessimistic) task durations to create optimistic (pessimistic) schedules, we will frequently get misleading results. Therefore PERT, by using the expected duration approach, was a major step toward the incorporation of uncertainties in project management.
More importantly, PERT also included simple formulas and methods to calculate the
probability of meeting specific milestones and therefore was easy for project managers
and teams to use without any arduous training.
For all its elegance, PERT has a number of problems:

• Classic PERT gives accurate results only if there is a single dominant path through a precedence network. When a single path is not dominant, classic PERT usually provides overly optimistic results (Klastorin, 2003).

• According to PERT, uncertainties associated with tasks are independent of each other, which is not true in many cases.

• When we try to estimate the most optimistic, the most pessimistic, and the most likely duration of a task, we are affected by the anchoring heuristic. The most likely duration becomes an unwanted anchor, which skews our ability to accurately estimate task or project durations.

[Figure 1. Frequency histogram: activity duration (hours) on the horizontal axis, frequency and probability on the vertical axes]

To address these and other challenges, other analytical approaches have been developed.
But before we can discuss them, we need to give a quick overview of some basic
concepts that are used with these methods.
Statistical Distributions
This may seem strange, but many people do not have a concept of maps. When they look
at a map of a city, they cannot find a correlation between what they see on a map and
what is physically around them. A similar situation can occur with statistical
distributions. Many project managers are familiar with the concept of statistical
distributions, perhaps from a basic course on probability and statistics, but they are
unable to tell anybody (including themselves) how they actually work or what their practical application is. So, if you find yourself in this group, here is a quick primer.
Let's assume that you are trying to analyze the duration of the activity "Install Kitchen Sink". Depending on the type of sink, the configuration of the pipes, and other factors, it can take varying lengths of time. So after installing twenty sinks and recording the duration each time, you can develop a record of the task durations (Table 1).
Duration of the activity "Install Kitchen Sink"   Occurrences   Probability (occurrences / 20 installations)
Between 0 and 0.5 hours                           2             2/20 = 0.10 (10%)
Between 0.5 and 1 hour                            10            10/20 = 0.50 (50%)
Between 1 and 1.5 hours                           5             5/20 = 0.25 (25%)
Between 1.5 and 2 hours                           3             3/20 = 0.15 (15%)

Table 1. Activity duration on different trials


You can actually represent this as a chart with duration on the horizontal axis and
frequency on the vertical axis. The chart can also display the probability for certain
durations (Figure 1).

A statistical distribution is an arrangement of values showing their frequency of occurrence. You may redraw this chart in another format: for each point on the chart, add up the frequencies (probabilities) associated with all points to the left of the selected point. This is how we calculate cumulative probability, and the resulting chart is called a cumulative probability plot. These manipulations allow us to determine the probability associated with a certain value.
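As a small illustration, the Python sketch below builds the cumulative probabilities from the occurrence counts in Table 1; the interval boundaries and counts are taken directly from the table.

```python
# Occurrence counts from Table 1 (20 installations of "Install Kitchen Sink")
intervals = [(0.0, 0.5, 2), (0.5, 1.0, 10), (1.0, 1.5, 5), (1.5, 2.0, 3)]
total = sum(count for _, _, count in intervals)   # 20 installations

cumulative = 0.0
for low, high, count in intervals:
    probability = count / total
    cumulative += probability
    print(f"Duration between {low} and {high} hours: {probability:.0%}, cumulative {cumulative:.0%}")
# Cumulative probabilities come out as 10%, 60%, 85%, and 100%
```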

For example, what is the probability that the duration will be 1.2 hours or less? To find out, locate 1.2 hours on the horizontal axis and draw a line up to the curve. Now draw a straight line to the right to read the probability, which in this example equals 85%. Unfortunately, cumulative probability plots can be confusing. They do not help us visualize the range of the parameter in question (in this case, task duration). They are only useful if we draw lines and read off a probability, as shown in Figure 2.

[Figure 2. Cumulative probability plot: cumulative frequency and probability versus duration in hours]


If you have an empirical dataset, you may end up with a very irregular or spiked distribution curve, and you may want to smooth this distribution using one of the continuous distributions. Continuous distributions are defined by mathematical formulas. Such continuous distributions are usually a better reflection of the nature of real-life data because they have a continuum of possible outcomes. While there is a large list of different continuous distributions, only a few of them are actively used in project management (Figure 3). Among them are:
Uniform: there is an equal probability that the parameter will be within a certain range.
Triangular: the parameter is estimated using minimum, maximum, and most likely estimates; the minimum and maximum are not optimistic and pessimistic estimates, they are extremes.
Normal: a symmetrical distribution, which occurs very often in business and in nature. Remember that this distribution is unbounded, which means that it spreads to infinity at both ends; in project management it needs to be used with some type of cutoff to remove the infinite tails.
Lognormal: a positively skewed (non-symmetrical) distribution, with a longer tail to the right.
Beta: a bounded distribution defined by a mathematical formula that includes two coefficients. By changing these coefficients, the beta distribution can take a variety of shapes; it can be symmetrical or non-symmetrical. The PERT formula was derived using a beta distribution.

[Figure 3. Different continuous statistical distributions: normal, triangular, and uniform]


In addition to these distributions that are defined by recognizable mathematical formulas,
most software tools will allow you to create custom distributions. All that is required is
data, which in this case is the frequency of occurrence of a certain value. For example,
you can enter the distribution shown on Figure 1 and use it for further analysis.
Moreover, if you have empirical data, you can find known statistical distributions that
can be fitted to this data. There are numerous software tools that will help you to choose the
best fit among different types of distributions for your data.
When most people think about statistical distributions, the first thing that comes to their minds is a picture of a chart. But remember, statistical distributions are really only an arrangement of values, and there are many parameters that can be used to analyze a distribution. The most important parameters are:
Mean: the mathematical average, calculated as the sum of the variable's values over all trials divided by the number of trials.
Standard deviation: a measure of how widely dispersed the values in a distribution are. The greater the standard deviation, the more uncertainty is associated with the parameter.
Percentile: a value on a scale of zero to one hundred that indicates the percentage of the distribution that is equal to or below this value. A value at the 95th percentile (sometimes written as P95) is greater than or equal to 95 percent of all other values in the distribution.
Monte Carlo Technique: How it Works
Now that we have provided a little background on statistical distributions, we can explain
how you can use Monte Carlo techniques to analyze a project schedule with uncertainties.
Monte Carlo methods were originally practiced under more generic names such as
"statistical sampling". Actually the name "Monte Carlo", popularized by early pioneers in
the field including Stanislaw Marcin Ulam, Enrico Fermi, John von Neumann and
Nicholas Metropolis, is a reference to the famous casino in Monaco. Its use of
randomness and the repetitive nature of the process are analogous to the activities
conducted at a casino. Stanislaw Marcin Ulam tells in his autobiography Adventures of a
Mathematician that the method was named in honor of his uncle who was a gambler
(Ulam, 1991).

Here is how it works. Let us assume that you have performed a calculation of some type. The type of calculation does not matter; for example, it can be an economic model defined in a spreadsheet. However, in this example, we are performing schedule network analysis using the critical path method (CPM). Each project schedule has a number of uncertain (probabilistic) parameters: task duration, task cost, start and finish times of a task, rates associated with resources, and others. Our goal is to come up with statistical distributions for project cost and duration, which we develop by running simulations. Here are the steps of the simulation process:
1. Retrieve the values of the parameter from the statistical distribution. This process
is called sampling. Basically, you roll a die and return a random number. You
will use this number in a mathematical formula associated with the distribution.
The formula will return a value. It is at this point that the magic of Monte Carlo
occurs: if you roll a die many times, values associated with the hump or spike of
the distribution will come up more often than the values associated with the low-lying areas (or tails) of the distribution.
2. Use these values in the calculation engine that you are using to run your model,
which in this case is a project schedule. In other words, your project schedule will
be calculated using a critical path method based on task duration, cost, and the
other parameters we retrieved from the statistical distribution in the previous step.
3. Save the results of the analysis (project duration, cost, and other parameters).
Repeat the process hundreds of times, each time using a new set of values from
the statistical distribution. Each separate run is called a trial. After you have
calculated and saved the results of hundreds of trials, you will now have a
distribution of project parameters, which you can represent on a chart similar to
those shown in Figure 1 and Figure 2.
Luckily, you don't have to perform all these calculations manually as there are many
software tools specifically designed for this task. Your job is to define the distributions
for input parameters and analyze the results. One of these software packages is
RiskyProject by Intaver Institute Inc. (www.intaver.com).
Here is a more concrete example. Assume that you have a project with three tasks (Figure
4). The duration of the first task is defined by a normal distribution (with a mean of 4
days), the duration of the second task is defined by a uniform distribution (between 3 and 7
days), and the duration of the third task is deterministic (always 4 days).
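Here is a minimal Python sketch of this simulation. The standard deviation of Task 1's normal distribution is not given in the article, so a value of 1 day is assumed purely for illustration, and the three tasks are treated as strictly sequential, so the project duration in each trial is simply the sum of the three sampled durations.

```python
import random
import statistics

random.seed(1)
TRIALS = 20   # matches the article's example; real analyses typically run hundreds of trials

project_durations = []
for _ in range(TRIALS):
    task1 = random.normalvariate(mu=4.0, sigma=1.0)   # normal, mean 4 days (sigma assumed)
    task2 = random.uniform(3.0, 7.0)                   # uniform between 3 and 7 days
    task3 = 4.0                                        # deterministic, always 4 days
    project_durations.append(task1 + task2 + task3)    # tasks run one after another

print(f"Mean project duration: {statistics.mean(project_durations):.2f} days")
print(f"Standard deviation:    {statistics.stdev(project_durations):.2f} days")
```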

[Figure 4. Monte Carlo simulation process: duration distributions for Task 1, Task 2, and Task 3, and the resulting histogram of project duration]


For this example, we will run only twenty trials. We use sampling to obtain durations for Tasks 1 and 2 from their distributions. Take a look at Table 2. For Task 1, durations between 3 and 5 days occur more frequently than other durations because that is where the peak of the distribution curve, or hump, is. For Task 2, because it has a uniform distribution, all durations between 3 and 7 days are equally likely. For each trial, we add up all the task durations to get the project duration. The results are displayed in a histogram. As you can see, the histogram for the project duration also has a hump due to Task 1's normal distribution.
Using the data from the twenty trials, we can now calculate all of the probabilistic project
parameters, including the mean (in this example it is 12.185 days), standard deviation,
and different percentiles.
Trial   Task 1   Task 2   Task 3   Project
1       1.2      3.5      4.0       8.7
2       4.0      2.8      4.0      10.8
3       2.5      4.0      4.0      10.5
4       3.0      6.0      4.0      13.0
5       3.5      4.4      4.0      11.9
6       4.2      3.9      4.0      12.1
7       3.8      6.2      4.0      14.0
8       4.4      4.4      4.0      12.8
9       2.1      5.9      4.0      12.0
10      4.1      5.8      4.0      13.9
11      4.8      3.1      4.0      11.9
12      4.2      4.9      4.0      13.1
13      3.9      5.5      4.0      13.4
14      2.3      5.1      4.0      11.4
15      5.8      3.1      4.0      12.9
16      3.4      3.9      4.0      11.3
17      4.6      3.7      4.0      12.3
18      3.7      4.8      4.0      12.5
19      3.9      3.5      4.0      11.4
20      4.3      5.5      4.0      13.8

Table 2. Monte Carlo simulation results (task and project durations in days)
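The summary statistics quoted above can be reproduced directly from the project durations in Table 2, for example with this short Python sketch:

```python
import statistics

# Project durations (days) for the 20 trials listed in Table 2
durations = [8.7, 10.8, 10.5, 13.0, 11.9, 12.1, 14.0, 12.8, 12.0, 13.9,
             11.9, 13.1, 13.4, 11.4, 12.9, 11.3, 12.3, 12.5, 11.4, 13.8]

print(f"Mean: {statistics.mean(durations):.3f} days")               # 12.185 days
print(f"Standard deviation: {statistics.stdev(durations):.2f} days")
print(f"Shortest / longest trial: {min(durations)} / {max(durations)} days")
```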

Which Distribution Should Be Used?


The distribution you choose for a parameter in your project schedule may be based on an analysis of the historical data related to that parameter. For example, if the task has occurred regularly in the past, you can measure its duration each time and use these measurements to define the distribution.
Unfortunately, historical data simply doesn't exist for many projects. Let us see how expert judgment may help define statistical distributions. The probability method (Goodwin and Wright, 2004) helps to mitigate the negative effects of anchoring, including insufficient adjustment:
1. Ask an expert to establish a range of values for the parameter.
2. Ask the expert to imagine a situation that could lead to a value lying outside the range and to revise the range if necessary.
3. Divide the range into four to seven intervals and, for each interval boundary, ask the expert to assess the chance that the value will fall below it. For example, if the expert estimates that the duration range is between 5 and 10 days, ask "What is the chance that the duration is less than 6 days?", then "What is the chance that the duration is less than 7 days?", and so on. By the end, you have elicited a cumulative probability distribution. You may draw the distribution, connect the points by hand, and fit a statistical distribution to them.
4. Perform a reality check. First, you may ask the expert to come up with cumulative probabilities using different intervals, for example 1.5 days instead of 1 day. Then you can compare the results with the previous assessment and make corrections if needed. You can also ask the expert to define where they think the peak of the distribution should be and compare that with the results found in step 3.
Another method of eliciting judgment for continuous distributions is the method of
relative heights. Using the previous example, you can ask the experts how many times
the duration will be between 5 and 6 days, between 6 and 7 days and so on. Then you can
draw a frequency histogram, similar to the chart shown on Figure 1.
Unfortunately, if your project schedule has more than a few dozen tasks, it could take
quite a while to come up with statistical distributions for all uncertainties. The good news
is that with most project schedules, if you know the range of the data, the particular shape
of the distribution may not be very critical for the analysis. If you choose a triangular distribution for a particular task duration instead of a lognormal one, this will not completely skew your analysis as long as your range is accurate. Risks and uncertainties that you
have not accounted for will cause you more problems than an inaccurate distribution
shape.
How many trials are required?
One of the first questions many people ask about the Monte Carlo process is how many
trials are needed to perform a meaningful analysis. The answer depends upon the nature
of the uncertainties associated with the project schedule. In some cases, you may want to
incorporate very rare events that have dramatic outcomes into the project schedule. For
example, there is 1 chance in 1000 that a natural disaster will strike. While this event can
be modeled using a discrete distribution, you will need to run at least 1000 trials to see
how the event could affect the project schedule. Apart from these special cases, it is our
experience that you can perform a meaningful analysis on most project schedules, including very large schedules with over a thousand activities, with only a couple of hundred trials.
Fortunately, most software applications used in this field have a feature called
convergence monitoring. After each trial, the software will calculate statistical parameters
(mean, standard deviation, and possibly others) of a selected project variable, e.g. project
cost or duration. The software will calculate the relative difference between these
statistical parameters on two consecutive trials and if this difference stays within a
specified variance over a number of consecutive trials, the software will deem that the
results have converged and halt the Monte Carlo process. For example, if the relative
difference between project standard deviations is less than 0.5% on more than 25
consecutive trials, the process ends.
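The sketch below shows one way such a convergence rule could be implemented; it is a simplified illustration rather than the algorithm of any particular product, and the single normally distributed "result" stands in for a full schedule calculation.

```python
import random
import statistics

def run_one_trial():
    # Stand-in for a full schedule calculation: a single uncertain project duration
    return random.normalvariate(mu=100.0, sigma=10.0)

random.seed(7)
results = []
stable_count = 0
previous_sd = None

while True:
    results.append(run_one_trial())
    if len(results) < 2:
        continue                      # need at least two results for a standard deviation
    sd = statistics.stdev(results)
    if previous_sd is not None and abs(sd - previous_sd) / previous_sd < 0.005:
        stable_count += 1             # change smaller than 0.5%: one more stable trial
    else:
        stable_count = 0              # change too large: reset the counter
    previous_sd = sd
    if stable_count >= 25:            # 25 consecutive stable trials: declare convergence
        break

print(f"Converged after {len(results)} trials; standard deviation is about {previous_sd:.2f}")
```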
Analysis of Monte Carlo Results
What is the chance that the project will be on time and within budget?
Monte Carlo helps us answer this very important question. Technically, we need to
record the number of trials in which the project was on time and within budget and divide
this by the total number of trials. For example, if you ran 100 trials and in 65 of them the
project was on time, the chance would be 65%. The best way to analyze this is to use a
statistical distribution associated with the project duration. Fortunately, the software tools
you are using should have an interactive histogram for the statistical distribution: enter
the date and you will be able to view the chance that the project will be completed before
or on a particular date.
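For example, using the twenty project durations from Table 2 and a hypothetical 13-day deadline, the calculation is a simple count (a Python sketch for illustration only):

```python
# Project durations (days) from Table 2 and a hypothetical 13-day deadline
durations = [8.7, 10.8, 10.5, 13.0, 11.9, 12.1, 14.0, 12.8, 12.0, 13.9,
             11.9, 13.1, 13.4, 11.4, 12.9, 11.3, 12.3, 12.5, 11.4, 13.8]
deadline = 13.0

on_time = sum(1 for d in durations if d <= deadline)   # 15 of the 20 trials finish in time
print(f"Chance of finishing within {deadline} days: {on_time / len(durations):.0%}")   # 75%
```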
Sensitivity and Correlations
Using Monte Carlo, you can identify a tasks uncertainties that will have the greatest
effect on the project schedule. For example, if a task is very risky, it can significantly
affect the project duration. In addition, you can identify correlations between tasks and
monitor how they affect the project schedule.
Critical Indices
If we analyze a deterministic project schedule, we can identify a critical path. However,
when we use Monte Carlo, the critical path can be different in each trial. In this case, we
can determine the percentage of time a task is on the critical path during the trials. For
example, as a result of the analysis, we find that Task A will be on the critical path 60%
of the time, Task B 30% of the time, and Task C 45% of the time. In this example, Task
A would be the most critical task and needs to be examined further. These are called
critical indices and are valuable in identifying critical tasks that have risks and
uncertainties.
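Here is a minimal sketch of how a critical index could be computed, using a hypothetical network in which two parallel tasks, A and B, both feed a final task C; in each trial, whichever of A or B takes longer lies on the critical path. The triangular estimates are invented for illustration.

```python
import random
import statistics

random.seed(3)
TRIALS = 1000
a_critical = 0
project_durations = []

for _ in range(TRIALS):
    task_a = random.triangular(4.0, 10.0, 6.0)   # low, high, most likely (days)
    task_b = random.triangular(5.0, 8.0, 6.0)
    task_c = 3.0                                  # final task, always on the critical path
    project_durations.append(max(task_a, task_b) + task_c)
    if task_a >= task_b:                          # Task A drives this trial's duration
        a_critical += 1

print(f"Mean project duration: {statistics.mean(project_durations):.1f} days")
print(f"Critical index of Task A: {a_critical / TRIALS:.0%}")
print(f"Critical index of Task B: {(TRIALS - a_critical) / TRIALS:.0%}")
```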
Probabilistic Calendars
If there is a storm on the coast, you cannot continue a seaport improvement project
during this time. Using Monte Carlo analysis, you can define the chance that a certain calendar of working and non-working days will be used. For example, a storm calendar could be used 5% of the time to take into account the effect of poor weather conditions.
Deadlines

If a task reaches a certain deadline without being completed, one possible outcome is that the task or the project will be canceled. The issue is that we do not know in advance whether the project will reach the deadline, or what the chance is that the deadline will be missed. Monte Carlo will help you to answer this question, as it is easy to count the number of trials in which the deadline is missed.
Conditional branching
Let's assume that your project schedule includes two different branches representing two
different alternatives. Conditional branching allows the project to branch from one task to
another task based on certain conditions. For example, the task duration is 6 days +/- 2
days (Figure 5). If the task is completed within 6 days, one project scenario will be
selected, but if it is completed after 6 days, the other alternative will be selected. These
types of conditions can be based not only on duration, but also on finish time, cost, and
other parameters.

[Figure 5. Conditional branching: if the task duration is 6 days or less, one branch is selected; if it is more than 6 days, the other branch is selected]
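Here is a minimal sketch of how such a condition could be evaluated inside a simulation, assuming purely for illustration that the 6 days +/- 2 days duration is modeled as a uniform distribution between 4 and 8 days:

```python
import random

random.seed(5)
TRIALS = 10_000
branch_counts = {"Alternative 1": 0, "Alternative 2": 0}

for _ in range(TRIALS):
    duration = random.uniform(4.0, 8.0)          # 6 days +/- 2 days, modeled as uniform
    if duration <= 6.0:
        branch_counts["Alternative 1"] += 1      # the project continues along the first branch
    else:
        branch_counts["Alternative 2"] += 1      # the project continues along the second branch

for branch, count in branch_counts.items():
    print(f"{branch} selected in {count / TRIALS:.0%} of trials")
```

The same counting of how often each branch is executed, divided by the number of trials, is what gives the chance of task existence described below.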


Probabilistic branching
Probabilistic branching allows the project to branch from one task to another task or
group of tasks as part of the simulation process. For example, there may be a 50% chance that one branch will be selected and a 50% chance that the other branch will be selected.
Chance of task existence
If you use probabilistic and conditional branching, some alternatives will be executed in
one trial but not another. Therefore, you can count how many times the task was
executed. This is the chance that the task will be executed or exists in a project.
Conclusion: Is Monte Carlo the Ultimate Solution?
In essence, Monte Carlo allows a manager to model a huge number of combinations of project scenarios as part of one straightforward process. PERT and then Monte Carlo represented major steps forward in project decision analysis. By understanding the historical trends for the costs of fuel, labor, and raw materials and accurately forecasting these costs into the future, we can predict the cost of projects. Based on these forecasts, we can make decisions for complex portfolios such as the City of Calgary construction projects, which we discussed at the beginning of this paper. PERT and Monte Carlo methods will help us to determine what may happen to a project taking into account the cumulative effect of many risks and uncertainties.

However, for a number of reasons, Monte Carlo has not yet become part of the standard
toolset used by project managers. Many project managers are unfamiliar or
uncomfortable with the process even though it is covered in the PMBOK Guide.
Moreover, and perhaps more importantly, Monte Carlo and PERT have not been more
widely adopted because project managers understand that inaccurate data will lead to
inaccurate results (Garbage in/Garbage Out principle). Monte Carlo by itself will not
solve the fundamental issues related to defining uncertainties:
1. Terry Williams (Williams, 2004) noted that project managers don't sit and
wait when a project slips. They perform certain recovery actions, which in
most cases are not taken into account by Monte Carlo. In this respect, Monte
Carlo may give overly pessimistic results. At the same time, as we already
know, we are all subject to the overconfidence bias, which leads to
overoptimistic project schedules. Unfortunately, the combination of optimistic
and pessimistic results does not equal accurate results. Instead, you will get
inadequate results.
2. Defining distributions is not a trivial process. Distributions are a very abstract
concept that many of us find difficult to work with. To define distributions
accurately, we have to perform a few mental steps that can be easily
overlooked. Whether we are performing estimates of project parameters or
developing distributions, we are affected by the cognitive and motivational
biases.
As with everything else, Monte Carlo is not a panacea, but it is an excellent tool for the
following situations:
a. You have either reliable historical data or data that you can use to create a
reliable probabilistic forecast. For example, you can predict the cost of raw
materials within a certain range.
b. You have tools to track actual data for each phase of the project and can
perform Monte Carlo analysis at each phase to update your schedule.
c. You have a group of experts who understand the project, have experience in
similar projects, and are trained to avoid cognitive and motivational biases
when they define uncertainties and provide estimates.
If your project does not meet at least one of these criteria, Monte Carlo analysis may not
substantially help to improve your decision-making. There are a lot of projects that do not
meet these criteria. In particular, research and development projects fall into this
category. Fortunately, there is a schedule network analysis technique, event chain
methodology, which can help to address the shortcomings of Monte Carlo and PERT.
Summary

• Project managers make decisions based on the answer to a fundamental question: "What will be the duration and cost of the project given its multiple risks and uncertainties?"

• PERT is an easy-to-use analytical method, but it has a number of major limitations.

• Monte Carlo analysis is a straightforward approach to deal with project uncertainties. Monte Carlo can be used effectively if accurate historical data is available, project tracking is performed, and trained project-specific experts are involved.

References
Braid, D. 2006. "Project delay talk alarms mayor." Calgary Herald, August 29, 2006.
Craven, J.P. 2001. The Silent War: The Cold War Battle Beneath the Sea. New York: Simon & Schuster.
Klastorin, T. 2003. Project Management: Tools and Trade-Offs. Wiley.
Ulam, S.M. 1991. Adventures of a Mathematician. University of California Press.
Williams, T. 2004. "Why Monte Carlo simulations of project networks can mislead." Project Management Journal, September 2004, pp. 53-61.
