Project management software selection using analytical hierarchy process.
Sutterfield, J. S.; Swirsky, Steven; Ngassam, Christopher
INTRODUCTION
Decision Analysis involving multiple variables and objectives that
can be quantified is rather commonplace. The methods for solving such
problems are rather well established. One simply quantifies with some
measure the variables and objectives involved in the problem, then
chooses the appropriate solution methodology, obtains the necessary data
and calculates an answer. However, with problems involving variables and
objectives that cannot be measured, or at best can be only partly
measured, the solution approach is not always so clear. This is
particularly true when the variables and objectives involve personal
preferences. The approach frequently taken with such problems is to
simply prioritize the decision considerations and try to choose a
solution that maximizes the desired decision quantities at minimum cost.
Although this may not be too difficult with a limited number of decision
quantities, it can become very difficult when the number of such
quantities is large. In addition, the problem becomes vastly more
complex when some of the decision quantities are in mutual conflict.
Thus, making rational decisions under such circumstances may become
extraordinarily difficult.
A number of techniques are available for arriving at decisions
having multi-attributes. Unfortunately, most of them require that the
attributes be measurable. When the attributes are of a more qualitative
nature, the multi-attribute problem becomes much more difficult to
handle. Herein lies the value and power of Analytical Hierarchy Process
(AHP). With AHP it is possible to give a qualitative type problem a
quasi-quantitative structure, and to arrive at decisions by expressing
preferences for one attribute over another, and testing whether the
preferences are rationally consistent.
In this paper, we use AHP to analyze the selection process for
project management software (PMS). We approach this problem by treating
the PMS features offered by various companies as attributes. We then
have a group of PM professionals evaluate the various features as to
their importance and desirability. From these evaluations, paired
comparison matrices are developed. Next, a set of matrices is developed
for each software provider that evaluate just how well each
provider's software satisfies each attribute. A consistency ratio
is then computed to determine how rigorously rational consistency has
been maintained in the analysis. Ordinarily, an AHP analysis would end
here, but we extend our analysis by using the Student's
"t" test for small sample sizes to determine a confidence
interval within which the responses should lie. This analysis fills a
void in the literature by demonstrating a fully generalized process
for solving a practical multi-attribute decision problem, and for
arriving at a high level of confidence that a rational decision has
been made.
LITERATURE REVIEW
AHP was originally conceived by Thomas L. Saaty as a structured
method for solving problems involving decision variables or decision
attributes, at least some of which are qualitative and cannot be
directly measured (Saaty, 1980). It met with almost immediate acceptance
and was applied to a wide range of problems. Very soon it began to be
applied to executive decisions involving conflicts in stakeholder requirements and strategic planning (Saaty, 1982; Arbel & Orgler,
1990; Uzoka, 2005). The real power of AHP consists in its use of fairly
elementary mathematics to structure complex problems in which decisions
involve numerous decision makers, and multiple decision variables.
Another facet of the power of the AHP approach consists in its ability
to impose a quasi-quantitative character on decision problems in which
the decision variables are not necessarily quantitative. The power and
versatility of AHP are demonstrated by the wide range of problems to
which the approach has been applied. It was used early for such problems
as the justification of flexible manufacturing systems (Canada &
Sullivan, 1989), and continues to be used in such applications (Chan
& Abhary, 1996; Chandra & Kodali, 1998; Albayrakoglu, 1996). It
has been used in such widely different applications as business crisis
management (Lee & Harrald, 1999) and pavement maintenance
(Ramadhan, Wahab & Duffuaa, 1999). Other interesting applications of
AHP include the evaluation of personnel during the hiring process
(Taylor, Ketcham & Hoffman, 1998), determination of investor
suitability in structuring capital investment partnerships (Bolster, Janjigian & Trahan, 1995), apportioning public sector funds
where numerous projects usually compete for limited resources
(Barbarosoglu & Pinhas, 1995), and determination of real estate
underwriting factors in the underwriting industry (Norris & Nelson,
1992). In the areas of accounting and finance, AHP has seen increasing
use helping to direct the limited resources of auditors to their most
effective and efficient use (Bagranoff, 1989; Arrington, Hillison &
Jensen, 1984), in the detection of management fraud (Webber, 2001;
Deshmukh & Millet, 1998), and in the prediction of bankruptcy (Park
& Han, 2002). Thus, AHP is a very powerful, versatile and
generalized approach for analyzing multi-attribute decision problems in
which the decision considerations do not necessarily need to be directly
measurable. Although AHP has been used in a very wide range of
applications, this literature search has not disclosed any in which it
has been used for project management software selection.
AHP METHODOLOGY
As noted above, the procedure for using Analytical Hierarchy
Process is well established. It is a tribute to Dr. Saaty that his
original work, done more than a quarter century ago, remains virtually
unmodified. Thus, the present work will follow rather closely Dr.
Saaty's original approach, which involves the following
steps:
1) A clear concise description of the decision objective. In
practice, this is probably best done by a team of 4 to 6 people who have
good knowledge of the objective to be achieved, and who have a stake in
arriving at the best possible decision.
2) Identification of those attributes that are to be included in
arriving at the desired objective defined in step 1. These attributes
are also best identified by a team of 4-6 people and preferably the same
team that is used to identify the objective of the analysis.
3) Determination of any sub-attributes upon which an attribute
might be based.
4) Identification of a set of alternatives that are thought to
achieve, at least partially, the desired objective. We say "at
least partially" because probably no alternative will completely
provide all desired attributes. However, the alternatives should be
selected because they provide some degree of satisfaction of all
attributes.
5) Once the attributes are identified, they are entered into a
preference matrix, and a preference or importance number is assigned to
reflect the preference for/importance of each attribute relative to all
others. The strength of preference/importance is indicated by assigning
a preference/importance number according to the following rating scale:
Preference Number for
Attribute A over Attribute B    Then Attribute A is ..... over (or to) Attribute B

9                               Absolutely more important or preferred
7                               Very strongly more important or preferred
5                               Strongly more important or preferred
3                               Weakly more important or preferred
1                               Equally important or preferred
Intermediate degrees of preference for attribute A over attribute B
are reflected by assigning even numbered ratings "8",
"6", "4" and "2" for one attribute over
another. For example, assigning a preference number of "6"
would indicate a preference for attribute "A" over attribute
"B" between "Strongly more important or preferred"
and "Very strongly more important or preferred." Also, the
logical principle of inverse inference is used, in that assigning
attribute "A" a preference rating of, say "8", over
attribute "B", would indicate that attribute "B"
was only "1/8" as important as attribute "A."
Either such a preference matrix is developed by each participant in the
process, or the participants arrive at agreement as to the preference
numbers for the attributes.
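The reciprocal structure just described can be sketched in code: given only the upper-triangle judgments on Saaty's scale, the rest of the matrix follows mechanically. This is a minimal illustration (the function and index layout are ours, not part of the original method's notation):

```python
# Build a full pairwise comparison matrix from upper-triangle judgments.
# judgments[(i, j)] holds the Saaty-scale preference of attribute i over j;
# the lower triangle is filled with reciprocals, the diagonal with 1.
def build_pairwise_matrix(n, judgments):
    matrix = [[1.0] * n for _ in range(n)]
    for (i, j), rating in judgments.items():
        matrix[i][j] = rating
        matrix[j][i] = 1.0 / rating  # inverse inference: B vs. A = 1 / (A vs. B)
    return matrix

# Example from the text: attribute A is rated "8" over attribute B,
# so B is automatically 1/8 as important as A.
m = build_pairwise_matrix(2, {(0, 1): 8})
```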
6) Mathematical operations are then used to normalize the
preference matrix and to obtain for it a principal or characteristic
vector.
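The normalization in step 6 can be sketched as follows: each column is divided by its column total, and the principal vector is approximated by averaging across each row. This is the same column-normalization/row-averaging procedure visible in the figures of this paper; the sketch below is ours and is an approximation, not a full eigenvalue computation:

```python
# Approximate the principal (characteristic) vector of a pairwise
# comparison matrix: normalize each column to sum to 1, then average
# across each row. Returns a weight vector that sums to 1.
def principal_vector(matrix):
    n = len(matrix)
    col_sums = [sum(row[j] for row in matrix) for j in range(n)]
    normalized = [[matrix[i][j] / col_sums[j] for j in range(n)]
                  for i in range(n)]
    return [sum(normalized[i]) / n for i in range(n)]

# Tiny 2x2 example: A is weakly preferred (3) over B.
weights = principal_vector([[1.0, 3.0], [1.0 / 3.0, 1.0]])
```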
7) Next, participants collectively rate each of the possible
approaches/options for satisfying the desired objective(s). This is done
along the same lines as shown above for attributes. The same rating
scale is used, except here the ratings reflect how well each possible
approach/option satisfies each of the attributes. For example, suppose
that Alternatives "1" and "2" were being compared as
to how well each satisfies Attribute "A" relative to the
other. If Alternative "1" were given a preference rating of,
say 5, relative to Alternative "2" it would mean that for
Attribute "A", Alternative "1" was thought to
satisfy it 5 times as well as Alternative "2". Or conversely,
Alternative "2" was believed to satisfy Attribute
"A" only "1/5" as well as did Alternative
"1". Thus, a set of preference matrices is developed, one
matrix for each attribute, that rates each alternative as to how well it
satisfies the given attribute relative to all other alternatives under
consideration.
8) Again, as with the attribute preference matrices, mathematical
operations are used to normalize each preference matrix for
alternatives, and to obtain a principal or eigenvector for each. When
complete, this provides a set of characteristic vectors comprised of one
for each attribute. Each of these vectors reflects just how well a given
alternative/option satisfies each attribute relative to other
alternatives.
9) The eigenvectors from "8" above are then cast
into yet another matrix, in which the number of rows equals the number
of alternatives/options, and the number of columns equals the number of
decision attributes. This matrix measures how well each alternative/option
satisfies each decision attribute.
10) Next, the matrix from "9" above is multiplied by the
characteristic vector from "6" above. The result of this
multiplication is a weighted rating for each alternative/option
indicating how well it satisfies each attribute. That alternative/option
with the greatest weighted score is generally the best choice for
satisfying the decision attributes, and thus achieving the desired
objective.
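Step 10 is a single matrix-vector multiplication. Using the attribute weights and company vectors that appear in Fig. 9 of this paper, the weighted ratings can be reproduced as a dot product per alternative (a sketch assuming the figure's rounded values):

```python
# Attribute weights A..F (characteristic vector from Fig. 9)
weights = [0.070, 0.116, 0.360, 0.128, 0.069, 0.256]

# How well each company satisfies attributes A..F (rows of Fig. 9)
ratings = {
    "Company 1": [0.315, 0.423, 0.808, 0.164, 0.400, 0.074],
    "Company 2": [0.602, 0.484, 0.074, 0.297, 0.400, 0.643],
    "Company 3": [0.082, 0.093, 0.118, 0.539, 0.200, 0.283],
}

# Weighted rating = dot product of a company's ratings with the weights
scores = {name: sum(r * w for r, w in zip(row, weights))
          for name, row in ratings.items()}
best = max(scores, key=scores.get)
```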
11) Last, it is necessary to calculate a consistency ratio (C. R.)
to ensure that the preference choices in the problem have been made with
logical coherence. The procedure for calculating the C. R. may be found
in a number of works on AHP (Canada & Sullivan, 1989) and will not
be repeated here. According to Saaty, if choices have been made
consistently the C. R. should not exceed 0.10. If it does, it then
becomes necessary to refine the analysis by having the participants
revise their preferences and re-computing all of the above. Thus, the
entire process is repeated until the C. R. is equal to or less than
0.10.
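Although the full derivation of the C. R. is left to the cited works, the standard calculation can be sketched: compute A·w for the weight vector w, average the ratios (A·w)i/wi to obtain lambda-max, then C.I. = (lambda-max − n)/(n − 1) and C.R. = C.I./R.I., where R.I. is Saaty's random index for a matrix of order n. Applied to the averaged attribute matrix shown in Figs. 1 and 2 of this paper, this reproduces the reported C.R. of 0.05:

```python
# Saaty's random index (R.I.) by matrix order n
RANDOM_INDEX = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}

def consistency_ratio(matrix):
    n = len(matrix)
    # Principal vector via column normalization and row averaging
    col_sums = [sum(row[j] for row in matrix) for j in range(n)]
    w = [sum(matrix[i][j] / col_sums[j] for j in range(n)) / n
         for i in range(n)]
    # lambda_max = mean of (A.w)_i / w_i
    aw = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(aw[i] / w[i] for i in range(n)) / n
    ci = (lam - n) / (n - 1)
    return ci / RANDOM_INDEX[n]

# Averaged attribute preference matrix (Figs. 1 and 2)
attr = [
    [1.0, 1.0, 0.200, 0.500, 1.0, 0.200],
    [1.0, 1.0, 0.250, 1.000, 1.0, 1.000],
    [5.0, 4.0, 1.000, 4.000, 5.0, 1.000],
    [2.0, 1.0, 0.250, 1.000, 3.0, 0.500],
    [1.0, 1.0, 0.200, 0.333, 1.0, 0.250],
    [5.0, 1.0, 1.000, 2.000, 4.0, 1.000],
]
cr = consistency_ratio(attr)  # within Saaty's 0.10 threshold
```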
12) Having completed steps "1" thru "11", that
alternative/option with the greatest weighted score is selected as best
satisfying the decision attributes, and thus achieving the desired
objective.
APPLICATION OF AHP METHODOLOGY
The usual practice for developing an AHP analysis is to develop a
separate set of matrices for each of the participants, or to have
participants brainstorm until they arrive at a group decision for the
required matrices. However, in the instant case, time and distance made
the usual approach impossible. Consequently, the authors have done the
next best thing and have averaged participants' ratings. The
average ratings are used in the subsequent analysis, and the sensitivity
analysis done on the averaged ratings would be analogous to reconvening
the participants in order to refine the ratings and thus arrive at a
satisfactory C. R. The results of this are shown in Figures 1 thru 9
below. The codes for the Software Features and Companies
(alternatives/options) used in the figures immediately follow:
Software Feature                     Code        Company
Multiple Networks Supported           A             1
Simultaneous Multiple User Access     B             2
Full Critical Path                    C             3
Time/Cost                             D
Multi-project PERT Chart              E
Resources Used                        F
ANALYSIS OF RESULTS
As indicated above, the attribute preference matrices for
participants were averaged for each attribute, so that the final ratings
shown in Fig. 1 are the result of an attribute by attribute average,
rather than a matrix by matrix average.
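The attribute-by-attribute average described here amounts to an element-wise mean of the participants' preference matrices. A minimal sketch (the two participant matrices below are illustrative, not the study's data):

```python
# Average several participants' pairwise matrices element by element,
# i.e., attribute-by-attribute rather than matrix-by-matrix.
def average_matrices(matrices):
    n = len(matrices[0])
    k = len(matrices)
    return [[sum(m[i][j] for m in matrices) / k for j in range(n)]
            for i in range(n)]

# Two illustrative 2x2 participant matrices
p1 = [[1.0, 3.0], [1.0 / 3.0, 1.0]]
p2 = [[1.0, 5.0], [0.2, 1.0]]
avg = average_matrices([p1, p2])
```

Note that an arithmetic mean of entries does not, in general, preserve the reciprocal property of the individual matrices, which is one reason the averaged matrix must itself be re-checked for consistency.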
[FIGURE 1 OMITTED]
Fig. 2 below shows the normalized group preference matrix resulting
from normalizing the averaged values in Fig. 1. The characteristic
vector for this matrix is shown in the rightmost column.
As can be seen in the last row of Fig. 2, the C. R. for these
averaged attributes is 0.05, a value well within that prescribed by
Saaty for logical coherence. This result is very interesting because
only one of the participants achieved a C. R. of less than 0.10, this
being a C. R. of 0.09. The other six had C. R.s ranging from 0.140 to
0.378. This means that the group as a whole was more consistent in
rating the attributes than was each individual. Since the averaged
ratings resulted in a C. R. of less than 0.10, it was unnecessary at
this point to perform sensitivity analysis on the averaged attribute
ratings. However, had sensitivity analysis been necessary, each
individual rating would have been adjusted slightly upward or downward
until a C. R. of equal to or less than 0.10 were obtained. Since only
one of the participants had a C. R. less than 0.10, the averaging
process was analogous to calling the participants into caucus and having
them arrive at mutually agreeable ratings for all attributes. The
conclusion to be drawn from this is that a group of individuals will
frequently make more rational choices than will a single individual. As
a practical matter, it would seem to be more expeditious to first obtain
the individual attribute preference matrices, average them and then call
participants together to negotiate more consistent ratings.
Alternatively, each individual would have to revise his individual
results and have a new C. R. computed. This process would then have to
be continued until all participants had individually arrived at C. R.s
of equal to or less than 0.10. The former approach of averaging and
caucusing employs a Delphi approach to decision making, which has been
shown usually to lead to better decision outcomes than decisions made in
isolation.
Ordinarily, selection of the options/alternatives (in our case
software providers) for satisfying the above attributes would also be
made by the primary participants. Each participant would rate each
option/alternative as to how well it satisfied each attribute. However,
in the present case this part was done by the authors based on an
analysis of several project management software products and their
common and unique features. Those ratings, as with the attribute
preference ratings, were then cast into matrices and C. R.s were
calculated for each software provider. The final results for this are
shown in Figs. 3 thru 8.
The results from this part of the process, as can be seen, turned
out quite well. Only the matrix for the third attribute, "Full
Critical Path," produced a C. R. greater than 0.10. This
C. R. was originally 0.54, a value well above the allowable. Sensitivity
analysis was performed on this matrix until the present C. R. of 0.05
was obtained. The remainder of the C.R.s were well below 0.10, as can be
seen. Remarkably, the C. R. for "Multi-project PERT Chart"
turned out to be 0.000, indicating perfect consistency among the
choices. An interesting problem arose for the "Full Critical
Path" attribute in that neither Company 2 nor 3 offered this
feature. It would be expected that a company would receive a rating of
"0" for failure to provide a feature. However, this would have
resulted in Company 1 being infinitely preferred over Companies 2 and 3.
This would have been inconsistent with the rating scale, which only
ranges from "1" to "9". To address this problem, a
preference rating of "9" was assigned for Company 1 over
Companies 2 and 3. This resulted in ratings of 1/9 or 0.11 being
assigned as to the preference for Companies 2 and 3 over Company 1.
Thus, the final values for these ratings turned out near "0",
though not exactly "0". Once all matrices were assured to have
a C. R. of equal to or less than 0.10, their values were cast into a
final matrix as shown in Fig. 9 below.
In this matrix, the top row of six numbers is the principal or
characteristic vector from the attribute preference matrix. Each of the
six columns of three numbers under this top row contains the principal
vector entries for Companies 1, 2 and 3 as obtained from the analysis
described in the immediately preceding paragraph. The
rightmost column titled "Weighted Alternative Evaluations"
contains the results of this analysis. It will be seen that Company 1
best satisfies the original attributes in that it has a rating of 0.43.
Companies 2 and 3 follow in order, with ratings of 0.36 and 0.21,
respectively. These ratings are quite stable. Sensitivity analysis was
done on these results by varying the ratings in the original attribute
preference matrix. However, these results remained virtually unchanged.
Company 1 would then be selected as best satisfying the desired
attributes.
In order to test for the consistency of the participant attribute
preferences, the C. R.s were subjected to a Student's "t"
test to obtain a confidence interval. The "t" test is used for
applications in which the sample size is smaller than about twenty, the
samples reasonably can be assumed to be normally distributed and the
distribution of the sample standard deviation is independent of the
distribution of the sample values. Having made these assumptions, we
obtained a mean C.R. of y-bar = 0.20 with a sample standard deviation of
0.11 (the individual C.R. values are tabulated at the end of the paper).
Next a "t" test is performed with six degrees of freedom
and a confidence interval of 95%. This yields theoretical lower and
upper confidence limits of -0.07 and 0.47. Since a C.R. of zero means
perfectly consistent choices, a C.R. of less than zero is impossible.
This means that the practical value of the lower confidence limit for
this application is zero. Then for a 95% confidence level, all of the
C.R. values for our PMs should lie within a range from 0.00 to 0.47. As
can be seen from the above C.R.s, all do indeed lie within this
confidence interval as obtained from the "t" test. Thus, the
responses of all PMs were found to be highly consistent both from the
standpoint of the C.R. obtained from the averaged attribute responses,
and the confidence interval obtained from the "t"
distribution. A decision maker having the foregoing analysis could thus
conclude that Software Provider #1 would provide the best software
package for the application envisioned.
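The interval reported above can be reproduced from the tabulated summary statistics (y-bar = 0.20, s = 0.11, n = 7). Note that the paper bounds individual C.R. values with y-bar ± t·s, rather than forming a confidence interval for the mean (y-bar ± t·s/√n); the sketch below follows the paper's calculation, with the two-tailed critical value t(0.025, 6) ≈ 2.447 hard-coded rather than looked up:

```python
# Reproduce the paper's "t" interval for individual C.R. values.
T_CRITICAL = 2.447      # two-tailed t, 95% confidence, 6 degrees of freedom
y_bar, s = 0.20, 0.11   # mean and sample std. dev. of the seven C.R.s

lower = y_bar - T_CRITICAL * s  # theoretical lower limit, ~ -0.07
upper = y_bar + T_CRITICAL * s  # theoretical upper limit, ~  0.47

# A C.R. below zero is impossible (zero means perfect consistency),
# so the practical lower limit is clamped to zero.
practical_lower = max(lower, 0.0)
```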
CONCLUSIONS
The thesis of this paper has been that Analytical Hierarchy Process
offers a very powerful, flexible and general approach to decision
situations in which the decision variables are not necessarily
quantifiable. Although it was not possible to quantify any of the
attributes in the above example, it was, nonetheless, possible
rationally express preferences for one above another. We say rationally
because a decision maker can always express a preference for one
attribute over another once a definite objective is established. In this
case one preference is more desirable than another in so far as it
better satisfies or achieves the objective(s), or vice versa. We have
shown in the foregoing analysis that even though none of the C.R.s for a
group of respondents may satisfy the 0.1 criterion specified by Saaty,
this obstacle may be overcome by averaging the responses for each
attribute. We have demonstrated how to handle the problem in which a
given provider does not offer one or more of the desired
attributes/features. We have also shown that once the C.R.s for all
matrices have been brought to values equal to or less than 0.10 through
sensitivity analysis, the results are remarkably stable. They are
relatively insensitive to fairly large
changes in the original attribute preferences. We tested the consistency
of the responses using the Student's "t" distribution and
found that they can be placed within a 95% confidence interval, which
indicates a very good degree of consistency among PM respondents.
Finally, in using AHP in the problem of software selection, we have
demonstrated the great power and flexibility of this little-known
process in solving a very wide range of practical decision problems
which are otherwise virtually intractable.
REFERENCES
Albayrakoglu, M. Murat, (1996). Justification of New Manufacturing
Technology: A Strategic approach Using the Analytical Hierarchy Process,
Production and Inventory Management Journal, Alexandria: first Quarter
1996, 37(1), 71-77.
Arbel, A. & Y.E. Orgler, (1990). An Application of the AHP to
Bank Strategic Planning: The Mergers and Acquisitions Process, European
Journal of Operational Research, 48(1), 27-37.
Arrington, C.E., W. Hillison & R.E. Jensen, (1984). An
Application of Analytical Hierarchy Process to Model Expert Judgments on
Analytical Review Procedures, Journal of Accounting Research, 22(1),
298-312.
Bagranoff, N.A., (1989). Using an Analytic Hierarchy Approach to
Design Internal Control Systems, Journal of Auditing and EDP, 37-41.
Barbarosoglu, Gulay, & David Pinhas, (1995). Capital Rationing in the Public Sector Using the Analytical Hierarchy Process, The
Engineering Economist, Norcross, Summer, 40(4), 315-342
Bolster, Paul J., Vahan Janjigian & Emery A. Trahan, (1995).
Determining Investor Suitability Using the Analytical Hierarchy Process,
Financial Analysts Journal, Charlottesville, July/Aug., 51(4), 63-76
Canada, John R. & William G. Sullivan, (1989). Economic and
Multiattribute Evaluation of Advanced Manufacturing Systems,
Prentice-Hall, Englewood Cliffs, New Jersey, Ch. 10, 268-279.
Chan, Felix T. S. & K. Abhary, (1996). Design and Evaluation of
Automated Cellular Manufacturing Systems with Simulation Modeling and
the AHP Approach: A Case Study, Integrated Manufacturing Systems,
Bradford: 7(6), 39.
Chandra, S. & Rambadu Kodali, (1998). Justification of
Just-in-Time Manufacturing Systems for Indian Industries, Integrated
Manufacturing Systems, Bradford: 9(5), 314.
Deshmukh, Ashutosh & Ido Millet, (1998). An Analytic Hierarchy
Process Approach to Assessing the Risk of Management Fraud, Journal of
Applied Business Research, 15(1), 87-102.
Lee, Young-Jai & John R. Harrald, (1999). Critical Issue for
Business Area Impact Analysis in Business Crisis Management: Analytical
Capability, Disaster Prevention and Management, Bradford: 8(3), 184.
Norris, Daniel M. & Mark Nelson, (1992). Real Estate Loan
Underwriting Factors in the Insurance Industry, Real Estate Finance, New
York, Fall, 9(3), 79.
Park, Cheol-Soo & Ingoo Han, (2002). A Case-based Reasoning
with the Feature Weights derived by Analytic Hierarchy Process for
Bankruptcy Prediction, Expert Systems with Applications, 23(3), 255-264.
Ramadhan, Rezquallah H., Hamad I. Al-Abdul Wahab & Salih O.
Duffuaa, (1999). The Use of an Analytical Hierarchy Process in Pavement
Maintenance Priority Ranking, Journal of Quality in Maintenance
Engineering, Bradford: 5(1), 25
Saaty, Thomas L. (1980). The Analytical Hierarchy Process, New
York: McGraw-Hill.
Saaty, Thomas L. (1982). Decision Making for Leaders, Belmont, CA:
Wadsworth Publishing Company, Inc.
Taylor, Frank A. III, Allen F. Ketcham & Darvin Hoffman,
(1998). Personnel Evaluation with AHP, Management Decision, London,
36(10), 679
Uzoka, Faith M.E. (2005). Analytical Hierarchy Process-Based System
for Strategic Evaluation of Financial Information, Information Knowledge
Systems Management, 5(1), 49-61.
Webber, Sally. (2001). The Relative Importance of Management Fraud
Risk Factors, Behavioral Research in Accounting, 1(1), 1-10.
J. S. Sutterfield, Florida A&M University
Steven Swirsky, Florida A&M University
Christopher Ngassam, Florida A&M University
PM C.R.
1 0.14
2 0.15
3 0.14
4 0.20
5 0.09
6 0.38
7 0.32
y-bar = 0.20
sample standard deviation = 0.11
Fig 2: Normalized group attribute preference matrix

Averaged preference matrix:
       A      B      C      D      E      F
A   1.000  1.000  0.200  0.500  1.000  0.200
B   1.000  1.000  0.250  1.000  1.000  1.000
C   5.000  4.000  1.000  4.000  5.000  1.000
D   2.000  1.000  0.250  1.000  3.000  0.500
E   1.000  1.000  0.200  0.333  1.000  0.250
F   5.000  1.000  1.000  2.000  4.000  1.000

Decimal equivalents (each column normalized to total 1.000):
       A      B      C      D      E      F    Row sum  Row average
A   0.067  0.111  0.069  0.057  0.067  0.051    0.421      0.070
B   0.067  0.111  0.086  0.113  0.067  0.253    0.697      0.116
C   0.333  0.444  0.345  0.453  0.333  0.253    2.162      0.360
D   0.133  0.111  0.086  0.113  0.200  0.127    0.770      0.128
E   0.067  0.111  0.069  0.038  0.067  0.063    0.414      0.069
F   0.333  0.111  0.345  0.226  0.267  0.253    1.536      0.256
    1.000  1.000  1.000  1.000  1.000  1.000               1.000

Consistency check (ratios of the elements of A x w to those of w):
D = 6.317  6.284  6.353  6.320  6.285  6.305
Lambda max = 6.310   C.I. = 0.0621   C.R. = 0.050
Figure 3: Multiple networks supported

              P      Q      R
Company 1  1.000  0.500  4.000
Company 2  2.000  1.000  7.000
Company 3  0.250  0.143  1.000

Decimal equivalents (each column normalized to total 1.000):
              P      Q      R    Row sum  Row average
Company 1  0.308  0.304  0.333    0.945      0.315
Company 2  0.615  0.609  0.583    1.807      0.602
Company 3  0.077  0.087  0.083    0.247      0.082
Totals     1.000  1.000  1.000               1.000

Consistency check (A x [0.315, 0.602, 0.082] = [0.946, 1.810, 0.247]):
D = 3.002  3.004  3.001
Lambda max = 3.0023   C.I. = 0.0012   C.R. = 0.002
Figure 4: Simultaneous multiple user access

              P      Q      R
Company 1  1.000  1.000  4.000
Company 2  1.000  1.000  6.000
Company 3  0.250  0.167  1.000

Decimal equivalents (each column normalized to total 1.000):
              P      Q      R    Row sum  Row average
Company 1  0.444  0.461  0.364    1.270      0.423
Company 2  0.444  0.461  0.545    1.451      0.484
Company 3  0.111  0.077  0.091    0.279      0.093
Totals     1.000  1.000  1.000               1.000

Consistency check (A x [0.423, 0.484, 0.093] = [1.279, 1.465, 0.280]):
D = 3.023  3.028  3.006
Lambda max = 3.0189   C.I. = 0.0095   C.R. = 0.019
Figure 5: Full critical path

              P      Q      R
Company 1  1.000  9.000  9.000
Company 2  0.111  1.000  0.500
Company 3  0.111  2.000  1.000

Decimal equivalents (each column normalized to total 1.000):
              P      Q      R    Row sum  Row average
Company 1  0.818  0.750  0.857    2.425      0.808
Company 2  0.091  0.083  0.048    0.222      0.074
Company 3  0.091  0.167  0.095    0.353      0.118
Totals     1.000  1.000  1.000               1.000

Consistency check (A x [0.808, 0.074, 0.118] = [2.532, 0.222, 0.355]):
D = 3.132  3.009  3.021
Lambda max = 3.0539   C.I. = 0.027   C.R. = 0.054
Figure 6: Time/cost trade-off

              P      Q      R
Company 1  1.000  0.500  0.333
Company 2  2.000  1.000  0.500
Company 3  3.000  2.000  1.000

Decimal equivalents (each column normalized to total 1.000):
              P      Q      R    Row sum  Row average
Company 1  0.167  0.143  0.182    0.491      0.164
Company 2  0.333  0.286  0.273    0.892      0.297
Company 3  0.500  0.571  0.546    1.617      0.539
Totals     1.000  1.000  1.000               1.000

Consistency check (A x [0.164, 0.297, 0.539] = [0.492, 0.894, 1.625]):
D = 3.004  3.008  3.014
Lambda max = 3.0088   C.I. = 0.004   C.R. = 0.009
Figure 7: Multi-project PERT chart

              P      Q      R
Company 1  1.000  1.000  2.000
Company 2  1.000  1.000  2.000
Company 3  0.500  0.500  1.000

Decimal equivalents (each column normalized to total 1.000):
              P      Q      R    Row sum  Row average
Company 1  0.400  0.400  0.400    1.200      0.400
Company 2  0.400  0.400  0.400    1.200      0.400
Company 3  0.200  0.200  0.200    0.600      0.200
Totals     1.000  1.000  1.000               1.000

Consistency check (A x [0.400, 0.400, 0.200] = [1.200, 1.200, 0.600]):
D = 3.000  3.000  3.000
Lambda max = 3.000   C.I. = 0   C.R. = 0.000
Figure 8: Resources used

              P      Q      R
Company 1  1.000  0.143  0.200
Company 2  7.000  1.000  3.000
Company 3  5.000  0.333  1.000

Decimal equivalents (each column normalized to total 1.000):
              P      Q      R    Row sum  Row average
Company 1  0.077  0.097  0.048    0.221      0.074
Company 2  0.538  0.678  0.714    1.930      0.643
Company 3  0.385  0.226  0.238    0.848      0.283
Totals     1.000  1.000  1.000               1.000

Consistency check (A x [0.074, 0.643, 0.283] = [0.222, 2.008, 0.866]):
D = 3.013  3.121  3.063
Lambda max = 3.0657   C.I. = 0.0328   C.R. = 0.066
Figure 9: Weighted alternative evaluations

                                                       Weighted
               A      B      C      D      E      F    alternative
Weights:    0.070  0.116  0.360  0.128  0.069  0.256   evaluations
Company 1   0.315  0.423  0.808  0.164  0.400  0.074     0.430
Company 2   0.602  0.484  0.074  0.297  0.400  0.643     0.355
Company 3   0.082  0.093  0.118  0.539  0.200  0.283     0.215
                                              Total =    1.000