Benchmarking project level engineering productivity/Inzinerines veiklos nasumo lyginimas projektu lygmeniu.
Liao, Pin-Chao; Thomas, Stephen R.; O'Brien, William J., et al.
1. Introduction
Reliable engineering productivity measurement is essential for
monitoring engineering performance and assessing whether change in the
current engineering approach is warranted (Shouke et al. 2010). Although
engineering costs have increased in recent years, in some cases reaching
as much as 20% of the total project cost, engineering productivity
remains less understood than construction labor productivity (Kim 2007).
One reason for this lack of understanding is the difficulty of measuring
engineering productivity. Compared to construction, engineering produces
many intangible outputs (models, specifications, etc.), making it even
more difficult to assess and track.
1.1. Measuring engineering productivity
While challenges remain, there have been many attempts to define
engineering productivity in various ways: hours per drawing (Chang, Ibbs
2006; Thomas et al. 1999), hours per engineered element (Song 2004;
Song, AbouRizk 2005), and hours per engineered quantity (CII 2001, 2004;
Kim 2007). The Construction Industry Institute (CII) suggests that
engineering productivity measured in hours per engineered quantity
(quantity-based measures) is superior to the other approaches due to its
direct relationship to engineering activities and also because it
requires less subjective manipulation of intermediate deliverables. As
an added advantage, it is based upon similar definitions to construction
productivity (CII 2001).
CII, an industry and academic consortium, worked closely with its
industry members to develop a standardized system, the Engineering
Productivity Metric System (EPMS) for benchmarking engineering
productivity (Kim 2007). The EPMS uses engineering productivity metrics
with standardized definitions developed through consensus reached
between industry and academia. It broadly incorporates input from
numerous workshops and other industry forums. Many CII engineering firms
now employ the EPMS for benchmarking, allowing them to better understand
their competitive position and improve performance.
1.2. The engineering productivity metric system (EPMS)
The EPMS defines engineering productivity as the ratio of direct
engineering work-hours to issued for construction (IFC) quantities.
While it does not include all project engineering work-hours, the EPMS
covers six major disciplines (concrete, steel, electrical, piping,
instrumentation, and equipment), which account for the majority of
project engineering work-hours. The system tracks engineering
productivity at three levels using a hierarchical metric structure shown
in Fig. 1: Level II (discipline), Level III (sub-category), and Level IV
(element). As noted previously, the current EPMS does not provide a
means of aggregating measurement to the project level, which would be a
Level I metric.
[FIGURE 1 OMITTED]
In the hierarchical EPMS, each discipline level metric is composed
of underlying sub-category and element metrics. For instance, the
discipline level metric "total concrete" includes three
sub-category level metrics for "foundations",
"slabs", and "concrete structures." The sub-category
metrics are further divided into element metrics such as foundations
">= 5 cubic yards" and foundations "< 5 cubic
yards". The major advantage of a hierarchical EPMS is that
engineering productivity data can be collected flexibly at various
levels of detail and can be aggregated from the element level to the
discipline level because the underlying metrics share identical units of
measurement (Kim 2007). Concrete metrics, for instance, are all measured in hours per
cubic yard. In order to make engineering productivity comparable across
different organizations, standard metric definitions for engineering
work-hours and quantities were established (Kim 2007).
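Because all metrics within a discipline share one unit, the roll-up from element to discipline level reduces to summing hours and quantities. The sketch below illustrates this rule; the element names mirror the concrete examples above, but the numeric figures are hypothetical:

```python
# Hierarchical roll-up in the style of the EPMS (hypothetical figures).
# Element-level concrete metrics share one unit (work-hours per cubic yard),
# so they aggregate to the discipline level by summing hours and quantities.

elements = {
    "foundations >= 5 CY": {"work_hours": 1200.0, "quantity_cy": 400.0},
    "foundations < 5 CY":  {"work_hours": 500.0,  "quantity_cy": 100.0},
    "slabs":               {"work_hours": 800.0,  "quantity_cy": 320.0},
}

total_hours = sum(e["work_hours"] for e in elements.values())
total_qty = sum(e["quantity_cy"] for e in elements.values())

# Discipline-level productivity: total hours / total IFC quantity (Wk-Hr/CY).
discipline_productivity = total_hours / total_qty
print(round(discipline_productivity, 3))  # 3.049
```

Note that the discipline value is the quantity-weighted result of the element metrics, not a simple average of their ratios.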
Discipline level metrics include all hours directly charged to the
discipline for engineering deliverables, including site investigations,
meetings, planning, constructability activities, and requests for
information (RFIs). For concrete, this includes all engineering hours
for embedments for slabs, foundations, and concrete structures.
Engineering hours and quantities for piling are not included.
The EPMS collects productivity data via a secure web-based system
developed by CII for its member companies. Using this system,
organizations can input their engineering productivity data and submit
it to the EPMS system for validation and benchmarking after attending
and completing training in use of the online system.
The EPMS provides a major breakthrough for benchmarking engineering
productivity; however, given the different units of the various
disciplines, it is challenging to summarize metrics from the
discipline level to the project level (Level I). Lacking a project level
engineering productivity metric (PEPM), it is impossible to assess
overall engineering productivity performance; an issue frequently noted
in the feedback from CII companies. Furthermore, the lack of a PEPM
hinders analyses of the relationships among engineering productivity,
overall performance at the project level, and performance improving best
practices.
1.3. Historical background and development of the PEPM
Although many approaches have been adopted for performance
evaluation in various industry sectors such as software, manufacturing,
and construction (Bang et al.
2010; Benestad et al. 2009; de Aquino, de Lemos Meira 2009; Fleming et
al. 2010; Issa et al. 2009; Niu, Dartnall 2010; Ren 2009; Yang, Paradi
2009), there are limited suitable approaches for the development of a
project level engineering productivity metric. The Project-Level
Productivity (PLP) index was developed by Ellis and Lee (2006) for
monitoring multidiscipline daily labor productivity. They employed an
equivalent work unit (EWU) to standardize and aggregate the outputs of
different construction crafts. Nonetheless, this approach standardized
installed quantities of different crafts without considering their
variations. Therefore, such an approach may penalize the productivity of
some crafts and lose precision in the assessment of project level
productivity.
A general approach which first standardizes metrics individually
and then aggregates them caught the attention of numerous researchers.
Standardization generally involves rescaling the variables or removing
their variations for aggregation. Maloney and McFillen (1995)
standardized job characteristics with different scales into a range from
0 to 1 in order to make comparisons. The z statistic (z-score) is
another common method to standardize variables with consideration for
both sample mean and standard deviation (Agresti, Finlay 1999). The
standardized metrics are then aggregated by applying a weighted sum
of the underlying metrics to develop a summarized measure. For example, Ibbs
(2005) suggests a cumulative project productivity metric: the sum product
of change-unimpacted and change-impacted productivity and their
corresponding work hours, divided by total work hours (see Eq. 1).
These methods proposed feasible solutions to either standardization
of different variables or aggregation of variables with the same
measures; however, none of them provided a complete approach considering
both aspects. Therefore, this study developed an approach addressing
both standardization of the different variables and their aggregation to
summarize engineering productivity at the project level.
$$\text{end-of-project productivity} = \frac{PROD_{Unimpacted} \cdot WH_{Unimpacted} + PROD_{Impacted} \cdot WH_{Impacted}}{WH_{Unimpacted} + WH_{Impacted}}. \quad (1)$$
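Ibbs's cumulative productivity metric is a work-hour-weighted average, which can be sketched directly; the numeric values below are hypothetical, chosen so that change-impacted work consumes more hours per unit:

```python
# Sketch of Ibbs's (2005) end-of-project productivity (Eq. 1):
# a work-hour-weighted average of change-unimpacted and change-impacted
# productivity (both expressed in work-hours per unit of output).

def end_of_project_productivity(prod_unimpacted, wh_unimpacted,
                                prod_impacted, wh_impacted):
    """Weight each productivity value by its corresponding work hours."""
    total_wh = wh_unimpacted + wh_impacted
    return (prod_unimpacted * wh_unimpacted +
            prod_impacted * wh_impacted) / total_wh

# Hypothetical project: 8000 unimpacted hours at 2.0 hr/unit,
# 2000 change-impacted hours at 3.5 hr/unit.
print(end_of_project_productivity(2.0, 8000, 3.5, 2000))  # 2.3
```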
2. Methodology
The CII Productivity Metrics (PM) team established criteria for the
desired PEPM. These criteria required that any metric developed would
have to first be comprehensible; it would also have to satisfy a
condition termed homogeneity, and finally, it would have to be suitable
for trending. To be comprehensible the PEPM should be readily
interpretable by both industry and academia. Homogeneity refers to the
accuracy of the PEPM for summarizing the underlying engineering
productivity. Finally, to be useful for trending, the PEPM must be
capable of being measured and reported over time.
Initially, three candidate approaches for the aggregation of
engineering productivity of various disciplines were considered. Using
data collected via the EPMS, the three approaches were used to calculate
a PEPM and each PEPM was separately evaluated using the criteria above.
Characteristics of the three PEPMs were compared and assessed
systematically for comprehensibility, homogeneity, and trending ability.
In this process qualitative evaluations were converted into quantitative
weightings using the Analytic Hierarchy Process (AHP) for approach
selection. The approach with the highest overall score was considered
the most suitable and thus ultimately selected for the PEPM.
3. The EPMS database
A total of 112 heavy industrial projects with engineering
productivity data were submitted to the EPMS database from 2002 to 2008.
The total installed cost of all projects is US$ 4.5 billion. Table 1
presents the distribution of these projects by respondent type, project
type (process or non-process), project nature (addition, grass roots, or
modernization), and also project size.
Contractors submitted the majority of data with a total of 92
projects whereas owners submitted only 20. Based on the observation of
the PM team, the data disparity by respondent is primarily because
contractors are better staffed to track engineering productivity and
more readily have access to the data. All projects submitted were heavy
industrial projects which are further classified into two major
categories: process and non-process. Process projects include projects
such as chemical manufacturing, oil refining, pulp and paper and natural
gas processing projects. Non-process projects include power and
environmental remediation projects. This taxonomy was developed based on
Watermeyer's definition, which defined non-process projects as
those that yield products that cannot economically be stored (Watermeyer
2002). Process projects comprise the majority of the productivity
dataset with a total of 77, and the remaining 35 are non-process
projects. An analysis of project nature reveals that 37 are additions,
53 are modernizations, and 22 are grass roots. In accordance with CII
convention, a project with a budget greater than five million dollars is
categorized as a large project. Accordingly, 68 projects were
categorized as large projects (greater than five million dollars) and
the remaining 44 projects were categorized as small ones (less than five
million dollars).
A distribution of direct engineering work hours by discipline was
also produced and is presented in Fig. 2. The piping discipline accounts
for the majority of work hours with 45%, a substantially higher
percentage of the total hours than other disciplines. This distribution
may not be typical of most projects but is reasonable since these are
industrial construction projects.
4. Software used in data preparation and analyses
Data preparation is the essential foundation for effective data
analysis. In this research, engineering productivity data were first
collected and stored in a secure Microsoft SQL Server 2000[R] database.
Next, data tables were exported and saved as Microsoft Access[R] files
for ease of access and query. The data tables were further exported to
Microsoft Excel[R] because of its high compatibility with statistical
packages. Minitab[R] and SPSS[R] were utilized to perform data analyses.
5. Development of the approaches to calculate PEPM
Three approaches were proposed to calculate PEPM; these included
the earned-value method, the z-score method, and the max-min method
(Liao 2008). The earned-value method calculates project level
productivity with total actual work hours divided by the total predicted
(baseline) work hours of all disciplines. The z-score method converts
the raw engineering productivity metric of every discipline to a
dimensionless measure for further aggregation. Work hours were
subsequently employed to weight the z-scores of various disciplines
during aggregation to the project level. The max-min method first
standardizes discipline productivity metrics by subtracting the minimum
productivity value of the discipline and then dividing it by the range
of the metric (maximum-minimum). Similar to the z-score method, max-min
standardized discipline productivity metrics are weighted by their work
hours for aggregation.
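The two standardization schemes described above differ only in what they remove from the raw metric: the z-score method removes the mean and standard deviation, while the max-min method rescales by the observed range. A minimal sketch, using hypothetical discipline productivity values:

```python
import statistics

# Hypothetical raw discipline productivity values (Wk-Hr/unit).
values = [2.1, 3.4, 1.8, 4.0, 2.9]

# z-score method: subtract the mean and divide by the standard deviation,
# producing a dimensionless measure centered on zero.
mean = statistics.mean(values)
stdev = statistics.stdev(values)
z_scores = [(v - mean) / stdev for v in values]

# max-min method: subtract the minimum and divide by the range,
# rescaling the metric into the interval [0, 1].
lo, hi = min(values), max(values)
max_min = [(v - lo) / (hi - lo) for v in values]
```

Either standardized series can then be weighted by discipline work hours for aggregation, as the text describes.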
The authors worked closely with the PM team for evaluation of the
three approaches using the criteria previously presented. Applying the
Analytic Hierarchy Process (AHP), the z-score method was selected and
only this approach is presented in detail in this paper. The details of
the other approaches as well as the entire assessment and selection
procedure are documented in Liao (2008).
5.1. The Z-score method
The z-score approach uses a statistical procedure for
standardization which allows comparison of observations from different
normal distributions. It includes three main steps: transformation,
standardization, and aggregation of the underlying metrics.
5.1.1. Transformation
At the outset, the authors assessed the normality of data using the
quantile-quantile probability plot (Q-Q plot) to determine if
transformation was necessary before standardization. The Q-Q plot
provides a visual assessment of "goodness of fit" by comparing
the distribution of the data with what it should look like if it were
normally distributed. For instance, Fig. 3 demonstrates a Q-Q
plot for concrete engineering productivity (work hours per cubic yard,
Wk-Hr/CY). If the data were normally distributed, the data points should
reasonably fall along a straight line. As a further check, the mean
value is compared to the median for evidence of skew, as shown
in Table 2. Here the mean value exceeds the median by 2.77 (156%),
illustrating that the distribution of concrete engineering productivity
data is positively skewed. Similarly, engineering productivity metrics
for the other disciplines in the table are all positively skewed. This
finding coincides with results reported in previous
research (Zener 1968). Because the z-scores of raw engineering
productivity metrics are not normally distributed, the PEPM developed
from the linear combination of such z-scores can be misleading when
interpreting project level engineering productivity compared to the
grand mean.
[FIGURE 3 OMITTED]
To address the skew of the data, natural log transformations were
employed. After the natural log transformation, the data tend to
scatter around a straight line as shown in Fig. 3. As confirmation, the
difference between the mean and median values is reduced to only 0.07
(12%), as illustrated in Table 2. Consequently, all metrics were
transformed using natural log transformations, and the resulting
distributions were ready for standardization.
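The skew check and transformation can be sketched as follows; the productivity values are hypothetical stand-ins for a positively skewed discipline metric, not figures from the EPMS dataset:

```python
import math
import statistics

# Hypothetical positively skewed productivity observations (Wk-Hr/CY).
raw = [0.8, 1.1, 1.5, 2.0, 2.6, 3.1, 4.8, 9.5, 14.0]

mean, median = statistics.mean(raw), statistics.median(raw)
# Positive skew: the long right tail pulls the mean above the median.
assert mean > median

# Natural log transformation compresses the right tail.
transformed = [math.log(x) for x in raw]
mean_t = statistics.mean(transformed)
median_t = statistics.median(transformed)
# After transformation, the mean-median gap shrinks markedly,
# indicating a distribution much closer to symmetric.
```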
5.1.2. Standardization and aggregation
All projects in the EPMS were categorized by the year of the
midpoint of the project, though actual projects usually spanned two or
more years. The year with the most projects was chosen as the base year
because sample means theoretically converge to population means as the
sample size increases. In these data, 2004 was selected as the base year
with a total of 32 projects. Using this reference dataset, each
individual metric was standardized by subtracting its mean and then
dividing it by its standard deviation. Thus, the variability of the
different discipline metrics was neutralized and calibrated in the same
scale, suitable for aggregation. Next, the standardized discipline
productivity is aggregated using work hours as the weights, since the
"work hour" is the common parameter amongst the different
disciplines. In terms of workload, it represents each discipline's
relative importance in engineering productivity performance.
5.1.3. An example for calculation of a PEPM
Using the z-score method, Table 3 presents an example of the steps
for calculation of a PEPM for a single project. For ease of
understanding, only concrete and steel disciplines are included in this
example. It begins with the natural logarithm transformation of
engineering productivity metrics to address data skew, thereby making
the distributions more normal. For instance, concrete engineering
productivity is 3.06 (Wk-Hr/CY), as shown in the column with note (2).
The natural logarithm transformed value is shown in the column with note
(3). Next, the transformed engineering productivity metrics are
standardized using the mean (note 4) and standard deviation (note 5) of
the reference dataset to account for the variations in the
distributions. Lastly, all of the standardized concrete and steel
engineering productivity metrics (note 6) are weighted by their work
hours (note 1) and aggregated to a PEPM shown in the formula at note
(7). Given the concrete z-score of 0.38 with its 5200 engineering work
hours and the steel z-score of -0.47 with its 7500 hours, their
composite score is -0.12. This value, which incorporates both concrete
and steel engineering productivity, indicates that the sample project
outperforms the overall mean by 0.12 standard deviations. Similarly,
all discipline level productivity metrics can be aggregated to a PEPM
using the same approach.
In summary, the PEPM can be expressed mathematically with Eq. 2:
$$PEPM_p = \frac{\sum_{i=1}^{n} WH_{ip} \cdot z_{ip}}{\sum_{i=1}^{n} WH_{ip}}, \quad (2)$$
where: $WH_{ip}$ is the work hours of the ith underlying metric on the
pth project, and $z_{ip}$ is the z-score of the ith underlying metric on
the pth project.
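The work-hour-weighted aggregation of Eq. 2 reproduces the two-discipline example from Table 3 exactly:

```python
# Eq. 2 applied to the Table 3 example: standardized discipline metrics
# (z-scores) are weighted by their engineering work hours and aggregated
# into a single project level metric.
disciplines = {
    "concrete": {"z": 0.38, "work_hours": 5200},
    "steel":    {"z": -0.47, "work_hours": 7500},
}

pepm = (sum(d["z"] * d["work_hours"] for d in disciplines.values()) /
        sum(d["work_hours"] for d in disciplines.values()))
print(round(pepm, 2))  # -0.12
```

The negative value matches the text's interpretation: the sample project outperforms the base-year mean by 0.12 standard deviations.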
5.1.4. Characteristics of a PEPM generated by the z-score approach
The authors and the PM team assessed the z-score approach using the
pre-established criteria of comprehensibility, homogeneity, and trending
ability. Compared with the other two candidate approaches, the z-score
approach performed best and thus was
selected. Characteristics of the z-score approach are presented as
follows.
Comprehensibility
The z-score method produces a PEPM in which the mean value
approximates zero and its data ranges from -3 to 3 standard deviations
as depicted in Fig. 4. The Kolmogorov-Smirnov (K-S) test was performed
to examine if the distribution differs significantly from normal. The
p-value was greater than 0.1 suggesting that the null hypothesis cannot
be rejected and that the distribution of the PEPM approximates a normal
distribution. A negative composite score as illustrated in Table 3
indicates that the project is more productive than the norm of the base
year. A positive PEPM score would imply that the indicated project is
less productive than the norm of the base year. In addition, the PEPM is
interpreted with the same convention as other CII performance metrics
such as cost growth and schedule growth (i.e. the smaller the value, the
better the performance); therefore, the authors and the PM team
concluded that this approach creates compatibility between the PEPM and
existing CII benchmarking metrics. The PEPM was presented to industry
representatives in CII benchmarking workshops to gain broad feedback.
The benchmarking users easily recognized the PEPM's value for
assessing productivity performance, and thus it was deemed acceptable. In
summary, the z-score approach produces an appropriate PEPM that is
easily comprehended by industry and, as later demonstrated, by academia.
[FIGURE 4 OMITTED]
Homogeneity
Homogeneity indicates how accurately the PEPM summarizes
overall engineering productivity at the discipline level. In this study,
homogeneity is defined as the difference between the percentile of the
PEPM and percentile of the weighted average of the discipline level
metrics in each project. The smaller the percentile difference (PD), the
better the homogeneity.
To examine the homogeneity of the PEPM under various benchmarking
scenarios, the dataset was first divided into subgroups by project
characteristics and then homogeneity was examined accordingly. For
instance, projects were divided by the ith characteristic into j
subgroups, where the kth project in the jth subgroup has l disciplines.
$Percentile_{ijkl}$ was calculated for the lth discipline level
metric. A summarized percentile of all discipline level metrics was
derived from the average weighted by their work hours. This
weighted-average percentile is called the overall expected performance of
discipline level metrics (OEPDLM) for the kth project (Eq. 3):
$$OEPDLM_{ijk} = \frac{\sum_{l} Percentile_{ijkl} \cdot WorkHour_{ijkl}}{\sum_{l} WorkHour_{ijkl}}. \quad (3)$$
The percentile ($P_{ijk}$) of the PEPM of the kth project is directly
calculated against the other projects in the jth subgroup. The percentile
difference ($PD_{ijk}$) of the kth project is defined as the absolute
difference between $OEPDLM_{ijk}$ and $P_{ijk}$ (Eq. 4):
$$PD_{ijk} = \left| P_{ijk} - OEPDLM_{ijk} \right|. \quad (4)$$
Lastly, the PD for the jth subgroup of the ith characteristic,
$PD_{ij}$, is simply the arithmetic mean over its k projects, because
every project is considered equally important (Eq. 5):
$$PD_{ij} = \frac{\sum_{k} PD_{ijk}}{k}. \quad (5)$$
A grassroots project is used as an example to illustrate how the PD
was calculated. First, compared against grassroots projects, the z-score
of concrete engineering productivity (5200 work hours) of this project
was converted into a percentile, 70%. Similarly, percentiles of the
engineering productivity z-scores (illustrated in 5.1.3.) for other
disciplines were calculated and presented with their work hours as
follows: steel (40%, 7500 work hours), electrical equipment (51%, 1154
work hours), conduit (63%, 577 work hours), cable tray (59%, 981 work
hours), wire and cable (55%, 3462 work hours), lighting (58%, 173 work
hours), piping (53%, 23077 work hours), instrumentation (64%, 8654 work
hours), and equipment (41%, 7500 work hours). The percentile (45%) of
the PEPM was also calculated. Second, the OEPDLM, 53.23%, was derived
from the percentiles of all discipline level metrics averaged with
their work hours as weights. Therefore, the PD (8.23%) of this project
was obtained as the absolute value of the difference between the OEPDLM
(53.23%) and the PEPM percentile (45%).
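As a check, the OEPDLM and PD arithmetic for this grassroots example can be reproduced directly from the percentiles and work hours listed above:

```python
# Eqs 3-4 applied to the grassroots example: each discipline percentile
# is weighted by its work hours (values taken from the text).
percentiles_and_hours = [
    (70, 5200),   # concrete
    (40, 7500),   # steel
    (51, 1154),   # electrical equipment
    (63, 577),    # conduit
    (59, 981),    # cable tray
    (55, 3462),   # wire and cable
    (58, 173),    # lighting
    (53, 23077),  # piping
    (64, 8654),   # instrumentation
    (41, 7500),   # equipment
]

# OEPDLM: work-hour-weighted average of the discipline percentiles.
oepdlm = (sum(p * wh for p, wh in percentiles_and_hours) /
          sum(wh for _, wh in percentiles_and_hours))

# PD: absolute gap between the PEPM percentile (45%) and the OEPDLM.
pd_value = abs(45 - oepdlm)
print(round(oepdlm, 2), round(pd_value, 2))  # 53.23 8.23
```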
Applying similar logic, the PDs for addition and
modernization projects were also calculated. As shown in Table 4, for 37
addition projects, the average PD is 7.2%; for 22 grassroots projects,
the average PD is 8.5%; for 53 modernization projects, the average PD is
9.4%.
Besides project nature, the authors also examined PDs by other
project characteristics such as project size, type, priority, contract
type, and work involvement. Table 4 presents the average PD of the
PEPM across 13 subgroups defined by different project characteristics;
the overall average PD equals 8.4%. Compared with the 30% error rate found
in previous research (Cha 2003; Jarrah 2007), the precision of the PEPM
is reasonably acceptable. The results also indicate that the PEPM
represents discipline engineering productivity metrics homogeneously.
Trending Ability
One of the major advantages of the z-score approach is that it
produces a PEPM for trend-tracking because it utilizes a fixed reference
dataset in 2004. Data points from 1998-2000, 1999-2001, and 2007-2008
were excluded because they included fewer than 30 projects and thus may
not be statistically reliable. Nevertheless, the average PEPM for each
year varies dramatically because of the limited sample size in each period. In
order to level out fluctuations and to observe trends more clearly, a
three-year moving average (3-yr MA) is utilized to demonstrate the trend
of engineering productivity.
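The smoothing step can be sketched as follows; the yearly PEPM means below are hypothetical placeholders (keyed by project midpoint year), not the published values:

```python
# Three-year moving average of yearly mean PEPM values.
# Rising values indicate lower productivity under the hours-per-quantity
# definition used in this research.
yearly_pepm = {2001: -0.30, 2002: -0.15, 2003: -0.05,
               2004: 0.00, 2005: 0.10, 2006: 0.20}

years = sorted(yearly_pepm)
# Each entry averages the year with its two predecessors, levelling out
# year-to-year fluctuations caused by small samples.
moving_avg = {
    years[i]: sum(yearly_pepm[y] for y in years[i - 2:i + 1]) / 3
    for i in range(2, len(years))
}
```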
Fig. 5 depicts a 3-yr MA trend line. Given that engineering
productivity is defined as input over output in this research, rising
values reflect lower productivity. The trend depicts declining
productivity from 2000 to 2006. The authors discussed this with the
experts of the PM team as well as other practitioners from industry
forums. In summary, the authors found that the trend illustrated by the
PEPM is consistent with industry expectations of engineering
productivity. A more rigorous explanation is the dissemination of 3D
computer-aided design (CAD) technology. In order to improve field
productivity, engineers use 3D CAD intensively to address
constructability and safety issues beforehand, and thus may consume more
time than with 2D CAD implementations (Daratech 1994). Poor technology
integration also likely hampers engineering productivity improvement as
well. For instance, an engineering firm may save work hours on designing
piping layouts with 3D CAD; however, a low degree of technology
integration with other disciplines results in inefficient data mapping
or transformation (Brynjolfsson 1993).
6. Application: benchmarking PEPM
The PEPM was developed for benchmarking engineering productivity
and identifying problems at the project level. If engineering
productivity at the project level is determined to be a major concern,
problems at the discipline level can be identified by tracking the
corresponding workload (engineering hours) and also by benchmarking
results. In Figs 6-8, an oil refining plant is provided as an
illustrative example.
The sample oil refining plant was engineered for modernization of equipment,
piping, and instrumentation. The z-score (standardized engineering
productivity, SEP) of the equipment discipline is -0.75 with 140 work
hours. The z-score of the piping discipline is 0.75 with 1140 work
hours. The z-score of the instrumentation discipline is 0.52 with 342
work hours. In order to control project characteristics and benchmark
more meaningfully, 11 oil refining plants (engineered for modernization)
were selected as the comparison samples. Using the PEPM for
benchmarking, as shown in Fig. 6, engineering productivity of the sample
project appears to be worse (4th quartile) than most of the comparison
samples.
[FIGURE 6 OMITTED]
When project engineering productivity is found to be low, the
engineering hours (workload) of various disciplines can be prioritized
to track engineering productivity because the engineering hours of each
discipline represent relative impact on project productivity. For the
sample project, piping engineering work hours account for 70% of the total,
instrumentation work hours account for 21%, and equipment work hours
account for 9% (Fig. 7).
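These shares follow directly from the discipline work hours given above for the sample project:

```python
# Workload shares for the sample oil refining project: each discipline's
# engineering hours as a fraction of the total (values from the text).
work_hours = {"piping": 1140, "instrumentation": 342, "equipment": 140}

total = sum(work_hours.values())
shares = {d: round(100 * wh / total) for d, wh in work_hours.items()}
print(shares)  # {'piping': 70, 'instrumentation': 21, 'equipment': 9}
```

Sorting disciplines by these shares gives the priority order for tracking productivity problems, as the text recommends.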
Therefore, piping engineering productivity should be of major
concern because it accounts for the largest workload among all
disciplines. As shown in Fig. 8, the standardized engineering
productivity (SEP) of piping discipline resides in the fourth quartile,
equipment resides in the second quartile, and instrumentation resides in
the third quartile. The figure demonstrates that piping engineering
productivity is the worst among all disciplines. Considering the
workload and performance of the sample project, the project manager
should place the piping discipline as the top priority and allocate
major management resources in order to efficiently improve project
engineering productivity.
[FIGURE 8 OMITTED]
6.1. Strategy for engineering productivity improvement through
benchmarking
Engineering productivity improvement raises project delivery
efficiency and reduces cost. Benchmarking is an effective approach with
which the project manager identifies problems. The following strategy is
provided as guidance for project managers. First, project
characteristics by which the analysis is to be performed should be
determined. Second, a similar comparison dataset is selected. Third, if
the PEPM demonstrates a less than satisfactory result, the project
manager can track engineering productivity by discipline level given
their relative importance by work-hour percentage of the entire project.
Lastly, improvement plans can be developed and thus resources can be
consumed effectively for productivity improvement.
Project managers should be aware that benchmarking results become
more meaningful as more project characteristics are identified, implying
that more project complexity is controlled. The size of the comparison
sample, however, becomes smaller accordingly. If the sample size of a
comparison dataset becomes too small to benchmark, fewer constraints (or
characteristics) should be used to gain more comparison samples, though
the benchmarking result may then be less tailored. Users should prudently
select a benchmarking dataset by balancing the number of controlled
characteristics against comparison sample size to obtain meaningful results.
7. Conclusions and recommendations
Using a dataset of 112 heavy industrial projects, this research
developed a z-score approach to produce a Project Level Engineering
Productivity Metric (PEPM). This approach consists of three steps:
transformation, standardization, and aggregation of the various discipline
level metrics. Considering the positively skewed distributions of
engineering productivity data, application of the natural logarithm
function yields transformed discipline level metrics with approximately
normal distributions. Thus, the transformed
metrics are suitable for standardization. Although the transformed
metrics were approximately normally distributed, their different central
tendencies and variation were recognized and thus their z-scores were
calculated for standardization. The different disciplines were weighted
by work hours for aggregation into a PEPM for the assessment of
engineering productivity at the project level. The PEPM is easily
understood by the industry; it represents underlying metrics accurately
and can be used to track engineering productivity trends.
From the benchmarking perspective, it is critical to have a PEPM
which summarizes the various discipline engineering productivity
metrics. It provides project managers with a macro-view of engineering
productivity. Project managers can either benchmark their engineering
productivity against the CII database or historical information
collected within their organizations.
In the hierarchical EPMS, the PEPM calculated with the z-score
approach allows project managers to identify project productivity
problems at a glance. When project engineering productivity appears to
be low, the project manager can track problems of the underlying metrics
prioritized by their workload (engineering hours). An informed decision
can then be made for improving overall engineering productivity.
The development of the PEPM also opens new opportunities for
engineering productivity analysis. Further research can be conducted to
analyze impacts of best practice use such as front-end planning,
constructability, or change management on project level engineering
productivity. In addition, the relationship between engineering
productivity and project performance can also be explored with the PEPM.
doi: 10.3846/13923730.2012.671284
References
Agresti, A.; Finlay, B. 1999. Statistical methods for the social
sciences. 3rd ed. Prentice Hall, Inc., Upper Saddle River, NJ. 706 p.
Bang, V. S. S.; Pope, G.; Sharma, M. M.; Baran, J.; Ahmadi, M.
2010. A new solution to restore productivity of gas wells with
condensate and water blocks, SPE Reservoir Evaluation & Engineering
13(2): 323-331. http://dx.doi.org/10.2118/116711-PA
Benestad, H. C.; Anda, B.; Arisholm, E. 2009. Are we more
productive now? Analyzing change tasks to assess productivity trends
during software evolution, in The 4th International Conference on
Evaluation of Novel Approaches to Software Engineering (ENASE 2009),
9-10 May, 2009, Milan, Italy. INSTICC Press, 161-176.
Brynjolfsson, E. 1993. The productivity paradox of information
technology: Review and assessment, Communications of the ACM 36(12):
67-77.
Cha, H. S. 2003. Selecting value management processes for
implementation on capital facility projects. PhD Thesis. Austin: The
University of Texas at Austin.
Chang, A. S.; Ibbs, W. 2006. System model for analyzing design
productivity, Journal of Management in Engineering ASCE 22(1): 27-34.
http://dx.doi.org/10.1061/(ASCE)0742-597X(2006)22:1(27)
Construction Industry Institute (CII). 2001. Engineering
productivity measurement. RR156-11. Austin: Construction Industry
Institute, The University of Texas at Austin. 356 p.
Construction Industry Institute (CII). 2004. Engineering
productivity measurement II. RR192-11. Austin: Construction Industry
Institute, The University of Texas at Austin. 27 p.
Daratech, Inc. 1994. 3D plant design systems: Benefits and paybacks.
Cambridge: Daratech, Inc. 14 p.
de Aquino, G. S.; de Lemos Meira, S. R. 2009. An approach to
measure value-based productivity in software projects, in The 9th
International Conference on Quality Software (QSIC 2009), 24-25 August,
2009, Jeju, Republic of Korea. IEEE Computer Society, 383-389.
Ellis, R. D.; Lee, S.-H. 2006. Measuring project level productivity
on transportation projects, Journal of Construction Engineering and
Management ASCE 132(3): 314-320.
http://dx.doi.org/10.1061/(ASCE)0733-9364(2006)132:3(314)
Fleming, N.; Ramstad, K.; Mathisen, A. M.; Selle, O. M.; Tjomsland,
T.; Fadnes, F. H. 2010. Squeeze related well productivity impairment
mechanisms preventative/remedial measures utilised: Norwegian
continental shelf, in The 10th SPE International Conference on Oilfield
Scale 2010, 26-27 May, 2010, Aberdeen, United Kingdom. Society of
Petroleum Engineers, 1-16.
Ibbs, W. 2005. Impact of change's timing on labor
productivity, Journal of Construction Engineering and Management ASCE
131(11): 1219-1223.
http://dx.doi.org/10.1061/(ASCE)0733-9364(2005)131:11(1219)
Issa, M. H.; Rankin, J. H.; Christian, A. J. 2009. A methodology to
assess the costs and financial benefits of green buildings from an
industry perspective, in The Canadian Society for Civil Engineering
Annual Conference 2009, 27-30 May, 2009, Canada. Canadian Society for
Civil Engineering, Canada, 1111-1120.
Jarrah, R. T. 2007. Plan for facilitating and evaluating design
effectiveness. PhD Thesis. Austin: University of Texas at Austin.
Kim, I. 2007. Development and implementation of an engineering
productivity measurement system (EPMS) for benchmarking. PhD Thesis.
Austin: University of Texas at Austin.
Liao, P.-C. 2008. Influence factors of engineering productivity and
their impact on project performance. PhD Thesis. Austin: University of
Texas at Austin.
Maloney, W. F.; McFillen, J. M. 1995. Job characteristics:
Union-nonunion differences, Journal of Construction Engineering and
Management ASCE 121(1): 43-54.
http://dx.doi.org/10.1061/(ASCE)0733-9364(1995)121:1(43)
Niu, J.; Dartnall, J. 2010. Using the fuzzy method to evaluate
manufacturing productivity, in The ASME International Mechanical
Engineering Congress and Exposition (IMECE2009), 13-19 November, 2009,
Lake Buena Vista, FL, United States. American Society of Mechanical
Engineers, 115-121.
Ren, L. 2009. Productivity evaluation to Chinese agriculture based
on SBM model, in The International Conference on Information Management,
Innovation Management and Industrial Engineering (ICIII 2009), 26-27
December, 2009, Xi'an, China. IEEE Computer Society, 505-508.
Shouke, C.; Zhuobin, W.; Jie, L. 2010. Comprehensive evaluation for
construction performance in concurrent engineering environment,
International Journal of Project Management 28(7): 708-718.
http://dx.doi.org/10.1016/j.ijproman.2009.11.004
Song, L. 2004. Productivity modeling for steel fabrication
projects. PhD Thesis. Alberta: University of Alberta.
Song, L.; AbouRizk, S. M. 2005. Quantifying engineering project
scope for productivity modeling, Journal of Construction Engineering and
Management ASCE 131(3): 360-367.
http://dx.doi.org/10.1061/(ASCE)0733-9364(2005)131:3(360)
Thomas, H. R.; Korte, Q. C.; Sanvido, V. E.; Parfitt, M. K. 1999.
Conceptual model for measuring productivity of design and engineering,
Journal of Architectural Engineering ASCE 5(1): 1-7.
http://dx.doi.org/10.1061/(ASCE)1076-0431(1999)5:1(1)
Watermeyer, P. 2002. Handbook for process plant project engineers.
Professional Engineering Publishing, UK. 236 p.
Yang, Z.; Paradi, J. C. 2009. A DEA evaluation of software project
efficiency, in The IEEE International Conference on Industrial
Engineering and Engineering Management (IEEM 2009), 8-11 December, 2009,
Hong Kong, China. IEEE Computer Society, 1723-1727.
Zener, C. 1968. An analysis of scientific productivity, in Proc. of
the National Academy of Sciences of the United States of America,
1078-1081.
Pin-Chao Liao (1), Stephen R. Thomas (2), William J. O'Brien
(3), Jiukun Dai (4), Stephen P. Mulva (5), Inho Kim (6)
(1) Department of Construction Management, Tsinghua University,
Beijing, 100084, P.R. China
(2,4,5) Construction Industry Institute, Austin, TX 78759-5316, USA
(3) Department of Civil, Architectural and Environmental
Engineering, The University of Texas at Austin, Austin, TX, 78712-0276,
USA
(6) California Department of Transportation, Oakland, CA 94612, USA
E-mails: (1) pinchao@mail.tsinghua.edu.cn (corresponding author);
(2) sthomas@mail.utexas.edu; (3) wjob@mail.utexas.edu; (4)
jiukun.dai@cii.utexas.edu; (5) smulva@cii.utexas.edu; (6)
inho_kim@dot.ca.gov
Received 24 Jun. 2010; accepted 11 Feb. 2011
Pin-Chao LIAO. A member of the research faculty at the Dept. of
Construction Management, School of Civil Engineering, Tsinghua
University, Beijing. His research interests are construction management,
benchmarking, engineering productivity, and legal issues.
Stephen R. THOMAS. An associate director in the Construction
Industry Institute. Member of American Society of Civil Engineers
(ASCE). His research interests are benchmarking performance and practice
use for the engineering and construction industry, and qualitative
methods for project management.
William J. O'BRIEN. A member of the research faculty at the
Construction Engineering and Project Management program, Dept. of Civil,
Architectural and Environmental Engineering, the University of Texas at
Austin. His research interests are production systems, project controls,
and computer-integrated construction.
Jiukun DAI. A research engineer in the Construction Industry
Institute. His research interests are productivity measures,
benchmarking and metrics, project control, and safety.
Stephen MULVA. An associate director in the Construction Industry
Institute. His research interests are field productivity metrics,
offshore projects, and repetitive building programs.
Inho KIM. A transportation engineer. His research interests are
engineering productivity, construction productivity, and project
control.
Table 1. The EPMS database

Project Characteristics   Subgroup          Sample Size (N = 112)
Respondent Type           Owner             20
                          Contractor        92
Project Type              Process           77
                          Non-Process       35
Project Nature            Addition          37
                          Grass Roots       22
                          Modernization     53
Project Size              Large (> $5MM)    68
                          Small (<= $5MM)   44
Table 2. Test of normality for engineering productivity metrics

Engineering              Raw                                Transformed
Productivity
Metrics           N    Mean    Median  SD*     ND**    Mean    Median  SD*    ND**
Concrete          44   4.54    1.77    6.36    No      0.64    0.57    1.41   Yes
Steel             60   9.31    6.95    7.73    No      1.91    1.94    0.85   Yes
Elect. Equip.     25   21.06   18.42   14.36   No      2.82    2.91    0.72   Yes
Conduit           32   0.20    0.12    0.21    No      -2.20   -2.12   1.26   Yes
Cable Tray        29   0.62    0.52    0.54    No      -1.01   -0.66   1.19   Yes
Wire and Cable    35   0.03    0.02    0.04    No      -3.96   -3.88   1.16   Yes
Lighting          19   5.29    3.93    5.82    No      1.26    1.37    0.95   Yes
Piping            90   0.83    0.68    0.79    No      -0.54   -0.38   0.90   Yes
Instrumentation   70   9.69    6.21    9.59    No      1.75    1.83    1.11   Yes
Equipment         56   166.46  105.00  165.13  No      4.68    4.65    1.00   Yes

Note: * SD = Standard Deviation; ** ND = Normal Distribution (Yes/No)
Table 3. A simplified example using the z-score approach to calculate
the PEPM for a project

Disciplines   (1) Wk-Hrs   Quantity     (2) EP*   (3) Transformed EPM
Concrete      5200         1700 (CY)    3.06      1.12
Steel         7500         800 (Ton)    9.38      2.24

Disciplines   (4) Mean of Transformed   (5) SD of Transformed   (6) Z-score
              EPM in 2004               EPM in 2004
Concrete      0.88                      0.63                    0.38
Steel         2.47                      0.49                    -0.47
(7) Composite score                                             -0.12

Calculations: (2) = (1)/Quantity; (3) = ln[(2)]; (6) = [(3) - (4)]/(5);
(7) = Σ[(1) × (6)]/Σ(1)
* EP = Engineering Productivity
Table 4. Average percentile difference (PD) by project characteristics

Project Characteristics (i)   Subgroups (j)             # of projects   Average PD
Project Nature                Addition                  37              7.2
                              Grassroots                22              8.5
                              Modernization             53              9.4
Project Size                  Large                     68              6.7
                              Small                     44              9.3
Project Type                  Process                   77              8.7
                              Non-Process               35              9.2
Work Involvement              Design-Only               39              7.6
                              Design-and-Construction   68              7.9
Contract Type                 Lump Sum                  20              12.5
                              Cost Reimbursable         45              8.6
Project Priority              Non-Schedule Driven       44              8.1
                              Schedule Driven           35              8.1
Average                                                                 8.4
Fig. 2. Work-hour distributions in the EPMS

Discipline        Hour Percentage
Piping            45%
Civil             20%
Instrumentation   14%
Electrical        11%
Equipment         10%

Note: Table made from bar graph.
Fig. 5. Engineering productivity trend

Period    3-yr MA of EP   N
'00-'02   -0.31           31
'01-'03   -0.22           50
'02-'04   -0.03           73
'03-'05   -0.03           74
'04-'06   -0.00           53

Note: Table made from line graph.
Fig. 7. Work-hour percentage of the sample project (PEPM = 0.52)

Piping Hrs.      70%
Instr. Hrs.      21%
Equipment Hrs.   9%

Note: Table made from pie chart.