The revolution of six-sigma: an analysis of its theory and application.
Drake, Dominique; Sutterfield, J.S.; Ngassam, Christopher
INTRODUCTION
Data is good but good data is better. However, in order to get this
good data it is important to have measures in place such as quality
control systems to ensure information accuracy. Quality has evolved over
the past two centuries, since it first became an important basis for
business comparison. The ways in which quality has been defined and
assessed have also evolved as new business practices and degrees
of acceptance have taken hold. One of the first definitions of quality was
conformance to valid customer requirements, or the Goalpost View, Gitlow
& Levine (2005). As long as the company's process
met the customer's requirements, then both the customer and the
company were satisfied. Frederick Taylor in his theories of scientific
management subscribed to this viewpoint. He believed segmenting jobs
into specific work tasks was the best way to control outputs because
then efficiency would increase, Basu (2001). Employees were told what to
do and how to do it with little or no input. They also rarely worked in
teams or cross-functional efforts because each person had his or her own
tasks to complete. Moreover, postproduction inspection was the primary
means of quality control, which showed the lack of concern for waste and
error at intermediary steps in the process, Evans and Lindsay (2005).
Under this traditional approach, companies did little to understand
customer requirements, whether external or internal, and rarely focused
efforts on finding a way to improve quality if it did not come from a
technological breakthrough.
As companies began to realize this approach did not take into
consideration the cost of waste and variation, many began looking for
new ways to measure and define quality. This led to a more modern
definition of quality being accepted as the predictable degree of
uniformity and dependability, Gitlow & Levine (2005). This
Continuous Improvement View supported the notion that every process
contains an element that can be improved. Inherently, continuous quality
improvement focused on fine tuning parts of the whole through
incremental changes. When a problem was discovered, it was addressed
until the next problem was discovered and so on. Industrial scientists
such as William Edwards Deming, Joseph Juran, Walter Shewhart, Harold
Dodge and others were at the forefront of this era. They began by
applying statistical methods of control to quality. These methods marked
the shift from inspecting the final product and putting a band-aid on
the problem to actually building quality control into the
manufacturing process. This also illustrated the increasing importance
of management decision-making in quality control efforts, instead of
simply finding a quick fix so that the output fell within the
specification limits for product or service satisfaction and acceptance.
Deming and Juran believed in a new concept called Total Quality
Management. Research by Evans and Lindsay (2005) indicates this viewpoint
was based on three fundamental principles:
1. Focus on customers and stakeholders
2. Participation and teamwork by everyone in the organization
3. A process focus supported by continuous improvement and learning
Through Total Quality Management (TQM), workers were empowered to
provide input throughout the entire process to ensure and instill
confidence that products met customer specifications. As companies worked
to meet customer requirements, the term quality assurance became widely used for
any planned and systematic activity directed toward providing consumers
with products of appropriate quality, along with the confidence that
products meet consumers' requirements, Evans and Lindsay (2004).
This extension of the Continuous Improvement viewpoint coined the terms
Big Q and Little Q, which referred to the quality of management and the
management of quality, respectively, Evans and Lindsay (2004). Driven by
upper management, TQM focused not just on identifying and eliminating the
problems that caused defects but also on educating the workforce to improve
overall quality. Rather than waiting until the end for post-production
inspections, upper management worked to find and eliminate the causes of
defects throughout the process. In order to do this, continuous education
programs were implemented at all levels. As time passed, these total
quality management efforts soon began to fall short of the expectations of
businesses in the United States. Companies promised they could deliver a
certain level of quality by getting it right the first time, but in the
end their processes were still not meeting all the needs and requirements of
the consumer. TQM was joined by many other acronyms like JIT, MRP and
TPM that all guaranteed an unreachable quality standard Basu (2001).
Because these methods focused so greatly on the parts of the whole
rather than the whole, they fell short in implementing rapid changes.
This led to the introduction of certain holistic quality control
programs, one of the most recent being Six-sigma.
Six-sigma is a discipline named for the Greek letter sigma, which is
the conventional symbol for standard deviation. Six-sigma is a very
structured process that focuses on quantitative data to drive decisions
and reduce defects Benedetto (2003). Many companies have tried to
implement this program without truly understanding the theory and
concept behind it. Because of such misguided efforts, Six-sigma has
received a somewhat jaded reputation, but it is not the discipline that
is flawed but rather the application of it. As Evans and Lindsay (2004) note, a
cookbook approach, in which management reads the latest self-help book
advocated by business consultants and blindly follows the author's
recommendations, will destine Six-sigma or any other management practice
to failure, because experience only describes what happened and is of no
help to the emulating management team.
Understanding the subtle theory, on the other hand, will enable one to
better understand the cause and effect relationships that are applied in
Six-sigma and other management practices. In this paper, we will further
develop the historical roots of the quality revolution already
illustrated, show how the quality revolution developed into Six-sigma, delve
further into the underlying theory of Six-sigma and then analyze the
uses of some Six-sigma tools used in an effective, coherent Six-sigma
program.
LITERATURE REVIEW
Multiple scientists have contributed to the evolution of quality
management. The main three who added to the development of six-sigma and
will be discussed here are William Edwards Deming, Joseph Juran and
Philip Crosby. Deming had a Ph.D. in physics and was trained as a
statistician at Yale University. He and Juran both worked for Western
Electric during the 1920s and 1930s Evans and Lindsay (2004). From there
he started working with the U.S. Census Bureau where he perfected his
skills in statistical quality control methods. He opened his own
practice in 1941 and thus began teaching SQC methods to engineers and
inspectors in multiple industries. Deming, better known as the Father of
Quality, was widely ignored in the United States for his works. As a
result, he went to Japan right after World War II to begin teaching
SQC to Japanese engineers and managers. His work there is what truly bolstered the "Made in
Japan" label to its well-respected level of quality today, Skymark
Corporation (2007). Dr. Deming's most famous contributions are his 14
points for management and his System of Profound Knowledge, which comprises
four interrelated parts: Appreciation for a System, Understanding of
Variation, Theory of Knowledge and Psychology, Evans and Lindsay (2004).
Under the first part, Deming held that it was poor management to purchase
materials at the lowest price or to minimize the cost of manufacturing if
doing so came at the expense of the product. The second and third parts
emphasized the importance of management understanding the process first
and foremost before taking any steps to reduce variation. The methods to
reduce variation include technology, process design and training. Under
the fourth part, Deming stressed understanding the dynamics of
interpersonal relationships because everyone learns in different ways
and speeds and thus the system should be managed accordingly. Other
points in the Deming school of thought were on-the-job training,
creating trust, building quality into a product or service from the
beginning and inclusion of everyone in the company to accomplish project
improvement. The common methodology used by Deming for improving
processes is PDCA, which stands for Plan-Do-Check-Act.
Joseph Juran is well known for his book, the Quality Control
Handbook, published in 1951. He too went to Japan in the 1950s after
working at Western Electric to teach the principles of quality
management as they worked to improve their economy. His school of
thought was different from Deming's in that he believed that, to improve
quality, companies should use systems with which managers are already
familiar. When consulting for firms, he would design programs that fit the
company's existing strategic efforts to ensure minimal rejection by
staff. The main points he taught were known as the Quality Trilogy:
Quality Planning, Quality Control and Quality Improvement. Quality
planning was the process of preparing companies to meet quality goals by
identifying the customers and their needs. Quality control was the
process of meeting quality goals during operations with minimal
inspection. Quality improvement was the process of breaking through to
unprecedented levels of performance to produce the product (Evans and
Lindsay, 2004). His philosophy required a commitment from top management
to implement the quality control techniques and emphasized training
initiatives for all.
The third philosopher who contributed greatly to six-sigma was
Philip Crosby. Crosby worked as the Corporate VP for Quality at
International Telephone and Telegraph for 14 years. His most well-known
work is the book Quality is Free where he emphasized management should
"do it right the first time" and popularized the idea of the
"cost of poor quality." Crosby emphasized that management must
first set requirements and those are on which the degree of quality will
be judged. He also believed doing things right the first time to prevent
defects would always be cheaper than fixing problems which developed
into the cost of poor quality idea. Similar to six-sigma, Crosby focused
on zero defects and created four Absolutes of Quality Management to
ensure that goal was accomplished Skymark Corporation (2007):
1. Quality is defined as conformance to requirements, not as
"goodness" or "elegance"
2. The system for causing quality is prevention, not appraisal
3. The measurement standard must be zero defects, not "that's
close enough"
4. The measurement of quality is the Price of Nonconformance, not
indices.
These philosophies have been categorized as more evolutionary
because the changes resulting from their implementation were more
gradual. As time continued, managers became increasingly disappointed
with TQM and searched for philosophies that would create more drastic
changes in the existing processes and systems. These revolutionary
processes recognized that processes are key to quality, that most
processes are poorly designed and implemented, that overall success is
sensitive to individual sub-process success rates and that rapid,
dramatic change requires looking at the entire process. One of the first
of these revolutionary philosophies was Reengineering. Reengineering is
defined as the fundamental rethinking and radical redesign of business
processes to achieve dramatic improvements in critical, contemporary
measures of performance Benedetto (2003). It places heavy reliance on
statistics and quantitative analysis.
Six-sigma emerged during this revolutionary time period after
reengineering. The two approaches are very similar but also have many
differences. Whereas reengineering focuses on the statistical aspect,
Six-sigma places more emphasis on data to drive decisions. When
comparing six-sigma to Total Quality Management it is very apparent that
great changes have come about in the evolution of quality. Six-sigma has
a more structured and rigorous training development program for those
professionals using it. Six-sigma is a program owned by the business
leaders, also known as champions, while TQM programs are based on worker
empowerment. Six-sigma is cross functional and requires a verifiable
return on investment in contrast to TQM's function or process based
methodology that has little financial accountability Evans and Lindsay
(2004).
Six-sigma started as a problem solving approach to reduce variation
in a product and manufacturing environment. It has since grown to be
used in a multitude of other industries and business areas such as
service, healthcare and research and development. It represents a
structured thought process that begins with thoroughly
understanding the requirements before proceeding or taking any action.
Those requirements define the deliverables to be produced and the tasks
to produce those deliverables which in turn illustrate the tools to be
used to complete the tasks and produce the deliverables Hambleton
(2007). Six-sigma was initiated in the 1980s by Motorola. The company
was looking to focus its quality efforts on reducing manufacturing
defects by tenfold within a five-year period. This goal was then revised
to a tenfold improvement every two years, 100-fold every four years and
3.4 defects per million in five years Gitlow & Levine (2005). A
defect in six-sigma terms is any factor that interferes with
profitability, cash flow or meeting the customers' needs and
expectations. This led to the next important concept of six-sigma:
critical to quality. Critical-to-Quality (CTQ) items are points of
importance to the customer, whether internal or external, Adomitis and
Samuels (2003). Six-sigma projects aim to improve CTQs because research
has shown that a high number of CTQ defects correlates with lost customers and
reduced profitability. Ultimately, the company reduces the number of defects
and focuses on the CTQs to the point where correcting the remaining defects
would cost more than preventing their occurrence in the first place.
This thought was advanced in the 1980s by Dr. Genichi Taguchi with
his Quality Loss function. Prior to that time, it was thought that the
customer would accept anything within the tolerance range and that
product costs did not depend on the actual amount of variation as long as
it was still within the tolerance range, Evans and Lindsay (2004).
Taguchi introduced a model that discredited that school of thought by
showing the approximate financial loss for any deviation from a
specified target value--an idea that coincides with six-sigma's
inclusion of financial data in the assessment of a process or function.
His work contradicted the original concept and showed that variation is
actually a measure of poor quality such that the smaller the variation
is about the nominal level, the better the quality is. As the graph
below depicts, this loss function also applied an economic value to
variation and deviation from the standard. As the process or project
deviates from its target value, costs become increasingly higher and the
customer becomes increasingly dissatisfied. Quality loss can be
minimized by reducing the number of deviations in performance or by
increasing the product's reliability. In six-sigma, the goal is to
minimize the number of defects and keep the process as close to the
nominal value as possible to avoid high degrees of variability. The
concepts of Taguchi are part of Six-sigma's key philosophy of
reducing defects and variation from the norm.
[FIGURE 1 OMITTED]
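To make the loss function concrete, the following minimal Python sketch evaluates the quadratic loss L(x) = k(x - T)^2 described above; the target, tolerance and cost figures are hypothetical values chosen only for illustration.

# Taguchi quality loss sketch: L(x) = k * (x - T)^2, where T is the target
# value. All numbers below are invented for illustration.
T = 10.0                 # target (nominal) value of the quality characteristic
delta = 0.5              # half-width of the tolerance range
cost_at_limit = 4.00     # assumed dollar loss for a unit right at T +/- delta
k = cost_at_limit / delta ** 2   # proportionality constant

def taguchi_loss(x):
    """Approximate financial loss for a unit measuring x."""
    return k * (x - T) ** 2

for x in (10.0, 10.25, 10.5):
    print(f"x = {x:5.2f}  loss = ${taguchi_loss(x):.2f}")
# Loss is zero only at the target and grows even while x stays inside tolerance.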
As previously mentioned, six-sigma is a product of the statistical
methods era of controlling quality. It stressed the common measure of
quality, the defect. In measuring output quality, the key metrics used
are defects per million opportunities (DPMO), Cost of Poor Quality
(COPQ) and sigma level, Druley and Rudisill (2004). Defects per million
opportunities takes into consideration the number of defects produced
out of the entire batch and the number of opportunities for error. It is
important to assess quality performance using the defects per million
opportunities because two different processes may have significantly
different numbers of opportunities for error, thus making comparison
unbalanced. Furthermore, large samples provide for a higher probability
of detection of changes in process characteristics than a smaller
sample. When Motorola created this ultimate goal of six-sigma, it was
set to be equivalent to a defect level of 3.4 defects per million
opportunities. According to Evans and Lindsay, this figure was chosen
because field failure data suggested that Motorola's processes
drifted by this amount on average, Evans and Lindsay (2004). The cost of
poor quality metric was introduced by Crosby. It includes the costs of
lost opportunities, lost resources and all the rework, labor and
materials expended up to the point of rejection. This metric is used to
justify the beginning of a six-sigma project. When a company is choosing
which project to undertake, the project with the highest COPQ will most
likely be selected. Lastly, the sigma level indicates the degree of
quality for the project. Sigma is a statistical term that measures how
much a process varies from the nominal or perfect target value based on
the number of DPMO. As mentioned, Motorola's goal was to have 3.4
defects per million opportunities which would happen at the six-sigma
level. The DPMOs for various sigma levels are compared in Table 1 below
(iSixSigma, 2002).
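As a numerical sketch of these metrics, the following Python fragment computes DPMO from hypothetical inspection counts and converts it to a long-term sigma level using the conventional 1.5-sigma shift discussed below; the counts are invented and scipy is assumed to be available.

from scipy.stats import norm

# Hypothetical inspection results; every number here is invented.
units = 5000                  # units produced
opportunities_per_unit = 10   # distinct ways each unit could be defective
defects = 35                  # defects actually observed

dpmo = defects / (units * opportunities_per_unit) * 1_000_000
print(f"DPMO = {dpmo:.1f}")   # 700.0

# Convert DPMO to a sigma level, adding the customary 1.5-sigma shift.
yield_fraction = 1 - dpmo / 1_000_000
sigma_level = norm.ppf(yield_fraction) + 1.5
print(f"sigma level = {sigma_level:.2f}")   # roughly 4.7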
One phenomenon that has been widely discussed in relation to
six-sigma is the 1.5 sigma shift. Because six-sigma focuses on
variation, a chart to understand deviation from the mean should be used
to further understand this concept. A Z-table shows the standard
deviation from the mean. A process' normal variation, defined as
process width, is +/- 3-sigma about the mean. In looking at Table 2, it
is clear that a quality level of 3-sigma actually corresponds with 2,700
defects per million opportunities. This number may sound small but it is
similar to 22,000 checks deducted from the wrong bank account each hour
or 500 incorrect surgical operations each week. This level of operating
efficiency was deemed unacceptable by Motorola, hence the search for a
design that would ensure greater quality levels. A six-sigma design
would have no more than 3.4 DPMO even if shifted +/-1.5 sigma from the
mean. In looking once again at Table 2, it is evident a quality level of
six-sigma actually corresponds with two defects per billion
opportunities while the more commonly known value of 3.4 defects per
million opportunities is found at the six-sigma level with a +/-1.5
sigma shift off the center value or +/-4.5-sigma. This shift is a result
of what is called the Long Term Dynamic Mean Variation, Swinney (2007).
In simpler terms, no process is maintained perfectly at center at all
times, thus allowing drifting over time. The allowance of a shift in the
distribution is important because it takes into consideration both short
and long-term factors that could affect a project such as unexpected
errors or movement. Also worth noting is that the goal of 3.4 DPMO can
be attained at 5-sigma with a +/-0.5-sigma shift or at 5.5-sigma with a
+/-1-sigma shift. However as Taguchi pointed out, it is important to
minimize the noise factors that could shift the process dramatically
from the nominal value. Table 2 below shows the DPMO quality levels
achieved for various combinations of off-centering and multiples of
sigma, Evans and Lindsay (2004).
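The entries in Table 2 can be reproduced from the standard normal distribution. The sketch below assumes two-sided specification limits at plus or minus the stated quality level and a mean off-center by the stated shift, and uses scipy for the normal tail probabilities.

from scipy.stats import norm

def dpmo(quality_level, shift):
    """Defects per million with spec limits at +/- quality_level sigma and
    the process mean off-center by shift sigma."""
    upper_tail = norm.sf(quality_level - shift)     # area beyond the upper limit
    lower_tail = norm.cdf(-quality_level - shift)   # area beyond the lower limit
    return (upper_tail + lower_tail) * 1_000_000

print(round(dpmo(3.0, 0.0)))      # ~2,700: the centered 3-sigma case
print(round(dpmo(6.0, 1.5), 1))   # ~3.4: the familiar six-sigma figure
print(round(dpmo(6.0, 0.0), 3))   # ~0.002: two defects per billion when centered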
METHODOLOGY
Within six-sigma there are different project based methods that can
be followed. Among them are Lean Sigma, Design for Six-sigma, Six-sigma
for Marketing and, most common of all, DMAIC: Define, Measure, Analyze,
Improve and Control, Hambleton (2007). Lean Sigma is a more refined
version of the standard Six-sigma program that streamlines processes to
its essential value-adding activities. In doing this, the company aims
to do things right the first time with minimum to no waste. Wait time,
transportation, excess inventory and overproduction are among the
wasteful activities that can be eliminated. One test used to identify
the value-add activities that will reduce waste is the 3C Litmus Test:
Change, Customer, Correct, Hambleton (2007). If the activity changes the
product or service, if the customer cares about the outcome of the
activity and/or if the activity is executed and produced correctly the
first time then it is a value-add activity and should be kept. Lean
Sigma was popularized in Japan after World War II by Toyota. The Toyota
Production System challenged Ford's standard large-lot
production system by systematically and continuously reducing
waste through small-lot production. Design for Six-sigma (DFSS)
expands Six-sigma by taking a preventive approach, designing quality
into the product or process. DFSS focuses on growth through product
development, obtaining new customers and expanding sales into the
current customer base, Hambleton (2007). It focuses on not just customer
satisfaction but customer success because from the inception of the
product or service the company will focus on optimizing CTQs. In order
to do this, companies use multivariable optimization models, design of
experiments, probabilistic simulation techniques, failure mode and
effects analysis and other statistical analysis techniques. This method
is usually begun only after the Six-sigma program has been implemented
effectively. The Six-sigma for Marketing methodology simply means
applying six-sigma ideas and principles to improving other functions of
the business such as marketing, finance and advertising. The most common
methodology is DMAIC. One goal of using the DMAIC methodology is to
identify the root cause of the problem and select the optimal level of
the CTQs to best drive the desired output. Another goal is to improve
PFQT: Productivity, Financial, Quality and Time spent, Hambleton (2007).
It is used as an iterative method to combat variation. With its built-in
ability to counter variation, DMAIC intrinsically allows for
flexibility because, as knowledge is gained through implementation,
assumptions about the root cause may be disproved, requiring the team to
modify or revisit alternative possibilities. Kaoru Ishikawa promoted
seven basic tools that can be used in assessing quality control. The
list has since been expanded upon to include many other tools but the
original seven were Cause-and-Effect diagrams, Check sheets, Control
charts, Histograms, Pareto charts, Scatter diagrams and Stratification,
Hambleton (2007). Some of these tools will be further explained below.
In the first step, Define, the company clearly defines the problem.
This step begins to build trust and dedication among all stakeholders and all
persons included on the project team. The project team consists of the Champion,
Master Black Belt, Black Belt, Green Belt and Team Members. Key
activities occurring in this phase include selecting the team members
and their roles, developing the problem statement, goals and benefits
and developing the milestones and high level process map iSixSigma
(2007). One tool used in this step is the process flowchart. Four main
types of flow charts are considered: top-down, detailed, work flow
diagram and deployment, Kelly and Sutterfield (2006). This tool shows
how various steps in a process work together to achieve the ultimate
goal. Because it is a pictorial view, a flow chart can be applied to fit
practically any need. The process map allows the user to gain an
understanding of the process and where potential waste or bottlenecks
could occur. It also could be used to design the future or desired
process. An example process map for a call answering operation is shown
below in Figure 2.
[FIGURE 2 OMITTED]
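Since Figure 2 is omitted here, the sketch below shows one simple way a high-level process map can be captured and traversed for analysis; the call-answering steps are hypothetical placeholders rather than the content of the original figure.

# A high-level process map held as a simple step-to-successors mapping.
# Step names are hypothetical placeholders, not the omitted Figure 2.
process_map = {
    "Call received": ["Greet caller"],
    "Greet caller": ["Identify need"],
    "Identify need": ["Resolve on first contact", "Escalate to specialist"],
    "Resolve on first contact": ["Close call"],
    "Escalate to specialist": ["Close call"],
    "Close call": [],
}

def print_paths(step, path=()):
    """Print every route through the process, which helps spot extra hand-offs."""
    path = path + (step,)
    if not process_map[step]:
        print(" -> ".join(path))
    for nxt in process_map[step]:
        print_paths(nxt, path)

print_paths("Call received")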
In the second step, Measure, the company quantifies the problem by
understanding the current performance and collecting necessary data to
improve all CTQs. Key activities occurring in this phase include
defining the defect, opportunity, unit and cost metrics, collecting the
data, and determining the process capability. One tool used in this step is
the SIPOC Diagram. SIPOC stands for Suppliers, Inputs, Processes,
Outputs and Customers. This tool is applied to identify all related
aspects of the project--who are the true customers, what are their
requirements, who supplies inputs to the process--at a high level before
the project even begins (iSixSigma, 2007). These diagrams are very
similar to process maps which are also a tool applied at this phase of
the DMAIC cycle. A SIPOC diagram is shown below in Figure 3, Simon
(2007).
[FIGURE 3 OMITTED]
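A SIPOC can likewise be recorded as structured data before the project begins; the entries below are illustrative placeholders and do not reproduce the omitted Figure 3.

# SIPOC captured as plain data; every entry is an invented placeholder.
sipoc = {
    "Suppliers": ["Telecom provider", "CRM vendor"],
    "Inputs": ["Incoming call", "Customer account record"],
    "Process": ["Receive call", "Identify need", "Resolve", "Close call"],
    "Outputs": ["Resolved inquiry", "Call log entry"],
    "Customers": ["Account holder", "Service quality team"],
}

for element, items in sipoc.items():
    print(f"{element}: {', '.join(items)}")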
In the third step, Analyze, the root cause of the project's
problem is investigated to find out why the defects and variation are
occurring. Through this detailed research, the project team can begin to
find the areas for improvement. Key activities in this phase are
identifying value and non-value added activities and determining the
vital few or critical-to-quality elements. The care applied to this
phase of the Six-sigma project is very important to the project's
success because a lack of understanding and thorough analysis is what
causes most defects and variation. Evans and Lindsay (2005) note
that defects and variation can also be caused by:
* Lack of control over the materials and equipment used
* Lack of training
* Poor instrument calibration
* Inadequate environmental characteristics
* Hasty design of parts and assemblies.
Multiple tools exist to guide the efforts of this process. They
include but are not limited to scatter diagrams, Pareto charts,
histograms, regression analyses and fishbone diagrams. A scatter diagram
shows the relationship between two variables. The correlation is
indicated by the slope of the plotted points, which may be linear or
non-linear. If linear, the correlation could be direct or inverse and
positive, negative or null. If non-linear, a regression analysis can be
used afterwards to measure the strength of the relationship between the
variables. The various types of scatter plots are shown in Figure 4.
[FIGURE 4 OMITTED]
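As a numerical complement to the scatter diagram, the short sketch below computes the Pearson correlation coefficient and a least-squares line for a pair of invented variables; numpy is assumed.

import numpy as np

# Invented paired observations, e.g. process temperature vs. defect rate.
x = np.array([150, 160, 170, 180, 190, 200, 210], dtype=float)
y = np.array([12.0, 10.5, 9.8, 8.1, 7.4, 6.0, 5.2])

r = np.corrcoef(x, y)[0, 1]             # Pearson correlation coefficient
slope, intercept = np.polyfit(x, y, 1)  # least-squares regression line

print(f"correlation r = {r:.3f}")            # near -1: strong negative relationship
print(f"fit: y = {slope:.3f} * x + {intercept:.2f}")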
The Pareto chart is used to separate the "vital few" from
the "trivial many" when analyzing a project. It communicates
the 80/20 rule, which states that 80 percent of the effects come from 20
percent of the causes, and thus provides the rationale for focusing on
those vital few causes, Hambleton (2007). The problem frequencies
are plotted in the order of greatest to least and show the problems
having the most cumulative effect on the system, Kelly and Sutterfield
(2006). As mentioned earlier, six-sigma focuses on eliminating as many
noise factors as possible and this Pareto analysis helps to do just
that. So, the company does not expend resources on wasteful activities
that do not add value but instead add cost to the end project. Figure 5
shows a typical cumulative Pareto chart.
[FIGURE 5 OMITTED]
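The arithmetic behind a cumulative Pareto chart is simply a sorted frequency table with running percentages, as the sketch below shows for invented defect categories.

# Invented defect counts by category; real counts would come from check sheets.
defect_counts = {
    "Wrong address": 120,
    "Missing signature": 64,
    "Late submission": 31,
    "Illegible entry": 12,
    "Other": 8,
}

total = sum(defect_counts.values())
running = 0
print(f"{'Category':<20}{'Count':>7}{'Cum %':>9}")
for category, count in sorted(defect_counts.items(), key=lambda kv: kv[1], reverse=True):
    running += count
    print(f"{category:<20}{count:>7}{100 * running / total:>8.1f}%")
# The first one or two categories account for most of the defects: the vital few.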
In the fourth step, Improve, the research into the problem's root
cause is actually put to work by eliminating all the defects and
reducing the degree of variation. Key activities in this phase are
performing design of experiments, developing potential solutions,
assessing failure modes of potential solutions, validating hypotheses,
and correcting/re-evaluating potential solutions. Failure Mode and
Effects Analysis is a tool used in this step of the DMAIC process to
identify a failure, its mode and effect through analysis. The analysis
prioritizes the failures based on severity, occurrence and detection.
Through this analysis a company can create an action plan if a failure
occurs. Results of an FMEA include a list of effects, causes, potential
failure modes, potential critical characteristics, documentation of
current controls, requirements for new controls and documentation of the
history of improvements. Items listed low on the priority list do not
necessitate an action plan to correct them unless they have a high
probability of occurrence or are special cause circumstances, Hambleton
(2007).
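One common way to carry out the prioritization described above, though the arithmetic is not spelled out in the text, is the risk priority number: the product of severity, occurrence and detection ratings. The failure modes and ratings below are invented for illustration.

# Each failure mode carries 1-10 ratings for severity, occurrence and
# detection (10 = worst). The risk priority number (RPN) is their product.
failure_modes = [
    ("Seal cracks under load", 8, 3, 4),
    ("Connector misaligned", 6, 5, 5),
    ("Label missing", 3, 4, 2),
]

ranked = sorted(failure_modes, key=lambda fm: fm[1] * fm[2] * fm[3], reverse=True)
for name, sev, occ, det in ranked:
    print(f"RPN {sev * occ * det:>4}  {name}")
# Action plans are drafted for the highest-RPN items first.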
In the fifth and final step, Control, the project improvements are
monitored to ensure sustainability. Key activities include developing
standards and procedures, implementing statistical process control,
determining process capability, verifying benefits, costs, and profit
growth, and taking corrective action when necessary to bring the project
back to its nominal value, iSixSigma (2007). The most important tool
used in this phase is the Control chart. It is a statistical method
based on continuous monitoring of process variation. The chart is drawn
using upper and lower control limits along with a center or average
value line. As long as the points plot within the control limits, the
process is assumed to be in control. The upper and lower control limits
are usually placed three sigma away from the center line because for a
normal distribution, data points fall within 3-sigma limits 99.7 percent
of the time. These control limits are different from the specification
limits. Control limits help identify special cause variation and confirm
stability while specification limits describe conformance to customer
expectations (Skymark Corporation, 2007). However, if there are many
outliers, points gravitating toward one of the control limits, or there
seems to be a trend in the points, the process may need to be adjusted
to reduce the variation and/or restore the process to its center. The
control chart can help to assess quantitative gains made through the
improvements because each point will be compared to the target value. An
example of a control chart for a process in control is shown in Figure 6
below.
[FIGURE 6 OMITTED]
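A minimal sketch of the control-limit arithmetic follows, estimating the limits as the sample mean plus or minus three sample standard deviations; the measurements are invented, and in practice control charts usually estimate sigma from subgroup ranges rather than the overall standard deviation.

import statistics

# Invented process measurements collected over time.
measurements = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.1, 9.7, 10.0, 10.4]

center = statistics.mean(measurements)      # center line
sigma = statistics.stdev(measurements)      # sample standard deviation
ucl = center + 3 * sigma                    # upper control limit
lcl = center - 3 * sigma                    # lower control limit

print(f"center = {center:.3f}, UCL = {ucl:.3f}, LCL = {lcl:.3f}")

outside = [x for x in measurements if not lcl <= x <= ucl]
print("points outside the limits:", outside or "none, so the process is assumed in control")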
APPLICATION OF METHODOLOGY
Multiple success stories have come from implementing the
aforementioned tools in a six-sigma project. The most well-known
success story is that of General Electric under the direction of Jack
Welch. Welch began a six-sigma project to work on the rail car repairs
and aircraft engine imports for Canadian customers. Through his guided
efforts, the company reduced repair time, redesigned its leasing
process, reduced border delays and defects and improved overall customer
satisfaction. Quantitatively, General Electric saved over $1 billion in
its first year of six-sigma application and then $2 billion in the
second year. The company's operating margin rose to 16.7 percent
while revenues and earnings increased by 11 and 13 percent respectively,
Black, Hug and Revere (2004).
Six-sigma's application has grown from the manufacturing
environment to banking, healthcare and the automotive industry. In 2001 Bank of
America's CEO Ken Lewis began focusing on increasing the customer
base while improving company efficiency. The company handled nearly 200
customer interactions per second and thus set the ultimate goal of its
six-sigma efforts to be customer delight. Bank of America established a
customer satisfaction goal and created a measurement process to evaluate
current performance in order to work toward improving the state. In the
first year, the bank's defects across electronic channels fell 88
percent, errors in all customer delivery channels and segments dropped
24 percent, problems taking more than one day to resolve went down 56
percent and new checking accounts had a 174 percent year over year net
gain. Within four years of beginning the project, the bank's
customer delight rose 25 percent. Through the application of six-sigma,
Bank of America was able to focus on the voice of the customer in
determining what was most important to them to make their experience a
pleasant one, Bossert and Cox (2004).
The Scottsdale Healthcare facility in Arizona began a six-sigma
project to work on its overcrowded emergency department because it took
38 percent of the patient's total time within the department to
find a bed and transfer the patient out of the waiting room. Before
implementing quality efforts, multiple intermediary steps existed in the
process which inevitably slowed down the time from start to finish and
reduced the potential yield. As a result of the DMAIC and Lean Sigma
efforts, the facility identified that the root cause of the problem was not
finding a bed, as originally thought, but rather the excessive number of
steps involved in the transfer process. Reducing those steps produced
incremental profits of $600,000 and reduced the cycle time for bed
control by 10 percent. Moreover, the patient throughput in the emergency
room increased by 0.1 patients/hour (Lazarus and Stamps, 2002). This
project supported one of six-sigma's key arguments: that inspection alone is
unproductive and that quality control should instead be built in from the
beginning of a product or service to reduce non-value-add activities.
It is important to note that not all applications of six-sigma have
led to success. A common explanation for the failures is that companies
and managers read the latest self-help book and blindly followed the
author's recommendations. In doing this, they failed to fully grasp
the theory behind the approach. Experience only describes programs like
six-sigma, but understanding the theory will help to understand the
cause-and-effect relationships which can then be used for rational
prediction and management decisions. Also it is important to note that
many of the failures are not due to a flaw in six-sigma's
conceptual basis, but rather failures in the mechanics of team
operation. According to Evans and Lindsay, 60 percent of six-sigma
failures are due to the following factors:
* Lack of application of meeting skills
* Improper use of agendas
* Failure to determine meeting roles and responsibilities
* Lack of setting and keeping ground rules
* Lack of appropriate facilitative behaviors
* Ambivalence of senior management
* Lack of broad participation
* Dependence on consensus
* Too broad/narrow initiatives.
To ensure the success of a six-sigma implementation, it is vitally
important that the project champion and other top management officials
are intimately involved in the process. Also, implementing constant
training for all employees on related topics will minimize
misunderstanding and maximize resources, both financial and human.
CONCLUSION
The objective of this paper was not to examine the tools of
Six-sigma in great detail, but rather to explain their history and
evolution, because it is more important to understand the origin and
development of a discipline, and the reason(s) that it is better than
its predecessors. As explained, Six-sigma is used to resolve problems
that result in a deviation between what should be happening and what is
actually happening. The methodology and tools of Six-sigma can handle
any problem type from conformance to efficiency to product/service
design. However, in order for the program to effectively mitigate all
project risks and be implemented successfully, there must be commitment
from the top down. Through this paper the history of quality has been
presented. As its definition and application have evolved from Frederick
Taylor's scientific management theories to the present program of
six-sigma, and even looking forward to more refined programs such as Fit
Sigma, quality continues to be a concept of vital importance for all
businesses and industries. In this paper, the underlying theory and
concepts contributed by each quality philosopher and each revolutionary
model were explained to show how they are similar and, more importantly,
how they differ. For each company and each industry the
same quality control methodology may not be appropriate because each
operates under different circumstances. The benefits that accrue from
applying the different techniques of six-sigma to various production
situations are invaluable to a company truly intent upon quality
improvement, longevity in its market, and global competitiveness.
REFERENCES
Basu, R. (2001). Six-sigma to Fit Sigma. IIE Solutions, 33(7),
28-34.
Benedetto, A. R. (2003). Adapting manufacturing-based Six-sigma
methodology to the service environment of a radiology film library.
Journal of Healthcare Management, 48(4), 263.
Black, K., Revere, L., & Hug, A. (2004). Integrating six-sigma
and CQI for improving patient care. The TQM Magazine, 16(2), 105.
Bossert, J., & Cox, D. (2005). Driving organic growth at Bank
of America. Quality Progress, 38(2), 23-28.
Druley, S. & Rudisill, F. (2004). Which six-sigma metric should
I use? Quality Progress, 37(3), 104.
Evans, J.R. & Lindsay, W. M. (2004). An introduction to
six-sigma. United States: Thomson South-Western.
Evans, J.R. & Lindsay, W. M. (2005). The management and control
of quality. United States: Thomson South-Western.
Gitlow, H. S. & Levine, D. M. (2005). Six-sigma for green belts
and champions. United States: Pearson Education.
Hambleton, L. (2007). Treasure chest of six-sigma growth methods,
tools and best practices. United States: Pearson Education.
iSixSigma (2007). Six sigma DMAIC roadmap. Retrieved January 15,
2008 from http://www.isixsigma.com/library/content/c020617a.asp.
iSixSigma (2002). Sigma level. Retrieved January 15, 2008 from
http://www.isixsigma.com/dictionary/Sigma_Level84.htm.
Lazarus, I. R. & Stamps, B. (2002). The promise of six-sigma.
Managed Healthcare Executive, 12(1), 27-30.
Samuels, D. I. & Adomitis, F. L. (2003). Six-sigma can meet your
revenue-cycle needs. Healthcare Financial Management, 57(11), 70.
Simon, K. (2007). SIPOC diagram. Retrieved January 15, 2008 from
http://www.isixsigma.com/library/content/c010429a.asp.
Skymark Corporation (2007). Control Charts. Retrieved January 12,
2008 from http://www.skymark.com/resources/tools/control_charts.htm.
Skymark Corporation (2007). Dr. W. Edwards Deming. Retrieved
January 12, 2008 from
http://www.skymark.com/resources/leaders/deming.asp.
Skymark Corporation (2007). Philip Crosby: The Fun Uncle of the
Quality Revolution. Retrieved January 12, 2008 from
http://www.skymark.com/resources/leaders/crosby.asp.
Sutterfield, J.S. & Kelly, C. S. J. (2006). The six-sigma
quality evolution and the tools of six-sigma. 2006 IEMS Proceedings,
Cocoa Beach, FL, pp. 370-378.
Swinney, J. (2007). 1.5 sigma process shift explanation.
Retrieved January 15, 2008 from
http://www.isixsigma.com/library/content/c010701a.asp.
Dominique Drake, Florida A&M University
J. S. Sutterfield, Florida A&M University
Christopher Ngassam, Florida A&M University
Table 1: Defects per Million Opportunities at Various Sigma Levels
Sigma Level Defects per Million Opportunities
One Sigma 690,000
Two Sigma 308,000
Three Sigma 66,800
Four Sigma 6,210
Five Sigma 230
Six Sigma 3.4
Table 2: Defectives per Million Opportunities for Various Process
Off-Centering and Quality Levels
Quality Level
Off-Centering 3-sigma 3.5-sigma 4-sigma 4.5-sigma
0 2,700 465 63 6.8
0.25-sigma 3,577 666 99 12.8
0.5-sigma 6,440 1,382 236 32
0.75-sigma 12,288 3,011 665 88.5
1-sigma 22,832 6,433 1,350 233
1.25-sigma 40,111 12,201 3,000 577
1.5-sigma 66,803 22,800 6,200 1,350
1.75-sigma 105,601 40,100 12,200 3,000
2-sigma 158,700 66,800 22,800 6,200
Quality Level
Off-Centering 5-sigma 5.5-sigma 6-sigma
0 0.57 0.034 0.002
0.25-sigma 1.02 0.1056 0.0063
0.5-sigma 3.4 0.71 0.019
0.75-sigma 11 1.02 0.1
1-sigma 32 3.4 0.39
1.25-sigma 88.5 10.7 1
1.5-sigma 233 32 3.4
1.75-sigma 577 88.4 11
2-sigma 1,350 233 32