THE ORIGINS OF SIX SIGMA

Sigma is the letter of the Greek alphabet used to denote standard deviation, a statistical measure of variation: how widely individual results spread around the expected outcome. Standard deviation can be thought of as a way of comparing the results a group of operations is expected to produce against those that fail to meet expectations.
The term "six sigma process" comes from the notion that if one has six standard deviations between the process mean and the nearest specification limit, there will be practically no items that fail to meet specifications. This is based on the calculation method employed in process capability studies.
In a capability study, the number of standard deviations between the process mean and the nearest specification limit is given in sigma units. As process standard deviation goes up, or the mean of the process moves away from the center of the tolerance, fewer standard deviations will fit between the mean and the nearest specification limit, decreasing the sigma number and increasing the likelihood of items outside specification.
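As a rough sketch of that calculation, assuming a set of sample measurements and a hypothetical pair of specification limits (none of these values come from the text), the sigma level is the distance from the process mean to the nearest limit, expressed in sample standard deviations:

    import numpy as np

    # Simulated measurements standing in for real process data (assumed values).
    rng = np.random.default_rng(0)
    measurements = rng.normal(loc=10.02, scale=0.05, size=200)
    lsl, usl = 9.85, 10.15  # hypothetical lower/upper specification limits

    mean = measurements.mean()
    std = measurements.std(ddof=1)  # sample standard deviation

    # Standard deviations that fit between the mean and the nearest spec limit:
    sigma_level = min(usl - mean, mean - lsl) / std
    print(f"sigma level = {sigma_level:.2f}, Cpk = {sigma_level / 3:.2f}")

A larger standard deviation, or a mean drifting toward one of the limits, shrinks sigma_level, exactly as the paragraph above describes.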
Because standard deviation can be measured, rates of defects, or exceptions, are measurable too. Six Sigma defines outcomes as close to perfection as is practically attainable: at six standard deviations (with the 1.5 sigma shift discussed below), we arrive at 3.4 defects per million opportunities, a yield of 99.99966 percent.
This would mean that at Six Sigma, an airline would lose only about three pieces of luggage for every one million it handles, or that a phone company would have only about three unhappy customers out of every one million who use the phone that day. The purpose of evaluating defects is not to eliminate them entirely, but to strive for the highest level of improvement we can realistically achieve.
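To make the arithmetic concrete, here is a minimal sketch converting the 3.4 DPMO figure into a yield and applying it to the luggage illustration (the numbers are simply those quoted above):

    dpmo = 3.4                               # defects per million opportunities
    yield_pct = 100 * (1 - dpmo / 1_000_000)
    print(f"yield = {yield_pct:.5f}%")       # 99.99966%

    bags_handled = 1_000_000
    bags_lost = bags_handled * dpmo / 1_000_000
    print(f"expected lost bags: {bags_lost}")  # about 3.4 per million handled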

Role of the 1.5 sigma shift

Experience has shown that in the long term, processes usually do not perform as well as they do in the short term. As a result, the number of sigmas that will fit between the process mean and the nearest specification limit is likely to drop over time, compared to an initial short-term study. To account for this real-life increase in process variation over time, an empirically based 1.5 sigma shift is introduced into the calculation. According to this idea, a process that fits six sigmas between the process mean and the nearest specification limit in a short-term study will in the long term fit only 4.5 sigmas – either because the process mean will move over time, or because the long-term standard deviation of the process will be greater than that observed in the short term, or both.

Hence the widely accepted definition of a six sigma process is one that produces 3.4 defective parts per million opportunities (DPMO). This is based on the fact that a process that is normally distributed will have 3.4 parts per million beyond a point that is 4.5 standard deviations above or below the mean (one-sided capability study). So the 3.4 DPMO of a "Six Sigma" process in fact corresponds to 4.5 sigmas, namely 6 sigmas minus the 1.5 sigma shift introduced to account for long-term variation. This is designed to prevent underestimation of the defect levels likely to be encountered in real-life operation.
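That correspondence is easy to check numerically. Here is a minimal sketch using SciPy's standard normal survival function, norm.sf, which gives the one-sided tail area:

    from scipy.stats import norm

    # Tail area beyond 4.5 sigma, scaled to parts per million:
    print(norm.sf(4.5) * 1e6)   # ~3.4 DPMO, the canonical Six Sigma figure

    # Without the 1.5 sigma shift, a true 6 sigma tail would be far smaller:
    print(norm.sf(6.0) * 1e6)   # ~0.001 DPMO, roughly one defect per billion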

Sigma levels

The table below gives long-term DPMO values corresponding to various short-term sigma levels.

Sigma level   DPMO      Percent defective   Percent yield   Short-term Cpk   Long-term Cpk
1             691,462   69%                 31%             0.33             –0.17
2             308,538   31%                 69%             0.67             0.17
3             66,807    6.7%                93.3%           1.00             0.50
4             6,210     0.62%               99.38%          1.33             0.83
5             233       0.023%              99.977%         1.67             1.17
6             3.4       0.00034%            99.99966%       2.00             1.50
7             0.019     0.0000019%          99.9999981%     2.33             1.83

Note that these figures assume that the process mean will shift by 1.5 sigma towards the side with the critical specification limit. In other words, they assume that after the initial study determining the short-term sigma level, the long-term Cpk value will turn out to be 0.5 less than the short-term Cpk value. So, for example, the DPMO figure given for 1 sigma assumes that the long-term process mean will be 0.5 sigma beyond the specification limit (Cpk = –0.17), rather than 1 sigma within it, as it was in the short-term study (Cpk = 0.33). Note also that the defect percentages indicate only defects exceeding the specification limit that the process mean is nearest to; defects beyond the far specification limit are not included.
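As a rough cross-check, the sketch below reproduces the table's long-term DPMO and Cpk columns directly from the 1.5 sigma shift rule (the column formatting is just for display):

    from scipy.stats import norm

    SHIFT = 1.5  # empirically assumed long-term shift, in sigma units

    print(f"{'sigma':>5}  {'long-term DPMO':>15}  {'ST Cpk':>6}  {'LT Cpk':>6}")
    for sigma in range(1, 8):
        # One-sided tail area beyond the nearest limit after the mean shifts:
        dpmo = norm.sf(sigma - SHIFT) * 1_000_000
        st_cpk = sigma / 3            # short-term Cpk from the initial study
        lt_cpk = (sigma - SHIFT) / 3  # long-term Cpk, 0.5 lower
        print(f"{sigma:>5}  {dpmo:>15,.3f}  {st_cpk:>6.2f}  {lt_cpk:>6.2f}")

Running this gives 691,462 DPMO at 1 sigma down to 3.4 at 6 sigma, matching the table.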
