Page 1:

Common Pitfalls in Interpreting Quality Indexes

“My parts are all in spec but my Cpk is below required! How can this be?” Taking Cpk and Ppk at face value may not provide an accurate assessment of your shop’s true capability. From sample size to tool wear, from multiple stations (or spindles) to taper, there are many factors that can make your indexes appear worse or better than they actually are. In this presentation we will take a look at the common pitfalls in interpreting and computing quality indexes. At the end of this session, you will understand how you can better defend your quality indexes and what you can do to improve them.

Presented by: Alex Zaks, President, Altegra

April 16, 2013

Page 2:

Common Pitfalls in Interpreting Quality Indexes

Agenda

I. Introduction

II. Quick Review

III. Common Pitfalls

IV. Conclusions

V. Q&A

A disclaimer: for the sake of understanding, this presentation will intentionally not be strict with symbols and definitions.

Page 3:

Common Pitfalls in Interpreting Quality Indexes

Introduction

Feeling like you are behind the eight ball once in a while?

“My parts are all in spec but my Cpk is below required! How can this be?”

Page 4:

Quick Review

Key Stats

[Diagram: a distribution between LSL and USL, annotated with the Average (location), Sigma (spread), Cp/Pp, Cpk/Ppk, and the % defective (ppm) above the USL and below the LSL.]

Cpk/Ppk = number of times 3 sigma fit between the Average and the nearest spec

Cp/Pp = number of times 6 sigma fit in the Tolerance

Page 5:

Quick Review

Key Stats

Statistic             Meaning
Average               Process location
Sigma                 A statistical measure of variation (spread)
Cp and Pp             Assess variation relative to tolerance
Cpk and Ppk           Assess closeness of location to the nearest spec, given the current level of variation
ppm and % Defective   Fraction of defective parts out of the total population of parts

Important: All values we compute are ESTIMATES!
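For reference, here is a minimal sketch (not from the slides) of the textbook formulas behind the definitions above, assuming a Normal distribution; the variable names and example numbers are purely illustrative.

```python
from scipy.stats import norm

def cp(usl, lsl, sigma):
    """Number of times 6 sigma fits in the tolerance."""
    return (usl - lsl) / (6 * sigma)

def cpk(usl, lsl, mean, sigma):
    """Number of times 3 sigma fits between the average and the nearest spec."""
    return min(usl - mean, mean - lsl) / (3 * sigma)

def ppm_defective(usl, lsl, mean, sigma):
    """Estimated defective parts per million (both tails), assuming Normality."""
    frac_out = norm.sf(usl, loc=mean, scale=sigma) + norm.cdf(lsl, loc=mean, scale=sigma)
    return frac_out * 1e6

# Example: tolerance 10.0 +/- 0.5, process centered, sigma = 1/8 of the tolerance
print(cp(10.5, 9.5, 0.125))                  # 1.33
print(cpk(10.5, 9.5, 10.0, 0.125))           # 1.33
print(ppm_defective(10.5, 9.5, 10.0, 0.125)) # ~63 ppm
```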

Page 6:

Quick Review

Impact of Location and Spread

Low Spread, Average on Nominal:         Cp – High ↑   Cpk – High ↑    ppm – Low ↓
Low Spread, Average away from Nominal:  Cp – High ↑   Cpk – Lower ↘   ppm – Higher ↗
High Spread, Average on Nominal:        Cp – Low ↓    Cpk – Lower ↘   ppm – Higher ↗
High Spread, Average away from Nominal: Cp – Low ↓    Cpk – Low ↓     ppm – High ↑

Page 7:

Common Pitfalls

Problems with indexes may be due to:

• index design defects (formulas)

• use in inappropriate conditions (stability, distribution type)

• use for a wrong purpose (compute ppm based on Cpk)

• incorrect interpretation (data collection vs. results)

• forgetting that indexes are estimates, not actuals

Problems with indexes may lead to either an overestimate or an underestimate of the quality level.

Page 8:

Cpk / Ppk “Calculator”

Cpk “calculator” inputs:
• Average
• Average Variation (by sample)
• USL / LSL

Ppk “calculator” inputs:
• Average
• Total Variation (all samples combined into one)
• USL / LSL
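A rough sketch of the two “calculators” above, assuming subgrouped data. The pooled within-sample standard deviation is used here as one common stand-in for the “average variation (by sample)”; SPC packages often use control-chart constants (R-bar/d2 or s-bar/c4) instead, so exact values may differ slightly.

```python
import numpy as np

def cpk_and_ppk(samples, usl, lsl):
    """samples: list of 1-D arrays, one array per subgroup/sample."""
    all_readings = np.concatenate(samples)
    mean = all_readings.mean()

    # Cpk "calculator": average (within-sample) variation, pooled across subgroups
    sigma_within = np.sqrt(np.mean([s.var(ddof=1) for s in samples]))

    # Ppk "calculator": total variation, all samples combined into one
    sigma_total = all_readings.std(ddof=1)

    def index(sigma):
        return min(usl - mean, mean - lsl) / (3 * sigma)

    return index(sigma_within), index(sigma_total)

# Illustrative data: three subgroups whose averages drift (e.g., tool wear)
rng = np.random.default_rng(0)
subgroups = [rng.normal(loc=10.0 + shift, scale=0.05, size=5)
             for shift in (-0.08, 0.0, 0.08)]
cpk, ppk = cpk_and_ppk(subgroups, usl=10.5, lsl=9.5)
print(f"Cpk = {cpk:.2f}, Ppk = {ppk:.2f}")   # Ppk comes out lower once the samples drift
```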

Page 9:

Cpk vs. Ppk

[Diagram: Cases 1, 2, and 3 – samples plotted between LSL and USL. Uptime is the same; the Grand Average is on the Nominal; the Sample Sigma is the same in all samples.]

Cpk: the same in all cases.

Ppk:
• In Case 1 – a little worse than Cpk, and better than in Cases 2 and 3
• The same in Cases 2 and 3

Page 10:

Cpk, Ppk and Process Drift due to Tool Wear: Example

[Diagram: three “batches” (think: batch, box, or lot) drifting between LSL and USL. The sample spread is the same in all three samples and is used in Cpk; the overall spread is used in Ppk. The distance from these samples’ averages to the Nominal is 1/8 of the Tolerance.]

Sample Sigma     Cp      Ppk     Cpk     “By-batch” Cpk     ppm(Ppk)   ppm(Cpk)   ppm(“By-batch” Cpk)
Sigma = 1/12 T   2.00    1.26    2.00    1.50               149        0          7
Sigma = 1/10 T   1.67    1.19    1.67    1.25               349        1          177
Sigma = 1/8 T    1.33    1.09    1.33    1.00               1,115      63         2,764

Note: In this example, all batches are assumed to be of equal size. The overall approach also works for batches of different sizes.

Page 11:

Cpk, Ppk and Process Drift due to Tool Wear

[Diagram: the same three drifting “batches” between LSL and USL as on the previous page.]

For a Process with Drift due to Tool Wear:

• Neither Cpk nor Ppk can be used to estimate defectives
• Ppk is likely to be an overestimate of defectives
• Cpk can be an underestimate of defectives
• The alternate “by-batch” ppm and the corresponding Cpk are more accurate estimates
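The “by-batch” idea can be sketched as follows. This is an illustration of the approach, not necessarily the exact computation behind the table above: estimate ppm for each batch from that batch’s own average and sigma, then weight the batch ppms by batch size.

```python
import numpy as np
from scipy.stats import norm

def by_batch_ppm(batches, usl, lsl):
    """Estimate ppm batch by batch, then weight by batch size.
    batches: list of 1-D arrays of readings, one per batch/box/lot."""
    total = sum(len(b) for b in batches)
    ppm = 0.0
    for b in batches:
        mean, sigma = b.mean(), b.std(ddof=1)
        frac_out = norm.sf(usl, mean, sigma) + norm.cdf(lsl, mean, sigma)
        ppm += frac_out * 1e6 * len(b) / total   # weight by batch size
    return ppm

# Usage sketch: by_batch_ppm([batch1_readings, batch2_readings, batch3_readings], usl, lsl)
```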

Page 12:

ppm “Calculator” (% Defective)

[Diagram: ppm (% defective) is obtained either from look-up tables (based on the Normal distribution) or from a ppm “calculator”. Assumptions: a Normal distribution and a process in statistical control.]

Page 13:

Bilateral Tolerances: ppm based on Cpk/Ppk overestimates defectives

Cpk = Min (Cpl, Cpu)

ppm is computed for both sides based on the worse side.

ppm (Cpl) + ppm (Cpu) ≤ ppm (Cpk)

Result: for bilateral tolerances, ppm computed based on Cpk/Ppk tends to overestimate defectives.

Page 14:

Non-Linear Relationship between Cpk and Ppm

[Graph: Ppm as a function of one-sided Cpk (partial graph). X-axis: Cpk from 1.00 to 2.00; y-axis: ppm from 0 to 1,600.]

Small changes in low Cpk values correspond to increasingly large changes in ppm values.

At a Cpk of 1.20, ppm is 159; at a Cpk of 1.10, ppm is 483. In this example, a 0.1 decrease in Cpk leads to a 3x increase in ppm.
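The curve comes directly from the Normal tail probability: one-sided ppm = 10^6 × P(Z > 3·Cpk). A two-line check reproduces the 159 and 483 figures quoted above.

```python
from scipy.stats import norm

def one_sided_ppm(cpk):
    # Nearest-spec tail only: ppm = 1e6 * P(Z > 3 * Cpk) for a Normal process
    return 1e6 * norm.sf(3 * cpk)

print(round(one_sided_ppm(1.20)))  # 159
print(round(one_sided_ppm(1.10)))  # 483
```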

Page 15:

Bilateral Tolerances: ppm based on Cpk/Ppk overestimates defectives

Cpl    Cpu    Cpk    ppm(Cpk)   ppm(Cpl) + ppm(Cpu)   ppm Delta   ppm % Penalty   “Reverse” Cpk
1.0    1.0    1.0    2,700      2,700                 0           0               1.0
1.0    1.33   1.0    2,700      1,383                 1,317       95.2%           1.07
1.0    1.67   1.0    2,700      1,350                 1,350       100.0%          1.07

In some cases, computing ppm based on Cpk instead of Cpl and Cpu may double your estimate of defectives.

If your ppm looks terrible, consider computing it based on Cpl and Cpu instead of Cpk. More generally, Cpk/Ppk indexes are not needed at all to compute ppm: ppm can be computed directly from the Average, Sigma, and the Normal distribution.
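A short sketch of the comparison in the table above: ppm computed from Cpl and Cpu separately versus ppm based on Cpk alone (both tails taken at the worse side). It reproduces the 1,383 vs. 2,700 row.

```python
from scipy.stats import norm

def ppm_from_sides(cpl, cpu):
    """ppm from each side separately (the more accurate way)."""
    return 1e6 * (norm.sf(3 * cpl) + norm.sf(3 * cpu))

def ppm_from_cpk(cpl, cpu):
    """ppm with both tails taken at the worse side, i.e., based on Cpk alone."""
    cpk = min(cpl, cpu)
    return 1e6 * 2 * norm.sf(3 * cpk)

print(round(ppm_from_sides(1.0, 1.33)))  # ~1,383
print(round(ppm_from_cpk(1.0, 1.33)))    # ~2,700
```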

Page 16:

All Indexes are Estimates

All indexes we compute are estimates, not true values.

They are not:

• actuals in advance of producing the product, or

• true values obtained at a lower inspection cost.

Page 17:

Quality of Estimates and Confidence Intervals

How good are our estimates? What is the discrepancy between the estimates and the true values?

A confidence interval (CI) for an index depends on:

• Confidence Level (alpha)

• Sample Size

• Type of Distribution

Example:
Distribution: Normal
Sample Size = 50
Confidence Level = 95% (Alpha = 0.95)
Computed Cpk = 1.33
Cpk Confidence Interval = [0.93 .. 1.75]

Note: Confidence intervals presented by Altegra are based on the assumption of Normality only, and require no additional restrictions on the process, as opposed to other approaches.
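For orientation only: the snippet below uses the commonly cited Normal-theory approximation for a Cpk confidence interval found in standard SPC texts. It is not Altegra's method and will not reproduce the [0.93 .. 1.75] interval on this slide; it merely illustrates how such an interval depends on the sample size and confidence level.

```python
from math import sqrt
from scipy.stats import norm

def cpk_ci_approx(cpk_hat, n, conf=0.95):
    """Commonly cited Normal-theory approximation for a Cpk confidence interval.
    NOT Altegra's method - it will not reproduce the slide's [0.93 .. 1.75]."""
    z = norm.ppf(1 - (1 - conf) / 2)
    half_width = z * sqrt(1 / (9 * n) + cpk_hat**2 / (2 * (n - 1)))
    return cpk_hat - half_width, cpk_hat + half_width

print(cpk_ci_approx(1.33, 50))   # roughly (1.05, 1.61) under this approximation
```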

Page 18:

Quality of Estimates and Confidence Intervals

For the Normal Distribution, Sample Size = 50, and Confidence Level 95% (Alpha = 0.95):
if the computed Cpk value is the middle value below, then the true Cpk value is between Cpk- and Cpk+.

Cpk-    Cpk     Cpk+
1.46    2.00    2.57
1.20    1.67    2.17
0.93    1.33    1.75
0.68    1.00    1.34
0.42    0.67    0.93
0.15    0.33    0.51

Page 19:

Quality of Estimates and Confidence Intervals

[Chart: the confidence intervals for computed Cpk values of 1.67, 1.33, and 1.00, plotted. Normal Distribution, Sample Size = 50, Confidence Level 95%.]

Page 20:

The Impact of Double-Sided Cpk Confidence Intervals on Ppm Estimates

Normal Distribution, Sample Size = 50, Confidence Level 95%

Cpk-    Cpk     Cpk+    ppm(Cpk-)   ppm(Cpk)   ppm(Cpk+)
1.20    1.67    2.17    318         1          0
0.93    1.33    1.75    5,271       66         0
0.68    1.00    1.34    41,350      2,700      58

(Unrounded ppm values: 318.2914 / 0.5452 / 0.0001; 5,270.9230 / 66.1037 / 0.1524; 41,350.1892 / 2,699.9344 / 58.2262.)

For a Cpk of 1.33, the estimated ppm is between 0 and 5,271.

Page 21:

The Impact of a Single-Sided Cpk Confidence Interval on Ppm Estimates

Normal Distribution, Sample Size = 50, Confidence Level 95%

Cpk-    Cpk     ppm(Cpk)   ppm(Cpk-)
1.52    2.00    0          4
1.26    1.67    0          166
0.98    1.33    66         3,240
0.72    1.00    2,670      31,960
0.45    0.67    44,432     178,342
0.17    0.33    322,174    601,232

The numbers with a single-sided Cpk are a little better than with the double-sided one, but the overall concern is the same: ppm based on the lower end of the Cpk confidence interval may be significantly higher than ppm based on the computed Cpk.

Page 22:

Improving (Squeezing) Confidence Intervals

The only way to improve (squeeze) the confidence interval is to increase the sample size… but this might not be practical.

Sample Size = 50
Computed Cpk   Cpk-    Cpk+    Width of Confidence Interval   Width as % of Computed Value
1.00           0.68    1.34    0.66                           66%
1.33           0.93    1.75    0.81                           61%
1.67           1.20    2.17    0.97                           58%
2.00           1.46    2.57    1.12                           56%

Sample Size = 1,000
Computed Cpk   Cpk-    Cpk+    Width of Confidence Interval   Width as % of Computed Value
1.00           0.93    1.07    0.15                           15%
1.33           1.24    1.42    0.18                           14%
1.67           1.56    1.78    0.21                           13%
2.00           1.88    2.12    0.25                           12%

Page 23:

Cpk/Ppk Confidence Intervals: Conclusions

Cpk confidence intervals for low sample sizes are very wide. Small sample sizes increase the uncertainty of the estimates.

Watch out for the low end of the confidence interval of low Cpk values – the corresponding ppm values may be very high.

Using a single-sided confidence interval will result in a slightly better lower-end value.

Confidence intervals are computed for a specific distribution type, a specific sample size, and a specific confidence level.

Currently, formulas exist for the Normal distribution, for Ppk or single-sample Cpk.

Given the above, what’s the point of thinking about confidence intervals for Cpk? It’s another reason why Cpk can be so inaccurate and why ppm values computed based on Cpk can contain a very large error.

Page 24:

Effects of Non-Normality on Cpk-based Ppm Estimates

• Most commonly used tables for obtaining ppm based on Cpk are for the Normal distribution (although there are also tables for a few other distributions)

• Normality is often assumed without verification… but it’s not necessarily so!

Looks pretty normal to me…

This must be the new normal…

Page 25:

Question:

If you use normal tables to compute ppm but the data is not normal, what do you think the impact on ppm is?

Effects of Non-Normality on Cpk-based Ppm Estimates

• Using Normal distribution tables for data which is not normally distributed can result in Ppm “errors of several orders of magnitude” (i.e. 10x, 20x, 30x) Source: Introduction to Statistical Process Control, Douglas C. Montgomery, 4th edition, pp 360-361

• The table says ppm = 100… but it might be 1,000 or 10,000 or near zero.

• That’s like saying: my Cpk is 1.30 … but it might be 1.10 or 0.86 or 2.0.

What would be nice:

– run tests for normality,

– if the distribution is normal, use normal-distribution tables to determine ppm,

– if the distribution is not normal, then attempt to determine the distribution,

– if you manage to determine the distribution, use distribution-specific tables or distribution-specific computations to determine ppm.

… Nice, but not practical for most QA departments. (A rough sketch of such a workflow is shown below.)
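A rough sketch of that workflow, with the specific test and the fallback distribution chosen purely as examples:

```python
import numpy as np
from scipy import stats

def estimate_ppm(data, usl, lsl, alpha=0.05):
    """Sketch: test for normality first, then pick a model for the tails."""
    _, p_value = stats.shapiro(data)             # one common normality test
    if p_value >= alpha:
        mean, sigma = data.mean(), data.std(ddof=1)
        frac_out = stats.norm.sf(usl, mean, sigma) + stats.norm.cdf(lsl, mean, sigma)
        return frac_out * 1e6, "normal"
    # Not normal: attempt another distribution (lognormal is shown purely as an example)
    shape, loc, scale = stats.lognorm.fit(data)
    dist = stats.lognorm(shape, loc, scale)
    return (dist.sf(usl) + dist.cdf(lsl)) * 1e6, "lognormal (example fallback)"
```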

Page 26:

Effect of Shape Non-Uniformity on Cpk and Ppm

Let’s say we have taper…

[Diagram: desired shape vs. actual shape with taper; readings are taken at the left end, the middle, and the right end.]

Page 27:

Effect of Shape Non-Uniformity on Cpk and Ppm: Capturing the Full Taper in Each Sample

[Diagram: left end, middle, and right end readings spread between LSL and USL.]

Taper = 25% of Tolerance in all rows.

Location Sigma              Cpk    Ppm(Cpk)   True Cpk   Ppm(True Cpk)   Computed Cpk - True Cpk   Ppm(Computed) - Ppm(True)   Reality vs. Perceived Quality
Sigma = 1/12 T (Cp = 2.00)  1.26   145        1.50       7               -0.24                     143                         Higher
Sigma = 1/10 T (Cp = 1.67)  1.19   349        1.25       177             -0.06                     172                         Higher
Sigma = 1/8 T (Cp = 1.33)   1.09   1,115      1.00       2,700           0.09                      -1,585                      Lower

Page 28:

Effect of Shape Non-Uniformity on Cpk and Ppm: Averaging of Readings on the Shape

[Diagram: left end, middle, and right end readings between LSL and USL; each piece is measured in several locations and the readings are averaged.]

Location Sigma              Taper as % of Location Sigma   Taper as % of Tolerance   Cpk    Ppm(Cpk)   True Cpk   Ppm(True Cpk)   Computed Cpk - True Cpk   Ppm(Computed) - Ppm(True)   Reality vs. Perceived Quality
Sigma = 1/10 T (Cp = 1.67)  50                             5                         1.67   0.6        1.58       2               0.08                      -1                          Lower
Sigma = 1/10 T (Cp = 1.67)  100                            10                        1.67   0.6        1.50       7               0.17                      -6                          Lower
Sigma = 1/10 T (Cp = 1.67)  150                            15                        1.67   0.6        1.42       21              0.25                      -21                         Lower
Sigma = 1/8 T (Cp = 1.33)   50                             6                         1.33   63.3       1.25       177             0.08                      -113                        Lower
Sigma = 1/8 T (Cp = 1.33)   100                            13                        1.33   63.3       1.17       465             0.17                      -402                        Lower
Sigma = 1/8 T (Cp = 1.33)   150                            19                        1.33   63.3       1.08       1,154           0.25                      -1,091                      Lower

Page 29:

Effect of Shape Non-Uniformity on Cpk and Ppm

Sample Formation Method: Measuring each piece in a different location (ex: left, middle, right)
  What happens: piece-to-piece sigma is replaced with taper
  Resulting estimate: for processes with low piece-to-piece variation (in the same location), the estimate is worse than reality*

Sample Formation Method: Measuring each piece in several locations and taking the average
  What happens: the process appears to run in a narrower band than actual
  Resulting estimate: appears to be better than reality*

(An illustrative simulation of both methods follows below.)
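An illustrative simulation (not from the slides) of the two sample-formation methods; the taper and sigma values are made up, but the effect matches the table above: rotating the measurement location inflates the apparent sigma, while averaging locations hides the taper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pieces = 500
taper = 0.06                    # size change from the left end to the right end (made up)
piece_sigma = 0.01              # true piece-to-piece variation at a fixed location (made up)
positions = np.array([-0.5, 0.0, 0.5])   # left end, middle, right end

# Each piece: its own average plus the taper at the measured position
piece_means = rng.normal(0.0, piece_sigma, n_pieces)
readings = piece_means[:, None] + taper * positions[None, :]

# Method 1: each piece measured at a different (rotating) location
rotating = readings[np.arange(n_pieces), np.arange(n_pieces) % 3]

# Method 2: each piece measured at all three locations, readings averaged
averaged = readings.mean(axis=1)

print(f"sigma at a fixed location:  {piece_sigma:.4f}")
print(f"sigma, rotating locations:  {rotating.std(ddof=1):.4f}")  # inflated by the taper
print(f"sigma, averaged locations:  {averaged.std(ddof=1):.4f}")  # ~piece sigma, hides the taper
```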

Page 30:

Examples of multi-stream processes

• Multi-Spindle Screw Machines

• Rotary Transfer Machines

• Multi-Cavity Fixtures

• Multi-Cavity Molds

• Layers of a Plating Rack, etc.

Effect of Multi-Stream Processes on Cpk and Ppm: What is a Multi-Stream Process?

Properties of a “multi-stream” process

• Parts travel through the machine via equivalent but different paths producing the same geometry

• Parts that follow the same path through the machine form a process stream

• In machining, different streams usually share the same tooling

• The output of all streams is usually combined when assessing product quality

(this is an informal definition)

Page 31:

Effect of Multi-Stream Processes on Cpk and Ppm: Stream Cpk, Combined Cpk, True Cpk

• Stream Cpk – computed using measurements from the same stream (i.e., spindle)

• Combined Cpk – computed using measurements from all streams

• True Cpk – the value corresponding to the true ppm of the final product

• True ppm can be computed from the ppms of the individual streams. When each stream produces the same number of pieces, Total (True) ppm = Average of the stream ppms
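A sketch of that statement with made-up numbers: the true ppm is the average of the stream ppms (equal volumes per stream), while a Cpk computed on the combined sample implies a much higher ppm.

```python
import numpy as np
from scipy.stats import norm

def ppm_normal(mean, sigma, usl, lsl):
    return 1e6 * (norm.sf(usl, mean, sigma) + norm.cdf(lsl, mean, sigma))

# Illustrative 3-spindle process: spindles 1 and 3 offset from nominal, spindle 2 centered
usl, lsl, nominal = 10.5, 9.5, 10.0
streams = [(nominal - 0.25, 0.1), (nominal, 0.1), (nominal + 0.25, 0.1)]  # (mean, sigma)

# True ppm: average of the stream ppms (equal number of pieces per stream)
true_ppm = np.mean([ppm_normal(m, s, usl, lsl) for m, s in streams])

# Combined-sample Cpk: mean and sigma of all streams pooled together (equal-weight mixture)
means = np.array([m for m, _ in streams])
combined_mean = means.mean()
combined_sigma = np.sqrt(np.mean([s**2 for _, s in streams]) + means.var())
combined_cpk = min(usl - combined_mean, combined_mean - lsl) / (3 * combined_sigma)
ppm_implied = 2e6 * norm.sf(3 * combined_cpk)

print(f"true ppm                 ~ {true_ppm:,.0f}")
print(f"combined-sample Cpk      ~ {combined_cpk:.2f}")
print(f"ppm implied by that Cpk  ~ {ppm_implied:,.0f}")   # much higher than the true ppm
```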

Page 32:

Effect of Multi-Stream Processes on Cpk and Ppm: Data Collection – One Piece from Each Stream

[Diagram: Stream 1, Stream 2, and Stream 3 between LSL and USL. On one side, the samples we would see if we collected data for each stream separately; on the other, the samples we get when we form samples by taking one piece from each stream. Stream = spindle, station, cavity, etc. In this example, the within-sample variation for all streams is the same.]

Page 33:

Effect of Multi-Stream Processes on Cpk and Ppm – Example 1: Impact of Stream Location
Stream = Spindle. Spindle Sigma = 1/10 T in all rows. “Distance” = distance from the spindle average to the Nominal (Spindles 1 & 3).

Distance   True Cpk   Combined Cpk   Ppm(Combined Cpk)   True Ppm   Ppm Spindles 1 & 3   Ppm Spindle 2   Combined Cpk - True Cpk   Combined Ppm - True Ppm   Reality vs. Perceived Quality
1/10 T     1.33       0.75           24,595              64         63                   1               -0.58                     24,531                    Lower
1/8 T      1.25       0.63           60,538              177        176                  1               -0.62                     60,361                    Lower
1/6 T      1.11       0.49           144,480             859        858                  1               -0.62                     143,621                   Lower
1/4 T      0.83       0.33           317,341             12,420     12,419               1               -0.50                     304,921                   Lower

Page 34:

Effect of Multi-Stream Processes on Cpk and Ppm – Example 2: Impact of Stream Variation
Stream = Spindle. Distance from the spindle average to the Nominal = 25% of Tolerance (Spindles 1 & 3) in all rows.

Spindle Sigma   True Cpk   Computed Cpk   Ppm(Computed Cpk)   True Ppm   Ppm Spindles 1 & 3   Ppm Spindle 2   Combined Cpk - True Cpk   Combined Ppm - True Ppm   Reality vs. Perceived Quality
1/12 T          1.00       0.34           313,981             64         2,700                0               -0.66                     311,281                   Better
1/10 T          0.83       0.33           317,341             177        12,419               1               -0.50                     304,921                   Better
1/8 T           0.67       0.33           323,406             859        45,500               63              -0.34                     277,843                   Better
1/6 T           0.50       0.32           336,006             12,420     133,614              2,700           -0.18                     199,685                   Better

Page 35:

Stream Cpk, Combined Cpk, True Cpk

When we compute Cpk for a multi-stream process based on a combined sample (measurements from all streams are combined into one sample), the Combined Cpk we obtain:

• May be lower than the Cpk of every individual process stream (the Combined Cpk can be worse than the worst of the streams);

• May be higher than the Cpk of some of the streams;

• Cannot be higher than the Cpk of the best of the streams (it cannot be better than the best of the streams).

Also, the Combined Cpk may be higher or lower than the Total (True) Cpk. Most likely, you can expect the Combined Cpk to be worse than the Total (True) Cpk. In a few cases, if you are lucky, it can be better than the Total (True) Cpk.

Note: The statements above apply to normally distributed data. They might apply to other distribution types, but we did not check all of them.

Page 36:

Data Collection

The data you choose to collect needs to reflect the sources of variation that can actually cause parts to go out of spec. Mixing data from different points on the shape, multiple spindles, machines, etc. dramatically distorts (and likely lowers) Cpk/Ppk and kills the value of these indexes in estimating ppm. Noise starts to overwhelm the true signal.

Page 37:

Measurement Error Contribution to Cpk and Ppm

[Diagram: samples centered between LSL and USL.]

Let’s say …
• The averages of all samples are on the nominal
• The variation in all samples is the same
• Cpk = 1.00
• A Gage R&R study shows that the EV sigma is 25% of Cpk’s sigma

Question: How much does measurement error cost you in terms of Cpk and Ppm?

Page 38:

Measurement Error Contribution to Cpk and Ppm

Computed   EV Sigma as %   Cpk Sigma as     EV Sigma as      “Clean”   Cpk       Ppm      “Clean”   Ppm
Cpk        of Cpk Sigma    % of Tolerance   % of Tolerance   Cpk       Penalty            Ppm       Penalty
0.67       5.00            24.88            1.24             0.67      0.00      44,431   44,165    266
0.67       10.00           24.88            2.49             0.67      0.00      44,431   43,370    1,061
0.67       25.00           24.88            6.22             0.69      0.02      44,431   37,901    6,530
0.67       40.00           24.88            9.95             0.73      0.06      44,431   28,301    16,130
1.00       5.00            16.67            0.83             1.00      0.00      2,700    2,667     33
1.00       10.00           16.67            1.67             1.01      0.01      2,700    2,569     131
1.00       25.00           16.67            4.17             1.03      0.03      2,700    1,946     754
1.00       60.00           16.67            10.00            1.25      0.25      2,700    177       2,523
1.33       5.00            12.53            0.63             1.33      0.00      66       65        1
1.33       10.00           12.53            1.25             1.34      0.01      66       61        5
1.33       25.00           12.53            3.13             1.37      0.04      66       38        28
1.33       80.00           12.53            10.03            2.22      0.89      66       0         66
1.67       5.00            9.98             0.50             1.67      0.00      1        1         0
1.67       10.00           9.98             1.00             1.68      0.01      1        0         0
1.67       25.00           9.98             2.50             1.72      0.05      1        0         0
1.67       99.00           9.98             9.88             11.84     10.17     1        0         1
2.00       5.00            8.33             0.42             2.00      0.00      0        0         0
2.00       10.00           8.33             0.83             2.01      0.01      0        0         0
2.00       25.00           8.33             2.08             2.07      0.07      0        0         0
2.00       99.99           8.33             8.33             141.42    139.42    0        0         0

Reminder: in this example, all sample averages are on the nominal. The computed Cpk values include the effects of measurement error.
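The “Clean” Cpk column can be reproduced from the usual variance-addition assumption: observed sigma² = product sigma² + EV sigma², so the clean sigma is sqrt(observed² − EV²). A small check against the Cpk = 1.00, EV = 60% row (process centered, as in the reminder above):

```python
from math import sqrt
from scipy.stats import norm

def clean_cpk(computed_cpk, ev_fraction):
    """Remove measurement error: observed variance = product variance + EV variance.
    ev_fraction = EV sigma as a fraction of the (observed) Cpk sigma."""
    clean_sigma_ratio = sqrt(1 - ev_fraction**2)   # clean sigma / observed sigma
    return computed_cpk / clean_sigma_ratio

def ppm_two_sided(cpk):
    """ppm for a centered process, both tails."""
    return 2e6 * norm.sf(3 * cpk)

print(round(clean_cpk(1.00, 0.60), 2))               # 1.25, as in the table row
print(round(ppm_two_sided(1.00)))                     # ~2,700 ppm before correction
print(round(ppm_two_sided(clean_cpk(1.00, 0.60))))    # ~177 "clean" ppm
```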

Page 39:

How much does measurement error cost you in terms of Cpk and Ppm?

• For processes with low Cpk values, measurement error may contribute from a hundred to thousands of ppm and reduce Cpk by a few hundredths to a few tenths of a point.

• For processes with high Cpk values, measurement error impact on Cpk varies but measurement error does not contribute much to ppm.

Measurement Error Contribution to Cpk and Ppm

Page 40:

Effect of Sample Formation on Cpk

Let’s say the customer asks for a 50-piece capability study.

Have you wondered what will produce a higher Cpk: 10 samples of 5 pieces each or 5 samples of 10 pieces each? Can the two be different?

If your study will collect 50 consecutively produced pieces, then…
• In theory, fewer samples of larger size should produce a slightly more accurate estimate of Cpk, but not necessarily a higher value.
• In practice, the result is highly dependent on the data and will be different for different data sets. So, you cannot tell in advance which grouping will produce a higher Cpk.

If your study will have multiple samples over a period of time, then…
• The size and the number of samples should be based on the nature of the process (i.e., multiple spindles?) and the sources of variation you want to include in the study.
• The sample size should probably be the same as you will use for real-time process control.

Page 41:

In-house Cpk vs Customer Cpk

“Your Cpk does not match our Cpk!”

What to consider:
• Did they just get one large sample of randomly drawn parts, as opposed to your Cpk being based on many samples collected during production?
• What gage did the customer use?
• Did they compute Cpk or Ppk?
• Does their sample include data from multiple machines, while your Cpk values are based on data separated by machine?
• Does the customer understand the impact of process trends due to tool wear on Cpk/Ppk values?
• If you take the confidence interval into account, are the two Cpk values really different?

Page 42:

Summary: Steps to improve quality estimates based on Cpk and Ppk

1. As much as feasible and economical, center the process and reduce variation (within sample and sample-to-sample).

2. Ppk: Reduce within-sample and sample-to-sample variation. Avoid using for Ppm estimates.

3. Cpk: Reduce within-sample variation.

4. Cpk: Consider using the alternate “by batch” method to compute Cpk.

5. If your Ppm looks terrible, consider computing it based on Cpu and Cpl instead of Cpk.

6. Bring spindles closer together in terms of averages and spread.

7. Reduce shape non-uniformity.

8. Reduce measurement error.

9. Increase total number of measurements to increase confidence.

10. Cpk/Ppk indexes are not needed at all to compute Ppm. Ppm can be computed directly from data and the knowledge of the distribution type.

11. When possible, educate the customer on the specifics of the machining processes and the pitfalls of Cpk/Ppk interpretation.


Page 43:

Common Pitfalls in Interpreting Quality Indexes Cpk/Ppk – Bottom Line

Cpk/Ppk are not good as estimates of absolute quality:
• Ppm values computed based on Cpk/Ppk can be very inaccurate
• Ppk will overestimate defectives in processes with trends due to tool wear
• Including wrong causes of variation in data collection may make the indexes meaningless

Cpk/Ppk might be OK as very approximate indicators of relative quality:
• Comparing feature to feature
• Comparing current vs. past or target
• Comparing Cpk to Ppk to identify processes with trend/instability

Keep confidence intervals in mind when comparing Cpk/Ppk values.

Page 44:

Cpk/Ppk – can’t live with them, can’t live without them.

What we need:

Adoption of more accurate methods to estimate quality levels

… based on more advanced statistical tools

… tailored to the specifics of individual manufacturing processes.

What is required for adoption of more accurate methods?

Understanding of new methods by producers and customers.

Deeper understanding of the manufacturing processes by customers.

Acceptance of new methods by customers.

Leadership from larger companies.

Common Pitfalls in Interpreting Quality Indexes Conclusion – Our 2¢

Page 45:

Common Pitfalls in Interpreting Quality Indexes – Credits

This presentation was developed by Altegra staff.
• Altegra produces SPC, Tool Change Management, Downtime Tracking, and Gage Calibration Management software.
• Visit us at: www.altegra.com

Our thanks go to:
• PMPA for inviting us to present
• Monte Guitar at PMPA for helping define and organize this session
• Miles Free at PMPA for bringing up the subject of confidence intervals

Copyright © 2013 Altegra. All Rights Reserved.

Page 46:

Questions?

For Post-Conference Questions ….

E-mail your questions to [email protected]

or

Post your questions under PMPA on LinkedIn

Common Pitfalls in Interpreting Quality Indexes Q & A

