Quality


FMEA is a systematic analysis of potential failure modes aimed at preventing failures. It is intended as a preventive-action process carried out before new products or processes are implemented, or before changes are made to existing ones. An effective FMEA identifies the corrective actions required to prevent failures from reaching the customer and to assure the highest possible yield, quality, and reliability.

In semiconductor device design and manufacturing, it is common to perform different types of FMEA. These can be divided into two primary categories: product related and process related. These are often called Design FMEA and Process FMEA, and they are often further subdivided to focus on specific areas of product or process development. The main purposes of an FMEA are:

To identify possible failure modes that could occur in the design or manufacturing of a product.

To identify corrective actions that could reduce or eliminate the potential for failures to occur.

To provide documentation of the process.

To quantify the risk level associated with each potential failure mode.

An FMEA is a two-phase process. First, the potential failure modes and corrective actions are identified. Then, after the corrective actions are implemented, the product or process is re-evaluated to determine whether the result is acceptable. This is best accomplished with a cross-functional team with members from all affected work groups (e.g., Marketing, Engineering, Manufacturing, Test, QRA).

Benefits of FMEA

An FMEA provides benefits to both the manufacturer and the customer. Some of these benefits include:

Assists in determining the best possible design and development options to provide high reliability and manufacturability potential.

Assists in considering the possible failure modes and their effect on the reliability and manufacturability of the product.

Provides a well-documented record of improvements from corrective actions implemented.

Provides information useful in developing test programs and in-line monitoring criteria.

Provides historical information useful in analyzing potential product failures during the manufacturing process.

Provides new ideas for improvements in similar designs or processes.

Documenting an FMEA

This preventative action process provides a methodical approach to study the cause and effect of potential failures. The form below (Fig. 1) is used to document the process.

 

Product or Process:    FMEA Type:    FMEA Date:
FMEA Team Members:    Rev __ / Rev Date:

Process/Product Description or Purpose | Potential Failure Modes | Potential Effect(s) of Failure | SEV | CLASS | Potential Causes/Mechanisms of Failures | OCC | Current Design/Process Control (Prevention / Detection) | DET | RPN | Recommended Actions | Who | When | Actions Taken | SEV | OCC | DET | RPN

Fig. 1: Typical FMEA Form

The Process/Product Description or Purpose should be clearly defined. This may be broken down into sub-processes, with each being considered separately. Potential Failure Modes lists the different ways the process might fail to meet the process requirements or design intent. The Potential Effect(s) of Failure is how the customer perceives the failure; the "customer" includes subsequent design or manufacturing operations and/or the end customer. Each of the failure modes and effects is assigned a Severity (SEV) value and Classification (CLASS). How a failure could occur should be described in terms of something that can be corrected or controlled and listed under Potential Causes/Mechanisms of Failures. The probability that a given cause or mechanism will occur is assigned a numeric value in OCC. Current Design/Process Control Prevention/Detection describes any controls that can prevent or detect each of the failure mechanisms. The probability that each control will effectively detect the failure is assigned a numeric value in DET.

The Risk Priority Number (RPN) is the product of Severity, Occurrence and Detection. These are assessed and engineering judgment is used to determine if the risk is acceptable. Recommended Actions are developed to reduce the RPN with priority given to the highest RPN values and customer defined specific characteristics. Once the actions are implemented, the Severity, Occurrence and Detection values are reassessed, and a new RPN is calculated. This process continues until the risk level is acceptable.
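As an illustration, the RPN arithmetic and prioritization described above can be sketched in a few lines of Python. The failure modes and 1-10 ratings below are hypothetical examples, not from any actual FMEA:

```python
def rpn(severity, occurrence, detection):
    """Risk Priority Number: the product of the SEV, OCC and DET ratings."""
    return severity * occurrence * detection

# Hypothetical failure modes with example 1-10 ratings
failure_modes = [
    {"mode": "solder void",    "sev": 7, "occ": 4, "det": 3},
    {"mode": "wire bond lift", "sev": 9, "occ": 2, "det": 5},
]

# Prioritize corrective actions: highest RPN first
ranked = sorted(
    failure_modes,
    key=lambda fm: rpn(fm["sev"], fm["occ"], fm["det"]),
    reverse=True,
)
```

After corrective actions are implemented, the ratings are reassessed and the sort is repeated until the top RPN is at an acceptable level.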

An FMEA is a living document that should be reviewed and updated periodically. Any change in design, process or use of the product should be updated in the FMEA.

Quality System Certifications - ISO/TS 16949, ISO 9001 & STACK

National Semiconductor Product Groups and Manufacturing Sites have successfully completed ISO/TS 16949-based (including ISO 9001:2000) assessments conducted by Det Norske Veritas (DNV) Certification Incorporated and are currently certified.

The following DNV certifications certify conformance of National Semiconductor's Quality Systems:

Corporate Sites Certification: ISO/TS 16949

Corporate Sites Certification: ISO 9001:2000

STACK Certification (all sites)

Santa Clara, CA

Arlington, TX

So. Portland, Maine

Greenock, UK

Malacca, Malaysia

Suzhou, China

DNV is one of the leading international certification bodies. For more information about Det Norske Veritas, visit their web site.


Local units have been awarded accreditation for their certification services by the respective National Authorities, in the following countries:

Australia, Belgium, Denmark, Finland, Germany, Holland, Italy, Sweden, Switzerland, United Kingdom

National Semiconductor has been audited and meets the requirements of the STACK International, StackTrack - Supplier Certification Program.



Reliability

National Semiconductor Corporation strives to achieve best-in-class quality and reliability performance on all its products through a systematic approach that emphasizes quality at every phase, from product development through manufacturing. From initial design conception to fabrication, test, and assembly, quality is built in and assured through stringent SPC monitoring of fabrication and assembly processes, materials inspections, wafer-level reliability (WLR), new-product qualifications, reliability monitoring of finished product, and strict change-control management.

What is Reliability?

Reliability is the characteristic expressed by the probability that the part will perform its intended function for a specific period of time under defined usage conditions.

Reliability Failures

There are two basic types of failures: early failures and wear-out failures. These are reflected in the curve known as the bathtub curve.


National Semiconductor uses Reliability Testing to ensure all its products are below targets set for Early Failure Rates in PPM and Wear Out Failures in FITs.

Qualification

New Processes and New Packages

New Processes and New Packages are qualified using a minimum of 3 lots (77 units per lot) tested for:

Early Failure Testing (915 samples)

Operating Life Test

Temperature and Humidity Biased Test

Temperature Cycling

Autoclave

ESD/Latch-Up

Board-Level Temperature Cycle (for packages)

Power Cycling and Data Retention Testing are also done when applicable.

Smart Quals

Products designed to process and package design rules, and using qualified processes and packages, are released using 168-hour reliability data. This approach supports time-to-market needs without compromising reliability. To ensure there is no customer risk, National has continuous reliability monitoring in place.

Reliability Monitor Program

An ongoing Reliability Monitor is in place to ensure that products manufactured on qualified processes under qualified reliability standards have not drifted.

Results of the Rel Monitor Program are published on National’s Quality Web Page; Test Frequency is as posted below.

TEST                       FREQUENCY
EFR (all major processes)  Every week
OPL (1000 hr)              Every 8 weeks
THBT (1000 hr)             Every 8 weeks
ACLV (96 hr)               Every 8 weeks
TMCL (1000 cycles)         Every 8 weeks

Reliability Testing Capabilities

The Reliability Test Services and ESD/Latch-Up Testing Labs are fully equipped to support reliability qualification testing. Details of the lab equipment are listed in the following two tables.

Reliability Testing Services Equipment Inventory

Dynamic Operating Life (Op Life): 13 ADEC burn-in ovens, 3 Wakefields, 1 AMT; all 65 to 160°C. Comments: ADEC vector driven; Wakefield and AMT driver-board driven.

Static Operating Life: 4 Jarvis ovens, 9 Marin; 65 to 160°C.

Temp & Humidity (T&H): 12 Blue M ovens; 85°C at 85% humidity (normal); can do 20 to 90°C at 30 to 90% humidity.

Temp Cycle: 2 Blue M systems, 4 Ransco systems (batch loaded); ranges -40 to 125°C, -65 to 150°C, -40 to 60°C, 0 to 125°C; capacities: 200 lbs each (Blue M); 1 at 200 lbs, 2 at 40 lbs, 1 at 151 lbs (Ransco).

Auto-Clave (ACLV): 5 Dispatch systems (batch loaded); 121°C at 15 PSI.

Highly Accelerated Stress Test (HAST): 2 Hirayama, 1 Express, 1 Dispatch; all 135°C at 85% RH/PSI; board loaded, voltage applied.

Power Temp Cycle: 1 Thermo Dynamic system, 3 ICA1 systems; 40 to 125°C (ambient); board loaded, voltage applied.

Air Power Cycle: 2 Approval systems; 25 to 150°C (ambient); board loaded, voltage applied.

Water Power Cycle: 1 Approval system; 25 to 150°C (ambient); board loaded, voltage applied.

Thermal Shock (liquid to liquid): 1 Approval system.

Electromigration: 1 Micro-instrument; 210°C, 175°C, 150°C; forcing current.

Highly Accelerated Life Test (HALT): 1 Qualmark system; vibration and thermal, 3-axis, 65 to 150°C.

BSBO - Ethernet cards: Plexis/Tigris.

Equipment in the ESD/Latch-Up Lab

System            Max Pins  HBM Voltage  MM Voltage  IEC 1000 Capable  On-Board Clock  Vectored Latch-Up
Keytek ZapMaster  256       25 - 12000   25 - 2000   Yes               No              No
RCDM              N/A       50 - 4000    N/A         No                No              No
MK-2              768       50 - 8000    50 - 2000   No                Yes             Yes

Failure Mechanisms/Failure Models

Various failure mechanisms are tested during Rel Testing. Major ones are listed below.

Failure Mechanism: Failure Model

Electromigration: Black's Model

Excessive Intermetallics: Kidson's Model

Reverse Bias Breakdown: Tasca

Stress Dependent Diffusive Voiding: Okabayashi Model (n not equal to 1), Okabayashi Model (n equal to 1)

Time Dependent Dielectric Breakdown: Fowler-Nordheim Tunnel Model

Slow Trapping: Positive Gate Voltage Model, Negative Gate Voltage Model

Metallization Corrosion: Plastic Metal Corrosion, Hermetic Metal Corrosion

Die Fracture: Westergaard-Bolger Model (Die), Suhir's Vertical Crack Model (Die), Suhir's Horizontal Crack Model (Die), Westergaard Model (Power), Suhir's Horizontal Crack (Power)

Modular Case Fatigue: Shear Fatigue Model (Case)

Modular Case Fracture: Shear Fatigue Model (Case)

Substrate Fracture: Westergaard-Bolger Model (Sub), Suhir's Vertical Crack Model (Sub), Suhir's Horizontal Crack Model (Sub)

Die Attach Fatigue: Attach Fracture Model (Brittle), Attach Fatigue Model (Brittle), Tensile Fatigue Model (Ductile), Shear Fatigue Model (Ductile), Raja Die Attach Fatigue

BGA Solder Fatigue: Time to Fail by Creep, Coffin-Manson BGA Solder Fatigue

Discrete Solder Fatigue: Discrete Solder Joint Fatigue (Cap, 90Pb10Sn), Discrete Solder Joint Fatigue (Cap, 63Sn37Pb)

Flip Chip Solder Fatigue: Inner Flip Chip (Revised), Hybrid Flip Chip (Revised)

Lead Seal Fracture: Principal Stress Model

Lead Solder Joint Fatigue: Thermal Cycle Fatigue Model

Lid Seal Fracture: Tensile Strength Model

Substrate Attach Fatigue: Substrate Attach Fracture Model, Substrate Attach Fatigue Model

Wire Bond Fatigue: Hu-Pecht-Dasgupta Model, Wirebond Pad Shear Failure, Bond Pad Fatigue (Revised)

Wire Fatigue: Hu-Pecht-Dasgupta Model

Electrostatic Discharge: Wunsch and Bell Model

Determination of Failure Rate (Point Estimate)

Failure rate can be determined using actual test results. Determine the "demonstrated" failure rate from actual test data as follows:

Failure Rate = No. of rejects / (sample size × no. of hours)

Example 1. Assume a sample size of 13,500, 2 failures, and a test duration of 500 hours. To calculate FR:

FR = 2 rejects / (13,500 devices × 500 hours)
FR = 2 / 6,750,000 device-hours = 0.000000296 rejects per device-hour
   = 296 FITs
   or 3,375,000 hours MTBF (the reciprocal of 0.000000296)
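The arithmetic in Example 1 can be reproduced with a short script (Python is used here purely for illustration):

```python
def failure_rate(rejects, sample_size, hours):
    """Demonstrated failure rate in failures per device-hour."""
    return rejects / (sample_size * hours)

# Example 1: 2 failures, 13,500 devices, 500-hour test
fr = failure_rate(2, 13_500, 500)
fits = fr * 1e9      # 1 FIT = 1 failure per 10^9 device-hours
mtbf = 1.0 / fr      # hours; the reciprocal of the failure rate
```

Running this gives roughly 296 FITs and an MTBF of 3,375,000 hours, matching the worked example.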

In expressing Failure Rate, the equivalent values below may be helpful.

No. of Failures per Device-Hour | Failure Rate | % per 1000 Hours | PPM (Hours) | FITs | MTBF (Hours)
1/1 x 10^9 | 0.000000001 | 0.0001 | 0.001 | 1         | 1 x 10^9
1/1 x 10^8 | 0.00000001  | 0.001  | 0.01  | 10        | 1 x 10^8
1/1 x 10^7 | 0.0000001   | 0.01   | 0.1   | 100       | 1 x 10^7
1/1 x 10^6 | 0.000001    | 0.1    | 1     | 1000      | 1 x 10^6
1/100,000  | 0.00001     | 1.0    | 10    | 10,000    | 1 x 10^5
1/10,000   | 0.0001      | 10.0   | 100   | 100,000   | 1 x 10^4
1/1,000    | 0.001       | 100    | 1000  | 1,000,000 | 1 x 10^3

Determination of Failure Rate (Statistical Estimates)

In addition to point estimates, FR and MTBF may be estimated by using the chi-square statistic at 2(r + 1) degrees of freedom, where r is the number of failures. The 50% probability statistic gives the "best estimate"; the 60% or 90% probability statistic gives the upper confidence limit.
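As a sketch of how such an estimate can be computed without statistical tables: because 2(r + 1) is always even, the chi-square CDF has a closed form that can be inverted numerically. The example below reuses the 6,750,000 device-hours and 2 failures from Example 1; the 60% upper limit is an assumption matching the confidence level used later in this document:

```python
import math

def chi2_ppf_even(p, dof):
    """Chi-square quantile (inverse CDF) for even degrees of freedom.

    For dof = 2n the CDF has the closed form
        P(X <= x) = 1 - exp(-x/2) * sum_{k=0}^{n-1} (x/2)**k / k!
    which is inverted here by bisection.
    """
    n = dof // 2

    def cdf(x):
        term, total = 1.0, 1.0
        for k in range(1, n):
            term *= (x / 2.0) / k
            total += term
        return 1.0 - math.exp(-x / 2.0) * total

    lo, hi = 0.0, 1.0
    while cdf(hi) < p:        # grow the bracket until it contains the quantile
        hi *= 2.0
    for _ in range(100):      # bisect to high precision
        mid = (lo + hi) / 2.0
        if cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def fr_upper_limit(failures, device_hours, confidence=0.60):
    """Upper confidence limit on failure rate, per device-hour."""
    return chi2_ppf_even(confidence, 2 * (failures + 1)) / (2.0 * device_hours)

# Example 1 revisited: 2 failures in 13,500 devices x 500 hours
fr60 = fr_upper_limit(2, 13_500 * 500)   # 60% upper limit, ~460 FITs
```

Note that the 60% upper limit (~460 FITs) is considerably higher than the 296-FIT point estimate, which is exactly the conservatism the confidence limit is meant to provide.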

Acceleration Factors

In order to express accelerated test results in terms of expected failure rate at actual use conditions, semiconductor manufacturers commonly use the Arrhenius model.

The Arrhenius model assumes that degradation of a performance parameter is linear with time, with the rate of degradation depending on the temperature stress. To put it another way, the Arrhenius equation relates the time rate of change of a process to the temperature at which the process is taking place.

If appropriate, the calculated acceleration factors listed in the following table may be used.

Acceleration Factors for Common Junction Temperatures and Common Activation Energies

Estimated TJ, Accelerated Test | Normal Use Application TJ                                            | Activation Energy
        | 25°C   | 35°C   | 40°C  | 45°C  | 50°C  | 55°C  | 60°C | 70°C | 85°C | eV
125°C   | 49     | 31     | 23    | 18    | 15    | 12    | 9.6  | 6.4  | 3.7  |
130°C   | 58     | 35     | 27.5  | 22    | 17.4  | 14    | 11.3 | 7.5  | 4.3  | 0.4
150°C   | 89     | 60     | 47    | 37.4  | 29.8  | 24    | 19.4 | 12.9 | 7.3  |
125°C   | 134    | 71     | 52.7  | 39.4  | 29.7  | 22.6  | 17.3 | 10.4 | 5.1  |
130°C   | 160    | 85     | 63.1  | 47    | 35.5  | 27.1  | 20.7 | 12.4 | 6.1  | 0.5
150°C   | 317    | 169    | 124   | 92.6  | 69.9  | 53.4  | 40.8 | 24.5 | 12   |
125°C   | 942    | 388    | 255   | 171   | 114   | 77.6  | 54   | 26   | 9.7  |
130°C   | 1,218  | 500    | 330   | 219   | 148   | 101   | 69   | 34   | 12.6 | 0.7
150°C   | 3,159  | 1,300  | 855   | 569   | 383   | 259.1 | 180  | 88   | 32.7 |
125°C   | 2,540  | 914    | 567   | 358   | 226   | 145   | 95.6 | 43.4 | 13.6 |
130°C   | 3,377  | 1,221  | 754   | 476   | 300   | 193   | 127  | 57.7 | 18.1 | 0.8
150°C   | 10,041 | 3,632  | 2,240 | 1,414 | 893   | 575   | 378  | 171  | 53.8 |
125°C   | 6,691  | 2,140  | 1,250 | 735   | 449   | 272   | 168  | 67   | 18.8 |
130°C   | 9,174  | 2,964  | 1,710 | 1,006 | 616   | 370   | 229  | 92.2 | 26   | 0.9
150°C   | 31,256 | 10,100 | 5,825 | 3,429 | 2,101 | 1,261 | 781  | 314  | 88.2 |
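The tabulated factors are consistent with the standard Arrhenius acceleration-factor formula, AF = exp[(Ea/k)(1/Tuse - 1/Tstress)], with temperatures in kelvin and k = 8.617 x 10^-5 eV/K. A minimal sketch:

```python
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def acceleration_factor(ea_ev, t_use_c, t_stress_c):
    """Arrhenius acceleration factor between use and stress junction temperatures."""
    t_use = t_use_c + 273.15       # convert Celsius to kelvin
    t_stress = t_stress_c + 273.15
    return math.exp((ea_ev / K_BOLTZMANN_EV) * (1.0 / t_use - 1.0 / t_stress))

# Ea = 0.7 eV, 55°C use junction temperature, 125°C accelerated test:
af = acceleration_factor(0.7, 55.0, 125.0)
```

This reproduces the corresponding table entry (77.6) to within rounding.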

Calculation of Applicable Junction Temperature

Failure rates and MTBFs obtained from Operating Life Tests pertain when the junction temperature is the same as the ambient test temperature. Temperatures used during OPL tests are usually TA = 125°C or TA = 150°C. In most cases, these ambient temperatures are very close to the junction temperature TJ. However, when a significant difference between TA and TJ exists, the respective TJ must be considered. This would be the case with parts that dissipate significant amounts of power, such as certain linear and MOS devices.

Confidence Factors

The failure rate resulting from a high-temperature bias test is an average, or estimate, of the typical expected failure rate for a product or process, but it has no statistical boundaries established.

National Semiconductor generally states the upper 60% confidence limit for the failure rate estimate using the chi-square statistic, per the following formula:

Failure Rate (upper limit) = chi²(confidence, 2(r + 1)) / (2 × total device-hours)

where r is the number of failures. Values of chi-square are found in a number of statistical tables; typical values are shown as follows:

Percentiles of the Chi² Distribution

(Values of Chi² corresponding to certain selected probabilities)

Typical Use:       AQL    Best Estimate   60% Confidence   LTPD or 90% Confidence
Probability in %:  5.0    50.0            60.0             90.0

df | Total Failures | 5%     | 50%    | 60%    | 90%
2  | 0              | 0.103  | 1.390  | 1.830  | 4.61
4  | 1              | 0.711  | 3.360  | 4.040  | 7.78
6  | 2              | 1.640  | 5.350  | 6.210  | 10.60
8  | 3              | 2.730  | 7.340  | 8.350  | 13.40
10 | 4              | 3.940  | 9.340  | 10.500 | 16.00
12 | 5              | 5.230  | 11.300 | 12.600 | 18.50
14 | 6              | 6.570  | 13.300 | 14.700 | 21.10
16 | 7              | 7.960  | 15.300 | 16.800 | 23.50
18 | 8              | 9.390  | 17.300 | 18.900 | 26.00
20 | 9              | 10.900 | 19.300 | 21.000 | 28.40
22 | 10             | 12.800 | 21.300 | 23.000 | 30.80
26 | 12             | 15.400 | 25.300 | 27.200 | 35.60
32 | 15             | 20.100 | 31.300 | 33.400 | 42.60
42 | 20             | 28.200 | 41.300 | 43.700 | 54.10

With regard to the role of a QAM, that depends on the nature of the company, the size of your facility, and whether or not you are wearing multiple hats or just one...

Our facility has approximately 250 employees, and some of the roles of our Quality System Manager (comparable to a QAM, I believe) are as follows:

* Management Representative for our ISO audits
* Manage and oversee our Internal Audit Process, including responsibility for overseeing the training and development of our Internal Auditors
* Manage and oversee our Corrective Action Process
* Assist in the effective resolution of our Customer Complaints
* Manage and oversee our Quality System Documentation
* Organize and schedule our Management Reviews
* Help support our Supplier Management Program (in conjunction with our Purchasing and Maintenance functions)
* Assist in the Quality Planning portion of Product Realization
* etc.

PS: Our Quality System Manager currently wears one hat.

Just our slant on things... ~ ERL ~ :)

There went the easy answer. Here is a list of the non-contract-specific things I normally handle:

Ensures documentation of quality procedures and work instructions
Develops and maintains a quality system in accordance with ANSI/ISO/ASQC Q9001:2000 Quality Management Systems - Requirements (2000-12-15: Third Edition)
Enforces adherence to the Quality Manual, procedures, work instructions, and ANSI/ISO/ASQC Q9001:2000
Trains auditors
Schedules and communicates information on external audits
Prepares and implements the audit schedule
Interfaces with project managers and the site manager
Interfaces with team leads
Interfaces with purchasing, systems engineering, and quality control
Retains, or monitors the retention of, quality records and their analysis
Participates in monthly Management Reviews and corporate program reviews
Chairs any ISO 9000-based/process meetings
May attend department product quality meetings
Takes notes at the Management Review meetings with the site manager
Performs, schedules, and creates process awareness training for staff
Acts as mediator or impartial chair in meetings as asked by management
Interfaces directly with management on process management resource needs
Informs management of any negative trends
Manages and validates the corrective and preventive action system
Manages the on-the-job training database
Coordinates process activities between teams
Process cheerleader: rewards for good process- and quality-related activities, and tours for the customer

Quality Manager Responsibilities


Posted on November 16, 2009 by At-PQC™

The Quality Manager is responsible for the administration of the Quality Plan and has the authority to manage all work affecting quality. The Quality Manager will provide leadership for the development, implementation, communication and maintenance of quality systems policies and procedures for the Company according to the approved quality system. A primary goal is to achieve a high degree of joint ownership of quality and compliance strategies with all of the major operational stakeholders in the Company while addressing regulatory requirements in an effective, timely and responsible manner.

Responsibilities

Formulate and manage the development and implementation of goals, objectives, policies, procedures and systems pertaining to the quality assurance and regulatory functions.

Develop, implement, communicate and maintain a quality plan to bring the Company's Quality Systems and Policies into compliance with quality system requirements.

Manage documentation related to Quality System guidelines.

Quality Assurance project lead for cross-functional projects, including determining QA timelines, plans and position strategies.

Manage and maintain the quality aspects of the Design Control Program, including but not limited to design input and output documentation, applicable risk analyses, verification and validation activities and formal design reviews.

Provide leadership for developing and directing Quality Assurance and Quality improvement initiatives (Cost-of-Quality reductions, Audit system, CAPA system, etc.) for all products, processes and services.

Manage and maintain the Company's internal quality audit program and assess improvement initiatives resulting from all Quality Audits, internal and external.

Effectively interact with Production and Development teams to maintain product supply and help introduce new products.

Manage training of all company personnel in the requirements, documentation and maintenance of the corporate Quality System.

Manage and maintain the Company's quality inspection and product release programs for incoming and in-process materials and components, processes and finished goods.

Establish an auditing program and lead compliance audits of third party suppliers, manufacturers and distributors.

Report on a timely basis to executive management on the performance of the quality system, any non-compliance issues and recommended actions.

Qualifications

Minimum of 5 years of Quality Systems supervisory/management experience within a related industry.

Experience interacting with regulatory authorities.

Experience in quality management systems.

Experience in quality system audits.

Working knowledge of design control processes.

Experience with statistical sampling plans and trending analyses.

Knowledgeable about electronic data management.

Professional certifications desirable (e.g. American Society for Quality (ASQ) Certified Quality Auditor (CQA)).

Skills

Ability to lead projects and programs with a positive "Get It Done" attitude.

Organized, attentive to detail and able to prioritize and handle multiple projects with competing deadlines.

Works efficiently both on an independent basis and as part of a team.

Ability to deal effectively with all levels within the organization as well as with external parties, including regulatory bodies.

Proven strong problem-solving ability with attention to root cause.

Maintains a personal and professional "continuous improvement" philosophy.

Excellent written and verbal communication skills; strong interpersonal skills.

The quality manager should be familiar with management tools for problem solving, process management and various metrics. Quality managers use problem-solving tools to determine root causes and suggest solutions from various perspectives, using data to make decisions.

Problem-solving tools include:

1. The Seven Quality Tools: Pareto charts, cause-and-effect diagrams, flowcharts, control charts, check sheets, scatter diagrams and histograms.

2. Basic management and planning tools: affinity diagrams, tree diagrams, process decision program charts, matrix diagrams, interrelationship digraphs, prioritization matrices and activity network diagrams.

3. Process improvement tools: root cause analysis, PDCA, the Six Sigma DMAIC model, failure mode and effects analysis and statistical process control.

4. Innovation and creativity tools: creative decision-making and problem-solving techniques that include brainstorming, mind mapping, lateral thinking, critical thinking and design for Six Sigma.

5. Cost of Quality: using prevention, appraisal, and internal and external failure costs to suggest ways and means to improve the bottom line.

Process Management Tools include:

1. Planning and setting goals


2. Establishing controls

3. Monitoring and measuring performance

4. Documentation

5. Cycle-time reduction

6. Waste elimination

7. Theory of constraints for local vs. system optimization and physical vs. policy constraints that affect throughput

8. Elimination of special causes of variation

9. Maintaining improvements

10. Continuous improvement

11. Process mapping

12. Flowcharting

13. 5 S’s

14. Just in time

15. Kanban

16. Value streams

Measurement, Assessment and Metric Tools include:

1. Goal-question-metric modeling to identify when, what and how to measure projects and processes

2. Sampling techniques when appropriate

3. Statistical analysis to measure central tendency, dispersion and types of distributions, to monitor processes and make data-oriented decisions

4. Trend and pattern analysis to assess data sets, graphs and charts for various cyclic, seasonal and environmental trends and pattern shifts

5. Theory of variation to identify common and special causes of variation

6. Process capability using Cp and Cpk indices

7. Reliability and validity measurement theories using content, construct and criterion types of measures

8. Qualitative assessments using anecdotal feedback, observations and focus group output instead of objective measurements

9. Survey analysis interpretation

Additional quality manager skills include:

1. Customer relationship management to ensure partnerships and alliances

2. Energizing internal customers to improve products, processes and services

3. Identification and prioritization of customer needs and expectations using tools such as Voice of the Customer, House of Quality, Quality Function Deployment, focus groups and customer surveys

4. Measurement of customer satisfaction and loyalty using complaints, surveys, interviews, warranty data, value analysis and corrective actions

5. Conflicting-requirement resolution and management of resources

How-To Achieve the Goal

Cheat – that's right – cheat; well, another way to put it is to re-design the process to control the sheet music. Remember the movie clip from "Dune" – "He who controls the spice controls the universe"? Well, if you control the sheet music then you control the orchestra. That's the tail wagging the dog, but the technique works. Everybody in the orchestra needs sheet music, so all you have to do is implement Efficient QMS™ in your business process to truly orchestrate (manage) an efficient quality management system.


What is PPM?

(Parts per million)

PPM (Parts per million) is a measurement used today by many customers to measure quality performance.

Definition: One PPM means one (defect or event) in a million or 1/1,000,000

There was a time when you were considered a pretty good supplier if your defect rate was less than 1% (10,000 PPM); then the expectation was increased to 0.1%, or 1,000 PPM. Now the rate for most automotive components is targeted at 25 PPM, or 0.0025%.

To calculate: for example, let's say you had 25 defective pieces in a shipment of 1,000 pieces. 25/1000 = 0.025, or 2.5% defective. 0.025 x 1,000,000 = 25,000 PPM.
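The calculation can be scripted in a couple of lines; the shipment and production figures below are the ones from the examples in this article:

```python
def ppm(defects, total):
    """Defect rate expressed in parts per million."""
    return defects / total * 1_000_000

def allowed_defects(parts_per_year, ppm_target):
    """How many defective parts a PPM target permits per year."""
    return parts_per_year * ppm_target / 1_000_000

shipment_ppm = ppm(25, 1_000)                        # 25 bad in 1,000 -> 25,000 PPM
yearly_allowance = allowed_defects(10_000_000, 25)   # 25 PPM target -> 250 parts/year
```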


Let's put it in perspective. Let's say you produce 10,000,000 parts per year for your customer. If you're like most companies, your plant operates around 5 days a week, 50 weeks per year, or 250 days a year. Let's assume your customer's requirement is 25 PPM. That means you would be allowed 25 pieces for every 1,000,000 pieces, or 250 defective parts per year. Or, to put it another way, one bad part per day! Notice I did not say one bad part per day per person, or job, or shift, or machine. One bad part for the entire plant for 24 hours!

Impossible? Yes, if you are still using the same manufacturing methods that were responsible for the 5 or 10% scrap you used to accept.

A friend of mine had the following quotation on his wall:

Insanity is doing things the same way we've always done them and expecting different results. 

The monitoring and measurement systems we have today have paved the way for improved quality. We can now control the process in order to control the quality. Quality cannot be inspected in after the fact; it is the result of careful planning, design, and execution.

If you don't measure it, you cannot control it.

If you would like to learn how to reduce your PPM while increasing profits, I can help:

First, by demonstrating a systematic and scientific approach to design and manufacturing in your facility.

Second, by training your personnel in the use of the tools and methodology.

For a more detailed article on PPM, visit http://www.drdiecast.com/PPM_is_it_possible.htm

Consulting and training services are offered by McClintic & Associates.

 

Process capability index

(From Wikipedia, the free encyclopedia)

In process improvement efforts, the process capability index or process capability ratio is a statistical measure of process capability: the ability of a process to produce output within specification limits.[1] The concept of process capability only holds meaning for processes that are in a state of statistical control. Process capability indices measure how much "natural variation" a process experiences relative to its specification limits, and allow different processes to be compared with respect to how well an organization controls them.


If the upper and lower specification limits of the process are USL and LSL, the target process mean is T, the estimated mean of the process is μ̂, and the estimated variability of the process (expressed as a standard deviation) is σ̂, then commonly accepted process capability indices include:

Index | Formula | Description
Cp | (USL - LSL) / (6σ̂) | Estimates what the process would be capable of producing if it could be centered. Assumes process output is approximately normally distributed.
Cp,lower | (μ̂ - LSL) / (3σ̂) | Estimates process capability for specifications that consist of a lower limit only (for example, strength). Assumes process output is approximately normally distributed.
Cp,upper | (USL - μ̂) / (3σ̂) | Estimates process capability for specifications that consist of an upper limit only (for example, concentration). Assumes process output is approximately normally distributed.
Cpk | min[(USL - μ̂) / (3σ̂), (μ̂ - LSL) / (3σ̂)] | Estimates what the process is capable of producing if the process target is centered between the specification limits. If the process mean is not centered, Cp overestimates process capability. Cpk < 0 if the process mean falls outside the specification limits. Assumes process output is approximately normally distributed.
Cpm | Cp / sqrt(1 + ((μ̂ - T) / σ̂)²) | Estimates process capability around a target, T. Cpm is always greater than zero. Assumes process output is approximately normally distributed. Cpm is also known as the Taguchi capability index.[2]
Cpkm | Cpk / sqrt(1 + ((μ̂ - T) / σ̂)²) | Estimates process capability around a target, T, and accounts for an off-center process mean. Assumes process output is approximately normally distributed.

σ̂ is estimated using the sample standard deviation.
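These definitions translate directly into code. The sketch below is my own helper (the `capability_indices` name and argument names are assumptions, not a standard API), and the normality caveats above still apply:

```python
import math

def capability_indices(usl, lsl, mean, sigma, target=None):
    """Compute common process capability indices from estimates of the
    process mean and standard deviation (assumes ~normal output)."""
    cp  = (usl - lsl) / (6 * sigma)     # potential capability if centered
    cpl = (mean - lsl) / (3 * sigma)    # lower one-sided capability
    cpu = (usl - mean) / (3 * sigma)    # upper one-sided capability
    cpk = min(cpl, cpu)                 # penalizes an off-center mean
    out = {"Cp": cp, "Cpl": cpl, "Cpu": cpu, "Cpk": cpk}
    if target is not None:              # Taguchi-style indices around T
        k = math.sqrt(1 + ((mean - target) / sigma) ** 2)
        out["Cpm"]  = cp  / k
        out["Cpkm"] = cpk / k
    return out

# USL=106, LSL=94, T=100, estimated mean=98.94, sigma=1.03:
ix = capability_indices(106.0, 94.0, 98.94, 1.03, target=100.0)
print({k: round(v, 2) for k, v in ix.items()})
# Cp ≈ 1.94, Cpk ≈ 1.60, Cpm ≈ 1.35, Cpkm ≈ 1.11
```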



Recommended values

Process capability indices are constructed to express more desirable capability with increasingly higher values. Values near or below zero indicate processes operating off target (μ̂ far from T) or with high variation.

Fixing values for minimum "acceptable" process capability targets is a matter of personal opinion, and what consensus exists varies by industry, facility, and the process under consideration. For example, in the automotive industry, the AIAG sets forth guidelines in the Production Part Approval Process, 4th edition, for recommended Cpk minimum values for critical-to-quality process characteristics. However, these criteria are debatable, and several processes may not be evaluated for capability simply because they have not been properly assessed.

Since process capability is a function of the specification, the process capability index is only as good as the specification itself. For instance, if the specification came from an engineering guideline without considering the function and criticality of the part, a discussion of process capability is useless; it would be more beneficial to focus on the real risks of having a part borderline out of specification. Taguchi's loss function better illustrates this concept.

At least one academic expert recommends[3] the following:

Situation | Recommended minimum process capability (two-sided specifications) | Recommended minimum process capability (one-sided specification)
Existing process | 1.33 | 1.25
New process | 1.50 | 1.45
Safety or critical parameter for existing process | 1.50 | 1.45
Safety or critical parameter for new process | 1.67 | 1.60
Six Sigma quality process | 2.00 | 2.00


It should be noted, though, that where a process produces a characteristic with a capability index greater than 2.5, the unnecessary precision may be expensive.[4]

Relationship to measures of process fallout

The mapping from process capability indices, such as Cpk, to measures of process fallout is straightforward. Process fallout quantifies how many defects a process produces and is measured by DPMO or PPM. Process yield is the complement of process fallout and, if the process output is approximately normally distributed, is approximately equal to the area under the normal probability density function between the specification limits.

In the short term ("short sigma"), the relationships are:

Cpk | Sigma level (σ) | Area under the probability density function, Φ(σ) | Process yield | Process fallout (DPMO/PPM)
0.33 | 1 | 0.6826894921 | 68.27% | 317311
0.67 | 2 | 0.9544997361 | 95.45% | 45500
1.00 | 3 | 0.9973002039 | 99.73% | 2700
1.33 | 4 | 0.9999366575 | 99.99% | 63
1.67 | 5 | 0.9999994267 | 99.9999% | 1
2.00 | 6 | 0.9999999980 | 99.9999998% | 0.002
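These short-term figures follow directly from the standard normal CDF. A sketch using only the Python standard library, for a centered process (sigma level = 3 × Cpk, fallout counted on both sides):

```python
import math

def phi(z: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def fallout_ppm(cpk: float) -> float:
    """Short-term, two-sided process fallout (DPMO/PPM) for a centered
    process: sigma level = 3 * Cpk, yield = area within +/- that level."""
    sigma_level = 3 * cpk
    process_yield = 2 * phi(sigma_level) - 1
    return (1 - process_yield) * 1_000_000

for cpk in (1.00, 4 / 3, 2.00):
    print(cpk, fallout_ppm(cpk))   # ~2700, ~63, ~0.002 PPM respectively
```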

In the long term, processes can shift or drift significantly (most control charts are only sensitive to changes of 1.5σ or greater

in process output), so process capability indices are not applicable as they require statistical control.

Example

Consider a quality characteristic with a target of 100.00 μm and upper and lower specification limits of 106.00 μm and 94.00 μm, respectively. If, after carefully monitoring the process for a while, it appears that the process is in control and producing output predictably, we can meaningfully estimate its mean and standard deviation.

If μ̂ and σ̂ are estimated to be 98.94 μm and 1.03 μm, respectively, then:

Index | Estimated value
Cp | 1.94
Cpk | 1.60
Cpm | 1.35
Cpkm | 1.11

The fact that the process is running about 1σ below its target is reflected in the markedly different values for Cp, Cpk, Cpm, and Cpkm.


References

1. "What is Process Capability?". NIST/Sematech Engineering Statistics Handbook. National Institute of Standards and Technology. Retrieved 2008-06-22.

2. Boyles, Russell (1991). "The Taguchi Capability Index". Journal of Quality Technology (Milwaukee, Wisconsin: American Society for Quality Control) 23 (1): 17–26. ISSN 0022-4065. OCLC 1800135.

3. Montgomery, Douglas (2004). Introduction to Statistical Quality Control. New York: John Wiley & Sons, Inc. p. 776. ISBN 9780471656319. OCLC 56729567.

4. Booker, J. M.; Raines, M.; Swift, K. G. (2001). Designing Capable and Reliable Products. Oxford: Butterworth-Heinemann. ISBN 9780750650762. OCLC 47030836.

See also

Process performance index



This page was last modified on 27 September 2010 at 08:14.


Text is available under the Creative Commons Attribution-ShareAlike License; additional terms may apply. See Terms of Use for details.

Wikipedia® is a registered trademark of the Wikimedia Foundation, Inc., a non-profit organization.


The Seven New Management and Planning Tools have their roots in operations research work done after World War II and in Japanese Total Quality Control (TQC) research. In 1979 the book Seven New Quality Tools for Managers and Staff was published, and in 1983 it was translated into English.

The seven new tools include:

1. Affinity Diagram  (KJ Method)

2. Interrelationship Diagraph  (ID)

3. Tree Diagram

4. Prioritization Matrix

5. Matrix Diagram

6. Process Decision Program Chart  (PDPC)

7. Activity Network Diagram



The seven new tools

Affinity Diagram

This tool takes large amounts of disorganized data and information and enables one to organize it into groupings based on natural relationships. It was created in the 1960s by the Japanese anthropologist Jiro Kawakita and is also known as the KJ diagram, after him. The affinity diagram is a special kind of brainstorming tool.

Interrelationship Diagraph

This tool displays all the interrelated cause-and-effect relationships and factors involved in a complex problem and

describes desired outcomes. The process of creating an interrelationship diagraph helps a group analyze the natural

links between different aspects of a complex situation.


Tree Diagram

This tool is used to break down broad categories into finer and finer levels of detail. It can map the levels of detail of the tasks required to accomplish a goal. Developing the tree diagram helps one move from generalities to specifics.

Prioritization Matrix

This tool is used to prioritize items and describe them in terms of weighted criteria. It uses a combination of tree and

matrix diagramming techniques to do a pair-wise evaluation of items and to narrow down options to the most desired or

most effective.

Matrix Diagram

This tool shows the relationship between items. At each intersection a relationship is either absent or present. It then gives information about the relationship, such as its strength, the roles played by various individuals, or measurements. Seven differently shaped matrices are possible: L, T, Y, X, C, R, and roof-shaped, depending on how many groups must be compared.

Process Decision Program Chart (PDPC)


A useful way of planning is to break down tasks into a hierarchy, using a Tree Diagram. The PDPC extends the tree

diagram a couple of levels to identify risks and countermeasures for the bottom level tasks. Different shaped boxes are

used to highlight risks and identify possible countermeasures (often shown as 'clouds' to indicate their uncertain nature).

The PDPC is similar to the Failure Modes and Effects Analysis (FMEA) in that both identify risks, consequences of

failure, and contingency actions; the FMEA also rates relative risk levels for each potential failure point.

Activity Network Diagram

This tool is used to plan the appropriate sequence or schedule for a set of tasks and related subtasks. It is used when

subtasks must occur in parallel. The diagram enables one to determine the critical path (longest sequence of tasks).

(See also PERT diagram.)


Further reading

Brassard, M. (1996) The Memory Jogger Plus+. ISBN 1-879364-83-2.

Seven New Management and Planning Tools

Seven Basic Tools of Quality

QS-9000

QS-9000 is the name for the Quality System Requirements used to increase customer confidence in the quality of suppliers.

The idea of QS-9000 is quite similar to ISO-9000, the international quality system standard, but QS-9000 applies particularly to the automotive industry: Chrysler Corporation, Ford Motor Company, General Motors Corporation, and truck manufacturers. QS-9000 is made up of three sections: an ISO-9000 based requirement, a sector-specific requirement, and a customer-specific requirement. These requirements help ensure that a supplier delivers a good quality product. Furthermore, by implementing QS-9000, a company can improve its product, customer satisfaction, and supplier relations.


Standards for ISO-9001 and QS-9000

Quality System Requirements | ISO 9001 | QS-9000
Management Responsibility | X | X
Quality System | X | X
Contract Review | X | X
Design Control | X | X
Document and Data Control | X | X
Purchasing | X | X
Control of Customer-Supplied Product | X | X
Product Identification and Traceability | X | X
Process Control | X | X
Inspection and Testing | X | X
Control of Inspection, Measuring, and Test Equipment | X | X
Inspection and Test Status | X | X
Control of Non-Conforming Product | X | X
Corrective and Preventive Action | X | X
Handling, Storage, Packaging, Preservation and Delivery | X | X
Control of Quality Audits | X | X
Training | X | X
Servicing | X | X
Statistical Techniques | X | X
Production Parts Approval Process | | X
Continuous Improvement | | X
Manufacturing Capability | | X
Customer-Specific Requirement | | X


Quality

Quality is the totality of characteristics of an entity that bear on its ability to satisfy stated and implied needs. In some references, quality is referred to as "fitness for use," "fitness for purpose," "customer satisfaction," or "conformance to the requirements."

To achieve satisfactory quality we must be concerned with all stages of the product or service cycle. In the first stage quality comes from a definition of needs, in the second from product design and conformance, and in the last from product support throughout the product's lifetime.

There are two major aspects of quality: quality of design and quality of conformance. Quality of design involves the variations of products or services in grades or levels of quality; this includes the types of materials used in construction, tolerances in manufacturing, reliability, and so on. Quality of conformance concerns how well the product conforms to the specifications and tolerances required by the design. It is influenced by the choice of manufacturing processes, the training and supervision of the workforce, the type of quality-assurance system used, and the motivation of the workforce to achieve quality.



Quality at Source (Source Inspection)

Source inspection is a technique used to prevent product defects by controlling the conditions that influence quality at their source. It is performed at the supplier's facilities to increase customer confidence in the supplier's product quality. The following elements are essential parts of source inspection:

The quality history of suppliers.
Any possible effects that occur during purchasing, based on the performance, safety, and reliability of the final product.
Product complexity.
The ability to measure product quality from buyer data.
The availability of special measuring equipment at the buyer's plant to perform the required inspection.
The product's nature and its quality.

It is important to have either external or internal company inspectors to assure adequate product control. A source inspection is performed to ensure that decision making is correct and unbiased. Furthermore, source inspection can be divided into two categories:

1. Vertical source inspection inspects the process flow to identify and control external conditions that affect quality.

2. Horizontal source inspection inspects an operation to identify and control internal conditions that affect quality.


SOP - Standard Operating Procedures

Standard Operating Procedures (SOPs) are the instructions that cover operational tasks. Initially, an SOP is based on Army-wide publications and then modified to fit local operating conditions and command policies. The scope of an SOP is extensive and varied; it provides the major instructions for all division elements of operational features.

In general, there are two formats for an SOP to follow:

A format that publishes all constituent documents, detailing the functions and responsibilities of subordinate units.

A format that is published as a basic document containing general instructions for all units, supplemented by specific instructions for each individual unit. This format is more detailed and easier to use.



SMED - Single Minute Exchange of Die

SMED, often called "Quick Changeover," is a process that can help reduce downtime due to set-ups and changeovers. Quick Changeover means reducing the time it takes to set up a machine or process. We use SMED as a guideline to eliminate wasted changeover time in the production process, especially when changing a machine over from one product to another.

There are six major steps to be concerned with:

1. Ensure that everything needed for the setup is organized and on hand, to avoid wasting time searching for things during the setup.

2. Move your arms but not your legs, to avoid spending too much time during adjustments or set-ups.

3. Do not remove bolts completely, to save time when loosening bolts and setting up the process.

4. Regard bolts as enemies; do whatever you can to get rid of them by using fastening methods that are quicker than bolts when changing over the process.

5. Do not allow any deviation from die and jig standards; save time by using the same standards, for example the same size of nuts and bolts for every die and jig.

6. Treat adjustment as waste: make the jig or fixture simple to set up and avoid wasting time adjusting positions.


SPC - Statistical Process Control

Statistical Process Control (SPC) is a collection of statistical techniques used to monitor critical parameters and reduce variation. We use SPC to achieve process stability and improve capability through the reduction of variability. The term "Statistical Quality Control" is often used interchangeably with "Statistical Process Control."

The objective of SPC is to get a process under control. This is done by identifying and eliminating any specific causes of variation not associated with the process itself. A process that is in control will consistently perform within its own natural limits.

SPC can be broken into two components: process control and acceptance sampling. In process control, SPC involves seven tools: Histogram, Check Sheet, Pareto Chart, Cause and Effect Diagram, Defect Concentration Diagram, Scatter Diagram, and Control Chart. These tools are often called "The Seven QC Tools," and most of them help identify a problem in the process. Acceptance sampling uses statistical sampling techniques to select a proper sample size and to decide whether an entire lot should be accepted or rejected.
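As a rough illustration of the "natural limits" idea, the snippet below computes three-sigma limits for a simple chart of individual measurements. This is a simplified sketch with made-up data; production SPC software typically estimates sigma from moving ranges or subgroup statistics rather than the overall sample standard deviation:

```python
import statistics

def control_limits(samples):
    """Return (LCL, center line, UCL) for a simple three-sigma chart."""
    center = statistics.mean(samples)
    sigma = statistics.stdev(samples)   # sample standard deviation
    return center - 3 * sigma, center, center + 3 * sigma

data = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.1]
lcl, cl, ucl = control_limits(data)
out_of_control = [x for x in data if not lcl <= x <= ucl]
print(lcl, cl, ucl)
print(out_of_control)   # empty here: every point lies within the limits
```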



Sampling

Sampling is the process of obtaining samples from a large group of data (called a population). When the data are numerous, it is difficult or impossible to examine the whole group, and examining all the data would take a great deal of time, so working with only a small part of the data, a sample, is more appropriate. Sampling theory is the study of the relationship between a population and the samples drawn from it, and it is useful for understanding whether there are differences between two samples.

All possible samples of size n can be drawn from a given population. For each sample we can calculate a statistic, such as the mean or the standard deviation, and that statistic will vary from sample to sample, so a sampling distribution is useful for describing the data's characteristics.

There are three types of sampling processes:

1. Single sampling consists of selecting a random sample of n items from each group presented, and then accepting or rejecting each group depending on the results. For example, choose n items from each group for inspection; accept the group if the number of defects is less than or equal to d, a specified value, and otherwise reject it.

2. Double sampling consists of selecting two random samples of n1 and n2 items. With this technique, the result of inspecting the first sample (n1) is to accept the group, reject it, or take a second sample of n2 items; the final decision depends on the combined results.

3. Multiple sampling is similar to double sampling, but more than two samples may be used in the decision making.
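Under a binomial model, the chance that a single sampling plan accepts a lot can be computed directly. The plan parameters below (sample 50 items, accept on at most 1 defect) are hypothetical, chosen only to illustrate the idea:

```python
from math import comb

def accept_probability(n: int, c: int, p: float) -> float:
    """Probability that a single sampling plan (sample n items, accept the
    lot if at most c are defective) accepts a lot with true defect rate p."""
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(c + 1))

# Operating characteristic of a hypothetical n=50, c=1 plan:
for p in (0.01, 0.05, 0.10):
    print(p, round(accept_probability(50, 1, p), 3))
```

Scanning a range of defect rates like this traces the plan's operating characteristic (OC) curve: the worse the lot, the lower its chance of slipping through.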


Scatter Diagram

A scatter diagram is a graphical diagram that shows the relationship between two data variables. It is used to display how one variable changes when another changes. From a scatter diagram, we can find a mathematical equation that relates the variables. To create a scatter diagram, follow these steps:

Collect data; this is the most essential step. Build a data sheet to show the information from the data. Then define the variable axes of the graph:

1. The horizontal axis (X axis) displays one variable's measurement values; these are usually the cause variables.

2. The vertical axis (Y axis) shows the other variable's measurement values; these are usually the effect variables.

Finally, plot the data on the graph and construct a mathematical equation.


From a scatter diagram, a curve can be tentatively fitted to the plotted points; the relationship between the variables can then be classified as linear or non-linear.
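For a linear relationship, "construct a mathematical equation" is most often an ordinary least-squares line fit. A self-contained sketch; the temperature/hardness data are invented purely for illustration:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b*x for a scatter diagram."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx      # slope: change in the effect per unit of the cause
    a = my - b * mx    # intercept
    return a, b

# Hypothetical data: oven temperature (cause) vs. part hardness (effect)
temps = [150, 160, 170, 180, 190]
hardness = [30.1, 32.0, 33.9, 36.2, 38.0]
a, b = fit_line(temps, hardness)
print(a, b)   # slope of roughly 0.2 hardness units per degree
```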


Self Inspection

Self inspection is a technique in which workers check their own work. It provides the most immediate feedback. With this technique, however, a worker may accept products that ought to be rejected, and may not notice all the errors.

On the other hand, if errors in judgment and careless mistakes are eliminated, self inspection can be an efficient technique. It can be improved by developing tools or devices that automatically detect defects or mistakes. Providing workers with new knowledge of quality processes is another efficient way to improve self inspection.

Generally, the results of inspections are reported in terms of the total percentage of defects. With this method, inspectors check the final products; they may find some mistakes or product errors, but they will not know the actual error source. As mentioned, self inspection is a method of solving this problem.


Sensory Inspection

Inspections involve distinguishing acceptable from unacceptable goods by comparing them with a standard. Sensory inspection is a kind of inspection conducted by the human senses, such as inspections of paint saturation or judgments of plating adequacy. It differs from physical inspection, which uses devices such as calipers, micrometers, or gauges to measure.

For inspection of this kind, it is difficult to set criteria, because judgment depends on the physical condition of the human worker, the period of work, and the skills acquired from experience. Naturally, different people have different senses, and even the same person may make different judgments at different times. It is laborious to judge an object with a complex or ill-defined shape.


Seven Steps or Seven QC Steps

The 7 QC Steps process is a structured problem-solving approach for improving weak processes; this approach is known as reactive improvement. The 7 QC Steps are easy to understand and learn, easy to use, and easy to monitor.

The 7 QC steps process is structured as follows:

Step 1: Select a Theme. In this step, the weakness in the process or the problem to be solved is clarified in a theme statement. A Flowchart, a Theme Selection Matrix, or a Cause & Effect Diagram is used as a tool in this step.

Step 2: Collect and Analyze Data. This step focuses on facts about the problem and discovers which types of problems occur frequently. When collecting data, you must think of all possible causes. Checksheets and Pareto Diagrams are the tools most often used.

Step 3: Analyze Causes. With sufficient data from step 2, the root cause, or fundamental cause, is found by constructing a Cause & Effect Diagram.

Step 4: Plan and Implement Solution. In this step, you brainstorm ideas for addressing the root cause and develop a solution that prevents it from recurring. Then you implement an adjustment to the process. The 4W's and 1H Matrix (What, When, Where, Who, and How) is used to develop the plan.

Step 5: Evaluate Effects. You evaluate the effects of the implemented solution, comparing data from before and after implementation, to make sure the solution worked and has no unacceptable side effects. Comparative Pareto charts and graphs are frequently used to show the results.

Step 6: Standardize Solution. Standardizing the solution confirms that the old process has been replaced with the improved process and that the solution is workable. A flowchart is most often used.

Step 7: Reflect on Process and the Next Problem. In this step, you consider what the team accomplished in the first six steps and recommend a weakness to work on next.



7QC Tools

Seven QC tools are fundamental instruments for improving the quality of a product. They are used to analyze the production process, identify major problems, control fluctuations in product quality, and provide solutions to avoid future defects. Statistical literacy is necessary to use the seven QC tools effectively; the tools apply statistical techniques and knowledge to accumulate and analyze data.

The seven QC tools organize the collected data in a way that is easy to understand and analyze. Moreover, using them helps identify specific problems in a process.

The 7QC tools always include:

Check Sheet is used to collect data easily; decision making and actions are taken from the data.

Pareto Chart is used to define problems, set their priority, illustrate the problems detected, and determine their frequency in the process.

Cause-and-Effect Diagram (Fishbone Diagram) is used to figure out all possible causes of a problem. Once the major causes are known, we can solve the problem accurately.

Histogram shows a bar chart of accumulated data and provides the easiest way to evaluate the distribution of data.

Scatter Diagram is a graphical tool that plots many data points and shows a pattern of correlation between two variables.

Flow Chart shows the process step by step and can sometimes identify an unnecessary procedure.

Control Chart provides control limits, generally three standard deviations above and below the average, to show whether or not the process is in control.
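The tabulation behind a Pareto chart (categories ranked by frequency with cumulative percentages) can be sketched as follows; the defect log below is hypothetical:

```python
from collections import Counter

def pareto(defects):
    """Rank defect categories by frequency and attach the cumulative
    percentage of all defects covered so far (the Pareto tabulation)."""
    counts = Counter(defects)
    total = sum(counts.values())
    rows, cum = [], 0
    for cause, n in counts.most_common():
        cum += n
        rows.append((cause, n, round(100 * cum / total, 1)))
    return rows

# A made-up defect log of 100 observations:
log = ["scratch"] * 42 + ["dent"] * 31 + ["burr"] * 17 + ["stain"] * 10
for row in pareto(log):
    print(row)
```

Here the top two categories already cover 73% of all defects, which is exactly the prioritization signal a Pareto chart is meant to expose.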


