Black Belt Preparatory Module
Benchmark Six Sigma
Lean Six Sigma Black Belt
Black Belt Preparatory Module V12
Volume 1

Lean Six Sigma Black Belt Preparatory Module

Benchmark Six Sigma
The Corenthum, Office No. 2714, Lobe No. 2, 7th Floor,
A-41, Sector-62, Noida (NCR) - 201301, India
Toll free number: 1800-102-30


Table of Contents

What is Six Sigma?
    History and Evolution of Lean Six Sigma
    Six Sigma and Balanced Scorecard
Understanding Lean
    Why do we want to lose excess Fat?
    Simple Lean tools with extraordinary benefits
    Lean - Total Productive Maintenance
Statistics for Lean Six Sigma
    Elementary Statistics for Business
    Types of Variables
    Summary Measures
Managing Six Sigma Projects
    Managing Projects
    What is a Project?
    Managing Teams
    Managing Change
Define
    Define Overview
    Step 1 - Generate Project Ideas
    Step 2 - Select Project
    Step 3 - Finalize Project Charter and High Level Map
    Define Phase Tollgate Checklist
Measure
    Measure Phase Overview
    Step 4 - Finalize Project Y, Performance Standards for Y
    Step 5 - Validate Measurement System for Y
    Step 6 - Measure Current Performance and Gap
    Measure Phase Tollgate Checklist
Analyze
    Analyze Phase Overview
    Step 7 - List All Probable X's
    Step 8 - Identify Critical X's
    Step 9 - Verify Sufficiency of Critical X's for Project
    Analyze Phase Tollgate Checklist
Improve
    Improve Phase Overview
    Step 10 - Generate and Evaluate Alternative Solutions
    Step 11 - Select and Optimize Best Solution
    Step 12 - Pilot, Implement and Validate Solution
    Improve Phase Tollgate Checklist
Control
    Control Phase Overview
    Step 13 - Implement Control System for Critical X's
    Step 14 - Document Solution and Benefits
    Step 15 - Transfer to Process Owner, Project Closure
    Control Phase Tollgate Checklist
Appendix
    Acronyms
    Important Links for Online Learning and Discussion
    References


Preface

This preparatory module focuses on the basic tools and techniques that are most important for Six Sigma Black Belts. The objective of this book is to familiarize readers with these tools and to help them understand how the tools and techniques are applied during the classroom training and thereafter. In this preparatory module, we have provided:

Elementary Statistics

Introduction of tools and techniques

Basics of Six Sigma Project Management

Six Sigma Glossary

Practice data files

Reference Study links

It is imperative for Black Belt participants to thoroughly study the preparatory module before attending the workshop. You may list your questions and share them with the facilitator on the first day. Benchmark Six Sigma has invested over 2400 hours of research in developing the Lean Six Sigma Black Belt Workshop and continues to invest 320-480 hours per year in content research and development exclusively for the Black Belt workshop. We encourage you to participate in this activity. If you spot an error, would like to suggest changes, or want to share specific case studies or articles, please e-mail us at [email protected]


What is Six Sigma?

History and Evolution of Lean Six Sigma

Defining Six Sigma

Six Sigma has been labelled a metric, a methodology, a management system, and now a philosophy. Green Belts, Black Belts, Master Black Belts, Champions and Sponsors have been trained on Six Sigma as a metric and a methodology; however, very few have experienced or been exposed to Six Sigma as an overall management system and a way of life. Reviewing the metric and the methodology helps create a context for beginning to understand Six Sigma as a management system. Six Sigma is a vehicle for strategic change: an organizational approach to performance excellence. It is important for business operations because it can be used both to increase top-line growth and to reduce bottom-line costs. Six Sigma can be used to enable:

• Transformational change, by applying it across the board for large-scale fundamental changes throughout the organization in order to change processes and cultures and achieve breakthrough results.

• Transactional change, by applying tools and methodologies to reduce variation and defects and dramatically improve business results.

When people refer to Six Sigma, they may mean any of several things (Table 1).

Table 1: What Six Sigma refers to
• It is a philosophy
• It is based on facts and data
• It is a statistical approach to problem solving
• It is a structured approach to solving problems and reducing variation
• It refers to 3.4 defects per million opportunities
• It is a relentless focus on customer satisfaction
• It has a strong tie-in with bottom-line benefits
• It is a metric, a methodology, and a management system

Six Sigma as a methodology provides businesses with the tools to improve the capability of their business processes. For Six Sigma, a process is the basic unit for improvement. A process could be a product or a service process that a company provides to outside customers, or it could be an internal process within the company, such as a billing or production process. In Six Sigma, the purpose of process improvement is to increase performance and decrease performance variation.


This increase in performance and decrease in performance variation will lead to defect reduction and to improvements in profits, employee morale, and product quality, and eventually to business excellence. The name "Six Sigma" derives from statistical terminology: sigma (σ) means standard deviation. In a production process, the "Six Sigma standard" means that the defect rate of the process will be 3.4 defects per million opportunities. Six Sigma therefore indicates a degree of extremely high consistency and extremely low variability. In statistical terms, the purpose of Six Sigma is to reduce variation to achieve very small standard deviations. When compared with other quality initiatives, the key difference of Six Sigma is that it applies not only to product quality but also to all aspects of business operation by improving key processes. For example, Six Sigma may help create well-designed, highly reliable, and consistent customer billing systems, cost control systems, and project management systems.
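The arithmetic behind the 3.4 figure can be sketched in a few lines. The following Python snippet is a minimal illustration, not part of the original module: it assumes the conventional 1.5-sigma shift used in standard sigma-level conversion tables, and the function names are purely illustrative.

```python
from statistics import NormalDist

def dpmo(defects: int, units: int, opportunities_per_unit: int) -> float:
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value: float, shift: float = 1.5) -> float:
    """Short-term sigma level from DPMO, assuming the conventional 1.5-sigma shift."""
    long_term_yield = 1 - dpmo_value / 1_000_000
    return NormalDist().inv_cdf(long_term_yield) + shift

if __name__ == "__main__":
    d = dpmo(defects=34, units=10_000, opportunities_per_unit=1)  # 3,400 DPMO
    print(round(d), round(sigma_level(d), 2))                     # about 4.2 sigma
    print(round(sigma_level(3.4), 2))                             # 3.4 DPMO -> about 6.0 sigma
```

Running this shows that 3.4 DPMO corresponds to a short-term sigma level of about 6, while 3,400 DPMO corresponds to roughly 4.2 sigma.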

History of Six Sigma

In the late 1970s, Dr. Mikel Harry, a senior staff engineer at Motorola's Government Electronics Group (GEG), experimented with problem solving through statistical analysis. Using this approach, GEG's products were being designed and produced at a faster rate and at a lower cost. Subsequently, Dr. Harry began to formulate a method for applying Six Sigma throughout Motorola. In 1987, when Bob Galvin was Chairman, Six Sigma was launched as a methodology at Motorola. Bill Smith, an engineer, and Dr. Mikel Harry together devised a six-step methodology focused on defect reduction and yield improvement through statistics. Bill Smith is credited as the father of Six Sigma. Subsequently, Allied Signal began implementing Six Sigma under the leadership of Larry Bossidy. In 1995, General Electric, under the leadership of Jack Welch, began the most widespread implementation of Six Sigma.

Key figures: Mikel Harry, Bill Smith, Larry Bossidy, Jack Welch

General Electric: “It is not a secret society, a slogan or a cliché. Six Sigma is a highly disciplined process that helps focus on developing and delivering near-perfect products and services. Six Sigma has changed our DNA – it is now the way we work.”

Honeywell: “Six Sigma refers to our overall strategy to improve growth and productivity as well as a Quality measure. As a strategy, Six Sigma is a way for us to achieve performance breakthroughs. It applies to every function in our company and not just to the factory floor.”

The tools used in Six Sigma are not new; Six Sigma is built on tools that have been around for centuries. For example, Six Sigma relies heavily on the normal curve, which was introduced by Abraham de Moivre in 1736 and later popularized by Carl Friedrich Gauss in 1818.


Six Sigma and Balanced Scorecard

The Balanced Scorecard was first developed in the early 1990s by Robert Kaplan and David Norton of the Harvard Business School. The key problem Kaplan and Norton identified was that many companies tended to manage their businesses based solely on financial measures that may have worked well in the past; the pace of business today requires better and more comprehensive measures. Though financial measures are necessary, they can only report what has happened in the past, where your business has been; they cannot report where it is headed. The Balanced Scorecard is a management system, not merely a measurement system. Measurement is a key aspect of the Balanced Scorecard, but it is much more than that: it is a means of setting and achieving the strategic goals and objectives of your organisation. The Balanced Scorecard is a management system that enables your organisation to set, track and achieve its key business strategies and objectives. Once the business strategies are developed, they are deployed and tracked through what we call the four legs of the Balanced Scorecard, which correspond to four distinct business perspectives: the Customer leg, the Financial leg, the Internal Business Process leg, and the Knowledge, Education, and Growth leg.

• Customer scorecard: Measures your customers' satisfaction and their performance requirements for your organisation and what it delivers, whether products or services.

• Financial scorecard: Tracks your financial requirements and performance.

• Internal Business Process scorecard: Measures your critical-to-customer process requirements and measures.

• Knowledge, Education, and Growth scorecard: Focuses on how you train and educate your employees, how you gain and capture knowledge, and how Green Belts and Black Belts use it to maintain a competitive edge within your markets.

Six Sigma projects should positively impact these key performance indicators; therefore the Balanced Scorecard is closely monitored. Six Sigma is a management system for executing business strategy. Six Sigma is a solution that helps organizations to:

• Align their business strategy to critical improvement efforts

• Mobilize teams to attack high-impact projects

• Accelerate improved business results

• Govern efforts to ensure improvements are sustained


Understanding Lean

Why do we want to lose excess Fat?

Defining Lean

Lean operation principles are derived from Lean manufacturing practices. Lean manufacturing is a very effective strategy first developed by Toyota. The key focus of Lean is to identify and eliminate wasteful actions in the manufacturing process that do not add value for customers. Because Lean deals with the production system from a pure process point of view, and not a hardware point of view, its principles can be readily adopted in other types of processes, such as office processes and product development processes. Lean operation principles can therefore be used to greatly improve the efficiency and speed of all processes. The strategy part of Lean looks at balancing multiple value streams (typically, a family of products or services) and integrating the work done in operations and in the rest of the organisation (be it a factory, a hospital, or a software development company) with the customer in mind.

The concept is simple. "Lean" describes any process developed toward a goal of near 100% value added, with very few waste steps or interruptions to the workflow. That includes physical things like products as well as less tangible items like orders, requests for information, quotes, and so on. Lean is typically driven by a need for quicker customer response times, the proliferation of product and service offerings, a need for faster cycle times, and a need to eliminate waste in all its forms. The Lean approach challenges everything and accepts nothing as unchangeable. It strives to continuously eliminate waste from all processes, a fundamental principle totally in alignment with the goals of the Six Sigma management system. These methods are especially effective in overcoming cultural barriers, where the impossible is often merely the untried.

Lean, like any other major business strategy, is best driven from the top, linked into the organisation's performance measurement systems, and used as a competitive differentiator. This is what we would like to do; however, reality sometimes differs. In most instances, the Champions driving this approach should look for pilot areas in the organisation to test the concept and see whether a business case for Lean can be built over time. One cannot just flip a switch and get the whole organisation doing this anyway, so starting small and building from there can be a valuable approach.


Lean in the Office

Lean in an assembly manufacturing plant tends to focus on one-piece flow. In a process or job shop, it tends to focus on eliminating wait time. The idea of eliminating wait time and defining "standard work" also applies to office and administrative environments. Most overhead departments and activities do not have effective metrics. Standard work does not exist for most tasks. Most overhead departments would score poorly in a 5S assessment (Sort, Set in Order, Shine, Standardize, and Sustain). Many administrative or transactional systems are designed to handle the most complex transactions. These problems can cause excessive rework, delays in processing, and confusion. A few examples:

• An accounting close typically takes a long time because a flood of transactions takes place at the end of the period: journal entries, special analyses, allocations, report preparation, and so on. The excessive transactions make accounting departments somewhat chaotic places at the end of the period. Adopting Lean in this world is different from a factory, but the goal is still stable amounts of work, flexible operations (within defined parameters), and pull from the customers of the process.

• Imagine a purchase order going through a process. Ninety-nine percent of its processing life is going to be "wait" time. It may also have rework problems as people try to get the information right, and in terms of workload balance, some purchase orders are more difficult to do than others. This is not so different from what goes on in the factory. Many of these problems can be individually addressed using Kaizen and Lean teams.

• Multiple re-inputs of the same information into Excel spreadsheets, Access databases, requirements generators, and so on. Or the different "languages" used inside a business for the same physical product: purchasing has a different numbering scheme than engineering, which has a different numbering scheme than accounting, and someone is supposed to keep a matrix up-to-date that maps these relationships, hardly a value-adding activity from a customer perspective.

Simple Lean tools with extraordinary benefits

What is 5S?

5S is a process and method for creating and maintaining an organised, clean, and high-performance workplace. 5S enables anyone to distinguish between normal and abnormal conditions at a glance. It is the foundation for continuous improvement, zero defects, cost reduction, and a safe work area, and it is a systematic way to improve the workplace, our processes, and our products through production-line employee involvement. 5S can be used in Six Sigma for quick wins as well as for control. 5S should be one of the first Lean tools implemented: if a process is in total disarray, it does not make sense to work on improvements. The process needs to be organised (stabilized) first and then improved.

The 5 S’s are:

Sort – Clearly distinguish needed items from unneeded items and eliminate the latter.

Straighten / Stabilize / Set in Order – Keep needed items in the correct place to allow for easy and immediate retrieval.


Shine – Keep the work area clean.

Standardize – Develop standardized work processes to support the first three steps.

Sustain – Put processes in place to ensure that the first four steps are rigorously followed.

Figure 1: 5S

Standard Work

Standard Work is a term used in 'Lean Production'. Standard work is work in which the sequence of job elements has been efficiently organized and is repeatedly followed by a team member. It is an important prerequisite for successful implementation of Kaizen and many other aspects of Six Sigma.

• Clearly document steps in the process and include images if possible

• Should be done by the persons responsible for the work

• Audit the process to check this is being followed.

Standard Work is the most efficient method to produce a product (or perform a service) at a balanced flow to achieve a desired output rate. It breaks the work down into elements, which are sequenced, organized and repeatedly followed. Each step in the process should be defined and must be performed repeatedly in the same manner. Any variation in the process will most likely increase cycle time and cause quality issues. Standard Work typically describes how a process should consistently be executed and documents current 'best practices'. It provides a baseline from which a better approach can be developed, allowing continuous improvement methods to leverage learning.

Benefits of Standardized Work

• Ensures predictable output each time
• Reduces process variation
• Improves performance
• Provides a stepping stone for future improvements
• Promotes employee involvement and empowerment
• Improved, consistent quality
• Work process stability
• Increased employee safety
• Improved cost management as wastes are removed
• A great tool for staff training
• Visual management: managers and supervisors can see when processes are not operating normally

What is Kaizen?

Kaizen is a Japanese word in which "Kai" means change and "Zen" means good, i.e. to change for the better. Kaizen is used to make small continuous improvements in the workplace to reduce cost and improve quality and delivery. It is particularly suitable when the solution is simple and can be obtained using a team-based approach. Kaizen assembles small cross-functional teams aimed at improving a process or problem in a specific area. It is usually a focused 3-5 day event that relies on implementing "quick" and "do-it-now" type solutions. Kaizen focuses on eliminating the wastes in a process so that the process only adds value for the customer.

The benefits of doing Kaizen include lower direct and indirect labour requirements, lower space requirements, increased flexibility, increased quality, increased responsiveness, and increased employee enthusiasm. Figure 2 shows a Kaizen team in action discussing improvements.

Figure 2: A Kaizen Team at Boeing in Action

Typical tools used:

• 7 Wastes, 5S, SOP
• Spaghetti diagram
• Pareto analysis, 5 Whys
• PDCA


Poka-Yoke (Mistake-Proofing)

Poka-Yoke is a structured methodology for mistake-proofing operations. It is any device or mechanism that either prevents a mistake from being made or ensures that mistakes do not turn into defects that customers see or experience. The goal of Poka-Yoke is both prevention and detection: "errors will not turn into defects if feedback and action take place at the error stage" (Shigeo Shingo, industrial engineer at Toyota, credited with starting Zero Quality Control). The best operation is one that both produces and inspects at the same time.

There are three approaches to Poka-Yoke:

• Warning: let the user know that there is a potential problem, like the door-ajar warning in a car.

• Auto-correction: automatically change the process if there is a problem, like windshield wipers that turn on automatically in the rain in some cars.

• Shutdown: close down the process so it does not cause damage, like denying access at an ATM if the PIN is entered incorrectly three times in a row.
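As a rough software analogue of these three responses, the sketch below applies them to a hypothetical order-entry field. The quantity cap, attempt limit and function name are invented for illustration and are not taken from the module.

```python
from typing import Optional

MAX_ATTEMPTS = 3  # analogous to denying ATM access after three wrong PIN entries


def enter_quantity(raw: str, attempts: int) -> Optional[int]:
    """Hypothetical order-entry field illustrating the three poka-yoke responses."""
    if attempts >= MAX_ATTEMPTS:
        # Shutdown: stop the process so it cannot cause further damage
        raise RuntimeError("Entry locked after repeated invalid input")
    if not raw.strip().isdigit():
        # Warning: alert the user to a potential problem and ask for re-entry
        print(f"Warning: {raw!r} is not a whole number, please re-enter")
        return None
    qty = int(raw)
    if qty > 1000:
        # Auto-correction: adjust the input automatically, like rain-sensing wipers
        print("Quantity capped at 1000")
        qty = 1000
    return qty
```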

Figure 3: Poka-Yoke Example – Possibility of Parachute failing to open should not exist

Mistake-proofing is used to either detect or predict an error using any of the following techniques:

• Giving a warning

• Controlling automatically

• Shutting down

Table 2: Examples of Mistake-Proofing

Warning
    Prediction: A washing machine will not spin clothes until the cover of the tub is closed.
    Detection: Smoke detectors provide a warning that smoke has been detected and that there is a possible fire.

Control
    Prediction: The tank hole for unleaded gas is smaller than the one for leaded gasoline.
    Detection: 1. An air conditioner does not cool beyond the adjusted temperature. 2. Speed governors in buses.

Shutdown
    Prediction: A camera will not click a photograph in darkness.
    Detection: An iron automatically switches off when a particular (high) temperature is reached.

Mistake-Proofing: Principles

Mistake-proofing may be used to perform any of the following:

• Mitigation: Minimize the impact of mistakes

• Detection: Detect mistakes before they progress further

• Facilitation: Make the operation easier to perform

• Replacement: Replace the operator with a more reliable process

• Elimination: Eliminate the chance of mistakes

Visual Standards

Visual Communication: visual signs used to give information.

Visual Control: visual signs used to control mistakes.

• A visual control is any communication device used in the work environment that tells us at a glance how work should be done.

• It generally applies to information that is critical to the flow of work activities, such as equipment, safety warnings, actions and procedures.

• Standardization can be implemented in any situation in such a way that all standards are identified by visual controls.

• Production displays and traffic signals are examples of visual controls.

Figure 4: Visual Control Chart

Lean - Total Productive Maintenance

Total Productive Maintenance (TPM) is a maintenance program which involves a newly defined concept for maintaining plants and equipment. The goal of the TPM program is to markedly increase production while, at the same time, increasing employee morale and job satisfaction. TPM brings maintenance into focus as a necessary and vitally important part of the business. It is no longer regarded as a non-profit activity. Down time for maintenance is scheduled as a part of the manufacturing day and, in some cases, as an integral part of the manufacturing process. The goal is to hold emergency and unscheduled maintenance to a minimum.

TPM is a holistic system for maximizing equipment efficiency through participation from all departments and through autonomous small-group activities. It improves process control as the team is trained in, and follows, the standards and procedures established by the Six Sigma project.

Benefits of Lean – Total Productive Maintenance

• Increased productivity


• Reduced breakdowns

• Reduced defects

• Reduction in maintenance cost

• Reduced stock

• Zero accidents

• Improved morale

Figure 5: Elements of Total Productive Maintenance


Autonomous Maintenance (Jishu Hozen)

This pillar is geared towards developing operators to be able to take care of small maintenance tasks, thus freeing up the skilled maintenance people to spend time on more value added activity and technical repairs. The operators are responsible for upkeep of their equipment to prevent it from deteriorating.

Involving operators in routine care and maintenance of critical plant assets offers three major benefits. The most obvious is reduced maintenance labor cost. In addition, the proximity of the operator to the asset greatly reduces or eliminates travel time, waiting for availability and other inefficiencies. Overall, autonomous maintenance represents a much better use of resources. To be truly world-class, maintenance and production must function as an integrated team. Involving the operators in routine care and maintenance of the plant's assets will begin to crumble the traditional barriers between these two departments.

The ultimate reason for autonomous maintenance is simply that it saves money and improves bottom-line profitability. Operators are typically underused and have the time to perform these lower-skilled tasks. Transferring these tasks to operating teams improves the payback on the burdened sunk cost of the production workforce and at the same time permits more effective use of the maintenance crafts.

Kaizen

"Kai" means change, and "Zen" means good (for the better). Kaizen is the opposite of big spectacular innovations. Kaizen is small improvements carried out on a continual basis and involves all people in the organization. Kaizen requires no or little investment. The principle behind Kaizen is that a large number of small improvements are more effective in an organizational environment than a few large-scale improvements. Systematically using various Kaizen tools in a detailed and thorough method eliminates losses. The goal is to achieve and sustain zero loses with respect to minor stops, measurement and adjustments, defects, and unavoidable downtimes. Kobetsu Kaizen uses a special event approach that focuses on improvements associated with machines and is linked to the application of TPM. Kobetsu Kaizen begins with an up-front planning activity that focuses its application where it will have the greatest effect within a business and defines a project that analyses machine operations information, uncovers waste, uses a form of root cause analysis (e.g., the 5 Why approach) to discover the causes of waste, applies tools to remove waste, and measures results. The objective of TPM is maximization of equipment effectiveness. TPM maximizes machine utilization, not merely machine availability. As one of the pillars of TPM activities, Kaizen activities promote efficient equipment and proper utilization of manpower, materials, and energy by eliminating 16 major losses. Examples of Kobetsu Kaizen to make machines easier to maintain include:

Relocating gauges and grease fittings for easier access.

Making shields that minimize contamination.

Centralizing lubrication points.

Making debris collection accessible.

Planned Maintenance

Planned maintenance is a maintenance program designed to improve the effectiveness of maintenance through the use of systematic methods and plans. The primary objective of the maintenance effort is to keep equipment functioning in a safe and efficient manner, which allows production to meet its targets at minimum operating cost.


The aim is to have trouble-free machines and equipment producing defect-free products for total customer satisfaction. Planned maintenance breaks maintenance down into four "families" or groups:

Preventive Maintenance

Breakdown Maintenance

Corrective Maintenance

Maintenance Prevention

With Planned Maintenance we evolve our efforts from a reactive to a proactive method and use trained maintenance staff to help and train the operators to better maintain their equipment. Production involvement is extremely important. Without this, any maintenance program will be jeopardized. Commitment to the success of a maintenance program must extend from top production management through the front-line supervisors. If production management is not committed to a maintenance program, then unrealistically high or low requirements may be made of the maintenance forces. Either situation can cause poor performance and low morale.

Quality Maintenance

Quality Maintenance (QM) targets customer satisfaction through defect-free manufacturing of the highest quality products. The focus is on eliminating non-conformances in a systematic manner. Through QM we gain an understanding of which parts of the equipment affect product quality, eliminate current quality concerns, and then move on to potential quality concerns. The transition is from reactive to proactive (from Quality Control to Quality Assurance). Quality Maintenance activities control equipment conditions to prevent quality defects, based on the concept of maintaining perfect equipment to maintain perfect product quality. These conditions are checked and measured in time series to verify that measured values are within standard values so that defects are prevented. The trend of measured values is used to predict the possibility of defects occurring and to take countermeasures before defects occur.

QM activities support Quality Assurance through defect-free conditions and control of equipment. The focus is on effective implementation of operator quality assurance and on detection and segregation of defects at the source. Opportunities for designing Poka-Yoke (foolproof) systems are investigated and implemented where practicable.

Education and Training

The goal of training is to have multi-skilled, revitalized employees whose morale is high and who are eager to come to work and perform all required functions effectively and independently. The focus is on achieving and sustaining zero losses due to lack of knowledge, skills, or techniques. Ideally, we would create a factory full of experts. Operators must upgrade their skills through education and training. It is not sufficient for operators to learn how to do something; they should also learn why they are doing it and when it should be done. Through experience, operators gain "know-how" to address specific problems, but they often do so without knowing the root cause of the problem or when and why they should be acting. Hence it becomes necessary to train operators on the "know-why".


This will enable the operators to maintain their own machines, understand why failures occur, and suggest ways of preventing the failures from occurring again. The different phases of skill are:

Phase 1: Do not know.
Phase 2: Know the theory but cannot do.
Phase 3: Can do but cannot teach.
Phase 4: Can do and also teach.

Policy

• Focus on improvement of knowledge, skills and techniques.

• Create a training environment for self-learning based on felt needs.

• Make the training curriculum, tools and assessments conducive to employee revitalization.

• Provide training that removes employee fatigue and makes work enjoyable.

Safety, Health and Environment

The focus is on creating a safe workplace and surrounding areas that are not damaged by our process or procedures. This pillar plays an active role in each of the other pillars on a regular basis.

The target of the Safety, Health & Environment pillar is:

Zero accidents

Zero health damage

Zero fires

Office TPM

Office TPM should be started after activating the four other pillars of TPM (Jishu Hozen, Kobetsu Kaizen, Quality Maintenance, and Planned Maintenance). Office TPM must be followed to improve productivity and efficiency in the administrative functions and to identify and eliminate losses. This includes analyzing processes and procedures with a view to increased office automation. Office TPM addresses twelve major losses:

• Processing loss

• Cost loss, including in areas such as procurement, accounts, marketing and sales, leading to high inventories

• Communication loss

• Idle loss

• Set-up loss

• Accuracy loss

• Office equipment breakdown

• Communication channel breakdown (telephone and fax lines)

• Time spent on retrieval of information

• Unavailability of correct on-line stock status

• Customer complaints due to logistics

• Expenses on emergency dispatches/purchases

Benefits

• Involvement of all people in support functions in focusing on better plant performance

• Better utilized work area

• Reduced repetitive work

• Reduced inventory levels in all parts of the supply chain

• Reduced administrative costs

• Reduced inventory carrying cost

• Reduction in number of files

• Reduction of overhead costs (including cost of non-production/non-capital equipment)

• Improved productivity of people in support functions

• Reduction in breakdown of office equipment

• Reduction of customer complaints due to logistics

• Reduction in expenses due to emergency dispatches/purchases

• Reduced manpower

• Clean and pleasant work environment

7 Wastes in Lean

The 7 Wastes (also referred to as Muda) in Lean are shown in Table 3.

Table 3: The 7 Wastes in Lean (W-O-R-M-P-I-T)
Waiting, Over-Production, Rework, Motion, Over-Processing, Inventory, Transportation

The underutilization of talent and skills is sometimes called the 8th waste in Lean.

Waiting is non-productive time due to lack of material, people, or equipment. It can be caused by slow or broken machines, material not arriving on time, and so on. The Waste of Waiting is the cost of an idle resource. Examples are:

• Processing once each month instead of as the work comes in

• A customer or employee waiting for a service input

• Work delayed due to lack of communication from another internal group


Over-Production refers to producing more than the next step needs or more than the customer buys. The Waste of Over-Production relates to the excessive accumulation of work-in-process (WIP) or finished goods inventory. It may be the worst form of waste because it contributes to all the others. Examples are:

• Preparing extra reports

• Reports not acted upon or even read

• Multiple copies in data storage

• Over-ordering materials

Rework, Correction, or Defects are as obvious as they sound. The Waste of Correction includes the waste of handling and fixing mistakes. This is common in both manufacturing and transactional settings. Examples are:

• Incorrect data entry

• Paying the wrong vendor

• Misspelled words in communications

• Making bad product, or materials and labour discarded during production

Motion is the unnecessary movement of people and equipment. This includes looking for things like documents or parts, as well as movement that is straining. The Waste of Motion examines how people move to ensure that value is added. Examples are:

• Extra steps

• Extra data entry

• Searching for something needed for an approval

Over-Processing refers to tasks, activities and materials that do not add value. It can be caused by poor product or process design, as well as by not understanding what the customer wants. The Waste of Over-Processing relates to anything that may not be adding value in the eyes of the customer. Examples are:

• Sign-offs

• Reports that contain more information than the customer wants or needs

• Communications, reports, emails, contracts, etc. that contain more than the necessary points (concise is better)

• Voice mails that are too long


• Duplication of effort/reports

Inventory is the liability of materials that are bought, invested in, and not immediately sold or used. The Waste of Inventory is similar to Over-Production, except that it refers to the waste of acquiring raw material before the exact moment it is needed. Examples are:

• Transactions not processed

• A bigger "in box" than "out box"

• Over-stocking raw materials

Transportation is the unnecessary movement of material and information. Steps in a process should be located close to each other so that movement is minimized. Examples are:

• Extra steps in the process

• Moving paper from place to place

• Forwarding emails to one another

Pull Systems

Traditional systems are based on "push": keep making items even if they are not needed downstream. This causes excess inventory, rework, and scrap. In a pull system, we pull work only when it is needed, in the right quantities. Push systems can be summarized as "make a lot of stuff as cheaply as possible and hope people will buy it." Pull systems can be summarized as "don't make anything until it is needed, then make it fast." A pull system controls the flow and quantity produced by replacing items only when they are consumed. How do we make value flow at the pull of the customer? A pull system requires level loading and flexible processes.

A pull system is exactly what it sounds like: the production of a product or system is varied depending strictly on the demand from the customer or the market, not on forecasts or previous performance. While most businesses strive to use a pull business model from end user to shop floor, it is rare for this to happen, as there are usually some aspects of the supply chain that are push systems. A pull system is one in which the supply chain sends a product through the supply chain because there is a specific demand for that one product, as opposed to creating inventory and "pushing" the product out to distributors, wholesalers, vendors, or customers so that they have to keep inventory, or worse, the production company has to keep inventory. A "push" supply chain is the exact opposite: it consists of many warehouses, retail stores, or other outlets in which large amounts of inventory are kept to satisfy customer demand on the spot.


Figure 6: Push and Pull approach

Continuous Flow (for reducing Cycle Time)

Continuous flow is a production process in which a unit undergoes each stage of production sequentially. The unit remains at the first stage until that particular segment is complete; it is then sent to the next stage, where the process is repeated. This continues until the unit has completed all sequences of the production process.

Under this approach, improvement is pursued continuously, which calls for an amalgamation of all the elements involved in the manufacturing process. By ensuring a minimum level of wastage, error-free production, and a cost-saving approach, Continuous Flow Manufacturing (CFM) aims to achieve a balanced production line.

The CFM system is usually implemented in discrete manufacturing in an attempt to handle production volumes comprising discrete units of production in a flow. Discrete manufacturing is more likely to be performed in batches of product units that are moved from one process to another inside the factory. During work-time or run-time, each process is likely to add value to the batch.

A product that fails to satisfy the needs of a customer is often a product lacking in efficiency and quality. Under the CFM system, a manufacturer produces only the product for which there is demand in the market. Some of the advantages associated with CFM include greater efficiency, flexibility, cost savings, and greater customer satisfaction.

• One piece flow of work between process steps

• Minimal or no inventory between steps

• True continuous flow is hard to achieve (use supermarkets, pull signals)


Figure 7: Batch & Queue Processing vs Continuous Flow Processing

In simple words, CFM is a process implemented to develop improved process flow through diligent teamwork and combined problem solving.

Single-Minute Exchange of Dies (SMED)

SMED is a system for dramatically reducing the time it takes to complete equipment changeovers. The essence of the SMED system is to convert as many changeover steps as possible to "external" (performed while the equipment is running), and to simplify and streamline the remaining steps. The name Single-Minute Exchange of Dies comes from the goal of reducing changeover times to the "single" digits (i.e. less than 10 minutes).

A successful SMED program will have the following benefits:

• Reduced changeover time between making different products/services

• Smaller batch sizes and an overall reduction in cycle time

• Lower manufacturing cost (faster changeovers mean less equipment down time)

• Smaller lot sizes (faster changeovers enable more frequent product changes)

• Improved responsiveness to customer demand (smaller lot sizes enable more flexible scheduling)

• Lower inventory levels (smaller lot sizes result in lower inventory levels)

• Smoother startups (standardized changeover processes improve consistency and quality)


Figure 8: SMED

SMED was developed by Shigeo Shingo, a Japanese industrial engineer who was extraordinarily successful in helping companies dramatically reduce their changeover times. His pioneering work led to documented reductions in changeover times averaging 94% (e.g. from 90 minutes to less than 5 minutes) across a wide range of companies. Changeover times that improve by a factor of 20 may be hard to imagine, but consider the simple example of changing a tire:

• For many people, changing a single tire can easily take 15 minutes. • For a NASCAR pit crew, changing four tires takes less than 15 seconds.

Many techniques used by NASCAR pit crews (performing as many steps as possible before the pit stop begins; using a coordinated team to perform multiple steps in parallel; creating a standardized and highly optimized process) are also used in SMED. In fact the journey from a 15 minute tire changeover to a 15 second tire changeover can be considered a SMED journey. In SMED, changeovers are made up of steps that are termed “elements”. There are two types of elements:

• Internal Elements (elements that must be completed while the equipment is stopped) • External Elements (elements that can be completed while the equipment is running)

The SMED process focuses on making as many elements as possible external, and simplifying and streamlining all elements.


Statistics for Lean Six Sigma

Elementary Statistics for Business

The field of statistics deals with the collection, presentation, analysis, and use of data to make decisions, solve problems, and design products and processes. Statistical techniques can be a powerful aid in designing new products and systems, improving existing designs, and designing, developing, and improving processes.

Statistical methods are used to help us describe and understand variability. By variability, we mean that successive observations of a system or phenomenon do not produce exactly the same result. We all encounter variability in our everyday lives, and statistical thinking can give us a useful way to incorporate this variability into our decision-making processes. For example, consider the gasoline mileage performance of your car. Do you always get exactly the same mileage performance on every tank of fuel? Of course not; in fact, sometimes the mileage performance varies considerably. This observed variability in gasoline mileage depends on many factors, such as the type of driving that has occurred most recently (city versus highway), the changes in condition of the vehicle over time (which could include factors such as tire inflation, engine compression, or valve wear), the brand and/or octane number of the gasoline used, or possibly even the weather conditions that have been recently experienced. These factors represent potential sources of variability in the system. Statistics gives us a framework for describing this variability and for learning about which potential sources of variability are the most important or which have the greatest impact on the gasoline mileage performance.

Descriptive statistics focus on the collection, analysis, presentation, and description of a set of data. For example, the United States Census Bureau collects data every 10 years (and has done so since 1790) concerning many characteristics of residents of the United States. Another example of descriptive statistics is the employee benefits used by the employees of an organisation in fiscal year 2005. These benefits might include healthcare costs, dental costs, sick leave, and the specific healthcare provider chosen by the employee. Inferential statistics focus on making decisions about a large set of data, called the population, from a subset of the data, called the sample.

The invention of the computer eased the computational burden of statistical methods and opened up access to these methods to a wide audience. Today, the preferred approach is to use statistical software such as Minitab to perform the computations involved in using various statistical methods.
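To make the descriptive/inferential distinction concrete, here is a minimal Python sketch. The module itself uses Minitab; Python is used here only for illustration, and the mileage figures are hypothetical. The snippet computes descriptive summaries of a small mileage sample and then a simple inferential statement about the population mean; a z-based interval is used for brevity where a t-interval would normally be preferred for only ten observations.

```python
from statistics import mean, stdev, NormalDist

# Hypothetical mileage (km per litre) recorded over ten tank fills
mileage = [13.2, 12.8, 14.1, 13.5, 12.9, 13.8, 14.0, 13.1, 12.6, 13.7]

# Descriptive statistics: summarize the data we actually collected
x_bar = mean(mileage)
s = stdev(mileage)                     # sample standard deviation
print(f"mean = {x_bar:.2f}, std dev = {s:.2f}")

# Inferential statistics: use the sample to say something about the population,
# here a rough 95% confidence interval for the true mean mileage
n = len(mileage)
z = NormalDist().inv_cdf(0.975)        # about 1.96; z-interval used for simplicity
half_width = z * s / n ** 0.5
print(f"95% CI for the mean: {x_bar - half_width:.2f} to {x_bar + half_width:.2f}")
```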


Basic Terms and Sampling Methods

Now, in order to become familiar with sampling, you need to become familiar with some terms.

A population, also called a universe, is the entire group of units, items, services, people, etc., under investigation for a fixed period of time and a fixed location.

A frame is a physical list of the units in the population.

The gap is the difference between the units in the population and the units in the frame.

If the units in the gap are distributed like the units in the frame, no problems should occur due to the gap. However, if the units in the gap are not distributed like the units in the frame, a systematic bias could result from the analysis of the frame. For example, if the frame of New York City residents over 18 years of age is the voter registration list, then a statistical analysis of the people on the list may contain bias if the distribution of people 18 and older is different for people on the list (frame) and people not on the list (gap). An example of where this difference might have an impact is if a survey was conducted to determine attitudes toward immigration because the voter registration list would not include residents who were not citizens.

A sample is the portion of a population that is selected to gather information to provide a basis for action on the population. Rather than taking a complete census of the whole population, statistical sampling procedures focus on collecting a small portion of the larger population. For example, 50 accounts receivable drawn from a list, or frame, of 10,000 accounts receivable constitute a sample. The resulting sample provides information that can be used to estimate characteristics of the entire frame.

There are four main reasons for drawing a sample.

A sample is less time-consuming than a census.

A sample is less costly to administer than a census.

A sample is less cumbersome and more practical to administer than a census.

A sample provides higher-quality data than a census.

There are two kinds of samples: non-probability samples and probability samples.

In a non-probability sample, items or individuals are chosen without the benefit of a frame. Because non-probability samples choose units without the benefit of a frame, there is an unknown probability of selection (and in some cases, participants have self-selected). For a non-probability sample, the theory of statistical inference should not be applied to the sample data. For example, many companies conduct surveys by giving visitors to their web site the opportunity to complete survey forms and submit them electronically. The response to these surveys can provide large amounts of data, but because the sample consists of self-selected web users, there is no frame. Non-probability samples are selected for convenience (convenience sample) based on the opinion of an expert (judgment sample) or on a desired proportional representation of certain classes of items, units, or people in the sample (quota sample). Non-probability samples are all subject to an unknown degree of bias. Bias is caused by the absence of a frame and the ensuing classes of items or people that may be systematically denied representation in the sample (the gap).


Non-probability samples have the potential advantages of convenience, speed, and lower cost. However, they have two major disadvantages: potential selection bias and the ensuing lack of generalizability of the results. These disadvantages more than offset the advantages. Therefore, you should only use non-probability sampling methods when you want to develop rough approximations at low cost or when small-scale initial or pilot studies will be followed by more rigorous investigations.

You should use probability sampling whenever possible, because valid statistical inferences can be made from a probability sample. In a probability sample, the items or individuals are chosen from a frame, and hence, the individual units in the population have a known probability of selection from the frame.

The four types of probability samples most commonly used are simple random, stratified, systematic, and cluster. These sampling methods vary from one another in their cost, accuracy, and complexity.

Simple Random Sample

In a simple random sample, every sample of a fixed size has the same chance of selection as every other sample of that size. Simple random sampling is the most elementary random sampling technique. It forms the basis for the other random sampling techniques. With simple random sampling, n represents the sample size, and N represents the frame size, not the population size. Every item or person in the frame is numbered from 1 to N. The chance of selecting any particular member of the frame on the first draw is 1/N. You use random numbers to select items from the frame to eliminate bias and hold uncertainty within known limits.

Two important points to remember are that different samples of size n will yield different sample statistics, and different methods of measurement will yield different sample statistics. Random samples, however, do not have bias on average, and the sampling error can be held to known limits by increasing the sample size. These are the advantages of probability sampling over non-probability sampling.
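A minimal sketch of drawing a simple random sample, based on the 50-accounts-from-a-frame-of-10,000 example above; the account numbering and the fixed seed are illustrative assumptions, not part of the module.

```python
import random

# Hypothetical frame: 10,000 accounts receivable, numbered 1 to N
N = 10_000
frame = list(range(1, N + 1))

n = 50                            # desired sample size
random.seed(42)                   # fixed seed so the draw can be repeated
sample = random.sample(frame, n)  # each account has selection probability n/N
print(sorted(sample)[:10])        # first few selected account numbers
```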

Stratified Sample

In a stratified sample, the N items in the frame are divided into sub populations or strata, according to some common characteristic. A simple random sample is selected within each of the strata, and you combine the results from the separate simple random samples. Stratified sampling can decrease the overall sample size, and, consequently, lower the cost of a sample. A stratified sample will have a smaller sample size than a simple random sample if the items are similar within a stratum (called homogeneity) and the strata are different from each other (called heterogeneity). As an example of stratified sampling, suppose that a company has workers located at several facilities in a geographical area. The workers within each location are similar to each other with respect to the characteristic being studied, but the workers at the different locations are different from each other with respect to the characteristic being studied. Rather than take a simple random sample of all workers, it is cost efficient to sample the workers by location, and then combine the results into a single estimate of a characteristic being studied.
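A sketch of proportional stratified sampling under the worker-location example just described; the location names, frame size, and 5% sampling fraction are invented for illustration.

```python
import random
from collections import defaultdict

# Hypothetical frame: (worker_id, location); locations act as the strata
random.seed(7)
frame = [(i, random.choice(["Plant A", "Plant B", "Head Office"]))
         for i in range(1, 1001)]

def stratified_sample(frame, stratum_of, fraction):
    """Draw a simple random sample of the given fraction within each stratum."""
    strata = defaultdict(list)
    for unit in frame:
        strata[stratum_of(unit)].append(unit)
    sample = []
    for units in strata.values():
        k = max(1, round(fraction * len(units)))   # proportional allocation
        sample.extend(random.sample(units, k))
    return sample

sample = stratified_sample(frame, stratum_of=lambda u: u[1], fraction=0.05)
print(len(sample), "workers sampled across the strata")
```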

Systematic Sample

In a systematic sample, the N individuals or items in the frame are placed into k groups by dividing the size of the frame N by the desired sample size n. To select a systematic sample, you choose the first individual or item at random from the k individuals or items in the first group in the frame. You select the rest of the sample by taking every kth individual or item thereafter from the entire frame.


If the frame consists of a listing of pre-numbered checks, sales receipts, or invoices, or a preset number of consecutive items coming off an assembly line, a systematic sample is faster and easier to select than a simple random sample. This method is often used in industry, where an item is selected for testing from a production line (say, every fifteen minutes) to ensure that machines and equipment are working to specification. This technique could also be used when questioning people in a sample survey. A market researcher might select every 10th person who enters a particular store, after selecting a person at random as a starting point; or interview occupants of every 5th house in a street, after selecting a house at random as a starting point.

A shortcoming of a systematic sample occurs if the frame has a pattern. For example, if homes are being assessed, and every fifth home is a corner house, and the random number selected is 5, then the entire sample will consist of corner houses. Corner houses are known to have higher assessed values than other houses. Consequently, the average assessed value of the homes in the sample will be inflated due to the corner house phenomenon.
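Setting that caveat aside, the selection procedure itself is straightforward. A minimal Python sketch (N = 1000 and n = 50 are assumed values, giving k = 20): pick a random start within the first k items, then take every kth item.

```python
import random

N, n = 1000, 50                  # hypothetical frame size and sample size
k = N // n                       # group size, here 20
frame = list(range(1, N + 1))

random.seed(3)
start = random.randint(0, k - 1)  # random starting point within the first group
sample = frame[start::k]          # every kth item thereafter
print(len(sample), sample[:5])
```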

Cluster Sample

In a cluster sample, you divide the N individuals or items in the frame into many clusters. Clusters are naturally occurring subdivisions of a frame, such as counties, election districts, city blocks, apartment buildings, factories, or families. You take a random sampling of clusters and study all individuals or items in each selected cluster. This is called single-stage cluster sampling.

Cluster sampling methods are more cost effective than simple random sampling methods if the population is spread over a wide geographic region. Cluster samples are very useful in reducing travel time. However, cluster sampling methods tend to be less efficient than either simple random sampling methods or stratified sampling methods. In addition, cluster sampling methods are useful in cutting cost of developing a frame because first, a frame is made of the clusters, and second, a frame is made only of the individual units in the selected clusters. Cluster sampling often requires a larger overall sample size to produce results as precise as those from more efficient procedures.

Types of Variables

Variable: In statistics, a variable has two defining characteristics:

A variable is an attribute that describes a person, place, thing, or idea.

The value of the variable can "vary" from one entity to another.

For example, a person's hair colour is a potential variable, which could have the value of "blonde" for one person and "brunette" for another.

Data are classified into two types: attribute data and measurement data. Attribute data (also referred to as categorical or count data) occurs when a variable is either classified into categories or used to count occurrences of a phenomenon. Attribute data places an item or person into one of two or more categories. For example, gender has only two categories. In other cases, there are many possible categories into which the variable can be classified. For example, there could be many reasons for a defective product or service. Regardless of the number of categories, the data consists of the number or frequency of items in a particular category, whether it is the number of voters in a sample who prefer a particular candidate in an election or the number of occurrences of each reason for a defective product or


service. Count data consists of the number of occurrences of a phenomenon in an item or person. Examples of count data are the number of blemishes in a yard of fabric or the number of cars entering a highway at a certain location during a specific time period. The colour of a ball (e.g., red, green, blue) or the breed of a dog (e.g., collie, shepherd, terrier) would be examples of qualitative or categorical variables.

Examples of discrete data in nonmanufacturing processes include:

Number of damaged containers

Customer satisfaction: fully satisfied vs. neutral vs. unsatisfied

Error-free orders vs. orders requiring rework

Measurement data (also referred to as continuous or variables data) results from a measurement taken on an item or person. Any value can theoretically occur, limited only by the precision of the measuring process. For example, height, weight, temperature, and cycle time are examples of measurement data. E.g. suppose the fire department mandates that all fire fighters must weigh between 150 and 250 pounds. The weight of a fire fighter would be an example of a continuous variable; since a fire fighter's weight could take on any value between 150 and 250 pounds. Examples of continuous data in nonmanufacturing processes include:

Cycle time needed to complete a task

Revenue per square foot of retail floor space

Costs per transaction

From a process point of view, continuous data are always preferred over discrete data, because they are more efficient (fewer data points are needed to make statistically valid decisions) and they allow the degree of variability in the output to be quantified. For example, it is much more valuable to know how long it actually took to resolve a customer complaint than simply noting whether it was late or not.

Variables can also be described according to the level of measurement scale. There are four scales of measurement: nominal, ordinal, interval, and ratio. Attribute data classified into categories is nominal scale data—for example, conforming versus nonconforming, on versus off, male versus female. No ranking of the data is implied. Nominal scale data is the weakest form of measurement. An ordinal scale is used for data that can be ranked, but cannot be measured—for example, ranking attitudes on a 1 to 5 scale, where 1 = very dissatisfied, 2 = dissatisfied, 3 = neutral, 4 = satisfied, and 5 = very satisfied. Ordinal scale data involves a stronger form of measurement than attribute data. However, differences between categories cannot be measured.

Measurement data can be classified into interval- and ratio-scaled data. In an interval scale, differences between measurements are a meaningful amount, but there is no true zero point. In a ratio scale, not only are differences between measurements a meaningful amount, but there is also a true zero point. Temperature in degrees Fahrenheit or Celsius is interval scaled because the difference between 30 and 32 degrees is the same as the difference between 38 and 40 degrees, but there is no true zero point (0° F is not the same as 0° C). Weight and time are ratio-scaled variables that have a true zero point; zero pounds are the same as zero grams, which are the same as zero stones. Twenty minutes is twice as long as ten minutes, and ten minutes is twice as long as five minutes.


Summary Measures

Population: A population consists of the set of all measurements for which the investigator is interested.

Sample: A sample is a subset of the measurements selected from the population.

Census: A census is a complete enumeration of every item in a population.

We use measures of central tendency (Mean, Median) to determine the location and measures of dispersion (Standard Deviation) to determine the spread. When we compute these measures from a sample, they are statistics, and if we compute these measures from a population, they are parameters. (To distinguish sample statistics and population parameters, Roman letters are used for sample statistics, and Greek letters are used for population parameters.)

Central Tendency: The tendency of data to cluster around some value. Central tendency is usually expressed by a measure of location such as the mean, median, or mode.

Measures of Central Tendency

Mean (Arithmetic Mean)

The mean of a sample of numerical observations is the sum of the observations divided by the number of observations. It is the simple arithmetic average of the numbers in the sample. If the sample members are denoted by x1, x2, ..., xn, where n is the number of observations in the sample (the sample size), then the sample mean is usually denoted by X̄ and pronounced "x-bar". The population mean is denoted by µ. The arithmetic mean (also called the mean or average) is the most commonly used measure of central tendency. You calculate the arithmetic mean by summing the numerical values of the variable, and then you divide this sum by the number of values. For a sample containing a set of n values, X1, X2, ..., Xn, the arithmetic mean of a sample (given by the symbol X̄, called X-bar) is written as:

X̄ = ΣXᵢ / n, where the sum runs over i = 1 to n

To illustrate the computation of the sample mean, consider the following example related to your Personal life: the time it takes to get ready to go to work in the morning. Many people wonder why it seems to take longer than they anticipate getting ready to leave for work, but very few people have actually measured the time it takes them to get ready in the morning. Suppose you operationally define the time to get ready as the time in minutes (rounded to the nearest minute) from when you get out of bed to when you leave your home. You decide to measure these data for a period of 10 consecutive working days, with the following results:


Table 4

Day      1   2   3   4   5   6   7   8   9   10
Minutes  39  29  43  52  39  44  40  31  44  35

To compute the mean (average) time, first compute the sum of all the data values: 39 + 29 + 43 + 52 + 39 + 44 + 40 + 31 + 44 + 35 = 396. Then, take this sum of 396 and divide by 10, the number of data values. The result, 39.6, is the mean time to get ready. Although the mean time to get ready is 39.6 minutes, not one individual day in the sample actually had that value. In addition, the calculation of the mean is based on all the values in the set of data. No other commonly used measure of central tendency possesses this characteristic.

CAUTION: WHEN TO USE THE ARITHMETIC MEAN

Because its computation is based on every value, the mean is greatly affected by any extreme value or values. When there are extreme values, the mean presents a distorted representation of the data. Thus, the mean is not the best measure of central tendency to use for describing or summarizing a set of data that has extreme values.
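For reference, the Table 4 calculation can be reproduced in a few lines of Python (a minimal sketch):

```python
times = [39, 29, 43, 52, 39, 44, 40, 31, 44, 35]  # minutes, from Table 4

mean = sum(times) / len(times)   # 396 / 10
print(mean)                      # 39.6
```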

Median

The median is the point in the middle of the ordered sample: half the sample values exceed it and half do not. It is used, not surprisingly, to measure where the center of the sample lies, and hence where the center of the population from which the sample was drawn might lie. The median of a set of data is the value that divides the data into two equal halves. When the number of observations is even, say 2n, it is customary to define the median as the average of the nth and (n + 1)st rank-ordered values. The median is the value that splits a ranked set of data into two equal parts. If there are no ties, half the values will be smaller than the median, and half will be larger. The median is not affected by any extreme values in a set of data. Whenever an extreme value is present, the median is preferred to the mean for describing the central tendency of a set of data.

To calculate the median from a set of data, you must first rank the data values from smallest to largest. The median is located at the (n + 1)/2 ranked position, where n is the number of values. You then apply one of two rules: Rule 1: If there is an odd number of values in the data set, the median is the middle ranked value. Rule 2: If there is an even number of values in the data set, the median is the average of the two values in the middle of the data set.

To compute the median for the sample of 10 times to get ready in the morning, you place the raw data in order as follows:


Table 5

Value   29  31  35  39  39  40  43  44  44  52
Rank     1   2   3   4   5   6   7   8   9  10

Median = 39.5

Using rule 2 for the even-sized sample of 10 days, the median corresponds to the (10 + 1)/2 = 5.5 ranked value, halfway between the fifth-ranked value and the sixth ranked value. Because the fifth-ranked value is 39 and the sixth ranked value is 40, the median is the average of 39 and 40, or 39.5. The median of 39.5 means that for half of the days, the time to get ready is less than or equal to 39.5 minutes, and for half of the days, the time to get ready is greater than or equal to 39.5 minutes.
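The two rules can be expressed directly in Python (a minimal sketch using the same ten values):

```python
times = sorted([39, 29, 43, 52, 39, 44, 40, 31, 44, 35])

n = len(times)
if n % 2 == 1:                                   # Rule 1: odd number of values
    median = times[n // 2]
else:                                            # Rule 2: even number of values
    median = (times[n // 2 - 1] + times[n // 2]) / 2
print(median)                                    # 39.5
```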

Mode

The mode of a sample is that observed value that occurs most frequently.

Measures of Dispersion (Spread)

A second important property that describes a set of numerical data is variation. Variation is the amount of dispersion, or spread, in a set of data, be it a sample or a population. Three frequently used measures of variation are the range, the variance, and the standard deviation.

The Range

The range is the simplest measure of variation in a set of data. The range is equal to the largest value minus the smallest value. The smallest sample value is called the minimum of the sample, and the largest sample value is called the maximum. The distance between the sample minimum and maximum is called the range of the sample. The range clearly is a measure of the spread of sample values. As such it is a fairly blunt instrument, for it takes no cognizance of where or how the values between the minimum and maximum might be located.

Range = largest value – smallest value

Using the data pertaining to the time to get ready in the morning:

Range = 52 – 29 = 23 minutes

This means that the largest difference between any two days in the time to get ready in the morning is 23 minutes.


Inter-quartile Range

The inter-quartile range is the difference between the third and first quartiles (Q3 – Q1). Quartiles divide the sample into four equal parts. The lower quartile has 25% of the sample values below it and 75% above. The upper quartile has 25% of the sample values above it and 75% below. The middle quartile is, of course, the median. The middle half of the sample lies between the upper and lower quartiles. The distance between the upper and lower quartile is called the inter-quartile range. Like the range, the inter-quartile range is a measure of the spread of the sample. It measures variability or dispersion.

The Variance and the Standard Deviation

Although the range is a measure of the total spread, it does not consider how the values distribute around the mean. Two commonly used measures of variation that take into account how all the values in the data are distributed around the mean are the variance and the standard deviation. These statistics measure how the values fluctuate around the mean.

A simple measure around the mean might just take the difference between each value and the mean, and then sum these differences. However, if you did that, you would find that because the mean is the balance point in a set of data, these differences sum to zero for every set of data. One measure of variation that does differ from data set to data set squares the difference between each value and the mean and then sums these squared differences. In statistics, this quantity is called a sum of squares (or SS). This sum of squares is then divided by the number of values minus 1 (for sample data) to get the sample variance. The square root of the sample variance (s²) is the sample standard deviation (s). This statistic is the most widely used measure of variation.

Sample Variance and Sample Standard Deviation

The standard deviation of a sample of numerical observations is a measure of the spread or range of the sample values. It is derived from the distance of each point in the sample from the sample mean (positive distance to the right, negative to the left). These distances are the deviations referred to in the name: they are deviations from the sample mean. If you sum the squared deviations, and then divide by one less than the sample size, you get what is known as the sample variance, typically denoted by s². The sample variance is a useful measure in itself of the variability in the sample values, but its units of measurement are the square of those of the sample values themselves. The standard deviation of a sample is the (positive) square root of the sample variance, and is usually denoted by s. It is a measurement on the same scale as that of the original sample values.

The standard deviation cannot be less than zero. If the standard deviation of a sample is zero, then all sample values are the same. If the sample values are not all the same, then they must exhibit some form of variability. How much variability the sample values exhibit is encapsulated by the standard deviation. If the standard deviation is small, the sample values cluster close to the sample mean. If the standard deviation is large, the sample values are widely dispersed. The steps for computing the variance and the standard deviation of a sample of data are as follows:


COMPUTING s2 AND s

To compute s2, the sample variance, do the following:

1. Compute the difference between each value and the mean.

2. Square each difference.

3. Add the squared differences.

4. Divide this total by n– 1.

To compute s, the sample standard deviation, take the square root of the variance.

Table 6 illustrates the computation of the variance and standard deviation using these steps for the time-to-get-ready data. You can see that the sum of the differences between the individual values and the mean is equal to zero.

Table 6

Time (X)   Difference Between X and the Mean   Squared Difference Around the Mean
39                 –0.6                                   0.36
29                –10.6                                 112.36
43                  3.4                                  11.56
52                 12.4                                 153.76
39                 –0.6                                   0.36
44                  4.4                                  19.36
40                  0.4                                   0.16
31                 –8.6                                  73.96
44                  4.4                                  19.36
35                 –4.6                                  21.16

Mean = 39.6   Sum of Differences = 0   Sum of Squared Differences = 412.4

You calculate the sample variance s² by dividing the sum of the squared differences computed in step 3 (412.4) by the sample size (10) minus 1:

s² = 412.4 / (10 – 1) = 45.82 square minutes

Because the variance is in squared units (squared minutes for these data), to compute the standard deviation you take the square root of the variance. Thus:

s = √45.82 ≈ 6.77 minutes


As summarized, we can make the following statements about the range, variance, and standard deviation.

Characteristics of Range, Variance and Standard Deviation

The more spread out or dispersed the data are, the larger will be the range, the variance, and the standard deviation.

The more concentrated or homogeneous the data are, the smaller will be the range, the variance, and the standard deviation.

If the values are all the same (so that there is no variation in the data), the range, variance, and standard deviation will all be zero.

The range, variance, or standard deviation will always be greater than or equal to zero.

Equations for Sample Variance and Standard Deviation

s² = Σ(Xᵢ – X̄)² / (n – 1)

s = √[ Σ(Xᵢ – X̄)² / (n – 1) ]

Where

X̄ = Sample Mean, n = Sample Size, Xᵢ = ith value of the variable X

Σ(Xᵢ – X̄)² = summation (over i = 1 to n) of all the squared differences between the Xᵢ values and X̄

For the time-to-get-ready data: s² = 412.4 / (10 – 1) = 45.82
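The full calculation from Table 6 can be verified in Python (a minimal sketch):

```python
times = [39, 29, 43, 52, 39, 44, 40, 31, 44, 35]

mean = sum(times) / len(times)                  # 39.6
ss = sum((x - mean) ** 2 for x in times)        # sum of squares = 412.4
variance = ss / (len(times) - 1)                # 45.82 (squared minutes)
std_dev = variance ** 0.5                       # about 6.77 minutes
print(round(variance, 2), round(std_dev, 2))
```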


Graphical Plots

Histogram

A histogram (from the Greek histos meaning mast of the ship – vertical bars of the histogram) of a sample of numerical values is a plot which involves rectangles which represent frequency of occurrence in a specific interval. A Histogram can be used to assess the shape and spread of continuous sample data.

Figure 9: Histogram of Transaction Time (x-axis: Transaction Time; y-axis: Frequency). Worksheet: Transaction.mtw
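A minimal sketch of producing such a histogram with matplotlib; the transaction times below are randomly generated placeholders, since the Transaction.mtw data is not reproduced here.

```python
import matplotlib.pyplot as plt
import numpy as np

# Placeholder transaction times (assumed, roughly centered at 45 with spread 8)
rng = np.random.default_rng(1)
transaction_time = rng.normal(loc=45, scale=8, size=100)

plt.hist(transaction_time, bins=10, edgecolor="black")
plt.xlabel("Transaction Time")
plt.ylabel("Frequency")
plt.title("Histogram of Transaction Time")
plt.show()
```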

Box Plot

A box plot can be used to show the shape of the distribution, its central value, and its variability. The median for each dataset is indicated by the black center line, and the first and third quartiles are the edges of the box, which spans the inter-quartile range (IQR). The extreme values (within 1.5 times the inter-quartile range from the upper or lower quartile) are the ends of the lines (whiskers) extending from the IQR. We may identify outliers on box plots by labeling observations that are at least 1.5 times the inter-quartile range (Q3 – Q1) from the edge of the box, highlighting each such data point with an asterisk. These points represent potential outliers.

Figure 10: Boxplot of Transaction Time (y-axis: Transaction Time). Worksheet: Transaction.mtw
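A minimal sketch of the 1.5 × IQR outlier rule described above, using NumPy percentiles on placeholder data (the values are generated, not taken from Transaction.mtw):

```python
import numpy as np

rng = np.random.default_rng(2)
transaction_time = rng.normal(loc=45, scale=8, size=100)   # placeholder data

q1, q3 = np.percentile(transaction_time, [25, 75])
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr              # whisker limits
outliers = transaction_time[(transaction_time < lower) | (transaction_time > upper)]
print(f"IQR = {iqr:.1f}, potential outliers: {outliers}")
```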


Scatter Plot

A graph of a set of data pairs (x, y) used to determine whether there is a statistical relationship between the variables x and y. In this scatter plot, we explore the correlation between Weight Gained (Y) and Calories Consumed (X)

Figure 11: Scatterplot of Weight gained (y-axis) vs Calories Consumed (x-axis). Worksheet: Calories Consumed.mtw
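A minimal sketch of a scatter plot together with a Pearson correlation coefficient, using assumed (x, y) pairs rather than the Calories Consumed.mtw data:

```python
import matplotlib.pyplot as plt
import numpy as np

# Assumed illustrative pairs of calories consumed and weight gained
calories = np.array([1500, 1800, 2100, 2500, 2800, 3200, 3500, 3900])
weight_gain = np.array([150, 220, 380, 420, 610, 700, 890, 1050])

r = np.corrcoef(calories, weight_gain)[0, 1]   # Pearson correlation coefficient
plt.scatter(calories, weight_gain)
plt.xlabel("Calories Consumed")
plt.ylabel("Weight gained")
plt.title(f"Scatterplot (r = {r:.2f})")
plt.show()
```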

Recall Basic Probability Theory

• Probability is used whenever we deal with uncertainty. Probability lies between 0 and 1.

• The larger the probability value, the more likely the event is to occur.

• Total probability of all possible events occurring: ΣP(Eᵢ) = 1

• Probability of A and B occurring together: P(A ∩ B)

• If A and B are mutually exclusive, P(A ∩ B) = 0

• Sum Rule: P(A ∪ B) = P(A) + P(B) – P(A ∩ B)

• Conditional Probability P(A|B): the probability of A occurring given that B has occurred

• Multiplication Rule: P(A ∩ B) = P(A|B) × P(B)
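A quick numeric check of the sum and multiplication rules, using assumed illustrative probabilities:

```python
p_a, p_b = 0.30, 0.50            # assumed illustrative probabilities
p_a_given_b = 0.40               # assumed P(A|B)

p_a_and_b = p_a_given_b * p_b    # multiplication rule: 0.20
p_a_or_b = p_a + p_b - p_a_and_b # sum rule: 0.60
print(p_a_and_b, p_a_or_b)
```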


Random Variable and Probability Distributions

A random variable is a variable whose value is determined by the outcome of a random experiment.

A discrete random variable is one whose set of assumed values is countable (arises from counting). The probability distribution of a discrete random variable is a discrete distribution, e.g. Binomial, Poisson.

A continuous random variable is one whose set of assumed values is uncountable (arises from measurement). The probability distribution of a continuous random variable is a continuous distribution, e.g. Normal.

Binomial Distribution

The binomial distribution describes the possible number of times that a particular event will occur in a sequence of observations. The event is coded binary; it may or may not occur. The binomial distribution is used when a researcher is interested in the occurrence of an event, not in its magnitude. For instance, in a clinical trial, a patient may survive or die. The researcher studies the number of survivors, and not how long the patient survives after treatment. The binomial distribution is specified by the number of observations, n, and the probability of occurrence, which is denoted by p. Other situations in which binomial distributions arise are quality control, public opinion surveys, medical research, and insurance problems. The following conditions have to be met for using a binomial distribution:

• The number of trials is fixed

• Each trial is independent

• Each trial has one of two outcomes: event or non-event

• The probability of an event is the same for each trial

Suppose a process produces 2% defective items. You are interested in knowing how likely it is to get 3 or more defective items in a random sample of 25 items selected from the process. The number of defective items (X) follows a binomial distribution with n = 25 and p = 0.02.
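A minimal sketch of this calculation with scipy.stats.binom, subtracting the cumulative probability P(X ≤ 2) from 1:

```python
from scipy.stats import binom

n, p = 25, 0.02
# P(X >= 3) = 1 - P(X <= 2)
prob = 1 - binom.cdf(2, n, p)
print(round(prob, 4))            # roughly 0.0132
```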

Poisson Distribution

The Poisson distribution was developed by the French mathematician Simeon Denis Poisson in 1837. The Poisson distribution is used to model the number of events occurring within a given time interval. The Poisson distribution arises when you count a number of events across time or over an area. You should think about the Poisson distribution for any situation that involves counting events. Some examples are:

Figure 12

One of the properties of a binomial distribution is that when n is large and p is close to 0.5, the binomial distribution can be approximated by a normal distribution. For this graph, n = 100 and p = 0.5.


the number of Emergency Department visits by an infant during the first year of life,

The number of white blood cells found in a cubic centimetre of blood.

Sometimes, you will see the count represented as a rate, such as the number of deaths per year due to horse kicks, or the number of defects per square yard.

Poisson Distribution for Discrete Data type

For discrete data of the defects type, the probability of occurrence of r defects is given by:

P(r) = (e⁻ᵐ × mʳ) / r!

where m = mean = defects per unit (dpu), and variance = dpu.

Examples:

• Number of accidents on roads per day in City C

• Number of soldering defects per PCB

• Number of machine A breakdowns in a month
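A minimal sketch with scipy.stats.poisson; the dpu value below is assumed purely for illustration (0.3 soldering defects per PCB):

```python
from scipy.stats import poisson

m = 0.3                              # assumed mean defects per unit (dpu)
prob_zero = poisson.pmf(0, m)        # probability of a defect-free board
prob_two_plus = 1 - poisson.cdf(1, m)  # probability of 2 or more defects
print(round(prob_zero, 4), round(prob_two_plus, 4))
```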

Binomial or Poisson

• If a mean or average probability of an event happening per unit time/per page/per mile cycled etc. is given, and you are asked to calculate a probability of n events happening in a given time/number of pages/number of miles cycled, then the Poisson Distribution is used.

• If, on the other hand, an exact probability of an event happening is given, or implied, in the question, and you are asked to calculate the probability of this event happening k times out of n, then the Binomial Distribution must be used.

Four Assumptions

Information about how the data was generated can help Green Belt/Black Belt decide whether the Poisson distribution fits. The Poisson distribution is based on four assumptions. We will use the term "interval" to refer to either a time interval or an area, depending on the context of the problem.

The probability of observing a single event over a small interval is approximately proportional to the size of that interval.

The probability of two events occurring in the same narrow interval is negligible.

The probability of an event within a certain interval does not change over different intervals.

The probability of an event in one interval is independent of the probability of an event in any other non-overlapping interval.

The Poisson distribution is similar to the binomial distribution because they both model counts of events. However, the Poisson distribution models a finite observation space with any integer number of events


greater than or equal to zero. The binomial distribution models a fixed number of discrete trials from 0 to n events.

Normal Distribution

The most widely used model for the distribution of a random variable is a normal distribution. Whenever a random experiment is replicated, the random variable that equals the average (or total) result over the replicates tends to have a normal distribution as the number of replicates becomes large. De Moivre presented this fundamental result, known as the central limit theorem, in 1733. Unfortunately, his work was lost for some time, and Gauss independently developed a normal distribution nearly 100 years later. Although De Moivre was later credited with the derivation, a normal distribution is also referred to as a Gaussian distribution.

A normal distribution is an important statistical data distribution pattern occurring in many natural phenomena, such as height, blood pressure, lengths of objects produced by machines, etc. Certain data, when graphed as a histogram (data on the horizontal axis, amount of data on the vertical axis), creates a bell-shaped curve known as a normal curve, or normal distribution. A normal distribution is symmetrical with a single central peak at the mean (average) of the data. The shape of the curve is described as bell-shaped, with the graph falling off evenly on either side of the mean. Fifty percent of the distribution lies to the left of the mean and fifty percent lies to the right of the mean. The spread of a normal distribution is controlled by the standard deviation. The smaller the standard deviation, the more concentrated the data. The mean and the median are the same in a normal distribution. In a standard normal distribution, the mean is 0 and the standard deviation is 1.

For example, the heights of all adult males residing in the state of Punjab are approximately normally distributed. Therefore, the heights of most men will be close to the mean height of 69 inches. A similar number of men will be just taller and just shorter than 69 inches. Only a few will be much taller or much shorter. The mean (μ) and the standard deviation (σ) are the two parameters that define the normal distribution. The mean is the peak or center of the bell-shaped curve. The standard deviation determines the spread in the data. Approximately 68.27% of observations are within +/- 1 standard deviation of the mean; 95.46% are within +/- 2 standard deviations of the mean; and 99.73% are within +/- 3 standard deviations of the mean. For the height of men in Punjab, the mean height is 69 inches and the standard deviation is 2.5 inches. For a continuous distribution, like the normal curve, the area under the probability density function (PDF) gives the probability of occurrence of an event.


Figure 13

Approximately 68.27% of men in Punjab are between 66.5 (m - 1s) and 71.5 (m + 1s) inches tall.

Figure 14

Approximately 95.46% of men in Punjab are between 64 (m - 2s) and 74 (m + 2s) inches tall.

Figure 15

Approximately 99.73% of men in Punjab are between 61.5 (m - 3s) and 76.5 (m + 3s) inches tall.

A useful link for calculating probabilities: http://www.graphpad.com/quickcalcs/probability1.cfm
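These probabilities can also be computed directly, for example with scipy.stats.norm (a sketch using the Punjab height figures above, μ = 69 inches and σ = 2.5 inches):

```python
from scipy.stats import norm

mu, sigma = 69, 2.5                  # mean and standard deviation of heights (inches)
within_1s = norm.cdf(71.5, mu, sigma) - norm.cdf(66.5, mu, sigma)   # about 0.6827
within_3s = norm.cdf(76.5, mu, sigma) - norm.cdf(61.5, mu, sigma)   # about 0.9973
print(round(within_1s, 4), round(within_3s, 4))
```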


Managing Six Sigma Projects

Managing Projects

Six Sigma is different from other quality or process improvement methodologies: 'IT DEMANDS RESULTS'. These results are delivered by PROJECTS that are tightly linked to customer demands and enterprise strategy. The efficacy of Six Sigma projects is greatly improved by combining project management and business process improvement practices.

What is a Project?

A project is a temporary endeavor undertaken to create a unique product, service, or result. The temporary nature of a project indicates a definite beginning and end. The end is reached when the project's objectives have been achieved, or when the project is terminated because its objectives will not or cannot be met, or when the need for the project no longer exists. Temporary doesn't necessarily mean short in duration. Temporary doesn't generally apply to the product, service, or result created by the project; most projects are undertaken to create a lasting outcome.

The logical process flow

A logical process flow, as explained by Thomas Pyzdek, is as follows:

1. Define the project's goals and deliverables.

   a. If these are not related to the organisation's strategic goals and objectives, stop. The project is not a Six Sigma project. This does not necessarily mean that it isn't a "good" project or that the project shouldn't be done. There are many worthwhile and important projects that are not Six Sigma projects.

2. Define the current process.

3. Analyze the measurement systems.

4. Measure the current process and analyze the data using exploratory and descriptive statistical methods.

   a. If the current process meets the goals of the project, establish control systems and stop, else…

5. Audit the current process and correct any deficiencies found.

   a. If the corrected process meets the goals of the project, establish control systems and stop, else…

6. Perform a process capability study using SPC.


7. Identify and correct special causes of variation.

   a. If the controlled process meets the goals of the project, establish control systems and stop, else…

8. Optimize the current process by applying statistically designed experiments.

   a. If the optimized process meets the goals of the project, establish control systems and stop, else…

9. Employ breakthrough strategy to develop and implement an entirely new process that meets the project's goals.

10. Establish control and continuous improvement systems and stop.

Project Plan

Develop the Project Charter

Project charters (sometimes called project scope statements) should be prepared for each project and subproject. The Project Charter includes the project justification, the major deliverables, and the project objectives. It forms the basis of future project decisions, including the decision of when the project or subproject is complete. The Project Charter is used to communicate with stakeholders and to allow scope management as the project moves forward. The Project Charter is a written document issued by the Project Sponsor. The Project Charter gives the project team authority to use organisational resources for project activities.

Conduct a Feasibility Analysis- Is This a Valid Project?

Before launching a significant effort to solve a business problem, be sure that it is the correct problem and not just a symptom. Is the "defect" the Green Belt/Black Belt is trying to eliminate something the customer cares about or even notices? Is the design requirement really essential, or can engineering relax the requirement? Is the performance metric really a key business driver, or is it arbitrary? Conduct a project validation analysis and discuss it with the stakeholders.

Project Metrics

At this point the Green Belt/Black Belt has identified the project's customers and what they expect in the way of project deliverables. Now the Green Belt/Black Belt must determine precisely how progress toward achieving the project's goals will be measured.

What Is the Total Budget for This Project?

Projects consume resources, therefore to accurately measure project success, it is necessary to keep track of how these resources are used. The total project budget sets an upper limit on the resources this project will be allowed to consume. Knowing this value, at least approximately, is vital for resource planning.

How Will I Measure Project Success?

Green Belt/Black Belt should have one or more metrics for each project deliverable.

Metrics should be selected to keep the project focused on its goals and objectives.

Metrics should detect project slippage soon enough to allow corrective action to avert damage.

Metrics should be based on customer or Sponsor requirements.


Financial Benefits

Preliminary estimates of benefits were made previously during the initial planning. However, the data obtained by the team will allow the initial estimates to be made more precisely at this time. Whenever possible, “characteristics” should be expressed in the language of management: dollars. One needn’t strive for to-the-penny accuracy; a rough figure is usually sufficient. It is recommended that the finance and accounting department develop dollar estimates; however, in any case it is important that the estimates at least be accepted (in writing) by the accounting and finance department as reasonable. This number can be used to compute a return on investment (ROI) for the project.
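As a simple illustration (assumed figures, not from any actual project): if a project consumes $80,000 in resources and the accepted benefit estimate is $200,000 per year, then ROI = (200,000 - 80,000) / 80,000 = 150%.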

How Will I Monitor Satisfaction with Project Progress?

Six Sigma projects have a significant impact on people while they are being conducted. It is important that the perspectives of all interested parties be periodically monitored to ensure that the project is meeting their expectations and not becoming too disruptive. The Green Belt/Black Belt should develop a means for obtaining this information, analyzing it, and taking action when the results indicate a need. Data collection should be formal and documented. Relying on “gut feeling” is not enough.

Means of monitoring include, but are not limited to, personal interviews, focus groups, surveys, meetings, and comment cards. The Green Belt/Black Belt may also choose to use Stakeholder analysis and Force Field analysis to proactively assess change management challenges that lie ahead.

Work Breakdown Structures (WBS):

The creation of work breakdown structures involves a process for defining the final and intermediate products of a project and their interrelationships. Defining project activities is complex. It is accomplished by performing a series of decompositions, followed by a series of aggregations. For example, a software project to develop an SPC software application would disaggregate the customer requirements into very specific engineering requirements. The customer requirement that the product create charts would be decomposed into engineering requirements such as subroutines for computing subgroup means and ranges, plotting data points, drawing lines, etc. Re-aggregation would involve, for example, linking the various modules to produce an Xbar chart and display it on the screen.

Creating the WBS

The project deliverables expected by the project’s sponsors were initially defined in the Project Charter. For most Six Sigma projects, major project deliverables are so complex as to be unmanageable. Unless they are broken into components, it isn’t possible to obtain accurate cost and duration estimates for each deliverable. WBS creation is the process of identifying manageable components or sub-products for each major deliverable.

Project Schedule Development

Project schedules are developed to ensure that all activities are completed, reintegrated, and tested on or before the project due date. The output of the scheduling activity is a time chart (schedule) showing the start and finish times for each activity as well as its relationship to other activities in the project and responsibility for completing the activity. The schedule must identify activities that are critical in the sense that they must be completed on time to keep the project on schedule. The information obtained in preparing the schedule can be used to improve it. Activities that the analysis indicates to be critical are prime candidates for improvement. Pareto analysis can be used to identify those


critical elements that are most likely to lead to significant improvement in overall project completion time. Cost data can be used to supplement the time data and the combined time/cost information can be analyzed using Pareto analysis

Project Deadline

What is the latest completion date that allows the project to meet its objective?

What are the penalties for missing this date? Things to consider are lost market share, contract penalties, fines, lost revenues, etc.

Activity Definition

Once the WBS is complete, it can be used to prepare a list of the activities (tasks) necessary to complete the project. Activities don’t simply complete themselves. The resources, time, and personnel necessary to complete the activities must be determined. A common problem to guard against is scope creep. As activities are developed, be certain that they do not go beyond the project’s original scope. Equally common is the problem of scope drift. In these cases, the project focus gradually moves away from its original Charter. Since the activities are the project, this is a good place to carefully review the scope statement in the Project Charter to ensure that the project remains focused on its goals and objectives.

Activity Dependencies

Some project activities depend on others: sometimes a given activity may not begin until another activity is complete. To sequence activities so they happen at the right time, Green Belt/Black Belt must link dependent activities and specify the type of dependency. The linkage is determined by the nature of the dependency. Activities are linked by defining the dependency between their finish and start dates.

Estimating Activity Duration

In addition to knowing the dependencies, to schedule the project Green Belt/Black Belt also need estimates of how long each activity might take. This information will be used by senior management to schedule projects for the enterprise and by the project manager to assign resources, to determine when intervention is required, and for various other purposes.

Identify Human Resources and Other Resources Needed to Complete the Project

All resources should be identified, approved, and procured. The Green Belt/Black Belt should know who is to be on the team and what equipment and materials are being acquired to achieve project goals. In today's business climate, it's rare for people to be assigned to one project from start to finish with no additional responsibilities outside the scope of a single project. Sharing resources with other areas of the organisation or among several projects requires careful resource management to ensure that the resource will be available to your project when it is needed.

Visualize Project plan using Gantt Charts

A Gantt chart shows the relationships among the project tasks, along with time estimates. The horizontal axis of a Gantt chart shows units of time (days, weeks, months, etc.). The vertical axis shows the activities to be completed. Bars show the estimated start time and duration of the various activities.


Figure 16: Visualization of Project plan using Gantt chart
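A minimal sketch of such a chart with matplotlib, using hypothetical tasks, start offsets, and durations (none of these values come from the figure above):

```python
import matplotlib.pyplot as plt

# Hypothetical tasks with start offsets and durations in days
tasks = ["Define", "Measure", "Analyze", "Improve", "Control"]
start = [0, 10, 25, 45, 70]
duration = [10, 15, 20, 25, 15]

fig, ax = plt.subplots()
ax.barh(tasks, duration, left=start)   # horizontal bars: start time and duration
ax.set_xlabel("Project day")
ax.invert_yaxis()                      # first task at the top
ax.set_title("Gantt chart (illustrative)")
plt.show()
```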

Analyze Network Diagrams

A project network diagram shows both the project logic and the project’s critical path activities, i.e., those activities that, if not completed on schedule, will cause the project to miss its due date. Although useful, Gantt charts and their derivatives provide limited project schedule analysis capabilities. The successful management of large-scale projects requires more rigorous planning, scheduling, and coordinating of numerous, interrelated activities. To aid in these tasks, formal procedures based on the use of networks and network techniques were developed beginning in the late 1950s. The most prominent of these procedures have been PERT (Program Evaluation and Review Technique) and CPM (Critical Path Method). Critical Path Method systems are used to:

Aid in planning and controlling projects

Determine the feasibility of meeting specified deadlines

Identify the most likely bottlenecks in a project

Evaluate the effects of changes in the project requirements or schedule

Evaluate the effects of deviating from schedule

Evaluate the effect of diverting resources from the project or redirecting additional resources to the project.

Project scheduling by CPM consists of four basic phases: planning, scheduling, improvement, and controlling.

The planning phase involves breaking the project into distinct activities. The time estimates for these activities are then determined and a network (or arrow) diagram is constructed, with each activity being represented by an arrow.


The ultimate objective of the scheduling phase is to construct a time chart showing the start and finish times for each activity as well as its relationship to other activities in the project. The schedule must identify activities that are critical in the sense that they must be completed on time to keep the project on schedule.

It is vital not to merely accept the schedule as a given. The information obtained in preparing the schedule can be used to improve it. Activities that the analysis indicates to be critical are candidates for improvement. Pareto analysis can be used to identify those critical elements that are most likely to lead to significant improvement in overall project completion time. Cost data can be used to supplement the time data. The combined time/cost information can be analyzed using Pareto analysis.

The final phase in CPM project management is project control. This includes the use of the network diagram and time chart for making periodic progress assessments. CPM network diagrams can be created by a computer program or constructed manually.

The Critical Path Method (CPM) calculates the longest path in a project so that the project manager can focus on the activities that are on the critical path and get them completed on time.

Figure 17: Example Critical Path Method (CPM)

Procedure

Use sticky notes to individually list all tasks in the project. Underneath each task, draw a horizontal arrow pointing right.

Arrange the sticky notes in the appropriate sequence (from left to right).

Between each pair of tasks, draw circles to represent events and to mark the beginning or end of a task.

Label all events, in sequence, with numbers. Label all tasks, in sequence, with letters.

For each task, estimate the completion time. Write the time below the arrow for each task.

Draw the critical path by highlighting the longest path from the beginning to the end of the project.
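As a computational companion to this procedure, the sketch below finds the project duration by computing the longest path through a small, hypothetical dependency table. It is not a full CPM tool; the task names, durations, and dependencies are invented for illustration.

```python
# Hypothetical task list: task -> (duration in days, list of predecessor tasks)
tasks = {
    "A": (3, []),
    "B": (5, ["A"]),
    "C": (2, ["A"]),
    "D": (4, ["B", "C"]),
    "E": (1, ["D"]),
}

earliest_finish = {}

def finish(task):
    """Earliest finish time = duration + latest earliest-finish of all predecessors."""
    if task not in earliest_finish:
        duration, preds = tasks[task]
        earliest_finish[task] = duration + max((finish(p) for p in preds), default=0)
    return earliest_finish[task]

project_duration = max(finish(t) for t in tasks)
print("Project duration:", project_duration)   # 13 days here (critical path A-B-D-E)
```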


Managing Teams

Stages of a Team - The four stages that all teams go through are shown below. In each phase, the project leader has to use different techniques to push the team along.

Some of the problems with teams are:

Groupthink – which is the unquestioned acceptance of teams’ decisions

Feuding – fighting between different team members

Floundering – teams that take forever to reach a decision

Rushing – teams that want to skip all steps and finish the project soon

Managing Change

We will discuss three change management tools: Force Field analysis, Stakeholder analysis, and Resistance analysis.

Force Field Analysis

Force Field Analysis is a useful technique for looking at all the forces for and against a decision. It helps in identifying the restrainers and drivers to change. In effect, it is a specialized method of weighing pros and cons. By carrying out the analysis, the Green Belt/Black Belt can plan to strengthen the forces supporting a decision and reduce the impact of opposition to it. Figure 18 shows an example of Force Field analysis. In this example, there are 4 forces for the change and 2 forces against the change. This indicates that there are more forces for the change. Can the Green Belt/Black Belt think of deploying some action items to further increase the forces for change?

Form

•Identifying and informing members

•Everyone is excited at the new responsibilities

•Use Project Charter to establish a common set of expectations for all team members

Storm

•Teams start to become disillusioned. Why are we here, is the goal achievable?

•Identifying resistors, counselling to reduce resistance.

•Help people with the new roles & responsibilities

•Have a different person take meeting minutes, lead team meetings etc

Norm

•Communication of norms (rules), building up of relationships amongst members.

•Productivity of team is increasing

•Help team push to the next stage

Perform

•Contribution from the members- Ideas, innovation, creation.

•All members contribute to the fullest.

•Teams should reach this stage quickest for the best results.

•Motivate team members by recognition, financial rewards, quick-win opportunities.


Figure 18: Sample Force Field Analysis

Procedure of Force Field Analysis

Write the desired change or problem at the top of an easel. Draw a vertical line underneath.

Brainstorm all the driving forces that support the change (or cause the problem to occur). Write these on the left side of the line. Determine how strong each force is. In the area between the words and the centerline, draw an arrow pointing to the centerline. The arrow's length can be used to represent the strength of each particular driving force. An alternative to different arrow lengths would be to assign a score (between 1 and 5) to each force.

Brainstorm all the restraining forces that hinder the change (or prevent the problem from occurring). Write these on the right side of the line. Determine how strong each force is. In the area between the words and the centerline, draw an arrow pointing to the centerline. The arrow's length can be used to represent the strength of each particular restraining force. An alternative to different arrow lengths would be to assign a score (between 1 and 5) to each force.

For a desired change, discuss various means to diminish or eliminate the restraining forces. For a problem, discuss various means to diminish or eliminate the driving forces. Focus on the strongest forces.

Note: It is possible to have a one-to-many relationship between driving and restraining forces. For example the driving force--control rising maintenance costs could have two restraining forces--cost and disruption.

Stakeholder Analysis

Stakeholder Analysis is a technique that identifies individuals or groups affected by and capable of influencing the change process. Assessment of stakeholders' issues and viewpoints is necessary to identify the range of interests that need to be taken into consideration in planning change, and to develop the vision and change process in a way that generates the greatest support. The following parameters are used to develop the segmentation of the stakeholders:

Levels of Influence: High, Medium, Low

Level of Impact: High, Medium, Low

Minimum Support Required: Champion, Supporter, Neutral

Current Position: Champion, Supporter, Neutral, Concerned, Critic, Unknown


Stakeholder Action Plan

o The plan should outline the perceptions and positions of each Stakeholder group, including means of involving them in the change process and securing their commitment

o Define how Green Belt/Black Belt intend to leverage the positive attitudes of enthusiastic stakeholders and those who “own” resources supportive of change

o State how Green Belt/Black Belt plan to minimize risks, including the negative impact of those who will oppose the change

o Clearly communicate coming change actions, their benefits and desired Stakeholder roles during the change process

Resistance Analysis

Technical Resistance:

Stakeholders believe Six Sigma produces feelings of inadequacy or stupidity on statistical and process knowledge

Political Resistance:

Stakeholders see 6 Sigma as a loss of power and control

Organisational Resistance:

Stakeholders experience issues of pride, ego and loss of ownership of change initiatives

Individual Resistance:

Stakeholders experience fear and emotional paralysis as a result of high stress

Strategies to Overcome Resistance

Technical Resistance:

Focus on high level concepts to build competencies. Then add more statistical theorems as knowledge base broadens

Political Resistance:

Address issues of “perceived loss” straight on. Look for Champions to build consensus for 6 Sigma and its impact on change

Organisational Resistance:

Look for ways to reduce Resistance.


Individual Resistance:

Decrease the fear by increased involvement, information and education

John Kotter's 8 Step Change Management Plan

John Kotter considers 'lack of communication' as one of the most common reasons for project failure. According to John Kotter, the following eight steps can be followed to enable change within an organization.

Step 1: Increase Urgency

The data is clear that the rate of change in the world is increasing exponentially. Numerous factors indicate that not only is the world moving more quickly, but that the rate of change may be the defining characteristic of the business world for the foreseeable future. Establishing a sense of urgency is necessary to gain the cooperation required to drive a significant change effort. Most companies ignore this step - indeed close to 50% of the companies that fail to make needed change make their mistakes at the very beginning. Leaders who understand the importance of a sense of urgency are good at taking the pulse of their company and differentiating between complacency, false urgency and true urgency

Tactics that can make this happen include:

Bringing the Outside In

Behaving with Urgency Every Day

Finding Opportunity in Crisis

Dealing with No Nos

Step 2: Build a Guiding Team

Successful transformation is driven through the power of a volunteer army made up of leaders from across the enterprise who band together to knock down all the barriers to successful strategy implementation.

Putting together the right coalition of people to lead a change initiative is critical to its success. That coalition must have the right composition, a significant level of trust, and a shared objective. In a rapidly changing world, complex organizations are forced to make decisions more quickly and with less certainty than they would like and with greater sacrifice than they would prefer. It is clear that teams of leaders and managers, acting in concert, are the only effective way to make productive decisions under these circumstances.

In putting together a Guiding Coalition, the team as a whole should reflect:

Position Power: Enough key players on board so that those left out cannot block progress.

Expertise: All relevant points of view should be represented so that informed intelligent decisions can be made.

Credibility: The group should be seen and respected by those in the firm so that the group’s pronouncements will be taken seriously by other employees.

Leadership: The group should have enough proven leaders to be able to drive the change process.


This last point is particularly important. The Guiding Coalition should be comprised of both managers and leaders who work together as a team. The managers keep the process under control while the leaders drive the change.

Step 3: Get the Vision Right

A clear message that speaks to both the hearts and the minds of the organization will focus the energy of the organization for the results that the leaders seek.

A clear vision serves three important purposes. First, it simplifies hundreds or thousands of more detailed decisions. Second, it motivates people to take action in the right direction even if the first steps are painful. Third, it helps to coordinate the actions of different people in a remarkably fast and efficient way. A clear and powerful vision will do far more than an authoritarian decree or micromanagement can ever hope to accomplish. Often the vision is part of a larger system that includes strategies, plans and budgets. Such visions must be seen as strategically feasible. To be effective, a vision must take into account the current realities of the enterprise, but also set forth goals that are truly ambitious.

Thus, effective visions have six key characteristics. They are:

Imaginable: They convey a clear picture of what the future will look like.

Desirable: They appeal to the long-term interest of employees, customers, shareholders and others who have a stake in the enterprise.

Feasible: They contain realistic and attainable goals.

Focused: They are clear enough to provide guidance in decision making.

Flexible: They allow individual initiative and alternative responses in light of changing conditions.

Communicable: They are easy to communicate and can be explained quickly.

Step 4: Communicate for Buy-In

Successful transformation has little to do with competent project management. Its focus is not on overcoming resistance to change. It is not about convincing or pushing people to do their part because they have to. It goes way beyond "engaging" employees. It goes way beyond getting people to "buy in" to a plan. It mobilizes an aligned army of people.

Gaining an understanding of and commitment to a new direction is never an easy task, especially in complex organizations. Under-communication and inconsistency are rampant, and both create stalled transformations. To be effective, the vision must be communicated in hour-by-hour activities: it will be referred to in emails, in meetings and in presentations – it will be communicated anywhere and everywhere.

Actions Speak Louder Than Words

Even more important than what is said is what is done. Leaders who transform their organizations “walk the talk.” They seek to become a living example of the new corporate culture that the vision aspires to. Nothing undermines a communication program more quickly than inconsistent actions by leadership, and nothing speaks as powerfully as someone who backs up their words with behavior. When an entire team of senior managers starts behaving differently and embodies the change they want to see, it sends a powerful message to the entire organization. These actions increase motivation, inspire confidence and decrease cynicism.


Step 5: Empower Action

The solutions generated should leave people more energized, more skilled, and wanting to provide the initiative and leadership needed to handle future changes.

Typically, empowering employees involves addressing four major obstacles: structures, skills, systems and supervisors. We will explore two of these here:

Structural Barriers

Often the internal structures of companies work at cross-purposes to the change vision. An organization that claims to want to be customer focused may find that its structures fragment resources and responsibilities for products and services. Many times it is difficult to remove these barriers in the midst of the change process, but some obstacles are so disempowering that they must be changed. Typically, the most effective of these changes can occur in the human resources area: realigning incentives and performance appraisals to reflect the change vision can have a profound effect on the ability to accomplish it. Management information systems can also have a big impact on the successful implementation of a change vision.

Troublesome Supervisors

Another barrier to effective change can be troublesome supervisors. Often these managers have dozens of interrelated habits that add up to a style of management that inhibits change. They may not actively undermine the effort, but they are simply not “wired” to go along with what the change requires. Often enthusiastic change agents refuse to confront these people. While that approach can work in the early stages of a change initiative, by Step 5 it becomes a real problem. Easy solutions to this problem don’t exist. Sometimes managers will concoct elaborate strategies or attempt manipulation to deal with these people. If done skillfully this only slows the process and, if exposed, looks terrible – sleazy, cruel and unfair – and undermines the entire effort. Typically, the best solution is honest dialogue.

Step 6: Create Short-Term Wins

Celebrate success visibly and reward those that have helped accomplish it. Maintaining the momentum for change is critical to long term victory in implementing strategic initiatives.

For leaders in the middle of a long-term change effort, short-term wins are essential. Running a change effort without attention to short-term performance is extremely risky. To ensure success, short term wins must be both visible and unambiguous. The wins must also be clearly related to the change effort. Such wins provide evidence that the sacrifices that people are making are paying off. This increases the sense of urgency and the optimism of those who are making the effort to change. These wins also serve to reward the change agents by providing positive feedback that boosts morale and motivation. The wins also serve the practical purpose of helping to fine tune the vision and the strategies. The guiding coalition gets important information that allows them to course-correct.


Step 7: Don’t Let Up

Ultimately, the 8-Step Process for Leading Change can lead to a sustained dual-operating organization that is highly adaptive. This type of organization works in an aligned, urgent and accelerated manner, which creates a significant competitive advantage for organizations of all types.

Resistance is always waiting in the wings to re-assert itself. Even if you are successful in the early stages, you may just drive resistors underground where they wait for an opportunity to emerge when you least expect it. The consequences of letting up can be very dangerous. Whenever you let up before the job is done, critical momentum can be lost and regression may soon follow. The new behaviors and practices must be driven into the culture to ensure long-term success. Once regression begins, rebuilding momentum is a daunting task.

In a successful major change initiative, by stage 7 you will begin to see:

More projects being added

Additional people being brought in to help with the changes

Senior leadership focused on giving clarity to an aligned vision and shared purpose

Employees empowered at all levels to lead projects

Reduced interdependencies between areas

Constant effort to keep urgency high

Consistent show of proof that the new way is working

Leadership is invaluable in surviving Step 7. Instead of declaring victory and moving on, these transformational leaders will launch more and more projects to drive the change deeper into the organization. They will also take the time to ensure that all the new practices are firmly grounded in the organization’s culture. Managers, by their nature, think in shorter timeframes.

Step 8: Make Change Stick

New practices must grow deep roots in order to remain firmly planted in the culture. Culture is composed of norms of behavior and shared values. These social forces are incredibly strong. Every individual that joins an organization is indoctrinated into its culture, generally without even realizing it. Its inertia is maintained by the collective group of employees over years and years. Changes – whether consistent or inconsistent with the old culture – are difficult to ingrain. This is why cultural change comes in Step 8, not Step 1. Some general rules about cultural change include:

Cultural change comes last, not first

You must be able to prove that the new way is superior to the old

The success must be visible and well communicated

You will lose some people in the process

You must reinforce new norms and values with incentives and rewards – including promotions

Reinforce the culture with every new employee

Return on Investment (ROI)

Return on Investment (ROI) analysis is one of several commonly used financial metrics for evaluating the financial consequences of business investments, decisions, or actions. ROI analysis compares the magnitude and timing of investment gains directly with the magnitude and timing of investment costs. To calculate ROI, the benefit (return) of an investment is divided by the cost of the investment; the result is expressed as a percentage or a ratio. The return on investment formulas are shown in Table 7.

Table 7: Return on Investment Formulas

ROI Formula | What it demonstrates | Calculation

Net Benefit | Demonstrates the benefits after considering the cost | Net Benefit = Benefit - Cost

Benefit Cost Ratio (BCR) | Demonstrates the return for every dollar invested | BCR = Benefits / Costs

ROI % | Demonstrates the percentage of return for every dollar invested when considering cost | ROI % = (Net Benefits / Costs) × 100

For example, ROI is the ratio of money gained or lost to the initial investment:

Situation 1: A $1,000 investment that earns interest of $50 has a 5% ROI.

Situation 2: A $100 investment that earns interest of $20 has a 20% ROI.

The calculation of return on investment can be modified to suit the situation; it all depends on what you include as returns and costs.
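As an illustration, the ROI measures from Table 7 can be reproduced with a few lines of code. This is a minimal sketch; the function names and figures are our own and not part of any standard library.

```python
def net_benefit(benefit, cost):
    """Net Benefit = Benefit - Cost."""
    return benefit - cost

def benefit_cost_ratio(benefit, cost):
    """BCR = Benefits / Costs (return for every dollar invested)."""
    return benefit / cost

def roi_percent(benefit, cost):
    """ROI % = (Net Benefit / Cost) x 100."""
    return (benefit - cost) / cost * 100

# Situation 1: $1,000 invested, $50 interest earned -> benefit = 1050, cost = 1000
print(roi_percent(1050, 1000))   # 5.0

# Situation 2: $100 invested, $20 interest earned -> benefit = 120, cost = 100
print(roi_percent(120, 100))     # 20.0
```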

Net Present Value (NPV)

Net Present Value is the difference between the present value of cash inflows and the present value of cash outflows. NPV is used in capital budgeting to analyze the profitability of an investment or project, and the analysis is sensitive to the reliability of the future cash inflows that the investment or project will yield. It is the total present value of a series of cash flows, taking the time value of money into account, and it is usually used to appraise long-term projects. NPV compares the value of a dollar today to the value of that same dollar in the future, taking inflation and returns into account. If the NPV of a prospective project is positive, it should be accepted; if the NPV is negative, the project should probably be rejected because it would reduce the value of the firm.

NPV = CF0 + CF1/(1 + r)^1 + CF2/(1 + r)^2 + ... + CFn/(1 + r)^n

Where CFt = the cash flow at time t and r = the cost of capital.


Example 1

The example below illustrates the calculation of Net Present Value. Consider Capital Budgeting projects A and B which yield the following cash flows over their five year lives. The cost of capital for the project is 10%.

Table 8

Year | Project A Cash Flow | Project B Cash Flow
0 | -$1,000 | -$1,000
1 | $500 | $100
2 | $400 | $200
3 | $200 | $200
4 | $200 | $400
5 | $100 | $700

Solution:

Project A:

NPV(A) = -1000 + 500/(1.10)^1 + 400/(1.10)^2 + 200/(1.10)^3 + 200/(1.10)^4 + 100/(1.10)^5
       = -1000 + 454.55 + 330.58 + 150.26 + 136.60 + 62.09 ≈ $134.08

Project B:

NPV(B) = -1000 + 100/(1.10)^1 + 200/(1.10)^2 + 200/(1.10)^3 + 400/(1.10)^4 + 700/(1.10)^5
       = -1000 + 90.91 + 165.29 + 150.26 + 273.21 + 434.64 ≈ $114.31

Thus, if Projects A and B are independent projects then both projects should be accepted. On the other hand, if they are mutually exclusive projects then Project A should be chosen since it has the larger NPV.

If NPV >0, then the project would add value to the firm.
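The figures for Projects A and B can be checked with a short script. A minimal sketch, assuming the usual convention that the year-0 outflow is included undiscounted:

```python
def npv(rate, cash_flows):
    """Net Present Value: sum of CF_t / (1 + r)^t for t = 0..n."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

project_a = [-1000, 500, 400, 200, 200, 100]
project_b = [-1000, 100, 200, 200, 400, 700]

print(round(npv(0.10, project_a), 2))  # approx. 134.08 -> accept (NPV > 0)
print(round(npv(0.10, project_b), 2))  # approx. 114.31 -> accept (NPV > 0)
# If the projects are mutually exclusive, choose Project A (larger NPV).
```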

Example 2

A Present Value Table will allow you to look up Present Value Factors (PVF) for various combinations of n (project year) and d (project discount rate). As an example, look at a CP investment with the following parameters:

Table 9

Initial Investment: US$ 150,000
Future Savings (FV): Year 1 – US$ 45,000; Year 2 – US$ 45,000; Year 3 – US$ 77,000
Discount Rate (d): 10%
Number of Years (n): 3

Using the Present Value table attached, look up the Present Value Factors (PVF) for a discount rate of 10% and for project years 1,2, and 3.

Year 1 PVF: 0.9091 Year 2 PVF: 0.8264 Year 3 PVF: 0.7513

Using the PVFs shown above, the future cost savings for each year can be converted to their present value. These values are then added together to estimate the project's Net Present Value. The initial investment (which is already in present-day dollars) is subtracted from the sum. The result is the Net Present Value of the project.

Table 10

Year | Future Savings | Present Value Factor (10%) | Present Value
1 | $45,000 | 0.9091 | $40,910
2 | $45,000 | 0.8264 | $37,188
3 | $77,000 | 0.7513 | $57,850

Total Present Value of Savings: $135,948
Less: Initial Investment: $150,000
Net Present Value: -$14,052
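The table arithmetic can be reproduced in code. The Present Value Factors are simply 1/(1+d)^n rounded to four decimals, so no lookup table is strictly needed; this is a minimal sketch that uses the same rounding as Table 10.

```python
initial_investment = 150_000
savings = {1: 45_000, 2: 45_000, 3: 77_000}
d = 0.10  # discount rate

# Four-decimal Present Value Factors, as read from a PV table: PVF = 1 / (1 + d)^n
pvf = {year: round(1 / (1 + d) ** year, 4) for year in savings}

present_values = {year: savings[year] * pvf[year] for year in savings}
npv = sum(present_values.values()) - initial_investment

print(pvf)          # {1: 0.9091, 2: 0.8264, 3: 0.7513}
print(round(npv))   # approx. -14052 -> not profitable within three years
```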

For this example, the NPV works out to -$14,052, which means that the investment is not profitable within three years; a positive NPV would indicate that the investment is profitable within that period. Net Present Value can be calculated using present value tables or spreadsheets such as Microsoft Excel. Although net present value (NPV) analysis can vary by company, it commonly takes into account:

Net profit per year

Customer retention rates

Desired return on assets

Extensive customer segmentation can make this analysis more insightful since long-term customers tend to spend more and have fewer bad debts, and new customers are often attracted by discounts. There are costs associated with attracting new customers and retaining long-term customers, but generally the costs to retain are lower than those to attract. The NPV of customers can be used to determine the cost of attracting new customers and leverage customer satisfaction offers.


Example 3

Pyzdek gives the following approach for calculating the NPV of customers:

Determine a meaningful period of time over which to do the calculation (e.g., a life insurer would track decades, whereas a diaper manufacturer would track only a few years).

Calculate the net profit (net cash flow) generated by customers each year. For the first year, subtract the cost of attracting the pool of customers. Specific numbers, such as profit per customer in year one, are more valuable because long-term customers tend to spend more.

Chart the customer "life expectancy" using samples to find out how much the customer base erodes each year. Again, specific numbers are more valuable. In retail banking, 26% of account holders defect in the first year, while by the ninth year the defection rate drops to 9%.

Pick a discount rate. If you want a 15% annual return on assets, use that. Costs that should be considered when determining attraction and retention costs include:

o Advertising
o Commissions
o Account set up
o Loyalty and customer satisfaction programs

Calculations

NPV1 = Year 1 profit / 1.15
NPV2 = (Year 2 profit × retention rate) / (1.15)^2
Last year: NPVn = (Year n's adjusted profit) / (1.15)^n

The sum of years 1 through n is how much your customer is worth – the NPV of all the profits you can expect.
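A minimal sketch of this customer-NPV calculation is shown below. The profit and retention figures are illustrative assumptions (they are not taken from the text), the discount rate is the 15% used in the example, and retention is assumed to compound from year 2 onward:

```python
def customer_npv(yearly_profit, retention_rates, discount_rate=0.15):
    """NPV of a customer: sum over years of (profit_t x cumulative retention) / (1 + r)^t."""
    npv, cumulative_retention = 0.0, 1.0
    for t, profit in enumerate(yearly_profit, start=1):
        if t > 1:
            # retention applies from year 2 onward and is assumed to compound
            cumulative_retention *= retention_rates[t - 2]
        npv += profit * cumulative_retention / (1 + discount_rate) ** t
    return npv

# Illustrative figures only: profit per customer per year (year 1 is net of
# acquisition cost), and the fraction retained going into years 2..5
# (e.g. 26% defect in year 1 -> 74% retained into year 2).
profits = [120, 150, 170, 180, 190]
retention = [0.74, 0.80, 0.85, 0.88]

print(round(customer_npv(profits, retention), 2))
```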

Internal Rate of Return (IRR)

The Internal Rate of Return is the effective compound rate of interest that can be earned on the invested capital. It is essentially equal to the (annualized) interest rate a bank would have to pay you to duplicate the performance of your portfolio. IRR is a special application of the logic behind Net Present Value (NPV) calculations and is a commonly used concept in project and investment analysis, including capital budgeting. Like NPV, IRR takes into account the amount of time that has elapsed since making an investment.

The internal rate of return is compared to the company's required rate of return – the minimum rate of return that an investment project must yield to be acceptable. If the internal rate of return is equal to or greater than the required rate of return, the project is acceptable; if it is less, the project is rejected. Quite often the company's cost of capital is used as the required rate of return, the reasoning being that if a project cannot provide a rate of return at least as great as the cost of the funds invested in it, then it is not profitable.


Formula: IRR = lower discount rate + (NPV at lower % rate / distance between 2 NPV) × (Higher % rate - Lower % rate)

Example 1

A project is expected to have a net present value of $865 at a discount rate of 20% and a negative NPV of $1,040 at a discount rate of 22%. Calculate the IRR.

Solution:
Distance between the two NPVs = 865 + 1,040 = $1,905
IRR = 20% + (865 / 1,905) × (22% - 20%) = 20.91%

Example 2

The following information relates to an investment project of Venture Ltd:
NPV at 25% cost of capital: $1,714
NPV at 30% cost of capital: ($2,937)
Calculate the Internal Rate of Return.

Solution:
Distance between the two NPVs = 1,714 + 2,937 = $4,651
IRR = 25% + (1,714 / 4,651) × (30% - 25%) = 26.84%

If the IRR is greater than that of an alternative investment with similar risk, the project is worthwhile; a typical hurdle rate is 12%. However, a major disadvantage of using the Internal Rate of Return instead of Net Present Value is that, in companies where the return on existing assets is greater than the Weighted Average Cost of Capital (WACC), managers who focus on maximizing IRR rather than NPV may decline projects expected to earn more than the WACC but less than the return on existing assets. IRR is a true indication of a project's annual return on investment only when the project generates no interim cash flows, or when those interim cash flows can be reinvested at the actual IRR.
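Both interpolation examples can be checked with a short helper. A minimal sketch; the function name is our own:

```python
def irr_interpolated(lower_rate, npv_at_lower, higher_rate, npv_at_higher):
    """Linear interpolation between two discount rates that straddle NPV = 0.

    IRR = lower rate + (NPV at lower rate / distance between the two NPVs)
                     x (higher rate - lower rate)
    """
    distance = npv_at_lower - npv_at_higher          # e.g. 865 - (-1040) = 1905
    return lower_rate + (npv_at_lower / distance) * (higher_rate - lower_rate)

# Example 1: NPV = $865 at 20%, NPV = -$1,040 at 22%
print(round(irr_interpolated(0.20, 865, 0.22, -1040) * 100, 2))   # 20.91

# Example 2: NPV = $1,714 at 25%, NPV = -$2,937 at 30%
print(round(irr_interpolated(0.25, 1714, 0.30, -2937) * 100, 2))  # 26.84
```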


Define

Define Overview

A Lean Six Sigma project starts out as a practical problem that is adversely impacting the business and ultimately ends up as a practical solution that improves business performance. Projects state performance in quantifiable terms that define expectations related to desired levels of performance and timelines.

The primary purpose of the Define phase is to ensure the team is focusing on the right thing. The Define phase seeks to answer the question, "What is important?" That is, what is important for the business? The team should work on something that will impact the Big Ys – the key metrics of the business. If Six Sigma is not driven from the top, a Green/Black Belt may not see the big picture and the selected project may not address something of criticality to the organisation.

The Key Performance Indicators (KPIs) that businesses track to measure their progress towards strategic objectives are usually displayed together on a scorecard. This scorecard is reviewed by management on at least a monthly basis to identify problem areas and take corrective actions as needed. There are four primary areas within a scorecard: Financial, Customer, Internal Business Processes, and Learning & Growth.

Some indicators are lagging indicators in the sense that they describe what has already occurred. An example of a lagging indicator is revenue in the last quarter; it is important for the business to know, but it does not tell the full story of what is going to happen in the future. Hence, scorecards must also contain indicators that predict future performance, called leading indicators. For example, if we know that all employees received 20 hours of training this year, this could be a leading indicator of future employee performance. Following are some traditional KPIs that businesses track within their scorecard:

Table 11

Financial Indicators:

o Revenue (amount of money collected by selling products or services)
o Cost of Goods Sold (amount of money expended to produce products or services)
o Gross Income (difference between Revenue and Cost of Goods Sold)
o Net Income (profitability of the company after subtracting all expenses)
o Percentage of Industry Sales (PINS – indicator of market share)
o Earnings per Share
o Cash Flow (amount of money earned vs. spent during the indicated period)

Customer Indicators:

o Customer Returns (amount of money returned by customers – an indicator of how satisfied customers are with products/services)
o Warranty (the more money spent on warranty, the less satisfied the customers)
o Net Promoter Score (NPS – will our customers recommend us to others, based on survey results)
o On time delivery (% of products/services delivered on time)
o Number of Complaints Received
o Customer Churn

Internal Business Process:

o Efficiency (productivity indicator for key resources)
o New Product Introduction Time (time taken for development of new products)
o Net Revenue by Product (indicator of which products contribute to revenue)
o Material or Production Costs
o Quality Indicators (re-work)
o Production Cycle Time

Learning & Growth:

o Training duration per Employee
o Labour Productivity
o Staff Turnover
o Pace of Promotion
o Six Sigma or Lean Benefits

The objective is to identify and/or validate the improvement opportunity, develop the business processes, define Critical Customer Requirements, and prepare an effective project team. The deliverables from the Define phase include:

Identification of CTQ (Critical to Quality)/Output Measure

Project Charter

High Level Process Map

The three steps that enable us to do so are:

Step 1: Generate Project Ideas

Step 2: Select Project

Step 3: Finalize Project Charter and High Level Process Map


Step 1- Generate Project Ideas

Project ideas may be generated from (a) Voice of Customer, (b) Voice of Business and (c) Cost of Poor Quality. The common sources of such project ideas are:

Table 12

o Customer Dashboards
o Surveys and Scorecards
o Internal/External Audits
o Financial Analysis of Business Units/Center
o Benchmarking Data
o Focus Groups

VOC, VOB and COPQ data help us develop project themes, which in turn help us understand the Big Y (output). Some common project themes are:

Table 13

o Product returns or High Warranty Cost
o Customer Complaints
o Accounts receivables or Invoicing issues
o Cycle time, Lead time
o Defective Services or Products
o Yield Improvement, Re-work or Scrap Reduction
o Capacity Constraints, Inventory
o Resource Utilisation

Once the team has the pertinent VOC, VOB and COPQ data, it needs to translate this information into Critical Customer Requirements (CCRs). A CCR is a specific characteristic of the product or service desired by and important to the customer. The CCR should be measurable, with a target and an allowable range. The team takes all the data, identifies the key common customer issues, and then defines the associated CCRs. The CCRs are often at too high a level to be directly useful to the team, so the next step is to take the applicable CCRs and translate them into Critical to Quality (CTQ) measures. A CTQ is a measure on the output of the process; it is a measure that is important to meeting the defined CCRs.

Table 14

Big Y: Reduce Operational Expenditure
Voice of Business: Collection of accounts receivable is taking too long
CCR: Accounts Receivable to be closed within 60 days
CTQ: Time to receive payments

Of course, there are multiple alternative methods to drill down to an appropriate CTQ. The project team may also choose to drill down to the project CTQ directly from the Big Y.

Kano Model Analysis

The Kano Model of Customer (Consumer) Satisfaction classifies product attributes based on how they are perceived by customers and their effect on customer satisfaction. These classifications are useful for guiding design decisions. Project activities in which the Kano Model is useful include:

o Identifying customer needs
o Determining functional requirements
o Concept development
o Analyzing competitive products

Other tools that are useful in conjunction with the Kano Model:

o Eliciting Customer Input
o Prioritization Matrices
o Quality Function Deployment

Introduction

The Kano Model of Customer Satisfaction (Figure 19) divides product attributes into three categories: threshold, performance, and excitement. A competitive product meets basic attributes, maximizes performance attributes, and includes as many “excitement” attributes as possible at a cost the market can bear.

Figure 19: Kano Model

Threshold Attributes

Threshold (or basic) attributes are the expected attributes or “musts” of a product, and do not provide an opportunity for product differentiation. Increasing the performance of these attributes provides diminishing returns in terms of customer satisfaction; however, the absence or poor performance of these attributes results in extreme customer dissatisfaction. An example of a threshold attribute would be brakes on a car. Threshold attributes are not typically captured in QFDs (Quality Function Deployment) or other evaluation tools, as products are not rated on the degree to which a threshold attribute is met – the attribute is either satisfied or not. Attributes that must be present in order for the product to be successful can be viewed as the 'price of entry.' However, the customer will remain neutral toward the product even with improved execution of these aspects. Examples include:

o Bank teller will be able to cash a check


o Nurses will be able to take a patient's temperature

o Mechanic will be able to change a tire

o Keyboards will have a space bar

Customers rarely mention this category, unless they have had a recent negative experience, because it is assumed to be in place.

Performance Attributes

Performance attributes are those for which more is generally better: better performance improves customer satisfaction, while an absent or weak performance attribute reduces it. Of the needs customers verbalize, most will fall into the category of performance attributes. These attributes form the weighted needs against which product concepts will be evaluated. The price a customer is willing to pay for a product is closely tied to performance attributes. For example, customers would be willing to pay more for a car that provides better fuel economy.

Excitement Attributes

Excitement attributes are unspoken and unexpected by customers, but can result in high levels of customer satisfaction; their absence, however, does not lead to dissatisfaction. Excitement attributes often satisfy latent needs – real needs of which customers are currently unaware. In a competitive marketplace where manufacturers’ products provide similar performance, providing excitement attributes that address “unknown needs” can provide a competitive advantage. Cup holders, for example, were initially excitement attributes, although they have since followed the typical evolution to a performance and then a threshold attribute. Increased functionality or quality of execution results in increased customer satisfaction, while decreased functionality results in greater dissatisfaction. Product price is often related to these attributes. Examples are:

o The shortest waiting time possible in the bank drive-up window

o The shortest waiting time possible for the nurse to answer the patient call button

o The auto mechanic performing the services on the car as efficiently and inexpensively as possible

o Tech support being able to help with a problem as quickly and thoroughly as possible

Customers give the most information on this category.

Other Attributes

Products often have attributes that cannot be classified according to the Kano Model. These attributes are often of little or no consequence to the customer and do not factor into consumer decisions; an example is the plate listing part numbers found under the hood of many vehicles for use by repairpersons. In contrast, customers get great satisfaction from some features – and are willing to pay a price premium for them – yet satisfaction will not decrease (below neutral) if the product lacks the feature. These features are often unexpected by customers and can be difficult to establish as needs up front; they are sometimes called unknown or latent needs. Examples include:

o The drive-up bank teller greets you by name; remembers you from a previous visit

o The nurse brings you a book that you mentioned you enjoy

o The mechanic cleans and vacuums the car making it better than when you brought it in

o The tech support individual emails you a $5 coupon to compensate for the issue

Customers rarely provide VOC on this category because they don't know to expect it. Innovation is needed to provide this level of quality consistently.

Application of the Kano Model Analysis

A relatively simple approach to applying the Kano Model Analysis is to ask customers two simple questions for each attribute:

o How would you rate your satisfaction if the product had this attribute?

o How would you rate your satisfaction if the product did not have this attribute?

Customers should be asked to answer with one of the following responses:
A. Satisfied
B. Neutral (it's normally that way)
C. Dissatisfied
D. Don't care

Basic attributes generally receive the “Neutral” response to Question 1 and the “Dissatisfied” response to Question 2. Exclusion of these attributes in the product has the potential to severely impact the success of the product in the marketplace.
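The two questions can be turned into a simple classifier. This is a minimal sketch, not a full Kano evaluation table: only the threshold rule above comes from the text, while the remaining mappings are common-practice assumptions.

```python
def classify_attribute(if_present, if_absent):
    """Classify a product attribute from the two Kano questions.

    if_present / if_absent: one of "Satisfied", "Neutral", "Dissatisfied", "Don't care".
    Only the threshold rule is stated in the text; the other mappings are assumptions.
    """
    if if_present == "Neutral" and if_absent == "Dissatisfied":
        return "Threshold (basic)"
    if if_present == "Satisfied" and if_absent == "Dissatisfied":
        return "Performance"
    if if_present == "Satisfied" and if_absent in ("Neutral", "Don't care"):
        return "Excitement"
    if if_present == "Don't care" and if_absent == "Don't care":
        return "Indifferent"
    return "Questionable / needs follow-up"

print(classify_attribute("Neutral", "Dissatisfied"))   # Threshold (basic)
print(classify_attribute("Satisfied", "Don't care"))   # Excitement
```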

Performance and excitement attributes should be included where their presence raises satisfaction or their absence would cause dissatisfaction; this often requires a trade-off analysis against cost. As customers frequently rate most attributes or functionality as important, asking the question "How much extra would you be willing to pay for this attribute, or for more of this attribute?" will aid in trade-off decisions, especially for performance attributes. Prioritization matrices can be useful in determining which excitement attributes would provide the greatest returns on customer satisfaction.

Consideration should be given to attributes receiving a "Don't care" response, as they will neither increase customer satisfaction nor motivate the customer to pay an increased price for the product. However, do not immediately dismiss these attributes if they play a critical role in the product's functionality or are necessary for reasons other than satisfying the customer.

The information obtained from the Kano Model Analysis, specifically regarding performance and excitement attributes, provides valuable input for the Quality Function Deployment process.


Quality Function Deployment

In a few words: the voice of the customer translated into the voice of the engineer.

To design a product well, a design team needs to know what it is designing and what the end-users will expect from it. Quality Function Deployment is a systematic approach to design based on a close awareness of customer desires, coupled with the integration of corporate functional groups. It consists of translating customer desires (for example, the ease of writing for a pen) into design characteristics (pen ink viscosity, pressure on ball-point) for each stage of the product development (Rosenthal, 1992).

Quality Function Deployment (QFD) is “a system to assure that customer needs drive the product design and production process”. Ultimately, the goal of QFD is to translate often subjective quality criteria into objective ones that can be quantified and measured, and which can then be used to design and manufacture the product. It is a complementary method for determining how and where priorities are to be assigned in product development. The intent is to employ objective procedures in increasing detail throughout the development of the product.

Quality Function Deployment was developed by Yoji Akao in Japan in 1966. By 1972 the power of the approach had been well demonstrated at the Mitsubishi Heavy Industries Kobe Shipyard (Sullivan, 1986), and in 1978 the first book on the subject was published in Japanese; it was later translated into English in 1994 (Mizuno and Akao, 1994). In Akao’s words, QFD "is a method for developing a design quality aimed at satisfying the consumer and then translating the consumer's demand into design targets and major quality assurance points to be used throughout the production phase. ... [QFD] is a way to assure the design quality while the product is still in the design stage." As a very important side benefit he points out that, when appropriately applied, QFD has demonstrated the reduction of development time by one-half to one-third (Akao, 1990).

QFD is a cross-functional planning tool which is used to ensure that the voice of the customer is deployed throughout the product planning and design stages. QFD is used to encourage breakthrough thinking of new concepts and technology. Its use facilitates the process of concurrent engineering and encourages teams to work towards the common goal of ensuring customer satisfaction. The basic concept of QFD is to translate the requirements of the stakeholders into product design or engineering characteristics, and subsequently into process specifications and eventually into production requirements. It is a method that structures the translation of stakeholders’ requirements into technical specifications which are mainly understood by engineers. Every translation involves a matrix. Through a series of interactive matrices, QFD can be employed to address almost any business situation requiring decisions involving a multitude of criteria, requirements or demands. This stems from QFD inherently employing and orchestrating many of the Total Quality Management tools and processes in a rigorous and strategic fashion. When used in the evaluation phase of a project, QFD can assure that all relevant issues have been addressed and can provide a new basis for prioritizing projects.

The three main goals in implementing QFD are:

Prioritize spoken and unspoken customer wants and needs.

Translate these needs into technical characteristics and specifications.

Build and deliver a quality product or service by focusing everybody toward customer satisfaction.

Since its introduction, Quality Function Deployment has helped to transform the way many companies:

Plan new products


Design product requirements

Determine process characteristics

Control the manufacturing process

Document already existing product specifications

Application of QFD in various fields:

o Making a process or technique environmentally friendly while high quality products are still maintained and technologies are available to curb process emissions – for example environmental design, sustainable design, environmentally-conscious processes or products, clean technologies, and green products or systems.

o Areas where QFD can be effectively used include regulatory compliance, emission reduction, pollution and loss prevention programmes, construction or operating permit acquisition, and equipment procurement (equipment leaks).

The House of Quality

A house of quality (HOQ) involves the collection and analysis of the “voice of the customer”, which includes the customer needs for a product, customers’ perceptions of the relative importance of these needs, and the relative performance of the producing company and its main competitors on those needs. It also requires the generation and analysis of the “voice of the technician”, which includes the technical measures converted from the customer needs, technicians’ evaluations of the relationship between each customer need and each technical measure, and the performance of the relevant companies in terms of these technical measures.


Figure 20: Quality Function Deployment Template

The HOQ Elements

A typical HOQ contains some of the following elements or concepts:

“a”- Customer needs (WHATs)

This is generally the first portion of the House of Quality (HOQ) matrix to be completed and also the most important. It documents a structured list of the product's customer requirements described in their own words (the voice of the customer) – the requirements customers have for the product, expressed in the customer's language. The team gathers this information from customers through conversations that describe their needs and problems. In order to organize and evaluate the data, the team uses simple quality tools like Affinity Diagrams and Tree Diagrams. The completed Affinity Diagram can be used as the basis of a Tree Diagram, which is constructed from the top down with each level being considered in turn for errors and omissions. The result is a family-tree-type hierarchy of customer needs. If there are many customer needs, grouping them into meaningful hierarchies or categories is necessary for easy understanding and analysis. This structure is then documented in the customer requirement portion of the HOQ matrix.

“b”- Planning Matrix

This matrix contains the correlation between each pair of customer needs (WHATs) through empirical comparisons. The information is provided by customers and is usually difficult to obtain, since a lot of pairwise comparisons are needed. The purpose of completing this correlation matrix is for the company to identify where trade-off decisions and further research may be required. Firstly, it quantifies the customers' requirement priorities and their perceptions of the performance of existing products. Secondly, it allows these priorities to be adjusted based on the issues that concern the design team.

o Importance Weighting

The first and most important measure in this section is the requirement Importance Weighting. This figure quantifies the relative importance of each of the customer requirements (described in the left-hand portion of the HOQ matrix) from the customer's own perspective, and is often shown in a column alongside the customer requirement descriptions in the left section of the HOQ matrix. A questionnaire is used to gather these importance weightings: customers are asked to rate the importance of each documented requirement on a scale from 1 to 5. A better but more involved approach is to use the Analytical Hierarchy Process (AHP). This also utilizes a questionnaire but offers the customer pairings of requirements to consider; the customer chooses the more important of each pair, and these results are then interpreted into numerical weightings using a matrix. Other measures, determined by the design team, can also be included in the planning matrix. These can include:

o Planned satisfaction Rating

The planned Satisfaction Rating quantifies the design team’s desired performance of the envisaged product in satisfying each requirement.

o Improvement Factor

The Improvement Factor can then be calculated by subtracting the performance score of the company's existing product from its planned performance score, i.e., the number of improvement points. This difference is multiplied by an improvement increment (e.g., 0.2) and the result is added to 1 to give the improvement factor.

o Sales-point

A sales-point identifies an opportunity that will give your company a unique business position. This measure can be used to add weight to those requirements which can be utilized to market the product (usually between 1 and 1.5). A “strong” sales point is reserved for important WHATs on which every comparison company is rated poorly. A “moderate” sales point means the importance rating or competitive opportunity is not so great, and a “no” sales point means no business opportunity. Numerically, 1.5, 1.25 and 1 are assigned to strong, moderate and no sales point respectively. Additional measures (e.g., environmental impact, competitors’ future actions) can also be included where the team deems them useful.

o Overall Weighting

An Overall Weighting for each requirement can then be calculated by multiplying the Importance Weighting by the Improvement Factor and the Sales Point.
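The Planning Matrix arithmetic (Improvement Factor, Sales Point, Overall Weighting) can be illustrated as follows. A minimal sketch with invented ratings; the 0.2 improvement increment is the example value mentioned above.

```python
def improvement_factor(planned_rating, current_rating, increment=0.2):
    """Improvement Factor = 1 + increment x (planned rating - current rating)."""
    return 1 + increment * (planned_rating - current_rating)

def overall_weighting(importance, planned, current, sales_point):
    """Overall Weighting = Importance x Improvement Factor x Sales Point."""
    return importance * improvement_factor(planned, current) * sales_point

# Hypothetical customer requirement, rated on a 1-5 scale:
importance = 4           # customer importance rating
current, planned = 2, 4  # our product today vs. planned satisfaction rating
sales_point = 1.25       # "moderate" sales point

print(improvement_factor(planned, current))                              # 1.4
print(round(overall_weighting(importance, planned, current, sales_point), 2))  # 7.0
```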

“c”- Technical measures (HOWs) – Voice of the Company


This section of the House of Quality (HOQ) matrix is also referred to as Engineering Characteristics or the Voice of the Company. It describes the product in the terms of the company. This information is generated by the QFD design team, who identify all the measurable characteristics of the product which they perceive are related to meeting the specified customer requirements. These are design specifications, substitute quality characteristics, engineering attributes or methods which can relate to and measure customer needs. The technical descriptors are attributes of the product or service that can be measured and benchmarked against the competition. Technical descriptors may already exist that the organization uses to determine product specifications; however, new measurements can be created to ensure that the product meets customer needs. In the same way that customer requirements are analyzed and structured, affinity and tree diagrams are applied to interpret these product characteristics.

“d” - Interrelationship Matrix (of WHATs vs. HOWs)

This section forms the main body of the House of Quality matrix and can be very time consuming to complete. Its purpose is to translate the requirements as expressed by the customer into the technical characteristics of the product. Its structure is that of a standard two-dimensional matrix with cells that relate to combinations of individual customer and technical requirements. It is the task of the QFD team to identify where these interrelationships are significant. Each combination of customer and technical requirement is considered in turn by the QFD team, e.g., how significant is padding thickness in satisfying "comfortable when hanging"?

This matrix is a systematic means for identifying the level of relationship between each WHAT and each HOW. The relationship matrix is where the team determines the relationship between customer needs and the company's ability to meet those needs. The team asks, "What is the strength of the relationship between the technical descriptors and the customers' needs?" The level of interrelationship discerned is usually weighted on a four-point scale (High, Medium, Low, None), and a symbol representing this level of interrelationship is entered into the matrix cell.

“e”- Roof / Correlation Matrix

The triangular “roof” matrix of the House of Quality is used to identify where the technical requirements that characterize the product support or impede one another. As in the interrelationship section, the QFD team work through the cells in this roof matrix, considering the pairings of technical requirements they represent. For each cell the question is asked: does improving one requirement cause a deterioration or an improvement in the other technical requirement? Where the answer is a deterioration, an engineering trade-off exists and a symbol is entered into the cell to represent this (usually a cross or “-“). Where improving one requirement leads to an improvement in the other requirement, an alternative symbol is entered into the cell (usually a tick or “+”). Different levels of such positive or negative interaction (e.g., strong/medium/weak) can be indicated using differently coloured symbols.

The information recorded in the roof matrix is useful to the design team in several ways. It highlights where a focused design improvement could lead to a range of benefits to the product. It also focuses attention on the negative relationships in the design; these can represent opportunities for innovative solutions to be developed which avoid the necessity for such compromises. The correlation matrix is probably the least used room in the House of Quality; however, this room is a big help to the design engineers in the next phase of a comprehensive QFD project. Team members must examine how each of the technical descriptors impacts the others. The team should document strong negative relationships between technical descriptors and work to eliminate physical contradictions.

“f” - Technical Priorities (Goals for the HOWs)

The producing company can set performance goals on each HOW to become more technically competitive. At this stage in the process, the QFD team begins to establish target values for each technical descriptor. Target values represent "how much" for the technical descriptors and can then act as a baseline to compare against. The relative importance of each technical requirement in meeting the customer's specified needs can be calculated from the weightings contained in the Planning and Interrelationship matrices: each interrelationship weighting is multiplied by the Overall Weighting from the Planning Matrix, and these values are then summed down the columns to give a score for each technical requirement.
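A minimal sketch of this column-sum calculation is shown below. The 9/3/1 weights for High/Medium/Low relationships are a common convention rather than something specified in this module, and the data are invented.

```python
# Overall Weightings of three customer needs (from the Planning Matrix)
overall_weightings = [7.0, 4.0, 2.5]

# Interrelationship matrix: rows = customer needs, columns = technical measures.
# 9 = High, 3 = Medium, 1 = Low, 0 = None (an assumed, commonly used convention).
relationships = [
    [9, 3, 0],
    [3, 9, 1],
    [0, 1, 9],
]

# Multiply each cell by the row's Overall Weighting and sum down the columns.
technical_priorities = [
    sum(weight * row[col] for weight, row in zip(overall_weightings, relationships))
    for col in range(len(relationships[0]))
]
print(technical_priorities)   # [75.0, 59.5, 26.5] -> first technical measure ranks highest
```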

“g” - Competitive Benchmarking

Each of the technical requirements that have been identified as important characteristics of the product can be measured both for the company's own existing product and for the available competitive products. This illustrates the relative technical position of the existing product and helps to identify the target levels of performance to be achieved in a new product. The aim is to technically evaluate the performance of the company's product and its main competitors' similar products on each HOW. To better understand the competition, engineering then conducts a comparison of competitor technical descriptors; this process involves reverse engineering competitor products to determine specific values for those descriptors.

“h” - Design Targets

The final output of the HOQ matrix is a set of engineering target values to be met by the new product design. The process of building this matrix enables these targets to be set and prioritized based on an understanding of the customer needs, the competition's performance and the organization's current performance. The QFD team now needs to draw on all this information when deciding on these values. This is not necessarily the end of the QFD process: the output of this first HOQ matrix can be utilized as the first stage of a four-part QFD process referred to as the Clausing Four-Phase Model. This continues the translation process using linked HOQ-type matrices until production planning targets are developed. This approach allows the “Voice of the Customer” to drive the product development process right through to the setting of manufacturing equipment.


Figure 21: Completed Quality Deployment Matrix

In practice, it is both difficult and unnecessary to include all the HOQ elements described above; different users build different HOQ models involving different elements from the above list. The simplest and most widely used HOQ model contains only the customer needs (WHATs) and their relative importance, the technical measures (HOWs) and their relationships with the WHATs, and the importance ratings of the HOWs. Some models further include the customer competitive assessment and performance goals for the WHATs, and some authors add one or both of the two correlation matrices to this simple model. Fewer models include the technical competitive assessment, since this information is difficult to deal with; as a result, goals and probability factors for the HOWs appear seldom in HOQ studies. Even when they are included, they are hardly ever incorporated into the computation of the importance ratings of the HOWs, which are usually obtained by a formula that does not relate to the technical competitive assessment at all.


Step 2- Select Project

A good Six Sigma project should:

Impact a key business goal

Require analysis to uncover the root cause of the problem; the cause and solution of the problem should be unknown or not clearly understood

Address a source of customer pain or dissatisfaction

Focus on improving a key business process

Produce quantifiable results (e.g., financial savings, customer satisfaction)

Be able to be completed in time to make a difference to the business goal

Be scoped so that results can be achieved in a reasonable timeframe, possibly 2-3 months for Green Belt projects and 4-6 months for Black Belt projects

Ideally, Green Belts and Black Belts are expected to work on projects and are not directly responsible for the generation or selection of Six Sigma projects. Six Sigma projects are selected by senior management based on certain criteria. These criteria include the linkage between the proposed project and company strategy, the expected timeline for project completion, the expected revenue from the project, whether data exists to work on the problem, and whether the root cause or solution is already known. Table 15 shows a typical project selection template used by management to pick projects; the projects with the highest net score are the ones that get picked for execution. A hypothetical scoring sketch follows the table.

Table 15: Example Project Selection Template

Number | Sponsor | Project Description | Costs | Benefits | Timeline | Strategy | Risk | Score
1 | Job Smith | Reduce inventory levels for 123 series product | 0 | 1,00,000 | 6 months | High | Low | 4.4
2 | Bob Bright | Improve efficiency for machine 456-2333 | 5,000 | 2,00,000 | 6 months | Medium | Low | 5.0
3 | John Travolta | Improve employee retention by 5% | 0 | 40,000 | 3 months | High | Medium | 4.0
4 | Peter Hunt | Reduce cycle time for making products | 1,000 | 1,00,000 | 1 year | Medium | Low | 3.4
5 | Bill Richards | Improve customer satisfaction scores | 0 | 0 | 1 year | High | Low | 2.8
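The scoring model behind Table 15 is not spelled out in this module. The sketch below shows one hypothetical way a net score could be derived, by rating each project on a 1-5 scale against weighted criteria; all weights and ratings here are illustrative assumptions, not the formula used to produce the scores in the table.

```python
# One hypothetical way to compute a net score: rate each project 1-5 on a set of
# criteria and take a weighted average. The weights and ratings are assumptions.
WEIGHTS = {"benefit": 0.35, "cost": 0.15, "timeline": 0.20, "strategy": 0.20, "risk": 0.10}

def net_score(ratings):
    """Weighted average of 1-5 criterion ratings; higher is better."""
    return round(sum(WEIGHTS[criterion] * rating for criterion, rating in ratings.items()), 1)

# Project 2 from Table 15, rated subjectively on the 1-5 scale:
machine_456 = {"benefit": 5, "cost": 4, "timeline": 4, "strategy": 3, "risk": 5}
print(net_score(machine_456))   # a single figure the team can use to rank projects
```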

Some basic guidelines that can be considered for selecting a project are:

Result

• Does it have significant impact on customer satisfaction? Customer first: customers are known to be the lifeblood of every business, and to support these vital resources businesses spend a significant amount of time and effort maintaining customer satisfaction.

• Does the project strongly relate to business goals? (Revenue Growth, Cost Reduction, Capital Reduction, Key Business Objectives, On Time Delivery, Lead Time, Quality, Customer Satisfaction)

Effort

• Can the project be completed in 3 to 6 months? A good project must be manageable. Prolonged projects risk loss of interest and start building frustration within the team and all around; the team also runs the risk of disintegrating.

What not to Select?

• It should neither be a “bean-sized” project, where the improvements are too small to be appreciated, nor a “world-hunger” project, where implementing the solutions is beyond the control of the stakeholders.

What Problem to Select?

• The cause of the problem should be unknown or not clearly understood, and there should not be any predetermined or apparent optimal solution. If the Green Belt/Black Belt already knows the answer, then just go fix it!

Step3- Finalize Project Charter and High Level Map

Project Charter

The Project Charter is used to establish a clear understanding of the project amongst the team, the Project Leader, Champion, Sponsor and stakeholders. It is a written document and works as an agreement between management and the team about what is expected. It includes a documented business case, the opportunity for improvement, goals, scope, timeline and the members of the project team.

The Project Charter is the key document that defines the scope and purpose of any project. The Charter functions as the communication vehicle for the team, the Sponsor, Process Owner, Champion, the Project Leader, and all other team members involved. It is used to assure that the team sees the vision of leadership and understands what the project opportunity and performance improvement goals are and what is expected of the team, and it keeps the team focused and aligned on the organisational priorities. It transfers the project from the champion to the project team.

One common misconception is that the Project Charter is written and then not touched again. The team should refer to the Charter often to ensure that the project is staying on track. In addition, it should be considered a living document that may be revised as the team learns more in the Define and Measure phases.

Elements of a Project Charter

Table 16

Opportunity Statement: Pain or Problems
Business Case: Financial Benefits
Goal Statement: Success Criteria, usually a ‘SMART’ statement
Project Scope: Boundaries of the project, in-scope and out-of-scope list
Team Members: Who and What
Project Plan: Activities, Timelines, Critical Path, Review Schedule

Opportunity Statement

The opportunity statement describes the “why” of undertaking the improvement initiative and should address the following questions:

What is wrong or not working?

When and where do the problems occur?

How extensive is the problem?

What is the impact “pain” on our customers / business / employees?

Business Case

The business case describes the benefit for undertaking a project. The business case addresses the following questions:

What is the focus for the project team?

Does it make strategic sense to do this project?

Does this project align with other business initiatives (Critical Success Factors)?

What benefits will be derived from this project?

What impacts will this project have on other business units and employees?

Goal Statement

The goal statement should be most closely linked with the Opportunity statement. The goal statement defines the objective of the project to address the specific pain area, and is SMART (Specific, Measurable, Achievable, Relevant and Time-bound). The goal statement addresses:

What is the improvement team seeking to accomplish?

How will the improvement team’s success be measured?

What specific parameters will be measured? These must be related to the Critical to Cost, Quality, and/or Delivery (Collectively called the CTQ’s).

What are the tangible results deliverables (e.g., reduce cost, cycle time, etc.)?

What is the timetable for delivery of results?

Project Scope


The project scope defines the boundaries of the business opportunity. One of the Six Sigma tools that can be used to identify and control project scope is the In-Scope/Out-of-Scope tool. Project Scope defines:

What are the boundaries, the starting and ending steps of a process, of the initiative?

What parts of the business are included?

What parts of the business are not included?

What, if anything, is outside the team’s boundaries?

Where should the team’s work begin and end?

Team Roles & Responsibilities

Yellow Belt

Provide support to Black Belts and Green Belts as needed

May be team members on DMAIC teams

Supporting projects with process knowledge and data collection

Green Belt

Is the Team Leader for a Project within own functional area

Selects other members of his project team

Defines the goal of project with Champion & team members

Defines the roles and responsibilities for each team member

Identifies training requirements for team along with Black Belt

Helps make the Financial Score Card along with his CFO

Black Belt

Leads projects that are cross-functional in nature (across functional areas)

Ends role as project leader at the close of the turnover meeting

Trains others in Six Sigma methodologies & concepts

Sits along with the Business Unit Head and helps project selection

Provides application assistance & facilitates team discussions

Helps review projects with Business Unit Head

Informs Business Unit Head of project status for corrective action

Master Black Belt:

Participates in the Reviews and ensures proper direction.


Trains and coaches Process Owners on Process Management principles

Team Member:

A Team Member is chosen for a special skill or competence

Team Members help design the new process

Team Members drive the project to completion

Subject Matter Expert (SME):

Is an expert in a specific functional area

May be invited to specific team meetings but not necessarily all of them

Provides guidance needed to project teams on an as needed basis

Project Sponsor:

Acts as surrogate Process Owner (PO) until an owner is named

Becomes PO at Improve/Develop if PO is not named

Updates Tracker with relevant documents and pertinent project data

Part of senior management responsible for selection / approval of projects

Process Owner:

Takes over the project after completion

Manages the control system after turnover

Turns over PO accountability to the new Process Owner if the process is reassigned to another area or another individual

Deployment Champion:

Responsible for the overall Six Sigma program within the company

Reviews projects periodically

Adds value in project reviews since he is hands-on in the business

Clears road blocks for the team

Has the overall responsibility for the project closure

Project Plan


The project plan shows the timeline for the various activities required for the project. Some of the tools that can be used to create the project timeline are the network diagram and the Gantt chart. We may also want to identify the activities on the critical path and the milestones for tollgate reviews to ensure timely completion of the project. *Please refer to Chapter 4 to learn more about Project Management practices.

Project Charter sections are largely interrelated: as the scope increases, the timetable and the deliverables also expand. Whether initiated by management or proposed by operational personnel, many projects initially have too broad a scope. As the project cycle time increases, the tangible cost of the project deployment, such as cost due to labour and material usage, will increase. The intangible costs of the project will also increase: frustration due to lack of progress, diversion of manpower away from other activities, and delay in realization of project benefits, to name just a few. When the project cycle time exceeds 6 months or so, these intangible costs may result in the loss of critical team members, causing additional delays in the project completion. These “world peace” projects, with laudable but unrealistic goals, generally serve to frustrate teams and undermine the credibility of the Six Sigma program.

High Level Process Map

Process maps are graphical representations of a process flow identifying the steps of the process, the inputs and outputs of the process, and opportunities for improvement. Process maps can cross functional boundaries if the start points and stop points are located in different departments or if several persons from different departments are responsible for satisfying a specific customer need. Process maps are applicable to any type of process: manufacturing, design, service, or administrative. Process maps are used to document the actual process and help locate value-added and non-value-added steps. These maps can be an excellent way to communicate information to others and train employees.

It can be very useful to start with a high level process map of five to ten steps representing the sub-processes. This helps to establish the scope of the process, identify significant issues and frame the more detailed map later.

SIPOC is an acronym for Suppliers - Inputs - Process - Outputs - Customers. It is a high level process map that describes the boundaries of the process, major tasks and activities, Key Process Input and Output Variables, and Suppliers and Customers. When we refer to customers, we usually talk about both internal and external customers. It can be used to identify the key stakeholders and describe the process visually to team members and other stakeholders. A stakeholder is anyone who is either impacted by the project or could impact the outcome of the project. Not everyone is a stakeholder, but a project may have several stakeholders including employees, suppliers, customers, shareholders, etc.

Suppliers: Provide inputs

Inputs: Data / unit required to execute the process

Process Boundary: Identified by the hand-off at the input (the start point of the process) and the output (the end point of the process)

Outputs: Output of a process creating a product or service that meets a customer need

Customers: Users of the output


Table 17: A filled-out SIPOC matrix

Suppliers | Inputs | Process | Output | Customer
Employees | Employee Setup Data | Set Up Resources | Active Employee Record | Project Manager and Team
Contractors | Contractor Setup Data | Set Up Resources | Active Contractor Record | Project Manager and Team
Employees & Contractors | Planning Meeting | Assign Activities to Resources | Project Schedule | Project Manager and Team
Contracting Officer | Statement of Work | Assign Activities to Resources | Project Schedule | Project Manager and Team
Payroll Department | Employee Pay Rates | Assign Rates to Resources | Rate Table | System Administrator
Contractors | Contractor Pay Rates | Assign Rates to Resources | Rate Table | System Administrator
Employees | Timesheets | Enter Time Sheets | Labour Cost Report | Project Manager and Team
Contractors (Time & Materials) | Timesheets | Enter Time Sheets | Labour Cost Report | Project Manager and Team
Contractors (Fixed Priced) | Invoices | Enter Vendor Invoices | Contractor Cost Report | Project Manager and Team
Project Management System | Timesheet & Invoices | Summarize and Report Costs | Monthly Performance Reports | CIO, Program Managers, VP Finance


Define Phase Tollgate checklist

Has the project been chosen because of its alignment with Organisation goals and the strategic direction of the ‘business’?

What is the problem statement – detailing (what) is the problem, (when) was the problem first seen, (where) was it seen, and what is the (magnitude or extent) of the problem. Is the problem measured in terms of Quality, Cycle Time, Cost Efficiency, net expected financial benefits? Ensure there is no mention or assumptions about causes and solutions.

Does a goal statement exist that defines the results expected to be achieved by the process, with reasonable and measurable targets? Is the goal developed for the “what” in the problem statement, thus measured in terms of Quality, Cycle Time or Cost Efficiency?

Does a financial business case exist, explaining the potential impact (i.e. measured in dollars) of the project on the organisation budgets, Net Operating Results, etc.?

Is the project scope reasonable? Have constraints and key assumptions been identified?

Who is on the team? Are they the right resources and has their required time commitment to the project been confirmed by your Sponsor and team?

What is the high level Project plan? What are the key milestones (i.e. dates of tollgate reviews for DMAIC projects)?

Who are the customers for this process? What are their requirements? Are they measurable? How were the requirements determined?

Who are the key stakeholders? How will they be involved in the project? How will progress be communicated to them? Do they agree to the project?

What kinds of barriers/obstacles will need assistance to be removed? Has the risk mitigation plan to deal with the identified risks been developed?


Measure

Measure Phase Overview

The primary purpose of the Measure phase is to answer the question, "How are we doing?" In other words, the team must baseline the current state of each CTQ or CTP. Many times in a project, the critical measure identified by the team is not an already reported measure. Although the team may not think the measure is meeting the goal, they need to collect data to verify the current performance level. The deliverables from the Measure phase include:

Operational definition

Measurement system analysis

Data collection plan

Baseline Performance

The objective of this phase is to finalize the project Y and its performance standards, validate the measurement system for Y, and measure the current performance and the gap. The three steps that enable us to do so are:

Step 4- Finalize Project Y, Performance Standards for Y
Step 5- Validate Measurement System for Y
Step 6- Measure Current Performance and Gap

Step 4- Finalize Project Y, Performance Standards for Y

Data collection can be difficult. To help, the team should use operational definitions and data collection plans. Poor operational definitions are a constant source of concern for measurement system reliability.

Operational Definition

An Operational Definition is a precise definition of the specific Y to be measured. The data collected using this definition will be used to baseline the performance. The purpose of the definition is to provide a single, agreed-upon meaning for each specific Y. This helps ensure reliability and consistency during the measurement process. Although the concept is simple, the task of creating a definition should not be underestimated. A clear, concise operational definition will ensure reliable data collection and a reduction in measurement error.



Table 18: CTQ Performance Characteristics

CTQ Measure (a) | Data Type (b) | Operational Definition (c) | LSL (d1) | USL (d2) | Target (d3)

a: CTQ Measure is a measurable characteristic of the output

b: Data type - whether it is continuous or discrete

c: A clear, concise, detailed definition of the measure. Operational definitions should be very precise and be written to avoid possible variation in interpretations.

d: Specification limits are usually derived from customer needs or stated by the customer

o USL - Upper Specification Limit

o LSL - Lower Specification Limit

Step 5- Validate Measurement System for Y

Measurement System Analysis

One important step in the Measure phase, sometimes skipped by inexperienced Six Sigma teams, is conducting a Measurement System Analysis (MSA). MSA is carried out to verify that the measurement system produces valid data, before the team makes data-based decisions. A measurement system is defined as the collection of operations, procedures, gauges and other equipment, software, materials, facilities, and personnel used to assign a number to the characteristic being measured.

A good measurement system possesses certain properties. First, it should produce a number that is "close" to the actual property being measured, that is, it should be accurate. Second, if the measurement system is applied repeatedly to the same object, the measurements produced should be close to one another, that is, it should be repeatable. Third, the measurement system should be able to produce accurate and consistent results over the entire range of concern, that is, it should be linear. Fourth, the measurement system should produce the same results when used by any properly trained individual, that is, the results should be reproducible. Finally, when applied to the same items the measurement system should produce the same results in the future as it did in the past, that is, it should be stable. In general, the methods and definitions presented here are consistent with those described by the Automotive Industry Action Group (AIAG) MSA Reference Manual (3rd ed.).

Properties of Continuous Measurement

Bias or Accuracy

The difference between the average measured value and a reference value is referred to as bias. The reference value is an agreed-upon standard, such as a standard traceable to a national standards body (see below). When applied to attribute inspection, bias refers to the ability of the attribute inspection system to produce agreement on inspection standards. Bias is controlled by calibration, which is the process of comparing measurements to standards. The concept of bias is illustrated in Fig.22.
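A simple way to quantify bias is to measure a reference standard repeatedly and test whether the average measurement differs from the certified value. The short Python sketch below is illustrative only; the reference value and the ten measurements are hypothetical.

```python
# A minimal sketch of a bias check: repeated measurements of a reference
# standard are compared with its certified value (both hypothetical).
import numpy as np
from scipy import stats

reference_value = 10.000                     # certified value of the standard
measurements = np.array([10.003, 9.998, 10.005, 10.002, 10.001,
                         10.004, 9.999, 10.003, 10.002, 10.000])

bias = measurements.mean() - reference_value
# One-sample t-test: is the average measurement significantly different
# from the reference value (i.e., is there statistically significant bias)?
t_stat, p_value = stats.ttest_1samp(measurements, reference_value)

print(f"Estimated bias = {bias:.4f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Bias is statistically significant; calibration may be needed.")
else:
    print("No statistically significant bias detected.")
```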


Figure 22: Bias Illustrated

Repeatability

AIAG defines repeatability as the variation in measurements obtained with one measurement instrument when used several times by one appraiser, while measuring the identical characteristic on the same part. Variation obtained when the measurement system is applied repeatedly under the same conditions is usually caused by conditions inherent in the measurement system. ASQ defines precision as “The closeness of agreement between randomly selected individual measurements or test results. NOTE: The standard deviation of the error of measurement is sometimes called ‘imprecision’”. This is similar to what we are calling repeatability. Repeatability is illustrated in Fig.23.

Figure 23: Repeatability Illustrated

Reproducibility

Reproducibility is the variation in the average of the measurements made by different appraisers using the same measuring instrument when measuring the identical characteristic on the same part. Reproducibility is illustrated in Fig.24.


Figure 24: Reproducibility Illustrated

Stability

Stability is the total variation in the measurements obtained with a measurement system on the same master or parts when measuring a single characteristic over an extended time period. A system is said to be stable if the results are the same at different points in time. Stability is illustrated in Fig.25.

Figure 25: Stability Illustrated


Linearity

Linearity is the difference in the bias values through the expected operating range of the gage. Linearity is illustrated in Fig.26.

Figure 26: Linearity Illustrated

Measurement System Discrimination

Discrimination, sometimes called resolution, refers to the ability of the measurement system to divide measurements into "data categories." All parts within a particular data category will measure the same. For example, if a measurement system has a resolution of 0.001 inch, then items measuring 1.0002, 1.0003 and 0.9997 would all be placed in the data category 1.000, that is, they would all measure 1.000 inch with this particular measurement system.

A measurement system's discrimination should enable it to divide the region of interest into many data categories. In Six Sigma, the region of interest is the smaller of the tolerance (the high specification minus the low specification) or six standard deviations. A measurement system should be able to divide the region of interest into at least five data categories. For example, if a process was capable (i.e., six sigma is less than the tolerance) and σ = 0.0005, then a gage with a discrimination of 0.0005 would be acceptable (six data categories), but one with a discrimination of 0.001 would not (three data categories). When unacceptable discrimination exists, the range chart shows discrete "jumps" or "steps."

Measurement System Analysis is a critical part of any Six Sigma project, regardless of the environment (e.g. transactional, service, etc.). The philosophy behind this kind of study is applicable to all project types. Depending on the type of data, the statistical analysis will be different. For a continuous measurement, there are a variety of statistical properties that can be determined: stability, bias, precision (which can be broken down into repeatability and reproducibility), linearity, and discrimination.
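The data-category check in the discrimination example above is easy to verify numerically. The sketch below simply reproduces the arithmetic from the text (σ = 0.0005, a capable process, at least five categories required).

```python
# Number-of-data-categories check for measurement system discrimination.
# Values follow the example in the text; the process is assumed capable,
# so the region of interest is 6 * sigma rather than the tolerance.
sigma = 0.0005
region_of_interest = 6 * sigma               # 0.003 inch

for resolution in (0.0005, 0.001):
    categories = region_of_interest / resolution
    verdict = "acceptable" if categories >= 5 else "not acceptable"
    print(f"Resolution {resolution}: {categories:.0f} data categories -> {verdict}")
```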


For a discrete measurement, estimates of the error rates can be determined for within appraiser, each appraiser versus standard, between appraisers, and all appraisers versus standard. The properties related to both continuous and discrete measures are discussed below.

Gage repeatability and reproducibility study (Gage R&R Study)

A Gage R&R study is a Measurement System Analysis (MSA) method which evaluates your measurement system's precision and estimates the combined measurement system repeatability and reproducibility. A Gage R&R study helps you answer whether your measurement system variability is small compared with the process variability, how much variability in the measurement system is caused by differences between operators, and whether your measurement system is capable of discriminating between different parts.

For example, several inspectors measure the diameter of screws to make sure they meet specifications. You want to make sure you trust your data so you examine whether the inspectors are consistent in their measurements of the same part (repeatability) and whether the variation between inspectors is consistent (reproducibility).

Although the number of inspectors, parts, and trials will change for your particular application, a general procedure follows:

Step 1: Collect a sample of 15 screws that represent the entire range of your process.

Step 2: Randomly choose 3 inspectors from the regular operators of this process (they should be trained and familiar with the process because you are trying to estimate the actual process, not the best or worst case).

Step 3: It is best to randomize the order of the measurements, but this is not always possible. An alternative could be to randomize the order of the operators and then have each one measure all 15 screws in random order. Record the data in a Minitab worksheet.

Step 4: Repeat step 3 for the second trial.

When all the trials have been completed, we can analyse the collected data. According to the Automotive Industry Action Group (AIAG), you can determine whether your measurement system is acceptable using the following guidelines (a computational sketch follows the list).

If the Total Gage R&R contribution in the %Study Var column (% Tolerance, %Process) is:

Less than 10% - the measurement system is acceptable.

Between 10% and 30% - the measurement system is acceptable depending on the application, the cost of the measuring device, cost of repair, or other factors.

Greater than 30% - the measurement system is unacceptable and should be improved.
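For teams without Minitab, the same variance components can be estimated from a standard two-way crossed ANOVA. The sketch below is a minimal, illustrative implementation in Python; the column names, the simulated data and the gage_rr_anova helper are assumptions for the example, not part of the module.

```python
# A minimal sketch of an ANOVA-based crossed Gage R&R calculation.
# Assumes a balanced design: every operator measures every part the same
# number of times. Column names ('part', 'operator', 'measurement') and the
# simulated data are illustrative assumptions.
import numpy as np
import pandas as pd

def gage_rr_anova(df):
    p = df['part'].nunique()
    o = df['operator'].nunique()
    r = len(df) // (p * o)                      # trials per part-operator cell

    grand_mean = df['measurement'].mean()
    part_means = df.groupby('part')['measurement'].mean()
    oper_means = df.groupby('operator')['measurement'].mean()
    cell_means = df.groupby(['part', 'operator'])['measurement'].mean()

    # Sums of squares for the two-way crossed ANOVA with interaction
    ss_part = o * r * ((part_means - grand_mean) ** 2).sum()
    ss_oper = p * r * ((oper_means - grand_mean) ** 2).sum()
    ss_cell = r * ((cell_means - grand_mean) ** 2).sum()
    ss_total = ((df['measurement'] - grand_mean) ** 2).sum()
    ss_inter = ss_cell - ss_part - ss_oper
    ss_repeat = ss_total - ss_cell

    ms_part = ss_part / (p - 1)
    ms_oper = ss_oper / (o - 1)
    ms_inter = ss_inter / ((p - 1) * (o - 1))
    ms_repeat = ss_repeat / (p * o * (r - 1))

    # Variance components (negative estimates are truncated to zero)
    var_repeat = ms_repeat
    var_inter = max((ms_inter - ms_repeat) / r, 0.0)
    var_oper = max((ms_oper - ms_inter) / (p * r), 0.0)
    var_part = max((ms_part - ms_inter) / (o * r), 0.0)

    var_reprod = var_oper + var_inter           # reproducibility
    var_gage = var_repeat + var_reprod          # total Gage R&R
    var_total = var_gage + var_part

    return {
        '%StudyVar Repeatability': 100 * np.sqrt(var_repeat / var_total),
        '%StudyVar Reproducibility': 100 * np.sqrt(var_reprod / var_total),
        '%StudyVar Total Gage R&R': 100 * np.sqrt(var_gage / var_total),
    }

# Simulated example: 10 parts, 3 operators, 2 trials each
rng = np.random.default_rng(1)
parts, opers, trials = 10, 3, 2
oper_effects = rng.normal(0, 0.3, size=opers)   # reproducibility
rows = []
for part in range(parts):
    part_effect = rng.normal(0, 2.0)            # part-to-part variation
    for oper in range(opers):
        for _ in range(trials):
            rows.append({'part': part, 'operator': oper,
                         'measurement': 10 + part_effect + oper_effects[oper]
                                        + rng.normal(0, 0.2)})   # repeatability
print(gage_rr_anova(pd.DataFrame(rows)))
```

The resulting %StudyVar for the Total Gage R&R can then be judged against the AIAG guidelines listed above.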

Properties of Discrete Measurements

For discrete measurements, a blind study may be done. An expert would usually determine whether the product is good or bad. Then, a variety of good and bad units is given to two or three appraisers. The appraisers each then determine if they think the product is good or bad. They are asked to look at the same unit more than once, without knowing that they had evaluated the unit previously. This is called the "within appraiser" error rate. It can then be determined how well all the appraisers are able to get the same result on the same product, the "between appraiser" error rate. In addition, it can be determined how well the appraisers agree with the expert, known as the "appraiser versus standard" error rate.

Attribute Repeatability & Reproducibility (AR&R)

Attribute gauge studies are typically used when the measurement result is binary, such as defect/no defect or successful/unsuccessful, although rating scales can also be validated with this method. Multiple measurement system operators are chosen to measure a sample set two or more times. In this way, both repeatability (variation within the operator) and reproducibility (variation between the operators) can be quantified. In an attribute study, a standard can be used for comparison with the results from the measurement system operators. The standard is the 'truth' and any discrepancy from truth due to the measurement system is considered an error (or defect).

We use Attribute Agreement Analysis to answer:

Does the appraiser agree with himself on all trials?

Does the appraiser agree with the known standard on all trials?

Do all appraisers agree with themselves and others on all trials? (within and between appraisers)

Do all appraisers agree with themselves, others, and with the standard?

AR&R studies can be done using statistical software packages which provide graphical output and other summary information; however, they are often done by hand due to the straightforward nature of the calculations.
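Because the statistics are simple proportions, a short script can reproduce the hand calculation. The sketch below uses hypothetical good/bad ('G'/'B') ratings for three appraisers over two trials; Minitab's Attribute Agreement Analysis additionally reports kappa statistics, which are omitted here for brevity.

```python
# A minimal sketch of an attribute agreement analysis.
# Hypothetical data: 3 appraisers rate 10 units twice each; 'standard' is
# the expert's (true) classification.
standard = ['G', 'B', 'G', 'G', 'B', 'G', 'B', 'G', 'G', 'B']
ratings = {                      # appraiser -> [trial 1, trial 2]
    'A': [['G', 'B', 'G', 'G', 'B', 'G', 'B', 'G', 'G', 'B'],
          ['G', 'B', 'G', 'G', 'B', 'G', 'G', 'G', 'G', 'B']],
    'B': [['G', 'B', 'G', 'B', 'B', 'G', 'B', 'G', 'G', 'B'],
          ['G', 'B', 'G', 'B', 'B', 'G', 'B', 'G', 'G', 'B']],
    'C': [['G', 'B', 'G', 'G', 'B', 'G', 'B', 'G', 'B', 'B'],
          ['G', 'B', 'G', 'G', 'B', 'G', 'B', 'G', 'G', 'B']],
}

n_units = len(standard)
for appraiser, (t1, t2) in ratings.items():
    within = sum(a == b for a, b in zip(t1, t2)) / n_units
    vs_std = sum(a == b == s for a, b, s in zip(t1, t2, standard)) / n_units
    print(f"Appraiser {appraiser}: within-appraiser agreement {within:.0%}, "
          f"vs standard {vs_std:.0%}")

# Between appraisers: for each unit, all assessments across all appraisers
# and trials must be identical.
all_trials = [t for pair in ratings.values() for t in pair]
between = sum(len({trial[i] for trial in all_trials}) == 1
              for i in range(n_units)) / n_units
print(f"Between-appraiser agreement: {between:.0%}")
```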

Step 6- Measure Current Performance and Gap

Data Collection Plan

Once measurement system reliability has been established, the team develops a data collection plan for the Y measure. It is a plan defining the precise data that will be collected, the amount of data that will be collected, a description of the logistical issues - who, where, and when the data will be collected - and what will be done with the data collected.

Table 19: Example Data Collection Plan

Y Measure: Cycle time
Operational Definition: Difference between the barcode info on receipt and the barcode on customer notification, minus non-working hours (holidays, weekends)
Data Source: IT system
Sample Size: 300
Who will collect the data: Process representatives
When will the data be collected: First 21 days of the month
How will the data be collected: Data collection form, IT reports
X data that can be collected at the same time: Customer type; application method; day of the week; representative initiating call; IT rep; wait time; rep sending notification; process sub-step cycle times

The Table 19 above shows an example data collection plan for a project focused on reducing cycle time.


Collecting the Data

One major determinant of the duration of a Six Sigma project is the time required to collect the data. The time required is dependent on how frequently the data are available in the process and the ease of collection. The team needs to be involved in the data collection to make sure it is done right and that any anomalies or problems are recorded and understood. This will make the analysis of the data in the Analyze phase easier.

Baseline Performance

Once the team collects the data, they must answer the question "How are we doing?" by baselining the Y data. This is the final step in the Measure phase. The process baseline provides a quantifiable measure of the process performance before any improvement efforts have been initiated. Process baselines are a critical part of any process improvement effort, as they provide the reference point for assertions of benefits attained. There are a variety of metrics that can be used to baseline these Y's. These metrics include Sigma Level, Cp & Cpk, Pp & Ppk, First Time Yield and Rolled Throughput Yield. The team should select a metric that is appropriate for their business.

Sigma Level

Sigma Level is a commonly used metric for evaluating the performance of a process. Sigma level can be computed for continuous data, discrete data and yield; therefore it can be computed for most Y measures. A sigma level corresponds to a certain number of Defects Per Million Opportunities (DPMO). The higher the Sigma Level (Z), the lower the DPMO, which simply means a lower defect rate.

Table 20

Zlt (Sigma Level Long Term) | Zst (Sigma Level Short Term) | DPMO
1 | 2.5 | 158655.25
1.5 | 3 | 66807.20
2 | 3.5 | 22750.13
2.5 | 4 | 6209.67
3 | 4.5 | 1349.90
3.5 | 5 | 232.63
4 | 5.5 | 31.67
4.5 | 6 | 3.40
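The relationship in Table 20 comes from the standard normal distribution together with the conventional 1.5 sigma shift between short-term and long-term performance (Zst = Zlt + 1.5). A minimal sketch that reproduces the table and converts observed defect counts into a sigma level is shown below; the defect counts in the last example are hypothetical.

```python
# DPMO <-> sigma-level relationship used in Table 20.
# Zst = Zlt + 1.5 reflects the conventional 1.5-sigma long-term shift.
from scipy.stats import norm

def dpmo_from_zlt(z_lt):
    """Defects per million opportunities for a long-term Z value."""
    return norm.sf(z_lt) * 1_000_000           # sf = 1 - cdf (upper tail)

def sigma_level_from_dpmo(dpmo):
    """Short-term sigma level (Zst) for an observed long-term DPMO."""
    return norm.isf(dpmo / 1_000_000) + 1.5     # isf = inverse survival function

for z_lt in (1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5):
    print(f"Zlt={z_lt:<4} Zst={z_lt + 1.5:<4} DPMO={dpmo_from_zlt(z_lt):.2f}")

# From observed defect data (hypothetical numbers):
defects, units, opportunities = 45, 1200, 5
dpmo = defects / (units * opportunities) * 1_000_000
print(f"DPMO = {dpmo:.0f}, sigma level (Zst) = {sigma_level_from_dpmo(dpmo):.2f}")
```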

Capability Indices

Process capability is the ability of the process to meet the requirements set for that process. One way to determine process capability is to calculate capability indices. Capability indices are used for continuous data and are unit less statistics or metrics.

Cp

It is the potential capability indicating how well a process could be if it were centred on target. This is not necessarily its actual performance because it does not consider the location of the process, only the spread. It doesn't take into account the closeness of the estimated process mean to the specification limits.

Cp indices recognize the fact that your samples represent rational subgroups, which indicate how the process would perform if the shift and drift between subgroups could be eliminated. Therefore, it calculates process spread using within-subgroup variation.

Cpk

Measures of potential process capability, calculated with data from the subgroups in your study. They measure the distance between the process average and the specification limits, compared to the process spread:

o CPL measures how close the process mean is running to the lower specification limit

o CPU measures how close the process mean is running to the upper specification limit

o Cpk equals the lesser of CPU and CPL.

Pp

It is the resultant capability indicating how well a process could be if it were centred on target. This is not necessarily its actual performance because it does not consider the location of the process, only the spread. It doesn't take into account the closeness of the estimated process mean to the specification limits. Pp ignores subgroups and considers the overall variation of the entire process. This overall variation accounts for the shift and drift that can occur between subgroups; therefore, it is useful in measuring capability over time. If your Pp value differs greatly from your Cp value, you conclude that there is significant variation from one subgroup to another.

Ppk

Measures of overall process capability, calculated with the overall process standard deviation. They measure the distance between the process average and the specification limits, compared to the process spread:

o PPL measures how close the process mean is running to the lower specification limit

o PPU measures how close the process mean is running to the upper specification limit

o Ppk equals the lesser of PPU and PPL.

If Ppk, PPU, and PPL are equal, the process is centered at the exact midpoint of the specification limits.
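The four indices can be computed directly from subgrouped data: Cp/Cpk use the within-subgroup (short-term) standard deviation, while Pp/Ppk use the overall (long-term) standard deviation. The sketch below is a minimal illustration; the pooled-standard-deviation estimator, the simulated subgroups and the specification limits are assumptions for the example (statistical packages offer several estimation options).

```python
# A minimal sketch of Cp, Cpk, Pp and Ppk computed from subgrouped data.
# Short-term sigma is estimated from the pooled subgroup standard deviation;
# R-bar/d2 is another common choice.
import numpy as np

def capability_indices(subgroups, lsl, usl):
    data = np.concatenate(subgroups)
    mean = data.mean()

    # Within-subgroup (short-term) and overall (long-term) standard deviation
    sigma_within = np.sqrt(np.mean([np.var(sg, ddof=1) for sg in subgroups]))
    sigma_overall = data.std(ddof=1)

    def pair(sigma):
        potential = (usl - lsl) / (6 * sigma)        # Cp or Pp
        upper = (usl - mean) / (3 * sigma)           # CPU or PPU
        lower = (mean - lsl) / (3 * sigma)           # CPL or PPL
        return potential, min(upper, lower)          # (Cp, Cpk) or (Pp, Ppk)

    cp, cpk = pair(sigma_within)
    pp, ppk = pair(sigma_overall)
    return {'Cp': cp, 'Cpk': cpk, 'Pp': pp, 'Ppk': ppk}

# Hypothetical example: 20 subgroups of size 5, specifications 9.0 to 11.0
rng = np.random.default_rng(7)
subgroups = [rng.normal(10.1, 0.25, size=5) for _ in range(20)]
print(capability_indices(subgroups, lsl=9.0, usl=11.0))
```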

Yield

Yield can be defined in multiple ways; traditionally it has been defined as the ratio of the total units delivered defect-free to the customer over the total number of units that entered the system. However this definition can obscure the impact of inspection and rework. Process Improvement experts usually are more interested in evaluating yield without rework to understand the true capability of the process.

FTY (First Time Yield, a.k.a. First Pass Yield): The ratio of the units of a product that pass completely through the process the first time without rework over the number of units entered into the process.

RTY (Rolled Throughput Yield): The combined overall likelihood of an item passing through all steps successfully the first time.
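A minimal sketch of the two yield metrics is shown below; the three process steps and their counts are hypothetical. RTY is simply the product of the step-level FTY values.

```python
# First Time Yield per step and Rolled Throughput Yield for the whole process.
# Hypothetical three-step process: units entering each step and units passing
# right first time (no rework).
steps = [
    {'name': 'Data entry',   'entered': 1000, 'right_first_time': 960},
    {'name': 'Verification', 'entered': 960,  'right_first_time': 930},
    {'name': 'Approval',     'entered': 930,  'right_first_time': 912},
]

rty = 1.0
for step in steps:
    fty = step['right_first_time'] / step['entered']   # First Time Yield
    rty *= fty                                          # Rolled Throughput Yield
    print(f"{step['name']}: FTY = {fty:.1%}")

print(f"Rolled Throughput Yield = {rty:.1%}")
```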


Measure Phase Tollgate checklist

Has a detailed Process Map been completed to better understand the process and problem, and show where in the process the root causes might reside?

Has the team identified the specific input (x), process (x), and output (y) measures needing to be collected for both effectiveness and efficiency categories (i.e., Quality, Speed, and Cost Efficiency measures)?

Has the team developed clear, unambiguous operational definitions for each measurement and tested them with others to ensure clarity and consistent interpretation?

Has a clear, reasonable choice been made between gathering new data or taking advantage of existing data already collected by the organization?

Has an appropriate sample size and sampling frequency been established to ensure valid representation of the process we are measuring?

Has the measurement system been checked for repeatability and reproducibility, potentially including training of data collectors?

Has the team developed and tested data collection forms or check sheets which are easy to use and provide consistent, complete data?

Has baseline performance and process capability been established? How large is the gap between current performance and the customer (or project) requirements?

Has the team been able to identify any ‘Quick Wins’?

Has the team completed the baseline savings documentation?


Analyze

Analyze Phase Overview

In the Analyze phase, the question to be answered is "What is wrong?" In other words, in this phase the team determines the root causes of the problems of the process. The team identified the problems of the process in the Define and Measure phases of the project. The key deliverable from this phase is validated root causes.

The team will use a variety of tools and statistical techniques to find these root causes. The team must choose the right tools to identify these root causes, as there is no fixed set of tools for a certain situation. The tools chosen are based on the type of data that were collected and what the team is trying to determine from the data. The team usually would use a combination of graphical and numerical tools. The graphical tools are important to understand the data characteristics and to ensure that the statistical analyses are meaningful (e.g., not influenced by outliers). The numerical (or statistical) analyses ensure that any differences identified in the graphs are truly significant and not just a function of natural variation.

The team should have already identified the process outputs - the CTQ's and CTP's, or the little Y's - in the Define phase. The X's are process and input variables that affect the CTQ's and CTP's. The first consideration in the Measure phase is to identify the 'x' data to collect while baselining these 'y' variables. The 'y' data are needed to establish a baseline of the performance of the process. The 'x' data are collected concurrently with the Y's so that the relationships between the X's and Y's can be studied in the Analyze phase. These, in turn, will impact the Big Y's - the key metrics of the business. If Six Sigma is not driven from the top, a Green/Black Belt may not see the big picture, and the selection of a project may not address something of criticality to the organization.

The objective of this phase is to list all probable X's, identify the critical X's, and verify that the critical X's are sufficient for the project. The three steps that enable us to do so are:

•List All Probable X’s Step 7

• Identify Critical X's Step 8

•Verify Sufficiency of Critical X's for the project Step 9

Chapter

7


Step 7- List all Probable X’s Most project executions require a cross-functional team effort because different creative ideas at different levels of management are needed in the definition and the shaping of a project. These ideas are better generated through brainstorming sessions.

Qualitative Analysis

Brainstorming

Brainstorming is used at the initial steps and during the Analyze phase of a Project to identify potential factors that affect the output. It is a group discussion session that consists of encouraging a voluntary generation of a large volume of creative, new, and not necessarily traditional ideas by all the participants. It is very beneficial because it helps prevent narrowing the scope of the issue being addressed to the limited vision of a small dominant group of managers. Since the participants come from different disciplines, the ideas that they bring forth are very unlikely to be uniform in structure. They can be organized for the purpose of finding root causes of a problem and suggest palliatives. If the brainstorming session is unstructured, the participants can give any idea that comes to their minds, but this might lead the session to stray from its objectives.

Five Why’s Analysis

Asking "Why?" may be a favourite technique of your three year old child in driving you crazy, but it could teach you a valuable Six Sigma quality lesson. The 5 Whys is a technique used in the Analyze phase of the Six Sigma DMAIC methodology. By repeatedly asking the question "Why" (five is a good rule of thumb),Green Belt/Black Belt can peel away the layers of symptoms which can lead to the root cause of a problem. Although this technique is called "5 Whys," you may find that you will need to ask the question fewer or more times than five before you find the issue related to a problem. The benefits of 5 Why’s is that it is a simple tool that can be completed without statistical analysis. Table 21 shows an illustration of the 5 Why’s analysis. Based on this analysis, we may decide to take out the non-value added signature for the director.

Table 21: 5 Why's Example

Customers are unhappy because they are being shipped products that don't meet their specifications.

Why Because manufacturing built the products to a specification that is different from what the customer and the sales person agreed to.

Why Because the sales person expedites work on the shop floor by calling the head of manufacturing directly to begin work. An error happened when the specifications were being communicated or written down.

Why Because the "start-work" form requires the sales director's approval before work can begin and slows the manufacturing process (or stops it when the director is out of the office).

Why Because the sales director needs to be continually updated on sales for discussions with the CEO.


Affinity Diagram

If the ideas generated by the participants to the brainstorming session are few (less than 15), it is easy to clarify, combine them, determine the most important suggestions, and make a decision. However, when the suggestions are too many it becomes difficult to even establish a relationship between them. An affinity diagram or KJ method (named after its author, Kawakita Jiro) is used to diffuse confusion after a brainstorming session by organizing the multiple ideas generated during the session. It is a simple and cost-effective method that consists of categorizing a large amount of ideas, data, or suggestions into logical groupings according to their natural relatedness. When a group of knowledgeable people discusses a subject with which they are all familiar, the ideas they generate should necessarily have affinities. To organize the ideas, perform the following:

a. The first step in building the diagram is to sort the suggestions into groups based on their relatedness and a consensus from the members.

b. Determine an appropriate header for the listings of the different categories.

c. An affinity must exist between the items on the same list, and if some ideas need to be on several lists, let them be.

d. After all the ideas have been organized, several lists that contain closely related ideas should appear. Listing the ideas according to their affinities makes it much easier to assign deliverables to members of the project team according to their abilities.

Cause-and-Effect Analysis

The cause-and-effect (C&E) diagram—also known as a fishbone (because of its shape) or Ishikawa diagram (named after Kaoru Ishikawa, its creator) is used to visualize the relationship between an outcome and its different causes. There is very often more than one cause to an effect in business; the C&E diagram is an analytical tool that provides a visual and systematic way of linking different causes (input) to an effect (output). It shows the relationship between an effect and its first, second and third order causes. It can be used to identify the root causes of a problem. The building of the diagram is based on the sequence of events. “Sub-causes” are classified according to how they generate “sub-effects,” and those “sub-effects” become the causes of the outcome being addressed.

The first step in constructing a fishbone diagram is to define clearly the effect being analyzed.

The second step consists of gathering all the data about the key process input variables (KPIV), the potential causes (in the case of a problem), or requirements (in the case of the design of a production process) that can affect the outcome.

The third step consists of categorizing the causes or requirements according to their level of importance or areas of pertinence. The most frequently used categories are:

o Manpower, machine, method, measurement, mother-nature, and materials for manufacturing

o Equipment, policy, procedure, plant, and people for services

Subcategories are also classified accordingly; for instance, different types of machines and computers can be classified as subcategories of equipment.

The last step is the actual drawing of the diagram. The diagram is immensely helpful to draw a mind map of the cause and effect relationship.


The fishbone diagram does help visually identify the potential root causes of an outcome. Further statistical analysis is needed to determine which factors contribute the most to creating the effect.

Figure 27: Fishbone Diagram


Figure 28: Cause and Effect Diagram Example

Figure 29: Cause and Effect Diagram Example


C&E Matrix (Function Deployment Matrix)

The relationship between Key Process Input Variables (KPIV) and Key Process Output Variables (KPOV) is ranked in a spreadsheet to prioritize the x(s), which are listed from the process flow diagram, C&E diagram or team inputs.

Activities to be performed:

• The team should be assembled

• Discuss KPIVs and KPOVs

• Use the FDM worksheet to list KPOVs.

• Give weights to KPOVs on the scale 1-10 (CPN)

• List KPIVs

• Rank the relationship between each KPIV with KPOV (1-10)

• The product of this rank and CPN of KPOV decides the priority of the respective KPIV

• Finally, a prioritized list of potential x(s) is obtained (a computational sketch follows the example figure below)

Figure 30: Cause and Effect Matrix Example
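A minimal sketch of the prioritization arithmetic is shown below; the KPOVs, KPIVs, CPN weights and relationship ranks are all hypothetical. Each KPIV's priority score is the sum of (relationship rank × KPOV weight) across the KPOVs.

```python
# A minimal sketch of the C&E (Function Deployment) Matrix prioritization.
# KPOV weights (CPN) and the 1-10 relationship ranks are hypothetical.
kpovs = {'On-time delivery': 9, 'Order accuracy': 7, 'Cost per order': 5}   # CPN weights

# Relationship rank of each KPIV against each KPOV (1-10)
kpivs = {
    'Order entry method': {'On-time delivery': 3, 'Order accuracy': 9, 'Cost per order': 2},
    'Staffing level':     {'On-time delivery': 8, 'Order accuracy': 2, 'Cost per order': 7},
    'System downtime':    {'On-time delivery': 9, 'Order accuracy': 4, 'Cost per order': 5},
}

scores = {
    kpiv: sum(kpovs[kpov] * rank for kpov, rank in ranks.items())
    for kpiv, ranks in kpivs.items()
}

# The highest scores identify the highest-priority x's to carry into the analysis
for kpiv, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{kpiv}: {score}")
```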

Process Mapping

Many business processes are poorly defined or totally lacking in description. Many procedures are simply described by word of mouth or may reside in documents that are obsolete. In process management, often by simply trying to define and map the process, we provide a means for both understanding and communicating operational details to those involved in the process.

It also provides a baseline, or standard, for evaluating the improvement. In many cases, merely defining and charting the process as it is can reveal many deficiencies such as redundant and needless steps and other non-value-added activities.


Process mapping allows the team to represent the process associated with their problem in a way that others find easy to understand, making the job of defining the current process easier.

It also allows people to easily understand where waste exists in the process and where the process has been poorly defined.

Process maps can be analyzed for:

o Time-per-event (reducing cycle time)

o Process repeats (preventing rework)

o Duplication of effort (identifying and eliminating duplicated tasks)

o Unnecessary tasks (eliminating tasks that are in the process for no apparent reason)

o Identifying and segregating Value-added and non-value-added tasks

The Process Map should contain enough detail to enable effective analysis

It should illustrate both the work flow and the organizational interaction

It should use a common language (symbology) which is understood by everyone

It should capture all multiple paths, decisions, and rework loops

It should contain adequate detail

o Too much detail is incomprehensible

o Too little detail has no analytical value

The Process Map should address all types of tasks

Figure 31: Process Map Tasks

The first step in analyzing the process map is developing detailed process maps. The initial AS-IS process maps should always be created by cross-functional team members and must reflect the actual process rather than an ideal or desired state.

Start with a high level map: It can be very useful to start with a high level process map of, say, five to ten steps. The Six Sigma team would have developed a high level process map (SIPOC) in the Define Phase.

The task types shown in Figure 31, with examples:

Operation - e.g., prep patient for surgery, type a letter, prepare a bill
Transportation - e.g., move material by messenger, move material by cart, mail
Storage - e.g., raw material in bulk storage, filing of documents
Delay - e.g., wait in line to be serviced, papers waiting to be filed and picked up
Inspection - e.g., examine a service for quality, read a gauge, examine forms, proofread


This helps to establish the scope of the process, identify significant issues and frame the more detailed map.

The Six Sigma team may choose a Top-Down Flow Chart, which describes the activities of the process in a hierarchical order, or a Functional Deployment Flow Chart, which shows the different functions that are responsible for each step in the process flow chart.

Map the process like a flowchart detailing each activity, arrow and decision. For each arrow, box, and diamond, list its function and the time spent (in minutes, hours, days).

Figure 32: Main Process Mapping Symbols


Table 22: Typical Symbols used in Process Maps

Detailed Process Mapping

Mark the input and output parameters as C, N, X, SOP as below:

• C (Constant) - Factor is a constant. It has a fixed value and cannot be changed to improve Y.

• N (Noise) - The factor goes high or low randomly. It is uncontrollable though it may affect Y.

• X (Variable) - Factors that affect project Y and can be controlled. Such variables should be considered for improvement in project Y.

• SOP (Standard Operating Procedure) - These are the Standard Operating Procedures which help in defining input levels and conditions. These are the standards and generally not changed during a project.


Figure 33: Detailed Process Mapping Example


Figure 34: Detail Process Mapping Example

Functional Deployment Flow Chart

A functional deployment flow chart shows the different functions that are responsible for each step in the process flow chart. An example is shown below:

How to make a Functional Deployment Flowchart

• Step 1: List the major steps of the process in the order in which they occur. This might be the output of your work with a top-down flowchart or you might use some other technique to create this list.

• Step 2: Across the top of your board, flipchart or paper write the names of the people or organizations involved in the process.

• Step 3: Under the name of the person or organization responsible for the first step in the process, draw a box and write that step in the box. If more than one person or group is responsible for a step, extend the box so it is also under the name of that person or group.

• Step 4: If any of the other people or groups help or advise the ones with primary responsibility for that step, draw an oval under the names of those people or groups.

• Step 5: Connect the ovals to the box with the process step in it.

• Step 6: After the first step is complete, put the second step under the people responsible for it.

• Step 7: Connect the first step to the second step, and then add ovals for any helpers or advisors in the second process step. Keep going this way with all the steps in the process.


Figure 35: Example Process Map with Functions Shown - order-processing steps (Log-in Order, Prioritize Order, Review for Specifications, Materials Explosion, Schedule Fabrication, Inspection, Distribution) mapped against the responsible functions (Clerk, Supervisor, Materials Management, Scheduler)

• Functional Deployment Map is a chart which maps the process in relation to responsibilities (sole or shared)

• Useful for understanding processes with parallel operations

• Applies well to both manufacturing and transactional processes

The second step is analyzing the process map for:

Time: One of the most effective Lean tools used in understanding waste in a process map is Value Add / Non-Value Add analysis. It assists in analyzing:

• Time-per-event (reducing cycle time)

• Process repeats (preventing rework)

• Duplication of effort (identifying and eliminating duplicated tasks)

• Unnecessary tasks (eliminating tasks that are in the process for no apparent reason)

• Identifying and segregating Value-added and non-value-added tasks

Risks: Process Maps can be analyzed to determine prevalent risks in the current process. Failure Mode and Effects Analysis is a structured and systematic process to identify potential design and process failures before they have a chance to occur with the ultimate objective of eliminating these failures or at least minimizing their occurrence or severity. FMEA (Failure Modes and Effects Analysis) can help us determine potential failure modes, potential failure effects, severity rank of the failure effect, potential causes of failure, occurrence rank of the failure, current process controls and the effectiveness of the control measures.

• Identifying the areas and ways in which a process or system can fail (failure mode)

• Estimating risk associated with specific causes

• Identifying and prioritizing the actions that should be taken to reduce those risks



• Evaluating and documenting proposed process plans or current control plans

Let’s review VA/NVA analysis and FMEA in detail

VA and NVA Analysis

The objective of non-value added analysis is to:

Eliminate the hidden costs that do not add value to the customer

Reduce unnecessary process complexity, and thus errors

Reduce the process cycle time

Reduce cost and increase capacity through better utilization of resources

Reduce inventory

Increase revenue (e.g., reduce development process cycle time to introduce new products to market faster)

What are VA and NVA activities?

Value-Added Step:

Customers are willing to pay for it.

It transforms the product/ service.

It’s done right the first time.

Non value-Added Step:

Is not essential to produce output.

Does not add value to the output.

Includes:

o Defects, errors, omissions.

o Preparation/setup, control/inspection.

o Over-production, processing, inventory.

o Transporting, motion, waiting, delays.

Non Value Added Steps can be further classified

Required NVA

o Business necessity (e.g. accounting)

o Employee necessity (e.g. payroll)

o Process necessity (e.g. inspection)

NVA - Waste

How do we carry out VA/NVA analysis?

VA / NVA Analysis Procedure

Step 1: Map the process like a flowchart detailing each activity, arrow and decision.

Step 2: For each arrow, box, and diamond, list its function and the time spent (in minutes, hours, days) on the value-added check list.


Step 3: Now become the customer. Step into their shoes and ask the following questions:

o Is the customer not willing to pay for it?

o Is this inspection, testing, or checking?

o Is this just “fix it” error correction work or waste?

Step 4: If the answer to any of these questions is "yes", then the step may be non-value-added.

o If so, can we remove it from the process? Much of the idle, non-value-adding time in a process lies in the arrows: orders sit in in-boxes or computers waiting to be processed, calls wait in queue for a representative to answer. How can we eliminate delay?

Step 5: How can activities and delays be eliminated, simplified, combined, or reorganized to provide a faster, higher quality flow through the process?

o Investigate hand-off points: how can you eliminate delays and prevent lost, changed, or misinterpreted information or work products at these points? If there are simple, elegant, or obvious ways to improve the process now, revise the flowchart to reflect those changes.
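Once each step on the map has been classified, the time analysis is simple arithmetic. The sketch below is a minimal illustration; the step names, times and VA/NVA classifications are hypothetical.

```python
# A minimal sketch of a VA/NVA time summary for a mapped process.
# The steps, times and classifications below are hypothetical.
process_steps = [
    {'step': 'Receive order',        'minutes': 5,   'class': 'VA'},
    {'step': 'Order waits in queue', 'minutes': 240, 'class': 'NVA-Waste'},
    {'step': 'Enter order details',  'minutes': 10,  'class': 'VA'},
    {'step': 'Credit check',         'minutes': 30,  'class': 'NVA-Required'},
    {'step': 'Correct entry errors', 'minutes': 20,  'class': 'NVA-Waste'},
    {'step': 'Ship product',         'minutes': 15,  'class': 'VA'},
]

total = sum(s['minutes'] for s in process_steps)
for label in ('VA', 'NVA-Required', 'NVA-Waste'):
    minutes = sum(s['minutes'] for s in process_steps if s['class'] == label)
    print(f"{label:<13} {minutes:>4} min  ({minutes / total:.0%} of total cycle time)")
```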

Failure Modes and Effects Analysis (FMEA)

What is Risk?

Risk can be defined as the likelihood of occurrence of an undesirable event combined with the magnitude of its impact.

What is Risk Assessment?

Risk assessment is the determination of quantitative and qualitative value of risk related to a concrete situation and a recognized threat (also called hazard).

What does Risk Analysis include?

Risk analysis includes

Identification of risks

Estimation of their likelihood of occurrence

Estimation of their causes and the magnitude of their potential impact

Risk evaluation and Development of risk mitigation plan

Failure modes and effects analysis is a structured method to identify failure modes and determine the severity of the failure, the cause of the failure, the frequency of the failure, the current controls in place and the efficiency of those controls. This enables us to evaluate the current risks in the process and thereafter develop an action plan to mitigate those risks.

FMEA Operating Definition

Failure Mode and Effects Analysis is a structured and systematic process to identify potential design and process failures before they have a chance to occur with the ultimate objective of eliminating these failures or at least minimizing their occurrence or severity. FMEA helps in


Identifying the areas and ways in which a process or system can fail (failure mode)

Estimating risk associated with specific causes

Identifying and prioritizing the actions that should be taken to reduce those risks

Evaluating and documenting proposed process plans or current control plans

What is Failure Mode?

A Failure Mode is:

The way in which the product, service, input, or process could:

o Fail to meet requirements of the end user

o Cause downstream operations to fail

Things that could go wrong:

o Total failure

o Partial failure (too much, too little, too fast, or too slow)

o Unintended result

Best to be stated in terms of failing to perform a function

Combination of Failure Modes

Often, catastrophes are the result of more than one failure mechanism which occur in tandem.

Each individual failure mechanism, if it had occurred by itself, would probably have resulted in a far less “deadly outcome”.

Cause and Effect

Effects are usually events that occur downstream that affect internal or external customers.

Root causes are the most basic causes within the process owner’s control. Causes, and root causes, are in the background; they are an input resulting in an effect.

Failures are what transform a cause to an effect; they are often unobservable.

PFMEA (Process Failure Modes and Effects Analysis)

How does PFMEA help?

The process FMEA supports processes in reducing risk of failures by:


Identifying and evaluating process functions and requirements

Identifying and evaluating potential product and process related failure modes, and the effects of the potential failures on the process and customers

Identifying potential manufacturing, assembly or service process causes

Identifying process variables on which to focus process controls for occurrence reduction or increased detection of the failure conditions

Enabling the establishment of a priority system for preventive/corrective actions and controls

Process FMEA doesn’t rely on design changes to overcome limitations in the process. PFMEA assumes the product/service as designed will meet the design intent. Potential failure modes that can occur because of design weakness may be included in PFMEA. Their effect and avoidance is covered by Design FMEA.

What about the Design issues?

During the development of a PFMEA, a team may identify design opportunities which if implemented, would either eliminate or reduce the occurrence of a failure mode. Such information should be provided to Process Improvement/ Design expert for consideration and possible implementation.

Who should develop PFMEA?

PFMEA is developed and maintained by multi-disciplinary (or cross functional) team typically led by the Green Belt/ Black Belt/Process Owner.

Developing PFMEA

The PFMEA begins by developing a list of what the process is expected to do and not do. It is also known as the process intent

Identification of the product/service effects from the DFMEA should be included

A flow chart of the general process should be developed. It should identify the product/process characteristics associated with each operation.

The process flow diagram is a primary input for development of PFMEA. It helps establish scope of the analysis.

o The initial flow diagram is generally considered a high level process map

o A more detailed process map is needed for identification of potential failure modes

The PFMEA should be consistent with the information in the process flow diagram

Requirement(s) should be identified for each process/ function. Requirements are the outputs of each operation/step and related to the requirements of the product/service. The Requirement provides a description of what should be achieved at each operation/step. The Requirements provide the team with a basis to identify potential failure modes


Other sources of information may include, but are not limited to:

o Cause and Effect Matrix

o Lesson learned from previous product or similar process

o Known failure modes based on historical data

o Performance indicators (Cp, Cpk, Sigma level, FTY, RTY) data of similar process

Once we have most of the required information, we can fill out the Process FMEA form

PFMEA Template

Header fields: Process or Product Name ____ | Prepared by ____ | Responsible ____ | FMEA Date (Orig) ____ (Rev) ____ | Page ____ of ____

Column headings of the form:

Process Step / Function (a1) | Requirement (a2) | Potential Failure Mode (b) | Potential Effects of Failure (c) | SEV (d) | Class (e) | Potential Cause(s) / Mechanism(s) of Failure (f) | Current Process Controls - Prevention | OCC (g) | Current Process Controls - Detection | DET | RPN | Recommended Action(s) | Responsibility and Completion Date | Action Results: Actions Taken, SEV, OCC, DET, RPN

(The letters in parentheses correspond to the column descriptions in the body text below.)
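The RPN column of the form is conventionally calculated as RPN = Severity × Occurrence × Detection, each rated on a 1-10 scale, and is used to rank which failure modes to work on first. A minimal sketch with hypothetical failure modes and ratings is shown below.

```python
# A minimal sketch of FMEA risk prioritization using the conventional
# RPN = Severity x Occurrence x Detection. Failure modes and ratings are
# hypothetical.
failure_modes = [
    {'mode': 'Wrong part shipped',       'sev': 8, 'occ': 3, 'det': 4},
    {'mode': 'Late delivery',            'sev': 6, 'occ': 5, 'det': 2},
    {'mode': 'Incomplete documentation', 'sev': 4, 'occ': 7, 'det': 6},
]

for fm in failure_modes:
    fm['rpn'] = fm['sev'] * fm['occ'] * fm['det']

# Work on the highest RPNs first (high-severity items also deserve attention
# regardless of RPN)
for fm in sorted(failure_modes, key=lambda f: f['rpn'], reverse=True):
    print(f"{fm['mode']:<26} SEV={fm['sev']} OCC={fm['occ']} "
          f"DET={fm['det']}  RPN={fm['rpn']}")
```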

Body of the PFMEA

‘a1’- Process Step and Function: Process Step/Function can be separated into two columns or combined into a single column. Process steps may be listed in the process step/function column, and requirement listed in the requirements column

o Enter the identification of the process step or operation being analyzed, based on the numbering process and terminology. Process numbering scheme, sequencing, and terminology used should be consistent with those used in the process flow diagram to ensure traceability and relationships to other documents.

o List the process function that corresponds to each process step or operation being analyzed. The process function describes the purpose or intent of the operation.


‘a2’- Requirements: are the inputs to the process specified to meet design intent and other customer requirements.

o List the requirements for each process function of the process step or operation being analyzed.

o If there are multiple requirements with respect to a given function, each should be aligned on the form with the respective associated failure modes in order to facilitate the analysis

‘b’- Potential Failure Mode – is defined as the manner in which the process could potentially fail to meet the process requirements (including the design intent). The team should communicate any basic design issues that result in process concerns to the design team for resolution.

o List the potential failure mode(s) for the particular operation in terms of the process requirement(s).

o Potential failure modes should be described in technical terms, not as a symptom noticeable by the customer

‘c’- Potential effect(s) of failure are defined as the effects of the failure mode as perceived by the customer(s).

o The effects of the failure should be described in terms of what the customer might notice or experience, remembering that the customer may be an internal customer, as well as the End User.

o For the End User, the effect should be stated in terms of product or system. If the customer is the next operation or subsequent operation (s), the effects should be stated in terms of process/operation performance.

A few questions we can ask to determine the potential effect of the failure

o Does the potential failure Mode physically prevent the downstream processing or cause potential harm to equipment or operators?

o What is the potential impact on the End User?

o What would happen if an effect was detected prior to reaching the End User?

‘d’- Severity is the value associated with the most serious effect for a given failure. The team should agree on evaluation criteria and a ranking system and apply them consistently, even if modified for individual process analysis. It is not recommended to modify criteria for ranking values 9 and 10. Failure modes with a rank of 1 should not be analyzed further.

‘e’- Classification column may be used to highlight high priority failure modes or causes that may require additional engineering assessment. This column may also be used to classify any product or process characteristics (e.g., critical, key, major, significant) for components, subsystems, or systems that may require additional process controls.


‘f’- Potential Cause(s) of Failure Mode is defined as an indication of how the failure could occur, and is described in terms of something that can be corrected or can be controlled. Potential causes of failure may be an indication of a design or process weakness, the consequence of which is the failure.

o Identify and document every potential cause for each failure mode. Each cause should be described as concisely and completely as possible.

o There may be one or more causes that can result in the failure mode being analyzed. This results in multiple lines for each cause in the table or form. Only specific errors or malfunction should be listed.

‘g’- Occurrence is the likelihood that a specific cause of failure will occur. The likelihood of occurrence ranking has a relative meaning rather than an absolute value.

o Estimate the likelihood of occurrence of a potential cause of failure on a 1 to 10 scale. A consistent occurrence rank should be used to ensure continuity.

o The occurrence ranking number is a relative ranking within the scope of FMEA and may not reflect the actual likelihood of occurrence.

o If statistical data are available from a similar process, the data should be used to determine the occurrence ranking. In other cases, subjective assessment may be used to estimate the ranking.

‘h’- Current process controls are descriptions of the controls that can either prevent, to the extent possible, the cause of failure from occurring, or detect the failure mode or cause of failure should it occur. There are two types of process controls to consider:

o Prevention: Eliminate (prevent) the cause of the failure or the failure mode from occurring, or reduce its rate of occurrence

o Detection: Identify (detect) the cause of failure or the failure mode, leading to the development of associated corrective action(s) or counter-measures

o The preferred approach is to first use prevention controls, if possible. The initial occurrence rankings will be affected by the prevention controls provided they are integrated as part of the process. The initial detection rankings will be based on process control that either detect the cause of failure, or detect the failure mode

‘i’- Detection is the rank associated with the best detection control listed in the detection controls column. Detection is a relative ranking within the scope of the individual FMEA. In order to achieve a lower ranking, generally the planned detection control has to be improved.

o When more than one control is identified, it is recommended that the detection ranking of each control be included as part of the description. Record the lowest ranking in the detection column

o Assume the failure has occurred and then assess the capabilities of all the “Current Process Controls” to detect the failure


o Do not automatically presume that the detection ranking is low because the occurrence is low, but do assess the ability of the process controls to detect low frequency failure modes and prevent them from going further in the process.

o Random quality checks are unlikely to detect the existence of an isolated problem and should not influence the detection ranking.

‘j’ - RPN stands for Risk Priority Number which is the product of Severity, Occurrence and Detection

o RPN = Severity (S) * Occurrence (O) * Detection (D)

o Within the scope of the individual FMEA, this value can range between 1 and 1000

Determining action priorities – Once the team has completed initial identification of failure modes and effects, causes and controls, including rankings for severity, occurrence and detection, they must decide if further efforts are needed to reduce the risk. Due to the inherent limitation on resources, time, technology, and other factors they must choose how to best prioritize these efforts.

o The initial focus of the team should be oriented towards failure modes with highest severity ranks. When the severity is 9 or 10, it is imperative that the team ensure that the risk is addressed through existing design controls or recommended actions.

o For failure modes with severities of 8 or below, the team should consider causes having the highest occurrence or detection rankings. It is the team’s responsibility to look at the information, decide upon an approach, and determine how to best prioritize their risk reduction efforts in the way that best serves their organization and customers. A small calculation sketch follows below.
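
As a quick illustration of the RPN arithmetic and the severity-first prioritization described above, here is a minimal Python sketch; the failure modes and their S/O/D values are made-up examples, not from any real PFMEA.

# Illustrative sketch: compute RPN and order failure modes for action,
# putting severity 9-10 items first, then ranking the rest by RPN.
failure_modes = [
    # (description, severity, occurrence, detection) -- made-up numbers
    ("Wrong label applied",  9, 3, 4),
    ("Under-filled package", 6, 5, 7),
    ("Scratched surface",    4, 6, 3),
]

def rpn(sev, occ, det):
    return sev * occ * det          # value between 1 and 1000

prioritized = sorted(
    failure_modes,
    key=lambda fm: (fm[1] >= 9, rpn(fm[1], fm[2], fm[3])),
    reverse=True,
)

for desc, s, o, d in prioritized:
    print(f"{desc:25s} S={s} O={o} D={d} RPN={rpn(s, o, d)}")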

‘k’- Recommended Action(s) – In general, prevention actions (i.e. reducing the occurrence) are preferable to detection actions. The intent of any recommended action is to reduce rankings in the following order: severity, occurrence, and detection. Example approaches to reduce these are explained below:

o To reduce Severity(S) Ranking: Only a design or process revision can bring about a reduction in the severity ranking.

o To reduce Occurrence (O) Ranking: To reduce occurrence, process and design revisions may be required. A reduction in the occurrence ranking can be effected by removing or controlling one or more of the causes of the failure mode through a product or process design revision.

o To reduce Detection (D) Ranking: The preferred method is the use of error/mistake proofing. A redesign of the detection methodology may result in a reduction of the detection ranking. In some cases, a design change in the process step may be required to increase the likelihood of detection (i.e. reduce the detection ranking)

‘l’ - Responsibility and Target Completion Date – Enter the name of the individual and organization responsible for completing each recommended action, including the target completion date.

‘m’- Action(s) Taken and Completion Date- After the action has been implemented, enter a brief description of the action taken and actual completion date


‘n’- Severity, Occurrence, Detection and RPN – After the preventive/corrective action has been completed, determine and record the resulting severity, occurrence, and detection rankings. Calculate and record the resulting RPN.

Severity

A numerical measure of how serious the effect of the failure is.

Table 22: PFMEA Severity Evaluation Criteria

Rank 10
Effect on product (customer): Failure to meet safety and/or regulatory requirements. Potential failure affects safe vehicle operation and/or involves noncompliance with government regulation without warning.
Effect on process (manufacturing/assembly): Failure to meet safety and/or regulatory requirements. May endanger operator (machine or assembly) without warning.

Rank 9
Effect on product (customer): Failure to meet safety and/or regulatory requirements. Potential failure affects safe vehicle operation and/or involves noncompliance with government regulation with warning.
Effect on process (manufacturing/assembly): Failure to meet safety and/or regulatory requirements. May endanger operator (machine or assembly) with warning.

Rank 8
Effect on product (customer): Loss or degradation of primary function. Loss of primary function (vehicle inoperable, does not affect safe vehicle operation).
Effect on process (manufacturing/assembly): Major disruption. 100% of product may have to be scrapped. Line shutdown or stop ship.

Rank 7
Effect on product (customer): Loss or degradation of primary function. Degradation of primary function (vehicle operable, but at reduced level of performance).
Effect on process (manufacturing/assembly): Significant disruption. A portion of the production run may have to be scrapped. Deviation from primary process, including decreased line speed or added manpower.

Rank 6
Effect on product (customer): Loss or degradation of secondary function. Loss of secondary function (vehicle operable, but comfort/convenience functions inoperable).
Effect on process (manufacturing/assembly): Moderate disruption. 100% of the production run may have to be reworked off line and accepted.

Rank 5
Effect on product (customer): Loss or degradation of secondary function. Degradation of secondary function (vehicle operable, but comfort/convenience functions at reduced level of performance).
Effect on process (manufacturing/assembly): Moderate disruption. A portion of the production run may have to be reworked off line and accepted.

Rank 4
Effect on product (customer): Annoyance. Appearance or audible noise, vehicle operable, item does not conform and noticed by most customers (>75%).
Effect on process (manufacturing/assembly): Moderate disruption. 100% of the production run may have to be reworked in station before it is processed.

Rank 3
Effect on product (customer): Annoyance. Appearance or audible noise, vehicle operable, item does not conform and noticed by many customers (50%).
Effect on process (manufacturing/assembly): Moderate disruption. A portion of the production run may have to be reworked in station before it is processed.

Rank 2
Effect on product (customer): Annoyance. Appearance or audible noise, vehicle operable, item does not conform and noticed by discriminating customers (<25%).
Effect on process (manufacturing/assembly): Minor disruption. Slight inconvenience to process, operation or operator.

Rank 1
Effect on product (customer): No effect. No discernible effect.
Effect on process (manufacturing/assembly): No effect. No discernible effect.

Occurrence

Occurrence: A measure of probability that a particular failure will actually happen.

The degree of occurrence is measured on a scale of 1 to 10, where 10 signifies the highest probability of occurrence.

Table 23: PFMEA Occurrence Evaluation Criteria

Likelihood of Failure | Criteria: Occurrence of Cause (incidents per items/vehicles) | Rank

Very High | ≥ 100 per thousand (≥ 1 in 10) | 10

High | 50 per thousand (1 in 20) | 9

High | 20 per thousand (1 in 50) | 8

High | 10 per thousand (1 in 100) | 7

Moderate | 2 per thousand (1 in 500) | 6

Moderate | 0.5 per thousand (1 in 2,000) | 5

Moderate | 0.1 per thousand (1 in 10,000) | 4

Low | 0.01 per thousand (1 in 100,000) | 3

Low | ≤ 0.001 per thousand (1 in 1,000,000) | 2

Very Low | Failure is eliminated through preventive control | 1

Detection

A measure of the probability that a particular failure or cause will be detected in the current operation and will not pass on to the next operation (i.e., it would not affect the internal/external customer).

Table 24: Detection Evaluation Criteria

Rank 10 - Almost Impossible (No detection opportunity): No process control; cannot detect or is not analyzed.

Rank 9 - Very Remote (Not likely to detect at any stage): Failure mode and/or error (cause) is not easily detected (e.g. random audits).

Rank 8 - Remote (Problem detection post processing): Failure mode detection post processing by operator through visual/tactile/audible means.

Rank 7 - Very Low (Problem detection at source): Failure mode detection in station by operator through visual/tactile/audible means, or post processing through use of attribute gauging (go/no-go, manual torque check/clicker wrench, etc.).

Rank 6 - Low (Problem detection post processing): Failure mode detection post processing by operator through use of variable gauging, or in station by operator through use of attribute gauging (go/no-go, manual torque check/clicker wrench, etc.).

Rank 5 - Moderate (Problem detection at source): Failure mode or error (cause) detection in station by operator through use of variable gauging, or by automated controls in station that will detect a discrepant part and notify the operator (light, buzzer, etc.). Gauging performed on setup and first-piece check (for set-up causes only).

Rank 4 - Moderately High (Problem detection post processing): Failure mode detection post processing by automated controls that will detect a discrepant part and lock the part to prevent further processing.

Rank 3 - High (Problem detection at source): Failure mode detection in station by automated controls that will detect a discrepant part and automatically lock the part in station to prevent further processing.

Rank 2 - Very High (Error detection and/or problem prevention): Error (cause) detection in station by automated controls that will detect the error and prevent the discrepant part from being made.

Rank 1 - Almost Certain (Detection not applicable; error prevention): Error (cause) prevention as a result of fixture design, machine design or part design. Discrepant parts cannot be made because the item has been error proofed by process/product design.

Graphical Tools

Data displayed graphically helps us develop hypotheses and determine further analysis plans. We can use:

Box Plot

Histogram

Pareto Plots

Pareto Plots

The Pareto chart is based on the Pareto principle. Vilfredo Pareto was an economist in the early 1900s who discovered that 80% of all the wealth was held by 20% of all the people. This became known as the 80/20 rule, and it was found to be applicable to more than the economy. Eighty percent of the warehouse space is taken up by 20% of the part numbers. Eighty percent of the defects are caused by 20% of the defect types. The Pareto chart is a bar chart. The height of the bars indicates the count, or frequency, of occurrence. The bars represent one grouping of the data, such as defect type. The idea motivating this chart is that 80% of the count will be due to 20% of the categories. The bars are arranged in descending order; therefore the dominant group can be determined, and it will be the first bar on the left. This chart can be used in a variety of places in a Six Sigma project.

[Pareto Chart of Complaints: bars show the frequency of each complaint category with a cumulative-percent line. Frequencies 164, 90, 45, 34, 22, 19, 17; percents 41.9, 23.0, 11.5, 8.7, 5.6, 4.9, 4.3; cumulative % 41.9, 65.0, 76.5, 85.2, 90.8, 95.7, 100.0. Category labels include Transport, Food, Temp Control, Desk Alloc, Hskpg, Elevators and Other.]

Figure 36: Worksheet: Complaints.mtw
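
For readers who want to reproduce such a chart outside Minitab, here is a minimal Python/matplotlib sketch. It uses the counts recovered from the figure above; the pairing of counts with category names is an assumption made for illustration.

import numpy as np
import matplotlib.pyplot as plt

# Complaint counts in descending order (category pairing assumed from the figure)
categories = ["Transport", "Food", "Temp Control", "Desk Alloc", "Hskpg", "Elevators", "Other"]
counts     = [164, 90, 45, 34, 22, 19, 17]

cum_pct = np.cumsum(counts) / sum(counts) * 100   # cumulative percentage line

fig, ax1 = plt.subplots()
ax1.bar(categories, counts)                       # bars in descending order
ax1.set_ylabel("Frequency")

ax2 = ax1.twinx()                                 # secondary axis for cumulative %
ax2.plot(categories, cum_pct, marker="o", color="red")
ax2.set_ylabel("Cumulative Percent")
ax2.set_ylim(0, 100)

plt.title("Pareto Chart of Complaints")
plt.show()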

Step 8- Identify Critical Xs

Hypothesis Testing

Hypothesis testing is a statistical analysis where a hypothesis is stated, sample data are collected, and a decision is made based on the sample data and related probability value. This testing can be used to detect differences such as differences between a process mean and the target for the process, differences between two suppliers, or differences between multiple employees. To conduct a hypothesis test, the first step is to state the business question involving a comparison. For instance, the team may wonder if there is a difference in variability seen in thickness due to two different material types. Once the business question is posed, the next step is to convert the business language or question into statistical language or hypothesis statements. Two hypothesis statements are written. The first statement is the null hypothesis, H0. This is a statement of what is to be disproved.

The second statement is the alternative hypothesis, Ha. This is a statement of what is to be proved. Between

the two statements, 100% of all possibilities are covered. The hypothesis will be focused on a parameter of the population such as the mean, standard deviation, variance, proportion, or median.


The type of hypothesis test that could be conducted is based on the data type (discrete or continuous) of the y data. For instance, if the data are continuous, the analysts may want to conduct tests on the mean, median, or variance. If the data are discrete, the analysts may want to conduct a test on proportions.

A statistical hypothesis is an assumption about a population parameter. This assumption may or may not be true. The best way to determine whether a statistical hypothesis is true would be to examine the entire population. Since that is often impractical, researchers typically examine a random sample from the population. If sample data are not consistent with the statistical hypothesis, the hypothesis is rejected.

Hypothesis Tests

Statisticians follow a formal process to determine whether to reject a null hypothesis, based on sample data. This process, called hypothesis testing, consists of four steps.

o State the hypotheses: This involves stating the null and alternative hypotheses. The hypotheses

are stated in such a way that they are mutually exclusive. That is, if one is true, the other must be false.

o Formulate an analysis plan: The analysis plan describes how to use sample data to evaluate the

null hypothesis. The evaluation often focuses around a single test statistic.

o Analyze sample data: Find the value of the test statistic (mean score, proportion, t-score, z-score,

etc.) described in the analysis plan.

o Interpret results: Apply the decision rule described in the analysis plan. If the value of the test

statistic is unlikely, based on the null hypothesis, reject the null hypothesis.

Null Hypothesis (Ho): It is a hypothesis which states that there is no difference between the procedures and is denoted by Ho

Alternative Hypothesis (Ha): It is a hypothesis which states that there is a difference and is denoted by Ha

The Language of Hypothesis

The Null Hypothesis (H0)

o A statement of ‘No difference’

o It is a statement you are testing in order to determine whether or not that statement is true

o In other words, observations are the result purely of chance

o An example expression is H0: A = B

Alternative Hypothesis (Ha)

o A statement of ‘difference’

o It is that there is a real effect and the observations are affected by the effect and some pure

chance variation

o The form of Ha depends on the direction of the effect we are looking for.

o For example: Ha: A ≠ B, or Ha: A > B, or Ha: A < B


Decision Errors

There is always possibility of errors in the decisions we make. Two types of errors can result from a hypothesis test. They are:

o Type I error. A Type I error occurs when the researcher rejects a null hypothesis when it is true.

The probability of committing a Type I error is called the significance level. This probability is also

called alpha, and is often denoted by α.

o Type II error. A Type II error occurs when the researcher fails to reject a null hypothesis that is

false. The probability of committing a Type II error is called Beta, and is often denoted by β. The

probability of not committing a Type II error is called the Power of the test.

Steps of Hypothesis testing using the p-value approach

State the null hypothesis H0, and the alternative hypothesis Ha.

Choose the level of significance (α) and the sample size, n.

Determine the appropriate test statistic and sampling distribution.

Collect the data and compute the sample value of the appropriate test statistic.

Calculate the p-value based on the test statistic and compare the p-value to α.

Make the statistical decision. If the p-value is greater than or equal to α, we fail to reject the null

hypothesis. If the p-value is less than α, we reject the null hypothesis.

Express the statistical decision in the context of the problem.
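
A minimal Python sketch of these steps, assuming a one-sample t-test on made-up cycle-time data against an illustrative target of 30 minutes at α = 0.05:

from scipy import stats

# Steps 1-2: hypotheses and significance level
# H0: population mean = 30, Ha: population mean != 30, alpha = 0.05
alpha = 0.05
sample = [31.2, 29.8, 32.5, 30.9, 33.1, 28.7, 31.8, 30.4]   # illustrative data

# Steps 3-5: one-sample t-test and its p-value
t_stat, p_value = stats.ttest_1samp(sample, popmean=30)

# Steps 6-7: statistical decision expressed in context
if p_value < alpha:
    print(f"p = {p_value:.3f} < {alpha}: reject H0 - mean differs from 30")
else:
    print(f"p = {p_value:.3f} >= {alpha}: fail to reject H0")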

Parametric Hypothesis tests

F-test

A fast food chain may want to evaluate variation in cutlet diameter across two units, and therefore carry out F-test to compare variances of two groups

o F-test is used to compare the variances of two groups (see the sketch below). It tests the hypothesis of whether there is any

difference between two population variances

o F-test results help us determine whether we should assume equal variances or not while carrying

out 2-independent sample t test
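
A minimal Python sketch of the F-test idea, using illustrative cutlet-diameter samples; scipy does not expose a dedicated two-sample F-test function, so the F ratio and its two-sided p-value are computed from the F distribution directly.

import numpy as np
from scipy import stats

unit_a = [4.1, 4.3, 3.9, 4.2, 4.4, 4.0, 4.1, 4.3]   # illustrative diameters (cm)
unit_b = [4.0, 4.6, 3.7, 4.5, 3.8, 4.4, 4.2, 3.9]

var_a, var_b = np.var(unit_a, ddof=1), np.var(unit_b, ddof=1)
F = var_a / var_b                                   # ratio of sample variances
dfn, dfd = len(unit_a) - 1, len(unit_b) - 1

# Two-sided p-value for H0: equal variances
p = 2 * min(stats.f.cdf(F, dfn, dfd), stats.f.sf(F, dfn, dfd))
print(f"F = {F:.2f}, p = {p:.3f}")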

Bartlett/Levene test

Consider that the same fast food chain has more than two units across which to compare cutlet diameter. The F-test is limited to two groups; therefore the Bartlett/Levene test should be used to compare the variances of multiple groups (a sketch follows this list).

o Levene and Bartlett tests can be used to test variances of several groups.

o The Bartlett test for variances assumes that the data follow a normal distribution; the Levene test does not make any such assumption. The Levene test for variances is a non-parametric equivalent of the Bartlett test.

Page 119: Benchmark Six Sigma Black Belt Preparatory Module v12

B L A C K B E L T P R E P A R A T O R Y M O D U L E

114

o Test for equal variances is carried out to satisfy the assumption of homoscedasticity (equal variances)

for ANOVA-One way
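
A minimal sketch using scipy's bartlett and levene functions on three illustrative groups of cutlet diameters:

from scipy import stats

unit_1 = [4.1, 4.3, 3.9, 4.2, 4.4, 4.0]   # illustrative data
unit_2 = [4.0, 4.6, 3.7, 4.5, 3.8, 4.4]
unit_3 = [4.2, 4.1, 4.3, 4.0, 4.2, 4.1]

# Bartlett assumes normality; Levene does not.
stat_b, p_b = stats.bartlett(unit_1, unit_2, unit_3)
stat_l, p_l = stats.levene(unit_1, unit_2, unit_3)
print(f"Bartlett p = {p_b:.3f}, Levene p = {p_l:.3f}")
# A small p-value (< 0.05) suggests at least one group variance differs.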

2-sample t test

An assembly operation requires a 6-week training period for a new employee to reach maximum efficiency. A new method of training was proposed and an experiment was carried out to compare the new method with the standard method. A group of 18 new employees was split into two groups at random. Each group was trained for 6 weeks, one group using the standard method and the other the new method. The time (in minutes) required for each employee to assemble a device was recorded at the end of the training period. Here the X is the training method and the Y is the assembly time. The two levels of the factor training method are ‘standard’ and ‘new’. The analyst may carry out 2-sample t test to compare the time required to assemble a device for two levels of training.

o 2- independent sample t-test compares and tests the difference between the means of two

independent populations

o 2-independent sample t-tests are of two types, and we choose one of them depending on the F-test results (a sketch of the 2-sample t-test follows this list)

o 2-independent sample t-test assuming equal variances

o 2-independent sample t-test assuming unequal variances
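
A minimal sketch of the comparison described above, with illustrative assembly times; the equal_var argument would normally be chosen from a preliminary test for equal variances (F-test or Levene).

from scipy import stats

standard = [32, 37, 35, 28, 41, 44, 35, 31, 34]   # illustrative assembly times (min)
new      = [35, 31, 29, 25, 34, 40, 27, 32, 31]

# equal_var=True assumes equal variances; set False if the variance test says otherwise
t_stat, p_value = stats.ttest_ind(standard, new, equal_var=True)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# p < 0.05 would indicate a significant difference in mean assembly time.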

ANOVA-One Way: Testing for differences among means of multiple groups

Sometimes, a hypothesis test involves testing for differences in the mean level of a Y for an X with three or more levels. A factor (X), such as baking temperature, may have several numerical levels (e.g., 300 degrees, 350 degrees, 400 degrees, 450 degrees), or a factor such as preferred level may have several categorical levels (low, medium, high). This type of hypothesis test is called a completely randomized design. When the numerical measurements (values of the CTQ) across the groups (levels of the X) are continuous and certain assumptions are met, you use the analysis of variance (ANOVA) to compare the population means of the CTQ for each level of X. The ANOVA procedure used for completely randomized designs is referred to as a one-way ANOVA and is an extension of the t test for the difference between two means.

Although ANOVA is an acronym for analysis of variance, the term is misleading because the objective is to analyze differences among the population means, not the variances. However, unlike the t test, which compares differences in two means, the one-way ANOVA simultaneously compares the differences among three or more population means. This is accomplished through an analysis of the variation among the populations and also within the populations. In ANOVA, the total variation of the measurements in all the populations is subdivided into variation that is due to differences among the populations and variation that is due to variation within the populations. Within-group variation is considered random or experimental error. Among-group variation is attributable to treatment effects, which represent the effect of the levels of X, called a factor, used in the experiment on the CTQ or Y. Simply put, one-way Analysis of Variance tests whether the means of several groups are equal or not (a sketch follows the assumptions listed below).

Underlying Assumptions of ANOVA-One Way

o Within each sample, the values are independent

o The k samples are normally distributed

o The samples are independent of each other


o The k samples are all assumed to come from populations with the same variance
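
A minimal one-way ANOVA sketch with scipy, using illustrative measurements at three baking temperatures:

from scipy import stats

temp_300 = [8.1, 7.9, 8.3, 8.0, 8.2]   # illustrative CTQ measurements
temp_350 = [8.6, 8.8, 8.5, 8.9, 8.7]
temp_400 = [8.2, 8.4, 8.1, 8.3, 8.5]

f_stat, p_value = stats.f_oneway(temp_300, temp_350, temp_400)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
# p < 0.05: at least one group mean differs from the others.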

• 2 proportion test

A hypothesis test for two population proportions is used to determine whether they are significantly different (a sketch follows). This procedure uses the null hypothesis that the difference between two population proportions is equal to a hypothesized value (H0: p1 - p2 = P0), and tests it against an alternative hypothesis, which can be either left-tailed (p1 - p2 < P0), right-tailed (p1 - p2 > P0), or two-tailed (p1 - p2 ≠ P0).
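
A minimal sketch of a pooled two-proportion z-test with illustrative defect counts; the calculation is written out directly rather than relying on any particular statistics-package function.

import math
from scipy import stats

# Illustrative counts: defectives and sample sizes from two suppliers
x1, n1 = 45, 500     # supplier 1: 45 defectives out of 500
x2, n2 = 28, 450     # supplier 2: 28 defectives out of 450

p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)                    # pooled proportion under H0: p1 = p2
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
p_value = 2 * stats.norm.sf(abs(z))               # two-tailed p-value
print(f"z = {z:.2f}, p = {p_value:.3f}")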

• Chi-Square test of association

A hypothesis test of independence between categorical variables. For example, a manager of three customer support call centers wants to know if a successful resolution of a customer's problem (yes or no) depends on which branch receives the call. The manager tallies the successful and unsuccessful resolutions for each branch in a table, and performs a chi-square test of independence on the data. In this case, the chi-square statistic quantifies how the observed distribution of counts varies from the distribution you would expect if no relationship exists between call center and a successful resolution.
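
A minimal sketch of the call-centre example using scipy's chi2_contingency; the tallies are illustrative.

from scipy import stats

# Rows: branches A, B, C; columns: resolved yes / no (illustrative tallies)
observed = [
    [120, 30],
    [110, 45],
    [ 95, 60],
]

chi2, p_value, dof, expected = stats.chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")
# p < 0.05: resolution success appears to depend on the branch.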

Non-parametric tests

The most commonly used statistical tests (t-tests, Z-tests, ANOVA, etc.) are based on a number of assumptions. Non-parametric tests, while not assumption-free, make no assumption of a specific distribution for the population. The assumptions for non-parametric tests are always much less restrictive than for their parametric counterparts. For example, classical ANOVA requires the assumptions of mutually independent random samples drawn from normal distributions that have equal variances, while the non-parametric counterparts require only the assumption that the samples come from any identical continuous distributions. Also, classical statistical methods are strictly valid only for data measured on interval or ratio scales, while non-parametric statistics apply to frequency or count data and to data measured on nominal or ordinal scales. Since interval and ratio data can be transformed to nominal or ordinal data, non-parametric methods are valid in all cases where classical methods are valid; the reverse is not true. Ordinal and nominal data are very common in Six Sigma work. Nearly all customer and employee surveys, product quality ratings, and many other activities produce ordinal and nominal data.

So if non-parametric methods are so great, why do we ever use parametric methods? When the assumptions hold, parametric tests provide greater power than non-parametric tests. That is, the probability of rejecting H0 when it is false is higher with parametric tests than with a non-parametric test using the same sample size. However, if the assumptions do not hold, then non-parametric tests may have considerably greater power than their parametric counterparts. It should be noted that non-parametric tests perform comparisons using medians rather than means, ranks rather than measurements, and signs of difference rather than measured differences. In addition to not requiring any distributional assumptions, these statistics are also more robust to outliers and extreme values.


Table 25: Non Parametric Tests

Test: 1-sample sign
What it does: Performs a one-sample sign test of the median and calculates the corresponding point estimate and confidence interval.
Parametric analogs: 1-sample Z-test, 1-sample t-test

Test: 1-sample Wilcoxon
What it does: Performs a one-sample Wilcoxon signed rank test of the median and calculates the corresponding point estimate and confidence interval.
Parametric analogs: 1-sample Z-test, 1-sample t-test

Test: Mann-Whitney
What it does: Performs a hypothesis test of the equality of two population medians and calculates the corresponding point estimate and confidence interval.
Parametric analog: 2-sample t-test

Test: Kruskal-Wallis
What it does: Performs a hypothesis test of the equality of population medians for a one-way design (two or more populations). This test is a generalization of the procedure used by the Mann-Whitney test. See also: Mood’s median test.
Parametric analog: One-way ANOVA

Test: Mood’s median test
What it does: Performs a hypothesis test of the equality of population medians in a one-way design. Sometimes called a median test or sign scores test. Mood’s median test is more robust against outliers than the Kruskal-Wallis test, but is less powerful (the confidence interval is wider, on average) for analysing data from many distributions, including data from the normal distribution. See also: Kruskal-Wallis test.
Parametric analog: One-way ANOVA

Test: Friedman
What it does: Performs a non-parametric analysis of a randomized block experiment. Randomized block experiments are a generalization of paired experiments. The Friedman test is a generalization of the paired sign test with a null hypothesis of treatments having no effect. This test requires exactly one observation per treatment-block combination.
Parametric analogs: Two-way ANOVA, paired sign test

Test: Runs test
What it does: Tests whether or not the data order is random. Use Minitab’s Stat > Quality Tools > Run Chart to generate a run chart.
Parametric analog: None

Test: Levene’s test
What it does: Tests for equal variances. This method considers the distances of the observations from their sample median rather than their sample mean. Using the sample median rather than the sample mean makes the test more robust for smaller samples.
Parametric analog: Bartlett’s test
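
As a sketch, two of these tests are shown below as implemented in scipy (Mann-Whitney and Kruskal-Wallis); the sample values are illustrative.

from scipy import stats

group_a = [12, 15, 11, 19, 14, 16, 13]    # illustrative ordinal-scale ratings
group_b = [18, 21, 17, 22, 16, 20, 19]
group_c = [14, 13, 16, 15, 12, 17, 14]

# Mann-Whitney: non-parametric analogue of the 2-sample t-test (compares two medians)
u_stat, p_mw = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")

# Kruskal-Wallis: non-parametric analogue of one-way ANOVA (two or more groups)
h_stat, p_kw = stats.kruskal(group_a, group_b, group_c)

print(f"Mann-Whitney p = {p_mw:.3f}, Kruskal-Wallis p = {p_kw:.3f}")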

Correlation and Regression

Let’s take a look at a few scenarios.

Administrative – A financial analyst wants to predict the cash needed to support growth and

increases in capacity.

Market/Customer Research – The marketing department wants to determine how to predict a

customer’s buying decision from demographics and product characteristics.

Hospitality – Food and Beverage manager wants to see if there is a relationship between room

service delays and order size.

Customer Service - GB is trying to reduce call length for potential clients calling for a good faith

estimate on a mortgage loan. GB thinks that there is a relationship between broker experience and

call length.

Hospitality - The Green Belt suspects that the customers have to wait too long on days when there

are many deliveries to make at Six Sigma Pizza.

In the scenarios mentioned here, there is something common, i.e. we want to explore the relationship between Output and Input, or between two variables. Correlation and Regression helps us explore statistical relationship between two continuous variables.

Correlation

Correlation is a measure of the relation between two or more continuous variables. The Pearson correlation coefficient is a statistic that measures the linear relationship between the x and y. The symbol used is r. The correlation value ranges from -1 to 1. The closer the value is to 1 in magnitude, the stronger the relationship between the two.

A value of zero, or close to zero, indicates no linear relationship between the x and y.

A positive value indicates that as x increases, y increases.

A negative value indicates that as x increases, y decreases.

The Pearson correlation is a measure of a linear relationship, so scatter plots are used to depict the relationship visually. The scatter plot may show other relationships. Figure 37 below shows a scatter plot with correlation coefficient r = 0.982.


[Scatterplot of Weight gained (y-axis) vs Calories Consumed (x-axis), showing a strong positive linear relationship.]

Figure 37: Worksheet: Calories Consumed.mtw
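
A minimal sketch of computing the Pearson correlation coefficient with scipy; the calorie and weight-gain values below are illustrative and are not the worksheet data.

from scipy import stats

calories = [1500, 1800, 2100, 2400, 2700, 3000, 3300, 3600]   # illustrative values
weight   = [ 120,  210,  330,  410,  540,  610,  750,  830]

r, p_value = stats.pearsonr(calories, weight)
print(f"r = {r:.3f}, p = {p_value:.3f}")
# r close to +1: strong positive linear relationship.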

Regression

A natural extension of correlation is regression. Regression is the technique of determining a mathematical equation that relates the x's to the y. Regression analysis is also used with historical data - data where the business already collects the y and associated x's. The regression equation can be used as a prediction model for making process improvements.

Simple Linear Regression

Simple linear regression is a statistical method used to fit a line for one x and one y. The formula of the line is y = b0 + b1x, where b0 is the intercept term and b1 is the slope associated with x. These beta terms are called the coefficients of the model. The regression model describes a response variable y as a function of an input factor x. The larger the b1 term, the more change in y given a change in x. In Simple Linear Regression, a single variable “X” is used to define/predict “Y”

e.g.; Wait Time = b0+ (b1) x (Deliveries) + E (error)

Simple Regression Equation: Y = b0 + (b1) * (X) + E (error)

Y = mx+c+e

o m= slope, c= constant/intercept, e= error
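
A minimal sketch fitting the wait-time vs. deliveries line with scipy's linregress; the data are illustrative.

from scipy import stats

deliveries = [2, 4, 5, 7, 8, 10, 12, 14]              # illustrative X values
wait_time  = [8, 11, 13, 17, 18, 23, 26, 30]          # illustrative Y values (min)

fit = stats.linregress(deliveries, wait_time)
print(f"Wait Time = {fit.intercept:.2f} + {fit.slope:.2f} * Deliveries")
print(f"R-squared = {fit.rvalue**2:.3f}")              # % variation in Y explained by X

# Predict the mean wait time for a day with 9 deliveries
print("Predicted wait:", fit.intercept + fit.slope * 9)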

Regression terms

Types of Variables

Input Variable (X’s): These are also called predictor variables or independent variables. It is best if the

variables are continuous

Output Variable (Y’s): These are also called response variables or dependent variables (what we’re

trying to predict). It is best if the variables are continuous.


R-squared, also known as the coefficient of determination: The coefficient of determination (r2) is the ratio of the regression sum of squares (SSR) to the total sum of squares (SST). It represents the % variation in the output (dependent variable) explained by the input variable(s), or the percentage of response variable variation that is explained by its relationship with one or more predictor variables.

Prediction and Confidence Interval: These are types of confidence intervals used for predictions in regression and other linear models.

Prediction Interval: It represents a range that a single new observation is likely to fall given

specified settings of the predictors.

Confidence interval of the prediction: It represents a range that the mean response is likely to

fall given specified settings of the predictors.

The prediction interval is always wider than the corresponding confidence interval because of the added uncertainty involved in predicting a single response versus the mean response.

Step 9- Verify Sufficiency of Critical X’s for project

Although a statistical analysis may show that there is a statistically significant difference, there may not be a practical difference. In other words, the difference may not be big enough to be of importance to the business. The bigger the sample size used in an analysis, the smaller the deviation from the null hypothesis that may be detected. Always remember to check for outliers that may be influencing results. And finally, ask whether this difference means anything practically.

Analyze phase Tollgate checklist

Has the team conducted a value-added and cycle time analysis, identifying areas where time and

resources are devoted to tasks not critical to the customer?

Has the team examined the process and identified potential bottlenecks, disconnects, and redundancies that could contribute to the problem statement?

Has the team analyzed data about the process and its performance to help stratify the problem, understand reasons for variation in the process, and generate hypotheses as to the root causes of the current process performance?

Has an evaluation been done to determine whether the problem can be solved without a fundamental ‘white paper’ recreation of the process? Has the decision been confirmed with the Project Sponsor?

Has the team investigated and validated (or revalidated) the root cause hypothesis generated earlier, to gain confidence that the “vital few” root causes have been uncovered?

Does the team understand why the problem (the Quality, Cycle Time, or Cost Efficiency issue identified in the Problem Statement) is being seen?

Has the team been able to identify any additional ‘Quick Wins’?

Have learnings to date required modification of the Project Charter? If so, have these changes been approved by the Project Sponsor and the Key Stakeholders?

Have any new risks to project success been identified, added to the Risk Mitigation Plan, and a mitigation strategy put in place?


Improve

Improve Phase Overview

In the Improve phase, the team has validated the causes of the problems in the process and is ready to generate a list of solutions for consideration. They will answer the question "What needs to be done?" As the team moves into this phase, the emphasis goes from analytical to creative. To create a major difference in the outputs, a new way to handle the inputs must be considered. When the team has decided on a solution to present to management, the team must also consider the cost/benefit analysis of the solutions as well as the best way to sell their ideas to others in the business. The deliverables from the Improve phase are:

Proposed solution(s)

Cost/benefit analysis

Presentation to management

Pilot plan

Step 10- Generate and evaluate Alternative Solutions

The first task in the Improve phase is to develop ideas for improving the process. The traditional method for developing improvement ideas is to use conventional brainstorming; however, the Green Belt/Black Belt may also choose alternative ways to generate creative ideas in a team. Let’s explore a few of those techniques in the following section.

Ideation Techniques

Brainstorming (Round-Robin)

Most project executions require a cross-functional team effort because different creative ideas at

different levels of management are needed in the definition and the shaping of a project. These ideas

are better generated through brainstorming sessions. Brainstorming is a tool used at the initial steps

•Generate and Evaluate Alternative Solutions Step 10

•Select and Optimize best solution Step 11

•Pilot, Implement and Validate the Solution Step 12



or during the Analyze phase of a project. It is a group discussion session that consists of encouraging a

voluntary generation of a large volume of creative, new, and not necessarily traditional ideas by all

the participants. It is very beneficial because it helps prevent narrowing the scope of the issue being

addressed to the limited vision of a small dominant group of managers. Since the participants come

from different disciplines, the ideas that they bring forth are very unlikely to be uniform in structure.

They can be organized for the purpose of finding root causes of a problem and suggesting palliatives.

Brainstorming using De-Bono Six Thinking Hats

Dr. Edward de Bono developed a technique for helping teams stay focused on creative problem

solving by avoiding negativity and group arguments. Dr. de Bono introduced the Six Thinking Hats, which

represent different thought processes of team members, and also discussed how we can harness

these thoughts to generate creative ideas. These hats are:

The White Hat thinking requires team members to consider only the data and information at hand.

With white hat thinking, participants put aside proposals, arguments and individual opinions and

review only what information is available or required.

The Red Hat gives team members the opportunity to present their feelings or intuition about the

subject without explanation or need for justification. The red hat helps teams to surface conflict and

air feelings openly without fear of retribution. Use of this hat encourages risk-taking and right-brain

thinking.

The Black Hat thinking calls for caution and critical judgment. Using this hat helps teams avoid

“groupthink” and proposing unrealistic solutions. This hat should be used with caution so that

creativity is not stifled.

The Blue Hat is used for control of the brainstorming process. The blue hat helps teams evaluate the

thinking style and determine if it is appropriate. This hat allows members to ask for summaries and

helps the team progress when it appears to be off track. It is useful for “thinking about thinking.”

The Green Hat makes time and space available for creative thinking. When in use, the team is

encouraged to use divergent thinking and explore alternative ideas or options.

The Yellow Hat is for optimism and a positive view of things. When this hat is in use, teams look at

the logical benefits of the proposal. Every green hat idea deserves some yellow hat attention.

This technique is called the Six Thinking Hats. It can be used to enhance team creativity and evaluate

ideas. This technique can be applied during solution or idea generation and also can assist in building

consensus. This technique has been used world-wide, in a variety of corporations. During the Green

Belt training, we will discuss how we can utilize this concept to generate creative ideas and also to

build consensus on generated ideas.

Creative Thinking using Probing Methods

Structured Probing methods are extremely helpful in lateral thinking and problem solving

approaches. Process Improvement experts also consider challenging an idea, or disproving an idea to

be an initiation point for creative ideas. Most scientists and innovators like to probe to understand

the existence, validity and feasibility of an idea and this helps in improving and optimizing the idea,

and may also trigger a new idea.


Benchmarking

Benchmarking is a popular method for developing requirements and setting goals. In more conventional terms, benchmarking can be defined as measuring your performance against that of best-in-class companies, determining how the best-in-class achieve those performance levels, and using the information as the basis for your own company’s targets, strategies, and implementation.

Benchmarking involves research into the best practices at the industry, firm, or process level. Benchmarking goes beyond a determination of the “industry standard”; it breaks the firm’s activities down to process operations and looks for the best-in-class for a particular operation. For example, to achieve improvement in their parts distribution process, Xerox Corporation studied the retailer L.L. Bean.

Benchmarking must have a structured methodology to ensure successful completion of thorough and accurate investigations. However, it must be flexible enough to incorporate new and innovative ways of assembling difficult-to-obtain information. It is a discovery process and a learning experience. It forces the organization to take an external view, to look beyond itself. The essence of benchmarking is the acquisition of information.

The process begins with the identification of the process that is to be benchmarked. The process

chosen should be one that will have a major impact on the success of the business.

Once the process has been identified, contact a business library and request a search for the

information relating to your area of interest. The library will identify material from a variety of

external sources, such as magazines, journals, special reports, etc. Internet, organization’s internal

resources, Subject Matter experts in key department and of course network of contacts is immensely

helpful in this research.

Look for the best of the best, not the average firm. One approach is to build a compendium of

business awards and citations of merit that organizations have received in business process

improvement. Sources to consider are Industry Week’s Best Plant’s Award, National Institute of

Standards and Technology’s Malcolm Baldrige Award, USA Today and Rochester Institute of

Technology’s Quality Cup Award, European Foundation for Quality Management Award, Occupational

Safety and Health Administration (OSHA), Federal Quality Institute, Deming Prize, Competitiveness

Forum, Fortune magazine, United States Navy’s Best Manufacturing Practices, to name just a few.

Green Belt/Black Belt may wish to subscribe to an “exchange service” that collects benchmarking

information and makes it available for a fee. Once enrolled, Green Belt/Black Belt will have access to

the names of other subscribers—a great source for contacts. Don’t overlook your own suppliers as a

source for information. If your company has a program for recognizing top suppliers, contact these

suppliers and see if they are willing to share their “secrets” with you. Suppliers are predisposed to

cooperate with their customers; it’s an automatic door-opener. Also contact your customers.

Customers have a vested interest in helping you do a better job. If your quality, cost, and delivery

performance improve, your customers will benefit. Customers may be willing to share some of their

insights as to how their other suppliers compare with you. Again, it isn’t necessary that you get

information about direct competitors. Which of your customer’s suppliers are best at billing? Order

fulfilment? Customer service?

Another source for detailed information on companies is academic research. Companies often allow

universities access to detailed information for research purposes. While the published research

usually omits reference to the specific companies involved, it often provides comparisons and


detailed analysis of what separates the best from the others. Such information, provided by experts

whose work is subject to rigorous peer review, will often save Green Belt/Black Belt thousands of

hours of work.

After a list of potential candidates is compiled, the next step is to choose the best three to five

targets. As the benchmarking process evolves, the characteristics of the most desirable candidates

will be continually refined. This occurs as a result of a clearer understanding of your organization’s

key quality characteristics and critical success factors and an improved Knowledge of the marketplace

and other players. This knowledge and the resulting actions tremendously strengthen an

organization.

Benchmarking is based on learning from others, rather than developing new and improved approaches. Since the process being studied is there for all to see, benchmarking cannot give a firm a sustained competitive advantage. Although helpful, benchmarking should never be the primary strategy for improvement. Competitive analysis is an approach to goal setting used by many firms. This approach is essentially benchmarking confined to one’s own industry. Although common, competitive analysis virtually guarantees second-rate quality because the firm will always be following their competition. If the entire industry employs the approach, it will lead to stagnation for the entire industry, setting them up for eventual replacement by outside innovators.

Pugh Matrix

The Pugh Matrix was introduced by Stuart Pugh. Pugh Matrix Concept Selection is a quantitative technique used to rank the multi-dimensional options of an option set. It is frequently used in engineering for making design decisions but can also be used to rank investment options, vendor options, product options or any other set of multidimensional entities.

The Pugh Matrix refers to a matrix that helps determine a solution from a list of potential solutions. It is a scoring matrix used for concept selection, in which options are assigned scores relative to some pre-defined criteria. The selection is made based on the consolidated score. The Pugh matrix is a tool to facilitate a methodical, team-based approach for the selection of the best solution. It combines the strengths of different solutions and eliminates the weaknesses. This solution then becomes the datum, or base solution, against which other solutions are compared. The process is iterated until the best solution or concept emerges. The basic steps of the Pugh Concept Selection Process are:

Develop a list of the selection criteria. For evaluating product designs, list VOC requirements; for

evaluating improvement proposals, list customer requirements or organizational improvement goals.

Develop a list of all potential improvement solutions or all product designs to be rated.

Select one potential improvement or product design as the baseline - all other proposals are

compared to the baseline.

o For product designs, the baseline is usually either the current design or a preferred new

design.

o For improvement proposals, the baseline is usually the improvement suggested by the team

or an improvement that has strong management support.

o The current solution in place may also be used as a baseline

Enter the baseline proposal in the space provided.


Enter the selection criteria along the left side of the matrix and the alternative product or

improvement proposals across the top of the matrix.

Apply a weighting factor to all the selection criteria. These weights might not be the same for all

projects, as they can reflect localized improvement needs or changes in customer requirements. A 1-

to-9 scale or 1-to-5 scale can be used for weighting the importance of the selection criteria, using 1

for the least important criteria and 5 or 9 for the most important criteria.

Based on team input, score how well the baseline proposal matches each of the selection criteria. Use

a 1-to-9 or 1-to-5 scale for scoring the baseline, using 5 or 9 for very strong matches to the criteria,

and 1 for very poor matches to the criteria.

For each alternative proposal, the team should determine whether the alternative is Better, the

Same, or Worse than the baseline, relative to each of the selection criteria:

o Better results in a +1 score

o Same results in a 0 score

o Worse results in a -1 score.

Multiply the scores by the criteria weights and add them together to obtain the weighted score.

Table 26: Example Pugh Matrix

Evaluation Criteria | Imp | Datum | B | C | D

Ease of guests finding the lobby for check-in | 3 | S | S | S | -

Minimum wait times for the check-in process | 5 | S | - | - | +

Minimum errors in the room assignment | 5 | S | - | S | S

Appearance of the lobby area cleanliness | 4 | S | - | - | -

Sum of Same | | 4 | 1 | 2 | 1

Sum of Positives | | 0 | 0 | 0 | 1

Sum of Negatives | | 0 | 3 | 2 | 2

Weighted Sum of Positives | | 0 | 0 | 0 | 5

Weighted Sum of Negatives | | 0 | 14 | 9 | 7

Scoring the Proposed Solutions

Each Proposed Solution is rated on how well it addresses a selection criterion compared with the Baseline Solution. For each Proposed Solution, select whether it is Better than the Baseline Solution, the Same as the Baseline Solution (default), or Worse than the Baseline Solution.

Solution Scores

Based on your choice of Better, Same or Worse for each Proposed Solution (relative to the Baseline Solution), three scores are calculated:


Weighted Score: Each Better rating receives a raw score of 1, each same rating receives a raw score of 0, and each Worse rating receives a raw score of -1. The raw scores are multiplied by the Importance of Selection Criteria, and the sum of the raw scores times the Importance is the Weighted Score. A higher Weighted Score is better.

Worse/ Weighted Negative Score: Tracks the number of times that a Proposed Solution is rated worse than the Baseline Solution. The lower the Worse Score, the better a proposed solution is relative to the Baseline Solution.

Better/Weighted Positive Score: Tracks the number of times that a Proposed Solution is rated better than the Baseline Solution. The higher the Better Score, the better a Proposed Solution is relative to the Baseline Solution.
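
A minimal Python sketch of the weighted scoring just described, reproducing the ratings of Table 26; the code itself is only an illustration.

# Criteria weights and Better/Same/Worse ratings from Table 26 (relative to the Datum)
criteria = [
    ("Ease of guests finding the lobby for check-in", 3),
    ("Minimum wait times for the check-in process",   5),
    ("Minimum errors in the room assignment",         5),
    ("Appearance of the lobby area cleanliness",      4),
]
ratings = {                       # "+" = better, "S" = same, "-" = worse
    "B": ["S", "-", "-", "-"],
    "C": ["S", "-", "S", "-"],
    "D": ["-", "+", "S", "-"],
}
score_of = {"+": 1, "S": 0, "-": -1}

for solution, marks in ratings.items():
    weighted = sum(score_of[m] * w for m, (_, w) in zip(marks, criteria))
    pos = sum(w for m, (_, w) in zip(marks, criteria) if m == "+")
    neg = sum(w for m, (_, w) in zip(marks, criteria) if m == "-")
    print(f"{solution}: weighted score {weighted:+d} "
          f"(weighted positives {pos}, weighted negatives {neg})")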

Risk Analysis

The project leader may choose to carry out an FMEA to analyze the risks of the solution. Solutions are not devoid of risks, and therefore analyzing the potential impact of the solution on the risk parameters is important.

Consensus Building tools

Multi-voting

By design, brainstorming generates a long list of ideas. However, also by design, many are not realistic or feasible. The Multi-voting activity allows a group to narrow their list of options into a manageable size for sincere consideration or study. It may not help the group make a single decision but can help the group narrow a long list of ideas into a manageable number that can be discussed and explored. It allows all members of the group to be involved in the process and ultimately saves the group a lot of time by allowing them to focus energy on the ideas with the greatest potential.

When to use Multi-voting

When the group has a long list of possibilities and wants to narrow it down to a few for analysis and discussion, and/or when a selection process needs to be made after brainstorming.

Guidelines for Conducting the Multi-voting Activity

Brainstorm a list of options: Conduct the brainstorming activity to generate a list of ideas or options.

Review the list from the Brainstorming activity: Once the Green Belt/Black Belt has completed the list,

clarify ideas, merge similar ideas, and make sure everyone understands the options. Note: at this

time the group is not to discuss the merits of any idea, just clarify and make sure everyone

understands the meaning of each option.

Participants vote for the ideas that are worthy of further discussion: Each participant may vote for

as many ideas as they wish. Voting may be by show of hands or physically going to the list and

marking their choices or placing a dot by their choices. If they so desire, participants may vote for

every item.

Identify items for next round of voting: Count the votes for each item. Any item receiving votes from

half the people voting is identified for the next round of voting. For example, if there are 12 people

voting, any item receiving at least six votes is included in the next round. Signify the items for the next

vote by circling or marking them with a symbol, i.e., all items with a star by the number will be voted

on the next round.


Vote again: Participants vote again; however, this time they may only cast votes for half the items remaining on the list. In other words, if there are 20 items from the last round being voted on, a participant may only vote for ten items.

Repeat steps 4 and 5: Participants continue voting and narrowing the options as outlined in steps 4 and 5 until there is an appropriate number of ideas for the group to analyze as part of the decision-making or problem-solving process. Generally, groups need three to five options for further analysis.

Discuss remaining ideas: At this time the group engages in discussing the pros and cons of the remaining ideas. This may be done in small groups or by the group as a whole.

Proceed with appropriate actions: At this point the group goes to the next steps. This might be making a choice of the best option or identifying the top priorities.
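As a rough illustration of the narrowing rule in the steps above (an item advances only if it receives votes from at least half of the voters), the Python sketch below tallies one round. The idea names and vote counts are hypothetical.

# Sketch of one multi-voting round: items that receive votes from at least half
# of the voters move on to the next round. All names and counts are illustrative.
def surviving_items(vote_counts, n_voters):
    threshold = n_voters / 2
    return {item: votes for item, votes in vote_counts.items() if votes >= threshold}

round_1_votes = {"Idea A": 10, "Idea B": 4, "Idea C": 7, "Idea D": 11, "Idea E": 5, "Idea F": 9}

survivors = surviving_items(round_1_votes, n_voters=12)   # needs at least 6 votes
print(sorted(survivors))                                  # ['Idea A', 'Idea C', 'Idea D', 'Idea F']
# In the next round, each voter may vote for at most half of the surviving items,
# i.e. len(survivors) // 2 votes per person.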

Item – Number of Votes

1. Have a meeting agenda

2. Inform participants why they have to attend meeting

3. Have someone take notes at the meeting

4. Have a clear meeting objective

5. Reduce the number of topics to be discussed at each meeting

6. Start and end meetings on time

Figure 38: Multi-Voting Activity

Nominal Group Technique: A technique that supplements brainstorming. It is a structured approach used to generate additional ideas, survey the opinions of a small group, and prioritize brainstormed ideas. Nominal (meaning "in name only") group technique (NGT) is a structured variation of a small-group discussion used to reach consensus. NGT gathers information by asking individuals to respond to questions posed by a moderator, and then asking participants to prioritize the ideas or suggestions of all group members. The process prevents domination of the discussion by a single person, encourages all group members to participate, and results in a set of prioritized solutions or recommendations that represent the group's preferences. NGT is a decision-making method for groups that want to reach a decision quickly, as by a vote, but want everyone's opinion taken into account. First, every member of the group gives their view of the solution, with a short explanation. Then duplicate solutions are eliminated from the list of all solutions, and the members rank the remaining solutions: 1st, 2nd, 3rd, 4th, and so on. The ranks each solution receives are totalled, and the solution with the lowest (i.e., most favoured) total ranking is selected as the final


decision. Figure 39 shows the results of the nominal group technique: each idea was ranked by several team members, and the idea with the lowest total score is selected as the winner (in this example, Idea N).

NGT is structured to focus on problems, not people; it opens lines of communication and tolerates conflicting ideas.

It builds consensus and commitment to the final result, and is especially good for highly controversial issues.

Nominal Group Technique is most often used after a brainstorming session to help organize and prioritize ideas.

Figure 39: Example of Nominal Group Technique with ideas rated

The Four Step Process to Conduct Nominal Group Technique:

Generating Ideas: The moderator presents the question or problem to the group in written form and reads the question to the group. The moderator directs everyone to write ideas in brief phrases or statements and to work silently and independently. Each person silently generates ideas and writes them down.

Recording Ideas: Group members engage in a round-robin feedback session to concisely record each idea (without debate at this point). The moderator writes an idea from a group member on a flip chart that is visible to the entire group, and proceeds to ask for another idea from the next group member, and so on. There is no need to repeat ideas; however, if group members believe that an idea provides a different emphasis or variation, feel free to include it. Proceed until all members’ ideas have been documented.

Discussing Ideas: Each recorded idea is then discussed to determine clarity and importance. For each idea, the moderator asks, “Are there any questions or comments group members would like to make about the item?” This step provides an opportunity for members to express their understanding of the logic and the relative importance of the item. The creator of the idea need not feel obliged to clarify or explain the item; any member of the group can play that role.

Idea Scores (Figure 39)

Item      Card Rating Values      Number of Cards / Total
Idea 1    8, 8, 6, 7, 8, 2        6 / 39
Idea 2    6, 5, 4, 7, 3           5 / 25
Idea N    3, 2, 2, 1              4 / 8


Voting on Ideas: Individuals vote privately to prioritize the ideas. The votes are tallied to identify the ideas that are rated highest by the group as a whole. The moderator establishes what criteria are used to prioritize the ideas. To start, each group member selects the five most important items from the group list and writes one idea on each index card. Next, each member ranks the five ideas selected, with the most important receiving a rank of 5, and the least important receiving a rank of 1 (Green Belt/Black Belt may change the rank i.e. rank 1 can be the best and rank 5 can be the worst). After members rank their responses in order of priority, the moderator creates a tally sheet on the flip chart with numbers down the left-hand side of the chart, which correspond to the ideas from the round-robin. The moderator collects all the cards from the participants and asks one group member to read the idea number and number of points allocated to each one, while the moderator records and then adds the scores on the tally sheet. The ideas that are the most highly rated by the group are the most favoured group actions or ideas in response to the question posed by the moderator.
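The tally itself is simple arithmetic. The Python sketch below reproduces the Figure 39 scores under the rank-sum convention described earlier, where the idea with the lowest total is selected.

# Sketch of the NGT tally using the Figure 39 card ratings: ranks are totalled
# per idea and, under the lowest-total-wins convention, the lowest total is selected.
card_ratings = {
    "Idea 1": [8, 8, 6, 7, 8, 2],
    "Idea 2": [6, 5, 4, 7, 3],
    "Idea N": [3, 2, 2, 1],
}

totals = {idea: sum(cards) for idea, cards in card_ratings.items()}
for idea, total in totals.items():
    print(f"{idea}: {len(card_ratings[idea])} cards, total {total}")

winner = min(totals, key=totals.get)
print("Selected:", winner)   # Idea N, with the lowest total (8)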

When to Use NGT

NGT is a good method to use to gain group consensus, for example, when various people (program staff, stakeholders, community residents, etc.) are involved in constructing a logic model and the list of outputs for a specific component is too long and therefore has to be prioritized. In this case, the questions to consider would be: “Which of the outputs listed are most important for achieving our goal and are easier to measure? Which of our outputs are less important for achieving our goal and are more difficult for us to measure?”

Disadvantages of NGT

Requires preparation.

Is regimented and lends itself only to a single-purpose, single-topic meeting.

Minimizes discussion, and thus does not allow for the full development of ideas, and therefore can be a less stimulating group process than other techniques.

Advantages of NGT

Generates a greater number of ideas than traditional group discussions.

Balances the influence of individuals by limiting the power of opinion makers (particularly advantageous for use with teenagers, where peer leaders may have an exaggerated effect over group decisions or in meetings where established leaders tend to dominate the discussion).

Diminishes competition and pressure to conform, based on status within the group.

Encourages participants to confront issues through constructive problem solving.

Allows the group to prioritize ideas democratically.

Typically provides a greater sense of closure than can be obtained through group discussion.

Delphi Technique

The Delphi technique is a method that relies on a panel of experts who respond anonymously, as in a secret ballot process. After each round, a facilitator provides a summary of the experts' opinions along with the reasons for their decisions. Participants are encouraged to revise their answers in light of the replies from other experts. The process is stopped when pre-defined criteria are met, such as a set number of rounds. The advantage of this technique is that boisterous or overbearing team members will not have much of an impact on swaying the decisions of other team members. The Delphi technique, developed mainly by Dalkey and Helmer (1963) at the RAND Corporation in the 1950s, is a widely used and accepted method for achieving convergence of opinion concerning real-world knowledge solicited from experts within certain topic areas.


The Delphi technique is a widely used and accepted method for gathering data from respondents within their domain of expertise. The technique is designed as a group communication process which aims to achieve a convergence of opinion on a specific real-world issue. The Delphi process has been used in various fields of study such as program planning, needs assessment, policy determination, and resource utilization to develop a full range of alternatives, explore or expose underlying assumptions, as well as correlate judgments on a topic spanning a wide range of disciplines. The Delphi technique is well suited as a method for consensus-building by using a series of questionnaires delivered using multiple iterations to collect data from a panel of selected subjects. Subject selection, time frames for conducting and completing a study, the possibility of low response rates, and unintentionally guiding feedback from the respondent group are areas which should be considered when designing and implementing a Delphi study.

Delphi technique’s application is observed in program planning, needs assessment, policy determination, resource utilization, marketing & sales and multiple other business decision areas.

Fowles (1978) describes ten steps for the Delphi method:

Formation of a Delphi team to undertake a Delphi on a subject.

Selection of expert panel(s).

Development of the first round questionnaire

Testing the questionnaire for proper wording.

Transmission to the panel lists.

Analysis of 1st responses

Preparation of 2nd round.

Transmission of 2nd round questionnaires to the panel lists

Analysis of the 2nd round responses (7 to 9 may be repeated to get consensus)

Preparation and presentation of report

Organizations often customize these steps to meet their requirements; for example, time constraints may make a large number of iterations impossible.
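Between rounds, the facilitator's summary can be produced in many ways. The sketch below assumes the panel answers with numeric ratings and uses the median and interquartile range as one simple, illustrative convergence check; the choice of statistics is an assumption, not part of the method's definition.

# Minimal sketch of between-round feedback in a Delphi study, assuming the panel
# answers with numeric ratings. Median/IQR as the summary is an illustrative choice.
import statistics

def round_summary(responses):
    q1, _, q3 = statistics.quantiles(responses, n=4)
    return {"n": len(responses),
            "median": statistics.median(responses),
            "iqr": q3 - q1}

round_1 = [7, 9, 4, 8, 6, 9, 5, 8]   # hypothetical ratings on a 1-10 scale
round_2 = [7, 8, 7, 8, 7, 8, 6, 8]   # ratings after the panel saw the round-1 summary

for label, data in [("Round 1", round_1), ("Round 2", round_2)]:
    s = round_summary(data)
    print(f"{label}: n={s['n']}, median={s['median']}, IQR={s['iqr']:.1f}")
# A shrinking interquartile range from round to round is one simple sign that
# the panel's opinions are converging.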

Step 11- Select and Optimize Best Solution

Experiments: Designed experiments play an important role in quality improvement. While the confidence intervals and hypothesis tests discussed previously are limited to rather simple comparisons between one sample and requirements, or between two samples, designed experiments use ANOVA (analysis of variance) techniques to partition the variation in a response amongst the potential sources of variation.


The traditional approach, which most of us learned in high school science class, is to hold all factors constant except one. When this approach is used, we can be sure that the variation is due to a cause-and-effect relationship, or so we are told. However, this approach suffers from a number of problems:

It usually isn’t possible to hold all other variables constant.

There is no way to account for the effect of joint variation of independent variables, such as interaction.

There is no way to account for experimental error, including measurement variation.

The statistically designed experiment usually involves varying two or more variables simultaneously and obtaining multiple measurements under the same experimental conditions. The advantage of the statistical approach is threefold:

Interactions can be detected and measured. Failure to detect interactions is a major flaw in the OFAT approach.

Each value does the work of several values. A properly designed experiment allows the Green Belt/Black Belt to use the same observation to estimate several different effects. This translates directly into cost savings when using the statistical approach.

Experimental error is quantified and used to determine the confidence the experimenter has in his conclusions.

Black Belts use Design of Experiments (DOE) to craft well-designed efforts to identify which process changes yield the best possible results for sustained improvement. Whereas most experiments address only one factor at a time, the Design of Experiments (DOE) methodology focuses on multiple factors at one time. DOE provides data that illustrate how significant the input variables are to the output, whether acting alone or interacting with one another. DOE is defined as "a branch of applied statistics dealing with planning, conducting, analyzing and interpreting controlled tests to evaluate the factors that control the value of a parameter or group of parameters." DOE provides these advantages over other, more traditional methods:

Evaluating multiple factors at the same time can reduce the time needed for experimentation.

Some well-designed experiments do not require the use of sophisticated statistical methods to understand the results at a basic level. However, computer software can be used to yield very precise results as needed.

The costs vary depending on the experiment, but the financial benefits realized from these experiments can be substantial.

The figure below depicts an example of a relationship between the components that DOE examines:

Process input variables, normally referred to as x variables and as factors in DOE terminology.

Process output variables, normally referred to as y variables and as responses in DOE terminology.

The relationship between input variables and output variables.

The interaction, or relationship, between input variables as it relates to the output variables.


Figure 40: Example of relationship between components that DOE examines

DOE Terminology

Certain terms frequently used with DOE need to be defined clearly.

Factor: A predictor variable that is varied with the intent of assessing its effect on a response variable. Most often referred to as an "input variable."

Factor Level: A specific setting for a factor, i.e., a potential setting, value, or assignment of the predictor variable. In DOE, levels are frequently set as high and low for each factor. For example, if the factor is time, then the low level may be 50 minutes and the high level may be 70 minutes.

Response variable: A variable representing the outcome of an experiment. The response is often referred to as the output or dependent variable.

Treatment: The specific setting of factor levels for an experimental unit. For example, a level of temperature at 65° C and a level of time at 45 minutes describe a treatment as it relates to an output of yield.

Experimental error: The variation revealed in the outcomes of identical tests; that is, the variation in the response variable beyond that accounted for by the factors, blocks, or other assignable sources while conducting an experiment.

Experimental run: A single performance of an experiment for a specific set of treatment conditions.

Experimental unit: The smallest entity receiving a particular treatment, subsequently yielding a value of the response variable.

Predictor Variable: A variable that can contribute to the explanation of the outcome of an experiment. Also known as an independent variable.

Repeated Measures: The measurement of a response variable more than once under similar conditions. Repeated measures allow one to determine the inherent variability in the measurement system. Repeated measures are also known as "duplication" or "repetition."


Replicate: A single repetition of the experiment. See also replication.

Replication: Performance of an experiment more than once for a given set of predictor variables. Each of the repetitions of the experiment is called a "replicate." Replication differs from repeated measures in that it is a repeat of the entire experiment for a given set of predictor variables, not just repeat of measurements of the same experiment. Note: Replication increases the precision of the estimates of the effects in an experiment. Replication is more effective when all elements contributing to the experimental error are included. In some cases replication may be limited to repeated measures under essentially the same conditions. In other cases, replication may be deliberately different, though similar, in order to make the results more general.

Repetition: When an experiment is conducted more than once, repetition describes this event when the factors are not reset. Subsequent test trials are run again but not necessarily under the same conditions.

DOE Applications

Planning the experiment is probably the most important task in the Improve phase when using DOE. For planning to be done well, some experts estimate that 10-25% of project time should be devoted to planning and organizing the experiments. The purpose of DOE is to create an observable event from which data may be extracted and decisions made about the best methods to improve the process. DOE may be used most effectively in the following situations:

o Identifying factors that produce a specific response or outcome
o Selecting between alternative approaches to effect the best outcome

In DOE, a full factorial design combines the levels for each factor with the levels for all other factors. This basic design ensures that all combinations are used, but if there are many factors, it may take too much time or be too costly to implement. In that case, a fractional factorial design is selected, since it uses fewer runs and fewer treatments.

DOE Planning Process

The project team decides the exact steps to follow in the Improve phase. Steps to include in the Improve phase may actually be identified in the Measure and Analyze phases and should be noted to expedite later planning. What follows is a suggested guide for planning the experiment(s) to be conducted in the Improve phase; the suggested process may be modified depending on the exact situation. To use DOE, follow these steps:

1. Establish experiment objectives: Objectives differ per project, but the designs typically fall into three categories to support different objectives:

Screening – used to identify which factors are most important.

Characterization – used to quantify the relationships and interaction between several factors.

Optimization – used to develop a more precise understanding of just one or two variables.


2. Identify factors to be considered

Label both input variables (x factors) and output variables (y responses) in the experiment.

Use information collected in prior phases to assist in the identification process.

3. Finalize an experiment design

Select a design for the experiment.

Choose a design type (full factorial, fractional factorial, or others) that meets the experiment’s objectives.

Determine how the factors are measured.

Consider the resources needed and determine whether a practice run or pilot experiment may be needed.

4. Run the experiment

Run the experiment and collect the data. Place the initial data in the results column of a design array, a graphical representation of the experiment factors and results (see Table 27 below for an example).

Minimize chance for human error by carefully planning where human error could occur and allow for the possibility in the planning process.

Randomize the runs to reduce confounding (defined later in this topic).

Document the results as needed depending on the experiment.

5. Analyze the results of the experiment

Review the results of the experiment(s).

Examine the relationships among input variables (factors) acting together and with regards to the output variable(s) (responses).

6. Make decisions on next steps

Based on the results, determine next steps.

Are additional runs of the experiment needed?

Do the levels need to be modified prior to conducting the experiment again?

If the results point to an optimal solution, implement the factors and the levels of choice and look at the Control phase to sustain the desired improvements.

Design Array

Table 27: Example of a Design Array

RUN   A: TEMPERATURE   B: TIME   C: CATALYST VOLUME   RESULTS (YIELD %)

1 - - -

2 + - -

3 - + -

4 + + -

5 - - +

6 + - +


7 - + +

8 + + +
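The eight treatment combinations in Table 27 can be enumerated programmatically. The Python sketch below generates the same 2-level, 3-factor array in standard order; in practice the run order would then be randomized.

# Sketch: generate the 2-level, 3-factor (2^3) design array shown in Table 27.
# "-" is the low level, "+" the high level; run order should be randomized in practice.
from itertools import product

levels = ["-", "+"]
# Iterate so that factor A varies fastest and C slowest, matching Table 27.
runs = [(a, b, c) for c, b, a in product(levels, levels, levels)]

print("Run  A  B  C")
for i, (a, b, c) in enumerate(runs, start=1):
    print(f"{i:>3}  {a}  {b}  {c}")
print("Total runs:", len(runs))   # 2**3 = 8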

Barriers to the Planning Process

In all projects, barriers present themselves as obstacles to the project's successful completion. The Improve phase of DMAIC is no exception. The following are examples of the types of barriers to watch for when planning an experiment:

Objectives or purpose are unclear – the objectives are not developed and fully understood.

Factor levels are either set too low or too high – factor levels set inappropriately can adversely affect the data and understanding of the relationships between factors.

Unverified or misunderstood data from previous phases may lead to errors in planning and assumptions.

Experimentation is a cost – although DOE is more cost-effective than some other options, such as OFAT (one-factor-at-a-time) experiments, the costs can still be substantial and need to be considered carefully.

Lack of management support – experiments require the full support of management in order to use the required resources effectively.

Selecting Experiment Factors

Identifying process variables, both inputs/factors and outputs/responses, is an important part of the planning process. While the selection process varies based on the information gathered in the Analyze phase and the objectives of the experiment, variables should be selected that have the following basic characteristics:

Important to the process in question – Since many inputs and output variables may exist for a process, most experiments focus on only the most critical inputs and outputs for a process. Such emphasis makes it more likely to successfully improve the most relevant parts of a process and, on a practical level, limits the number of variables and the cost of conducting the experiment.

Identifiable relationships to the inputs and the outputs – If relationships are already evident based on prior information gathered, the design of the experiment can be more focused on those factors with the most positive impact on outputs.

Not extreme level values – The information related to the level values for the factors should not be extreme. Values that reflect a reasonable range around the actual performance of the factors usually yield the best results.

There is no magic formula or equation for selecting the factors. The guidelines listed above and a review of the analysis work done in previous phases should provide a good basis for selection. Remember, the goal of Improve is to model the possible combination of factors and levels that yield valid and necessary results. We recommend the use of process experts for selecting experimental factors and levels based on prior analysis. The prior analysis should suggest what the critical factors are and where the levels should be set for a first run in the experiment.


Other Planning Considerations

The DOE planning phase may include other considerations for the project team.

Iterative process: One large experiment does not normally reveal enough information to make final decisions. Several iterations may be necessary so that the proper decisions may be made and the proper value settings verified.

Measurement methods: Ensure that measurement methods are checked out prior to the experiment to avoid errors or variations from the measurement method itself. Review measurement systems analysis to ensure methods have been reviewed and instruments calibrated as needed, etc.

Process control and stability: The results from an experiment are more accurate if the process in question is relatively stable.

Inference space: If the inference space is narrow, then the experiment is focused on a subset of a larger process such as one specific machine, one operator, one shift, or one production line. With a narrowed or focused inference space, the chance for “noise” (variation in output from factors not directly related to inputs) is much reduced. If the inference is broad, the focus is on the entire process and the chances for noise impacting the results are much greater.

Types of Experiment Designs: Part of planning an experiment is selecting the experiment design.

o 2-level, 2 factor: The simplest of design options, the 2-level, 2-factor design uses only four combinations or runs. The number in the first column represents the run number. The "+" symbol represents the high level; the "–" symbol represents the low level.

Table 28: Example of 2 level, 2 factor

Factor A Factor B

1 + +

2 + -

3 - +

4 - -

o Full Factorial

Table 29: Example of Full Factorial

A: Temperature B: Time C: Catalyst Volume Results (Yield %)

1 - - - 72

2 + - - 90

3 - + - 79

4 + + - 89

5 - - + 78

6 + - + 88

7 - + + 81

8 + + + 85


This design option includes all levels and all factors for a given process. The advantage of a full factorial design is that all factors and levels are part of the experiment, thus ensuring the most complete data. If there are 2 levels and 6 factors (2^6), then there are 64 possible runs for the experiment. A common description of factorial experiments is the designator L^f, where f is the number of factors in the experiment and L is the number of levels.

o Fractional Factorial: This design is best used when you are unsure about which factors influence the response outcome or when the number of factors is large (usually considered to be 5 or more). A fractional factorial uses a subset of the total runs. For example, if there are 2 levels and 6 factors (2^6), then there are 64 possible runs for the experiment; for a fractional factorial, the experiment could be reduced to 32 runs or perhaps even 16 runs. An example of this may be viewed later in this topic.
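To show how the results of a full factorial such as Table 29 are analyzed, the sketch below estimates each main effect as the average yield at the factor's high level minus the average yield at its low level. The coded design matrix and yields are copied from Table 29; the analysis shown is a minimal illustration, not a full ANOVA.

# Sketch: estimate main effects from the full factorial yields in Table 29.
# Main effect = mean(yield at "+" level) - mean(yield at "-" level).
runs = [  # (A, B, C) coded levels and yield %, copied from Table 29
    ((-1, -1, -1), 72), ((+1, -1, -1), 90), ((-1, +1, -1), 79), ((+1, +1, -1), 89),
    ((-1, -1, +1), 78), ((+1, -1, +1), 88), ((-1, +1, +1), 81), ((+1, +1, +1), 85),
]

for i, name in enumerate(["A: Temperature", "B: Time", "C: Catalyst Volume"]):
    high = [y for levels, y in runs if levels[i] == +1]
    low = [y for levels, y in runs if levels[i] == -1]
    effect = sum(high) / len(high) - sum(low) / len(low)
    print(f"Main effect of {name}: {effect:+.2f}")
# Temperature shows the largest main effect (+10.50); interactions such as A*B can
# be estimated the same way, using the product of the coded columns as the level.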

Design Principles: Black Belts adhere to a set of design principles to assist in the proper experiment design.

Power: Equivalent to one minus the probability of a Type II error (1 - β). A higher power is associated with a higher probability of finding a statistically significant difference; lack of power usually occurs with smaller sample sizes. The Beta risk (i.e., Type II error or consumer's risk) is the probability of failing to reject the null hypothesis when there is a significant difference (i.e., a product is passed as meeting the acceptable quality level when in fact it is bad). Typically, β = 0.10, which means there is a 90% (1 - β) probability of rejecting the null hypothesis when it is false (the correct decision). The power of the sampling plan is defined as 1 - β; hence the smaller the β, the larger the power.

Sample Size: The number of sampling units in a sample. Determining sample size is a critical decision in any experiment design. Generally, if the experimenter is interested in detecting small effects, more replicates are required than if the experimenter is interested in detecting large effects. Increasing the sample size decreases the margin of error and improves the precision of the estimate. There are several approaches to determining sample size including, but not limited to: Operating Characteristic Curves, Specifying a Standard Deviation Increase, and Confidence Interval Estimation Method. Note: In a multistage sample, the sample size is the total number of sampling units at the conclusion of the final stage of sampling.
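One common illustration of the power/sample-size trade-off is the number of observations needed to detect a shift in a process mean with a two-sided z-test and known sigma, n = ((z_alpha/2 + z_beta) * sigma / delta)^2. The sketch below shows only this one approach, with illustrative inputs; it is not the only method mentioned above.

# Sketch: observations needed to detect a mean shift of size delta with a
# two-sided z-test, known sigma -- one simple view of the alpha/beta trade-off.
from math import ceil
from statistics import NormalDist

def sample_size(delta, sigma, alpha=0.05, beta=0.10):
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # critical value for the test
    z_beta = NormalDist().inv_cdf(1 - beta)         # quantile tied to the desired power
    return ceil(((z_alpha + z_beta) * sigma / delta) ** 2)

# Detecting a shift of 0.5 sigma with 90% power (beta = 0.10) at alpha = 0.05:
print(sample_size(delta=0.5, sigma=1.0))   # about 43 observations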

Balanced Design: A design where all treatment combinations have the same number of observations. If replication in a design exists, it would be balanced only if the replication was consistent across all the treatment combinations. In other words, the number of replicates of each treatment combination is the same.

Replication: Performance of an experiment more than once for a given set of predictor variables. Each of the repetitions of the experiment is called a replicate. Replication differs from repeated measures in that it is a repeat of the entire experiment for a given set of predictor variables, not just a repeat of measurements for the same experiment.


Replication involves an independent repeat of each factor combination in random order. For example, suppose a metallurgical engineer is interested in studying the effect of two different hardening processes, oil quenching and saltwater quenching, on an aluminum alloy. If he has five alloy specimens and treats each of them with each of the hardening processes, we will have ten observations; these should be made in random order to maintain the properties of replication. Replication has two important benefits. First, the experimenter can obtain an estimate of the experimental error, which becomes a basic unit of measurement for determining whether observed differences in the data are really statistically different. Second, if the sample mean is used to estimate the true mean response for one of the factor levels in the experiment, replication permits the experimenter to obtain a more precise estimate of this parameter.

Repetition: When an experiment is conducted more than once, repetition describes this event when the factors are not changed or reset. Subsequent test trials are run again but not necessarily under the same conditions.

Efficiency: In experimental designs, efficiency refers to an experiment that is designed in such a way as to include the minimal number of runs and to minimize the amount of resources, personnel, and time utilized.

Randomization: The process used to assign treatments to experimental units so that each experimental unit has an equal chance of being assigned a particular treatment. Randomization validates the assumptions made in statistical analysis and prevents unknown biases from impacting conclusions. By randomization we mean that both the allocation of the experimental material and the order in which the individual runs or trials of the experiment are performed are randomly determined.

For example, suppose the specimens in a metallurgical hardness experiment are of slightly different thicknesses and the effectiveness of the quenching medium may be affected by the specimen thickness. If all the specimens subjected to the oil quench are thicker than those subjected to the saltwater quench, systematic bias may be introduced into the results. This bias handicaps one of the quenching media and consequently invalidates our results. Randomly assigning the specimens to the quenching media alleviates this problem.

Blocking: The method of including blocks in an experiment in order to broaden the applicability of the conclusions or to minimize the impact of selected assignable causes. The randomization of the experiment is restricted and occurs within blocks.

Order: The order of an experiment refers to the chronological sequence of steps to an experiment. The trials from an experiment should be carried out in a random run order. In experimental design, one of the underlying assumptions is that the observed responses should be independent of one another (i.e., the observations are independently distributed). By randomizing the experiment, we reduce bias that could result by running the experiment in a “logical” order.
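Randomizing the run order is usually done with a simple random shuffle; a minimal sketch, using the eight runs of a 2^3 design as an illustrative example, is shown below.

# Sketch: randomize the order in which the 8 runs of a 2^3 design are performed,
# so that time-related nuisance variation is not confounded with the factors.
import random

standard_order = list(range(1, 9))   # run numbers in standard (Yates) order
run_order = standard_order.copy()
random.shuffle(run_order)

print("Perform the runs in this order:", run_order)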

Interaction effect: The effect for which the apparent influence of one factor on the response variable depends upon the level of one or more other factors. The existence of an interaction effect means that the factors cannot be changed independently of each other.

Confounding: Indistinguishably combining an effect with other effects or blocks. When done, higher-order effects are systematically aliased so as to allow estimation of lower-order effects. Sometimes, confounding results from poor planning or inadvertent changes to a design during the running of an experiment. Confounding can diminish or even invalidate the effectiveness of the experiment.


Alias: An alias is a confounded effect resulting from the nature of the designed experiment. The confounding may or may not be deliberate.

Types of Design

Experiments can be designed to meet a wide variety of experimental objectives. A few of the more common types of experimental designs are defined here.

Fixed-effects model: An experimental model where all possible factor levels are studied. For example, if there are three different materials, all three are included in the experiment.

Random-effects model: An experimental model where the levels of factors evaluated by the experiment represent a sample of all possible levels. For example, if we have three different materials but only use two materials in the experiment.

Mixed model: An experimental model with both fixed and random effects.

Completely randomized design: An experimental plan where the order in which the experiment is performed is completely random, for example:

Table 30: Example of Completely Randomized Design

Level Test Sequence Number

A 7 , 1 , 5

B 2 , 3 , 6

C 8 , 4

Randomized-block design: When focusing on just one factor in multiple treatments, it is important to keep all other conditions as constant as possible. Since the number of tests needed to ensure constant conditions might be too large to implement practically, an experiment may be divided into blocks. These blocks represent planned groups that exhibit homogeneous characteristics. A randomized block experiment limits each group in the experiment to exactly one measurement per treatment. For example, if an experiment is going to cover two shifts, then bias may emerge based on the shift during which a test was conducted. A randomized block plan might measure each item on each shift to reduce the chance of bias, and a randomized block experiment would randomly select the runs to be performed during each shift. For example, since the coolant temperature in the example below is probably the most difficult factor to adjust, but may partially reflect the impact of the change in shift, the best approach would be to randomly select the runs to


be performed during each shift. The random selection might put runs 1, 4, 5 and 8 in the first shift and runs 2, 3, 6 and 7 in the second shift. Another approach that may be used to nullify the impact of the shift change would be to do the first three replicates of each run during the first shift and the remaining two replicates of each run during the second shift.

Table 31: Example of Randomized Block Design

Run #   Feed   Speed   Coolant Temperature   Average Surface Finish Reading

1 .01 1300 100 10

2 .01 1300 140 4

3 .01 1800 100 6

4 .01 1800 140 2

5 .04 1300 100 7

6 .04 1300 140 6

7 .04 1800 100 6

8 .04 1800 140 3
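The random assignment of runs to shifts described above can be sketched as follows. Only the split into two blocks of four is required; the particular runs landing in each shift will differ from one randomization to the next.

# Sketch: randomly split the 8 runs of Table 31 into two shift blocks of 4 runs each,
# so any shift-to-shift difference is spread across the treatment combinations.
import random

runs = list(range(1, 9))
random.shuffle(runs)
shift_1, shift_2 = sorted(runs[:4]), sorted(runs[4:])

print("Shift 1 runs:", shift_1)   # e.g. [1, 4, 5, 8], as in the example above
print("Shift 2 runs:", shift_2)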

For example, assume we are conducting a painting test with different materials, material A and material B. We have four test pieces of each material. Ideally we would like to clean all of the pieces at the same time to ensure that the cleaning process doesn’t have an effect on our results; but what if our test requires that we use a cleaning tank that cleans two test pieces at a time? The tank load then becomes a “blocking factor.” We will have four blocks, which might look like this:

Table 32

Material   Tank Load   Test Piece Number
A          1           7
B          1           1
B          2           5
A          2           2
B          3           3
A          3           6
B          4           4
A          4           8

Since each material appears exactly once per cleaning tank load we say the design is balanced. The material totals or averages can be compared directly.


Latin-square designs

A Latin square design involves three factors in which the combination of the levels of any one of them and the levels of the other two appear once and only once. A Latin square design is often used to reduce the impact of two blocking factors by balancing out their contributions. A basic assumption is that these block factors do not interact with the factor of interest or with each other. This design is particularly useful when the assumptions are valid for minimizing the amount of experimentation.

The Latin square design has two limitations:
1. The number of rows, columns, and treatments must all be the same (in other words, designs may be 3x3x3, 4x4x4, 5x5x5, etc.).
2. Interactions between row and column factors are not measured.

Table 33: Example of Latin Square Design (3x3)

Engine configuration

Aircraft I II III

1 A B C

2 B C A

3 C A B

Three aircraft with three different engine configurations are used to evaluate the maximum altitude when flown by three different pilots (A, B, and C). In this case, the two constant sources are the aircraft (1, 2, and 3) and the engine configuration (I, II, and III). The third variable – the pilots – is the experimental treatment and is applied to the source variables (aircraft and engine). Notice that the condition of interest is the maximum altitude each of the pilots can attain, not the interaction between aircraft or engine configuration. For example, if the data shows that pilot A attains consistently higher altitudes in each of the aircraft/engine configurations, then the skills and techniques of that pilot are the ones to be modeled. This is also an example of a fractional factorial as only nine of the 27 possible combinations are tested in the experiment. Suppose we wish to compare four materials with regard to their wearing qualities. Suppose further that we have a wear-testing machine which can handle four samples simultaneously. Two sources of variation might be the variations from run to run, and the variation among the four positions on the wear machine. The Latin square plan is as in Table 34 (the four materials are labelled A, B, C, D).

Table 34

Run Position Number

(1) (2) (3) (4)

1 A B C D

2 B C D A

3 C D A B

4 D A B C


Since each material appears exactly once in each run (row) and exactly once in each wear-machine position (column), the design is balanced, and the material totals or averages can be compared directly.
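One simple way to construct such a square is by cyclically shifting the treatment labels; the sketch below reproduces the 4x4 layout of Table 34. This is one valid construction, not the only one.

# Sketch: build an n x n Latin square by cyclic shifts of the treatment labels.
# With labels A-D this reproduces the layout of Table 34.
def latin_square(labels):
    n = len(labels)
    return [[labels[(row + col) % n] for col in range(n)] for row in range(n)]

square = latin_square(["A", "B", "C", "D"])
for run, row in enumerate(square, start=1):
    print(f"Run {run}:", "  ".join(row))
# Each material appears exactly once in every run (row) and once in every
# position (column), which is what makes the design balanced.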

Simulation

Simulation is a means of experimenting with a detailed model of a real system to determine how the system will respond to changes in its structure, environment, or underlying assumptions. A system is defined as a combination of elements that interact to accomplish a specific objective. A group of machines performing related manufacturing operations would constitute a system; these machines may be considered, as a group, an element in a larger production system, and the production system may in turn be an element in a larger system involving design, delivery, etc.

Simulations allow the system or process designer to solve problems. To the extent that the computer model behaves as the real-world system it models, the simulation can help answer important questions. Care should be taken to prevent the model from becoming the focus of attention; if important questions can be answered more easily without the model, then the model should not be used. The modeller must specify the scope of the model and the level of detail to include. Only those factors which have a significant impact on the model's ability to serve its stated purpose should be included, and the level of detail must be consistent with that purpose. The idea is to create, as economically as possible, a replica of the real-world system that can provide answers to important questions; this is usually possible at a reasonable level of detail.

Well-designed simulations provide data on a wide variety of system metrics, such as throughput, resource utilization, queue times, and production requirements. While useful in modelling and understanding existing systems, they are even better suited to evaluating proposed process changes. In essence, simulation is a tool for rapidly generating and evaluating ideas for process improvement. By applying this technology to the creativity process, Six Sigma improvements can be greatly accelerated.

Predicting CTQ Performance

A key consideration for any design concept is the CTQ performance that would result from deploying the design. It is often very difficult to determine the overall result of a series of process steps, but relatively easy to study each step individually. Software can then be used to simulate the process a number of times and calculate the performance of the CTQ at the end of the series.
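A minimal Monte Carlo sketch of this idea is shown below: each process step is modelled individually and the end-to-end CTQ (here, total cycle time against an assumed upper specification of 60 minutes) is estimated over many simulated trials. The step distributions and the specification limit are illustrative assumptions, not values from the text.

# Minimal Monte Carlo sketch: simulate each step individually, then estimate the
# end-to-end CTQ (total cycle time vs. an assumed upper spec limit of 60 minutes).
import random

random.seed(1)

def simulate_cycle_time():
    step_1 = random.gauss(20, 3)         # minutes; illustrative distribution
    step_2 = random.gauss(15, 2)
    step_3 = random.expovariate(1 / 10)  # a skewed step with mean 10 minutes
    return step_1 + step_2 + step_3

N, USL = 100_000, 60.0
times = [simulate_cycle_time() for _ in range(N)]
defect_rate = sum(t > USL for t in times) / N

print(f"Mean cycle time: {sum(times) / N:.1f} minutes")
print(f"P(cycle time > {USL} minutes) = {defect_rate:.3%}")   # predicted CTQ defect rate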

Step 12- Pilot, Implement and Validate Solution

Pilot: A pilot is a test of a proposed solution. This type of test has the following properties:

Performed on a small scale, usually on a subset of the target population

Used to evaluate both the solution and the implementation of the solution

Purpose is to make the full scale implementation more effective

Provides data about expected results and exposes issues/ challenges in the implementation plan

Pilot Study helps to

Identify previously unknown performance problems, improve the solution, and understand risks of the solution

Validate expected results and facilitate buy-in from stakeholders and the cross-functional team.

Smooth implementation with limited or no teething issues


Full Scale Implementation

The team first develops a plan to implement the one or more solutions selected in the Improve phase. It is important to create a solution implementation plan based on the results of the pilot plan and pilot implementation. A big piece of the solution implementation plan is to leave tools that help the owner manage the process after the team has moved on to other projects. A large-scale project may require dealing with multiple processes and sub-processes, multiple implementation locations, a large number of implementation teams, and several different disciplines and methodologies. Full scale implementation should be treated as a project in itself, and project management practices should not be ignored. The plan should include, but not be limited to:

Potential Risk analysis – Failure Modes and Effect Analysis

Solution Implementation Schedule

Training Plan

Communication Plan

Cost and Benefit Analysis

Improvement Validation

Once full scale implementation has been successfully completed, the Green Belt/Black Belt should:

Revalidate the measurement system to ensure that data collected post improvement is reliable

Collect data for Output (Y) to evaluate performance post full scale implementation

o Sigma level - The sigma level computed during the Measure phase acts as a baseline and can be compared with the sigma level computed after improvement.

o Cp, Cpk, Pp, Ppk - Similarly, process capability indices can be compared post-improvement versus baseline.

o Hypothesis tests - If sample data are collected post-improvement, a hypothesis test should be carried out to ascertain whether the improvement holds for the population process performance.

o FMEA - The process FMEA may also be revised to cover the new changes in the process. It will help in ascertaining whether the solution, and the changes induced by the solution, have affected the risk parameters of the process.
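Two of the checks listed above, capability indices and a hypothesis test of baseline versus post-improvement samples, are sketched below. The data and specification limits are illustrative assumptions, and scipy is assumed to be available for the t-test.

# Sketch of two post-implementation checks: Cp/Cpk and a two-sample t-test of
# baseline vs. improved data. Data and spec limits are illustrative assumptions.
import statistics
from scipy import stats

def cp_cpk(data, lsl, usl):
    mean, sd = statistics.mean(data), statistics.stdev(data)
    cp = (usl - lsl) / (6 * sd)
    cpk = min(usl - mean, mean - lsl) / (3 * sd)
    return round(cp, 2), round(cpk, 2)

baseline = [10.2, 11.5, 9.8, 12.1, 10.9, 11.8, 10.4, 12.3]   # e.g. cycle time before
improved = [9.1, 9.6, 8.8, 9.9, 9.3, 9.5, 9.0, 9.7]          # cycle time after

print("Baseline Cp, Cpk:", cp_cpk(baseline, lsl=8.0, usl=13.0))
print("Improved Cp, Cpk:", cp_cpk(improved, lsl=8.0, usl=13.0))

t_stat, p_value = stats.ttest_ind(baseline, improved, equal_var=False)
print(f"Two-sample t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value supports the claim that the mean has genuinely shifted for the
# process as a whole, not just in the samples collected.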

The main deliverable of the Improve phase is the selected solution.


Improve Phase Tollgate Checklist

What techniques were used to generate ideas for potential solutions?

What narrowing and screening techniques were used to further develop and qualify potential solutions?

What evaluation criteria were used to select a recommended solution?

Do the proposed solutions address all of the identified root causes, or at least the most critical?

Were the solutions verified with the Project Sponsor and Stakeholders? Has an approval been received to implement?

Was a pilot run to test the solution? What was learned? What modifications were made?

Has the team seen evidence that the root causes of the initial problems have been addressed during the pilot? What are the expected benefits?

Has the team considered potential problems and unintended consequences (FMEA) of the solution and developed preventive and contingency actions to address them?

Has the proposed solution been documented, including process participants, job descriptions, and if applicable, their estimated time commitment to support the process?

Has the team developed an implementation plan? What is the status?

Have changes been communicated to all the appropriate people?

Has the team been able to identify any additional ‘Quick Wins’?

Has ‘learning’ to-date required modification of the Project Charter? If so, have these changes been approved by the Project Sponsor and the Key Stakeholders?

Have any new risks to project success been identified and added to the Risk Mitigation Plan?


Control

Control Phase Overview

In the Control phase, the emphasis is on maintaining the gains achieved. The question the team is trying to answer is, "How can we guarantee performance?" In the Improve phase, the team ran a successful pilot and had an opportunity to tweak the solution; they used this information to plan the solution implementation and carried out full scale implementation. It is now time to ensure that, when the team finishes the project, the success they have observed will be sustained. This involves transferring responsibility to the process owner. The deliverables of the Control phase are:

Process Control Plan

Solution documentation and Validation of Benefits

Successful transfer to Process Owner

Step 13- Implement Control System for Critical X's

A control system is the complete strategy for maintaining the improved process performance over time. It identifies the specific actions and tools required for sustaining the process improvements or gains. The objectives are:

To ensure that the process stays in control after the solution has been implemented.

To quickly detect an out-of-control state and determine the associated special causes so that actions can be taken to correct the problem before non-conformances are produced.

(The Control phase comprises Step 13 – Implement Control System for Critical X's; Step 14 – Document Solution and Benefits; and Step 15 – Transfer to Process Owner, Project Closure.)



The Green Belt/Black Belt will develop a control plan, which the process owner will use as a guideline for sustaining the gains. The control plan provides a written summary description of the systems used to minimize process and service variation.

Control plans do not replace information contained in detailed instructions

In a grand sense, the control plan describes the actions that are required at each phase of the process, including receiving, in-process, outgoing, and periodic requirements, to assure that all process outputs will be in a state of control.

Ultimately, the control plan is a living document reflecting the current methods of control and the measurement systems used. A control plan answers questions such as:

What is the process that is being controlled?

What measures (numbers) are we monitoring?

For each measure, what are the “trigger point” values where action should be taken?

What action should be taken when a “trigger point” is reached? Who is responsible for taking action?

A control plan includes, but is not limited to:

Table 35: Control Plan Elements

Process Documentation          Communication Plan
Data Collection Plan           Audit/Inspection Plan
Mistake Proofing System        Risk Mitigation System
Response/Reaction Plan         Statistical Process Control

Mistake Proofing (Poka Yoke)

Mistake proofing is a method for avoiding errors in a process. The simplest definition of mistake proofing is that it is a technique for eliminating errors by making it impossible to make mistakes in the process. It is often considered the best approach to process control. A poka-yoke device is any mechanism that either prevents a mistake from being made or makes the mistake obvious at a glance. The ability to prevent mistakes is essential because the cause of defects lies in errors committed due to imperfect processes; defects result either from being unaware of the errors or from neglecting to do anything to correct them.

Inspection/Audits Inspection isn’t the most likely control method however they may be only option left if we were unable to identify a suitable mistake proofing measure. We can classify inspection as

Source Inspection: Inspection carried out at source or as close to the source of the defect as

possible. Mistakes detected close to source can reworked or corrected before it is passed.


Informative Inspection: Inspection carried out to investigate the cause of any defect found, so that action can be taken. It only provides information on a defect after it has occurred.

Judgment Inspection: Inspection carried out to separate good units from bad units once processing has occurred. It does not decrease the defect rate.

Statistical Process/Quality Control

Common Cause and Special Cause Variation

Common Cause Variation: In any process, regardless of how well designed or carefully maintained it is, a certain amount of inherent or natural variability will always exist. This natural variability, or “background noise,” is the cumulative effect of many small, essentially unavoidable causes. In the framework of statistical quality control, this natural variability is often called a “stable system of chance causes (common causes).” A process that is operating with only chance causes of variation present is said to be in statistical control; in other words, the chance causes (common causes) are an inherent part of the process.

Special Cause Variation: In a process, other kinds of variability may occasionally be present. This variability in key quality characteristics can arise from sources such as improperly adjusted machines, operator errors, defective raw materials, untrained resources, or system outages. Such variability is generally large when compared to the background noise, and it usually stands out. We refer to these sources of variability that are not part of the chance cause pattern as assignable causes or special causes. A process that is operating in the presence of assignable causes is said to be out of control.

Production processes will often operate in the in-control state, producing acceptable product for relatively long periods of time. Occasionally, however, assignable causes will occur, seemingly at random, resulting in a “shift” to an out-of-control state where a large proportion of the process output does not conform to requirements. A major objective of statistical process control is to quickly detect the occurrence of assignable causes or process shifts so that investigation of the process and corrective action may be undertaken before many nonconforming units are manufactured. The control chart is an online process-monitoring technique widely used for this purpose.

Types of Control Charts

Control charts may be classified into two general types.

Attribute Control Charts

Many characteristics are not measured on a continuous scale, or even a quantitative scale. In these cases, we may judge each unit of product as either conforming or nonconforming on the basis of whether or not it possesses certain attributes, or we may count the number of nonconformities (defects) appearing on a unit of product. Control charts for such quality characteristics are called attributes control charts. We will explore the following charts:

o Area of opportunity charts for count data:

Number of defects charts for constant areas of opportunity – c chart

Number of defects per unit charts for variable areas of opportunity – u chart

o Proportion nonconforming charts (p-charts) for classification data:


Number of nonconforming units for constant subgroup size – np chart

Proportion of nonconforming units for variable subgroup size – p chart

Variable Control Charts

Many characteristics can be measured and expressed as numbers on some continuous scale of measurement. In such cases, it is convenient to describe the quality characteristic with a measure of central tendency and a measure of variability. Control charts for central tendency and variability are collectively called variables control charts. The X-bar chart is the most widely used chart for monitoring central tendency, whereas charts based on either the sample range or the sample standard deviation are used to control process variability.

o Charts based on individual measurements, for subgroups of n = 1 – individuals and moving range chart

o Charts based on subgroups of n ≥ 2:

Subgroups of 2 ≤ n ≤ 10: mean and range chart

Subgroups of n > 10: mean and standard deviation chart

Computing control limits

Control limits are placed at the process mean of the statistic ± 3 standard deviations of the statistic, so that:

o Upper control limit (UCL) = process mean of the statistic + 3 standard deviations of the statistic

o Lower control limit (LCL) = process mean of the statistic – 3 standard deviations of the statistic
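For an individuals (I) chart like the one in Figure 41, the standard deviation of the statistic is usually estimated from the average moving range divided by the constant d2 = 1.128 (the value for subgroups of size 2). The sketch below uses illustrative pH readings, not the Detergent.mtw data.

# Sketch: control limits for an individuals (I) chart. Sigma is estimated as the
# average moving range / 1.128 (d2 for subgroups of size 2). Data are illustrative.
data = [6.0, 6.1, 5.9, 6.2, 6.0, 5.8, 6.1, 6.3, 6.0, 5.9]

moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
sigma_hat = (sum(moving_ranges) / len(moving_ranges)) / 1.128

center = sum(data) / len(data)
ucl, lcl = center + 3 * sigma_hat, center - 3 * sigma_hat

print(f"CL = {center:.3f}, UCL = {ucl:.3f}, LCL = {lcl:.3f}")
out_of_control = [(i + 1, x) for i, x in enumerate(data) if x > ucl or x < lcl]
print("Points outside the limits:", out_of_control or "none")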

Figure 41: I Chart of pH (Worksheet: Detergent.mtw). Individual pH values are plotted by observation number with center line X-bar = 6.015, UCL = 6.430, and LCL = 5.601; one point is flagged as failing the control-limit test.


A typical control chart is shown in Figure 41, which is a graphical display of a quality characteristic that has been measured or computed from a sample versus the sample number or time. Often, the samples are selected at periodic intervals such as every hour. The chart contains a center-line (CL) that represents the average value of the quality characteristic corresponding to the in-control state. (That is, only chance causes are present.) Two other horizontal lines, called the upper control limit (UCL) and the lower control limit (LCL) are also shown on the chart. There is a close connection between control charts and hypothesis testing. Essentially, the control chart is a test of the hypothesis that the process is in a state of statistical control. A point plotting within the control limits is equivalent to failing to reject the hypothesis of statistical control, and a point plotting outside the control limits is equivalent to rejecting the hypothesis of statistical control. The most important use of a control chart is to improve the process. We have found that, generally

Most processes do not operate in a state of statistical control.

Consequently, the routine and attentive use of control charts will identify assignable causes. If these causes can be eliminated from the process, variability will be reduced and the process will be improved.

The control chart will only detect assignable causes. Management, operator, and engineering action will usually be necessary to eliminate the assignable cause. An action plan for responding to control chart signals is vital.

In identifying and eliminating assignable causes, it is important to find the underlying root cause of the problem and to attack it. A cosmetic solution will not result in any real, long-term process improvement. Developing an effective system for corrective action is an essential component of an effective SPC implementation.

Step 14- Document Solution and Benefits

Solutions should be documented for review and replication in other processes in the organization. The documentation also acts as a lessons-learnt record for other Green Belts and Black Belts.

Standardization and Solution Replication

One of the powerful aspects of Six Sigma is to take successful implementations and expand them across the organization. This is accomplished through replication and standardization.

Replication is taking the solution from the team and applying it to the same type or a similar type of process.

Standardization is taking the lessons/solutions from the team and applying those good ideas to processes that may be dissimilar to the original process improved.

The team should consider standardization and replication opportunities so that the impact on the sigma performance of processes far exceeds the results anticipated from the pilot and the initial solution implementation. As the implementation expands to other areas, four implementation approaches can be combined or used independently. The appropriate approach will depend on the resources available, the culture of the organization and the requirements for a fast implementation. The four approaches are:

A sequenced approach is when a solution is fully implemented in one process or location before implementation begins at a second location.


A parallel approach is when the solution is implemented at two or more locations or processes simultaneously.

A phased approach is when implementation at a second location begins once a pre-determined milestone has been achieved at the first location.

A flat approach is when implementation is done at all target locations, company-wide.

Project Benefits

The results of the improvement and the financial benefits need to be monitored (generally) for one year. The project leader should prepare a Project Closure document. A Project Closure document displays the results of the project, the control activities, the status of incomplete tasks, and the approvals of key stakeholders (for example, the process owner, finance, quality systems, and environmental) to confirm that the project is complete. This document is often used as the formal hand-off of the project to the process owner.

It provides a formal place to record final project results, document key stakeholder approvals and record the status of the improved process at the completion of the project. This record may be the basis for ongoing auditing. A Project Closure document should cover the following:

Record all pertinent performance and financial data.

Reference controls (or the control plan).

Review and resolve any incomplete tasks.

Obtain signatures (varies by project and organization).

Finance – Are financial benefits valid?

Process owners – Do they accept the controls and do they agree the project is complete?

Environmental, health, and safety – Do they agree that all procedures and policies have been followed?

Quality systems – Have sufficient verifiable controls been instituted to ensure project success? Have all procedures and policies been followed?

Project technical support (for example, Master Black Belt) – Does the project meet the requirements of the process improvement model that is used?

Management (for example, Champion) – Does management agree that the project is completed?

The Green Belt/Black Belt should first obtain the signature of an independent quality group to confirm that controls are: (1) in place, (2) verifiable, and (3) sufficient to ensure the project benefits will continue to accrue. Without such controls, the project should not be approved for closure.

Some financial benefits may be soft and therefore difficult to measure (for example, an improved product lets us keep a key customer, reduced order-to-delivery time, reduced environmental emissions, improved ergonomics, and standardization of product/process).


Step 15- Transfer to Process Owner, Project Closure

In the Control phase, the team needs to ensure that the success they have seen during implementation will continue after the project is finished. This involves transferring responsibility to the process owner, which may require:

Process control plan

Review meetings to communicate the state of the process

Updated flowcharts, procedures and statements of work

Statistical Process/Quality Control measures, including the control plan

Out-of-Control Action Plans and Response Plans to define how irregularities in the process are handled

As the project wraps up, a couple of additional activities may be appropriate:

The team might evaluate how well it worked together

Management may devise rewards to recognize the work and success of the team

Recognition and celebration of a successful Six Sigma project drives a process excellence philosophy within the organization and makes managing change simpler.

A Six Sigma project does not really "end" at the conclusion of the Control phase. There should be opportunities to extend the success of the project team in other areas. The team and champion may share the knowledge gained with others, replicate the solution in other processes, and develop standards for other processes based on what they learned from their solution implementation. The team may continue to examine the process to look for opportunities for continuous process improvement.


Control Phase Tollgate Checklist

Has the team prepared all the essential documentation for the improved process, including revised/new Standard Operating Procedures (SOPs), a training plan, and a process control system?

Has the necessary training for process owners/operators been performed?

Have the right measures been selected, and documented as part of the Process Control System, to monitor performance of the process and the continued effectiveness of the solution? Has the metrics briefing plan/schedule been documented? Who owns the measures? Has the Process Owner’s job description been updated to reflect the new responsibilities? What happens if minimum performance is not achieved?

Has the solution been effectively implemented? Has the team compiled results data confirming that the solution has achieved the goals defined in the Project Charter?

Has the Financial Benefit Summary been completed? Has the Resource Manager reviewed it?

Has the process been transitioned to the Process Owner, to take over responsibility for managing continuing operations? Do they concur with the control plan?

Has a final Storyboard documenting the project work been developed?

Has the team forwarded to senior management any other issues/opportunities that it was not able to address?

Have “lessons learned” been captured?

Have replication opportunities been identified and communicated?

Have the hard work and successful efforts of the team been celebrated?


Appendix

Acronyms

Important Links for Online Learning and Discussions

References


Acronyms

AAA Attribute Agreement Analysis

µ Population Arithmetic Mean

5S Sort, Set in Order, Shine, Standardize, Sustain

AIAG Automotive Industry Action Group

ANOVA Analysis of Variance

AR&R Attribute Repeatability and Reproducibility

BCR Benefit Cost Ratio

C & E Cause and Effect

CCR Critical Customer Requirement

cdf Cumulative distribution function

CFM Continuous Flow Manufacturing

CL Centre Line

COPQ Cost of Poor Quality

Cp, Cpk Process Capability Indices (Short Term)

CPM Critical Path Method

CTP Critical to Process

CTQ Critical to Quality

DFSS Design for Six Sigma

DMADV Define, Measure, Analyze, Design, Validate/Verify

DMAIC Define, Measure, Analyze, Improve, Control

DOE Design of Experiments

DPMO Defects per Million Opportunities

DPU Defects per Unit

FMEA Failure Modes & Effects Analysis

FTY First time Yield / First pass Yield

Ha Alternate Hypothesis

HACCP Hazard Analysis and Critical Control Points

Ho Null Hypothesis

HOQ House of Quality


IDOV Identify, Design, Optimize, Validate/Verify

IQR Inter Quartile Range

IRR Internal Rate Of Return

Kaizen Continuous Improvement

KPI Key Performance Indicator

KPIV Key Process Input Variables

KPOV Key Process Output Variables

LCL Lower Control Limit

MSA Measurement Systems Analysis

Muda Waste

NGT Nominal Group Technique

NPS Net Promoter Score

NPV Net Present Value

NVA Non-Value Added

OFAT One factor at a time

OSHA Occupational Safety and Health Administration

PDCA Plan Do Check Act

pdf Probability density function

PERT Program Evaluation and Review Techniques

PINS Percentage of Industry Sales

pmf Probability mass function

PO Process Owner

Pp, Ppk Process Capability Indices (Long Term)

PVF Present Value Factor

QFD Quality Function Deployment

QM Quality Management

R&R Repeatability and Reproducibility

ROI Return On Investment

RPN Risk Priority Number

RTY Rolled Throughput Yield

S Sample Standard Deviation


s² Sample Variance

SIPOC Suppliers, Inputs, Process, Outputs, Customers

SMART Specific, Measurable, Achievable, Relevant, Time-bound

SME Subject Matter Expert

SMED Single minute exchange of die

SPC Statistical Process Control

TPM Total Productive Maintenance

UCL Upper Control Limit

USL Upper Specification Limit

VA Value Added

VOB Voice of Business

VOC Voice of Customer

WACC Weighted Average Cost of Capital

WBS Work Breakdown Structure

Zlt Sigma Level (Long Term)

Zst Sigma Level (Short Term)

α (Alpha) Probability of Type I Error

β (Beta) Probability of Type II Error

σ (Sigma) Population Standard Deviation


Important Links for Online Learning and Discussions

1. How does Six Sigma apply in various Industries and Functional Areas?

2. What is the role of Lean Six Sigma in business or career growth?

3. What are the useful globally available insights for Six Sigma Black Belts? Are there any example Black Belt projects that I can refer to?

4. What are the success stories of people who have benefited a lot from Six Sigma?

5. How can I network with professionals who have been trained at Benchmark Six Sigma?

6. What is the post training support provided by Benchmark Six Sigma?

For answers to these questions, please use the following link: http://www.benchmarksixsigma.com/content/global-news


References

Gitlow, H. S., and D. M. Levine. Six Sigma for Green Belts and Champions

Berenson, M. L., D. M. Levine, and T. C. Krehbiel. Basic Business Statistics: Concepts and Applications

Bass, I., and B. Lawton. Lean Six Sigma Using SigmaXL and Minitab

Allen, T. T. Introduction to Engineering Statistics and Lean Sigma

Yang, K., and B. S. El-Haik. Design for Six Sigma: A Roadmap for Product Development

Pyzdek, T., and P. A. Keller. The Six Sigma Handbook

Pyzdek, T. The Six Sigma Project Planner

