
QUALITY CONTROL
THEORY APPROACH USING SIX SIGMA

Summarized BY

IWAN PRATAMA

Sekolah Tinggi Ilmu Ekonomi “STMY”

Majalengka

2011


Chapter 1 Quality Control Overview

1. Total Quality Control
2. Quality Control in Project Management
3. Standard Operating Procedure

Chapter 2 Quality Assurance

1. Quality Assurance
2. Initial Efforts to Control the Quality of Production
3. Steps for a Typical Quality Assurance Process
4. Statistical Control
5. Total Quality Management

Chapter 3 Six Sigma Overview

1. Historical Overview
2. Part of Six Sigma
3. Manufacturing Method
4. Improvement Method
5. Information & Communication
6. ISO (International Organization for Standardization)
7. PLC (Programmable Logic Controller)
8. Zero Defects
9. Six Sigma Method

10. Quality Management Tools and Methods Used in Six Sigma
11. Implementation Roles
12. Origin and Meaning of the Term “Six Sigma Process”
13. Software Used for Six Sigma
14. Application
15. Criticism

Chapter 4 Strategic Management

1. The Strategy Hierarchy
2. Historical Development of Strategic Management
3. Growth and Portfolio Theory
4. Strategic Change
5. Information- and Technology-driven Strategy
6. Knowledge Adaptive Strategy
7. Strategic Decision Making Processes
8. The Psychology of Strategic Management
9. Reasons Why Strategic Plans Fail
10. Limitations of Strategic Management

Chapter 5 TOOLS OF SIX SIGMA

1. Strategy Formation
2. Strategy Evaluation
3. Marketing Analyses
4. Decision Trees
5. Forecasting
6. Simple Random Sampling


Chapter 1

QUALITY CONTROL OVERVIEW

Quality control, or QC for short, is a process by which entities review the quality of all factors involved in production. This approach places an emphasis on three aspects:

1. Elements such as controls, job management, defined and well managed processes, performance and integrity criteria, and identification of records

2. Competence, such as knowledge, skills, experience, and qualifications

3. Soft elements, such as personnel integrity, confidence, organizational culture, motivation, team spirit, and quality relationships.

The quality of the outputs is at risk if any of these three aspects is deficient in any way.

Quality control emphasizes testing of products to uncover defects and reporting to management who make the decision to allow or deny product release, whereas quality assurance attempts to improve and stabilize production (and associated processes) to avoid, or at least minimize, issues which led to the defect(s) in the first place. For contract work, particularly work awarded by government agencies, quality control issues are among the top reasons for not renewing a contract.

1. Total Quality Control

"Total quality control", also called total quality management, is an approach that extends beyond ordinary statistical quality control techniques and quality improvement methods. It implies a complete overview and re-evaluation of the specification of a product, rather than just considering a more limited set of changeable features within an existing product. If the original specification does not reflect the correct quality requirements, quality cannot be inspected or manufactured into the product. For instance, the design of a pressure vessel should include not only the material and dimensions, but also operating, environmental, safety, reliability and maintainability requirements, and documentation of findings about these requirements.

2. Quality Control in Project Management

In project management, quality control requires the project manager and the project team to inspect the accomplished work to ensure its alignment with the project scope. In practice, projects typically have a dedicated quality control team which focuses on this area.

3. Standard Operating Procedure

The term standard operating procedure, or SOP, is used in a variety of contexts, such as healthcare, education, industry and the military. The military uses the term Standing Operating Procedure, rather than Standard, because an SOP refers to an organization's unique procedures, which are not standard to another unit; "standard" would imply that the operating procedure is the only correct one.


Clinical research

In clinical research, the International Conference on Harmonisation (ICH) defines SOPs as "detailed, written instructions to achieve uniformity of the performance of a specific function".

Business and manufacturing practice

An SOP is a written document or instruction detailing all steps and activities of a process or procedure. ISO 9001 essentially requires the documentation of all procedures used in any manufacturing process that could affect the quality of the product.


Chapter 2

Quality Assurance

Quality assurance, or QA for short (a term in use since 1973), is the systematic monitoring and evaluation of the various aspects of a project, service or facility to maximize the probability that minimum standards of quality are being attained by the production process. QA cannot absolutely guarantee the production of quality products.

Two principles included in QA are: "Fit for purpose", the product should be suitable for the intended purpose; and "Right first time", mistakes should be eliminated. QA includes regulation of the quality of raw materials, assemblies, products and components, services related to production, and management, production and inspection processes.

Quality is determined by the product users, clients or customers, not by society in general. It is not the same as 'expensive' or 'high quality'. Low priced products can be considered as having high quality if the product users determine them as such.

1. Initial Efforts to Control the Quality of Production

During the Middle Ages, guilds adopted responsibility for quality control of their members, setting and maintaining certain standards for guild membership. Royal governments purchasing material were interested in quality control as customers. For this reason, King John of England appointed William Wrotham to report about the construction and repair of ships. Centuries later, Samuel Pepys, Secretary to the British Admiralty, appointed multiple such overseers.

Prior to the extensive division of labor and mechanization resulting from the Industrial Revolution, it was possible for workers to control the quality of their own products. The Industrial Revolution led to a system in which large groups of people performing a similar type of work were grouped together under the supervision of a foreman who was appointed to control the quality of work manufactured.

Wartime production

Around the time of the First World War, manufacturing processes became more complex, with larger numbers of workers being supervised. This period saw the widespread introduction of mass production and piecework, which created problems as workmen could now earn more money by producing extra products, which in turn occasionally led to poor quality workmanship being passed on to the assembly lines. To counter bad workmanship, full-time inspectors were introduced to identify, quarantine and ideally correct product quality failures. Quality control by inspection in the 1920s and 1930s led to the growth of quality inspection functions, separately organised from production and large enough to be headed by superintendents.

The systematic approach to quality started in industrial manufacturing during the 1930s, mostly in the USA, when some attention was given to the cost of scrap and rework. The scale of mass production required during the Second World War made it necessary to introduce an improved form of quality control known as Statistical Quality Control, or SQC. Some of the initial work for SQC is credited to Walter A. Shewhart of Bell Labs, starting with his famous one-page memorandum of 1924.

SQC includes the recognition that not every production piece can be fully inspected and sorted into acceptable and non-acceptable batches. Instead of extending the inspection phase or simply enlarging inspection organizations, it provides inspectors with control tools such as sampling and control charts, so that quality can be assured even where 100 per cent inspection is not practicable. Standard statistical techniques allow the producer to sample and test a certain proportion of the products for quality to achieve the desired level of confidence in the quality of the entire batch or production run.
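To make the sampling idea concrete, here is a minimal Python sketch (the lot size, defect count and sample size are invented illustration values, not figures from the text) that estimates the chance that a random sample reveals at least one defective item in a batch:

# Illustrative sketch with assumed values: probability that a random sample
# drawn without replacement from a lot contains at least one defective item.
from math import comb

def prob_detect(lot_size: int, defectives: int, sample_size: int) -> float:
    """P(sample contains >= 1 defective) under the hypergeometric model."""
    good = lot_size - defectives
    p_none = comb(good, sample_size) / comb(lot_size, sample_size)
    return 1.0 - p_none

# Hypothetical lot of 1,000 items containing 20 defectives, sample of 50 items.
confidence = prob_detect(lot_size=1000, defectives=20, sample_size=50)
print(f"Chance the sample reveals at least one defect: {confidence:.1%}")

Increasing the sample size raises this detection probability, which is the trade-off behind choosing how large a proportion of a batch to test.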

Postwar

In the period following World War II, many countries' manufacturing capabilities that had been destroyed during the war were rebuilt. General Douglas MacArthur oversaw the re-building of Japan. During this time, General MacArthur involved two key individuals in the development of modern quality concepts: W. Edwards Deming and Joseph Juran. Both individuals promoted the collaborative concepts of quality to Japanese business and technical groups, and these groups utilized these concepts in the redevelopment of the Japanese economy.

Although there were many individuals trying to lead United States industries towards a more comprehensive approach to quality, the U.S. continued to apply the Quality Control (QC) concepts of inspection and sampling to remove defective product from production lines, essentially ignoring advances in QA for decades.

2. Steps for a Typical Quality Assurance Process

There are many forms of QA processes, of varying scope and depth. The application of a particular process is often customized to the production process.

A typical process may include:

test of previous articles

plan to improve

design to include improvements and requirements

manufacture with improvements

review new item and improvements

test of the new item

3. Failure Testing

A valuable process to perform on a whole consumer product is failure testing or stress testing. In mechanical terms, this is the operation of a product until it fails, often under stresses such as increasing vibration, temperature, and humidity. This exposes many unanticipated weaknesses in a product, and the data is used to drive engineering and manufacturing process improvements. Often quite simple changes can dramatically improve product service, such as changing to mold-resistant paint or adding lock-washer placement to the training for new assembly personnel.


4. Statistical Control

Many organizations use statistical process control to bring the organization to Six Sigma levels of quality; in other words, so that the likelihood of an unexpected failure is confined to six standard deviations on the normal distribution, a probability of less than four per million. Items controlled often include clerical tasks such as order-entry as well as conventional manufacturing tasks.

Traditional statistical process controls in manufacturing operations usually proceed by randomly sampling and testing a fraction of the output. Variances in critical tolerances are continuously tracked and where necessary corrected before bad parts are produced.
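The sketch below illustrates, with invented measurement data rather than values from the text, how such tracking is typically done: control limits are estimated from a stable baseline, and any later sample outside the three-sigma limits is flagged before more bad parts are produced.

# Illustrative control-limit check with made-up data.
from statistics import mean, stdev

# Hypothetical baseline measurements (mm) taken while the process was stable.
baseline = [10.02, 9.98, 10.01, 10.05, 9.97, 10.00, 9.99, 10.03, 9.96, 10.01]

centre = mean(baseline)
sigma = stdev(baseline)
ucl = centre + 3 * sigma  # upper control limit
lcl = centre - 3 * sigma  # lower control limit

# New samples from the line; anything outside the limits triggers investigation.
new_samples = [10.00, 9.97, 10.31, 10.02]
print(f"centre = {centre:.3f}, UCL = {ucl:.3f}, LCL = {lcl:.3f}")
for i, x in enumerate(new_samples, start=1):
    status = "OUT OF CONTROL" if (x > ucl or x < lcl) else "ok"
    print(f"sample {i}: {x:.2f} {status}")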

5. Total Quality Management

The quality of products is dependent upon that of the participating constituents, some of which are sustainable and effectively controlled while others are not. The processes that are managed with QA fall under the scope of Total Quality Management.

If the specification does not reflect the true quality requirements, the product's quality cannot be guaranteed. For instance, the parameters for a pressure vessel should cover not only the material and dimensions but operating, environmental, safety, reliability and maintainability requirements.

Models and standards

ISO 17025 is an international standard that specifies the general requirements for the competence to carry out tests and/or calibrations. There are 15 management requirements and 10 technical requirements. These requirements outline what a laboratory must do to become accredited. Management system refers to the organization's structure for managing its processes or activities that transform inputs of resources into a product or service which meets the organization's objectives, such as satisfying the customer's quality requirements, complying with regulations, or meeting environmental objectives.

The CMMI (Capability Maturity Model Integration) model is widely used to implement Quality Assurance (PPQA) in an organization. The CMMI framework defines five maturity levels, which a company can achieve by performing specific activities within the organization. (CMMI QA processes are excellent for companies like NASA, and may even be adapted for an agile development style.)

Company quality

During the 1980s, the concept of “company quality” with the focus on management and people came to the fore. It was realized that, if all departments approached quality with an open mind, success was possible if the management led the quality improvement process.

The company-wide quality approach places an emphasis on four aspects:

1. Elements such as controls, job management, adequate processes, performance and integrity criteria and identification of records

2. Competence such as knowledge, skills, experience, qualifications


3. Soft elements, such as personnel integrity, confidence, organizational culture, motivation, team spirit and quality relationships.

4. Infrastructure (as it enhances or limits functionality)

The quality of the outputs is at risk if any of these aspects is deficient. QA is not limited to manufacturing, and can be applied to any business or non-business activity:

Design work

Administrative services

Consulting

Banking

Insurance

Computer software development

Retailing

Transportation

Education

Translation

It comprises a quality improvement process, which is generic in the sense that it can be applied to any of these activities, and it establishes a behavior pattern which supports the achievement of quality.

This in turn is supported by quality management practices which can include a number of business systems and which are usually specific to the activities of the business unit concerned.

In manufacturing and construction activities, these business practices can be equated to the models for quality assurance defined by the international standards of the ISO 9000 series and the associated specifications for quality systems.

Before the company quality approach, the quality work being carried out was largely shop-floor inspection, which did not reveal the major quality problems. This led to quality assurance, or total quality control, which emerged more recently.

Consultants and contractors are sometimes employed when introducing new quality practices and methods, particularly where the relevant skills and expertise are not available within the organization, or where internal resources cannot be allocated to the task. Consultants and contractors will often apply Quality Management Systems (QMS), auditing, procedural documentation writing, CMMI, Six Sigma, Measurement Systems Analysis (MSA), Quality Function Deployment (QFD), Failure Mode and Effects Analysis (FMEA), and Advanced Product Quality Planning (APQP).

Quality assurance in European vocational education & training

With the formulation of a joint quality strategy, the European Union seeks to foster the overall attractiveness of vocational education & training (VET) in Europe. In order to promote this process, a set of new policy instruments was implemented, such as the CQAF (Common Quality Assurance Framework) and its successor EQARF (European Quality Assurance Reference Framework), which shall allow for EU-wide comparison of QA in VET and build the capacities for a common quality assurance policy and quality culture in VET throughout Europe. Furthermore, the new policy instruments shall allow for increased transparency and mutual trust between national VET systems.

In line with the European quality strategy, the member states have subsequently implemented national structures (QANRPs: national reference points for quality assurance in VET), which closely collaborate with national stakeholders in order to meet the requirements and priorities of the national VET systems and support training providers in order to guarantee implementation and commitment at all levels. At European level, the cooperation between QANRPs is ensured through the EQAVET network.

Over the past few years, with financial support of the European Union as well as the EU member states, numerous pilot initiatives have been developed, most of which are concerned with the promotion and development of quality in VET throughout Europe. Examples can be found in the project database ADAM, which keeps comprehensive information about innovation & transfer projects sponsored by the EU.

A practical example can be seen in the BEQUAL project, which has developed a benchmarking tool for training providers; with the help of the online tool, they can benchmark their quality performance against the CQAF quality process model. Furthermore, the project offers a database of European good practice on quality assurance in the field of vocational education & training.

Online Benchmarking Tool For Vocational Training Institutes

A different approach was developed by the European VETWORKS project. The project builds on the observation that VET networks have been growing rapidly throughout Europe over the past years, with a strong tendency toward interlocking educational activities across organisations and sectors. It is argued that the vast majority of instruments and methods of quality assurance available for educational planning, monitoring and evaluation at provider level do not meet the new requirements. They are designed for managing the quality of either individual organisations or discrete training processes and structures, and in this way systematically leave out collaborative quality processes within newly emerging learning networks. The VETWORKS approach therefore shall allow local networks to examine their strengths and weaknesses in the area of vocational education & training. When local networks understand the factors that contribute to their success and those that pose challenges, they can better undertake strategies to maximize their strengths and effectively address their weaknesses.

Recent experience in place-based learning strategies, shows that learning communities often deploy three key success areas (Faris, 2007):

Partnership - learning to build links between all sectors and mobilize their shared resources;

Participation - learning to involve the public in the policy process as well as learning opportunities;

Performance - learning to assess progress and benchmark good practice.

The SPEAK tool, adopted under the VETWORKS initiative, deploys indicative descriptors for each of these areas, which local networks can use to determine achievements in their activities towards building local VET quality. Following the EQARF process model, each indicative descriptor can be assigned to a certain stage of the P-D-C-A cycle. This not only allows for compliance with the EQARF principles, but also extends the original model by deploying a separate network level, bridging between the system and institute levels. The quality process cycle employs four key areas of activity typically found in quality management systems: quality planning, control, assurance and improvement. Each area of activity is focused on a specific quality question, such as: What do we want to achieve? Which concrete operations are required to ensure achievement? What have we achieved? What needs to be improved? Together, the indicative descriptors and the process cycle define the core elements of quality management in VET networks. SPEAK aims to apply quality assurance at a new level by systematically taking advantage of the EQARF and combining it with state-of-the-art methodologies of self-evaluation, especially the SPEAK instrument. By using SPEAK, stakeholders and managers of educational networks and programmes will be able to link progress indicators available at provider, network and system level:

provider level: SPEAK helps to systematically gain knowledge about VET providers’ “performance” in their working environment (at the market, or in the network),

network level: SPEAK helps to evaluate progress indices and measure the total operating performance of educational networks and organizations, by connecting relevant data at all actors' levels: collaborators, volunteers, management etc.

system level: SPEAK helps to reflect and concretize descriptors available for the system level in the light of local VET strategies and programmes.

Finally, on the basis of different analysis options, SPEAK can also help to gain essential insights into the long-term effects of educational programs and requirements for change. The table below reflects this construction principle.


Chapter 3

Six Sigma Overview

Six Sigma is a business management strategy originally developed by Motorola, USA in 1986. As of 2010, it is widely used in many sectors of industry, although its use is not without controversy.

Six Sigma seeks to improve the quality of process outputs by identifying and removing the causes of defects (errors) and minimizing variability in manufacturing and business processes. It uses a set of quality management methods, including statistical methods, and creates a special infrastructure of people within the organization ("Black Belts", "Green Belts", etc.) who are experts in these methods. Each Six Sigma project carried out within an organization follows a defined sequence of steps and has quantified financial targets (cost reduction or profit increase).

The term Six Sigma originated from terminology associated with manufacturing, specifically terms associated with statistical modeling of manufacturing processes. The maturity of a manufacturing process can be described by a sigma rating indicating its yield, or the percentage of defect-free products it creates. A six sigma process is one in which 99.99966% of the products manufactured are statistically expected to be free of defects (3.4 defects per million). Motorola set a goal of "six sigma" for all of its manufacturing operations, and this goal became a byword for the management and engineering practices used to achieve it.

1. Historical Overview

Six Sigma originated as a set of practices designed to improve manufacturing processes and eliminate defects, but its application was subsequently extended to other types of business processes as well. In Six Sigma, a defect is defined as any process output that does not meet customer specifications, or that could lead to creating an output that does not meet customer specifications. The idea of Six Sigma was actually “born” at Motorola in the 1970s, when senior executive Art Sundry was criticizing Motorola's bad quality. Through this criticism, the company discovered the connection between increasing quality and decreasing costs in the production process. Previously, everybody had thought that quality would cost extra money; in fact, it reduced costs, as costs for repair and control sank. Bill Smith then first formulated the particulars of the methodology at Motorola in 1986. Six Sigma was heavily inspired by six preceding decades of quality improvement methodologies such as quality control, TQM, and Zero Defects, based on the work of pioneers such as Shewhart, Deming, Juran, Ishikawa, Taguchi and others.

The following sections cover a set of related industry topics:

Manufacturing methods: batch production, job production, continuous production
Improvement methods: LM, TPM, QRM, VDM, TOC, Six Sigma, RCM
Information & communication: ISA-88, ISA-95, ERP, SAP, IEC 62264, B2MML
Process control: PLC, DCS


Like its predecessors, Six Sigma doctrine asserts that:

Continuous efforts to achieve stable and predictable process results (i.e., reduce process variation) are of vital importance to business success.

Manufacturing and business processes have characteristics that can be measured, analyzed, improved and controlled.

Achieving sustained quality improvement requires commitment from the entire organization, particularly from top-level management.

Features that set Six Sigma apart from previous quality improvement initiatives include:

A clear focus on achieving measurable and quantifiable financial returns from any Six Sigma project.

An increased emphasis on strong and passionate management leadership and support.

A special infrastructure of "Champions," "Master Black Belts," "Black Belts," "Green Belts", etc. to lead and implement the Six Sigma approach.

A clear commitment to making decisions on the basis of verifiable data, rather than assumptions and guesswork.

The term "Six Sigma" comes from a field of statistics known as process capability studies. Originally, it referred to the ability of manufacturing processes to produce a very high proportion of output within specification. Processes that operate with "six sigma quality" over the short term are assumed to produce long-term defect levels below 3.4 defects per million opportunities (DPMO). Six Sigma's implicit goal is to improve all processes to that level of quality or better.Six Sigma is a registered service mark and trademark of Motorola Inc. As of 2006 Motorola reported over US$17 billion in savings from Six Sigma.Other early adopters of Six Sigma who achieved well-publicized success include Honeywell (previously known as AlliedSignal) and General Electric, where Jack Welch introduced the method.[13] By the late 1990s, about two-thirds of the Fortune 500 organizations had begun Six Sigma initiatives with the aim of reducing costs and improving quality. In recent years, some practitioners have combined Six Sigma ideas with lean manufacturing to yield a methodology named Lean Six Sigma.

2. Part of Six Sigma

Quality control is a process by which entities review the quality of all factors involved in production. This approach places an emphasis on three aspects:

1. Elements such as controls, job management, defined and well managed processes, performance and integrity criteria, and identification of records

2. Competence, such as knowledge, skills, experience, and qualifications

3. Soft elements, such as personnel integrity, confidence, organizational culture, motivation, team spirit, and quality relationships.

The quality of the outputs is at risk if any of these three aspects is deficient in any way. Quality control is a process for maintaining proper standards in manufacturing. Quality control emphasizes testing of products to uncover defects, and reporting to management who make the decision to allow or deny the release, whereas quality assurance attempts to improve and stabilize production, and associated processes, to avoid, or at least minimize, issues that led to the defects in the first place. For contract work, particularly work awarded by government agencies, quality control issues are among the top reasons for not renewing a contract.

3. Manufacturing Method

Batch production is the manufacturing technique of creating a group of components at a workstation before moving the group to the next step in production. Batch production is common in bakeries and in the manufacture of sports shoes, pharmaceutical ingredients (APIs), inks, paints and adhesives. In the manufacture of inks and paints, a technique called a color-run is used. A color-run is where one manufactures the lightest color first, such as light yellow, followed by the next increasingly darker color, such as orange, then red, and so on until reaching black, and then starts over again. This minimizes the cleanup and reconfiguring of the machinery between each batch. White (by which is meant opaque paint, not transparent ink) is the only color that cannot be used in a color-run, because a small amount of white pigment can adversely affect the medium colors. The chemical, tire, and process industry (CPT) segment uses a combination of batch and process manufacturing, depending on the product and plant.

Job production, sometimes called jobbing, involves producing a one-off product for a specific customer. Job production is most often associated with small firms (making railings for a specific house, building/repairing a computer for a specific customer, making flower arrangements for a specific wedding, etc.), but large firms use job production too. Examples include:

1. Designing and implementing an advertising campaign
2. Auditing the accounts of a large public limited company
3. Building a new factory
4. Installing machinery in a factory
5. Machining a batch of parts per a CAD drawing supplied by a customer

Fabrication shops and machine shops whose work is primarily of the job production type are often called job shops. The associated people or corporations are sometimes called jobbers.

Continuous production is a method used to manufacture, produce, or process materials without interruption. This process is followed in most oil and gas industries and petrochemical plants, in process manufacturing, and in other industries such as the float glass industry, where glass of different thickness is processed in a continuous manner. Once the molten glass flows out of the furnace, machines work on the glass from either side and either compress or expand it. Controlling the speed of rotation of those machines, and varying their number, produces a glass ribbon of varying width and thickness. Continuous production is largely controlled by production controllers with feedback. The majority of transducers and controllers employ PID (Proportional, Integral, and Derivative) control, which adjusts the final output element based on the variable's response to the control element. The most important difference between batch production and continuous production is that in continuous production, the chemical transformations of the input materials are made in continuous reactions that occur in flowing streams of the materials, whereas in batch production they are done in containers.
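Because the passage above mentions PID (Proportional, Integral, Derivative) control, the following minimal Python sketch shows the textbook PID update loop; the gains, setpoint and toy plant model are invented for illustration and are not taken from the text.

# Minimal PID control loop sketch (all values are illustrative assumptions).
def pid_step(error, state, kp, ki, kd, dt):
    """One PID update: returns (control_output, new_state)."""
    integral = state["integral"] + error * dt
    derivative = (error - state["prev_error"]) / dt
    output = kp * error + ki * integral + kd * derivative
    return output, {"integral": integral, "prev_error": error}

setpoint = 4.0   # e.g. a desired ribbon thickness, in arbitrary units
value = 0.0      # current measured value
state = {"integral": 0.0, "prev_error": 0.0}
dt = 0.1

for _ in range(200):
    error = setpoint - value
    control, state = pid_step(error, state, kp=0.8, ki=0.3, kd=0.05, dt=dt)
    value += (control - value) * dt   # toy first-order plant response

print(f"final value = {value:.2f} (target {setpoint})")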

4. Improvement Method

Lean manufacturing or lean production, often simply "Lean", is a production practice that considers the expenditure of resources for any goal other than the creation of value for the end customer to be wasteful, and thus a target for elimination. Working from the perspective of the customer who consumes a product or service, "value" is defined as any action or process that a customer would be willing to pay for. Basically, lean is centered on preserving value with less work. Lean manufacturing is a management philosophy derived mostly from the Toyota Production System (TPS) (hence the term Toyotism is also prevalent) and identified as "Lean" only in the 1990s. It is renowned for its focus on reduction of the original Toyota seven wastes to improve overall customer value, but there are varying perspectives on how this is best achieved. The steady growth of Toyota, from a small company to the world's largest automaker, has focused attention on how it has achieved this. Lean manufacturing is a variation on the theme of efficiency based on optimizing flow; it is a present-day instance of the recurring theme in human history toward increasing efficiency, decreasing waste, and using empirical methods to decide what matters, rather than uncritically accepting pre-existing ideas. As such, it is a chapter in the larger narrative that also includes such ideas as the folk wisdom of thrift, time and motion study, Taylorism, the Efficiency Movement, and Fordism. Lean manufacturing is often seen as a more refined version of earlier efficiency efforts, building upon the work of earlier leaders such as Taylor or Ford, and learning from their mistakes.

Total productive maintenance (TPM) has been around for almost 50 years. It may be misunderstood as a new way of looking at maintenance; however, at least in Japan, it is a well-established process. Like all processes, it has a host of acronyms and buzzwords; some are obvious, many will require follow-up reading. In TPM, the machine operator is thoroughly trained to perform much of the simple maintenance and fault-finding. Eventually, by working in "Zero Defects" teams that include a technical expert as well as operators, they can learn many more tasks, sometimes all those within the scope of an operator. Tradesmen are also trained in the more skilled tasks to help ensure process reliability. This should be fully documented; autonomous maintenance ensures that appropriate and effective efforts are expended once the machine becomes wholly the domain of one person or team. Safety is paramount, so training must be appropriate. Operators are often capable of high standards of technical ability, and this is improved through the use of "best practice" procedures and proper training in these procedures. TPM is a critical adjunct to lean manufacturing. If machine uptime is not predictable and if process capability is not sustained, the process must keep extra stocks to buffer against this uncertainty, and flow through the process will be interrupted. Unreliable uptime is caused by breakdowns or badly performed maintenance. If maintenance is done properly (Right First Time), uptime will improve, as will OEE (Overall Equipment Effectiveness: essentially how many sellable items are actually produced as opposed to how many the machine should produce in a given time). One way to think of TPM is "deterioration prevention": deterioration is what happens naturally to anything that is not taken care of. For this reason many people refer to TPM as "total productive manufacturing" or "total process management". TPM is a proactive approach that essentially aims to identify issues as soon as possible and plan to prevent any issues before occurrence. One motto is "zero error, zero work-related accident, and zero loss".
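To make the OEE idea above concrete, here is a short Python sketch of the commonly used OEE breakdown (availability x performance x quality); the shift times, cycle time and unit counts are invented example numbers, not data from the text.

# Illustrative OEE calculation with assumed numbers.
planned_time_min = 480        # planned production time for the shift
downtime_min = 60             # breakdowns and changeovers
ideal_cycle_time_min = 1.0    # ideal time to produce one unit
total_units = 350             # units actually produced
good_units = 340              # units that met specification

run_time_min = planned_time_min - downtime_min
availability = run_time_min / planned_time_min
performance = (ideal_cycle_time_min * total_units) / run_time_min
quality = good_units / total_units

oee = availability * performance * quality
print(f"Availability {availability:.0%}, Performance {performance:.0%}, "
      f"Quality {quality:.0%}, OEE {oee:.0%}")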

Quick Response Manufacturing (QRM) emphasizes the beneficial effect of reducing internal and external lead times. Shorter lead times improve quality, reduce cost and eliminate non-value-added waste within the organization while simultaneously increasing the organization’s competitiveness and market share by serving customers better and faster. The time-based framework of QRM accommodates strategic variability such as offering custom-engineered products while eliminating dysfunctional variability such as rework and changing due dates.[1] For this reason, companies making products in low or varying volumes have used QRM as an alternative or to complement other strategies such as Lean Manufacturing, Total quality management, Six Sigma or Kaizen.

VDM (Value Driven Maintenance) is a maintenance management methodology. VDM was developed by Mark Haarman and Guy Delahay, both former chairmen of the Dutch Maintenance Association (NVDO) and authors of the book entitled "Value Driven Maintenance, New Faith in Maintenance". What exactly is value? In financial literature, value (net present value) is defined as the sum of all free future cash flows, discounted to today. A cash flow is the difference between income and expenditure. It is not the difference between turnover and costs, because that is easy to manipulate through accounting. There are companies that use highly creative lease, depreciation and reservation techniques to keep book profits artificially high or low; this does not always contribute to shareholder value. Recent stock market scandals are a painful but revealing illustration of what sometimes happens as a result of this. The second part of the definition reflects the knowledge that the value of a cash flow is time-related, given the term "present value": future cash flows must be corrected, or discounted, to today. Managing by value necessitates maximizing future cash flows and obliges companies constantly to search for new free cash flows. It is no longer enough for a company to go on doing what it is already doing; today, it is all about creating value. Now that we know what value is, we can translate the concept into maintenance. Within VDM, there are four axes along which maintenance can contribute to value creation within a company. The axes are also called the four value drivers (see fig. 1).

Theory of Constraints (TOC) is an overall management philosophy introduced by Dr. Eliyahu M. Goldratt in his 1984 book titled The Goal, that is geared to help organizations continually achieve their goal. The title comes from the contention that any manageable system is limited in achieving more of its goal by a very small number of constraints, and that there is always at least one constraint. The TOC process seeks to identify the constraint and restructure the rest of the organization around it, through the use of the Five Focusing Steps.

Reliability-centered maintenance, often known as RCM, is a process to ensure that assets continue to do what their users require in their present operating context. It is generally used to achieve improvements in fields such as the establishment of safe minimum levels of maintenance, changes to operating procedures and strategies, and the establishment of capital maintenance regimes and plans. Successful implementation of RCM will lead to increased cost effectiveness, machine uptime, and a greater understanding of the level of risk that the organization is presently managing. The late John Moubray, in his industry-leading book RCM2 [2], characterized reliability-centered maintenance as a process to establish the safe minimum levels of maintenance. This description echoed statements in the Nowlan and Heap report from United Airlines. RCM is defined by the technical standard SAE JA1011 [3], Evaluation Criteria for RCM Processes, which sets out the minimum criteria that any process should meet before it can be called RCM. This starts with the seven questions below, worked through in the order that they are listed:

1. What is the item supposed to do, and what are its associated performance standards?
2. In what ways can it fail to provide the required functions?
3. What are the events that cause each failure?
4. What happens when each failure occurs?
5. In what way does each failure matter?
6. What systematic task can be performed proactively to prevent, or to diminish to a satisfactory degree, the consequences of the failure?
7. What must be done if a suitable preventive task cannot be found?

5. Information & communication

International Society of Automation (ISA) 88
International Society of Automation (ISA) 95
Enterprise resource planning (ERP)
Systems, Applications and Products (SAP)
Enterprise Control System Integration (IEC 62264)
B2MML, or Business To Manufacturing Markup Language

6. ISO (International Organization for Standardization)

ISO 9000:1999 Quality management systems -- Fundamentals and vocabulary
ISO 9001:2000 Quality management systems -- Requirements for small businesses
ISO 9004:2000 Quality management systems -- Guidelines for performance improvements
ISO 14000:1999 Environmental management
ISO 14001:2004 Environmental management systems -- Requirements with guidance for use
ISO 31000:2009 Provides principles and generic guidelines on risk management
ISO 26000:2010 Social responsibility

ISO for specific industries / parts:

ISO/IEC 27001:2005 Information security, explained for small businesses
ISO/IEC 17025:2005 General requirements for the competence of testing and calibration laboratories
ISO 22000:2007 Food safety management systems; an easy-to-use checklist for small business
ISO/IEC 19770-1 Software Asset Management

7. PLC (Programmable Logic Controller)

A programmable logic controller (PLC) or programmable controller is a digital computer used for automation of electromechanical processes, such as control of machinery on factory assembly lines, amusement rides, or lighting fixtures. PLCs are used in many industries and machines. Unlike general-purpose computers, the PLC is designed for multiple inputs and output arrangements, extended temperature ranges, immunity to electrical noise, and resistance to vibration and impact. Programs to control machine operation are typically stored in battery-backed or non-volatile memory. A PLC is an example of a hard real time system since output results must be produced in response to input conditions within a bounded time, otherwise unintended operation will result.
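The paragraph above describes the PLC's bounded-time read-logic-write cycle; the Python sketch below mimics that scan cycle purely for illustration (the inputs, ladder-style rule and cycle time are invented, and real PLCs are programmed in IEC 61131-3 languages rather than Python).

# Illustrative PLC-style scan loop (invented I/O and logic, not a real PLC API).
import time

def read_inputs():
    # Placeholder: a real PLC would sample its physical input terminals here.
    return {"start_button": True, "guard_closed": True, "overtemp": False}

def solve_logic(inputs, outputs):
    # Ladder-style rule: run the motor only if started, guarded and not overheated.
    outputs["motor"] = (inputs["start_button"]
                        and inputs["guard_closed"]
                        and not inputs["overtemp"])
    return outputs

def write_outputs(outputs):
    # Placeholder: a real PLC would energize its output terminals here.
    print("motor on" if outputs["motor"] else "motor off")

outputs = {"motor": False}
SCAN_TIME_S = 0.01  # the bounded cycle time the text refers to

for _ in range(5):  # a handful of scan cycles for demonstration
    start = time.monotonic()
    outputs = solve_logic(read_inputs(), outputs)
    write_outputs(outputs)
    # Sleep out the remainder of the scan period to keep a fixed cycle time.
    time.sleep(max(0.0, SCAN_TIME_S - (time.monotonic() - start)))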

A distributed control system (DCS) refers to a control system usually of a manufacturing system, process or any kind of dynamic system, in which the controller elements are not central in location (like the brain) but are distributed throughout the system with each component sub-system controlled by one or more controllers. The entire system of controllers is connected by networks for communication and monitoring.

DCS is a very broad term used in a variety of industries, to monitor and control distributed equipment.

Electrical power grids and electrical generation plants
Environmental control systems
Traffic signals
Radio signals
Water management systems
Oil refining plants
Metallurgical process plants
Chemical plants
Pharmaceutical manufacturing
Sensor networks
Dry cargo and bulk oil carrier ships

8. Zero Defects"Zero Defects" is Step 7 of "Philip Crosby's 14 Step Quality Improvement Process". Although applicable to any type of enterprise, it has been primarily adopted within industry supply chains wherever large volumes of components are being purchased (common items such as nuts and bolts are good examples). Crosby's response to the quality crisis was the principle of "doing it right the first time" (DIRFT). He would also include four major principles:1. the definition of quality is conformance to requirements (requirements meaning both the

product specifications and the customer's requirements)

Page 18: Mata Kuliah Qc Iwan Pratam

2. the system of quality is prevention3. the performance standard is zero defects4. the measurement of quality is the price of nonconformance

Zero Defects was a quality control program originated by the Denver Division of the Martin Marietta Corporation (now Lockheed Martin) on the Titan Missile program, which carried the first astronauts into space in the late 1960s. It was then incorporated into the Orlando Division, which built the mobile Pershing Missile System, deployed in Europe; the Sprint antiballistic missile, never deployed; and a number of air-to-ground missiles for the Vietnam War.

Principles of Zero Defects

The principles of the methodology are four-fold:

1. Quality is conformance to requirements

Every product or service has a requirement: a description of what the customer needs. When a particular product meets that requirement, it has achieved quality, provided that the requirement accurately describes what the enterprise and the customer actually need. This technical sense should not be confused with more common usages that indicate weight or goodness or precious materials or some absolute idealized standard. In common parlance, an inexpensive disposable pen is a lower-quality item than a gold-plated fountain pen. In the technical sense of Zero Defects, the inexpensive disposable pen is a quality product if it meets requirements: it writes, does not skip nor clog under normal use, and lasts the time specified.

2. Defect prevention is preferable to quality inspection and correction

The second principle is based on the observation that it is nearly always less troublesome, more certain and less expensive to prevent defects than to discover and correct them.

3. Zero Defects is the quality standard

The third is based on the normative nature of requirements: if a requirement expresses what is genuinely needed, then any unit that does not meet requirements will not satisfy the need and is no good. If units that do not meet requirements actually do satisfy the need, then the requirement should be changed to reflect reality.

4. Quality is measured in monetary terms – the Price of Nonconformance (PONC)

The fourth principle is key to the methodology. Phil Crosby believes that every defect represents a cost, which is often hidden. These costs include inspection time, rework, wasted material and labor, lost revenue and the cost of customer dissatisfaction. When properly identified and accounted for, the magnitude of these costs can be made apparent, which has three advantages. First, it provides a cost-justification for steps to improve quality. The title of the book, "Quality is free," expresses the belief that improvements in quality will return savings more than equal to the costs. Second, it provides a way to measure progress, which is essential to maintaining management commitment and to rewarding employees. Third, by making the goal measurable, actions can be made concrete and decisions can be made on the basis of relative return.

Zero Defects began in the aerospace and defense industry at Martin Marietta in the 1960s; thirty years later it was regenerated in the automotive world. During the 1990s, large companies in the automotive industry tried to cut costs by reducing their quality inspection processes and demanding that their suppliers dramatically improve the quality of their supplies. This eventually resulted in demands for the "Zero Defects" standard. It is now implemented all over the world.

Criticism of "Zero Defects" frequently centers around allegations of extreme cost in meeting the standard. Proponents say that it is an entirely reachable ideal and that claims of extreme cost result from misapplication of the principles. Technical author David Salsburg claims that W. Edwards Deming was critical of this approach and termed it a fad.

9. Six Sigma Methods


Six Sigma projects follow two project methodologies inspired by Deming's Plan-Do-Check-Act Cycle. PDCA (plan–do–check–act) is an iterative four-step management process typically used in business. It is also known as the Deming circle/cycle/wheel, Shewhart cycle, control circle/cycle, or plan–do–study–act (PDSA).

The PDCA cycle

PDCA is a successive cycle which starts off small to test potential effects on processes, but then gradually leads to larger and more targeted change. Plan, Do, Check and Act are also the four components of a work bench in software testing.

PLAN: Establish the objectives and processes necessary to deliver results in accordance with the expected output (the target or goals). By making the expected output the focus, PDCA differs from other techniques in that the completeness and accuracy of the specification is also part of the improvement.

DO: Implement the new processes, often on a small scale if possible, to test possible effects. It is important to collect data for charting and analysis for the following "CHECK" step.

CHECK: Measure the new processes and compare the results (collected in "DO" above) against the expected results (targets or goals from the "PLAN") to ascertain any differences. Charting the data makes it much easier to see trends and to convert the collected data into information. Information is what you need for the next step, "ACT".

ACT: Analyze the differences to determine their cause. Each difference will relate to one or more of the P-D-C-A steps. Determine where to apply changes that will bring improvement. When a pass through these four steps does not result in the need to improve, refine the scope to which PDCA is applied until there is a plan that involves improvement.

These methodologies, composed of five phases each, bear the acronyms DMAIC and DMADV.

DMAIC is used for projects aimed at improving an existing business process. DMAIC is pronounced as "duh-may-ick".

DMADV is used for projects aimed at creating new product or process designs. DMADV is pronounced as "duh-mad-vee".

DMAIC

The DMAIC project methodology has five phases:

Define the problem, the voice of the customer, and the project goals, specifically.

Measure key aspects of the current process and collect relevant data.

Analyze the data to investigate and verify cause-and-effect relationships. Determine what the relationships are, and attempt to ensure that all factors have been considered. Seek out root cause of the defect under investigation.

Improve or optimize the current process based upon data analysis using techniques such as design of experiments, poka yoke or mistake proofing, and standard work to create a new, future state process. Set up pilot runs to establish process capability.

Control the future state process to ensure that any deviations from target are corrected before they result in defects. Implement control systems such as statistical process control, production boards, and visual workplaces, and continuously monitor the process.

DMADV or DFSS


The DMADV project methodology, also known as DFSS ("Design For Six Sigma"), features five phases:

Define design goals that are consistent with customer demands and the enterprise strategy.

Measure and identify CTQs (characteristics that are Critical To Quality), product capabilities, production process capability, and risks.

Analyze to develop and design alternatives, create a high-level design and evaluate design capability to select the best design.

Design details, optimize the design, and plan for design verification. This phase may require simulations.

Verify the design, set up pilot runs, implement the production process and hand it over to the process owner(s).

10. Quality management tools and methods used in Six Sigma

Within the individual phases of a DMAIC or DMADV project, Six Sigma utilizes many established quality-management tools that are also used outside of Six Sigma. The following table shows an overview of the main methods used.

5 Whys
Accelerated life testing
Analysis of variance
ANOVA Gauge R&R
Axiomatic design
Business Process Mapping
Cause & effects diagram (also known as fishbone or Ishikawa diagram)
Check sheet
Chi-square test of independence and fits
Control chart
Correlation
Cost-benefit analysis
CTQ tree
Design of experiments
Failure mode and effects analysis (FMEA)
General linear model
Histograms
Pareto analysis
Pareto chart
Pick chart
Process capability
Quality Function Deployment (QFD)
Quantitative marketing research through use of Enterprise Feedback Management (EFM) systems
Regression analysis
Root cause analysis
Run charts
Scatter diagram
SIPOC analysis (Suppliers, Inputs, Process, Outputs, Customers)
Stratification
Taguchi methods
Taguchi Loss Function
TRIZ
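As a small worked example of one tool from the list above, the following Python sketch performs a basic Pareto analysis on an invented set of defect counts (the categories and numbers are hypothetical, not from the text), ranking defect types and showing the cumulative share each contributes.

# Illustrative Pareto analysis with invented defect-count data.
defect_counts = {
    "scratches": 120,
    "misaligned label": 45,
    "wrong dimensions": 30,
    "missing screw": 15,
    "discoloration": 8,
    "other": 2,
}

total = sum(defect_counts.values())
cumulative = 0
print(f"{'Defect type':<20}{'Count':>8}{'Cumulative %':>15}")
for category, count in sorted(defect_counts.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += count
    print(f"{category:<20}{count:>8}{cumulative / total:>14.0%}")
# The top one or two categories usually account for most defects,
# which is where an improvement project would focus first.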

11. Implementation roles

One key innovation of Six Sigma involves the "professionalizing" of quality management functions. Prior to Six Sigma, quality management in practice was largely relegated to the production floor and to statisticians in a separate quality department. Formal Six Sigma programs adopt a ranking terminology (similar to some martial arts systems) to define a hierarchy (and career path) that cuts across all business functions. Six Sigma identifies several key roles for its successful implementation.

Executive Leadership includes the CEO and other members of top management. They are responsible for setting up a vision for Six Sigma implementation. They also empower the other role holders with the freedom and resources to explore new ideas for breakthrough improvements.

Champions take responsibility for Six Sigma implementation across the organization in an integrated manner. The Executive Leadership draws them from upper management. Champions also act as mentors to Black Belts.


Master Black Belts, identified by champions, act as in-house coaches on Six Sigma. They devote 100% of their time to Six Sigma. They assist champions and guide Black Belts and Green Belts. Apart from statistical tasks, they spend their time on ensuring consistent application of Six Sigma across various functions and departments.

Black Belts operate under Master Black Belts to apply Six Sigma methodology to specific projects. They devote 100% of their time to Six Sigma. They primarily focus on Six Sigma project execution, whereas Champions and Master Black Belts focus on identifying projects/functions for Six Sigma.

Green Belts are the employees who take up Six Sigma implementation along with their other job responsibilities, operating under the guidance of Black Belts.

Some organizations use additional belt colours, such as Yellow Belts, for employees that have basic training in Six Sigma tools.

Certification

In the United States, Six Sigma certification for both Green and Black Belts is offered by the Institute of Industrial Engineers, the American Society for Quality and the International Association for Six Sigma Certification. Many organizations also offer certification programs to their employees. Corporations such as early Six Sigma pioneers General Electric and Motorola developed certification programs as part of their Six Sigma implementation.

12. Origin and meaning of the term "six sigma process"

[Figure: graph of the normal distribution, which underlies the statistical assumptions of the Six Sigma model. The Greek letter σ (sigma) marks the distance on the horizontal axis between the mean, µ, and the curve's inflection point. The greater this distance, the greater is the spread of values encountered. For the curve shown, µ = 0 and σ = 1. The upper and lower specification limits (USL, LSL) are at a distance of 6σ from the mean. Because of the properties of the normal distribution, values lying that far away from the mean are extremely unlikely. Even if the mean were to move right or left by 1.5σ at some point in the future (1.5 sigma shift), there is still a good safety cushion. This is why Six Sigma aims to have processes where the mean is at least 6σ away from the nearest specification limit.]

The term "six sigma process" comes from the notion that if one has six standard deviations between the process mean and the nearest specification limit, as shown in the graph, practically no items will fail to meet specifications. This is based on the calculation method employed in process capability studies. Capability studies measure the number of standard deviations between the process mean and the nearest specification limit in sigma units. As the process standard deviation goes up, or the mean of the process moves away from the center of the tolerance, fewer standard deviations will fit between the mean and the nearest specification limit, decreasing the sigma number and increasing the likelihood of items outside specification.


Role of the 1.5 sigma shift

Experience has shown that processes usually do not perform as well in the long term as they do in the short term. As a result, the number of sigmas that will fit between the process mean and the nearest specification limit may well drop over time, compared to an initial short-term study. To account for this real-life increase in process variation over time, an empirically-based 1.5 sigma shift is introduced into the calculation.[10][18] According to this idea, a process that fits 6 sigma between the process mean and the nearest specification limit in a short-term study will in the long term only fit 4.5 sigma – either because the process mean will move over time, or because the long-term standard deviation of the process will be greater than that observed in the short term, or both. Hence the widely accepted definition of a six sigma process is a process that produces 3.4 defective parts per million opportunities (DPMO). This is based on the fact that a process that is normally distributed will have 3.4 parts per million beyond a point that is 4.5 standard deviations above or below the mean (one-sided capability study). So the 3.4 DPMO of a six sigma process in fact corresponds to 4.5 sigma, namely 6 sigma minus the 1.5-sigma shift introduced to account for long-term variation. This allows for the fact that special causes may result in a deterioration in process performance over time, and is designed to prevent underestimation of the defect levels likely to be encountered in real-life operation.

Sigma levels

[Figure: A control chart depicting a process that experienced a 1.5 sigma drift in the process mean toward the upper specification limit, starting at midnight. Control charts are used to maintain six sigma quality by signaling when quality professionals should investigate a process to find and eliminate special-cause variation.]

See also: Three sigma rule

The table below gives long-term DPMO values corresponding to various short-term sigma levels. These figures assume that the process mean will shift by 1.5 sigma toward the side with the critical specification limit. In other words, they assume that after the initial study determining the short-term sigma level, the long-term Cpk value will turn out to be 0.5 less than the short-term Cpk value. So, for example, the DPMO figure given for 1 sigma assumes that the long-term process mean will be 0.5 sigma beyond the specification limit (Cpk = –0.17), rather than 1 sigma within it, as it was in the short-term study (Cpk = 0.33). Note that the defect percentages indicate only defects exceeding the specification limit to which the process mean is nearest; defects beyond the far specification limit are not included in the percentages.
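To make the signaling role of a control chart concrete, here is a minimal sketch (hypothetical data, not taken from the text): control limits are estimated from an in-control baseline period, and later readings that fall outside them are flagged for investigation as possible special-cause variation.

    import statistics

    def control_limits(baseline):
        """3-sigma control limits estimated from an in-control baseline period."""
        mean = statistics.mean(baseline)
        sigma = statistics.stdev(baseline)
        return mean - 3 * sigma, mean + 3 * sigma

    baseline = [10.0, 10.1, 9.9, 10.0, 10.2, 9.8, 10.0, 10.1, 9.9, 10.0]
    lcl, ucl = control_limits(baseline)

    # Later readings; points outside the limits signal possible special-cause variation
    new_readings = [10.0, 10.1, 11.2, 10.0]
    flagged = [i for i, v in enumerate(new_readings) if not (lcl <= v <= ucl)]
    print(flagged)   # prints [2]: the third reading should be investigated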

Sigma level | DPMO    | Percent defective | Percentage yield | Short-term Cpk | Long-term Cpk
1           | 691,462 | 69%               | 31%              | 0.33           | –0.17
2           | 308,538 | 31%               | 69%              | 0.67           | 0.17
3           | 66,807  | 6.7%              | 93.3%            | 1.00           | 0.5
4           | 6,210   | 0.62%             | 99.38%           | 1.33           | 0.83
5           | 233     | 0.023%            | 99.977%          | 1.67           | 1.17
6           | 3.4     | 0.00034%          | 99.99966%        | 2.00           | 1.5
7           | 0.019   | 0.0000019%        | 99.9999981%      | 2.33           | 1.83
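The long-term DPMO column is simply the one-sided normal tail beyond the 1.5-sigma-shifted limit. The sketch below (an illustrative assumption on my part; scipy is not mentioned in the text) reproduces the figures in the table from that definition:

    from scipy.stats import norm

    for z in range(1, 8):                        # short-term sigma levels 1 to 7
        dpmo = norm.sf(z - 1.5) * 1_000_000      # one-sided tail after the 1.5 sigma shift
        yield_pct = 100 - dpmo / 10_000          # percentage yield
        cpk_short, cpk_long = z / 3, (z - 1.5) / 3
        print(f"{z} sigma: {dpmo:12.4f} DPMO, yield {yield_pct:.7f}%, "
              f"Cpk {cpk_short:.2f} / {cpk_long:.2f}")

    # At 6 sigma this prints roughly 3.4 DPMO, matching the conventional Six Sigma figure.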

13. Software used for Six Sigma

There are generally four classes of software used to support Six Sigma:

 Analysis tools, which are used to perform statistical or process analysis
 Program management tools, used to manage and track a corporation's entire Six Sigma program
 DMAIC and Lean online project collaboration tools for local and global teams
 Data collection tools that feed information directly into the analysis tools and significantly reduce the time spent gathering data

Analysis tools

Proprietary software: Actuate, Arena (software), ARIS Six Sigma, ARM - Strategic Thought Group, @RISK for Six Sigma, CHARTrunner by PQ Systems, DataLyzer Spectrum, DOE Pro XL, EngineRoom by MoreSteam, IBM WebSphere Business Modeler, iGrafx Process for Six Sigma, i-nexus, JMP, Oracle Crystal Ball, Microsoft Visio, Minitab, Nimbus Control, QI MACROS, QPR ProcessGuide by QPR Software, Quality Companion by Minitab, Quality Tools Charts Generator QTcharts, Quality Window by Busitech, Quantum XL, QWXL 3 by Busitech, Quik Sigma Professional by Promontory Management Group, SDI Tools, SigmaFlow, Sigma Magic, SigmaXL, Software AG webMethods BPM Suite, SPC XL, Statgraphics, STATISTICA, TalkFreely LTD, Telelogic System Architect, The Unscrambler, Select Architect Business Process Modeling

Open Source Software:

DMAIC/Lean online collaboration tools: SigmaFlow (integrated project management (PPM), BPA and ECM tools), Grouputer SigmaSense, TalkFreely LTD, Quality Tools Charts Generator QTcharts

14. Application

Six Sigma mostly finds application in large organizations. An important factor in the spread of Six Sigma was GE's 1998 announcement of $350 million in savings attributable to Six Sigma, a figure that later grew to more than $1 billion. According to industry consultants such as Thomas Pyzdek and John Kullmann, companies with fewer than 500 employees are less suited to Six Sigma implementation, or need to adapt the standard approach to make it work for them. This is due both to the infrastructure of Black Belts that Six Sigma requires and to the fact that large organizations present more opportunities for the kinds of improvements Six Sigma is suited to bringing about.

The following companies claim to have successfully implemented Six Sigma in some form or another:

3M, Acme Markets, Advanced Micro Devices, Agilent Technologies, Air Canada, ALCAN, Amazon.com, AXA, BAE Systems, Bank of America, Bank of Montreal, BD Medical, Bechtel Corporation, Boeing, Cabot Microelectronics Ltd, CAE Inc, Canada Post, Caterpillar Inc., Chartered Quality Institute, Chevron, CIGNA, Cintas Uniforms, Cognizant Technology Solutions, Computer Sciences Corporation, Cookson Group, Corning, CoorsTek, Credit Suisse, Cummins Inc., Deere & Company, Dell [12], Delphi Corporation, Denso, DHL, Deutsche Telekom, Dominion Resources, Dow Chemical Company, DSB Bank, DsignStudio, DuPont, Eastman Kodak Company, EMC, Finning, Flextronics, Ford Motor Company, General Electric, General Dynamics, Genpact, GlaxoSmithKline, HCL Technologies, Heinz Co., Honeywell, Hertel, HSBC Group, Idearc Media, Ingram Micro, Inventec, ITC Welcomgroup Hotels, Palaces and Resorts, ITT Corporation, JEA, Korea Telecom, KTF, LG Group, Lockheed Martin, Mando Corporation, Maple Leaf Foods, McKesson Corporation, Merrill Lynch, Microflex Inc., Motorola, Mumbai's dabbawalas, Network Rail, NewPage Corporation, Nielsen Company, Nortel Networks, Northrop Grumman, Owens-Illinois, Patheon, Penske Truck Leasing, PepsiCo, PolyOne Corporation, Precision Castparts Corp., procise gmbh, Quest Diagnostics, RaggedCom, Raytheon, ResMed, Samsung Group, Sears, SGL Group, Shaw Industries, Shinhan Bank, Shinhan Card, Shop Direct Group, Siemens AG, SKF, Skyworks Solutions Inc., Starwood Hotels & Resorts Worldwide, Staples Inc., Sterlite Optical Technologies, Target Corporation, Teradyne, Teraeon Consulting Corporation, Trane, Textron, The Hertz Corporation, The McGraw-Hill Companies, The Vanguard Group, TomoTherapy Inc., TRW, TSYS (Total System Services), Tyco International, UMS UNIVERSAL MANAGEMENT SERVICES GMBH, Unipart, United Biscuits, United States Air Force, United States Army, United States Marine Corps, United States Navy, UnitedHealth Group, Vodafone, Volt Information Sciences, Whirlpool, Wipro [31], Xchanging, Xerox Paper Products Limited

15. Criticism

Lack of originality

Noted quality expert Joseph M. Juran has described Six Sigma as "a basic version of quality improvement", stating that "there is nothing new there. It includes what we used to call facilitators. They've adopted more flamboyant terms, like belts with different colors. I think that concept has merit to set apart, to create specialists who can be very helpful. Again, that's not a new idea. The American Society for Quality long ago established certificates, such as for reliability engineers."

Role of consultants

The use of "Black Belts" as itinerant change agents has (controversially) fostered an industry of training and certification. Critics argue that Six Sigma is oversold by too great a number of consulting firms, many of which claim expertise in Six Sigma when they have only a rudimentary understanding of the tools and techniques involved.

Potential negative effects

A Fortune article stated that "of 58 large companies that have announced Six Sigma programs, 91 percent have trailed the S&P 500 since". The statement was attributed to "an analysis by Charles Holland of consulting firm Qualpro (which espouses a competing quality-improvement process)". The summary of the article is that Six Sigma is effective at what it is intended to do, but that it is "narrowly designed to fix an existing process" and does not help in "coming up with new products or disruptive technologies." Advocates of Six Sigma have argued that many of these claims are in error or ill-informed.

A BusinessWeek article says that James McNerney's introduction of Six Sigma at 3M had the effect of stifling creativity, and reports its removal from the research function. It cites two Wharton School professors who say that Six Sigma leads to incremental innovation at the expense of blue-sky research.[26] This phenomenon is further explored in the book Going Lean, which describes a related approach known as lean dynamics and provides data to show that Ford's "6 Sigma" program did little to change its fortunes.

Based on arbitrary standards

While 3.4 defects per million opportunities might work well for certain products or processes, it might not operate optimally or cost-effectively for others. A pacemaker process might need higher standards, for example, whereas a direct mail advertising campaign might need lower standards. The basis and justification for choosing 6 (as opposed to 5 or 7, for example) as the number of standard deviations is not clearly explained. In addition, the Six Sigma model assumes that the process data always conform to the normal distribution. The calculation of defect rates for situations where the normal distribution model does not apply is not properly addressed in the current Six Sigma literature.

Criticism of the 1.5 sigma shift

The statistician Donald J. Wheeler has dismissed the 1.5 sigma shift as "goofy" because of its arbitrary nature, and its universal applicability is seen as doubtful. The 1.5 sigma shift has also become contentious because it results in stated "sigma levels" that reflect short-term rather than long-term performance: a process that has long-term defect levels corresponding to 4.5 sigma performance is, by Six Sigma convention, described as a "six sigma process." The accepted Six Sigma scoring system thus cannot be equated to actual normal-distribution probabilities for the stated number of standard deviations, and this has been a key bone of contention about how Six Sigma measures are defined. The fact that it is rarely explained that a "6 sigma" process will have long-term defect rates corresponding to 4.5 sigma performance rather than actual 6 sigma performance has led several commentators to express the opinion that Six Sigma is a confidence trick.


Chapter 4

Strategic management

Strategic management is a field that deals with the major intended and emergent initiatives taken by general managers on behalf of owners, involving utilization of resources, to enhance the performance of firms in their external environments. It entails specifying the organization's mission, vision and objectives, developing policies and plans, often in terms of projects and programs, which are designed to achieve these objectives, and then allocating resources to implement the policies and plans, projects and programs. A balanced scorecard is often used to evaluate the overall performance of the business and its progress towards objectives. Recent studies and leading management theorists have advocated that strategy needs to start with stakeholders' expectations and use a modified balanced scorecard which includes all stakeholders.

Strategic management is a level of managerial activity below setting goals and above tactics. Strategic management provides overall direction to the enterprise and is closely related to the field of organization studies. In the field of business administration it is useful to talk about "strategic alignment" between the organization and its environment, or "strategic consistency." According to Arieu (2007), "there is strategic consistency when the actions of an organization are consistent with the expectations of management, and these in turn are with the market and the context." Strategic management includes not only the management team but can also include the board of directors and other stakeholders of the organization; it depends on the organizational structure.

“Strategic management is an ongoing process that evaluates and controls the business and the industries in which the company is involved; assesses its competitors and sets goals and strategies to meet all existing and potential competitors; and then reassesses each strategy annually or quarterly [i.e. regularly] to determine how it has been implemented and whether it has succeeded or needs replacement by a new strategy to meet changed circumstances, new technology, new competitors, a new economic environment, or a new social, financial, or political environment.” (Lamb, 1984:ix)

Strategic management techniques can be viewed as bottom-up, top-down, or collaborative processes. In the bottom-up approach, employees submit proposals to their managers who, in turn, funnel the best ideas further up the organization. This is often accomplished by a capital budgeting process. Proposals are assessed using financial criteria such as return on investment or cost-benefit analysis; cost underestimation and benefit overestimation are major sources of error. The proposals that are approved form the substance of a new strategy, all of which is done without a grand strategic design or a strategic architect. The top-down approach is the most common by far: the CEO, possibly with the assistance of a strategic planning team, decides on the overall direction the company should take. Some organizations are starting to experiment with collaborative strategic planning techniques that recognize the emergent nature of strategic decisions.

Strategic decisions should focus on outcome, time remaining, and current value/priority. The outcome comprises both the desired ending goal and the plan designed to reach that goal. Managing strategically requires paying attention to the time remaining to reach a particular level or goal and adjusting the pace and options accordingly. Value/priority relates to the shifting, relative concept of value-add: strategic decisions should be based on the understanding that the value-add of whatever you are managing is a constantly changing reference point. An objective that begins with a high level of value-add may change due to the influence of internal and external factors. Strategic management, by definition, is managing with a heads-up approach to outcome, time, and relative value, and actively making course corrections as needed.


1. The strategy hierarchy

In most (large) corporations there are several levels of management. Corporate strategy is the highest of these levels in the sense that it is the broadest - applying to all parts of the firm - while also incorporating the longest time horizon. It gives direction to corporate values, corporate culture, corporate goals, and corporate missions. Under this broad corporate strategy there are typically business-level competitive strategies and functional unit strategies.

Corporate strategy refers to the overarching strategy of the diversified firm. Such a corporate strategy answers the questions of "which businesses should we be in?" and "how does being in these businesses create synergy and/or add to the competitive advantage of the corporation as a whole?"

Business strategy refers to the aggregated strategies of a single business firm or a strategic business unit (SBU) in a diversified corporation. According to Michael Porter, a firm must formulate a business strategy that incorporates either cost leadership, differentiation, or focus to achieve a sustainable competitive advantage and long-term success. Alternatively, according to W. Chan Kim and Renée Mauborgne, an organization can achieve high growth and profits by creating a Blue Ocean Strategy that breaks the previous value-cost trade-off by simultaneously pursuing both differentiation and low cost.

Functional strategies include marketing strategies, new product development strategies, human resource strategies, financial strategies, legal strategies, supply-chain strategies, and information technology management strategies. The emphasis is on short- and medium-term plans and is limited to the domain of each department’s functional responsibility. Each functional department attempts to do its part in meeting overall corporate objectives, and hence to some extent their strategies are derived from broader corporate strategies.

Many companies feel that a functional organizational structure is not an efficient way to organize activities, so they have reengineered according to processes or SBUs. A strategic business unit is a semi-autonomous unit that is usually responsible for its own budgeting, new product decisions, hiring decisions, and price setting. An SBU is treated as an internal profit centre by corporate headquarters. A technology strategy, for example, although it is focused on technology as a means of achieving an organization's overall objectives, may include dimensions that are beyond the scope of a single business unit, engineering organization or IT department.

An additional level of strategy called operational strategy was encouraged by Peter Drucker in his theory of management by objectives (MBO). It is very narrow in focus and deals with day-to-day operational activities such as scheduling criteria. It must operate within a budget but is not at liberty to adjust or create that budget. Operational level strategies are informed by business level strategies which, in turn, are informed by corporate level strategies.

Since the turn of the millennium, some firms have reverted to a simpler strategic structure driven by advances in information technology. It is felt that knowledge management systems should be used to share information and create common goals; strategic divisions are thought to hamper this process. This notion of strategy has been captured under the rubric of dynamic strategy, popularized by Carpenter and Sanders's textbook. This work builds on that of Brown and Eisenhardt as well as Christensen and portrays firm strategy, both business and corporate, as necessarily embracing ongoing strategic change, and the seamless integration of strategy formulation and implementation. Such change and implementation are usually built into the strategy through the staging and pacing facets.

2. Historical development of strategic management

Strategic management as a discipline originated in the 1950s and 60s. Although there were numerous early contributors to the literature, the most influential pioneers were Alfred D. Chandler, Philip Selznick, Igor Ansoff, and Peter Drucker.

Alfred Chandler recognized the importance of coordinating the various aspects of management under one all-encompassing strategy. Prior to this time the various functions of management were separate, with little overall coordination or strategy. Interactions between functions or between departments were typically handled by a boundary position; that is, there were one or two managers that relayed information back and forth between two departments. Chandler also stressed the importance of taking a long-term perspective when looking to the future. In his groundbreaking 1962 work Strategy and Structure, Chandler showed that a long-term coordinated strategy was necessary to give a company structure, direction, and focus. He said it concisely: “structure follows strategy.”

In 1957, Philip Selznick introduced the idea of matching the organization's internal factors with external environmental circumstances. This core idea was developed into what we now call SWOT analysis by Learned, Andrews, and others at the Harvard Business School General Management Group. Strengths and weaknesses of the firm are assessed in light of the opportunities and threats from the business environment.

Igor Ansoff built on Chandler's work by adding a range of strategic concepts and inventing a whole new vocabulary. He developed a strategy grid that compared market penetration strategies, product development strategies, market development strategies, and horizontal and vertical integration and diversification strategies. He felt that management could use these strategies to systematically prepare for future opportunities and challenges. In his 1965 classic Corporate Strategy, he developed the gap analysis still used today, in which we must understand the gap between where we are currently and where we would like to be, and then develop what he called “gap reducing actions”.

Peter Drucker was a prolific strategy theorist and author of dozens of management books, with a career spanning five decades. His contributions to strategic management were many, but two are most important. Firstly, he stressed the importance of objectives: an organization without clear objectives is like a ship without a rudder. As early as 1954 he was developing a theory of management based on objectives.[7] This evolved into his theory of management by objectives (MBO). According to Drucker, the procedure of setting objectives and monitoring your progress towards them should permeate the entire organization, top to bottom. His other seminal contribution was in predicting the importance of what today we would call intellectual capital. He predicted the rise of what he called the “knowledge worker” and explained the consequences of this for management. He said that knowledge work is non-hierarchical: work would be carried out in teams, with the person most knowledgeable in the task at hand being the temporary leader.

In 1985, Ellen-Earle Chaffee summarized what she thought were the main elements of strategic management theory by the 1970s:

 Strategic management involves adapting the organization to its business environment.
 Strategic management is fluid and complex. Change creates novel combinations of circumstances requiring unstructured, non-repetitive responses.
 Strategic management affects the entire organization by providing direction.
 Strategic management involves both strategy formation (she called it content) and also strategy implementation (she called it process).
 Strategic management is partially planned and partially unplanned.
 Strategic management is done at several levels: overall corporate strategy, and individual business strategies.
 Strategic management involves both conceptual and analytical thought processes.

3. Growth and portfolio theory

In the 1970s much of strategic management dealt with size, growth, and portfolio theory. The PIMS study was a long-term study, started in the 1960s and lasting 19 years, that attempted to understand the Profit Impact of Marketing Strategies (PIMS), particularly the effect of market share. It started at General Electric, moved to Harvard in the early 1970s, and then moved to the Strategic Planning Institute in the late 1970s; it now contains decades of information on the relationship between profitability and strategy. Its initial conclusion was unambiguous: the greater a company's market share, the greater its rate of profit. High market share provides volume and economies of scale. It also provides experience and learning-curve advantages. The combined effect is increased profits.[9] The study's conclusions continue to be drawn on by academics and companies today: "PIMS provides compelling quantitative evidence as to which business strategies work and don't work" - Tom Peters.

The benefits of high market share naturally lead to an interest in growth strategies. The relative advantages of horizontal integration, vertical integration, diversification, franchises, mergers and acquisitions, joint ventures, and organic growth were discussed. The most appropriate market dominance strategies were assessed given the competitive and regulatory environment.

There was also research that indicated that a low market share strategy could also be very profitable. Schumacher (1973), Woo and Cooper (1982), Levenson (1984), and later Traverso (2002) showed how smaller niche players obtained very high returns. By the early 1980s the paradoxical conclusion was that high market share and low market share companies were often very profitable, but most of the companies in between were not. This was sometimes called the “hole in the middle” problem. This anomaly would be explained by Michael Porter in the 1980s.

The management of diversified organizations required new techniques and new ways of thinking. The first CEO to address the problem of a multi-divisional company was Alfred Sloan at General Motors. GM was decentralized into semi-autonomous “strategic business units” (SBUs), but with centralized support functions.

One of the most valuable concepts in the strategic management of multi-divisional companies was portfolio theory. In the previous decade Harry Markowitz and other financial theorists developed the theory of portfolio analysis. It was concluded that a broad portfolio of financial assets could reduce specific risk. In the 1970s marketers extended the theory to product portfolio decisions, and managerial strategists extended it to operating division portfolios. Each of a company’s operating divisions was seen as an element in the corporate portfolio. Each operating division (also called a strategic business unit) was treated as a semi-independent profit center with its own revenues, costs, objectives, and strategies. Several techniques were developed to analyze the relationships between elements in a portfolio. B.C.G. analysis, for example, was developed by the Boston Consulting Group in the early 1970s; this was the theory that gave us the wonderful image of a CEO sitting on a stool milking a cash cow. Shortly after that the G.E. multi-factor model was developed by General Electric. Companies continued to diversify until the 1980s, when it was realized that in many cases a portfolio of operating divisions was worth more as separate, completely independent companies.

The marketing revolution

The 1970s also saw the rise of the marketing-oriented firm. From the beginnings of capitalism it was assumed that the key requirement of business success was a product of high technical quality: if you produced a product that worked well and was durable, it was assumed you would have no difficulty selling it at a profit. This was called the production orientation, and it was generally true that good products could be sold without effort, encapsulated in the saying "Build a better mousetrap and the world will beat a path to your door." This was largely due to the growing numbers of affluent and middle-class people that capitalism had created. But after the pent-up demand caused by the Second World War was saturated in the 1950s, it became obvious that products were not selling as easily as they had been. The answer was to concentrate on selling. The 1950s and 1960s are known as the sales era, and the guiding philosophy of business of the time is today called the sales orientation. In the early 1970s Theodore Levitt and others at Harvard argued that the sales orientation had things backward. They claimed that instead of producing products and then trying to sell them to the customer, businesses should start with the customer, find out what they wanted, and then produce it for them. The customer became the driving force behind all strategic business decisions. This marketing orientation, in the decades since its introduction, has been reformulated and repackaged under numerous names including customer orientation, marketing philosophy, customer intimacy, customer focus, customer driven, and market focused.

The Japanese challenge

In 2009, industry consultants Mark Blaxill and Ralph Eckardt suggested that much of the Japanese business dominance that began in the mid-1970s was the direct result of competition enforcement efforts by the Federal Trade Commission (FTC) and U.S. Department of Justice (DOJ). In 1975 the FTC reached a settlement with Xerox Corporation in its anti-trust lawsuit. (At the time, the FTC was under the direction of Frederic M. Scherer.) The 1975 Xerox consent decree forced the licensing of the company’s entire patent portfolio, mainly to Japanese competitors. (See "compulsory license.") This action marked the start of an activist approach to managing competition by the FTC and DOJ, which resulted in the compulsory licensing of tens of thousands of patents from some of America's leading companies, including IBM, AT&T, DuPont, Bausch & Lomb, and Eastman Kodak.

Within four years of the consent decree, Xerox's share of the U.S. copier market dropped from nearly 100% to less than 14%. Between 1950 and 1980 Japanese companies consummated more than 35,000 foreign licensing agreements, mostly with U.S. companies, for free or low-cost licenses made possible by the FTC and DOJ. The post-1975 era of anti-trust initiatives by Washington D.C. economists at the FTC corresponded directly with the rapid, unprecedented rise in Japanese competitiveness and a simultaneous stalling of the U.S. manufacturing economy.

Competitive advantage

The Japanese challenge shook the confidence of the western business elite, but detailed comparisons of the two management styles and examinations of successful businesses convinced westerners that they could overcome the challenge. The 1980s and early 1990s saw a plethora of theories explaining exactly how this could be done. They cannot all be detailed here, but some of the more important strategic advances of the decade are explained below.

Gary Hamel and C. K. Prahalad declared that strategy needs to be more active and interactive; less “arm-chair planning” was needed. They introduced terms like strategic intent and strategic architecture. Their best-known advance was the idea of core competency. They showed how important it was to know the one or two key things that your company does better than the competition. Active strategic management required active information gathering and active problem solving. In the early days of Hewlett-Packard (HP), Dave Packard and Bill Hewlett devised an active management style that they called management by walking around (MBWA). Senior HP managers were seldom at their desks; they spent most of their days visiting employees, customers, and suppliers. This direct contact with key people provided them with a solid grounding from which viable strategies could be crafted. The MBWA concept was popularized in 1985 by a book by Tom Peters and Nancy Austin.[18] Japanese managers employ a similar system, which originated at Honda, and is sometimes called the 3 G's (Genba, Genbutsu, and Genjitsu, which translate into “actual place”, “actual thing”, and “actual situation”).

Probably the most influential strategist of the decade was Michael Porter. He introduced many new concepts, including five forces analysis, generic strategies, the value chain, strategic groups, and clusters. In five forces analysis he identifies the forces that shape a firm's strategic environment. It is like a SWOT analysis with structure and purpose, and it shows how a firm can use these forces to obtain a sustainable competitive advantage. Porter modifies Chandler's dictum about structure following strategy by introducing a second level of structure: organizational structure follows strategy, which in turn follows industry structure. Porter's generic strategies detail the interaction between cost minimization strategies, product differentiation strategies, and market focus strategies. Although he did not introduce these terms, he showed the importance of choosing one of them rather than trying to position your company between them. He also challenged managers to see their industry in terms of a value chain. A firm will be successful only to the extent that it contributes to the industry's value chain. This forced management to look at its operations from the customer's point of view: every operation should be examined in terms of what value it adds in the eyes of the final customer.

In 1993, John Kay took the idea of the value chain to a financial level, claiming “adding value is the central purpose of business activity”, where adding value is defined as the difference between the market value of outputs and the cost of inputs including capital, all divided by the firm's net output. Borrowing from Gary Hamel and Michael Porter, Kay claims that the role of strategic management is to identify your core competencies and then assemble a collection of assets that will increase value added and provide a competitive advantage. He claims that there are three types of capabilities that can do this: innovation, reputation, and organizational structure.


The 1980s also saw the widespread acceptance of positioning theory. Although the theory originated with Jack Trout in 1969, it didn’t gain wide acceptance until Al Ries and Jack Trout wrote their classic book “Positioning: The Battle For Your Mind” (1979). The basic premise is that a strategy should not be judged by internal company factors but by the way customers see it relative to the competition. Crafting and implementing a strategy involves creating a position in the mind of the collective consumer. Several techniques were applied to positioning theory, some newly invented but most borrowed from other disciplines. Perceptual mapping, for example, creates visual displays of the relationships between positions. Multidimensional scaling, discriminant analysis, factor analysis, and conjoint analysis are mathematical techniques used to determine the most relevant characteristics (called dimensions or factors) upon which positions should be based. Preference regression can be used to determine vectors of ideal positions, and cluster analysis can identify clusters of positions.

Others felt that internal company resources were the key. In 1992, Jay Barney, for example, saw strategy as assembling the optimum mix of resources, including human, technology, and suppliers, and then configuring them in unique and sustainable ways.

Michael Hammer and James Champy felt that these resources needed to be restructured. This process, which they labeled reengineering, involved organizing a firm's assets around whole processes rather than tasks. In this way a team of people saw a project through, from inception to completion. This avoided functional silos where isolated departments seldom talked to each other. It also eliminated waste due to functional overlap and interdepartmental communications.

In 1989 Richard Lester and the researchers at the MIT Industrial Performance Center identified seven best practices and concluded that firms must accelerate the shift away from the mass production of low-cost standardized products. The seven areas of best practice were:

 Simultaneous continuous improvement in cost, quality, service, and product innovation
 Breaking down organizational barriers between departments
 Eliminating layers of management, creating flatter organizational hierarchies
 Closer relationships with customers and suppliers
 Intelligent use of new technology
 Global focus
 Improving human resource skills

The search for “best practices” is also called benchmarking. This involves determining where you need to improve, finding an organization that is exceptional in this area, then studying the company and applying its best practices in your firm.

A large group of theorists felt the area where western business was most lacking was product quality. People like W. Edwards Deming, Joseph M. Juran, A. Kearney, Philip Crosby, and Armand Feigenbaum [27] suggested quality improvement techniques like total quality management (TQM), continuous improvement (kaizen), lean manufacturing, Six Sigma, and return on quality (ROQ).

An equally large group of theorists felt that poor customer service was the problem. People like James Heskett (1988), Earl Sasser (1995), William Davidow, Len Schlesinger, A. Parasuraman (1988), Len Berry, Jane Kingman-Brundage, Christopher Hart, and Christopher Lovelock (1994) gave us fishbone diagramming, service charting, Total Customer Service (TCS), the service profit chain, service gaps analysis, the service encounter, strategic service vision, service mapping, and service teams. Their underlying assumption was that there is no better source of competitive advantage than a continuous stream of delighted customers.

Process management uses some of the techniques from product quality management and some of the techniques from customer service management. It looks at an activity as a sequential process. The objective is to find inefficiencies and make the process more effective. Although the procedures have a long history, dating back to Taylorism, the scope of their applicability has been greatly widened, leaving no aspect of the firm free from potential process improvements. Because of the broad applicability of process management techniques, they can be used as a basis for competitive advantage.


Some realized that businesses were spending much more on acquiring new customers than on retaining current ones. Carl Sewell,[33] Frederick F. Reichheld,[34] C. Gronroos,[35] and Earl Sasser[36] showed us how a competitive advantage could be found in ensuring that customers returned again and again. This has come to be known as the loyalty effect, after Reichheld's book of the same name, in which he broadens the concept to include employee loyalty, supplier loyalty, distributor loyalty, and shareholder loyalty. They also developed techniques for estimating the lifetime value of a loyal customer, called customer lifetime value (CLV). A significant movement started that attempted to recast selling and marketing techniques into a long-term endeavor that created a sustained relationship with customers (called relationship selling, relationship marketing, and customer relationship management). Customer relationship management (CRM) software (and its many variants) became an integral tool that sustained this trend.

James Gilmore and Joseph Pine found competitive advantage in mass customization.[37] Flexible manufacturing techniques allowed businesses to individualize products for each customer without losing economies of scale. This effectively turned the product into a service. They also realized that if a service is mass customized by creating a “performance” for each individual client, that service would be transformed into an “experience”. Their book, The Experience Economy,[38] along with the work of Bernd Schmitt, convinced many to see service provision as a form of theatre. This school of thought is sometimes referred to as customer experience management (CEM).

Like Peters and Waterman a decade earlier, James Collins and Jerry Porras spent years conducting empirical research on what makes great companies. Six years of research uncovered a key underlying principle behind the 19 successful companies that they studied: they all encourage and preserve a core ideology that nurtures the company. Even though strategy and tactics change daily, the companies were nevertheless able to maintain a core set of values. These core values encourage employees to build an organization that lasts. In Built To Last (1994) they claim that short-term profit goals, cost cutting, and restructuring will not stimulate dedicated employees to build a great company that will endure.[39] In 2000 Collins coined the term “built to flip” to describe the prevailing business attitudes in Silicon Valley: a business culture where technological change inhibits a long-term focus. He also popularized the concept of the BHAG (Big Hairy Audacious Goal).

Arie de Geus (1997) undertook a similar study and obtained similar results. He identified four key traits of companies that had prospered for 50 years or more. They are:

 Sensitivity to the business environment — the ability to learn and adjust
 Cohesion and identity — the ability to build a community with personality, vision, and purpose
 Tolerance and decentralization — the ability to build relationships
 Conservative financing

De Geus called a company with these key characteristics a living company because it is able to perpetuate itself. If a company emphasizes knowledge rather than finance, and sees itself as an ongoing community of human beings, it has the potential to become great and endure for decades. Such an organization is an organic entity capable of learning (he called it a “learning organization”) and capable of creating its own processes, goals, and persona.

There are numerous ways by which a firm can try to create a competitive advantage - some will work but many will not. To help firms avoid a hit-and-miss approach to the creation of competitive advantage, Will Mulcaster [40] suggests that firms engage in a dialogue that centres around the question "Will the proposed competitive advantage create Perceived Differential Value?" The dialogue should raise a series of other pertinent questions, including:

"Will the proposed competitive advantage create something that is different from the competition?"

"Will the difference add value in the eyes of potential customers?" - This question will entail a discussion of the combined effects of price, product features and consumer perceptions.


"Will the product add value for the firm?" - Answering this question will require an examination of cost effectiveness and the pricing strategy.

The military theorists

In the 1980s some business strategists realized that there was a vast knowledge base stretching back thousands of years that they had barely examined. They turned to military strategy for guidance. Military strategy books such as The Art of War by Sun Tzu, On War by von Clausewitz, and The Red Book by Mao Zedong became instant business classics. From Sun Tzu, they learned the tactical side of military strategy and specific tactical prescriptions. From von Clausewitz, they learned the dynamic and unpredictable nature of military strategy. From Mao Zedong, they learned the principles of guerrilla warfare. The main marketing warfare books were:

 Business War Games by Barrie James, 1984
 Marketing Warfare by Al Ries and Jack Trout, 1986
 Leadership Secrets of Attila the Hun by Wess Roberts, 1987

Philip Kotler was a well-known proponent of marketing warfare strategy. There were generally thought to be four types of business warfare theories. They are:

 Offensive marketing warfare strategies
 Defensive marketing warfare strategies
 Flanking marketing warfare strategies
 Guerrilla marketing warfare strategies

The marketing warfare literature also examined leadership and motivation, intelligence gathering, types of marketing weapons, logistics, and communications.

By the turn of the century marketing warfare strategies had gone out of favour. It was felt that they were limiting: there were many situations in which non-confrontational approaches were more appropriate. In 1989, Dudley Lynch and Paul L. Kordis published Strategy of the Dolphin: Scoring a Win in a Chaotic World. "The Strategy of the Dolphin" was developed to give guidance as to when to use aggressive strategies and when to use passive strategies. A variety of aggressiveness strategies were developed.

In 1993, J. Moore used a similar metaphor.[41] Instead of using military terms, he created an ecological theory of predators and prey (see ecological model of competition), a sort of Darwinian management strategy in which market interactions mimic long-term ecological stability.

4. Strategic change

Peter Drucker (1969) coined the phrase Age of Discontinuity to describe the way change forces disruptions into the continuity of our lives. In an age of continuity, attempts to predict the future by extrapolating from the past can be somewhat accurate. But according to Drucker, we are now in an age of discontinuity and extrapolating from the past is hopelessly ineffective. We cannot assume that trends that exist today will continue into the future. He identifies four sources of discontinuity: new technologies, globalization, cultural pluralism, and knowledge capital.

In 1970, Alvin Toffler in Future Shock described a trend towards accelerating rates of change.

He illustrated how social and technological norms had shorter lifespans with each generation, and he questioned society's ability to cope with the resulting turmoil and anxiety. In past generations periods of change were always punctuated with times of stability, which allowed society to assimilate the change and deal with it before the next change arrived. But these periods of stability grew shorter and by the late 20th century had all but disappeared. In 1980, in The Third Wave, Toffler characterized this shift to relentless change as the defining feature of the third phase of civilization (the first two phases being the agricultural and industrial waves). He claimed that the dawn of this new phase would cause great anxiety for those who grew up in the previous phases, and would cause much conflict and opportunity in the business world. Hundreds of authors, particularly since the early 1990s, have attempted to explain what this means for business strategy.

In 2000, Gary Hamel discussed strategic decay, the notion that the value of all strategies, no matter how brilliant, decays over time.

In 1978, Derek Abell (Abell, D. 1978) described strategic windows and stressed the importance of the timing (both entrance and exit) of any given strategy. This has led some strategic planners to build planned obsolescence into their strategies.

In 1989, Charles Handy identified two types of change. Strategic drift is a gradual change that occurs so subtly that it is not noticed until it is too late. By contrast, transformational change is sudden and radical, typically caused by discontinuities (or exogenous shocks) in the business environment. The point where a new trend is initiated is called a strategic inflection point by Andy Grove. Inflection points can be subtle or radical.

In 2000, Malcolm Gladwell discussed the importance of the tipping point, that point where a trend or fad acquires critical mass and takes off.

In 1983, Noel Tichy wrote that because we are all beings of habit we tend to repeat what we are comfortable with. He wrote that this is a trap that constrains our creativity, prevents us from exploring new ideas, and hampers our dealing with the full complexity of new issues. He developed a systematic method of dealing with change that involved looking at any new issue from three angles: technical and production, political and resource allocation, and corporate culture.

In 1990, Richard Pascale (Pascale, R. 1990) wrote that relentless change requires that businesses continuously reinvent themselves.[50] His famous maxim is “Nothing fails like success”, by which he means that what was a strength yesterday becomes the root of weakness today. We tend to depend on what worked yesterday and refuse to let go of what worked so well for us in the past. Prevailing strategies become self-confirming. To avoid this trap, businesses must stimulate a spirit of inquiry and healthy debate. They must encourage a creative process of self-renewal based on constructive conflict.

Peters and Austin (1985) stressed the importance of nurturing champions and heroes. They said we have a tendency to dismiss new ideas, so to overcome this, we should support those few people in the organization who have the courage to put their career and reputation on the line for an unproven idea.

In 1996, Adrian Slywotzky showed how changes in the business environment are reflected in value migrations between industries, between companies, and within companies.[51] He claimed that recognizing the patterns behind these value migrations is necessary if we wish to understand the world of chaotic change. In “Profit Patterns” (1999) he described businesses as being in a state of strategic anticipation as they try to spot emerging patterns. Slywotzky and his team identified 30 patterns that have transformed industry after industry.[52]

Clayton Christensen (1997) took the position that great companies can fail precisely because they do everything right, since the capabilities of the organization also define its disabilities.[53] Christensen's thesis is that outstanding companies lose their market leadership when confronted with disruptive technology. He called the approach to discovering the emerging markets for disruptive technologies agnostic marketing, i.e., marketing under the implicit assumption that no one - not the company, not the customers - can know how or in what quantities a disruptive product can or will be used before they have experience using it.

A number of strategists use scenario planning techniques to deal with change. The way Peter Schwartz put it in 1991 is that strategic outcomes cannot be known in advance, so the sources of competitive advantage cannot be predetermined. The fast-changing business environment is too uncertain for us to find sustainable value in formulas of excellence or competitive advantage. Instead, scenario planning is a technique in which multiple outcomes can be developed, their implications assessed, and their likelihood of occurrence evaluated. According to Pierre Wack, scenario planning is about insight, complexity, and subtlety, not about formal analysis and numbers.

In 1988, Henry Mintzberg looked at the changing world around him and decided it was time to reexamine how strategic management was done. He examined the strategic process and concluded it was much more fluid and unpredictable than people had thought. Because of this, he could not point to one process that could be called strategic planning. Instead, Mintzberg concluded that there are five types of strategies:

 Strategy as plan - a direction, guide, course of action; intention rather than actual
 Strategy as ploy - a maneuver intended to outwit a competitor
 Strategy as pattern - a consistent pattern of past behaviour; realized rather than intended
 Strategy as position - locating brands, products, or companies within the conceptual framework of consumers or other stakeholders; strategy determined primarily by factors outside the firm
 Strategy as perspective - strategy determined primarily by a master strategist

In 1998, Mintzberg developed these five types of management strategy into 10 “schools of thought”. These 10 schools are grouped into three categories. The first group is prescriptive or normative; it consists of the informal design and conception school, the formal planning school, and the analytical positioning school. The second group, consisting of six schools, is more concerned with how strategic management is actually done, rather than prescribing optimal plans or positions. The six schools are the entrepreneurial, visionary, or great leader school; the cognitive or mental process school; the learning, adaptive, or emergent process school; the power or negotiation school; the corporate culture or collective process school; and the business environment or reactive school. The third and final group consists of one school, the configuration or transformation school, a hybrid of the other schools organized into stages, organizational life cycles, or “episodes”.

In 1999, Constantinos Markides also wanted to reexamine the nature of strategic planning itself. He describes strategy formation and implementation as an ongoing, never-ending, integrated process requiring continuous reassessment and reformation. Strategic management is planned and emergent, dynamic, and interactive. J. Moncrieff (1999) also stresses strategy dynamics. He recognized that strategy is partially deliberate and partially unplanned. The unplanned element comes from two sources: emergent strategies (which result from the emergence of opportunities and threats in the environment) and strategies in action (ad hoc actions by many people from all parts of the organization).

Some business planners are starting to use a complexity theory approach to strategy. Complexity can be thought of as chaos with a dash of order. Chaos theory deals with turbulent systems that rapidly become disordered; complexity is not quite so unpredictable. It involves multiple agents interacting in such a way that a glimpse of structure may appear.

5. Information- and technology-driven strategy

Peter Drucker had theorized the rise of the “knowledge worker” back in the 1950s. He described how fewer workers would be doing physical labor, and more would be applying their minds. In 1984, John Naisbitt theorized that the future would be driven largely by information: companies that managed information well could obtain an advantage; however, the profitability of what he calls the “information float” (information that the company had and others desired) would all but disappear as inexpensive computers made information more accessible.

Daniel Bell (1985) examined the sociological consequences of information technology, while Gloria Schuck and Shoshana Zuboff looked at psychological factors.[61] Zuboff, in her five-year study of eight pioneering corporations, made the important distinction between “automating technologies” and “informating technologies”. She studied the effect that both had on individual workers, managers, and organizational structures. She largely confirmed Peter Drucker's predictions of three decades earlier about the importance of flexible decentralized structure, work teams, knowledge sharing, and the central role of the knowledge worker. Zuboff also detected a new basis for managerial authority, based not on position or hierarchy but on knowledge (also predicted by Drucker), which she called “participative management”.[62]

In 1990, Peter Senge, who had collaborated with Arie de Geus at Dutch Shell, borrowed de Geus' notion of the learning organization, expanded it, and popularized it. The underlying theory is that a company's ability to gather, analyze, and use information is a necessary requirement for business success in the information age. (See organizational learning.) To do this, Senge claimed that an organization would need to be structured such that:[63]

 People can continuously expand their capacity to learn and be productive,
 New patterns of thinking are nurtured,
 Collective aspirations are encouraged, and
 People are encouraged to see the “whole picture” together.

Senge identified five disciplines of a learning organization. They are:

Personal responsibility, self reliance, and mastery — We accept that we are the masters of our own destiny. We make decisions and live with the consequences of them. When a problem needs to be fixed, or an opportunity exploited, we take the initiative to learn the required skills to get it done.

Mental models — We need to explore our personal mental models to understand the subtle effect they have on our behaviour.

Shared vision — The vision of where we want to be in the future is discussed and communicated to all. It provides guidance and energy for the journey ahead.

Team learning — We learn together in teams. This involves a shift from “a spirit of advocacy to a spirit of enquiry”.

Systems thinking — We look at the whole rather than the parts. This is what Senge calls the “Fifth discipline”. It is the glue that integrates the other four into a coherent strategy. For an alternative approach to the “learning organization”, see Garratt, B. (1987).

Since 1990 many theorists have written on the strategic importance of information, including J.B. Quinn, J. Carlos Jarillo, D.L. Barton, Manuel Castells, J.P. Lieleskin, Thomas Stewart, K.E. Sveiby, Gilbert J. Probst, and Shapiro and Varian, to name just a few.

Thomas A. Stewart, for example, uses the term intellectual capital to describe the investment an organization makes in knowledge. It is composed of human capital (the knowledge inside the heads of employees), customer capital (the knowledge inside the heads of customers who decide to buy from you), and structural capital (the knowledge that resides in the company itself).

Manuel Castells describes a network society characterized by globalization, organizations structured as networks, instability of employment, and a social divide between those with access to information technology and those without.

Geoffrey Moore (1991) and R. Frank and P. Cook also detected a shift in the nature of competition. In industries with high technology content, technical standards become established and this gives the dominant firm a near monopoly. The same is true of networked industries in which interoperability requires compatibility between users; an example is word processor documents. Once a product has gained market dominance, other products, even far superior products, cannot compete. Moore showed how firms could attain this enviable position by using E.M. Rogers' five-stage adoption process and focusing on one group of customers at a time, using each group as a base for marketing to the next group. The most difficult step is making the transition between visionaries and pragmatists (see Crossing the Chasm). If successful, a firm can create a bandwagon effect in which the momentum builds and its product becomes a de facto standard.

Evans and Wurster describe how industries with a high information component are being transformed. They cite Encarta's demolition of the Encyclopedia Britannica (whose sales have plummeted 80% since their peak of $650 million in 1990). Encarta's reign was speculated to be short-lived, eclipsed by collaborative encyclopedias like Wikipedia that can operate at very low marginal costs; Encarta was subsequently turned into an online service and dropped at the end of 2009. Evans also mentions the music industry, which is desperately looking for a new business model. Upstart, information-savvy firms, unburdened by cumbersome physical assets, are changing the competitive landscape, redefining market segments, and disintermediating some channels. One manifestation of this is personalized marketing.


Information technology allows marketers to treat each individual as its own market, a market of one. Traditional ideas of market segments will no longer be relevant if personalized marketing is successful.

The technology sector has also provided some strategies directly. For example, from the software development industry, agile software development provides a model for shared development processes.

Access to information systems has allowed senior managers to take a much more comprehensive view of strategic management than ever before. The most notable of the comprehensive systems is the balanced scorecard approach, developed in the early 1990s by Drs. Robert S. Kaplan (Harvard Business School) and David Norton (Kaplan, R. and Norton, D. 1992). It measures several factors (financial, marketing, production, organizational development, and new product development) to achieve a 'balanced' perspective.

6. Knowledge Adaptive Strategy

Most current approaches to business "strategy" focus on the mechanics of management (e.g., Drucker's operational "strategies") and as such are not true business strategy. In a post-industrial world these operationally focused business strategies hinge on conventional sources of advantage that have essentially been eliminated:

Scale used to be very important. But now, with access to capital and a global marketplace, scale is achievable by multiple organizations simultaneously. In many cases, it can literally be rented.

Process improvement or “best practices” were once a favored source of advantage, but they were at best temporary, as they could be copied and adapted by competitors.

Owning the customer had always been thought of as an important form of competitive advantage. Now, however, customer loyalty is far less important and difficult to maintain as new brands and products emerge all the time.

In such a world, differentiation, as elucidated by Michael Porter, Botten and McManus, is the only way to maintain economic or market superiority (i.e., comparative advantage) over competitors. A company must OWN the thing that differentiates it from competitors. Without IP ownership and protection, any product, process or scale advantage can be compromised or entirely lost. Competitors can copy them without fear of economic or legal consequences, thereby eliminating the advantage.

This principle is based on the idea of evolution: differentiation, selection, amplification and repetition. It is a form of strategy for dealing with complex adaptive systems, which individuals, businesses and the economy all are. The principle rests on survival of the "fittest": the fittest strategy, found after trial and error and recombination, is the one employed to run the company in its current market. Failed strategic plans are either discarded or used for another aspect of a business. The trade-off between risk and return is taken into account when deciding which strategy to take. The Cynefin model and the adaptive cycles of businesses are both good ways to develop a knowledge adaptive strategy (see Panarchy and Cynefin). Analyzing the fitness landscape for a product, idea, or service helps develop a more adaptive strategy. (For an explanation and elucidation of the "post-industrial" worldview, see George Ritzer and Daniel Bell.)

7. Strategic decision making processes

Will Mulcaster argues that while much research and creative thought has been devoted to generating alternative strategies, too little work has been done on what influences the quality of strategic decision making and the effectiveness with which strategies are implemented. For instance, in retrospect it can be seen that the financial crisis of 2008-9 could have been avoided if the banks had paid more attention to the risks associated with their investments, but how should banks change the way they make decisions to improve the quality of their decisions in the future? Mulcaster's Managing Forces framework addresses this issue by identifying 11 forces that should be incorporated into the processes of decision making and strategic implementation. The 11 forces are: Time; Opposing forces; Politics; Perception; Holistic effects; Adding value; Incentives; Learning capabilities; Opportunity cost; Risk; Style.

8. The psychology of strategic management

Several psychologists have conducted studies to determine the psychological patterns involved in strategic management. Typically, senior managers have been asked how they go about making strategic decisions.

A 1938 treatise by Chester Barnard, based on his own experience as a business executive, sees the process as informal, intuitive, non-routinized, and involving primarily oral, two-way communications. Barnard says: "The process is the sensing of the organization as a whole and the total situation relevant to it. It transcends the capacity of merely intellectual methods, and the techniques of discriminating the factors of the situation. The terms pertinent to it are 'feeling', 'judgement', 'sense', 'proportion', 'balance', 'appropriateness'. It is a matter of art rather than science."

In 1973, Henry Mintzberg found that senior managers typically deal with unpredictable situations, so they strategize in ad hoc, flexible, dynamic, and implicit ways. He says, "The job breeds adaptive information-manipulators who prefer the live concrete situation. The manager works in an environment of stimulus-response, and he develops in his work a clear preference for live action."

In 1982, John Kotter studied the daily activities of 15 executives and concluded that they spent most of their time developing and working a network of relationships that provided general insights and specific details for strategic decisions. They tended to use "mental road maps" rather than systematic planning techniques.

Daniel Isenberg's 1984 study of senior managers found that their decisions were highly intuitive. Executives often sensed what they were going to do before they could explain why. He claimed in 1986 that one of the reasons for this is the complexity of strategic decisions and the resultant information uncertainty.

Shoshana Zuboff (1988) claims that information technology is widening the divide between senior managers (who typically make strategic decisions) and operational-level managers (who typically make routine decisions). She claims that prior to the widespread use of computer systems, managers, even at the most senior level, engaged in both strategic decisions and routine administration, but as computers facilitated (she called it "deskilled") routine processes, these activities were moved further down the hierarchy, leaving senior management free for strategic decision making.

In 1977, Abraham Zaleznik identified a difference between leaders and managers. He describes leaders as visionaries who inspire and who care about substance, whereas managers are claimed to care about process, plans, and form. He also claimed in 1989 that the rise of the manager was the main factor that caused the decline of American business in the 1970s and 80s. The main difference between a leader and a manager is that a leader has followers while a manager has subordinates. In a capitalistic society leaders make decisions while managers usually follow or execute. Lack of leadership is most damaging at the level of strategic management, where it can paralyze an entire organization.

According to Corner, Kinicki, and Keats, strategic decision making in organizations occurs at two levels: individual and aggregate. They have developed a model of parallel strategic decision making. The model identifies two parallel processes that both involve getting attention, encoding information, storage and retrieval of information, strategic choice, strategic outcome, and feedback. The individual and organizational processes are not independent, however; they interact at each stage of the process.

9. Reasons why strategic plans fail

There are many reasons why strategic plans fail, especially:

Failure to execute by overcoming the four key organizational hurdles
  o Cognitive hurdle
  o Motivational hurdle
  o Resource hurdle
  o Political hurdle

Failure to understand the customer
  o Why do they buy
  o Is there a real need for the product
  o Inadequate or incorrect marketing research

Inability to predict environmental reaction
  o What will competitors do
      Fighting brands
      Price wars
  o Will government intervene

Over-estimation of resource competence
  o Can the staff, equipment, and processes handle the new strategy
  o Failure to develop new employee and management skills

Failure to coordinate
  o Reporting and control relationships not adequate
  o Organizational structure not flexible enough

Failure to obtain senior management commitment
  o Failure to get management involved right from the start
  o Failure to obtain sufficient company resources to accomplish task

Failure to obtain employee commitment
  o New strategy not well explained to employees
  o No incentives given to workers to embrace the new strategy

Under-estimation of time requirements
  o No critical path analysis done

Failure to follow the plan
  o No follow through after initial planning
  o No tracking of progress against plan
  o No consequences for above

Failure to manage change
  o Inadequate understanding of the internal resistance to change
  o Lack of vision on the relationships between processes, technology and organization

Poor communications
  o Insufficient information sharing among stakeholders
  o Exclusion of stakeholders and delegates

10. Limitations of strategic management

Although a sense of direction is important, it can also stifle creativity, especially if it is rigidly enforced. In an uncertain and ambiguous world, fluidity can be more important than a finely tuned strategic compass. When a strategy becomes internalized into a corporate culture, it can lead to group think. It can also cause an organization to define itself too narrowly. An example of this is marketing myopia.

Many theories of strategic management tend to undergo only brief periods of popularity. A summary of these theories thus inevitably exhibits survivorship bias (itself an area of research in strategic management). Many theories tend either to be too narrow in focus to build a complete corporate strategy on, or too general and abstract to be applicable to specific situations. Populism or faddishness can have an impact on a particular theory's life cycle and may see application in inappropriate circumstances. See business philosophies and popular management theories for a more critical view of management theories.

In 2000, Gary Hamel coined the term strategic convergence to explain the limited scope of the strategies being used by rivals in greatly differing circumstances. He lamented that strategies converge more than they should, because the more successful ones are imitated by firms that do not understand that the strategic process involves designing a custom strategy for the specifics of each situation.

Ram Charan, aligning with a popular marketing tagline, believes that strategic planning must not dominate action. "Just do it!", while not quite what he meant, is a phrase that nevertheless comes to mind when combatting analysis paralysis.

The linearity trap

It is tempting to think that the elements of strategic management – (i) reaching consensus on corporate objectives; (ii) developing a plan for achieving the objectives; and (iii) marshalling and allocating the resources required to implement the plan – can be approached sequentially. It would be convenient, in other words, if one could deal first with the noble question of ends, and then address the mundane question of means.

But in the world where strategies must be implemented, the three elements are interdependent. Means are as likely to determine ends as ends are to determine means. The objectives that an organization might wish to pursue are limited by the range of feasible approaches to implementation. (There will usually be only a small number of approaches that will not only be technically and administratively possible, but also satisfactory to the full range of organizational stakeholders.) In turn, the range of feasible implementation approaches is determined by the availability of resources.

And so, although participants in a typical "strategy session" may be asked to do "blue sky" thinking where they pretend that the usual constraints – resources, acceptability to stakeholders, administrative feasibility – have been lifted, the fact is that it rarely makes sense to divorce oneself from the environment in which a strategy will have to be implemented. It is probably impossible to think in any meaningful way about strategy in an unconstrained environment. Our brains cannot process "boundless possibilities", and the very idea of strategy only has meaning in the context of challenges or obstacles to be overcome. It is at least as plausible to argue that acute awareness of constraints is the very thing that stimulates creativity by forcing us to constantly reassess both means and ends in light of circumstances.

The key question, then, is: "How can individuals, organizations and societies cope as well as possible with ... issues too complex to be fully understood, given the fact that actions initiated on the basis of inadequate understanding may lead to significant regret?"

The answer is that the process of developing organizational strategy must be iterative. It involves toggling back and forth between questions about objectives, implementation planning and resources. An initial idea about corporate objectives may have to be altered if there is no feasible implementation plan that will meet with a sufficient level of acceptance among the full range of stakeholders, or because the necessary resources are not available, or both.

Even the most talented manager would no doubt agree that "comprehensive analysis is impossible" for complex problems. Formulation and implementation of strategy must thus occur side-by-side rather than sequentially, because strategies are built on assumptions that, in the absence of perfect knowledge, are never perfectly correct. Strategic management is necessarily a "...repetitive learning cycle [rather than] a linear progression towards a clearly defined final destination." While assumptions can and should be tested in advance, the ultimate test is implementation. You will inevitably need to adjust corporate objectives and/or your approach to pursuing outcomes and/or assumptions about required resources.

Thus a strategy will get remade during implementation because "humans rarely can proceed satisfactorily except by learning from experience; and modest probes, serially modified on the basis of feedback, usually are the best method for such learning." It serves little purpose (other than to provide a false aura of certainty sometimes demanded by corporate strategists and planners) to pretend to anticipate every possible consequence of a corporate decision, every possible constraining or enabling factor, and every possible point of view. At the end of the day, what matters for the purposes of strategic management is having a clear view, based on the best available evidence and on defensible assumptions, of what it seems possible to accomplish within the constraints of a given set of circumstances. As the situation changes, some opportunities for pursuing objectives will disappear and others arise. Some implementation approaches will become impossible, while others, previously impossible or unimagined, will become viable.


The essence of being "strategic" thus lies in a capacity for "intelligent trial-and-error" rather than linear adherence to finely honed and detailed strategic plans. Strategic management will add little value, and may well do harm, if organizational strategies are designed to be used as detailed blueprints for managers. Strategy should be seen, rather, as laying out the general path, but not the precise steps, an organization will follow to create value. Strategic management is a question of interpreting, and continuously reinterpreting, the possibilities presented by shifting circumstances for advancing an organization's objectives. Doing so requires strategists to think simultaneously about desired objectives, the best approach for achieving them, and the resources implied by the chosen approach. It requires a frame of mind that admits of no boundary between means and ends.

Strategic management may not be as limiting as suggested in "The linearity trap" above. Strategic thinking and identification take place within the ambit of organizational capacity and industry dynamics. The two common approaches to strategic analysis are value analysis and SWOT analysis. Strategic analysis does take place within the constraints of existing and potential organizational resources, but it would not be appropriate to call this a trap. For example, the SWOT tool involves analysis of the organization's internal environment (strengths and weaknesses) and its external environment (opportunities and threats). The organization's strategy is built using its strengths to exploit opportunities, while managing the risks arising from internal weaknesses and external threats. It further involves contrasting its strengths and weaknesses to determine if the organization has enough strengths to offset its weaknesses. Applying the same logic, at the external level, a contrast is made between the externally existing opportunities and threats to determine if the organization is capitalizing enough on opportunities to offset emerging threats.


CHAPTER 5

TOOLS OF SIX SIGMA

1. Strategy formation

Strategy formation is a combination of three main processes, which are as follows:

Performing a situation analysis, self-evaluation and competitor analysis: both internal and external; both micro-environmental and macro-environmental.

Concurrent with this assessment, objectives are set. These objectives should be set along a time-line: some are short-term and others long-term. This involves crafting vision statements (a long-term view of a possible future), mission statements (the role that the organization gives itself in society), overall corporate objectives (both financial and strategic), strategic business unit objectives (both financial and strategic), and tactical objectives.

These objectives should, in the light of the situation analysis, suggest a strategic plan. The plan provides the details of how to achieve these objectives.

2. Strategy evaluation

To measure the effectiveness of the organizational strategy, it is extremely important to conduct a SWOT analysis to figure out the internal strengths and weaknesses, and the external opportunities and threats, of the entity in business. This may require taking certain precautionary measures or even changing the entire strategy.

SWOT analysis is a strategic planning method used to evaluate the Strengths, Weaknesses, Opportunities, and Threats involved in a project or in a business venture. It involves specifying the objective of the business venture or project and identifying the internal and external factors that are favorable and unfavorable to achieving that objective. The technique is credited to Albert Humphrey, who led a convention at Stanford University in the 1960s and 1970s using data from Fortune 500 companies. A SWOT analysis must first start with defining a desired end state or objective. A SWOT analysis may be incorporated into the strategic planning model. Strategic planning has been the subject of much research.

Strengths: characteristics of the business or team that give it an advantage over others in the industry.

Weaknesses: characteristics that place the firm at a disadvantage relative to others.

Opportunities: external chances to make greater sales or profits in the environment.

Threats: external elements in the environment that could cause trouble for the business.

Identification of SWOTs is essential because subsequent steps in the process of planning for achievement of the selected objective may be derived from the SWOTs. First, the decision makers have to determine whether the objective is attainable, given the SWOTs. If the objective is NOT attainable, a different objective must be selected and the process repeated. The SWOT analysis is often used in academia to highlight and identify strengths, weaknesses, opportunities and threats. It is particularly helpful in identifying areas for development.

Matching and converting

Another way of utilizing SWOT is matching and converting. Matching is used to find competitive advantages by matching the strengths to opportunities. Converting is to apply conversion strategies to convert weaknesses or threats into strengths or opportunities.


An example of conversion strategy is to find new markets. If the threats or weaknesses cannot be converted a company should try to minimize or avoid them.

Evidence on the use of SWOT

SWOT analysis may limit the strategies considered in the evaluation. J. Scott Armstrong notes that "people who use SWOT might conclude that they have done an adequate job of planning and ignore such sensible things as defining the firm's objectives or calculating ROI for alternate strategies." Findings from Menon et al. (1999) and Hill and Westbrook (1997) have shown that SWOT may harm performance. As an alternative to SWOT, Armstrong describes a 5-step alternative approach that leads to better corporate performance.

Internal and external factors

The aim of any SWOT analysis is to identify the key internal and external factors that are important to achieving the objective. These come from within the company's unique value chain. SWOT analysis groups key pieces of information into two main categories:

Internal factors – the strengths and weaknesses internal to the organization.

External factors – the opportunities and threats presented by the external environment to the organization.

The internal factors may be viewed as strengths or weaknesses depending upon their impact on the organization's objectives. What may represent strengths with respect to one objective may be weaknesses for another objective. The factors may include all of the 4P's; as well as personnel, finance, manufacturing capabilities, and so on. The external factors may include macroeconomic matters, technological change, legislation, and socio-cultural changes, as well as changes in the marketplace or competitive position. The results are often presented in the form of a matrix.

Four P's

Elements of the marketing mix are often referred to as the "Four P's":

Product - It is a tangible object or an intangible service that is mass produced or manufactured on a large scale with a specific volume of units. Intangible products are service-based, like the tourism and hotel industries, or code-based products like cellphone load and credits. Typical examples of a mass-produced tangible object are the motor car and the disposable razor. A less obvious but ubiquitous mass-produced service is a computer operating system. Packaging also needs to be taken into consideration. Every product is subject to a life-cycle, including a growth phase followed by an eventual period of decline as the product approaches market saturation. To retain its competitiveness in the market, product differentiation is required; it is one of the strategies used to distinguish a product from its competitors.

Price – The price is the amount a customer pays for the product. The business may increase or decrease the price of product if other stores have the same product.

Place – Place represents the location where a product can be purchased. It is often referred to as the distribution channel. It can include any physical store as well as virtual stores on the Internet.

Promotion represents all of the communications that a marketer may use in the marketplace. Promotion has four distinct elements: advertising, public relations, personal selling and sales promotion. A certain amount of crossover occurs when promotion uses the four principal elements together, which is common in film promotion. Advertising covers any communication that is paid for, from cinema commercials, radio and Internet adverts through print media and billboards. Public relations are where the communication is not directly paid for and include press releases, sponsorship deals, exhibitions, conferences, seminars or trade fairs and events. Word of mouth is any apparently informal communication about the product by ordinary individuals, satisfied customers or people specifically engaged to create word-of-mouth momentum. Sales staff often play an important role in word of mouth and public relations (see Product above).

Any organization, before introducing its products or services into the market, conducts a market survey. The sequence of all the P's above is very important in every stage of the product life cycle: Introduction, Growth, Maturity and Decline.

Extended Marketing Mix (3 Ps)

More recently, three more Ps have been added to the marketing mix namely People, Process and Physical Evidence. This marketing mix is known as Extended Marketing Mix.

People: All people involved with the consumption of a service are important, for example workers, management and consumers. It also defines market segmentation, mainly demographic segmentation, addressing the particular class of people for whom the product or service is made available.

Process: The procedure, mechanism and flow of activities by which services are used, and also the procedure by which the product will reach the end user.

Physical Evidence: The marketing strategy should include effectively communicating existing customers' satisfaction to potential customers.

Four Cs (1) in 7Cs compass model

A formal approach to this customer-focused marketing mix is known as the Four Cs (Commodity, Cost, Channel, Communication) in the "7Cs compass model". Koichi Shimizu proposed a four Cs classification in 1973. This system is basically the four Ps renamed and reworded to provide a customer focus. The four Cs model provides a demand/customer-centric alternative to the well-known four Ps supply-side model (product, price, place, promotion) of marketing management. The Four Cs model is more consumer-oriented and attempts to better fit the movement from mass marketing to symbiotic marketing.

1. Commodity (original meaning of Latin commodus = convenient): the product for the consumers or citizens. A commodity can also be described as raw materials, such as oil, metal ores or wheat; the prices of these tend to change on a daily basis due to demand and supply.

2. Cost (original meaning of Latin constare = it makes sacrifices): producing cost, selling cost, purchasing cost and social cost.

3. Channel (original meaning: a canal): flow of commodity, i.e. marketing channels.

4. Communication (original meaning of Latin communio = sharing of meaning): marketing communication (it is not mere sales promotion).

Framework of the 7Cs compass model:

(C1) Corporation and competitor: the core of the 4Cs is the corporation and organization, while the core of the 4Ps is customers, who are the targets for attacks or defenses.
(C2) Commodity
(C3) Cost
(C4) Channel
(C5) Communication
(C6) Consumer (the needle of the compass points to the Consumer): the factors related to customers can be explained by the first character of the four directions marked on the compass model: N = Needs, W = Wants, S = Security and E = Education (consumer education).
(C7) Circumstances (the needle of the compass points to Circumstances): in addition to the customer, there are various uncontrollable external environmental factors encircling the companies. These can also be explained by the first character of the four directions marked on the compass model: N = National and International Circumstances, W = Weather, S = Social and Cultural Circumstances, E = Economic Circumstances.

Four Cs (2)

Robert F. Lauterborn proposed a four Cs (2) classification in 1993. This Four Cs model is more consumer-oriented and attempts to better fit the movement from mass marketing to niche marketing. The Product part of the Four Ps model is replaced by Consumer or Consumer Models, shifting the focus to satisfying consumer needs. Another C replacement for Product is Capable: by defining offerings as individual capabilities that, when combined and focused on a specific industry, create a custom solution rather than pigeon-holing a customer into a product. Pricing is replaced by Cost, reflecting the total cost of ownership. Many factors affect Cost, including but not limited to the customer's cost to change or implement the new product or service and the customer's cost of not selecting a competitor's product or service. Placement is replaced by Convenience. With the rise of the Internet and hybrid models of purchasing, Place is becoming less relevant. Convenience takes into account the ease of buying the product, finding the product, finding information about the product, and several other factors. Finally, the Promotions feature is replaced by Communication, which represents a broader focus than simply Promotions. Communications can include advertising, public relations, personal selling, viral advertising, and any form of communication between the firm and the consumer.

SWOT analysis is just one method of categorization and has its own weaknesses. For example, it may tend to persuade companies to compile lists rather than think about what is actually important in achieving objectives. It also presents the resulting lists uncritically and without clear prioritization so that, for example, weak opportunities may appear to balance strong threats.

It is prudent not to eliminate any candidate SWOT entry too quickly. The importance of individual SWOTs will be revealed by the value of the strategies they generate. A SWOT item that produces valuable strategies is important; a SWOT item that generates no strategies is not.

Use of SWOT analysis

The usefulness of SWOT analysis is not limited to profit-seeking organizations. SWOT analysis may be used in any decision-making situation when a desired end-state (objective) has been defined. Examples include: non-profit organizations, governmental units, and individuals. SWOT analysis may also be used in pre-crisis planning and preventive crisis management. SWOT analysis may also be used in creating a recommendation during a viability study/survey.

SWOT - landscape analysis


The SWOT-landscape systematically deploys the relationships between overall objective and underlying SWOT-factors and provides an interactive, query-able 3D landscape.

The SWOT-landscape captures different managerial situations by visualizing and foreseeing the dynamic performance of comparable objects, according to findings by Brendan Kitts, Leif Edvinsson and Tord Beding (2000). Changes in relative performance are continually identified, and projects (or other units of measurement) that could be potential risk or opportunity objects are highlighted. The SWOT-landscape also indicates which underlying strength/weakness factors have had, or are likely to have, the highest influence in the context of value in use (for example, capital value fluctuations).

Corporate planning

As part of the development of strategies and plans to enable the organization to achieve its objectives, the organization will use a systematic, rigorous process known as corporate planning. SWOT, alongside PEST/PESTLE, can be used as a basis for the analysis of business and environmental factors.

Set objectives – defining what the organization is going to do

Environmental scanning
  o Internal appraisals of the organization's SWOT; this needs to include an assessment of the present situation as well as a portfolio of products/services and an analysis of the product/service life cycle

Analysis of existing strategies – this should determine relevance from the results of an internal/external appraisal, and may include gap analysis, which will look at environmental factors

Strategic issues defined – key factors in the development of a corporate plan which need to be addressed by the organization

Develop new/revised strategies – revised analysis of strategic issues may mean the objectives need to change

Establish critical success factors – the achievement of objectives and strategy implementation

Preparation of operational, resource and project plans for strategy implementation

Monitoring results – mapping against plans, taking corrective action, which may mean amending objectives/strategies

3. Marketing Analyses

In many competitor analyses, marketers build detailed profiles of each competitor in the market, focusing especially on their relative competitive strengths and weaknesses using SWOT analysis. Marketing managers will examine each competitor's cost structure, sources of profits, resources and competencies, competitive positioning and product differentiation, degree of vertical integration, historical responses to industry developments, and other factors.

Marketing management often finds it necessary to invest in research to collect the data required to perform accurate marketing analysis. Accordingly, management often conducts market research (alternately marketing research) to obtain this information. Marketers employ a variety of techniques to conduct market research, but some of the more common include:

Qualitative marketing research, such as focus groups
Quantitative marketing research, such as statistical surveys
Experimental techniques, such as test markets
Observational techniques, such as ethnographic (on-site) observation


Marketing managers may also design and oversee various environmental scanning and competitive intelligence processes to help identify trends and inform the company's marketing analysis.

Using SWOT to analyse the market position of a small management consultancy with specialism in HRM.

Strengths: Reputation in marketplace; Expertise at partner level in HRM consultancy.

Weaknesses: Shortage of consultants at operating level rather than partner level; Unable to deal with multi-disciplinary assignments because of size or lack of ability.

Opportunities: Well-established position with a well-defined market niche; Identified market for consultancy in areas other than HRM.

Threats: Large consultancies operating at a minor level; Other small consultancies looking to invade the marketplace.

In corporate strategy, Johnson, Scholes and Whittington present a model in which strategic options are evaluated against three key success criteria:

Suitability (would it work?)
Feasibility (can it be made to work?)
Acceptability (will they work it?)

Suitability
Suitability deals with the overall rationale of the strategy. The key point to consider is whether the strategy would address the key strategic issues underlined by the organisation's strategic position.

Does it make economic sense? Would the organization obtain economies of scale or economies of scope? Would it be suitable in terms of environment and capabilities?

Tools that can be used to evaluate suitability include:

Ranking strategic options


4. Decision trees

A decision tree is a decision support tool that uses a tree-like graph or model of decisions and their possible consequences, including chance event outcomes, resource costs, and utility. It is one way to display an algorithm. Decision trees are commonly used in operations research, specifically in decision analysis, to help identify a strategy most likely to reach a goal. Another use of decision trees is as a descriptive means for calculating conditional probabilities.

In decision analysis, a "decision tree" — and the closely-related influence diagram — is used as a visual and analytical decision support tool, where the expected values (or expected utility) of competing alternatives are calculated.

Decision trees have traditionally been created manually, as the following example shows:

A decision tree consists of 3 types of nodes:
1. Decision nodes - commonly represented by squares
2. Chance nodes - represented by circles
3. End nodes - represented by triangles

Drawn from left to right, a decision tree has only burst nodes (splitting paths) but no sink nodes (converging paths). Therefore, used manually, they can grow very big and are then often hard to draw fully by hand.
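To make the expected-value arithmetic behind such a tree concrete, here is a minimal Python sketch; the two strategies, their probabilities and their payoffs are hypothetical illustrations, not figures taken from this text.

# Minimal sketch: evaluating a decision node by expected monetary value (EMV).
# Strategies, probabilities and payoffs below are hypothetical.
def expected_value(outcomes):
    # Expected value of a chance node given (probability, payoff) pairs.
    return sum(p * payoff for p, payoff in outcomes)

strategies = {
    "Strategy A": [(0.4, 1_000_000), (0.6, -200_000)],  # chance node behind branch A
    "Strategy B": [(0.7, 300_000), (0.3, 50_000)],      # chance node behind branch B
}

# The decision node selects the branch with the highest expected value.
for name, outcomes in strategies.items():
    print(name, "EMV =", expected_value(outcomes))
print("Choose:", max(strategies, key=lambda s: expected_value(strategies[s])))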


Analysis can take into account the decision maker's (e.g., the company's) preference or utility function, for example:

The basic interpretation in this situation is that the company prefers B's risk and payoffs under realistic risk preference coefficients (greater than $400K—in that range of risk aversion, the company would need to model a third strategy, "Neither A nor B").

Influence diagram . A decision tree can be represented more compactly as an influence diagram, focusing attention on the issues and relationships between events.

The squares represent decisions, the ovals represent action, and the diamond represents results.

Uses in teaching

Decision trees, influence diagrams, utility functions, and other decision analysis tools and methods are taught to undergraduate students in schools of business, health economics, and public health, and are examples of operations research or management science methods.

Advantages

Amongst decision support tools, decision trees (and influence diagrams) have several advantages. Decision trees:

Are simple to understand and interpret. People are able to understand decision tree models after a brief explanation.

Have value even with little hard data. Important insights can be generated based on experts describing a situation (its alternatives, probabilities, and costs) and their preferences for outcomes.

Use a white box model. If a given result is provided by a model, the explanation for the result is easily replicated by simple math.


Can be combined with other decision techniques. The following example uses Net Present Value calculations, PERT 3-point estimations (decision #1) and a linear distribution of expected outcomes (decision #2):

Example

Decision trees can be used to optimize an investment portfolio. The following example shows a portfolio of 7 investment options (projects). The organization has $10,000,000 available for the total investment. Bold lines mark the best selection, projects 1, 3, 5, 6, and 7, which will cost $9,750,000 and create a payoff of $16,175,000. All other combinations would either exceed the budget or yield a lower payoff.
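The selection logic can be sketched in a few lines of Python by enumerating subsets of projects and keeping the best payoff that fits the budget. The individual project costs and payoffs below are hypothetical placeholders, merely chosen to be consistent with the totals quoted above, since the text does not give the per-project figures.

# Brute-force portfolio selection under a budget; costs and payoffs are illustrative only.
from itertools import combinations

budget = 10_000_000
projects = {  # project number: (cost, payoff)
    1: (2_000_000, 3_000_000),
    2: (3_000_000, 3_200_000),
    3: (1_500_000, 2_500_000),
    4: (4_000_000, 4_100_000),
    5: (2_500_000, 4_000_000),
    6: (1_750_000, 3_175_000),
    7: (2_000_000, 3_500_000),
}

best_payoff, best_set = 0, ()
for r in range(1, len(projects) + 1):
    for subset in combinations(projects, r):
        cost = sum(projects[p][0] for p in subset)
        payoff = sum(projects[p][1] for p in subset)
        if cost <= budget and payoff > best_payoff:
            best_payoff, best_set = payoff, subset

print("Best selection:", best_set, "payoff:", best_payoff)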

Decision tree learning

Decision tree learning, used in statistics, data mining and machine learning, uses a decision tree as a predictive model which maps observations about an item to conclusions about the item's target value. More descriptive names for such tree models are classification trees or regression trees. In these tree structures, leaves represent classifications and branches represent conjunctions of features that lead to those classifications.

In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data but not decisions; rather, the resulting classification tree can be an input for decision making. This section deals with decision trees in data mining.
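As a small, concrete illustration of decision tree learning as a predictive model, the sketch below uses scikit-learn, a library chosen here for illustration and not mentioned in the text, to fit a shallow classification tree and print its rules.

# Decision-tree-learning sketch (scikit-learn and the iris data set are assumptions of this example).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Limiting max_depth keeps the tree small, one simple guard against overfitting.
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

print("Test accuracy:", clf.score(X_test, y_test))
# The fitted tree is a white-box model: its decision rules can be printed and read directly.
print(export_text(clf, feature_names=load_iris().feature_names))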

Decision tree advantages
Amongst other data mining methods, decision trees have various advantages:

Simple to understand and interpret. People are able to understand decision tree models after a brief explanation.

Requires little data preparation. Other techniques often require data normalisation, the creation of dummy variables, and the removal of blank values.

Able to handle both numerical and categorical data. Other techniques are usually specialised in analysing datasets that have only one type of variable. For example, relation rules can be used only with nominal variables, while neural networks can be used only with numerical variables.

Uses a white box model. If a given situation is observable in a model the explanation for the condition is easily explained by boolean logic. An example of a black box model is an artificial neural network since the explanation for the results is difficult to understand.


Possible to validate a model using statistical tests. That makes it possible to account for the reliability of the model.

Robust. Performs well even if its assumptions are somewhat violated by the true model from which the data were generated.

Performs well with large data sets in a short time. Large amounts of data can be analysed using personal computers in a time short enough to enable stakeholders to make decisions based on the analysis.

Limitations

The problem of learning an optimal decision tree is known to be NP-complete under several aspects of optimality and even for simple concepts. Consequently, practical decision-tree learning algorithms are based on heuristic algorithms such as the greedy algorithm where locally optimal decisions are made at each node. Such algorithms cannot guarantee to return the globally optimal decision tree. Recent developments suggest the use of genetic algorithms to avoid local optimal decisions and search the decision tree space with little a priori bias.

Decision-tree learners can create over-complex trees that do not generalise the data well. This is called overfitting.[9] Mechanisms such as pruning are necessary to avoid this problem.

There are concepts that are hard to learn because decision trees do not express them easily, such as XOR, parity or multiplexer problems. In such cases, the decision tree becomes prohibitively large. Approaches to solve the problem involve either changing the representation of the problem domain (known as propositionalisation) [10] or using learning algorithms based on more expressive representations (such as statistical relational learning or inductive logic programming).

Extending decision trees with decision graphs

In a decision tree, all paths from the root node to a leaf node proceed by way of conjunction, or AND. In a decision graph, it is possible to use disjunctions (ORs) to join two or more paths together using Minimum Message Length (MML). Decision graphs have been further extended to allow for previously unstated new attributes to be learnt dynamically and used at different places within the graph. The more general coding scheme results in better predictive accuracy and log-loss probabilistic scoring. In general, decision graphs infer models with fewer leaves than decision trees.

Implementations

Weka, a free and open-source data mining suite, contains many decision tree algorithms. Weka (Waikato Environment for Knowledge Analysis) is a popular suite of machine learning software written in Java, developed at the University of Waikato, New Zealand. Weka is free software available under the GNU General Public License.

Orange, a free data mining software suite, module orngTree. Orange is a component-based data mining and machine learning software suite, featuring a friendly yet powerful and flexible visual programming front-end for explorative data analysis and visualization, plus Python bindings and libraries for scripting. It includes a comprehensive set of components for data preprocessing, feature scoring and filtering, modeling, model evaluation, and exploration techniques. It is implemented in C++ (for speed) and Python (for flexibility). Its graphical user interface builds upon the cross-platform Qt framework. Orange is distributed free under the GPL.

Sipina, a free decision tree software package, including an interactive tree builder. SIPINA is especially intended for decision tree induction (also called classification trees). Use TANAGRA if you want other kinds of analyses, such as clustering algorithms, SVM, factorial analysis, or statistical hypothesis testing.


KNIME, the Konstanz Information Miner, is a user-friendly, coherent open source data analytics, reporting and integration platform. KNIME integrates various components for machine learning and data mining through its modular data pipelining concept. The graphical user interface allows the quick and easy assembly of nodes for data preprocessing (ETL: Extraction, Transformation, Loading), for modeling, and for data analysis and visualization. KNIME has been used in pharmaceutical research since 2006, but is also used in other areas such as CRM customer data analysis, business intelligence and financial data analysis.

5. FORECASTING

Forecasting is the process of making statements about events whose actual outcomes (typically) have not yet been observed. A commonplace example might be estimation of some variable of interest at some specified future date. Prediction is a similar, but more general, term. Both might refer to formal statistical methods employing time series, cross-sectional or longitudinal data, or alternatively to less formal judgemental methods. Usage can differ between areas of application: for example, in hydrology the terms "forecast" and "forecasting" are sometimes reserved for estimates of values at certain specific future times, while the term "prediction" is used for more general estimates, such as the number of times floods will occur over a long period.

Risk and uncertainty are central to forecasting and prediction; it is generally considered good practice to indicate the degree of uncertainty attaching to forecasts. The process of climate change and increasing energy prices has led to the use of Egain Forecasting for buildings. This method uses forecasting to reduce the energy needed to heat the building, thus reducing the emission of greenhouse gases. Forecasting is used in Customer Demand Planning in everyday business forecasting for manufacturing companies. The discipline of demand planning, also sometimes referred to as supply chain forecasting, embraces both statistical forecasting and a consensus process.

An important, albeit often ignored, aspect of forecasting is its relationship with planning. Forecasting can be described as predicting what the future will look like, whereas planning predicts what the future should look like. There is no single right forecasting method to use; selection of a method should be based on your objectives and your conditions (data etc.), for example with the help of a method-selection tree.

Categories of forecasting methods

Time series methods
Time series methods use historical data as the basis of estimating future outcomes.

Moving average
Weighted moving average
Exponential smoothing (a minimal code sketch follows this list)
Autoregressive moving average (ARMA)
Autoregressive integrated moving average (ARIMA), e.g. Box-Jenkins
Extrapolation
Linear prediction
Trend estimation
Growth curve
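As noted above, here is a minimal sketch of simple exponential smoothing; the demand series and the smoothing constant alpha are hypothetical.

# Simple exponential smoothing: F(t+1) = alpha * Y(t) + (1 - alpha) * F(t).
def exponential_smoothing(series, alpha=0.3):
    forecasts = [series[0]]                 # initialise with the first observation
    for y in series[:-1]:
        forecasts.append(alpha * y + (1 - alpha) * forecasts[-1])
    return forecasts                        # one-step-ahead forecast for each period

demand = [120, 135, 128, 150, 160, 155]     # hypothetical monthly demand
print(exponential_smoothing(demand))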

Causal / econometric forecasting methods
Some forecasting methods use the assumption that it is possible to identify the underlying factors that might influence the variable that is being forecast. For example, including information about weather conditions might improve the ability of a model to predict umbrella sales. Such methods include:


Regression analysis includes a large group of methods that can be used to predict future values of a variable using information about other variables. These methods include both parametric (linear or non-linear) and non-parametric techniques; a small regression-based sketch follows this list.

Autoregressive moving average with exogenous inputs (ARMAX)
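To illustrate the regression idea mentioned above, here is a minimal example using only numpy; the rainfall and umbrella-sales figures are hypothetical.

# Forecasting umbrella sales from rainfall with simple linear regression (least squares fit).
import numpy as np

rainfall_mm = np.array([10, 30, 50, 80, 120])       # explanatory variable
umbrella_sales = np.array([40, 70, 95, 140, 210])   # variable being forecast

slope, intercept = np.polyfit(rainfall_mm, umbrella_sales, 1)
print("Forecast sales at 100 mm of rain:", slope * 100 + intercept)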

Judgmental methods
Judgmental forecasting methods incorporate intuitive judgements, opinions and subjective probability estimates.

Composite forecasts
Surveys
Delphi method
Scenario building
Technology forecasting
Forecast by analogy

Artificial intelligence methods

Artificial neural networks
Support vector machines

Other methods

Simulation
Prediction market
Probabilistic forecasting and ensemble forecasting
Reference class forecasting

Forecasting accuracy

The forecast error is the difference between the actual value and the forecast value for the corresponding period:

E_t = Y_t - F_t

where E_t is the forecast error at period t, Y_t is the actual value at period t, and F_t is the forecast for period t.

Measures of aggregate error (standard definitions are sketched after this list):

Mean Absolute Error (MAE)

Mean Absolute Percentage Error (MAPE)

Percent Mean Absolute Deviation (PMAD)

Mean squared error (MSE)

Root Mean squared error (RMSE)


Forecast skill (SS)
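The formula images for these measures did not survive in this summary; the following are the standard textbook definitions (assumed, not copied from the source), with E_t = Y_t - F_t the forecast error and n the number of periods:

\mathrm{MAE} = \frac{1}{n}\sum_{t=1}^{n}\lvert E_t\rvert
\qquad
\mathrm{MAPE} = \frac{100\%}{n}\sum_{t=1}^{n}\left\lvert\frac{E_t}{Y_t}\right\rvert
\qquad
\mathrm{PMAD} = \frac{\sum_{t=1}^{n}\lvert E_t\rvert}{\sum_{t=1}^{n}\lvert Y_t\rvert}
\qquad
\mathrm{MSE} = \frac{1}{n}\sum_{t=1}^{n}E_t^{2}
\qquad
\mathrm{RMSE} = \sqrt{\mathrm{MSE}}

Forecast skill is usually expressed relative to a reference forecast: \mathrm{SS} = 1 - \mathrm{MSE}_{\text{forecast}}/\mathrm{MSE}_{\text{reference}}.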

Please note that business forecasters and practitioners sometimes use different terminology in the industry. They refer to the PMAD as the MAPE, although they compute it as a volume-weighted MAPE. For more information see Calculating Demand Forecast Accuracy.

Reference class forecasting was developed to increase forecasting accuracy.

Applications of forecasting

Forecasting has application in many situations:

Supply chain management - Forecasting can be used in Supply Chain Management to make sure that the right product is at the right place at the right time. Accurate forecasting will help retailers reduce excess inventory and therefore increase profit margin. Accurate forecasting will also help them meet consumer demand.

Weather forecasting, flood forecasting and meteorology
Transport planning and transportation forecasting
Economic forecasting
Egain Forecasting
Technology forecasting
Earthquake prediction
Land use forecasting
Product forecasting
Player and team performance in sports
Telecommunications forecasting
Political forecasting
Sales forecasting
Break-even analysis

Break-even (or break even) is a point at which a difference changes from minus to plus, i.e. where the quantity in question changes side.

In economics
A technique for identifying the point where total revenue is just sufficient to cover total cost. The formula for the break-even point is fixed costs divided by the contribution per unit. In economics and business, specifically cost accounting, the break-even point (BEP) is the point at which cost or expenses and revenue are equal: there is no net loss or gain, and one has "broken even". A profit or a loss has not been made, although opportunity costs have been paid, and capital has received the risk-adjusted, expected return.
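A small worked sketch of the break-even formula follows; the figures are hypothetical.

# Break-even point = fixed costs / contribution per unit (illustrative numbers only).
fixed_costs = 50_000               # e.g. rent and salaries
price_per_unit = 25.0
variable_cost_per_unit = 15.0

contribution_per_unit = price_per_unit - variable_cost_per_unit
break_even_units = fixed_costs / contribution_per_unit
print("Break-even volume:", break_even_units, "units")   # 5000.0 units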

In other fields
In nuclear fusion research, the term breakeven refers to a fusion energy gain factor equal to unity; this is also known as the Lawson criterion. The notion can also be found in more general phenomena, such as percolation, and is rather similar to the critical threshold. In energy, the breakeven point is the point where the usable energy obtained from a process exceeds the input energy. In computer science, the (less usual) term refers to a point in the life cycle of a programming language where the language can be used to code its own compiler or interpreter; this is also called self-hosting. In medicine, it is a postulated state in which the advances of medicine permit, every year, an increase of one year or more in the life expectancy of the living, therefore leading to medical immortality (barring accidental death).


6. Simple Random Sampling

To understand sampling, you need to first understand a few basic definitions.

The total set of observations that can be made is called the population.
A sample is a set of observations drawn from a population.
A parameter is a measurable characteristic of a population, such as a mean or standard deviation.
A statistic is a measurable characteristic of a sample, such as a mean or standard deviation.
A sampling method is a procedure for selecting sample elements from a population.
A random number is a number determined totally by chance, with no predictable relationship to any other number.
A random number table is a list of numbers composed of the digits 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. Numbers in the list are arranged so that each digit has no predictable relationship to the digits that precede or follow it. In short, the digits are arranged randomly. The numbers in a random number table are random numbers.

Simple random sampling refers to a sampling method that has the following properties:
The population consists of N objects.
The sample consists of n objects.
All possible samples of n objects are equally likely to occur.

The main benefit of simple random sampling is that it guarantees that the sample chosen is representative of the population. This ensures that the statistical conclusions will be valid. There are many ways to obtain a simple random sample. One way would be the lottery method. Each of the N population members is assigned a unique number. The numbers are placed in a bowl and thoroughly mixed. Then, a blindfolded researcher selects n numbers. Population members having the selected numbers are included in the sample.
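The lottery method can be mimicked in a couple of lines of Python; this sketch is an illustration added to this summary and is not part of the original text.

# Simple random sample of n = 5 members from a population of N = 50, without replacement.
import random

population = list(range(1, 51))        # each population member gets a unique number
sample = random.sample(population, 5)  # every possible 5-member sample is equally likely
print(sample)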

Random Number Generator

In practice, the lottery method described above can be cumbersome, particularly with large sample sizes. As an alternative, a random number generator such as Stat Trek's Random Number Generator can be used to select n random numbers quickly and easily. It is provided at no cost and can be found under the Stat Tools tab on the Stat Trek web site.

What are random numbers? Random numbers are sets of digits (i.e., 0, 1, 2, 3, 4, 5, 6, 7, 8, 9) arranged in random order. Because they are randomly ordered, no individual digit can be predicted from knowledge of any other digit or group of digits.

What is a random number generator?

A random number generator is a process that produces random numbers. Any random process (e.g., a flip of a coin or the toss of a die) can be used to generate random numbers. Stat Trek's Random Number Generator uses a statistical algorithm to produce random numbers.

What is a random number table? A random number table is a listing of random numbers. Stat Trek's Random Number Generator produces a listing of random numbers, based on the following User specifications:


The quantity of random numbers desired.
The maximum and minimum values of random numbers in the table.
Whether or not duplicate random numbers are permitted.

How "random" is Stat Trek's Random Number Generator?

Although no computer algorithm can produce numbers that are truly random, Stat Trek's Random Number Generator produces numbers that are nearly random. Stat Trek's Random Number Generator can be used for most statistical applications (like randomly assigning subjects to treatments in a statistical experiment). However, it should not be used to generate numbers for cryptography.

How can I control the number of entries in a single random number table? The value entered in the text box labeled "How many random numbers" controls the quantity of numbers generated for the random number table. Stat Trek's Random Number Generator can create up to 1000 random numbers for a single random number table. If more random numbers are needed, Users can create additional tables.

What are the minimum and maximum values in the Random Number Generator?The minimum and maximum values set limits on the range of values that might appear in a random number table. The minimum value identifies the smallest number in the range; and the maximum value identifies the largest number. For example, if we set the minimum value equal to 12 and the maximum value equal to 30, the Random Number Generator will produce a table consisting of random arrangements of the numbers in the range of 12 to 30.

What does it mean to allow duplicate entries in a random number table? Stat Trek's Random Number Generator allows Users to permit or prevent the same number from appearing more than once in the random number table. To permit duplicate entries, set the drop-down box labeled "Allow duplicate entries" equal to True. To prevent duplicate entries, change the setting to False. Essentially, allowing duplicate entries amounts to sampling with replacement; preventing duplicate entries amounts to sampling without replacement.

What is a seed? The seed is a number that controls whether the Random Number Generator produces a new set of random numbers or repeats a particular sequence of random numbers. If the text box labeled "Seed" is blank, the Random Number Generator will produce a different set of random numbers each time a random number table is created. On the other hand, if a number is entered in the "Seed" text box, the Random Number Generator will produce a set of random numbers based on the value of the Seed. Each time a random number table is created, the Random Number Generator will produce the same set of random numbers, until the Seed value is changed.

Note: The ability of the seed to repeat a random sequence of numbers assumes that other User specifications (i.e., quantity of random numbers, minimum value, maximum value, whether duplicate values are permitted) are constant across replications. The use of a seed is illustrated in Sample Problem 1.

Warning: The seed capability is provided for Users as a short-term convenience. However, it is not a long-term solution. From time to time, Stat Trek changes the underlying random number algorithm to more closely approximate true randomization. A newer algorithm will not reproduce random numbers generated by an older algorithm, even with the same seed. Therefore, the safest way to "save" a random number table is to print it out.
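The effect of a seed can be illustrated with Python's standard random module; this is only an analogy added here, since the text does not describe Stat Trek's actual algorithm.

# Re-seeding a pseudo-random generator reproduces the same sequence.
import random

random.seed(345)
first_run = [random.randint(1, 2) for _ in range(10)]

random.seed(345)                        # same seed, same specifications
second_run = [random.randint(1, 2) for _ in range(10)]

print(first_run == second_run)          # True: the sequence repeats exactly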

Using the Random Number Generator: Sample Problems



1. A university is testing the effectiveness of two different medications. They have 10 volunteers. To conduct the study, researchers randomly assign a number from 1 to 2 to each volunteer. Volunteers who are assigned number 1 get Treatment 1 and volunteers who are assigned number 2 get Treatment 2. To implement this strategy, they input the following settings in the Random Number Generator:

They want to assign a number randomly to each of 10 volunteers, so they need 10 entries in the random number table. Therefore, the researchers enter 10 in the text box labeled "How many random numbers?".

Since each volunteer will receive one of two treatments, they set the minimum value equal to 1; and the maximum value equal to 2.

Since some volunteers will receive the same treatment, the researchers allow duplicate random numbers in the random number table. Therefore, they set the "Allow duplicate entries" dropdown box equal to "True".

And finally, they set the Seed value equal to 345. (The number 345 is not special. They could have used any positive integer.)

Then, they hit the Calculate button. The Random Number Generator produces a Random Number Table consisting of 10 entries, where each entry is the number 1 or 2. The researchers assign the first entry to volunteer number 1, the second entry to volunteer number 2, and so on. Using this strategy, what treatment did the first volunteer receive? What treatment did the tenth volunteer receive?

Solution: This problem can be solved by recreating the exact Random Number Table used by the researchers. By inputting all of the same entries (especially the same Seed value) that were used originally, we can recreate the Random Number Table used by the researchers. Therefore, we do the following:

Enter 10 in the text box labeled "How many random numbers?". Set the minimum value equal to 1 and the maximum value equal to 2. Set the "Allow duplicate entries" dropdown box equal to "True". Set the Seed value equal to 345.

Then, we hit the Calculate button. This produces the Random Number Table shown below.

Random Numbers

2 2 1 1 2 2 2 1 2 1

From the table, we can see that the first entry is "2". Therefore, the first volunteer received Treatment 2. And the tenth entry is "1". Hence, the tenth volunteer received Treatment 1.

2. We would like to survey 500 families from a population of 20,000 families. Each family has been assigned a number from 1 to 20,000. How can we randomly select 500 families for the survey?

Solution:

We input the following settings in the Random Number Generator:

We want to select 500 families. Therefore, we enter 500 in the text box labeled "How many random numbers?".

Since each family has been assigned a number from 1 to 20,000, we set the minimum value equal to 1; and the maximum value equal to 20,000.

Since we only want to survey each family once, we don't want duplicate random numbers in our random number table. Therefore, we set the "Allow duplicate entries" dropdown box equal to "False".


Then, we hit the Calculate button. The Random Number Generator produces a Random Number Table consisting of 500 unique random numbers between 1 and 20,000. We will survey the families represented by these numbers - a sample of 500 families randomly selected from the population of 20,000 families.
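For readers who prefer to script this step, the same selection can be sketched with Python's standard library (an illustration only; the variable names are ours):

# Draw 500 distinct family numbers from 1..20,000 (sampling without replacement).
import random

population = range(1, 20001)                         # family numbers 1 to 20,000
sampled_families = random.sample(population, 500)    # 500 unique family numbers
assert len(set(sampled_families)) == 500             # no family is selected twice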

Sampling With Replacement and Without Replacement

Suppose we use the lottery method described above to select a simple random sample. After we pick a number from the bowl, we can put the number aside or we can put it back into the bowl. If we put the number back in the bowl, it may be selected more than once; if we put it aside, it can be selected only one time. When a population element can be selected more than one time, we are sampling with replacement. When a population element can be selected only one time, we are sampling without replacement.

How to Choose the Best Sampling Method

What is the Best Sampling Method?

The best sampling method is the sampling method that most effectively meets the particular goals of the study in question. The effectiveness of a sampling method depends on many factors. Because these factors interact in complex ways, the "best" sampling method is seldom obvious. Good researchers use the following strategy to identify the best sampling method.

1. List the research goals (usually some combination of accuracy, precision, and/or cost).
2. Identify potential sampling methods that might effectively achieve those goals.
3. Test the ability of each method to achieve each goal.
4. Choose the method that does the best job of achieving the goals.

The next section presents an example that illustrates this strategy.

Sample Planning Wizard

The computations involved in comparing different sampling methods can be complex and time-consuming. Stat Trek's Sample Planning Wizard can help. The Wizard computes survey precision, sample size requirements, costs, etc., allowing you to compare alternative sampling methods quickly, easily, and accurately. The Wizard creates a summary report that lists key findings and documents analytical techniques. Whenever you want to find the most precise, cost-effective sampling method, consider using the Sample Planning Wizard.

How to Choose the Best Sampling Method

In this section, we illustrate how to choose the best sampling method by working through a sample problem. Here is the problem:

Problem Statement

At the end of every school year, the state administers a reading test to a sample of third graders. The school system has 20,000 third graders, half boys and half girls. There are 1000 third-grade classes, each with 20 students. The maximum budget for this research is $3600. The only expense is the cost to proctor each test session. This amounts to $100 per session. The purpose of the study is to estimate the reading proficiency of third graders, based on sample data. School administrators want to maximize the precision of this estimate without exceeding the $3600 budget. What sampling method should they use?

As noted earlier, finding the "best" sampling method is a four-step process. We work through each step below.


List goals. This study has two main goals: (1) maximize precision and (2) stay within budget.

Identify potential sampling methods. This tutorial has covered three basic sampling methods - simple random sampling, stratified sampling, and cluster sampling. In addition, we've described some variations on the basic methods (e.g., proportionate vs. disproportionate stratification, one-stage vs. two-stage cluster sampling, sampling with replacement versus sampling without replacement). Because one of the main goals is to maximize precision, we can eliminate some of these alternatives. Sampling without replacement always provides equal or better precision than sampling with replacement, so we will focus only on sampling without replacement. Also, as long as the same clusters are sampled, one-stage cluster sampling always provides equal or better precision than two-stage cluster sampling, so we will focus only on one-stage cluster sampling. (Note: For cluster sampling in this example, the cost is the same whether we sample all students or only some students from a particular cluster; so in this example, two-stage sampling offers no cost advantage over one-stage sampling.) This leaves us with four potential sampling methods - simple random sampling, proportionate stratified sampling, disproportionate stratified sampling, and one-stage cluster sampling. Each of these uses sampling without replacement. Because of the need to maximize precision, we will use Neyman allocation with our disproportionate stratified sample.

Test methods. A key part of the analysis is to test the ability of each potential sampling method to satisfy the research goals. Specifically, we will want to know the level of precision and the cost associated with each potential method. For our test, we use the standard error to measure precision. The smaller the standard error, the greater the precision. To avoid getting bogged down in the computational details of the analysis, we simply summarize the results in the table below; the analyses that produce these figures are worked through in the sample problems later in this chapter.

Sampling method                        Cost    Standard error   Sample size
Simple random sampling                 $3600   1.66             36
Proportionate stratified sampling      $3600   1.45             36
Disproportionate stratified sampling   $3600   1.41             36
One-stage cluster sampling             $3600   1.10             720

Because the budget is $3600 and because each test session costs $100 (for the proctor), there can be at most 36 test sessions. For the first three methods, students in the sample might come from 36 different schools, which would mean that each test session could have only one student. Thus, for simple random sampling and stratified sampling, the sample size might be only 36 students. For cluster sampling, in contrast, each of the 36 test sessions will have a full class of 20 students; so the sample size will be 36 * 20 = 720 students.

Choose best method. In this example, the cost of each sampling method is identical, so none of the methods has an advantage on cost. However, the methods do differ with respect to precision (as measured by standard error). Cluster sampling provides the most precision (i.e., the smallest standard error); so cluster sampling is the best method.

Although cluster sampling was "best" in this example, it may not be the best solution in other situations. Other sampling methods may be best in other situations. Use the four-step process described above to determine which method is best in any situation.


Analysis of Simple Random Samples

Simple random sampling refers to a sampling method that has the following properties.

The population consists of N objects. The sample consists of n objects. All possible samples of n objects are equally likely to occur.

The main benefit of simple random sampling is that every member of the population has an equal chance of selection, so the sample tends to be representative of the population and the usual statistical conclusions are valid. There are many ways to obtain a simple random sample. One way would be the lottery method. Each of the N population members is assigned a unique number. The numbers are placed in a bowl and thoroughly mixed. Then, a blind-folded researcher selects n numbers. Population members having the selected numbers are included in the sample.

Notation

The following notation is helpful when we talk about simple random sampling.
σ: The known standard deviation of the population.
σ²: The known variance of the population.
P: The true population proportion.
N: The number of observations in the population.
x: The sample estimate of the population mean.
s: The sample estimate of the standard deviation of the population.
s²: The sample estimate of the population variance.
p: The proportion of successes in the sample.
n: The number of observations in the sample.
SD: The standard deviation of the sampling distribution.
SE: The standard error. (This is an estimate of the standard deviation of the sampling distribution.)
Σ: Summation symbol, used to compute sums over the sample. (To illustrate its use, Σ xi = x1 + x2 + x3 + ... + xm-1 + xm.)

The Variability of the Estimate

The precision of a sample design is directly related to the variability of the estimate. Two common measures of variability are the standard deviation (SD) of the estimate and the standard error (SE) of the estimate. The tables below show how to compute both measures, assuming that the sample method is simple random sampling. The first table shows how to compute variability for a mean score. Note that the table shows four sample designs. In two of the designs, the true population variance is known; and in two, it is estimated from sample data. Also, in two of the designs, the researcher sampled with replacement; and in two, without replacement.

Population variance   Replacement strategy   Variability
Known                 With replacement       SD = sqrt [ σ² / n ]
Known                 Without replacement    SD = sqrt { ( 1 - n/N ) * [ N / ( N - 1 ) ] * σ² / n }
Estimated             With replacement       SE = sqrt [ s² / n ]
Estimated             Without replacement    SE = sqrt [ ( 1 - n/N ) * s² / n ]
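As a minimal sketch, these formulas can be written as small Python helpers (the function names sd_mean_srs and se_mean_srs are ours, purely for illustration):

import math

def sd_mean_srs(sigma2, n, N=None):
    # SD of the sample mean with known population variance sigma2;
    # pass N to apply the without-replacement (finite population) correction.
    if N is None:                                                  # with replacement
        return math.sqrt(sigma2 / n)
    return math.sqrt((1 - n / N) * (N / (N - 1)) * sigma2 / n)    # without replacement

def se_mean_srs(s2, n, N=None):
    # SE of the sample mean when the variance s2 is estimated from the sample.
    if N is None:                                                  # with replacement
        return math.sqrt(s2 / n)
    return math.sqrt((1 - n / N) * s2 / n)                         # without replacement

# From the reading-test problem later in this lesson:
# se_mean_srs(98.97, 36, 20000) is approximately 1.66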

The next table shows how to compute variability for a proportion. Like the previous table, this table shows four sample designs. In two of the designs, the true population proportion is known; and in two, it is estimated from sample data. Also, in two of the designs, the researcher sampled with replacement; and in two, without replacement.

Population proportion   Replacement strategy   Variability
Known                   With replacement       SD = sqrt [ P * ( 1 - P ) / n ]
Known                   Without replacement    SD = sqrt { [ ( N - n ) / ( N - 1 ) ] * P * ( 1 - P ) / n }
Estimated               With replacement       SE = sqrt [ p * ( 1 - p ) / ( n - 1 ) ]
Estimated               Without replacement    SE = sqrt [ ( 1 - n/N ) * p * ( 1 - p ) / ( n - 1 ) ]

Sample Problem

This section presents a sample problem that illustrates how to analyze survey data when the sampling method is simple random sampling. (In a subsequent lesson, we re-visit this problem and see how simple random sampling compares to other sampling methods.)


Problem 1

At the end of every school year, the state administers a reading test to a simple random sample drawn without replacement from a population of 20,000 third graders. This year, the test was administered to 36 students selected via simple random sampling. The test score from each sampled student is shown below:

50, 55, 60, 62, 62, 65, 67, 67, 70, 70, 70, 70, 72, 72, 73, 73, 75, 75, 75, 78, 78, 78, 78, 80, 80, 80, 82, 82, 85, 85, 85, 88, 88, 90, 90, 90

Using sample data, estimate the mean reading achievement level in the population. Find the margin of error and the confidence interval. Assume a 95% confidence level.

Solution: Previously we described how to compute the confidence interval for a mean score. We follow that process below.

Identify a sample statistic. Since we are trying to estimate a population mean, we choose the sample mean as the sample statistic. The sample mean is:
x = Σ ( xi ) / n
x = ( 50 + 55 + 60 + ... + 90 + 90 + 90 ) / 36 = 75
Therefore, based on data from the simple random sample, we estimate that the mean reading achievement level in the population is equal to 75.

Select a confidence level. In this analysis, the confidence level is defined for us in the problem. We are working with a 95% confidence level.

Find the margin of error. Elsewhere on this site, we show how to compute the margin of error when the sampling distribution is approximately normal. The key steps are shown below.

Find standard error of the sampling distribution. First, we estimate the variance of the test scores (s²). And then, we compute the standard error (SE).
s² = Σ ( xi - x )² / ( n - 1 )
s² = [ (50 - 75)² + (55 - 75)² + (60 - 75)² + ... + (90 - 75)² + (90 - 75)² ] / 35 = 98.97
SE = sqrt [ ( 1 - n/N ) * s² / n ] = sqrt [ ( 1 - 36/20,000 ) * 98.97 / 36 ] = 1.66

Find critical value. The critical value is a factor used to compute the margin of error. Based on the central limit theorem, we can assume that the sampling distribution of the mean is normally distributed. Therefore, we express the critical value as a z score. To find the critical value, we take these steps.

o Compute alpha (α): α = 1 - (confidence level / 100) = 1 - 95/100 = 0.05
o Find the critical probability (p*): p* = 1 - α/2 = 1 - 0.05/2 = 0.975
o The critical value is the z score having a cumulative probability equal to 0.975. From the Normal Distribution Calculator, we find that the critical value is 1.96.

Compute margin of error (ME): ME = critical value * standard error = 1.96 * 1.66 = 3.25


Specify the confidence interval. The range of the confidence interval is defined by the sample statistic ± margin of error, and the uncertainty is denoted by the confidence level.

Therefore, the 95% confidence interval is 71.75 to 78.25. And the margin of error is equal to 3.25. That is, we are 95% confident that the true population mean is in the range defined by 75 ± 3.25.
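A short Python sketch, using only the standard library, reproduces this whole calculation; the 1.96 critical value comes from statistics.NormalDist rather than the Normal Distribution Calculator mentioned above:

import math
from statistics import NormalDist

scores = [50, 55, 60, 62, 62, 65, 67, 67, 70, 70, 70, 70, 72, 72, 73, 73, 75, 75,
          75, 78, 78, 78, 78, 80, 80, 80, 82, 82, 85, 85, 85, 88, 88, 90, 90, 90]
n, N = len(scores), 20000

mean = sum(scores) / n                                  # 75.0
s2 = sum((x - mean) ** 2 for x in scores) / (n - 1)     # about 98.97
se = math.sqrt((1 - n / N) * s2 / n)                    # about 1.66
z = NormalDist().inv_cdf(0.975)                         # about 1.96 for 95% confidence
me = z * se                                             # about 3.25
print(round(mean - me, 2), round(mean + me, 2))         # roughly 71.75 and 78.25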

Analysis of Stratified Samples

Notation

The following notation is helpful when we talk about analyzing data from stratified samples.
H: The number of strata in the population.
N: The number of observations in the population.
Nh: The number of observations in stratum h of the population.
Ph: The true proportion in stratum h of the population.
σ²: The known variance of the population.
σ: The known standard deviation of the population.
σh: The known standard deviation in stratum h of the population.
x: The sample estimate of the population mean.
xh: The mean of observations from stratum h of the sample.
ph: The proportion of successes in stratum h of the sample.
sh: The sample estimate of the population standard deviation in stratum h.
sh²: The sample estimate of the population variance in stratum h.
n: The number of observations in the sample.
nh: The number of observations in stratum h of the sample.
SD: The standard deviation of the sampling distribution.
SE: The standard error. (This is an estimate of the standard deviation of the sampling distribution.)
Σ: Summation symbol. (To illustrate the use of the symbol, Σ xh = x1 + x2 + ... + xH-1 + xH.)

How to Analyze Data From Stratified Samples

When it comes to analyzing data from stratified samples, there is good news and there is bad news. First, the bad news. Different sampling methods use different formulas to estimate population parameters and to estimate standard errors. The formulas that we have used so far in this tutorial work for simple random samples, but they are not right for stratified samples. Now, the good news. Once you know the correct formulas, you can readily estimate population parameters and standard errors. And once you have the standard error, the procedures for computing other things (e.g., margin of error, confidence interval, and region of acceptance) are largely the same for stratified samples as for simple random samples. The next two sections provide formulas that can be used with stratified sampling. The sample problem at the end of this lesson shows how to use these formulas to analyze data from stratified samples.

Measures of Central Tendency

The table below shows formulas that can be used with stratified sampling to estimate a population mean and a population proportion.

Population parameter   Formula for sample estimate
Mean                   Σ ( Nh / N ) * xh
Proportion             Σ ( Nh / N ) * ph
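As an illustration, the weighted estimate for the mean can be coded directly (the helper name stratified_mean is ours):

def stratified_mean(Nh, xh, N):
    # Nh: per-stratum population sizes; xh: per-stratum sample means; N: total population size.
    return sum(Nh_i / N * xh_i for Nh_i, xh_i in zip(Nh, xh))

# Boys/girls example used later in this lesson:
# stratified_mean([10000, 10000], [70, 80], 20000) returns 75.0
# The same weighting applied to stratum proportions ph gives the proportion estimate.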

Note that Nh / N is the stratum weight, the relative size of stratum h in the population. Thus, to compute a sample estimate of the population mean or population proportion, we need to know the relative size of each stratum.

The Variability of the Estimate


The precision of a sample design is directly related to the variability of the estimate, which is measured by the standard deviation or standard error. The tables below show how to compute the standard deviation (SD) and standard error (SE), assuming that the sample method is stratified random sampling. The first table shows how to compute the variability for a mean score. Note that the table shows four sample designs. In two of the designs, the true population variance is known; and in two, it is estimated from sample data. Also, in two of the designs, the researcher sampled with replacement; and in two, without replacement.

Population variance   Replacement strategy   Variability
Known                 With replacement       SD = (1 / N) * sqrt [ Σ ( Nh² * σh² / nh ) ]
Known                 Without replacement    SD = (1 / N) * sqrt { Σ [ ( Nh³ / ( Nh - 1 ) ) * ( 1 - nh/Nh ) * σh² / nh ] }
Estimated             With replacement       SE = (1 / N) * sqrt [ Σ ( Nh² * sh² / nh ) ]
Estimated             Without replacement    SE = (1 / N) * sqrt { Σ [ Nh² * ( 1 - nh/Nh ) * sh² / nh ] }
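As a sketch, the last row of the table (estimated variances, sampling without replacement) can be coded as follows; the function name is ours:

import math

def se_stratified_mean(Nh, nh, s2h, N):
    # Nh: stratum population sizes; nh: stratum sample sizes;
    # s2h: estimated within-stratum variances; N: total population size.
    total = sum(Nh_i ** 2 * (1 - nh_i / Nh_i) * s2h_i / nh_i
                for Nh_i, nh_i, s2h_i in zip(Nh, nh, s2h))
    return math.sqrt(total) / N

# Proportionate-sample example worked later in this lesson:
# se_stratified_mean([10000, 10000], [18, 18], [105.41, 45.41], 20000) is about 1.45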

The next table shows how to compute the variability for a proportion. Like the previous table, this table shows four sample designs. In this case, however, the designs are based on whether the true population proportion is known and whether the design calls for sampling with or without replacement.

Population proportion   Replacement strategy   Variability
Known                   With replacement       SD = (1 / N) * sqrt { Σ [ Nh² * Ph * ( 1 - Ph ) / nh ] }
Known                   Without replacement    SD = (1 / N) * sqrt { Σ [ ( Nh³ / ( Nh - 1 ) ) * ( 1 - nh/Nh ) * Ph * ( 1 - Ph ) / nh ] }
Estimated               With replacement       SE = (1 / N) * sqrt { Σ [ Nh² * ph * ( 1 - ph ) / ( nh - 1 ) ] }
Estimated               Without replacement    SE = (1 / N) * sqrt { Σ [ Nh² * ( 1 - nh/Nh ) * ph * ( 1 - ph ) / ( nh - 1 ) ] }

Sample Problem

This section presents a sample problem that illustrates how to analyze survey data when the sampling method is proportionate stratified sampling. (In a subsequent lesson, we re-visit this problem and see how stratified sampling compares to other sampling methods.)


Problem 1

At the end of every school year, the state administers a reading test to a sample of third graders. The school system has 20,000 third graders, half boys and half girls. This year, a proportionate stratified sample was used to select 36 students for testing. Because the population is half boys and half girls, one stratum consisted of 18 boys; the other, 18 girls. Test scores from each sampled student are shown below:

Boys: 50, 55, 60, 62, 62, 65, 67, 67, 70, 70, 73, 73, 75, 78, 78, 80, 85, 90

Girls: 70, 70, 72, 72, 75, 75, 78, 78, 80, 80, 82, 82, 85, 85, 88, 88, 90, 90

Using sample data, estimate the mean reading achievement level in the population. Find the margin of error and the confidence interval. Assume a 95% confidence level. Solution: Previously we described how to compute the confidence interval for a mean score. We follow that process below.

Identify a sample statistic. For this problem, we use the overall sample mean to estimate the population mean. To compute the overall sample mean, we need to compute the sample means for each stratum. The stratum mean for boys is equal to:
xboys = Σ ( xi ) / n
xboys = ( 50 + 55 + 60 + ... + 80 + 85 + 90 ) / 18 = 70
The stratum mean for girls is computed similarly. It is equal to 80. Therefore, the overall sample mean is:
x = Σ ( Nh / N ) * xh
x = ( 10,000 / 20,000 ) * 70 + ( 10,000 / 20,000 ) * 80 = 75
Therefore, based on data from the sample strata, we estimate that the mean reading achievement level in the population is equal to 75.

Select a confidence level. In this analysis, the confidence level is defined for us in the problem. We are working with a 95% confidence level.

Find the margin of error. Elsewhere on this site, we show how to compute the margin of error when the sampling distribution is approximately normal. The key steps are shown below.

Find standard error of the sampling distribution. First, we estimate the variance of the test scores (sh²) within each stratum. And then, we compute the standard error (SE). For boys, the within-stratum sample variance is equal to:
sh² = Σ ( xi - xh )² / ( nh - 1 )
sh² = [ (50 - 70)² + (55 - 70)² + (60 - 70)² + ... + (85 - 70)² + (90 - 70)² ] / 17 = 105.41
The within-stratum sample variance for girls is computed similarly. It is equal to 45.41.
Using results from the above computations, we compute the standard error (SE):
SE = (1 / N) * sqrt { Σ [ Nh² * ( 1 - nh/Nh ) * sh² / nh ] }
SE = (1 / 20,000) * sqrt { [ 100,000,000 * ( 1 - 18/10,000 ) * 105.41 / 18 ] + [ 100,000,000 * ( 1 - 18/10,000 ) * 45.41 / 18 ] }
SE = (1 / 20,000) * sqrt { [ 99,820,000 * 105.41 / 18 ] + [ 99,820,000 * 45.41 / 18 ] } = 1.45
Thus, the standard error of the sampling distribution of the mean is 1.45.

Find critical value. The critical value is a factor used to compute the margin of error. Based on the central limit theorem, we can assume that the sampling distribution of the mean is normally distributed. Therefore, we express the critical value as a z score. To find the critical value, we take these steps.

o Compute alpha (α): α = 1 - (confidence level / 100) = 1 - 95/100 = 0.05
o Find the critical probability (p*): p* = 1 - α/2 = 1 - 0.05/2 = 0.975
o The critical value is the z score having a cumulative probability equal to 0.975. From the Normal Distribution Calculator, we find that the critical value is 1.96.

Compute margin of error (ME): ME = critical value * standard error = 1.96 * 1.45 = 2.84

Specify the confidence interval. The range of the confidence interval is defined by the sample statistic ± margin of error, and the uncertainty is denoted by the confidence level.

Therefore, the 95% confidence interval is 72.16 to 77.84. And the margin of error is equal to 2.84. That is, we are 95% confident that the true population mean is in the range defined by 75 ± 2.84.


Sample Size Within Strata

The precision and cost of a stratified design are influenced by the way that sample elements are allocated to strata.

How to Assign Sample to Strata

One approach is proportionate stratification. With proportionate stratification, the sample size of each stratum is proportionate to the population size of the stratum. Strata sample sizes are determined by the following equation:
nh = ( Nh / N ) * n
where nh is the sample size for stratum h, Nh is the population size for stratum h, N is total population size, and n is total sample size.
Another approach is disproportionate stratification, which can be a better choice (e.g., less cost, more precision) if sample elements are assigned correctly to strata. To take advantage of disproportionate stratification, researchers need to answer such questions as:

Given a fixed budget, how should sample be allocated to get the most precision from a stratified sample?
Given a fixed sample size, how should sample be allocated to get the most precision from a stratified sample?
Given a fixed budget, what is the most precision that I can get from a stratified sample?
Given a fixed sample size, what is the most precision that I can get from a stratified sample?
What is the smallest sample size that will provide a given level of survey precision?
What is the minimum cost to achieve a given level of survey precision?
Given a particular sample allocation plan, what level of precision can I expect?
And so on.

Although a consideration of all these questions is beyond the scope of this tutorial, the remainder of this lesson does address the first two questions. (To answer the other questions, as well as the first two questions, consider using the Sample Planning Wizard.)


How to Maximize Precision, Given a Stratified Sample With a Fixed Budget

The ideal sample allocation plan would provide the most precision for the least cost. Optimal allocation does just that. Based on optimal allocation, the best sample size for stratum h would be:
nh = n * [ ( Nh * σh ) / sqrt( ch ) ] / Σ [ ( Ni * σi ) / sqrt( ci ) ]
where nh is the sample size for stratum h, n is total sample size, Nh is the population size for stratum h, σh is the standard deviation of stratum h, and ch is the direct cost to sample an individual element from stratum h. Note that ch does not include indirect costs, such as overhead costs.
The effect of the above equation is to sample more heavily from a stratum when:

The cost to sample an element from the stratum is low. The population size of the stratum is large. The variability within the stratum is large.
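A minimal sketch of the optimal-allocation formula above (the function and argument names are ours; ch holds the direct per-element sampling cost of each stratum):

import math

def optimal_allocation(n, Nh, sigma_h, ch):
    # Stratum sample sizes proportional to Nh * sigma_h / sqrt(ch), scaled to sum to n.
    weights = [N_i * s_i / math.sqrt(c_i) for N_i, s_i, c_i in zip(Nh, sigma_h, ch)]
    total = sum(weights)
    return [n * w / total for w in weights]   # unrounded sizes; round as needed

# With equal per-element costs this reduces to Neyman allocation (next section).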

How to Maximize Precision, Given a Stratified Sample With a Fixed Sample Size


Sometimes, researchers want to find the sample allocation plan that provides the most precision, given a fixed sample size. The solution to this problem is a special case of optimal allocation, called Neyman allocation. The equation for Neyman allocation can be derived from the equation for optimal allocation by assuming that the direct cost to sample an individual element is equal across strata. Based on Neyman allocation, the best sample size for stratum h would be:
nh = n * ( Nh * σh ) / [ Σ ( Ni * σi ) ]
where nh is the sample size for stratum h, n is total sample size, Nh is the population size for stratum h, and σh is the standard deviation of stratum h.

Sample Problem

This section presents a sample problem that illustrates how to maximize precision, given a fixed sample size and a stratified sample. (In a subsequent lesson, we re-visit this problem and see how stratified sampling compares to other sampling methods.)

Problem 1

At the end of every school year, the state administers a reading test to a sample of 36 third graders. The school system has 20,000 third graders, half boys and half girls. The results from last year's test are shown in the table below.

Stratum   Mean score   Standard deviation
Boys      70           10.27
Girls     80           6.66

This year, the researchers plan to use a stratified sample, with one stratum consisting of boys and the other, girls. Use the results from last year to answer the following questions:

To maximize precision, how many sampled students should be boys and how many should be girls?

What is the mean reading achievement level in the population? Compute the confidence interval. Find the margin of error.

Assume a 95% confidence level.

Solution: The first step is to decide how to allocate the sample in order to maximize precision. Based on Neyman allocation, the best sample size for stratum h is:
nh = n * ( Nh * σh ) / [ Σ ( Ni * σi ) ]
where nh is the sample size for stratum h, n is total sample size, Nh is the population size for stratum h, and σh is the standard deviation of stratum h. By this equation, the number of boys in the sample is:
nboys = 36 * ( 10,000 * 10.27 ) / [ ( 10,000 * 10.27 ) + ( 10,000 * 6.66 ) ] = 21.84
Therefore, to maximize precision, the total sample of 36 students should consist of 22 boys and (36 - 22) = 14 girls.
The remaining questions can be answered during the process of computing the confidence interval. Previously, we described how to compute a confidence interval. We employ that process below.

Identify a sample statistic. For this problem, we use the overall sample mean to estimate the population mean. To compute the overall sample mean, we use the following equation (which was introduced in a previous lesson): x = Σ ( Nh / N ) * xh = ( 10,000/20,000 ) * 70 + ( 10,000/20,000 ) * 80 = 75 Therefore, based on data from the sample strata, we estimate that the mean reading achievement level in the population is equal to 75.

Select a confidence level. In this analysis, the confidence level is defined for us in the problem. We are working with a 95% confidence level.

Find the margin of error. Elsewhere on this site, we show how to compute the margin of error when the sampling distribution is approximately normal. The key steps are shown below.

Find standard deviation or standard error. The equation to compute the standard error was introduced in a previous lesson. We use that equation here:


SE = (1 / N) * sqrt { Σ [ Nh² * ( 1 - nh/Nh ) * sh² / nh ] }
SE = (1 / 20,000) * sqrt { [ 10,000² * ( 1 - 22/10,000 ) * (10.27)² / 22 ] + [ 10,000² * ( 1 - 14/10,000 ) * (6.66)² / 14 ] } = 1.41
Thus, the standard deviation of the sampling distribution (i.e., the standard error) is 1.41.

Find critical value. The critical value is a factor used to compute the margin of error. We express the critical value as a z score. To find the critical value, we take these steps.

o Compute alpha (α): α = 1 - (confidence level / 100) = 1 - 95/100 = 0.05
o Find the critical probability (p*): p* = 1 - α/2 = 1 - 0.05/2 = 0.975
o The critical value is the z score having a cumulative probability equal to 0.975. From the Normal Distribution Calculator, we find that the critical value is 1.96.

Compute margin of error (ME): ME = critical value * standard error = 1.96 * 1.41 = 2.76

Specify the confidence interval. The range of the confidence interval is defined by the sample statistic ± margin of error, and the uncertainty is denoted by the confidence level. Thus, with this sample design, we are 95% confident that the true mean reading achievement level lies within 75 ± 2.76.

In summary, given a total sample size of 36 students, we can get the greatest precision from a stratified sample if we sample 22 boys and 14 girls. This results in a 95% confidence interval of 72.24 to 77.76. The margin of error is 2.76.
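The allocation and standard-error arithmetic of this problem can be reproduced with a short Python sketch (illustrative only; note that rounding each stratum size separately happens to sum to 36 here, but in general the rounded sizes may need a small adjustment to hit the total):

import math

Nh      = [10000, 10000]     # boys, girls in the population
sigma_h = [10.27, 6.66]      # last year's stratum standard deviations
n, N    = 36, 20000

weights = [N_i * s_i for N_i, s_i in zip(Nh, sigma_h)]
nh = [round(n * w / sum(weights)) for w in weights]                 # [22, 14]

se = math.sqrt(sum(N_i ** 2 * (1 - n_i / N_i) * s_i ** 2 / n_i
                   for N_i, n_i, s_i in zip(Nh, nh, sigma_h))) / N  # about 1.41
print(nh, round(se, 2))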

Analysis of Cluster Samples

Notation

The following notation is helpful when we talk about analyzing data from cluster samples.
N = The number of clusters in the population.
Mi = The number of observations in the ith cluster.
M = The total number of observations in the population = Σ Mi.
P = The true population proportion.
n = The number of clusters in the sample.
mi = The number of sample observations from the ith cluster.
xij = The measurement for the jth observation from the ith cluster.
xi = The sample estimate of the population mean for the ith cluster = Σ ( xij / mi ), summed over j.
pi = The sample estimate of the population proportion for the ith cluster.
si² = The sample estimate of the population variance within cluster i.
ti = The estimated total for the ith cluster = Σ ( Mi / mi ) * xij = Mi * xi.
tmean = The sample estimate of the population total = ( N / n ) * Σ ti.
tprop(i) = The sample estimate of the number of successes in cluster i = Σ [ ( Mi / mi ) * pi ].
tprop = The sample estimate of the number of successes in the population = ( N / n ) * Σ tprop(i).
SE = The standard error. (This is an estimate of the standard deviation of the sampling distribution.)
Σ = Summation symbol, used to compute sums over the sample. (To illustrate its use, Σ xi = x1 + x2 + x3 + ... + xm-1 + xm.)

How to Analyze Data From Cluster Samples

Different sampling methods use different formulas to estimate population parameters and to estimate standard errors. The formulas that we have used so far in this tutorial work for simple random samples and for stratified samples, but they are not right for cluster samples. The next two sections of this lesson show the correct formulas to use with cluster samples. With these formulas, you can readily estimate population parameters and standard errors. And once you have the standard error, the procedures for computing other things (e.g., margin of error, confidence interval, and region of acceptance) are largely the same for cluster samples as for simple random samples. The sample problem at the end of this lesson shows how to use these formulas to analyze data from cluster samples.

Measures of Central Tendency

The table below shows formulas that can be used with one-stage and two-stage cluster samples to estimate a population mean and a population proportion.

Population parameter   Formula for sample estimate
Mean                   [ N / ( n * M ) ] * Σ ( Mi * xi )
Proportion             [ N / ( n * M ) ] * Σ ( Mi * pi )
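As a sketch, the mean formula in the table translates directly into code (the helper name cluster_mean is ours):

def cluster_mean(N, M, Mi, xi):
    # N: clusters in the population; M: elements in the population;
    # Mi, xi: size and sample mean of each sampled cluster.
    n = len(xi)                                   # number of sampled clusters
    return (N / (n * M)) * sum(M_i * x_i for M_i, x_i in zip(Mi, xi))

# Reading-test example later in this lesson: 36 classes of 20 students drawn from
# 1000 classes (20,000 students); the estimate works out to 75.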

These formulas produce unbiased estimates of the population parameters.

The Variability of the Estimate

The precision of a sample design is directly related to the variability of the estimate, which is measured by the standard error. The tables below show how to compute the standard error (SE), when the sampling method is cluster sampling. The first table shows how to compute the standard error for a mean score, given one- or two-stage sampling.

Number of stages   Standard error of mean score
One                ( 1 / M ) * sqrt { [ N² * ( 1 - n/N ) / n ] * Σ ( Mi * xi - tmean / N )² / ( n - 1 ) }
Two                ( 1 / M ) * sqrt { [ N² * ( 1 - n/N ) / n ] * Σ ( Mi * xi - tmean / N )² / ( n - 1 ) + ( N / n ) * Σ [ ( 1 - mi/Mi ) * Mi² * si² / mi ] }

The next table shows how to compute the standard error for a proportion. Like the previous table, this table shows equations for one- and two-stage designs. It also shows how the equations differ when the true population proportions are known versus when they are estimated based on sample data.

Number of stages   Population proportion   Standard error of proportion
One                Known                   ( 1 / M ) * sqrt { [ N² * ( 1 - n/N ) / n ] * Σ ( Mi * Pi - tprop / N )² / ( n - 1 ) }
One                Estimated               ( 1 / M ) * sqrt { [ N² * ( 1 - n/N ) / n ] * Σ ( Mi * pi - tprop / N )² / ( n - 1 ) }
Two                Known                   ( 1 / M ) * sqrt { [ N² * ( 1 - n/N ) / n ] * Σ ( Mi * Pi - tprop / N )² / ( n - 1 ) + ( N / n ) * Σ [ ( 1 - mi/Mi ) * Mi² * Pi * ( 1 - Pi ) / mi ] }
Two                Estimated               ( 1 / M ) * sqrt { [ N² * ( 1 - n/N ) / n ] * Σ ( Mi * pi - tprop / N )² / ( n - 1 ) + ( N / n ) * Σ [ ( 1 - mi/Mi ) * Mi² * pi * ( 1 - pi ) / ( mi - 1 ) ] }

Sample Problem

This section presents a sample problem that illustrates how to analyze survey data when the sampling method is one-stage cluster sampling. (In a subsequent lesson, we re-visit this problem and see how cluster sampling compares to other sampling methods.)



Example 1

At the end of every school year, the state administers a reading test to a sample of third graders. The school system has 20,000 third graders, grouped in 1000 separate classes. Assume that each class has 20 students. This year, the test was administered to each student in 36 randomly-sampled classes. Thus, this is one-stage cluster sampling, with classes serving as clusters. The average test score from each sampled cluster xi is shown below:

55, 60, 65, 67, 67, 70, 70, 70, 72, 72, 72, 72, 73, 73, 75, 75, 75, 75,

75, 77, 77, 78, 78, 78, 78, 80, 80, 80, 80, 80, 80, 83, 83, 85, 85, 85

Using sample data, estimate the mean reading achievement level in the population. Find the margin of error and the confidence interval. Assume a 95% confidence level. Solution: Previously we described how to compute the confidence interval for a mean score. Below, we apply that process to the present cluster sampling problem.

Identify a sample statistic. For this problem, we use the sample mean to estimate the population mean, and we use the equation from the "Measures of Central Tendency" table to compute the sample mean.
x = [ N / ( n * M ) ] * Σ ( Mi * xi )
x = [ 1000 / ( 36 * 20,000 ) ] * Σ ( 20 * xi ) = Σ ( xi ) / 36
x = ( 55 + 60 + 65 + ... + 85 + 85 + 85 ) / 36 = 75
Therefore, based on data from the cluster sample, we estimate that the mean reading achievement level in the population is equal to 75.

Select a confidence level. In this analysis, the confidence level is defined for us in the problem. We are working with a 95% confidence level.

Find the margin of error. Elsewhere on this site, we show how to compute the margin of error when the sampling distribution is approximately normal. The key steps are shown below.

Find standard error of the sampling distribution. Since we used one-stage cluster sampling, the standard error is:
SE = ( 1 / M ) * sqrt { [ N² * ( 1 - n/N ) / n ] * Σ ( Mi * xi - tmean / N )² / ( n - 1 ) }
where tmean = ( N / n ) * Σ ( Mi * xi ).
Except for tmean, all of the terms on the right side of the above equation are known. Therefore, to compute SE, we must first compute tmean. The formula for tmean is:
tmean = ( N / n ) * Σ ti
tmean = ( N / n ) * Σ Σ [ ( Mi / mi ) * xij ]
tmean = ( 1000 / 36 ) * Σ Σ [ ( 20 / 20 ) * xij ]
tmean = ( 27.778 ) * Σ Σ ( xij ) = ( 27.778 ) * 20 * Σ ( xi )
tmean = ( 27.778 ) * 20 * ( 55 + 60 + 65 + ... + 85 + 85 + 85 ) = 1,500,000
After we compute tmean, all of the terms on the right side of the SE equation are known, so we plug the known values into the standard error equation. As shown below, the standard error is 1.1.
SE = ( 1 / M ) * sqrt { [ N² * ( 1 - n/N ) / n ] * Σ ( Mi * xi - tmean / N )² / ( n - 1 ) }
SE = ( 1 / 20,000 ) * sqrt { [ 1000² * ( 1 - 36/1000 ) / 36 ] * [ ( 20 * 55 - 1,500,000 / 1000 )² + ( 20 * 60 - 1,500,000 / 1000 )² + ... + ( 20 * 85 - 1,500,000 / 1000 )² + ( 20 * 85 - 1,500,000 / 1000 )² ] / 35 }
SE = ( 1 / 20,000 ) * sqrt { [ 1000² * ( 1 - 36/1000 ) / 36 ] * 18,217.143 }
SE = 1.1

Find critical value. The critical value is a factor used to compute the margin of error. Based on the central limit theorem, we can assume that the sampling distribution of the mean is normally distributed. Therefore, we express the critical value as a z score. To find the critical value, we take these steps.


o Compute alpha (α): α = 1 - (confidence level / 100) = 1 - 95/100 = 0.05
o Find the critical probability (p*): p* = 1 - α/2 = 1 - 0.05/2 = 0.975
o The critical value is the z score having a cumulative probability equal to 0.975. From the Normal Distribution Calculator, we find that the critical value is 1.96.

Compute margin of error (ME): ME = critical value * standard error = 1.96 * 1.1 = 2.16

Specify the confidence interval. The range of the confidence interval is defined by the sample statistic ± margin of error, and the uncertainty is denoted by the confidence level.

Therefore, the 95% confidence interval is 72.84 to 77.16. And the margin of error is equal to 2.16. That is, we are 95% confident that the true population mean is in the range defined by 75 ± 2.16.
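A short Python sketch, using only the standard library, reproduces the one-stage calculation above:

import math

cluster_means = [55, 60, 65, 67, 67, 70, 70, 70, 72, 72, 72, 72, 73, 73, 75, 75, 75, 75,
                 75, 77, 77, 78, 78, 78, 78, 80, 80, 80, 80, 80, 80, 83, 83, 85, 85, 85]
N, M, Mi, n = 1000, 20000, 20, len(cluster_means)          # 1000 classes of 20 students

t_mean = (N / n) * sum(Mi * x for x in cluster_means)                     # 1,500,000
ss = sum((Mi * x - t_mean / N) ** 2 for x in cluster_means) / (n - 1)     # about 18,217
se = (1 / M) * math.sqrt((N ** 2 * (1 - n / N) / n) * ss)                 # about 1.1
me = 1.96 * se                                                            # about 2.16
print(round(se, 2), round(me, 2))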
