
How will we know patients are safer? An organization-wide approach to measuring and improving safety


R. Phillip Dellinger, MD, FCCM, Section Editor

Concise Definitive Review

How will we know patients are safer? An organization-wide approach to measuring and improving safety

Peter Pronovost, MD, PhD; Christine G. Holzmueller, BLA; Dale M. Needham, MD, PhD; J. Bryan Sexton, PhD; Marlene Miller, MD, MSc; Sean Berenholtz, MD, MHS; Albert W. Wu, MD, MPH; Trish M. Perl, MD, MSc; Richard Davis, PhD; David Baker, MBA; Laura Winner, MSN, MBA; Laura Morlock, PhD

In the years 1999 and 2001, landmark reports from the Institute of Medicine (IOM), To Err Is Human and Crossing the Quality Chasm, made deficiencies in quality of care and patient safety visible to health care professionals and the public (1, 2). Now, ~5 yrs after these reports, are we safer, and if so, how do we know? Many organizations are working to improve safety (3), many say that we lack empirical evidence to demonstrate that we are safer (4–6), and health care organizations generally lack measures to broadly evaluate their progress in improving safety. Current publicly reported performance measures will likely not be sufficient for providers to evaluate safety. In many hospitals, publicly reported performance measures apply to <10% of a hospital's discharges (7). We need scientifically sound and feasible measures of patient safety.

MEASURING SAFETY

The model developed by Donabedian nearly a half century ago for measuring quality provides a framework for measuring safety. In this model, structure (i.e., how care is organized) plus process (i.e., what we do) influences patient outcomes, or the results achieved. We have added the context in which care is delivered, also called safety culture, to this model (Fig. 1). Although most current measures of quality focus on either processes or outcomes, many measures of safety involve the structure and context in which care is delivered.

Indeed, patient safety efforts have contributed greatly to the measurement conundrum by supplying a new set of lenses through which to view a richer array of structural measures. Such measures include institutional variables such as leadership structure for safety or the process for ensuring staff competency, task variables such as the presence of protocols, and

From The Johns Hopkins University, Departments of Anesthesiology & Critical Care Medicine (PP, CGH, JBS, SB), Pediatrics (MM), and Medicine (DMN, AWW, TMP); Johns Hopkins Center for Innovations in Quality Patient Care (RD, DB, LW); and Johns Hopkins Bloomberg School of Public Health, Departments of Epidemiology (TMP) and Health Policy & Management (PP, JBS, AWW, LM).

The authors have not disclosed any potential conflicts of interest.

Copyright © 2006 by the Society of Critical Care Medicine and Lippincott Williams & Wilkins

DOI: 10.1097/01.CCM.0000226412.12612.B6

Objective: Our institution, like many, is struggling to develop measures that answer the question, How do we know we are safer? Our objectives are to present a framework to evaluate performance in patient safety and describe how we applied this model in intensive care units.

Design: We focus on measures of safety rather than broader measures of quality. The measures will allow health care organizations to evaluate whether they are safer now than in the past by answering the following questions: How often do we harm patients? How often do patients receive the appropriate interventions? How do we know we learned from defects? How well have we created a culture of safety? The first two measures are rate based, whereas the latter two are qualitative. To improve care within institutions, caregivers must be engaged, must participate in the selection and development of measures, and must receive feedback regarding their performance. The following attributes should be considered when evaluating potential safety measures: Measures must be important to the organization, must be valid (represent what they intend to measure), must be reliable (produce similar results when used repeatedly), must be feasible (affordable to collect data), must be usable for the people expected to employ the data to improve safety, and must have universal applicability within the entire institution.

Setting: Health care institutions.

Results: Health care currently lacks a robust safety score card. We developed four aggregate measures of patient safety and present how we applied them to intensive care units in an academic medical center. The same measures are being applied to nearly 200 intensive care units as part of ongoing collaborative projects. The measures include how often do we harm patients, how often do we do what we should (i.e., use evidence-based medicine), how do we know we learned from mistakes, and how well do we improve culture. Measures collected by different departments can then be aggregated to provide a hospital-level safety score card.

Conclusion: The science of measuring patient safety is immature. This article is a starting point for developing feasible and scientifically sound approaches to measure safety within an institution. Institutions will need to find a balance between measures that are scientifically sound, affordable, usable, and easily applied across the institution. (Crit Care Med 2006; 34:1988–1995)

KEY WORDS: safety; quality of care; quality measurement; measures

1988 Crit Care Med 2006 Vol. 34, No. 7

team variables such as staffing or communication among team members, particularly the ability of team members lower on the hierarchy to voice concerns. For most of these variables, we lack scientifically sound or feasible measures.

Although the attributes of performance measures are well developed (they should be scientifically sound [i.e., valid and reliable], feasible, important, and usable), challenges remain in implementing these principles in the domain of safety (8–11). One important challenge is to clarify what measures can be validly measured as rates, that is, tell us how fast a disease or event is occurring in a population. To be a valid rate, the numerator (event or harm) and denominator (population at risk) should be clearly defined, and a surveillance system should be in place to capture the universe, or a probability sample, of events in the population at risk over a defined period of time (12). Most rates in patient safety are actually proportions, or the fraction of the population affected, with no measure of time included. In this article, we will use the term rate to refer to either a rate or a proportion.
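The rate/proportion distinction above can be made concrete with a small sketch (the numbers and variable names are ours, purely illustrative): a true rate divides events by population-time (or device-time) at risk, whereas a proportion divides affected patients by patients at risk, with no time dimension.

```python
# Hypothetical ICU surveillance data (illustrative only).
events = 4                    # e.g., bloodstream infections observed
catheter_days = 2000          # device-time at risk over the surveillance period
patients_with_catheter = 250  # population at risk
patients_infected = 4

# True rate: events per unit of population-time at risk.
rate_per_1000_catheter_days = 1000 * events / catheter_days

# Proportion: fraction of the at-risk population affected; no time dimension.
proportion_infected = patients_infected / patients_with_catheter

print(rate_per_1000_catheter_days)  # 2.0 infections per 1,000 catheter-days
print(proportion_infected)          # 0.016
```

Note that the proportion cannot distinguish a unit whose patients carried catheters for a week from one whose patients carried them for a month; only the denominator expressed in catheter-days captures exposure time.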

Most safety variables are difficult, if not impossible, to capture in the form of a valid rate for a variety of reasons. For example, Patient Safety Reporting Systems (PSRSs), recommended by the IOM and recently signed into law, are a common method to identify patient safety issues (13) (http://www.theorator.com/bills109/2544.html). Many users, however, interpret the information obtained from PSRSs as valid rates. In reality, PSRS rates of reported adverse events likely contain significant bias for several reasons: Caregivers report a nonrandom sample from an unknown probability distribution, the magnitude and direction of bias in reporting are unknown, and the population at risk is unknown. For example, is the correct denominator for a patient who suffers complications from a central line those who have a central line or those in whom a central line insertion was attempted? Such data, however, are not available from a PSRS. Furthermore, the magnitude and direction of bias in reporting are unknown, although it is likely that variation in what is reported is far greater than variation in patient safety (14–17). Therefore, as a measure of safety, PSRS rates, trends in rates, or benchmarking of this information is likely biased and should be interpreted with caution.

How Do We Know We Are Safer?

Establishing which measures of safety can be validly measured as rates is a critical first step in monitoring and improving safety. Those that are not true rates can still be useful but cannot be presented as rates. For example, surgical complications or significant errors that result in harm are important as numerators, although they are not likely to be valid rates.

We are currently applying a framework (Table 1) for assessing and improving patient safety throughout the Johns Hopkins Hospital and in 200 intensive care units (ICUs) in multiple states (18). In our four-question framework we address the critical issue of valid/invalid rates by stratifying the measures. We were able to express two measures as rates with minimal bias and generally available hospital resources and two measures as important but not valid as rates. Measures with valid rates were a) how

Figure 1. Conceptual model for measuring safety.

Table 1. Framework of a score card for patient safety and effectiveness

Domain: How often do we harm patients?
Definition: Measures of health care-acquired infections
Examples from the Department of Anesthesiology: catheter-related bloodstream infections; surgical site infections

Domain: How often do we provide the interventions that patients should receive?
Definition: Using either nationally validated process measures or a validated process to develop a new measure, what percent of patients receive evidence-based interventions?
Examples from the Department of Anesthesiology: use of perioperative beta blockers; elevation of the head of the bed, peptic ulcer disease and deep venous thrombosis prophylaxis, and glucose ≤110 mg/dL in mechanically ventilated patients; rates of postoperative hypothermia in neurosurgery and abdominal surgery patients

Domain: How do we know we learned from defects?
Definition: What percent of months does each area within the institution learn from mistakes?
Example from the Department of Anesthesiology: monitor the percent of months in which the area creates a shared story (Fig. 2)

Domain: How well have we created a culture of safety?
Definition: Annual assessment of safety culture at the unit level
Example from the Department of Anesthesiology: percent of patient care areas in which 80% of staff report positive safety and teamwork climate
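As a rough sketch of how the two qualitative measures in this framework might be aggregated across patient care areas (the unit names and numbers below are hypothetical; only the 80% climate threshold comes from Table 1):

```python
# Hypothetical unit-level data (illustrative only).
# For each patient care area: months in the year with a completed shared story,
# and the fraction of staff reporting a positive safety climate.
units = {
    "SICU": {"months_learned": 10, "positive_climate": 0.85},
    "MICU": {"months_learned": 7,  "positive_climate": 0.74},
    "PICU": {"months_learned": 12, "positive_climate": 0.90},
}

# How do we know we learned from defects?
# Percent of months in which each area learned from a mistake.
pct_months = {u: 100 * d["months_learned"] / 12 for u, d in units.items()}

# How well have we created a culture of safety?
# Percent of areas in which at least 80% of staff report a positive climate.
n_meeting = sum(d["positive_climate"] >= 0.80 for d in units.values())
pct_areas_positive = 100 * n_meeting / len(units)
```

In this toy example the MICU learned from defects in about 58% of months, and two of the three areas (about 67%) meet the climate threshold; real score cards would aggregate such unit-level figures to the hospital level.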


Table 2. Learning from defects: Investigation process

Problem Statement: Health care organizations could increase the extent to which they learn from defects.

What is a Defect? A defect is any clinical or operational event or situation that you would not want to happen again. These could include incidents that you believe caused patient harm or put patients at risk for significant harm.

Purpose of Tool: The purpose of this tool is to provide a structured approach to help caregivers and administrators identify the types of systems that contributed to the defect and follow up to ensure safety improvements are achieved.

Who Should Use this Tool:
- Clinical departmental designee at Morbidity & Mortality Rounds
- Patient care areas as part of the Comprehensive Unit-Based Safety Program (CUSP)

All staff involved in the delivery of care related to this defect should be present when this defect is evaluated. At a minimum, this should include the physician, nurse, and administrator, plus other selected professions as appropriate (e.g., medication defects should include pharmacy; equipment defects should include clinical engineering).

How to Use this Tool: Complete this tool on at least one defect per month. In addition, departments should investigate all of the following defects: liability claims, sentinel events, events for which risk management is notified, cases presented at M&M, and health care-acquired infections.

Investigation Process
I. Provide a clear, thorough, and objective explanation of what happened.
II. Review the list of factors that contributed to the incident and check off those that negatively contributed and positively contributed to the impact of the incident. Negative contributing factors are those that harmed or increased risk of harm for the patient; positive contributing factors limited the impact of harm.
III. Describe how you will reduce the likelihood of this defect happening again by completing the table. List what you will do, who will lead the intervention, when you will follow up on the intervention's progress, and how you will know risk reduction has been achieved.

I. What happened?
(Reconstruct the timeline and explain what happened. For this investigation, put yourself in the place of those involved, in the middle of the event as it was unfolding, to understand what they were thinking and the reasoning behind their actions/decisions. Try to view the world as they did when the event occurred.)

An elderly woman with a past medical history significant for lung cancer was admitted in mid-September for care of a surgical wound infection. The patient experienced significant respiratory compromise and was transferred to the ICU. The patient had pleural effusions tapped and aggressive treatment, but her respiratory status did not improve, and a percutaneous tracheostomy was placed at the request of the family. The patient had a central line for venous access. The patient was showing signs of sepsis, and the decision was made to re-wire the patient's central line and place a new line under sterile conditions. In order to minimize the risk of air embolus during line placement, patients are placed on their back with the head of the bed lower than their heart (Trendelenburg position) if they are able to tolerate the position. Otherwise, the procedure is performed with the patient on their back with the head of the bed flat. This procedure was performed with the head of the patient's bed elevated to 30 degrees. During the procedure, the patient coughed and arrested shortly thereafter from venous air embolus. The patient's family had requested a "do not resuscitate" (DNR) order days prior to this event, so no attempt was made to resuscitate the patient.

II. Why did it happen?
Following is a framework to help you review and evaluate your case. Please read each contributing factor and evaluate whether it was involved and, if so, whether it negatively contributed (increased harm) or positively contributed (reduced impact of harm) to the incident. Generic examples are given in parentheses; factors checked in this case are marked [X].

Patient factors
- [X] Patient was acutely ill or agitated (elderly patient in renal failure, secondary to congestive heart failure)
- There was a language barrier (patient did not speak English)
- There were personal or social issues (patient declined therapy)

Task factors
- [X] Was there a protocol available to guide therapy? (protocol for mixing medication concentrations is posted above the medication bin)
- Were test results available to help make care decisions? (stat blood glucose results were sent in 20 mins)
- Were test results accurate? (four diagnostic tests done; only MRI results needed quickly, results faxed)

Caregiver factors
- Was the caregiver fatigued? (tired at the end of a double shift, nurse forgot to take a blood pressure reading)
- Did the caregiver's outlook/perception of own professional role affect this event? (doctor followed up to make sure cardiac consult was done expeditiously)
- Was the physical or mental health of the provider a factor? (provider having personal issues and missed hearing a verbal order)

Team factors
- Was verbal or written communication during hand-offs clear, accurate, clinically relevant, and goal directed? (oncoming care team was debriefed by outgoing staff regarding patient's condition)
- [X] Was verbal or written communication during care clear, accurate, clinically relevant, and goal directed? (staff member was comfortable expressing his or her concern regarding high medication dose)
- Was verbal or written communication during crisis clear, accurate, clinically relevant, and goal directed? (team leader quickly explained and directed his or her team regarding the plan of action)
- Was there a cohesive team structure with an identified and communicative leader? (attending physician gave clear instructions to the team)

Training and education factors
- [X] Was provider knowledgeable, skilled, and competent? (nurse knew dose ordered was not standard for that medication)

(Cont'd)


often we harm patients, an outcome measure; and b) how often we do what we should, a process measure. Nonrate measures were a) how we know we learned from mistakes, a structural measure; and b) how well we created a culture of safety, a context measure. We will discuss each of these questions in turn.

How Often Do We Harm Patients? Most stakeholders, including physicians, want to accurately and scientifically measure outcomes. However, variation in an outcome can be influenced by event definitions, data collection methods, and many other variables (19, 20). It is difficult to differentiate between preventable and inevitable harm, with efforts to do so demonstrating low reliability and validity (21). Yet for a measure of harm to inform practice, most of the harm should be preventable with the use of interventions. Currently, catheter-related bloodstream infections (CRBSIs) are one of the few, if not the only, potentially valid measures of harm. Although some of these infections are inevitable, most are preventable, and rates can be easily measured and significantly reduced (22).

For CRBSI, the Centers for Disease Control and Prevention (CDC), through the National Nosocomial Infection Surveillance (NNIS) system, has created standardized definitions, methods of data collection, and some case-mix adjustment to assist with this harm measurement (23, 24). Despite standardization, there are still potential problems with the NNIS approach (25). For example, the denominator for calculating the rate of CRBSI is

Table 2—Continued

Training and education factors (continued)
- [X] Did provider follow the established protocol? (provider pulled protocol to ensure steps were followed)
- [X] Did the provider seek supervision or help? (new nurse asked preceptor to help her or him mix medication concentration)

Information technology/CPOE factors
- Did the computer/software program generate an error? (heparin was chosen, but digoxin was printed on the order sheet)
- Did the computer/software malfunction? (computer shut down in the middle of provider's order entry)
- Did the user check what he or she entered to make sure it was correct? (provider initially chose 0.25 mg but caught his or her error and changed it to 0.025 mg)

Local environment
- Was there adequate equipment available, and was the equipment working properly? (there were two extra ventilators, stocked and recently serviced by clinical engineering)
- Was there adequate operational (administrative and managerial) support? (unit clerk out sick, but extra clerk sent to cover from another unit)
- Was the physical environment conducive to enhancing patient care? (all beds were visible from the nurse's station)
- Was there enough staff on the unit to care for patient volume? (nurse ratio was 1:1)
- Was there a good mix of skilled and new staff? (there was a nurse orientee shadowing a senior nurse and an extra nurse on to cover the senior nurse's responsibilities)
- Did workload affect the provision of good care? (nurse caring for three patients because nurse went home sick)

Institutional environment
- Were adequate financial resources available? (unit requested experienced patient transport team for critically ill patients and one was made available the next day)
- Were laboratory technicians adequately in-serviced/educated? (lab tech was fully aware of complications related to thallium injection)
- Was there adequate staffing in the laboratory to run results? (there were three dedicated laboratory technicians to run stat results)
- Were pharmacists adequately in-serviced/educated? (pharmacists knew and followed the protocol for stat medication orders)
- Did pharmacy have a good infrastructure (policy, procedures)? (it was standard policy to have a second pharmacist do an independent check before dispensing medications)
- Was there adequate pharmacy staffing? (there was a pharmacist dedicated to the ICU)
- Does hospital administration work with the units regarding what and how to support their needs? (guidelines established to hold new ICU admissions in the ER when beds not available in the ICU)

III. How will you reduce the likelihood of this defect happening again?

Action 1: Develop a protocol for line placement
  Who will lead this effort: Task force led by the ICU director
  Follow-up date: Instituted
  How will you know risk is reduced (action items): Put protocol on current central line checklist, which is completed on all patients having a line placed or rewired

Action 2: Develop an educational tool
  How will you know risk is reduced (action items): Require that physicians complete the Web-based examination for line placement

Action 3: Develop competency requirements
  Who will lead this effort: Task force led by the ICU director
  Follow-up date: Instituted
  How will you know risk is reduced (action items): Physicians demonstrate competency by successfully placing a minimum of five lines under supervision

MRI, magnetic resonance image; CPOE, computer physician order entry; ICU, intensive care unit.


catheter days. Despite differing risks, patients with single and multiple catheters are both counted as one catheter day according to the NNIS definition. Moreover, femoral lines, which have the highest rate of infection, are not included in the current definition of a catheter day (26). Fortunately, the CDC is updating its surveillance system and NNIS to address some of these issues.
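The denominator issue above can be sketched with hypothetical numbers (the census data are ours, purely illustrative): counting one catheter-day per patient per day, regardless of how many lines the patient has, shrinks the denominator relative to counting each device, and so inflates the reported rate.

```python
# Hypothetical daily census: catheters per patient, one inner list per day.
days = [
    [1, 2, 1],   # day 1: three patients with lines; one patient has two lines
    [1, 1],      # day 2: two patients with lines
    [3, 1, 1],   # day 3: three patients; one has three lines
]

infections = 1  # CRBSIs observed over the period (illustrative)

# NNIS-style denominator: one catheter-day per patient-day, however many lines.
nnis_catheter_days = sum(len(day) for day in days)   # 8 patient catheter-days

# Device-level denominator: count every catheter in place each day.
device_days = sum(sum(day) for day in days)          # 11 device-days

rate_nnis = 1000 * infections / nnis_catheter_days   # per 1,000 catheter-days
rate_device = 1000 * infections / device_days
```

Here the same single infection yields 125.0 per 1,000 catheter-days under the patient-level convention but about 90.9 per 1,000 under a device-level count, illustrating why the definition of a catheter-day matters when comparing units or hospitals.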

Furthermore, an infrastructure is needed to obtain these data. Most hospitals have at least one full-time employee to monitor these infections. Other measures of harm rates (i.e., outcome rates) besides infections may be biased and should be used cautiously. For example, most postoperative myocardial infarctions and deep venous thromboses are silent. If clinically identified rates of these postoperative complications are monitored without broad surveillance of all patients, both would be subject to surveillance bias.

To obtain other measures of harm, therefore, we must create standardized definitions and surveillance systems and select harms that are to a large extent preventable (27). This will require resources, which means that health care payers will have to decide whether the benefits of measurement outweigh the costs. In the absence of standardized definitions and surveillance systems, clinically recognized events and measures of harm will likely be biased.

How Often Do We Do What We Should? Despite our best efforts, some patients will develop complications. Patient factors, bias in measurement, or failure to use interventions that may prevent the complication can all play a role in complications. In a recent example in the literature, patients with acute myocardial infarction had a higher mortality rate in December compared with other months despite the use of similar evidence-based interventions (28). Therefore, it may be more effective to ensure that patients reliably receive interventions known to prevent complications and to seek new knowledge on other effective interventions, rather than to simply focus on improving outcomes themselves. Existing process measures focus, to a large extent, on adult medical patients with acute myocardial infarction, congestive heart failure, and pneumonia, which in many hospitals only account for a small percentage of discharges (7). Health care needs many more scientifically sound process measures than are currently available. The science of creating quality measures is currently underdeveloped compared with other types of clinical research.

How Do We Know We Learned From Our Defects? In addition to developing process measures for individual therapies, providers should seek to identify structural measures that affect patient safety. For example, staffing ICUs with physicians trained in critical care is strongly associated with a 30% relative reduction in hospital mortality (29). Yet 80% of hospitals lack critical care physician staffing, resulting in approximately 134,000 deaths annually in the United States (30). The science of linking structural measures to patient outcomes is currently immature, with only a handful of clear examples, such as pharmacists in the ICU and nurse-to-patient ratios. We need continued research support to identify effective interventions (including those implemented at the organizational level) and to ensure that patients reliably receive those interventions.

Outside of these evidence-based structural measures, how can hospitals begin to know whether they have implemented appropriate structural interventions? One method is to evaluate whether they learned from their defects, a defect being defined as something you would not want to have happen again. A tool to help caregivers learn from defects is outlined in Table 2, and a summary of that process is described in Figure 2.

Defects can be identified from a variety of venues, including morbidity and mortality conferences, liability claims, incidents in PSRSs, and asking staff how they think the next patient will be harmed. In general, it is not feasible or scientifically sound to obtain rates of these defects, yet the defect is itself informative. Health care organizations may be better served by monitoring how often they learned from these defects, rather than counting the occurrence of defects, and by making significant system improvements in response to a few incidents rather than superficial changes to many (31).

Hospitals can assess whether they are safer by evaluating the extent to which they implemented interventions to prevent the event from recurring. For example, after a sentinel event, hospitals often conduct a root cause analysis, the output of which is recommendations to prevent the mistake from recurring. How do we know whether we learned from these events? Let's consider an event of an air embolus following central line removal (31). The investigation led to several recommendations designed to improve patient safety. The recommendations included creating a task force to develop educational materials for caregivers, implementing that curriculum, creating standards to determine caregiver competency in performing the procedure, monitoring compliance with education and competency standards, and improving teamwork among physicians and nurses.

Figure 2. One-page summary generated from the How to Investigate a Defect tool, used to disseminate information about the factors contributing to an incident and what efforts were made to prevent a similar incident from occurring in the future. ICU, intensive care unit.

Which of these recommendations, if any, should be measured, and how should it be done? In general, we can measure the presence of a policy or program, staff's awareness of the policy or program, and whether the policy or program is actually used. If the policy involves communication among caregivers, we would likely need to observe team behavior to determine whether communication improved (32). Obtaining information regarding communication from the medical record would likely be biased.

There is no single correct measure to evaluate whether we learned from events, and what is measured should be influenced by the risks, including costs and benefits. The closer you move to individual patients or providers, the more resource intensive the measurement becomes and the more likely the measure will be valid. For defects where patients suffered significant harm, it may be beneficial to measure whether recommended changes are actually being used. For others, the presence of a policy may be sufficient.

Significant work redesign (i.e., system change) takes resources; re-education alone will probably do little to mitigate hazards. Any system change, such as introducing computerized physician order entry, may defend against present hazards but also introduce new hazards that need to be identified and mitigated (33). Further research is needed to develop tools to determine whether hazards have been mitigated.

Have We Created a Safe Culture? Other industries and patient safety efforts have taught us that the context in which care is delivered, often referred to as safety culture, is an important influence on patient outcomes (34). Safety culture is important, measurable, and improvable (35, 36) (www.jcaho.org). Although the evidence to support the validity of measuring culture is emerging, safety culture appears to measure how caregivers communicate and interact. Failures in communication are the most common factor that contributes to sentinel events and errors (www.jcaho.org). Validated tools to measure safety culture, such as the rigorous and commonly used Safety Attitudes Questionnaire (SAQ) (36–38) (JB Sexton, EJ Thomas, RL Helmreich, unpublished data), evaluate caregivers' perceptions of teamwork and safety climate. Higher safety culture scores, measured with the SAQ, are associated with lower rates of nurse turnover, CRBSI, decubitus ulcers, and in-hospital mortality (B Sexton, personal communication, September 15, 2005).

Figure 3. Components of the ventilator bundle, showing percent compliance with each quality measure for ICU A, ICU B, and ICU C. Prevention of ventilator-associated pneumonia (VAP) defined as percent of ventilator days on which the head of the bed is elevated ≥30°. Peptic ulcer disease (PUD) prophylaxis defined as percent of ventilator days on which the patient received peptic ulcer disease prophylaxis. Deep venous thrombosis (DVT) prophylaxis defined as percent of ventilator days on which the patient received deep venous thrombosis prophylaxis. Extubation assessment defined as percent of ventilator days on which the patient was evaluated by a rapid shallow breathing index or trial of spontaneous ventilation. Appropriate sedation defined as percent of ventilator days on which the patient could follow simple commands if he or she received sedation.
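Figure 3 scores each bundle element as the percent of ventilator days on which the process was delivered. A minimal sketch of this per-day scoring, with invented ventilator-day data; the composite here pools all eligible process opportunities, which is one common convention (Table 3 instead reports the percent of patients receiving the whole bundle):

```python
# Each record is one ventilator day; True/False means the process was or was
# not delivered on a day the patient was eligible, None means not eligible.
days = [
    {"hob": True,  "pud": True,  "dvt": True, "wean": True, "sed": None},
    {"hob": True,  "pud": False, "dvt": True, "wean": True, "sed": True},
    {"hob": False, "pud": True,  "dvt": True, "wean": True, "sed": True},
]

def compliance(days, measure):
    """Percent of eligible ventilator days on which the process was delivered."""
    eligible = [d[measure] for d in days if d[measure] is not None]
    return 100.0 * sum(eligible) / len(eligible)

def composite(days):
    """Percent of all eligible process opportunities that were met."""
    flat = [v for d in days for v in d.values() if v is not None]
    return 100.0 * sum(flat) / len(flat)

for m in ("hob", "pud", "dvt", "wean", "sed"):
    print(m, round(compliance(days, m)))
# hob and pud score 67; dvt, wean, and sed score 100
print("composite", round(composite(days)))  # → composite 86
```

Excluding ineligible days from the denominator matters: scoring sedation against all three days, rather than the two on which the patient received sedation, would understate compliance.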

Table 3. Example of a score card for patient safety in three intensive care units at The Johns Hopkins Hospital

Domain                                                 Unit A    Unit B    Unit C    Overall
How often do we harm patients?
    No. of BSI/1000 catheter days (No. of infections)  3.4 (5)   1.1 (2)   1.4 (2)   1.9 (9)
How often do we provide needed interventions?
    % patients receiving ventilator bundle             89        95        93        91
How often do we learn from defects?
    % months unit learned from a defect                100       100       100       100
Do we have a safe culture? (% of units in which 80% of staff report positive teamwork and safety climate, and the average score in each ICU; overall shown as unit ratio and %)
    Safety climate                                     52a       89a       73a       1/3 (33)b
    Teamwork climate                                   53a       85a       88a       2/3 (67)b

BSI, bloodstream infection; ICU, intensive care unit. aPercent reporting good climate (e.g., agree slightly or agree strongly); bpercent of ICUs reporting ≥80% agreement.
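The harm rates in Table 3 are bloodstream infections per 1,000 catheter days, and the overall rate pools numerators and denominators across units rather than averaging unit rates. A minimal sketch of that arithmetic, with catheter-day denominators invented for illustration (the article does not report them):

```python
def bsi_rate(infections, catheter_days):
    """Bloodstream infections per 1,000 catheter days."""
    return 1000.0 * infections / catheter_days

# (infections, catheter days); denominators are invented for illustration only.
units = {"A": (5, 1470), "B": (2, 1820), "C": (2, 1430)}
for name, (inf, days) in units.items():
    print(name, round(bsi_rate(inf, days), 1))  # → A 3.4, B 1.1, C 1.4

# The overall rate pools counts and days, rather than averaging the unit rates.
total_inf = sum(i for i, _ in units.values())
total_days = sum(d for _, d in units.values())
print("overall", round(bsi_rate(total_inf, total_days), 1))  # → overall 1.9
```

Pooling the denominators weights each unit by its exposure (catheter days), which is why the overall rate of 1.9 is closer to the larger, lower-rate units than a simple mean of the three unit rates would be.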


By measuring culture using the SAQ, providers can obtain a valid and feasible assessment of teamwork and safety culture in their clinical area or professional discipline. Scores on the SAQ are presented as the percent of staff in a patient care area who report positive safety and teamwork climate, or the percent of patient care areas within a department or hospital in which 80% of staff report positive safety and teamwork climate. Our goal is for 80% of staff in all units to report positive safety and teamwork climates.
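The two SAQ summary statistics described here, the percent of staff in an area reporting a positive climate and the percent of areas in which at least 80% of staff do so, can be sketched as follows (survey responses are invented; "positive" is taken to mean agree slightly or agree strongly, per the Table 3 footnote):

```python
POSITIVE = {"agree slightly", "agree strongly"}

def pct_positive(responses):
    """Percent of staff in one care area reporting a positive climate."""
    return 100.0 * sum(r in POSITIVE for r in responses) / len(responses)

def pct_units_at_goal(unit_responses, goal=80.0):
    """Percent of care areas in which at least `goal` percent of staff
    report a positive climate."""
    scores = [pct_positive(r) for r in unit_responses.values()]
    return 100.0 * sum(s >= goal for s in scores) / len(scores)

# Invented survey responses for three ICUs.
units = {
    "ICU A": ["agree strongly"] * 5 + ["disagree slightly"] * 5,     # 50% positive
    "ICU B": ["agree slightly"] * 9 + ["neutral"],                   # 90% positive
    "ICU C": ["agree strongly"] * 7 + ["disagree strongly"] * 3,     # 70% positive
}
print(round(pct_units_at_goal(units)))  # → 33: only 1 of 3 units meets the goal
```

The unit-level rollup deliberately hides how far a unit falls short: an area at 79% and an area at 20% both count as "not at goal," which keeps attention on achieving broadly positive climate rather than on average scores.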

PUTTING THIS COMPREHENSIVE APPROACH TO WORK

Although it takes resources and the science is evolving, we have applied this approach to nearly 200 ICUs in Michigan, New Jersey, and Rhode Island (18) and throughout the Johns Hopkins Hospital. Table 3 describes the score card for three units at the Johns Hopkins Hospital. To evaluate harm, we measured CRBSI rates. To evaluate how often we do what we should, we assessed compliance with the ventilator bundle as outlined in Figure 3 (39). To evaluate whether we learned from defects, we tracked the number of months in which each ICU completed the Learning From Defects form (40). To evaluate safety culture, we measured the percent of ICUs in which ≥80% of staff reported positive safety and teamwork climate.

Overall, 100% of units learned from one defect per month, and evidence-based interventions are used 91% of the time. These include the ventilator bundle outlined in Table 1. In addition, 33% of units reported good safety climate and 67% good teamwork climate (≥80% is optimal).

FUTURE DIRECTIONS

There is much to learn before organizations can provide a meaningful answer to the question, Are our patients safer? The science of measuring safety is slowly maturing, and more validated measures of harm are needed. In addition, we must maximally exploit information technology to increase the feasibility of data collection.

Some, but not all, measures of safety lend themselves to valid rates, and this distinction should be made clear. We have described an approach that we hope helps health care organizations begin to answer, for their patients, staff, board, and community, the question, How do we know we are safer? Although we applied this to ICUs, we are currently applying this model throughout our health system.

ACKNOWLEDGMENT

We acknowledge Sorrel King for tireless work to help improve patient safety and for being the guiding light in our work and in representing patients and their families, the JHH team members, and families who help us do our best.

REFERENCES

1. Kohn L, Corrigan J, Donaldson M (Eds): To Err Is Human: Building a Safer Health System (Institute of Medicine Report). Washington, DC, National Academy Press, 1999

2. Institute of Medicine: Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC, National Academy Press, 2001

3. Curtis JR, Cook DJ, Wall RJ, et al: Intensive care unit quality improvement: A "how-to" guide for the interdisciplinary team. Crit Care Med 2006; 34:211–218

4. Leape LL, Berwick D: Five years after To Err Is Human: What have we learned? JAMA 2005; 293:2384–2390

5. Wachter RM: The end of the beginning: Patient safety five years after "To Err Is Human." Health Aff (Millwood) 2004; Suppl Web Exclusives:W4-534–45

6. Brennan TA, Gawande A, Thomas E, et al: Accidental deaths, saved lives, and improved quality. N Engl J Med 2005; 353:1405–1409

7. Jha AK, Li Z, Orav EJ, et al: Care in U.S. hospitals—The Hospital Quality Alliance program. N Engl J Med 2005; 353:265–274

8. McGlynn EA: An evidence-based national quality measurement and reporting system. Med Care 2003; 41:I8–I15

9. McGlynn EA: Selecting common measures of quality and system performance. Med Care 2003; 41:I39–I47

10. McGlynn EA: Choosing and evaluating clinical performance measures. Journal of Quality Improvement 1998; 24:470–479

11. McGlynn EA, Asch S: Developing a clinical performance measure. Am J Prev Med 1998; 14:14–21

12. Gordis L: Epidemiology. Philadelphia, Saunders, 2004

13. Aspden P, Corrigan JM, Wolcott J, et al: Institute of Medicine: Patient Safety: Achieving a New Standard for Care. Washington, DC, National Academies Press, 2004, pp 216–219

14. Holzmueller CG, Pronovost PJ, Dickman F, et al: Creating the web-based Intensive Care Unit Safety Reporting System (ICUSRS). J Am Med Inform Assoc 2005; 12:130–139

15. Lubomski L, Pronovost PJ, Thompson DA, et al: Building a better incident reporting system: Perspectives from a multisite project. J Clin Outcomes Mgmt 2004; 11:275–280

16. Barach P, Small SD: Reporting and preventing medical mishaps: Lessons from non-medical near miss reporting systems. BMJ 2000; 320:759–763

17. Needham DM, Thompson DA, Holzmueller CG, et al: A system factors analysis of airway events from the Intensive Care Unit Safety Reporting System (ICUSRS). Crit Care Med 2005; 33:1701–1707

18. Pronovost PJ, Goeschel C: Improving ICU care: It takes a team. Healthcare Executive 2005; Mar/Apr:14–22

19. Lilford R, Mohammed MA, Spiegelhalter D, et al: Use and misuse of process and outcome data in managing performance of acute medical care: Avoiding institutional stigma. Lancet 2004; 363:1147–1154

20. Lilford R, Mohammed MA, Braunholtz D, et al: The measurement of active errors: Methodological issues. Qual Saf Health Care 2004; 12(Suppl II):ii8–ii12

21. Hayward RA, Hofer TP: Estimating hospital deaths due to medical errors: Preventability is in the eye of the reviewer. JAMA 2001; 286:415–420

22. Berenholtz S, Pronovost PJ, Lipsett PA, et al: Eliminating catheter-related bloodstream infections in the intensive care unit. Crit Care Med 2004; 32:2014–2020

23. CDC: National Nosocomial Infections Surveillance (NNIS) System report, data summary from October 1986–April 1998, issued June 1998. Am J Infect Control 1998; 26:522–533

24. National Nosocomial Infections Surveillance (NNIS) System Report, data summary from January 1992 through June 2003, issued August 2003. Am J Infect Control 2003; 31:481–498

25. Wilson APR, Gibbons C, Reeves BC, et al: Surgical wound infection as a performance indicator: Agreement of common definitions of wound infection in 4773 patients. BMJ 2004; 329:720–725

26. Merrer J, De Jonghe B, Golliot F, et al; French Catheter Study Group in Intensive Care: Complications of femoral and subclavian venous catheterization in critically ill patients: A randomized controlled trial. JAMA 2001; 286:700–707

27. Needham DM, Dowdy DW, Mendez-Tellez PA, et al: Studying outcomes of intensive care unit survivors: Measuring exposures and outcomes. Intensive Care Med 2005; 31:1153–1160

28. Meine TJ, Patel MR, DePuy V, et al: Evidence-based therapies and mortality in patients hospitalized in December with acute myocardial infarction. Ann Intern Med 2005; 143:481–485

29. Pronovost PJ, Angus DC, Dorman T, et al: Physician staffing patterns and clinical outcomes in critically ill patients: A systematic review. JAMA 2002; 288:2151–2162

30. Pronovost PJ, Rinke ML, Emery K, et al: Interventions to reduce mortality among patients treated in intensive care units. J Crit Care 2004; 19:158–164


31. Pronovost PJ, Wu AW, Sexton JB: Acute decompensation after removing a central line: Practical approaches to increasing safety in the intensive care unit. Ann Intern Med 2004; 140:1025–1033

32. Kopp BJ, Erstad BL, Allen ME, et al: Medication errors and adverse drug events in an intensive care unit: Direct observation approach for detection. Crit Care Med 2006; 34:415–425

33. Koppel R, Metlay J, Cohen A, et al: Role of computerized physician order entry systems in facilitating medication errors. JAMA 2005; 293:1197–1203

34. Sexton JB, Helmreich R, Thomas E: Error, stress and teamwork in medicine and aviation: Cross sectional surveys. BMJ 2000; 320:745–749

35. Pronovost PJ, Weast B, Rosenstein B, et al: Implementing and validating a comprehensive unit-based safety program. J Patient Safety 2005; 1:33–40

36. Sexton JB, Helmreich RL, Neilands TD, et al: The Safety Attitudes Questionnaire: Psychometric properties, benchmarking data, and emerging research. BMC Health Serv Res 2006; 6:44 [Epub doi:10.1186/1472-6963-6-44]

37. Sexton JB, Helmreich RL: Using Language in the Cockpit: Relationships With Workload and Performance. Houston, University of Texas, 2003

38. Sexton JB, Thomas EJ, Pronovost PJ: The context of care and the patient care team: The Safety Attitudes Questionnaire. In: Building a Better Delivery System: A New Engineering/Health Care Partnership. Reid PP, Compton WD, Grossman JH, et al (Eds). Washington, DC, National Academies Press, 2005, pp 119–123

39. Berenholtz SM, Milanovich S, Faircloth A, et al: Improving care for the ventilated patient. Jt Comm J Qual Saf 2004; 30:195–204

40. Pronovost PJ, Holzmueller CG, Martinez E, et al: A practical tool to learn from defects in patient care. Jt Comm J Qual Patient Saf 2006; 32:102–108
