
International Journal for Quality in Health Care 2004; Volume 16, Supplement 1: pp. i11–i25. DOI: 10.1093/intqhc/mzh032


Using clinical indicators in a quality improvement programme targeting cardiac care

ANNABEL HICKEY1, IAN SCOTT1, CHARLES DENARO2, NEIL STEWART1, CAMERON BENNETT2 AND THERESE THEILE2

1Department of Internal Medicine, Princess Alexandra Hospital, Brisbane, 2Department of Internal Medicine, Royal Brisbane Hospital, Brisbane, Australia

Abstract

Rationale. The Brisbane Cardiac Consortium, a quality improvement collaboration of clinicians from three hospitals and five divisions of general practice, developed and reported clinical indicators as measures of the quality of care received by patients with acute coronary syndromes or congestive heart failure.

Development of indicators. An expert panel derived indicators that measured gaps between evidence and practice. Data collected from hospital records and general practice heart-check forms were used to calculate process and outcome indicators for each condition. Our indicators were reliable (kappa scores 0.7–1.0) and widely accepted by clinicians as having face validity. Independent review of indicator-failed, in-hospital cases revealed that, for 27 of 28 process indicators, clinically legitimate reasons for withholding specific interventions were found in <5% of cases.

Implementation and results. Indicators were reported every 6 months in hospitals and every 10 months in general practice. To stimulate practice change, we fed back indicators in conjunction with an education programme, and provided, when requested, customized analyses to different user groups. Significant improvement was seen in 17 of 40 process indicators over the course of the project.

Lessons learned and future plans. Lessons learnt included the need to: (i) ensure brevity and clarity of feedback formats; (ii) liberalize patient eligibility criteria for interventions in order to maximize sample size; (iii) limit the number of data items; (iv) balance effort of indicator validation with need for timely feedback; (v) utilize more economical methods of data collection and entry such as scannable forms; and (vi) minimize the burden of data verification and changes to indicator definitions. Indicator measurement is being continued and expanded to other public hospitals in the state, while divisions of general practice are exploring lower-cost methods of ongoing clinical audit.

Conclusion. Use of clinical indicators succeeded in supporting clinicians to monitor practice standards and to realize change in systems of care and clinician behaviour.

Keywords: cardiac, clinical indicators, performance measures, quality improvement

Project rationale and conception

Improving quality of health care requires performance measurement and feedback. Various programmes targeting cardiac care have combined the use of clinical indicators with other quality improvement interventions in identifying and closing the gaps between routine care and evidence-based best practice [1–4].

However, in the successful development and use of clinical indicators, several dimensions need to be carefully considered: (i) defining the purpose of measurement; (ii) determining target standards of care; (iii) formulating data collection methods; (iv) converting data into usable clinical indicators; and (v) maximizing the impact of indicator feedback. In this report we describe our experience in undertaking these steps within a multifaceted quality improvement programme targeting in-hospital and post-hospital care of patients admitted with acute coronary syndromes (ACS) or congestive heart failure (CHF). An overview of our programme is provided in Box 1.

Address reprint requests to A. Hickey, Clinical Services Evaluation Unit, Princess Alexandra Hospital, Ipswich Road, Woolloongabba, Brisbane, Queensland, Australia 4102. E-mail: [email protected]


Development and design of clinical indicators

Defining the purpose of measurement

We wanted indicators that would guide and motivate internal quality improvement activities in hospital and general practice, and not be used merely for monitoring purposes by external agencies. To engage health care providers we wanted our indicators to be clinically relevant to specific processes of care that demonstrated an accepted link with desired health outcomes [5]. We aimed to present aggregate indicator data at the level of facility or, where sample size was large enough, at a departmental or group general practice level. As the target groups were identifiable groups of clinicians, we considered it vital that the indicators were robustly accurate.

Determining standards and indicators using multidisciplinary teams

We specified the target standards of care in a set of clinical practice guideline recommendations derived from a systematic review of the research evidence [6–8]. These guidelines were developed over a period of 5 months, using formal group consensus methods, by multidisciplinary panels of general physicians (n = 5), cardiologists (n = 5), general practitioners (GPs; n = 12), clinical pharmacists (n = 2), and advanced physician trainees (n = 3). These panels contained expertise in clinical content and measurement, and represented the interests of potential users of the indicators [9]. The development work was coordinated by the programme manager in liaison with the eight-member Clinical Guideline Working Group.

The same multidisciplinary team derived sets of clinical indicators from guideline recommendations and a review of indicators published by other groups [10,11]. Indicators, relating to both process (what care was given to whom) and outcome (what was the end result of care), were designed to be as explicit and objective as possible and were developed by group consensus.

Methods

Data collection

Data used to calculate in-hospital care indicators were abstracted by trained nurses from hospital medical records retrieved shortly after discharge. Post-hospital care data were collected using a 1-page heart-check form completed during GP consultations at 3, 6, and 12 months post-discharge. Examples of data forms used are provided in supplementary data. We restricted data elements for each indicator to those that were objectively verifiable and easily retrieved from medical records or GP case notes.

Conversion of data into useful clinical indicators

Improving care processes known to have a direct link with health outcomes was our prime objective. To this end, process indicators predominated over outcome indicators, particularly as large, risk-adjusted samples are required to detect significant changes in outcome indicators (such as in-hospital death or 30-day readmission) given their relatively low event rates (in our experience <15%) [12].

Box 1 Overview of the Brisbane Cardiac Consortium

The Brisbane Cardiac Consortium (BCC) was one of four quality improvement consortia in Australia to receive 2 years of funding from the federal government under the Clinical Support Systems Program (CSSP) auspiced by the Royal Australasian College of Physicians. The program targeted the in-hospital and post-hospital care of patients admitted with acute coronary syndromes (ACS) and congestive heart failure (CHF). The program ran from October 1, 2000, to August 31, 2002, and involved three teaching hospitals [Royal Brisbane (800 beds), Princess Alexandra (700 beds) and Queen Elizabeth II Hospitals (260 beds)] and four Divisions of General Practice within metropolitan Brisbane.

Program objectives were to optimize care using systematic processes of performance measurement and feedback combined with various quality improvement interventions. The latter included dissemination of locally developed clinical practice guidelines and other forms of decision support, academic detailing of patients and clinicians by clinical pharmacists, provision of patient self-management resources, and close liaison between hospital and general practitioners and community pharmacists regarding future patient management.

Primary outcome measures were changes in clinical indicators between baseline (1/10/00–17/4/01) and post-intervention (15/3/02–30/8/02) periods as measured on all consecutive patients who met pre-specified case definitions.* Post-hospital care indicators were collected on a subset of patients who satisfied eligibility criteria** and gave informed consent to post-hospital follow-up.

The program saw participation of 2495 patients (1584 with ACS; 911 with CHF), 10 cardiologists, 17 general physicians, 20 emergency physicians, 5 clinical pharmacists, 50 medical registrars, 200 residents, 150 nursing staff, and 1020 general practitioners.

*ACS: clinical diagnosis of ACS + elevation of cardiac enzyme markers (troponin or creatine kinase levels elevated to more than 1.5 and 2.0 times upper normal reference range respectively). CHF: clinical diagnosis and at least 3 key clinical signs (elevated jugular venous pressure, gallop rhythm, chest crackles to mid-zones bilaterally, pedal oedema, or pulmonary oedema or cardiomegaly on chest X-ray).
**Absence of major co-morbidity (physical or psychological) which precluded ability to self-care, permanent resident in the greater Brisbane area, community-living, and ability to speak English.



We routinely gave clinicians information on 28 in-hospital and 12 post-hospital process indicators: 13 and six for ACS, and 15 and six for CHF, respectively (see Tables 1 and 2). We wanted our process indicators to accurately discriminate between eligible patients who should receive interventions (sensitivity) and ineligible patients who should not (specificity) [13].

We calculated each indicator as a proportion. In-hospital process indicators comprised the number of eligible patients actually receiving the intervention (numerator) over the number of patients eligible to receive it (denominator). Eligibility for each intervention was assessed as definite indication with no relative or absolute contra-indications as ascertained from hospital records. In contrast, post-discharge indicators were simply rates of overall prescription of drugs or non-drug interventions among all evaluable patients that had consented to follow-up and had been reviewed by GPs. This simpler format was deliberately designed to reduce the opportunity costs to GPs of collecting more detailed data about patient eligibility. Outcome indicators for in-hospital care comprised in-hospital death, 30-day same-cause readmission rate, and median length of stay. No outcome indicators for post-hospital care were included.
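To make this calculation concrete, the short Python sketch below shows one way an in-hospital process indicator could be computed as a proportion of eligible patients treated. It is an illustration only, with hypothetical field names; it is not the Consortium's actual data system.

```python
# Illustrative sketch only: one way to compute an in-hospital process indicator
# as numerator/denominator, where the denominator is patients with a definite
# indication and no recorded contra-indication (field names are hypothetical).
from dataclasses import dataclass

@dataclass
class Patient:
    has_indication: bool         # e.g. chronic atrial fibrillation (for warfarin)
    has_contraindication: bool   # any relative or absolute contra-indication
    received_intervention: bool  # abstracted from the hospital record

def process_indicator(patients: list[Patient]) -> tuple[int, int, float]:
    """Return (numerator, denominator, rate) for a single indicator."""
    eligible = [p for p in patients if p.has_indication and not p.has_contraindication]
    numerator = sum(p.received_intervention for p in eligible)
    denominator = len(eligible)
    rate = numerator / denominator if denominator else float("nan")
    return numerator, denominator, rate

# Three admissions: one eligible and treated, one excluded, one eligible but untreated.
cohort = [Patient(True, False, True), Patient(True, True, False), Patient(True, False, False)]
print(process_indicator(cohort))  # (1, 2, 0.5)
```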

We sought to make our indicators credible and capable of eliciting clinician responses by maximizing the following attributes.

Relevance. Our process indicators were locally agreed, evidence-based, and important to both patients and clinicians [14].

Reliability. We minimized measurement error due to variations in sampling, data collection, or analysis [15] using strategies listed in Table 3. Potential for error in collection of post-discharge data by GPs was minimized by use of a 1-page form that contained straightforward instructions and allowed attachment of printouts of tests and medications from the GP's electronic records. Accuracy of discharge medications was confirmed by cross-checking information on the in-hospital and GP forms with that recorded in pharmacy databases.

Re-abstraction audits of 5% of randomly chosen hospital records found the level of agreement between data abstracters and principal investigators to be high: kappa scores were 1.0 for case definition and ≥0.7 for all other items. The accuracy of GP-returned forms could not be verified as GP case notes were inaccessible to independent reviewers. However, scrutiny of returned forms confirmed entries in >70% of all data fields, with 95% completion for medication use.
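For readers unfamiliar with the kappa statistic quoted above, the following sketch shows how chance-corrected agreement between the original abstracter and a re-abstracting investigator could be computed for a single binary data item. The ratings are invented for illustration; this is not the audit dataset or software.

```python
# Cohen's kappa for two raters scoring the same records on one binary item
# (illustrative only; the example ratings below are invented).
def cohens_kappa(rater_a: list[bool], rater_b: list[bool]) -> float:
    """Chance-corrected agreement between two raters on paired binary ratings."""
    assert rater_a and len(rater_a) == len(rater_b), "need paired, non-empty ratings"
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_a = sum(rater_a) / n                    # proportion of 'yes' from rater A
    p_b = sum(rater_b) / n                    # proportion of 'yes' from rater B
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)
    if expected == 1.0:                       # both raters constant: kappa undefined
        return float("nan")
    return (observed - expected) / (1 - expected)

# 20 re-abstracted records with 19 agreements on "beta-blocker prescribed".
abstracter   = [True] * 12 + [False] * 8
investigator = [True] * 12 + [False] * 7 + [True]
print(round(cohens_kappa(abstracter, investigator), 2))  # 0.89
```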

Validity. We validated in-hospital process indicators by independently reviewing charts of all seemingly eligible patients who failed to receive specific interventions in the first and last 6-month periods for those indicators that showed a failure rate of >20% (see below). The only indicator for which clinically legitimate reasons were detected for withholding care in >5% of indicator-failed cases was prescription of warfarin in patients with CHF. Of 113 patients deemed eligible for warfarin, 64 (56%) did not receive the drug; in 43 (67%) of these, review disclosed relative contra-indications such as 'frailty' or 'risk of falls'.

Impact. To influence clinician behaviour, we chose indicators that had the capacity to signal the existence of important evidence-practice gaps and that were potentially amenable to change [16,17]. We presented indicator values in easy-to-interpret graphical and tabular formats, which depicted changes in indicator rates over time (Figures 1 and 2) and which included a reference standard (or target) of 100% of highly eligible patients receiving the stated intervention. In addition, we employed various strategies (see below) to enhance the ability of indicator feedback to stimulate clinician attempts to improve care [18,19].

Maximization of clinical indicator impact

Feedback of in-hospital care indicators occurred at 6-monthly intervals, while feedback of post-discharge (general practice) indicators occurred over two rounds separated by 18 months, as follow-up data were collected more slowly and the patient sample was only one-third that of the in-hospital sample.

Dissemination

We disseminated results via hospital unit meetings, hospital and divisions of general practice newsletters, drug and therapeutics bulletins, grand round presentations, educational meetings, and in-service training sessions. We ensured doctors, nurses, and clinical pharmacists at all levels were exposed to indicator feedback [20].

Feedback partnered with education and discussion

Clinical audit and periodic feedback, in isolation, engender only moderate levels of change [21]. Lack of skills, knowledge, and leadership are some of the known barriers to practice improvement [22]. Consequently, we coupled indicator feedback in hospitals with small group discussions about guideline recommendations and the significance of the indicator findings. Practice improvement was promoted through citing case studies of successful innovation elsewhere, and the formation of interdisciplinary groups that took responsibility for improving specific care processes [23].

Customized indicator analyses

Encouraged by the work of others [17], and in response to local requests, we provided more personalized reports to individual hospital consultants about performance within their units when patient samples were large enough to give statistically meaningful trends. These reports compared each consultant's performance with peers from all three hospitals and included patient demographics to reconcile any differences in case mix.

Feedback from providers and modifications to the original plan

Clinician responses to reported indicators in the first feedback round centred, not surprisingly, on data quality and face validity, with some recipients wanting further verification of data accuracy and justification of the method of indicator calculation. The rigorous research approach to measurement can appear at odds with the more pragmatic process of measurement for improvement [24]. Clinicians experiencing difficulty in accepting indicator feedback were invited to review the records of their indicator-failed patients. Only four clinicians accepted this offer and none asked for the reported indicators to be revised.


Table 1 Process indicators for in-hospital care

Acute coronary syndromes

Presentation
ECG: proportion of patients receiving ECG within 10 minutes of hospital arrival. Inclusions: all patients. Exclusions: nil.
Thrombolysis: proportion of highly eligible patients receiving thrombolysis. Inclusions: patients with ST segment elevation or new left bundle branch block. Exclusions: recent trauma or surgery, cardiopulmonary resuscitation, prior cerebrovascular accident or transient ischaemic attack, uncontrolled hypertension (>180/100 mmHg), coagulopathy, active gastrointestinal bleeding, late (>12 hours) presentation, patient refusal, uncertain diagnosis, or scheduled for primary angioplasty.
Time to lysis: proportion of highly eligible patients receiving thrombolysis within 60 and 30 minutes of hospital arrival. Inclusions: all highly eligible patients receiving thrombolysis. Exclusions: nil.

In-hospital course
Cardiac counselling: proportion of highly eligible patients receiving in-hospital cardiac counselling. Inclusions: all patients. Exclusions: nil.
Assessment of serum lipids: proportion of patients undergoing testing of serum lipids. Inclusions: all patients. Exclusions: nil.

Discharge status1
β-blocker: proportion of highly eligible patients prescribed β-blocker. Inclusions: all patients. Exclusions: asthma, severe chronic obstructive pulmonary disease (FEV1 <50% predicted or 'severe COPD' recorded in medical record), pulse rate at discharge <60 b.p.m., systolic blood pressure at discharge <90 mmHg, cardiogenic shock, adverse drug reaction.
Anti-platelet agents: proportion of highly eligible patients prescribed anti-platelet agents (aspirin or clopidogrel). Inclusions: all patients. Exclusions: active peptic ulcer, recent or past major bleeding, concurrent warfarin therapy, adverse drug reaction to aspirin or clopidogrel.
ACE inhibitors: proportion of highly eligible patients prescribed ACE inhibitors. Inclusions: past history or in-hospital onset of congestive heart failure, LV ejection fraction <40% or LV systolic dysfunction on ECG. Exclusions: serum potassium >5.5 mmol/l, serum creatinine >0.3 mmol/l, systolic blood pressure <90 mmHg, severe aortic stenosis (defined as aortic valve area <0.9 cm2), adverse drug reaction.
Lipid-lowering agents: proportion of highly eligible patients prescribed lipid-lowering agents. Inclusions: random serum cholesterol >4.0 mmol/l. Exclusions: adverse drug reaction.
Cardiac rehabilitation: proportion of highly eligible patients referred to outpatient cardiac rehabilitation programme. Inclusions: all patients.
Coronary angiography: proportion of highly eligible patients undergoing early coronary angiography (during index admission or scheduled within 30 days of discharge). Inclusions: recurrent angina or reinfarction, or NSTEMI or inducible ischaemia on non-invasive testing. Exclusions: primary or rescue angioplasty, age >75 years, current smoker, severe COPD, stroke with hemiplegic deficit, renal disease (serum creatinine ≥0.2 mmol/l), advanced liver disease, advanced cancer, alcohol/drug dependence, living in residential care.
Non-invasive risk stratification: proportion of highly eligible patients undergoing non-invasive risk assessment (during index admission or within 30 days of discharge). Inclusions: all patients. Exclusions: coronary angiography (performed or scheduled) or other exclusions as listed under 'coronary angiography'.

Congestive heart failure

At presentation
Recording underlying causes: proportion of patients for whom underlying causes for heart failure were recorded in hospital notes. Inclusions: all patients. Exclusions: nil.
Recording acute precipitants: proportion of patients for whom acute precipitants were recorded in hospital notes. Inclusions: all patients. Exclusions: nil.
Fluid regimens: proportion of patients for whom a fluid management regimen was explicitly recorded in hospital notes. Inclusions: all patients. Exclusions: nil.
Daily weigh: proportion of patients undergoing daily weighing in assessing effectiveness of diuresis. Inclusions: all patients. Exclusions: nil.
DVT prophylaxis: proportion of patients receiving DVT prophylaxis. Inclusions: all patients. Exclusions: concurrent warfarin therapy.
Dietitian review: proportion of patients receiving dietitian review re: salt and fluid intake. Inclusions: all patients.
Testing of thyroid function: proportion of patients undergoing thyroid function testing. Inclusions: atrial fibrillation as new arrhythmia. Exclusions: nil.
Assessment of left ventricular function: proportion of patients who have undergone left ventricular imaging either during index admission or within previous 12 months. Inclusions: all patients. Exclusions: nil.
Clinical pharmacist review: proportion of patients receiving review by clinical pharmacist. Inclusions: all patients. Exclusions: nil.

Discharge status1
ACE inhibitors: proportion of highly eligible patients prescribed ACE inhibitor at discharge. Inclusions: LV ejection fraction <40% or LV systolic dysfunction on LV imaging. Exclusions: serum potassium >5.5 mmol/l, serum creatinine >0.3 mmol/l, systolic blood pressure <90 mmHg, severe aortic stenosis (defined as aortic valve area <0.9 cm2), renal artery stenosis, adverse drug reaction.
ACE inhibitor dose: proportion of highly eligible patients prescribed ACE inhibitors at discharge who receive target dose. Inclusions: all highly eligible patients receiving ACE inhibitors. Exclusions: nil.
β-blockers: proportion of highly eligible patients prescribed β-blockers at discharge. Inclusions: LV ejection fraction <40% or LV systolic dysfunction on LV imaging. Exclusions: systolic blood pressure <90 mmHg, pulse rate <60 b.p.m., past history of severe COPD (defined as above) or asthma, adverse drug reaction.
Warfarin: proportion of highly eligible patients prescribed warfarin at discharge. Inclusions: chronic atrial fibrillation or flutter. Exclusions: systolic blood pressure >150 mmHg, serum creatinine >0.30 mmol/l, past history of active peptic ulcer disease or recent major bleed (including intracranial bleeding), perceived risk of fall, adverse drug reaction.
Deleterious agents: proportion of patients who did not receive deleterious agents (class I antiarrhythmic agents, verapamil, diltiazem, NSAIDs, tricyclic antidepressants). Inclusions: all patients. Exclusions: nil.
Clinic follow-up: proportion of patients scheduled for outpatient clinic review within 4 weeks of discharge. Inclusions: all patients. Exclusions: nil.

ECG, electrocardiogram; NSTEMI, non-ST segment elevation myocardial infarction; LV, left ventricular; FEV, forced expiratory volume; COPD, chronic obstructive pulmonary disease; ACE, angiotensin-converting enzyme; NSAID, non-steroidal anti-inflammatory drug; DVT, deep venous thrombosis.
1Patients discharged alive and not transferred to other hospitals.


In the second round we emphasized indicators for which improvement in care would result in the greatest gains in patient outcome. We offered potential solutions based on literature reviews and action research conducted elsewhere. We collaborated with various groups in providing more customized data that better informed their attempts to improve care.

This support-giving approach was further consolidated in the third round of feedback, by which time data quality was no longer an issue. Various multidisciplinary groups were now acting across professional boundaries to improve specific aspects of care [23]. At this time, we surveyed a representative sample (n = 150) of hospital clinicians about the usefulness and impact of indicator feedback. Although the response rate was low (25%), >70% of respondents found feedback useful and wished to continue receiving it, while 52% reported that it had resulted in changes to practice.

During the course of our implementation, we modified the project design as a result of clinician feedback and internal re-appraisal.

Brevity and clarity. Our first post-discharge care feedback newsletter to GPs met with almost universal dismissal on the grounds that it contained too much uninterpretable data that could not be assimilated by busy practitioners. Consequently we restricted ourselves to simple tables and boxed key messages (see Figure 3), which elicited positive responses.

Liberalization of eligibility criteria. With certain pharmacological indicators (e.g. warfarin, ACE inhibitors) we found that by using restricted indications and inclusive contra-indications, very few patients demonstrated eligibility for treatment, and thus the indicator was rendered useless. In such cases we created, by consensus, a new version of the indicator with more liberal eligibility criteria that still accorded with evidence-based clinical decision-making.

Balancing effort of validation with the need for timely feedback. We found that repeated minor corrections of the raw data aimed at maximizing validity of the calculated indicator did not make any significant difference to the reported indicator value, and simply delayed its timely release which, in turn, reduced its impact on clinicians. We reconfirmed Hannan's dictum: 'do not wait for better data—perfect should not be the enemy of good' [25].

Table 2 Process indicators for post-discharge care measured at 3, 6, and 12 months following hospital discharge

Acute coronary syndromes
Anti-platelet agent: proportion of patients receiving anti-platelet therapy
β-blockers: proportion of patients receiving β-blockers
ACE inhibitor: proportion of patients receiving ACE inhibitor
Lipid-lowering agents: proportion of patients receiving lipid-lowering therapy
Control of serum lipids: proportion of patients achieving serum cholesterol <4 mmol/l or low density lipoprotein <2.6 mmol/l
Smoking cessation counselling: proportion of active smokers who have received smoking cessation counselling

Congestive heart failure
ACE inhibitor: proportion of patients receiving ACE inhibitor
ACE inhibitor dose: proportion of patients receiving ACE inhibitor who receive target dose
β-blockers: proportion of patients receiving β-blockers
Control of blood pressure: proportion of patients achieving systolic blood pressure ≤130 mmHg and diastolic blood pressure ≤80 mmHg
Weight monitoring: proportion of patients who are being weighed at each follow-up visit in assessing fluid status
Exercise prescription: proportion of patients receiving exercise prescription

ACE, angiotensin-converting enzyme.

Table 3 Strategies to enhance indicator reliability

Develop a specific case definition for patient selection

Construct a core dataset of items with unambiguous operational definitions

Develop a procedure and data entry manual that defines all data fields

Train data collectors in the use of standardized datasheets to collect information using data entry manual

Minimize errors in database design and reporting through regular communication between clinicians, data managers, and statisticians

Develop database logic checks to alert when incorrect data are being entered, e.g. a discharge date that is before an admission date (a sketch of such a check follows this table)

Pilot test the indicators and data collection methods and refine them over a set period (in our case, 3 months) and update data entry manual accordingly

Monitor data reliability with frequent interim analyses and regular meetings of data collectors to ensure consistency of data item interpretation and measurement

Undertake re-abstraction audits of randomly selected hospital charts in assessing level of agreement among abstracters (inter-rater reliability)
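The sketch below illustrates the kind of logic check referred to in Table 3. The record fields and plausibility range are hypothetical; they are not the Consortium's actual dataset definitions.

```python
# Hypothetical example of a database logic check of the type listed in Table 3:
# flag records whose discharge date precedes the admission date, plus a simple
# plausibility range on one clinical field.
from datetime import date

def check_record(record: dict) -> list[str]:
    """Return validation messages for one abstracted admission record."""
    problems = []
    admit, discharge = record.get("admission_date"), record.get("discharge_date")
    if admit and discharge and discharge < admit:
        problems.append("discharge date precedes admission date")
    sbp = record.get("discharge_systolic_bp")
    if sbp is not None and not 50 <= sbp <= 260:
        problems.append(f"implausible discharge systolic blood pressure: {sbp} mmHg")
    return problems

record = {"admission_date": date(2001, 5, 10),
          "discharge_date": date(2001, 5, 7),
          "discharge_systolic_bp": 132}
print(check_record(record))  # ['discharge date precedes admission date']
```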


Avoiding frequent minor changes in indicator definition [26]. While datasets should allow for changes in indicator definitions to reflect evidence-based changes in practice, minor changes served only to confuse clinicians wanting to make comparisons across feedback periods. After the first round we therefore ceased making different versions of the same indicators, except where changes were mandated by the publication of results of important new clinical trials.

Discontinuation of indicator reports based on small samples. We originally envisaged a 3-month rather than a 6-month cycle for in-hospital feedback. While we desired timely indicator feedback, the shorter cycle led to smaller patient samples which, for some indicators with low event rates, introduced excess random error [27]. While statistical process control methods [28,29] could have been used to account for such error, we were concerned that adding such analyses to our feedback formats would have jeopardized their interpretability to the majority of clinicians.
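As a generic illustration of why the shorter cycle was problematic (this is not the programme's own analysis), the sketch below computes approximate p-chart control limits of the kind used in statistical process control: with a small sample per cycle the limits around a baseline rate are wide, so an apparent shift can easily be random variation.

```python
# Approximate 3-sigma control limits for a proportion (p-chart), showing how
# the limits narrow as the sample per feedback cycle grows. The 60% baseline
# rate and sample sizes are illustrative, not programme data.
import math

def p_chart_limits(baseline_rate: float, sample_size: int) -> tuple[float, float]:
    sigma = math.sqrt(baseline_rate * (1 - baseline_rate) / sample_size)
    lower = max(0.0, baseline_rate - 3 * sigma)
    upper = min(1.0, baseline_rate + 3 * sigma)
    return lower, upper

for n in (30, 120, 480):
    lo, hi = p_chart_limits(0.60, n)
    print(f"n = {n:3d}: control limits {lo:.2f} to {hi:.2f}")
```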

Discontinuation of formal feedback sessions. By the third round of feedback, hospital clinicians did not desire feedback to be accompanied by formal discussion sessions, suggesting that external facilitation becomes redundant once the culture of improvement has been established.

Results

Preliminary results

Baseline process indicators suggested suboptimal performance in several areas [30,31], with subsequent improvement in most indicators and significant change in 17 of 40 (Tables 4–7). It is impossible to gauge the extent to which these improvements in care can be attributed solely to clinical indicator feedback among several concurrent quality improvement interventions. However, focus group discussions and results of the previously mentioned questionnaire survey suggested that indicator feedback had stimulated changes in practice.
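The article flags changes at P ≤ 0.05 without stating which test was used. As a hedged illustration only, the sketch below applies a two-proportion z-test (equivalent to an uncorrected chi-square test) to one indicator from Table 4; it reproduces a significant result for the electrocardiograph indicator but is not necessarily the authors' method.

```python
# Two-proportion z-test applied to one Table 4 indicator (electrocardiograph
# within 10 minutes of presentation: 145/238 at baseline vs 170/244 at final
# remeasurement). Illustrative only; the original analysis method is not stated.
import math

def two_proportion_z(x1: int, n1: int, x2: int, n2: int) -> tuple[float, float]:
    """Return (z, two-sided P value) for the difference between two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))   # two-sided tail area of N(0,1)
    return z, p_value

z, p = two_proportion_z(145, 238, 170, 244)
print(f"z = {z:.2f}, P = {p:.3f}")   # approximately z = 2.0, P = 0.04
```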

Lessons learned and future plans

Our experience yielded several lessons that are guiding future plans. Firstly, keep the number of data elements to a minimum. Our dataset for in-hospital care comprised 171 variables for ACS and 204 for CHF, many relating to patient demographics and clinical characteristics, which we reasoned were relevant to determining process-of-care eligibility or conducting risk-adjusted outcome comparisons. In retrospect, a large fraction of these data (and the effort involved in collecting them) added little to the validity of reported indicators and was not required for case-mix adjustment in the presence of case definitions and standardized patient ascertainment. In a further extension of our work, data elements have been reduced to 50 for ACS and 45 for CHF [32]. Secondly, more economical methods of data collection and entry are worth exploring. In the absence of universal electronic medical records, proformas that are read by text recognition software are being trialed [33], which affords more rapid importation of data into databases and a reduction in transcription errors.

Figure 1 Graphical feedback of consecutive series of values of process indicators for in-hospital care of patients with acute coronary syndromes. [Bar chart not reproduced: y-axis, % of eligible patients receiving treatment (0–100); x-axis, discharge-status indicators B-B, A-A, ACE-I, LLA and Heparin, each with four series.] Series 1: patients discharged 17 October 2000 to 16 April 2001 (note: no data on heparin use were collected for patients in this series). Series 2: patients discharged 17 April 2001 to 20 September 2001. Series 3: patients discharged 21 September 2001 to 18 February 2002. Series 4: patients discharged 19 February 2002 to 31 August 2002. A-A, patients receiving anti-platelet agents; ACE-I, patients receiving angiotensin-converting enzyme inhibitors; B-B, patients receiving β-blocker; LLA, patients receiving lipid-lowering agents; Heparin, patients receiving heparin infusion.


Figure 2 Tabular feedback of clinical indicators for in-hospital care of acute coronary syndromes.


Figure 3 Feedback forms relating to post-discharge (general practice) care.



Thirdly, in terms of sustainability, our costings suggest that measurement and feedback systems used ∼50% (or $500 000) of the programme budget, which was expended on 2500 patients (i.e. approximately $200 per patient). Trends in the data suggest that costs are more than likely to be offset by improvements in health outcomes such as the observed 1-day reduction in median length of hospital stay for patients with ACS. The minimization of datasets and the use of previously mentioned scanning technology will significantly reduce this infrastructure cost.

In the Queensland state public hospital system, our revised methods for measuring and reporting clinical indicator data are being continued in the three consortium hospitals and extended to 14 other large hospitals under the auspices of the Cardiac Collaborative of the Queensland Health Collaborative for Healthcare Improvement [32]. Our indicator experience is also assisting the Cardiac Society of Australia and New Zealand (CSANZ) [34] and the National Institute of Clinical Studies (NICS) [35] to develop national indicator sets for ACS and CHF, respectively, and to inform the quality improvement strategies that these organizations may want to sponsor in the future.

As a result of our programme, five divisions of general practice representing all GPs in metropolitan Brisbane have become involved in clinical audit and feedback. However, the long-term continuation of this activity in its current form is proving to be a challenge due to the expense and labour involved. Less than ideal response rates limit the sample size, requiring longer feedback cycles in order to accrue meaningful data.

Performance measurement and feedback are an integral part of quality improvement initiatives. We attempted to develop a sustainable system of indicator collection and reporting as part of a programme targeting care of two common cardiac conditions associated with significant mortality and morbidity. Taking a patient-centred focus, we provided feedback for indicators targeting both hospital and community providers. At an organizational level, our methodology is being extended to multiple public hospitals throughout our state, while the smaller, less resourced divisions of general practice are building on their experience to develop lower cost systems for ongoing clinical audit.


Table 4 Changes in process and outcome indicators for in-hospital care for acute coronary syndromes. Values are baseline (n = 428; 1/10/00–17/4/01) to final remeasurement period (n = 436; 15/2/02–30/8/02).

Presentation
Electrocardiograph within 10 minutes of presentation1: 145/238 (61%) to 170/244 (70%)
Lysis administration: 49/49 (100%) to 39/39 (100%)
Lysis in 60 minutes: 35/49 (71%) to 28/39 (72%)
Lysis in 30 minutes: 17/49 (35%) to 16/39 (41%)

In-hospital course
Lipid levels checked: 311/428 (73%) to 336/436 (77%)
Non-invasive risk stratification tests: 17/57 (30%) to 17/55 (31%)
Coronary angiography: 41/45 (91%) to 43/46 (93%)
In-hospital cardiac counselling1: 168/351 (48%) to 212/371 (57%)

Discharge status
β-blocker: 212/251 (84%) to 202/239 (85%)
Anti-platelet agents: 301/318 (95%) to 321/334 (96%)
ACE inhibitors1: 105/143 (73%) to 113/139 (81%)
Lipid-lowering agents1: 165/202 (82%) to 197/223 (88%)
Referral to outpatient cardiac rehabilitation1: 24/351 (8%) to 64/371 (17%)

Outcomes
In-hospital mortality: 28/379 (7.4%) to 23/394 (5.8%)
Re-admission (same cause) in 30 days: 26/351 (7.4%) to 17/371 (4.6%)
Median length of hospital stay (days)1: 7.0 to 6.0

ACE, angiotensin-converting enzyme. 1Significant change (P ≤ 0.05).


Table 5 Changes in process and outcome indicators for in-hospital care for congestive heart failure. Values are baseline (n = 220; 1/10/00–17/4/01) to re-measurement period (n = 235; 15/2/02–30/8/02).

In-hospital course
Recording underlying causes: 188/220 (85%) to 215/235 (91%)
Recording precipitating factors1: 160/220 (75%) to 211/235 (90%)
Limiting fluids1: 89/220 (40%) to 128/235 (54%)
Weighing daily1: 121/220 (55%) to 148/235 (63%)
DVT prophylaxis1: 31/104 (30%) to 94/128 (73%)
Dietician review: 38/220 (17%) to 44/235 (19%)
Thyroid function test1: 16/31 (52%) to 41/52 (79%)
Echocardiograph: 135/220 (61%) to 164/235 (70%)
Clinical pharmacist review1: 105/191 (55%) to 142/219 (65%)

Discharge status
ACE inhibitors: 58/71 (82%) to 61/71 (86%)
ACE inhibitor target dose: 82/136 (60%) to 108/164 (66%)
β-blocker1: 47/135 (35%) to 88/152 (58%)
Warfarin: 22/50 (44%) to 27/63 (41%)
Deleterious agents (avoidance of): 180/191 (94%) to 214/219 (98%)
Physician clinic follow-up1: 87/191 (46%) to 130/219 (59%)

Outcomes
In-hospital mortality1: 21/212 (9.9%) to 11/230 (4.8%)
Re-admission (same cause) in 30 days: 10/199 (5.0%) to 13/224 (5.8%)
Median length of hospital stay (days): 7.0 to 7.0

ACE, angiotensin-converting enzyme; DVT, deep venous thrombosis. 1Significant change (P ≤ 0.05).

Table 6 Changes in process indicators for post-hospital care (acute coronary syndromes). Values are baseline to remeasurement at 3-month follow-up (n = 95 and 89) and at 6-month follow-up (n = 93 and 104).

Anti-platelet agent: 3 months, 81% to 90%; 6 months, 83% to 87%
β-blocker: 3 months, 65% to 74%; 6 months, 61% to 70%
ACE inhibitor: 3 months, 59% to 57%; 6 months, 61% to 63%
Lipid-lowering agents: 3 months, 81% to 81%; 6 months, 81% to 83%
Control of serum lipids: 3 months, 50% to 44%; 6 months, 60% to 68%
Smoking cessation: 3 months, 17% to 50%1; 6 months, 15% to 39%1

ACE, angiotensin-converting enzyme. 1Significant change (P ≤ 0.05).

Table 7 Changes in process indicators for post-hospital care (congestive heart failure). Values are baseline to remeasurement at 3-month follow-up (n = 47 and 38) and at 6-month follow-up (n = 45 and 54).

ACE inhibitor: 3 months, 72% to 87%; 6 months, 69% to 69%
ACE inhibitor dose: 3 months, 55% to 66%; 6 months, 63% to 70%
β-blocker: 3 months, 38% to 66%1; 6 months, 38% to 48%
Control of blood pressure: 3 months, 63% to 74%; 6 months, 45% to 68%1
Weight monitoring: 3 months, 69% to 100%1; 6 months, 57% to 69%
Exercise prescription: 3 months, 40% to 58%1; 6 months, 44% to 63%1

1Significant change (P ≤ 0.05).


Acknowledgements

We are appreciative of the extensive support given by Queensland Health, and the cooperation of the members of the clinical indicator expert panels and others involved in indicator analysis, in particular: Dr Kathleen Armstrong, Dr John Atherton, Dr John Bennett, Mr Neil Cottrell, Dr Christine Fawcett, Dr Judy Flores, Dr Andrew Galbraith, Dr Paul Garrahy, Ms Ann Hadwyn, Professor Tom Marwick, Dr Alison Mudge, Dr Mark Morris, Dr Bronwyn Pierce, Ms Daniela Sanders, and Ms Justine Thiele. The authors gratefully acknowledge the funding support of the Australian Commonwealth Department of Health and Ageing who made this programme possible through the Clinical Support Systems Program (a joint initiative with the Royal Australasian College of Physicians).

References

1. DeLong JF, Allman RM, Sherrill RG, Schliesz N. A congestive heart failure project with measured improvements in care. Eval Health Prof 1998; 21: 472–486.

2. Mehta R, Montoye CK, Gallogly M et al. Improving quality in care of acute myocardial infarction: the Guidelines Applied to Practice (GAP) initiative in south-east Michigan. J Am Med Assoc 2002; 287: 1269–1276.

3. Marciniak TA, Ellerbeck EF, Radford MJ et al. Improving quality of care for Medicare patients with acute myocardial infarction: results from the Cooperative Cardiovascular Project. J Am Med Assoc 1998; 279: 1351–1357.

4. Axtell SS, Ludgwig E, Lope-Candales P. Interventions to improve adherence to ACC/AHA recommended adjunctive medications for the management of patients with an acute myocardial infarction. Clin Cardiol 2001; 24: 114–118.

5. Rubin H, Pronovost P, Diette GB. From a process of care to a measure: the development and testing of a quality indicator. Int J Qual Health Care 2001; 13: 489–496.

6. Braunwald E, Antman E, Beasley J et al. ACC/AHA 2002 guideline update for the management of patients with unstable angina and non-ST-segment elevation myocardial infarction: summary article. A report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines (Committee on the Management of Patients with Unstable Angina). Circulation 2002; 106: 1893–1900.

7. Aroney C, Boyden A, Jelinek M, for the Unstable Angina Working Group. Management of Unstable Angina Guidelines—2000. Med J Aust 2000; 173 (suppl.): S65–S88.

8. National Heart Foundation of Australia and Cardiac Society of Australia and New Zealand Chronic Heart Failure Clinical Practice Guidelines Writing Panel. Guidelines for management of patients with chronic heart failure in Australia. Med J Aust 2001; 174: 459–466.

9. Cook DJ, Greengold NL, Ellrodt AG, Weingarten SR. The relation between systematic reviews and practice guidelines. Ann Intern Med 1997; 127: 210–216.

10. Quality of Care and Outcomes Research in CVD and Stroke Working Groups. Measuring and improving quality of care. A report from the AHA/ACC First Scientific Forum on Assessment of Healthcare Quality in Cardiovascular Disease and Stroke. Circulation 2000; 101: 1483–1493.

11. Krumholz HM, Baker DW, Ashton CM et al. Evaluating quality of care for patients with heart failure. Circulation 2000; 101: e122. http://circ.ahajournals.org/cgi/content/full/101/12/e122 Accessed 3 March 2003.

12. Mant J. Process versus outcome indicators in the assessment of quality of health care. Int J Qual Health Care 2001; 13: 475–480.

13. Hofer TP, Bernstein SJ, Hayward RA, DeMoner S. Validating quality indicators for hospital care. Jt Comm J Qual Improv 1997; 23: 455–467.

14. Brook RH, McGlynn EA, Cleary PD. Measuring quality of care. N Engl J Med 1996; 335: 960–970.

15. Huff ED. Comprehensive reliability assessment and comparison of quality indicators and their components. J Clin Epidemiol 1997; 50: 1395–1404.

16. Fitzgerald ME, Molinari GF, Bausell RB. The empowering potential of quality improvement data. Eval Health Prof 1998; 21: 419–428.

17. Mani O, Mehta RH, Tsai T et al. Assessing performance reports to individual providers in the care of acute coronary syndromes. Jt Comm J Qual Improv 2002; 28: 220–232.

18. Rainwater JA, Romano PS, Antonius DM. The California Hospital Outcomes Project: how useful is California's report card for quality improvement? Jt Comm J Qual Improv 1998; 24: 31–39.

19. Goddard M, Davies HT, Dawson D, Mannion R, McInnes F. Clinical performance measurement: part 2—avoiding the pitfalls. J R Soc Med 2002; 95: 549–551.

20. Jeacocke D, Heller R, Smith J, Anthony D, Williams JS, Dugdale A. Combining quantitative and qualitative research to engage stakeholders in developing indicators in general practice. Aust Health Rev 2002; 25: 12–18.

21. Thomson O'Brien MA, Oxman AD, Davis DA et al. Audit and feedback: effects on professional practice and health care outcomes (Cochrane Review). In: The Cochrane Library, Issue 4. Oxford: Update Software, 2002.

22. Berwick DM, James B, Coye MJ. Connection between quality measurement and improvement. Med Care 2003; 41: I-30–I-38.

23. Lurie JD, Merrens EJ, Lee J, Splaine ME. An approach to hospital quality improvement. Med Clin North Am 2002; 86: 825–845.

24. Solberg LI, Mosser G, McDonald S. The three faces of performance measurement: improvement, accountability, and research. Jt Comm J Qual Improv 1997; 23: 135–147.

25. Hannan EL. Measuring hospital outcomes: don't make perfect the enemy of good! J Health Serv Res Policy 1998; 3: 67–69.

26. Ohm B, Brown J. Quality improvement pitfalls: how to overcome them. J Healthc Qual 1997; 19: 16–20.

27. Zaslavsky AM. Statistical issues in reporting quality data: small samples and casemix variation. Int J Qual Health Care 2001; 13: 481–488.

28. Maleyeff J, Kaminsky FC, Jubinville A, Fenn CA. A guide to using performance measurement systems for continuous improvement. J Healthc Qual 2001; 23: 33–37.

29. Gibberd R, Pathmeswaran A, Burtenshaw K. Using clinical indicators to identify areas for quality improvement. J Qual Clin Practice 2000; 20: 136–144.

30. Scott IA, Denaro CP, Flores JL, Bennett C, Hickey A, Mudge AP. Quality of care of patients hospitalised with acute coronary syndromes. Intern Med J 2002; 32: 502–511.

31. Scott IA, Denaro CP, Bennett C et al. Quality of care of patients hospitalised with congestive heart failure. Intern Med J 2003; 33: 140–151.

32. Queensland Health Collaborative for Healthcare Improvement—Cardiac Collaborative, Brisbane, Australia. http://www.qheps.health.qld.gov.au/chi/home.htm Accessed 11 December 2002.

33. Thoma G. Automating the production of bibliographic records for medline. An R&D report of the Communications Engineering Branch, Lister Hill National Center for Biomedical Communications, National Library of Medicine. Bethesda, MD: National Library of Medicine. http://archive.nlm.nih.gov/pubs/thoma/mars2001.php Accessed September 2001.

34. Cardiac Society of Australia and New Zealand/National Heart Foundation of Australia Acute Coronary Syndrome Working Group, Sydney, Australia. http://www.csanz.edu.au Accessed 11 December 2002.

35. National Institute of Clinical Studies Heart Failure Data Expert Group, Melbourne, Australia. http://www.nicsl.com.au Accessed 11 March 2003.

Accepted for publication 21 November 2003
