
12 June 2014

Measure… but do it really intelligently

Presenter: Mary Dixon-Woods

The Prudent Principles

• Minimise avoidable harm.

• Carry out the minimum appropriate intervention.

• Promote equity between the people who provide and use services.

Applying the Principles


• Most health care organisations at present have very little capacity to analyse, monitor, or learn from safety and quality information. This gap is costly and should be closed.

Why measure?

– Signal priority

– Create mission

– Detect variation

– Assess improvement and deterioration

– Provide feedback so staff know how they’re doing

– Identify weaknesses, strengths and areas for intervention

– Improve transparency and accountability to patients

Performance management by numbers

• Whether performance is enhanced or obstructed depends on the purpose to which numbers are put:

– Targets

– Rankings

– Intelligence

Intelligence

• If you’re not measuring, you’re not managing

• If you’re measuring stupidly, you’re not managing

• If you’re only measuring, you’re not managing

What methods might you use as ways of gaining intelligence?

Some ways of knowing

• Internal and external measures

• Incident reports

• ‘High reliability’ techniques

• ‘Culture’ surveys

• Walking the process

• Direct observations

• Role-swapping and shadowing

• Glitch reports and hassle boxes

• Analysis of complaints and incidents

• Authentic listening to patients and staff

Vincent et al’s fundamental questions

• Has patient care been safe in the past?

• Are our clinical systems and processes reliable?

• Is care safe today?

• Will care be safe in the future?

• Are we responding and improving?


Harm and reliability

• Incidents of harm are an important but imperfect guide to safety in the past

• Measures of reliability assess the extent to which a system is performing against its intended specification
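As an illustration of what such a measure looks like in practice (my sketch, not from the talk), reliability can be computed as the proportion of audited opportunities on which the process was performed to specification; the audit counts below are hypothetical:

```python
# Minimal sketch: reliability as the fraction of audited opportunities
# on which a process was performed to its intended specification.
# The audit counts are hypothetical.

performed_to_spec = 182   # opportunities where the process happened as specified
opportunities = 200       # total audited opportunities

reliability = performed_to_spec / opportunities
print(f"Reliability: {reliability:.1%}")      # 91.0%
print(f"Defect rate: {1 - reliability:.1%}")  # 9.0%
```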

External measures

• What are the advantages of using external measures of harm and reliability?

External measures

• Standardisation of measures reduces burden of measure design

• Can allow detection of unwarranted variations

• Can identify high and low performers

• Can identify good practice

• Can stimulate improvement

• But prone to iatrogenic effects

Measuring too much

• In the US, National Quality Forum measures went from 200 in 2005 to over 700 in 2011

• US CMS has introduced 65 new measures in the last year alone

• At MGH, measuring consumes 1% of net patient service revenue

Targets and terror

• People become adept at working out what they need to do to survive performance management

[Image: https://www.flickr.com/photos/benleto/]

Eroding the denominator

• “Exclusions for reporting [may be] …obscuring the current drivers of in-hospital mortality instead of helping focus attention on them. A key consideration for the future will be to dissociate these measures of performance from reimbursement, so as to allow for comprehensive reporting and data collection without the threat of punishment.”

• Risk that emphasis on “gaming” conceals technical problems of measurement

Technical problems

• The technical problems in setting up data collection systems are formidable

• They are frequently under-estimated, and the challenges to comparability are poorly understood

Measuring harm

• Design a system to collect data on the rate of infections in invasive devices in children (under 16) living mainly in the community.
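One conventional way to set this exercise up (an assumption on my part; the talk leaves the design open) is to report infections per 1,000 device-days, which forces the numerator and denominator choices into the open. A minimal sketch with hypothetical records:

```python
from datetime import date

# Minimal sketch: infections per 1,000 device-days. Each record is one
# child's device episode: (inserted, removed, infections observed).
# All records are hypothetical.

records = [
    (date(2014, 1, 1),  date(2014, 3, 1), 1),
    (date(2014, 1, 15), date(2014, 2, 1), 0),
    (date(2014, 2, 10), date(2014, 5, 1), 2),
]

device_days = sum((removed - inserted).days for inserted, removed, _ in records)
infections = sum(n for _, _, n in records)
rate = infections / device_days * 1000

print(f"{infections} infections over {device_days} device-days "
      f"= {rate:.1f} per 1,000 device-days")   # 19.2
```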

Measuring harm

• Does it make a difference if you have to report your rates externally?

The Health Foundation’s Lining Up Research project

• An ethnographic study of interventions to reduce central line infections

• What happens when organisations are asked to interpret data definitions, collect data and report on CVC-BSIs?

Measurement

Data collection systems

System type          ICUs   Reliability              Local credibility
Controller-centred   11     Highly fallible          Variable
Track-trigger        3      Highly reliable          High
Patrol               3      Reasonable reliability   Low

Denominators

• Staff perceived that patients with different risk profiles were lumped together

• Some units excluded patients thought to be “low risk” or “high risk”

• Perceptions of fairness were very important
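To see why these denominator decisions matter, here is a sketch (with made-up figures) of how excluding patients labelled “high risk” shifts the reported rate:

```python
# Minimal sketch: effect of excluding "high-risk" patients from the
# denominator. All figures are hypothetical.

patients = [
    # (risk_group, line_days, infections)
    ("low",  600, 1),
    ("high", 400, 4),
]

def rate_per_1000(rows):
    line_days = sum(days for _, days, _ in rows)
    infections = sum(n for _, _, n in rows)
    return infections / line_days * 1000

print(f"All patients:       {rate_per_1000(patients):.1f} per 1,000 line-days")  # 5.0
low_risk_only = [r for r in patients if r[0] == "low"]
print(f"High-risk excluded: {rate_per_1000(low_risk_only):.1f} per 1,000 line-days")  # 1.7
```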

Numerators


Clinical practices

• Some physicians started antimicrobial therapy without a blood sample

• What ICUs sent to the lab varied enormously

• Organisational systems meant samples were not always matched up

Differences in microbiology support

• Many could not support catheter-related definition

• Microbiology involvement in rounds varied

• Contribution to decision-making about what counted varied

Link between measurement and improvement

• High rates could motivate action – but only if credible

• Low rates sometimes induced unjustified complacency

• Credible data is a must

If I’m honest right before we started, we didn’t think we were that bad. […] We thought, you know, [we] don’t really have a problem with central line infections. But nobody ever looked to see whether we were any good […] and when we compared our infection rates, actually they were far worse than any of us ever realised.


Emphasis on gaming obscures technical problems

• Much practice in relation to counting central line infections was artless, not artful

• Technical components of measurement inescapably linked to social practices

• Fairness an important influence on those practices

Adding sanctions does change the game

Counting counts

• CDC definitions aimed at maximising sensitivity

• Financial and reputational penalties are changing the rules of the game

• But accuracy of reported rates in question – one study found external validation raised reported rates by 27%

Past harm

• Is not necessarily a good guide to current safety

• Risk of “no harm no foul” mentality

• Failure of pro-active risk seeking in healthcare

[Image: https://www.flickr.com/photos/tsuda/]

Sound rationale for testing use of safety management systems in healthcare

• Need to move from simplistic compliance-focused approach

• Need to be much more proactive about identifying local hazards

• Need risk management systems that are bespoke to contexts


Findings of the diagnostics across the 8 sites

• Highly complex systems with multiple unreliable sub-components

• Many systems are improvisatory – never been purposefully designed

• Knowledge of systems and functioning often informal and idiosyncratic

Reliability

Process weaknesses

• Absence of certainty that things would happen as they were supposed to happen

• Especially if

– No one person could oversee process

– Coordination needed across time, departmental, team, and shift boundaries

Process weaknesses

• Often unclear who was able, ought to, or was entitled to change the process

• Staff reported constantly rescuing processes that had gone wrong

• The patient was handed over in the morning as a diabetic on tablets, and the nurse only found out during the day that the patient should be on injections.

Site    Total ops   Ops with equipment problems   Equipment problems   % ops with one or more problems
A       258         50                            56                   19%
D       67          25                            28                   37%
F       165         19                            19                   12%
Total   490         94                            103                  19%
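The percentages in the table are simple ratios of ops with one or more equipment problems to total ops; a quick sketch reproducing them:

```python
# Reproducing the table's percentages: ops with one or more equipment
# problems divided by total ops, per site and overall.

sites = {"A": (258, 50), "D": (67, 25), "F": (165, 19)}

for site, (total_ops, ops_with_problems) in sites.items():
    print(f"Site {site}: {ops_with_problems / total_ops:.0%}")
# Site A: 19%, Site D: 37%, Site F: 12%

grand_total = sum(t for t, _ in sites.values())          # 490
total_with_problems = sum(p for _, p in sites.values())  # 94
print(f"Total: {total_with_problems / grand_total:.0%}") # 19%
```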

‘We always need a colposcope with that list and time and time again it isn’t there or it’s broken or it isn’t back or nobody knows where it is’

Reliability in your organisation

• Which are the most reliable processes?

• Which are the least reliable?

• How would you know?

Many QI projects

• Focus on improving reliability

• Use PDSA cycles

• Use locally developed internal measures

Internal measures

• Invaluable for identifying and characterising local problems

• Assessing change

• Identifying opportunities for further intervention

Internal/locally developed measures

• What problems do you expect to see?

• Insufficient data points

• Lack of sufficient baseline periods

• Changing samples and sampling strategies

• Inadequate annotations of changes
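Most of these problems show up immediately on a simple annotated run chart, a usual display for data of this kind. A minimal matplotlib sketch, with a hypothetical measure, baseline, and change annotation:

```python
import matplotlib.pyplot as plt

# Minimal sketch: a run chart for a locally developed measure, with a
# defined baseline period and an annotated change. The measure, the
# change, and all data points are hypothetical.

weeks = list(range(1, 21))
delays = [12, 14, 11, 13, 12, 15, 13, 12,       # weeks 1-8: baseline
          9, 8, 10, 7, 8, 6, 7, 8, 6, 7, 5, 6]  # weeks 9-20: after change

baseline_median = sorted(delays[:8])[4]  # (upper) median of the baseline weeks

plt.plot(weeks, delays, marker="o")
plt.axhline(baseline_median, linestyle="--", label="Baseline median")
plt.axvline(8.5, color="grey")
plt.annotate("Change introduced (week 9)", xy=(9, 15))
plt.xlabel("Week")
plt.ylabel("Delayed discharges per week")
plt.legend()
plt.show()
```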

“Data for improvement”

Common problems with local measures

• Insufficiently focused on what is important to staff and patients

• Operational definitions unclear

• Terms (e.g. “delay”) not explicit

• Burdensome or difficult to put into practice

• Duplicates information already being produced for another purpose

• Method of analysis not defined in advance

• Baseline periods not defined

• Inappropriate sampling strategies

• Impossible to compare

Magical thinking in improvement

Conspiracy of enthusiasm

• “Specific reforms are advocated as though they were certain to be successful… We must be able to advocate without that excess of commitment that blinds us to reality testing” (Donald Campbell, 1969: 72)

Your tips for getting measurement right?

Measurement

• You MUST have a protocol.

• Use validated measures where you can.

• Get a data analyst involved from the beginning and design a database.

• Sound definitions of denominator and numerator.

• Define frequency and intervals.

• Clear sampling strategy.

• PDSA the data collection system and train people.

• Only collect what you need to collect.

• Quality assure data entry.

• Proper plan for the analysis and choose the right tests.
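One way to make several of these items concrete before any data are collected (a sketch of one possible format, not something prescribed in the talk) is to capture the operational definition as a structured object that the whole team signs off:

```python
from dataclasses import dataclass

# Minimal sketch: an operational definition written down before data
# collection starts, making numerator, denominator, frequency, sampling,
# baseline and analysis plan explicit. The example measure is hypothetical.

@dataclass(frozen=True)
class MeasureDefinition:
    name: str
    numerator: str
    denominator: str
    frequency: str
    sampling: str
    baseline_period: str
    analysis_plan: str

discharge_delay = MeasureDefinition(
    name="Discharge delay",
    numerator="Patients leaving >2 hours after being declared fit for discharge",
    denominator="All patients discharged from the ward",
    frequency="Weekly",
    sampling="All discharges (no sampling)",
    baseline_period="8 weeks before the first change",
    analysis_plan="Run chart; baseline median as reference line",
)

print(discharge_delay)
```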

Conclusions

• Vincent et al provide a very useful framework

• No single indicator will tell you whether care is safe

• More use of pro-active diagnostic tools

• Better measurement of harm and reliability