Page 1

Some comments on the 3 papers

Robert T. O’Neill, Ph.D.

Page 2

Comments on G. Anderson

WHISH is a nice example

Randomization (Zelen design), but using different sources of data for the outcome

Outcome data: self-reported, adjudicated against medical records, Medicare claims (hybrid: the ability to estimate sensitivity (SE) and specificity (SP))

Impact of outcome misclassification (see the sketch at the end of this page)

Event data not defined by protocol – you depend on the health care system

Claims data DO NOT provide standardized data – see Mini-Sentinel and OMOP
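To make the misclassification point concrete: when an adjudicated subsample yields estimates of sensitivity (SE) and specificity (SP), an observed event proportion can be corrected using the identity p_obs = SE·p_true + (1 − SP)·(1 − p_true). A minimal sketch, with purely hypothetical numbers and an invented function name:

```python
# Minimal sketch: correcting an observed event proportion for outcome
# misclassification, given sensitivity (SE) and specificity (SP)
# estimated from an adjudicated subsample. Illustrative numbers only.

def correct_proportion(p_obs: float, se: float, sp: float) -> float:
    """Invert p_obs = SE*p_true + (1 - SP)*(1 - p_true) for p_true."""
    denom = se + sp - 1.0   # must be > 0 for the correction to be identifiable
    if denom <= 0:
        raise ValueError("SE + SP must exceed 1")
    return (p_obs - (1.0 - sp)) / denom

# Example: 6% observed event rate, SE = 0.80, SP = 0.98 (hypothetical)
p_true = correct_proportion(0.06, se=0.80, sp=0.98)
print(f"corrected event rate: {p_true:.4f}")  # ~0.0513
```

Nondifferential misclassification of this kind pulls treatment contrasts toward the null, which is exactly why it matters for the non-inferiority discussion on the later pages.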

Page 3

Comments on A. J. Cook

Key component is randomization at the patient or clinic level and use of the electronic health record for data capture (cluster randomization addresses different issues)

Missing data, informative censoring, switching, measuring duration of exposure (repeat Rx, gaps); different answers depending upon the definition

Validation of outcomes makes the pragmatic trial less simple

Only some outcomes (endpoints), populations, and questions are addressable before the complexities of interpretation overwhelm

Page 4

Comments on M. Gaffney

Precision and Eagle are not large simple trials – they are large difficult trials

Outcome adjudication, monitoring strategies

Non-inferiority poses significant challenges for pragmatic trials – generally no assay sensitivity

Margin selection based upon evidence vs. based upon "close enough", without being sure whether both treatments are equally good or equally bad

Page 5

Other comments on NI studies

Pre-specifying the margins – why, and what is the difference between these two situations

What treatment difference is detectable and credible, given the trade-off between bias and huge sample size

When pre-specification is not possible because there is no historical information, the width of the confidence interval makes sense – but there are two possible conclusions: both treatments the same and comparably effective vs. both the same but both ineffective (see the sketch at the end of this page)

What endpoints are eligible: hard endpoints (yes), patient symptoms (no)
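As a concrete illustration of the margin logic above, here is a minimal sketch of the usual NI decision rule: declare non-inferiority only if the upper confidence bound for the risk difference (new minus control) falls below the pre-specified margin. All numbers are hypothetical:

```python
# Minimal sketch of a non-inferiority check on a risk difference,
# using a normal-approximation confidence interval. Numbers hypothetical.
from math import sqrt

def ni_check(events_new, n_new, events_ctl, n_ctl, margin, z=1.96):
    p1, p0 = events_new / n_new, events_ctl / n_ctl
    diff = p1 - p0                                  # risk difference (new - control)
    se = sqrt(p1 * (1 - p1) / n_new + p0 * (1 - p0) / n_ctl)
    upper = diff + z * se                           # upper 95% confidence bound
    return diff, upper, upper < margin              # NI declared if bound < margin

diff, upper, ni = ni_check(120, 4000, 110, 4000, margin=0.01)
print(f"diff={diff:.4f}, upper bound={upper:.4f}, non-inferior: {ni}")
```

Note that the rule says nothing about whether either treatment is effective – without assay sensitivity, "both the same and effective" and "both the same but ineffective" produce the same verdict.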

Page 6

Other comments

Are NI designs appropriate for claims data or EHR without independent all-case adjudication – implications of poor sensitivity and specificity driving the estimate toward the null – what does a null result mean

Experience suggests that exposure (drugs) has better accuracy than diagnoses or procedures (outcomes) in claims databases

Duration of exposure is dependent upon algorithms for repeat prescriptions – different results depending upon definitions of gaps between repeated Rx
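The gap-definition point lends itself to a short sketch: a hypothetical episode-building routine that chains dispensings into a single exposure episode whenever the gap between the end of one supply and the next fill is at most max_gap_days. Changing that single parameter changes the measured duration of exposure:

```python
# Minimal sketch: building exposure episodes from repeat prescriptions.
# Each dispensing is (fill_day, days_supply); fills are chained into one
# episode when the gap to the next fill is <= max_gap_days. Data invented.

def build_episodes(dispensings, max_gap_days):
    fills = sorted(dispensings)
    episodes = []
    start, end = fills[0][0], fills[0][0] + fills[0][1]
    for day, supply in fills[1:]:
        if day - end <= max_gap_days:    # bridge the gap: same episode
            end = max(end, day + supply)
        else:                            # gap too long: close the episode
            episodes.append((start, end))
            start, end = day, day + supply
    episodes.append((start, end))
    return episodes

rx = [(0, 30), (35, 30), (90, 30)]       # (fill day, days of supply)
for gap in (7, 30):
    eps = build_episodes(rx, gap)
    total = sum(e - s for s, e in eps)
    print(f"max_gap_days={gap}: episodes={eps}, total exposed days={total}")
```

With a 7-day allowable gap the same fills yield two episodes and 95 exposed days; with a 30-day gap, one episode and 120 days – the "different answers depending upon definition" problem in miniature.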

Page 7

Can randomization overcome lack of blinding and personal choices after randomization?

Use of observational methods to account for unmeasured confounding of assigned treatment, with time-to-event outcomes subject to censoring

Directed Acyclic Graphs to explore the confounding-censoring problem – diagnostics

Instrumental variables
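On the instrumental-variables bullet: randomized assignment is the natural instrument when post-randomization choices break blinding and adherence. A simulated sketch (all data invented) of the Wald/IV estimator, contrasted with the confounded "as-treated" comparison and the diluted intention-to-treat contrast:

```python
# Minimal sketch: randomized assignment Z as an instrument for treatment
# actually received D, with an unmeasured confounder U. Simulated data.
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
z = rng.integers(0, 2, n)                  # randomized assignment (instrument)
u = rng.normal(size=n)                     # unmeasured confounder of D and Y
d = (0.7 * z + 0.3 * (u > 0.5)) > rng.random(n)  # treatment actually taken
y = 1.0 * d + u + rng.normal(size=n)       # true treatment effect = 1.0

naive = y[d == 1].mean() - y[d == 0].mean()        # confounded "as treated"
itt = y[z == 1].mean() - y[z == 0].mean()          # intention-to-treat (diluted)
iv = itt / (d[z == 1].mean() - d[z == 0].mean())   # Wald/IV estimator
print(f"naive={naive:.2f}, ITT={itt:.2f}, IV={iv:.2f}  (truth = 1.00)")
```

The IV estimate recovers the effect here only because the simulation builds in the exclusion restriction (assignment affects outcomes solely through treatment), which must be argued, not assumed, in a real unblinded trial.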

Page 8

Lessons learned from Mini-Sentinel and the Observational Medical Outcomes Partnership (OMOP)

Distributed data models

Common data models

Limits of detectability of effect sizes of two or more competing agents – calibration and interpretation of p-values for non-randomized studies (see the sketch at the end of this page)

Not all outcomes and exposures can be dealt with in a similar manner

Know the limitations of your database – is this possible in advance of conducting the study? Part of the intensive study planning, protocol, and prospective analysis plan
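A minimal sketch of the calibration idea from OMOP: estimates for negative-control exposure-outcome pairs trace out an empirical null distribution, and a new estimate's p-value is computed against that null rather than against N(0, SE²). This simplified version ignores the sampling error of the control estimates themselves; all numbers are hypothetical:

```python
# Minimal sketch of OMOP-style empirical calibration of a p-value
# using negative-control estimates. Illustrative numbers only.
import numpy as np
from scipy.stats import norm

neg_ctl_log_rr = np.array([0.10, 0.25, -0.05, 0.30, 0.15, 0.20])  # hypothetical
mu, sigma = neg_ctl_log_rr.mean(), neg_ctl_log_rr.std(ddof=1)     # empirical null

log_rr, se = 0.35, 0.10                       # estimate for the agent of interest
p_naive = 2 * norm.sf(abs(log_rr) / se)       # assumes a null centered at 0
z_cal = (log_rr - mu) / np.hypot(sigma, se)   # shift/widen by the empirical null
p_cal = 2 * norm.sf(abs(z_cal))
print(f"naive p={p_naive:.4f}, calibrated p={p_cal:.4f}")
```

In the OMOP experiments this kind of calibration routinely turned nominally significant findings into unremarkable ones, which is the sense in which p-values from non-randomized database studies require reinterpretation.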

Page 9

An example of Medicare data use, but not an RCT

Page 10

Some other views and opinions on CER using the learning health care system

Page 11

Lessons learned from OMOP and Mini-Sentinel about observational studies using health care claims data or EHR – but no randomization

Lessons about the limitations of the databases, outcome capturing, ascertainment, and missing data (confounders) are relevant to RCT use of the same data source

Lessons about data models and the challenges of data (outcome) standardization

http://www.mini-sentinel.org/
http://omop.org/

Page 12

The Observational Medical Outcomes Partnership – many findings

Page 13

Some ideas on what to evaluate about a given data source before committing to conducting a study – focus on observational studies – but also relevant to pragmatic RCTs

Page 14

How do these presentations relate to pragmatic trials within a health care system?

Two or more competing therapies on a formulary – never compared with each other

Randomize patients under the equipoise principle – do you need patient consent, or physician consent in a health plan, when there are no data and the honest position is "I or we don't know, but want to find out"?

Collect electronic medical record data, including exposures and outcomes – and decide if any additional adjudication is needed

Analyze according to best practices – but with some prospective SAPs – causal inference strategies?

