Management/Analysis Tools for Reviews • James Thomas, EPPI-Centre • Ethan Balk, Brown University • Nancy Owens, Covidence • Martin Morris, McGill Library
KTDRR and Campbell Collaboration Research Evidence Training Session 3: April 17, 2019
Copyright © 2018 American Institutes for Research (AIR). All rights reserved. No part of this presentation may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from AIR. Submit copyright permissions requests to the AIR Publications Copyright and Permissions Help Desk at [email protected]. Users may need to secure additional permissions from copyright holders whose work AIR included after obtaining permission as noted to reproduce or adapt materials for this presentation.
Agenda 3:00 – 3:05: Introduction
3:05 – 3:25: EPPI-Reviewer, James Thomas
3:25 – 3:45: Abstrackr, Ethan Balk
3:45 – 4:05: Covidence, Nancy Owens
4:05 – 4:25: Rayyan, Martin Morris
4:25 – 4:30: Wrap-up, Evaluation
Center for Evidence Synthesis in Health
Online, Open-Access, Open-Source, Free Software for Citation Screening
Brown Evidence-based Practice Center • Center for Evidence Synthesis in Health
Brown School of Public Health
Funded by: Agency for Healthcare Research and Quality (AHRQ)
http://abstrackr.cebm.brown.edu
Thomas Trikalinos, Jens Jap, Birol Senturk, Ethan Balk, Gaelen Adam [email protected]
Abstrackr: What it is. What it does.
• Open-source, open-access, Web-based, free platform
• Tool to screen publication citations for systematic reviews
• Uses machine-learning algorithms to predict acceptance
  • Allows semi-automation of the screening process
• Annotates citations
• Organizes citations by acceptance status (and tags)
• Developed by the Brown Evidence-based Practice Center and the Center for Evidence Synthesis in Health
  • A highly experienced team of SR experts and methodologists
Abstrackr: Major features
• Clean presentation of citations (computer, tablet, phone)
• Simultaneous multiple screening (e.g., in duplicate, pilot rounds)
• Ability to color-code (and rate the relevance of) terms (words, phrases)
• Ability to tag and add notes to each citation
• Prediction of potential relevance of unscreened citations
  • Sort by relevance (front-load finding accepted citations)
  • Potentially stop (double) screening early
[Screenshot: the screening interface]
• Title, Journal, Authors, Abstract, Keywords
• Color-code terms
• Add tags and notes
• Accept, Reject, Maybe
[Screenshot: screening-mode options]
• Double or single screen
• Sort by likelihood of relevance (or in random order)
• Pilot round (any size)
[Screenshot: citation import]
Import from:
• RIS file (EndNote, Reference Manager)
• PMID list
• Tab-delimited file (e.g., Excel)
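To illustrate the RIS format named above, here is a minimal parser sketch. This is a hypothetical helper for illustration only, not Abstrackr's actual import code; it assumes the common "TAG  - value" line layout with "ER" closing each record.

```python
# Minimal RIS parser sketch (hypothetical helper, not Abstrackr's importer).
# RIS records are tag-value lines such as "TI  - ..." terminated by "ER  - ".

def parse_ris(text):
    """Parse RIS-formatted text into a list of citation dicts."""
    records, current = [], {}
    for line in text.splitlines():
        if len(line) < 6 or line[2:6] != "  - ":
            continue  # skip blank lines and wrapped continuations in this sketch
        tag, value = line[:2], line[6:].strip()
        if tag == "ER":  # end-of-record marker
            records.append(current)
            current = {}
        else:
            # Repeatable tags (e.g., AU for each author) collect into lists
            current.setdefault(tag, []).append(value)
    return records

sample = """TY  - JOUR
TI  - Semi-automated screening of biomedical citations
AU  - Wallace BC
ER  - 
"""
citations = parse_ris(sample)
```

A real importer would also handle wrapped field continuations and character encodings, which reference managers emit inconsistently; the sketch only shows the tag-value record structure.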
• Conflict Resolution
• Prediction Algorithms
• Export file (tab-delimited)
Consensus codes in the export:
  1 = agreement/reconciled ACCEPT
 -1 = agreement/reconciled REJECT
  x = disagreement
  o = not yet screened

NB: The table shown is truncated and several screeners have been omitted, so the consensus and screeners' labels may not appear to match. Coloring was added after export.
Abstrackr: Increases in efficiency
• Predict relevance
  • Machine learning is used to predict the relevance of the remaining unscreened citations
• Front-load high-relevance citations
  • Higher-relevance citations can be sorted to the front of the queue to maximize screening efficiency
• Rapidly screen citations
  • In our experience, when all remaining prediction values are <0.40, screening becomes extremely rapid, as all remaining abstracts are ineligible
  • In the future, it may be possible to stop screening "early"
• The tool can also be used with the machine-learning features turned off
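The front-loading and early-stopping idea described above can be sketched as follows, assuming each unscreened citation carries a machine-predicted relevance score in [0, 1]. The function name and the citation fields are hypothetical; the 0.40 cutoff is the speakers' experience-based heuristic, not a fixed rule.

```python
# Sketch of relevance front-loading and the early-stop heuristic
# (illustrative only; not Abstrackr's internal implementation).

def screening_queue(citations, stop_threshold=0.40):
    """Sort unscreened citations most-relevant-first, and flag when it may
    be reasonable to stop screening (all remaining predictions below cutoff)."""
    queue = sorted(citations, key=lambda c: c["prediction"], reverse=True)
    may_stop_early = all(c["prediction"] < stop_threshold for c in queue)
    return queue, may_stop_early

unscreened = [
    {"id": "a", "prediction": 0.92},
    {"id": "b", "prediction": 0.15},
    {"id": "c", "prediction": 0.55},
]
queue, may_stop = screening_queue(unscreened)
# queue order: a (0.92), c (0.55), b (0.15); may_stop is False, since one
# citation still scores at or above the 0.40 cutoff
```

Front-loading means screeners hit most of the eligible studies early in the queue; as noted above, stopping once every remaining prediction is below the cutoff is still being validated, so in practice teams screen to the end.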
Abstrackr: Other functionality
• For an update of the same search, new citations can be added to the existing review
  • This allows immediate predictions for the new (unscreened) citations
  • Old and new citations can be tracked together
• Can export results as a CSV file (Excel), RIS file (EndNote, Reference Manager), or XML file
• Will soon be able to transfer directly into SRDR+ for full-text screening, evidence mapping, and final data extraction
• Open-source, so the software can be loaded behind your firewall to maintain the privacy of all analyzed data
Abstrackr: Improvements to come
• Transferring into SRDR+ (srdrplus.ahrq.gov) to streamline:
  • Abstract screening
  • Evidence mapping/scoping
  • Full-text screening
  • Meta-analysis (with OpenMeta-Analyst, www.cebm.brown.edu/openmeta/)
  • Creation of a preliminary PRISMA flow diagram
• Systematic Review Data Repository Plus (SRDR+): a data extraction and archiving platform
  • Also free and open-access
• Further confirmation of the validity of the prediction algorithms and of the ability to use prediction values to stop double (or all) screening early
Abstrackr: Summary
• Simple, intuitive, and free platform to screen citations on any device
• Allows single and double screening, pilot-round training, and conflict resolution
• Abstracts can be marked up to improve screening speed and accuracy
• Prediction algorithms available
  • Front-load the most relevant citations to more quickly find eligible studies
  • May allow early stopping of screening (once all machine-predicted accepts have been screened)
  • Can be carried forward into updated literature searches
• Easy import and export of citations and screening labels
• To be incorporated into SRDR+ to streamline screening, evidence mapping, full-text screening, data extraction, and archiving
References for Evaluations of Abstrackr Algorithms (with 1–4 examples from SRs each; some algorithms also include non-SR datasets)
Wallace BC et al. Semi-automated screening of biomedical citations for systematic reviews. BMC Bioinformatics. 2010;11(1):55.
Wallace BC et al. Deploying an interactive machine learning system in an evidence-based practice center: Abstrackr. ACM Health Informatics Symposium 2012:819-24.
Wallace BC et al. Active learning for biomedical citation screening. 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining 2010:173-82.
Wallace BC et al. Class imbalance, redux. 11th IEEE International Conference on Data Mining (ICDM) 2011:754-63.
Wallace BC et al. Who should label what? Instance allocation in multiple expert active learning. 2011 SIAM International Conference on Data Mining:176-87.
Small K et al. The constrained weight space SVM: learning with ranked features. 28th International Conference on Machine Learning 2011:865-72.
Wallace BC et al. Modeling annotation time to reduce workload in comparative effectiveness reviews. 1st ACM International Health Informatics Symposium 2010:28-35.
Nguyen AT et al. Combining crowd and expert labels using decision theoretic active learning. Third AAAI Conference on Human Computation and Crowdsourcing 2015.
Mortensen ML et al. An exploration of crowdsourcing citation screening for systematic reviews. Research Synthesis Methods. 2016.
Empirical evaluation: SRs
Rathbone J et al. Faster title and abstract screening? Evaluating Abstrackr, a semi-automated online screening program for systematic reviewers. Systematic Reviews. 2015;4(1):80.
Gates A et al. Technology-assisted title and abstract screening for systematic reviews: a retrospective evaluation of the Abstrackr machine learning tool. Systematic Reviews. 2018;7(1):45.

Empirical evaluation: Updates of SRs
Wallace BC et al. Toward modernizing the systematic review pipeline in genetics: efficient updating via data mining. Genetics in Medicine. 2012;14(7):663.

Scoping reviews
Wallace BC et al. Active literature discovery for scoping evidence reviews: how many needles are there? KDD Workshop on Data Mining for Healthcare, 2013.

Descriptions
Wallace BC et al. Modernizing the systematic review process to inform comparative effectiveness: tools and methods. JCER. 2013;2(3):273-82.
Lease M et al. Systematic review is e-discovery in doctor's clothing. 2nd SIGIR Workshop on Medical Information Retrieval, 2016.
Elsherbeny MY, Negida A. Using Absrackr-Technical Report. https://www.researchgate.net/publication/288183873_MRGE_REPORT_2_Semi-automated_Online_Abstract_Screening_Using_Absrackr
Wu W et al. Digital Tools for Managing Different Steps of the Systematic Review Process. Library Scholarly Publications. 136.
Brander G, Pawliuk C. Embedded Health Librarians as Facilitators of a Multidisciplinary Scoping Review. Journal of the Canadian Health Libraries Association. 2017;38(2):38-43.
Thank you
Ethan Balk: [email protected]
Tom Trikalinos: [email protected]
Brown Center for Evidence Synthesis in Health https://www.brown.edu/public-health/cesh/home
Abstrackr http://abstrackr.cebm.brown.edu
SRDR / SRDR+ https://srdr.ahrq.gov https://srdrplus.ahrq.gov/
OpenMeta-Analyst http://www.cebm.brown.edu/openmeta/
Thank you!
Please take a few minutes to respond to the brief Evaluation Survey:
www.surveygizmo.com/s3/4936232/Evaluation-Session3-Management-Analysis-Tools
• James Thomas: [email protected]
• Ethan Balk: [email protected]
• Nancy Owens: [email protected]
• Martin Morris: [email protected]
www.ktdrr.org
4700 Mueller Blvd, Austin, TX 78723
800.266.1832
The contents of this presentation were developed under grant number 90DPKT0001 from the National Institute on Disability, Independent Living, and Rehabilitation Research (NIDILRR). NIDILRR is a Center within the Administration for Community Living (ACL), Department of Health and Human Services (HHS). The contents of this presentation do not necessarily represent the policy of NIDILRR, ACL, HHS, and you should not assume endorsement by the Federal Government.