The semi-automation of title and abstract screening: a retrospective exploration of ways to leverage Abstrackr's relevance predictions in systematic and rapid reviews

BMC Med Res Methodol. 2020 Jun 3;20(1):139. doi: 10.1186/s12874-020-01031-w.

Abstract

Background: We investigated the feasibility of using a machine learning tool's relevance predictions to expedite title and abstract screening.

Methods: We subjected 11 systematic reviews and six rapid reviews to four retrospective screening simulations (automated and semi-automated approaches to single-reviewer and dual independent screening) in Abstrackr, a freely available machine learning tool. We calculated the proportion missed, workload savings, and time savings compared with single-reviewer and dual independent screening by human reviewers. We performed cited reference searches to determine whether missed studies would be identified via reference list scanning.
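The abstract does not give formulas for these metrics, but a minimal sketch of how they are commonly computed in screening simulations is shown below. The function name, the metric definitions, and the per-record screening time are assumptions for illustration, not values reported in the paper.

```python
# Sketch (not from the paper) of screening-performance metrics for one review.
# Assumed definitions: proportion missed = relevant records excluded by the
# simulated approach / all relevant records; workload savings = records no
# longer requiring human screening / all records; time savings derived from
# workload savings and an assumed per-record screening time.

def screening_metrics(total_records, records_screened_by_humans,
                      relevant_records, relevant_missed,
                      minutes_per_record=0.5):
    """Return (proportion missed, workload savings, time savings in hours)."""
    proportion_missed = relevant_missed / relevant_records
    records_saved = total_records - records_screened_by_humans
    workload_savings = records_saved / total_records
    time_savings_hours = records_saved * minutes_per_record / 60
    return proportion_missed, workload_savings, time_savings_hours


# Hypothetical example: 5,000 citations, 3,000 of which still require human
# screening; 100 truly relevant records, 2 of them missed by the simulation.
missed, workload, hours = screening_metrics(5000, 3000, 100, 2)
print(f"Proportion missed: {missed:.1%}, workload savings: {workload:.1%}, "
      f"time saved: {hours:.1f} h")
```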

Results: For systematic reviews, the semi-automated, dual independent screening approach provided the best balance of time savings (median (range) 20 (3-82) hours) and reliability (median (range) proportion of missed records, 1 (0-14)%). The cited reference search identified 59% (n = 10/17) of the missed records. For the rapid reviews, the fully and semi-automated approaches saved time (median (range) 9 (2-18) hours and 3 (1-10) hours, respectively), but less so than for the systematic reviews. The median (range) proportion of missed records for both approaches was 6 (0-22)%.

Conclusion: Using Abstrackr to assist one of two reviewers in systematic reviews saves time with little risk of missing relevant records. Many missed records would be identified via other means.

Keywords: Automation; Efficiency; Machine learning; Rapid reviews; Systematic reviews.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Automation
  • Humans
  • Machine Learning*
  • Reproducibility of Results
  • Retrospective Studies
  • Systematic Reviews as Topic