Development of a clinician reputation metric to identify appropriate problem-medication pairs in a crowdsourced knowledge base

J Biomed Inform. 2014 Apr;48:66-72. doi: 10.1016/j.jbi.2013.11.010. Epub 2013 Dec 7.

Abstract

Background: Correlation of data within electronic health records is necessary for implementation of various clinical decision support functions, including patient summarization. A key type of correlation is linking medications to clinical problems; while some databases of problem-medication links are available, they are not robust and depend on problems and medications being encoded in particular terminologies. Crowdsourcing represents one approach to generating robust knowledge bases across a variety of terminologies, but more sophisticated approaches are necessary to improve accuracy and reduce manual data review requirements.

Objective: We sought to develop and evaluate a clinician reputation metric to facilitate the identification of appropriate problem-medication pairs through crowdsourcing without requiring extensive manual review.

Approach: We retrieved medications from our clinical data warehouse that had been prescribed and manually linked to one or more problems by clinicians during e-prescribing between June 1, 2010 and May 31, 2011. We identified measures likely to be associated with the percentage of appropriate problem-medication links made by clinicians. Using logistic regression, we created a metric for identifying clinicians who had made greater than or equal to 95% appropriate links. We evaluated the accuracy of the approach by comparing the links asserted by clinicians whom the metric identified as highly accurate against a previously manually validated subset of problem-medication pairs.
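To make the modeling step concrete, the sketch below illustrates the general pattern of fitting a per-clinician logistic regression and using the predicted probability as a reputation score. It is a minimal illustration only: the feature names, the toy data, the 0.5 cutoff, and the use of pandas/scikit-learn are assumptions for demonstration, not the measures or model specification reported in the study.

```python
# Illustrative sketch of a clinician reputation metric via logistic regression.
# Feature names, values, and labels are invented for demonstration purposes.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical per-clinician summary table: candidate measures plus a label
# from a manually reviewed sample (1 = at least 95% of reviewed links appropriate).
clinicians = pd.DataFrame({
    "n_links_asserted":        [310, 12, 1450, 87, 42, 560, 25, 900],
    "pct_links_in_ref_kb":     [0.91, 0.40, 0.88, 0.75, 0.55, 0.93, 0.35, 0.86],
    "distinct_pairs_asserted": [120, 10, 480, 60, 30, 210, 20, 350],
    "label_ge_95_appropriate": [1, 0, 1, 0, 0, 1, 0, 1],
})

features = ["n_links_asserted", "pct_links_in_ref_kb", "distinct_pairs_asserted"]
X = clinicians[features].to_numpy(dtype=float)
y = clinicians["label_ge_95_appropriate"].to_numpy()

# Fit the logistic model; the predicted probability serves as each
# clinician's reputation score.
model = LogisticRegression(max_iter=1000).fit(X, y)
clinicians["reputation"] = model.predict_proba(X)[:, 1]

# Retain only links from clinicians whose predicted probability of having
# >=95% appropriate links clears a chosen cutoff (0.5 here, an assumption).
trusted = clinicians[clinicians["reputation"] >= 0.5]
print(trusted[["reputation"] + features])
```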

Results: Of 867 clinicians who asserted a total of 237,748 problem-medication links during the study period, 125 had a reputation metric predicting that at least 95% of their links were appropriate. These clinicians asserted a total of 2464 linked problem-medication pairs (983 distinct pairs). Compared to a previously validated set of problem-medication pairs, the reputation metric achieved a specificity of 99.5% and marginally improved the sensitivity of previously described knowledge bases.
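The evaluation reported above amounts to comparing the retained pairs against a manually validated reference set. The worked example below shows how sensitivity and specificity would be computed in that setting; the pair values are invented and do not reproduce the study's data.

```python
# Illustrative scoring of retained problem-medication pairs against a
# manually validated reference set. All pairs below are invented examples.
validated_appropriate = {("hypertension", "lisinopril"), ("diabetes", "metformin"),
                         ("asthma", "albuterol")}
validated_inappropriate = {("hypertension", "metformin"), ("asthma", "lisinopril")}

# Pairs kept because they were asserted by high-reputation clinicians.
retained_pairs = {("hypertension", "lisinopril"), ("diabetes", "metformin")}

tp = len(retained_pairs & validated_appropriate)        # appropriate pairs recovered
fn = len(validated_appropriate - retained_pairs)        # appropriate pairs missed
tn = len(validated_inappropriate - retained_pairs)      # inappropriate pairs excluded
fp = len(retained_pairs & validated_inappropriate)      # inappropriate pairs kept

sensitivity = tp / (tp + fn)   # 2/3 in this toy example
specificity = tn / (tn + fp)   # 2/2 in this toy example
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```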

Conclusion: A reputation metric may be a valuable measure for identifying high-quality clinician-entered, crowdsourced data.

Keywords: Crowdsourcing; Electronic health records; Knowledge bases; Medical records; Problem-oriented.

Publication types

  • Research Support, N.I.H., Extramural
  • Research Support, Non-U.S. Gov't

MeSH terms

  • Crowdsourcing
  • Electronic Health Records*
  • Humans
  • Internet
  • Knowledge Bases*
  • Logistic Models
  • Medical Informatics / methods*
  • Medical Records Systems, Computerized*
  • Pharmaceutical Preparations
  • Physicians
  • Reproducibility of Results
  • Software
  • User-Computer Interface

Substances

  • Pharmaceutical Preparations