Building Consensus for a Shared Definition of Adverse Events: A Case Study in the Profession of Dentistry

J Patient Saf. 2022 Aug 1;18(5):470-474. doi: 10.1097/PTS.0000000000000959.

Abstract

Background: To achieve high-quality health care, adverse events (AEs) must be proactively recognized and mitigated. However, there is often ambiguity in applying guidelines and definitions. We describe the iterative calibration process needed to achieve a shared definition of AEs in dentistry. Our alignment process includes both independent review and consensus-building approaches.

Objective: We explore the process of defining dental AEs and the steps necessary to achieve alignment across different care providers.

Methods: Teams from 4 dental institutions across the United States iteratively reviewed patient records after charts were flagged by an automated trigger tool. Calibration across teams was supported through a negotiated definition of AEs and standardization of the evidence provided in reviews. Interrater reliability was assessed using descriptive and κ statistics.
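For reference, a brief note on the agreement statistics named in this abstract (the formulas below are standard definitions, not taken from the article itself): Cohen's κ compares observed agreement with the agreement expected by chance, and the prevalence- and bias-adjusted κ (PABAK) replaces the chance term with its value under uniform category use,

\[
\kappa = \frac{p_o - p_e}{1 - p_e}, \qquad
\mathrm{PABAK} = \frac{k\,p_o - 1}{k - 1},
\]

where \(p_o\) is the observed proportion of agreement, \(p_e\) the chance-expected agreement, and \(k\) the number of rating categories (for binary ratings, PABAK reduces to \(2p_o - 1\)).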

Results: After 5 iterative cycles of calibration, the teams (n = 8 raters) identified 118 cases. The average percent agreement for AE determination was 82.2%, and the average pairwise prevalence- and bias-adjusted κ (PABAK) for determining AE presence was 0.575. The average percent agreement for categorization of AE type was 78.5%, with a PABAK of 0.488. Lastly, the average percent agreement for categorization of AE severity was 82.2%, with a corresponding PABAK of 0.717.
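To make the pairwise statistics concrete, the following minimal sketch shows how percent agreement and PABAK can be computed for one pair of raters; it is an illustration with hypothetical data, not the authors' analysis code or study data.

    # Illustrative sketch (hypothetical data): pairwise percent agreement and PABAK
    # for two raters' adverse-event (AE) determinations on the same set of charts.
    ratings_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]  # rater A: 1 = AE present, 0 = AE absent
    ratings_b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]  # rater B on the same 10 charts

    n = len(ratings_a)
    observed_agreement = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n

    k = 2  # number of rating categories (AE present / absent)
    pabak = (k * observed_agreement - 1) / (k - 1)  # reduces to 2*p_o - 1 for binary ratings

    print(f"Percent agreement: {observed_agreement:.1%}")  # 80.0% for these example data
    print(f"PABAK: {pabak:.3f}")                           # 0.600 for these example data

In a multi-rater design such as the one described here, these quantities would be computed for every rater pair and then averaged to obtain the reported average pairwise values.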

Conclusions: Successful calibration across reviewers is possible through consensus-building procedures. Higher levels of agreement were found when rating the severity of identified events than when identifying the events themselves. Our results demonstrate the need for collaborative procedures, as well as training, for the identification and severity rating of AEs.

Publication types

  • Research Support, N.I.H., Extramural

MeSH terms

  • Consensus
  • Dentistry*
  • Humans
  • Reproducibility of Results
  • United States