A probabilistic, non-parametric framework for inter-modality label fusion

Med Image Comput Comput Assist Interv. 2013;16(Pt 3):576-83. doi: 10.1007/978-3-642-40760-4_72.

Abstract

Multi-atlas techniques are commonplace in medical image segmentation due to their high performance and ease of implementation. Locally weighting the contributions from the different atlases in the label fusion process can improve the quality of the segmentation. However, how to define these weights in a principled way in inter-modality scenarios remains an open problem. Here we propose a label fusion scheme that does not require voxel intensity consistency between the atlases and the target image to be segmented. The method is based on a generative model of image data in which each intensity in the atlases has an associated conditional distribution of corresponding intensities in the target. The segmentation is computed using variational expectation maximization (VEM) in a Bayesian framework. The method was evaluated on a dataset of eight proton density-weighted brain MRI scans with nine labeled structures of interest. The results show that the algorithm outperforms majority voting and a recently published inter-modality label fusion algorithm.
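To make the core idea concrete, below is a minimal sketch, not the paper's implementation, of intensity-conditional weighted label fusion. It estimates p(target intensity | atlas intensity) non-parametrically from a joint histogram of spatially corresponding voxels and uses the resulting per-voxel likelihoods to weight each atlas's vote. The function names, the histogram-based estimator, and the assumption of intensities normalized to [0, 1) are all illustrative choices; the paper's full method instead infers the conditional distributions and the segmentation jointly with VEM in a Bayesian framework.

```python
import numpy as np

def conditional_likelihoods(atlas_img, target_img, n_bins=32):
    # Joint histogram of (atlas bin, target bin) over corresponding voxels;
    # rows normalized to a non-parametric estimate of p(target | atlas).
    a = np.clip((atlas_img * n_bins).astype(int), 0, n_bins - 1)
    t = np.clip((target_img * n_bins).astype(int), 0, n_bins - 1)
    joint = np.zeros((n_bins, n_bins))
    np.add.at(joint, (a.ravel(), t.ravel()), 1.0)
    joint += 1e-6                                    # avoid empty rows
    cond = joint / joint.sum(axis=1, keepdims=True)  # cond[a, t] = p(t | a)
    return cond[a, t]                                # per-voxel likelihood map

def fuse_labels(atlas_imgs, atlas_labels, target_img, n_labels):
    # Each (registered) atlas votes for its label at every voxel; the vote
    # is weighted by how well the atlas intensity predicts the target's.
    votes = np.zeros((n_labels,) + target_img.shape)
    for img, lab in zip(atlas_imgs, atlas_labels):
        w = conditional_likelihoods(img, target_img)
        for k in range(n_labels):
            votes[k] += w * (lab == k)
    return votes.argmax(axis=0)

# Toy usage: three synthetic "registered" atlases and a target volume.
rng = np.random.default_rng(0)
shape = (16, 16, 16)
target = rng.random(shape)
atlases = [np.clip(target + 0.05 * rng.standard_normal(shape), 0, 0.999)
           for _ in range(3)]
labels = [rng.integers(0, 4, shape) for _ in range(3)]
seg = fuse_labels(atlases, labels, target, n_labels=4)
print(seg.shape)  # (16, 16, 16)
```

This single-pass scheme reduces to likelihood-weighted voting; the advantage of the generative VEM formulation described in the abstract is that the conditional intensity distributions and the segmentation are estimated jointly rather than fixed in one pass.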

Publication types

  • Research Support, N.I.H., Extramural
  • Research Support, Non-U.S. Gov't

MeSH terms

  • Algorithms
  • Brain / anatomy & histology*
  • Computer Simulation
  • Data Interpretation, Statistical
  • Humans
  • Image Enhancement / methods
  • Image Interpretation, Computer-Assisted / methods
  • Magnetic Resonance Imaging / methods*
  • Models, Anatomic*
  • Models, Neurological*
  • Models, Statistical*
  • Pattern Recognition, Automated / methods*
  • Reproducibility of Results
  • Sensitivity and Specificity
  • Subtraction Technique*