Feasibility of a deep-learning based anatomical region labeling tool for Cone-Beam Computed Tomography scans in radiotherapy

Phys Imaging Radiat Oncol. 2023 Mar 5;25:100427. doi: 10.1016/j.phro.2023.100427. eCollection 2023 Jan.

Abstract

Background and purpose: Currently, there is no robust indicator within Cone-Beam Computed Tomography (CBCT) DICOM headers of which anatomical region is present in the scan. This poses a problem for CBCT-based algorithms trained on specific body regions, such as auto-segmentation and radiomics tools used in the radiotherapy workflow. We propose an anatomical region labeling (ARL) algorithm to classify CBCT scans into four distinct regions: head & neck, thoracic-abdominal, pelvis, and extremity.

Materials and methods: Algorithm training and testing were performed on 3,802 CBCT scans from 596 patients treated at our radiotherapy center. The ARL model, a convolutional neural network, uses a single coronal CBCT slice to output a probability of occurrence for each of the four classes. ARL was evaluated on a test dataset of 1,090 scans and compared to a support vector machine (SVM) model. ARL was also used to label CBCT treatment scans over 22 consecutive days as part of a proof-of-concept implementation. A validation study was performed on the first 100 unique patient scans to evaluate the functionality of the tool in the clinical setting.
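As described above, ARL maps a single coronal CBCT slice to a probability for each of the four anatomical regions. The following is a minimal sketch of such a slice classifier, assuming a PyTorch-style implementation; the backbone depth, layer widths, input resolution, and class ordering are illustrative assumptions and do not reproduce the published ARL architecture.

# Minimal sketch of a four-class coronal-slice classifier (PyTorch).
# Layer sizes, input resolution, and class order are illustrative assumptions;
# the abstract does not specify the ARL architecture in detail.
import torch
import torch.nn as nn

CLASSES = ["head_neck", "thoracic_abdominal", "pelvis", "extremity"]  # assumed order

class SliceClassifier(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        # Small convolutional backbone over a single-channel coronal slice.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, H, W) coronal slice; returns per-class probabilities.
        logits = self.classifier(self.features(x).flatten(1))
        return torch.softmax(logits, dim=1)

if __name__ == "__main__":
    model = SliceClassifier()
    slice_2d = torch.randn(1, 1, 256, 256)  # placeholder coronal slice
    probs = model(slice_2d)
    print(dict(zip(CLASSES, probs.squeeze().tolist())))

In practice, the class with the highest predicted probability would be attached to the scan as its anatomical region label before passing it to downstream, site-specific tools.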

Results: ARL achieved an overall accuracy of 99.2% on the test dataset, outperforming the SVM (91.5% accuracy). The validation study showed strong agreement between human annotations and ARL predictions, with accuracies of 99.0% across all four regions.

Conclusion: The high classification accuracy demonstrated by ARL suggests that it may be employed as a pre-processing step for site-specific, CBCT-based radiotherapy tools.

Keywords: Anatomy labeling; Cone-beam computed tomography; Deep learning; Radiotherapy.