HealthDay News — In a study published in the September issue of The Lancet Digital Health, the authors present a clinical validation strategy for artificial intelligence (AI) models that segment primary non-small cell lung cancer (NSCLC) tumors and involved lymph nodes in computed tomography (CT) images.
Ahmed Hosny, Ph.D., from Mass General Brigham in Boston, and colleagues conducted an observational study using CT images and segmentations collected from eight internal and external sources. Of the 2,208 patients included, 787 (drawn from two datasets segmented by a single expert) were used for model discovery and 1,421 for model validation.
The researchers found that, compared with an interobserver benchmark, the models showed an improvement (multi-delineation dataset: volumetric Dice [VD], 0.91; surface Dice [SD], 0.86) and performed within the intraobserver benchmark. In primary validation, AI performance on internal Harvard-RT1 data (segmented by the same expert who segmented the discovery data) was 0.83 VD and 0.79 SD, within the interobserver benchmark. On internal Harvard-RT2 data, segmented by other experts, VD and SD were 0.70 and 0.50, respectively. On RTOG-0617 clinical trial data, VD was 0.71 and SD was 0.47, with similar results on the diagnostic radiology datasets NSCLC-radiogenomics and Lung-PET-CT-Dx. Despite these lower geometric overlap results, the models yielded target volumes with radiation dose coverage equivalent to that of experts. Differences between de novo expert and AI-assisted segmentations were nonsignificant, and AI assistance reduced segmentation time by 65 percent and interobserver variability by 32 percent.
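The volumetric Dice scores reported above measure voxel-level overlap between two segmentation masks. As a point of reference only (this is a generic illustration, not the authors' code or the study's exact evaluation pipeline), the standard volumetric Dice coefficient can be sketched as:

```python
import numpy as np

def volumetric_dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Volumetric Dice: 2*|A intersect B| / (|A| + |B|) over binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # convention: two empty masks agree perfectly
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy 2D example: two 4-pixel squares sharing 2 pixels
a = np.zeros((4, 4), dtype=bool)
a[0:2, 0:2] = True
b = np.zeros((4, 4), dtype=bool)
b[0:2, 1:3] = True
print(volumetric_dice(a, b))  # 2*2 / (4+4) = 0.5
```

Surface Dice, also reported in the study, instead compares only the boundaries of the two masks within a tolerance distance, which is why it can score lower than volumetric Dice on the same case.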
“This study presents a novel evaluation strategy for AI models that emphasizes the importance of human-AI collaboration,” a coauthor said in a statement.
Several authors disclosed financial ties to the biopharmaceutical and software industries.