Many recent advances have been made in the applications of deep learning and convolutional neural networks (CNNs) to radiology tasks involving diagnosis determination and finding identification on chest radiographic images (1–4). Multiple public datasets exist for labeled chest radiographic images (2, 3), with the National Institutes of Health (NIH) datasets released as ChestX-ray8 and ChestX-ray14 being among the largest and most studied (1, 4). Deep learning analyses concurrent with the dataset's release involved 1024 × 1024 resolution images and investigations that included the AlexNet, GoogLeNet, VGGNet-16, and ResNet-50 architectures (3). One previous study of these data used long short-term memory recurrent neural networks with 512 × 512-pixel input images and focused on label dependencies (4). Another study that showed improved performance based on area under the receiver operating characteristic curve (AUC) compared with these two prior works used 224 × 224-pixel inputs and a DenseNet121 architecture with a modified model head (1).
Achieving better model performance with lower input image resolutions might initially seem paradoxical, but, in various machine learning paradigms, a reduced number of inputs or features is desirable as a means of lowering the number of parameters that must be optimized, which in turn diminishes the risk of model overfitting (5). Nevertheless, extensive lowering of image resolution eliminates information that is useful for classification. Furthermore, there is an inherent trade-off in CNN implementations in that graphics processing unit–based optimization can have memory limitations: using a higher image resolution can reduce the usable maximum batch size, whereas a higher batch size can allow improved estimation of the gradient of the loss function. Consequently, determining the optimal image resolution for different radiology-based machine learning applications remains an open problem. In this study, we investigated this problem by selecting eight of the 14 diagnoses in the NIH ChestX-ray14 dataset and examining CNN performance for a wide spectrum of image resolutions and network training strategies. Our results revealed practical insights for improving the performance of radiology-based machine learning applications and demonstrated diagnosis-dependent performance differences that allow for potential inferences into the relative difficulties of different radiology findings.
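The resolution-versus-batch-size trade-off can be made concrete with a back-of-the-envelope calculation: per-layer activation memory grows roughly with batch size × height × width, so quadrupling the linear resolution (e.g., 256 → 1024) multiplies per-image memory by 16 and shrinks the batch that fits on the GPU accordingly. The sketch below is purely illustrative and not from the study; the memory budget, channel count, and single-layer simplification are assumptions.

```python
# Illustrative sketch (assumptions, not the study's method): estimate how
# the largest batch size that fits a fixed GPU activation-memory budget
# shrinks as input resolution grows. Real CNNs store activations for many
# layers, but the quadratic scaling with resolution is the same.

def max_batch_size(memory_budget_mb: float, resolution: int,
                   channels: int = 64, bytes_per_value: int = 4) -> int:
    """Largest batch whose single-layer activations fit the budget."""
    per_image_mb = resolution * resolution * channels * bytes_per_value / 2**20
    return int(memory_budget_mb // per_image_mb)

# Doubling resolution quarters the usable batch size.
for res in (256, 512, 1024):
    print(res, max_batch_size(8192, res))
```

Under these assumed numbers, moving from 256 × 256 to 1024 × 1024 inputs cuts the feasible batch size by a factor of 16, which in turn makes gradient estimates noisier at the higher resolution.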