Abstract 105P
Background
Artificial intelligence (AI) based on deep learning and convolutional neural networks (CNNs) has been applied in various medical fields. We have begun developing a novel AI system to support the detection of lung cancer, enabling physicians to interpret radiograms and make diagnoses more efficiently.
Methods
As training data, we used 853 chest X-ray images (401 normal and 452 abnormal) from the Fukushima Preservative Service Association of Health, where lung cancer screening is mainly conducted, together with more than 100,000 chest roentgenograms from the NIH database. We divided these data into two groups according to whether the NIH dataset was included (group A) or not (group B). We then used the integrated datasets for deep learning with a CNN based on ImageNet to develop a proprietary AI algorithm, and statistically analyzed its accuracy in interpreting radiograms. Abnormal shadows were displayed as a heat map overlaid on each chest roentgenogram for easy visualization, together with a positive probability score (from 0.0 to 1.0) as an index of the likelihood of lung cancer. The accuracy of our AI system was further improved by technology that absorbs differences between radiographic apparatus and imaging environments.
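The heat-map overlay and positive probability score described above can be sketched as follows. This is an illustrative reconstruction only, not the authors' implementation: `positive_probability`, `overlay_heat_map`, and the red-channel blending are hypothetical choices, assuming the score is a sigmoid of a model logit and the heat map is a normalized activation map blended onto the grayscale radiograph.

```python
import numpy as np

def positive_probability(logit):
    """Map a model logit to a positive probability score in [0.0, 1.0]."""
    return 1.0 / (1.0 + np.exp(-logit))

def overlay_heat_map(image, activation, alpha=0.4):
    """Blend a normalized activation map onto a grayscale image.

    `image` and `activation` are 2-D arrays of the same shape; the result
    is an RGB image in which higher activations brighten the red channel.
    """
    span = activation.max() - activation.min()
    heat = (activation - activation.min()) / (span + 1e-8)
    rgb = np.stack([image, image, image], axis=-1).astype(float)
    rgb[..., 0] = (1.0 - alpha) * rgb[..., 0] + alpha * heat
    return np.clip(rgb, 0.0, 1.0)
```

In practice the activation map would come from the CNN itself (for example, a class-activation-style technique), but the blending step above is enough to show how an abnormal region can be highlighted on the monitor.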
Results
Our novel AI achieved an AUC of 0.74, sensitivity of 0.75, and specificity of 0.74 in group A, and an AUC of 0.80, sensitivity of 0.73, and specificity of 0.75 in group B, both at a positive probability cutoff of 0.5. Both groups were superior in accuracy to radiologists (AUC 0.71) and comparable to previously reported results (AUC 0.78). When a roentgenogram contained abnormal shadows, the heat map was displayed clearly on the monitor screen.
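As a rough illustration of how the reported metrics relate to the 0.5 cutoff: sensitivity and specificity depend on thresholding the positive probability score, while AUC is threshold-independent. The function below is a minimal sketch on synthetic labels and scores, not the study's evaluation code; `binary_metrics` and its tie-ignoring rank-based AUC are our own simplifications.

```python
import numpy as np

def binary_metrics(labels, scores, cutoff=0.5):
    """Sensitivity and specificity at `cutoff`, plus AUC via the rank
    (Mann-Whitney) statistic. Labels are 0 (normal) / 1 (abnormal);
    scores are positive probabilities in [0.0, 1.0]."""
    labels = np.asarray(labels)
    scores = np.asarray(scores, dtype=float)
    pos, neg = labels == 1, labels == 0
    pred = scores >= cutoff                   # call abnormal at the cutoff
    sensitivity = pred[pos].mean()            # true positive rate
    specificity = (~pred)[neg].mean()         # true negative rate
    ranks = scores.argsort().argsort() + 1.0  # 1-based ranks (ties ignored)
    n_pos, n_neg = pos.sum(), neg.sum()
    auc = (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
    return float(sensitivity), float(specificity), float(auc)
```

Moving the cutoff away from 0.5 trades sensitivity against specificity without changing the AUC, which is why the cutoff must be reported alongside the other two metrics.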
Conclusions
In this study, we confirmed that our proprietary AI interpreted chest roentgenograms with accuracy comparable to that of both previous studies and radiologists. However, further research and improvement are needed to verify this accuracy. Various types of validation are now in progress.
Editorial acknowledgement
The authors thank Dr. Karl Embleton, DVM, from Edanz Group for editing a draft of this abstract.
Legal entity responsible for the study
M. Higuchi.
Funding
Grants-in-Aid for Scientific Research.
Disclosure
All authors have declared no conflicts of interest.