75P - A parallel deep learning network framework for whole-body bone scan image analysis

Date

23 Nov 2019

Session

Poster display session

Topics

Staging and Imaging

Tumour Site

Presenters

Xiaorong Pu

Citation

Annals of Oncology (2019) 30 (suppl_9): ix182

Authors

X. Pu1, G. Tang2, K. Cai3, Y. Huang4, M. Ping3, Z. Peng5, H. Qiu1

Author affiliations

  • 1 Big Data Research Center, School of Computer Science and Engineering, University of Electronic Science and Technology of China, 611731 - Chengdu/CN
  • 2 West China University Hospital, Sichuan University, 610065 - Chengdu/CN
  • 3 School of Computer Science and Engineering, University of Electronic Science and Technology of China, 611731 - Chengdu/CN
  • 4 College of Computer Science, Sichuan University, 610056 - Chengdu/CN
  • 5 Big Data Research Center, University of Electronic Science and Technology of China, 611731 - Chengdu/CN

Abstract 75P

Background

Whole-body bone scan image analysis in nuclear medicine is a common method for assisting physicians in detecting bone metastases from cancer. With the increasing need for diagnostic examinations in China's large and ageing population, physicians face a significant growth in workload, yet must still read the diagnostic images carefully and avoid errors in interpretation. It is therefore crucial to develop a clinical decision support tool to assist physicians in their clinical routine.

Methods

In this study, we propose a parallel deep learning network framework for interpreting bone scans for the presence or absence of bone metastases. The whole-body bone scans (anterior and posterior views) of 707 patients with suspected bone metastatic disease were studied. Physicians were asked to classify each case manually for the presence or absence of bone metastasis. Each bone scan image was automatically segmented into 26 anatomical regions of homogeneous bone based on the skeletal frame, and the 26 corresponding deep learning networks made a diagnosis by inspecting each region and searching for abnormal lesion activity simultaneously. To estimate the performance of each anatomical sub-region identification model, a ten-fold cross-validation scheme was applied in which the data set was randomly divided into ten parts of equal size.
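
The abstract gives no implementation details, so the Python sketch below only illustrates the structure described above: one independent classifier per segmented anatomical region, evaluated with ten-fold cross-validation. The per-region feature size, the synthetic data, and the use of a scikit-learn classifier as a stand-in for each deep network are illustrative assumptions, not the authors' implementation.

    # Minimal sketch of the parallel per-region evaluation scheme, assuming each
    # scan has already been segmented into 26 anatomical regions. A scikit-learn
    # classifier stands in for each region's deep network (hypothetical choice).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import KFold

    N_PATIENTS = 707   # study population reported in the abstract
    N_REGIONS = 26     # anatomical sub-regions per whole-body scan
    N_FEATURES = 64    # hypothetical per-region feature size

    rng = np.random.default_rng(0)
    # X[p, r] is the feature vector for region r of patient p (synthetic here);
    # y[p, r] is the physician label: 1 = metastatic lesion present in region r.
    X = rng.normal(size=(N_PATIENTS, N_REGIONS, N_FEATURES))
    y = (rng.random(size=(N_PATIENTS, N_REGIONS)) < 0.1).astype(int)

    region_preds = np.zeros_like(y)
    kfold = KFold(n_splits=10, shuffle=True, random_state=0)
    for train_idx, test_idx in kfold.split(X):
        for r in range(N_REGIONS):
            # One independent model per anatomical region; conceptually the
            # 26 models inspect their regions in parallel.
            model = LogisticRegression(max_iter=1000)
            model.fit(X[train_idx, r], y[train_idx, r])
            region_preds[test_idx, r] = model.predict(X[test_idx, r])

    # Patient-level call: metastasis present if any region is flagged.
    patient_pred = region_preds.max(axis=1)
    patient_true = y.max(axis=1)
    print("patient-level agreement:", (patient_pred == patient_true).mean())

The "any region flagged" patient-level aggregation is also an assumption made for the sketch; the abstract does not state how the region-level decisions are combined.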

Results

Sensitivity, specificity and the mean number of false lesions detected were adopted as performance indices to evaluate the proposed model. The best sensitivity and specificity of an individual network for a given sub-region were 99.9% and 97.3%, respectively. The overall mean sensitivity and specificity of the parallel model were 99.2% and 71.8%, respectively, with 2.0 false detections per patient scan image and processing times within milliseconds.
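
For reference, the three performance indices quoted above can be computed from region-level labels and predictions as in the sketch below; the arrays are placeholders rather than the study's data, and the exact lesion-counting rule used by the authors is not specified in the abstract.

    # Sketch of the reported performance indices: sensitivity, specificity and
    # mean false detections per patient scan, from 0/1 region-level arrays of
    # shape (patients, regions). Placeholder data, not the study's results.
    import numpy as np

    def performance_indices(y_true, y_pred):
        tp = np.sum((y_pred == 1) & (y_true == 1))
        tn = np.sum((y_pred == 0) & (y_true == 0))
        fp = np.sum((y_pred == 1) & (y_true == 0))
        fn = np.sum((y_pred == 0) & (y_true == 1))
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        # False detections per scan: regions flagged positive but labelled
        # negative, averaged over patients.
        false_per_scan = ((y_pred == 1) & (y_true == 0)).sum(axis=1).mean()
        return sensitivity, specificity, false_per_scan

    rng = np.random.default_rng(1)
    y_true = (rng.random((707, 26)) < 0.1).astype(int)
    y_pred = y_true.copy()  # perfect predictions as a trivial sanity check
    print(performance_indices(y_true, y_pred))  # -> (1.0, 1.0, 0.0)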

Conclusions

With extremely high sensitivity and specificity and a low false lesion detection rate, the proposed parallel deep learning network model is demonstrated to be useful for detecting metastases in bone scans. The proposed framework shows significant potential as a clinical decision support tool to assist physicians in their clinical routine.

Clinical trial identification

Editorial acknowledgement

Legal entity responsible for the study

The authors.

Funding

Has not received any funding.

Disclosure

X. Pu: Full / Part-time employment: University of Electronic Science and Technology of China. G. Tang: Full / Part-time employment: West China University Hospital, Sichuan University. K. Cai: Research grant / Funding (institution): School of Computer Science and Engineering, University of Electronic Science and Technology of China. Y. Huang: Research grant / Funding (institution): College of Computer Science, Sichuan University. M. Ping: Research grant / Funding (institution): School of Computer Science and Engineering, University of Electronic Science and Technology of China. Z. Peng: Research grant / Funding (institution): Big Data Research Center, University of Electronic Science and Technology of China. H. Qiu: Full / Part-time employment: School of Computer Science and Engineering, University of Electronic Science and Technology of China.
