182P - Multi-modal artificial intelligence outperforms image-based approaches for mutation prediction from H&E tissue images in colorectal cancer

Date

14 Sep 2024

Session

Poster session 08

Topics

Pathology/Molecular Biology

Tumour Site

Colorectal cancer

Presenters

Marc Päpper

Citation

Annals of Oncology (2024) 35 (suppl_2): S238-S308. 10.1016/annonc/annonc1576

Authors

M. Päpper1, F. Fogt2, P. Frey1, A. Talwar3, T. Lang4

Author affiliations

  • 1 Research and Development, Mindpeak GmbH, 20359 - Hamburg/DE
  • 2 Department of Pathology and Laboratory Medicine, Pennsylvania Hospital, 19107 - Philadelphia/US
  • 3 Department of Pathology and Laboratory Medicine, Pennsylvania Hospital, 19104 - Philadelphia/US
  • 4 Research and Development, Mindpeak GmbH, Hamburg/DE

Abstract 182P

Background

Mutations in the MAPK/ERK pathway are frequently found across cancer entities, including colorectal cancer (CRC), where accurate diagnosis of KRAS and BRAF mutational status is pivotal for treatment decisions. While mutation analysis is usually performed via genomic sequencing, predicting mutations from histological images using artificial intelligence (AI) could present a faster alternative with broad potential for routine diagnostics, research applications, and trial recruitment. To date, however, such algorithms typically do not meet the accuracy criteria required for real-world application in different institutions.

Methods

Since the frequencies of both BRAFmut and KRASmut are associated with readily available clinical patient parameters, we developed a multi-modal predictive AI model on n = 455 CRC cases from the TCGA database and UPenn. In addition to patient data, the model was trained on hematoxylin & eosin (H&E)-stained tissue images together with the corresponding BRAFmut/KRASmut status. We evaluated the model on an independent hold-out TCGA cohort of n = 114 samples and an additional external cohort of n = 104 CRC samples from the CPTAC database.
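
As an illustration only (the abstract does not describe the authors' architecture), the following Python sketch shows one common way such a multi-modal model can be set up: a late-fusion classifier that combines features from an H&E image encoder with encoded clinical parameters. All layer sizes, dimensions, and names here are hypothetical, and PyTorch is assumed.

    import torch
    import torch.nn as nn

    class MultiModalMutationClassifier(nn.Module):
        """Late-fusion sketch: image features + clinical parameters -> mutation logit."""
        def __init__(self, image_feat_dim=512, clinical_dim=8, hidden_dim=128):
            super().__init__()
            # Projects pooled H&E features (e.g. from a pretrained image encoder).
            self.image_branch = nn.Sequential(nn.Linear(image_feat_dim, hidden_dim), nn.ReLU())
            # Encodes tabular clinical parameters (e.g. age, sex).
            self.clinical_branch = nn.Sequential(nn.Linear(clinical_dim, hidden_dim), nn.ReLU())
            # Fused head predicts mutation status (e.g. BRAFmut or KRASmut) as one logit.
            self.head = nn.Linear(2 * hidden_dim, 1)

        def forward(self, image_feats, clinical):
            fused = torch.cat([self.image_branch(image_feats),
                               self.clinical_branch(clinical)], dim=-1)
            return self.head(fused)  # apply sigmoid for a probability

    # Forward pass on random tensors with the hypothetical dimensions above.
    model = MultiModalMutationClassifier()
    logits = model(torch.randn(4, 512), torch.randn(4, 8))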

Results

With our multi-modal approach, the AI model achieved an AUROC of 0.84 ± 0.02 for BRAF and 0.67 ± 0.01 for KRAS on the TCGA hold-out test set. Performance was similar on the second external test dataset (CPTAC; AUROC of 0.82 ± 0.02 and 0.72 ± 0.01, respectively), indicating the model’s ability to generalize across different cohorts. Notably, AUROCs obtained with the multi-modal training setup were significantly higher than those from models trained on image data only (CPTAC: BRAF 0.73 ± 0.02, KRAS 0.64 ± 0.03).
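
The AUROCs above are reported with a ± spread, but the abstract does not state how the uncertainty was estimated. Assuming, purely for illustration, a bootstrap over the test set, a minimal Python sketch of such an evaluation could look as follows (labels and scores are synthetic):

    import numpy as np
    from sklearn.metrics import roc_auc_score

    def bootstrap_auroc(y_true, y_score, n_boot=1000, seed=0):
        """Mean and standard deviation of AUROC over bootstrap resamples."""
        rng = np.random.default_rng(seed)
        y_true, y_score = np.asarray(y_true), np.asarray(y_score)
        aurocs = []
        for _ in range(n_boot):
            idx = rng.integers(0, len(y_true), len(y_true))  # resample with replacement
            if len(np.unique(y_true[idx])) < 2:  # AUROC needs both classes present
                continue
            aurocs.append(roc_auc_score(y_true[idx], y_score[idx]))
        return float(np.mean(aurocs)), float(np.std(aurocs))

    # Synthetic example: 160 labels with random scores, printed in the abstract's style.
    y = [0, 1, 1, 0, 1, 0, 1, 0] * 20
    s = np.random.default_rng(1).random(160)
    mean_auc, std_auc = bootstrap_auroc(y, s)
    print(f"AUROC {mean_auc:.2f} ± {std_auc:.2f}")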

Conclusions

By analyzing mutations in two of the most frequently mutated genes in CRC across two separate cohorts, we demonstrate that including patient parameters in AI training can add diagnostic accuracy to AI models that predict mutations from H&E images. Our results also support previous findings that some driver mutations can be predicted from tissue more accurately than others. Altogether, these results show the potential of multi-modal deep learning to bring predictive AI towards real-world application in pathology.

Legal entity responsible for the study

Mindpeak GmbH.

Funding

Mindpeak GmbH.

Disclosure

M. Päpper: Financial Interests, Personal, Full or part-time Employment: Mindpeak GmbH; Financial Interests, Personal, Stocks/Shares: Mindpeak GmbH; Financial Interests, Personal, Leadership Role: Mindpeak GmbH. P. Frey: Financial Interests, Personal, Full or part-time Employment: Mindpeak GmbH. T. Lang: Financial Interests, Personal, Full or part-time Employment: Mindpeak GmbH; Financial Interests, Personal, Leadership Role: Mindpeak GmbH; Financial Interests, Personal, Stocks or ownership: Mindpeak GmbH. All other authors have declared no conflicts of interest.
