
Cocktail & Poster Display session

73P - Automated detection of typical and atypical mitotic figures for improving survival prediction in breast cancer


06 Mar 2023




Saima Ben Hadj


Annals of Oncology (2023) 8 (1suppl_2): 100897-100897. 10.1016/esmoop/esmoop100897


S. Ben Hadj1, D. Wallis1, M. Aubreville2, C. Bertram3, R. Fick1

Author affiliations

  • 1 Ai & Computer Vision, Tribun Health, 75015 - Paris/FR
  • 2 Medical Imaging, Klinikum Ingolstadt, 85049 - Ingolstadt/DE
  • 3 Pathobiology, University of Veterinary Medicine, Vienna, 1210 - Vienna/AT



Abstract 73P


The numbers of typical and atypical mitotic figures (MFs), where atypical is defined as mitoses with any morphological appearance other than the typical forms, together with a high atypical-to-typical mitosis ratio, are strongly associated with tumour aggressiveness and survival, and are predictors of poor response to chemotherapy in breast cancer. Manual detection is time-consuming, especially on whole-slide images (WSIs), so an automated approach is necessary to investigate these associations on a larger scale. We demonstrate that deep learning can automate this detection, improving on the performance of pathologists.


All MFs in the mammary carcinoma dataset (21 hematoxylin and eosin (H&E)-stained WSIs with ∼14 000 MFs and ∼36 000 hard negatives) were labelled as typical or atypical. These slides (originally scanned on a Leica scanner) were then rescanned on six other scanners (2x Hamamatsu, 2x 3DHISTECH, Philips, Olympus), and the annotations were registered to the new scans. This gave a large, multi-scanner dataset, which was used to train a YOLOv6 deep learning object detection model. For testing, all MFs in the (human) TUPAC16 and MIDOG21 datasets were labelled by two pathologists as either typical or atypical; in cases of disagreement, a third reader provided a consensus. We used the alternative version of the TUPAC16 dataset provided by the same authors as the MIDOG21 dataset to reduce potential label bias. We then ran our model on these images and compared its mean average precision (mAP) against the consensus to the mAPs of the two individual pathologists against the consensus.
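For context, mAP for a detection task like this is built from average precision (AP): predicted boxes are greedily matched to ground-truth annotations by intersection-over-union (IoU), and precision is accumulated along the score-ranked predictions. The following is a minimal illustrative sketch, not the authors' evaluation code; the box format `(x1, y1, x2, y2)`, the 0.5 IoU threshold, and all data are assumptions.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    if inter == 0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def average_precision(preds, gts, iou_thr=0.5):
    """preds: list of (score, box); gts: list of ground-truth boxes.

    Greedy one-to-one matching in descending score order; AP is the sum
    of precision at each true-positive rank, normalised by the number
    of ground-truth objects (area under the precision-recall steps).
    """
    preds = sorted(preds, key=lambda p: -p[0])
    matched = set()
    hits = []                      # 1 = true positive, 0 = false positive
    for score, box in preds:
        best, best_j = 0.0, -1
        for j, gt in enumerate(gts):
            if j in matched:
                continue           # each ground truth matches at most once
            o = iou(box, gt)
            if o > best:
                best, best_j = o, j
        if best >= iou_thr:
            matched.add(best_j)
            hits.append(1)
        else:
            hits.append(0)
    ap, tp_cum = 0.0, 0
    for rank, hit in enumerate(hits, start=1):
        tp_cum += hit
        if hit:
            ap += (tp_cum / rank) / len(gts)
    return ap
```

Averaging AP over classes (here, typical and atypical MFs) gives mAP, the figure compared between model and pathologists in the results.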


The mAP of our model (0.80) was higher than the average mAP of the two pathologists (0.75, p<0.05), showing that the model can successfully automate the process of MF detection. There was considerable disagreement between the two pathologists' labels (14% of cases). By automating the process, we reduce this variability and can therefore more consistently predict clinical outcomes (e.g. survival rates) from the results.
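The 14% disagreement figure corresponds to the fraction of MFs to which the two pathologists assigned different typical/atypical labels. A trivial sketch of that computation, with made-up labels purely for illustration:

```python
def disagreement_rate(labels_a, labels_b):
    """Fraction of items on which two raters assign different labels."""
    assert len(labels_a) == len(labels_b)
    return sum(a != b for a, b in zip(labels_a, labels_b)) / len(labels_a)

# Hypothetical per-MF labels from two readers (not real study data):
reader_1 = ["typical", "typical", "atypical", "typical"]
reader_2 = ["typical", "atypical", "atypical", "typical"]
rate = disagreement_rate(reader_1, reader_2)  # one of four labels differs
```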


The numbers of both typical and atypical MFs are indicators of patient survival and response to treatment. We have demonstrated an automated deep learning model that can accurately detect these figures and could thus be used for patient survival prediction.


Legal entity responsible for the study

The authors.




All authors have declared no conflicts of interest.
