1875P - Behind the use of ChatGPT for oncological purposes: Fears and challenges

Date

21 Oct 2023

Session

Poster session 05

Topics

Cancer Intelligence (eHealth, Telehealth Technology, BIG Data);  Psycho-Oncology

Tumour Site

Presenters

Ilaria Durosini

Citation

Annals of Oncology (2023) 34 (suppl_2): S1001-S1012. 10.1016/S0923-7534(23)01947-6

Authors

I. Durosini1, M. Strika1, G. Pravettoni2

Author affiliations

  • 1 Department of Oncology and Hemato-Oncology, University of Milan, Milan, Italy, UNIMI - Università degli Studi di Milano Statale, 20121 - Milan/IT
  • 2 Applied Research Division for Cognitive and Psychological Science, IEO, European Institute of Oncology IRCCS, Milan, Italy, IEO - Istituto Europeo di Oncologia, 20141 - Milan/IT

Resources


Abstract 1875P

Background

Technological innovations have made rapid progress in recent years and are expected to play a growing role in the decision-making process. The use of ChatGPT, a new chatbot that applies deep learning to mimic human language processing, has increased rapidly. In the health domain, ChatGPT could support healthcare delivery thanks to its language models and its ability to simulate human conversation. Although the advantages are multiple, several psycho-social and ethical questions related to the implementation of these technologies remain open.

Methods

This study examines the psychological challenges associated with using ChatGPT, with the aim of clarifying its role in screening decisions. Forty-one participants (mean age 29.8 years) were presented with a scenario describing a hypothetical conversation between ChatGPT and a user who had received an oncological breast or prostate diagnosis report. Subsequently, each participant answered questions about concerns related to the chatbot, intention to use it, the decision-making process, and emotional activation.

Results

Descriptive analysis highlighted that 58.5% (n=24) of participants had already used ChatGPT, but only two (n=2) had used the chatbot for healthcare purposes. 31.7% (n=13) of participants reported no fears about using the chatbot for oncological purposes, whereas the remaining 68.3% (n=28) confirmed the presence of several concerns. Specifically, some participants (n=2) reported concerns about risks related to data privacy and possible conflicts of interest involving the developers. Others (n=13) described ChatGPT's elaboration processes as a "black box" and expressed doubts about the correct use of its results. Additionally, some participants (n=8) highlighted the risk that ChatGPT could generate hypochondriacal symptoms and inappropriate healthcare practices. Lastly, participants (n=6) expressed a fear that ChatGPT could replace human doctors in healthcare practice.

Conclusions

These results will contribute to understanding the general population's attitudes towards ChatGPT and its possible uses in the health domain.

Clinical trial identification

Editorial acknowledgement

Legal entity responsible for the study

The authors.

Funding

This study has not received any funding.

Disclosure

All authors have declared no conflicts of interest.
