Abstract 1875P
Background
Technological innovation has made rapid progress in recent years and is expected to play a growing role in decision-making processes. The use of ChatGPT, a chatbot that relies on deep learning to mimic human language processing, has increased rapidly. In the health domain, ChatGPT could support healthcare delivery thanks to its language models and its ability to simulate human conversation. Although the potential advantages are numerous, several psycho-social and ethical issues related to the implementation of these technologies remain open.
Methods
This study examines the psychological challenges associated with using ChatGPT, with the aim of clarifying its role in screening decisions. Forty-one participants (mean age 29.8 years) were presented with a scenario describing a hypothetical conversation between ChatGPT and a user who had received a breast or prostate cancer diagnosis report. Each participant then answered questions about concerns related to the chatbot, intention to use it, the decision-making process, and emotional activation.
Results
Descriptive analysis showed that 58.5% (n=24) of participants had already used ChatGPT, but only 4.9% (n=2) had used the chatbot for healthcare purposes. 31.7% (n=13) of participants reported no fears about using the chatbot for oncological purposes, whereas the remaining 68.3% (n=28) reported one or more concerns. Specifically, some participants reported concerns about risks related to data privacy and possible conflicts of interest involving the developers (n=2). Others described ChatGPT's elaboration processes as a "black box" and expressed doubts about the correct use of its outputs (n=13). Additionally, some participants (n=8) highlighted the risk that ChatGPT could foster hypochondriacal symptoms and inappropriate healthcare practices. Lastly, some participants feared that ChatGPT could replace human doctors in healthcare practice (n=6).
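As an illustrative aside on the arithmetic behind the reported percentages, the short sketch below recomputes the proportions from the raw counts stated above (total N = 41); the labels and variable names are ours, not part of the original analysis. It also shows that n=2 out of 41 corresponds to roughly 4.9% (a proportion of about 0.05).

```python
# Illustrative sketch only: recompute the Results percentages from raw counts (N = 41).
N = 41  # total participants reported in the abstract

counts = {
    "already used ChatGPT": 24,
    "used ChatGPT for healthcare purposes": 2,
    "no fears about oncological use": 13,
    "one or more concerns": 28,
}

for label, n in counts.items():
    pct = 100 * n / N
    print(f"{label}: n={n}, {pct:.1f}%")

# Approximate output:
#   already used ChatGPT: n=24, 58.5%
#   used ChatGPT for healthcare purposes: n=2, 4.9%
#   no fears about oncological use: n=13, 31.7%
#   one or more concerns: n=28, 68.3%
```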
Conclusions
The present results contribute to understanding the general population's attitudes towards ChatGPT and its possible uses in the health domain.
Clinical trial identification
Editorial acknowledgement
Legal entity responsible for the study
The authors.
Funding
Has not received any funding.
Disclosure
All authors have declared no conflicts of interest.