Abstract 1875P
Background
Technological innovation has made rapid progress in recent years and is expected to play a growing role in decision-making processes. ChatGPT, a chatbot that uses deep learning to mimic human language processing, has seen rapidly growing adoption. In the health domain, ChatGPT could support healthcare delivery thanks to its language models and its ability to simulate a human conversational manner. Although the advantages are numerous, several psycho-social and ethical questions raised by the implementation of these technologies remain open.
Methods
This study examines the psychological challenges associated with using ChatGPT, with the aim of clarifying its role in screening decisions. Forty-one participants (mean age = 29.8 years) were presented with a scenario describing a hypothetical conversation between ChatGPT and a user who had received an oncological breast or prostate diagnosis report. Each participant then answered questions about concerns related to the chatbot, intention to use it, the decision-making process, and emotional activation.
Results
Descriptive analysis showed that 58.5% (n=24) of participants had already used ChatGPT, but only 4.9% (n=2) used the chatbot for healthcare purposes. 31.7% (n=13) of participants reported no fears about using the chatbot for oncological purposes, whereas the remaining 68.3% (n=28) reported several concerns. Specifically, some participants cited risks related to data privacy and possible conflicts of interest involving the developers (n=2). Others described ChatGPT's elaboration processes as a "black box" and expressed doubts about the correct use of its results (n=13). Some participants (n=8) highlighted the risk that ChatGPT could generate hypochondriacal symptoms and inappropriate healthcare practices. Lastly, some participants feared that ChatGPT could replace human doctors in healthcare practice (n=6).
Conclusions
These results contribute to understanding the general population's attitudes towards ChatGPT and its possible uses in the health domain.
Legal entity responsible for the study
The authors.
Funding
Has not received any funding.
Disclosure
All authors have declared no conflicts of interest.