Abstract 2162P
Background
There are limited data on the quality of cancer information provided by ChatGPT and other artificial intelligence systems. In this study, we aimed to compare the accuracy of information about cancer pain provided by chatbots (ChatGPT, Perplexity, and Chatsonic) against the questions and answers contained in the European Society for Medical Oncology (ESMO) Patient Guide on cancer pain.
Methods
Twenty questions were selected from those available in the ESMO Patient Guide on Cancer Pain. Medical oncologists with more than 10 years of experience compared the responses of the chatbots (ChatGPT, Perplexity, and Chatsonic) with the ESMO patient guide. The primary criteria for response quality were accuracy, patient readability, and stability of responses. Accuracy was rated on a three-point scale: 1 for accurate, 2 for a mixture of accurate and inaccurate or outdated data, and 3 for wholly inaccurate. Readability was measured with the Flesch-Kincaid readability (FKr) grade. Stability was assessed by whether a model's accuracy remained consistent across repeated answers to the same question.
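The Flesch-Kincaid grade used above maps text to an approximate US school-grade reading level via the formula 0.39 × (words/sentences) + 11.8 × (syllables/words) − 15.59. As a minimal sketch (not the study's actual tooling), the grade can be computed with a crude vowel-group syllable heuristic; `count_syllables` is an illustrative approximation, not a linguistically exact syllable counter:

```python
import re

def count_syllables(word):
    # Rough heuristic: count runs of vowels (including y), minimum one.
    # This only approximates true English syllable counts.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_kincaid_grade(text):
    # FK grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (n_words / sentences) + 11.8 * (syllables / n_words) - 15.59
```

A grade near 9-10 (as reported for the ESMO guide) corresponds to text easily understood by a high-school reader, while grades above 13 correspond to college-level text.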
Results
Chatbot responses were more difficult to read than the ESMO patient guide (mean FKr = 12.8 vs. 9.6, p = 0.072). Among the chatbots, Perplexity was the easiest to read (FKr = 11.2). In the accuracy evaluation, the percentage of overall agreement was 100% for ESMO answers and 96% for ChatGPT outputs (k = 0.03, standard error = 0.08). Among the chatbots, ChatGPT provided the most accurate information.
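A near-zero kappa alongside 96-100% overall agreement is plausible when nearly all ratings fall into a single category (here, "accurate"), since chance-expected agreement is then very high. A minimal sketch of both metrics, assuming ratings are encoded as equal-length lists of scale values (the function names are illustrative, not from the study):

```python
from collections import Counter

def percent_agreement(ratings_a, ratings_b):
    # Fraction of items on which the two raters gave the same score.
    return sum(a == b for a, b in zip(ratings_a, ratings_b)) / len(ratings_a)

def cohens_kappa(ratings_a, ratings_b):
    # Cohen's kappa: (observed - expected agreement) / (1 - expected agreement).
    n = len(ratings_a)
    po = percent_agreement(ratings_a, ratings_b)
    counts_a, counts_b = Counter(ratings_a), Counter(ratings_b)
    # Expected agreement under independence of the two raters' marginals.
    pe = sum(counts_a[k] * counts_b[k] for k in set(counts_a) | set(counts_b)) / (n * n)
    if pe == 1:
        # Degenerate case: both raters used a single identical category.
        return 1.0
    return (po - pe) / (1 - pe)
```

When one category dominates, `pe` approaches 1 and the denominator shrinks, so kappa can be tiny even at high raw agreement, which is consistent with the k = 0.03 reported here.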
Table: 2162P
Comparison of ESMO and chatbots in terms of readability and accuracy

| | ESMO | ChatGPT | Perplexity | Chatsonic | p-value |
| Readability (FKr grade) | 9.6 (easily understood) | 13.4 (difficult to read) | 11.2 (fairly difficult to read) | 13.9 (difficult to read) | 0.072 |
| Accuracy | 100% | 96% | 86% | 90% | 0.037 |
Conclusions
The results suggest that ChatGPT provides more accurate information about cancer pain compared with other chatbots.
Legal entity responsible for the study
Gazi University Ethics Committee.
Funding
This study has not received any funding.
Disclosure
All authors have declared no conflicts of interest.