
Assessing artificial intelligence responses to common patient questions regarding inflatable penile prostheses using a publicly available natural language processing tool (ChatGPT)
Howard University College of Medicine, Washington, DC, USA
Jun 2024 (Vol. 31, Issue 3, Pages 11880-11885)
PMID: 38912940

Abstract


    Introduction:

    The evolving landscape of healthcare information dissemination has been dramatically influenced by the rise of artificial intelligence (AI)-driven chatbots, which provide patients with accessible and interactive platforms for learning about medical procedures and conditions. Among the surgical interventions in urology, the inflatable penile prosthesis (IPP) is a common treatment for men with erectile dysfunction. As patients increasingly seek comprehensive resources to understand what this procedure entails, AI-based chat technologies such as ChatGPT have become more prominent. This study aimed to assess the capacity of ChatGPT to provide accurate and easily understandable responses to common questions regarding the IPP procedure.

    Materials and methods:

    Ten frequently asked questions (FAQ) about the IPP procedure were presented to the ChatGPT chatbot in separate conversational sessions, without follow-up questions or repetitions. An evidence-based approach was employed to assess the accuracy of the chatbot's responses. Responses were categorized as "excellent response not requiring clarification," "satisfactory requiring minimal clarification," "satisfactory requiring moderate clarification," or "unsatisfactory requiring substantial clarification."
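
    The abstract does not specify whether the questions were entered through the web interface or programmatically; as a minimal illustrative sketch only, the separate-session protocol could be reproduced with the OpenAI Python client as below. The example questions and the model name are assumptions, not taken from the study.

        # Sketch of the separate-session querying protocol described above.
        # Assumes the OpenAI Python client and an OPENAI_API_KEY in the environment;
        # the study itself does not state that the API was used.
        from openai import OpenAI

        client = OpenAI()

        faq = [
            "What is an inflatable penile prosthesis?",        # illustrative question
            "How long is recovery after IPP surgery?",         # illustrative question
            # ... remaining frequently asked questions
        ]

        for question in faq:
            # Each question is sent with a fresh message list (no shared history),
            # mirroring "separate conversational sessions without follow-up questions".
            response = client.chat.completions.create(
                model="gpt-3.5-turbo",  # assumed model for illustration
                messages=[{"role": "user", "content": question}],
            )
            print(question)
            print(response.choices[0].message.content)

    Each response collected this way would then be graded against the evidence-based rubric described above.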

    Results:

    Upon review, 70% of ChatGPT’s answers to questions regarding the IPP procedure were rated as "excellent," not necessitating further clarification. Twenty percent were considered "satisfactory," requiring minimal clarification, notably on the omission of statistical data and the depth of discussion on certain topics. Ten percent of the responses were "unsatisfactory," requiring substantial clarification, including a failure to provide a definitive answer when necessary.

    Conclusions:

    This study reveals that ChatGPT has a substantial capability to produce evidence-based, understandable responses to a majority of common questions related to the IPP procedure. While there is room for improvement, ChatGPT can serve as an advantageous tool for patient education, enhancing preoperative understanding and contributing to informed decision-making during urological consultations for IPP.