Introduction:
Patients often seek support from online resources when facing a distressing urologic cancer diagnosis. Physician-written resources frequently exceed the recommended 6th- to 8th-grade reading level, creating confusion and driving patients toward unregulated online materials such as AI chatbots. We aimed to compare the readability and quality of patient education materials from ChatGPT against those from Epic and the Urology Care Foundation (UCF).
Materials and methods:
We analyzed prostate, bladder, and kidney cancer content from ChatGPT, Epic, and UCF. We further evaluated readability-adjusted responses generated with targeted AI prompting (ChatGPT-a) and Epic materials designated as "Easy to Read." Blinded reviewers completed descriptive textual analysis, readability analysis using six validated formulas, and quality analysis using the DISCERN, PEMAT, and Likert instruments.
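The six validated readability formulas are not named in this excerpt; as an illustration of the kind of metric involved, the sketch below computes the widely used Flesch-Kincaid Grade Level, whose standard coefficients are 0.39 (words per sentence), 11.8 (syllables per word), and -15.59. The syllable counter is a simple heuristic, not the exact tokenizer any particular readability package would use.

```python
import re

def count_syllables(word: str) -> int:
    """Approximate syllables by counting vowel groups (heuristic only)."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    # A trailing silent 'e' usually does not add a syllable.
    if word.endswith("e") and count > 1 and not word.endswith("le"):
        count -= 1
    return max(count, 1)

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid Grade Level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)

if __name__ == "__main__":
    sample = ("Prostate cancer is a common cancer in men. "
              "Treatment options depend on how far the cancer has spread.")
    print(f"Estimated grade level: {flesch_kincaid_grade(sample):.2f}")
```

A score of roughly 6-8 from a formula like this corresponds to the reading level recommended for patient-facing materials.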
Results:
Epic met the recommended grade level, while UCF and ChatGPT exceeded it (5.81 vs. 8.44 vs. 12.16, p < 0.001). ChatGPT text was longer and used more complex wording (p < 0.001). Quality was fair for Epic, good for UCF, and excellent for ChatGPT (49.5 vs. 61.67 vs. 64.33). Actionability was poor overall and lowest for Epic (37%). On qualitative analysis, Epic lagged on all quality measures. When adjusted for user education level (ChatGPT-a and Epic Easy to Read), readability improved (grade levels 7.50 and 3.53, respectively), but only ChatGPT-a retained high quality.
Conclusions:
Online urologic oncology patient materials largely exceed the average American's literacy level and often lack real-world utility for patients. Our ChatGPT-a approach indicates that AI technology can improve both accessibility and usefulness. With further development, a healthcare-specific AI program may help providers create accessible, personalized content that improves shared decision-making for urology patients.