
  • Comment

Leveraging generative AI to enhance doctor–patient communication

Generative artificial intelligence (GAI) can produce high-quality lay summaries of medical literature, clinical trial information and guideline-based materials that meet recommended reading levels while preserving scientific integrity. Although important limitations remain, with appropriate safeguards, GAI has the potential to bridge longstanding gaps between certified medical knowledge and patient understanding.
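"Recommended reading levels" for patient materials are typically assessed with standard readability formulas such as the Flesch–Kincaid grade level. As a minimal sketch of how such an assessment can be automated, the snippet below implements the standard Flesch–Kincaid formula with a crude vowel-group syllable heuristic; the heuristic and example sentences are illustrative assumptions, not part of the studies discussed here.

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: one syllable per group of consecutive vowels
    # (counts 'y' as a vowel); always at least one syllable.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    # Standard formula: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)

# A plain-language summary scores far below a jargon-heavy one:
lay = "The drug helps. It eases pain. Most people do well."
jargon = ("Randomized pharmacotherapeutic intervention demonstrated "
          "statistically significant amelioration of symptomatology.")
print(flesch_kincaid_grade(lay))     # low grade level
print(flesch_kincaid_grade(jargon))  # high grade level
```

A pipeline of this kind can gate GAI output: regenerate or simplify a summary until its computed grade level falls at or below the target (commonly around sixth-grade reading level for patient materials).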


Fig. 1: Conceptual overview of generative artificial intelligence for patient education in urology.


Author information


Corresponding author

Correspondence to Giovanni E. Cacciamani.

Ethics declarations

Competing interests

G.E.C. and I.G. declare equity in Editor AI Pro. The other authors declare no competing interests.

Additional information

Related links

BioLaySumm 2024 challenge: https://biolaysumm.org/2024/

Bridging Readable and Informative Dissemination with Generative AI (BRIDGE AI) initiative: https://osf.io/8yz6d/

Pub2Post: https://www.pub2post.com


About this article


Cite this article

Pannu, A.S., Pan, J., Layne, E. et al. Leveraging generative AI to enhance doctor–patient communication. Nat Rev Urol (2026). https://doi.org/10.1038/s41585-026-01127-w
