
Review Articles in 2025

  • While transformers and large language models excel at sequence processing, the quadratic cost of self-attention makes long sequences expensive to handle, and new approaches that incorporate recurrence have been proposed to overcome this limitation. Tiezzi et al. discuss recurrent and state-space models and the promise they hold for future sequence processing networks.

    • Matteo Tiezzi
    • Michele Casoni
    • Stefano Melacci
    Review Article
  • A systematic review of peer-reviewed AI safety research reveals extensive work on practical and immediate concerns. Based on these findings, the authors advocate for an inclusive approach to AI safety that embraces diverse motivations and perspectives.

    • Bálint Gyevnár
    • Atoosa Kasirzadeh
    Review Article
  • Micaela Consens et al. review the recent rise of transformer-based models and large language models in genomics. They also highlight promising directions for genome language models beyond the transformer architecture.

    • Micaela E. Consens
    • Cameron Dufault
    • Bo Wang
    Review Article
  • Machine unlearning techniques remove undesirable data and associated model capabilities while preserving essential knowledge, so that machine learning models can be updated without costly retraining. Liu et al. review recent advances and opportunities in machine unlearning for large language models. They revisit methodologies and overlooked principles for future improvement, and explore emerging applications in copyright and privacy safeguards and in reducing sociotechnical harms.

    • Sijia Liu
    • Yuanshun Yao
    • Yang Liu
    Review Article
  • Large general-purpose models are becoming more prevalent and useful, but they are also harder to train, and suitable training data is increasingly difficult to find. Zheng et al. discuss how models can be used to train other models.

    • Hongling Zheng
    • Li Shen
    • Dacheng Tao
    Review Article