Transformer-based large language models are gaining traction in biomedical research, particularly in single-cell omics. Despite their promise, the application of single-cell large language models (scLLMs) remains limited in practice. In this Comment, we examine the current landscape of scLLMs and the benchmark studies that assess their performance across various analytical tasks. We highlight existing technical gaps and practical barriers, and discuss future directions toward a more accessible and effective ecosystem that promotes the adoption of scLLMs in the biomedical community.
- Fang Xie
- Bingkang Zhao
- Lana X. Garmire