Fig. 1: Timeline of LLM releases.

The timeline begins with the release of the transformer architecture, which was originally designed for machine translation and is based on an encoder–decoder design. This architecture formed the foundation of LLMs, although many later models adopted a variation: the decoder-only architecture. We categorize LLMs into the following types: open-weight LLMs, proprietary LLMs, multimodal LLMs, reasoning LLMs, and LLMs developed for medical applications. The diagram illustrates the most popular models and current development trends but is not intended to be exhaustive.