Fig. 3: Schematic illustration of our NER model.
From: Building an end-to-end battery recipe knowledge base via transformer-based text mining

The raw text of battery-recipe papers is first split into tokens by the tokenizer, and the NER model then predicts a category for each token. The NER model comprises a BERT layer that captures the contextual meaning of each token, followed by a SoftMax function and a CRF layer that predict the most probable label sequence.
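The sketch below illustrates this kind of BERT + CRF token-classification architecture, assuming PyTorch with the `transformers` and `pytorch-crf` packages; the class name, checkpoint, and label count are illustrative assumptions, not the paper's implementation. A linear head produces per-token label scores (which the SoftMax in the figure normalizes), and the CRF scores label-to-label transitions to decode the most probable tag sequence.

```python
# Minimal sketch of a BERT + CRF named-entity tagger (assumed setup, not the
# authors' code). Requires: torch, transformers, pytorch-crf.
import torch
import torch.nn as nn
from transformers import BertModel
from torchcrf import CRF

class BertCrfNer(nn.Module):
    def __init__(self, num_labels: int, pretrained: str = "bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(pretrained)      # contextual token embeddings
        self.classifier = nn.Linear(self.bert.config.hidden_size, num_labels)  # per-token label scores
        self.crf = CRF(num_labels, batch_first=True)           # models label-to-label transitions

    def forward(self, input_ids, attention_mask, labels=None):
        hidden = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state
        emissions = self.classifier(hidden)                    # unnormalized scores for each token
        mask = attention_mask.bool()
        if labels is not None:
            # Training: negative log-likelihood of the gold label sequence under the CRF
            return -self.crf(emissions, labels, mask=mask, reduction="mean")
        # Inference: Viterbi decoding of the most probable label sequence
        return self.crf.decode(emissions, mask=mask)
```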